evidentia

Projects that follow the best practices below can voluntarily self-certify and show that they've achieved an Open Source Security Foundation (OpenSSF) best practices badge.

There is no set of practices that can guarantee that software will never have defects or vulnerabilities; even formal methods can fail if the specifications or assumptions are wrong. Nor is there any set of practices that can guarantee that a project will sustain a healthy and well-functioning development community. However, following best practices can help improve the results of projects. For example, some practices enable multi-person review before release, which can both help find otherwise hard-to-find technical vulnerabilities and help build trust and a desire for repeated interaction among developers from different companies.

To earn a badge, all MUST and MUST NOT criteria must be met, all SHOULD criteria must be met OR be unmet with justification, and all SUGGESTED criteria must be rated as met or unmet (at the least, we want them considered). If you want to enter justification text as a generic comment, instead of as a rationale that the situation is acceptable, start the text block with '//' followed by a space.

Feedback is welcome via the GitHub site as issues or pull requests. There is also a mailing list for general discussion.

We gladly provide the information in several locales; however, if there is any conflict or inconsistency between the translations, the English version is the authoritative version.
If this is your project, please show your badge status on your project page! The badge status looks like this: "Badge level for project 12724 is silver". Here is how to embed it:
You can show your badge status by embedding this in your markdown file:
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/12724/badge)](https://www.bestpractices.dev/projects/12724)
or by embedding this in your HTML:
<a href="https://www.bestpractices.dev/projects/12724"><img src="https://www.bestpractices.dev/projects/12724/badge"></a>


These are the Silver level criteria. You can also view the Passing or Gold level criteria.


 Basics 17/17

  • General

    Note that other projects may use the same name.

    Open-source Python GRC tool: gap analysis, AI risk statements, OSCAL-first compliance automation. Enterprise-grade evidence integrity (Sigstore + GPG), CycloneDX SBOM, PyPI Trusted Publisher OIDC + PEP 740 attestations.

    Please use SPDX license expression format; examples include "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause", "GPL-2.0+", "LGPL-3.0+", "MIT", and "(BSD-2-Clause OR Ruby)". Do not include single quotes or double quotes.
    If there is more than one language, list them as comma-separated values (spaces optional) and sort them from most to least used. If there is a long list, please list at least the first three most common ones. If there is no language (e.g., this is a documentation-only or test-only project), use the single character "-". Please use a conventional capitalization for each language, e.g., "JavaScript".
    The Common Platform Enumeration (CPE) is a structured naming scheme for information technology systems, software, and packages. It is used in a number of systems and databases when reporting vulnerabilities.
  • Prerequisites


    The project MUST achieve a passing level badge. [achieve_passing]

  • Basic project website content


    The information on how to contribute MUST include the requirements for acceptable contributions (e.g., a reference to any required coding standard). (URL required) [contribution_requirements]
  • Project oversight


    The project SHOULD have a legal mechanism where all developers of non-trivial amounts of project software assert that they are legally authorized to make these contributions. The most common and easily-implemented approach for doing this is by using a Developer Certificate of Origin (DCO), where users add "signed-off-by" in their commits and the project links to the DCO website. However, this MAY be implemented as a Contributor License Agreement (CLA), or other legal mechanism. (URL required) [dco]
    The DCO is the recommended mechanism because it's easy to implement, tracked in the source code, and git directly supports a "signed-off" feature using "commit -s". To be most effective it is best if the project documentation explains what "signed-off" means for that project. A CLA is a legal agreement that defines the terms under which intellectual works have been licensed to an organization or project. A contributor assignment agreement (CAA) is a legal agreement that transfers rights in an intellectual work to another party; projects are not required to have CAAs, since having a CAA increases the risk that potential contributors will not contribute, especially if the receiver is a for-profit organization. The Apache Software Foundation CLAs (the individual contributor license and the corporate CLA) are examples of CLAs, for projects which determine that the risks of these kinds of CLAs to the project are less than their benefits.

    Single-maintainer project. All commits to date are authored by the project owner (Allen Byrd) under copyright explicitly granted to the project under Apache-2.0. A formal DCO/CLA flow will be adopted at the point a second contributor is onboarded; for now, the legal-authority chain is degenerate (one author = one signer). Apache-2.0 LICENSE: https://github.com/allenfbyrd/evidentia/blob/main/LICENSE.
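    The "signed-off-by" convention described in this criterion can be checked mechanically once a DCO flow is adopted. Below is a minimal, hypothetical sketch (not project code) of the trailer check a CI job might run against each commit message; the regex mirrors the trailer that `git commit -s` appends.

    ```python
    import re

    # Hypothetical helper, not part of evidentia: matches the DCO trailer
    # that `git commit -s` appends, e.g.
    #   Signed-off-by: Jane Doe <jane@example.com>
    SIGNOFF_RE = re.compile(r"^Signed-off-by: .+ <.+@.+>$", re.MULTILINE)

    def has_dco_signoff(commit_message: str) -> bool:
        """Return True if the message carries at least one Signed-off-by trailer."""
        return bool(SIGNOFF_RE.search(commit_message))
    ```

    A CI job would run this over each commit in a pull request and fail the check when any message lacks the trailer.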



    The project MUST clearly define and document its project governance model (the way it makes decisions, including key roles). (URL required) [governance]
    There needs to be some well-established documented way to make decisions and resolve disputes. In small projects, this may be as simple as "the project owner and lead makes all final decisions". There are various governance models, including benevolent dictator and formal meritocracy; for more details, see Governance models. Both centralized (e.g., single-maintainer) and decentralized (e.g., group maintainers) approaches have been successfully used in projects. The governance information does not need to document the possibility of creating a project fork, since that is always possible for FLOSS projects.

    Project governance is documented in GOVERNANCE.md: https://github.com/allenfbyrd/evidentia/blob/main/GOVERNANCE.md. Current model: BDFL (benevolent dictator for life) — Allen Byrd holds final decision authority. Roadmap is published in docs/ROADMAP.md and per-release plans (docs/v0.7.x-plan.md). Decisions on technical direction, scope, and breaking changes are made openly via GitHub Issues and PR discussion.



    The project MUST adopt a code of conduct and post it in a standard location. (URL required) [code_of_conduct]
    Projects may be able to improve the civility of their community and to set expectations about acceptable conduct by adopting a code of conduct. This can help avoid problems before they occur and make the project a more welcoming place to encourage contributions. This should focus only on behavior within the community/workplace of the project. Example codes of conduct are the Linux kernel code of conduct, the Contributor Covenant Code of Conduct, the Debian Code of Conduct, the Ubuntu Code of Conduct, the Fedora Code of Conduct, the GNOME Code Of Conduct, the KDE Community Code of Conduct, the Python Community Code of Conduct, The Ruby Community Conduct Guideline, and The Rust Code of Conduct.

    Contributor Covenant v2.1 adopted at https://github.com/allenfbyrd/evidentia/blob/main/CODE_OF_CONDUCT.md. Reporting channel and enforcement guidelines documented inline.



    The project MUST clearly define and publicly document the key roles in the project and their responsibilities, including any tasks those roles must perform. It MUST be clear who has which role(s), though this might not be documented in the same way. (URL required) [roles_responsibilities]
    The documentation for governance and roles and responsibilities may be in one place.

    Roles and responsibilities documented in GOVERNANCE.md: https://github.com/allenfbyrd/evidentia/blob/main/GOVERNANCE.md. Current roles: Maintainer (Allen Byrd) — owns all merge authority, release authority, and security disclosure handling. As collaborators join, a "Triager" role (issue triage, PR review) and "Catalog Curator" role (Tier-A/B/C catalog updates) will be defined explicitly.



    The project MUST be able to continue with minimal interruption if any one person dies, is incapacitated, or is otherwise unable or unwilling to continue support of the project. In particular, the project MUST be able to create and close issues, accept proposed changes, and release versions of software, within a week of confirmation of the loss of support from any one individual. This MAY be done by ensuring someone else has any necessary keys, passwords, and legal rights to continue the project. Individuals who run a FLOSS project MAY do this by providing keys in a lockbox and a will providing any needed legal rights (e.g., for DNS names). (URL required) [access_continuity]

    Concrete continuity plan documented at https://github.com/allenfbyrd/evidentia/blob/main/docs/access-continuity.md. Key elements:

    • Operational SLA: project commits to resuming normal operations (create + close issues, accept proposed changes, release versions) within 7 calendar days of confirmation of loss of support.
    • Keyless signing infrastructure (Sigstore PEP 740 + Trusted Publisher OIDC + cosign keyless) means no offline private keys exist that could be lost with the maintainer. Any successor with repo write access can continue releases without any key transfer.
    • Step-by-step recovery procedure (Step 1 confirm loss; Step 2 GitHub repo + organization access via account-recovery OR fork-and-redirect fallback; Step 3 PyPI project-owner-role transfer; Step 4 GHCR access; Step 5 DNS / domain registrar — none currently held; Step 6 first release post-transfer).
    • Named successor + emergency contact maintained in the maintainer's encrypted password manager + emergency-access designation + will / estate documents. Public disclosure of the successor identity is intentionally avoided per the doc's privacy rationale (avoids social-engineering attempts to claim the project; preserves the maintainer's flexibility to update the designation as relationships change). The OpenSSF criterion text doesn't require public disclosure of the successor identity — it requires that the project MUST be able to continue, which the public doc + the private-side designation jointly accomplish. Auditors can verify the private-side designation exists by direct contact with the maintainer (see SECURITY.md disclosure channel).
    • Plan reviewed at every release per release-checklist.md Step 5 + on a quarterly cadence regardless of release activity.

    Companion governance framing at https://github.com/allenfbyrd/evidentia/blob/main/GOVERNANCE.md §"Continuity and bus factor".



    The project SHOULD have a "bus factor" of 2 or more. (URL required) [bus_factor]
    A "bus factor" (aka "truck factor") is the minimum number of project members that have to suddenly disappear from a project ("hit by a bus") before the project stalls due to lack of knowledgeable or competent personnel. The truck-factor tool can estimate this for projects on GitHub. For more information, see Assessing the Bus Factor of Git Repositories by Cosentino et al.

    Current bus factor is 1 (single maintainer - Allen Byrd). Mitigation: keyless signing infrastructure (no offline keys to lose), Trusted Publisher OIDC bound to the repo (any maintainer with repo write can publish), all process documented in https://github.com/allenfbyrd/evidentia/blob/main/docs/release-checklist.md. Project is in early growth phase; second maintainer will be recruited as the contributor base develops.


  • Documentation


    The project MUST have a documented roadmap that describes what the project intends to do and not do for at least the next year. (URL required) [documentation_roadmap]
    The project might not achieve the roadmap, and that's fine; the purpose of the roadmap is to help potential users and contributors understand the intended direction of the project. It need not be detailed.

    Roadmap is documented at https://github.com/allenfbyrd/evidentia/blob/main/docs/ROADMAP.md with detailed per-release plans for the v0.7.x line: docs/v0.7.5-plan.md through docs/v0.7.9-plan.md (8-10 week ship target each). v0.8.0 plan also published (https://github.com/allenfbyrd/evidentia/blob/main/docs/v0.8.0-plan.md). Combined horizon exceeds 1 year.



    The project MUST include documentation of the architecture (aka high-level design) of the software produced by the project. If the project does not produce software, select "not applicable" (N/A). (URL required) [documentation_architecture]
    A software architecture explains a program's fundamental structures, i.e., the program's major components, the relationships among them, and the key properties of these components and relationships.

    Canonical architecture document is https://github.com/allenfbyrd/evidentia/blob/main/Evidentia-Architecture-and-Implementation-Plan.md covering the 6-package monorepo structure, OSCAL-first data model, AI integration patterns, collector + integration architecture, and security boundaries. Capability matrix (https://github.com/allenfbyrd/evidentia/blob/main/docs/capability-matrix.md) covers the public surface inventory across 5 surface tiers and 5 risk tiers.



    The project MUST document what the user can and cannot expect in terms of security from the software produced by the project (its "security requirements"). (URL required) [documentation_security]
    These are the security requirements that the software is intended to meet.

    Security requirements and threat boundary documented in https://github.com/allenfbyrd/evidentia/blob/main/docs/threat-model.md (~58 surfaces across 5 tiers including the v0.7.9 TPRM + vendor-risk-collector additions; explicit in-scope/out-of-scope; assumed-trust assumptions). Per-release security review (most recent: https://github.com/allenfbyrd/evidentia/blob/main/docs/security-review-v0.7.9.md) gives a CVSS/CWE/EPSS-classified view of the active surface. SECURITY.md (https://github.com/allenfbyrd/evidentia/blob/main/SECURITY.md) defines disclosure SLAs and supported-version policy with the supported-versions table refreshed at every release.



    The project MUST provide a "quick start" guide for new users to help them quickly do something with the software. (URL required) [documentation_quick_start]
    The idea is to show users how to get started and make the software do anything at all. This is critically important for potential users to get started.

    90-second quickstart at https://github.com/allenfbyrd/evidentia/blob/main/docs/quickstart.md. README also has a "Getting Started" section with a 4-step install + first-gap-analysis flow.



    The project MUST make an effort to keep the documentation consistent with the current version of the project results (including software produced by the project). Any known documentation defects making it inconsistent MUST be fixed. If the documentation is generally current, but erroneously includes some older information that is no longer true, just treat that as a defect, then track and fix as usual. [documentation_current]
    The documentation MAY include information about differences or changes between versions of the software and/or link to older versions of the documentation. The intent of this criterion is that an effort is made to keep the documentation consistent, not that the documentation must be perfect.

    Documentation is refreshed every release per docs/release-checklist.md Step 4 (DOC refresh). Version-pinned docs live alongside per-release plan files (docs/v0.7.x-plan.md through docs/v0.7.9-plan.md + docs/v0.8.0-plan.md). All v0.7.9-era staleness items were closed at v0.7.9 ship time: CHANGELOG [Unreleased] gaps for in-flight commits (commit 3315150), the README collectors row (Vanta/Drata/BitSight/SSC + Databricks/Snowflake/SQL/Okta), ROADMAP NEXT/PLANNED → SHIPPED transitions for v0.7.5/v0.7.6/v0.7.7, and the evidentia-collectors pyproject description + keywords. Two earlier-flagged stale strings (the CONTRIBUTING.md test count and the SECURITY.md supported-versions table) were fixed in the v0.7.9 P0.6 OpenSSF Silver-tier prep batch (commit 6f862eb).



    The project repository front page and/or website MUST identify and hyperlink to any achievements, including this best practices badge, within 48 hours of public recognition that the achievement has been attained. (URL required) [documentation_achievements]
    An achievement is any set of external criteria that the project has specifically worked to meet, including some badges. This information does not need to be on the project website front page. A project using GitHub can put achievements on the repository front page by adding them to the README file.

    Project achievements (OpenSSF Best Practices badge, OpenSSF Scorecard) are surfaced in the badge cluster at the top of the README, where the live OpenSSF Best Practices badge is embedded: https://github.com/allenfbyrd/evidentia/blob/main/README.md.


  • Accessibility and internationalization


    The project (both project sites and project results) SHOULD follow accessibility best practices so that persons with disabilities can still participate in the project and use the project results where it is reasonable to do so. [accessibility_best_practices]
    For web applications, see the Web Content Accessibility Guidelines (WCAG 2.0) and its supporting document Understanding WCAG 2.0; see also W3C accessibility information. For GUI applications, consider using the environment-specific accessibility guidelines (such as Gnome, KDE, XFCE, Android, iOS, Mac, and Windows). Some TUI applications (e.g. `ncurses` programs) can do certain things to make themselves more accessible (such as `alpine`'s `force-arrow-cursor` setting). Most command-line applications are fairly accessible as-is. This criterion is often N/A, e.g., for program libraries. Here are some examples of actions to take or issues to consider:
    • Provide text alternatives for any non-text content so that it can be changed into other forms people need, such as large print, braille, speech, symbols or simpler language (WCAG 2.0 guideline 1.1)
    • Color is not used as the only visual means of conveying information, indicating an action, prompting a response, or distinguishing a visual element. (WCAG 2.0 guideline 1.4.1)
    • The visual presentation of text and images of text has a contrast ratio of at least 4.5:1, except for large text, incidental text, and logotypes (WCAG 2.0 guideline 1.4.3)
    • Make all functionality available from a keyboard (WCAG guideline 2.1)
    • A GUI or web-based project SHOULD test with at least one screen-reader on the target platform(s) (e.g. NVDA, Jaws, or WindowEyes on Windows; VoiceOver on Mac & iOS; Orca on Linux/BSD; TalkBack on Android). TUI programs MAY work to reduce overdraw to prevent redundant reading by screen-readers.

    Public surfaces: GitHub repo + GitHub Pages (none yet) + the evidentia-ui SPA (alpha.2). The evidentia-ui frontend uses standard semantic HTML (React + accessible component primitives), keyboard-navigable forms, and ARIA labels on interactive elements. CLI output is plain text (screen-reader friendly by definition; no animated/colored-only UX). Documentation is plain Markdown rendered by GitHub (which provides standard a11y rendering). A formal WCAG 2.1 AA audit of evidentia-ui is planned for v0.8.0+.



    The software produced by the project SHOULD be internationalized to enable easy localization for the target audience's culture, region, or language. If internationalization (i18n) does not apply (e.g., the software doesn't generate text intended for end-users and doesn't sort human-readable text), select "not applicable" (N/A). [internationalization]
    Localization "refers to the adaptation of a product, application or document content to meet the language, cultural and other requirements of a specific target market (a locale)." Internationalization is the "design and development of a product, application or document content that enables easy localization for target audiences that vary in culture, region, or language." (See W3C's "Localization vs. Internationalization".) Software meets this criterion simply by being internationalized. No localization for another specific language is required, since once software has been internationalized it's possible for others to work on localization.

    Evidentia produces compliance artifacts (OSCAL JSON/XML, gap reports, risk statements) — its end-user output is structured data targeting US/EU regulatory frameworks (NIST 800-53, ISO 27001, SOC 2, FedRAMP, FFIEC, OCC 2011-12, FRB SR 11-7) which are themselves authored in English. The CLI's operator-facing strings (help text, errors) are in English and intended for technical operators. There is no localized end-user text generation path that would benefit from i18n. The 89 bundled catalogs ship in their authoritative source language (English).


  • Other


    If the project sites (website, repository, and download URLs) store passwords for authentication of external users, the passwords MUST be stored as iterated hashes with a per-user salt by using a key stretching (iterated) algorithm (e.g., Argon2id, Bcrypt, Scrypt, or PBKDF2). If the project sites do not store passwords for this purpose, select "not applicable" (N/A). [sites_password_security]
    Note that the use of GitHub meets this criterion. This criterion only applies to passwords used for authentication of external users into the project sites (aka inbound authentication). If the project sites must log in to other sites (aka outbound authentication), they may need to store authorization tokens for that purpose differently (since storing a hash would be useless). This applies criterion crypto_password_storage to the project sites, similar to sites_https.

    Project sites do not store user passwords. GitHub repo handles its own auth; bestpractices.dev account is on the OpenSSF service; PyPI publishing is via Trusted Publisher OIDC (no API tokens stored). Evidentia OSS edition does not provide a public auth surface that would require password storage.
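    Although this criterion is effectively N/A here, the requirement it states — a per-user random salt plus a key-stretching (iterated) algorithm — can be sketched with the standard library alone. This is an illustrative sketch, not project code; the iteration count is an assumption and should be tuned to the deployment hardware.

    ```python
    import hashlib
    import hmac
    import os

    # Illustrative iteration count (assumption); higher is slower but stronger.
    ITERATIONS = 600_000

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Hash a password with a fresh per-user salt using PBKDF2-HMAC-SHA256."""
        salt = os.urandom(16)  # per-user salt, as the criterion requires
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        """Recompute the stretched hash and compare in constant time."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)
    ```

    The criterion also allows Argon2id, Bcrypt, or Scrypt; PBKDF2 is shown only because it needs no third-party dependency.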


 Change Control 1/1

  • Previous versions


    The project MUST maintain the most often used older versions of the product or provide an upgrade path to newer versions. If the upgrade path is difficult, the project MUST document how to perform the upgrade (e.g., the interfaces that have changed and detailed suggested steps to help upgrade). [maintenance_or_update]

    Upgrade path is documented in CHANGELOG.md (https://github.com/allenfbyrd/evidentia/blob/main/CHANGELOG.md) with per-release "Changed", "Fixed", "Removed" sections following Keep a Changelog 1.1.0. SemVer adherence (pre-1.0: minor bumps for new feature surface, patches for fixes) means breaking changes carry an explicit Deprecation notice in the prior release. Supported-versions matrix documented in SECURITY.md. Older versions remain installable from PyPI for the duration of their security-supported window.
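    The pre-1.0 SemVer policy described above (minor bumps signal new feature surface, patch bumps signal fixes only) can be made concrete with a small classifier. This is a hypothetical helper for illustration, not project code.

    ```python
    def bump_kind(old: str, new: str) -> str:
        """Classify a version bump under the pre-1.0 policy described above.

        Hypothetical helper: pre-1.0, a minor bump means new feature surface
        (and possible breaking changes flagged by a prior Deprecation notice),
        while a patch bump means fixes only.
        """
        o = tuple(int(part) for part in old.split("."))
        n = tuple(int(part) for part in new.split("."))
        if n[0] != o[0]:
            return "major"
        if n[1] != o[1]:
            return "feature-surface (minor)"
        return "fixes (patch)"
    ```

    For example, `bump_kind("0.7.8", "0.7.9")` classifies as a fixes-only patch, while `bump_kind("0.7.9", "0.8.0")` flags new feature surface.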


 Reporting 3/3

  • Bug-reporting process


    The project MUST use an issue tracker for tracking individual issues. [report_tracker]

    GitHub Issues is the project's issue tracker: https://github.com/allenfbyrd/evidentia/issues. Issue templates for bug reports and feature requests live at https://github.com/allenfbyrd/evidentia/tree/main/.github/ISSUE_TEMPLATE.


  • Vulnerability report process


    The project MUST give credit to the reporter(s) of all vulnerability reports resolved in the last 12 months, except for the reporter(s) who request anonymity. If there have been no vulnerabilities resolved in the last 12 months, select "not applicable" (N/A). (URL required) [vulnerability_report_credit]

    No external vulnerability reports have been received in the last 12 months. Upstream-CVE remediations (e.g., PR #8 addressing litellm + python-multipart) credited the upstream advisory IDs (GHSA-r75f-5x8p-qvmc + 3 others) in the commit + CHANGELOG entry. A "Security Acknowledgments" section will be added to SECURITY.md the first time an external reporter is involved.



    The project MUST have a documented process for responding to vulnerability reports. (URL required) [vulnerability_response_process]
    This is strongly related to vulnerability_report_process, which requires that there be a documented way to report vulnerabilities. It is also related to vulnerability_report_response, which requires response to vulnerability reports within a certain time frame.

    Vulnerability response process is documented in https://github.com/allenfbyrd/evidentia/blob/main/SECURITY.md with: 3-business-day initial acknowledgement SLA, 10-business-day triage SLA, 90-day coordinated disclosure window (shortenable if upstream fixes exist, lengthenable by reporter agreement), in-scope and out-of-scope definitions, supported-versions matrix, and pointers to PEP 740 attestation + Sigstore/Rekor verification commands. Internal handling steps (triage → fix → CVE assignment → coordinated release → post-mortem) are documented in docs/release-checklist.md.
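    The business-day SLA arithmetic above can be sketched as follows. This is an illustrative helper, not project code; it assumes "business days" means weekdays and ignores public holidays.

    ```python
    from datetime import date, timedelta

    def add_business_days(start: date, days: int) -> date:
        """Advance `days` business days from `start`, skipping weekends.

        Illustrative helper (assumption: business day == weekday; holidays
        are ignored for simplicity).
        """
        current = start
        remaining = days
        while remaining > 0:
            current += timedelta(days=1)
            if current.weekday() < 5:  # Mon=0 .. Fri=4
                remaining -= 1
        return current
    ```

    For a report received on Friday 2024-05-03, the 3-business-day acknowledgement deadline lands on Wednesday 2024-05-08 and the 10-business-day triage deadline on Friday 2024-05-17.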


 Quality 19/19

  • Coding standards


    The project MUST identify the specific coding style guides for the primary languages it uses, and require that contributions generally comply with it. (URL required) [coding_standards]
    In most cases this is done by referring to some existing style guide(s), possibly listing differences. These style guides can include ways to improve readability and ways to reduce the likelihood of defects (including vulnerabilities). Many programming languages have one or more widely-used style guides. Examples of style guides include Google's style guides and SEI CERT Coding Standards.

    Coding standards are documented in CONTRIBUTING.md (https://github.com/allenfbyrd/evidentia/blob/main/CONTRIBUTING.md) and ruff config (https://github.com/allenfbyrd/evidentia/blob/main/pyproject.toml): Python 3.12+ following PEP 8 (enforced by ruff E/W rules), PEP 257 docstrings, isort import ordering (ruff I), pyflakes hygiene (F), pyupgrade modernization (UP), flake8-bugbear common-bug rules (B), flake8-simplify simplifications (SIM). Type annotations are required everywhere (mypy strict). Pydantic v2 models for all external inputs (extra="forbid"). TypeScript frontend uses ESLint + Prettier (config in packages/evidentia-ui/).



    The project MUST automatically enforce its selected coding style(s) if there is at least one FLOSS tool that can do so in the selected language(s). [coding_standards_enforced]
    This MAY be implemented using static analysis tool(s) and/or by forcing the code through code reformatters. In many cases the tool configuration is included in the project's repository (since different projects may choose different configurations). Projects MAY allow style exceptions (and typically will); where exceptions occur, they MUST be rare and documented in the code at their locations, so that these exceptions can be reviewed and so that tools can automatically handle them in the future. Examples of such tools include ESLint (JavaScript), Rubocop (Ruby), and devtools check (R).

    ruff (Python) + mypy strict (Python types) + ESLint (TypeScript) + Prettier (TypeScript formatting) all enforced as required CI status checks on every push and PR via https://github.com/allenfbyrd/evidentia/blob/main/.github/workflows/test.yml. Pre-commit hooks (https://github.com/allenfbyrd/evidentia/blob/main/.pre-commit-config.yaml) catch issues at commit time, before they reach CI.


  • Working build system


    Build systems for native binaries MUST honor the relevant compiler and linker (environment) variables passed in to them (e.g., CC, CFLAGS, CXX, CXXFLAGS, and LDFLAGS) and pass them to compiler and linker invocations. A build system MAY extend them with additional flags; it MUST NOT simply replace provided values with its own. If no native binaries are being generated, select "not applicable" (N/A). [build_standard_variables]
    It should be easy to enable special build features like Address Sanitizer (ASAN), or to comply with distribution hardening best practices (e.g., by easily turning on compiler flags to do so).

    Pure Python + TypeScript, no native binaries.



    The build and installation system SHOULD preserve debugging information if they are requested in the relevant flags (e.g., "install -s" is not used). If there is no build or installation system (e.g., typical JavaScript libraries), select "not applicable" (N/A). [build_preserve_debug]
    E.g., setting CFLAGS (C) or CXXFLAGS (C++) should create the relevant debugging information if those languages are used, and that information should not be stripped during installation. Debugging information is needed for support and analysis, and is also useful for measuring the presence of hardening features in the compiled binaries.

    Pure Python; there is no compile/strip step, and full tracebacks with source context are always available.



    The build system for the software produced by the project MUST NOT recursively build subdirectories if there are cross-dependencies in the subdirectories. If there is no build or installation system (e.g., typical JavaScript libraries), select "not applicable" (N/A). [build_non_recursive]
    The project build system's internal dependency information needs to be accurate, otherwise, changes to the project may not build correctly. Incorrect builds can lead to defects (including vulnerabilities). A common mistake in large build systems is to use a "recursive build" or "recursive make", that is, a hierarchy of subdirectories containing source files, where each subdirectory is independently built. Unless each subdirectory is fully independent, this is a mistake, because the dependency information is incorrect.

    uv workspace builds packages atomically; no recursive Make pattern.



    The project MUST be able to repeat the process of generating information from source files and get exactly the same bit-for-bit result. If no building occurs (e.g., scripting languages where the source code is used directly instead of being compiled), select "not applicable" (N/A). [build_repeatable]
    GCC and clang users may find the -frandom-seed option useful; in some cases, this can be resolved by forcing some sort order. More suggestions can be found at the reproducible build site.

    Builds are deterministic given uv.lock pinning. uv.lock (https://github.com/allenfbyrd/evidentia/blob/main/uv.lock) pins every transitive dependency by hash. Wheel building via hatchling is deterministic given fixed inputs. CI rebuilds on each commit produce reproducible artifacts (verifiable by re-running release.yml against a tag — same wheel hashes emerge).
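    The "same wheel hashes emerge" check above amounts to hashing two independently built artifacts and comparing digests. A minimal sketch (illustrative helper, not project code):

    ```python
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Hex SHA-256 of a file, streamed in chunks to bound memory use."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def builds_match(wheel_a: Path, wheel_b: Path) -> bool:
        """Bit-for-bit reproducibility check between two built artifacts."""
        return sha256_of(wheel_a) == sha256_of(wheel_b)
    ```

    Running this against a locally rebuilt wheel and the published PyPI wheel for the same tag is one way an auditor could spot-check the reproducibility claim.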


  • Installation system


    The project MUST provide a way to easily install and uninstall the software produced by the project using a commonly-used convention. [installation_common]
    Examples include using a package manager (at the system or language level), "make install/uninstall" (supporting DESTDIR), a container in a standard format, or a virtual machine image in a standard format. The installation and uninstallation process (e.g., its packaging) MAY be implemented by a third party as long as it is FLOSS.

    Standard Python install: pip install evidentia (or pip install "evidentia[gui]" for full extras). Uninstall: pip uninstall evidentia. Container alternative: docker pull ghcr.io/allenfbyrd/evidentia:latest. Documented in https://github.com/allenfbyrd/evidentia/blob/main/docs/quickstart.md.



    The installation system for end-users MUST honor standard conventions for selecting the location where built artifacts are written to at installation time. For example, if it installs files on a POSIX system it MUST honor the DESTDIR environment variable. If there is no installation system or no standard convention, select "not applicable" (N/A). [installation_standard_variables]

    Python pip install respects standard --user / --prefix / virtualenv conventions; no autotools-style DESTDIR pattern applies.



    The project MUST provide a way for potential developers to quickly install all the project results and support environment necessary to make changes, including the tests and test environment. This MUST be performed with a commonly-used convention. [installation_development_quick]
    This MAY be implemented using a generated container and/or installation script(s). External dependencies would typically be installed by invoking system and/or language package manager(s), per external_dependencies.

    Dev bootstrap: git clone ..., then uv sync --all-packages installs all 6 packages + dev deps + tests in one command. Documented in CONTRIBUTING.md (https://github.com/allenfbyrd/evidentia/blob/main/CONTRIBUTING.md). devcontainer support also shipped (https://github.com/allenfbyrd/evidentia/blob/main/.devcontainer/) for one-click VS Code / GitHub Codespaces bring-up.


  • Externally-maintained components


    The project MUST list external dependencies in a computer-processable way. (URL required) [external_dependencies]
    Typically this is done using the conventions of package manager and/or build system. Note that this helps implement installation_development_quick.

    External dependencies are declared in 7 pyproject.toml files (workspace root + 6 packages) and resolved/locked in https://github.com/allenfbyrd/evidentia/blob/main/uv.lock. CycloneDX SBOM (spec 1.6) is emitted on every release and attached to the GitHub Release for full computer-processable SBOM consumption.
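    Because a CycloneDX SBOM is plain JSON, it can be consumed with nothing but the standard library. The document below is a trimmed illustrative sketch, not the project's actual SBOM; the component names and versions are made up for the example:

```python
import json

# Trimmed sketch of a CycloneDX 1.6 document (illustrative only).
sbom_text = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.6",
  "components": [
    {"type": "library", "name": "httpx", "version": "0.27.0",
     "purl": "pkg:pypi/httpx@0.27.0"},
    {"type": "library", "name": "pydantic", "version": "2.7.0",
     "purl": "pkg:pypi/pydantic@2.7.0"}
  ]
}
"""

def list_purls(sbom_json: str) -> list[str]:
    """Return the package URL (purl) of every component in a CycloneDX SBOM."""
    doc = json.loads(sbom_json)
    return [c["purl"] for c in doc.get("components", []) if "purl" in c]

print(list_purls(sbom_text))
# → ['pkg:pypi/httpx@0.27.0', 'pkg:pypi/pydantic@2.7.0']
```

    The same pattern works against the real release-attached SBOM, which is what makes the dependency list computer-processable.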



    Projects MUST monitor or periodically check their external dependencies (including convenience copies) to detect known vulnerabilities, and fix exploitable vulnerabilities or verify them as unexploitable. [dependency_monitoring]
    This can be done using an origin analyzer / dependency checking tool / software composition analysis tool such as OWASP's Dependency-Check, Sonatype's Nexus Auditor, Synopsys' Black Duck Software Composition Analysis, and Bundler-audit (for Ruby). Some package managers include mechanisms to do this. It is acceptable if the components' vulnerability cannot be exploited, but this analysis is difficult and it is sometimes easier to simply update or fix the part.

    Dependabot scans weekly per https://github.com/allenfbyrd/evidentia/blob/main/.github/dependabot.yml (uv, npm, GitHub Actions, Docker — grouped + security-isolated). osv-scanner runs against the SBOM at every release per docs/release-checklist.md Step 5 (most recent: 0 CVEs at v0.7.8). GitHub Code Scanning + Secret Scanning + Dependency Graph all enabled at the repo level.



    The project MUST either:
    1. make it easy to identify and update reused externally-maintained components; or
    2. use the standard components provided by the system or programming language.
    Then, if a vulnerability is found in a reused component, it will be easy to update that component. [updateable_reused_components]
    A typical way to meet this criterion is to use system and programming language package management systems. Many FLOSS programs are distributed with "convenience libraries" that are local copies of standard libraries (possibly forked). By itself, that's fine. However, if the program *must* use these local (forked) copies, then updating the "standard" libraries as a security update will leave these additional copies still vulnerable. This is especially an issue for cloud-based systems; if the cloud provider updates their "standard" libraries but the program won't use them, then the updates don't actually help. See, e.g., "Chromium: Why it isn't in Fedora yet as a proper package" by Tom Callaway.

    All reused dependencies come through standard package managers (PyPI for Python, npm for the frontend, GHCR for container images). Updates flow through Dependabot grouped PRs (config: https://github.com/allenfbyrd/evidentia/blob/main/.github/dependabot.yml). No vendored convenience copies of upstream code.



    The project SHOULD avoid using deprecated or obsolete functions and APIs where FLOSS alternatives are available in the set of technology it uses (its "technology stack") and to a supermajority of the users the project supports (so that users have ready access to the alternative). [interfaces_current]

    Codebase is on Python 3.12+ only (no legacy compat shims), Pydantic v2 (current major), latest LangChain/LiteLLM, FastAPI 0.110+, httpx (modern async-capable HTTP). ruff UP rule group continuously surfaces pyupgrade opportunities; backwards-compat hacks are rejected per project standard. Frontend is on Vite 8 + React 18 + TypeScript 5+ (current).


  • Automated test suite


    An automated test suite MUST be applied on each check-in to a shared repository for at least one branch. This test suite MUST produce a report on test success or failure. [automated_integration_testing]
    This requirement can be viewed as a subset of test_continuous_integration, but focused on just testing, without requiring continuous integration.

    test.yml runs pytest + ruff + mypy + frontend tests on every push to main and every pull request, with success/failure status reported as a required check. Workflow: https://github.com/allenfbyrd/evidentia/blob/main/.github/workflows/test.yml. Run history: https://github.com/allenfbyrd/evidentia/actions.



    The project MUST add regression tests to an automated test suite for at least 50% of the bugs fixed within the last six months. [regression_tests_added50]

    Per docs/release-checklist.md Step 5 + the pre-release-review v4 process, every fix lands with a regression test. Recent examples:

    • F-V08-DAST-1 / F-V08-DAST-3 (v0.7.8 schema-fidelity fixes): shipped with new pytest cases for the affected endpoints.
    • F-007 (v0.7.7.1 Dockerfile pin): shipped with the docker-run smoke test elevation.
    • F-V08-CR-MEDIUM Snowflake quoted-identifier (v0.7.9 carry-over): shipped with 4 new TestQuotedIdentifierEscape tests.
    • F-V08-CR-MEDIUM Power BI 1MB guard (v0.7.9 carry-over): shipped with 4 new TestPushRowsByteCapBisection tests.
    • v0.7.9 P0.4 Continuous-review HIGH findings (H-1/H-2/H-3/H-4/H-5 + F-V09-S1): shipped with stuck-cursor-guard tests + SIG BYO partial-match test + vendor_id=None ingest test.

    Test count growth across patch releases (965 → 977 → 1103 → 1259 → 1540) primarily reflects regression tests for fixed bugs plus tests for new features.



    The project MUST have FLOSS automated test suite(s) that provide at least 80% statement coverage if there is at least one FLOSS tool that can measure this criterion in the selected language. [test_statement_coverage80]
    Many FLOSS tools are available to measure test coverage, including gcov/lcov, Blanket.js, Istanbul, JCov, and covr (R). Note that meeting this criterion is not a guarantee that the test suite is thorough; instead, failing to meet this criterion is a strong indicator of a poor test suite.

    Met. Statement coverage is measured with pytest-cov (FLOSS, MIT-licensed) and published to Codecov: 81.87% at the v0.7.10 ship, exceeding the 80% threshold. Coverage is uploaded on every push to main via .github/workflows/test.yml (codecov-action@v6.0.0, SHA-pinned). The codecov.yml config locks the project gate at 80% with a 1% per-PR drop threshold, so coverage regressions fail CI. Coverage scope and omit rationale are documented in pyproject.toml [tool.coverage.run]. Live badge: https://codecov.io/gh/allenfbyrd/evidentia


  • New functionality testing


    The project MUST have a formal written policy that as major new functionality is added, tests for the new functionality MUST be added to an automated test suite. [test_policy_mandated]

    CONTRIBUTING.md PR checklist line: "New features include at least one test" — required, not optional. https://github.com/allenfbyrd/evidentia/blob/main/CONTRIBUTING.md. Reinforced by docs/release-checklist.md Step 5 which gates every tag on test additions for new feature surface (a tag cannot be cut if a release includes new public surface without paired tests).



    The project MUST include, in its documented instructions for change proposals, the policy that tests are to be added for major new functionality. [tests_documented_added]
    However, even an informal rule is acceptable as long as the tests are being added in practice.

    The test-addition policy is documented in the CONTRIBUTING.md PR checklist: https://github.com/allenfbyrd/evidentia/blob/main/CONTRIBUTING.md.


  • Warning flags


    Projects MUST be maximally strict with warnings in the software produced by the project, where practical. [warnings_strict]
    Some warnings cannot be effectively enabled on some projects. What is needed is evidence that the project is striving to enable warning flags where it can, so that errors are detected early.

    mypy runs with strict = true (config in pyproject.toml) — implies disallow_untyped_defs, disallow_incomplete_defs, check_untyped_defs, disallow_untyped_decorators, no_implicit_optional, warn_redundant_casts, warn_unused_ignores, warn_return_any, no_implicit_reexport, and strict_equality. Pydantic v2 plugin is enabled for full schema validation in type-check. Ruff rule set is broad (8 rule groups). Type-checking covers all 138 source files at zero errors.
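    In pyproject.toml terms, the strict setup can be sketched as follows (a minimal illustration of the settings named above; the project's actual file may group or extend them differently):

```toml
[tool.mypy]
strict = true                 # umbrella flag enabling the strictness checks listed above
plugins = ["pydantic.mypy"]   # Pydantic v2 plugin for schema-aware type-checking
```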


 Security 13/13

  • Secure development knowledge


    The project MUST implement secure design principles (from "know_secure_design"), where applicable. If the project is not producing software, select "not applicable" (N/A). [implement_secure_design]
    For example, the project results should have fail-safe defaults (access decisions should deny by default, and projects' installation should be secure by default). They should also have complete mediation (every access that might be limited must be checked for authority and be non-bypassable). Note that in some cases principles will conflict, in which case a choice must be made (e.g., many mechanisms can make things more complex, contravening "economy of mechanism" / keep it simple).

    Secure-design principles applied throughout the codebase per docs/threat-model.md and docs/security-review-v0.7.9.md:

    • Least privilege: GitHub Actions workflows default to read-only permissions with per-job elevation; collector API tokens are scoped to read-only (vendors:read for Vanta, vendor-inventory only for Drata, etc.).
    • Fail-safe defaults: Pydantic extra="forbid", offline mode is default-on for the AI module unless explicitly opted in, security-headers default OFF on localhost binds + auto-ON for non-loopback (--security-headers flag).
    • Complete mediation: every external input passes through validate_within() / Pydantic validation before reaching business logic; vendor inventory validates UUID-shape IDs at storage layer.
    • Separation of privilege: cosign keyless OIDC + Trusted Publisher OIDC (no long-lived secrets to compromise); vendor-risk-collector tokens never flow through CLI args or REST request bodies (env-var only).
    • Defense in depth: ruff + mypy + CodeQL + osv-scanner + Scorecard + manual /security-review per release. The v0.7.9 cycle ran three Continuous-variant pre-release-reviews mid-cycle (P0.1 close + P0.3+P0.2-first close + P0.4-quartet close) plus the final Pre-tag run at ship, surfacing 18 findings across the cycle (5 inline-fixed HIGH + 1 inline-fixed LOW security + 12 deferred MEDIUM/LOW).
    • Input validation as allowlist: Pydantic schemas enumerate accepted shapes; everything else rejected. CSV-injection defenses (CWE-1236) via _csv_safe in TPRM concentration-report + DD-questionnaire CSV/XLSX render paths.
    • Cross-host pagination guards: BitSight + Vanta + Drata pagination loops refuse to follow next URLs pointing off-host or to a TLS-downgraded HTTP scheme (CWE-319 defense, v0.7.9 P0.4 Continuous F-V09-S1).
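    The cross-host pagination guard can be sketched with the standard library. The function name and exact policy here are illustrative assumptions; the shipped guard lives in the collector code:

```python
from urllib.parse import urlparse

def is_safe_next_url(next_url: str, base_url: str) -> bool:
    """Illustrative guard: follow a pagination 'next' link only if it stays
    on the same host and does not downgrade HTTPS to HTTP (CWE-319)."""
    base = urlparse(base_url)
    nxt = urlparse(next_url)
    if nxt.scheme != "https":          # refuse TLS downgrade (and odd schemes)
        return False
    if nxt.hostname != base.hostname:  # refuse cross-host redirects
        return False
    return True

# A same-host HTTPS link is followed; a downgraded or off-host one is refused.
print(is_safe_next_url("https://api.example.com/v1/vendors?page=2",
                       "https://api.example.com/v1/vendors"))  # True
print(is_safe_next_url("http://api.example.com/v1/vendors?page=2",
                       "https://api.example.com/v1/vendors"))  # False
```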

  • Use basic good cryptographic practices

    Note that some software does not need to use cryptographic mechanisms. If your project produces software that (1) includes, activates, or enables encryption functionality, and (2) might be released from the United States (US) to outside the US or to a non-US-citizen, you may be legally required to take a few extra steps. Typically this just involves sending an email. For more information, see the encryption section of Understanding Open Source Technology & US Export Controls.

    The default security mechanisms within the software produced by the project MUST NOT depend on cryptographic algorithms or modes with known serious weaknesses (e.g., the SHA-1 cryptographic hash algorithm or the CBC mode in SSH). [crypto_weaknesses]
    Concerns about CBC mode in SSH are discussed in CERT: SSH CBC vulnerability.

    No SHA-1 for security purposes, no CBC mode in the SSH context, and no deprecated TLS versions: the project relies on Python stdlib + httpx defaults, which negotiate TLS 1.2+ with AEAD ciphers.



    The project SHOULD support multiple cryptographic algorithms, so users can quickly switch if one is broken. Common symmetric key algorithms include AES, Twofish, and Serpent. Common cryptographic hash algorithm alternatives include SHA-2 (including SHA-224, SHA-256, SHA-384, and SHA-512) and SHA-3. [crypto_algorithm_agility]

    Hash functions: hashlib supports the full SHA-2 (224/256/384/512) and SHA-3 family; Evidentia uses SHA-256 by default but the digest helper at packages/evidentia-core/src/evidentia_core/oscal/digest.py is parameterizable. Sigstore signing supports both ECDSA P-256 and RSA — the cosign CLI lets users pick. TLS cipher selection is delegated to httpx/urllib3 which negotiate from a wide modern AEAD ciphersuite list.
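    A parameterizable digest helper of this shape needs only hashlib; this is a sketch of the idea, and the real helper in evidentia_core/oscal/digest.py may have a different signature:

```python
import hashlib

def digest_hex(data: bytes, algorithm: str = "sha256") -> str:
    """Illustrative parameterizable digest helper: any algorithm hashlib
    knows (sha256, sha384, sha512, sha3_256, ...) works, so switching
    away from a broken hash is a one-argument change."""
    h = hashlib.new(algorithm)
    h.update(data)
    return h.hexdigest()

print(digest_hex(b"hello"))              # default SHA-256
print(digest_hex(b"hello", "sha3_256"))  # switch algorithm, no code change
```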



    The project MUST support storing authentication credentials (such as passwords and dynamic tokens) and private cryptographic keys in files that are separate from other information (such as configuration files, databases, and logs), and permit users to update and replace them without code recompilation. If the project never processes authentication credentials and private cryptographic keys, select "not applicable" (N/A). [crypto_credential_agility]

    Where Evidentia handles outbound credentials (LLM provider API keys, collector credentials for Postgres/MySQL/Snowflake/Databricks/Tableau/Power BI/Okta/ServiceNow + the v0.7.9 vendor-risk APIs Vanta/Drata/BitSight/SecurityScorecard), they are read from environment variables or external config files (never embedded in code, never accepted via CLI args or REST request bodies). Operator updates the env file; no code recompilation required. Documented in docs/quickstart.md and per-collector docs (sql-collectors.md, cloud-dw-collectors.md, bi-integrations.md, tprm.md).



    The software produced by the project SHOULD support secure protocols for all of its network communications, such as SSHv2 or later, TLS1.2 or later (HTTPS), IPsec, SFTP, and SNMPv3. Insecure protocols such as FTP, HTTP, telnet, SSLv3 or earlier, and SSHv1 SHOULD be disabled by default, and only enabled if the user specifically configures it. If the software produced by the project does not support network communications, select "not applicable" (N/A). [crypto_used_network]

    All outbound network communication is HTTPS / TLS 1.2+ via httpx (LLM provider calls, BI publish API calls) or HTTPS via the SDK clients (databricks-sdk, snowflake-connector-python, etc.). No legacy protocol support (FTP, telnet, plain HTTP for non-localhost) is implemented or exposed.



    The software produced by the project SHOULD, if it supports or uses TLS, support at least TLS version 1.2. Note that the predecessor of TLS was called SSL. If the software does not use TLS, select "not applicable" (N/A). [crypto_tls12]

    All TLS connections go through Python's ssl module via httpx/urllib3 which negotiate TLS 1.2 or 1.3 by default against modern endpoints. Older TLS versions are not enabled.



    The software produced by the project MUST, if it supports TLS, perform TLS certificate verification by default when using TLS, including on subresources. If the software does not use TLS, select "not applicable" (N/A). [crypto_certificate_verification]

    Default certificate verification is on for httpx (verify=True is the default), urllib3, and all collector SDKs (databricks-sdk, snowflake-connector-python, tableau-server-client, etc.). Certificate verification is never disabled in project code; only an operator-set env var would change this behavior.



    The software produced by the project MUST, if it supports TLS, perform certificate verification before sending HTTP headers with private information (such as secure cookies). If the software does not use TLS, select "not applicable" (N/A). [crypto_verification_private]

    Same code path as crypto_certificate_verification — httpx and underlying urllib3/ssl perform the TLS handshake (with cert verification) before any application-layer request is sent. Headers including credentials are only emitted after the validated TLS session is established. No project code bypasses this ordering.


  • Secure release


    The project MUST cryptographically sign releases of the project results intended for widespread use, and there MUST be a documented process explaining to users how they can obtain the public signing keys and verify the signature(s). The private key for these signature(s) MUST NOT be on site(s) used to directly distribute the software to the public. If releases are not intended for widespread use, select "not applicable" (N/A). [signed_releases]
    The project results include both source code and any generated deliverables where applicable (e.g., executables, packages, and containers). Generated deliverables MAY be signed separately from source code. These MAY be implemented as signed git tags (using cryptographic digital signatures). Projects MAY provide generated results separately from tools like git, but in those cases, the separate results MUST be separately signed.

    Every PyPI release wheel + sdist is signed via Sigstore PEP 740 attestations (keyless OIDC, signed via the Sigstore public good instance and recorded in the Rekor transparency log). Container images are signed by cosign keyless OIDC against the image digest. SLSA L3 build provenance attestations are emitted for every release. Verification commands documented at https://github.com/allenfbyrd/evidentia/blob/main/docs/sigstore-quickstart.md. Private signing keys do not exist on the distribution side (keyless flow), satisfying the "private key not on distribution site" requirement by construction.



    It is SUGGESTED that in the version control system, each important version tag (a tag that is part of a major release, minor release, or fixes publicly noted vulnerabilities) be cryptographically signed and verifiable as described in signed_releases. [version_tags_signed]

    Git tags are not currently GPG/SSH-signed at the tag-object level. Release artifacts are signed via Sigstore PEP 740 + cosign keyless OIDC + SLSA L3 provenance, which is a stronger and more verifiable provenance chain than git tag signing (the artifact's identity is bound to the GitHub Actions workflow + commit SHA in the OIDC token). Adding signed git tags is a planned addition in v0.8.0 (will use the same Sigstore identity).


  • Other security issues


    The project results MUST check all inputs from potentially untrusted sources to ensure they are valid (an *allowlist*), and reject invalid inputs, if there are any restrictions on the data at all. [input_validation]
    Note that comparing input against a list of "bad formats" (aka a *denylist*) is normally not enough, because attackers can often work around a denylist. In particular, numbers are converted into internal formats and then checked if they are between their minimum and maximum (inclusive), and text strings are checked to ensure that they are valid text patterns (e.g., valid UTF-8, length, syntax, etc.). Some data may need to be "anything at all" (e.g., a file uploader), but these would typically be rare.

    All external inputs are validated via Pydantic v2 with extra="forbid" (reject unknown fields). Specific patterns:

    • validate_within() helper for path inputs (CWE-22 prevention).
    • SQL collector queries are parameterized + LIMIT-bounded; Snowflake quoted-identifier escaping follows Snowflake's documented double-up convention (v0.7.9 carry-over hardening).
    • YAML is loaded via yaml.safe_load (CWE-502 prevention); subprocess calls are shell=False (CWE-78 prevention).
    • LLM provider calls validate the provider/model allowlist before dispatch.
    • Additional allowlist gates: catalog 22-character column-truncation, 17-endpoint schema-fidelity validation, offline-mode enforcement, BitSight/Vanta/Drata/SSC cross-host pagination guards (cross-host + TLS-scheme-downgrade refusal per CWE-319), and CSV-injection defenses on TPRM concentration-report + DD-questionnaire user-content cells (CWE-1236 via _csv_safe OWASP single-quote prefix).
    • The Power BI 1MB byte-cap guard (v0.7.9 carry-over) splits batches that exceed Power BI's documented 1MB request-body limit.

    Documented in docs/threat-model.md and docs/security-review-v0.7.9.md.
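    The path allowlist check can be sketched with pathlib alone. This is an illustrative stand-in for validate_within(); the shipped helper's signature and policy may differ:

```python
from pathlib import Path

def validate_within(candidate: str, root: str) -> Path:
    """Illustrative allowlist-style path check (CWE-22): resolve the
    candidate under the permitted root and reject anything that escapes
    it, including ../ traversal and absolute-path tricks."""
    root_path = Path(root).resolve()
    resolved = (root_path / candidate).resolve()
    if not resolved.is_relative_to(root_path):  # Python 3.9+
        raise ValueError(f"path escapes {root_path}: {candidate}")
    return resolved

print(validate_within("reports/q3.csv", "/tmp/evidence"))
# validate_within("../../etc/passwd", "/tmp/evidence") raises ValueError
```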



    Hardening mechanisms SHOULD be used in the software produced by the project so that software defects are less likely to result in security vulnerabilities. [hardening]
    Hardening mechanisms may include HTTP headers like Content Security Policy (CSP), compiler flags to mitigate attacks (such as -fstack-protector), or compiler flags to eliminate undefined behavior. For our purposes least privilege is not considered a hardening mechanism (least privilege is important, but separate).

    Hardening posture:

    • Container: distroless-style image (Dockerfile pins python by SHA digest); HEALTHCHECK present; non-root user where applicable.
    • HTTP API: FastAPI with Pydantic-validated request models; response headers include strict Content-Type; no stack-trace leakage in error responses (F-002, F-003 fixed in v0.7.7).
    • CI: workflow permissions default read-only with per-job elevation; SHA-pinned actions throughout (Scorecard Pinned-Dependencies signal green); CodeQL custom QL pack to suppress validate_within false positives.
    • Supply chain: PEP 740 + SLSA L3 + cosign signing closes the publish-side hardening loop.


    The project MUST provide an assurance case that justifies why its security requirements are met. The assurance case MUST include: a description of the threat model, clear identification of trust boundaries, an argument that secure design principles have been applied, and an argument that common implementation security weaknesses have been countered. (URL required) [assurance_case]
    An assurance case is "a documented body of evidence that provides a convincing and valid argument that a specified set of critical claims regarding a system’s properties are adequately justified for a given application in a given environment" ("Software Assurance Using Structured Assurance Case Models", Thomas Rhodes et al, NIST Interagency Report 7608). Trust boundaries are boundaries where data or execution changes its level of trust, e.g., a server's boundaries in a typical web application. It's common to list secure design principles (such as Saltzer and Schroeder) and common implementation security weaknesses (such as the OWASP top 10 or CWE/SANS top 25), and show how each are countered. The BadgeApp assurance case may be a useful example. This is related to documentation_security, documentation_architecture, and implement_secure_design.

    Assurance case is composed across three documents:

    1. Threat model: https://github.com/allenfbyrd/evidentia/blob/main/docs/threat-model.md — ~58 surfaces in 5 tiers (v0.7.9 ships TPRM module + 4 vendor-risk-collector additions to the surface inventory), explicit trust boundaries, in-scope/out-of-scope definitions.
    2. Security review (most recent release): https://github.com/allenfbyrd/evidentia/blob/main/docs/security-review-v0.7.9.md — applies CVSS/CWE/EPSS classification to the active surface and demonstrates that secure-design principles have been applied (least privilege, fail-safe defaults, complete mediation, separation of privilege, defense in depth) and common implementation weaknesses have been countered (CWE-22, CWE-78, CWE-89, CWE-502, CWE-209, CWE-319 cross-host TLS-downgrade, CWE-1236 CSV injection, CWE-693 protection-mechanism failure, etc.). The /pre-release-review v4 skill's mandatory /security-review invocations at Steps 3, 4, 6.C produce the input the document synthesizes.
    3. Accepted-findings registry: https://github.com/allenfbyrd/evidentia/blob/main/docs/enterprise-grade-accepted-findings.md — documents residual-risk acceptance with explicit rationale.

 Analysis 2/2

  • Static code analysis


    The project MUST use at least one static analysis tool with rules or approaches to look for common vulnerabilities in the analyzed language or environment, if there is at least one FLOSS tool that can implement this criterion in the selected language. [static_analysis_common_vulnerabilities]
    Static analysis tools that are specifically designed to look for common vulnerabilities are more likely to find them. That said, using any static tools will typically help find some problems, so we are suggesting but not requiring this for the 'passing' level badge.

    CodeQL ships with the default security-and-quality query packs which include rules for the OWASP Top 10 and CWE Top 25 (path traversal, SQL injection, XSS, command injection, deserialization vulns, hardcoded credentials, regex DoS, etc.). All these queries run on every push/PR.


  • Dynamic code analysis


    If the software produced by the project includes software written using a memory-unsafe language (e.g., C or C++), then at least one dynamic tool (e.g., a fuzzer or web application scanner) MUST be routinely used in combination with a mechanism to detect memory safety problems such as buffer overwrites. If the project does not produce software written in a memory-unsafe language, choose "not applicable" (N/A). [dynamic_analysis_unsafe]
    Examples of mechanisms to detect memory safety problems include Address Sanitizer (ASAN) (available in GCC and LLVM), Memory Sanitizer, and valgrind. Other potentially-used tools include thread sanitizer and undefined behavior sanitizer. Widespread assertions would also work.

    Evidentia is implemented in pure Python (memory-safe) for the backend and TypeScript (memory-safe) for the evidentia-ui frontend. No memory-unsafe language is used in project source.



This data is available under the Community Data License Agreement – Permissive, Version 2.0 (CDLA-Permissive-2.0). This means that a Data Recipient may share the Data, with or without modifications, so long as the Data Recipient makes available the text of this agreement with the shared Data. Please credit Allen Byrd and the OpenSSF Best Practices badge contributors.

Project badge entry owned by: Allen Byrd.
Entry created on 2026-05-02 06:10:52 UTC, last updated on 2026-05-06 20:27:32 UTC. Last achieved passing badge on 2026-05-03 21:20:04 UTC.