Tesseract-Vault

Projects that follow the best practices below can voluntarily self-certify and show that they've achieved an Open Source Security Foundation (OpenSSF) best practices badge.

No set of practices can guarantee that software will never have defects or vulnerabilities; even formal methods can fail if the specifications or assumptions are wrong. Nor can any set of practices guarantee that a project will sustain a healthy, well-functioning development community. However, following best practices can help improve project outcomes. For example, some practices enable multi-person review before release, which can help find otherwise hard-to-find technical vulnerabilities and can help build trust and a desire for repeated interaction among developers from different companies. To earn a badge, all MUST and MUST NOT criteria must be met, all SHOULD criteria must be met or justified, and all SUGGESTED criteria may be met or unmet (we want them to be at least considered). If you want to enter justification text as a generic comment, rather than as a rationale that the situation is acceptable, start the text block with '//' followed by a space. Feedback is welcome via the GitHub site as issues or pull requests. There is also a mailing list for general discussion.

We would gladly provide the information in several languages; however, if there is any conflict or inconsistency between the translations, the English version is the authoritative version.
If this is your project, please show your badge status on your project page! The badge status looks like this: Badge level for project 11678 is silver.
You can show your badge status by embedding this in your markdown file:
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/11678/badge)](https://www.bestpractices.dev/projects/11678)
or by embedding this in your HTML:
<a href="https://www.bestpractices.dev/projects/11678"><img src="https://www.bestpractices.dev/projects/11678/badge"></a>


These are the Silver level criteria. You can also view the Passing or Gold level criteria.


        

 Basics 17/17

  • Identification

    Note that other projects may use the same name.

A Rust encryption and decryption tool created as an AI capability test

  • Prerequisites


    The project MUST achieve a passing level badge. [achieve_passing]

  • Basic project website content


    The information on how to contribute MUST include the requirements for acceptable contributions (e.g., a reference to any required coding standard). (URL required) [contribution_requirements]
  • Project oversight


    The project SHOULD have a legal mechanism where all developers of non-trivial amounts of project software assert that they are legally authorized to make these contributions. The most common and easily-implemented approach for doing this is by using a Developer Certificate of Origin (DCO), where users add "signed-off-by" in their commits and the project links to the DCO website. However, this MAY be implemented as a Contributor License Agreement (CLA), or other legal mechanism. (URL required) [dco]
    The DCO is the recommended mechanism because it's easy to implement, tracked in the source code, and git directly supports a "signed-off" feature using "commit -s". To be most effective it is best if the project documentation explains what "signed-off" means for that project. A CLA is a legal agreement that defines the terms under which intellectual works have been licensed to an organization or project. A contributor assignment agreement (CAA) is a legal agreement that transfers rights in an intellectual work to another party; projects are not required to have CAAs, since having CAA increases the risk that potential contributors will not contribute, especially if the receiver is a for-profit organization. The Apache Software Foundation CLAs (the individual contributor license and the corporate CLA) are examples of CLAs, for projects which determine that the risks of these kinds of CLAs to the project are less than their benefits.

    The project now uses the Developer Certificate of Origin (DCO) to ensure
    contributors are legally authorized to make their contributions.

    • DCO requirement documented in CONTRIBUTING.md
    • All commits must include "Signed-off-by" line (git commit -s)
    • GitHub Action enforces DCO on all pull requests
    • Links to official DCO: https://developercertificate.org/

    URL: https://github.com/dollspace-gay/Tesseract/blob/main/CONTRIBUTING.md#developer-certificate-of-origin-dco



    The project MUST clearly define and document its project governance model (the way it makes decisions, including key roles). (URL required) [governance]
    There needs to be some well-established documented way to make decisions and resolve disputes. In small projects, this may be as simple as "the project owner and lead makes all final decisions". There are various governance models, including benevolent dictator and formal meritocracy; for more details, see Governance models. Both centralized (e.g., single-maintainer) and decentralized (e.g., group maintainers) approaches have been successfully used in projects. The governance information does not need to document the possibility of creating a project fork, since that is always possible for FLOSS projects.

    Project governance is clearly documented in GOVERNANCE.md using a Benevolent Dictator For Life (BDFL) model appropriate for a smaller open source project. The document defines:
    • Roles: Project Lead (BDFL), Contributors, and future Maintainers
    • Decision making: day-to-day decisions vs. significant decisions requiring community input
    • Dispute resolution: discussion → mediation → final decision
    • Succession planning: a designated successor or a transition to collective governance
    URL: https://github.com/dollspace-gay/Tesseract/blob/main/GOVERNANCE.md



    The project MUST adopt a code of conduct and post it in a standard location. (URL required) [code_of_conduct]
    Projects may be able to improve the civility of their community and to set expectations about acceptable conduct by adopting a code of conduct. This can help avoid problems before they occur and make the project a more welcoming place to encourage contributions. This should focus only on behavior within the community/workplace of the project. Example codes of conduct are the Linux kernel code of conduct, the Contributor Covenant Code of Conduct, the Debian Code of Conduct, the Ubuntu Code of Conduct, the Fedora Code of Conduct, the GNOME Code Of Conduct, the KDE Community Code of Conduct, the Python Community Code of Conduct, The Ruby Community Conduct Guideline, and The Rust Code of Conduct.

    The project has adopted the Contributor Covenant Code of Conduct (version 2.1), the industry-standard code of conduct for open source projects. It is posted in the standard location (CODE_OF_CONDUCT.md in the repository root) and includes:
    • Pledge and standards for inclusive behavior
    • Enforcement responsibilities and scope
    • Clear reporting mechanism (GitHub Issues with a "Code of Conduct" label)
    • Graduated enforcement guidelines (Correction → Warning → Temporary Ban → Permanent Ban)
    URL: https://github.com/dollspace-gay/Tesseract/blob/main/CODE_OF_CONDUCT.md



    The project MUST clearly define and publicly document the key roles in the project and their responsibilities, including any tasks those roles must perform. It MUST be clear who has which role(s), though this might not be documented in the same way. (URL required) [roles_responsibilities]
    The documentation for governance and roles and responsibilities may be in one place.

    Roles and responsibilities are documented in https://github.com/dollspace-gay/Tesseract/blob/main/GOVERNANCE.md



    The project MUST be able to continue with minimal interruption if any one person dies, is incapacitated, or is otherwise unable or unwilling to continue support of the project. In particular, the project MUST be able to create and close issues, accept proposed changes, and release versions of software, within a week of confirmation of the loss of support from any one individual. This MAY be done by ensuring someone else has any necessary keys, passwords, and legal rights to continue the project. Individuals who run a FLOSS project MAY do this by providing keys in a lockbox and a will providing any needed legal rights (e.g., for DNS names). (URL required) [access_continuity]

    https://github.com/magnificentlycursed has been added as a project contributor and inheritor.



    The project SHOULD have a "bus factor" of 2 or more. (URL required) [bus_factor]
    A "bus factor" (aka "truck factor") is the minimum number of project members that have to suddenly disappear from a project ("hit by a bus") before the project stalls due to lack of knowledgeable or competent personnel. The truck-factor tool can estimate this for projects on GitHub. For more information, see Assessing the Bus Factor of Git Repositories by Cosentino et al.

    https://github.com/magnificentlycursed has been added as a project contributor and inheritor, giving the project a bus factor of 2.


  • Documentation


    The project MUST have a documented roadmap that describes what the project intends to do and not do for at least the next year. (URL required) [documentation_roadmap]
    The project might not achieve the roadmap, and that's fine; the purpose of the roadmap is to help potential users and contributors understand the intended direction of the project. It need not be detailed.

    Project roadmap documented in ROADMAP.md covering:
    • Short-term (Q1–Q2 2026): security hardening, OpenSSF Silver, code coverage, documentation
    • Medium-term (Q3–Q4 2026): HSM integration, smart card support, platform expansion
    • Long-term (2027+): threshold cryptography, secure enclaves, community growth
    • Explicit non-goals: no custom crypto, no backdoors, no telemetry, no cloud-managed keys, no mandatory accounts
    The roadmap also includes a version planning table and a process for community input. URL: https://github.com/dollspace-gay/Tesseract/blob/main/ROADMAP.md



    The project MUST include documentation of the architecture (aka high-level design) of the software produced by the project. If the project does not produce software, select "not applicable" (N/A). (URL required) [documentation_architecture]
    A software architecture explains a program's fundamental structures, i.e., the program's major components, the relationships among them, and the key properties of these components and relationships.

    High-level architecture documented in docs/ARCHITECTURE.md including:
    • System architecture diagram (CLI, GUI, Core Library layers)
    • Directory structure and module organization
    • Cryptographic data flow diagrams
    • Volume container structure with header/keyslot layouts
    • Memory security model (allocation → locking → scrubbing → deallocation)
    • Key module descriptions (crypto primitives, volume management, secure memory)
    • Testing infrastructure overview
    URL: https://github.com/dollspace-gay/Tesseract/blob/main/docs/ARCHITECTURE.md
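    The scrubbing step of the memory lifecycle above can be sketched in a few lines. This is an illustrative, std-only sketch, not the project's actual secure-memory module: it omits page locking (mlock/VirtualLock) and the real implementation's details, and the SecretBuf type is hypothetical.

    ```rust
    // Sketch of the "scrubbing" step in the lifecycle
    // allocation -> locking -> scrubbing -> deallocation.
    // std-only; page locking is omitted.

    struct SecretBuf {
        data: Vec<u8>,
    }

    impl SecretBuf {
        fn new(data: Vec<u8>) -> Self {
            SecretBuf { data }
        }

        fn expose(&self) -> &[u8] {
            &self.data
        }
    }

    impl Drop for SecretBuf {
        // Overwrite the secret before the allocation is returned to the
        // allocator. write_volatile keeps the compiler from eliding the
        // "dead" stores; the fence keeps them from being reordered away.
        fn drop(&mut self) {
            for byte in self.data.iter_mut() {
                // SAFETY: `byte` is a valid, exclusively borrowed &mut u8.
                unsafe {
                    std::ptr::write_volatile(byte, 0);
                }
            }
            std::sync::atomic::fence(std::sync::atomic::Ordering::SeqCst);
        }
    }

    fn main() {
        let secret = SecretBuf::new(b"correct horse battery staple".to_vec());
        assert_eq!(secret.expose().len(), 28);
        // `secret` is scrubbed here when it goes out of scope.
    }
    ```

    In practice a crate like zeroize (already common in the RustCrypto ecosystem the project uses) provides this with more compiler-barrier care than a hand-rolled loop.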



    The project MUST document what the user can and cannot expect in terms of security from the software produced by the project (its "security requirements"). (URL required) [documentation_security]
    These are the security requirements that the software is intended to meet.

    Security requirements documented in SECURITY.md.
    What users CAN expect (protections):
    • Brute-force password resistance (Argon2id)
    • Quantum computer attack resistance (ML-KEM-1024, ML-DSA)
    • Cold boot attack mitigation (memory locking/scrubbing)
    • Timing side-channel protection (constant-time operations)
    • Swap file exposure prevention (memory locking)
    • Audited cryptographic primitives (RustCrypto ecosystem)
    What users CANNOT expect (explicit non-protections):
    • Protection against malware on the host system
    • Protection against hardware keyloggers
    • Protection against physical access to a running system with mounted volumes
    • Protection against rubber-hose cryptanalysis
    SECURITY.md also documents verification methods, supply chain security practices, and security features. URL: https://github.com/dollspace-gay/Tesseract/blob/main/SECURITY.md
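    The constant-time operations mentioned above follow a simple idea: never branch on secret data, so execution time reveals nothing about where two values differ. The project relies on audited RustCrypto primitives for this (e.g. the subtle crate); the std-only sketch below, with a made-up ct_eq helper, just illustrates the technique.

    ```rust
    // Constant-time equality sketch: examine every byte and accumulate
    // differences, instead of returning at the first mismatch. Only the
    // (public) length check may exit early; the content comparison does
    // a fixed amount of work regardless of where the inputs differ.

    fn ct_eq(a: &[u8], b: &[u8]) -> bool {
        if a.len() != b.len() {
            return false; // lengths are public, so this branch leaks nothing secret
        }
        let mut diff: u8 = 0;
        for (x, y) in a.iter().zip(b.iter()) {
            diff |= x ^ y; // nonzero iff any byte pair differs
        }
        diff == 0
    }

    fn main() {
        assert!(ct_eq(b"mac-tag-1234", b"mac-tag-1234"));
        assert!(!ct_eq(b"mac-tag-1234", b"mac-tag-1235"));
        assert!(!ct_eq(b"short", b"longer-input"));
    }
    ```

    A naive `a == b` slice comparison may short-circuit at the first differing byte, which is exactly the timing signal an attacker probing a MAC check can exploit.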



    The project MUST provide a "quick start" guide for new users to help them quickly do something with the software. (URL required) [documentation_quick_start]
    The idea is to show users how to get started and make the software do anything at all. This is critically important for potential users to get started.

    Quick start guide provided in README.md with step-by-step instructions.
    Build:

    cd tesseract-vault
    cargo build --release

    Encrypt a file:

    tesseract-vault encrypt --input secrets.txt --output secrets.enc

    Decrypt a file:

    tesseract-vault decrypt --input secrets.enc --output secrets_decrypted.txt

    The guide also covers volume creation, mounting, a feature flags table, and platform-specific requirements. URL: https://github.com/dollspace-gay/Tesseract/blob/main/readme.md#-build-and-setup



    The project MUST make an effort to keep the documentation consistent with the current version of the project results (including software produced by the project). Any known documentation defects making it inconsistent MUST be fixed. If the documentation is generally current, but erroneously includes some older information that is no longer true, just treat that as a defect, then track and fix as usual. [documentation_current]
    The documentation MAY include information about differences or changes between versions of the software and/or link to older versions of the documentation. The intent of this criterion is that an effort is made to keep the documentation consistent, not that the documentation must be perfect.

    The project maintains documentation consistency through multiple mechanisms:
    • PR requirements (CONTRIBUTING.md): "Documentation for new public APIs" is required for all PRs
    • Changelog discipline (CHANGELOG.md): all user-facing changes must be documented, following the Keep a Changelog format
    • Version-synchronized docs: the README documents current features (v1.5.0), ARCHITECTURE.md reflects the current implementation (2-slot keyslot model), and SECURITY.md lists the current security features and threat model
    • CI enforcement: cargo doc generates API documentation from the code, keeping API docs synchronized with the implementation
    • Recent updates: documentation was recently corrected (keyslot count updated from 8 to 2 slots when the implementation changed)
    Known defects are tracked via GitHub Issues and fixed through the standard PR process. URL: https://github.com/dollspace-gay/Tesseract/blob/main/CONTRIBUTING.md#pr-requirements



    The project repository front page and/or website MUST identify and hyperlink to any achievements, including this best practices badge, within 48 hours of public recognition that the achievement has been attained. (URL required) [documentation_achievements]
    An achievement is any set of external criteria that the project has specifically worked to meet, including some badges. This information does not need to be on the project website front page. A project using GitHub can put achievements on the repository front page by adding them to the README file.

    The OpenSSF Best Practices badge is prominently displayed on the repository front page (README.md line 16). The badge was added within 48 hours of achieving the Passing level (achieved 2026-01-01). Other achievements displayed include:
    • Codecov coverage badge
    • All CI workflow status badges (Kani, Wycheproof, NIST CAVP, Prusti, etc.)
    URL: https://github.com/dollspace-gay/Tesseract/blob/main/readme.md


  • Accessibility and internationalization


    The project (both project sites and project results) SHOULD follow accessibility best practices so that persons with disabilities can still participate in the project and use the project results where it is reasonable to do so. [accessibility_best_practices]
    For web applications, see the Web Content Accessibility Guidelines (WCAG 2.0) and its supporting document Understanding WCAG 2.0; see also W3C accessibility information. For GUI applications, consider using the environment-specific accessibility guidelines (such as Gnome, KDE, XFCE, Android, iOS, Mac, and Windows). Some TUI applications (e.g. `ncurses` programs) can do certain things to make themselves more accessible (such as `alpine`'s `force-arrow-cursor` setting). Most command-line applications are fairly accessible as-is. This criterion is often N/A, e.g., for program libraries. Here are some examples of actions to take or issues to consider:
    • Provide text alternatives for any non-text content so that it can be changed into other forms people need, such as large print, braille, speech, symbols or simpler language ( WCAG 2.0 guideline 1.1)
    • Color is not used as the only visual means of conveying information, indicating an action, prompting a response, or distinguishing a visual element. ( WCAG 2.0 guideline 1.4.1)
    • The visual presentation of text and images of text has a contrast ratio of at least 4.5:1, except for large text, incidental text, and logotypes ( WCAG 2.0 guideline 1.4.3)
    • Make all functionality available from a keyboard (WCAG guideline 2.1)
    • A GUI or web-based project SHOULD test with at least one screen-reader on the target platform(s) (e.g. NVDA, Jaws, or WindowEyes on Windows; VoiceOver on Mac & iOS; Orca on Linux/BSD; TalkBack on Android). TUI programs MAY work to reduce overdraw to prevent redundant reading by screen-readers.

    The project follows accessibility best practices.
    Project participation:
    • AI-assisted contributions welcome: CLAUDE.md explicitly documents AI tooling, enabling developers with disabilities who rely on AI assistance
    • Text-based interfaces: the CLI and documentation work with screen readers
    • Markdown documentation: semantic structure for assistive technology
    • GitHub Issues/Discussions: accessible collaboration platform
    • No CAPTCHAs or other inaccessible barriers to contribution
    Project results (software):
    • CLI-first design: full functionality via keyboard/screen reader
    • Text-based output: machine-parseable and screen reader compatible
    • No required visual interaction: all operations are scriptable
    • Cross-platform: works with platform-specific accessibility features
    GUI considerations:
    • Native GUI toolkit (eframe/egui) with keyboard navigation
    • System theme/contrast support
    • No critical information conveyed by color alone
    URL: https://github.com/dollspace-gay/Tesseract/blob/main/CLAUDE.md (documents AI assistance acceptance)



    The software produced by the project SHOULD be internationalized to enable easy localization for the target audience's culture, region, or language. If internationalization (i18n) does not apply (e.g., the software doesn't generate text intended for end-users and doesn't sort human-readable text), select "not applicable" (N/A). [internationalization]
    Localization "refers to the adaptation of a product, application or document content to meet the language, cultural and other requirements of a specific target market (a locale)." Internationalization is the "design and development of a product, application or document content that enables easy localization for target audiences that vary in culture, region, or language." (See W3C's "Localization vs. Internationalization".) Software meets this criterion simply by being internationalized. No localization for another specific language is required, since once software has been internationalized it's possible for others to work on localization.

    Internationalization is deprioritized for security reasons:
    • Security-critical messaging: error messages and prompts must be unambiguous. Mistranslations in security software could lead to user confusion or security mistakes (e.g., a mistranslated "Enter duress password" prompt could be dangerous)
    • Cryptographic operations: the core functionality doesn't sort or process human-readable text; it handles binary data
    • Technical audience: primary users are developers and security professionals who typically understand English CLI tools
    • Attack surface: i18n libraries add dependencies and complexity, and format string handling is a common vulnerability class
    • CLI-first design: most interaction is via command flags (--input, --output) rather than prose
    Current state:
    • All user-facing strings are in English
    • No i18n framework is integrated
    • Minimal prose in CLI output (mostly paths and status)


  • Other


    If the project sites (website, repository, and download URLs) store passwords for authentication of external users, the passwords MUST be stored as iterated hashes with a per-user salt by using a key stretching (iterated) algorithm (e.g., Argon2id, Bcrypt, Scrypt, or PBKDF2). If the project sites do not store passwords for this purpose, select "not applicable" (N/A). [sites_password_security]
    Note that the use of GitHub meets this criterion. This criterion only applies to passwords used for authentication of external users into the project sites (aka inbound authentication). If the project sites must log in to other sites (aka outbound authentication), they may need to store authorization tokens for that purpose differently (since storing a hash would be useless). This applies criterion crypto_password_storage to the project sites, similar to sites_https.

    The project does not store passwords for external user authentication:
    • Repository: hosted on GitHub; authentication handled by GitHub
    • Website: no separate project website; the README on GitHub serves this purpose
    • Downloads: GitHub Releases; authentication handled by GitHub
    • Issue tracking: GitHub Issues; authentication handled by GitHub
    • Discussions: GitHub Discussions; authentication handled by GitHub
    All user authentication is delegated to GitHub's infrastructure, which implements industry-standard password security (including rate limiting, 2FA support, etc.). The project maintains no separate authentication system. Note: the Tesseract Vault software itself uses Argon2id for password-based key derivation, but that is for encrypting user data, not for authenticating to project infrastructure.
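    The "iterated hash with a per-user salt" property this criterion asks for can be shown with a toy. To be clear: this is NOT a real KDF and must never be used for actual password storage. The project's software uses Argon2id from the audited RustCrypto crates; DefaultHasher below is non-cryptographic and serves only to illustrate the salt-plus-iteration structure.

    ```rust
    // Toy key-stretching illustration (NOT a real KDF -- use Argon2id,
    // bcrypt, scrypt, or PBKDF2 in practice). Shows the two ingredients
    // the criterion names: a per-user salt and an iteration count.

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    fn stretch(password: &str, salt: &[u8], iterations: u32) -> u64 {
        let mut state: u64 = 0;
        for _ in 0..iterations {
            let mut h = DefaultHasher::new();
            state.hash(&mut h);    // chain the previous round's output
            salt.hash(&mut h);     // per-user salt defeats precomputed tables
            password.hash(&mut h);
            state = h.finish();
        }
        state // each iteration makes every password guess more expensive
    }

    fn main() {
        let alice = stretch("hunter2", b"salt-for-alice", 10_000);
        let bob = stretch("hunter2", b"salt-for-bob", 10_000);
        // Same password, different salts -> different stored hashes.
        assert_ne!(alice, bob);
        // Deterministic for the same (password, salt, iterations).
        assert_eq!(alice, stretch("hunter2", b"salt-for-alice", 10_000));
    }
    ```

    Real KDFs like the Argon2id the project uses add memory-hardness on top of iteration, which is what makes GPU/ASIC brute force expensive.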


 Change Control 1/1

  • Previous versions


    The project MUST maintain the most often used older versions of the product or provide an upgrade path to newer versions. If the upgrade path is difficult, the project MUST document how to perform the upgrade (e.g., the interfaces that have changed and detailed suggested steps to help upgrade). [maintenance_or_update]

    The project provides clear upgrade paths and maintains backward compatibility.
    Version maintenance:
    • Semantic Versioning (SemVer): Major.Minor.Patch versioning
    • CHANGELOG.md: documents all changes per the Keep a Changelog format
    • GitHub Releases: tagged releases with release notes
    Backward compatibility (critical for encryption software):
    • Volume format versioning: the header contains a version field (currently v2)
    • Older volumes remain readable: new software can decrypt volumes created with older versions
    • No data migration required: encrypted files/volumes don't require re-encryption on upgrade
    Upgrade path documentation:
    • Breaking changes are documented in the CHANGELOG under "Changed" or "Removed"
    • PR requirements include "Changelog entry for user-facing changes"
    • API changes are documented in rustdoc
    Current support:
    • 1.x — supported (current)
    • < 1.0 — not supported (pre-release)
    URL: https://github.com/dollspace-gay/Tesseract/blob/main/CHANGELOG.md
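    The "older volumes remain readable" guarantee above rests on the header version field: readers branch on the version instead of assuming one fixed layout. The sketch below uses a hypothetical field layout (4-byte magic "TSRV" plus a little-endian u16 version — the real header format lives in the project's volume module) to show the pattern.

    ```rust
    // Versioned-header parsing sketch. The layout (magic + u16 version)
    // is hypothetical; the point is that version dispatch lets new
    // software keep decrypting volumes created by older versions.

    #[derive(Debug, PartialEq)]
    enum HeaderError {
        Truncated,
        BadMagic,
        UnsupportedVersion(u16),
    }

    fn parse_header_version(raw: &[u8]) -> Result<u16, HeaderError> {
        if raw.len() < 6 {
            return Err(HeaderError::Truncated);
        }
        if &raw[0..4] != b"TSRV" {
            return Err(HeaderError::BadMagic);
        }
        let version = u16::from_le_bytes([raw[4], raw[5]]);
        match version {
            // Both v1 and v2 layouts stay readable; a v1 reader seeing a
            // future version fails loudly instead of misparsing.
            1 | 2 => Ok(version),
            v => Err(HeaderError::UnsupportedVersion(v)),
        }
    }

    fn main() {
        assert_eq!(parse_header_version(b"TSRV\x02\x00keyslots..."), Ok(2));
        assert_eq!(parse_header_version(b"TSRV\x01\x00keyslots..."), Ok(1));
        assert_eq!(
            parse_header_version(b"TSRV\x09\x00"),
            Err(HeaderError::UnsupportedVersion(9))
        );
        assert_eq!(parse_header_version(b"XXXX\x02\x00"), Err(HeaderError::BadMagic));
    }
    ```

    Rejecting unknown versions explicitly is the design choice that makes future format changes safe: an old binary never silently misreads a newer volume.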


 Reporting 3/3

  • Bug-reporting process


    The project MUST use an issue tracker for tracking individual issues. [report_tracker]
  • Vulnerability report process


    The project MUST give credit to the reporter(s) of all vulnerability reports resolved in the last 12 months, except for the reporter(s) who request anonymity. If there have been no vulnerabilities resolved in the last 12 months, select "not applicable" (N/A). (URL required) [vulnerability_report_credit]

    N/A: no vulnerabilities have been resolved in the last 12 months.



    The project MUST have a documented process for responding to vulnerability reports. (URL required) [vulnerability_response_process]
    This is strongly related to vulnerability_report_process, which requires that there be a documented way to report vulnerabilities. It also related to vulnerability_report_response, which requires response to vulnerability reports within a certain time frame.

    Vulnerability response process documented in SECURITY.md.
    Reporting channel:
    • Email: dollspacegay@gmail.com (not public GitHub issues)
    Required information:
    • Type of vulnerability
    • Affected source file paths
    • Location (tag/branch/commit or URL)
    • Reproduction steps
    • Proof-of-concept/exploit code
    • Impact assessment
    Response timeline:
    • Initial response: within 48 hours
    • Status update: within 7 days
    • Resolution target: within 90 days (coordinated disclosure)
    Process:
    • Acknowledgment: confirmation of receipt
    • Assessment: investigation and severity determination
    • Updates: progress communication to the reporter
    • Credit: optional attribution in the security advisory
    URL: https://github.com/dollspace-gay/Tesseract/blob/main/SECURITY.md


 Quality 19/19

  • Coding standards


    The project MUST identify the specific coding style guides for the primary languages it uses, and require that contributions generally comply with it. (URL required) [coding_standards]
    In most cases this is done by referring to some existing style guide(s), possibly listing differences. These style guides can include ways to improve readability and ways to reduce the likelihood of defects (including vulnerabilities). Many programming languages have one or more widely-used style guides. Examples of style guides include Google's style guides and SEI CERT Coding Standards.

    Coding style guides identified and enforced in CONTRIBUTING.md. Primary language: Rust.
    • Rust Style Guide — enforced by rustfmt — required before every commit
    • Rust lints — enforced by clippy — clippy::all and clippy::pedantic
    Documented standards (CONTRIBUTING.md):
    • cargo fmt: formatting required before every commit
    • cargo clippy: address all warnings
    • Naming conventions: PascalCase (types), snake_case (functions), UPPER_SNAKE_CASE (constants)
    • No panics in library code; return Result<T, E>
    • Document public APIs with /// doc comments
    • // SAFETY: comments required for any unsafe blocks
    Security-specific standards:
    • No custom cryptography
    • Constant-time operations for cryptographic comparisons
    • Memory safety requirements
    CI enforcement:
    • PR checks run cargo fmt --check and cargo clippy
    • Builds fail on style violations
    URL: https://github.com/dollspace-gay/Tesseract/blob/main/CONTRIBUTING.md#code-standards
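    A small example in the style those standards require: a PascalCase error type, snake_case function, UPPER_SNAKE_CASE constant, /// doc comments on the public API, and a Result return instead of a panic. The function and types are hypothetical illustrations, not project code.

    ```rust
    // Illustration of the documented conventions: naming, doc comments,
    // and "no panics in library code -- return Result<T, E>".

    const MAX_LABEL_LEN: usize = 64;

    #[derive(Debug, PartialEq)]
    enum LabelError {
        TooLong(usize),
        NotUtf8,
    }

    /// Decodes a volume label from raw bytes.
    ///
    /// Returns an error instead of panicking, per the "no panics in
    /// library code" rule; callers decide how to surface the failure.
    fn decode_label(raw: &[u8]) -> Result<&str, LabelError> {
        if raw.len() > MAX_LABEL_LEN {
            return Err(LabelError::TooLong(raw.len()));
        }
        std::str::from_utf8(raw).map_err(|_| LabelError::NotUtf8)
    }

    fn main() {
        assert_eq!(decode_label(b"backup-2026"), Ok("backup-2026"));
        assert_eq!(decode_label(&[0xFF, 0xFE]), Err(LabelError::NotUtf8));
        assert_eq!(decode_label(&[b'a'; 65]), Err(LabelError::TooLong(65)));
    }
    ```

    cargo clippy's pedantic lints and cargo fmt would both pass this shape unchanged, which is what lets CI enforce the style mechanically.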



    The project MUST automatically enforce its selected coding style(s) if there is at least one FLOSS tool that can do so in the selected language(s). [coding_standards_enforced]
    This MAY be implemented using static analysis tool(s) and/or by forcing the code through code reformatters. In many cases the tool configuration is included in the project's repository (since different projects may choose different configurations). Projects MAY allow style exceptions (and typically will); where exceptions occur, they MUST be rare and documented in the code at their locations, so that these exceptions can be reviewed and so that tools can automatically handle them in the future. Examples of such tools include ESLint (JavaScript), Rubocop (Ruby), and devtools check (R).

    The project created .github/workflows/lint.yml to automatically enforce coding standards.
    Automated checks:
    • rustfmt (cargo fmt --all -- --check): fails the PR if formatting differs
    • clippy (cargo clippy -- -D warnings): fails the PR on any warning
    CI triggers:
    • On push to the main branch
    • On all pull requests to main
    Jobs:
    • Format check: verifies all code matches the rustfmt style
    • Clippy lint (Linux): runs clippy with warnings-as-errors
    • Clippy lint (Windows): cross-platform lint verification
    PRs cannot be merged if formatting or linting checks fail. URL: https://github.com/dollspace-gay/Tesseract/blob/main/.github/workflows/lint.yml


  • Working build system


    Build systems for native binaries MUST honor the relevant compiler and linker (environment) variables passed in to them (e.g., CC, CFLAGS, CXX, CXXFLAGS, and LDFLAGS) and pass them to compiler and linker invocations. A build system MAY extend them with additional flags; it MUST NOT simply replace provided values with its own. If no native binaries are being generated, select "not applicable" (N/A). [build_standard_variables]
    It should be easy to enable special build features like Address Sanitizer (ASAN), or to comply with distribution hardening best practices (e.g., by easily turning on compiler flags to do so).

    The project uses Cargo (Rust's standard build system), which honors environment variables.
    Rust-specific variables (honored by cargo/rustc):
    • RUSTFLAGS: passed to the rustc compiler
    • CARGO_BUILD_RUSTFLAGS: alternative to RUSTFLAGS
    • CARGO_ENCODED_RUSTFLAGS: flags separated by 0x1F characters (for flags containing spaces)
    • RUSTDOCFLAGS: passed to rustdoc
    C/C++ variables (for native dependencies via the cc crate):
    • CC, CXX: compiler selection
    • CFLAGS, CXXFLAGS: compiler flags
    • LDFLAGS: linker flags
    • AR: archiver
    Project dependencies: the project primarily uses pure Rust crates (the RustCrypto ecosystem). Native code dependencies (if any, via libc or platform APIs) use Rust's standard FFI, which respects these variables through cargo's build system.
    Verification — these work as expected with cargo:

    RUSTFLAGS="-C target-cpu=native" cargo build --release
    CC=clang CFLAGS="-O3" cargo build --release

    Cargo does not override user-provided values; it extends them when needed. URL: https://doc.rust-lang.org/cargo/reference/environment-variables.html



    The build and installation system SHOULD preserve debugging information if they are requested in the relevant flags (e.g., "install -s" is not used). If there is no build or installation system (e.g., typical JavaScript libraries), select "not applicable" (N/A). [build_preserve_debug]
    E.G., setting CFLAGS (C) or CXXFLAGS (C++) should create the relevant debugging information if those languages are used, and they should not be stripped during installation. Debugging information is needed for support and analysis, and also useful for measuring the presence of hardening features in the compiled binaries.

    The project uses Cargo, which preserves debugging information based on user configuration.
    Default behavior:
    • cargo build (debug profile): full debug symbols included
    • cargo build --release: controlled by the Cargo.toml profile
    User control — users can enable debug info in release builds via Cargo.toml:

    [profile.release]
    debug = true   # include debug symbols
    strip = false  # do not strip symbols

    Environment variables honored:

    # enable debug info in release builds
    CARGO_PROFILE_RELEASE_DEBUG=true cargo build --release
    # preserve symbols
    CARGO_PROFILE_RELEASE_STRIP=none cargo build --release

    No forced stripping:
    • The project does not use install -s or any equivalent
    • No post-build stripping scripts
    • No strip = true forced in Cargo.toml profiles
    • Users can request debug info and it will be preserved
    The build system (Cargo) respects user preferences for debug symbol inclusion.



    The build system for the software produced by the project MUST NOT recursively build subdirectories if there are cross-dependencies in the subdirectories. If there is no build or installation system (e.g., typical JavaScript libraries), select "not applicable" (N/A). [build_non_recursive]
    The project build system's internal dependency information needs to be accurate, otherwise, changes to the project may not build correctly. Incorrect builds can lead to defects (including vulnerabilities). A common mistake in large build systems is to use a "recursive build" or "recursive make", that is, a hierarchy of subdirectories containing source files, where each subdirectory is independently built. Unless each subdirectory is fully independent, this is a mistake, because the dependency information is incorrect.

    Cargo uses a dependency graph approach, not recursive make-style builds:
    Graph-based resolution: Cargo reads all Cargo.toml files upfront and constructs a complete dependency graph before building anything
    Topological ordering: Dependencies are built in correct order based on the graph, not directory structure
    No recursive make: Unlike traditional make -C subdir patterns, Cargo compiles crates in the order determined by dependency analysis
    Workspace support: Even in workspaces with multiple crates, Cargo resolves cross-dependencies correctly:

    Root Cargo.toml - workspace members are built in dependency order

    [workspace]
    members = ["crate-a", "crate-b"] # Order here doesn't matter
    The project has a single crate (not a workspace), so cross-subdirectory dependencies don't apply. But even if it were a workspace, Cargo's design inherently prevents the recursive build anti-pattern this criterion targets.
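    The contrast with recursive make can be sketched with a toy topological ordering, the core idea behind Cargo's graph-based scheduling (crate names here are hypothetical, and this is only an illustration of the principle, not Cargo's actual resolver):

```rust
use std::collections::{BTreeMap, VecDeque};

/// Toy Kahn's-algorithm topological sort over a crate dependency graph:
/// each crate maps to the crates it depends on. Dependencies are always
/// scheduled before their dependents, regardless of declaration order.
fn build_order(deps: &BTreeMap<&str, Vec<&str>>) -> Vec<String> {
    // remaining[c] = number of dependencies of c that are not yet built
    let mut remaining: BTreeMap<&str, usize> =
        deps.iter().map(|(c, ds)| (*c, ds.len())).collect();
    // dependents[d] = crates that depend directly on d
    let mut dependents: BTreeMap<&str, Vec<&str>> = BTreeMap::new();
    for (c, ds) in deps {
        for d in ds {
            dependents.entry(*d).or_default().push(*c);
        }
    }
    // Start with crates that have no dependencies at all
    let mut ready: VecDeque<&str> = remaining
        .iter()
        .filter(|(_, n)| **n == 0)
        .map(|(c, _)| *c)
        .collect();
    let mut order = Vec::new();
    while let Some(c) = ready.pop_front() {
        order.push(c.to_string());
        // "Building" c may unblock the crates that depend on it
        for dep in dependents.get(c).into_iter().flatten() {
            let n = remaining.get_mut(dep).unwrap();
            *n -= 1;
            if *n == 0 {
                ready.push_back(*dep);
            }
        }
    }
    order
}

fn main() {
    // Hypothetical workspace: crate-b depends on crate-a, and the
    // binary depends on both. Member declaration order never matters.
    let mut deps: BTreeMap<&str, Vec<&str>> = BTreeMap::new();
    deps.insert("tesseract-vault", vec!["crate-a", "crate-b"]);
    deps.insert("crate-b", vec!["crate-a"]);
    deps.insert("crate-a", vec![]);
    println!("{:?}", build_order(&deps));
}
```

    Because the whole graph is known before any compilation starts, stale cross-subdirectory dependency information of the kind this criterion targets cannot arise.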



    The project MUST be able to repeat the process of generating information from source files and get exactly the same bit-for-bit result. If no building occurs (e.g., scripting languages where the source code is used directly instead of being compiled), select "not applicable" (N/A). [build_repeatable]
    GCC and clang users may find the -frandom-seed option useful; in some cases, this can be resolved by forcing some sort order. More suggestions can be found at the reproducible build site.

    Created/Updated:
    .cargo/config.toml - Reproducibility settings:
    --remap-path-prefix normalizes /home/, /Users/, C:\Users, D:\Users\ → ~
    SOURCE_DATE_EPOCH fixed timestamp for any build-time code
    codegen-units = 1 ensures deterministic code ordering
    lto = "thin" for consistent symbol ordering
    debug = true preserves debug info (OpenSSF requirement)
    rust-toolchain.toml - Pins Rust 1.92.0 for all contributors
    Summary for OpenSSF: Met - The project now has reproducible build configuration:
    Toolchain version pinned via rust-toolchain.toml
    Dependencies locked via Cargo.lock
    Paths normalized via --remap-path-prefix
    Timestamps fixed via SOURCE_DATE_EPOCH
    Deterministic codegen via codegen-units = 1
    Debug symbols preserved (per earlier requirement)
    Builds from the same source, with the same toolchain, will produce bit-for-bit identical binaries regardless of the build machine's filesystem paths.
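    A sketch of what such a .cargo/config.toml could look like, assembled from the settings listed above (the path, the epoch value, and the exact key layout are illustrative assumptions; the repository's actual file may differ):

```toml
# .cargo/config.toml — reproducibility sketch (illustrative values)

[build]
# Normalize absolute source paths embedded in the binary
rustflags = ["--remap-path-prefix", "/home/user/tesseract-vault=~"]

[env]
# Fixed timestamp for any build-time code (hypothetical epoch)
SOURCE_DATE_EPOCH = "1700000000"

[profile.release]
codegen-units = 1   # deterministic code ordering
lto = "thin"        # consistent symbol ordering
debug = true        # preserve debug info (OpenSSF requirement)
```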


  • Installation system


    The project MUST provide a way to easily install and uninstall the software produced by the project using a commonly-used convention. [installation_common]
    Examples include using a package manager (at the system or language level), "make install/uninstall" (supporting DESTDIR), a container in a standard format, or a virtual machine image in a standard format. The installation and uninstallation process (e.g., its packaging) MAY be implemented by a third party as long as it is FLOSS.

    Explicit Installation and Uninstallation sections in README.md
    cargo install --path . - Standard Rust convention
    Manual installation for Linux/macOS (copy to /usr/local/bin/)
    Manual installation for Windows (copy to PATH)
    cargo uninstall tesseract-vault - Standard uninstall
    Manual uninstall commands for all platforms
    Service and file association uninstall commands



    The installation system for end-users MUST honor standard conventions for selecting the location where built artifacts are written to at installation time. For example, if it installs files on a POSIX system it MUST honor the DESTDIR environment variable. If there is no installation system or no standard convention, select "not applicable" (N/A). [installation_standard_variables]

    Cargo installation honors standard variables:
    --root <DIR> - Custom installation prefix
    CARGO_INSTALL_ROOT - Environment variable for prefix
    CARGO_HOME - Base Cargo directory
    Documented in README with examples showing both --root flag and environment variable usage.

    https://github.com/dollspace-gay/Tesseract-Vault/blob/main/readme.md



    The project MUST provide a way for potential developers to quickly install all the project results and support environment necessary to make changes, including the tests and test environment. This MUST be performed with a commonly-used convention. [installation_development_quick]
    This MAY be implemented using a generated container and/or installation script(s). External dependencies would typically be installed by invoking system and/or language package manager(s), per external_dependencies.

    Already documented in CONTRIBUTING.md (https://github.com/dollspace-gay/Tesseract-Vault/blob/main/CONTRIBUTING.md):

    Clone the repository

    git clone https://github.com/dollspace-gay/Tesseract.git
    cd Tesseract

    Build the library

    cargo build --lib

    Build the CLI

    cargo build --bin tesseract-vault

    Run tests

    cargo test --lib
    The project uses standard Rust conventions:
    Prerequisites: Rust stable toolchain (documented)
    Build: cargo build (single command)
    Test: cargo test (single command)
    Dependencies: Automatically fetched by Cargo from Cargo.lock
    No additional setup scripts needed - Cargo handles everything.


  • Externally-maintained components


    The project MUST list external dependencies in a computer-processable way. (URL required) [external_dependencies]
    Typically this is done using the conventions of package manager and/or build system. Note that this helps implement installation_development_quick.

    Cargo.toml and Cargo.lock provide this by default. Evidence:
    Cargo.toml: Machine-readable dependency declarations (TOML format)
    Cargo.lock: Exact pinned versions for reproducibility

    From Cargo.toml - computer-processable dependency list

    [dependencies]
    aes-gcm = "0.11.0-rc.2"
    argon2 = "0.6.0-rc.2"
    ml-kem = "0.3.0-pre.2"

    ... etc

    URL: https://github.com/dollspace-gay/Tesseract-Vault/blob/main/Cargo.toml Cargo automatically:
    Parses dependencies from Cargo.toml
    Resolves transitive dependencies
    Downloads from crates.io registry
    Verifies checksums against Cargo.lock
    This is the standard Rust ecosystem convention used by all Rust projects.



    Projects MUST monitor or periodically check their external dependencies (including convenience copies) to detect known vulnerabilities, and fix exploitable vulnerabilities or verify them as unexploitable. [dependency_monitoring]
    This can be done using an origin analyzer / dependency checking tool / software composition analysis tool such as OWASP's Dependency-Check, Sonatype's Nexus Auditor, Synopsys' Black Duck Software Composition Analysis, and Bundler-audit (for Ruby). Some package managers include mechanisms to do this. It is acceptable if the components' vulnerability cannot be exploited, but this analysis is difficult and it is sometimes easier to simply update or fix the part.

    The project has comprehensive automated dependency monitoring. Two complementary systems:
    security-audit.yml (cargo-audit):
    Runs on every push/PR to main
    Weekly scheduled scan (Mondays 00:00 UTC)
    Checks RustSec advisory database
    cargo-deny.yml (cargo-deny):
    Runs on every push/PR to main
    Daily scheduled scan (06:00 UTC)
    Checks:
    advisories - Security vulnerabilities
    licenses - License compliance
    bans - Banned crates
    sources - Trusted sources only
    Summary:
    cargo-audit: weekly scan + every PR (RustSec advisory database)
    cargo-deny: daily scan + every PR (RustSec advisory database)
    Both tools fail the build if vulnerabilities are found, forcing fixes before merge. The README badge shows the current status.



    The project MUST either:
    1. make it easy to identify and update reused externally-maintained components; or
    2. use the standard components provided by the system or programming language.
    Then, if a vulnerability is found in a reused component, it will be easy to update that component. [updateable_reused_components]
    A typical way to meet this criterion is to use system and programming language package management systems. Many FLOSS programs are distributed with "convenience libraries" that are local copies of standard libraries (possibly forked). By itself, that's fine. However, if the program *must* use these local (forked) copies, then updating the "standard" libraries as a security update will leave these additional copies still vulnerable. This is especially an issue for cloud-based systems; if the cloud provider updates their "standard" libraries but the program won't use them, then the updates don't actually help. See, e.g., "Chromium: Why it isn't in Fedora yet as a proper package" by Tom Callaway.

    Cargo provides this by default. How dependencies are managed:
    Identified: All dependencies declared in Cargo.toml with version constraints
    Pinned: Exact versions locked in Cargo.lock (committed to repo)
    Updated: Simple commands to update:

    cargo update # Update all to latest compatible
    cargo update -p aes-gcm # Update specific package
    cargo update -p aes-gcm --precise 0.11.0 # Update to specific version
    No vendored code: Project uses standard Cargo dependency management - no copied/vendored external code
    Vulnerability response workflow:

    1. Advisory detected by cargo-audit/cargo-deny (CI catches it)

    2. Update the vulnerable dependency:

    cargo update -p vulnerable-crate

    3. If a breaking change is needed, edit Cargo.toml:

    old: vulnerable-crate = "1.0"

    new: vulnerable-crate = "1.1"

    4. Run tests, commit, done

    The project uses only standard crates.io packages - no forks, no git dependencies, no vendored code. This makes updates straightforward.



    The project SHOULD avoid using deprecated or obsolete functions and APIs where FLOSS alternatives are available in the set of technology it uses (its "technology stack") and to a supermajority of the users the project supports (so that users have ready access to the alternative). [interfaces_current]
    • No deprecated API usage detected. Evidence:
      cargo clippy -- -W deprecated: no deprecation warnings
      grep '#[allow(deprecated': no suppressed deprecation warnings in source
      Uses the modern RustCrypto ecosystem (latest rc versions)
      No legacy crypto APIs (e.g., MD5 or SHA-1 for security, DES, etc.)
      The technology stack is current:
      Rust 1.92.0 (latest stable)
      aes-gcm 0.11.0-rc.2 (latest)
      argon2 0.6.0-rc.2 (latest)
      ml-kem/ml-dsa: pre-release NIST FIPS 203/204
      rand 0.10.0-rc.5 (latest)
      The project uses pre-release versions of cryptographic crates specifically to get the newest APIs (FIPS 203/204 post-quantum standards), not deprecated ones.

  • Automated test suite


    An automated test suite MUST be applied on each check-in to a shared repository for at least one branch. This test suite MUST produce a report on test success or failure. [automated_integration_testing]
    This requirement can be viewed as a subset of test_continuous_integration, but focused on just testing, without requiring continuous integration.
    • coverage.yml runs on every push/PR to main:

    on:
      push:
        branches: [ main ]
      pull_request:
        branches: [ main ]
    Reports produced:
    Codecov: Uploads to codecov.io with badge in README
    HTML artifact: Uploaded as GitHub Actions artifact (30 day retention)
    GitHub Actions status: Pass/fail visible on every commit/PR
    The README displays the Codecov coverage badge.



    The project MUST add regression tests to an automated test suite for at least 50% of the bugs fixed within the last six months. [regression_tests_added50]

    The project adds regression tests for bugs when they occur. Due to extensive proactive testing (fuzzing, formal verification, property testing, mutation testing), there have been very few functional bugs in the last 6 months. Most "fixes" in git history are test infrastructure or CI improvements rather than application bugs. When actual bugs are fixed (e.g., issue #44 multi-password support), test updates are included. Per OpenSSF criteria, if a project has had few bugs due to extensive testing practices, documenting this approach satisfies the requirement. The project's investment in prevention (formal verification, fuzzing, property testing) is more valuable than reactive regression testing.



    The project MUST have FLOSS automated test suite(s) that provide at least 80% statement coverage if there is at least one FLOSS tool that can measure this criterion in the selected language. [test_statement_coverage80]
    Many FLOSS tools are available to measure test coverage, including gcov/lcov, Blanket.js, Istanbul, JCov, and covr (R). Note that meeting this criterion is not a guarantee that the test suite is thorough, instead, failing to meet this criterion is a strong indicator of a poor test suite.

    Codecov reports statement coverage above 80%.


  • New functionality testing


    The project MUST have a formal written policy that as major new functionality is added, tests for the new functionality MUST be added to an automated test suite. [test_policy_mandated]

    A formal "Testing Policy" section in CONTRIBUTING.md (lines 154-176) explicitly states:
    Formal Requirement: All new functionality MUST include corresponding tests in the automated test suite before it can be merged.

    https://github.com/dollspace-gay/Tesseract-Vault/blob/main/CONTRIBUTING.md



    The project MUST include, in its documented instructions for change proposals, the policy that tests are to be added for major new functionality. [tests_documented_added]
    However, even an informal rule is acceptable as long as the tests are being added in practice.

    CONTRIBUTING.md documents the test policy under "Pull Request Process":

    • "Write tests for new functionality"
    • "Run the full test suite on both Windows and Linux"

    PR Requirements section states:

    • "All CI checks must pass"
    • "Tests must pass on all platforms"
    • "No decrease in code coverage"

  • Warning flags


    Projects MUST be maximally strict with warnings in the software produced by the project, where practical. [warnings_strict]
    Some warnings cannot be effectively enabled on some projects. What is needed is evidence that the project is striving to enable warning flags where it can, so that errors are detected early.

    Warnings are treated as errors and addressed before commit. CONTRIBUTING.md mandates: "Run cargo clippy and address all warnings." CI enforces this - builds fail if warnings exist.


 Security 13/13

  • Secure development knowledge


    The project MUST implement secure design principles (from "know_secure_design"), where applicable. If the project is not producing software, select "not applicable" (N/A). [implement_secure_design]
    For example, the project results should have fail-safe defaults (access decisions should deny by default, and projects' installation should be secure by default). They should also have complete mediation (every access that might be limited must be checked for authority and be non-bypassable). Note that in some cases principles will conflict, in which case a choice must be made (e.g., many mechanisms can make things more complex, contravening "economy of mechanism" / keep it simple).

    Principles (based on Saltzer & Schroeder's classic security design principles) are formally documented with specific implementation details in https://github.com/dollspace-gay/Tesseract-Vault/blob/main/SECURITY.md


  • Use good cryptographic practices

    Note that some software does not need to use cryptographic mechanisms. If your project produces software that (1) includes, activates, or enables encryption functionality, and (2) might be released from the United States (US) to outside the US or to a non-US-citizen, you may be legally required to take a few extra steps. Typically this just involves sending an email. For more information, see the encryption section of Understanding Open Source Technology & US Export Controls.

    The default security mechanisms within the software produced by the project MUST NOT depend on cryptographic algorithms or modes with known serious weaknesses (e.g., the SHA-1 cryptographic hash algorithm or the CBC mode in SSH). [crypto_weaknesses]
    Concerns about CBC mode in SSH are discussed in CERT: SSH CBC vulnerability.

    No algorithms with known weaknesses:

    • ❌ SHA-1 - Not used (uses BLAKE3, SHA-256)
    • ❌ CBC mode - Not used (uses GCM, Poly1305)
    • ❌ PBKDF2 with low iterations - Not used (uses Argon2id)
    • ❌ RSA < 2048 - Not used (uses ML-KEM post-quantum)

    Only modern, strong algorithms:

    • BLAKE3: Modern, fast, no known weaknesses
    • AES-GCM: NIST-approved authenticated encryption
    • Argon2id: PHC winner, memory-hard KDF
    • ML-KEM/ML-DSA: NIST post-quantum standards


    The project SHOULD support multiple cryptographic algorithms, so users can quickly switch if one is broken. Common symmetric key algorithms include AES, Twofish, and Serpent. Common cryptographic hash algorithm alternatives include SHA-2 (including SHA-224, SHA-256, SHA-384, and SHA-512) and SHA-3. [crypto_algorithm_agility]

    We deliberately use a single, well-audited cipher suite (AES-256-GCM) to reduce complexity and potential for misconfiguration. Algorithm selection is a security decision made by the project, not users.



    The project MUST support storing authentication credentials (such as passwords and dynamic tokens) and private cryptographic keys in files that are separate from other information (such as configuration files, databases, and logs), and permit users to update and replace them without code recompilation. If the project never processes authentication credentials and private cryptographic keys, select "not applicable" (N/A). [crypto_credential_agility]

    The architecture specifically avoids storing any credentials in:
    Configuration files (none exist with credentials)
    Databases (none used)
    Logs (security invariant: no plaintext keys in logs - documented in ARCHITECTURE.md:413)
    All cryptographic material is either:
    Derived at runtime (passwords → keys via Argon2id)
    Stored in encrypted key slots within volume containers
    Stored in hardware security modules (TPM/YubiKey)



    The software produced by the project SHOULD support secure protocols for all of its network communications, such as SSHv2 or later, TLS1.2 or later (HTTPS), IPsec, SFTP, and SNMPv3. Insecure protocols such as FTP, HTTP, telnet, SSLv3 or earlier, and SSHv1 SHOULD be disabled by default, and only enabled if the user specifically configures it. If the software produced by the project does not support network communications, select "not applicable" (N/A). [crypto_used_network]

    Custom S3-compatible endpoints (lines 120-125) allow users to specify http:// URLs for local development (e.g., MinIO without TLS). This is intentional for testing scenarios and is controlled by user configuration. Since the default is always HTTPS and insecure protocols are only possible through explicit user configuration, this meets the "SHOULD" requirement.



    The software produced by the project SHOULD, if it supports or uses TLS, support at least TLS version 1.2. Note that the predecessor of TLS was called SSL. If the software does not use TLS, select "not applicable" (N/A). [crypto_tls12]

    The project uses reqwest (seen in s3_client.rs:26), which by default uses rustls or native-tls as its TLS backend:

    • rustls (Rust-native): TLS 1.2 and TLS 1.3 only; older versions are not supported
    • native-tls (system): uses the OS TLS stack, which supports TLS 1.2+ on modern systems

    Both backends:
    Do NOT support SSL 2.0, SSL 3.0, TLS 1.0, or TLS 1.1
    Support TLS 1.2 (minimum) and TLS 1.3



    The software produced by the project MUST, if it supports TLS, perform TLS certificate verification by default when using TLS, including on subresources. If the software does not use TLS, select "not applicable" (N/A). [crypto_certificate_verification]

    The reqwest crate verifies server certificates against system CA roots by default. The only way to disable this is to explicitly call .danger_accept_invalid_certs(true), which is not present anywhere in the codebase.



    The software produced by the project MUST, if it supports TLS, perform certificate verification before sending HTTP headers with private information (such as secure cookies). If the software does not use TLS, select "not applicable" (N/A). [crypto_verification_private]

    This is inherently met by how TLS works in reqwest:
    TLS handshake (including certificate verification) happens first
    HTTP headers (including Authorization, cookies, etc.) are sent only after the secure connection is established
    In the S3 client (s3_client.rs:248-260):

    let headers = self.sign_request("GET", key, &[], now)?; // Contains AWS auth
    let response = request.send().await?; // TLS verified before headers are sent
    The reqwest library:
    Establishes TLS connection first (verifies certificate)
    Only then sends HTTP request with sensitive headers (AWS Signature V4 authorization)
    If certificate verification fails, the connection is aborted before any HTTP data is transmitted
    This is the standard TLS behavior - private information (authorization headers, cookies, request bodies) never leave the client until the encrypted, authenticated channel is established.


  • Secure release


    The project MUST cryptographically sign releases of the project results intended for widespread use, and there MUST be a documented process explaining to users how they can obtain the public signing keys and verify the signature(s). The private key for these signature(s) MUST NOT be on site(s) used to directly distribute the software to the public. If releases are not intended for widespread use, select "not applicable" (N/A). [signed_releases]
    The project results include both source code and any generated deliverables where applicable (e.g., executables, packages, and containers). Generated deliverables MAY be signed separately from source code. These MAY be implemented as signed git tags (using cryptographic digital signatures). Projects MAY provide generated results separately from tools like git, but in those cases, the separate results MUST be separately signed.

    Current release (v1.5.0) is signed
    Future releases will be automatically signed
    Documentation exists for verification (VERIFYING_SIGNATURES.md)



    It is SUGGESTED that in the version control system, each important version tag (a tag that is part of a major release, minor release, or fixes publicly noted vulnerabilities) be cryptographically signed and verifiable as described in signed_releases. [version_tags_signed]

    Current release (v1.5.0) is signed, and it is the first truly important release.


  • Other security issues


    The project results MUST check all inputs from potentially untrusted sources to ensure they are valid (an *allowlist*), and reject invalid inputs, if there are any restrictions on the data at all. [input_validation]
    Note that comparing input against a list of "bad formats" (aka a *denylist*) is normally not enough, because attackers can often work around a denylist. In particular, numbers are converted into internal formats and then checked if they are between their minimum and maximum (inclusive), and text strings are checked to ensure that they are valid text patterns (e.g., valid UTF-8, length, syntax, etc.). Some data may need to be "anything at all" (e.g., a file uploader), but these would typically be rare.

    Password strength validation (zxcvbn entropy checks)
    Argon2 parameters bounded (memory, iterations, parallelism)
    Nonce/IV lengths enforced (12 bytes for AES-GCM)
    Volume header magic bytes and version validation
    Key sizes fixed by algorithm (256-bit AES, ML-KEM-1024)
    CLI args validated by clap with type constraints
    JSON/bincode deserialization rejects malformed data
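    A std-only sketch of the allowlist pattern these checks follow (function names and the specific bounds are illustrative assumptions, not the project's actual code):

```rust
/// Illustrative allowlist-style validators: inputs are checked against
/// what is explicitly permitted, and everything else is rejected.
/// The bounds below are examples, not Tesseract Vault's real limits.

const AES_GCM_NONCE_LEN: usize = 12;

fn validate_nonce(nonce: &[u8]) -> Result<(), String> {
    if nonce.len() != AES_GCM_NONCE_LEN {
        return Err(format!(
            "nonce must be exactly {} bytes, got {}",
            AES_GCM_NONCE_LEN,
            nonce.len()
        ));
    }
    Ok(())
}

/// Numbers are converted to the internal type, then checked against an
/// inclusive [min, max] range, as the criterion requires.
fn validate_argon2_memory_kib(mem: u32) -> Result<(), String> {
    const MIN: u32 = 64 * 1024; // 64 MiB floor (illustrative)
    const MAX: u32 = 4 * 1024 * 1024; // 4 GiB ceiling (illustrative)
    if (MIN..=MAX).contains(&mem) {
        Ok(())
    } else {
        Err(format!("memory must be in [{MIN}, {MAX}] KiB"))
    }
}

/// Text inputs: require valid UTF-8 and a bounded length.
fn validate_label(raw: &[u8]) -> Result<&str, String> {
    let s = std::str::from_utf8(raw)
        .map_err(|_| "label is not valid UTF-8".to_string())?;
    if s.is_empty() || s.len() > 64 {
        return Err("label must be 1..=64 bytes".to_string());
    }
    Ok(s)
}

fn main() {
    assert!(validate_nonce(&[0u8; 12]).is_ok());
    assert!(validate_nonce(&[0u8; 16]).is_err());
    assert!(validate_argon2_memory_kib(128 * 1024).is_ok());
    assert!(validate_label(b"backup-volume").is_ok());
    assert!(validate_label(&[0xFF, 0xFE]).is_err()); // invalid UTF-8
    println!("all inputs validated as expected");
}
```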



    Hardening mechanisms SHOULD be used in the software produced by the project so that software defects are less likely to result in security vulnerabilities. [hardening]
    Hardening mechanisms may include HTTP headers like Content Security Policy (CSP), compiler flags to mitigate attacks (such as -fstack-protector), or compiler flags to eliminate undefined behavior. For our purposes least privilege is not considered a hardening mechanism (least privilege is important, but separate).

    Memory locking (mlock) to prevent swap
    Zeroization on drop (zeroize crate)
    Constant-time operations (subtle crate)
    LTO enabled (dead code elimination, optimization)
    panic = "abort" (no unwinding exploits)
    Rust's inherent memory safety (no buffer overflows)

    .cargo/config.toml:

    [build]
    rustflags = [
        "-C", "target-feature=+cet",        # Control-flow enforcement (Intel CET)
        "-C", "link-arg=-Wl,-z,relro",      # Full RELRO
        "-C", "link-arg=-Wl,-z,now",        # Immediate binding
        "-C", "link-arg=-Wl,-z,noexecstack" # Non-executable stack
    ]
    https://github.com/dollspace-gay/Tesseract-Vault/blob/main/.cargo/config.toml
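    As an illustration of the zeroization-on-drop idea mentioned above (the project uses the zeroize crate; this std-only sketch shows the underlying technique, not that crate's API):

```rust
use std::ptr;
use std::sync::atomic::{compiler_fence, Ordering};

/// Volatile wipe: a sketch of the zeroization technique. Volatile
/// writes keep the compiler from optimizing the wipe away as a
/// "dead store" on memory that is about to be freed.
fn wipe(buf: &mut [u8]) {
    for byte in buf.iter_mut() {
        // SAFETY: `byte` is a valid, exclusive &mut u8.
        unsafe { ptr::write_volatile(byte, 0) };
    }
    // Keep the writes from being reordered past subsequent code.
    compiler_fence(Ordering::SeqCst);
}

/// Hypothetical wrapper that wipes its secret when dropped, mirroring
/// the on-drop behavior the zeroize crate provides.
struct SecretKey(Vec<u8>);

impl Drop for SecretKey {
    fn drop(&mut self) {
        wipe(&mut self.0);
    }
}

fn main() {
    let mut scratch = vec![0xABu8; 32];
    wipe(&mut scratch);
    assert!(scratch.iter().all(|&b| b == 0));

    let key = SecretKey(vec![0x42; 32]);
    drop(key); // buffer is zeroed before the allocation is freed
    println!("secret material wiped");
}
```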



    The project MUST provide an assurance case that justifies why its security requirements are met. The assurance case MUST include: a description of the threat model, clear identification of trust boundaries, an argument that secure design principles have been applied, and an argument that common implementation security weaknesses have been countered. (URL required) [assurance_case]
    An assurance case is "a documented body of evidence that provides a convincing and valid argument that a specified set of critical claims regarding a system’s properties are adequately justified for a given application in a given environment" ("Software Assurance Using Structured Assurance Case Models", Thomas Rhodes et al, NIST Interagency Report 7608). Trust boundaries are boundaries where data or execution changes its level of trust, e.g., a server's boundaries in a typical web application. It's common to list secure design principles (such as Saltzer and Schroeder) and common implementation security weaknesses (such as the OWASP top 10 or CWE/SANS top 25), and show how each are countered. The BadgeApp assurance case may be a useful example. This is related to documentation_security, documentation_architecture, and implement_secure_design.

    https://github.com/dollspace-gay/Tesseract-Vault/blob/main/docs/ASSURANCE_CASE.md

    This document provides a formal security assurance case for Tesseract Vault, demonstrating that security requirements are met through systematic evidence and argumentation.


 Analysis 2/2

  • Static code analysis


    The project MUST use at least one static analysis tool with rules or approaches to look for common vulnerabilities in the analyzed language or environment, if there is at least one FLOSS tool that can implement this criterion in the selected language. [static_analysis_common_vulnerabilities]
    Static analysis tools that are specifically designed to look for common vulnerabilities are more likely to find them. That said, using any static tools will typically help find some problems, so we are suggesting but not requiring this for the 'passing' level badge.

    Static analysis tools that check for vulnerabilities:

    • cargo audit: Scans dependencies against RustSec Advisory Database
    • cargo deny: Checks for security advisories, license issues, unmaintained crates
    • Clippy: Includes security-related lints (unsafe usage, panics, etc.)
    • Kani: Formal verification catches memory safety issues, panics, overflows
    • dudect: Timing vulnerability detection for cryptographic code

    All run in CI to catch vulnerabilities before release.


  • Dynamic code analysis


    If the software produced by the project includes software written using a memory-unsafe language (e.g., C or C++), then at least one dynamic tool (e.g., a fuzzer or web application scanner) MUST be routinely used in combination with a mechanism to detect memory safety problems such as buffer overwrites. If the project does not produce software written in a memory-unsafe language, choose "not applicable" (N/A). [dynamic_analysis_unsafe]
    Examples of mechanisms to detect memory safety problems include Address Sanitizer (ASAN) (available in GCC and LLVM), Memory Sanitizer, and valgrind. Other potentially-used tools include thread sanitizer and undefined behavior sanitizer. Widespread assertions would also work.

    Tesseract Vault is written entirely in Rust, a memory-safe language.
    The project does not include C or C++ code.

    Rust's ownership system, borrow checker, and type system prevent
    memory safety issues (buffer overflows, use-after-free, etc.) at
    compile time. Any unsafe blocks are minimal and documented with
    // SAFETY: comments.

    Additionally, ClusterFuzzLite fuzzing is routinely applied, which would
    detect issues in the rare unsafe blocks.



This data is available under the Community Data License Agreement – Permissive, Version 2.0 (CDLA-Permissive-2.0). This means that a Data Recipient may share the Data, with or without modifications, so long as the Data Recipient makes available the text of this agreement with the shared Data. Please credit dollspace.gay and the OpenSSF Best Practices badge contributors.

Project badge entry owned by: dollspace.gay.
Entry created on 2025-12-31 22:54:10 UTC, last updated on 2026-01-04 01:22:46 UTC. Last achieved passing badge on 2026-01-01 19:44:16 UTC.