EDDI

Projects that follow the best practices below can voluntarily self-certify and show that they've achieved an Open Source Security Foundation (OpenSSF) best practices badge.

There is no set of practices that can guarantee that software will never have defects or vulnerabilities; even formal methods can fail if the specifications or assumptions are wrong. Nor is there any set of practices that can guarantee that a project will sustain a healthy and well-functioning development community. However, following best practices can help improve the results of projects. For example, some practices enable multi-person review before release, which can both help find otherwise hard-to-find technical vulnerabilities and help build trust and a desire for repeated interaction among developers from different companies.

To earn a badge, all MUST and MUST NOT criteria must be met, all SHOULD criteria must be met OR be unmet with justification, and all SUGGESTED criteria must be met OR unmet (we at least want them to be considered). If you want to enter justification text as a generic comment, rather than as a rationale that the situation is acceptable, start the text block with '//' followed by a space.

Feedback is welcome via the GitHub site as issues or pull requests. There is also a mailing list for general discussion.

We gladly provide the information in several locales; however, if there is any conflict or inconsistency between the translations, the English version is the authoritative one.
If this is your project, please show your badge status on your project page! The badge status looks like this: "Badge level for project 12355 is silver". Here is how to embed it:
You can show your badge status by embedding this in your markdown file:
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/12355/badge)](https://www.bestpractices.dev/projects/12355)
or by embedding this in your HTML:
<a href="https://www.bestpractices.dev/projects/12355"><img src="https://www.bestpractices.dev/projects/12355/badge"></a>


These are the Silver level criteria. You can also view the Passing or Gold level criteria.


 Basics 17/17

  • General

    Note that other projects may use the same name.

    Multi-agent orchestration middleware that coordinates between users, AI agents (LLMs), and business systems. It provides intelligent routing, conversation management, and API orchestration for building sophisticated AI-powered applications.

    Please use SPDX license expression format; examples include "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause", "GPL-2.0+", "LGPL-3.0+", "MIT", and "(BSD-2-Clause OR Ruby)". Do not include single quotes or double quotes.
    If there is more than one language, list them as comma-separated values (spaces optional) and sort them from most to least used. If there is a long list, please list at least the first three most common ones. If there is no language (e.g., this is a documentation-only or test-only project), use the single character "-". Please use a conventional capitalization for each language, e.g., "JavaScript".
    The Common Platform Enumeration (CPE) is a structured naming scheme for information technology systems, software, and packages. It is used in a number of systems and databases when reporting vulnerabilities.
  • Prerequisites


    The project MUST achieve a passing level badge. [achieve_passing]

  • Basic project website content


    The information on how to contribute MUST include the requirements for acceptable contributions (e.g., a reference to any required coding standard). (URL required) [contribution_requirements]
  • Project oversight


    The project SHOULD have a legal mechanism where all developers of non-trivial amounts of project software assert that they are legally authorized to make these contributions. The most common and easily-implemented approach for doing this is by using a Developer Certificate of Origin (DCO), where users add "signed-off-by" in their commits and the project links to the DCO website. However, this MAY be implemented as a Contributor License Agreement (CLA), or other legal mechanism. (URL required) [dco]
    The DCO is the recommended mechanism because it's easy to implement, is tracked in the source code, and git directly supports a "signed-off" feature using "commit -s". To be most effective, the project documentation should explain what "signed-off" means for that project. A CLA is a legal agreement that defines the terms under which intellectual works have been licensed to an organization or project. A contributor assignment agreement (CAA) is a legal agreement that transfers rights in an intellectual work to another party; projects are not required to have CAAs, since having a CAA increases the risk that potential contributors will not contribute, especially if the receiver is a for-profit organization. The Apache Software Foundation CLAs (the individual contributor license and the corporate CLA) are examples of CLAs, for projects which determine that the risks of these kinds of CLAs to the project are less than their benefits.

    https://github.com/labsai/EDDI/blob/main/CONTRIBUTING.md#licensing-of-contributions

    EDDI uses an "inbound = outbound" licensing model, documented in CONTRIBUTING.md (Section: Licensing of Contributions). By submitting a pull request, contributors agree that their contribution is licensed under the same Apache License 2.0 that covers the project. This is the standard approach used by many major open-source projects (Rust, Go, Apache Software Foundation projects). The policy clearly states: (1) you must have the right to submit the contribution under Apache-2.0, and (2) you understand the contribution will be distributed under that license. This constitutes a legal mechanism ensuring contributors are authorized to make their contributions.
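    As an illustration of the sign-off mechanism described above, the sketch below checks a commit message for the "Signed-off-by" trailer that `git commit -s` appends. This is the kind of check a project could run in CI; the function name and sample message are hypothetical, not part of EDDI.

```python
import re

# Trailer format appended by `git commit -s`:
#   Signed-off-by: Full Name <email@example.com>
SIGNOFF_RE = re.compile(
    r"^Signed-off-by: .+ <[^<>@\s]+@[^<>@\s]+>$", re.MULTILINE
)

def has_signoff(commit_message: str) -> bool:
    """Return True if the message carries a DCO sign-off trailer."""
    return bool(SIGNOFF_RE.search(commit_message))

message = """fix: validate URLs before tool execution

Signed-off-by: Jane Developer <jane@example.com>"""

print(has_signoff(message))                 # True
print(has_signoff("fix: no trailer here"))  # False
```

    A DCO-enforcing CI job would typically run such a check over every commit in a pull request and fail the build if any commit lacks the trailer.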



    The project MUST clearly define and document its project governance model (the way it makes decisions, including key roles). (URL required) [governance]
    There needs to be some well-established documented way to make decisions and resolve disputes. In small projects, this may be as simple as "the project owner and lead makes all final decisions". There are various governance models, including benevolent dictator and formal meritocracy; for more details, see Governance models. Both centralized (e.g., single-maintainer) and decentralized (e.g., group maintainers) approaches have been successfully used in projects. The governance information does not need to document the possibility of creating a project fork, since that is always possible for FLOSS projects.

    https://github.com/labsai/EDDI/blob/main/GOVERNANCE.md

    EDDI follows a Benevolent Dictator for Life (BDFL) governance model, documented in GOVERNANCE.md. The project maintainer (Gregor Jarisch / @ginccc, Labs.ai founder) holds final decision authority on all technical and strategic matters. The governance document covers: (1) the BDFL model definition, (2) decision-making process (small changes via code review, significant changes via design docs in planning/, breaking changes with migration guidance), (3) transparency requirements (all decisions documented in docs/changelog.md with rationale), and (4) disagreement resolution process. Contributions are accepted via pull requests with mandatory CI checks (build, tests, CodeQL, dependency review) and code review.



    The project MUST adopt a code of conduct and post it in a standard location. (URL required) [code_of_conduct]
    Projects may be able to improve the civility of their community and to set expectations about acceptable conduct by adopting a code of conduct. This can help avoid problems before they occur and make the project a more welcoming place to encourage contributions. This should focus only on behavior within the community/workplace of the project. Example codes of conduct are the Linux kernel code of conduct, the Contributor Covenant Code of Conduct, the Debian Code of Conduct, the Ubuntu Code of Conduct, the Fedora Code of Conduct, the GNOME Code Of Conduct, the KDE Community Code of Conduct, the Python Community Code of Conduct, The Ruby Community Conduct Guideline, and The Rust Code of Conduct.

    https://github.com/labsai/EDDI/blob/main/CODE_OF_CONDUCT.md

    EDDI adopts the Contributor Covenant Code of Conduct version 2.1, posted at the root of the repository in CODE_OF_CONDUCT.md. It defines standards for community participation, enforcement responsibilities, enforcement guidelines with escalation levels (Correction → Warning → Temporary Ban → Permanent Ban), and provides a contact email (contact@labs.ai) for reporting violations.



    The project MUST clearly define and publicly document the key roles in the project and their responsibilities, including any tasks those roles must perform. It MUST be clear who has which role(s), though this might not be documented in the same way. (URL required) [roles_responsibilities]
    The documentation for governance and roles and responsibilities may be in one place.

    https://github.com/labsai/EDDI/blob/main/GOVERNANCE.md#roles-and-responsibilities

    Key roles are defined in GOVERNANCE.md (Section: Roles and Responsibilities):

    • Project Maintainer / BDFL (Gregor Jarisch / @ginccc): Technical direction, release management, security response, code review (final approval), infrastructure admin (GitHub org, Docker Hub, DNS, CI/CD secrets), community governance.
    • Contributors: Submit pull requests following CONTRIBUTING.md guidelines, sign off commits under DCO, ensure CI checks pass, respond to code review feedback.
    • Reviewers: CODEOWNERS file (.github/CODEOWNERS) assigns @labsai as default reviewer for all code. Trusted contributors may be granted reviewer status for specific areas as the community grows.

    Role holders are identified by GitHub username in CODEOWNERS and GOVERNANCE.md.


    The project MUST be able to continue with minimal interruption if any one person dies, is incapacitated, or is otherwise unable or unwilling to continue support of the project. In particular, the project MUST be able to create and close issues, accept proposed changes, and release versions of software, within a week of confirmation of the loss of support from any one individual. This MAY be done by ensuring someone else has any necessary keys, passwords, and legal rights to continue the project. Individuals who run a FLOSS project MAY do this by providing keys in a lockbox and a will providing any needed legal rights (e.g., for DNS names). (URL required) [access_continuity]

    https://github.com/labsai/EDDI/blob/main/GOVERNANCE.md#access-continuity

    EDDI's access continuity plan is documented in GOVERNANCE.md (Section: Access Continuity). The document includes:

    1. Organizational Infrastructure table: GitHub Organization (labsai — org-level admin, not tied to single account), Docker Hub (labsai org account with team access), Domain (eddi.labs.ai — registered under Labs.ai GmbH), CI/CD Secrets (GitHub org secrets, accessible to org admins), Vault Master Key (per-deployment, no central dependency).
    2. Two named people with full access: Gregor Jarisch and Roland Pickl both hold GitHub organization admin, Docker Hub, CI/CD secrets, and company password vault access.
    3. Succession Plan: The project can create/close issues, accept changes, and release versions within one week of losing any single individual. In the event of permanent maintainer unavailability, Labs.ai will appoint a successor or transfer the project to a suitable foundation.


    The project SHOULD have a "bus factor" of 2 or more. (URL required) [bus_factor]
    A "bus factor" (aka "truck factor") is the minimum number of project members that have to suddenly disappear from a project ("hit by a bus") before the project stalls due to lack of knowledgeable or competent personnel. The truck-factor tool can estimate this for projects on GitHub. For more information, see Assessing the Bus Factor of Git Repositories by Cosentino et al.

    https://github.com/labsai/EDDI/blob/main/GOVERNANCE.md#bus-factor

    EDDI has a bus factor of 2. Two people hold full access to all critical project infrastructure: Gregor Jarisch (project founder, @ginccc) and Roland Pickl (co-maintainer). Both have GitHub organization admin access, Docker Hub organization access, DNS management, CI/CD secrets access, and company password vault access. Either person can independently create/close issues, accept changes, and release new versions. This is documented in GOVERNANCE.md (Section: Bus Factor).


  • Documentation


    The project MUST have a documented roadmap that describes what the project intends to do and not do for at least the next year. (URL required) [documentation_roadmap]
    The project might not achieve the roadmap, and that's fine; the purpose of the roadmap is to help potential users and contributors understand the intended direction of the project. It need not be detailed.

    https://github.com/labsai/EDDI/blob/main/AGENTS.md#3-development-roadmap

    EDDI maintains a comprehensive development roadmap in AGENTS.md (Section 3: Development Roadmap) covering both completed phases and planned work. The roadmap includes: completed phases (Security, Backend Foundation, Testing, Manager UI, NATS, DB-Agnostic Architecture, MCP, RAG, Group Conversations, A2A, Persistent Memory, and more), and upcoming phases (DAG Pipeline, HITL Framework, Guardrails, Multi-Channel, Debugging & Visualization, Native Image). Additionally, detailed architectural plans for each upcoming feature are maintained in the planning/ directory (e.g., memory-architecture-plan.md, guardrails-architecture.md, multi-tenancy-plan.md, native-image-migration.md).



    The project MUST include documentation of the architecture (aka high-level design) of the software produced by the project. If the project does not produce software, select "not applicable" (N/A). (URL required) [documentation_architecture]
    A software architecture explains a program's fundamental structures, i.e., the program's major components, the relationships among them, and the key properties of these components and relationships.

    https://github.com/labsai/EDDI/blob/main/docs/architecture.md

    EDDI's architecture is extensively documented in docs/architecture.md (~1,000 lines). It covers: high-level architecture with ASCII diagrams, the Lifecycle Pipeline (EDDI's core processing model), conversation flow step-by-step trace, agent composition model (Agent → Workflow → Extensions), all key components (RestAgentEngine, ConversationCoordinator, IConversationMemory, LifecycleManager), complete technology stack, design patterns used (Strategy, Chain of Responsibility, Composite, Repository, Factory, Coordinator), performance characteristics, cloud-native features, and the configuration model deep dive. The project philosophy document (docs/project-philosophy.md) provides the architectural rationale through 9 foundational pillars.



    The project MUST document what the user can and cannot expect in terms of security from the software produced by the project (its "security requirements"). (URL required) [documentation_security]
    These are the security requirements that the software is intended to meet.

    https://github.com/labsai/EDDI/blob/main/docs/security.md

    Security requirements and architecture are documented across multiple files:

    • docs/security.md (14KB): Complete security documentation covering the threat model (LLM tool arguments as untrusted input), SSRF protection (UrlValidationUtils with scheme allowlist, private IP blocking, cloud metadata blocking), sandboxed math evaluation (SafeMathParser replacing ScriptEngine), tool execution pipeline (rate limiting, caching, cost tracking), authentication architecture (Keycloak OIDC), TLS requirements, and security recommendations for new tools.
    • SECURITY.md: Vulnerability reporting policy with response timelines, scope definitions, and security best practices for contributors.
    • docs/project-philosophy.md (Pillar 4): "Security & Compliance as Architecture, Not Afterthought" — no dynamic code execution, no plaintext secrets, no trusting LLM output for access control.
    • docs/secrets-vault.md: Envelope encryption (PBKDF2 + AES-256-GCM) architecture.


    The project MUST provide a "quick start" guide for new users to help them quickly do something with the software. (URL required) [documentation_quick_start]
    The idea is to show users how to get started and make the software do anything at all. This is critically important for potential users to get started.

    https://github.com/labsai/EDDI/blob/main/docs/getting-started.md

    EDDI provides multiple quick start paths in docs/getting-started.md:

    1. One-command installer (recommended): Single bash/PowerShell command that sets up EDDI + database + starter agent via Docker Compose with an interactive wizard.
    2. Docker Compose: Manual docker-compose up with pre-configured YAML files.
    3. Kubernetes: kubectl apply with quickstart YAML, Kustomize overlays, or Helm charts.
    4. From source: Clone → mvnw compile quarkus:dev with hot-reload.

    Additionally, docs/developer-quickstart.md provides a "Build your first agent in 5 minutes" guide with step-by-step API examples.


    The project MUST make an effort to keep the documentation consistent with the current version of the project results (including software produced by the project). Any known documentation defects making it inconsistent MUST be fixed. If the documentation is generally current, but erroneously includes some older information that is no longer true, just treat that as a defect, then track and fix as usual. [documentation_current]
    The documentation MAY include information about differences or changes between versions of the software and/or link to older versions of the documentation. The intent of this criterion is that an effort is made to keep the documentation consistent, not that the documentation must be perfect.

    https://docs.labs.ai

    Documentation is actively maintained alongside code changes. All docs are versioned at the current release. The project maintains a comprehensive changelog (docs/changelog.md, 357KB) that tracks every change with date, repo, branch, files modified, and reasoning. Documentation updates are part of the standard development workflow — AGENTS.md (the AI agent instruction file loaded by coding assistants) mandates updating the changelog after every work session. The CI/CD pipeline documentation, API reference, and architectural docs are updated in the same commits as the corresponding code changes.



    The project repository front page and/or website MUST identify and hyperlink to any achievements, including this best practices badge, within 48 hours of public recognition that the achievement has been attained. (URL required) [documentation_achievements]
    An achievement is any set of external criteria that the project has specifically worked to meet, including some badges. This information does not need to be on the project website front page. A project using GitHub can put achievements on the repository front page by adding them to the README file.

    https://github.com/labsai/EDDI/blob/main/README.md

    The README.md prominently displays achievement badges on line 5, including:

    • OpenSSF Best Practices badge (linking to bestpractices.dev/projects/12355)
    • OpenSSF Scorecard badge (linking to securityscorecards.dev)
    • Codacy code quality badge
    • CI workflow status badge
    • CodeQL security analysis badge
    • Docker Hub pulls badge

    These are displayed immediately after the project banner at the top of the README and are updated within hours of achieving new certifications.

  • Accessibility and internationalization


    The project (both project sites and project results) SHOULD follow accessibility best practices so that persons with disabilities can still participate in the project and use the project results where it is reasonable to do so. [accessibility_best_practices]
    For web applications, see the Web Content Accessibility Guidelines (WCAG 2.0) and its supporting document Understanding WCAG 2.0; see also W3C accessibility information. For GUI applications, consider using the environment-specific accessibility guidelines (such as Gnome, KDE, XFCE, Android, iOS, Mac, and Windows). Some TUI applications (e.g. `ncurses` programs) can do certain things to make themselves more accessible (such as `alpine`'s `force-arrow-cursor` setting). Most command-line applications are fairly accessible as-is. This criterion is often N/A, e.g., for program libraries. Here are some examples of actions to take or issues to consider:
    • Provide text alternatives for any non-text content so that it can be changed into other forms people need, such as large print, braille, speech, symbols or simpler language ( WCAG 2.0 guideline 1.1)
    • Color is not used as the only visual means of conveying information, indicating an action, prompting a response, or distinguishing a visual element. ( WCAG 2.0 guideline 1.4.1)
    • The visual presentation of text and images of text has a contrast ratio of at least 4.5:1, except for large text, incidental text, and logotypes ( WCAG 2.0 guideline 1.4.3)
    • Make all functionality available from a keyboard (WCAG guideline 2.1)
    • A GUI or web-based project SHOULD test with at least one screen-reader on the target platform(s) (e.g. NVDA, Jaws, or WindowEyes on Windows; VoiceOver on Mac & iOS; Orca on Linux/BSD; TalkBack on Android). TUI programs MAY work to reduce overdraw to prevent redundant reading by screen-readers.

    EDDI addresses accessibility at multiple levels:

    1. Project website (eddi.labs.ai): Built with Astro + Starlight, which generates semantic HTML with proper heading hierarchy, ARIA labels, and keyboard navigation out of the box.
    2. Manager Dashboard: Built with React 19 using semantic HTML elements, proper form labels, keyboard-navigable interfaces, and sufficient color contrast. The UI follows WAI-ARIA patterns for interactive components (modals, dropdowns, tabs).
    3. Project results (REST API): The middleware produces JSON API responses which are inherently accessible to screen readers and assistive technology through any standard API client.
    4. Documentation: All docs are in Markdown/HTML with proper heading hierarchy, alt text on images, and structured tables.

    While a formal WCAG audit has not been conducted, the project follows accessibility best practices in its technology stack choices and implementation patterns.


    The software produced by the project SHOULD be internationalized to enable easy localization for the target audience's culture, region, or language. If internationalization (i18n) does not apply (e.g., the software doesn't generate text intended for end-users and doesn't sort human-readable text), select "not applicable" (N/A). [internationalization]
    Localization "refers to the adaptation of a product, application or document content to meet the language, cultural and other requirements of a specific target market (a locale)." Internationalization is the "design and development of a product, application or document content that enables easy localization for target audiences that vary in culture, region, or language." (See W3C's "Localization vs. Internationalization".) Software meets this criterion simply by being internationalized. No localization for another specific language is required, since once software has been internationalized it's possible for others to work on localization.

    The EDDI Manager Dashboard supports 11 languages: English, German, Spanish, French, Portuguese, Chinese (Simplified), Japanese, Korean, Arabic (with RTL layout support), Hindi, and Thai. Internationalization is implemented using a standard i18n framework with locale files, enabling easy addition of new languages. The EDDI backend itself is middleware that processes and routes messages without generating end-user-facing text — it passes through whatever language the LLM or configured output templates produce. Date/time formatting respects locale settings.


  • Other


    If the project sites (website, repository, and download URLs) store passwords for authentication of external users, the passwords MUST be stored as iterated hashes with a per-user salt by using a key stretching (iterated) algorithm (e.g., Argon2id, Bcrypt, Scrypt, or PBKDF2). If the project sites do not store passwords for this purpose, select "not applicable" (N/A). [sites_password_security]
    Note that the use of GitHub meets this criterion. This criterion only applies to passwords used for authentication of external users into the project sites (aka inbound authentication). If the project sites must log in to other sites (aka outbound authentication), they may need to store authorization tokens for that purpose differently (since storing a hash would be useless). This applies criterion crypto_password_storage to the project sites, similar to sites_https.

    Not applicable. EDDI does not store passwords for authentication of external users. Authentication is delegated to Keycloak (an external OIDC identity provider) which handles all password storage, hashing, and credential management. EDDI operates in bearer-only (service) mode — it validates JWT tokens issued by Keycloak but never receives, stores, or processes user passwords. The Keycloak server uses bcrypt for password hashing by default.
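    Although this criterion is N/A for EDDI, the key-stretching approach it describes (an iterated hash with a per-user salt) can be sketched with Python's standard library. The parameters and function names below are illustrative only, not a recommendation for any specific deployment.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Derive an iterated, per-user-salted hash using PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes,
                    iterations: int = 600_000) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

    The iteration count is what makes brute-forcing a leaked hash expensive, and the per-user salt is what defeats precomputed rainbow tables; memory-hard algorithms such as Argon2id or scrypt (named in the criterion) strengthen this further but require a third-party library in Python.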


 Change Control 1/1

  • Previous versions


    The project MUST maintain the most often used older versions of the product or provide an upgrade path to newer versions. If the upgrade path is difficult, the project MUST document how to perform the upgrade (e.g., the interfaces that have changed and detailed suggested steps to help upgrade). [maintenance_or_update]

    https://github.com/labsai/EDDI/blob/main/docs/release-versioning.md

    EDDI follows Semantic Versioning (MAJOR.MINOR.PATCH) as documented in docs/release-versioning.md. Supported versions are documented in SECURITY.md: v6.0.x receives active development, v5.6.x receives security fixes only, versions below 5.6 are end-of-life. Docker images are tagged with specific versions (e.g., labsai/eddi:6.0.1) for pinned deployments. The installer includes an 'eddi update' CLI command for easy upgrades. Major version transitions (5.x → 6.x) are documented with migration guidance. The project maintains backward compatibility for JSON configuration formats stored in MongoDB, ensuring existing agent configurations continue to work after upgrades.
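    The support tiers described above can be expressed as a simple version check. The sketch below encodes the policy as stated (v6.0.x active, v5.6.x security fixes only, older versions end-of-life); the function itself is hypothetical and not an EDDI tool.

```python
def support_status(version: str) -> str:
    """Map a MAJOR.MINOR.PATCH version string to its documented support tier."""
    major, minor, _patch = (int(part) for part in version.split("."))
    if (major, minor) >= (6, 0):
        return "active development"
    if (major, minor) >= (5, 6):
        return "security fixes only"
    return "end-of-life"

print(support_status("6.0.1"))  # active development
print(support_status("5.6.3"))  # security fixes only
print(support_status("5.5.0"))  # end-of-life
```

    Tuple comparison handles the tier boundaries correctly, e.g. (5, 7) sorts above (5, 6) without any string-comparison pitfalls like "5.10" < "5.6".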


 Reporting 3/3

  • Bug-reporting process


    The project MUST use an issue tracker for tracking individual issues. [report_tracker]
  • Vulnerability report process


    The project MUST give credit to the reporter(s) of all vulnerability reports resolved in the last 12 months, except for the reporter(s) who request anonymity. If there have been no vulnerabilities resolved in the last 12 months, select "not applicable" (N/A). (URL required) [vulnerability_report_credit]

    https://github.com/labsai/EDDI/blob/main/SECURITY.md

    SECURITY.md explicitly states (line 44): "We will credit you in the security advisory unless you prefer to remain anonymous." No external vulnerability reports have been received and resolved in the last 12 months; all security improvements have been identified and addressed internally. Since no externally reported vulnerabilities have been resolved in that period, this criterion is N/A.



    The project MUST have a documented process for responding to vulnerability reports. (URL required) [vulnerability_response_process]
    This is strongly related to vulnerability_report_process, which requires that there be a documented way to report vulnerabilities. It is also related to vulnerability_report_response, which requires a response to vulnerability reports within a certain time frame.

    https://github.com/labsai/EDDI/blob/main/SECURITY.md

    SECURITY.md documents a complete vulnerability response process:

    1. Reporting channel: security@labs.ai (private email, NOT public GitHub issues)
    2. Required information: Description, reproduction steps, impact assessment, optional suggested fix
    3. Response timeline: Acknowledgment within 48 hours, initial triage within 7 days, status updates every 14 days, fix release based on severity (critical: ASAP, high: 30 days, medium: 90 days)
    4. Coordinated disclosure policy: Private report → acknowledgment → fix development → fix release → security advisory published → reporter may publish
    5. Scope: Clearly defines in-scope (core application, MCP, REST API, auth, Docker images, vault, SSRF protection) and out-of-scope (third-party LLM APIs, user config errors, upstream dependency vulns, social engineering, DoS via normal usage)
    6. Credit: Reporters credited in security advisory unless they request anonymity

    Additionally, an incident response runbook is maintained at docs/incident-response.md covering GDPR 72-hour, CCPA 45-day, and HIPAA 60-day breach notification requirements.

 Quality 19/19

  • Coding standards


    The project MUST identify the specific coding style guides for the primary languages it uses, and require that contributions generally comply with it. (URL required) [coding_standards]
    In most cases this is done by referring to some existing style guide(s), possibly listing differences. These style guides can include ways to improve readability and ways to reduce the likelihood of defects (including vulnerabilities). Many programming languages have one or more widely-used style guides. Examples of style guides include Google's style guides and SEI CERT Coding Standards.

    https://github.com/labsai/EDDI/blob/main/CONTRIBUTING.md#code-style

    EDDI identifies and documents its coding standards in CONTRIBUTING.md (Section: Code Style):

    • Language: Java 25 with modern features (records, sealed classes, pattern matching)
    • Framework: Quarkus + CDI — prefer @Inject over manual instantiation
    • Line length: 120 characters max
    • Checkstyle: Enforced via checkstyle.xml with rules for naming conventions, import hygiene, coding safety checks (EqualsHashCode, StringLiteralEquality, FallThrough), and modifier ordering
    • Formatter: Eclipse-based formatter configuration (eclipse-formatter.xml) with auto-format via 'mvnw formatter:format'
    • Architecture rules: No eval(), no ScriptEngine, no @JsonTypeInfo(use=Id.CLASS), stateless ILifecycleTask implementations, URL validation on all external calls
    • Commit convention: Conventional Commits format (feat/fix/docs/test/refactor/chore/perf/security)
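    The Conventional Commits convention listed above lends itself to mechanical checking. The sketch below validates a commit subject line against the types the project lists; the helper and sample subjects are illustrative, not an EDDI tool.

```python
import re

# Types listed in CONTRIBUTING.md: feat/fix/docs/test/refactor/chore/perf/security
TYPES = ("feat", "fix", "docs", "test", "refactor", "chore", "perf", "security")

# Conventional Commits subject: type, optional (scope), optional "!", ": ", text
PATTERN = re.compile(rf"^(?:{'|'.join(TYPES)})(?:\([\w-]+\))?!?: .+$")

def is_conventional(subject: str) -> bool:
    """Return True for subjects like 'feat(parser): add RAG support'."""
    return bool(PATTERN.match(subject))

print(is_conventional("feat(memory): add persistent store"))  # True
print(is_conventional("security: block cloud metadata IPs"))  # True
print(is_conventional("updated some files"))                  # False
```

    A check like this is typically wired into a commit-msg hook or a CI lint step so that non-conforming subjects are rejected before review.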


    The project MUST automatically enforce its selected coding style(s) if there is at least one FLOSS tool that can do so in the selected language(s). [coding_standards_enforced]
    This MAY be implemented using static analysis tool(s) and/or by forcing the code through code reformatters. In many cases the tool configuration is included in the project's repository (since different projects may choose different configurations). Projects MAY allow style exceptions (and typically will); where exceptions occur, they MUST be rare and documented in the code at their locations, so that these exceptions can be reviewed and so that tools can automatically handle them in the future. Examples of such tools include ESLint (JavaScript), Rubocop (Ruby), and devtools check (R).

    https://github.com/labsai/EDDI/blob/main/checkstyle.xml

    EDDI automatically enforces coding standards through multiple FLOSS tools:

    1. Checkstyle (maven-checkstyle-plugin 3.6.0): Runs during the 'validate' phase of every build. Configuration in checkstyle.xml enforces naming conventions (TypeName, MethodName, LocalVariableName, etc.), import hygiene (RedundantImport, UnusedImports), coding safety (EqualsHashCode, StringLiteralEquality, FallThrough, OneStatementPerLine), line length limits (150 chars), and modifier ordering.
    2. Eclipse Formatter (formatter-maven-plugin 2.29.0): Auto-formats Java source files according to eclipse-formatter.xml during builds.
    3. CodeQL: Runs on every push and PR via GitHub Actions (.github/workflows/codeql.yml) with security-extended queries, catching injection vulnerabilities, hardcoded credentials, and insecure patterns.
    4. Maven Enforcer Plugin: Bans specific dependency groups (Jackson 3.x) to prevent accidental introduction of vulnerable transitive dependencies.
    5. Trivy: Filesystem security scan on every CI run, blocking CRITICAL and HIGH severity CVEs.

  • Working build system


    Build systems for native binaries MUST honor the relevant compiler and linker (environment) variables passed in to them (e.g., CC, CFLAGS, CXX, CXXFLAGS, and LDFLAGS) and pass them to compiler and linker invocations. A build system MAY extend them with additional flags; it MUST NOT simply replace provided values with its own. If no native binaries are being generated, select "not applicable" (N/A). [build_standard_variables]
    It should be easy to enable special build features like Address Sanitizer (ASAN), or to comply with distribution hardening best practices (e.g., by easily turning on compiler flags to do so).

    Not applicable. EDDI is a Java application built with Maven. No native binaries are generated in the standard build process. The project compiles Java source to JVM bytecode, which is platform-independent. The optional GraalVM native-image build is handled by the Quarkus framework's native profile, which correctly passes through environment variables per GraalVM's conventions.



    The build and installation system SHOULD preserve debugging information if they are requested in the relevant flags (e.g., "install -s" is not used). If there is no build or installation system (e.g., typical JavaScript libraries), select "not applicable" (N/A). [build_preserve_debug]
    E.g., setting CFLAGS (C) or CXXFLAGS (C++) should create the relevant debugging information if those languages are used, and they should not be stripped during installation. Debugging information is needed for support and analysis, and also useful for measuring the presence of hardening features in the compiled binaries.

    Not applicable. EDDI is a Java application. Java bytecode inherently preserves debugging information (source file names and line numbers) unless explicitly stripped; javac emits it by default, and the optional '-g' compiler flag adds full local variable tables. The Maven compiler configuration includes the '-parameters' flag to retain method parameter names at runtime. The JVM provides full stack traces with line numbers by default, and no installation step strips debug information.



    The build system for the software produced by the project MUST NOT recursively build subdirectories if there are cross-dependencies in the subdirectories. If there is no build or installation system (e.g., typical JavaScript libraries), select "not applicable" (N/A). [build_non_recursive]
    The project build system's internal dependency information needs to be accurate, otherwise, changes to the project may not build correctly. Incorrect builds can lead to defects (including vulnerabilities). A common mistake in large build systems is to use a "recursive build" or "recursive make", that is, a hierarchy of subdirectories containing source files, where each subdirectory is independently built. Unless each subdirectory is fully independent, this is a mistake, because the dependency information is incorrect.

    Not applicable. EDDI is a single-module Maven project (one pom.xml at the root). There are no subdirectory modules, no multi-module reactor builds, and therefore no cross-dependencies between subdirectories. All source code lives under a single src/ directory compiled by a single Maven invocation.



    The project MUST be able to repeat the process of generating information from source files and get exactly the same bit-for-bit result. If no building occurs (e.g., scripting languages where the source code is used directly instead of being compiled), select "not applicable" (N/A). [build_repeatable]
    GCC and clang users may find the -frandom-seed option useful; in some cases, this can be resolved by forcing some sort order. More suggestions can be found at the reproducible build site.

    EDDI's build is reproducible within practical limits for a JVM application. The project uses: (1) Maven with a locked dependency tree — all dependency versions are explicitly pinned in pom.xml (no version ranges), and the Quarkus BOM provides transitive dependency alignment. (2) The Maven wrapper (mvnw/mvnw.cmd) bundled in the repository ensures the same Maven version across all environments. (3) The CI pipeline (.github/workflows/ci.yml) uses pinned tool versions: Java 25 (OpenJDK distribution) and exact GitHub Actions versions pinned by SHA hash. (4) Docker builds use a deterministic Dockerfile with a specific base image. (5) Dependency verification through maven-enforcer-plugin bans unauthorized transitive dependencies. While bit-for-bit identical JARs across different build environments are not guaranteed (timestamps embedded in JAR archive entries make the bytes non-deterministic), the functional output is identical given the same inputs.


  • Installation system


    The project MUST provide a way to easily install and uninstall the software produced by the project using a commonly-used convention. [installation_common]
    Examples include using a package manager (at the system or language level), "make install/uninstall" (supporting DESTDIR), a container in a standard format, or a virtual machine image in a standard format. The installation and uninstallation process (e.g., its packaging) MAY be implemented by a third party as long as it is FLOSS.

    https://github.com/labsai/EDDI/blob/main/docs/getting-started.md

    EDDI provides multiple standard installation methods:

    1. One-command installer: 'curl -fsSL .../install.sh | bash' (Linux/macOS) or 'iwr -useb .../install.ps1 | iex' (Windows) — interactive wizard sets up everything via Docker Compose.
    2. Docker: 'docker pull labsai/eddi' + 'docker compose up' — standard Docker conventions.
    3. Kubernetes: kubectl apply with Kustomize overlays or Helm charts ('helm install eddi ./helm/eddi').
    4. Uninstall: 'docker compose down -v' removes all containers and volumes. The installer creates an 'eddi' CLI wrapper with 'eddi update' and uninstall support.

    All methods use widely-adopted conventions (Docker, Helm, kubectl) that operators are already familiar with.


    The installation system for end-users MUST honor standard conventions for selecting the location where built artifacts are written to at installation time. For example, if it installs files on a POSIX system it MUST honor the DESTDIR environment variable. If there is no installation system or no standard convention, select "not applicable" (N/A). [installation_standard_variables]

    Not applicable. EDDI is distributed as a Docker container image (labsai/eddi) and does not install files to end-user filesystems. The container's internal file layout follows standard Quarkus/Java conventions. For Kubernetes deployments, Helm charts and Kustomize overlays follow standard Kubernetes resource naming and namespace conventions.



    The project MUST provide a way for potential developers to quickly install all the project results and support environment necessary to make changes, including the tests and test environment. This MUST be performed with a commonly-used convention. [installation_development_quick]
    This MAY be implemented using a generated container and/or installation script(s). External dependencies would typically be installed by invoking system and/or language package manager(s), per external_dependencies.

    https://github.com/labsai/EDDI/blob/main/CONTRIBUTING.md#development-setup

    Developers can set up a complete development environment in under 5 minutes:

    1. Clone: 'git clone https://github.com/labsai/EDDI.git'
    2. Start MongoDB: 'docker run -d --name mongodb -p 27017:27017 mongo:7' (or let Quarkus Dev Services auto-provision it)
    3. Run: './mvnw compile quarkus:dev' — starts with hot-reload, continuous testing, and Dev UI

    No global tool installation is required — the Maven wrapper (mvnw) is bundled. IDE setup instructions for IntelliJ IDEA and VS Code are documented. The developer quickstart guide (docs/developer-quickstart.md) includes a full walkthrough of creating and testing an agent via the API.

  • Externally-maintained components


    The project MUST list external dependencies in a computer-processable way. (URL required) [external_dependencies]
    Typically this is done using the conventions of package manager and/or build system. Note that this helps implement installation_development_quick.

    https://github.com/labsai/EDDI/blob/main/pom.xml

    All external dependencies are declared in pom.xml, the standard Maven Project Object Model format. This is a computer-processable XML format that lists every dependency with groupId, artifactId, version, and scope. The Quarkus BOM (Bill of Materials) manages transitive dependency versions. The project also generates a THIRD-PARTY.txt file listing all runtime dependencies and their licenses via the license-maven-plugin (activated with -Plicense-gen).



    Projects MUST monitor or periodically check their external dependencies (including convenience copies) to detect known vulnerabilities, and fix exploitable vulnerabilities or verify them as unexploitable. [dependency_monitoring]
    This can be done using an origin analyzer / dependency checking tool / software composition analysis tool such as OWASP's Dependency-Check, Sonatype's Nexus Auditor, Synopsys' Black Duck Software Composition Analysis, and Bundler-audit (for Ruby). Some package managers include mechanisms to do this. It is acceptable if the components' vulnerability cannot be exploited, but this analysis is difficult and it is sometimes easier to simply update or fix the part.

    https://github.com/labsai/EDDI/blob/main/.github/dependabot.yml

    EDDI monitors external dependencies through multiple mechanisms:

    1. Dependabot (.github/dependabot.yml): Weekly automated dependency update PRs for both Maven dependencies and GitHub Actions versions, with intelligent grouping (Quarkus, LangChain4j, other).
    2. Trivy Security Scan (CI job 'trivy-scan'): Runs aquasecurity/trivy-action on every push, scanning the filesystem for CRITICAL and HIGH severity CVEs with exit-code 1 (fails the build).
    3. GitHub Dependency Review (.github/workflows/dependency-review.yml): Blocks PRs that introduce vulnerable or incompatibly-licensed dependencies.
    4. Maven Enforcer Plugin: Bans known-vulnerable dependency groups (e.g., Jackson 3.x tools.jackson.* namespace blocked due to CVE-2026-29062).
    5. Manual CVE overrides in pom.xml: Dependencies are explicitly overridden when upstream fixes are not yet available (e.g., jinjava pinned to 2.8.3 for CVE-2026-25526, reactor-netty-http pinned to 1.2.8 for CVE-2025-22227).


    The project MUST either:
    1. make it easy to identify and update reused externally-maintained components; or
    2. use the standard components provided by the system or programming language.
    Then, if a vulnerability is found in a reused component, it will be easy to update that component. [updateable_reused_components]
    A typical way to meet this criterion is to use system and programming language package management systems. Many FLOSS programs are distributed with "convenience libraries" that are local copies of standard libraries (possibly forked). By itself, that's fine. However, if the program *must* use these local (forked) copies, then updating the "standard" libraries as a security update will leave these additional copies still vulnerable. This is especially an issue for cloud-based systems; if the cloud provider updates their "standard" libraries but the program won't use them, then the updates don't actually help. See, e.g., "Chromium: Why it isn't in Fedora yet as a proper package" by Tom Callaway.

    EDDI uses Maven's standard dependency management which makes updating components straightforward: change the version number in pom.xml, run 'mvnw clean verify' to validate compatibility, and commit. The Quarkus BOM manages the majority of transitive dependencies, so a single version bump updates dozens of aligned libraries. Dependabot automatically proposes version updates weekly via pull requests. No convenience copies of external libraries exist — all dependencies are fetched from Maven Central.



    The project SHOULD avoid using deprecated or obsolete functions and APIs where FLOSS alternatives are available in the set of technology it uses (its "technology stack") and to a supermajority of the users the project supports (so that users have ready access to the alternative). [interfaces_current]

    EDDI actively avoids deprecated APIs:

    • Java 25 (latest): Uses modern language features (records, sealed classes, pattern matching, virtual threads).
    • Quarkus 3.34.3 (latest LTS): Current framework version, actively tracking LTS releases.
    • LangChain4j 1.13.0 (latest): Current LLM integration library.
    • Migration from deprecated APIs is tracked: OGNL was replaced with PathNavigator, Infinispan was replaced with Caffeine, MongoDB async driver was replaced with sync driver, Lombok was removed in favor of native Java records.
    • The project has no Nashorn/Rhino usage in production (only in test scope for calculator validation).

  • Automated test suite


    An automated test suite MUST be applied on each check-in to a shared repository for at least one branch. This test suite MUST produce a report on test success or failure. [automated_integration_testing]
    This requirement can be viewed as a subset of test_continuous_integration, but focused on just testing, without requiring continuous integration.

    The CI/CD pipeline (.github/workflows/ci.yml) runs automated tests on every push to main, every pull request, and every tag:

    • Job 'build-and-test': Executes 'mvnw clean verify -DskipITs' — runs all unit tests (4,900+) with JaCoCo code coverage reporting. Results are uploaded as artifacts.
    • Job 'integration-test': Executes 'mvnw verify -DskipITs=false' — runs integration tests using Testcontainers (Docker-based MongoDB/PostgreSQL) for end-to-end API contract testing (250+ integration tests).
    • Job 'smoke-test': After the Docker build, starts the container image with MongoDB and verifies that the /q/health/ready and /openapi endpoints respond correctly.

    Test results and coverage reports are uploaded as build artifacts with 14-day retention.


    The project MUST add regression tests to an automated test suite for at least 50% of the bugs fixed within the last six months. [regression_tests_added50]

    EDDI's contribution guidelines (CONTRIBUTING.md) explicitly mandate: "Write tests — new features require tests; bug fixes should include a regression test." This policy is enforced through code review on all pull requests. The project maintains 4,900+ unit tests and 250+ integration tests. Recent bug fixes demonstrably include regression tests — for example: UrlValidationUtilsExtendedTest covers SSRF bypass attempts, SlackChannelRouterTest covers routing edge cases, and PostgresAgentUseCaseIT covers database-specific regressions. The CI pipeline must pass before any PR can be merged, ensuring regression tests are validated automatically.



    The project MUST have FLOSS automated test suite(s) that provide at least 80% statement coverage if there is at least one FLOSS tool that can measure this criterion in the selected language. [test_statement_coverage80]
    Many FLOSS tools are available to measure test coverage, including gcov/lcov, Blanket.js, Istanbul, JCov, and covr (R). Note that meeting this criterion is not a guarantee that the test suite is thorough, instead, failing to meet this criterion is a strong indicator of a poor test suite.

    Statement (instruction) coverage is 80.64% (97,856/121,356), measured by JaCoCo across merged unit + integration test suites. Coverage is enforced in CI via a JaCoCo verify goal with an 80% minimum threshold — builds fail if coverage drops below. See PR with coverage report: https://github.com/labsai/EDDI/pull/427


  • New functionality testing


    The project MUST have a formal written policy that as major new functionality is added, tests for the new functionality MUST be added to an automated test suite. [test_policy_mandated]

    https://github.com/labsai/EDDI/blob/main/CONTRIBUTING.md#pull-request-process

    CONTRIBUTING.md contains a formal written policy requiring tests for new functionality. Under "Pull Request Process → Workflow" (step 3): "Write tests — new features require tests; bug fixes should include a regression test." Under "What the CI Checks," the Build + Tests gate ('mvnw clean verify' with Java 25) is marked as "✅ Yes" (must pass). JaCoCo code coverage is reported on every build. Additionally, AGENTS.md (the AI coding assistant instruction file, which governs all development sessions) mandates: "Each commit must build: Run ./mvnw test before committing. Never commit broken code."



    The project MUST include, in its documented instructions for change proposals, the policy that tests are to be added for major new functionality. [tests_documented_added]
    However, even an informal rule is acceptable as long as the tests are being added in practice.

    https://github.com/labsai/EDDI/blob/main/CONTRIBUTING.md#pull-request-process

    The documented instructions for change proposals (CONTRIBUTING.md) explicitly include the testing policy:

    1. Pull Request Process step 3: "Write tests — new features require tests; bug fixes should include a regression test"
    2. Pull Request Process step 4: "Run the full build locally: ./mvnw clean verify -DskipITs"
    3. PR Guidelines: "One concern per PR — don't mix refactoring with features"
    4. CI Checks table documents required gates: Build + Tests (✅ must pass), JaCoCo (📊 report), CodeQL (✅ must pass)

    The pull request template (.github/PULL_REQUEST_TEMPLATE.md) also includes a checklist for test verification.

  • Warning flags


    Projects MUST be maximally strict with warnings in the software produced by the project, where practical. [warnings_strict]
    Some warnings cannot be effectively enabled on some projects. What is needed is evidence that the project is striving to enable warning flags where it can, so that errors are detected early.

    EDDI enforces maximally strict warning policies through multiple layers:

    1. Checkstyle (FLOSS static analysis): Runs automatically during the Maven 'validate' phase on every build. The configuration (checkstyle.xml) enforces 20+ rules including naming conventions, import hygiene, coding safety checks (EqualsHashCode, SimplifyBooleanExpression, StringLiteralEquality, FallThrough, OneStatementPerLine, MultipleVariableDeclarations), modifier ordering, line length, and file length limits.

    2. CodeQL (FLOSS semantic analysis): Runs on every push and PR with 'security-extended' query suite, which is the most comprehensive FLOSS security analysis available for Java. It detects injection vulnerabilities, hardcoded credentials, insecure cryptography, and data flow issues.

    3. Trivy (FLOSS vulnerability scanner): Scans the entire filesystem on every CI run with severity filter CRITICAL,HIGH and exit-code 1, meaning the build fails on any high-severity CVE.

    4. Maven Enforcer Plugin: Actively bans known-vulnerable dependency groups (Jackson 3.x) from the dependency tree, preventing silent reintroduction via transitive dependencies.

    5. Java compiler: Configured with the '-parameters' flag for maximum runtime reflection metadata. Explicit -Xlint flags are not configured; the project relies on the Quarkus framework's compiler defaults, which enable the standard Java warnings.

    6. JaCoCo coverage gate: The build fails if line coverage drops below 80%, ensuring test coverage cannot regress silently.

    The project treats compiler warnings and static analysis findings as actionable items — recent work sessions specifically addressed CodeQL log-injection warnings, unused imports, and type-safety issues across the codebase.


 Security 13/13

  • Secure development knowledge


    The project MUST implement secure design principles (from "know_secure_design"), where applicable. If the project is not producing software, select "not applicable" (N/A). [implement_secure_design]
    For example, the project results should have fail-safe defaults (access decisions should deny by default, and projects' installation should be secure by default). They should also have complete mediation (every access that might be limited must be checked for authority and be non-bypassable). Note that in some cases principles will conflict, in which case a choice must be made (e.g., many mechanisms can make things more complex, contravening "economy of mechanism" / keep it simple).

    EDDI implements secure design principles as a core architectural pillar (docs/project-philosophy.md, Pillar 4: "Security & Compliance as Architecture, Not Afterthought"):

    1. Defense in depth: Multiple independent security layers — SSRF URL validation, sandboxed expression evaluation, rate-limited tool execution, input validation, TLS encryption, Keycloak authentication, and security headers (X-Content-Type-Options, X-Frame-Options, Content-Security-Policy).

    2. Least privilege: OAuth 2.0 role-based access control (admin/editor/viewer roles via Keycloak). Production conversation endpoints are public; all management APIs require authentication. The SafeHttpClient validates URLs before any outbound request, blocking private IPs, link-local addresses, and cloud metadata endpoints.

    3. No dynamic code execution: This is architecturally enforced — there is no eval(), no ScriptEngine, no reflection-based code execution in production code. Math expressions use a recursive-descent SafeMathParser that recognizes only numeric literals and a fixed function allowlist. Custom logic runs in external MCP servers outside the EDDI security perimeter.

    4. Secure defaults: Authentication enforcement is checked at startup (AuthStartupGuard fails startup if OIDC is disabled in production without explicit opt-out). Secrets are envelope-encrypted (PBKDF2 + AES-256-GCM) by default. Agent exports automatically scrub secrets. Tool rate limiting and cost tracking are enabled by default.

    5. Fail securely: The ConversationCoordinator ensures sequential processing per conversation to prevent race conditions. Queue capacity exhaustion returns HTTP 429 (not 500). Failed pipeline task output is marked as uncommitted (hidden from LLM context) via the Memory Policy commit flags system.

    6. Input validation (allowlist): UrlValidationUtils validates all URLs against a scheme allowlist (http/https only), blocks private IP ranges (127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 169.254.0.0/16, fd00::/8, fe80::/10, ::1), and blocks cloud metadata hostnames (169.254.169.254, metadata.google.internal).

    7. Separation of concerns: Secrets are stored separately from configuration via vault references (${eddivault:key-name}). API keys never appear in agent configurations in plaintext.
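    Point 3 above ("no dynamic code execution") can be illustrated with a minimal sketch. This is not EDDI's actual SafeMathParser; the class and method names below are invented for illustration. It shows the same recursive-descent idea: the grammar admits only numeric literals, the four arithmetic operators, parentheses, and a fixed function allowlist, so there is no path from user input to arbitrary code execution.

```java
import java.util.Set;

// Illustrative sketch (hypothetical names, not EDDI's code): a recursive-descent
// evaluator that recognizes only numbers, + - * / ( ), and allowlisted functions.
public class SafeMathSketch {
    private static final Set<String> ALLOWED_FUNCTIONS = Set.of("abs", "sqrt");
    private final String src;
    private int pos;

    public SafeMathSketch(String src) { this.src = src; }

    public double parse() {
        double value = expression();
        if (pos < src.length()) throw new IllegalArgumentException("Trailing input at " + pos);
        return value;
    }

    // expression := term (('+' | '-') term)*
    private double expression() {
        double value = term();
        while (true) {
            if (eat('+')) value += term();
            else if (eat('-')) value -= term();
            else return value;
        }
    }

    // term := factor (('*' | '/') factor)*
    private double term() {
        double value = factor();
        while (true) {
            if (eat('*')) value *= factor();
            else if (eat('/')) value /= factor();
            else return value;
        }
    }

    // factor := NUMBER | '(' expression ')' | IDENT '(' expression ')'
    private double factor() {
        skipSpaces();
        if (eat('(')) {
            double value = expression();
            expect(')');
            return value;
        }
        if (Character.isLetter(peek())) {
            String name = readIdentifier();
            if (!ALLOWED_FUNCTIONS.contains(name))
                throw new IllegalArgumentException("Function not allowlisted: " + name);
            expect('(');
            double arg = expression();
            expect(')');
            return name.equals("abs") ? Math.abs(arg) : Math.sqrt(arg);
        }
        return readNumber();
    }

    private void skipSpaces() { while (pos < src.length() && src.charAt(pos) == ' ') pos++; }
    private char peek() { return pos < src.length() ? src.charAt(pos) : '\0'; }
    private boolean eat(char c) { skipSpaces(); if (peek() == c) { pos++; return true; } return false; }
    private void expect(char c) { if (!eat(c)) throw new IllegalArgumentException("Expected '" + c + "'"); }

    private String readIdentifier() {
        int start = pos;
        while (pos < src.length() && Character.isLetter(src.charAt(pos))) pos++;
        return src.substring(start, pos);
    }

    private double readNumber() {
        int start = pos;
        while (pos < src.length() && (Character.isDigit(src.charAt(pos)) || src.charAt(pos) == '.')) pos++;
        if (start == pos) throw new IllegalArgumentException("Expected number at " + pos);
        return Double.parseDouble(src.substring(start, pos));
    }

    public static void main(String[] args) {
        System.out.println(new SafeMathSketch("sqrt(9) + 2 * 3").parse()); // 9.0
    }
}
```

    Because unknown identifiers are rejected before evaluation, an input such as "exec(1)" fails with an exception rather than reaching any execution path.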


  • Use basic good cryptographic practices

    Note that some software does not need to use cryptographic mechanisms. If your project produces software that (1) includes, activates, or enables encryption functionality, and (2) might be released from the United States (US) to outside the US or to a non-US-citizen, you may be legally required to take a few extra steps. Typically this just involves sending an email. For more information, see the encryption section of Understanding Open Source Technology & US Export Controls.

    The default security mechanisms within the software produced by the project MUST NOT depend on cryptographic algorithms or modes with known serious weaknesses (e.g., the SHA-1 cryptographic hash algorithm or the CBC mode in SSH). [crypto_weaknesses]
    Concerns about CBC mode in SSH are discussed in CERT: SSH CBC vulnerability.

    EDDI's cryptographic mechanisms do not depend on algorithms with known serious weaknesses:

    1. Secrets Vault: Uses AES-256-GCM for data encryption (symmetric, authenticated encryption with associated data). Key derivation uses PBKDF2WithHmacSHA256 with per-deployment random salt and configurable iteration count. Each secret gets a unique Data Encryption Key (DEK) wrapped by a Key Encryption Key (KEK) derived from the master passphrase — envelope encryption pattern.

    2. Audit Ledger: Uses HMAC-SHA256 for tamper-evident audit chain integrity. Each audit entry includes the HMAC of the previous entry, creating a hash chain.

    3. Agent Signing: Uses Ed25519 (EdDSA over Curve25519) for cryptographic agent identity — digital signatures on audit entries.

    4. Password hashing: Delegated to Keycloak, whose default password hashing is PBKDF2-based with a configurable iteration count.

    5. TLS: Java 25's built-in TLS implementation defaults to TLS 1.3, with TLS 1.2 as the minimum. No manual cipher-suite configuration is present that could downgrade security.

    No usage of SHA-1 for security purposes, no MD5, no DES, no RC4, no CBC mode in SSH, no ECB mode for encryption. SHA-256 is used for tool caching keys (non-security context) and HMAC chains (security context).
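    The envelope-encryption pattern described in point 1 can be sketched with standard JDK APIs. This is an illustrative sketch, not EDDI's vault implementation: the class name, method names, and iteration count are assumptions, but the primitives (PBKDF2WithHmacSHA256, AES/GCM/NoPadding) are the JDK algorithm names the text refers to.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.security.spec.KeySpec;
import java.util.Arrays;

// Hypothetical sketch of envelope encryption: a KEK derived from a passphrase
// via PBKDF2 wraps a random per-secret DEK; the DEK encrypts the secret itself.
public class EnvelopeSketch {
    private static final SecureRandom RNG = new SecureRandom();

    // Derive the Key Encryption Key from a master passphrase and random salt.
    static SecretKey deriveKek(char[] passphrase, byte[] salt) throws Exception {
        KeySpec spec = new PBEKeySpec(passphrase, salt, 210_000, 256); // iteration count is illustrative
        SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        return new SecretKeySpec(f.generateSecret(spec).getEncoded(), "AES");
    }

    // AES-256-GCM encrypt; the random 12-byte IV is prepended to the ciphertext.
    static byte[] gcmEncrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        RNG.nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    static byte[] gcmDecrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, Arrays.copyOfRange(blob, 0, 12)));
        return c.doFinal(Arrays.copyOfRange(blob, 12, blob.length));
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        RNG.nextBytes(salt);
        SecretKey kek = deriveKek("master-passphrase".toCharArray(), salt);

        // Per-secret DEK; only the KEK-wrapped copy would be persisted.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey dek = kg.generateKey();

        byte[] wrappedDek = gcmEncrypt(kek, dek.getEncoded());
        byte[] ciphertext = gcmEncrypt(dek, "api-key-value".getBytes());

        SecretKey unwrapped = new SecretKeySpec(gcmDecrypt(kek, wrappedDek), "AES");
        System.out.println(new String(gcmDecrypt(unwrapped, ciphertext))); // api-key-value
    }
}
```

    GCM provides authenticated encryption, so tampering with either the wrapped DEK or the ciphertext causes decryption to fail rather than yield corrupted plaintext.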



    The project SHOULD support multiple cryptographic algorithms, so users can quickly switch if one is broken. Common symmetric key algorithms include AES, Twofish, and Serpent. Common cryptographic hash algorithm alternatives include SHA-2 (including SHA-224, SHA-256, SHA-384 AND SHA-512) and SHA-3. [crypto_algorithm_agility]

    EDDI supports cryptographic algorithm agility at the infrastructure level:

    1. Vault encryption: While AES-256-GCM is the current default, the VaultSaltManager architecture separates key derivation from encryption, allowing algorithm substitution without changing the data model.
    2. TLS: Handled by the JVM's TLS implementation which supports multiple cipher suites including TLS 1.3 suites (TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256) and TLS 1.2 suites. Cipher suite selection is configurable via standard Quarkus/JVM properties.
    3. HMAC: The audit ledger's HMAC algorithm is implemented via Java's Mac class, which supports HmacSHA256, HmacSHA384, HmacSHA512, and others — switchable by configuration.
    4. Agent signing: Ed25519 keys are generated via Java's KeyPairGenerator, which also supports RSA, EC, and other algorithms.
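    Point 3's configuration-switchable HMAC can be sketched as follows. The class is hypothetical (not EDDI's audit-ledger code), but Mac.getInstance(...) taking the algorithm name as a plain string is the standard JDK mechanism the text describes: switching from HmacSHA256 to HmacSHA512 is a configuration change, not a code change.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.HexFormat;

// Hypothetical sketch: a tamper-evident HMAC chain whose algorithm name
// comes from configuration (e.g. "HmacSHA256", "HmacSHA384", "HmacSHA512").
public class HmacChainSketch {
    private final String algorithm;
    private final byte[] key;
    private byte[] previousMac = new byte[0];

    public HmacChainSketch(String algorithm, byte[] key) {
        this.algorithm = algorithm;
        this.key = key;
    }

    // Each entry's MAC also covers the previous entry's MAC, forming a chain:
    // altering any earlier entry invalidates every later MAC.
    public String append(String entry) throws Exception {
        Mac mac = Mac.getInstance(algorithm);
        mac.init(new SecretKeySpec(key, algorithm));
        mac.update(previousMac);
        previousMac = mac.doFinal(entry.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(previousMac);
    }

    public static void main(String[] args) throws Exception {
        HmacChainSketch chain = new HmacChainSketch("HmacSHA256", "demo-key".getBytes(StandardCharsets.UTF_8));
        String m1 = chain.append("agent created");
        String m2 = chain.append("agent deployed");
        System.out.println(m1.length()); // 64 (32-byte SHA-256 MAC, hex-encoded)
        System.out.println(m1.equals(m2)); // false
    }
}
```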


    The project MUST support storing authentication credentials (such as passwords and dynamic tokens) and private cryptographic keys in files that are separate from other information (such as configuration files, databases, and logs), and permit users to update and replace them without code recompilation. If the project never processes authentication credentials and private cryptographic keys, select "not applicable" (N/A). [crypto_credential_agility]

    https://github.com/labsai/EDDI/blob/main/docs/secrets-vault.md

    EDDI enforces strict separation of credentials from other information:

    1. Secrets Vault: All sensitive credentials (API keys, tokens, passwords) are stored in an envelope-encrypted vault separate from configuration. Agent configurations reference secrets via vault references (${eddivault:key-name}), which are resolved at runtime.
    2. Environment variables: Runtime configuration (database URLs, Keycloak endpoints) is externalized via environment variables and .env files, not compiled into the application.
    3. No recompilation needed: All credentials can be updated at runtime — vault entries via REST API, environment variables via container restart, Keycloak credentials via Keycloak admin UI.
    4. Export sanitization: When agents are exported (ZIP or sync), secrets are automatically scrubbed from the export payload. The importing instance must re-provision secrets in its own vault.
    5. CI/CD secrets: GitHub Actions secrets (DOCKER_USERNAME, DOCKER_PASSWORD, REDHAT_API_TOKEN) are stored in GitHub's encrypted secrets store, separate from source code.
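    A minimal sketch of how a ${eddivault:key-name} reference could be substituted at runtime (the class, regex, and lookup here are hypothetical illustrations of the pattern, not EDDI's actual resolver):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: replace ${eddivault:key-name} placeholders with secrets
// fetched from a vault lookup, so plaintext credentials never live in config.
public class VaultRefSketch {
    private static final Pattern REF = Pattern.compile("\\$\\{eddivault:([A-Za-z0-9_-]+)}");

    static String resolve(String config, Map<String, String> vault) {
        Matcher m = REF.matcher(config);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String secret = vault.get(m.group(1));
            if (secret == null) throw new IllegalStateException("Missing vault entry: " + m.group(1));
            m.appendReplacement(out, Matcher.quoteReplacement(secret));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String resolved = resolve("Authorization: Bearer ${eddivault:openai-key}",
                Map.of("openai-key", "sk-demo"));
        System.out.println(resolved); // Authorization: Bearer sk-demo
    }
}
```

    Because only the reference string is stored in the agent configuration, exporting or versioning the configuration never leaks the secret itself.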


    The software produced by the project SHOULD support secure protocols for all of its network communications, such as SSHv2 or later, TLS1.2 or later (HTTPS), IPsec, SFTP, and SNMPv3. Insecure protocols such as FTP, HTTP, telnet, SSLv3 or earlier, and SSHv1 SHOULD be disabled by default, and only enabled if the user specifically configures it. If the software produced by the project does not support network communications, select "not applicable" (N/A). [crypto_used_network]

    https://github.com/labsai/EDDI/blob/main/docs/security.md#tls-requirements

    EDDI supports and encourages secure protocols for all network communications:

    1. External API calls: All LLM provider integrations (OpenAI, Anthropic, Google, Azure, AWS, etc.) use HTTPS exclusively. UrlValidationUtils blocks non-HTTP/HTTPS schemes (file://, ftp://, gopher://, jar://).
    2. TLS termination: The docs/security.md TLS Requirements section documents both reverse-proxy TLS termination (recommended production pattern) and direct Quarkus TLS configuration via quarkus.http.ssl.* properties.
    3. Database connections: MongoDB and PostgreSQL connection strings support TLS natively. The compliance documentation (docs/hipaa-compliance.md) requires encrypted database connections for regulated deployments.
    4. No insecure protocols enabled by default: HTTP is the only unencrypted protocol available, intended for localhost development or behind a TLS-terminating reverse proxy. FTP, telnet, and other insecure protocols are not supported.
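    The scheme-allowlist and private-range checks described in point 1 can be sketched with the JDK's InetAddress classification helpers. This is illustrative only; the real UrlValidationUtils is more thorough (for example, it also covers IPv6 unique-local ranges and cloud metadata hostnames, which this sketch omits).

```java
import java.net.InetAddress;
import java.net.URI;
import java.util.Set;

// Hypothetical sketch of SSRF guarding: allow only http/https and reject
// loopback, link-local, site-local, and wildcard addresses.
public class UrlGuardSketch {
    private static final Set<String> ALLOWED_SCHEMES = Set.of("http", "https");

    static void validate(String url) throws Exception {
        URI uri = URI.create(url);
        String scheme = uri.getScheme() == null ? "" : uri.getScheme().toLowerCase();
        if (!ALLOWED_SCHEMES.contains(scheme))
            throw new IllegalArgumentException("Scheme not allowed: " + scheme);
        // Note: resolves the host; a production guard must also re-check
        // the resolved address on every redirect hop.
        InetAddress addr = InetAddress.getByName(uri.getHost());
        if (addr.isLoopbackAddress() || addr.isLinkLocalAddress()
                || addr.isSiteLocalAddress() || addr.isAnyLocalAddress())
            throw new IllegalArgumentException("Private or local address blocked: " + addr);
    }

    public static void main(String[] args) {
        // Literal IPs avoid DNS lookups; the first three are rejected.
        String[] urls = {"ftp://example.com/x", "http://127.0.0.1/admin",
                "http://169.254.169.254/latest/meta-data", "http://93.184.216.34/"};
        for (String url : urls) {
            try {
                validate(url);
                System.out.println("allowed: " + url);
            } catch (Exception e) {
                System.out.println("blocked: " + url);
            }
        }
    }
}
```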


    The software produced by the project SHOULD, if it supports or uses TLS, support at least TLS version 1.2. Note that the predecessor of TLS was called SSL. If the software does not use TLS, select "not applicable" (N/A). [crypto_tls12]

    EDDI runs on Java 25, which defaults to TLS 1.3 and supports TLS 1.2 as a minimum. The Quarkus framework (3.34.3) uses the JVM's built-in TLS implementation via Vert.x/Netty, which enforces TLS 1.2+ by default. TLS 1.0 and TLS 1.1 are disabled in modern JVMs. No configuration in the project downgrades the minimum TLS version. For outbound connections to LLM providers, Java's HttpClient defaults to TLS 1.3 with TLS 1.2 fallback.
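    For deployments that prefer to state the protocol floor explicitly rather than rely on JVM defaults, Java's HttpClient accepts an SSLParameters protocol list. This is a hedged sketch of that JDK mechanism, not configuration taken from EDDI:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;
import java.net.http.HttpClient;

// Illustrative sketch: pin the client to TLS 1.3/1.2 so anything older is
// refused even if the JVM's defaults were loosened.
public class TlsFloorSketch {
    public static HttpClient newClient() throws Exception {
        SSLParameters params = new SSLParameters();
        params.setProtocols(new String[] {"TLSv1.3", "TLSv1.2"});
        return HttpClient.newBuilder()
                .sslContext(SSLContext.getDefault())
                .sslParameters(params)
                .build();
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = newClient();
        // sslParameters() returns a copy of the client's effective parameters.
        for (String p : client.sslParameters().getProtocols()) {
            System.out.println(p);
        }
    }
}
```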



    The software produced by the project MUST, if it supports TLS, perform TLS certificate verification by default when using TLS, including on subresources. If the software does not use TLS, select "not applicable" (N/A). [crypto_certificate_verification]

    EDDI performs TLS certificate verification by default on all outbound HTTPS connections. Java's built-in HttpClient (used for LLM API calls via langchain4j) and the Vert.x web client (used for HTTP call extensions) both verify server certificates against the JVM's default trust store (cacerts). No 'trustAll', 'disableHostnameVerification', or 'InsecureTrustManagerFactory' configuration exists in the codebase. The SafeHttpClient wrapper adds further validation — re-validating target URLs, including each redirect hop it follows — but does not bypass certificate verification.



    The software produced by the project MUST, if it supports TLS, perform certificate verification before sending HTTP headers with private information (such as secure cookies). If the software does not use TLS, select "not applicable" (N/A). [crypto_verification_private]

    Java's HttpClient and Vert.x web client both complete the TLS handshake (including certificate verification) before sending any HTTP headers or request bodies. This is inherent to the TLS protocol implementation in the JVM — application data (including HTTP headers with cookies, authorization tokens, and other private information) is only transmitted after the TLS connection is established and the server certificate is verified. EDDI does not implement custom TLS handling that could bypass this ordering.


  • Secure release


    The project MUST cryptographically sign releases of the project results intended for widespread use, and there MUST be a documented process explaining to users how they can obtain the public signing keys and verify the signature(s). The private key for these signature(s) MUST NOT be on site(s) used to directly distribute the software to the public. If releases are not intended for widespread use, select "not applicable" (N/A). [signed_releases]
    The project results include both source code and any generated deliverables where applicable (e.g., executables, packages, and containers). Generated deliverables MAY be signed separately from source code. These MAY be implemented as signed git tags (using cryptographic digital signatures). Projects MAY provide generated results separately from tools like git, but in those cases, the separate results MUST be separately signed.

    All Docker image releases from v6.0.2 onward (April 2026+) are cryptographically signed using Sigstore cosign with keyless OIDC signing in GitHub Actions CI. Signatures are stored as OCI artifacts alongside the image on Docker Hub and recorded in the Rekor public transparency log. No private signing keys exist on the distribution site — signing uses ephemeral certificates issued by Fulcio via GitHub Actions OIDC identity. Users verify with: cosign verify --certificate-oidc-issuer https://token.actions.githubusercontent.com --certificate-identity-regexp https://github.com/labsai/EDDI/.github/workflows/ci.yml labsai/eddi:<tag>. See: https://github.com/labsai/EDDI/blob/main/docs/release-signing.md



    It is SUGGESTED that in the version control system, each important version tag (a tag that is part of a major release, minor release, or fixes publicly noted vulnerabilities) be cryptographically signed and verifiable as described in signed_releases. [version_tags_signed]

    The release process documentation specifies that important version tags be created with git tag -s. From v6.0.2 onward, the primary release integrity guarantee is provided by Sigstore cosign keyless signing of Docker images in CI, which cryptographically binds every release to the specific GitHub Actions workflow that built it. See: https://github.com/labsai/EDDI/blob/main/docs/release-signing.md and https://github.com/labsai/EDDI/blob/main/docs/release-versioning.md#release-signing


  • Other security issues


    The project results MUST check all inputs from potentially untrusted sources to ensure they are valid (an *allowlist*), and reject invalid inputs, if there are any restrictions on the data at all. [input_validation]
    Note that comparing input against a list of "bad formats" (aka a *denylist*) is normally not enough, because attackers can often work around a denylist. In particular, numbers are converted into internal formats and then checked if they are between their minimum and maximum (inclusive), and text strings are checked to ensure that they are valid text patterns (e.g., valid UTF-8, length, syntax, etc.). Some data may need to be "anything at all" (e.g., a file uploader), but these would typically be rare.

    https://github.com/labsai/EDDI/blob/main/docs/security.md

    EDDI validates inputs from untrusted sources using allowlist approaches:

    1. URL validation (UrlValidationUtils): All URLs from LLM tool arguments are validated against an allowlist of permitted schemes (http, https only), with blocklists for private IP ranges, cloud metadata endpoints, and internal hostnames. Validation occurs BEFORE any network request.
    2. Math expression evaluation (SafeMathParser): A recursive-descent parser that accepts only numeric literals, a fixed set of arithmetic operators, and an allowlisted set of math functions (sqrt, sin, cos, etc.). Anything not in the grammar is rejected with a parse error.
    3. JSON schema validation: Input configurations are validated against expected schemas.
    4. Path traversal prevention: PathNavigator (replacing OGNL) uses a safe property path traversal mechanism that prevents arbitrary object graph navigation.
    5. OGNL/ScriptEngine elimination: All dynamic expression evaluation engines have been removed from the codebase and replaced with safe alternatives.
    6. Content-Type strict matching: HttpCallExecutor uses strict equals() (not startsWith) for Content-Type checking to prevent type confusion attacks.
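A minimal sketch of the allowlist-plus-blocklist URL check described in item 1. This is illustrative only: UrlValidationUtils itself is not reproduced here, and the class and method names below are assumptions.

```java
import java.net.InetAddress;
import java.net.URI;

public class UrlAllowlistSketch {
    public static boolean isAllowed(String url) {
        try {
            URI uri = new URI(url);
            String scheme = uri.getScheme();
            // Allowlist: only http and https schemes pass; everything else
            // (file:, ftp:, jar:, ...) is rejected outright.
            if (!"http".equalsIgnoreCase(scheme) && !"https".equalsIgnoreCase(scheme)) {
                return false;
            }
            String host = uri.getHost();
            if (host == null) return false;
            // Reject loopback, private, and link-local ranges; link-local
            // covers cloud metadata endpoints such as 169.254.169.254.
            InetAddress addr = InetAddress.getByName(host);
            return !(addr.isLoopbackAddress() || addr.isSiteLocalAddress()
                    || addr.isLinkLocalAddress() || addr.isAnyLocalAddress());
        } catch (Exception e) {
            return false; // unparseable or unresolvable input is rejected
        }
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("http://8.8.8.8/"));                      // public address
        System.out.println(isAllowed("file:///etc/passwd"));                   // scheme not allowlisted
        System.out.println(isAllowed("http://169.254.169.254/latest/meta-data/")); // metadata endpoint
    }
}
```

The key property, matching the criterion, is that validity is defined positively (allowed schemes, routable addresses) and the check runs before any network request is made with the URL.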


    Hardening mechanisms SHOULD be used in the software produced by the project so that software defects are less likely to result in security vulnerabilities. [hardening]
    Hardening mechanisms may include HTTP headers like Content Security Policy (CSP), compiler flags to mitigate attacks (such as -fstack-protector), or compiler flags to eliminate undefined behavior. For our purposes least privilege is not considered a hardening mechanism (least privilege is important, but separate).

    EDDI implements multiple hardening mechanisms:

    1. Security headers: X-Content-Type-Options (nosniff), X-Frame-Options (DENY), Content-Security-Policy configured out of the box via Quarkus HTTP filter.
    2. SSRF protection: SafeHttpClient wraps all outbound HTTP calls with URL re-validation after redirects, preventing SSRF via redirect chains.
    3. Rate limiting: Token-bucket rate limiter on all LLM tool calls prevents resource exhaustion.
    4. Cost tracking: Per-conversation and per-tenant budget caps prevent runaway LLM costs.
    5. Queue capacity management: ConversationCoordinator throws RejectedExecutionException (HTTP 429) when queue capacity is exhausted, preventing unbounded resource consumption.
    6. Log injection protection: All user-provided values in log statements are sanitized to prevent log forging.
    7. Dependency banning: Maven Enforcer Plugin blocklists known-vulnerable dependency groups.
    8. Startup guards: AuthStartupGuard fails startup if production runs without authentication. ComplianceStartupChecks warns about missing TLS and database encryption.
    9. No dynamic code execution: Architecturally eliminated — no eval(), no ScriptEngine, no reflection-based execution.
    10. Memory safety: Java provides automatic memory management (garbage collection) and bounds checking, eliminating buffer overflow and use-after-free vulnerabilities.
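The token-bucket approach named in item 3 can be sketched as follows. The class name and parameters are illustrative assumptions, not EDDI's actual rate limiter:

```java
public class TokenBucketSketch {
    private final long capacity;
    private final double refillPerNano;
    private double tokens;
    private long lastRefill;

    public TokenBucketSketch(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / 1_000_000_000.0;
        this.tokens = capacity;       // start with a full burst allowance
        this.lastRefill = System.nanoTime();
    }

    /** Returns true if a call is admitted, false if the caller should back off (e.g. HTTP 429). */
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // Refill proportionally to elapsed time, never exceeding capacity.
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Zero refill rate for a deterministic demo: a burst of 3 is
        // admitted, then further calls are rejected.
        TokenBucketSketch limiter = new TokenBucketSketch(3, 0.0);
        for (int i = 0; i < 5; i++) {
            System.out.println("call " + i + " admitted: " + limiter.tryAcquire());
        }
    }
}
```

Rejected calls map naturally to an HTTP 429 response, which is the same back-pressure behavior the queue-capacity mechanism in item 5 describes.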


    The project MUST provide an assurance case that justifies why its security requirements are met. The assurance case MUST include: a description of the threat model, clear identification of trust boundaries, an argument that secure design principles have been applied, and an argument that common implementation security weaknesses have been countered. (URL required) [assurance_case]
    An assurance case is "a documented body of evidence that provides a convincing and valid argument that a specified set of critical claims regarding a system’s properties are adequately justified for a given application in a given environment" ("Software Assurance Using Structured Assurance Case Models", Thomas Rhodes et al, NIST Interagency Report 7608). Trust boundaries are boundaries where data or execution changes its level of trust, e.g., a server's boundaries in a typical web application. It's common to list secure design principles (such as Saltzer and Schroeder) and common implementation security weaknesses (such as the OWASP top 10 or CWE/SANS top 25), and show how each are countered. The BadgeApp assurance case may be a useful example. This is related to documentation_security, documentation_architecture, and implement_secure_design.

    https://github.com/labsai/EDDI/blob/main/docs/security-assurance-case.md

    EDDI provides a comprehensive security assurance case in docs/security-assurance-case.md that addresses:

    1. Trust boundary architecture: 5 clearly defined boundaries with ASCII diagram — Authentication Boundary (Keycloak OIDC, AuthStartupGuard), Application Boundary (REST layer, input validation, rate limiting), Pipeline Sandbox Boundary (ILifecycleTask pipeline, SafeMathParser, PathNavigator, UrlValidationUtils), Persistence Boundary (MongoDB/PostgreSQL, Secrets Vault with envelope encryption, tamper-evident Audit Ledger), External Call Boundary (SafeHttpClient, SSRF guard, redirect re-validation).
    2. Threat model: 8 specific threats with mapped countermeasures — prompt injection → SSRF (UrlValidationUtils + SafeHttpClient), code injection (no eval/ScriptEngine, SafeMathParser), secret exfiltration (envelope encryption, export scrubbing), cross-tenant leakage (data-layer tenant isolation, memory visibility enforcement), log injection (sanitized output, CodeQL CWE-117), supply chain attacks (Dependabot + Trivy + Maven Enforcer + SHA-pinned Actions), authentication bypass (AuthStartupGuard startup check), resource exhaustion (rate limiting + cost tracking + queue capacity limits).
    3. Cryptographic design table: AES-256-GCM (vault), PBKDF2WithHmacSHA256 600K iterations (key derivation), HMAC-SHA256 (audit chain), Ed25519 (agent signing). No weak algorithms (no SHA-1, MD5, DES, RC4, ECB, CBC in security paths).
    4. CWE countermeasure matrix: 10 CWEs mapped to specific implementations (CWE-918, CWE-94, CWE-200, CWE-117, CWE-400, CWE-502, CWE-326, CWE-798, CWE-306, CWE-862).
    5. Compliance alignment: EU AI Act (audit ledger), GDPR (cascading erasure), CCPA (data subject requests), HIPAA (encryption at rest/transit).
    6. Secure development practices: Static analysis (CodeQL security-extended, Trivy, Checkstyle, Maven Enforcer, Dependency Review), testing (2,400+ unit tests, 250+ integration tests, security-specific test suites), CI/CD security (SHA-pinned Actions, Docker Hub org secrets, preflight certification).
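The HMAC-SHA256 audit chain named in the cryptographic design table can be sketched as below. This is illustrative only; EDDI's actual ledger entry format is not reproduced here.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.HexFormat;

public class AuditChainSketch {
    public static String nextEntryMac(byte[] key, String previousMac, String entry)
            throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        // Chain each entry to its predecessor: altering any earlier entry
        // changes every subsequent MAC, making tampering evident.
        mac.update(previousMac.getBytes(StandardCharsets.UTF_8));
        mac.update(entry.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(mac.doFinal());
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "demo-key-not-for-production".getBytes(StandardCharsets.UTF_8);
        String m1 = nextEntryMac(key, "", "user=alice action=login");
        String m2 = nextEntryMac(key, m1, "user=alice action=export");
        System.out.println(m1);
        System.out.println(m2);
    }
}
```

A verifier holding the key replays the chain from the first entry; any mismatch between a recomputed and a stored MAC pinpoints where the ledger was modified.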

 Analysis

  • Static code analysis


    The project MUST use at least one static analysis tool with rules or approaches to look for common vulnerabilities in the analyzed language or environment, if there is at least one FLOSS tool that can implement this criterion in the selected language. [static_analysis_common_vulnerabilities]
    Static analysis tools that are specifically designed to look for common vulnerabilities are more likely to find them. That said, using any static tools will typically help find some problems, so we are suggesting but not requiring this for the 'passing' level badge.

    https://github.com/labsai/EDDI/blob/main/.github/workflows/ci.yml

    EDDI uses multiple FLOSS static analysis tools that look for common vulnerabilities:

    1. CodeQL (GitHub's semantic code analysis engine, FLOSS): Runs on every push and pull request via .github/workflows/codeql.yml (also embedded in ci.yml as job 'codeql'). Configured with the 'security-extended' query suite — the most comprehensive security analysis pack available for Java. This detects: SQL injection, command injection, path traversal, XSS, hardcoded credentials, insecure cryptography, log injection (CWE-117), SSRF, deserialization vulnerabilities, and data flow analysis for taint tracking. Results are uploaded to GitHub Security tab.

    2. Trivy (Aqua Security, FLOSS): Runs as CI job 'trivy-scan' using aquasecurity/trivy-action. Performs filesystem scanning for known CVEs in dependencies, with CRITICAL and HIGH severity filter and exit-code 1 (build-breaking). Complementary to CodeQL — Trivy focuses on dependency CVEs while CodeQL focuses on source code patterns.

    3. Checkstyle (FLOSS): While primarily a style checker, several rules have security implications: EqualsHashCode prevents subtle equality bugs, StringLiteralEquality prevents == vs .equals() errors, FallThrough prevents accidental switch fallthrough.

    4. GitHub Dependency Review (.github/workflows/dependency-review.yml): Blocks pull requests that introduce dependencies with known vulnerabilities or incompatible licenses.

    Recent actions taken based on static analysis findings: CodeQL log-injection warnings in ConversationCoordinator classes were addressed by sanitizing all user-provided values in log statements. Trivy findings led to explicit CVE override pins in pom.xml (jinjava for CVE-2026-25526, reactor-netty-http for CVE-2025-22227).
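The log-injection mitigation mentioned above (stripping attacker-controllable control characters before logging, CWE-117) can be sketched as follows; the sanitizer shown is illustrative, not EDDI's code:

```java
public class LogSanitizerSketch {
    public static String sanitize(String userValue) {
        if (userValue == null) return "null";
        // Replace control characters (including \r and \n) so a crafted
        // value cannot start a forged line in the log output.
        return userValue.replaceAll("\\p{Cntrl}", "_");
    }

    public static void main(String[] args) {
        String attack = "alice\n2026-04-02 INFO fake admin login";
        // Without sanitization the second half would appear as its own
        // log line; sanitized, it stays on one line.
        System.out.println("user=" + sanitize(attack));
    }
}
```

Ordinary values pass through unchanged, so the sanitizer can be applied uniformly wherever user input reaches a log statement.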


  • Dynamic code analysis


    If the software produced by the project includes software written using a memory-unsafe language (e.g., C or C++), then at least one dynamic tool (e.g., a fuzzer or web application scanner) MUST be routinely used in combination with a mechanism to detect memory safety problems such as buffer overwrites. If the project does not produce software written in a memory-unsafe language, choose "not applicable" (N/A). [dynamic_analysis_unsafe]
    Examples of mechanisms to detect memory safety problems include Address Sanitizer (ASAN) (available in GCC and LLVM), Memory Sanitizer, and valgrind. Other potentially-used tools include thread sanitizer and undefined behavior sanitizer. Widespread assertions would also work.

    Not applicable. EDDI is written entirely in Java, which is a memory-safe language. Java provides automatic memory management through garbage collection, bounds checking on all array and buffer accesses, and type safety enforcement through the JVM. There is no C, C++, or other memory-unsafe language code in the project. The Java runtime prevents buffer overflows, use-after-free, double-free, and other memory safety vulnerabilities at the language level.



This data is available under the Community Data License Agreement – Permissive, Version 2.0 (CDLA-Permissive-2.0). This means that a Data Recipient may share the Data, with or without modifications, so long as the Data Recipient makes available the text of this agreement with the shared Data. Please credit Gregor Jarisch and the OpenSSF Best Practices badge contributors.

Project badge entry owned by: Gregor Jarisch.
Entry created on 2026-04-02 22:12:57 UTC, last updated on 2026-04-22 19:22:58 UTC. Last achieved passing badge on 2026-04-10 23:35:34 UTC.