letsgolang

Projects that follow the best practices below can voluntarily self-certify and show that they've achieved an Open Source Security Foundation (OpenSSF) best practices badge.

There is no set of practices that can guarantee that software will never have defects or vulnerabilities; even formal methods can fail if the specifications or assumptions are wrong. Nor is there any set of practices that can guarantee that a project will sustain a healthy and well-functioning development community. However, following best practices can help improve the results of projects. For example, some practices enable multi-person review before release, which can both help find otherwise hard-to-find technical vulnerabilities and help build trust and a desire for repeated interaction among developers from different companies. To earn a badge, all MUST and MUST NOT criteria must be met, all SHOULD criteria must be met OR be unmet with justification, and all SUGGESTED criteria must be met OR unmet (we want them to be considered, at least). If you want to enter justification text as a generic comment, rather than as a rationale that the situation is acceptable, start the text block with '//' followed by a space. Feedback is welcome via the GitHub site as issues or pull requests. There is also a mailing list for general discussion.

We gladly provide the information in several locales; however, if there is any conflict or inconsistency between the translations, the English version is the authoritative version.
If this is your project, please show your badge status on your project page! The badge status looks like this: "Badge level for project 11658 is passing". Here is how to embed it:
You can show your badge status by embedding this in your markdown file:
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/11658/badge)](https://www.bestpractices.dev/projects/11658)
or by embedding this in your HTML:
<a href="https://www.bestpractices.dev/projects/11658"><img src="https://www.bestpractices.dev/projects/11658/badge"></a>


These are the Passing level criteria. You can also view the Silver or Gold level criteria.


        

 Basics 13/13

  • Identification

    Note that other projects may use the same name.

    A minimalist, POSIX-compliant, non-root installer for the Go programming language on Linux.

    What programming language(s) are used to implement the project?
    If there is more than one language, list them as comma-separated values (spaces optional) and sort them from most to least used. If there is a long list, please list at least the first three most common ones. If there is no language (e.g., this is a documentation-only or test-only project), use the single character "-". Please use a conventional capitalization for each language, e.g., "JavaScript".
    The Common Platform Enumeration (CPE) is a structured naming scheme for information technology systems, software, and packages. It is used in a number of systems and databases when reporting vulnerabilities.
  • Basic project website content


    The project website MUST succinctly describe what the software does (what problem does it solve?). [description_good]
    This MUST be in language that potential users can understand (e.g., it uses minimal jargon).

    The project is hosted on GitHub, and the README.md file serves as the primary documentation and project description. It succinctly explains that the software is a non-root installer for the Go programming language.



    The project website MUST provide information on how to: obtain, provide feedback (as bug reports or enhancements), and contribute to the software. [interact]

    Source code and installation instructions are available in the README on GitHub. The project uses standard GitHub Issues for bug reports and feature requests, and accepts contributions via GitHub Pull Requests.



    Information on how to contribute MUST explain the contribution process (e.g., are pull requests used?) (URL required) [contribution]
    We presume that projects on GitHub use issues and pull requests unless otherwise noted. This information can be brief, e.g., stating that the project uses pull requests, an issue tracker, or postings to a mailing list (which one?).

    Non-trivial contribution file in repository: https://github.com/jcsxdev/letsgolang/blob/main/CONTRIBUTING.md.



    Information on how to contribute SHOULD include the requirements for acceptable contributions (e.g., a reference to any required coding standard). (URL required) [contribution_requirements]

    The CONTRIBUTING.md (https://github.com/jcsxdev/letsgolang/blob/main/CONTRIBUTING.md) file explicitly defines coding standards, including mandatory ShellCheck validation, POSIX compliance, shfmt formatting rules, and Conventional Commits usage.
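
    For contributors, the checks named in CONTRIBUTING.md can be run locally before opening a pull request. The commands below are a minimal sketch based only on the tools listed above (ShellCheck, shfmt, Conventional Commits); the exact flags and targets the project uses are assumptions:

        # Lint the installer script with ShellCheck, treating it as POSIX sh
        shellcheck --shell=sh src/letsgolang.sh

        # Show formatting differences with shfmt (flags are illustrative)
        shfmt -d src/letsgolang.sh

        # Commit with a Conventional Commits message (example message)
        git commit -m "fix: quote download path to avoid word splitting"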


  • FLOSS license

    What license(s) is the project released under?
    Please use SPDX license expression format; examples include "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause", "GPL-2.0+", "LGPL-3.0+", "MIT", and "(BSD-2-Clause OR Ruby)". Do not include single quotes or double quotes.



    The software produced by the project MUST be released as FLOSS. [floss_license]
    FLOSS is software released in a way that meets the Open Source Definition or Free Software Definition. Examples of such licenses include the CC0, MIT, BSD 2-clause, BSD 3-clause revised, Apache 2.0, Lesser GNU General Public License (LGPL), and the GNU General Public License (GPL). For our purposes, this means that the license MUST be a license that meets the Open Source Definition or Free Software Definition. The software MAY also be licensed other ways (e.g., "GPLv2 or proprietary" is acceptable).

    The MIT license is approved by the Open Source Initiative (OSI).



    It is SUGGESTED that any required license(s) for the software produced by the project be approved by the Open Source Initiative (OSI). [floss_license_osi]
    The OSI uses a rigorous approval process to determine which licenses are OSS.

    The MIT license is approved by the Open Source Initiative (OSI).



    The project MUST post the license(s) of its results in a standard location in their source repository. (URL required) [license_location]
    One convention is posting the license as a top-level file named LICENSE or COPYING, which MAY be followed by an extension such as ".txt" or ".md". An alternative convention is to have a directory named LICENSES containing license file(s); these files are typically named as their SPDX license identifier followed by an appropriate file extension, as described in the REUSE Specification. Note that this criterion is only a requirement on the source repository. You do NOT need to include the license file when generating something from the source code (such as an executable, package, or container). For example, when generating an R package for the Comprehensive R Archive Network (CRAN), follow standard CRAN practice: if the license is a standard license, use the standard short license specification (to avoid installing yet another copy of the text) and list the LICENSE file in an exclusion file such as .Rbuildignore. Similarly, when creating a Debian package, you may put a link in the copyright file to the license text in /usr/share/common-licenses, and exclude the license file from the created package (e.g., by deleting the file after calling dh_auto_install). We encourage including machine-readable license information in generated formats where practical.

    Non-trivial license location file in repository: https://github.com/jcsxdev/letsgolang/blob/main/LICENSE.md.


  • Documentation


    The project MUST provide basic documentation for the software produced by the project. [documentation_basics]
    This documentation must be in some media (such as text or video) that includes: how to install it, how to start it, how to use it (possibly with a tutorial using examples), and how to use it securely (e.g., what to do and what not to do) if that is an appropriate topic for the software. The security documentation need not be long. The project MAY use hypertext links to non-project material as documentation. If the project does not produce software, choose "not applicable" (N/A).

    The README.md (https://github.com/jcsxdev/letsgolang/blob/main/README.md) file serves as the primary documentation, providing installation instructions, usage examples, and project description.



    The project MUST provide reference documentation that describes the external interface (both input and output) of the software produced by the project. [documentation_interface]
    The documentation of an external interface explains to an end-user or developer how to use it. This would include its application program interface (API) if the software has one. If it is a library, document the major classes/types and methods/functions that can be called. If it is a web application, define its URL interface (often its REST interface). If it is a command-line interface, document the parameters and options it supports. In many cases it's best if most of this documentation is automatically generated, so that this documentation stays synchronized with the software as it changes, but this isn't required. The project MAY use hypertext links to non-project material as documentation. Documentation MAY be automatically generated (where practical this is often the best way to do so). Documentation of a REST interface may be generated using Swagger/OpenAPI. Code interface documentation MAY be generated using tools such as JSDoc (JavaScript), ESDoc (JavaScript), pydoc (Python), devtools (R), pkgdown (R), and Doxygen (many). Merely having comments in implementation code is not sufficient to satisfy this criterion; there needs to be an easy way to see the information without reading through all the source code. If the project does not produce software, choose "not applicable" (N/A).

    The software documentation describes the command-line interface, including supported flags and expected behavior. Additionally, the script provides a built-in --help option detailing all available inputs.
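
    As a rough illustration of how such a command-line interface is typically exposed in POSIX sh, the sketch below shows a --help handler; the usage text, the VERSION variable, and the option layout are illustrative and not taken from the project's source:

        #!/bin/sh
        VERSION="0.1.0"   # illustrative; the real value lives in the script header

        usage() {
            printf 'Usage: letsgolang.sh [--help] [--version]\n'
            printf '  --help      show this help text and exit\n'
            printf '  --version   print the installer version and exit\n'
        }

        case "${1:-}" in
            --help)    usage; exit 0 ;;
            --version) printf '%s\n' "$VERSION"; exit 0 ;;
        esac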


  • Other


    The project sites (website, repository, and download URLs) MUST support HTTPS using TLS. [sites_https]
    This requires that the project home page URL and the version control repository URL begin with "https:", not "http:". You can get free certificates from Let's Encrypt. Projects MAY implement this criterion using (for example) GitHub pages, GitLab pages, or SourceForge project pages. If you support HTTP, we urge you to redirect the HTTP traffic to HTTPS.

    Given only https: URLs.



    The project MUST have one or more mechanisms for discussion (including proposed changes and issues) that are searchable, allow messages and topics to be addressed by URL, enable new people to participate in some of the discussions, and do not require client-side installation of proprietary software. [discussion]
    Examples of acceptable mechanisms include archived mailing list(s), GitHub issue and pull request discussions, Bugzilla, Mantis, and Trac. Asynchronous discussion mechanisms (like IRC) are acceptable if they meet these criteria; make sure there is a URL-addressable archiving mechanism. Proprietary JavaScript, while discouraged, is permitted.

    GitHub supports discussions on issues and pull requests.



    The project SHOULD provide documentation in English and be able to accept bug reports and comments about code in English. [english]
    English is currently the lingua franca of computer technology; supporting English increases the number of different potential developers and reviewers worldwide. A project can meet this criterion even if its core developers' primary language is not English.

    All project documentation, including README, contributing guidelines, security policy, and code comments, is written in English. Bug reports and community interactions on GitHub are also conducted in English.



    The project MUST be maintained. [maintained]
    As a minimum, the project should attempt to respond to significant problem and vulnerability reports. A project that is actively pursuing a badge is probably maintained. All projects and people have limited resources, and typical projects must reject some proposed changes, so limited resources and proposal rejections do not by themselves indicate an unmaintained project.

    When a project knows that it will no longer be maintained, it should set this criterion to "Unmet" and use the appropriate mechanism(s) to indicate to others that it is not being maintained. For example, use “DEPRECATED” as the first heading of its README, add “DEPRECATED” near the beginning of its home page, add “DEPRECATED” to the beginning of its code repository project description, add a no-maintenance-intended badge in its README and/or home page, mark it as deprecated in any package repositories (e.g., npm deprecate), and/or use the code repository's marking system to archive it (e.g., GitHub's "archive" setting, GitLab’s "archived" marking, Gerrit's "readonly" status, or SourceForge’s "abandoned" project status). Additional discussion can be found here.




letsgolang is designed as a secure, POSIX-compliant shell script to install the Go programming language without requiring root privileges. The project prioritizes security through strict checksum validation (SHA256) of downloaded artifacts and enforces code quality via automated CI pipelines using ShellCheck and functional tests.
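
The checksum validation mentioned above generally boils down to comparing the hash of the downloaded archive against a published value. The snippet below is a simplified sketch under assumed variable and file names, not the project's actual code:

    # Verify a downloaded Go archive against its published SHA-256 value.
    # Names and error handling here are illustrative only.
    archive="go1.22.0.linux-amd64.tar.gz"        # example artifact name
    expected_sha256="<published checksum>"       # value taken from the release page
    actual_sha256=$(sha256sum "$archive" | awk '{print $1}')

    if [ "$actual_sha256" != "$expected_sha256" ]; then
        printf 'Checksum mismatch for %s, aborting.\n' "$archive" >&2
        exit 1
    fi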

 Change Control 9/9

  • Public version-controlled source repository


    The project MUST have a version-controlled source repository that is publicly readable and has a URL. [repo_public]
    The URL MAY be the same as the project URL. The project MAY use private (non-public) branches in specific cases while the change is not publicly released (e.g., for fixing a vulnerability before it is revealed to the public).

    Repository on GitHub, which provides public git repositories with URLs.



    The project's source repository MUST track what changes were made, who made the changes, and when the changes were made. [repo_track]

    Repository on GitHub, which uses git. git can track the changes, who made them, and when they were made.



    To enable collaborative review, the project's source repository MUST include interim versions for review between releases; it MUST NOT include only final releases. [repo_interim]
    Projects MAY choose to omit specific interim versions from their public source repositories (e.g., ones that fix specific non-public security vulnerabilities, may never be publicly released, or include material that cannot be legally posted and are not in the final release).

    The project uses a public Git repository on GitHub. All development history is preserved, and interim commits are visible between formal releases: https://github.com/jcsxdev/letsgolang/commits/main.



    It is SUGGESTED that common distributed version control software be used (e.g., git) for the project's source repository. [repo_distributed]
    Git is not specifically required and projects can use centralized version control software (such as subversion) with justification.

    Repository on GitHub, which uses git. git is distributed.


  • Unique version numbering


    The project results MUST have a unique version identifier for each release intended to be used by users. [version_unique]
    This MAY be met in a variety of ways, including a commit ID (such as a git commit ID or Mercurial changeset ID) or a version number (including version numbers that use semantic versioning or date-based schemes like YYYYMMDD).

    The project uses unique version identifiers. The current version is defined in the script header and can be viewed via the --version flag. URL: https://github.com/jcsxdev/letsgolang/blob/main/src/letsgolang.sh.



    It is SUGGESTED that the Semantic Versioning (SemVer) or Calendar Versioning (CalVer) version numbering format be used for releases. It is SUGGESTED that those who use CalVer include a micro level value. [version_semver]
    Projects should generally prefer whatever format is expected by their users, e.g., because it is the normal format used by their ecosystem. Many ecosystems prefer SemVer, and SemVer is generally preferred for application programmer interfaces (APIs) and software development kits (SDKs). CalVer tends to be used by projects that are large, have an unusually large number of independently-developed dependencies, have a constantly-changing scope, or are time-sensitive. It is SUGGESTED that those who use CalVer include a micro level value, because including a micro level supports simultaneously-maintained branches whenever that becomes necessary. Other version numbering formats may be used as version numbers, including git commit IDs or mercurial changeset IDs, as long as they uniquely identify versions. However, some alternatives (such as git commit IDs) can cause problems as release identifiers, because users may not be able to easily determine if they are up-to-date. The version ID format may be unimportant for identifying software releases if all recipients only run the latest version (e.g., it is the code for a single website or internet service that is constantly updated via continuous delivery).


    It is SUGGESTED that projects identify each release within their version control system. For example, it is SUGGESTED that those using git identify each release using git tags. [version_tags]

    Each release is identified by a unique Git tag (e.g., v0.1.0) in the source control system.
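
    Creating such a tag is typically a two-step operation; the tag message below is illustrative:

        # Create an annotated tag for the release and publish it
        git tag -a v0.1.0 -m "letsgolang v0.1.0"
        git push origin v0.1.0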


  • Release notes


    The project MUST provide, in each release, release notes that are a human-readable summary of major changes in that release to help users determine if they should upgrade and what the upgrade impact will be. The release notes MUST NOT be the raw output of a version control log (e.g., the "git log" command results are not release notes). Projects whose results are not intended for reuse in multiple locations (such as the software for a single website or service) AND employ continuous delivery MAY select "N/A". (URL required) [release_notes]
    The release notes MAY be implemented in a variety of ways. Many projects provide them in a file named "NEWS", "CHANGELOG", or "ChangeLog", optionally with extensions such as ".txt", ".md", or ".html". Historically the term "change log" meant a log of every change, but to meet these criteria what is needed is a human-readable summary. The release notes MAY instead be provided by version control system mechanisms such as the GitHub Releases workflow.

    The project provides human-readable release notes for every version. Each release includes a categorized summary of features, bug fixes, and infrastructure changes (Changelog), along with SHA256/SHA512 checksums for artifact verification. These notes are manually curated to ensure clarity for users, satisfying the requirement to be more than a raw version control log. URL: https://github.com/jcsxdev/letsgolang/releases/tag/v0.1.0.



    The release notes MUST identify every publicly known run-time vulnerability fixed in this release that already had a CVE assignment or similar when the release was created. This criterion may be marked as not applicable (N/A) if users typically cannot practically update the software themselves (e.g., as is often true for kernel updates). This criterion applies only to the project results, not to its dependencies. If there are no release notes or there have been no publicly known vulnerabilities, choose N/A. [release_notes_vulns]
    This criterion helps users determine if a given update will fix a vulnerability that is publicly known, to help users make an informed decision about updating. If users typically cannot practically update the software themselves on their computers, but must instead depend on one or more intermediaries to perform the update (as is often the case for a kernel and low-level software that is intertwined with a kernel), the project may choose "not applicable" (N/A) instead, since this additional information will not be helpful to those users. Similarly, a project may choose N/A if all recipients only run the latest version (e.g., it is the code for a single website or internet service that is constantly updated via continuous delivery). This criterion only applies to the project results, not its dependencies. Listing the vulnerabilities of all transitive dependencies of a project becomes unwieldy as dependencies increase and vary, and is unnecessary since tools that examine and track dependencies can do this in a more scalable way.

    This is the initial release (v0.1.0) of the project. There are no previously known public vulnerabilities or CVEs associated with this project that require identification in the release notes.


 Reporting 8/8

  • Bug-reporting process


    The project MUST provide a process for users to submit bug reports (e.g., using an issue tracker or a mailing list). (URL required) [report_process]

    The project uses GitHub Issues as its official bug tracking system. Users can submit bug reports, track progress, and discuss issues directly in the repository. URL: https://github.com/jcsxdev/letsgolang/issues.



    The project SHOULD use an issue tracker for tracking individual issues. [report_tracker]

    The project utilizes GitHub Issues as its integrated issue tracker to manage, label, and track the lifecycle of individual bugs and feature requests. URL: https://github.com/jcsxdev/letsgolang/issues.



    The project MUST acknowledge a majority of bug reports submitted in the last 2-12 months (inclusive); the response need not include a fix. [report_responses]

    Since the project's inception, 100% of bug reports submitted have been acknowledged. While not all issues may have been fixed immediately, the project maintains a commitment to recognizing and addressing bug reports in a timely manner, well above the required majority for this criterion.



    The project SHOULD respond to a majority (>50%) of enhancement requests in the last 2-12 months (inclusive). [enhancement_responses]
    The response MAY be 'no' or a discussion about its merits. The goal is simply that there be some response to some requests, which indicates that the project is still alive. For purposes of this criterion, projects need not count fake requests (e.g., from spammers or automated systems). If a project is no longer making enhancements, please select "unmet" and include the URL that makes this situation clear to users. If a project tends to be overwhelmed by the number of enhancement requests, please select "unmet" and explain.

    Since the project’s inception, which was less than 2 months ago, 100% of enhancement requests and automated suggestions (such as those from StepSecurity and Dependabot) have been acknowledged and addressed. This response rate exceeds the required threshold of 50% for the most recent history of the project. Therefore, the project consistently maintains active engagement and responsiveness.



    The project MUST have a publicly available archive for reports and responses for later searching. (URL required) [report_archive]

    The project uses GitHub Issues, which maintains a permanent, publicly searchable archive of all reported bugs, feature requests, and their respective discussions and resolutions (both open and closed). URL: https://github.com/jcsxdev/letsgolang/issues?q=is%3Aissue+is%3Aclosed.


  • Vulnerability report process


    The project MUST publish the process for reporting vulnerabilities on the project site. (URL required) [vulnerability_report_process]
    Projects hosted on GitHub SHOULD consider enabling privately reporting a security vulnerability. Projects on GitLab SHOULD consider using its ability for privately reporting a vulnerability. Projects MAY identify a mailing address on https://PROJECTSITE/security, often in the form security@example.org. This vulnerability reporting process MAY be the same as its bug reporting process. Vulnerability reports MAY always be public, but many projects have a private vulnerability reporting mechanism.

    The project provides a clear vulnerability reporting process via a SECURITY.md file located in the root of the repository. This document outlines the security policy, supported versions, and provides a private contact method (email) for reporting vulnerabilities, ensuring a coordinated disclosure process. URL: https://github.com/jcsxdev/letsgolang/blob/main/SECURITY.md.



    If private vulnerability reports are supported, the project MUST include how to send the information in a way that is kept private. (URL required) [vulnerability_report_private]
    Examples include a private defect report submitted on the web using HTTPS (TLS) or an email encrypted using OpenPGP. If vulnerability reports are always public (so there are never private vulnerability reports), choose "not applicable" (N/A).

    The project provides clear instructions in the SECURITY.md file on how to report vulnerabilities privately. It specifies a direct email address for confidential disclosures, ensuring that sensitive security information is not leaked to the public issue tracker before a patch is available. URL: https://github.com/jcsxdev/letsgolang/blob/main/SECURITY.md.



    The project's initial response time for any vulnerability report received in the last 6 months MUST be less than or equal to 14 days. [vulnerability_report_response]
    If there have been no vulnerabilities reported in the last 6 months, choose "not applicable" (N/A).

    As the project was recently established (within the last 2 months) and no vulnerability reports have been received to date, this criterion is currently not applicable. The project maintainer is committed to responding to any future reports within the required 14-day timeframe, as outlined in the SECURITY.md file.


 Quality 13/13

  • Working build system


    If the software produced by the project requires building for use, the project MUST provide a working build system that can automatically rebuild the software from source code. [build]
    A build system determines what actions need to occur to rebuild the software (and in what order), and then performs those steps. For example, it may invoke a compiler to compile the source code. If an executable is created from source code, it must be possible to modify the project's source code and then generate an updated executable with those modifications. If the software produced by the project depends on external libraries, the build system does not need to build those external libraries. If there is nothing that needs to be built to use the software after its source code is modified, select "not applicable" (N/A).

    Non-trivial build file in repository: https://github.com/jcsxdev/letsgolang/blob/main/Makefile.



    It is SUGGESTED that common tools be used for building the software. [build_common_tools]
    For example, Maven, Ant, cmake, the autotools, make, rake (Ruby), or devtools (R).

    Non-trivial build file in repository: https://github.com/jcsxdev/letsgolang/blob/main/Makefile.



    The project SHOULD be buildable using only FLOSS tools. [build_floss_tools]

    The project is composed entirely of POSIX-compliant shell scripts and is managed using a Makefile. It can be built, tested, and packaged using only FLOSS (Free/Libre and Open Source Software) tools such as make, tar, coreutils (sha256sum/sha512sum), and shellcheck. No proprietary compilers, IDEs, or libraries are required to build the distribution artifacts.


  • Automated test suite


    The project MUST use at least one automated test suite that is publicly released as FLOSS (this test suite may be maintained as a separate FLOSS project). The project MUST clearly show or document how to run the test suite(s) (e.g., via a continuous integration (CI) script or via documentation in files such as BUILD.md, README.md, or CONTRIBUTING.md). [test]
    The project MAY use multiple automated test suites (e.g., one that runs quickly, vs. another that is more thorough but requires special equipment). There are many test frameworks and test support systems available, including Selenium (web browser automation), Junit (JVM, Java), RUnit (R), testthat (R).

    The project includes a dedicated automated test suite located in the test/ directory, which is released as FLOSS alongside the main source code. The suite is orchestrated via a Makefile and is automatically executed on every push and pull request through GitHub Actions. Clear documentation on how to run these tests locally (using make test) is provided in the BUILDING.md and README.md files.



    A test suite SHOULD be invocable in a standard way for that language. [test_invocation]
    For example, "make check", "mvn test", or "rake test" (Ruby).

    The project follows the standard convention for C and shell-based projects by using a Makefile to trigger the test suite. Users can invoke the tests using the idiomatic command make test, which is a widely recognized and accepted approach in this environment. This process is also documented in the BUILDING.md file, ensuring that users can easily set up and run the tests.
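
    In practice, a contributor runs the documented command from the repository root; a non-zero exit status signals a failure, which is also what CI keys off. The error message below is illustrative:

        # Documented invocation of the test suite
        make test || { printf 'test suite failed\n' >&2; exit 1; }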



    It is SUGGESTED that the test suite cover most (or ideally all) the code branches, input fields, and functionality. [test_most]

    The project's automated test suite is designed to cover the primary functional branches of the installer, including environment pre-checks, version parsing, and directory structure creation. While we strive for high coverage, the suite currently validates all critical paths and error-handling scenarios for the initial release.



    It is SUGGESTED that the project implement continuous integration (where new or changed code is frequently integrated into a central code repository and automated tests are run on the result). [test_continuous_integration]

    The project implements Continuous Integration using GitHub Actions. Every commit pushed to the repository and every pull request triggers an automated workflow that runs the full test suite and the ShellCheck static analysis tool. This ensures that new code is verified immediately for regressions and quality issues before being merged into the main branch.


  • New functionality testing


    The project MUST have a general policy (formal or not) that as major new functionality is added to the software produced by the project, tests of that functionality should be added to an automated test suite. [test_policy]
    As long as a policy is in place, even by word of mouth, that says developers should add tests to the automated test suite for major new functionality, select "Met."

    The project maintains a policy that all major new functionality must be accompanied by corresponding automated tests. This is enforced through the development workflow, where new features are validated against the existing test suite and new test cases are added to the test/ directory to ensure long-term stability and prevent regressions.



    The project MUST have evidence that the test_policy for adding tests has been adhered to in the most recent major changes to the software produced by the project. [tests_are_added]
    Major functionality would typically be mentioned in the release notes. Perfection is not required, merely evidence that tests are typically being added in practice to the automated test suite when new major functionality is added to the software produced by the project.

    As evidenced by the initial v0.1.0 release, the project was launched with a comprehensive automated test suite in the test/ directory. This suite covers the core installer logic, environment validation, and version management—the primary functionalities of the current software. All recent commits involving logic changes have been verified against this suite, as shown in the GitHub Actions history.



    It is SUGGESTED that this policy on adding tests (see test_policy) be documented in the instructions for change proposals. [tests_documented_added]
    However, even an informal rule is acceptable as long as the tests are being added in practice.

    The project documents the expectation for adding tests in the CONTRIBUTING.md file. It explicitly states that new features or significant changes should be accompanied by automated tests to ensure project quality and prevent regressions. This guidance is provided to all potential contributors as part of the change proposal instructions.


  • Warning flags


    The project MUST enable one or more compiler warning flags, a "safe" language mode, or use a separate "linter" tool to look for code quality errors or common simple mistakes, if there is at least one FLOSS tool that can implement this criterion in the selected language. [warnings]
    Examples of compiler warning flags include gcc/clang "-Wall". Examples of a "safe" language mode include JavaScript "use strict" and perl5's "use warnings". A separate "linter" tool is simply a tool that examines the source code to look for code quality errors or common simple mistakes. These are typically enabled within the source code or build instructions.

    The project uses ShellCheck, a FLOSS static analysis and linting tool for shell scripts, to identify code quality errors, simple mistakes, and portability issues. This tool is integrated into the project's automated Continuous Integration (CI) pipeline via GitHub Actions, ensuring that every change is inspected for warnings and errors before being merged. This process helps maintain high code quality by catching common mistakes early and ensuring code portability across different environments.



    The project MUST address warnings. [warnings_fixed]
    These are the warnings identified by the implementation of the warnings criterion. The project should fix warnings or mark them in the source code as false positives. Ideally there would be no warnings, but a project MAY accept some warnings (typically less than 1 warning per 100 lines or less than 10 warnings).

    The project actively addresses all warnings identified by the ShellCheck linter. The Continuous Integration (CI) pipeline is configured to fail if any warnings are detected, ensuring that the codebase remains clean. All current warnings have been fixed, and the project maintains a "zero-warning" state for all releases.



    It is SUGGESTED that projects be maximally strict with warnings in the software produced by the project, where practical. [warnings_strict]
    Some warnings cannot be effectively enabled on some projects. What is needed is evidence that the project is striving to enable warning flags where it can, so that errors are detected early.

    The project implements a strict linting policy by integrating ShellCheck with its default comprehensive rule set into the CI pipeline. The CI is configured to treat warnings as errors, blocking any code that does not meet high-quality standards. This proactive approach ensures that potential issues—ranging from syntax errors to subtle portability and security pitfalls—are detected and resolved at the earliest possible stage of development.


 Security 16/16

  • Secure development knowledge


    The project MUST have at least one primary developer who knows how to design secure software. (See ‘details’ for the exact requirements.) [know_secure_design]
    This requires understanding the following design principles, including the 8 principles from Saltzer and Schroeder:
    • economy of mechanism (keep the design as simple and small as practical, e.g., by adopting sweeping simplifications)
    • fail-safe defaults (access decisions should deny by default, and projects' installation should be secure by default)
    • complete mediation (every access that might be limited must be checked for authority and be non-bypassable)
    • open design (security mechanisms should not depend on attacker ignorance of its design, but instead on more easily protected and changed information like keys and passwords)
    • separation of privilege (ideally, access to important objects should depend on more than one condition, so that defeating one protection system won't enable complete access. E.G., multi-factor authentication, such as requiring both a password and a hardware token, is stronger than single-factor authentication)
    • least privilege (processes should operate with the least privilege necessary)
    • least common mechanism (the design should minimize the mechanisms common to more than one user and depended on by all users, e.g., directories for temporary files)
    • psychological acceptability (the human interface must be designed for ease of use - designing for "least astonishment" can help)
    • limited attack surface (the attack surface - the set of the different points where an attacker can try to enter or extract data - should be limited)
    • input validation with allowlists (inputs should typically be checked to determine if they are valid before they are accepted; this validation should use allowlists (which only accept known-good values), not denylists (which attempt to list known-bad values)).
    A "primary developer" in a project is anyone who is familiar with the project's code base, is comfortable making changes to it, and is acknowledged as such by most other participants in the project. A primary developer would typically make a number of contributions over the past year (via code, documentation, or answering questions). Developers would typically be considered primary developers if they initiated the project (and have not left the project more than three years ago), have the option of receiving information on a private vulnerability reporting channel (if there is one), can accept commits on behalf of the project, or perform final releases of the project software. If there is only one developer, that individual is the primary developer. Many books and courses are available to help you understand how to develop more secure software and discuss design. For example, the Secure Software Development Fundamentals course is a free set of three courses that explain how to develop more secure software (it's free if you audit it; for an extra fee you can earn a certificate to prove you learned the material).

    The primary developer understands and applies core secure design principles, including those by Saltzer and Schroeder, as evidenced by the project's architecture:

    1. Fail-safe Defaults: Installation paths and environment configurations are secured by default; the script terminates on any error to prevent insecure states.
    2. Input Validation: Strict verification of Go versions and SHA checksums ensures only authenticated and valid data is processed.
    3. Least Privilege: The script is intentionally designed to run without root privileges where possible, minimizing the risk of privilege escalation.
    4. Economy of Mechanism & Limited Attack Surface: By maintaining a minimal, POSIX-compliant shell codebase, the project reduces complexity, making the logic easier to audit and reducing potential entry points for attackers.

    The developer prioritizes security throughout the software lifecycle, ensuring that "psychological acceptability" is met through clear user feedback and "open design" by relying on public cryptographic hashes rather than obscurity.



    At least one of the project's primary developers MUST know of common kinds of errors that lead to vulnerabilities in this kind of software, as well as at least one method to counter or mitigate each of them. [know_common_errors]
    Examples (depending on the type of software) include SQL injection, OS injection, classic buffer overflow, cross-site scripting, missing authentication, and missing authorization. See the CWE/SANS top 25 or OWASP Top 10 for commonly used lists. Many books and courses are available to help you understand how to develop more secure software and discuss common implementation errors that lead to vulnerabilities. For example, the Secure Software Development Fundamentals course is a free set of three courses that explain how to develop more secure software (it's free if you audit it; for an extra fee you can earn a certificate to prove you learned the material).

    The primary developer is aware of common vulnerabilities specific to shell scripting, such as OS Command Injection, Word Splitting, and Globbing issues. To mitigate these, the project enforces strict variable quoting, avoids dangerous commands like eval with untrusted input, and uses ShellCheck as a mandatory static analysis tool to catch these patterns automatically. Furthermore, the developer understands how to prevent Insecure Temporary File creation by following POSIX-compliant safety standards.
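
    The mitigations listed above correspond to a handful of recurring shell idioms. The snippet below illustrates them in generic form and is not taken from the project's source; in particular, mktemp is assumed to be available even though it is not strictly mandated by POSIX:

        # Quote every expansion to prevent word splitting and globbing
        target_dir="$HOME/.local/go"
        mkdir -p -- "$target_dir"

        # Avoid eval on untrusted input; pass arguments positionally instead

        # Create temporary files safely rather than using predictable names
        tmp_file=$(mktemp) || exit 1
        trap 'rm -f "$tmp_file"' EXIT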


  • Use basic good cryptographic practices

    Note that some software does not need to use cryptographic mechanisms. If your project produces software that (1) includes, activates, or enables encryption functionality, and (2) might be released from the United States (US) to outside the US or to a non-US-citizen, you may be legally required to take a few extra steps. Typically this just involves sending an email. For more information, see the encryption section of Understanding Open Source Technology & US Export Controls.

    The software produced by the project MUST use, by default, only cryptographic protocols and algorithms that are publicly published and reviewed by experts (if cryptographic protocols and algorithms are used). [crypto_published]
    These cryptographic criteria often do not apply because some software has no need to directly use cryptographic capabilities.

    The project exclusively uses expert-reviewed cryptographic protocols and algorithms. For data transit, the project enforces TLS 1.2 or higher and strictly allows only HTTPS via its run_curl wrapper. For file integrity, it utilizes SHA-256/512 hashes. These choices ensure the project relies only on industry-standard, publicly vetted cryptographic methods.
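
    The run_curl wrapper referenced throughout this section is not reproduced here; a wrapper enforcing the stated policy (HTTPS only, TLS 1.2 or higher) would typically look like the hypothetical sketch below, with all flag choices beyond --tlsv1.2 and the protocol restriction being assumptions:

        # Hypothetical wrapper that only ever speaks HTTPS with TLS 1.2+;
        # the project's real run_curl may differ.
        run_curl() {
            curl --proto '=https' --tlsv1.2 \
                 --fail --silent --show-error --location \
                 "$@"
        }

        # Example use (URL is illustrative)
        run_curl -o go.tar.gz.sha256 "https://example.invalid/go.tar.gz.sha256"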



    If the software produced by the project is an application or library, and its primary purpose is not to implement cryptography, then it SHOULD only call on software specifically designed to implement cryptographic functions; it SHOULD NOT re-implement its own. [crypto_call]

    The project does not attempt to re-implement any cryptographic functions. Instead, it leverages standard, battle-tested system utilities specifically designed for these purposes. It calls curl for secure TLS communication and sha256sum or sha512sum for integrity verification. By using these external, specialized FLOSS tools, the project ensures that its security relies on widely audited implementations rather than custom code.



    All functionality in the software produced by the project that depends on cryptography MUST be implementable using FLOSS. [crypto_floss]

    All cryptographic functionality in the project is implemented through standard, widely available FLOSS tools. The project utilizes curl for TLS-protected data transit and coreutils (specifically sha256sum and sha512sum) for integrity verification. These tools are open-source, POSIX-compliant, and can be audited or replaced by any compatible FLOSS implementation, ensuring there is no dependency on proprietary cryptographic software.



    The security mechanisms within the software produced by the project MUST use default keylengths that at least meet the NIST minimum requirements through the year 2030 (as stated in 2012). It MUST be possible to configure the software so that smaller keylengths are completely disabled. [crypto_keylength]
    These minimum bitlengths are: symmetric key 112, factoring modulus 2048, discrete logarithm key 224, discrete logarithm group 2048, elliptic curve 224, and hash 224 (password hashing is not covered by this bitlength; more information on password hashing is provided in the crypto_password_storage criterion). See https://www.keylength.com for a comparison of keylength recommendations from various organizations. The software MAY allow smaller keylengths in some configurations (ideally it would not, since this allows downgrade attacks, but shorter keylengths are sometimes necessary for interoperability).

    The project adheres to NIST security standards by using SHA-256 and SHA-512 for integrity verification, both of which exceed the minimum 224-bit hash requirement. For data transit, the project enforces TLS 1.2 or higher, which utilizes modern key lengths for asymmetric and symmetric encryption that meet or exceed the 2030 requirements. Furthermore, by explicitly configuring curl to use --tlsv1.2, the software ensures that older, insecure protocols with smaller keylengths are completely disabled and cannot be used as a fallback.



    The default security mechanisms within the software produced by the project MUST NOT depend on broken cryptographic algorithms (e.g., MD4, MD5, single DES, RC4, Dual_EC_DRBG), or use cipher modes that are inappropriate to the context, unless they are necessary to implement an interoperable protocol (where the protocol implemented is the most recent version of that standard broadly supported by the network ecosystem, that ecosystem requires the use of such an algorithm or mode, and that ecosystem does not offer any more secure alternative). The documentation MUST describe any relevant security risks and any known mitigations if these broken algorithms or modes are necessary for an interoperable protocol. [crypto_working]
    ECB mode is almost never appropriate because it reveals identical blocks within the ciphertext as demonstrated by the ECB penguin, and CTR mode is often inappropriate because it does not perform authentication and causes duplicates if the input state is repeated. In many cases it's best to choose a block cipher algorithm mode designed to combine secrecy and authentication, e.g., Galois/Counter Mode (GCM) and EAX. Projects MAY allow users to enable broken mechanisms (e.g., during configuration) where necessary for compatibility, but then users know they're doing it.

    The project's security mechanisms rely exclusively on modern, secure cryptographic algorithms. For file integrity, it uses SHA-256 and SHA-512, avoiding broken hashes like MD4, MD5, or SHA-1. For data transport, the run_curl wrapper enforces TLS 1.2+, which automatically excludes broken ciphers like RC4 or single DES. The project does not use insecure cipher modes like ECB; instead, it leverages the high-security defaults of the underlying system utilities (curl and coreutils).



    The default security mechanisms within the software produced by the project SHOULD NOT depend on cryptographic algorithms or modes with known serious weaknesses (e.g., the SHA-1 cryptographic hash algorithm or the CBC mode in SSH). [crypto_weaknesses]
    Concerns about CBC mode in SSH are discussed in CERT: SSH CBC vulnerability.

    The project avoids all cryptographic algorithms and modes with known weaknesses. Specifically, it uses SHA-256 and SHA-512 for file integrity instead of the weakened SHA-1 algorithm. For secure transport, it enforces TLS 1.2 or higher, which prioritizes modern authenticated encryption modes (like AES-GCM) over older, vulnerable modes like CBC. The project relies on current industry standards to ensure long-term resistance to known cryptographic attacks.



    The security mechanisms within the software produced by the project SHOULD implement perfect forward secrecy for key agreement protocols so that a session key derived from a set of long-term keys cannot be compromised if one of the long-term keys is compromised in the future. [crypto_pfs]

    The project implements Perfect Forward Secrecy (PFS) by enforcing TLS 1.2 or higher for all network communications via the run_curl wrapper. These modern TLS versions prioritize cipher suites that use ephemeral key exchanges (such as ECDHE), ensuring that the compromise of long-term server keys does not compromise the confidentiality of past sessions. By relying on up-to-date FLOSS tools (curl and openssl/gnutls), the project ensures that PFS is active by default.



    If the software produced by the project causes the storing of passwords for authentication of external users, the passwords MUST be stored as iterated hashes with a per-user salt by using a key stretching (iteration) algorithm (e.g., Argon2id, Bcrypt, Scrypt, or PBKDF2). See also the OWASP Password Storage Cheat Sheet. [crypto_password_storage]
    This criterion applies only when the software enforces authentication of external users to itself using passwords (aka inbound authentication), such as server-side web applications. It does not apply when the software stores passwords for authenticating into other systems (aka outbound authentication, e.g., the software implements a client for some other system), since at least some parts of that software must have access to the unhashed password.

    Not applicable. The project is a command-line installation utility and does not manage user accounts, perform authentication of external users, or store passwords in any form.



    The security mechanisms within the software produced by the project MUST generate all cryptographic keys and nonces using a cryptographically secure random number generator, and MUST NOT do so using generators that are cryptographically insecure. [crypto_random]
    A cryptographically secure random number generator may be a hardware random number generator, or it may be a cryptographically secure pseudo-random number generator (CSPRNG) using an algorithm such as Hash_DRBG, HMAC_DRBG, CTR_DRBG, Yarrow, or Fortuna. Examples of calls to secure random number generators include Java's java.security.SecureRandom and JavaScript's window.crypto.getRandomValues. Examples of calls to insecure random number generators include Java's java.util.Random and JavaScript's Math.random.

    Not applicable. The project is an installer that downloads and verifies pre-existing binaries. It does not generate cryptographic keys, nonces, salts, or any other random values for security purposes. All security checks are based on deterministic comparisons of file hashes provided by the Go team.


  • Secured delivery against man-in-the-middle (MITM) attacks


    The project MUST use a delivery mechanism that counters MITM attacks. Using https or ssh+scp is acceptable. [delivery_mitm]
    An even stronger mechanism is releasing the software with digitally signed packages, since that mitigates attacks on the distribution system, but this only works if the users can be confident that the public keys for signatures are correct and if the users will actually check the signature.

    The project is distributed exclusively via GitHub, which enforces HTTPS for all web traffic and repository cloning. This provides a secure, encrypted delivery mechanism that counters Man-in-the-Middle (MITM) attacks. Furthermore, the installation script (letsgolang.sh) is designed to download Go binaries only over HTTPS using TLS 1.2+, ensuring that the entire delivery pipeline—from the source code to the final binary—is protected against interception and tampering.
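
    For example, obtaining the source over an encrypted channel requires nothing beyond a standard HTTPS clone:

        # Clone the repository over HTTPS
        git clone https://github.com/jcsxdev/letsgolang.git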



    A cryptographic hash (e.g., a sha1sum) MUST NOT be retrieved over http and used without checking for a cryptographic signature. [delivery_unsigned]
    These hashes can be modified in transit.

    The project never retrieves cryptographic hashes or checksums over insecure HTTP. All checksums used to verify the integrity of Go binaries are fetched via the run_curl wrapper, which enforces HTTPS and TLS 1.2+. This ensures that both the data (the Go archive) and the verification metadata (the SHA hashes) are protected by the same high-strength encryption, preventing an attacker from tampering with the hash to mask a malicious payload.


  • Publicly known vulnerabilities fixed


    There MUST be no unpatched vulnerabilities of medium or higher severity that have been publicly known for more than 60 days. [vulnerabilities_fixed_60_days]
    The vulnerability must be patched and released by the project itself (patches may be developed elsewhere). A vulnerability becomes publicly known (for this purpose) once it has a CVE with publicly released non-paywalled information (reported, for example, in the National Vulnerability Database) or when the project has been informed and the information has been released to the public (possibly by the project). A vulnerability is considered medium or higher severity if its Common Vulnerability Scoring System (CVSS) base qualitative score is medium or higher. In CVSS versions 2.0 through 3.1, this is equivalent to a CVSS score of 4.0 or higher. Projects may use the CVSS score as published in a widely-used vulnerability database (such as the National Vulnerability Database) using the most-recent version of CVSS reported in that database. Projects may instead calculate the severity themselves using the latest version of CVSS at the time of the vulnerability disclosure, if the calculation inputs are publicly revealed once the vulnerability is publicly known. Note: this means that users might be left vulnerable to all attackers worldwide for up to 60 days. This criterion is often much easier to meet than what Google recommends in Rebooting responsible disclosure, because Google recommends that the 60-day period start when the project is notified even if the report is not public. Also note that this badge criterion, like other criteria, applies to the individual project. Some projects are part of larger umbrella organizations or larger projects, possibly in multiple layers, and many projects feed their results to other organizations and projects as part of a potentially-complex supply chain. An individual project often cannot control the rest, but an individual project can work to release a vulnerability patch in a timely way. Therefore, we focus solely on the individual project's response time. Once a patch is available from the individual project, others can determine how to deal with the patch (e.g., they can update to the newer version or they can apply just the patch as a cherry-picked solution).

    As of the current date, there are no publicly known vulnerabilities (CVEs) of medium or higher severity in this project. The project maintainer actively monitors security reports and commits to releasing patches for any vulnerabilities that meet the medium or higher severity criteria within 60 days of them becoming publicly known. The project utilizes automated static analysis tools (such as ShellCheck) in the CI pipeline, which helps identify potential issues early and prevent the introduction of security vulnerabilities. This proactive approach ensures that the code is continuously vetted for security flaws and any identified vulnerabilities are addressed promptly.



    Projects SHOULD fix all critical vulnerabilities rapidly after they are reported. [vulnerabilities_critical_fixed]

    The project maintainer is committed to addressing and fixing critical vulnerabilities as a top priority. In the event of a reported critical security flaw, the project follows an expedited workflow to develop, test, and release a patch as rapidly as possible, aiming well within the 60-day mandatory window. The integration of GitHub Actions and automated testing ensures that such critical fixes can be verified and deployed swiftly without introducing regressions.


  • Other security issues


    The public repositories MUST NOT leak a valid private credential (e.g., a working password or private key) that is intended to limit public access. [no_leaked_credentials]
    A project MAY leak "sample" credentials for testing and unimportant databases, as long as they are not intended to limit public access.

    The project repository is strictly monitored to ensure no private credentials, keys, or passwords are leaked. All Continuous Integration secrets (such as GitHub tokens) are managed securely through GitHub Actions Secrets and are never hardcoded in scripts or configuration files. A review of the commit history confirms that no valid private credentials intended to limit public access have been released.


 Analysis 8/8

  • Static code analysis


    At least one static code analysis tool (beyond compiler warnings and "safe" language modes) MUST be applied to any proposed major production release of the software before its release, if there is at least one FLOSS tool that implements this criterion in the selected language. [static_analysis]
    A static code analysis tool examines the software code (as source code, intermediate code, or executable) without executing it with specific inputs. For purposes of this criterion, compiler warnings and "safe" language modes do not count as static code analysis tools (these typically avoid deep analysis because speed is vital). Some static analysis tools focus on detecting generic defects, others focus on finding specific kinds of defects (such as vulnerabilities), and some do a combination. Examples of such static code analysis tools include cppcheck (C, C++), clang static analyzer (C, C++), SpotBugs (Java), FindBugs (Java) (including FindSecurityBugs), PMD (Java), Brakeman (Ruby on Rails), lintr (R), goodpractice (R), Coverity Quality Analyzer, SonarQube, Codacy, and HP Enterprise Fortify Static Code Analyzer. Larger lists of tools can be found in places such as the Wikipedia list of tools for static code analysis, OWASP information on static code analysis, NIST list of source code security analyzers, and Wheeler's list of static analysis tools. If there are no FLOSS static analysis tools available for the implementation language(s) used, you may select 'N/A'.

    The project uses ShellCheck, a widely used FLOSS static analysis tool, to examine the source code for logic errors and security weaknesses (such as command injection and insecure variable handling). The analysis runs automatically on every push and pull request via GitHub Actions, and the CI pipeline blocks any release that does not pass the ShellCheck check, so every major production release is audited for common defects before it ships.
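
    As a minimal sketch of such a check (the glob pattern and severity threshold are assumptions, not the project's actual configuration), a CI step can be as simple as:

        # Run ShellCheck over the repository's shell scripts; a non-zero
        # exit status fails the CI job and blocks the release.
        set -eu
        shellcheck --severity=style ./*.sh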



    It is SUGGESTED that at least one of the static analysis tools used for the static_analysis criterion include rules or approaches to look for common vulnerabilities in the analyzed language or environment. [static_analysis_common_vulnerabilities]
    Static analysis tools that are specifically designed to look for common vulnerabilities are more likely to find them. That said, using any static tools will typically help find some problems, so we are suggesting but not requiring this for the 'passing' level badge.

    The project utilizes ShellCheck, which is specifically designed to identify common vulnerabilities in the shell scripting environment. Its rules help mitigate high-risk security flaws such as CWE-78 (OS Command Injection) and CWE-377 (Insecure Temporary Files): for example, it flags unquoted variable expansions that could lead to word-splitting or globbing exploits and warns against dangerous patterns such as passing untrusted input to eval. By integrating this specialized tool into the CI pipeline, the project ensures that code is checked against a rule set covering security anti-patterns relevant to the language.
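
    For illustration only (this is a hypothetical snippet, not code from the project), the following shows the kind of pattern ShellCheck flags and the quoted form it expects:

        # Hypothetical example of an issue ShellCheck reports (SC2086):
        # the unquoted expansion is subject to word splitting and globbing,
        # so a crafted value could expand into extra arguments.
        target_dir=$1
        rm -rf $target_dir/go      # flagged by ShellCheck

        # Quoting the expansion removes the risk and satisfies the check.
        rm -rf "${target_dir}/go"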



    All medium and higher severity exploitable vulnerabilities discovered with static code analysis MUST be fixed in a timely way after they are confirmed. [static_analysis_fixed]
    A vulnerability is considered medium or higher severity if its Common Vulnerability Scoring System (CVSS) base qualitative score is medium or higher. In CVSS versions 2.0 through 3.1, this is equivalent to a CVSS score of 4.0 or higher. Projects may use the CVSS score as published in a widely-used vulnerability database (such as the National Vulnerability Database) using the most-recent version of CVSS reported in that database. Projects may instead calculate the severity themselves using the latest version of CVSS at the time of the vulnerability disclosure, if the calculation inputs are publicly revealed once the vulnerability is publicly known. Note that criterion vulnerabilities_fixed_60_days requires that all such vulnerabilities be fixed within 60 days of being made public.

    The project enforces a strict policy where all issues identified by static analysis must be resolved before code is merged or released. The CI pipeline (GitHub Actions) is configured to fail if ShellCheck detects any security vulnerabilities or logic errors (which typically correspond to medium or high severity issues in a shell environment, such as command injection). This ensures that no exploitable vulnerabilities discovered by the tool persist in the codebase. Furthermore, the maintainer commits to fixing any confirmed vulnerabilities in a timely manner, consistent with the project's 60-day maximum window for security patches.



    It is SUGGESTED that static source code analysis occur on every commit or at least daily. [static_analysis_often]

    Static source code analysis is performed automatically on every commit via the project's CI/CD pipeline. Using GitHub Actions, ShellCheck scans the codebase for potential vulnerabilities and logic errors on every push and pull request. This provides a continuous feedback loop in which security issues are identified and addressed during development rather than only at release time.
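
    The same check can also be reproduced locally before a commit ever reaches CI; the hook below is purely illustrative and is not claimed to be part of the project's setup:

        #!/bin/sh
        # Illustrative Git pre-commit hook: run ShellCheck on staged shell
        # scripts and abort the commit if any issue is found.
        # (Install by saving as .git/hooks/pre-commit and making it executable.)
        set -eu
        files=$(git diff --cached --name-only --diff-filter=ACM -- '*.sh')
        [ -n "$files" ] || exit 0
        # shellcheck disable=SC2086  # word splitting on the file list is intended
        shellcheck $files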


  • Dynamic code analysis


    It is SUGGESTED that at least one dynamic analysis tool be applied to any proposed major production release of the software before its release. [dynamic_analysis]
    A dynamic analysis tool examines the software by executing it with specific inputs. For example, the project MAY use a fuzzing tool (e.g., American Fuzzy Lop) or a web application scanner (e.g., OWASP ZAP or w3af). In some cases the OSS-Fuzz project may be willing to apply fuzz testing to your project. For purposes of this criterion the dynamic analysis tool needs to vary the inputs in some way to look for various kinds of problems or be an automated test suite with at least 80% branch coverage. The Wikipedia page on dynamic analysis and the OWASP page on fuzzing identify some dynamic analysis tools. The analysis tool(s) MAY be focused on looking for security vulnerabilities, but this is not required.

    The project employs an automated test suite that performs dynamic analysis by executing the installer in a controlled environment. The test suite varies inputs, such as Go versions and installation paths, to ensure the script behaves correctly under different scenarios. In addition, the script runs with strict shell options (set -euo pipefail), which act as a runtime error-detection mechanism. The dynamic tests specifically verify the security logic within the get_temporary_asset function, ensuring that umask settings, directory permissions, and trap cleanups work as intended during execution.
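
    The following is a minimal sketch of the pattern described above, assuming a mktemp/trap based implementation; the project's actual get_temporary_asset function may differ in names and details:

        # Sketch only: illustrates umask, a private temporary directory, and
        # trap-based cleanup; not the project's verbatim implementation.
        get_temporary_asset_sketch() {
            url=$1

            # Restrict permissions on everything created from here on
            # (directories become 700, files become 600).
            umask 077

            # Private working directory for the downloaded archive.
            tmp_dir=$(mktemp -d) || return 1

            # Remove the directory on normal exit or interruption.
            trap 'rm -rf "${tmp_dir}"' EXIT INT TERM

            # Download the asset; it inherits the restrictive umask.
            curl -fsSL "$url" -o "${tmp_dir}/asset.tar.gz"
        }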



    It is SUGGESTED that if the software produced by the project includes software written using a memory-unsafe language (e.g., C or C++), then at least one dynamic tool (e.g., a fuzzer or web application scanner) be routinely used in combination with a mechanism to detect memory safety problems such as buffer overwrites. If the project does not produce software written in a memory-unsafe language, choose "not applicable" (N/A). [dynamic_analysis_unsafe]
    Examples of mechanisms to detect memory safety problems include Address Sanitizer (ASAN) (available in GCC and LLVM), Memory Sanitizer, and valgrind. Other potentially-used tools include thread sanitizer and undefined behavior sanitizer. Widespread assertions would also work.

    Not applicable. The project is written entirely in Shell script, which is an interpreted language that does not require manual memory management by the developer. The project does not contain any code written in memory-unsafe languages such as C or C++. Therefore, tools like AddressSanitizer (ASAN) or Valgrind are not applicable to this codebase.



    It is SUGGESTED that the project use a configuration for at least some dynamic analysis (such as testing or fuzzing) which enables many assertions. In many cases these assertions should not be enabled in production builds. [dynamic_analysis_enable_assertions]
    This criterion does not suggest enabling assertions during production; that is entirely up to the project and its users to decide. This criterion's focus is instead to improve fault detection during dynamic analysis before deployment. Enabling assertions in production use is completely different from enabling assertions during dynamic analysis (such as testing). In some cases enabling assertions in production use is extremely unwise (especially in high-integrity components). There are many arguments against enabling assertions in production, e.g., libraries should not crash callers, their presence may cause rejection by app stores, and/or activating an assertion in production may expose private data such as private keys. Beware that in many Linux distributions NDEBUG is not defined, so C/C++ assert() will by default be enabled for production in those environments. It may be important to use a different assertion mechanism or defining NDEBUG for production in those environments.

    The project follows a fail-fast design philosophy, which incorporates a variety of runtime assertions to catch issues early during dynamic analysis and testing. These assertions are primarily aimed at validating conditions before the code proceeds to the next steps, ensuring that any violations are caught as soon as possible during development, without reaching production.

    Key Mechanisms in Testing:

    1. Strict Execution Mode: The script uses set -euo pipefail, which enforces a fail-fast mechanism, acting as a global assertion during testing. This ensures that the script immediately exits if a command fails or if an undefined variable is accessed, effectively preventing further erroneous execution.

    2. Explicit State Assertions: Critical functions, such as get_temporary_asset, include manual assertions to verify assumptions about the environment. For instance, the script checks if umask was set correctly and ensures that file permissions match the expected security profile (700 or 600).

    3. Defensive Validation: Internal parameters and conditions are validated with explicit checks. For example, the script verifies that the correct number of arguments is passed (e.g., [ "$#" -ne 1 ]) and prints descriptive error messages to help developers catch issues early during testing, as illustrated in the sketch below.

    These assertions help the project to detect potential faults dynamically during test executions, and while they are part of the project's testing process, they are not enabled in production builds, in line with best practices.

    This approach enhances the reliability of the project by addressing many common issues during development, ensuring that production releases are stable and secure.
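
    The sketch below illustrates this fail-fast and assertion style; all function and path names are illustrative assumptions and do not necessarily match the project's code:

        # Illustrative only: demonstrates strict mode, defensive argument
        # validation, and explicit state assertions.
        set -eu   # exit immediately on errors or use of unset variables

        die() {
            printf 'error: %s\n' "$1" >&2
            exit 1
        }

        install_go_sketch() {
            # Defensive validation: exactly one argument (the Go version).
            [ "$#" -eq 1 ] || die "usage: install_go_sketch <go-version>"

            # Explicit state assertions about the environment before acting.
            [ -d "${HOME}/.local" ] || die "missing directory: ${HOME}/.local"
            [ -w "${HOME}/.local" ] || die "not writable: ${HOME}/.local"
        }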



    All medium and higher severity exploitable vulnerabilities discovered with dynamic code analysis MUST be fixed in a timely way after they are confirmed. [dynamic_analysis_fixed]
    If you are not running dynamic code analysis and thus have not found any vulnerabilities in this way, choose "not applicable" (N/A). A vulnerability is considered medium or higher severity if its Common Vulnerability Scoring System (CVSS) base qualitative score is medium or higher. In CVSS versions 2.0 through 3.1, this is equivalent to a CVSS score of 4.0 or higher. Projects may use the CVSS score as published in a widely-used vulnerability database (such as the National Vulnerability Database) using the most-recent version of CVSS reported in that database. Projects may instead calculate the severity themselves using the latest version of CVSS at the time of the vulnerability disclosure, if the calculation inputs are publicly revealed once the vulnerability is publicly known.

    The project maintains a policy of promptly addressing all vulnerabilities and critical logic errors identified during dynamic analysis. Because the test suite and the script itself use strict execution modes (set -euo pipefail), issues surfaced during dynamic testing, such as a failed security assertion in the get_temporary_asset function or an unhandled error state, cause the script to terminate and the CI build to fail. This prevents exploitable vulnerabilities discovered during dynamic testing from persisting into a production release. The maintainer is committed to resolving any confirmed issues of medium or higher severity in a timely way as part of the standard development and patching cycle.



This data is available under the Community Data License Agreement – Permissive, Version 2.0 (CDLA-Permissive-2.0). This means that a Data Recipient may share the Data, with or without modifications, so long as the Data Recipient makes available the text of this agreement with the shared Data. Please credit jcsxdev and the OpenSSF Best Practices badge contributors.

Project badge entry owned by: jcsxdev.
Entry created on 2025-12-27 06:19:38 UTC, last updated on 2025-12-27 08:43:46 UTC. Last achieved passing badge on 2025-12-27 08:43:46 UTC.