Criteria Discussion

There is no set of practices that can guarantee that software will never have defects or vulnerabilities. Even formal methods can fail if the specifications or assumptions are wrong. Nor is there any set of practices that can guarantee that a project will sustain a healthy and well-functioning development community.

However, following best practices can help improve the results of projects. For example, some practices enable multi-person review before release, which can both help find otherwise hard-to-find technical vulnerabilities and help build trust and a desire for repeated interaction among developers from different organizations.

This page discusses the set of best practices for Free/Libre and Open Source Software (FLOSS) projects developed for the Open Source Security Foundation (OpenSSF) Best Practices badge. Projects that follow these best practices will be able to voluntarily self-certify and show that they've achieved the relevant badge. Projects can do this, at no cost, by using a web application (BadgeApp) to explain how they meet these practices and their detailed criteria.


These best practices were developed to:

  1. encourage projects to follow best practices,
  2. help new projects discover what those practices are, and
  3. help users know which projects are following best practices (so users can prefer such projects).

The idiom "best practices" means "a procedure or set of procedures that is preferred or considered standard within an organization, industry, etc." These criteria are what we believe are widely "preferred or considered standard" in the wider FLOSS community.

For more information on how these criteria were developed, see the OpenSSF Best Practices badge GitHub site.

You may also see the full criteria.


The best practices criteria are divided into three levels:

  • Passing focuses on best practices that well-run FLOSS projects typically already follow. Getting the passing badge is an achievement; at any one time only about 10% of projects pursuing a badge achieve the passing level.
  • Silver is a more stringent set of criteria than passing but is expected to be achievable by small and single-organization projects.
  • Gold is even more stringent than silver and includes criteria that are not achievable by small or single-organization projects.

Every criterion has a short name, shown as superscripted text inside square brackets after the criterion text.


The Linux Foundation also sponsors the OpenChain Project, which identifies criteria for a "high quality Free and Open Source Software (FOSS) compliance program." OpenChain focuses on how organizations can best use FLOSS and contribute back to FLOSS projects, while the OpenSSF Best Practices badge focuses on the FLOSS projects themselves. The OpenSSF Best Practices badge and OpenChain work together to help improve FLOSS and how FLOSS is used.


In some cases we automatically test and fill in information if the project follows standard conventions and is hosted on a site (e.g., GitHub) with decent API support.

We intend to improve this automation in the future. Improvements to the automation are welcome!

However, we have intentionally prioritized "what is important", even if it can't be affordably automated. We love automated measurements, but not everything important is automatable or can be automated affordably.
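As an illustration of this kind of automation (this is a hypothetical sketch, not the BadgeApp's actual implementation), a check might inspect the repository metadata returned by a hosting site's API and pre-fill answers for a few criteria. The field names below follow the GitHub REST API's repository object; the criterion names are examples only.

```python
# Hypothetical sketch of automated criteria pre-filling.
# `repo` is a dict shaped like the GitHub REST API repository object;
# the criterion names used here are illustrative, not authoritative.

def prefill_from_github(repo: dict) -> dict:
    """Guess answers to a few criteria from repository metadata."""
    guesses = {}
    # A declared license suggests the license-related criteria are met.
    if repo.get("license"):
        guesses["license_location"] = "Met"
    # A non-empty description suggests the project says what it does.
    if repo.get("description"):
        guesses["description_good"] = "Met"
    # GitHub serves repository pages over HTTPS, satisfying sites_https.
    if repo.get("html_url", "").startswith("https://"):
        guesses["sites_https"] = "Met"
    return guesses
```

Automation of this sort only pre-fills suggestions; project members can still review and correct the results, which is why false automated claims can be overridden.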


We expect that these practices and their detailed criteria will be updated over time. We plan to add new criteria but mark them as "future" criteria, so that projects can add that information and maintain their badge.

Feedback is very welcome via the GitHub site as issues or pull requests. There is also a mailing list for general discussion.


The key words "MUST", "MUST NOT", "SHOULD", "SHOULD NOT", and "MAY" in this document are to be interpreted as described in RFC 2119. The additional term SUGGESTED is added. In summary, these key words have the following meanings:

  • The term MUST is an absolute requirement, and MUST NOT is an absolute prohibition.
  • The term SHOULD indicates a criterion that is normally required, but there may exist valid reasons in particular circumstances to ignore it. However, the full implications must be understood and carefully weighed before choosing a different course.
  • The term SUGGESTED is used instead of SHOULD when the criterion must be considered, but the valid reasons to not do so are even more common than for SHOULD.
  • The term MAY provides one way something can be done, e.g., to make it clear that the described implementation is acceptable.

Often a criterion is stated as something that SHOULD be done, or is SUGGESTED, because it may be difficult to implement or the costs to do so may be high.


To obtain a badge, all MUST and MUST NOT criteria must be met, all SHOULD criteria must be either met OR unmet with justification, and all SUGGESTED criteria have to be considered (each must be rated as met or unmet, but justification is not required unless noted otherwise). An answer of N/A ("not applicable"), where allowed, is considered the same as being met. In some cases, especially in the higher levels, justification and/or a URL may be required.

Some criteria have special markings that influence this:

  • {N/A allowed} - "N/A" ("Not applicable") is allowed.
  • {N/A justification} - "N/A" ("Not applicable") is allowed and requires justification.
  • {Met justification} - "Met" requires justification.
  • {Met URL} - "Met" requires justification with a URL.
  • {Future} - the answer to this criterion currently has no effect, but it may be required in the future.

A project must achieve the previous level to achieve the next level. In some cases SHOULD criteria become MUST in higher level badges, and some SUGGESTED criteria at lower levels become SHOULD or MUST in higher level badges. The higher levels also require more justification, because we want others to understand how the criteria are being met.

The (many) cryptographic criteria do not always apply, because some software has no need to directly use cryptographic capabilities. In those cases, answer N/A.

There is one implied passing criterion - every project MUST have a public website with a stable URL. This is required to create a badge entry in the first place.
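The qualification rules above can be sketched as a small function. This is an illustrative model, not the BadgeApp's actual code; the dictionary shape and field names are invented for the example. Each answer is "Met", "Unmet", or "N/A", optionally with a justification.

```python
# Illustrative model of the badge rules described above (not BadgeApp code).
# Each criterion is a dict with "answer" ("Met", "Unmet", or "N/A"),
# "category" ("MUST", "SHOULD", or "SUGGESTED"), and an optional
# "justification" string.

def passes(criteria: list) -> bool:
    """Return True if every criterion satisfies the rule for its category."""
    for c in criteria:
        answer = c["answer"]
        category = c["category"]
        justified = bool(c.get("justification"))
        if answer == "N/A":
            continue  # where N/A is allowed, it counts the same as met
        if category == "MUST" and answer != "Met":
            return False  # MUST (and MUST NOT) allow no exceptions
        if category == "SHOULD" and answer == "Unmet" and not justified:
            return False  # an unmet SHOULD requires a justification
        # SUGGESTED only needs to be rated; any answer is acceptable here
    return True
```

MUST NOT criteria are phrased as absolute prohibitions, so in this model they behave like MUST criteria about refraining from something. Criteria marked {Met justification} or {Met URL} would add further checks on the justification text, which this sketch omits.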


If you are not familiar with software development or running a FLOSS project, materials such as Producing Open Source Software by Karl Fogel may be helpful to you.


  • A project is an active entity that has project member(s) and produces project result(s). Its member(s) use project sites to coordinate and disseminate result(s). A project does not need to be a formal legal entity.
  • Project members are the group of one or more people or companies who work together to attempt to produce project results. Some FLOSS projects may have different kinds of members, with different roles, but that's outside our scope.
  • Project results are the desired results that the project members work together to produce. These are typically software, but project results may include other things as well. Criteria that refer to "the software produced by the project" refer to project results.
  • Project sites are the sites dedicated to supporting the development and dissemination of project results, and include, where applicable, the project website, repository, and download sites (see sites_https).
  • The project website, also called the project home page, is the main page on the world wide web (WWW) that a new user would typically visit to see information about the project; it may be the same as the project's repository site (this is often true on GitHub).
  • The project repository manages and stores the project results and their revision history. This is also called the project source repository, because we only require that it manage and store the editable versions, not the generated results (in many cases generated results are not stored in a repository).
  • A project security mechanism is a security mechanism provided by the delivered project's software.



  • do not require any specific technology, product, or service. For example, they do not require git, GitHub, or GitLab. The criteria do provide guidance and help for common cases, since that information can help people understand and meet the criteria. There is some special automation for projects using git or GitHub, to help users in those common cases, but they are not required. Thus, as new tools and capabilities become available, projects can quickly switch to them. As exceptions, the criteria do require a project web page and TLS.
  • do not require or forbid any particular programming language. They do require that additional measures be taken for certain kinds of programming languages, but that is different.
  • never require the use of proprietary software, proprietary service, or a proprietary technology, since many free software developers would reject such criteria. Projects are allowed to use them and depend on them.
  • do not require active development or user discussion within a project. Some highly mature projects rarely change and thus may have little activity. The criteria do, however, require that the project be responsive if vulnerabilities are reported to the project.
  • do not require any payment to obtain a badge.
  • do not require that all criteria be implemented at once; most projects implement them over time.

The passing level does not include criteria that would be impractical for a single-person project, e.g., something that requires a significant amount of money. Many FLOSS projects are small, and we do not want to disenfranchise them.


Our application gives every project entry a unique id, but that doesn't help people searching for the project. For our purposes, the real name of a project is the URL for its repository, and where that is not available, the project "front page" URL. We rate limit changes to the repo URL to prevent some nonsense. Projects normally have a human-readable name, but these names are not unique enough.
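The fallback rule for a project's "real name" can be shown in a short sketch. This is a hypothetical illustration; the function and field names are invented, not taken from the BadgeApp.

```python
# Hypothetical sketch of the naming rule above: prefer the repository URL,
# fall back to the project "front page" URL. Field names are invented.

def canonical_name(entry: dict) -> str:
    """Return the URL that serves as the project's real name, or ""."""
    return entry.get("repo_url") or entry.get("homepage_url") or ""
```

A rate limit on changes to the repository URL (as the text notes) would sit in the update path, not in this lookup.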


The paper Open badges for education: what are the implications at the intersection of open systems and badging? identifies three general reasons for badging systems (all are valid for this):

  1. Badges as a motivator of behavior. We hope that by identifying best practices, we'll encourage projects to implement those best practices if they do not do them already.
  2. Badges as a pedagogical tool. Some projects may not be aware of some of the best practices applied by others, or how they can be practically applied. The badge will help them become aware of them and ways to implement them.
  3. Badges as a signifier or credential. Potential users want to use projects that are applying best practices to consistently produce good results; badges make it easy for projects to signify that they are following best practices, and make it easy for users to see which projects are doing so.


We have chosen to use self-certification, because this makes it possible for a large number of projects (even small ones) to participate. There are millions of FLOSS projects, and paying third parties to independently evaluate each one does not scale. There's a risk that projects may make false claims, but we think the risk is small, users can check the claims for themselves, and false claims can be overridden. We also use automation to override false claims where we can be confident in the results.