OpenSSL Communities

OpenSSL AI Policy

James Bourne Tue 9 Dec 2025 4:07PM

Hello all, I proposed the need for OpenSSL to develop an overarching AI policy earlier in the year. You can review my initial efforts and corresponding community feedback. After discussions with the BAC and TAC, please find below a first draft. It's important to note that:

  1. This is a first draft. Please treat it as such and provide appropriate critical feedback

  2. The draft policy is designed to be public policy rather than confidential or internal policy

  3. The draft policy has been designed to be as unambiguous and simple as possible

  4. The draft policy is not designed to restrict or prohibit the use of any specific AI tooling

  5. The draft policy has not undergone legal review, nor has it been tested in a court of law

  6. The draft policy has been written in Australian English

  7. The CLA will need to be reviewed and updated to support this policy.

Additionally, policies of this nature are designed to be updated to meet the needs of the OpenSSL Library and community as a whole. As such, once the policy is ratified, approved, and adopted it should be regularly reviewed (e.g. at least annually) to ensure it still meets the needs of the OpenSSL Library, aligns with advances in AI, and meets necessary regulatory or statutory requirements.

So please review and provide feedback, with a view to my setting up a poll to approve the policy so that it can (hopefully) go live sometime in Q1 2026.

Thank you!


OpenSSL Library AI Policy

Version: 2.0
Effective Date: [DATE]
License: This policy is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0). You are free to share, adapt, and build upon this policy for any purpose, including commercial use, provided you give appropriate credit.


TL;DR — Executive Summary

When AI tools are used to write or assist with contributions to the OpenSSL Library, contributors must:

  1. Disclose AI involvement using the Co-authored-by trailer format

  2. Review and test all AI-generated output before submission

  3. Verify that AI output does not reproduce copyrighted or incompatibly-licensed code

This policy supplements existing contribution requirements. The standard Signed-off-by attestation continues to apply.


1. Purpose and Objectives

This policy establishes requirements for the use of AI tooling in contributions to the OpenSSL Library. It aims to:

  1. Protect against AI slop — Prevent poorly reviewed, low-quality, or fabricated AI-generated content from entering the codebase

  2. Prevent accidental harm — Mitigate risks from AI hallucinations, outdated practices, or insecure coding patterns

  3. Ensure license compliance — Verify that AI-generated content does not reproduce material that is copyrighted or licensed incompatibly with the Apache License 2.0

  4. Support regulatory compliance — Maintain AI usage records to support software supply chain transparency requirements under frameworks such as the EU Cyber Resilience Act and US Executive Order 14028

This policy applies to new contributions from its effective date. It does not apply retroactively to existing code.


2. Scope

This policy applies when AI tools are used in the creation of:

  • Code contributions (source files, scripts, configuration)

  • Documentation

  • Tests

  • Security reports and vulnerability disclosures

This policy supplements, and does not replace, existing OpenSSL contribution requirements including the Developer Certificate of Origin (DCO) attestation via Signed-off-by.


3. AI Disclosure Requirements

3.1 When Disclosure is Required

Disclosure is required when AI tools were used to generate or substantially assist with the contributed content. This includes tools such as GitHub Copilot, ChatGPT, Claude, Gemini, Cursor, Codeium, and similar tools.

Disclosure is not required for:

  • Spell-checking or grammar correction tools

  • IDE autocompletion of syntax (e.g., bracket matching, import suggestions)

  • Search engines used for research

3.2 Disclosure Format

AI involvement must be disclosed using the Co-authored-by Git trailer format:

Co-authored-by: TOOL-NAME <[email protected]>

Examples:

Co-authored-by: GitHub Copilot <[email protected]>
Co-authored-by: ChatGPT <[email protected]>
Co-authored-by: Claude <[email protected]>
Co-authored-by: Gemini <[email protected]>

This format is machine-readable and consistent with existing Git conventions.
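As an illustration of that machine-readability, a commit message can be scanned for these trailers in a few lines of Python. The addresses below are just the examples from this section; a real check would maintain its own allow-list of known AI-tool identities.

```python
import re

# Matches Git trailer lines of the form:
#   Co-authored-by: TOOL-NAME <[email protected]>
TRAILER_RE = re.compile(
    r"^Co-authored-by:\s*(?P<tool>.+?)\s*<(?P<email>[^>]+)>\s*$",
    re.MULTILINE,
)

# Illustrative addresses taken from the examples above; a real deployment
# would maintain its own list of recognised AI-tool addresses.
AI_ADDRESSES = {
    "[email protected]",
    "[email protected]",
    "[email protected]",
    "[email protected]",
}

def ai_disclosures(commit_message: str) -> list[str]:
    """Return the names of AI tools disclosed via Co-authored-by trailers,
    ignoring human co-authors whose addresses are not on the list."""
    return [
        m.group("tool")
        for m in TRAILER_RE.finditer(commit_message)
        if m.group("email").lower() in AI_ADDRESSES
    ]
```

Because the trailer is a standard Git convention, the same scan works equally well over `git log --format=%B` output when auditing an existing branch.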

3.3 Multiple Tools

If multiple AI tools were used, include a Co-authored-by line for each.


4. Contributor Responsibilities

When submitting AI-assisted contributions, the contributor must:

  1. Review all output — Examine and understand every line of AI-generated content

  2. Test the code — Verify functionality and ensure it builds and passes tests

  3. Check for security issues — Review for vulnerabilities, hardcoded secrets, insecure patterns, and deprecated functions

  4. Verify license compliance — Ensure no copyrighted or incompatibly-licensed code has been reproduced

  5. Check dependencies — Verify that any suggested dependencies actually exist and are not hallucinated package names

  6. Apply coding standards — Ensure the contribution follows OpenSSL coding conventions

A practical guideline: if a reviewer can readily identify that code was AI-generated without checking the trailer, more work is needed before submission.
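For point 5, one quick local sanity check — sketched here in Python purely as an illustration, since hallucinated package names are a cross-language problem — is to ask the interpreter whether a named dependency actually resolves in the build environment:

```python
from importlib.util import find_spec

def dependency_exists(module_name: str) -> bool:
    """True if the named top-level module can actually be imported in this
    environment; a hallucinated package name will not resolve."""
    return find_spec(module_name) is not None
```

This only confirms local importability; verifying that the package exists in its upstream registry and has trustworthy provenance remains the contributor's responsibility.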


5. Security Reports

5.1 AI Disclosure for Security Reports

If AI tools were used to identify or analyse a potential vulnerability, this must be disclosed in the report.

5.2 Verification Requirement

AI tools frequently generate false positives and fabricate vulnerabilities. Before submitting a security report:

  1. Verify the vulnerability exists — Personally confirm the issue is real and behaves as described

  2. Write your own report — Describe the issue in your own words based on your verified understanding; do not submit AI-generated reports verbatim

Investigating unverified reports consumes significant maintainer time. Contributors who repeatedly submit fabricated or unverified AI-generated security reports may have future reports deprioritised or ignored.


6. Unacceptable Practices

The following will result in contribution rejection:

  • Undisclosed AI involvement — Submitting AI-generated code without the required Co-authored-by trailer

  • Unreviewed AI output — Directly submitting AI-generated code without examination, testing, or understanding

  • Hallucinated dependencies — Code that references packages, libraries, or APIs that do not exist

  • Reproduced copyrighted material — AI output that contains code protected by copyright or incompatible licenses


7. Review and Enforcement

Maintainers may:

  • Request clarification on AI disclosure

  • Reject contributions lacking required disclosures

  • Request evidence that AI-generated code has been reviewed and tested

Repeated violations may result in increased scrutiny of future contributions.


8. Acknowledgements

This policy was developed with reference to AI policies and guidance from the curl project, the Apache Software Foundation, and the OpenSSF Best Practices Working Group.


9. Version History

Version | Date        | Description
1.0     | 10 Dec 2025 | Initial draft
2.0     | 10 Dec 2025 | Refocused on AI-specific requirements; adopted machine-readable disclosure format


This policy is licensed under CC BY 4.0. Attribution: OpenSSL Foundation.



Clemens Lang Tue 9 Dec 2025 9:00PM

I have several problems with this:

  • First and foremost, this isn't just an AI Policy, it adds lots of other requirements for contribution. Either the title shouldn't be AI Policy, or all rules not related to AI should be split out of this document and discussed separately.

  • "The contributor bears full responsibility for ensuring submitted code […] free from security vulnerabilities, malicious content […]"
    What does bearing full responsibility mean here? Are you going to sue people that write the next Heartbleed, even if they didn't do it on purpose? If you're not going to do that, why is this listed here?
    Also, do we believe the next Jia Tan is going to care about any of this? They will ignore and contribute malicious code anyway, so what's the point of having this rule against malicious content?

  • "This policy applies to all contributions to the OpenSSL Library, including […] Security reports and vulnerability disclosures".
    Yeah, you're not going to ignore a security report just because the reporter didn't state "[t]he legal name of the corporation [and] [t]he legal domicile (country/jurisdiction) of the corporation". So if you're not going to do that, why does the policy as written require it? It feels like this problem wouldn't occur if this was an AI-only policy (where all rules would also apply to vulnerability reports), or if the rules for vulnerability reporting were split out of this policy.

  • Regarding provenance declarations: Let's make this machine-readable please. I've seen `Co-authored-by: Gemini <[email protected]>` for example, that feels like a reasonable approach. That would simplify identifying AI-assisted commits automatically, much better than a free-form text line that might have a typo.

  • Speaking of provenance declarations: Why is a Signed-off-by not enough to certify this is my own work? That's at least something many other projects already use, and it wouldn't introduce yet another OpenSSL-specific rule that then presents a hurdle for first-time contributors. IIRC, we wanted to lower the barrier to contribution, not raise it.

  • Why does the provenance declaration for corporation contributions need to name the company and its seat? Is the author's email domain not already a unique identifier for the company?

  • And finally, regarding provenance: why is this even required at all? You'll continue to ship old existing code that doesn't have this, so what's your solution to mark the provenance of that code? How are you going to handle contributions from essentially anonymous authors using GitHub handles in the future? Do we now need to start mailing copies of passports for individual contributor agreements, and if not, what's your plan for dealing with the provenance of such contributions, and why can't that plan also apply to all other contributions so that this stack of paperwork isn't required?

  • Section 6.3, "By submitting a contribution, the contributor attests that […] [t]he work does not infringe any copyright, patent, or other intellectual property rights"
    I don't think it is realistic to expect a patent review before contribution, and if this becomes the rule, I don't think I could recommend individuals to contribute to OpenSSL anymore without talking to a lawyer first. Is this really what we want?

I agree with all of the parts of this proposal that actually deal with AI. I think the rest needs significant revision.


James Bourne Tue 9 Dec 2025 9:41PM

@Clemens Lang. All excellent points. Thank you. I'll revise.

Your comments do raise the question of whether general code contribution requirements should be reviewed anyway and updated to support an AI Policy?


Nicola Tuveri Wed 10 Dec 2025 10:26AM

@James Bourne @Clemens Lang actually most of those requirements are already (with different wording, but basically equivalent, if not more restrictive) in the Individual or Corporate CLA that any OpenSSL contributor has already signed.

https://openssl-library.org/policies/cla/

All the code in OpenSSL today is covered by those terms: they cover both provenance (whether you are submitting your own code or submitting on behalf of others) and IPRs.

I believe if an AI policy is required on these matters, it should at most provide a minimal explanation of how those CLA terms apply in the "age of AI".

Signed-off-by is not required by our current policy: you have already signed your CLA separately if your code is being considered for inclusion, and it applies to any contribution you send thereafter.

Co-authored-by: is regularly used to share co-authorship, but whoever submits the commits for consideration (whoever opens the PR) is the one who has undersigned the terms of the CLA, and is responsible for ensuring that no commit they send for consideration breaches those terms.