OpenSSL AI Policy
Hello all, earlier in the year I proposed that OpenSSL develop an overarching AI policy. You can review my initial efforts and the corresponding community feedback. Following discussions with the BAC and TAC, please find a first draft below. It's important to note that:
This is a first draft. Please treat it as such and provide appropriate critical feedback
The draft policy is designed to be public policy rather than confidential or internal policy
The draft policy has been designed to be as unambiguous and simple as possible
The draft policy is not designed to restrict or prohibit the use of any specific AI tooling
The draft policy has not been reviewed legally nor tested in a court of law
The draft policy has been written in Australian English
The CLA will need to be reviewed and updated to support this policy.
Additionally, policies of this nature are designed to be updated to meet the needs of the OpenSSL Library and community as a whole. As such, once the policy is ratified, approved, and adopted it should be regularly reviewed (e.g. at least annually) to ensure it still meets the needs of the OpenSSL Library, aligns with advances in AI, and meets necessary regulatory or statutory requirements.
So please review and provide feedback, with a view to my setting up a poll to approve the policy so that it can (hopefully) go live sometime in Q1 2026.
Thank you!
OpenSSL Library AI Policy
Version: 2.0
Effective Date: [DATE]
License: This policy is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0). You are free to share, adapt, and build upon this policy for any purpose, including commercial use, provided you give appropriate credit.
TL;DR — Executive Summary
When AI tools are used to write or assist with contributions to the OpenSSL Library, contributors must:
Disclose AI involvement using the Co-authored-by trailer format
Review and test all AI-generated output before submission
Verify that AI output does not reproduce copyrighted or incompatibly-licensed code
This policy supplements existing contribution requirements. The standard Signed-off-by attestation continues to apply.
1. Purpose and Objectives
This policy establishes requirements for the use of AI tooling in contributions to the OpenSSL Library. It aims to:
Protect against AI slop — Prevent poorly reviewed, low-quality, or fabricated AI-generated content from entering the codebase
Prevent accidental harm — Mitigate risks from AI hallucinations, outdated practices, or insecure coding patterns
Ensure license compliance — Verify that AI-generated content does not reproduce material that is copyrighted or licensed incompatibly with the Apache License 2.0
Support regulatory compliance — Maintain AI usage records to support software supply chain transparency requirements under frameworks such as the EU Cyber Resilience Act and US Executive Order 14028
This policy applies to new contributions from its effective date. It does not apply retroactively to existing code.
2. Scope
This policy applies when AI tools are used in the creation of:
Code contributions (source files, scripts, configuration)
Documentation
Tests
Security reports and vulnerability disclosures
This policy supplements, and does not replace, existing OpenSSL contribution requirements including the Developer Certificate of Origin (DCO) attestation via Signed-off-by.
3. AI Disclosure Requirements
3.1 When Disclosure is Required
Disclosure is required when AI tools were used to generate or substantially assist with the contributed content. This includes tools such as GitHub Copilot, ChatGPT, Claude, Gemini, Cursor, Codeium, and similar.
Disclosure is not required for:
Spell-checking or grammar correction tools
IDE autocompletion of syntax (e.g., bracket matching, import suggestions)
Search engines used for research
3.2 Disclosure Format
AI involvement must be disclosed using the Co-authored-by Git trailer format:
Co-authored-by: TOOL-NAME <TOOL-EMAIL>
Examples:
Co-authored-by: GitHub Copilot <TOOL-EMAIL>
Co-authored-by: ChatGPT <TOOL-EMAIL>
Co-authored-by: Claude <TOOL-EMAIL>
Co-authored-by: Gemini <TOOL-EMAIL>
This format is machine-readable and consistent with existing Git conventions.
3.3 Multiple Tools
If multiple AI tools were used, include a Co-authored-by line for each.
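For illustration, a hypothetical commit message showing where the trailers go. Per Git convention, trailers form a block at the end of the message, one per line; the subject, body, author, and tool email addresses below are placeholders:

    Fix length check in demo encryption sample

    Corrects an off-by-one in the sample code. Drafted with AI
    assistance; reviewed, built, and tested before submission.

    Co-authored-by: GitHub Copilot <TOOL-EMAIL>
    Co-authored-by: Claude <TOOL-EMAIL>
    Signed-off-by: Jane Developer <jane@example.com>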
4. Contributor Responsibilities
When submitting AI-assisted contributions, the contributor must:
Review all output — Examine and understand every line of AI-generated content
Test the code — Verify functionality and ensure it builds and passes tests
Check for security issues — Review for vulnerabilities, hardcoded secrets, insecure patterns, and deprecated functions
Verify license compliance — Ensure no copyrighted or incompatibly-licensed code has been reproduced
Check dependencies — Verify that any suggested dependencies actually exist and are not hallucinated package names
Apply coding standards — Ensure the contribution follows OpenSSL coding conventions
A practical guideline: if a reviewer can readily identify that code was AI-generated without checking the trailer, more work is needed before submission.
5. Security Reports
5.1 AI Disclosure for Security Reports
If AI tools were used to identify or analyse a potential vulnerability, this must be disclosed in the report.
5.2 Verification Requirement
AI tools frequently generate false positives and fabricate vulnerabilities. Before submitting a security report:
Verify the vulnerability exists — Personally confirm the issue is real and behaves as described
Write your own report — Describe the issue in your own words based on your verified understanding; do not submit AI-generated reports verbatim
Investigating unverified reports consumes significant maintainer time. Contributors who repeatedly submit fabricated or unverified AI-generated security reports may have future reports deprioritised or ignored.
6. Unacceptable Practices
The following will result in contribution rejection:
Undisclosed AI involvement — Submitting AI-generated code without the required Co-authored-by trailer
Unreviewed AI output — Directly submitting AI-generated code without examination, testing, or understanding
Hallucinated dependencies — Code that references packages, libraries, or APIs that do not exist
Reproduced copyrighted material — AI output that contains code protected by copyright or incompatible licenses
7. Review and Enforcement
Maintainers may:
Request clarification on AI disclosure
Reject contributions lacking required disclosures
Request evidence that AI-generated code has been reviewed and tested
Repeated violations may result in increased scrutiny of future contributions.
8. Acknowledgements
This policy was developed with reference to AI policies and guidance from the curl project, the Apache Software Foundation, and the OpenSSF Best Practices Working Group.
9. Version History
Version | Date | Description
---|---|---
1.0 | 10 Dec 2025 | Initial draft
2.0 | 10 Dec 2025 | Refocused on AI-specific requirements; adopted machine-readable disclosure format
This policy is licensed under CC BY 4.0. Attribution: OpenSSL Foundation.
Paul Dale Wed 24 Dec 2025 10:58PM
You missed the point. I've signed a CLA. What happens if I submit AI assisted code without signing the new updated CLA? That's a legal gap.
James Bourne Sun 28 Dec 2025 9:52PM
@Paul Dale Merry Christmas! I think that would be relatively straightforward to address:
Code copyright covered by Apache 2.0
Historical committer submissions covered by CLA 1.0
Existing committer new submissions subject to CLA 2.0* **
New committer new contributions subject to CLA 2.0*
Date of signing subject to ratification of CLA 2.0. CLA 2.0 to be versioned and dated as it may well be updated again from time to time.
Yes? No? Otherwise, my next question is who oversees CLA signing and enforcement?
* Triggers signing of CLA 2.0 where contribution is not trivial
** Applies to updates to existing code previously submitted under CLA 1.0 or any new code (i.e. all code contributions are considered new and subject to CLA 2.0)
Michael Baentsch Wed 31 Dec 2025 6:58AM
@Paul Dale I do not see a legal gap when answering your question "What happens if I submit AI assisted code without signing the new updated CLA?" IMO it's clear: Anyone doing so violates the CLA. See argumentation above. But then again, I'm no lawyer, so take with a grain of salt. Happy New Year!
Clemens Lang Thu 1 Jan 2026 10:06PM
@Michael Baentsch Regarding
Finally a comment to @Clemens Lang : I have some sympathy for your statement "IIRC, we wanted to lower the barrier to contribution, not raise it." but in its consequence for security software like this and as applied to AI I have to reject it whole-heartedly: OpenSSL imo must raise the barrier for accepting AI-generated contributions, not lower it, let alone disregard it
I made this comment when the proposed policy still required explicitly stating in the commit message that AI was not used, i.e., when the policy made contributions harder for users who do not use AI.
I agree that we can place higher requirements on users who do use AI.
Richard Levitte (individual) Wed 17 Dec 2025 9:51AM
This policy supplements, and does not replace, existing OpenSSL contribution requirements including the Developer Certificate of Origin (DCO) attestation via Signed-off-by.
Mention of DCO is quite irrelevant for OpenSSL. That sentence could be reduced to:
This policy supplements, and does not replace, existing OpenSSL contribution requirements.
Richard Levitte (individual) Wed 17 Dec 2025 9:53AM
(yes, I know that some do git commit -s, which adds the Signed-off-by trailer. They are free to do so, but in practice, we ignore it completely)
Richard Levitte (individual) Wed 17 Dec 2025 10:16AM
- Security Reports
Everything in here should really apply to any issue raised, not just security reports.
While it seems that AI-generated security reports are currently the most popular, especially in bounty environments, I wouldn't be surprised if we see an increase in non-security AI-generated issues as well in the future. It would be a mistake to allow that under the guise of "but... it's not a security report!"
Richard Levitte (individual) Wed 31 Dec 2025 2:03PM
I see that there's much talk about how generated contents may conflict with our CLAs. However, considering our CLAs are derived (not to say straight up copied) from the Apache CLAs, it might be worth seeing what Apache has to say on the matter:
https://www.apache.org/legal/generative-tooling.html
It may be noted that we often have a much stricter interpretation of the CLAs than Apache does. Still, their considerations are worth a look.
Nicola Tuveri Mon 5 Jan 2026 12:32PM
@Richard Levitte (individual) you are entirely right, but as you note at the bottom, OpenSSL has been particularly strict about interpreting the CLA and licensing: see the ECCKiila PR for an example where the same contribution was acceptable under other Apache-licensed projects but not under OpenSSL's interpretation.
Compared with how strictly OpenSSL has historically interpreted the CLA, I believe the current direction of this AI policy proposal is not really compatible.
I'd like the two boards to provide their own view on this point, or on how they wish the perceived incompatibilities could be resolved. If the boards have no will to relax the standing CLA interpretations, I expect most of this work to be wasted, which would be a pity.
@Anton Arapov @Jon Ericson can we make this happen?
James Bourne Mon 5 Jan 2026 4:00PM
@Nicola Tuveri From what I'm reading as part of the discussion to date:
It's impossible to exclude AI assistance from any submission
The CLA needs to evolve to support AI-assisted development
The bar must be raised on contributor accountability in light of this.
Hence, the following framework should be developed and finalised:
AI Policy v1.0
CLA v2.0
Contributor Procedure and Checklist v1.0
Reviewer Procedure and Checklist v1.0
IMHO none of this will be wasted. It's a business and technical imperative that the project actively addresses AI to ensure its survivability, especially in light of the accountability mandates of supply-chain compliance frameworks.
I can revise these documents for general review.
Santosh Pandit Fri 2 Jan 2026 2:49PM
Great initiative. AI is here to stay and your policy introduction is timely. Below are suggested edits.
1. Add explicit regulatory framework context (Section 1)
Recommendation 1 (preamble): The policy should explicitly state why compliance frameworks matter to OpenSSL contributors. For example:
“This policy anticipates compliance frameworks affecting OpenSSL users. The EU Cyber Resilience Act requires cybersecurity documentation for products with digital elements. The NIST AI Risk Management Framework applies to United States federal agencies and contractors. The EU AI Act creates obligations for downstream use of OpenSSL in high risk applications. Contributors do not need to be compliance experts. This policy’s disclosure and verification requirements serve these purposes automatically and reduce the compliance burden for contributors.”
Recommendation 2: If the above suggestion is too verbose, you could instead extend the 4th bullet "...and reduce the regulatory burden on contributors including downstream."
2. Add cryptography specific code review appendix
While playing with liboqs, I found that AI does not always generate the best code. I do not know if other cryptographers have similar experience. If yes, perhaps all of us could benefit from concrete guidance for identifying AI generated cryptographic weaknesses. Otherwise I would find it difficult to attest under 4.3. This would distribute expertise and create consistent review standards, directly addressing the highest risk code patterns.
Recommendation: If supported by community, create “Appendix A: Reviewers’ Guide to AI Generated Cryptographic Code Patterns” covering:
a. Cryptography red flags (deprecated algorithms without justification, hardcoded keys or IVs, timing-dependent comparisons for cryptographic values, incorrect key derivation, constant or predictable IV generation); the timing-comparison case is sketched after this list
b. General AI code patterns (over commenting, hallucinated dependencies, non idiomatic variable names)
c. Reviewer protocol (request revision rather than immediate rejection, collaborate with the contributor when needed)
d. Escalation guidance (cryptography maintainers should review any code with more than two cryptography specific red flags before merge)
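As a concrete illustration of the timing-dependent comparison red flag in (a), here is a minimal C sketch. CRYPTO_memcmp() is OpenSSL's actual constant-time comparator (declared in <openssl/crypto.h>); the surrounding function names are hypothetical:

    #include <string.h>
    #include <openssl/crypto.h>

    /* Red flag: memcmp() returns as soon as a byte differs, so the
     * comparison time leaks how many leading bytes of a MAC/tag match. */
    int tag_equal_leaky(const unsigned char *a, const unsigned char *b, size_t n)
    {
        return memcmp(a, b, n) == 0;
    }

    /* Preferred: CRYPTO_memcmp() examines every byte regardless of content,
     * so its running time does not depend on where the values differ. */
    int tag_equal(const unsigned char *a, const unsigned char *b, size_t n)
    {
        return CRYPTO_memcmp(a, b, n) == 0;
    }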
3. Need to clarify AI tool attribution does not substitute human responsibility.
Recommended clarification in Section 3.2:
“Named AI tools in this section are metadata only and do not create any responsibility for the AI tools' creators.”
4. Extend verification requirements to all community issues
Recommendation: Require AI disclosure and verification for all issue types (bugs, features, documentation and security reports), applying the same standard regardless of category. Perhaps a distinction could be between “verified” and “unverified” issues, not “security” and “other.”
5. Provide optional self assessment checklist for contributors
A simple checklist could help contributors avoid both over reviewing and under reviewing AI assisted changes.
Recommendation: Create an optional (linked, not inline) “AI Verification Checklist” covering:
a. Licensing checks (using the tool’s similar code detection if available, manual searches for exact matches in public repositories, checking that dependencies actually exist)
b. Security checks (no deprecated cryptographic algorithms without explicit justification, no hardcoded keys or secrets, correct use of cryptographically secure random number generators, safe error handling); the RNG check is illustrated after this list
c. Functionality checks (code builds cleanly, tests pass, changes follow OpenSSL coding conventions)
d. Documentation checks (commit message explains what the change does, code comments explain why design choices were made rather than restating the code)
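To make the RNG check in (b) concrete, a minimal C sketch. RAND_bytes() is OpenSSL's actual CSPRNG interface (declared in <openssl/rand.h>); the helper names are hypothetical:

    #include <stdlib.h>
    #include <openssl/rand.h>

    /* Red flag: rand() is not cryptographically secure, yet AI tools
     * sometimes suggest it for keys, IVs, or nonces. */
    void make_iv_weak(unsigned char iv[16])
    {
        for (int i = 0; i < 16; i++)
            iv[i] = rand() & 0xff;
    }

    /* Preferred: RAND_bytes() draws from OpenSSL's CSPRNG; it returns 1
     * on success and the caller must check for failure. */
    int make_iv(unsigned char iv[16])
    {
        return RAND_bytes(iv, 16) == 1;
    }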
6. Optional strengthening of acknowledgements
Recommendation: The acknowledgements could explicitly reference the NIST AI Risk Management Framework, ISO/IEC 42001, the EU AI Act, and CISA guidance, in addition to the curl, Apache, OpenInfra, and OpenSSF materials already mentioned.
Nicola Tuveri Mon 5 Jan 2026 12:24PM
@Santosh Pandit I actually think that most, if not all, of these recommendations would negatively impact the document: such a verbose document does not add much value for readers, as it is not really actionable.
The various inputs and considerations providing a rationale for the streamlined policy could be part of a blog post covering what went into preparing an AI policy maybe.
Santosh Pandit Mon 5 Jan 2026 7:32PM
@Nicola Tuveri - I appreciate your perspective on keeping the policy concise.
@James Bourne - I'll leave it to you to assess which substantive points (particularly around CLA alignment and scope) merit inclusion, shortened as needed. I'm happy to help refine specific wording if useful.
James Bourne Mon 5 Jan 2026 4:29PM
And since we all love a good infographic 😀
travis parker Sat 10 Jan 2026 8:04AM
A thoughtful and pragmatic first draft that balances transparency, code quality, and legal safety without discouraging AI use. Clear disclosure rules and contributor responsibility make it well-suited for a security-critical project like OpenSSL.
James Bourne · Wed 24 Dec 2025 11:25AM
@Paul Dale My thoughts: code already submitted under the existing CLAs should remain covered by them to maintain legal certainty. New submissions from new committers should be subject to the revised CLA and corresponding AI policies. This is to reduce friction and minimise administrative overhead. Apache 2.0 has the concept of inbound=outbound licensing, which is described in Section 5:
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
OpenSSL's CLA is modelled on Apache's, so it should be retained as a legal instrument to document provenance. Perhaps all that is required is for the AI policy components to be included in the CLA so OpenSSL can determine whether contributed code was produced using NI, AI, or a hybrid, and whether the author has exhaustively determined that the AI-generated component does not contain copyrighted hallucinations, slopsquatting vulnerabilities, etc. Anything purely AI-generated is not protected by copyright in Australia. Would this imply that the AI-generated components are encumbrance-free from a licensing perspective on the inbound side?
Maybe time to reach out to Apache and various government bodies to obtain better guidance?
More here https://www.ag.gov.au/rights-and-protections/copyright/copyright-and-artificial-intelligence-reference-group-cairg