OpenSSL Artificial Intelligence (AI) Policy: Request for Feedback

Background
We at FireDaemon were challenged last week over the AI-generated imagery placed on one of our webpages to promote the upcoming OpenSSL Conference. The comment received can be summarised as: "How can I trust OpenSSL when you use AI-generated imagery? How extensive and pervasive is your use of AI in your business and by OpenSSL?" On the surface, this might appear inconsequential. However, the core issue raised was trust. How trustworthy is OpenSSL as a project? How is the project influenced by the use of AI?
After a brief chat with Tim Hudson, it would appear that the project does not have a comprehensive AI policy. The project does have restrictions around defect reports submitted by AI bots, which must be manually vetted first.
Foundation and Corporation Small Business BAC Proposal
So, as a member of the BACs, I would like to make a formal submission to the Foundation and Corporation that we develop and publish an AI Policy akin to the Security Policy (https://openssl-library.org/policies/general/security-policy/). Before that formal submission occurs, I need consensus from members of the BACs and TACs. I'm happy to develop the first draft of the policy. The AI Policy could encompass the following:
- Scope and Definitions
- Core Policy Principles Statement
  - Ethics
  - Values
  - Transparency
  - Fairness
  - Accountability
- Openness and Licensing
  - Licensing Requirements of AI Models
  - Definition of Derivative Works
  - Use of OpenSSL Materials (e.g. Web Sites, Communities, Source Code) as Training Data
  - Licensing Requirements for AI-Generated Derivative Works
- Permitted and Acceptable Use of AI within the OpenSSL Ecosystem
  - AI-Generated Code Submission
  - AI-Based Code Testing
  - AI-Based Code Review
  - AI-Based Defect Reporting
  - List of Permitted AI-Based Tools Used by Submitters and Contributors
  - List of Permitted AI-Based Tools Used for Testing, Publishing, and Delivery
  - Mechanisms to Address Security, Reliability, and Misuse of AI
- Permitted and Acceptable Use of AI by the Corporation, Foundation, TACs, and BACs
  - Scope and Use of AI Tools
  - List of Permitted AI-Based Tools Used
- Policy Enforcement and Revisions
- Framework Policy Alignment Statement (e.g. ISO 27001:2022, ISO 42001:2023, NIST AI 600-1, NIST 800-53, AICPA TSC 2017, MPA CSBP, etc.)
Summary
- As a major open-source project, OpenSSL should develop and publish a public AI policy.
- Consensus is needed from members of the BACs and TACs that a policy should be developed and published, and the corresponding recommendation then made to the Foundation and Corporation.
- The contents of the policy should be discussed and drafted openly, based on the outline above.
- The Foundation and Corporation accept the proposal and commit to developing and publishing the policy.
Please feel free to provide your feedback and indicate whether you agree that this proposal should proceed.
Michael Richardson · Tue 12 Aug 2025 6:53PM
I agree that the project needs a policy.
You listed quite a number of things.
Some seem to assume conclusions not yet reached, but that's okay for now.
I personally care about the theft of IPR, the lack of accountability, and the environmental destruction. {50 BILLION gallons of water in Texas for cooling}
Paul Yang · Tue 12 Aug 2025 1:55AM
I fully agree with this. Besides the scope you have posted, I also suggest adding AI-assisted code review to the permitted list, so that documentation typos and grammar polishing no longer burden the developers.