OpenSSL Artificial Intelligence (AI) Policy: Request for Feedback

Background
We at FireDaemon were challenged last week regarding AI-generated imagery, placed on one of our webpages, promoting the upcoming OpenSSL Conference. The comment received can be summarised as: "How can I trust OpenSSL when you use AI-generated imagery? How far and pervasive is your use of AI in your business and by OpenSSL?" On the surface, this might appear inconsequential. However, the core issue raised was trust. How trustworthy is OpenSSL as a project? How is the project influenced by the use of AI?
After a brief chat with Tim Hudson, it appears the project does not have a comprehensive AI policy. The project does have restrictions around the submission of defect reports from AI bots: such defects have to be manually vetted first.
Foundation and Corporation Small Business BAC Proposal
So, as a member of the BACs, I would like to make a formal submission to the Foundation and Corporation that we develop and publish an AI Policy akin to the Security Policy (https://openssl-library.org/policies/general/security-policy/). Before that formal submission occurs, I need consensus from members of the BACs and TACs. I'm happy to develop the first draft of the policy. The AI Policy could encompass the following:
- Scope and Definitions
- Core Policy Principles Statement
  - Ethics
  - Values
  - Transparency
  - Fairness
  - Accountability
- Openness and Licensing
  - Licensing Requirements of AI Models
  - Definition of Derivative Works
  - Use of OpenSSL Materials (e.g. Web Sites, Communities, Source Code) as Training Data
  - Licensing Requirements for AI-Generated Derivative Works
- Permitted and Acceptable Use of AI within the OpenSSL Ecosystem
  - AI-Generated Code Submission
  - AI-Based Code Testing
  - AI-Based Code Review
  - AI-Based Defect Reporting
  - List of Permitted AI-Based Tools Used by Submitters and Contributors
  - List of Permitted AI-Based Tools Used for Testing, Publishing, and Delivery
  - Mechanisms to Address Security, Reliability, and Misuse of AI
- Permitted and Acceptable Use of AI by the Corporation, Foundation, TACs, and BACs
  - Scope and Use of AI Tools
  - List of Permitted AI-Based Tools Used
- Policy Enforcement and Revisions
- Framework Policy Alignment Statement (e.g. ISO 27001:2022, ISO 42001:2023, NIST AI 600-1, NIST 800-53, AICPA TSC 2017, MPA CSBP, etc.)
Summary
- As a major open-source project, OpenSSL should develop and publish a public AI policy.
- Consensus is needed from members of the BACs and TACs that a policy should be developed and published, and the corresponding recommendation made to the Foundation and Corporation.
- The contents of the policy should be discussed and drafted openly, based on the outline above.
- The Foundation and Corporation accept the proposal and commit to developing and publishing the policy.
So please feel free to provide your feedback and indicate whether you agree that this proposal should proceed.
Michael Richardson Tue 12 Aug 2025 6:53PM
I agree that the project needs a policy.
You listed quite a number of things.
Some seem to assume conclusions not yet reached, but that's okay for now.
I personally care about the theft of IPR, the lack of accountability, and the environmental destruction. {50 BILLION gallons of water in Texas for cooling}

Randall Becker Mon 25 Aug 2025 8:55PM
My sense on this is that having a policy is business critical. I consider AI to be problematic on many levels, particularly IP theft and generated code quality.

Poll Created Mon 25 Aug 2025 10:17PM
OpenSSL AI Policy Development Initiative · Closed Tue 2 Sep 2025 2:00PM
What is the decision you need to make?
Following today's BAC meeting, the Committee is soliciting community feedback to support our recommendation that the Business develop AI policies and procedures for organisational governance. The scope includes assessing whether current OpenSSL licensing and contributor agreements adequately cover AI-related scenarios.
Why is this important?
Risk Management and Security: OpenSSL's core mission involves cryptographic security, and AI introduces new attack vectors and vulnerabilities. AI systems can potentially be used to exploit cryptographic weaknesses, generate sophisticated attacks, or compromise secure communications. Having clear policies helps the organisation understand and mitigate these risks.
Technology Integration: As AI becomes increasingly integrated into software development workflows, from code generation to automated testing and vulnerability detection, OpenSSL needs guidelines for how and when to leverage these tools safely in its development processes without compromising the integrity of its security-critical software.
Supply Chain Considerations: OpenSSL is foundational infrastructure used by countless organisations. AI policies would help establish standards for how AI-generated or AI-assisted code contributions are evaluated, ensuring the same rigorous security standards apply regardless of whether human developers or AI tools were involved in the development process.
Stakeholder Confidence: Given OpenSSL's critical role in global internet security, having transparent AI policies demonstrates responsible governance to the business community, government agencies, and other stakeholders who depend on OpenSSL's reliability and security.
Competitive Positioning: Proactive AI governance can position OpenSSL as a thought leader in secure AI adoption within the open source cryptography space, potentially attracting partnerships and funding from organisations prioritising responsible AI development.
Regulatory Preparedness: As AI regulations evolve globally, having established internal policies positions OpenSSL ahead of potential compliance requirements and demonstrates proactive responsibility to regulators and business partners.
Intellectual Property Protection and Copyright Compliance: OpenSSL must mitigate the risk of inadvertently incorporating copyrighted code through AI-assisted development tools. Many AI coding assistants are trained on vast repositories of code, including proprietary and copyrighted material, and may generate suggestions that closely resemble or directly reproduce protected code. Without clear AI policies, OpenSSL could face significant legal exposure from copyright infringement claims, which would be particularly damaging given OpenSSL's widespread use as foundational internet infrastructure.
Code Quality and Consistency Standards: OpenSSL must establish AI policies to maintain its rigorous code quality, formatting, and architectural standards. AI-generated code often lacks the nuanced understanding of OpenSSL's specific coding conventions, security requirements, and architectural patterns that have been developed over decades of cryptographic software development.
What are you asking of people in this proposal?
We are seeking your approval to recommend that the Business initiate the development of AI policies and procedures to address OpenSSL's organisational needs.
If you have an Objection, please explain why and suggest a modification to the proposal that ensures safety.
Remember, we are seeking consent for a 'good enough' decision that is 'safe to try', so we can make a good decision for our organisation.
Results
| Option | % of points | Voters |
|---|---|---|
| Consent | 100.0% | 11 |
| Objection | 0.0% | 0 |
| Undecided | 0% | 37 |
11 of 48 people have participated (22%)
Michael Richardson Tue 26 Aug 2025 8:10PM
I am unable, from first reading of this email, to understand what if anything is being proposed. At most, I read:
> We are seeking your approval to recommend that the Business
> initiate the development of AI policies and procedures to address
> OpenSSL's organisational needs.
That is, I'm being asked to approve development of a policy.
What will happen after this policy is developed, is unclear.
Who will approve it? When will that happen?
Will the approval be yes/no, or is it a starting point for discussion?
I see two buttons for "Consent", one for "Safe to Try" and two for Objection.
Ah, way down, I see some real content... but it's still apparently a proposal to propose something. Maybe you think we all read HTML formatted email?
I think you are over-thinking this, and you are investing too much in form, and not much in function.
I'm also not sure where this reply-to goes.

James Bourne Tue 26 Aug 2025 10:01PM
@Michael Richardson. Thank you for your feedback. I understand your frustration, but I have no control over the emails that are generated and sent by Lumio (this platform). Regarding the location of your reply, it is posted in the OpenSSL Communities Lumio Corporation Business Small Business forum for all to read.
As a BAC member, we need to seek input from our respective communities before submitting a proposal to the OpenSSL Corporation and/or the Foundation Board, and before proceeding with any tangible work. In an attempt to get traction on this particular subject:
* I attempted to determine whether there is any interest in, or thoughts on, the development of an AI policy, procedure, and process by posting somewhat generically and providing a very high-level policy outline.
* Based on discussions at the monthly Corporation BAC meeting, I established a poll to allow members of the Corporation Small Business community to consent or object to a recommendation to the respective BACs to authorise and approve the scoping and implementation of necessary policies, procedures, and processes. I included various business justifications.
To answer your questions:
* You are being asked to provide your consent to make the recommendation to proceed, since you are a community member.
* If consent outweighs objections, and members of the BACs agree, then we will make a formal recommendation to the Business and/or Foundation Boards.
* If the Business and/or Foundation Board accepts the recommendation, then work will begin scoping and preparing the necessary policy documents, procedures, and processes.
* We will most probably deliver a series of "straw man" policies, which may be published publicly to solicit feedback and comments.
* Given the contentious and rapidly evolving nature of the subject area, we would look to have these policies, procedures, and processes reviewed externally by a legal entity with multi-jurisdictional competencies in this area at some point.
In terms of overthinking this, I have to navigate obligations to the community, the BACs, and the Business and Foundation Boards. So agreed: plenty of form first, which hopefully leads to function later on.
Thanks for your feedback. We must hear from community members. I've taken everything you have suggested on board and have hopefully responded suitably and appropriately.
Michael Richardson Wed 27 Aug 2025 12:31AM
James Bourne (via OpenSSL Communities) wrote:
> @Michael Richardson. Thank you for your feedback. I understand your
> frustration, but I have no control over the emails that are generated
> and sent by Lumio (this platform). Regarding the location of your
> reply, it is posted in the OpenSSL Communities (
> https://openssl-communities.org/hub-businesses-small/ ) Lumio
> Corporation Business Small Business forum for all to read.
Maybe we should just use email then.
> * I attempted to determine whether there is any interest or thoughts
> regarding the development of an AI policy, procedure, and process by
> posting somewhat generically and providing a very high-level policy
> outline
Yes, we should develop a policy.
> * Based on discussions at the monthly Corporation BAC meeting, I
> established a poll to allow members of the Corporation Small Business
I'm unclear when these are.
I think I'm a small business member.
> In terms of overthinking this, I have to navigate obligations to the
> community, BACs, Business and Foundation Boards. So agreed, plenty of
> form first, but which hopefully leads to function later on.

James Bourne Wed 3 Sep 2025 1:43AM
Thanks to everyone who responded. Will make a formal recommendation via the Corporation and Foundation BACs. 🎉
Paul Yang · Tue 12 Aug 2025 1:55AM
I fully agree with this. Besides the scope you have just posted, I also suggest adding AI-assisted code review to the permitted list, so that documentation typos and grammar polishing no longer burden the developers.