OpenSSL Communities
Mon 22 Sep 2025 9:05PM

AI and OpenSSL

Paul Dale

Over in the small business community, James asked about AI use in OpenSSL. He wrote it up better than I would have, so I'm grabbing his text wholesale. I'll follow up with his poll.

Background
We at FireDaemon were challenged last week over AI-generated imagery, placed on one of our webpages, promoting the upcoming OpenSSL Conference. The comment received can be summarised as: "How can I trust OpenSSL when you use AI-generated imagery? How far and pervasive is your use of AI in your business and by OpenSSL?" On the surface, this might appear inconsequential. However, the core issue raised was trust. How trustworthy is OpenSSL as a project? How is the project influenced by the use of AI?

So, after a brief chat with Tim Hudson, it would appear that the project does not have a comprehensive AI policy. It does, however, restrict the submission of defect reports from AI bots: any such defect has to be manually vetted first.

Foundation and Corporation Small Business BAC Proposal
So, as a member of the BACs, I would like to make a formal submission to the Foundation and Corporation proposing that we develop and publish an AI Policy akin to the Security Policy (https://openssl-library.org/policies/general/security-policy/). Before that formal submission occurs, I need consensus from members of the BACs and TACs. I'm happy to develop the first draft of the policy. The AI Policy could encompass the following:

  • Scope and Definitions

  • Core Policy Principles Statement

    • Ethics

    • Values

    • Transparency

    • Fairness

    • Accountability

  • Openness and Licensing

    • Licensing Requirements of AI Models

    • Definition of Derivative Works

    • Use of OpenSSL Materials (e.g. Web Sites, Communities, Source Code) as Training Data

    • Licensing Requirements for AI-Generated Derivative Works

  • Permitted and Acceptable Use of AI within the OpenSSL Ecosystem

    • AI-Generated Code Submission

    • AI-Based Code Testing

    • AI-Based Code Review

    • AI-Based Defect Reporting

    • List of Permitted AI-Based Tools Used by Submitters and Contributors

    • List of Permitted AI-Based Tools Used For Testing, Publishing, and Delivery

    • Mechanisms to address security, reliability, and misuse of AI

  • Permitted and Acceptable Use of AI by the Corporation, Foundation, TACs, and BACs

    • Scope and Use of AI Tools

    • List of Permitted AI-Based Tools Used

  • Policy Enforcement and Revisions

    • Framework Policy Alignment Statement (e.g. ISO 27001:2022, ISO 42001:2023, NIST AI 600-1, NIST 800-53, AICPA TSC 2017, MPA CSBP, etc.)

Summary

  • As a major open-source project, OpenSSL should develop and publish a public AI policy

  • Need consensus from members of the BACs and TACs that a policy needs to be developed and published, and make the corresponding recommendation to the Foundation and Corporation

  • The contents of the policy should be discussed and drafted openly based on the outline above

  • Foundation and Corporation accept the proposal and commit to developing and publishing the policy.

So please provide your feedback and indicate whether this proposal should proceed.

Poll Created Mon 22 Sep 2025 9:06PM

OpenSSL AI Policy Development Initiative · Closed Sat 27 Sep 2025 9:00PM

What is the decision you need to make?

Following today's BAC meeting, the Committee is soliciting community feedback to support our recommendation that the Business develop AI policies and procedures for organisational governance. The scope includes assessing whether current OpenSSL licensing and contributor agreements adequately cover AI-related scenarios.

Why is this important?

Risk Management and Security: OpenSSL's core mission involves cryptographic security, and AI introduces new attack vectors and vulnerabilities. AI systems can potentially be used to exploit cryptographic weaknesses, generate sophisticated attacks, or compromise secure communications. Having clear policies helps the organisation understand and mitigate these risks.

Technology Integration: As AI becomes increasingly integrated into software development workflows, from code generation to automated testing and vulnerability detection, OpenSSL needs guidelines for how and when to leverage these tools safely in its development processes without compromising the integrity of its security-critical software.

Supply Chain Considerations: OpenSSL is foundational infrastructure used by countless organisations. AI policies would help establish standards for how AI-generated or AI-assisted code contributions are evaluated, ensuring the same rigorous security standards apply regardless of whether human developers or AI tools were involved in the development process.

Stakeholder Confidence: Given OpenSSL's critical role in global internet security, having transparent AI policies demonstrates responsible governance to the business community, government agencies, and other stakeholders who depend on OpenSSL's reliability and security.

Competitive Positioning: Proactive AI governance can position OpenSSL as a thought leader in secure AI adoption within the open source cryptography space, potentially attracting partnerships and funding from organisations prioritising responsible AI development.

Regulatory Preparedness: As AI regulations evolve globally, having established internal policies positions OpenSSL ahead of potential compliance requirements and demonstrates proactive responsibility to regulators and business partners.

Intellectual Property Protection and Copyright Compliance: OpenSSL must mitigate the risk of inadvertently incorporating copyrighted code through AI-assisted development tools. Many AI coding assistants are trained on vast repositories of code, including proprietary and copyrighted material, and may generate suggestions that closely resemble or directly reproduce protected code. Without clear AI policies, OpenSSL could face significant legal exposure from copyright infringement claims, which would be particularly damaging given OpenSSL's widespread use as foundational internet infrastructure.

Code Quality and Consistency Standards: OpenSSL must establish AI policies to maintain its rigorous code quality, formatting, and architectural standards. AI-generated code often lacks the nuanced understanding of OpenSSL's specific coding conventions, security requirements, and architectural patterns that have been developed over decades of cryptographic software development.

What are you asking of people in this proposal?

We are seeking your approval to recommend that the Business initiate the development of AI policies and procedures to address OpenSSL's organisational needs.

If you have an Objection, please explain why and suggest a modification to the proposal that ensures safety.

Remember, we are seeking consent for a 'good enough' decision that is 'safe to try', so we can make a good decision for our organisation.

Results

Option      % of points   Voters
Consent     100           9  (FW DVO SL RL DB PY NH NT TM)
Objection   0             0
Undecided   0             15 (KR HL BE VD PD RL KM AA TH JE TH SN TC TS MC)

9 of 24 votes cast (37% participation)

Shane Lontis Mon 22 Sep 2025 9:06PM

Consent

I agree that there should be a policy.
Whilst it is possible to specify a policy in good faith, I am not quite sure how it would be possible to determine where source code actually came from, or what tools were used to generate the code (acceptable or not). A reviewer may not be able to determine this.

David von Oheimb Mon 22 Sep 2025 9:06PM

Consent

Certainly important to have an official policy on AI use.

Tomas Mraz Mon 22 Sep 2025 9:06PM

Consent

Certainly, having a policy would be a good thing. How restrictive it should be is a different question.

Nicola Tuveri Mon 22 Sep 2025 9:06PM

Consent

As mentioned before: the original ask seemed to be about an AI policy governing how the various OpenSSL entities use generative AI.

I am in favor of ALSO developing an AI policy with guidelines for external contributors about their use of AI, but that was not the main objective of the community members who raised the concern.