OpenSSL Communities

Foundation BAC meeting (August 26, 2025)

Jon Ericson

Attendees:

  • Nicola Tuveri - Academics

  • Paul Dale - Committers

  • Dmitry Belyavsky - Distributions

  • Tim Chevalier - Large Businesses

  • Matt Caswell

  • Tomas Mraz

  • Jon Ericson

Agenda:

  • Face-to-face at the OpenSSL Conference (Matt/Jon)

    • Joint Foundation TAC/BAC/staff meeting will happen from 3pm to 5pm at the conference hotel (Oct. 6)

  • Re-election timeline (Jon)

    • Last year the Foundation BAC voting started on December 5 and closed on December 15. There was also a runoff so we didn't actually meet until January. (Jon)

    • Prefer slightly earlier to avoid all the communications piling up at one time. (Matt)

    • Try to avoid conflict with winter holidays this year. (Jon)

  • AI policy (Matt)

    • Feeling out a policy for AI based on his own business and the Corporation BAC discussion (James)

    • Concern about CLA/copyright policy when it comes to the AI training material. (Matt)

    • Incoming AI vs. outgoing. What does the project and library promise about the use of AI? (Nicola)

    • From the compliance perspective, audits sometimes ask about AI usage. Does not exclude the use of AI, but the key is disclosure. (James)

    • There are all sorts of ways AI could be used: finding vulnerabilities, etc. (Matt)

    • There needs to be some sort of statement about what the policy is. (James)

    • The code review process will still apply to AI, which is separate from the copyright and ethical concerns. (Including AI-generated images used for marketing.) (Matt)

    • It’s impossible to tell the difference between AI-assisted code and other submissions. (Dmitry)

    • Banning “AI use” is not sustainable since there are so many different tools available now. (Tomas)

    • There’s a difference between open source and proprietary code. It’s difficult to sandbox AI even from the QA toolchain, since we can’t guarantee the code won’t be used outside its intended purpose. In addition, if we can’t validate the authorship of code, there is a legal risk to using AI. From an advisory perspective, this is something we need to consider. (Randall)

    • We can’t prevent someone from using OpenSSL code as training data for an AI. If we find OpenSSL code where AI was involved in the generation, it’s not really different from someone simply copying the code. (Tomas)

    • Co-pilot, for instance, could potentially leak internal code even before the code is published. (Nicola)

    • AI doesn’t have legal standing the same way people do. At least not yet. Content creators might be put in a difficult place as a result. (Randall)

    • We have to trust contributors (and always have) that they actually can contribute code under their CLA. We can inform people, but we can’t prevent people from claiming they wrote code that they haven’t. (Matt)

    • The policies are kept in repositories. (Matt)

    • Policies need to be kept up-to-date. (James)

    • Maybe the General policies should be under the remit of the BACs. (Matt)

    • A lawyer (with an international view) needs to review this. (Tim and Randall)

    • What is the scope of the policy? Library? Project, mission, Corporation, Foundation? (Nicola)

      • Library and other code we develop is a big enough problem to solve for now. A Foundation policy could be a subsequent policy that builds on the Library policy. (Matt)

    • Provocation: Whatever policy we come up with, will it be hypocritical if, say, marketing uses AI that doesn’t line up with the policy? (Nicola)

    • It’s hard to remove AI from the code, but it’s easier to deal with copyright problems in marketing. We shouldn’t be absolutist here. (Tomas)

    • Would a contributor be comfortable submitting code that is generated by AI? The contributor might be liable for what they submit. (Randall)

  • Sovereign Tech Fund update (Matt)

    • The Foundation was awarded a significant amount of funding. It covers constant-time BIGNUM work and the GitHub issue backlog. (Matt)

    • BIGNUM is in design and hopefully we’ll start sharing more in September. (Matt)

    • The pre-work cleanup of issues isn’t part of the Sovereign Tech work because it was overtaken by events. (Tomas)

  • General Discussion hub (Jon)

    • Can we change the settings so that notifications of new threads are visible? (Dmitry)

      • Not at the moment and that is a problem. I will be working with Anton to see if we can make it so that people joining a subgroup will also be added to the general discussion group. “If you can’t see it, it doesn’t exist.” (Jon)

Action items

  • Form a proposal for a BAC election timeline (Jon)

  • Straw man AI policy for broader consideration on the Community forum (James)


Paul Dale Thu 4 Sep 2025 12:18AM

I wasn't an attendee at the meeting, but these minutes list me as being present.
I did send apologies, however.