How to respond to recent critiques? (Big picture)
In the past few months there have been posts pointing out concerns with the OpenSSL Library. Notably:
- The State of SSL Stacks on the HAProxy blog and
- The State of OpenSSL for pyca/cryptography on the Python cryptography site.
I'm reminded of one of our community values:
We believe in behaving in a manner that fosters trust and confidence.
One way we can do that is by listening to people who are critical of the OpenSSL project and attempting to find root causes that we can address. It's good, for instance, that Paul Kehrer and Alex Gaynor, who maintain the Python cryptography library, spoke at the OpenSSL Conference. I attended that talk and while it was uncomfortable at times, I'm glad I learned about the problems they face when working with the OpenSSL Library. The alternative is not knowing and, therefore, being unable to address them.
It's human nature to take negative responses to our work as a personal attack. I do not believe that is an accurate characterization in this case. As I read these posts, I categorize the points in these general buckets:
- Technical problems with the OpenSSL Library (particularly performance regressions from OpenSSL 1.1.1 to 3.0).
- Disagreements with technical decisions made by the maintainers of OpenSSL (for instance the implementation of the provider architecture).
- Concerns about how the OpenSSL Library is managed.
These posts also sound like the authors feel unheard and that OpenSSL can't be fixed. So what can we, as a community, do about it? I'm going to put some ideas in this post, but I encourage everyone to pitch in ideas as replies below. Please focus on the big picture for now! We can dig into specific details either in GitHub issues or new threads.
Document what steps have already been taken to address concerns
One thing that caught my attention was the focus on the performance of OpenSSL 3.0 in particular. If you look at the performance graphs, that release is consistently the worst-performing branch. Other 3.x releases show improvement, and frequently master is the closest to 1.1.1 performance.
But, of course, most people who use the OpenSSL Library aren't using master. Instead they are on an LTS release, so it makes sense that people have formed their perception based on 3.0. The most recent LTS, 3.5, is less than a year old, so it hasn't yet worked its way through the ecosystem. The gap between perception and reality is, at least in part, the result of 3.0's performance regressions combined with the unfortunate reality that that version was the LTS release for so long.
The Feisty Duck newsletter's December issue, OpenSSL Performance Still Under Scrutiny, models how we might address performance concerns:
- Frank acknowledgement of past problems.
- Practical hints about how to get the most out of more recent releases.
- Optimism that the situation will improve now that the OpenSSL Library has shown movement in the direction of better performance.
When it comes to fostering trust, GitHub issues and PRs speak louder than promises. What specifically has been done to improve the situation?
Communicate the reasons behind technical decisions
Allow me to quote a powerful paragraph from the pyca post:
We do not fully understand the motivations that led to the public APIs and internal complexity we’ve described here. We’ve done our best to reverse engineer them by asking “what would motivate someone to do this” and often we’ve found ourselves coming up short. The fact that none of the other OpenSSL forks have made these same design choices is informative to the question of “was this necessary”.
Now the change in strategic architecture between 1.1.1 and 3.0 is documented. Reading between the lines, I can see that the old architecture failed to meet the needs of some users of the OpenSSL Library. But the specific motivations for these choices aren't clear in that document. Searching around the internet, I found this post from our very own @beldmit that suggests the provider architecture is useful for people who want:
- legacy algorithms
- experimental algorithms (OQS)
- compliance with government standards (particularly FIPS 140-3)
- cryptographic hardware (PKCS#11 and TPM2)
Not everyone cares about these use cases, but it's harder to argue that the changes to support them serve no purpose. Knowing that one of the goals of OpenSSL 3.0 was, to quote Dmitry, "maintainable FIPS-140-3 certified modules" clarifies the actual constraints on the design. People can disagree with the decisions, but they can't claim the changes were capricious.
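To make those use cases concrete, here is a minimal sketch, assuming OpenSSL 3.0 or later, of how one of them (legacy algorithms) surfaces in the provider API. This is illustrative on my part, not taken from Dmitry's post:

```c
/* A minimal sketch (illustrative, OpenSSL 3.0+): legacy ciphers such as
 * RC4 moved to the "legacy" provider, so applications opt in explicitly. */
#include <stdio.h>
#include <openssl/provider.h>
#include <openssl/evp.h>

int main(void)
{
    /* Explicitly loading any provider disables auto-loading of the
     * default one, so we load both. */
    OSSL_PROVIDER *legacy = OSSL_PROVIDER_load(NULL, "legacy");
    OSSL_PROVIDER *deflt  = OSSL_PROVIDER_load(NULL, "default");

    if (legacy == NULL || deflt == NULL) {
        fprintf(stderr, "provider load failed\n");
        return 1;
    }

    /* With the legacy provider loaded, a legacy cipher resolves again. */
    EVP_CIPHER *rc4 = EVP_CIPHER_fetch(NULL, "RC4", NULL);
    printf("RC4 %savailable\n", rc4 != NULL ? "" : "un");

    EVP_CIPHER_free(rc4);
    OSSL_PROVIDER_unload(legacy);
    OSSL_PROVIDER_unload(deflt);
    return 0;
}
```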
It's pretty common for people to fail to adequately explain the reasoning behind big changes. By the time the announcement goes out, the people who made the decision have lived with it for a while and assume that the end product speaks for itself. They might not even remember the alternative choices that were rejected, because once you commit to a path, the road not taken becomes irrelevant. In order to bring end users along, it's useful to give that background. Ideally we need to answer the implied question behind many criticisms: "What's in it for me?" You can't answer that question too many times.
Let our mission be our guide
When I talk about OpenSSL to people who don't know what we are, I like to start with a paraphrase of the mission. We want everyone to have access to privacy and security tools. The OpenSSL Library is the most important and most obvious product of that mission. (Yes, I know the mission is much newer than the library. But the beliefs behind the mission are a big reason the library exists.) Forks of the OpenSSL Library tend to specialize in specific use cases, whereas the OpenSSL Library itself has the larger ambition of supporting everyone.
Fairly recently the organization that manages the OpenSSL Library chose to split into two organizations:
- the OpenSSL Corporation that specializes in commercial applications and
- the OpenSSL Foundation which covers non-commercial interests.
Since it's impossible to know all the ways people use the OpenSSL Library, we've also set up this platform so that the OpenSSL Communities can give us feedback. The best way to influence the future of the OpenSSL Library is to get involved right here.
For reference, both the HAProxy and Python cryptography library maintainers are welcome to join the Distributions community:
The Distributions community comprises maintainers of operating systems and significant software packages that integrate projects from the OpenSSL Foundation and the OpenSSL Corporation.
Keep the lines of communication open
As a community manager, I have found that people have a hard time assuming the worst about others when they are in active communication. Sometimes a conversation can unlock unexpected solutions to seemingly intractable differences. It can help to remember that behind avatars are real people.
How else can we respond?
Tim Hudson Sat 17 Jan 2026 12:55AM
Publish all 1.1.1 performance baselines on the performance dashboard. It currently shows 3.x versions improving against each other and a limited set of 1.1.1 performance metrics.
Which tests in particular are you concerned about? The ones that don't show 1.1.1 performance are for things that didn't exist back in 1.1.1 - i.e. all the tests that can include 1.1.1 should already be doing so. If we have missed a specific thing, then let us know.
e.g. EVP_fetch performance testing isn't relevant prior to OpenSSL 3.0, as there was no EVP_fetch.
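For anyone unfamiliar with what those tests measure, here is a minimal sketch of the explicit-fetch pattern, assuming OpenSSL 3.0+. The fetch-once-and-cache shape is the standard 3.x guidance, not a quote from the dashboard:

```c
/* A hedged sketch of the explicit-fetch pattern such tests exercise;
 * EVP_MD_fetch() only exists in OpenSSL 3.0+, so there is no 1.1.1
 * number to plot against it. */
#include <openssl/evp.h>

/* 3.x advice: fetch once, reuse many times, free at shutdown, rather
 * than paying the fetch cost on every operation. */
static EVP_MD *sha256_cached;

int digest_init_once(void)
{
    sha256_cached = EVP_MD_fetch(NULL, "SHA-256", NULL);
    return sha256_cached != NULL;
}

int digest_one(const unsigned char *in, size_t len,
               unsigned char *out, unsigned int *outlen)
{
    return EVP_Digest(in, len, out, outlen, sha256_cached, NULL);
}

void digest_cleanup(void)
{
    EVP_MD_free(sha256_cached);
}
```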
Tim Hudson Sat 17 Jan 2026 12:57AM
Pick a QUIC position. Adopt the BoringSSL-compatible API or publicly confirm OpenSSL's API will never be compatible. We need to be explicit.
We have already done precisely that and communicated it. And we provided an interface and helped various projects cut across to it. It is the same interface that we sit our own implementation on top of, and it will be long-term supported.
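For reference, a minimal sketch, assuming OpenSSL 3.2 or later, of the native QUIC client API that was adopted (OpenSSL's own API, distinct from the BoringSSL-compatible one):

```c
/* A minimal sketch, assuming OpenSSL 3.2+: the native QUIC client
 * method, used in place of TLS_client_method(). */
#include <openssl/ssl.h>

SSL_CTX *make_quic_client_ctx(void)
{
    /* A QUIC method instead of a TLS-over-TCP one; the rest of the
     * SSL_CTX setup is deliberately familiar. */
    SSL_CTX *ctx = SSL_CTX_new(OSSL_QUIC_client_method());
    if (ctx == NULL)
        return NULL;

    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);

    /* Note: QUIC requires ALPN, set per connection with
     * SSL_set_alpn_protos() before the handshake. */
    return ctx;
}
```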
James Bourne Sat 17 Jan 2026 1:05AM
@Tim Hudson OK, I didn't know that. I realise it's available in the legend, but can the page be updated to make it clear that there is no historical 1.1.1 performance benchmark for specific tests? Also, include further tests for recent distributions where possible (e.g. Debian 13 vs 11, Ubuntu 25 vs 20, BSD 15 vs 13, macOS 26 vs 11, Windows 11 vs 10, Windows Server 2025, RHEL/Rocky 10, etc.). Also, is there a link to the public statement on APIs/QUIC so I can educate myself?
Alicja Kario Mon 19 Jan 2026 9:43AM
@tjh1 I think the things HAProxy identified as particularly problematic are multithreading and session resumption. While the former is already good compared to 1.1.1, it's not good compared to the forks, and session resumption is particularly bad.
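To make that concrete, here is a rough sketch of the client-side path a resumption benchmark would exercise; the helper names are mine, purely illustrative:

```c
/* An illustrative sketch (not from the dashboard) of the client-side
 * path a resumption benchmark exercises. With TLS 1.3 the session
 * ticket arrives after the handshake, so production code typically
 * captures sessions via SSL_CTX_sess_set_new_cb() instead. */
#include <openssl/ssl.h>

/* After the first successful handshake, keep the session around. */
SSL_SESSION *remember_session(SSL *first)
{
    return SSL_get1_session(first);   /* increments the refcount */
}

/* Offer the cached session on a later connection, before connecting. */
void offer_session(SSL *next, SSL_SESSION *cached)
{
    SSL_set_session(next, cached);
}

/* After the new handshake, check whether resumption happened. */
int was_resumed(SSL *next)
{
    return SSL_session_reused(next);
}
```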
Tim Hudson Mon 19 Jan 2026 10:43AM
@Alicja Kario Comparing to various forks which have made substantial changes is a completely different topic to the 1.1.1 compared to 3.x conversation. Separating those out is important.
Anyone doing high-performance, high-thread-count work has always written custom code in the past for handling various things and has not, in general, used the OpenSSL library "out-of-the-box".
Again, this requires people to be explicit about the problem they are concerned about and the interfaces related to that problem area. What helps is to see test programs that demonstrate issues with concrete numbers and, in the case of a fork working substantially better, details as to versions and performance that can be reproduced.
Tim Hudson Mon 19 Jan 2026 10:44AM
@James Bourne Almost no performance issue is platform specific (in terms of operating system versions), so later releases really make no meaningful difference - and we also want to work with the versions customers are running (and generally those are not the latest releases).
Alicja Kario Mon 19 Jan 2026 11:51AM
@tjh1 I agree. What I'm pointing to (unless I'm reading the metrics on the benchmark page wrong) is that we are missing performance metrics for multithreaded operation and session resumption, both of which are important to most of the people who run OpenSSL-based servers.
James Bourne Mon 19 Jan 2026 7:42PM
@Tim Hudson But are you sure of that regarding performance metrics? Server 2025 has significant performance gains over Server 2022 (e.g. improved HyStart++ and RACK, Network ATC, etc.). What I'm trying to get at here is: if there are performance issues, let's address them; if there are performance gains, let's demonstrate them too, but in a performance-optimised environment where time has been taken to tune the IP stack and deploy the latest drivers. Then demonstrate them in an easily digestible format that reflects modern operating system use. Benchmarking Windows 10 or older versions of Linux is not ideal. Benchmarking OpenSSL 3.6.x vs 1.1.1 on Server 2025 or Rocky 10.1 makes the most sense because that's what we use in our production environments (e.g. the need for the latest drivers in 25/100/400GbE ultra-low-latency networks shuffling ~0.5PB of content a day to all-NAND storage systems where encryption in flight and at rest is mandatory). As @Alicja Kario mentioned, key metrics are missing, which suggests we should reach out to those other projects that depend on OpenSSL for performance to enunciate their performance concerns and possibly subsume their performance optimisations. Then deliver targeted metrics in conjunction with the vendor.
Michael Baentsch Sat 17 Jan 2026 11:43AM
@James Bourne While you make many valid points that are worthwhile following through on, I'm not sure I can agree with this blanket statement and some conclusions:
OpenSSL's critics have already said what they need.
True, many things have been stated, in particular regarding performance. But I'm sure this conversation needs to be ongoing: there are always going to be things that can be criticized/improved, so an open dialogue (read: in GH & Communities and not behind closed doors) is necessary and imo is not putting "the burden back on those critics".
What OpenSSL (or each Community guardian) might do better is surface sticky points and lead discussions to a conclusion, e.g., the formulation of concrete GH issues/tasks that may then be prominently tracked (say via a very visible "Top challenges" dashboard) and worked on with priority. Something for you to consider instituting, maybe, @Jon Ericson?
As performance seems to be the priority topic here, what about writing up a clear rationale for why certain decisions that had a negative impact on performance were taken (say, regarding the provider architecture) and what has been done to improve things? Also, wouldn't it be valuable to state clearly where performance matters to the OpenSSL directors/community -- and where it doesn't? That way, everyone understands each other's goals and can influence them: either by stating expectations in Community discussions and/or by contributing code via GH to meet such expectations.
To be clear: I love performance discussions: they're so neat and clean to have: just figures. But there are also other aspects to trade performance off against, like added flexibility, backwards compatibility, other problems/priorities, or security, just to mention a few. As a "worked example", I'm sure OpenSSL never wants to squeeze out the last bit of speed using an optimization that may have dire side-channel attack ramifications on some platforms, right?
And the second worked example that's near and dear to my heart: without the provider architecture, we could have made PQC crypto available to the OpenSSL ecosystem (incl. haproxy etc.!) only via a fork -- one that no one in their right mind would have used "for real". With the provider concept, lots of people have been able to work with PQC for quite a few years now in real OpenSSL integrations, and I think that had value. Could providers have been implemented better? Probably. Should OpenSSL question its existence because of performance problems? Definitely no.
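To illustrate the point, here is a minimal sketch of "PQC as configuration, not a fork". The specifics are assumptions on my part: "X25519MLKEM768" is the hybrid group that OpenSSL 3.5+ ships natively, while on older 3.x releases the external oqsprovider supplies PQC algorithms:

```c
/* A minimal sketch: enabling a hybrid post-quantum key exchange.
 * Assumes either OpenSSL 3.5+ (built-in ML-KEM) or an external PQC
 * provider; group availability depends on version and providers. */
#include <openssl/ssl.h>
#include <openssl/provider.h>

int enable_pqc_hybrid(SSL_CTX *ctx)
{
    /* With an external PQC provider, a single load call makes its
     * algorithms fetchable; built-in ML-KEM (3.5+) needs no load:
     * OSSL_PROVIDER_load(NULL, "oqsprovider"); */

    /* Prefer a hybrid PQC group, fall back to classical X25519.
     * Returns 1 on success, 0 if a group name is unrecognized. */
    return SSL_CTX_set1_groups_list(ctx, "X25519MLKEM768:X25519");
}
```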
It's really on us, OpenSSL, to rapidly drive the change and respond publicly that "we are fixing it" or "have fixed it". The timelines are short on this to mitigate further reputational damage.
This statement I again completely agree with. A good deal of it is "just" documentation, though: referencing the performance dashboard or the OpenSSL 3 design rationale(s), for example. Is anyone already doing that? Has this maybe already been done? If not, anyone volunteering?
James Bourne · Sat 17 Jan 2026 12:45AM
Great post and thank you. OpenSSL's critics have already said what they need. So, to my mind, the question isn't how to respond; it's whether OpenSSL will respond rapidly, make changes, and be transparent. Here are practical suggestions that may alleviate the situation / clear the air.
Publish all 1.1.1 performance baselines on the performance dashboard. It currently shows 3.x versions improving against each other and a limited set of 1.1.1 performance metrics. Projects/users measure against 1.1.1. Could we get 1.1.1 performance data on the dashboard for all benchmarks and platforms ASAP?
Prepare a monthly public status update on performance work, based on GitHub issues and PRs. If no performance optimisation is occurring, say the work has stalled. Be clear about it.
Pick a QUIC position. Adopt the BoringSSL-compatible API or publicly confirm OpenSSL's API will never be compatible. We need to be explicit.
Establish formal engagement with critical downstream projects—pyca/cryptography, HAProxy, Curl, NGINX. Quarterly calls, dedicated escalation contact. These projects shouldn't need to publish public critiques to be heard.
Require performance parity or better with 1.1.1 for any future LTS designation. I realise that the 3.0 LTS had to be made. But it damaged trust and reputation in the library and is forcing projects that traditionally have relied on OpenSSL to look elsewhere for a performant library. While significant improvements have been made in 3.5 LTS, it's still unclear whether that version meets or exceeds 1.1.1's performance.
It's good that you have invited maintainers of those other projects to join the Distributions community but that, perhaps, puts the burden back on those critics. They have chosen to speak publicly and present at the conference about their frustrations. It's really on us, OpenSSL, to rapidly drive the change and respond publicly that "we are fixing it" or "have fixed it". The timelines are short on this to mitigate further reputational damage.