OpenSSL Communities

How to respond to recent critiques? (Big picture)

JE Jon Ericson Fri 16 Jan 2026 10:51PM Public Seen by 121

In the past few months there have been posts pointing out concerns with the OpenSSL Library. Notably:

I'm reminded of one of our community values:

We believe in behaving in a manner that fosters trust and confidence.

One way we can do that is by listening to people who are critical of the OpenSSL project and attempting to find root causes we can address. It's good, for instance, that Paul Kehrer and Alex Gaynor, who maintain the Python cryptography library, spoke at the OpenSSL Conference. I attended that talk and, while it was uncomfortable at times, I'm glad I learned about the problems they face when working with the OpenSSL Library. The alternative is not knowing and, therefore, being unable to address them.

It's human nature to take negative responses to our work as a personal attack. I do not believe that is an accurate characterization in this case. As I read these posts, I categorize the points in these general buckets:

  1. Technical problems with the OpenSSL Library (particularly performance regressions from OpenSSL 1.1.1 to 3.0).
  2. Disagreements with technical decisions made by the maintainers of OpenSSL (for instance the implementation of the provider architecture).
  3. Concerns about how the OpenSSL Library is managed.

These posts also sound like the authors feel unheard and that OpenSSL can't be fixed. So what can we, as a community, do about it? I'm going to put some ideas in this post, but I encourage everyone to pitch in ideas as replies below. Please focus on the big picture for now! We can dig into specific details either in GitHub issues or new threads.

Document what steps have already been taken to address concerns

One thing that caught my attention was the focus on the performance of OpenSSL 3.0 in particular. If you look at the performance graphs, that release is consistently the worst-performing branch. Other 3.x releases show improvement, and master is frequently the closest to 1.1.1 performance.

But, of course, most people who use the OpenSSL Library aren't using master. Instead they are on an LTS release, so it makes sense that people have formed their perception based on 3.0. The most recent LTS, 3.5, is less than a year old, so it hasn't yet worked its way through the ecosystem. The gap between perception and reality is, at least in part, the result of 3.0's performance regression combined with the unfortunate reality that that version was the LTS release for so long.

The Feisty Duck newsletter's December issue, OpenSSL Performance Still Under Scrutiny, models how we might address performance concerns:

  1. Frank acknowledgement of past problems.
  2. Practical hints about how to get the most out of more recent releases.
  3. Optimism that the situation will improve now that the OpenSSL Library has shown movement in the direction of better performance.

When it comes to fostering trust, GitHub issues and PRs speak louder than promises. What specifically has been done to improve the situation?

Communicate the reasons behind technical decisions

Allow me to quote a powerful paragraph from the pyca post:

We do not fully understand the motivations that led to the public APIs and internal complexity we’ve described here. We’ve done our best to reverse engineer them by asking “what would motivate someone to do this” and often we’ve found ourselves coming up short. The fact that none of the other OpenSSL forks have made these same design choices is informative to the question of “was this necessary”.

Now the change in strategic architecture between 1.1.1 and 3.0 is documented. Reading between the lines, I can see that the old architecture failed to meet the needs of some users of the OpenSSL Library. But the specific motivations for these choices aren't clear in that document. Searching around the internet, I found this post from our very own @beldmit suggesting that the provider architecture is useful for people who want:

  • legacy algorithms
  • experimental algorithms (e.g., OQS)
  • compliance with government standards (particularly FIPS 140-3)
  • cryptographic hardware (PKCS#11 and TPM2)

Not everyone cares about these use cases, but it's harder to argue that the changes to support them serve no purpose. Knowing that one of the goals of OpenSSL 3.0 was, to quote Dmitry, "maintainable FIPS-140-3 certified modules" clarifies the actual constraints on the design. People can disagree with the decisions, but they can't claim the changes were capricious.
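
To make the first bullet concrete, here is a minimal sketch (my own illustration, not something from Dmitry's post) of how the provider model shows up in application code. In OpenSSL 3.x, legacy algorithms such as MD4 are not available from the default provider; loading the bundled "legacy" provider and fetching the algorithm explicitly brings them back:

    #include <stdio.h>
    #include <openssl/evp.h>
    #include <openssl/provider.h>

    int main(void)
    {
        /* MD4 lives in the "legacy" provider in OpenSSL 3.x, so load it
         * explicitly alongside the default provider. */
        OSSL_PROVIDER *legacy = OSSL_PROVIDER_load(NULL, "legacy");
        OSSL_PROVIDER *dflt = OSSL_PROVIDER_load(NULL, "default");
        if (legacy == NULL || dflt == NULL)
            return 1;

        /* Explicit fetch asks the loaded providers for an implementation. */
        EVP_MD *md4 = EVP_MD_fetch(NULL, "MD4", NULL);
        if (md4 != NULL) {
            printf("MD4 is available via the legacy provider\n");
            EVP_MD_free(md4);
        }

        OSSL_PROVIDER_unload(dflt);
        OSSL_PROVIDER_unload(legacy);
        return 0;
    }

The same mechanism is what the FIPS and PKCS#11 use cases rely on: the application code stays the same and only the set of loaded providers changes.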

It's pretty common for people to fail to adequately explain the reasoning behind big changes. By the time the announcement goes out, the people who made the decision have lived with it for a while and assume that the end product speaks for itself. They might not even remember the alternative choices that were rejected, because once you commit to a path, the road not taken is irrelevant. In order to bring end users along, it's useful to give that background. Ideally we need to answer the implied question behind many criticisms: "What's in it for me?" You can't answer that question too many times.

Let our mission be our guide

When I talk about OpenSSL to people who don't know what we are, I like to start with a paraphrase of the mission: we want everyone to have access to privacy and security tools. The OpenSSL Library is the most important and most obvious product of that mission. (Yes, I know the mission is much newer than the library. But the beliefs behind the mission are a big reason the library exists.) Forks of the OpenSSL Library tend to specialize in specific use cases, whereas the OpenSSL Library itself has the larger ambition of supporting everyone.

Fairly recently the organization that manages the OpenSSL Library chose to split into two organizations: the OpenSSL Foundation and the OpenSSL Corporation.

Since it's impossible to know all the ways people use the OpenSSL Library, we've also set up this platform so that the OpenSSL Communities can give us feedback. The best way to influence the future of the OpenSSL Library is to get involved right here.

For reference, both the HAProxy and Python cryptography library maintainers are welcome to join the Distributions community:

The Distributions community comprises maintainers of operating systems and significant software packages that integrate projects from the OpenSSL Foundation and the OpenSSL Corporation.

Keep the lines of communication open

As a community manager, I have found that people have a hard time assuming the worst about others when they are in active communication. Sometimes a conversation can unlock unexpected solutions to seemingly intractable differences. It can help to remember that behind avatars are real people.

How else can we respond?

JE

Jon Ericson Wed 21 Jan 2026 4:12PM

What OpenSSL (or each Community guardian) might do better is surface sticky points, lead discussions to conclusion, e.g., the formulation of concrete GH issues/tasks that may then be prominently tracked (say via a very visible "Top challenges" dashboard) & worked on with priority: Something for you to consider instituting maybe @Jon Ericson ?

I think something along these lines would be very helpful. For instance, a big goal of 4.0 is removing ENGINEs. It's something that was talked about a lot internally, but it could have been more public, probably even more public than blog posts (which are great, but don't necessarily get a lot of traffic). I'm thinking something on the GitHub repo, the Library roadmap and/or the Library documentation.

A good deal of it is "just" documentation, though: referencing the performance dashboard or the OpenSSL 3 design rationale(s), for example. Is anyone already doing that? Has this maybe already been done? If not, anyone volunteering?

Getting ready for FOSDEM is eating up my time right now, but I do feel like this is more or less up my alley.

PG

Peter Gutmann Sat 17 Jan 2026 1:38AM

I'd also go with a variant of "communicate the reasons" which is to let people know that there's a way forward, so it's not "we decided to arbitrarily change everything and now you're stuck with it forever". I'm not sure if it's possible to win a benchmarking war if you're competing with old forks of, ah, make-it-go above everything else code, but if you can explain that the changes were necessary for the future because the existing 1.0 architecture was on its last legs and now that you've got it stable it's time to tune it, that might give people some light at the end of the tunnel.

Possibly also publish a plan for profiling and adding fastpath code for commonly-used stuff, if you're not already doing that, with occasional updates to demonstrate progress is being made. For example, I assume almost everything is going with SHA2-256 (it usually is), so fastpath that: for the common case of SHA-256, all the highly flexible configurable options get bypassed and it's a direct jump into the raw SHA2 code, skipping backend selection, object creation, locking, and a lot of other things.
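
As a caller-side illustration with APIs that already exist in 3.x (not the internal fastpath I'm suggesting, just a sketch of what applications can do today), pre-fetching the SHA2-256 method once and reusing it avoids repeating the provider and property lookup on every digest call:

    #include <openssl/evp.h>

    /* Fetch the SHA2-256 implementation once, up front. */
    static EVP_MD *sha256_cached;

    int digest_init(void)
    {
        sha256_cached = EVP_MD_fetch(NULL, "SHA2-256", NULL);
        return sha256_cached != NULL;
    }

    /* Hot path: no name lookup, no property query, just reuse the
     * already-fetched method. */
    int hash_one(const void *in, size_t inlen,
                 unsigned char *out, unsigned int *outlen)
    {
        return EVP_Digest(in, inlen, out, outlen, sha256_cached, NULL);
    }

    void digest_cleanup(void)
    {
        EVP_MD_free(sha256_cached);
        sha256_cached = NULL;
    }

(digest_init, hash_one and digest_cleanup are just illustrative names.) A fastpath inside the library would go further and skip the object creation and locking as well.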

DB

Dmitry Belyavsky Sat 17 Jan 2026 11:48AM

Thank you for your write-up!

My deep belief is that in such situations of controversy, the only normal way to improve things is direct communication with the most problematic peers (AFAIK, there were some efforts to deal with the HAProxy feedback; I don't know whether they are the result of communication or just common sense). No, communication does not ensure a positive outcome, but it usually provides some way to move forward.

OTOH, it looks like some design decisions made during 3.0 development were not great and should be reconsidered; some improvements have already been made. For example, I'm not sure that the logic of encoder/decoder lookup is perfect, and there was a design sketch to improve it.

One more point I'm seriously considering, though it's quite controversial. Both my background in supporting national cryptography and the recent PQ transition, where we relied heavily on the option to provide new algorithms via providers, demonstrate to me the importance of pluggability and flexibility. But there are a gazillion scenarios where people don't need that and are fine with a limited choice of algorithms, and having algorithm-specific APIs targeting those use cases probably makes sense.


JE

Jon Ericson Tue 20 Jan 2026 6:50AM

But there are a gazillion scenarios where people don't need that and are fine with a limited choice of algorithms, and having algorithm-specific APIs targeting those use cases probably makes sense.

In November @matt, @amyp, @nhorman and I visited the computer science department at NC State University. One suggestion we heard was to offer an opinionated library. I had to look this up: "Opinionated software means that there is basically one way (the right way™) to do things and trying to do it differently will be difficult and frustrating."—tvanfosson on Stack Overflow

It's pretty clear the OpenSSL Library is non-opinionated. As a (recovering?) Perl programmer, I feel the pain of having too many ways to do things (and to shoot yourself in the foot). I like the idea of having a subset of algorithms that are secure, reasonably fool-proof and avoid complexity that many people don't need.
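
As a purely hypothetical sketch of what that could look like (simple_sha256 is not an existing or proposed OpenSSL API, just an illustration of the "one right way" idea), an opinionated surface would offer a single vetted choice and no knobs to get wrong:

    #include <openssl/evp.h>
    #include <openssl/sha.h>    /* SHA256_DIGEST_LENGTH */

    /* Hypothetical "one right way" hash: SHA-256 only, fixed output size,
     * no algorithm parameter for callers to get wrong. */
    int simple_sha256(const void *data, size_t len,
                      unsigned char out[SHA256_DIGEST_LENGTH])
    {
        unsigned int outlen = 0;

        /* The opinion lives here: the algorithm choice is not exposed. */
        if (EVP_Digest(data, len, out, &outlen, EVP_sha256(), NULL) != 1)
            return 0;
        return outlen == SHA256_DIGEST_LENGTH;
    }

Whether something like that belongs in libcrypto, in a separate helper library, or just in documentation of a "blessed" subset is exactly the kind of question we could work out here.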

RL

Richard Levitte (OpenSSL) Mon 26 Jan 2026 11:07AM

@jon, re OpenSSL Library being opinionated, depends on what you look at. Let me muse philosophically for a bit...

The design around the provider mechanisms made libcrypto un-opinionated, leaving it to providers to be the opinionated parties. But the OpenSSL Library includes a set of providers, so its opinionatedness hasn't gone away; it has just been shuffled around a bit, leaving some space for alternative opinions (through external providers).

DB

Dmitry Belyavsky Sun 18 Jan 2026 10:26AM

Some more thoughts. There are two possible intents behind publications like these: either an invitation to negotiate or, on the contrary, a final verdict with no reasonable compromise possible. My qualifications are definitely not enough to tell which one lies behind this particular one.

DJL

Dimitri John Ledkov (Chainguard) Mon 19 Jan 2026 10:33AM

There is also this blog post from the curl project about removing one of the OpenSSL-QUIC backends:

CL

Clemens Lang Mon 19 Jan 2026 11:07AM

@Dimitri John Ledkov (Chainguard) This blog post doesn't make it super explicit, but my understanding is that the ngtcp2 and nghttp3 libraries can use OpenSSL for QUIC, so this removal doesn't mean that curl won't be able to do QUIC or HTTP/3 with OpenSSL anymore.

DJL

Dimitri John Ledkov (Chainguard) Mon 19 Jan 2026 11:28AM

@Clemens Lang Yes, but it is one more nail in the coffin of the bespoke OpenSSL QUIC API. Should that bespoke API be deprecated and removed if nobody wants to use it?

NH

Neil Horman Mon 19 Jan 2026 2:19PM

@Dimitri John Ledkov (Chainguard) I don't think it's quite fair to say nobody wants to use the QUIC TLS integration API as it currently exists. msquic and ngtcp2 both use it, and IIRC work to integrate it into lsquic has been in progress.

NH

Neil Horman Mon 19 Jan 2026 3:32PM

Replying to myself: apologies, I misread this. I thought this was referring to the use of the QUIC TLS integration API rather than the QUIC stack itself. It's clear now that this is referring to the latter, rather than the former.

NH

Neil Horman Mon 19 Jan 2026 2:44PM

Speaking only for myself here, the most frustrating part of any of these performance discussions has been the... for lack of a better word... reproducibility of any measurements. I'd cite the HAProxy work as an example: they did a great job trying to identify performance degradation in their testing, and to their credit, they've reached out to us with code to create a test harness to reproduce their results. However, attaining an apples-to-apples comparison of those results, for the purpose of measuring our current performance against the last version they tested in the above article (3.2.2, which is over 2 years old now), is no small task. It requires an investment of both money (to set up and maintain the infrastructure needed to generate that kind of load and recreate the environment consistently) and personnel to run the tests and iteratively evaluate performance improvements.

That's really the big bit that we're missing here. Our performance dashboard is unit-test oriented, testing speeds of common operations in isolation. We have a handshake test (which roughly compares to HAProxy's test metrics), but it's done on a much smaller system at much smaller loads.

I think that's the response to any organization making the sorts of complaints we see above: "We can improve this, but not without your help." We need contributions of real-world test cases and the guidance to set them up properly to recreate the problems that they see, so that we can test with the latest versions of OpenSSL and see how proposals for improvement fare.

JW

Jeremy Walch Thu 22 Jan 2026 11:47PM

Is there a response to the critiques that can effectively be summarized as concerns that not enough is being done to prevent the introduction of vulnerabilities? (A couple of sections in the pyca article boil down to this.)

TH

Tim Hudson Thu 22 Jan 2026 11:55PM

@Jeremy Walch "not enough is being done to prevent the introduction of vulnerabilities"

Where specifically do you read that in the pyca article? There are comments on bugs and test coverage but those are in the context of changing APIs not in the context of vulnerabilities.

JW

Jeremy Walch Fri 23 Jan 2026 12:13AM

@Tim Hudson

OpenSSL 3.0.4 contained a critical buffer overflow in the RSA implementation on AVX-512-capable CPUs.

Is that not rather specifically describing CVE-2022-2274?

OpenSSL is not keeping pace with the state of the art in formal verification. Formal methods have gone from academic novelty to practical reality for meaningful chunks of cryptographic code.

Is it not a reasonable interpretation that the concern here is specifically with code that has security implications?

A library committed to security needs to make a long-term commitment to a migration to a memory safe programming language.

Again, that feels rather clearly focused on vulnerabilities.

I'm not saying that they didn't also articulate frustration with general functional regressions (I myself ran into and reported multiple during the 3.0 development cycle)... but vulnerabilities in particular are definitely quite explicitly spoken about in the article.

I also grant that obviously some mitigations deal with both classes of bugs anyway.

TH

Tim Hudson Fri 23 Jan 2026 7:44AM

@Jeremy Walch Again, that feels rather clearly focused on vulnerabilities.

I think a suggestion that the OpenSSL Library be written in an entirely different programming language is certainly not at all in the same class of issues. There is a large range of entangled items there.

I also find that statement entirely disingenuous - that any security library that hasn't made a long term commitment to moving off C onto some other programming language is in essence inherently vulnerable. That argument simply doesn't fly and applies equally to pretty much all the major security libraries - it is not an OpenSSL Library focused or specific complaint.

One of the rationales for the provider approach was to actually allow for implementations using different approaches to be able to be easily dropped in and used by applications basically unchanged.

This is where things like PKCS#11 device support, formally verified implementations, projects under different licenses, implementations in other programming languages are able to fit and still have the broad set of OpenSSL Library supporting applications working. It is the applications that users care about - not the OpenSSL library itself.

The jostle project is about letting Java applications use the OpenSSL Library, and the opposite (OpenSSL Library applications able to use Java crypto implementations) is also intended.

PD

Paul Dale Fri 23 Jan 2026 11:00PM

One of the rationales for the provider approach was to actually allow for implementations using different approaches to be able to be easily dropped in and used by applications basically unchanged.

Almost all of the security issues found to date have not been in what would be considered provider code. Saying that providers can be written using different approaches isn't going to meaningfully reduce security related problems in the code.

JW

Jeremy Walch Sat 24 Jan 2026 5:20PM

@Tim Hudson

I agree that moving away from C is not the answer, for many reasons that you just didn't have time to get into.

That being said, given the request was to provide citations, I guess I had mistakenly hoped that the quotes provided would be considered in the context of the original article and not in isolation as if they were my own words.

The quote came from an entire section labelled "Memory Safety." Just before the exact quote I gave, they mention how their Rust-based X.509 implementation allowed them to avoid some OpenSSL CVEs. They clearly were not proposing the language change with motivation to improve application interoperability, but rather as a way to avoid a well-known class of vulnerabilities in C (and of course many other low-level languages also).

I think at this point it's worth observing that many of those critiquing OpenSSL are themselves software engineers and for that reason will often struggle to avoid characterizing problems in terms of what they believe the solution should be. I think the OpenSSL leadership would do well to focus less on rebutting specific solution proposals and more on understanding what problems motivated those solutions to be proposed in the first place. The proposed solutions may not be the best ones, but usually the problems are worth solving. (In this specific context, there are many things that can be done to mitigate the C memory safety issue that don't necessarily require throwing out the baby with the bath water.)

JE

Jon Ericson Sat 24 Jan 2026 7:20PM

I think the OpenSSL leadership would do well to focus less on rebutting specific solution proposals and more on understanding what problems motivated those solutions to be proposed in the first place.

The other thing to remember is that the primary audience isn't the people critiquing OpenSSL (their opinions are usually set already), but the larger group of people who are watching the drama play out. For instance, I first learned of the QUIC controversy from an answer about why Discourse doesn't offer HTTP/3. A part of the answer was:

The fact that OpenSSL maintainers basically sabotaged QUIC and halted the progress of the entire ecosystem for the equivalent of a decade.

I don't know if this is a fair accusation, but it's grounded in a GitHub issue on the HAProxy repository. People aren't really interested in the nuances of how QUIC is implemented on OpenSSL, they just want to have HTTP/3 and see OpenSSL as the barrier to their desire.

So one really effective thing the OpenSSL community did was to demonstrate how to use OpenSSL as a TLS backend in ngtcp2 (which is "an effort to implement IETF QUIC protocol") and support the PR that actually made the change. That's the sort of thing that communicates a desire to solve problems rather than just being a barrier to progress (or whatever people think).

SN

Sasha Nedvedicky Thu 29 Jan 2026 10:08AM

Sorry for joining the party late. I'm catching up after being off-line last week.

I'm with Neil here; performance is hard and there are no easy answers. The issue must be understood first. For example, the curl write-up on the QUIC protocol stack in OpenSSL says it's up to three times slower than ngtcp2, but the blog post doesn't provide any details on how those tests were conducted, so we are left to shoot in the dark. In my experience with tquic from Tencent, the OpenSSL QUIC stack seems to be on par when it comes to single-thread performance:

The first run shows the results against the tquic server from Tencent:

sashan@work:~/work.openssl/tquic/target/release$ ./tquic_client -d 30 --total-requests-per-thread  0 --connect-to 127.0.0.1:4433  https://127.0.0.1:4433/1024.txt
finished in 31.005322674s, 32.64 req/s
conns: total 1012, finish 1012, success 1012, failure 0
requests: sent 1012, finish 1012, success 1012
time for request(µs):
        min: 495.00, max: 790.00, mean: 626.45, sd: 14.25
        median: 630.00, p80: 636.00, p90: 638.00, p99: 654.09
recv pkts: 7083, sent pkts: 7598, lost pkts: 0
recv bytes: 4653998, sent bytes: 3197164, lost bytes: 0

And here are the numbers against OpenSSL's QUIC server as found in perftools:

sashan@work:~/work.openssl/tquic/target/release$ ./tquic_client -d 30 --total-requests-per-thread  0 --connect-to 127.0.0.1:8000  https://127.0.0.1:8000/1024.txt

finished in 31.003283704s, 35.32 req/s
conns: total 1095, finish 1095, success 1095, failure 0
requests: sent 1095, finish 1095, success 1095
time for request(µs):
        min: 315.00, max: 634.00, mean: 365.61, sd: 22.52
        median: 370.00, p80: 382.00, p90: 386.00, p99: 403.00
recv pkts: 8760, sent pkts: 9855, lost pkts: 0
recv bytes: 3489765, sent bytes: 3520425, lost bytes: 0

The numbers above seem to be on par (definitely not three times worse). So let's put some more pressure on SSL_poll(), which the OpenSSL QUIC server uses. Here the `--max-concurrent-conns 50` option is added. The first output shows the client running against the tquic server from Tencent:

sashan@work:~/work.openssl/tquic/target/release$ ./tquic_client -d 30 --max-concurrent-conns 50 --total-requests-per-thread  0 --connect-to 127.0.0.1:4433  https://127.0.0.1:4433/1024.txt

finished in 31.005164332s, 937.27 req/s
conns: total 29072, finish 29072, success 29072, failure 0
requests: sent 29072, finish 29060, success 29060
time for request(µs):
        min: 144.00, max: 208523.00, mean: 10875.70, sd: 6471.23
        median: 11150.50, p80: 13755.00, p90: 14882.00, p99: 22091.01
recv pkts: 202723, sent pkts: 213096, lost pkts: 68
recv bytes: 133691000, sent bytes: 91738939, lost bytes: 49768

We see roughly ~940 requests per second. Doing the same against the OpenSSL QUIC server:

sashan@work:~/work.openssl/tquic/target/release$ ./tquic_client -d 30 --max-concurrent-conns 50 --total-requests-per-thread  0 --connect-to 127.0.0.1:8000  https://127.0.0.1:8000/1024.txt

finished in 31.001091568s, 328.67 req/s
conns: total 10239, finish 10239, success 10239, failure 0
requests: sent 10237, finish 10189, success 10189
time for request(µs):
        min: 45556.00, max: 236912.00, mean: 148384.40, sd: 9487.62
        median: 147404.00, p80: 150833.80, p90: 153007.67, p99: 184095.51
recv pkts: 74791, sent pkts: 102378, lost pkts: 44
recv bytes: 32368778, sent bytes: 33279281, lost bytes: 19454

We get around ~330 requests per second with the OpenSSL QUIC stack, which is suddenly getting close to the three-times-worse claim made by the curl team. So it feels like the bottleneck here is SSL_poll(), which backs asynchronous I/O in the OpenSSL QUIC server. It might be just coincidence, but it's definitely worth checking how ngtcp2 behaves under the same load.

The OpenSSL project currently tests performance regularly so it can be compared against older OpenSSL releases. Earlier this week I realized we can actually use our performance tools to test other crypto libraries too. So I think this should be the first step.

Jon started the discussion by posting a link to the HAProxy 'State of SSL stacks' article. The HAProxy team has shared details on how to set up the test environment, so we can start comparing OpenSSL performance with other libraries using state-of-the-art applications. Here I can share recent results for the HAProxy test done using siege; I'm still learning the h1load tool used by HAProxy. The bar chart here shows the number of transactions (HTTP requests/responses) sent through a chain of 20 HTTPS proxies to an httpterm server. Each request downloads 1 kB of data. Just look at the trends in those charts, as everything is still preliminary; the scripts that set up and run the tests are still in progress and will be part of the perftools repository once finished and reviewed.