OpenSSL Communities

OpenSSL PR review process - proposal

Dmitry Belyavsky Mon 5 Jan 2026 10:23AM

Dear colleagues,
I just published a proposal for how we can improve the situation with PR reviews.
https://openssl-communities.org/d/kMnVYKwd/openssl-pr-review-process-

I published it in the Committers community because, according to the policies, the Committers are responsible for processing PRs, but I believe it affects the whole community, so feel free to share your opinions there.


Jan Lübbe Mon 5 Jan 2026 11:33AM

Although GitHub doesn't support assigning PRs to teams, PRs can be assigned to multiple people. I'd find it useful if the assignment reflected who is expected to handle the next step in bringing the PR forward.


Nicola Tuveri Mon 5 Jan 2026 12:18PM

I cannot comment in that community, but I do like the proposal, and I agree with Dmitry that this is a real issue: it has gotten worse since the OTC was disbanded and needs attention, as it definitely affects all our communities of external contributors.


Michael Baentsch Tue 6 Jan 2026 7:15AM

I second @Nicola Tuveri's statement 100%: I also cannot comment in that community, and I also think the current handling is "suboptimal" (putting it nicely).

The script I proposed some time back, to ease the task of Committers and allow "normal people"/Contributors to help with the problem, does a few simple things:

  • It tracks who last commented (a Committer or a non-Committer) and, based on that, recommends who "should go next". It could do that automatically to "keep things moving" (at least where Contributors/PR authors are concerned) and take a bit of load off Committers

  • It highlights "half-approved" PRs: that shouldn't happen at all, I'd think

  • It highlights PRs without any review (after some grace period)

  • It tries to combine labels with recommendations for next steps, based on the time passed since the last action on the PR.

Right now the script simply uses the GH API and can be run by any person to get recommendations on what can be worked on next. My recommendation is to consider making this available to folks interested in helping Committers focus.
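For illustration, here is a minimal Python sketch of the kind of triage pass described above, using the GH REST API. This is not the actual script: the committer list and the two-approval threshold are assumptions made up for the example.

```python
#!/usr/bin/env python3
"""Minimal sketch of a PR triage pass (illustrative, not the actual script).

Assumptions: COMMITTERS is a hand-maintained set of committer logins, and
two approvals count as "fully approved" (hypothetical threshold).
"""
import os

import requests

API = "https://api.github.com/repos/openssl/openssl"
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}
COMMITTERS = {"committer-a", "committer-b"}  # placeholder logins

def triage(pr_number: int) -> str:
    """Recommend who 'should go next' on a single PR."""
    reviews = requests.get(f"{API}/pulls/{pr_number}/reviews",
                           headers=HEADERS).json()
    comments = requests.get(f"{API}/issues/{pr_number}/comments",
                            headers=HEADERS).json()
    approvals = {r["user"]["login"] for r in reviews if r["state"] == "APPROVED"}
    if not reviews:
        return "unreviewed: needs a first look"
    if len(approvals) == 1:
        return "half-approved: needs a second approval"
    # Issue comments come back oldest-first, so the last one is the latest.
    if comments and comments[-1]["user"]["login"] in COMMITTERS:
        return "ball is with the author/Contributor"
    return "ball is with the Committers"
```

Run over all open PRs, the output of such a pass is essentially the ToDo list of "who should go next" recommendations.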

A few comments on the words of @Neil Horman in the Committers discussion board:

If I don't get to a review, that's unfortunate, but ultimately, not impactful to me personally.

Please also consider whether this is impactful to the Community -- and particularly the author: Not seeing one's PR move forward is pretty detrimental to motivation to contribute in my eyes.

We've actually already done this (or half of it). For issues, we labeled everything we considered stale as inactive, and closed them at the end of 3.4 development. That netted about 810 closed issues, which is great, but we still have around 1300 open issues.

This effort was what triggered me to write the script: it cannot be that someone has to manually label issues to then eventually close them.

We could certainly do the same thing with PRs (I would be supportive), but it seems like the sort of thing that just grows back if we don't continually address it.

That is precisely the point: my suggestion is to use automation to ease that effort. The introduction of another manual process (or of a "PR wrangler" akin to the "Bug Wrangler") doesn't seem the way to go.

And at the risk of a severe "comment" backlash, allow me to make a possibly provocative suggestion: given how straightforward it was to create a script to pick the "low-hanging fruit" (PRs and issues in dire need of attention), and given the huge set of closed issues and PRs to learn from, wouldn't this task be ideal for an AI to make recommendations as to who should go next, to keep things moving? Yes, serious work is required, but it is possibly worthwhile and may yield better long-term results than putting such "mundane" tasks onto the plates of Committers busy with daily business.


Neil Horman Tue 6 Jan 2026 2:01PM

@Michael Baentsch

Please also consider whether this is impactful to the Community -- and particularly the author: Not seeing one's PR move forward is pretty detrimental to motivation to contribute in my eyes.

No disagreement, and I don't mean for my words here to suggest that I don't care about some PRs, or the detrimental experience that results; quite the contrary. All I mean to say is that the PRs I review which consistently get done in a timely manner are those which are kept in front of me (typically by someone asking me about them on daily standups). When I say "not impactful to me personally", I mean that when something slips through the cracks, there's nothing to catch it and bring it back to my attention, and the outcome is that, well, it just gets forgotten. A simple solution here would be to place PRs for which I am a reviewer, and for which a review is waiting on me, on some dashboard in GitHub, but (surprisingly) that is a very difficult thing to accomplish in GitHub.
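For what it's worth, a crude command-line approximation of such a dashboard might be possible with the search API, which can list open PRs with a pending review request for a given user; the login below is a placeholder. One caveat that arguably illustrates the "slips through the cracks" point: GitHub drops a user from review-requested: once they submit a review, so anything waiting on a re-review falls out of this view.

```python
import os

import requests

# Sketch: list open PRs with a pending review request for one user.
# "some-reviewer" is a placeholder login, not a real assignment.
query = "repo:openssl/openssl is:pr is:open review-requested:some-reviewer"
resp = requests.get(
    "https://api.github.com/search/issues",
    params={"q": query, "per_page": 50},
    headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
)
for item in resp.json().get("items", []):
    print(f"#{item['number']}: {item['title']}")
```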

That is precisely the point: my suggestion is to use automation to ease that effort. The introduction of another manual process (or of a "PR wrangler" akin to the "Bug Wrangler") doesn't seem the way to go.

No argument; I'd be very supportive of something that automated the closure of issues/PRs that have been waiting on an author for some length of time without response.

I'm not quite as sure about label evaluation; we have a pretty complex set of labels that I feel needs human intervention to get right, but I'd be happy to be wrong on that.


Michael Baentsch Tue 6 Jan 2026 2:56PM

@Neil Horman Yup -- if there were a resource (or person) I could go to to learn about the usage and meaning of each label, I'd be willing to give that a try and add it to the script I already have.

And if a dashboard in GH is hard to do, why not do it outside of it? I got pretty far just with GH API queries outputting something akin to a ToDo list. Running that regularly, possibly combined with a GH-handle/emailer to send a personalized ToDo list to people interested in such output (say, "Your Top 10 PRs this week to look at" for Committers and "Your PR status" for PR authors), may be the way to go.
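As a sketch of what the emailer half could look like (the SMTP host, sender address, and subject line are all placeholders; the ToDo lines would come from the API queries already in the script):

```python
import smtplib
from email.message import EmailMessage

def send_todo(recipient: str, todo_lines: list[str]) -> None:
    """Mail a plain-text ToDo list to one person (sketch only)."""
    msg = EmailMessage()
    msg["Subject"] = "Your Top 10 PRs this week"  # placeholder subject
    msg["From"] = "pr-bot@example.org"            # placeholder sender
    msg["To"] = recipient
    msg.set_content("\n".join(todo_lines) or "Nothing pending this week.")
    with smtplib.SMTP("localhost") as smtp:       # placeholder SMTP host
        smtp.send_message(msg)
```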


Neil Horman Tue 6 Jan 2026 3:16PM

@Michael Baentsch

I'm not sure on the labels question, but I think most are fairly self-explanatory:

  • branch: <branch> - This PR should be applied to this branch when merged

  • approval: Waiting for approval - Waiting for the requisite number of reviewers

  • approval: hold <reason> - PR approvals blocked for <reason>

  • approval: done - Requisite number of reviews met

  • approval: ready for merge - Committer is free to merge the PR

As for the dashboard, I think there has been some hesitation to do anything outside of GH, as doing so often entails the additional overhead of maintaining some other tool/integration (which may or may not be true). A command-line tool that runs a few queries might be a good solution here, but I've often hit token rate limits on the REST API when developing such tools. That's a solvable problem, of course, but I've always run out of steam when I need to go get administrative adjustments for such things.
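On the rate-limit point, one common workaround is to watch the X-RateLimit-* headers the REST API returns and sleep until the quota resets. A minimal sketch (the threshold is arbitrary):

```python
import os
import time

import requests

HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

def polite_get(url: str, **params) -> requests.Response:
    """GET that backs off when the remaining API quota runs low."""
    resp = requests.get(url, headers=HEADERS, params=params)
    remaining = int(resp.headers.get("X-RateLimit-Remaining", "1"))
    if remaining < 10:  # arbitrary safety margin
        reset = int(resp.headers.get("X-RateLimit-Reset", "0"))
        time.sleep(max(0.0, reset - time.time()) + 1)
    return resp
```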


Paul Dale Tue 6 Jan 2026 8:24AM

I'd have expected that the recent-ish increase in the number of committers would have alleviated the problem to some degree. It doesn't seem to have.


Dmitry Belyavsky Tue 6 Jan 2026 12:00PM

@Paul Dale You can have quite a few resources and just not use them properly.


Frederik Wedel-Heinen Sat 10 Jan 2026 4:56AM

Thanks for bringing it up, Dmitry. I would like to add a few points:

1. PR KPIs
Measurable KPIs are key to making meaningful statements about the state of the review process. I don't know whether this is already being tracked. These might include metrics such as the following (a sketch of computing the first one appears after the list):

* Time to first review

* Time to final decision/merge

* Number of PRs opened/closed per week

* Distribution of review workload among reviewers

It'd be interesting to see some numbers :)
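None of these should be hard to compute from the API. As a rough Python sketch of the first metric (a real report would loop over recent PRs and aggregate):

```python
import os
from datetime import datetime, timedelta

import requests

API = "https://api.github.com/repos/openssl/openssl"
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

def parse(ts: str) -> datetime:
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def time_to_first_review(pr_number: int) -> timedelta | None:
    """Time from PR creation to its first submitted review (sketch)."""
    pr = requests.get(f"{API}/pulls/{pr_number}", headers=HEADERS).json()
    reviews = requests.get(f"{API}/pulls/{pr_number}/reviews",
                           headers=HEADERS).json()
    times = [parse(r["submitted_at"]) for r in reviews if r.get("submitted_at")]
    if not times:
        return None  # still waiting on a first review
    return min(times) - parse(pr["created_at"])
```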

2. Improve PR Labeling

As a reviewer, I'd find it much more effective if PRs were consistently labeled with metadata that helps scanning and prioritization -- such as:

* Size of contribution (e.g., small/medium/large)

* Use of “Waiting on contributor”, “Needs rebase”, “Hold for next major release”, etc.

* Subsystem or component affected

Automation of these is of course preferred, but documentation on the intended use of the labels would also give humans a guideline for setting them.
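The size label in particular looks automatable, since GitHub already tracks diff stats per PR. A sketch, where the label names and thresholds are made up for illustration:

```python
import os

import requests

API = "https://api.github.com/repos/openssl/openssl"
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

def label_size(pr_number: int) -> None:
    """Attach a size label derived from the PR's diff stats (sketch)."""
    pr = requests.get(f"{API}/pulls/{pr_number}", headers=HEADERS).json()
    changed = pr["additions"] + pr["deletions"]
    label = ("size: small" if changed < 50         # hypothetical labels
             else "size: medium" if changed < 500  # and thresholds
             else "size: large")
    requests.post(f"{API}/issues/{pr_number}/labels",
                  headers=HEADERS, json={"labels": [label]})
```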

3. Set Expectations Around Review Pace

As a contributor, whether creating a PR for the first time or the 100th time, it would be nice to have a rough expectation for review timing based on PR type and labels, for example by having text like this in the default PR summary text:

* PRs with labels X, Y are typically reviewed within N days


Having these expectations (even if they’re aspirational) helps contributors plan their work and avoids uncertainty about “when something will ever be looked at.”