So it appears that the volume of contributions exceeds our current review capacity; the question is how to solve that without lowering the standards.
Some effort has gone into improving the CI tooling, which at least removes the compile-test burden. Adding more people to the equation only got us so far - at least most PRs are now categorized and receive some feedback, yet the number of merges didn't increase proportionally.
What we need now is an automatic process (as in rules/guidelines) for reaching a verdict on the open ones. In my experience, whenever we try to mass-merge PRs, a certain amount of unexpected fallout occurs, so people are hesitant to merge things they're not intimately involved in.
The only solution I personally see is to automate most aspects of PR review, covering things like code style, buildability, and rebase- or mergeability.
The remaining issues, like runtime regressions, can then be dealt with by consistently reverting every PR that introduces problems - in other words, be liberal with merges but also with reverts.
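As a rough illustration of the mergeability side of such automation, here is a minimal sketch, assuming the project is hosted on GitHub and a token is available in GITHUB_TOKEN; the owner and repository names below are hypothetical placeholders, not taken from the original discussion. It simply lists open PRs whose computed merge state is not clean, which is the kind of verdict that could be reported back on each PR automatically.

```python
#!/usr/bin/env python3
"""Sketch: flag open PRs that GitHub does not consider cleanly mergeable.

Assumptions (not from the original post): the repo lives on GitHub,
a personal access token is exported as GITHUB_TOKEN, and OWNER/REPO
are placeholders for the actual project.
"""
import os
import requests

OWNER = "example-org"   # hypothetical
REPO = "example-repo"   # hypothetical
API = "https://api.github.com"
HEADERS = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
}


def open_pull_requests():
    """Yield all open PRs, following pagination."""
    page = 1
    while True:
        resp = requests.get(
            f"{API}/repos/{OWNER}/{REPO}/pulls",
            headers=HEADERS,
            params={"state": "open", "per_page": 100, "page": page},
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            return
        yield from batch
        page += 1


def mergeability(number):
    """Return GitHub's mergeable_state for a single PR.

    The list endpoint omits this field, so each PR needs one extra request.
    """
    resp = requests.get(f"{API}/repos/{OWNER}/{REPO}/pulls/{number}", headers=HEADERS)
    resp.raise_for_status()
    return resp.json().get("mergeable_state", "unknown")


if __name__ == "__main__":
    for pr in open_pull_requests():
        state = mergeability(pr["number"])
        if state != "clean":
            print(f"PR #{pr['number']} ({pr['title']}): needs attention ({state})")
```

The same loop could post a comment or apply a label instead of printing, so contributors see the verdict without a human having to triage each PR by hand.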
Another thing I noticed is that nobody dares to simply reject PRs that have no chance of being merged, which creates the impression that everything submitted will eventually end up in the tree - which isn't the case, so this needs some thought too.
Finally, and this is just my own opinion, I don't think that ~170 open PRs going back as far as a year is all that bad.