From Machine Learning

Why ML conference reviews sometimes feel like a “lottery” [D]

Our take

The perception that machine learning conference reviews resemble a “lottery” often stems from the difficulty of evaluating submissions consistently. Clearly strong papers are typically accepted, but many submissions fall into a gray area where quality is hard to rank. With submission counts rising, reviewers are under pressure, which leads to inconsistent evaluations. Factors like clarity, framing, and the subjective preferences of individual reviewers can heavily influence outcomes.

I’ve been trying to make sense of all the “ML conferences are a lottery” takes, and honestly I think it’s both true and not true depending on what you mean.

If a paper is clearly strong, like a genuinely solid contribution, well executed, easy to understand, it usually gets in. And if it’s clearly weak, it usually gets filtered out. The weirdness people complain about mostly lives in the huge middle where papers are good but not undeniable.

That’s also where scale starts to matter. There are just so many submissions now that reviewers are stretched thin, matching isn’t perfect, and everyone has slightly different standards or taste. Add tight timelines and limited back-and-forth, and small things start to matter a lot. Whether a reviewer really “gets” your contribution, how clearly you framed it, or even just how it lands with that particular set of reviewers can swing the outcome.

I think that’s why it feels random. Not because the whole system is broken, but because a big chunk of papers are sitting right near the decision boundary, and decisions there are naturally high-variance.
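That near-the-boundary intuition is easy to sandbox with a toy Monte Carlo. Everything below, the 1–10 score scale, the threshold of 5, the reviewer noise level, is an illustrative assumption I picked for the sketch, not real review data from any venue:

```python
import random

def acceptance_rate(true_quality, threshold=5.0, noise=1.0,
                    n_reviewers=3, trials=10_000, seed=0):
    """Estimate how often a paper of a given underlying quality is
    accepted when each reviewer's score is the true quality plus
    independent Gaussian noise, and the mean score must clear a
    fixed threshold. All numbers are illustrative assumptions."""
    rng = random.Random(seed)
    accepts = 0
    for _ in range(trials):
        mean_score = sum(rng.gauss(true_quality, noise)
                         for _ in range(n_reviewers)) / n_reviewers
        if mean_score >= threshold:
            accepts += 1
    return accepts / trials

# A clearly strong paper gets in almost every time, a clearly weak
# one almost never, but a borderline paper is close to a coin flip.
for quality in (7.0, 5.0, 3.0):
    print(quality, acceptance_rate(quality))
```

Under these made-up numbers, papers two points above or below the bar get near-deterministic outcomes, while a paper sitting right at the bar flips with roughly even odds, which is exactly the high-variance zone described above.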

People from strong research groups often don’t experience this. It’s not that the process treats them differently; it’s more that they’re better at pushing their papers out of that borderline zone. Cleaner writing, stronger positioning, more predictable execution. So a larger fraction of their work is clearly above the bar.

So my current take is: it’s not a lottery overall, but it absolutely behaves like one near the cutoff, and that’s where most of the frustration comes from.

submitted by /u/Hope999991

