1 min read · from Machine Learning
AI/ML Conferences [D]
Our take
As a fellow ML researcher, I share the frustration felt by many after seeing the challenges faced by authors submitting to ICML 2026. The overwhelming number of submissions to top AI/ML conferences has exposed significant flaws in the current review system: papers can be rejected even after their authors address every reviewer concern. This situation calls for a reevaluation of our review processes to ensure fairness and transparency. What strategies could foster a more equitable and constructive review environment?
As a fellow ML researcher, I feel disheartened and discouraged after seeing the experiences of people who submitted their work to ICML 2026. Given the sheer number of papers submitted to A* AI/ML conferences, the current review system does not seem to work well. For example, some papers are rejected even though the authors addressed all of the reviewers' concerns and their scores rose substantially as a result. What could be a better way forward to ensure a fair review process?
Tagged with
#AI #ML #ICML #conferences #papers #review system #fair review process