From r/MachineLearning

How to collect evidence against an LLM reviewer? [D]

Our take

In academic publishing, receiving a weak rejection from a reviewer who appears to rely on LLM-generated content can be frustrating, especially when the other reviewers have given positive feedback. The situation raises a practical question: how do you gather evidence and flag a potentially non-compliant review? A sensible approach is to compile specific examples of the reviewer's claims, document where they contradict or misread your work, and report the findings to the Area Chair (AC).
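One concrete, if circumstantial, way to compile such evidence is to quantify how closely the review overlaps with reviews an LLM produces when prompted with the same paper. The sketch below is a minimal illustration of that idea, not a proof of misconduct: the file names are hypothetical, and a high similarity score only supports, never establishes, LLM usage.

```python
# Minimal sketch: measure lexical overlap between the suspect review and
# reviews produced by your own LLM simulations of the paper. High
# similarity is circumstantial evidence, but it is a concrete artifact
# an AC can inspect alongside the reviews themselves.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical inputs: the reviewer's text, plus several reviews you
# generated by prompting an LLM with your own submission.
suspect_review = open("reviewer_2.txt").read()
simulated_reviews = [open(f"llm_sim_{i}.txt").read() for i in range(5)]

# Word n-grams up to length 3 catch reused phrasing, not just shared topic words.
vec = TfidfVectorizer(stop_words="english", ngram_range=(1, 3))
matrix = vec.fit_transform([suspect_review] + simulated_reviews)

# Cosine similarity of the suspect review against each simulated review.
scores = cosine_similarity(matrix[0], matrix[1:])[0]
for i, score in enumerate(scores):
    print(f"simulated review {i}: cosine similarity = {score:.2f}")
```

TF-IDF n-gram overlap is a deliberately simple choice here; a sentence-embedding similarity would also catch paraphrased points that lexical overlap misses, at the cost of being harder to explain to an AC.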

As the title suggests, I received a weak rejection with high confidence from a reviewer whose review is clearly LLM-written, while all four other reviewers gave positive scores with low confidence.

Most of the points he raised are trivial and do not apply to my paper, and all the baselines he mentioned are irrelevant to my task. They are the exact same points that came up when I ran LLM simulations myself.

He is not replying to my rebuttal. I would like to know how people usually deal with this kind of situation. Do you collect evidence and report him to the AC? If so, how do you collect that evidence? And when you report him, do you frame it as a low-quality review or as LLM usage? My understanding is that using an LLM for anything beyond grammar polishing is not allowed, but it's hard to prove.

Would be nice if people could share their experiences.

submitted by /u/d_edge_sword

