Stop letting LLMs edit your .bib [D]
Our take
In research, accurate citations are paramount, so it is concerning how often large language models (LLMs) generate hallucinated citations with incorrect author lists, even on the cited researcher's own papers. This not only undermines the integrity of research but also shifts blame onto the technology when responsibility lies with the researchers using it. If we truly respect prior literature, should we not take the basic step of populating our .bib files accurately?
It’s shocking how frequently I notice hallucinated citations. For citations of my own papers alone, I’ve seen five in the past couple of months where the title is correct but the author list is wrong. When I email the authors to let them know, they always blame an LLM for hallucinating.
Is it really that hard to populate the .bib yourself? If you have any respect for research, is it not a basic requirement to make sure you correctly cite the prior literature? I feel there should be harsher penalties for these hallucinated citations.
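For what it's worth, cross-checking a .bib takes only a few lines of scripting. Below is a minimal sketch, not anything the poster describes, that queries the public Crossref REST API by title and flags entries whose author lists disagree; the `check_citation` helper and the sample entry are hypothetical illustrations.

```python
import requests

def check_citation(title, expected_family_names):
    """Look up `title` on Crossref and compare the top hit's author
    family names against the ones recorded in the .bib entry.
    Returns None on a match, otherwise a description of the problem."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return f"no Crossref match for {title!r}"
    found = [a.get("family", "") for a in items[0].get("author", [])]
    if [n.lower() for n in found] != [n.lower() for n in expected_family_names]:
        return (f"author mismatch for {title!r}: "
                f".bib says {expected_family_names}, Crossref says {found}")
    return None

# Hypothetical entry pulled from a .bib file; substitute your own fields.
issue = check_citation("An Example Paper Title", ["Doe", "Smith"])
if issue:
    print(issue)
```

Even a rough check like this would catch the title-right, authors-wrong pattern described above before a manuscript goes out.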
Are others experiencing the same?