ICML Pioneers Adaptive AI Review Policies, Fostering Integrity
policy · llm
Published: Mar 19, 2026 10:17
Source: Hacker News
ICML is adapting to the growing role of AI in research, introducing new policies governing the use of generative AI in its peer review process. The approach aims to preserve the integrity of scientific evaluation while accommodating new technologies.
Key Takeaways
- ICML is among the first major conferences to establish guidelines for the use of Large Language Models in peer review.
- Two distinct policies are being tested: one prohibiting LLM use and another allowing it for specific purposes.
- 497 papers (~2% of all submissions) were desk-rejected for violating these new policies.
Reference / Citation
"This year, we desk-rejected 497 papers (~2% of all submissions), corresponding to submissions of the 506 reciprocal reviewers who violated the rules regarding LLM usage that they had previously explicitly agreed to."