Artificial Intelligence Research Has a Slop Problem, Academics Say: 'It's a Mess'

The Guardian


Details

Date Published
5 Dec 2025
Priority Score
3
Australian
No
Created
8 Dec 2025, 02:26 pm

Authors (1)

Description

AI research in question as author claims to have written over 100 papers on AI that one expert calls a ‘disaster’

Summary

The article highlights concerns about the proliferation of low-quality AI research papers, particularly at major conferences like NeurIPS. It raises issues about the credibility and integrity of AI research, citing cases where individuals publish an implausibly high number of papers that are suspected to be AI-generated or largely superficial. This deluge of content poses challenges to maintaining quality and trust in AI research outputs, potentially undermining efforts to address serious AI safety risks. The problem is compounded by institutional pressures on researchers to achieve high publication counts, pointing to systemic flaws in academic and research frameworks. Such concerns are crucial in evaluating the broader implications for AI safety governance and policy-making.

Body

The author, Kevin Zhu, now runs Algoverse, an AI research and mentoring company for high schoolers. Photograph: Cavan Images/Alamy

A single person claims to have authored 113 academic papers on artificial intelligence this year, 89 of which will be presented this week at one of the world’s leading conferences on AI and machine learning, a claim that has raised questions among computer scientists about the state of AI research.

The author, Kevin Zhu, recently finished a bachelor’s degree in computer science at the University of California, Berkeley, and now runs Algoverse, an AI research and mentoring company for high schoolers – many of whom are his co-authors on the papers. Zhu himself graduated from high school in 2018.

Papers he has put out in the past two years cover subjects such as using AI to locate nomadic pastoralists in sub-Saharan Africa, to evaluate skin lesions and to translate Indonesian dialects. On his LinkedIn, he touts publishing “100+ top conference papers in the past year”, which have been “cited by OpenAI, Microsoft, Google, Stanford, MIT, Oxford and more”.

Zhu’s papers are a “disaster”, said Hany Farid, a professor of computer science at Berkeley, in an interview.
“I’m fairly convinced that the whole thing, top to bottom, is just vibe coding,” he said, referring to the practice of using AI to create software.

Farid called attention to Zhu’s prolific publications in a recent LinkedIn post, which provoked discussion of other, similar cases among AI researchers, who said their newly popular discipline faces a deluge of low-quality research papers, fueled by academic pressures and, in some cases, AI tools.

In response to a query from the Guardian, Zhu said that he had supervised the 131 papers, which were “team endeavors” run by his company, Algoverse. The company charges $3,325 to high-school students and undergraduates for a selective 12-week online mentoring experience – which involves help submitting work to conferences.

“At a minimum, I help review methodology and experimental design in proposals, and I read and comment on full paper drafts before submission,” he said, adding that projects on subjects such as linguistics, healthcare or education involved “principal investigators or mentors with relevant expertise”.

The teams used “standard productivity tools such as reference managers, spellcheck, and sometimes language models for copy-editing or improving clarity”, he said in response to a query about whether the papers were written with AI.

Bot watchers in turmoil

The review standards for AI research differ from those of most other scientific fields. Most work in AI and machine learning does not undergo the stringent peer-review processes of fields such as chemistry and biology – instead, papers are often presented less formally at major conferences such as NeurIPS, one of the world’s top machine learning and AI gatherings, where Zhu is slated to present.

Zhu’s case points to a larger issue in AI research, said Farid. Conferences including NeurIPS are being overwhelmed with increasing numbers of submissions: NeurIPS fielded 21,575 papers this year, up from under 10,000 in 2020.
Another top AI conference, the International Conference on Learning Representations (ICLR), reported a 70% increase in yearly submissions for the 2026 conference: nearly 20,000 papers, up from just over 11,000 for the 2025 conference.

“Reviewers are complaining about the poor quality of the papers, even suspecting that some are AI-generated. Why has this academic feast lost its flavor?” asked the Chinese tech blog 36Kr in a November post about ICLR, noting that the average score reviewers had awarded papers had declined year-over-year.

Meanwhile, students and academics are facing mounting pressure to rack up publications and keep up with their peers. It is uncommon to produce a double-digit number – much less a triple-digit one – of high-quality academic computer science papers in a year, academics said. Farid says that at times, his students have “vibe coded” papers to up their publication counts.

“So many young people want to get into AI. There’s a frenzy right now,” said Farid.

NeurIPS reviews papers submitted to it, but its process is far quicker and less thorough than standard scientific peer review, said Jeffrey Walling, an associate professor at Virginia Tech. This year, the conference has used large numbers of PhD students to vet papers, which a NeurIPS area chair said compromised the process.

“The reality is that oftentimes conference referees must review dozens of papers in a short period of time, and there is usually little to no revision,” said Walling.

Walling agreed with Farid that too many papers were being published, saying he had encountered other authors with over 100 publications in a year.
“Academics are rewarded for publication volume more than quality … Everyone loves the myth of super productivity,” he said.

On the FAQ page of Zhu’s Algoverse, answers discuss how the company’s program can help applicants’ future college or career prospects: “The skills, accomplishments, and publications you achieve here are highly regarded in academic circles and can indeed strengthen your college application or résumé. This is especially true if your research is admitted to a top conference – a prestigious feat even for professional researchers.”

Farid says that he now counsels students not to go into AI research, because of the “frenzy” in the field and the large volume of low-quality work being put out by people hoping to better their career prospects.

“It’s just a mess. You can’t keep up, you can’t publish, you can’t do good work, you can’t be thoughtful,” he said.

Slop flood

Much excellent work has still come out of this process. Memorably, Google’s paper on transformers, Attention Is All You Need – the theoretical basis for the advances in AI that led to ChatGPT – was presented at NeurIPS in 2017.

NeurIPS organisers agree the conference is under pressure. In a comment to the Guardian, a spokesperson said that the growth of AI as a field had brought “a significant increase in paper submissions and heightened value placed on peer-reviewed acceptance at NeurIPS”, putting “considerable strain on our review system”.

Zhu’s submissions were largely to workshops within NeurIPS, which have a different selection process than the main conference and are often where early-career work gets presented, said NeurIPS organisers.
Farid said he did not find this a substantive explanation for one person to put his name on more than 100 papers. “I don’t find this a compelling argument for putting your name on 100 papers that you could not have possibly meaningfully contributed to,” said Farid.

The problem is bigger than a flood of papers at NeurIPS. ICLR’s reviewers used AI to review a large volume of submissions – resulting in apparently hallucinated citations and feedback that was “very verbose with lots of bullet points”, according to a recent article in Nature.

The feeling of decline is so widespread that finding a solution to the crisis has itself become the subject of papers. A May 2025 position paper – an academic, evidence-based version of a newspaper op-ed – authored by three South Korean computer scientists, which proposed a solution to the “unprecedented challenges with the surge of paper submissions, accompanied by growing concerns over review quality and reviewer responsibility”, won an award for outstanding work at the 2025 International Conference on Machine Learning.

Meanwhile, says Farid, major tech companies and small AI safety organisations now dump their work on arXiv, a site once reserved for little-viewed preprints of math and physics papers, flooding the internet with work that is presented as science – but is not subject to review standards.

The cost of this, says Farid, is that it is almost impossible to know what’s actually going on in AI – for journalists, the public, and even experts in the field: “You have no chance, no chance as an average reader to try to understand what is going on in the scientific literature. Your signal-to-noise ratio is basically one. I can barely go to these conferences and figure out what the hell is going on.

“What I tell students is that, if what you’re trying to optimize is publishing papers, you know, it’s actually honestly not that hard to do. Just do really crappy low-quality work and bomb conferences with it.
But if you want to do really thoughtful, careful work, you’re at a disadvantage because you’re effectively unilaterally disarmed,” he said.