Sakana’s paper also didn’t undergo as much scrutiny as some other peer-reviewed publications. Because the company withdrew it after the initial peer review, the paper never received an additional “meta-review,” during which the workshop organizers could, in theory, have rejected it.
Then there’s the fact that acceptance rates for conference workshops tend to be higher than those for the main “conference track” — a fact Sakana candidly mentions in its blog post. The company said that none of its AI-generated studies passed its internal bar for publication in the ICLR conference track.
Matthew Guzdial, an AI researcher and assistant professor at the University of Alberta, called Sakana’s results “a bit misleading.”
“The Sakana folks selected the papers from some number of generated ones, meaning they were using human judgment in terms of picking outputs they thought might get in,” he said via email. “What I think this shows is that humans plus AI can be effective, not that AI alone can create scientific progress.”
Mike Cook, a research fellow at King’s College London specializing in AI, questioned the rigor of the peer reviewers and workshop.
“New workshops, like this one, are often reviewed by more junior researchers,” he told TechCrunch. “It’s also worth noting that this workshop is about negative results and difficulties — which is great, I’ve run a similar workshop before — but it’s arguably easier to get an AI to write about a failure convincingly.”
Cook added that he wasn’t surprised an AI could pass peer review, considering that AI excels at writing human-sounding prose. It isn’t even new for partly AI-generated papers to pass journal review, Cook pointed out, nor are the ethical dilemmas this poses for the sciences.
AI’s technical shortcomings — such as its tendency to hallucinate — make many scientists wary of endorsing it for serious work. Moreover, experts fear that AI could simply end up generating noise in the scientific literature rather than elevating progress.