Japanese AI startup Sakana said that its AI generated one of the first peer-reviewed scientific publications. But while the claim isn’t necessarily untrue, there are caveats to note.
The debate swirling around AI and its role in the scientific process grows fiercer by the day. Many researchers don’t think AI is quite ready to serve as a “co-scientist,” while others think that there’s potential — but acknowledge it’s early days.
Sakana falls into the latter camp.
The company said that it used an AI system called The AI Scientist-v2 to generate a paper that Sakana then submitted to a workshop at ICLR, a long-running and reputable AI conference. Sakana claims that the workshop’s organizers, as well as ICLR’s leadership, had agreed to work with the company to conduct an experiment to double-blind review AI-generated manuscripts.
Sakana said it collaborated with researchers at the University of British Columbia and the University of Oxford to submit three AI-generated papers to the aforementioned workshop for peer review. The AI Scientist-v2 generated the papers “end-to-end,” Sakana claims, including the scientific hypotheses, experiments and experimental code, data analyses, visualizations, text, and titles.
“We generated research ideas by providing the workshop abstract and description to the AI,” Robert Lange, a research scientist and founding member at Sakana, told TechCrunch via email. “This ensured that the generated papers were on topic and suitable submissions.”
One of the three papers was accepted to the ICLR workshop — a paper that casts a critical lens on training techniques for AI models. Sakana said it immediately withdrew the paper before publication, in the interest of transparency and out of respect for ICLR conventions.
“The accepted paper both introduces a new, promising method for training neural networks and shows that there are remaining empirical challenges,” Lange said. “It provides an interesting data point to spark further scientific investigation.”
But the achievement isn’t as impressive as it might seem at first glance.
In a blog post, Sakana admits that its AI occasionally made “embarrassing” citation errors — for example, incorrectly attributing a method to a 2016 paper instead of the original 1997 work.