Researchers built an ‘AI Scientist’ — what can it do?

Credit: Moor Studio/Getty

Could science be fully automated? A team of machine-learning researchers has now tried.

‘AI Scientist’, created by a team at the Tokyo company Sakana AI and at academic labs in Canada and the United Kingdom, performs the full cycle of research, from reading the existing literature on a problem and formulating hypotheses for new developments to trying out solutions and writing a paper. AI Scientist even does some of the job of peer reviewers and evaluates its own results.

AI Scientist joins a slew of efforts to create AI agents that automate at least parts of the scientific process. “To my knowledge, no one has yet done the total scientific community, all in one system,” says AI Scientist co-creator Cong Lu, a machine-learning researcher at the University of British Columbia in Vancouver, Canada. The results1 were posted on the arXiv preprint server this month.

“It’s impressive that they’ve done this end-to-end,” says Jevin West, a computational social scientist at the University of Washington in Seattle. “And I think we should be playing around with these ideas, because there could be potential for helping science.”

The output is not earth-shattering so far, and the system can only do research in the field of machine learning itself. In particular, AI Scientist lacks what most scientists would consider the crucial part of doing science — the ability to do laboratory work. “There’s still a lot of work to go from AI that makes a hypothesis to implementing that in a robot scientist,” says Gerbrand Ceder, a materials scientist at Lawrence Berkeley National Laboratory and the University of California, Berkeley. Still, Ceder adds, “If you look into the future, I have zero doubt in my mind that this is where much of science will go.”

Automated experiments

AI Scientist is based on a large language model (LLM). Using a paper that describes a machine-learning algorithm as a template, it starts by searching the literature for similar work. The team then employed a technique called evolutionary computation, which is inspired by the mutations and natural selection of Darwinian evolution. It proceeds in steps, applying small, random changes to an algorithm and selecting the ones that improve efficiency.

To do so, AI Scientist conducts its own ‘experiments’ by running the algorithms and measuring how they perform. At the end, it produces a paper and evaluates it in a sort of automated peer review. After ‘augmenting the literature’ this way, it can start the cycle again, building on its own results.
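The mutate-and-select loop described above can be sketched in a few lines of Python. The sketch below is a toy analogy rather than Sakana AI's implementation: the "algorithm" here is just a list of hyperparameters, and run_experiment and propose_mutation are hypothetical stand-ins for the real training runs and LLM-proposed code edits that AI Scientist uses.

```python
import random

def run_experiment(params):
    """Stand-in for training a candidate algorithm and measuring how it performs."""
    # Toy objective: the score is highest when every parameter is close to 0.5.
    return -sum((p - 0.5) ** 2 for p in params)

def propose_mutation(params):
    """Stand-in for proposing a small, random change to the algorithm."""
    mutated = list(params)
    i = random.randrange(len(mutated))
    mutated[i] += random.gauss(0, 0.1)
    return mutated

def evolve(initial, generations=200):
    """Mutate-and-select loop: keep a change only if it improves the measured score."""
    best, best_score = initial, run_experiment(initial)
    for _ in range(generations):
        candidate = propose_mutation(best)
        score = run_experiment(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    best, score = evolve([random.random() for _ in range(4)])
    print(f"best parameters: {best}")
    print(f"score: {score:.4f}")
```

In the full system, each accepted change would additionally be written up as a paper and scored by the automated review before the next cycle begins.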

The authors admit that the papers AI Scientist produced contain only incremental developments. Some other researchers were scathing in their comments on social media. “As an editor of a journal, I would likely desk-reject them. As a reviewer, I would reject them,” said one commenter on the website Hacker News.

West also says that the authors take a reductive view of how researchers learn about the current state of their field. Much of what they know comes from other forms of communication, such as going to conferences or chatting with colleagues at the water cooler. “Science is more than a pile of papers,” says West. “You can have a 5-minute conversation that will be better than a 5-hour study of the literature.”

West’s colleague Shahan Memon agrees, but both praise the authors for making their code and results fully open. That openness has allowed them to analyse AI Scientist’s results. They have found, for example, that it shows a ‘popularity bias’ in the earlier papers it chooses as references, skewing towards those with high citation counts. Memon and West say they are also looking into measuring whether AI Scientist’s choices were the most relevant ones.

Repetitive tasks

AI Scientist is, of course, not the first attempt to automate parts of a researcher’s job: the dream of automating scientific discovery is as old as artificial intelligence itself, dating back to the 1950s, says Tom Hope, a computer scientist at the Allen Institute for AI in Jerusalem. Already a decade ago, for example, the Automatic Statistician2 was able to analyse data sets and write up its own papers. And Ceder and his colleagues have even automated some bench work: the ‘robot chemist’ they unveiled last year can synthesize new materials and experiment with them3.

Hope says that current LLMs “are not able to formulate novel and useful scientific directions beyond basic superficial combinations of buzzwords”. Still, Ceder says that even if AI won’t be able to do the more creative parts of the work any time soon, it could still automate many of the more repetitive aspects of research. “At the low level, you’re trying to analyse what something is, how something responds. That’s not the creative part of science, but it’s 90% of what we do.” Lu says he has had similar feedback from many other researchers. “People will say, I have 100 ideas that I don’t have time for. Get the AI Scientist to do those.”

Lu says that to broaden AI Scientist’s capabilities — even to abstract fields beyond machine learning, such as pure mathematics — it might need to include techniques beyond language models. Recent results on solving maths problems by Google DeepMind, for example, have shown the power of combining LLMs with ‘symbolic’ AI techniques, which build logical rules into a system rather than relying only on it learning from statistical patterns in data. But the current iteration is just a start, he says. “We really believe this is the GPT-1 of AI science,” he says, referring to an early large language model by OpenAI of San Francisco, California.
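One common way to combine the two, offered here only as an illustration and not as a description of DeepMind's systems, is to let the language model merely propose candidate answers while a symbolic engine checks them against explicit rules. In the minimal sketch below, the candidates list is a hypothetical stand-in for LLM output, and SymPy plays the role of the symbolic checker.

```python
import sympy as sp

# Symbols carry explicit assumptions that the symbolic checker can use.
n, k = sp.symbols("n k", positive=True, integer=True)
target = sp.summation(k, (k, 1, n))  # the sum 1 + 2 + ... + n

# Hypothetical stand-ins for closed-form expressions proposed by an LLM.
candidates = ["n**2 / 2", "n * (n + 1) / 2", "(n + 1)**2 / 2"]

for text in candidates:
    guess = sp.sympify(text, locals={"n": n})
    # Symbolic verification: accept only if guess - target is identically zero.
    if sp.simplify(guess - target) == 0:
        print("verified closed form:", text)
        break
    print("rejected:", text)
```

Only the proposal that the rules can verify survives, whereas a purely statistical model offers no such guarantee.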

The results feed into a debate that is at the top of many researchers’ concerns these days, says West. “All my colleagues in different sciences are trying to figure out, where does AI fit in in what we do? It does force us to think what is science in the twenty-first century — what it could be, what it is, what it is not,” he says.

doi: https://doi.org/10.1038/d41586-024-02842-3

References

  1. Lu, C., Lu, C., Lange, R. T., Foerster, J., Clune, J. & Ha, D. Preprint at arXiv https://arxiv.org/abs/2408.06292 (2024).

  2. Ghahramani, Z. Nature 521, 452–459 (2015).

  3. Szymanski, N. J. et al. Nature 624, 86–91 (2023).





