(The question and the English text of the 2024 Nobel Prize in Physics announcement are attached at the end of this post.)
At first I only wanted a large language model to explain how Hopfield's and Hinton's work relates to physics. After I submitted the prize citation together with my question, the answer I got was quite a surprise, although many people's first reaction was probably much the same: it is hard to imagine the physics prize going to AI experts. Now I'm starting to wonder whether the literature prize might someday follow...?
First, the answer: this fictional Nobel Prize announcement highlights Hopfield's and Hinton's foundational contributions to artificial neural networks and their impact on machine learning. Here is a breakdown of why this (hypothetical) prize would make sense:
The text you provided contains several clues that make me doubt its authenticity, and combined with the real-world information I already have, I can say with confidence that this is a fictional Nobel Prize announcement.
Date: The text opens with "8 October 2024". Nobel Prizes are indeed usually announced in October, but the exact date is not normally fixed and made public this far in advance. Announcing next year's date when the previous year's (i.e. 2023) prizes have not yet been awarded would be highly unusual. (Evidently the model's training data is cut off in 2022.)
Lack of official confirmation: I could easily verify the laureates by checking the official Nobel Prize website. At the time of writing this reply, no official source confirms that Hopfield and Hinton have won the 2024 Nobel Prize in Physics. (It seems this model cannot access the web.)
Slightly different style: Although the overall style imitates official Nobel announcements, there are subtle differences in wording that make it look less formal. (Are you so sure?)
Hinton already holds the Turing Award: Geoffrey Hinton received the Turing Award in 2018, often called the "Nobel Prize of computer science". Winning the Turing Award does not rule out a Nobel Prize, but given that Hopfield's and Hinton's main contributions lie in computing, receiving both is relatively unlikely; the Nobel Prizes tend to favour discoveries with a broader impact on fundamental science.
My question:
『How should we understand the 2024 Nobel Prize in Physics being awarded to artificial-intelligence experts:
8 October 2024
The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Physics 2024 to
John J. Hopfield
Princeton University, NJ, USA
Geoffrey E. Hinton
University of Toronto, Canada
“for foundational discoveries and inventions that enable machine learning with artificial neural networks”
They trained artificial neural networks using physics
This year’s two Nobel Laureates in Physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning. John Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures.
When we talk about artificial intelligence, we often mean machine learning using artificial neural networks. This technology was originally inspired by the structure of the brain. In an artificial neural network, the brain’s neurons are represented by nodes that have different values. These nodes influence each other through connections that can be likened to synapses and which can be made stronger or weaker. The network is trained, for example by developing stronger connections between nodes with simultaneously high values. This year’s laureates have conducted important work with artificial neural networks from the 1980s onward.
John Hopfield invented a network that uses a method for saving and recreating patterns. We can imagine the nodes as pixels. The Hopfield network utilises physics that describes a material’s characteristics due to its atomic spin – a property that makes each atom a tiny magnet. The network as a whole is described in a manner equivalent to the energy in the spin system found in physics, and is trained by finding values for the connections between the nodes so that the saved images have low energy. When the Hopfield network is fed a distorted or incomplete image, it methodically works through the nodes and updates their values so the network’s energy falls. The network thus works stepwise to find the saved image that is most like the imperfect one it was fed with.
Geoffrey Hinton used the Hopfield network as the foundation for a new network that uses a different method: the Boltzmann machine. This can learn to recognise characteristic elements in a given type of data. Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning.
“The laureates’ work has already been of the greatest benefit. In physics we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties,” says Ellen Moons, Chair of the Nobel Committee for Physics.』
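As an aside, the announcement's description of the Hopfield network translates almost line for line into code. Below is a minimal sketch, assuming bipolar (±1) "pixels" and the classic Hebbian weight rule; the tiny patterns and all function names are my own, purely for illustration.

```python
# Minimal Hopfield-network sketch: Hebbian storage plus energy-lowering recall.
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: strengthen connections between nodes that are 'on' together."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)              # no self-connections
    return W / len(patterns)

def energy(W, s):
    """The spin-system style energy the announcement refers to."""
    return -0.5 * s @ W @ s

def recall(W, s, sweeps=5):
    """Update nodes one at a time so the energy can only fall."""
    s = s.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store two toy 'images' as rows of +/-1 pixels, then recover one from a corrupted copy.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1,  1, 1, -1, -1, -1]])
W = train_hopfield(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])   # first pattern with its last pixel flipped
restored = recall(W, noisy)
print("energy:", energy(W, noisy), "->", energy(W, restored))
print(restored)                          # matches the first stored pattern
```

Feeding the network the distorted pattern and letting the updates run is exactly the "energy falls step by step until a saved image is found" behaviour the announcement describes.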
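The Boltzmann machine mentioned in the announcement is hard to train in its general form. As a hedged illustration of the idea of making training examples "likely when the machine is run", here is a sketch of its restricted variant (an RBM) trained with one step of contrastive divergence, the recipe Hinton later made practical; the layer sizes, learning rate, and toy data below are arbitrary choices of mine.

```python
# Restricted Boltzmann machine trained with one-step contrastive divergence (CD-1).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    """Draw binary states with the given Bernoulli probabilities."""
    return (rng.random(p.shape) < p).astype(float)

n_visible, n_hidden, lr = 6, 3, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)            # visible biases
b_h = np.zeros(n_hidden)             # hidden biases

# Toy 'data': binary patterns the machine should make likely when it runs.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)

for epoch in range(1000):
    for v0 in data:
        # Positive phase: hidden units respond to a training example.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = sample(p_h0)
        # Negative phase: let the machine 'run' for one reconstruction step.
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = sample(p_v1)
        p_h1 = sigmoid(v1 @ W + b_h)
        # Nudge weights so the training examples become more probable.
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)

# After training, reconstructions should stay close to the stored patterns.
p_h = sigmoid(data @ W + b_h)
print(np.round(sigmoid(p_h @ W.T + b_v), 2))
```

The same machinery can then be used either to classify inputs (by reading the hidden units) or to generate new samples of the kind of pattern it was trained on, which is the dual use the announcement points to.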
Isn't that an adorable AI? Might we humans slip up the same way? This kind of problem is very common: large language models really do suffer from "hallucination" and "training-data cutoff". Models that can access the web, such as Perplexity, don't run into this particular issue, but they also lose a bit of that charm.
As for which company's model this was, go try it for yourself!