Deepfakes and students’ deep learning: A harmonious pair in science?
SCIENCE
29 Aug 2024
Vol 385, Issue 6712
DOI: 10.1126/science.adr8354
“Deepfakes” refers to the use of digital technology, particularly artificial intelligence, to fabricate media—typically videos but also images and audio—that appear to be real. For example, a person's face can be superimposed onto another’s body, creating a realistic image that can be difficult to distinguish from an image of the real person. Deepfakes have the potential to create misleading and harmful content, posing major challenges for science and society. What are these challenges? What do they imply for science education? Do deepfakes also present opportunities that the scientific community can harness for a positive impact on science and education?
A major challenge posed by deepfakes concerns the integrity of scientific research. Science requires trust: trust in the data, methods, and findings emerging from the scientific enterprise. Such trust is already abused at times even in the absence of deepfakes. For example, it is widely reported that many science journals are filled with fake scientific papers. In 2023 alone, more than 10,000 scientific research papers were retracted owing to fraudulent content. Deepfakes further threaten trust in science by introducing the possibility that data, particularly visual data, can be manipulated in ways that are difficult to detect. For example, deepfake technology could potentially be used to fabricate images, such as microscopy data, propagating false findings.
The communication of scientific findings is another area where deepfakes could cause potential harm to science. Science communication relies on accurate and truthful information. Deepfakes possess the capacity to distort such communication by introducing false or misleading findings. For example, it is possible to create videos and images of trusted and well-known scientists uttering lies in a convincing manner. When we consider pressing issues such as public health and climate change, such content could potentially go viral, spreading misinformation with potentially fatal consequences.
One approach to addressing the challenges that deepfakes pose to science is to develop guidelines and best practices for the ethical use of new technologies. Another approach is to develop deepfake detection tools, a new area of research within data science. Researchers are already working on algorithms that can identify the subtle signs that deepfake technologies generate, such as inconsistencies in facial features in videos.
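To give a concrete sense of the kind of statistical signal such detectors look for, the toy sketch below measures how much of an image's spectral power sits at high spatial frequencies. This is purely an illustration on synthetic data, not any production detection method: the function name and thresholding idea are our own, and it rests on the assumption (reported in the detection literature) that some generative upsampling layers leave unusual high-frequency, checkerboard-like artifacts.

```python
import numpy as np

def highfreq_energy_ratio(img):
    # Fraction of total spectral power beyond half the Nyquist radius.
    # An unusually high ratio can flag an image for closer inspection.
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)   # distance from spectrum center
    cutoff = min(h, w) / 4                 # half the Nyquist radius
    return power[r > cutoff].sum() / power.sum()

rng = np.random.default_rng(0)
noise = rng.normal(size=(64, 64))
# Crude low-pass filter: averaging each pixel with its neighbours mimics
# the smooth spectrum of natural photographs.
natural_like = (noise + np.roll(noise, 1, 0) + np.roll(noise, -1, 0)
                + np.roll(noise, 1, 1) + np.roll(noise, -1, 1)) / 5
# Add an alternating-column pattern as a stand-in for the checkerboard
# artifacts that some generative upsampling layers can leave behind.
faked = natural_like + 0.5 * np.cos(np.pi * np.arange(64))

print(highfreq_energy_ratio(natural_like), highfreq_energy_ratio(faked))
```

Real detectors are far more sophisticated (typically learned classifiers over many such cues), but the underlying logic is the same: fabricated media tend to carry statistical fingerprints that authentic recordings lack.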
Although deepfakes pose considerable challenges, they may also present opportunities for scientific research and education. The technologies used to develop deepfakes can also potentially lead to other, more beneficial technological advancement and innovation, for example, when they are developed for positive outcomes such as the detection of fabricated data. Furthermore, they can be used to create realistic simulations for educational purposes, for example, in supporting the development of students’ medical skills without compromising safety in real-life situations with real patients. Deepfakes can potentially be used as educational tools to help future scientists develop their understanding of how trust is established within the scientific community and beyond and how ethical principles apply to the communication of scientific research.
However, the relevance and importance of deepfakes go beyond the training of professional scientists. In the negative sense, deepfakes are a major concern for the public at large. According to a recent national survey in the United States, 33 to 50% of a nationally representative sample of students, educators, and the adult public could not distinguish between authentic videos and deepfakes. Science education can contribute to tackling the negative effects of deepfakes through deep learning for both professionals and regular citizens. This sense of deep learning in education is different from how the term is used in machine learning. Such deep learning includes critical, analytical, and creative thinking and transcends superficial coverage of concepts and procedures. It is about engaging learners in content that enhances their higher-order thinking skills and meaningful understanding. For example, learning to evaluate misinformation is part of such skills, as are skills such as critical thinking and problem-solving. Situating deep learning in science contexts may help students embrace the ethos of science by cultivating in them an orientation toward healthy skepticism and respect for evidence.
There are science curricula that have embraced skills such as critical thinking and problem-solving for supporting students in navigating misinformation. Research partnerships have explicitly addressed the challenges of deepfakes for educators and learners alike. However, only a few educational systems have managed to address the specific challenges related to deepfakes and misinformation systematically at a national level. Finland, long admired for its lead in many indicators of well-being, has taken the issue of misinformation seriously. Multi-organizational efforts, including collaborations between the government and technology companies, have led the development of innovative resources for schools. Take, for example, a lesson where students are asked to “work with pictures, videos, text, digital content…to identify all the various kinds of misleading news: from propaganda to clickbait, satire to conspiracy theory, pseudoscience to partisan reporting; from stories describing events that simply never happened…” Deep learning approaches that create such opportunities for students to evaluate the credibility of content and sources of information may be useful in combating deepfakes. However, the question also arises as to whether there are specific features of deepfakes that need more specialized attention, such as the nature of fabricated data and the tools that are used to detect them. Such questions suggest new potential areas of research in science education where nuances about teaching and learning about deepfakes are explored.
Although deepfakes present substantial risks to the integrity of scientific research and communication, they also offer educational opportunities. The future influence of deepfakes will depend on how the science and education communities address these challenges and leverage the opportunities. Effective misinformation detection tools, robust ethical standards, and research-based educational approaches can help in ensuring that deep learning in science is enhanced, not hampered, by deepfakes.