In January of 1995, Russia detected a nuclear missile headed its way. The alert went all the way to the president, who was deciding whether to strike back when another system contradicted the initial warning. What they thought was the first missile in a massive attack was actually a research rocket studying the Northern Lights. This incident happened after the end of the Cold War, but was nevertheless one of the closest calls we’ve had to igniting a global nuclear war.
With the invention of the atomic bomb, humanity gained the power to destroy itself for the first time in our history. Since then, our existential risk— risk of either extinction or the unrecoverable collapse of human civilization— has steadily increased. It’s well within our power to reduce this risk, but in order to do so, we have to understand which of our activities pose existential threats now, and which might in the future.
So far, our species has survived 2,000 centuries, each with some extinction risk from natural causes— asteroid impacts, supervolcanoes, and the like. Assessing existential risk is an inherently uncertain business because usually when we try to figure out how likely something is, we check how often it's happened before. But the complete destruction of humanity has never happened before. While there’s no perfect method to determine our risk from natural threats, experts estimate it’s about 1 in 10,000 per century.
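The estimate above can be sanity-checked with some back-of-the-envelope arithmetic. This is a minimal sketch, assuming the natural risk is constant and independent from century to century (a simplification the talk does not make explicitly); the figures are the ones quoted above, not new data.

```python
# Rough sketch: what a 1-in-10,000 per-century natural risk implies,
# given that our species has survived roughly 2,000 centuries.

natural_risk_per_century = 1 / 10_000   # expert estimate for natural threats
centuries_survived = 2_000              # approximate age of our species

# Probability of surviving every one of those centuries, assuming the
# risk is constant and independent century to century:
p_survive_all = (1 - natural_risk_per_century) ** centuries_survived
print(f"Chance of surviving 2,000 centuries at that risk: {p_survive_all:.1%}")

# A frequency-based estimate from the track record alone (Laplace's rule
# of succession: 0 extinctions in 2,000 trials) lands in the same
# order of magnitude, which is why the expert figure is plausible:
laplace_estimate = 1 / (centuries_survived + 2)
print(f"Laplace-rule estimate per century: {laplace_estimate:.6f}")
```

Note that surviving 2,000 centuries is quite likely (over 80%) even if the risk were somewhat higher, which is exactly why the track record alone cannot pin the risk down precisely.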
Nuclear weapons were our first addition to that baseline. While there are many risks associated with nuclear weapons, the existential risk comes from the possibility of a global nuclear war that leads to a nuclear winter, where soot from burning cities blocks out the sun for years, causing the crops that humanity depends on to fail. We haven't had a nuclear war yet, but our track record is too short to tell if they’re inherently unlikely or we’ve simply been lucky. We also can’t say for sure whether a global nuclear war would cause a nuclear winter so severe it would pose an existential threat to humanity.
The next major addition to our existential risk was climate change. Like nuclear war, climate change could result in a lot of terrible scenarios that we should be working hard to avoid, but that would stop short of causing extinction or unrecoverable collapse. We expect a few degrees Celsius of warming, but can’t yet completely rule out 6 or even 10 degrees, which would cause a calamity of possibly unprecedented proportions. Even in this worst-case scenario, it’s not clear whether warming would pose a direct existential risk, but the disruption it would cause would likely make us more vulnerable to other existential risks.
The greatest risks may come from technologies that are still emerging. Take engineered pandemics. The biggest catastrophes in human history have been from pandemics. And biotechnology is enabling us to modify and create germs that could be much more deadly than naturally occurring ones. Such germs could cause pandemics through biowarfare and research accidents. Decreased costs of genome sequencing and modification, along with increased availability of potentially dangerous information like the published genomes of deadly viruses, also increase the number of people and groups who could potentially create such pathogens.
Another concern is unaligned AI. Most AI researchers think this will be the century where we develop artificial intelligence that surpasses human abilities across the board. If we cede this advantage, we place our future in the hands of the systems we create. Even if created solely with humanity’s best interests in mind, superintelligent AI could pose an existential risk if it isn’t perfectly aligned with human values— a task scientists are finding extremely difficult.
Based on what we know at this point, some experts estimate the anthropogenic existential risk is more than 100 times higher than the background rate of natural risk. But these odds depend heavily on human choices: because most of the risk is from human action, it’s within human control. If we treat safeguarding humanity's future as the defining issue of our time, we can reduce this risk. Whether humanity fulfils its potential— or not— is in our hands.
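To make the "more than 100 times" claim concrete, here is the implied arithmetic, using the 1-in-10,000 natural baseline quoted earlier (the multiplier is the floor stated above, not a precise figure):

```python
# Implied per-century anthropogenic risk, from the figures quoted above.
natural_risk_per_century = 1 / 10_000   # background rate from natural threats
multiplier = 100                        # "more than 100 times" the baseline

anthropogenic_risk = natural_risk_per_century * multiplier
print(f"Implied anthropogenic risk per century: at least {anthropogenic_risk:.0%}")
```

In other words, the estimates imply at least a 1% chance per century of extinction or unrecoverable collapse from human activity alone.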
Remark: All rights belong to TED. For more TED-related information, visit the official website at www.ted.com.