你撰写了联合国教科文组织的生成式人工智能时代技术助长性别暴力问题研究报告,这份报告已于2023年11月发布,人工智能技术在哪些方面增加了可能导致妇女和女童遭受暴力侵害的潜在途径?
利用生成式人工智能,可以令伪造的媒体看起来更加真实可信,而女性在面临这种威胁时尤其脆弱无助。针对女性名人的暴力威胁屡见不鲜。想象一下,这种威胁现在还附带上了由生成式人工智能合成的非常逼真的照片——你自己的照片、子女的照片、亲朋好友的照片。以现在的技术,就算完全不懂编码,也可以轻而易举地做到这一点。
网络性别暴力通常始于网络骚扰。近期研究发现,26%的年轻女性曾经受到网络骚扰,而在同年龄段的男性中,有过类似遭遇的人数比例为7%。总的说来,随着生成式人工智能的发展,将出现更多生成内容,其中不乏暴力内容、误导性内容和“垃圾”内容。这些信息铺天盖地,必将堵塞信息渠道、分散人们的注意力,同时也将增加针对女性的暴力行为。
非故意伤害会进一步加剧性别暴力,因为社会中的种种隐形歧视做法和性别歧视经过计算机训练过程,进入了人工智能模型。例如,尽人皆知人工智能模型假定女性的职业是护士或教师,而不是医生或科学家,并且在未经同意或是在无意的情况下自动将女性形象性感化。
© 阿纳斯塔西娅·佩克(Anastasia Pek)
“用伪造的照片、文章、音频或视频可以制作出可信度极高的虚假叙事”
创建交互式深度伪造的能力同样令人不安。我们可以训练聊天机器人模仿任何一个人说话,让人误以为某人说了他/她其实并没说的话。用同样的程序,可以创建一个冒牌的社交媒体账户,在网上假装成某位女性名人(或任何女性),故意通过言行和发帖文来破坏她们的形象。借助技术手段实施性别暴力的犯罪者可以利用这种技术在网上冒充受害女性的身份,破坏她们的职业关系或私人关系,甚至假扮熟人跟踪曾经遭受性别暴力的受害者。
“有了新工具,仅需15到20分钟,全方位的网络骚扰就可以准备就绪”
除此之外,还有恶意软件。怀有恶意者可以生成恶意软件来窃取个人资料,泄露(发布)受害者的私人信息。有了新工具,生成恶意代码的自动骚扰就更容易上手了;仅需15到20分钟,全方位的网络骚扰就可以准备就绪。你只需要告诉生成式人工智能你想写的内容,人工智能自然会为你生成,然后你就可以要求人工智能为你编写代码,每隔十分钟在某人的社交媒体账户上发布一次内容。
在这些事态发展中,你认为哪一点最令人担忧?
所有这些都令人担心,但我最忧虑的,还是这些技术发展最终将产生的影响。我担心,我们将进入后真相世界,网络上一切可见者均不可信。这样一来,我们的世界将从依托于互联网、提倡交流和对话的全球化社会,倒退到人人心怀疑虑的社会。假如我们无法信任网络内容,许多精彩的社会进步将与我们擦肩而过。
我们还应注意到,大多数网络伤害的对象起初都是最弱势群体。在少数种族、族裔、性别表达、种姓或社会经济地位处于劣势的女童和妇女等最脆弱群体,往往最先受到网络攻击,这应成为世界各国普遍关注的一个迹象,因为其他人接下来也将遭遇同样的攻击。
你在研究报告的结论部分指出,受害者有责任保护自己,那么该如何加强在线保护呢?
现有的大多数在线保护工具在不经意间会造成“寒蝉效应”,换言之,这些工具的目的是让公众避免对话以保护自己。这是不公平的,而且对于作为决策者和记者等的女性来说,她们的工作离不开社交媒体领域,她们也做不到缄口不言。这基本上就是在告诉女性,在这个世界上要想受到保护,就必须置身事外。
此外,部分内容分发者实际上已经关闭了创建独立第三方工具的功能,来让人们免受网络骚扰,例如初创公司构建的基于社区的工具。
我曾经有幸进入一些最有影响力的会议室,我每次都会说同样的话:你们会为开发在线保护工具投入多少资金?对于在线保护工具和人工智能的开发投入应该做到旗鼓相当。假如我们无法保证这些技术的安全使用,它们就绝不会对人人憧憬的未来世界产生积极影响。
你说过,我们在使用算法时必须考虑用户多样性,这一点为什么很重要?
公司希望生产出适合所有人使用的产品。不过,看一看人工智能领域从业者的人口统计数据就不难发现,这只是世界人口中的一小部分。假如我们只代表着一种人、一种性别、一个地理区域或一种教育背景,我们就会错失知识和信息多样化。
鉴于这个问题的规模之大,涉及范围之广,必须征求公众意见。我创建的非营利机构“人性化智能”以公开提供偏见探测奖励机制和开展红队测试见长。我们向公众开放人工智能模型,并收集整理公众的反馈意见。
有些人使用的某种语言或是他们的出身背景,在人工智能数据或模型中并不常见。有些人作为建筑师或科学家等专业人士,会从各自的专业角度来评估模型。去年,我们协调多方开展了有史以来规模最大的一次生成式人工智能红队测试,有2200多人参与了评估。
要让更多女性、少数族裔和代表性不足的其他群体进入人工智能行业,就必须了解多样化的缺失在整条产业链中是如何产生的。教师告诫女生,编程是男生做的事;招聘方式重男轻女;创业者往往会雇用自己的朋友,假如其人脉圈子都是与他相似的清一色的男性,多样化必然无从谈起。制订计划来鼓励更多女性创业者创办公司,或是广泛宣传女孩也可以编程的事实,都有助于破除当前的一些固有成见。
你与大型科技公司合作,争取促成各方以负责任的方式使用生成式人工智能等新兴技术,为什么说确保产品合乎伦理和具有包容性是符合行业利益的?
生成式人工智能公司希望其他公司能够使用自己的产品,自然要让这些产品安全可靠。假如人工智能有可能输出种族歧视或性别歧视言论,造成永久性伤害或是暴力,公司就不会生产这些产品。因此,各方都有理由通过合作来解决这些问题。
“要防止生成式人工智能带来的骚扰、错误信息和虚假信息,人人都应发挥作用”
这些都是全球性的重大问题,仅凭一家公司来解决这些问题,纵然能够做到,也会很艰难。要防止生成式人工智能带来的骚扰、错误信息和虚假信息,人人都应发挥作用,这包括内容制作方、平台公司、社交媒体公司、决策者、政府、民间社会和非营利机构,以及可能使用这些平台的普通民众。联合国教科文组织等机构在协助制定标准方面可以起到重要作用,这些标准将有助于公司的程序设计更加尊重多样性。
联合国教科文组织警告:生成式人工智能会对妇女和女童带来风险
鲁曼·乔杜里(Rumman Chowdhury)
鲁曼·乔杜里是一位颇具影响的孟加拉裔美国数据科学家、非营利科技机构“人性化智能”(Humane Intelligence)的创始人,曾任Twitter机器学习伦理总监。
Rumman Chowdhury: “We could be entering a post-truth world”
Data scientist Rumman Chowdhury sounds the alarm about online harassment and gender-based violence in the age of generative artificial intelligence. Moreover, she warns against the dangers of malicious use of new technologies affecting the most vulnerable, and calls for increased focus on the diversity of users.
Interview by Anuliina Savolainen
UNESCO
You authored UNESCO’s study on technology-facilitated gender-based violence in the era of generative artificial intelligence (AI), released in November 2023. In which ways have these technologies increased the potential avenues for such violence against women and girls?
With generative AI we will have more convincing fake media, and women will be particularly vulnerable to this threat. Violent threats against prominent women are common. Now imagine that it is accompanied by very realistic photos of you, your children, your loved ones, produced with generative AI. With today's technology, this can be done very easily without any coding skills.
“It’s possible to create a very believable fake narrative with faked photos or video”
Similarly concerning is the ability to create interactive deepfakes. It will be possible to train a chatbot to talk like any human, and fool people into thinking somebody is saying something that they are not saying. This same process can be applied to create an entirely fake social media account and to pretend to be a prominent woman – or any woman – online, saying things, posting things, and doing things that make them look bad. Perpetrators of technology-facilitated gender-based violence could use this kind of technology to impersonate women’s identities online and ruin their professional or private relationships, and even track down survivors of such violence by pretending to be someone they know.
“With the new tools, a full-scale online harassment campaign can be created in 15-20 minutes”
And then there is malware. Malicious parties can generate malware to steal personal information in order to dox (publish private information about) their victims. With the new tools the bar to entering automated harassment campaigns creating malicious code is much lower; a full-scale online harassment campaign can be created in 15-20 minutes. All you have to do is tell the generative AI what you want to write, and it will generate the code for you. Then you can ask it to help you generate code to post the content on someone’s social media account every ten minutes.
Which of these developments do you find the most worrying?
I worry about all of them and what I worry about in particular is the ultimate effect they’ll have. I worry that we will enter a world that is post-truth, where nothing we see online is believable or trustworthy. In doing so the world will regress from a globalized, communicative, conversational society that exists on the Internet to one where everybody is very suspicious. We would lose out on so many amazing advances in society if we entered a world in which we cannot trust what is online.
We should also care because most online harms start with the most disadvantaged. The most vulnerable communities – such as girls and women of a minoritized race, ethnicity, gender expression, caste, or socio-economic status – are the ones on which these kinds of attacks are tested first, and it should be an indicator for the rest of the world to pay attention, because this is what is coming for everybody else.
In the report you conclude that there’s an onus placed on the victim to protect themselves. How could online protection be reinforced?
Most of the tools that exist for online protection inadvertently create a “chilling effect”, in other words, the purpose of these tools is to remove yourself from the conversation to protect yourself, which is unfair, but also literally impossible for prominent women such as policymakers or journalists whose jobs need to exist in the social media sphere. So basically you're telling women that in order to be protected in this world they have to remove themselves.
Most of these apps also place all of the responsibility on the action on the victim. Women have to decide and take action on reporting. Instead, apps should be developed to encourage the community to provide support, with zero tolerance for people who are in the act of harassing women.
Moreover, some content distributors have actually shut down the ability to create independent third party tools, such as community-based tools built by startups, to enable people to protect themselves against online harassment.
I’ve been fortunate to be in some of the most powerful rooms, and each time I say the same thing: how much of that money are you earmarking towards online protection tools? Invest as much in online protection as you spend on AI development. These technologies will never make the positive impact in the world we’re all envisioning if we do not make them safe to use.
You have said that we need to think about the diversity in the room when working with algorithms. Why is this important?
Companies are trying to make products that are for everybody, but if you look at the demographics of who is in the room, it’s a very small slice of the world. If we have only one type of person, gender, geographical region or educational background represented, we’re missing out on a diversity of knowledge and information.
The scale and scope of issues is so broad that getting input from the public is critically important. The work my nonprofit, Humane Intelligence, is known for is public bias bounties and red-teaming exercises. We open AI models to the public and curate their feedback.
In some cases, they speak a language or come from a background that is not well-represented in AI data or models. In others, they are professionals, like architects or scientists, who will evaluate the model from their own professional perspective. Last year, we coordinated the largest-ever generative AI red-teaming exercise which hosted over 2,200 individuals’ evaluations.
To get more women, minorities and other underrepresented groups into the AI industry, we need to understand how attrition happens all throughout the pipeline. Teachers who tell girls that programming is for boys; or hiring practices that favour men over women. People often hire their friends when they make startups, so if their network is only men who are just like them, then you’re not going to get diversity. Building programs to encourage, for example, more female founders to make companies, or programs that normalize the fact that girls can program, helps break down some of the existing stereotypes.
You work alongside major technology companies to enable the responsible use of emerging technologies such as generative AI. Why is it in the interest of the industry to ensure their products are ethical and inclusive?
Generative AI companies want to build products that are safe and reliable because they want other companies to use them. Products will not be built if there’s a risk that AI will say something racist or sexist, perpetuate harms, perpetuate violence, so there is an incentive to work together to adjust these problems.
“In preventing harassment, misinformation and disinformation stemming from generative AI, everyone has a role to play”
These problems are big and global in scale, so it’s very hard if not impossible for a single company to solve these problems. In preventing harassment, misinformation and disinformation stemming from generative AI, everyone has a role to play – content generators, platform companies, social media companies, policymakers, governments, civil society and non-profits, and just regular people who might be on these platforms. Organizations like UNESCO have an important role to play in helping to define standards that will help companies to design programs that are more respectful of diversity.
UNESCO warns about the risks posed by Generative AI for women and girls
Generative Artificial Intelligence (AI) has amplified existing online harassment methods and increased the potential avenues for gender-based violence online. This is the key finding of the UNESCO report “Your opinion doesn’t matter, anyway”: exposing technology-facilitated gender-based violence in an era of generative AI, published in November 2023.
The report, authored by Big Data specialists Rumman Chowdury and Dhanya Lakshmi, argues that while deep-learning models are revolutionizing the way people access information and interact with content, they present concerns for the overall protection and promotion of human rights and for the safety of women and girls. The harms may include more realistic fake media and fake narratives, and a much wider reach of hate speech and misinformation. Cyber harassment on social media can also be exacerbated with the help of AI-generated harassment templates – a growing concern in a situation where nearly 60 per cent of young women across the world report having faced online harassment on social media platforms.
A second study entitled Challenging systematic prejudices: an Investigation into Gender Bias in Large Language Models, published by UNESCO and IRCAI (International Research Center on Artificial Intelligence) in spring 2024, presents similarly worrying tendencies in large language models (natural language processing tools that underpin popular generative AI platforms) to produce gender bias, as well as homophobia and racial stereotyping.
Both publications highlight the need for action by AI developers and policymakers to combat the new threats. Suggested measures include the implementation of the Recommendation on the Ethics of Artificial Intelligence, adopted by UNESCO Member States in November 2021.
Links:
“Your opinion doesn’t matter, anyway”: exposing technology-facilitated gender-based violence in an era of generative AI
https://unesdoc.unesco.org/ark:/48223/pf0000387483
Challenging systematic prejudices: an Investigation into Gender Bias in Large Language Models
https://unesdoc.unesco.org/ark:/48223/pf0000388971
Recommendation on the Ethics of Artificial Intelligence
https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence
Rumman Chowdhury
Former director of machine learning ethics at Twitter, she is an influential Bangladeshi American data scientist and founder of the tech non-profit Humane Intelligence.
END
©️该文章及图片版权归联合国教科文《信使》杂志所有
欢迎分享到朋友圈
转载及合作请联系我们
wechat.unescocourier@gmail.com