GSB Podcast | If/Then Episode 9: How Will Relationships with Robots Change Us?



Whether robots can feel emotions is a question best left, at least for now, to philosophers. But it is becoming increasingly clear, says Szu-chi Huang, associate professor of marketing at Stanford Graduate School of Business, that robots do have the power to make humans feel.
In this episode, Huang explores how robots can influence not only people's emotions but also their behavior.
Huang's research shows that when people see others helping someone, they are often inspired to do the same, a phenomenon known as "prosocial" behavior. But she wondered: when a robot lends a helping hand, are people similarly inspired to follow the machine's example?
To find out, she designed a study in which participants watched various news reports about natural disasters and the responses to them. In some stories, humans were the "heroes" of the disaster response; in others, the heroes were robots.
"In both cases, we explained in detail what these heroes were doing," Huang says. Whether it was pulling survivors out of the rubble after an earthquake or disinfecting hospitals at the height of the COVID-19 pandemic, "the actions were exactly the same, but the heroes were different."
After participants were exposed to these stories, Huang measured their willingness to engage in prosocial behavior, such as donating to support children in need. She found that those who saw robot heroes were significantly less likely to donate than those who saw human heroes performing the same actions. "The robot stories actually left people feeling less inspired," Huang says. "That finding has important implications. If [using robots] lowers our willingness to help others, it could have a considerable negative social impact."
So what should we do as artificial intelligence and robots play an ever larger role in our lives? How can we embrace their benefits without diminishing our own humanity and prosociality? In this episode, Huang argues that if we want robots to be good for society, we need to make them more human.

Below is an edited transcript of this episode:

Kevin Cool: If we want robots to be good for society, then we need to humanize them.
Ken Salisbury: Yeah. I’m Ken Salisbury. I’m a faculty member in computer science at Stanford University. My main focus has been the design of robotic devices, or haptic devices for controlling robots. This lab has evolved over the years. Rapid prototyping has completely changed the way we do robotics. We have a lot of 3D printers. We have a laser cutter which cuts wooden patterns out. I love it because it smells good. It sort of smells like a campfire.
Early in my grad student career, I was at a NASA lab and one of the robots, which was a remotely controlled robot, broke a sensor wire and the arm and the shoulder started spinning around and around, around and around. It was almost like a cartoon, but it finally broke the mechanical parts. And in the end, the arm was just sort of dangling by the wires. It had a very strong effect on me. I was worried about the damage and I was worried about fire and all the things you might worry about, but there’s another part of me that kind of worries about how did it feel about that? Is it reflecting on that?
There was a DARPA competition some years ago that simulated robots going into the Fukushima nuclear plant; had they been able to turn a certain valve or close a certain switch, they could have reduced the damage that was done. So there were dozens of mostly biped robots that went into the competition, and they had to climb the stairs and open the door and turn the valve, and very few robots succeeded. And there were many, many instances where you’d see a robot standing on the steps, teetering back and forth, and everyone was going, can you make it? And then many of them would just fall and crash on the ground.
And you feel really bad, not just for the robot, however you do that, but also for the team. I still know the robot is code and motors with human design in it, but as it starts to have its own thoughts, I think there’s potential for anxiety about it.
Is it feeling okay, because it’s behaving this way? As we start to attribute intelligence to the emerging AIs, we may start worrying about them as well. I guess I kind of go back to WALL-E, where suspension of disbelief was allowed. You could feel inspired by WALL-E finding a flower and taking care of it, or by his relationship with the female robot. You could really get emotionally engaged in that. It’s not hard to distance yourself from that, because you know it’s a script and it’s an animator, but as robots become real, I’d like to see them be gracious, whatever that means, because I think that might be infectious.
What’s the vocabulary of grace? I don’t know what that is yet.
Kevin Cool: I’m Kevin Cool, senior editor at the Stanford Graduate School of Business.
What Ken said about the emotional connection we develop with robots made me think about the rush of innovation we’re wrestling with today in artificial intelligence, robotics, and automation. It’s one thing to think about how smart these machines can be, but what about their feelings? Do they even have feelings? Is it possible for a robot to feel pain? We may never know the answers to those questions, but our experience interacting with robots suggests that they do have the power to make us feel. As robots continue to show up in so many parts of our lives, what role will emotions play in how we perceive them, and what are the implications of that?
This is If/Then, a podcast from the Stanford Graduate School of Business where we explore some of the most interesting complexities of modern life and what we need to do to get where we want to be. Today we speak with Szu-chi Huang. She is an associate professor of marketing at Stanford Graduate School of Business.
Szu-chi Huang: I have no problem setting goals. I have many goals in life, but I find myself procrastinating, or coming up with a plan that’s not feasible and ending up giving up. So that makes me want to understand this science more: what is motivation, what gets people to start doing something, and what gets them to finish?
Kevin Cool: We’ll examine her pioneering work on the impact of robots on human behavior and their effects on our motivation to help others. Dr. Huang’s research challenges many assumptions about the influence robots have on us. Will they help us be better people or chip away at our basic humanity? Each episode looks at a topic through the lens of an if/then statement. Professor Huang’s is: if we want robots to be good for society, then we need to humanize them.
Now you’ve said yourself that you’re very much drawn to real-world situations and you like doing field experiments, and your robot study seems to me particularly novel. But first of all, why field experiments, and how did you land on robots?
Szu-chi Huang: I think field experiments are important, especially for motivation research, because we know the biggest dilemma people have is that they say they want to do something and then they don’t do it. So if I just capture intentions online in a survey, everybody’s going to look great on paper. Everybody says they’re going to study and work out and eat healthy. But if I capture real behavior, I start to see gaps in their behavior. They may say one thing and do another. And that’s why field experiments are very important for me: to capture those gaps, study them, and find solutions to help people overcome them. As for the robot study, we got the idea several years ago, when we started to observe that technology is being developed everywhere, and one important domain is disaster response, because that is where a lot of money is being invested. Government agencies, organizations, and companies all want robots that can help put out fires, clean hospitals, and deal with earthquakes, so governments are putting a lot of investment into these technologies. We thought it would be important to understand the social impact of these disaster response robots. That’s where we started looking into this and finding something interesting. Then COVID happened, which gave us even more opportunities to study disaster response robots used specifically during COVID and see how that affects our behavior.
Kevin Cool: And you mentioned that you were studying prosocial behavior and how that was affected. What do you mean by that?
Szu-chi Huang: So prosocial motivation, the way we think about it, is the motivation to do something good for others. Motivation could be about doing something good for myself, such as working out for my health, but prosocial motivation means I’m investing resources, which could be time, money, or my abilities, to help somebody else. That’s what we call prosocial motivation. It is a type of motivation, and it is, again, a dilemma for many people. We all say we want to help others, and oftentimes we actually don’t do it. That’s why we want to find the factors that make us more prosocial or less prosocial.
Kevin Cool: And describe, if you will, how you designed this experiment: what you were asking the study participants to do, and what the context was.
Szu-chi Huang: So in a lot of our experiments, we let people watch different news reports or read different news stories that we drafted. In these news stories or news reports, we talk about a disaster that just happened somewhere. It could be the pandemic; it could be an earthquake that happened in Africa. Then we guide people through what happened in the disaster, how many people were injured, what the damage was. Then we introduce our heroes. The heroes were either humans or robots, and in both cases we explain in detail what those heroes were doing. You may imagine that you see news stories like this everywhere, every day. So in a human hero story, we’ll talk about how humans were dragging survivors out of the ruins, and in contrast, in the robot hero stories, we talk about how different kinds of robots were dragging survivors out of the ruins. So the actions are exactly the same, but the heroes are different.
Kevin Cool: And how were you then measuring the responses or the behavior of the study participants?
Szu-chi Huang: So after people watch the news stories, sometimes we just capture how they feel at the moment. Sometimes we capture intentions: how likely are you to help somebody right now? And in some studies we actually introduce a time lapse. We let people do other things so they kind of forget about the news stories, and then as they leave the space, leave the lab, we introduce a prosocial campaign. For instance, we are running a donation drive to help children in the local community, and we invite them to donate. This is where we actually capture people’s donation behavior and see if there’s a difference depending on the news story they watched earlier.
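To make the between-subjects design concrete, here is a minimal sketch in Python of how this kind of two-condition donation comparison could be analyzed. Everything in it, the sample sizes, the group means, and the use of a simple t-test, is a hypothetical illustration, not the study's actual data or analysis code.

```python
# Hypothetical illustration of the two-condition comparison described above;
# all numbers are simulated, not the study's actual data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated donation amounts (in dollars), one value per participant.
# Assumption for illustration only: robot-hero stories yield lower donations.
human_hero = rng.normal(loc=5.0, scale=2.0, size=100).clip(min=0)
robot_hero = rng.normal(loc=3.5, scale=2.0, size=100).clip(min=0)

# Independent-samples t-test: did the hero's identity shift donation behavior?
t_stat, p_value = stats.ttest_ind(human_hero, robot_hero)

print(f"human-hero mean donation: {human_hero.mean():.2f}")
print(f"robot-hero mean donation: {robot_hero.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```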
Kevin Cool: And it turned out that watching the robots do their work was not particularly inspiring to people.
Szu-chi Huang: Yes, it turns out hero robots did not work that well. Watching the robot stories actually made people feel less inspired and less encouraged, and that has important consequences, such as making them donate much less than the people who watched the human heroes.
Kevin Cool: So there was kind of like a backlash effect almost.
Szu-chi Huang: Yes, definitely.
Kevin Cool: They were less inclined.
Szu-chi Huang: And that’s why we believe this is an important effect to document, because all these YouTube videos about hero robots are everywhere. We are watching them every day. And if they lower our prosocial motivation and our intention to help others, it could have a pretty big negative social impact.
Kevin Cool: So what’s the answer? If we know we’re going to be interacting with robots and considering them partners or helpers or even heroes, then how can we change what seems to be an inclination to not be inspired by those robots?
Szu-chi Huang: Yes. We spent pretty much the second half of our project looking into this, because we want to find solutions. We cannot tell the media not to run any robot stories; those will go out there. But there are other solutions to help people think about robots differently. One coherent theme we have found so far is that it is really helpful to humanize those robots. The reason robots are not encouraging is that we believe they are not vulnerable. They’re not actually taking on risk when they run into ruins to help people or when they put out fires, whereas we know human heroes were vulnerable and were taking on risk, and those things are what make a person’s behavior inspiring. So when we highlight how robots can be vulnerable as well, such as their chips burning out during a disaster response or a rescue, so that they literally can die, basically cannot be used again, then knowing that the robots did contribute to a disaster response turned out to be more inspiring than when we don’t introduce that information. So I would say thinking about ways to humanize robots is a very effective solution.
Kevin Cool: You used the word vulnerable. What do you mean by that?
Szu-chi Huang: So in some examples we tested, we highlight how, for instance, the materials we use to build a robot can only be used one time. That means the robot as an entity has only one shot at doing this, and they’re doing this to save somebody. Afterward, in some way, they are sacrificing themselves, because their materials, or their chips, which are their thought center, will not be able to be used again. When we highlight that, we in some way introduce the concept of mortality into the robot’s existence. And that definitely makes people feel that these behaviors are more noble and inspiring than those of a robot that will never take on any risk, for which rescuing somebody from an earthquake is just a daily job.
Kevin Cool: So in your study, were these robots just sort of machines that were anonymous and nameless and so on?
Szu-chi Huang: Yes, actually we tried different looks of robots. Some look more like humans, with a little bit of legs and hands going on. Some literally look like an airplane or a box. So you could say some are more machine-looking and some might be more human-looking, but we never named them, and we never described their behavior as human behavior. We never described their vulnerability or their ability to make decisions, which is something unique to humans as well. What we found is that when we build in those traits, when we highlight how vulnerable they are or that they have autonomy, that they are making a decision to do this right now, that makes their actions more inspiring.
Kevin Cool: So if it had a name. We know from Hollywood movies, for example, there are lots of robots. The Terminator is fearsome and terrible; WALL-E, the trash-collecting robot, is lovable and lonely, and we identify with WALL-E. But they have names, and they almost have personalities. So is that part of what you’re getting at when you say humanize the robots: you want to give them characteristics and attributes that make them human-like?
Szu-chi Huang: Exactly. I think these things are important, and naming them could be a great place to start. Like you say, different names, different looks, and different personalities signal that they are indeed individuals, that they have will, and that they could get hurt, and all these things make them more like us, make them more human.
Kevin Cool: So Arnold Schwarzenegger in the first Terminator is terrible. He is just this relentless killing machine. But then in the sequel, he’s there to save the little boy who’s the future of humanity. And there is a feeling of sadness when the Terminator in that movie is taking the bullets, if you will, and then sacrifices himself at the end. So Hollywood seems to understand at some level that we’re going to identify with robots in scary, difficult, dangerous situations. Is it possible that we could borrow from some of the techniques they use, some of the storytelling they’re doing?
Szu-chi Huang: Definitely, and I think you provided an excellent example. What they did in the second movie is they first built a relationship between this robot and a human. By building this relationship, we start to see the robot more as a human. And then when he actually makes the sacrifice, we feel inspired and touched by it. If he were still just a cold machine, we wouldn’t feel anything. A lot of machines get blown up in Hollywood movies; we never shed a tear for them. But because now he has a relationship with a human, we start to see him as a human being as well, and his sacrifice is noble.
Kevin Cool: This is If/Then, a podcast from Stanford Graduate School of Business where we explore some of the most interesting complexities of modern life and what we need to do to get where we want to be. I’m Kevin Cool, and I’m speaking with Szu-chi Huang, associate professor of marketing at the GSB. We’ll be right back with Professor Huang after a quick break. Coming up, we’ll hear how companies are responding to her robot research, and we’ll also find out what most people get wrong about setting goals. Stay with us.
This is If/Then, from Stanford Graduate School of Business, I’m Kevin Cool speaking with Szu-chi Huang, associate professor of marketing at the GSB and an authority on motivation. We’re discussing her If/Then statement: If we want robots to inspire us, then we need to humanize them.
So let’s talk about this notion of agency or autonomy. One of the fears we have about artificial intelligence generally is that it’s going to be too smart, that it’s going to take our jobs, that it’s going to supplant work we’re doing now, that it will destroy humanity in the worst-case scenario. And yet the study, and what you’re describing now, suggests that maybe we want our robots to have some sense of agency. Is that right?
Szu-chi Huang: Yes, and I will say one way to think about this is to tap into what relationship robots, machines, and humans should have. In one of the solutions we tested, we actually created a robot-human hybrid team, because that’s the reality. Robots rarely work alone, and humans these days don’t work alone either; we work with technology. So when we highlight that it is a team of robots and humans, how we talk about that relationship becomes critical. We can say that the relationship is very hierarchical, so humans control the robots, or we can say the relationship is very equal: robots and humans are equal partners. We jointly do computations, make decisions together, and conquer this obstacle together. And we felt that the equal relationship, really thinking about them as partners who jointly make decisions with us, is the one that makes them more human, and that is the one that actually turned out to be the most effective at helping people feel inspired by their actions.
Kevin Cool: To some degree, and you use this phrase in your analysis, in your study, it’s the illusion of autonomy, right? I mean, the robots aren’t literally autonomous and making decisions on their own, but it’s about reframing that whole situation and how we think about them.
Szu-chi Huang: Yes, although some people will say that robots indeed have a soul and a will, the reality is, of course, that a lot of the machines and robots being launched and used right now do not really have true autonomy the way we think humans do. But it is about framing, highlighting their ability to compute and to make decisions just like humans: we do our cost-benefit analysis and we make decisions. By highlighting those similarities, we can make them more humanized even though they’re not fully functioning like a human.
Kevin Cool: One of my favorite robots as a kid was Rosie, the maid from The Jetsons; she was like a member of the family and everything. Well, if I have Rosie in my house cleaning off the dishes after dinner, I’m probably not going to get up and help Rosie. But there’s sort of a social cost if a human is doing that and I’m not helping, right? I mean, I’m going to feel like a jerk. Is there a point at which we will become conditioned to the work that robots do in a way that will change how that motivation works?
Szu-chi Huang: Very interesting question. I think I am an optimist when it comes to this. Instead of thinking about the possibility that we outsource all the helping to robots and humans end up just not being prosocial at all, I would like to think of it as outsourcing some work to robots while leaving other work that is more suitable for humans to us. So if Rosie is cleaning the house and we start to see her as a member of the family and humanize her, then hopefully you still feel encouraged to do other things for the family, such as taking out the trash, rather than leaving Rosie to work alone while you don’t help. My hope is that by rethinking our relationship with robots, what they really are, and how they contribute to our lives, we can continue to be prosocial and use that energy for tasks that are specifically designed for humans or that we are better at.
Kevin Cool: You used rescue or hero robots in particular because they were doing something dangerous. Why don’t we just assume that robots are going to take on the dangerous stuff so we don’t have to worry about it? Why is it important that they inspire us?
Szu-chi Huang: I think there are tasks that robots are better at, such as running into danger. By adopting robots, we can reduce the cost to humans, and those are things we should outsource to robots. My concern here is that knowing robots are doing that work reduces our humanity. There are many things humans actually have to organize and do and help with, and those are the things we don’t want to see reduced. Therefore we want to help people rethink their relationship with robots and what mindset they should have when they watch these interesting, amazing news stories about robot heroes, so that they can still be active contributors in their communities, because there are many things only they can do.
So definitely, I think this news and these clips are very exciting. They prompt a lot of sharing, so they’re going to have a very broad impact, but what people are going to think after they watch them is the critical part. If millions of people watch and their prosociality drops by even 20%, that’s a huge loss for our society. In addition, from the videos, we see a lot of opportunities to make things more human. Like you pointed out, can we give them names? Can we use different shapes so that they will look more like humans, or at least animals? There are many opportunities in the design of the robots and in how we promote and talk about them.
Kevin Cool: And is that work happening now or is there a recognition that that should be part of the design and the development of robots?
Szu-chi Huang: Every time I share this research, which is very new (it just came out this year), companies resonate with the finding. A lot of times they have to fight internally, because there might be two voices: one is to make the robot more human, the other is to make it more perfect. And a lot of times perfection suggests that it should look less like a human and more like the future. So hopefully our research findings help one side win this argument, and they can start making all these disaster response robots more human.
Some marketers think that making robots more human makes them more relatable, and some marketing teams may think that highlighting their capability is better, because we always want to create more competitive and more excellent products; instead of making them sound like us, we should make them sound like something that never existed. Both could be appealing positions, but our data show that one of them helps the rest of society be more prosocial, and that’s the one I will vote for.
Kevin Cool: Let’s pivot a little bit to talk about other kinds of motivation and goal setting. Obviously the work you’re doing with robots has a particular context, but a lot of your work about motivation has to do with how we set a goal and how we maintain it until we’ve achieved it. What do people get wrong when thinking about what motivates them or other people?
Szu-chi Huang: I think one coherent theme I’ve repeatedly found in my research is that people think setting a goal and working on a goal is a static process: if I have a goal, I have my workout routine, I just keep doing that for three months and I’m going to achieve my goal. But in my research, I found that there is nothing static about goal pursuit. The whole process is dynamic. What that means is that you today and you three months later are very different people, and that person needs a very different set of tools. Thinking about goal pursuit as a dynamic process allows us to reset and update our goals, change how we approach them, and change who we want to pull in or distance ourselves from so we can succeed at our goals. And that’s the most important message I want everybody to take away about their goals: keep it dynamic, for yourself and for your stakeholders.
Kevin Cool: Do other people motivate us most of the time? In other words, and this goes back a little to the thinking behind how you designed the robot experiment, when we see someone doing something that we want to emulate, that can be motivating. But there are also situations where seeing that may have the opposite effect. Is that right?
Szu-chi Huang: That is true. In our work, we found that other people in our lives and in our networks can be supporters or collaborators, or they can become almost like competitors. In a set of Weight Watchers data, we found that members actually started off feeling friendly toward their fellow members. They feel that other people in the program are their supporters and friends who support them through this journey. But as they lose more weight and become more successful, they start to feel more distant from other members, because those members start to become a benchmark to compare against.
The interesting thing is that in both cases, both types of relationships can be motivating. It’s really about how we think about it. The supporters can be motivating because they help us feel that the goal is indeed feasible. If many people are doing it, it must be doable; I can do it too. The competition can be motivating as well because they give us something to achieve and give us some pressure not to fall behind. But it’s about managing that relationship and recognizing that relationship does shift. As you become more successful and accomplish more of your goal, it’s helpful for people to actually use the network the way they see fit.
Kevin Cool: Admittedly, it’s an odd thought that we need to humanize robots to keep ourselves more human. Understanding how robots influence our behavior will be increasingly important as they play a larger and larger role in our daily lives.
So perhaps the key here isn’t only that humanizing robots will help us preserve our best selves, but that we need to be comfortable having an emotional connection to a robot. We may feel sadness when a robot on a rescue mission is damaged, but should we also feel gratitude? Or maybe we should feel gratitude toward its creator? In every robot we see, we find the work of thousands of people advancing this technology so that it can be helpful to us. In that broken, spinning arm we heard about earlier, we see not only the failure of the robot but that of the many humans who designed, programmed, and built it. By humanizing robots, perhaps we not only learn to accept them but also come to appreciate the work of our fellow humans who make them possible.
Kevin Cool: If/Then is produced by Jesse Baker and Eric Nuzum of Magnificent Noise for Stanford Graduate School of Business. Our show is produced by Jim Colgan and Julia Natt. From Stanford Graduate School of Business: Jenny Luna, Sorel Denholtz, and Elizabeth Wyleczuk-Stern. If you enjoyed this conversation, we’d appreciate you sharing it with others who might be interested, and we hope you’ll try some of the other episodes in this series. For more on our professors and their research, or to discover more podcasts coming out of Stanford GSB, visit our website or our YouTube channel. I’m Kevin Cool.

