Children are susceptible to peer pressure from robots

“If your friends told you to jump off a bridge, would you?”

It’s a warning you probably heard in childhood, a hypothetical example of the dangers of groupthink. And it likely inspired more impulsive behavior than it prevented. But in the not-too-distant future, parents may have to update this little adage:

“If a robot told you to jump off a bridge, would you?”

Because, as it turns out, quite a few probably would.

In a study published today in the journal Science Robotics, researchers from Germany and the UK demonstrated that children are susceptible to peer pressure from robots. The findings, say the researchers, show that, as robots and AIs become integrated into social spaces, we need to be careful about the influence they wield, especially on the young.

The paper’s authors ask, “For example, if robots recommend products, services, or preferences, will compliance […] be higher than with more traditional advertising methods?” They note that robots are being introduced to plenty of other domains where social influence could be important, including health care, education, and security.

The study in question is actually a reimagining of perhaps the best-known and most influential demonstration of social conformity: the Asch experiment. This series of tests, first carried out in 1951 by Polish psychologist Solomon Asch, illustrates how humans can be influenced by groupthink to the point where we will deny even the most obvious facts.


One of the cards used in the original Asch test. Participants had to say which line on the right was closest in length to the line on the left.
Image: Creative Commons

In his experiments, Asch invited 50 male college students to take part in a “vision test.” The students were seated around a table and shown a line on a chart next to a group of three other lines of varying lengths, labeled A, B, and C. They were then asked, one at a time, to say which of the three lines was closest in length to the first. The answer was obvious, but what the participants didn’t know was that all but one of the students were actors. And when the ringers were called upon to give their answer, they all gave the same incorrect response.

When it was the real test subject’s turn (they always went last), they caved to social pressure and gave the same incorrect answer as their peers in roughly one-third of cases. Over the course of the 12 such trials that Asch conducted, roughly 75 percent of participants conformed in this way at least once, while only a quarter never conformed at all.

“It’s such an elegant little experiment that we just thought: let’s do it again, but with robots,” says Tony Belpaeme, a professor of robotics at the University of Plymouth and co-author of the paper. And that’s exactly what he and his colleagues did, adding the extra twist of testing first groups of adults and then groups of children.

The results showed that, while the adults did not feel the need to follow the robots’ example, the children were much more likely to do so. “When the kids were alone in the room, they were quite good at the task, but when the robots took part and gave wrong answers, they just followed the robots,” says Belpaeme.


Images showing the robot used (A); the setup of the experiment (B and C); and the “vision test” as shown to participants (D).
Photo by Anna-Lisa Vollmer, Robin Read, Dries Trippas, and Tony Belpaeme

Although it’s the susceptibility of the children that leaps out in this experiment, the fact that the adults were not swayed by the bots is also significant. That’s because it goes against an established theory in sociology known as “computers are social actors,” or CASA. The theory, first outlined in a 1996 book, states that humans tend to interact with computers as if they were fellow humans. The results of this study show that there are limits to the theory, though Belpaeme says he and his colleagues were not surprised by that.

“The results with the adults were what we expected,” he says. “The robots we used don’t have enough presence to be influential. They’re too small, too toylike.” Adult participants quizzed after the test told the researchers as much, saying they assumed the robots were malfunctioning or weren’t advanced enough to get the question right. Belpaeme suggests that if they tried again with more impressive-looking robots (“Like if we said ‘This is Google’s latest AI’”), then the results might be different.

Although the CASA theory was not validated in this particular test, it’s still a good predictor of human behavior when it comes to robots and computers. Past studies have found that we’re more likely to enjoy interacting with bots that we perceive as having the same personality as us, just as with humans, and we readily stereotype robots based on their perceived gender (which is a topic that’s become particularly relevant in the age of the virtual assistant).

These social instincts can also affect our behavior. We find it harder to turn off robots if they’re begging us not to, for example. Another study published today in Science Robotics found we’re better at paying attention if we’re being watched by a robot we perceive as “mean.”

All this means that although it’s children who seem to give in more easily to robotic peer pressure, adults aren’t exactly immune. Researchers say this is a dynamic we need to pay attention to, especially as robots and AI get more sophisticated. Think about how the sort of personal data that got shared during the Cambridge Analytica scandal could be used to influence us when combined with social AI. “There’s no question about it,” says Belpaeme. “This technology will be used as a channel to persuade us, probably for advertising.”

This robot peer pressure could be used for good as well as evil. For example, AI systems in educational settings can teach children good learning habits, and there’s evidence that robots can help develop social skills in autistic children. In other words, although humans can be influenced by robots, it’s still humans who get to decide how.

AI spots 40,000 prominent scientists overlooked by Wikipedia

AI is often criticized for its tendency to perpetuate society’s biases, but it’s equally capable of fighting them. Machine learning is currently being used to scan scientific studies and news stories to identify prominent scientists who aren’t featured on Wikipedia. Many of these scientists are female, and their omission is particularly significant in the world’s most popular encyclopedia, where 82 percent of biographies are written about men.

The research has been carried out by an AI startup named Primer as a demonstration of the company’s expertise in natural language processing (NLP). This is a challenging but lively subfield of AI that’s all about understanding and generating digital text. Wikipedia is often used as a source to train these sorts of programs, but Primer wants to give back to the site.

In a blog post, Primer’s director of science John Bohannon explains how the company developed a tool named Quicksilver (named after tech from the books of sci-fi author Neal Stephenson “because we’re nerds”) to read some 500 million source documents, sift out the most cited figures, and then write a basic draft article about them and their work.

For example, here’s an AI-written article about Teresa Woodruff, a scientist who doesn’t have a Wikipedia entry but was named one of Time magazine’s “Most Influential Persons” in 2013. Her work includes designing 3D-printed ovaries for mice.

Teresa K Woodruff is a reproductive scientist at Northwestern University. [1] She specializes in gynaecology and obstetrics. [2] She is a member of the Women ’s Health Research Institute. [1] Woodruff is a reproductive scientist and director of the Women’s Health Research Institute at Northwestern University’s Feinberg School of Medicine in Chicago. [3] She coined the term “oncofertility” in 2006, and she’s been at the center of the movement ever since. [4] Five years later, she succeeded: on March 28, the team announced the birth of Evatar, a miniature scale female reproductive tract made of human and mouse tissues. [5] Widely recognized for her work, she holds 10 U.S. patents, and was named in 2013 to Time magazine’s “Most Influential Persons” list. [6]

It’s a basic write-up, but it’s cogent and clearly sourced, which is the perfect starting point for a Wikipedia editor to create an article about Woodruff, says Primer.
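
Primer hasn’t published Quicksilver’s code, but the broad pipeline described above (scan a large pile of source documents, rank the people mentioned most often, and flag those missing from Wikipedia) can be sketched in a few lines of Python. The toy corpus, name list, and simple mention-count scoring below are illustrative assumptions, not Primer’s actual system, which works with NLP models over hundreds of millions of documents.

```python
from collections import Counter

# A minimal sketch of a Quicksilver-style "missing scientist" check, based only
# on Primer's public description: count how often each person is mentioned in a
# corpus, then flag the frequently cited ones who have no Wikipedia article.
# The toy corpus, the name list, and the threshold are illustrative assumptions.

def extract_person_mentions(document, known_names):
    """Toy entity extraction: match against a fixed name list.
    A production system would use a trained named-entity-recognition model."""
    return [name for name in known_names if name in document]

def find_missing_scientists(documents, known_names, wikipedia_titles, min_mentions=2):
    """Rank people by how many documents mention them; keep those not on Wikipedia."""
    mentions = Counter()
    for doc in documents:
        mentions.update(set(extract_person_mentions(doc, known_names)))
    return [
        (name, count)
        for name, count in mentions.most_common()
        if count >= min_mentions and name not in wikipedia_titles
    ]

if __name__ == "__main__":
    corpus = [
        "Teresa Woodruff coined the term oncofertility in 2006.",
        "Teresa Woodruff directs the Women's Health Research Institute.",
        "Joelle Pineau works on reinforcement learning and robotics.",
        "Joelle Pineau's lab studies dialogue systems.",
    ]
    names = {"Teresa Woodruff", "Joelle Pineau"}
    already_on_wikipedia = {"Joelle Pineau"}  # pretend this entry already exists
    print(find_missing_scientists(corpus, names, already_on_wikipedia))
    # -> [('Teresa Woodruff', 2)]
```

The real tool would then layer summarization on top of a ranking like this to produce sourced draft text along the lines of the Woodruff entry above.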

To date, the startup has identified 40,000 “missing” scientists whose coverage in news and scientific literature is similar to that of people who do have Wikipedia articles, and it has published 100 AI-generated summaries. It’s also been involved with three Wikipedia editathons intended to improve the online representation of women in science. (Editathons are events where specialists teach one another to create and edit Wikipedia articles, usually to bolster coverage of their subject area.) And as Bohannon notes, at least one person spotted by Primer’s technology has already been given a Wikipedia article because of it: Canadian roboticist Joëlle Pineau.

Jessica Wade, a physicist at Imperial College London who wrote Pineau’s new entry, told Wired about the system’s benefits. “Wikipedia is incredibly biased and the underrepresentation of women in science is particularly bad,” said Wade. “With Quicksilver, you don’t have to trawl around to find missing names, and you get a huge amount of well-sourced information very quickly.”

Primer says its technology builds on past work by Google and other researchers, including a study published in January this year that also used machine learning to generate basic Wikipedia articles. However, the company says its goals are more practical than pure research. Rather than using Wikipedia as a testbed for experiments, it wants to create tools with clear benefits for the online information ecosystem.

To that end, Quicksilver doesn’t just spot overlooked individuals and generate draft articles. It can also be used to maintain Wikipedia entries and identify when they haven’t been updated for a while. The company says the Wikipedia entry for data scientist Aleksandr Kogan is a good example. Kogan developed the app at the heart of the Cambridge Analytica scandal, and a Wikipedia page about him was created in March this year. Primer notes that editing on Kogan’s entry stopped in mid-April (meaning updates about Kogan, such as the fact that he also accessed Twitter data, have yet to be added).
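
The staleness check described here can be approximated with the public MediaWiki API, which reports the timestamp of an article’s most recent revision. The sketch below is an assumption about how such a check might work, not Quicksilver itself; the 90-day threshold and the script name are made up.

```python
import json
import urllib.parse
import urllib.request
from datetime import datetime, timezone

# Rough illustration of flagging stale Wikipedia entries, in the spirit of the
# maintenance feature described above. Uses the public MediaWiki "revisions"
# query; the 90-day staleness threshold is an arbitrary assumption.

API = "https://en.wikipedia.org/w/api.php"

def last_edit_time(title):
    """Return the timestamp (UTC) of the article's most recent revision."""
    params = urllib.parse.urlencode({
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp",
        "rvlimit": 1,          # newest revision comes first by default
        "format": "json",
    })
    req = urllib.request.Request(
        f"{API}?{params}",
        headers={"User-Agent": "staleness-check-example/0.1"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    page = next(iter(data["query"]["pages"].values()))
    stamp = page["revisions"][0]["timestamp"]   # e.g. "2018-04-17T09:30:00Z"
    return datetime.fromisoformat(stamp.replace("Z", "+00:00"))

def is_stale(title, max_days=90):
    """True if the article hasn't been edited in more than max_days."""
    age = datetime.now(timezone.utc) - last_edit_time(title)
    return age.days > max_days

if __name__ == "__main__":
    print(is_stale("Aleksandr Kogan"))  # the entry discussed above
```

A production tool would pair a check like this with signals from the ongoing news stream, since a quiet entry is only a problem when the world has moved on without it.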

Of course, even tools like this can be susceptible to bias. If Primer spots overlooked scientists based on their inclusion in news stories, then it might end up reflecting the interests of the science press. But Bohannon is adamant that the company’s tools can still be helpful as an assistant to a human-led process.

“The human editors of the most important source of public information can be supported by machine learning,” he told The Register. “Algorithms are already used to detect vandalism and identify underpopulated articles. But the machines can do much more.”