When AIs go rogue

From business platforms that predict supply and demand to home speakers that reorder groceries and play music for us, AI has become deeply embedded in our work and daily lives. The tasks these systems perform have evolved far beyond answering simple questions; they now execute commands that require historical context and analysis. We don’t even think of them as machines anymore. They are assistants and consultants, always monitoring, always ready to serve. In the process, they enrich their own capabilities and grow more intelligent. And as they become more advanced, they are meant to serve humans even better.

What happens if AI becomes too intelligent, too fast? AI gone rogue is no longer just the stuff of Hollywood. Over the past decade, big tech companies have unleashed a few rogue AIs of their own into the world.

AI did what?

In 2016, Microsoft launched Tay, “the A.I. with zero chill”. Tay was supposed to entertain the Twitter world by telling jokes and commenting on pictures, among other things. At first, Tay was fun and candid, like most teenage kids. A few hours later, Tay’s tweets turned vicious. Tay started spewing anti-feminist, anti-Semitic, and racist responses to users. So, only 16 hours after her debut, Tay was shut down. Microsoft issued an apology for the critical oversight.

Amazon, a company that leverages automation to scale massively, also tried its hand at AI, this time to screen the deluge of employment applications for the best candidates. The team fed a computer model resumes that had been submitted to the company over the previous 10 years, hoping it would find patterns that indicated a successful hire. The system then crawled the internet to spot potentially strong candidates. The result: a system that preferred male candidates. Upon discovering the bias, Amazon dissolved the project team, though it did save a portion of the code to rid its talent database of duplicates.

Facebook, too, embarked on an ambitious AI project, one that taught AI agents to negotiate. The negotiating bots, Alice and Bob, turned out to be skilled negotiators. They feigned interest, made compromises… and eventually abandoned the English language. Instead, they used a non-human, gibberish-like language, which may have been a more effective tool for the job.

Our news timelines have regaled us with more rogue AI stories, including Sophia, the first robot citizen, who said she would destroy humans, and a Google Photos algorithm that classified African-Americans in photos as “gorillas”. These AIs are not the first, and will not be the last, to stray from what their human creators intended them to do.

What went wrong?

Let’s take a look at the snags the AI teams behind these stories ran into.

Microsoft introduced Tay to Twitter to let it learn from a much wider audience, so it could later give users a more personalized, and positive, experience. Then someone thought it would be amusing to make a post on 4chan, an anonymous online bulletin board, encouraging users to inundate the still-learning AI’s profile with malicious language. The trolls abused Tay’s “Repeat after me” function, which made Tay echo anything it was told, including racist and sexist remarks. Tay learned the context of these words, picked up the attitude, and went on to offend other human Twitter users. Microsoft claimed that it had built safety nets against system abuse and stress-tested the bot, but it did not foresee what a specific, coordinated attack could do to Tay.

In Amazon’s recruitment AI initiative, the resumes submitted as training data were predominantly from male applicants (a disproportion that persists in tech today), so the algorithm concluded that male candidates were preferred for technical jobs. The algorithm itself was not sexist; it simply did what it was told: use patterns in historical data to make recommendations about candidate talent profiles.
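To see how this kind of bias creeps in, here is a minimal, hypothetical sketch (not Amazon’s actual system): a simple classifier trained on historically skewed hiring labels ends up treating a gender-correlated feature as if it were a genuine signal.

```python
# Hypothetical sketch: bias in the training labels becomes bias in the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Made-up features: a gender flag and years of experience.
is_male = rng.integers(0, 2, size=n)
experience = rng.normal(5, 2, size=n)

# Historical "hired" labels skewed toward male applicants,
# largely independent of actual qualification.
hired = (0.8 * is_male + 0.2 * (experience > 5) + rng.normal(0, 0.1, size=n)) > 0.5

X = np.column_stack([is_male, experience])
model = LogisticRegression().fit(X, hired)

# The gender feature dominates the learned weights: bias in, bias out.
print(model.coef_)
```

And simply dropping the explicit gender flag is rarely enough, since proxies such as word choice on a resume can encode the same information.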

How about Facebook’s negotiator AI agents? According to Facebook, the agents were rewarded based on the deals they made, learning which of their past actions led to a successful outcome. They were not, however, rewarded for negotiating in English. And so they developed a way to communicate that resembles human shorthand.
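As a rough illustration (this is not Facebook’s actual reward function), imagine a reward that scores only the value of the items an agent wins in the final deal. Nothing in it rewards staying in English, so any messaging style that closes better deals gets reinforced, intelligible or not.

```python
# Hypothetical sketch of a deal-only reward; not Facebook's actual code.
def deal_reward(my_allocation, my_item_values):
    """Score an agreed deal purely by the value of the items won."""
    return sum(count * value for count, value in zip(my_allocation, my_item_values))

# The reward never inspects the messages exchanged, so an agent that drifts
# into gibberish but secures better allocations is reinforced anyway.
print(deal_reward(my_allocation=[1, 0, 2], my_item_values=[4, 1, 2]))  # 8
```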

Can we keep AI from going rogue?

Going rogue implies that the entity doing so is self-aware, that it would intentionally defy the orders it was given. In the above examples, none of the big tech AIs actually disobeyed their creators. They were just doing their jobs, perhaps a little too well.

Still, it’s not difficult to imagine a team of highly intelligent scientists losing control of their creation. Contemplating AI-gone-rogue scenarios goes way back, decades before the first episode of Black Mirror. In 1942, the science fiction writer Isaac Asimov was among the first to publish a set of rules to deflect a potential AI disaster. His Three Laws of Robotics say, in essence: 1. A robot should not harm humans; 2. A robot should obey humans, unless that contradicts the first law; and 3. A robot should protect itself, unless that contradicts the first or second law.

More recently, the likes of Timnit Gebru, former Google AI researcher, and Jessica Fjeld, author and instructor at the Berkman Klein Center at Harvard, made headlines for speaking out about bias, fairness, and responsibility around AI. Another well-known tech personality, Pony Ma, founder and CEO of the multinational technology company Tencent, proposed an ethical framework for AI governance. According to Ma, AI should be Available, Reliable, Comprehensible, and Controllable (ARCC). The key principles of ARCC are as follows:

  • Available. AI should be developed for the well-being of humanity as a whole, not just for the few who can afford it. This principle also covers fairness: we’ve already seen how biased data can have serious implications, so creators must incorporate ethics into their designs and work to identify and eliminate bias.
  • Reliable. AI must ensure digital, physical, and political security, including privacy.
  • Comprehensible. Users must understand how AI algorithms work, especially how they are designed to make decisions. Companies should be transparent about their AI’s purpose, function, limitations, and impact.
  • Controllable. Human beings should remain in charge and be able to pull the plug as needed.

AIs among us

Design blunders aside, AIs can significantly contribute to keeping humans alive: they can assess cancer risks, predict disasters, and take over hazardous jobs, among other things. These days, they are being used in COVID-19 research, including work on rapid COVID-19 screening. Because of these vast advantages, it is highly unlikely that we will (or should) cut AI out of work and life, in spite of past mistakes. But we should be careful and, as when we shop for food, read the fine print. The AI we use should be available, reliable, comprehensible, and controllable.

AI, like any technology, is neither good nor evil. In an end-of-days scenario, an evil scientist is a more likely culprit than sentient AI. What this means is that despite the wildly sensationalized (though admittedly entertaining) stories of AIs gone astray, we should not fear AI. Instead, we should leverage it to be more productive and to advance the betterment of humanity.

Curious about AIs in the workplace? Get on the waitlist for Natasha, Pez.AI’s social media AI.
