Zoe Weaver is a BSc Psychology student from the University of the West of England who is doing an internship with the Anthropology + Technology 2020 Conference team. She caught up with creative technologist Joe Derry Hall to talk about his quirky, interactive game A Week With Wanda that exposes the dark side of AI.
Social media is saturated with click-bait articles about cataclysmic scenarios of a world overrun by AI. People seem far more taken with these fantastical stories than with the more innocuous-seeming real-world ethical concerns in emerging tech, happening right under our noses: from biased algorithms deciding exam results, to smart devices that know more about you than your own family, to the U.K. Government’s visa streaming algorithms that make it harder for people from undisclosed countries to obtain a visa.
A Week With Wanda
Cue A Week With Wanda, a fun, interactive simulation game that brings users face-to-face with the dark side of AI. Channeling all of the dark allure of the media’s obsession with dystopian futures, Wanda is a spoof personal assistant – think Siri or Alexa – with whom you can spend a week.
Intrigued to find out for myself whether Wanda could truly “change my life in seven days”, as the tagline claimed, I signed up with my email address. After a few days, I was horrified to discover that her ideas for transforming my life were often invasive and bizarre: reflective of current AI capabilities, but dialled up a couple of notches.
Wanda offered to help me in an area of my choice: health, finances, or relationships. However, some of her suggestions were not so reassuring. Wanda asked me whether I would sign a petition to allow her to buy up every other major AI medical company, to improve cancer detection.
This idea was clearly beneficial for consolidating the knowledge base, but putting so much power in the hands of one company could also have undesirable consequences. What if, with no rivals, they started to charge unreasonable prices for their services? Wouldn’t the lack of competition and diversity become detrimental to progress in the long term? Needless to say, I opted out.
Other scenarios were more obviously reprehensible. Wanda suggested that she should block some of the people with lower IQs, whose unhealthy habits were causing them to monopolise the healthcare centre, from the local doctor’s surgery. Surely, I wondered, Wanda is aware of the socio-economic circumstances that often give rise to unhealthy habits and low intelligence. Her solution bore an eerie resemblance to eugenics.
And then it dawned on me that AI, which is touted as the quick fix for an increasing number of problems, may be able to categorise the world, even predict it. But it certainly can’t resolve global issues.
AI Ethics, But Fun
As Joe explained to me when we met to discuss Wanda, different scenarios arise according to your responses in the simulation, meaning that you can play multiple times and get different interactions. Each day of the game is focused on a particular ethical issue, including privacy, centralisation, bias and discrimination, and dehumanisation.
Understanding that people learn better through stories and humour, Joe scripted scenarios that were funny and memorable. People also tend to assimilate and act on information better when reflection is encouraged, so he ensured there were opportunities for users to feed back their views each day. And perhaps most importantly, noting the positive correlation between the strength of the emotion in an experience and the strength of the memory, he made sure that the scenarios would evoke feelings of shock, anger, or disapproval.
Joe said, “One of the biggest ethical issues for me was bias and discrimination. Whether it’s changing the insurance deal you get because of the colour of your skin, or giving you a longer prison sentence. Maybe it’s choosing who gets cancer treatment because of what health decisions they’ve made in their own lives, whether they’re overweight or not. Bias and discrimination – whether it’s race, gender, or whatever else – gets people angriest when they play it.”
Socially Responsible AI
I asked Joe what ethical and practical guidelines he would like to see in place to bring about socially responsible AI. He suggested that we need something like a “food-labelling system”, similar to the info mark discussed by A+T 2020 speaker Allison Gardiner. “It’s taken a long time for food campaigners to get to the point where you pick up a packet of crisps at the supermarket, and it’s got those traffic lights that tell you how good or bad it is for you on a few different measures”, such as fat or sugar content. “I feel we need some version of that for technology.”
Joe believes that if technology is left unregulated, it will continue to be “steered by profit”. “It’s all the worst things we could think of. So let’s not sit here and think too much about that, but it’s way worse than A Week With Wanda. This stuff is way too important to leave to some tech guys in a dark room”.
If you’d like to spend a week with Wanda, sign up here.