“What questions can I ask my data and how do I answer them? And which are the questions that my data cannot answer but anthropology might help with?”
In this workshop, as a case study, we offer a behind-the-scenes view of Troll Patrol, a study of online abuse against women on Twitter that wiped $2.5 billion off Twitter's market value overnight (-13% within hours), significantly increased the pressure and incentives for tech executives worldwide to take the problem seriously, and prompted the first meeting between Twitter's CEO and Amnesty International.
We will show how a constant back-and-forth between the social sciences and computational statistics/machine learning/AI, together with the goodwill of 6,000 data-labelling volunteers, allowed our small team to have such an unexpected and outsized impact.
The first half of the workshop will be a lively panel presenting the high and low points of the work: the process, the pitfalls, the lightbulb moments, the slog, the hard-learned lessons, and the elation. How do we graft quantitative tools onto a qualitative culture? And when don't we? How can qualitative knowledge drive quantitative backing, and when can one mislead the other? How do we turn the overgrown hype around Artificial Intelligence/Machine Learning on its head and make it useful? And how do we combine their actual strengths with classical methods, all in the service of domain experts?
For the second half we will split into four small subgroups, to exchange war stories amongst participants and organisers, answer questions in more detail, debate, compare notes, discuss “what if” scenarios, and draw parallels.
At the end, we will regroup and summarise the subgroups’ insights so that, while we each walked in with the ideas of one, we all leave with those of thirty.
Attendees will come out of this workshop with:
- A behind-the-scenes case study of a successful human rights campaign mixing qualitative and quantitative methodologies with media strategy, through a lot of back-and-forth between Human Rights and Machine Learning.
- Ideas on designing successful crowdsourced data-collection campaigns.
- A joint reflection on what anthropology can ask of the resulting data, and how to do so.
- Conversely, a joint reflection on the questions the data cannot answer and for which it needs anthropology.
- How to weave both the strengths and, more importantly, the limitations of Machine Learning/Artificial Intelligence into a study.
For the second half of the workshop, please come with your own "war stories" to share, if you have any: examples where you had to go back and forth between quantitative and qualitative analysis, ran up against the strengths or limitations of advanced algorithms, or worked at the interface between domain expertise and tech wizardry. Examples where you couldn't do either are also welcome! Think about successes and, even more importantly, friction points!
Julien Cornebise, Honorary Associate Professor, UCL
Julien is an Honorary Associate Professor at UCL. An early researcher at DeepMind, he then built and led Element AI's London office and its "AI for Good" unit as Director of Research. He had the privilege to work with the NHS to diagnose eye diseases; with Amnesty International to quantify abuse against women on Twitter and find destroyed villages in Darfur; with Forensic Architecture to identify teargas canisters used against civilians; with Human Rights Watch; and with NASA FDL. Seventeen years of researching and applying algorithms, and six of supporting social-change actors, have shown him two sides of tech.
Azmina Dhrodia, Independent Expert
Azmina Dhrodia is an expert on online gender-based violence against women, with a particular focus on social media content moderation, freedom of expression, and online safety. She was most recently the Head of Operations and Research at Block Party, a tech start-up tackling online harassment by giving people greater control over their online communications. Previously, she was a Research and Policy Advisor on Technology and Human Rights at Amnesty International, where she spearheaded the organization's research, policy, and advocacy on violence and abuse against women on social media platforms. She has authored several reports and articles on the issue, including the cutting-edge report #ToxicTwitter: Violence against Women Online, in which she applied an intersectional lens to analyse the human rights impact of abuse against women on social media platforms and the right to freedom of expression online.
Laure Delisle, California Institute of Technology (Caltech)
Laure is a first-year PhD student and Kortschak fellow at Caltech, working on semi-supervised learning and computer vision. Before that, she was a research engineer in the AI for Good lab at Element AI, enabling NGOs and nonprofits to tackle human rights violations. There, she contributed to quantifying the scale and intersectionality of abuse against women on Twitter, and mapping out rural regions of Darfur, in partnership with Amnesty International.
Freddie Kalaitzis, NASA Frontier Development Lab
Freddie is a team lead for AI, Space & Earth science at NASA's Frontier Development Lab. Before that, he worked in the AI for Good team at Element AI, supporting NGOs and nonprofits. He co-authored the first technical report in partnership with Amnesty International on the study of online abuse against women on Twitter from crowd-sourced data. He also led the research on Multi-Frame Super-Resolution, which won an ESA award for topping the PROBA-V Super-Resolution challenge.