Bias, AI, and Recruitment
I will outline the role of AI in recruitment, what is meant when saying an AI protocol is ‘biased’, and how such bias might emerge.
AI is playing a greater and greater role in recruitment: automated lead generation of candidates, chat-bots at every stage of the process, CV-scrapers, talent assessments via gamification, and automated interview pre-screening.
The Oxford Dictionary defines bias as: [mass noun] Inclination or prejudice for or against one person or group, especially in a way considered to be unfair. AI finds correlations, and as such it cannot be inherently unfair, since it has no ethical dimension. The problem is that human beings can be, implicitly or explicitly. Most variables used for recruitment purposes are based on how humans treat us (e.g. invitations to conferences), how humans evaluate us (e.g. grades at school or university), and how these two factors together have shaped our careers (our CV track-record).
If AI is trained on biased data sets (that is, data sets contaminated with human bias), it will pick up true correlations, such as a decreased likelihood of female researchers being invited to speak at academic conferences due to discrimination (a finding from a real study). I will explain some of the problems of cleaning up and normalising biased data.
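One of those clean-up problems can be sketched in a few lines of code. The simulation below is purely illustrative (all names, distributions, and numbers are assumptions, not taken from any real study): historical "invited speaker" labels are contaminated by bias against one group, and a proxy feature (such as prior invitations) correlates with group membership. Dropping the protected attribute before training does not remove the bias, because the proxy still encodes it.

```python
import random

random.seed(0)

def make_record():
    """Generate one hypothetical candidate record (illustrative only)."""
    group = random.choice(["A", "B"])
    skill = random.gauss(0, 1)
    # Proxy feature, e.g. prior conference invitations: correlated with
    # skill, but also with group membership because past invitations
    # were themselves biased.
    proxy = skill + (0.8 if group == "A" else -0.8) + random.gauss(0, 0.5)
    # Biased historical label: same skill, lower invitation odds for group B.
    penalty = 0.0 if group == "A" else -1.0
    invited = (skill + penalty + random.gauss(0, 0.5)) > 0
    return group, skill, proxy, invited

data = [make_record() for _ in range(20000)]

def invite_rate(rows):
    return sum(r[3] for r in rows) / len(rows)

# The raw labels reproduce the discrimination:
rate_a = invite_rate([r for r in data if r[0] == "A"])
rate_b = invite_rate([r for r in data if r[0] == "B"])

# Naive "cleaning" by dropping the group column does not help: a selection
# rule that only looks at the proxy still picks group A far more often.
selected = [r for r in data if r[2] > 0]
share_a = sum(1 for r in selected if r[0] == "A") / len(selected)

print(f"invite rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"share of A among proxy-selected: {share_a:.2f}")
```

This is the core difficulty of normalising biased data: the protected attribute leaks into correlated features, so removing the attribute itself is not enough.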
The solution: using AI on New Data, e.g. HackAJob with their coding challenges, Pymetrics with their neuroscience games, or Sigma Polaris with our online assessments.
Academic background in mathematics (BSc), behavioural economics (RA), logic (MA), and rationality theory (PhD).
Former (youngest ever) Goodwill Ambassador of Denmark. Brand Ambassador for Maersk Shipping. Professional flutist under contract with the Danish Queen (to throw in an interesting one), and Founder of Sigma Polaris.
Read our interview with Nemo.