Merve Alanyali

Dr Merve Alanyali is on the panel in the afternoon. We invited her to tell us more about herself ahead of the conference.

Merve, thanks for taking the time to talk to us. You’re a Senior Data Scientist at LV=. Algorithms are driving ever more real-world decision-making, such as helping doctors detect cancer or, in LV=’s case, identifying insurance fraud. Can you tell us a bit about what your work involves?

Insurance is one of the commercial fields that generates large quantities of very diverse data, ranging from traditional time series to text and images. To gain insights and extract meaningful information from these large datasets, it is necessary to find automatic ways to analyse them. That is where we come in.

Since customer experience is of the utmost importance for LV=, our projects are mostly focused on using data to improve accuracy and efficiency across all areas of LV=, ultimately to improve the services we offer to our customers. Together with domain experts across the business, we act as an internal consultancy, building tools that automatically analyse data to assist decision-makers such as claims handlers. We have been working on a wide range of projects, from fraud detection to marketing, as well as claims liability and pricing.

Before joining LV=, you were also a Turing Fellow, and your research focused on analysing large open data sources using cutting-edge concepts from image analysis and machine learning to understand and predict human behaviour at a global scale. Examples include identifying protest outbreaks using Flickr pictures, estimating household income with Instagram pictures, and predicting non-emergency incidents in New York City. Can you tell us a bit more about this and why this research is important and useful?

Measuring how people behave is of vital importance to scientists, policy-makers and commercial decision-makers alike. Traditionally, many measurements of the core aspects of our daily lives have been drawn from surveys and interviews. Although such data offer useful and rich insights into human behaviour, they also have certain drawbacks, including the resources required to collect the data and the delay in collecting and reporting it.

Data can now be accessed rapidly and at low cost, opening up unprecedented opportunities to analyse social processes and measure human behaviour at a national or even global scale.

Our everyday interactions with technological devices and the online services they connect us to generate a vast amount of data. This data can be accessed rapidly and at low cost, opening up unprecedented opportunities to analyse social processes and measure human behaviour at a national or even global scale.

With improved Internet connectivity, we are witnessing a shift in the format of this data from text to visual media such as videos and images. In my PhD, I worked on identifying new sources that can be used as a practical supplement to traditional sources of data on human behaviour. Under this broad topic, I chose several case studies focusing on detecting global events, estimating socioeconomic statistics and predicting the location of non-emergency incidents. The findings from my research underline the potential of online images as a cheap and rapidly available measure of human behaviour around the world.

The impetus for this conference was the social impact of algorithmic systems and what Joy Buolamwini terms the “coded gaze”. Algorithmic systems require vast amounts of data, data which is often biased in terms of race and gender. How do you as a data scientist try to prevent bias in your work?

It is crucial to fully understand what goes into an algorithm, what comes out of it, and why.

It is one of the biggest challenges we face, but it is very positive to see that there has been increased awareness of the topic, especially in the last couple of years. More and more research, often supported by companies, is going into developing socially responsible AI systems.

This problem has multiple layers. One is to avoid predictors with an obvious bias, for instance by not using predictors linked to gender or race. Another layer is to identify hidden biases: are there any other predictors that might have an underlying link to gender or race? These cases are harder to spot. And finally, data generated by us humans that is used to train AI models can directly inherit the biases we humans have. One of the most famous examples is Tay, the chatbot that was trained on other users’ tweets. Microsoft had to shut the service down in less than 24 hours because of the offensive tweets it was posting.
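To make the “hidden proxy” layer concrete, here is a minimal sketch of one simple screen a data scientist might run: checking whether candidate predictors correlate with a protected attribute. The dataset and column names are hypothetical, and correlation is only a crude first pass; real proxy detection requires far more careful analysis.

```python
import pandas as pd

# Hypothetical claims data -- column names are illustrative, not a real schema.
df = pd.DataFrame({
    "gender":        ["F", "M", "F", "M", "F", "M"],
    "postcode_risk": [0.8, 0.2, 0.7, 0.3, 0.9, 0.1],     # candidate predictor
    "claim_amount":  [1200, 900, 1500, 800, 1100, 950],  # candidate predictor
})

# Encode the protected attribute numerically so we can correlate against it.
protected = df["gender"].map({"F": 1, "M": 0})

# Flag any predictor whose correlation with the protected attribute is high:
# a large value suggests the predictor may act as a hidden proxy.
for col in ["postcode_risk", "claim_amount"]:
    corr = df[col].corr(protected)
    if abs(corr) > 0.5:
        print(f"{col}: correlation {corr:.2f} with gender -- possible proxy")
```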

The latter two cases are quite hard to identify. That is why it is crucial to fully understand what goes into an algorithm, what comes out of it, and why. Can we interpret the results? Why do we think we get those results? Which predictors contribute more than others? Are there any obvious cases that might be biasing our results? Nowadays, there is a large selection of powerful algorithms that can be used off the shelf and, much as I disagree with the practice, can also be treated as black boxes. My position is that if I, as a data scientist, cannot interpret the results or understand how and why the algorithm reached them, it is wise to think twice before using such a model in production.
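As an illustration of asking “which predictors contribute more than others”, one common technique is permutation importance: shuffle a feature and see how much the model’s score drops. The interview doesn’t say which tools LV= uses, so this sketch uses scikit-learn on toy data purely as an example.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy stand-in for a real claims dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score drops. A big drop means the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance {imp:.3f}")
```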

At LV=, we place a strong emphasis on interpretability and on not accepting any algorithm we use as a black box. We are also about to start a collaborative postdoctoral research project with the University of Bristol, specifically tackling the creation of socially responsible AI systems.

However, it’s not just about having accurate algorithmic systems that don’t discriminate; how these systems are used is also important. What’s your take on this?

That is absolutely true. For instance, at LV= our models are used to assist decision-makers across the business, so it is crucial that the people using these tools on a daily basis understand what is going on. As data scientists, we are responsible for ensuring that users have at least a basic understanding of how our models work. They need to be aware of the weaknesses as well as the strengths, and to decide to what extent the information coming from these models should affect their final decision.

A number of government bodies have started forming subgroups specifically focused on the use of data and AI across businesses. These groups are working towards new policies to protect vulnerable groups and prevent the exploitation of AI tools. For instance, we are engaging with the Centre for Data Ethics and Innovation on their snapshot series on AI and personal insurance.

Women make up one-quarter of computer scientists, and in the field of artificial intelligence those numbers are likely much lower, according to an article in The Atlantic. It’s really important for young girls to see women like you in the field of artificial intelligence so that we break the perception that AI is hard and exclusive. What would you say to young girls who want to follow in your footsteps?

Seeing all the cool things we can do using mathematics/CS helps break the reputation quantitative subjects have for being “boring” in the eyes of young girls and boys alike.

It is indeed quite upsetting to see disciplines being categorised by gender. I suggest completely ignoring gender stereotypes around disciplines and speaking to someone working in the field; it is the best way to understand “a day in a data scientist’s life”. Seeing all the cool things we can do using mathematics/CS helps break the reputation quantitative subjects have for being “boring” in the eyes of young girls and boys alike.

What we have been lacking in AI and tech-related fields is strong female role models; however, this has been changing in recent years. There are a lot of initiatives bringing together young girls and women working in AI, providing real examples that AI is not a field only for boys. For instance, LV= are supporting the Tech She Can initiative, and we were part of the Changemakers event organised by the University of Bristol, where young girls had a week of workshops on technical topics and were asked to come up with a project for social good. I was on the judging panel and was absolutely amazed by the brilliant ideas the girls had and the demos and presentations they prepared in such a short time.

The focus of this conference is about encouraging and facilitating cross-disciplinary discussions on AI. What’s your opinion on the belief that technology teams should comprise people from different disciplines?

It is such a joy to see more cross-disciplinary conferences being organised, as well as data science teams and research groups being formed this way.

I truly support inter- and cross-disciplinary collaboration; in fact, my entire career is based on it. I did my undergraduate degree in computer science, but since then I have kept jumping between departments: I completed a master’s programme in Complex Systems Science, a combination of mathematics, statistics and computer science, and obtained a PhD from a business school’s behavioural science group. I truly believe in the power that comes from combining different viewpoints and expertise, and I see the value of it every day. It is such a joy to see more cross-disciplinary conferences being organised, as well as data science teams and research groups being formed this way.

Why are you excited to attend the conference and what do you hope to get from attending?

I am also hoping to get more inspiration on how we can better shape technology to integrate with human workers.

I am very excited to network with other professionals from different backgrounds. I always find it a good exercise to explain my work to people with diverse expertise and get their feedback on potential improvements as well as next steps. I am also hoping to get more inspiration on how we can better shape technology to integrate with human workers.

In your opinion, why do you think other technologists should attend the conference?

The conference will enrich technologists’ understanding of the socio-cultural impacts of the AI systems they are developing.

It can be quite easy to get accustomed to the jargon of a certain group: for instance, if everyone around you is a data scientist, it is easy to assume a certain level of knowledge when talking about technical concepts. I personally think it is very important to get outside our comfort zones and interact with people from different backgrounds. Events such as the Anthropology and Technology Conference give technologists a good incentive to mingle with professionals from different backgrounds, as well as to enrich their understanding of the socio-cultural impacts of the AI systems they are developing.

Is there anything else you’d like to tell us?

I just want to thank you for making me a part of this conference. Can’t wait to meet the rest of the attendees and listen to the interesting talks.

Thanks so much for taking the time to talk to us, Merve. We can’t wait for our delegates to meet you on 3 October!