KATE GLAZEBROOK FOUNDER & CEO, APPLIED
Kate Glazebrook is cofounder and Head of Research at Applied, a web platform that uses behavioural research to help organisations find the best person for the job, regardless of their background, and to make hiring smart, fair, and easy. Prior to joining Applied, Kate was Principal Advisor and Head of Growth and Equality at the Behavioural Insights Team (BIT), and has previously worked for UNESCO in South-East Asia and for the Australian Treasury.
What is the challenge your organisation is working on?
Organisations are very good at making hiring faster and smarter, but not necessarily fairer. Applied is a web platform that uses behavioural research to help organisations find the best person for the job regardless of their background. The aim is to level the playing field in all types of organisations.
How does the integration of AI into hiring affect women?
As AI becomes more prevalent in a range of systems across organisations, there is the potential to perpetuate existing biases. Many high-profile cases have shown how this can go terribly wrong. And since we’re passionate about making sure organisations benefit from the whole talent pool, we’re vigilant about designing tech that is as inclusive and fair as possible. For this reason, we’ve tended not to use AI in our platform, but rather to design out the risk of bias in human decisions. That said, we are always pushing ourselves on these questions. We can take the example of our own work.
As we explore integrating machine learning into our platform, a particular challenge is finding a suitable dataset to train these systems so that they don’t perpetuate historic biases. Available datasets are often much smaller than the samples we need, and are based on past top performers, a group that largely excludes women and other historically discriminated-against groups. Training on those datasets simply perpetuates the existing bias, and won’t create new opportunities for women or other under-represented groups. In fact, we’re currently considering how we might use the Applied recruitment platform to build a representative dataset.
Why is women’s leadership important on this issue?
Many AI companies have few or no women on project teams, which partially contributes to the perpetuation of biases present in machine learning systems. Part of this is the relatively low number of women STEM graduates, but there are other barriers. For example, research shows that women tend to apply for a job only if they meet all the criteria, while men will often apply even if they meet only a portion of them. Men also tend to apply for more jobs while job searching than women do. These tendencies compound to perpetuate bias.
Women leaders are often well placed to understand the barriers preventing more women from being hired in tech fields and AI. Women such as digital activist Joy Buolamwini and mathematicians Hannah Fry and Cathy O’Neil are already playing a crucial role in raising awareness of the biases in AI systems caused by non-diverse datasets and project teams.
More generally, women are leading a broader conversation about the fair and ethical use of AI by holding important positions in organisations such as the Ada Lovelace and AI Now Institutes, which conduct much needed research on the social implications of AI.
How can women amplify their impact on this issue, and what’s necessary to help them combine their efforts?
All organisations come to us with different challenges, and all want to level the playing field in their own way. Giving women leaders the opportunity to connect across companies and across industries is a good first step for sharing what we’ve learned, what the challenges are and, most importantly, how to fix them.
How important is collaboration to having an impact on this issue, and what role does an initiative like the Daring Circle play in that shared work?
With the development of new platforms and the application of AI across diverse systems, you need diverse groups of people and organisations to confront the challenges they raise. Having cross-functional groups like the Daring Circles is crucial if we want to share the data, insights, research and best practices necessary to overcome bias. It also helps us come together on the urgent need for women’s leadership, not just in hiring around AI but across a range of applications.
This series of stories highlights women leaders and entrepreneurs who are driving positive impact on our most pressing global issues and demonstrating women’s unique contribution to inclusive solutions. It draws on the community of the Daring Circles – workgroups committed to positive impact in areas where women are most affected and where women are demonstrating outsize leadership. Share your stories with the hashtag #Women4AI, and submit your own stories to the Women’s Forum editorial team by emailing Sophie Lambin (firstname.lastname@example.org).