Machine learning systems use artificial intelligence to process data. If the machines are processing the data, surely it's fair? Or are there biases built into these systems? And are the people creating the algorithms diverse enough to think about all the different types of people and perspectives? Or are the machines biased?

Humans are biased too

Why has machine learning developed? Well, it's to save time in finding trends and patterns. It's to make data valuable as a resource, and a way to filter and search it. And before this was done through algorithms, it was done by humans. And guess what? Humans are biased too. People have biases, learnt through life experiences, surroundings and the people around them, which affect judgement and decisions. So, if people are processing job applications, for example, they are more likely to choose candidates who have characteristics, traits or experiences which remind them of themselves.

Transfer that to machines, and those biases get built into the machine learning technology. If the people building the algorithms and technology behind these systems are not diverse themselves, their blind spots are baked in and perspectives are missed.

A good example of this is the UK passport photo checker. A BBC investigation found that women with darker skin are more than twice as likely to be told their photos fail UK passport rules. That bias against people with darker skin tones was built into the system because not enough perspectives were included when the algorithms were created. And the system was launched even though user research showed there were problems.

What are the machines learning?

Machine learning is a type of artificial intelligence. It's a way of 'training' computers on lots of data and showing them how to make decisions and judgements. They are also trained to make predictions about the information they process.
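To make that concrete, here is a minimal, invented sketch of what 'training' means in practice. The data, the features and the library choice (scikit-learn) are all assumptions made for illustration, not any real system. The point is that the model learns whatever patterns sit in the historical examples it is given, biases included.

```python
# Minimal illustration of "machine learning": the model learns its rules
# from historical examples, so it reproduces whatever patterns (and biases)
# those examples contain. All data here is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row is a past case: [years_of_experience, attended_elite_university]
past_cases = [
    [2, 1], [5, 1], [3, 1], [7, 0], [1, 0], [4, 0],
]
# 1 = approved, 0 = rejected (historical human decisions)
past_decisions = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier()
model.fit(past_cases, past_decisions)

# The model now decides new cases the way the humans did before it. Here it
# has "learned" that the elite-university flag is all that matters, because
# that is the pattern in the training data, regardless of experience.
print(model.predict([[10, 0], [1, 1]]))  # prints something like [0 1]
```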

These types of tools sit inside many of the systems that make decisions about us, or that we use to make decisions ourselves. For example, machine learning is used in recruitment. Machines are taught to process job applications, filtering candidates for organisations. And there are biases built into this: unusual names, unusual skills, and non-standard roles or experiences will not be favoured by these algorithms.
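As a purely hypothetical illustration of how that filtering happens, imagine a screener that only scores CVs against a fixed list of 'standard' keywords. Anything it has not been told about scores nothing, however relevant it might be.

```python
# Hypothetical CV screener: scores applications against a fixed list of
# "standard" titles and skills. Anything outside the list scores nothing,
# so unusual roles, skills and career paths are quietly filtered out.
STANDARD_KEYWORDS = {"project manager", "python", "sales", "marketing"}

def screening_score(cv_text: str) -> int:
    """Count how many 'standard' keywords appear in the CV."""
    text = cv_text.lower()
    return sum(1 for keyword in STANDARD_KEYWORDS if keyword in text)

applications = {
    "Conventional CV": "Project manager with Python and marketing experience.",
    "Career changer": "Ran a community radio station, self-taught data skills.",
}

for name, cv in applications.items():
    print(name, screening_score(cv))
# The career changer scores 0, not because they lack ability, but because
# their experience doesn't match the keywords the system was built around.
```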

Recruiters say that AI saves them time. That might be the case, but these AI tools are also introducing bias. Amazon scrapped its AI recruitment tool in 2018 because it clearly showed a preference for male candidates. These are the types of biases that can be built into these systems. And with companies wanting to address their diversity and inclusion issues, it's something they need to be aware of.

The machines are also learning about how we communicate, and there can be biases built into this too. Google stopped its Smart Compose feature from suggesting gender-based pronouns. And Google (again) was found to be showing ads for higher-paid jobs to men more often than to women. And all you have to do is search (in Google) for a term such as 'CEO' to see page after page of very similar-looking men.

And Slack, a messaging tool used in many companies, is a source of data for AI tools that try to gauge employee satisfaction. Slack is where a lot of conversation and collaboration happens, so it makes sense. But are the algorithms taking into account the fact that people behave differently on the platform? For example, data has shown that men dominate Slack channel conversations, so any tool relying on Slack conversations alone is missing the full picture.

Are the machines the future?

Machine learning works on the data it has, and the decisions it is told to make on that data. For engineers to create algorithms that are not biased, they need to understand the different types of bias themselves. If we look at the passport photo example, the tool works on photos of white women, so the team might have assumed everything was fine. But women of colour were missing from the data the system was built and tested on.
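A rough sketch of why that gets missed, using invented numbers purely for illustration: when one group dominates the test data, the headline accuracy looks healthy even while another group fails most of the time.

```python
# Illustrative only: invented results showing why an overall accuracy figure
# can hide a failure for an under-represented group. If the test set is
# mostly one group, the headline number looks fine.
test_results = [
    # (group, photo_checked_correctly)
    *[("lighter-skinned women", True)] * 90,   # 90 test photos, all pass
    *[("darker-skinned women", True)] * 4,     # only 10 test photos...
    *[("darker-skinned women", False)] * 6,    # ...and most of them fail
]

overall = sum(ok for _, ok in test_results) / len(test_results)
print(f"Overall accuracy: {overall:.0%}")  # 94%, which looks fine

for group in ("lighter-skinned women", "darker-skinned women"):
    results = [ok for g, ok in test_results if g == group]
    print(f"{group}: {sum(results) / len(results):.0%}")
# lighter-skinned women: 100%, darker-skinned women: 40%. The failure only
# shows up when the results are broken down by group.
```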

AI is used in recruitment to do some initial screening of candidates. It is also used to generate interview times and to set tests and tasks for interviews. Without careful consideration of bias in these processes, we can see these systems effectively recruiting a team of clones. And as Matthew Syed says in his book Rebel Ideas, teams of rebels beat teams of clones. You need diverse teams for innovation, creativity and profit growth.

Where AI could be useful is in identifying passive candidates. Around 73% of candidates are passive jobseekers: not actively looking for a new job, but open to being tempted by the right opportunity. AI can be, and is, used to look at search terms, browsing history and other data to serve people jobs that might interest them. AI can also improve things like scheduling interviews (instead of email 'tennis'). It could be used to sort candidates to help recruiters, if the algorithms are not biased. Things like follow-up information to candidates can be improved with AI too. Essentially, it can help by taking over some of the admin tasks.

And there are other positive examples. Uber tried to address its gender pay gap by using an algorithm to set pay rates and schedule shifts for drivers. It hasn't worked yet, but they are working on it. Unilever has moved away from an exclusively human-driven approach to recruitment by using automated assessments and video interviews, which has helped it bring a more diverse group of graduates into the organisation.

And if we think beyond recruitment, there are positive uses of AI in our lives: Netflix recommending films based on what we have already watched, better weather forecasting, faster processing of medical data, and other improvements to how data is handled.

The key to all of this is to consider the biases everyone has. Teams working on these systems need to reflect different perspectives, and models, methods and thinking need to be challenged. The machines are here to stay, but we need people too.

If this has got you thinking about your teams and how to make sure you are creating diverse, inclusive teams to bring these different perspectives, email hello@watchthisspace.uk