Ethical technologist Kriti Sharma talks about AI, the need to create ethical frameworks around data, the inherent biases in datasets, and why technologists need to be open to ideas from philosophy and the social sciences.
“I’m a trained computer scientist and all my degrees and all my jobs have been in computing. But I went through my entire education process and much of my career without ever hearing about ethics, even though technology impacts society in so many ways,” says Kriti Sharma, one of the United Nations Young Leaders for the Sustainable Development Goals. Kriti, who is also the founder of AI for Good, a social enterprise that uses artificial intelligence to tackle social issues, will be delivering a keynote speech at CILIP Conference 2019.
Her experience in the tech world has convinced her that it can’t exist in a bubble: “We should be teaching philosophy and social sciences to technologists from the beginning, because this technology is so powerful and impacts the world in many different ways.
I’ve had to re-educate myself. I’ve had to really understand what non-geek speak looks like. And I think it’s a responsibility of the technology industry to be more open and explain what we do a lot better.”
Powerful as it is, one of the problems is that artificial intelligence is not aimed at the worthiest of the world’s problems. “Today, if you look at the very successful AI applications at scale they are in the field of making people click more ads. There’s a lot about driving digital addiction, where content is designed to get your attention. And it’s making people buy more products saying: ‘If you’re interested in this then maybe you would like this’ and ‘people like you bought this’.
“But there’s an opportunity to solve social justice issues using this technology. That’s where I would love to see more action going forward, beyond these fields, to see society and the technology industry mature to a point where we would focus on wider problems. This technology could be used to drive behaviour change, changing our pathways to the information that we are exposed to. This combination of behavioural science and machine learning is a real example of knowledge being power. So, if you have that sort of data, it’s critical to create ethical frameworks around it. And I don’t want to look back in three years and think that the best technology of our time was used to make people click more ads.”
“A big part of the problem is that the people who create this technology lack diversity. That manifests itself in the technology and also the data.”
One manifestation is how male and female personas are given to particular AI assistants.
“A lot of these assistants are given female personalities if they’re doing things like ordering the shopping or playing your favourite music – Siri or Alexa – rather than Ross the robot lawyer, where a male persona is making important decisions.”
It’s a pervasive problem, in which children shout demands at female bots and listen to male ones. And the problem goes deeper.
“Let’s say you were building something like Siri and you trained it on Wikipedia to build its knowledge. If you do that, it’s going to learn from a source where only 17 per cent of notable people’s biographies are of women.
“The point is that if the data is skewed, the machine has no better way of knowing. Another example is the MIT study of facial recognition systems, which found a 0.8 per cent error rate for lighter-skinned men and a 34.7 per cent error rate for darker-skinned women. The systems failed to recognise Michelle Obama and Oprah Winfrey. It’s a big issue because this technology is being used in policing and criminal justice. So there are a lot of biases because AI is trained on a certain kind of data set.”
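The disparity Kriti cites comes from evaluating the same system separately on each demographic group rather than reporting a single overall accuracy. A minimal sketch of that kind of disaggregated audit is below; the group labels and records are invented for illustration and are not the MIT study’s data or code.

```python
# Hypothetical audit: compute the error rate of a classifier per
# demographic group, instead of one aggregate accuracy figure.

def error_rate_by_group(records):
    """records: list of (group, predicted_label, actual_label) tuples."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    # Error rate = misclassifications / total examples, per group.
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Invented sample: a gender classifier's outputs on four faces.
sample = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "male", "female"),  # misclassified
    ("darker-skinned women", "female", "female"),
]
rates = error_rate_by_group(sample)
```

An overall error rate on this sample would be 25 per cent, which hides the fact that every error falls on one group; reporting `rates` per group is what makes the skew visible.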
Misleading datasets and poor representation can produce misguided artificial intelligence. The positive potential of AI is offset, even negated, by its ability to amplify old prejudices. AI for Good runs projects in which machines are built to provide friendlier, less judgemental sources of information than their human counterparts.
“In one of my projects in India we are working on sexual and reproductive health information for young people. We’re using AI to provide access to the right information, to the right people, at the right time. Young people have historically struggled to get vital information on the subject. It’s very awkward, and sometimes socially unacceptable, to seek it out, but algorithms, designed with the right controls and the right experts in the room, and with a safety-first approach, can bridge that gap between young people and information.
“Another very interesting example is a bot we launched in November, called rAInbow, to help victims of domestic violence in South Africa or those at risk of it. Historically victims would have to call a helpline and talk to a human.
“There were major issues around stigma, judgement, victim blaming and helplines not being open all the time – and the fact that victims or survivors wanted to take action at their own pace, in their own time. So we built this non-judgemental machine to give them access to the information. It doesn’t have empathy, but it is designed to be empathetic, and the result is that we had over 150,000 conversations in the 90 days from launch. It’s really working as a system that the people suffering actually want, rather than something that is given to them. It’s not a replacement for human advisers; it is there to augment their capabilities.”
The evidence suggests that engagement with people and issues beyond technology is not just a nice idea, it is a necessity.
“Let me be very clear about it,” Kriti says. “The answer to solving any technical or business problem is not just throwing a bunch of data scientists with clever degrees and PhDs into a room. You need to work with domain experts, and that’s where a big opportunity for re-skilling the workforce lies – one that is quite often overlooked. When people look at the next revolution and how jobs will change, they often say ‘we need to create more data scientists’ – yes, we need to do that, but I don’t think it’s the only answer. There’s a huge opportunity to re-skill and improve collaboration, because we need that domain expertise. You can’t easily train a machine learning system without it.”
Having said that, she believes that professionals should brace themselves for the arrival of data science. “What Google did with information and search relied on technology that was very exclusive at the time, very expensive to build. What we’re seeing now is an extrapolation of that, a democratisation of it. With advances in technology this is all more easily available, and AI is already being used in many ways in our daily lives and in our business lives.
“Some companies are starting to release ethical AI frameworks on how to deal with this, but I think every company should have a chief ethics officer in place: a person who is not a lawyer or a compliance professional, but someone who works closely with the people creating the technology, and with anthropologists and social scientists, to find the optimal solution. It’s not just a technical problem.”
Book for CILIP Conference 2019
CILIP Conference takes place in Manchester on 3 & 4 July. Join Kriti and other keynotes, Liz Jolly, Patrick Lambe,
Hong-Anh Nguyen and Aat Vos at this year's conference.
Full information on this year's programme and booking details are available at the CILIP Conference 2019 website.
Book now for early bird deals