Messy relationships [living with AI]

Artificial Intelligence is all over the news these days, and now even the Big Boys are talking about how important it is to consider the social impact. Obviously I agree, but I think there’s still a big fat chunk of that impact that nobody’s really considering yet.

Back in April, I helped facilitate a couple of workshops at the Royal Society on the topic of AI in the UK, exploring how we can ensure we remain leaders in the field far into the future. In our sessions I posed a question, and the responses showed me that even amongst the best thinkers in AI in the UK, we’re missing a rather big trick. The question was (I thought) a simple one: how will we work together with AIs? How will we manage hybrid teams of AI and human workers? It didn’t seem to be on anyone’s radar.

We have a tendency to think about the impact of Artificial Intelligence in rather absolute terms: what will happen when machines take over our jobs? How will we earn money, or measure our value as individuals? These are fair and important questions, but they all reflect an end state we won’t reach for a while yet, and they ignore the much messier in-between states we’ll have to pass through first.

I think the more immediate concern (and a fiendishly complex one) is how we will interact productively with AIs as they continue to proliferate in our workplaces and homes. How will we develop a common understanding of what they’re up to and why they do what they do? If we can’t, then how can we hope to live and work effectively and happily with these assistants, no matter how powerful or useful they may be?

For example, let’s say you’re a Sales Manager at a bank. Today, you might have 5 (human) Loan Officers working for you. In a year’s time, you might have 4 direct reports: 2 AIs and 2 humans. How will you manage them? How will you measure their performance and balance their workloads? How will you know that the AIs are making good decisions? Will you need to be a Data Scientist to do that? Or will you just have to trust blindly that the AIs are doing the right thing? That doesn’t seem like a sustainable way forward.

This isn’t even a particularly extreme example. The problem is that the way we develop AIs today does not allow for much in the way of human interrogation. Assistants like Alexa, Cortana or Siri are designed to interact with humans at a certain level, but not (usually) to explain why they give us the answers or reach the conclusions they do. And as algorithms are designed to analyse ever more of the world’s data and respond to ever more of our questions and needs, we need to be ever more conscious of the ‘whys’. That awareness will help both us and the AIs to develop positively.
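To make the ‘whys’ concrete, here is a minimal sketch in Python of what an interrogable decision-maker could look like for the bank scenario above. Everything in it is hypothetical: the feature names, weights and applicant values are invented for illustration, and a real loan model would be vastly more complex. The point is the explain step, which gives a human-readable account of what pushed a decision which way.

```python
# A toy, hypothetical loan-scoring model: the feature names, weights and
# applicant values below are invented for illustration, not drawn from any
# real system.
import numpy as np

FEATURES = ["income", "debt_ratio", "years_at_job"]  # hypothetical inputs
WEIGHTS = np.array([0.8, -1.5, 0.4])                 # hypothetical learned weights
BIAS = -0.2

def approval_probability(applicant: np.ndarray) -> float:
    """Logistic model: squash the weighted sum into a 0-1 probability."""
    z = WEIGHTS @ applicant + BIAS
    return 1.0 / (1.0 + np.exp(-z))

def explain(applicant: np.ndarray) -> None:
    """Report each feature's contribution to the decision in plain terms."""
    for name, contribution in zip(FEATURES, WEIGHTS * applicant):
        direction = "towards approval" if contribution > 0 else "towards rejection"
        print(f"  {name}: {contribution:+.2f} ({direction})")

applicant = np.array([1.2, 0.9, 0.5])  # standardised toy values
print(f"Approval probability: {approval_probability(applicant):.0%}")
explain(applicant)
```

If even a far more sophisticated model exposed something like this, our Sales Manager wouldn’t need to be a Data Scientist to supervise it; they could ask it why, much as they’d ask a human Loan Officer.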

There are lots of projects underway at the moment around Machine Learning and human interaction with data via AI. These projects are fascinating, and some could be hugely positive and useful for humanity. But they are also fraught with all the complex problems that any human technological advancement faces: biases (more on this next week), expectations, the limits of our individual visions. That’s perfectly ok, but only if we build in the means to check in with our inventions, to understand in human terms what they’re doing and how they’re drawing their conclusions.

The possibilities of AI will remain dystopian as well as utopian so long as we have no means of understanding, communicating and negotiating with these systems. I’ve written before about our unrealistic expectations of AI; as we continue to embed these technologies in our lives, we need to take steps to manage our own expectations and correct our own mistakes. Because we are certain to make lots of mistakes as we advance along this path – and that’s ok, but only if we learn from them.