Strategist, Speaker, Designer, Instigator

Chicken Little goes to SXSW* [post 21/100]

Have you heard about the anti-robot protest at SXSW? I read about it this morning and it just made me sad. On the one hand, it’s great that people care about how technology is evolving; on the other hand, it’s sad that the thinking is still so simplistic and binary. Down the centuries, every new advance has been condemned, from Galileo’s telescope and the heliocentric case he made with it, through television and video games, to (now) robots and AI. And while it is certainly true that many technologies can be applied in ways that harm mankind, it’s also true that many of those same technologies have enriched our lives immeasurably. Technology is neither good nor evil, but it has the potential to be both, depending on how we humans apply it.

Put it this way: a fork is technology, albeit primitive. If I come over there and stab you in the eye with a fork, does that mean it’s a bad fork? No, it means I’m a dick. It’s much the same if I program a robot, or an algorithm, to be harmful (intentionally or unintentionally), or even just annoying: it’s not the technology’s fault, it’s mine.

News flash: technology is not going to stop advancing. We humans are inherently curious, and that means we’re going to keep exploring and keep inventing new and better ways to do things. Those things will be both good and bad, and there will always be arguments about which is which. And sometimes people are just plain irresponsible and don’t think about the consequences of what they’re making. Stephen Hawking’s op-ed in the Independent, which some seem to be using as a call to anti-robot arms, wasn’t warning against AI per se; it was warning against the thoughtless development of AI:

“So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here – we’ll leave the lights on’? Probably not – but this is more or less what is happening with AI.”

The point here is not that technology is evil and bound to destroy us; the point is that we are not thinking enough about the potential consequences of the things we are actively trying to invent. I wrote a few weeks ago that I think we’ve still got quite a long road ahead before we reach true AI, but that doesn’t mean we should be thoughtless about it, or about the technologies we already have to hand.

The problem as I see it is that we are so in love with technology that we sometimes forget the human impact it has. We forget to think about the outcomes, or choose not to because it makes things more difficult. I spend a lot of my time ranting about how we need to put people first, be more thoughtful in design, put technology in the service of humans and not the other way around. But I’m not saying “NO TO ROBOTS” – because that would be ridiculous.

Robots and algorithms are already a fundamental part of our lives in many positive ways: they build our cars, sort our mail, guide us while we’re driving, fly planes safely through tricky conditions. Do we want to give up all that progress too? I don’t think so. The question isn’t a binary yes/no, technology as good or evil; the real question is much more challenging: how are we going to adapt and evolve so that we make better decisions about technology?
*Chicken Little == Henny Penny. If you don’t know the story, here you go.