Hunter/gatherers in the 21st century [post 13/100]

Autocorrect failures. Bad recommendations. Offensive ‘related’ content. These are a few of our least favourite things (except in a schadenfreude kind of way). And the frustration we feel comes from the fact that they are all drawing conclusions – the wrong conclusions. It doesn’t help that, quite often, we aren’t given adequate means to correct them. A colleague sent me this doozy yesterday:

[Image: facebook fail]

Man dies in sauna, Facebook pushes beef jerky recipe. Kla$$y.

What is it about conclusions? Clearly we can’t yet get our algos to draw them reliably, so why do we persist? Maybe it’s like the urge we designers have to make artefacts, digital or physical, even when it’s not really necessary (more about that another time, maybe) – maybe as technologists, we have a similarly irrepressible urge to draw conclusions.

In design, you don’t feel like you’re done until there’s a thing you can point to. In technology, you don’t feel done until you’ve come to a conclusion. Maybe it stems from mathematics – whether the solution is a number or a formula, we keep pushing on until we’ve answered the question. Only in the maths world, the answer’s got to be right (or at least not obviously wrong) to count. In the tech world, conclusions regularly go wrong and our answer is, “it’s learning, it will get better.”

Maybe it is and maybe it will. But I can’t help but think we might be missing a trick by insisting on all this concluding. One of the things we humans are best at, and enjoy most, evolutionarily speaking, is pattern-spotting. I get that we want to make machines as good at it as we are. But we haven’t managed it yet. Why then do we insist on forcing the machine’s conclusion instead of facilitating the human’s?

Part of this might be the desire to harvest our attention for money – with advertising the prevailing revenue source in much of the digital world, maybe drawing conclusions is a way of ‘hunting’ for our eyeballs. Or maybe it’s because we’re so often in such a hurry to get a Minimum Viable Product out the door that we end up limiting our designs to a few key user journeys. But maybe, too, we are underestimating our own capacity to engage with and absorb complexity, to make connections more valuable than “you’ve bought some boots, now buy some more boots.”

Years ago, when I was at the BBC, we knocked together a prototype of a completely new way to navigate audiovisual content. Instead of categorising it into neat-looking boxes and buckets, or pushing like-for-like recommendations, we created a landscape that presented programmes clustered by similarity but freely navigable, letting the audience decide where they wanted to go next – opening the content up for exploration rather than focussing it on a point. We weren’t sure how this would be received – it seemed pretty radical. But when we put it in front of a few people, even older people, they got it right away. And, more importantly, they loved it.
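For the technically minded, here’s a minimal sketch in Python of that general shape. Everything in it is invented for illustration (the titles, the feature vectors, the cosine-similarity measure); the actual prototype was built quite differently. The point is the contrast: rather than ranking a top-N list to push at the viewer, you expose a neighbourhood graph over the whole catalogue and let them roam it.

```python
# A toy illustration (not the BBC prototype): link programmes into a
# freely navigable neighbourhood graph, rather than pushing a top-N list.
import numpy as np

# Hypothetical programmes, each with a made-up feature vector.
titles = ["Blue Planet", "Planet Earth", "Top Gear", "Horizon", "QI"]
features = np.array([
    [0.9, 0.1, 0.0],   # nature
    [0.8, 0.2, 0.1],   # nature
    [0.1, 0.9, 0.2],   # motoring
    [0.6, 0.1, 0.8],   # science
    [0.2, 0.3, 0.7],   # comedy / quiz
])

def neighbours(i, k=2):
    """Return the k programmes most similar to item i (cosine similarity)."""
    norms = np.linalg.norm(features, axis=1)
    sims = features @ features[i] / (norms * norms[i])
    sims[i] = -np.inf                      # never link an item to itself
    order = np.argsort(sims)[::-1][:k]     # most similar first
    return [titles[j] for j in order]

# A conventional recommender would stop here and push neighbours(i) at you.
# A navigable landscape instead exposes the whole graph and lets *you* roam:
graph = {titles[i]: neighbours(i) for i in range(len(titles))}
for title, near in graph.items():
    print(f"{title} -> {near}")
```

The difference from a conventional recommender is only in the last step: the same similarity sums get computed, but the whole graph stays on the table, so the conclusion is the viewer’s to draw.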

The experiment was put on hold as all hands were needed on the iPlayer, but every time I look at the recommendations pushed at me by Netflix, Amazon, Spotify, etc., I find myself longing for something like that thing we made back in 2006 – something that lets me draw the conclusions instead of drawing them for me.

Mulling this over on my way into town this morning (thanks, Spotify), I thought again of Simon’s piece from the other day. This time I’m not talking about technology making us dumber, per se. It’s technology failing to recognise that I’m smart, which just makes me sad. I want to design things that actually engage us, not things that make us passive participants. There’s something deeply satisfying about making things that satisfy others – it takes more effort and energy to design a system than a limited set of tasks (as previously discussed), but isn’t it better to make things that celebrate our uniquely human gifts? Watching telly is a long way from spearing gazelle and dodging lions, but the joy of the hunt is deep in our DNA.