La la la la la can’t hear you! (or, when personalisation really sucks) [post 9/100]

There’s a bit of synchronicity of ideas going on today. A friend and co-conspirator in Superhuman is working on an interaction model that’s heavily driven by personalisation. And this morning, I was linked (thanks, Tom!) to an article about where the Nest experience went terribly wrong for Kara Pernice, one of its (former) users. These two things share a critical common element (and yes, it’s about control again, but in a different way): the balance between who the machine thinks I am and who (I think) I actually am.

I feel certain I’ll be writing about this again in the coming 90 days or so, but let’s start here. One of the issues Kara had with the Nest is that once it had ‘learnt’ her behaviour patterns and deduced her needs, she was unable to further influence it, either to correct it or just to make it warm in her house for crying out loud. This is particularly bad when we’re talking about home comfort, but the problem’s been around a long time. Take Amazon, for example: it will base its recommendations for you on what you’ve bought. On the face of it, that sounds fine. But what if you’ve just bought a bunch of presents for people whose taste you don’t share? Now Amazon is going to be suggesting Things Related to Hang Gliding (and abseiling, and base jumping) because you got a book for your friend’s extreme-sports-junkie boyfriend. There used to be an interface somewhere on the Amazon website where you could explicitly indicate what you did and did not want the algorithm to use for your recommendations. I just spent 5 minutes or so (on the Amazon website) trying to find it and couldn’t, so I think we can call that hidden.
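Just to make that concrete: here’s a rough sketch (in Python, with names I’ve made up on the spot, nothing to do with how Amazon actually works) of what ‘let me say what I’m judged on’ could look like. Purchases can be flagged as gifts, and flagged purchases simply never feed the profile the recommendations are drawn from.

```python
from dataclasses import dataclass, field

@dataclass
class Purchase:
    title: str
    category: str
    use_for_recommendations: bool = True  # the user can switch this off, e.g. for gifts

@dataclass
class Profile:
    purchases: list = field(default_factory=list)

    def mark_as_gift(self, title: str) -> None:
        """Explicitly exclude a purchase from the algorithm's picture of me."""
        for p in self.purchases:
            if p.title == title:
                p.use_for_recommendations = False

    def interests(self) -> set:
        """Only the purchases I've chosen to be judged on shape the recommendations."""
        return {p.category for p in self.purchases if p.use_for_recommendations}


profile = Profile(purchases=[
    Purchase("Hang Gliding for Beginners", "extreme sports"),  # a gift, not my taste
    Purchase("A Book I Actually Wanted", "design"),
])
profile.mark_as_gift("Hang Gliding for Beginners")
print(profile.interests())  # {'design'}: no hang gliding, abseiling or base jumping here
```

The point isn’t the code, obviously; it’s that the control sits with the user, visibly, rather than buried somewhere we can’t find it.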

[update: Simon just sent me a link to the Amazon interface I referenced, which he found through a Google search. He said, “Never use sites to find things, always use Google. It’s nearly always infinitely better.” Which, while probably true, is infinitely depressing.]

But why should it be hidden? Why should I not be able to say what I do and don’t want to be judged on? Why should I not be able to adjust the algorithm’s picture of me?

At least in this example, it would be possible to logically figure out why Amazon made the recommendation it did, but that’s not always the case (another issue Kara listed – why doesn’t my Nest want me to be warm?). Especially in the media world – Netflix, Spotify, et al – you’re almost never told why you’re being recommended a particular film/song/artist/show. Sometimes you can figure it out, sometimes you can’t. And the only interface I can think of where you can easily give feedback on the recommendation is Last.fm’s, where you can either skip a track (‘I don’t feel like listening to this right now’) or skip and block (‘I hate this track/artist, don’t play it again’).
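For comparison, here’s an equally rough sketch (again Python, again invented names, not Last.fm’s actual API) of why those two signals are different: a skip is a one-off ‘not right now’ and changes nothing, while skip-and-block permanently removes the artist from whatever gets recommended next.

```python
import random

class RecommendationQueue:
    def __init__(self, tracks):
        self.tracks = list(tracks)    # (title, artist) pairs
        self.blocked_artists = set()  # 'I hate this, never again'

    def skip(self, track):
        """'I don't feel like listening to this right now': no lasting effect."""
        pass  # the track stays eligible for future sessions

    def skip_and_block(self, track):
        """'Don't play this again': permanently excluded from recommendations."""
        _, artist = track
        self.blocked_artists.add(artist)

    def next_track(self):
        candidates = [t for t in self.tracks if t[1] not in self.blocked_artists]
        return random.choice(candidates) if candidates else None


queue = RecommendationQueue([("Song A", "Artist X"), ("Song B", "Artist Y")])
queue.skip_and_block(("Song A", "Artist X"))
print(queue.next_track())  # only Artist Y's tracks can come up now
```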

This is another example of algorithms drawing uncomfortable conclusions. And yes, I know, algos are getting better all the time, but are they getting better fast enough that we should be ignoring this design problem? As long as we’re looking at screens that can convey a lot of information, there’s still a chance for us to figure out what’s happening in the background. But with a thermostat, or an object that has no screen at all, behaviour can get mysterious quite quickly. And especially in our homes, I’m not sure mystery is what we’re after.

To reiterate what I ended with yesterday, I’d like to see what happens if we stop chasing the holy grail of perfect conclusions – or at least stop basing our product experiences so rigidly on the conclusions drawn – and invent new, intuitive ways for the user to engage with the technology that drives their experience, their environment, their home.