Strategist, Speaker, Designer, Instigator


Shared realities: the ontology of tech

Over the past few weeks, I’ve had cause to do some thinking about voice assistant technology, specifically what kind of ‘personality’ these things should have. When the topic was first raised, my instinctive response was that it was the wrong question. It took some further mulling and conversation to work out exactly why.

TL;DR: in order for trust – and, by extension, a positive relationship – to exist, there must be a shared understanding of reality. Talking about personality doesn’t address the fact that users of voice assistants don’t really know what they can and can’t do, let alone how they do it. Personality characteristics are to some degree moot; it’s ontological characteristics that we need to focus on first.

When asked in an interview last month what kind of ‘personality’ a (technology-driven) assistant should have, I said:

“Helpful, open, transparent, flexible, responsive… but these are less personality characteristics; these are more like ontological characteristics. There is a certain honesty about it: this is what I am, I am a piece of technology, what would you like?”

In a follow-up a few weeks later, I was pointed to this article by Tracey Follows. In it, she also references ontology as a priority when thinking about technology. I was asked: “Why ontology? Why now?” I hadn’t read Tracey’s piece prior to my interview, but I found myself nodding along. This phrase in particular resonated:

“Why do we want to anthropomorphise these machines? Will that really make for a better human-machine relationship?”


For us humans, trust isn’t based on personality, though personality can help to deepen or augment it. The foundation of trust is a shared understanding of reality. For a while in the tech world, context was king; now it’s clarity. That’s because there used to be less ambiguity around what technology was delivering to people – it was largely driven by content and simple interactions, and the tech was designed to organise or simplify that content and those tasks so that humans could engage more easily or efficiently. Contextualisation was the key to doing this well. Now we increasingly rely on technology to do more complex, subtler things: making decisions on our behalf by sorting, sifting and curating content; predicting our journeys, needs and desires; learning how we live and helping us manage our homes. That starts to get pretty slippery, and humans aren’t comfortable when we don’t understand what we’re dealing with. Anthropologically, it’s one of the underlying drivers of tribalism and xenophobia – if we can’t understand the motivations of the Other, then we can’t trust them.

An analogy: when we meet other people, there are a whole host of cues that tell us about their motivations, culture, capabilities, and so forth. If a person is to be your assistant, you can tell a lot about what they’re capable of through hundreds of subtle cues like physical fitness, behaviour, speech patterns and so on, and beyond that you can ask them about their experience, comfort level or capabilities with respect to various tasks. This baseline reality will be the foundation of your relationship. Based on this shared understanding, you negotiate a set of expectations and then you know where you stand. Provided those expectations are met, the relationship will be a positive one. Personality is not irrelevant but it’s also not the heart of the matter: if the other party can’t do what you need them to do, or says one thing and does another, then it doesn’t really matter how well you get on.

Bringing voice assistant tech into the home or otherwise into personal life is challenging because there’s no shared reality to start from. Even in the industry, most of us only have a fuzzy idea of what the tech can and can’t do. And in the public at large, voice assistants are starting to creep people out – it turns out we aren’t comfortable with tech listening to everything we say and do, not least because we don’t really know how it works or what it’s doing with what it’s hearing. The only way to find out about its capabilities is trial and error, or reading instructions, newsletters and support pages. Nobody ever reads the manual, and I’d be surprised if Alexa’s emails have a high read rate. The ontological question, “what is this thing, really?” needs to be addressed before we can even begin to ask ourselves a relationship-building question like, “what can this thing be or do for me?”

When a user understands what a thing can and can’t do, they can make better decisions about the role they want it to play for them. This is what I’d like to see technology companies do better – assist the user in negotiating the role of the object by making its capabilities – and limitations – more transparent.

If we focus on that, maybe we’ll discover that anthropomorphisation is moot. Or maybe there should be different personalities for different roles – you don’t necessarily want a nanny that talks like a party planner, or vice versa. Or maybe you do.

While I do understand that companies that are trying to make connections with customers will be drawn to the idea of personality as a solution to their problems, I think the more pressing issue is expectation management, and that’s about ontological characteristics. The more superficial questions of formality, style and so forth are secondary and should support a combination of the ontology and a user’s decisions. That’s a solid basis for a positive relationship, and a key to both ethical and commercial sustainability.

Introducing Superventions

Over the past several years, I’ve been engaged with the startup community on a bunch of levels – mentor, interim C suite, sounding board, product/experience/design consultant. At the same time I’ve been honing and documenting the Superhuman toolkit, which contains frameworks for addressing a range of issues that most (if not all) businesses face at… Continue Reading

Messy relationships [living with AI]

Artificial Intelligence is all over the news these days, and now even the Big Boys are talking about how important it is to consider the social impact. Obviously I agree, but I think there’s still a big fat chunk of that impact that nobody’s really considering yet. Back in April, I helped facilitate a couple… Continue Reading

Sharing, shmaring [part 2/2]

In my last post, I went on about how the ‘sharing economy’ is a misnomer that distracts from what’s really going on. This time, I’m going to talk about the impact that distraction can have. Businesses that enable peer-to-peer commerce can have a huge positive impact, as I wrote last time. They enable people to… Continue Reading

Sharing, shmaring [part 1/2]

Happy New Year, people. I’ve got a backlog of partially-written pieces from 2015 that I plan to foist upon you in the coming months, on a somewhat more realistic schedule than the long-abandoned ‘100 posts in 100 days’. They’re likely to be mostly long reads, so settle in and make yourself comfortable. —— I’m generally… Continue Reading

Little mysteries [post 47/100]

This morning I updated my iPhone to the latest version of iOS. About an hour later, I left for the airport. My phone was in my jacket pocket, as usual. My headphones were in and I was listening to music on Spotify, as usual. Only this time everything got a bit weird. First, the music… Continue Reading

Conversing with ghosts [post 45/100]

Maybe I’m a little old-fashioned sometimes. A friend who’s got teenage daughters tells me that these days it’s considered ok (by some) to carry on a conversation while fiddling with one’s mobile. This still is definitely not ok in my circles, and no matter how much I apologise I always feel terrible when something comes… Continue Reading


Below the Surface [Picnic, Rio de Janeiro]

The prioritization of growth above all else, coupled with the belief that technology can (or even should) solve all our problems, has led us to some pretty unsustainable places – individually, socially, economically and politically. Taking a bit of time and digging deeper to understand the less obvious patterns and movements at work in our world can help us to find more sustainable, innovative solutions to the challenges we face.

Being Human in a ‘Smartified’ World [IoT Asia, Singapore]

The Internet of Things has the potential to bring humans closer to each other and to the places where we live and visit, yet a lot of the projects undertaken, especially on the public side, don’t take humans into account much at all, except as something to be managed. To get the full benefit of the technology we are embedding in the physical world, we need to think carefully about how it enhances or detracts from the experience of being there.

Tiny Gods & False Idols [Data Natives, Berlin]

We have an expectation management problem with technology – there is a big gap between what we expect, hope and believe and what it can actually do. In order to not fall into that gap, we must put people first, rather than machines. Only then can we make technology that’s holistic, helpful and humble instead of messy, mysterious and malicious.