Augmented Empathy

Nikolay Nemshilov
7 min read · Jan 30, 2019

I grew up deep in the Siberian woods. I miss the thick, heavy forests and the cold air. Out there, in the midst of winter, all the moisture in the air freezes out and falls. And so when you go out at night, and the new moon is a barely visible sickle, the whole sky is filled with stars. Bright and crisp and beautiful. The Milky Way looks like a photoshopped picture. It stretches from horizon to horizon like a giant rainbow made of stars.

My dad was an engineer, so I used to steal his optical level and go watch the sky and the moon at night. The thing was a pain in the butt to use, because it showed the picture upside down, which made it counterintuitive to navigate. But it had 50x magnification, so if I tried hard enough I could see the Andromeda Galaxy, Jupiter’s moons, and Saturn’s rings.

These days, when I think back to that time, what actually fascinates me is that we don’t really see the stars. Well, not in the way we normally think of “seeing”. The reality is that our brains spend their entire lives in the darkness of a skull. Brains themselves don’t see light, they don’t hear sound; brains don’t touch puppies either. Our brains are connected to a set of peripherals that we call a human body, and they just interpret signals from those.

The question one inevitably gets to is: “what is a peripheral, and where does it start and end?”. When I look at a picture of a distant galaxy on my computer screen, my brain “sees” a galaxy. But the original light of that galaxy went through quite a few things before it hit my brain. It travelled through the vastness of space, through telescope lenses mounted on a satellite somewhere in our planet’s orbit. Then it got digitised and sent as binary data over radio waves. Then it went through Photoshop on some artist’s computer. Then it lay on a hard drive in some data center for years. Then I downloaded it, and my monitor showed me the picture in bright colours. And then my eyes converted those colours into a set of electrical impulses that my brain understood as a pretty picture of a galaxy.

There are layers and layers of perception tooling between the original galaxy and my brain. The light went through the gravitational lenses of space, through human-made optics, through an artist’s imagination, through the Internet, through the meaty parts of my body. All of those are very imprecise things. How we can use all this crazy external instrumentation and still “see” the same thing in the end is a mystery.

Well, actually, we don’t “see” the same things. The system fails miserably when it comes to emotions and interests. Remember when you were in high school and had a crush on someone? I bet you remember there were other people who actually loathed the same person. Or remember the last time you had an argument with your significant other? The crazy thing is that you argued about the same thing; it’s just that somewhere in those layers of instrumentation between the “thing” and your prefrontal cortex, the idea diverged.

And so we yell, and get offended, and act like asses, and in the end ask the same frustrated question: “why can’t they see it the way I do?”. The problem is as old as humans themselves, and maybe older.

What if we could, though? We can wear glasses and contact lenses to see what other people see. We can use hearing aids to hear what other people hear. What if we could make a thing that helps us really understand what other people are trying to say? Something that interprets the meaning where the language fails?

Okay, we might not solve the North Korean crisis here. We might not even solve a spouse yelling “I hate your guts” when in reality they’re trying to say: “I love you so much, and I miss how things used to be”. But what if we aim at plain, everyday empathy? So, hear me out on this.

All people are a bit different when it comes to their personalities. Their temperament, the environment they grew up or live in, accepted norms, customs, and so on, all vary quite a bit.

I, for example, can be quite direct. I grew up in an environment with very low power distance, and I hate beating around the bush. Consider it a Siberian thing: you can’t keep your mouth open for too long out there, you’ll catch a cold. And so I have a tendency to say what I have to say and let others decide what they’re going to do about it. For the same reason, I don’t always enjoy someone getting overly eloquent with me either.

I’m not trying to be offensive or rude, just direct; in my mind I am doing the polite thing by eliminating ambiguity. This is a way of showing respect for other human beings: I value their time on this planet. Unfortunately, as you can imagine, this doesn’t always have the desired outcome. Some people want a hug, some chocolate, and a little bit of tongue before they can commit to communicating openly. Again, they’re not trying to be difficult; in their minds they’re being polite by following a protocol they deem appropriate in the situation at hand. Those approaches don’t always mix well.

I’m obviously exaggerating a bit, but the point I’m trying to make is that there are quite significant differences in communication styles between people. They are not good or bad, they just are. The problem is that some people can navigate this maze effectively and adjust their style accordingly, and some less so. Whatever the cause of a communication style mismatch, it is usually perceived as a lack of empathy on one side, and usually on both.

In a sense, it is the same problem as watching the stars at night. Just as our brains don’t see the light and have to rely on external instrumentation, our brains don’t see other people either. We don’t see the meaning behind what they say and do; we just make guesses based on our sensory inputs.

Just as we use glasses to see the world around us clearly, what if we could augment empathy with a computer? Imagine a dialogue between two different people in a pull request review on GitHub:

Person A says: this seems like a bug, do you mind fixing it?
Person B sees: my dear friend, how are you this {{timeOfDay}}? So, I have been admiring your work today, and couldn’t help but notice a small thing, which I believe is a mistake. How do you feel about patching this?

And then the reply could be translated back as well.

Person B says: oh hello friend. Thank you for bringing this to my attention! I can definitely see how you came to see this as a mistake, but I actually had to implement it this way because X. Also, I have talked to such and such, and we also need to take into account Y and Z.
Person A sees: sorry, no can do, because X. Also, such and such says we must keep in mind Y and Z :smiley_face:
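To make the idea concrete, here is a toy sketch of what such a translator’s interface could look like. Everything in it is hypothetical: the Style type, the phrase tables, and the translate() function are crude string rules standing in for what would really need a proper language model:

```typescript
// A toy sketch of the "empathy translator" above. All names here are
// invented for illustration; the string rules are stand-ins for a model.

type Style = "direct" | "elaborate";

// Canned framing each style expects around the core message.
const openers: Record<Style, string> = {
  direct: "",
  elaborate:
    "my dear friend, how are you today? I had a look at your work, and I noticed something: ",
};

const closers: Record<Style, string> = {
  direct: "",
  elaborate: " Thank you so much!",
};

// Strip the sender's framing to recover the core message,
// then re-wrap it in the framing the receiver prefers.
function translate(message: string, from: Style, to: Style): string {
  let core = message;
  if (openers[from]) core = core.replace(openers[from], "");
  if (closers[from]) core = core.replace(closers[from], "");
  return (openers[to] + core.trim() + closers[to]).trim();
}

// Person A writes directly; Person B sees the elaborate rendering.
console.log(
  translate("this seems like a bug, do you mind fixing it?", "direct", "elaborate")
);

// And the reverse: Person B's framing is stripped for Person A.
console.log(
  translate(
    "my dear friend, how are you today? I had a look at your work, and I noticed something: the tests are failing. Thank you so much!",
    "elaborate",
    "direct"
  )
); // -> "the tests are failing."
```

The string rules are obviously a joke, but the shape of the interface is the point: the message is written once, and each reader gets it rendered in their own preferred style.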

Okay, we might not be 100% there yet with the current state of technology. But translating English into English is not an impossible task. It’s just that nobody has done it properly yet, and we probably lack the means to mine good training data for it.

But think of the implications of a technology like this. It is not just about translating Ahole to Polite and back; it’s everything in between too. Imagine that everything you consume on the Internet is fine-tuned not just to your personality, but also to your current mood, in real time. You had a bad day and start skipping frames? Everything around you becomes extra polite and caring. You had a good day? Everything gets a bit of a “fuck yeah” kick to it too.

And that’s just the beginning. Mission-critical communications, where people’s lives are at stake, become clear. Education will finally transform from the ubiquitous “why the fuck can’t you understand?” attitude into something tailored to each individual human being and their level of competence/assertiveness. People with mental health issues could get real-time hints about other people’s feelings and stop being marginalised by society. And those are just a few things that pop to mind off the bat.

Just as mechanics and optics forever changed the way we interact with the external world, augmented empathy, applied at a massive scale, could truly transform what it means to be a human.

PS: You can start today. What if your favourite social media app gave you a hint about how popular or unpopular the message you’re typing is going to be? Wouldn’t it be great if it could say: “don’t post yet another picture of your baby, your readers are sick of it; post a picture of a cat instead”. Or take one of those support chat apps you see on every other company website these days: they could feed customers keywords that help resolve their issues; support reps always react to phrases like “frustrated” or “I still want this to be resolved”. Your company chat app could give an employee a warning that certain phrases tend to derail work and get everyone reeling to no end. There are useful things we can build right now with what we have.
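As a tiny illustration of that last idea, here is a minimal sketch of the chat-app warning, assuming a hand-maintained phrase list. The derailingPhrases list and the checkDraft() helper are invented for the example; a real tool would learn such phrases from the team’s own message history:

```typescript
// A toy version of the chat-app "trigger warning" idea from the PS.
// The phrase list is made up; a real tool would mine it from history.

const derailingPhrases = [
  "why can't you just",
  "as i already said",
  "obviously",
  "that will never work",
];

// Returns a warning for the draft message, or null if it looks safe.
function checkDraft(draft: string): string | null {
  const lower = draft.toLowerCase();
  const hit = derailingPhrases.find((phrase) => lower.includes(phrase));
  return hit
    ? `Heads up: "${hit}" tends to derail threads here. Rephrase?`
    : null;
}

console.log(checkDraft("Obviously this approach is wrong."));
// -> Heads up: "obviously" tends to derail threads here. Rephrase?
console.log(checkDraft("Could we explore a different approach?"));
// -> null
```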


Nikolay Nemshilov

Engineer, obsessive maker, and occasional speaker. CTO at @shortlyster