Can Interaction Design Save the Internet of Things?

There was a lot of talk about the Internet of Things this year at SXSW Interactive 2015. And yes, there were a lot of cool Internet-things.

But as I found myself repeatedly distracted, disoriented, frustrated, and dumbfounded by my most essential of Internet-things — my iPhone — I began to feel jaded. I’ll be the first to admit that I can no longer imagine coordinating travel, navigating a strange city, or attending an event like SXSW without my constant companion. But over and over again, as connected as I was at all times, I felt disconnected from the reality of what was happening around me. I was eschewing genuine human experience in favor of an Internet-powered experience.

The SXSW app delivered my event schedule, maps of buildings, and a constant stream of beacon-enabled location-aware notifications. Maps navigated me around an unfamiliar city. Twitter gave me a forum to share some of the things I’d learned. Lyft and Uber made sure I always had a ride. Camera for photos, Evernote for notes, Mail to keep up with happenings at work, Messages to keep up with happenings at home. The US Airways app to coordinate my travel (don’t get me started). The list goes on, but I won’t. Individually, each of these things provides me immense value and convenience. Taken together, however, they draw me out of my human experience, and further into a digital one.

ImageThink did a nice job at SXSW 2015 of taking visual notes in some sessions that everyone could share.

This was despite my best efforts. Before every session, workshop, or panel that I attended, I vowed to keep my phone firmly tucked away in order to fully listen, engage, and experience. I took "lean" notes, and only pulled out the camera when something was of particular interest. Nevertheless, I ended up being distracted (and annoyed) by people sitting in sessions who never once looked up from their Internet-things. By the sea of phones popping up like meerkats to snap a photo as one slide transitioned to the next. I basked in my slight superiority. But I knew that my steely resolve was not going to fix the larger problem.

The Death of the User Interface

We keep making more and more Internet-things, but what we desperately need are new ways to assimilate them into our lives that don’t require us to give up so much of what’s important. A way to have our cake and eat it, too — to reap the benefits of these amazing technologies without paying the price of our human experience.

Time and again I noticed how much more helpful my human interactions were than my digital ones. A brief conversation with someone in line would reveal several interesting events that I had somehow missed in my efforts to meticulously curate my schedule with the SXSW app. Someone pointing and saying “right down there” saved me minutes of brow-furrowed context switching from Maps to Reality and back again, all whilst slowly turning in circles in an attempt to calibrate my mental compass. In a split second, another person can understand the context of my situation and anticipate my needs. They can tap into their databases and make connections that I can’t.

Human interactions are subtle, nuanced, and instantaneous — exactly the type of interactions we need in a world of connected things.

Golden Krishna summed it up nicely with a reading from his book “The Best Interface is No Interface.” It’s not a new concept, but one that’s gaining new urgency. The screen-based, GUI world that we’ve been living in for the past few decades just doesn’t work well when we’re constantly connected. The interface is an obstacle between a person and the result they desire. He gave some great examples — an app that unlocks your car had some thirteen discrete steps to achieve a task that had three steps without the app. An egregious and obvious example, but a potent reminder. Of course the better solution is a door that opens when it detects your key or phone is near — one step. Bonus points to the Ford Escape team, who came up with the “wave your foot while your hands are full” solution which pops open the rear hatch. That barely even counts as a step.
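
The "one step" car door can be sketched in a few lines. This is purely illustrative, with hypothetical names throughout: a real system would use an authenticated BLE or NFC handshake rather than raw signal strength, but the shape of the logic is the same — the car watches for a paired device, and the person never opens an app.

```python
# Sketch of the "zero-step" unlock: the car listens for advertisements
# from paired devices and unlocks when one is close enough.
# Threshold and device IDs are made up for illustration.

UNLOCK_RSSI_THRESHOLD = -60  # dBm; stronger than this means "standing at the door"

def should_unlock(paired_device_ids, nearby_advertisements):
    """nearby_advertisements: list of (device_id, rssi_dbm) tuples."""
    for device_id, rssi in nearby_advertisements:
        if device_id in paired_device_ids and rssi > UNLOCK_RSSI_THRESHOLD:
            return True
    return False

# The owner walks up; a stranger's phone nearby is ignored.
print(should_unlock({"owners-phone"}, [("owners-phone", -48), ("stranger", -40)]))  # → True
```

The interface disappears: the only "input" is the person's presence.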

Excerpt from Golden Krishna’s “The Best Interface is No Interface” that captures too large a portion of my experience at SXSW.

A New Design Approach

For the past ten years or so, the question has been "How can we make this interactive?" Because touchscreens, mobile apps, and gesture-based interaction were cool, right? For a little while they were, for some. The theme of the next ten years will invariably be "How can we make this less interactive?" Let the interactions happen behind the scenes: device to device, databases, sensors, learning algorithms, and so on. Replace user inputs with machine inputs. Let humans do their human thing.

The benefits will be many. We’ll spend less time staring slack-jawed at our screens, and more time engaging with reality. Meanwhile in the background, our connected devices and services will get better, more consistent inputs than they ever could have hoped to get from us. As Astro Teller from Google X put it so eloquently in his Keynote, explaining why they took the steering wheel out of their autonomous car: “The assumption that humans could be a reliable back-up for the system was a total fallacy.”

Google X “Captain of Moonshots” Astro Teller giving the keynote at SXSW Interactive.

Our Internet-things can learn our patterns and anticipate our needs, surfacing relevant information and choices based on understanding many facets of our situation. They can learn from each other, adjusting to larger patterns of use and becoming even more dialed in. The question is, what needs to change about our design approach to make the most of this?
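
A minimal sketch of "learn our patterns and anticipate our needs" might be nothing more than counting what a person does in a given context and surfacing the most likely action instead of asking. Everything here is hypothetical and deliberately toy-sized; a real system would draw on far richer context (location, calendar, sensor data) and a real learning model.

```python
from collections import Counter, defaultdict

class HabitModel:
    """Toy anticipation: remember actions by hour, suggest the most common one."""

    def __init__(self):
        self.observations = defaultdict(Counter)  # hour -> Counter of actions

    def observe(self, hour, action):
        self.observations[hour][action] += 1

    def anticipate(self, hour):
        counts = self.observations[hour]
        return counts.most_common(1)[0][0] if counts else None

model = HabitModel()
for _ in range(5):
    model.observe(7, "start coffee maker")
model.observe(7, "check weather")

# The device acts on the pattern; the person just gets coffee.
print(model.anticipate(7))  # → start coffee maker
```

The point is the direction of the interaction: the machine consumes its own inputs and the person receives a result, not a prompt.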

Assume Humanity

Words and the labels we use for things often have the unintended consequence of seeding assumptions without our even realizing it. Take “User” for example. Don Norman, Facebook, and many others have advocated that we replace this word with “Human” or “Person.” Their chief complaint has been that User dehumanizes those we design for, making it more difficult to empathize with them, and to understand the full context of their thoughts, attitudes, circumstances, and environments outside of our proposed design.

I would take this further: the term "User" implies something to be used, as well as a person's acquiescence to using it. We assume that a User is properly motivated to interact with something to achieve their goal. A User wants to perform a task with the fewest clicks or taps possible.

A Person, on the other hand, just wants the result. I want my thermostat to be programmed. I don’t want to be able to program it with fewer steps. Take a look at common User stories: “As a User, I want to be able to edit my payment info at any time.” No, I don’t. I never WANT to edit my payment info.

Thinking in terms of people rather than users is a subtle but effective hack to challenge the assumption of an interface.

From IoT to Augmented Reality

While we’re at it, let’s think about what we’ve been calling the “Internet of Things.” Is that really what it’s all about? I’ve actually heard debates on whether services like Uber or Lyft “count” as part of the IoT. Because it has to be a thing, of course.

If a label is needed at all, I prefer one I heard a few times at SXSW — Augmented Humanity. This gets to the heart of it for me and asks the right question — How can some aspect of human experience be improved now that all of these connections are possible?

A final note on challenging assumptions — what Golden Krishna called “The Lazy Rectangle.” Often we make our first design mistake as soon as the marker hits the whiteboard — drawing a box (thereby assuming an interface). It’s surprising how deftly assumptions like these can undermine our attempts to reimagine how people might interact with a product, service, or system.

New Tools for New Times

For years we’ve thought in wireframes, flows, pixels, and interaction patterns. These are all ways to make less painful interfaces. To design for human augmentation, we need to expand our understanding of the tools at our disposal. We have GPS, accelerometers, gyroscopes, databases, learning algorithms, Bluetooth, NFC, sensors, beacons, and countless other new and emerging technologies with which to craft an experience.

We also need to start asking different questions: What can we understand about a particular situation in order to respond appropriately? How can we circumvent human input? How can we learn from individual or group patterns? What technologies are becoming inexpensive and small enough to solve this problem in a new way?

Of course, there will always be times when we need to provide some sort of feedback from the device to the person. Can we do this without an interface? If not, can we at least understand enough about the situation and surface highly relevant information or choices? As of today, this is the most challenging part. In a world of flashing lights, buzzes, beeps, dings, and whistles — how can we communicate seamlessly and efficiently with our people?

Understanding new and emerging technologies is an increasingly important part of a designer’s toolbox, and not just in the service of building slicker interfaces. These technologies are the building blocks we will use to design for human augmentation.

Exhibit A: Brain Candy

At the SXSW Robot Petting Zoo (yes, that was a thing), I encountered a thought-controlled drone that made me realize how close we might be to fully intuitive feedback loops. The way it works is simple right now: strap on a headset and think "forward" (or "horse," it doesn’t matter). The device takes an EEG snapshot of your brain activity at that moment. Then, that snapshot of your brain activity associated with that thought is mapped to the forward control of the drone. It seems this could work in the other direction as well: in 2013, researchers at the University of Washington successfully sent a brain signal from one subject to another across campus to move the latter’s finger.
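
The calibrate-then-match loop described above can be sketched simply: store one "snapshot" per command during calibration, then map each live reading to whichever stored snapshot it most resembles. This is an assumption-laden toy (real EEG pipelines involve filtering, many channels, and proper classifiers); here a snapshot is just a short list of numbers and the match is nearest-neighbor by Euclidean distance.

```python
import math

def distance(a, b):
    """Euclidean distance between two equal-length readings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class ThoughtMapper:
    """Toy EEG mapping: calibrate a snapshot per command, match by nearest snapshot."""

    def __init__(self):
        self.snapshots = {}  # command -> calibration reading

    def calibrate(self, command, eeg_reading):
        self.snapshots[command] = eeg_reading

    def interpret(self, eeg_reading):
        # Whichever stored snapshot is closest wins.
        return min(self.snapshots,
                   key=lambda cmd: distance(self.snapshots[cmd], eeg_reading))

mapper = ThoughtMapper()
mapper.calibrate("forward", [0.9, 0.1, 0.4])  # think "forward" during calibration
mapper.calibrate("hover", [0.2, 0.8, 0.6])

print(mapper.interpret([0.85, 0.15, 0.5]))  # → forward
```

It doesn’t matter what the person actually thinks during calibration, only that the later readings resemble the stored snapshot — which matches the "forward or horse, it doesn’t matter" behavior of the demo.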

An adorable robot spider that locks onto your face and won’t let go (digitally, that is).

Imagine something that records your brain activity patterns when you’re lost, and also when you know where you’re going. Then, based on your location and destination, it feeds those patterns back into your brain as needed to give you truly intuitive wayfinding.

Can Interaction Design Save the Internet of Things?

Interaction designers have a key role to play in the age of Augmented Humanity. Technology is advancing more quickly than our ability to comprehend and use it effectively. We are the facilitators of a rapidly evolving conversation between humans and machines. As such, we need to understand the capabilities and limitations of both, as well as keep an eye on the overarching goal: augmenting — without obscuring — the human experience.

*This post was published in it's original form here
IoTBill Horan