Force Touch is the “new” pressure-sensitive user interface technique that Apple pioneered with its Apple Watch – or was it? Before we jump into that, let’s talk about the ways we can already interact with our smartphones and wearables, and see if we can figure out where Force Touch currently fits in – and where it should fit in.

Text

[Image: AS/400 terminal]

Multi-color was considered an “advanced feature”

Before we had mice to move our cursors around, we had Scroll Lock and Tab. Back then, screens were two-color: green on black, yellow on black, white on blue, or some other high-contrast combination – which often led to burn-in on our very expensive monitors (this, by the way, is why “screen savers” were invented, but that’s another topic entirely).

In these systems you moved around the screen by pressing the Tab key (which advanced your cursor to the next field; Shift-Tab took you back). You could jump to another area of the screen with a hot key (usually identified by an underlined letter), open menus with function keys, or navigate to other screens with some combination of tabs, hot keys, and function keys. It wasn’t elegant, but it worked – and since you never had to take your fingers off the keyboard, it was fast!

Mice

Eventually the mouse became popular, and users could simply point to where they wanted the cursor to go and click to start entering data. Before long, other “gestures” were introduced: click-and-hold let you highlight characters, right-click brought up a context-sensitive menu (depending on where the cursor was pointed), click-and-drag let you copy content from one cell to another, and double-click was how you opened things. Most of us are used to how a mouse works, not necessarily because it’s intuitive (who came up with right-click?) but because we grew up using it.

Touch

When smartphones came along, pioneered by Apple’s iPhone (and the iPod touch shortly after it), a new – and much more intuitive – user interface method was popularized: touch.

These new devices let you literally touch a screen with your finger to select, open, or interact with the objects being shown. It sounds simple, but the programming that went into it is anything but.

Since it was introduced, we’ve gone from single taps, to multi-finger taps, to tap-and-hold, and more. We’re basically re-inventing what we could already do with a mouse, but making it intuitive (sort of).

Force Touch

One of the more useful “gestures” you can perform with a traditional mouse is the right-click. Replicating that on a touchscreen with a finger has proven somewhat challenging. Currently, the long-tap seems to be developers’ preferred way to “right-click” on a touchscreen, but it doesn’t work particularly well.
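For those curious how that long-tap is usually wired up, here’s a minimal sketch using Android’s standard GestureDetector. The class name is our own, and the context-menu call is just a placeholder for whatever an app would actually show:

```java
import android.content.Context;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.View;

// Minimal sketch: treating a long-tap as the touchscreen "right-click".
public final class LongTapAsRightClick {
    private LongTapAsRightClick() {}

    public static void attach(Context context, final View view) {
        final GestureDetector detector = new GestureDetector(context,
                new GestureDetector.SimpleOnGestureListener() {
                    @Override
                    public void onLongPress(MotionEvent e) {
                        // Fires once the system long-press timeout elapses
                        // (roughly half a second by default) with the
                        // finger held in place.
                        view.showContextMenu();
                    }
                });
        // Route the view's raw touch events through the detector.
        view.setOnTouchListener((v, event) -> detector.onTouchEvent(event));
    }
}
```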

When Apple brought us its Apple Watch, the company touted something it called “Force Touch”:

In addition to recognizing touch, Apple Watch senses force, adding a new dimension to the user interface. Force Touch uses tiny electrodes around the flexible Retina display to distinguish between a light tap and a deep press, and trigger instant access to a range of contextually specific controls. With Force Touch, pressing firmly on the screen brings up additional controls in apps like Messages, Music, and Calendar. It also lets you select different watch faces, pause or end a workout, search an address in Maps, and more. Force Touch is the most significant new sensing capability since Multi‑Touch.

That sounds pretty new and novel, doesn’t it?

The only problem? Android has had “force touch” since API Level 5, which was introduced with Android 2.0 Eclair. That was back in November of 2009.
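To make that concrete, here’s a hedged sketch (not Adam Outler’s code, which we’ll get to in a moment) of how an Android app can read that pressure value and branch on it. The class name and the 0.8f threshold are our own assumptions, since real digitizers report very different ranges:

```java
import android.view.MotionEvent;
import android.view.View;

// A sketch of Android's long-standing pressure sensing: MotionEvent
// reports a normalized pressure reading, and a listener can branch on
// it to separate a light tap from a deep press.
public class PressureTouchListener implements View.OnTouchListener {
    // Assumed threshold: pressure is nominally 0..1, but actual panels
    // report wildly different ranges, so this needs per-device tuning.
    private static final float DEEP_PRESS_THRESHOLD = 0.8f;

    @Override
    public boolean onTouch(View v, MotionEvent event) {
        if (event.getActionMasked() == MotionEvent.ACTION_DOWN) {
            float pressure = event.getPressure();
            if (pressure >= DEEP_PRESS_THRESHOLD) {
                onDeepPress(v);  // the "force touch" path
            } else {
                onLightTap(v);   // the ordinary tap path
            }
            return true;
        }
        return false;
    }

    void onDeepPress(View v) { /* e.g. reveal the hidden controls */ }
    void onLightTap(View v)  { /* e.g. ordinary selection */ }
}
```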

To illustrate the capability, Adam Outler put together a nice little app called Force Touch Demo. Not every phone or tablet out there has the same sensitivity in its screen, so your results may vary, but the fact of the matter is that Android-powered devices have had this ability for almost nine years. Why hasn’t Google (or its hardware partners) picked it up and highlighted it the way Apple is doing?

Adam Outler sums it up pretty well:

How is this useful? It’s not, really. It just adds a level of complexity to a user experience.

Hiding user-interface elements behind a “force-press” (or even a right-click) is counter-intuitive. Requiring users to exert a certain level of pressure on their screens to find one of those hidden features is even more problematic.

Update:

Adam Outler reached out and let us know that the getPressure() API has been available since API Level 1 (which was introduced with Android 1.0). In other words, it’s likely that Android has had the ability to use force touch since the very beginning. Looking at the history, API Level 5 expanded the original implementation of getPressure() – adding the pointer-indexed variant that works with multi-touch – which implies that it became “useful” in API Level 5, whereas it appears to have been merely “stubbed out” in API Level 1.
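For completeness, here’s a small sketch of that pointer-indexed variant, which arrived alongside multi-touch in Eclair; the helper itself is hypothetical, not part of Android:

```java
import android.view.MotionEvent;

// Sketch of what API Level 5 actually added: per-pointer pressure,
// which arrived alongside multi-touch in Android 2.0 (Eclair).
public final class MultiTouchPressure {
    private MultiTouchPressure() {}

    /** Returns the reported pressure for every active pointer. */
    public static float[] pressures(MotionEvent event) {
        int count = event.getPointerCount();   // multi-touch, API 5+
        float[] values = new float[count];
        for (int i = 0; i < count; i++) {
            values[i] = event.getPressure(i);  // per-pointer, API 5+
        }
        return values;
    }
}
```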

Thanks for the dialog, Adam Outler!