In the Details: Flick Intent

In Mobile Safari on iOS 7, Apple uses a very subtle microinteraction to reveal the search field. Usually, the search field on a given screen is revealed by swiping down to scroll past the top of the content; the search field scrolls into view above it. If you are viewing a web page, however, the top of the page could be a long scroll away, so Apple detects your gesture. If you drag your finger down slowly, the page scrolls as normal. If you flick the page down, however, the page title bar enlarges to become the URL/search field. Scrolling the page up again shrinks it back down to show only the URL. The UI infers your intent in rapidly scrolling toward the top of the page and presents the controls you would expect to find there.
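
Under the hood, this sort of intent detection comes down to reading the velocity of the drag at the moment the finger lifts. Here is a minimal sketch of how a developer might approximate the behavior with UIKit’s scroll view delegate; the threshold value and the FlickIntentDetector name are my own illustrative guesses, not Apple’s implementation.

```swift
import UIKit

// A minimal sketch of velocity-based intent detection, assuming a plain
// UIScrollView whose delegate decides when to expand the title bar.
// The threshold and callback are illustrative, not Apple's actual code.
final class FlickIntentDetector: NSObject, UIScrollViewDelegate {
    // Points per millisecond; above this, a drag counts as a flick.
    let flickThreshold: CGFloat = 1.5
    var onFlickTowardTop: (() -> Void)?

    func scrollViewWillEndDragging(_ scrollView: UIScrollView,
                                   withVelocity velocity: CGPoint,
                                   targetContentOffset: UnsafeMutablePointer<CGPoint>) {
        // A downward flick scrolls the content toward the top, so the
        // reported velocity has a negative y component.
        if velocity.y < -flickThreshold {
            onFlickTowardTop?()   // reveal the full URL/search field
        }
        // A slow drag never crosses the threshold and scrolls as normal.
    }
}
```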

Part 5 Postponed

I had the entire post written. Then I accidentally brushed my fingers across my Magic Mouse. That magically invoked the Back button, circumventing Tumblr’s protection against accidentally leaving the page with an unsaved post. Thus, I lost the post. I’ve now turned off that particular gesture.

It’s too late to rewrite it all tonight. I’ll do it tomorrow.

Kinect

I had an opportunity to play around with an Xbox Kinect over the holidays, and while it is an interesting piece of technology with a lot of potential, I wasn’t particularly impressed. I first attempted to play Wipeout, based on the ridiculous ABC game show. The game couldn’t tell when I stopped running in place, so my on-screen avatar would frequently run right into obstacles or off ledges. There was a significant delay between my actual jump and my avatar’s jump. Counter-intuitively, the game was designed such that bending to the right made your avatar bend forward, and bending to the left made it bend backward. I assume this was done because the Kinect can’t reliably detect forward and backward movement. Now, these problems could very well be due to a poor implementation in that particular game.

The second game I tried was Just Dance 3. My daughters have the first two installments for our Wii. The older of the two thinks the series works better on the Wii, but I can’t speak to that from my own experience. What I did observe was that the game was extremely forgiving in what it considered correct movements, and several times it got confused about which player was which (up to four can play).

The environment has a huge effect on the Kinect’s performance. When two ceiling fans were turned on in the room, the Kinect couldn’t see me at all; this may have been due to a subtle strobing effect caused by the pot lights above the fans. My two-year-old nephew was running around, and the Kinect would sometimes confuse me with him, even given our drastic difference in height.

The Kinect seemed to be good enough for the sweeping arm motions used in Fruit Ninja, but the more nuanced motions necessary for the other games didn’t translate well. There is a lot of potential, and the device is certainly selling well, but the experience didn’t make me want to trade in my Wii.

In the Details: Double Tap

I just discovered that in the latest version of Safari, Apple has carried over another feature from iOS: double-tapping an object, like an image or paragraph, zooms in on it, fitting it to the width of the window. Notice that I said “double-tap” rather than “double-click.” Given the touch-enabled input devices Apple now sells, tapping the surface of a trackpad or mouse is an action distinguishable from a click. While this feature isn’t particularly useful to me, I could see it being quite handy for someone with a vision deficiency. I never would have thought to try tapping the surface of my mouse, even though I slide my fingers across it to scroll all the time. I discovered it accidentally, and it took me a minute to figure out what I had done. Double-clicking, of course, still selects text, as it always has.

Complex Touch Interactions

The phrase, “It’s just a big iPod touch,” has been repeated ad nauseam since the iPad announcement. In some superficial ways, that is true. However, the much larger display opens the door to new capabilities that require more complex interactions. Perhaps the most significant part of last week’s event was the demo of the iWork suite.

Pages, Numbers, and Keynote are not dumbed-down versions of their desktop counterparts, as one might expect. Each contains complex functionality. However, the team at Apple has designed sophisticated user interfaces specifically for touch input, extending its multi-touch gesture language.

I was especially impressed by their solution to multiple selection. Phil demonstrated how you can rearrange slides by simply dragging them with your finger. But what if you want to move several slides at once? On the desktop, this is accomplished by holding down a modifier key while clicking slides to select them. With multi-touch, you can drag a slide with one finger and then tap additional slides with your other hand. Each slide you tap stacks itself under the one you are already dragging. Now you are dragging all of the selected slides and can drop them where you want.
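
As a rough mental model, the interaction needs surprisingly little state: the slide the pan gesture is carrying, plus a growing list of slides tapped into the stack. Here is a hedged sketch of that model; all of the names (SlideStack, beginDrag, addTappedSlide) are invented for illustration, since Apple hasn’t published how Keynote actually does it.

```swift
import UIKit

// A conceptual sketch of drag-plus-tap multiple selection. All names
// here are invented for illustration, not Apple's implementation.
final class SlideStack {
    private(set) var slides: [UIView] = []

    // Pan gesture began on a slide: it becomes the bottom of the stack.
    func beginDrag(with slide: UIView) {
        slides = [slide]
    }

    // While the pan is active, a tap on another slide tucks it under
    // the finger, joining the stack being dragged.
    func addTappedSlide(_ slide: UIView) {
        guard !slides.isEmpty, !slides.contains(slide) else { return }
        slide.center = slides[0].center   // visually stack under the dragged slide
        slides.append(slide)
    }

    // Finger lifted: drop every slide in the stack at the target position.
    func endDrag(insertingAt index: Int, move: (UIView, Int) -> Void) {
        for (offset, slide) in slides.enumerated() {
            move(slide, index + offset)
        }
        slides = []
    }
}
```

The appealing property of this model is that selecting and moving become a single continuous act, rather than the desktop’s select-then-drag sequence.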

They have invented new UI conventions, such as what Phil called the “page navigator,” a replacement for the scrollbar that shows a thumbnail of each page as you drag your finger up and down the screen, allowing you to quickly find and jump to a particular page in your document. In Numbers, we saw scrolling tabs: there were more tabs in the spreadsheet than could fit across the screen, and when Phil dragged his finger horizontally across them, they slid sideways to reveal more tabs that had been hiding off-screen. It’s an elegant interaction, but I wonder how apparent it is that additional tabs exist. I was also intrigued by the interactions shown as Phil edited a spreadsheet in Numbers: the table was outfitted with areas above it and along its left side that displayed buttons for adding rows and columns and, when a row or column was selected, provided handles by which it could be dragged to a new position.
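
The page navigator, at least, is easy to reason about: the finger’s position along the track maps linearly to a page index, and the navigator shows a thumbnail of whichever page the finger is over. A sketch of that mapping, in my own formulation rather than Apple’s code:

```swift
import CoreGraphics

// Map a touch position along the navigator track to a page index.
// An illustrative formulation of the scrubbing math, not Apple's code.
func pageIndex(forTouchY y: CGFloat, trackHeight: CGFloat, pageCount: Int) -> Int {
    guard pageCount > 0, trackHeight > 0 else { return 0 }
    let fraction = min(max(y / trackHeight, 0), 1)   // clamp to the track
    return min(Int(fraction * CGFloat(pageCount)), pageCount - 1)
}
```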

Direct manipulation is the key. Rather than today’s plethora of toolbars and menus on the edges of the screen that affect the currently selected object, we are seeing controls applied directly to the objects, an approach that I would argue is much more intuitive. We’ve been seeing futuristic interfaces in the movies for years—displays that respond magically to simple movements of hands and fingers. Apple is now defining the specifics of how this is going to work. It’s going to be sophisticated. It’s going to be elegant.

It’s going to be great.

Can’t Touch This

On Friday, Engadget posted a video demonstrating a working prototype of a touchless remote control. An original concept video had appeared on YouTube in January.

In development by Bang & Olufsen, the concept is intended for use in the kitchen, where one might have dirty hands. The prototype controls volume and channel switching, as well as turning the television on and off, by way of hand gestures.

The industrial design is beautiful. I’m intrigued by the volume control: the way the device balances itself at any angle while responding to the proximity of your hand. The use of positive and negative space to represent different modes of interaction is ingenious. It really does look like magic.

Is it practical? Perhaps not. Obviously, it’s only a prototype and still needs some work, but even so, with the 200-whatever channels I have through Verizon’s FiOS plan, flipping through them serially isn’t really an option. And how easy is it, really, to use without accidentally touching it? It appears to require patient, deliberate movements, as well as a flat surface with plenty of room. Finally, I can’t imagine it being what I would consider affordable.

Still, it is a very elegant object, and it’s fun to watch designers like Joris van Gelder explore new methods of interacting with our environments.