New to Beyond Casual? – start from part 1!

Raiders of the Lost Touch

PrimeSense's NuPortal3 cursor 
Looking at modern UI designs, you might think everything today must be smooth. Ten years ago, a button click triggered an instantaneous screen change as the application switched states. Now we know better: we understand that our users get disoriented by unnatural, immediate changes. After all – we rarely experience such scene changes in real life (except when we wake up from a nightmare – certainly not the desired experience for our daily interactions…)

So we want to animate our visuals. But blindly adding animation to every interaction results in an unresponsive system. To plant visual feedback effectively, we need to examine the interaction's mental model and consciously plan the sensation we are targeting.

As I mentioned in the 1st post – man-machine interaction has gotten softer over the decades:

  • Starting a car with an electrical ignition switch by rotating the key is far easier than rotating a dynamo lever to generate a spark in an antique car
  • Pressing a physical button is easier than pulling a lever down to engage a high-voltage power switch
  • Clicking a mouse button feels softer than pressing a physical button
  • Clicking a virtual button on a touch screen feels just like touching a piece of glass
  • Clicking something in mid-air in a 3D interface, by itself, does not feel like anything at all. Crap.

Bagel cursor states 
For each softening step, the designers had to compensate for its side effects – most commonly, the reduction of natural feedback. Virtual buttons get pressed down and up when clicked, and 1:1 animation happens when you scroll a list of items on your iOS device. You can actually hear a recording of a loud mechanical shutter when a modern camera phone shoots (in the poorly designed ones – you also hear the motor winding the virtual film…) When we migrated to touch screens, we lost the button-click sensation. Going to an in-the-air interface does away with even the most elementary tactile feedback of touching a physical surface.

In a simplified model, we are mentally capable of doing two operations at the same time. But those operations are not symmetric. Our visual focus can maintain only one target in the center, while our peripheral vision cannot reach a comparable resolution or attention level. Adding sound effects is critical to allow the user to operate the system with their secondary attention 'slot'. Try taking a picture with your smartphone without looking at the screen. Naturally – the sound effects supply you with reassurance, so you know the machine followed your intention.

Shadow touch cues 
Consider simulating a virtual touch screen using gesture detection:
When interacting with a real surface – we receive visual cues as we get closer and closer to the surface. You will see a drop shadow that gets closer and darker as your finger approaches the touch point. On a glossy surface, you might see a blurry reflection that merges with your fingertip upon touch. If you think stereo vision is the dominant cue here – try touching a non-glossy, back-illuminated screen that lacks the other cues (your screen can be a perfect candidate). Try to do it slowly. Can you accurately anticipate when you will reach the touch point?
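One way to synthesize this cue in a gesture UI is to map the finger's distance from the virtual touch plane to the drop shadow's opacity and blur. Here is a minimal sketch of that mapping; the distance range and constants are my own illustrative assumptions, not values from any PrimeSense or Kinect API:

```python
def shadow_params(distance_mm, max_distance_mm=150.0):
    """Map finger-to-surface distance to a simulated drop shadow.

    Returns (opacity, blur_radius_px): the shadow darkens and sharpens
    as the finger approaches the virtual touch plane, mimicking the
    cue we get from a real surface. All constants are illustrative.
    """
    # Clamp and normalize: 0.0 = touching the plane, 1.0 = out of range
    t = max(0.0, min(distance_mm, max_distance_mm)) / max_distance_mm
    opacity = 1.0 - t                # fully dark at contact, invisible far away
    blur_radius_px = 2.0 + 18.0 * t  # crisp at contact, soft and diffuse when far
    return opacity, blur_radius_px
```

Feeding this per-frame into the cursor renderer gives the user a continuous sense of how far their finger is from "contact", just like a shadow on a real desk.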

After implementing several continuous cursor feedback mechanisms, users gain some initial depth perception. But then we encountered another issue. Waving your hands in the air feels like… well – just that. It certainly does not feel like touching anything. But the virtual surface simulation was not about limitless in-the-air interaction! The moment of touch should feel different, just as it does on a real hard surface. While the feedback during hovering is continuous, the touch moment must create a non-continuous sensation: an immediate, non-continuous visual change, combined with crafted sound effects on click and release.
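The hover/touch split above can be sketched as a tiny state machine: continuous feedback while hovering, and a discrete event fired the instant the finger crosses the virtual plane. The thresholds below are assumptions I chose for illustration; the press and release distances deliberately differ (hysteresis) so sensor jitter around the plane does not cause click/release chatter:

```python
class TouchSimulator:
    """Discrete touch events on top of continuous hover feedback.

    Illustrative sketch: crossing the virtual plane fires a 'press'
    (the abrupt visual change plus click sound); the finger must
    retreat past a farther threshold to fire 'release'.
    """
    PRESS_MM = 0.0     # at or past the plane: trigger a press
    RELEASE_MM = 15.0  # must pull back this far to release

    def __init__(self):
        self.pressed = False

    def update(self, distance_mm):
        """Feed each frame's finger distance; returns an event or None."""
        if not self.pressed and distance_mm <= self.PRESS_MM:
            self.pressed = True
            return "press"    # fire the non-continuous visual + click sound
        if self.pressed and distance_mm >= self.RELEASE_MM:
            self.pressed = False
            return "release"  # fire the release sound
        return None           # hovering: only continuous cursor feedback
```

In a real pipeline `update` would run once per tracking frame, with the "press" and "release" events driving the sound effects and the sudden cursor change described above.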

Non-continuous touch point 

Kinectimals / Frontier Developments 
When Microsoft launched the Kinect, back in November 2010, we took all 6 launch titles for a spin. Reaching Frontier Developments' "Kinectimals" – and after patiently waiting for the annoying opening video to pass – we got a bit confused. There we stood, several gesture-savvy engineers and researchers, petting an adorable cub and trying to figure out how it managed to track our finger interactions using the same PrimeSensor system found at the heart of Kinect. Of course, after a few embarrassing minutes we figured it out – it didn't! The game's virtual hand avatar brilliantly interacts with the pet in a natural, expected way. If you put your hand on top of the furry head, you just can't help but pet it!