
New to Beyond Casual? – start from part 1!

Raiders of the Lost Touch

PrimeSense's NuPortal3 cursor 
Looking at modern UI designs might get you thinking that everything today must be smooth. Ten years ago, a button click triggered an instantaneous screen change as the application switched states. Now we know better: we understand our users get disoriented by unnatural, immediate changes. After all – we rarely experience such scene changes in real life (except when we wake up from a nightmare – certainly not the desired experience for our daily interactions…)

So we want to animate our visuals. But blindly adding animation to every interaction results in an unresponsive system. In order to effectively plant visual feedback, we need to examine the interaction's mental model and consciously plan the sensation we are targeting.

As I mentioned in the 1st post – man-machine interaction has gotten softer over the decades:

  • Starting a car with an electrical ignition switch by rotating the key is way easier than rotating a dynamo lever to generate a spark in an antique car
  • Pressing a physical button is easier than pulling a lever down to engage a high-voltage power switch
  • Clicking a mouse button feels softer than a physical button
  • Clicking a virtual button on a touch screen feels just like touching a piece of glass
  • Clicking something in mid-air in a 3D interface, by itself, does not feel like anything at all. Crap.


Bagel cursor states 
For each softening step, the designers had to compensate for its side effects – most commonly, the reduction of natural feedback. Virtual buttons get pressed down and up when clicked, and 1:1 animation happens when you scroll a list of items on your iOS device. You can actually hear a recording of a loud mechanical shutter when a modern camera phone shoots (in the poorly designed ones – you also hear the motor winding the virtual film…). When we migrated to touch screens, we lost the button-click sensation. Moving to an in-the-air interface does away with even the most elementary tactile feedback of touching a physical surface.


In a simplified model, we are mentally capable of doing two operations at the same time. But those operations are not symmetric. Our visual focus can only maintain one target at its center, while our peripheral vision cannot reach a comparable resolution or attention level. Adding sound effects is critical in order to allow the user to operate the system with their secondary attention 'slot'. Try taking a picture with your smartphone without looking at the screen. Naturally – the sound effects supply you with reassurance, so you know the machine followed your intention.
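To make this concrete, here is a minimal sketch of one way to wire confirmation sounds to interaction events, so the system stays usable on that secondary attention slot. It is not taken from any specific product; the event names and sound file paths are hypothetical placeholders, and the actual audio call is left to whatever framework you use.

# Minimal sketch: attach confirmation sounds to interaction events so the
# system can be operated on the user's secondary attention 'slot'.
# Event names and sound file paths are hypothetical placeholders.

EVENT_SOUNDS = {
    "hover_enter": "sounds/tick.wav",    # subtle cue: the cursor reached a target
    "click": "sounds/click_down.wav",    # sharp cue: the moment of 'touch'
    "release": "sounds/click_up.wav",    # closes the click gesture
    "shutter": "sounds/shutter.wav",     # e.g. camera capture confirmation
}

def on_interaction_event(event_name, play_sound):
    """Play the matching confirmation sound, if one is defined.

    play_sound is whatever audio call your framework provides; it is
    injected here so the sketch stays framework-agnostic.
    """
    path = EVENT_SOUNDS.get(event_name)
    if path is not None:
        play_sound(path)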

Shadow touch cues
Consider simulating a virtual touch screen using gesture detection:
When interacting with a real surface – we receive some visual cues as we get closer and closer to the surface. You will see a drop shadow that gets closer and darker as your finger approaches the touch point. On a glossy surface, you might see a blurry reflection that merges with your fingertip upon touch. If you think stereo vision is the dominant cue here – try touching a non-glossy, back-illuminated screen that does not have the other cues (your screen can be a perfect candidate). Try to do it slowly. Can you accurately anticipate when you will reach the touch point?
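One way to recreate that cue in a gesture-tracked interface is to drive a synthetic drop shadow from the fingertip's distance to the virtual touch plane. The sketch below only illustrates the idea; the hover range, easing curve and pixel values are assumptions I picked for readability, not numbers from any real system.

# Sketch: derive a drop-shadow cue from the fingertip's height above the
# virtual touch plane. As the finger approaches the plane, the shadow slides
# under the cursor and darkens, mimicking the cue a real surface would give.
# The 200 mm hover range and the easing curve are arbitrary tuning choices.

HOVER_RANGE_MM = 200.0   # distance at which the shadow cue starts to appear
MAX_OFFSET_PX = 30.0     # shadow offset when the hand is far from the plane
MAX_OPACITY = 0.8        # shadow opacity at the moment of touch

def shadow_cue(distance_mm):
    """Return (offset_px, opacity) for the cursor's drop shadow.

    distance_mm is the fingertip's height above the virtual touch plane.
    At 0 the shadow merges with the cursor, exactly like on a real surface.
    """
    d = max(0.0, min(distance_mm, HOVER_RANGE_MM))
    t = d / HOVER_RANGE_MM                    # 1.0 = far away, 0.0 = touching
    offset_px = MAX_OFFSET_PX * t             # the shadow slides under the finger
    opacity = MAX_OPACITY * (1.0 - t) ** 2    # darkens faster near contact
    return offset_px, opacity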

After implementing several continuous cursor feedback mechanisms, users gain some initial depth perception. But then we encountered another issue. Waving your hands in the air feels like… well – just that. It certainly does not feel like touching anything. But the virtual surface simulation was not about limitless in-the-air interaction! The moment of touch should feel different, just as it does on a real hard surface. While the feedback during hovering is continuous, the touch moment must create a non-continuous sensation. An immediate, non-continuous visual change, combined with crafted sound effects on click and release, is important.
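A minimal sketch of that split follows: the hover feedback is recomputed every frame, while the touch itself is a discrete event fired exactly once when the fingertip crosses the virtual plane, with a small hysteresis band so the click doesn't flicker at the boundary. The thresholds and callbacks are illustrative assumptions, not values from an actual product.

# Sketch: continuous feedback while hovering, plus a discrete touch event
# fired exactly once when the fingertip crosses the virtual plane.
# Threshold values are illustrative assumptions.

TOUCH_PLANE_MM = 0.0      # the virtual surface
RELEASE_BAND_MM = 15.0    # must retreat this far before a new touch can fire

class VirtualTouchCursor:
    def __init__(self, on_touch, on_release):
        self.touching = False
        self.on_touch = on_touch       # e.g. flash the cursor and play the click sound
        self.on_release = on_release   # e.g. play the release sound

    def update(self, distance_mm):
        """Call every frame with the fingertip's distance to the plane."""
        if not self.touching and distance_mm <= TOUCH_PLANE_MM:
            self.touching = True
            self.on_touch()            # the non-continuous moment
        elif self.touching and distance_mm >= RELEASE_BAND_MM:
            self.touching = False
            self.on_release()
        # Continuous hover feedback (shadow, cursor size, sound volume...)
        # would be updated here every frame, regardless of the touch state.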

Non-continuous touch point


Kinectimals / Frontier Developments 
When Microsoft launched the Kinect, back in November 2010, we took all 6 launch titles for a spin. Reaching Frontier Developments' "Kinectimals" – and after patiently waiting for the annoying opening video to pass – we got a bit confused. There we stood, several gesture-savvy engineers and researchers, petting an adorable cub and trying to figure out how it managed to track our finger interactions using the same PrimeSensor system found at the heart of Kinect. Of course, after a few embarrassing minutes we figured it out - it didn't! The game's virtual hand avatar brilliantly interacts with the pet in a natural, expected way. If you put your hand on top of the furry head, you just can't help but pet it!



Hand avatar – the super-metaphor 
Sometimes I refer to those realistic hand avatars as a "super-metaphor": not only do you instantly understand the type of expected motion, you also can't help imitating it. When you identify strongly enough with the virtual hand, instinct drives you to avoid conflicts. So it's not only the machine that tracks the user – the user tracks the machine! (Yet another observation backing the assumption that we share a common ancestor with monkeys…)

Perhaps this is how the modern-age 'tool-tip' will look

Natural interaction is not limited to flat surfaces – the cursor should interact with buttons and menus in a natural way. It's true for gesture interfaces, just as it is true for touch screens. Again – it's all part of the fight against the lost feedback.
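As one illustration of what 'natural' can mean here, the following sketch lets a button respond continuously to cursor proximity, so hovering near it already produces feedback before any click happens. The radius and scale values are arbitrary choices of mine, not part of the original design.

# Sketch: a button that reacts continuously to cursor proximity.
# HIGHLIGHT_RADIUS_PX and MAX_SCALE are arbitrary illustrative values.

import math

HIGHLIGHT_RADIUS_PX = 120.0   # distance at which the button starts reacting
MAX_SCALE = 1.15              # how much the button grows under the cursor

def button_visual_state(cursor_xy, button_center_xy):
    """Return (scale, highlight) for a button, given the cursor position."""
    dx = cursor_xy[0] - button_center_xy[0]
    dy = cursor_xy[1] - button_center_xy[1]
    dist = math.hypot(dx, dy)
    closeness = max(0.0, 1.0 - dist / HIGHLIGHT_RADIUS_PX)  # 0 = far, 1 = on top
    scale = 1.0 + (MAX_SCALE - 1.0) * closeness
    highlight = closeness          # drive a glow or colour shift with this
    return scale, highlight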

Brilliant visual feedback
 Fruit Ninja / Halfbrick
 
As pointed out in the 2nd post, in an actual 3D-rendered game environment you should acknowledge the challenge of depth perception. If the avatar hits something and passes right through it, it will feel transparent. The moment of impact is another place where a non-continuous experience works better. The interaction should realistically move the enemies and the avatar upon contact. If the avatar is supposed to be huge – an impact might throw the enemies away, while you might get pushed back upon punching a giant ogre. But it's not only about physics: a real enemy charging towards you will certainly stop just on reaching melee-attack distance – and the same is true for cases where the player initiates the charge. Don't try to challenge your users into getting the 'right' distance – it will just break the illusion and bring them back to reality (unless you want them to feel as if they are just lost teenagers, waving hands in front of their living-room flat-screen television…)
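One way to implement that last rule is to snap the charging character to melee range on contact, instead of leaving the exact stopping point to the player's depth judgment. The sketch below uses made-up distances and speeds, not anything from a shipped game.

# Sketch: end a charge exactly at melee range, instead of expecting the
# player to judge the 'right' stopping distance themselves.
# MELEE_DISTANCE and KNOCKBACK_SPEED are made-up placeholder values.

MELEE_DISTANCE = 1.2     # metres: where a charge should stop
KNOCKBACK_SPEED = 4.0    # metres/second: impulse used at the impact moment

def resolve_charge(attacker_pos, target_pos):
    """Clamp a charge so it ends exactly at melee range.

    Both positions are (x, z) tuples on the ground plane. Returns the
    attacker's corrected position and a knockback velocity pointing from
    attacker to target; apply it to the lighter combatant (negated if that
    happens to be the attacker) at the non-continuous moment of impact.
    """
    dx = target_pos[0] - attacker_pos[0]
    dz = target_pos[1] - attacker_pos[1]
    dist = (dx * dx + dz * dz) ** 0.5
    if dist <= MELEE_DISTANCE:
        return attacker_pos, (0.0, 0.0)    # already in range, nothing to fix
    nx, nz = dx / dist, dz / dist          # unit direction attacker -> target
    stop_pos = (target_pos[0] - nx * MELEE_DISTANCE,
                target_pos[1] - nz * MELEE_DISTANCE)
    knockback = (nx * KNOCKBACK_SPEED, nz * KNOCKBACK_SPEED)
    return stop_pos, knockback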

3 comments:

  1. Great post! After downloading and playing with some applications from the OpenNI arena, I realized what a strong impact sound has on the experience. When I turned off the sound on my computer, the fun was gone, just like that (and I am a lost not-teenager, waving hands in front of my computer). Then I started to think about the rubber hand experiment and mirror neurons, and what is the minimum needed in an interaction to play these tricks on our mind...

    Replies
    1. Thanks Gila!
      Indeed, sounds are so critical that it is pointless trying to evaluate a mute experience --> even from the earliest stages of prototyping. Having any sound is always better than none (and no matter how many times I have stated this, every once in a while I fall again into the same "quick and mute draft" trap...)

      (BTW: Eager to see how your own project materializes!)

    2. There is another (rather new) type of feedback for giving the feel of a click. A rather successful feedback, I might say: http://youtu.be/jORsG8AG72I?t=1m40s

      And I agree the sound is a very underestimated feedback. I discovered the power of sound when I was given a task to develop for the blind a few years ago.
