Here it comes… the part you have all been waiting for –
Shooting!
For some reason, we are magnetized by the ability to fire
projectiles, and the combination of that with violence is, well, explosive.
But enough of the crappy philosophy: shooting is an essential ingredient of hardcore
gaming experiences, so let's explore it a bit.
Some may argue that this is totally unacceptable, and that pulling
a trigger with a finger is mandatory. In the fictional world of most action
movies, the heroes tend to spray bullets around in automatic mode. In reality,
of course, this is: A) not efficient or accurate, and B) you will only spray like that if
someone else carries your ammunition.
Of course, games are more like the
movies. Back in the old days, some joysticks even had an 'auto-fire' switch
to make it easier on the lazy gamer. My argument is: it might be acceptable to
find alternatives to finger-trigger firing, and a single squeeze per single
shot is not a mandatory requirement even for non-casual games.
Let's discuss several possible gestures for shooting and their
implementation considerations:
Single hand pistol
- Since the same hand is used for both aiming and triggering, this scheme will not allow accurate ranged attacks.
- Requiring the algorithm to detect a backward motion creates a notable delay (see the sketch after this list).
- The user needs to learn the correct speed and length of a short motion. Since it is too short to benefit from intermediate feedback, users will probably suffer from exaggerated motions or missed gestures, which feel like an unresponsive gun (not fun in scenes where you are under fire…).
- It certainly puts too much of a spotlight on the limitations of finger tracking.
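To make the back-motion issue concrete, here is a minimal sketch of a recoil-style trigger detector in C++. It assumes hand and elbow joint positions (in millimeters) arrive each frame from your skeleton tracker (e.g. NITE/OpenNI or the Kinect SDK); the Vec3 helpers and the speed threshold are illustrative, not part of any SDK.

```cpp
#include <cmath>

// Basic vector helpers; any math library you already use will do.
struct Vec3 { float x, y, z; };
static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static float Len(const Vec3& v) { return std::sqrt(Dot(v, v)); }

class RecoilTrigger {
public:
    // Feed one skeleton frame; returns true on the frame a shot should fire.
    bool Update(const Vec3& hand, const Vec3& elbow, float dt) {
        // Aim axis = forearm direction (elbow -> hand).
        Vec3 aim = Sub(hand, elbow);
        float aimLen = Len(aim);
        if (aimLen < 1e-3f || dt <= 0.0f) return false;
        aim = { aim.x / aimLen, aim.y / aimLen, aim.z / aimLen };

        Vec3 vel = Sub(hand, lastHand_);
        vel = { vel.x / dt, vel.y / dt, vel.z / dt };
        bool hadSample = hasSample_;
        lastHand_ = hand;
        hasSample_ = true;
        if (!hadSample) return false;  // no velocity on the very first frame

        // "Recoil" = hand speed along the negative aim axis above a threshold,
        // i.e. a short, fast jerk back toward the body. Fire on the rising
        // edge only, so one jerk produces exactly one shot.
        bool recoiling = Dot(vel, aim) < -kBackSpeedMmPerSec;
        bool fire = recoiling && !wasRecoiling_;
        wasRecoiling_ = recoiling;
        return fire;
    }
private:
    static constexpr float kBackSpeedMmPerSec = 600.0f; // tune in playtests
    Vec3 lastHand_{0, 0, 0};
    bool hasSample_ = false;
    bool wasRecoiling_ = false;
};
```

Note how the delay mentioned in the list falls out naturally: nothing can fire until the backward motion is already well underway.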
Dual hand pistol
Shooting a real pistol accurately requires a two-handed hold.
For the triggering, we tried a scheme where the hand holding the handle aims,
and bringing in the 2nd hand starts auto-firing.
In the movies, some Western cowboys used the 2nd
hand to fan the hammer and speed it up. This can also be emulated in gestures, by moving
the 2nd hand up/down or forward/backward behind the aiming hand.
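A minimal sketch of the auto-fire variant, reusing the Vec3 helpers from the previous sketch; the engagement distance and fire rate are made-up numbers you would tune per game.

```cpp
// Dual hand pistol, auto-fire variant: bringing the free hand within reach
// of the aiming hand starts firing; pulling it away stops.
class DualHandAutoFire {
public:
    // Returns how many shots to spawn this frame.
    int Update(const Vec3& aimHand, const Vec3& freeHand, float dt) {
        bool engaged = Len(Sub(freeHand, aimHand)) < kEngageDistMm;
        if (!engaged) { cooldown_ = 0.0f; return 0; }
        cooldown_ -= dt;
        int shots = 0;
        while (cooldown_ <= 0.0f) {   // catch up if the frame was long
            ++shots;
            cooldown_ += 1.0f / kShotsPerSec;
        }
        return shots;
    }
private:
    static constexpr float kEngageDistMm = 200.0f; // ~20 cm between hands
    static constexpr float kShotsPerSec  = 8.0f;
    float cooldown_ = 0.0f;
};
```

The distance check doubles as the "stop firing" gesture, which keeps the scheme easy to explain to players.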
While not obvious at first, all those possibilities share a recurring
problem related to the tracking technology: users
tend to hold the aiming hand close to their body, and such poses are extremely
challenging for many computer vision algorithms. Inaccurate aim is
something you should expect and account for when choosing these schemes.
Rifle
Rifles are much heavier than handguns, and the natural
shooting pose involves two hands: one to carry the rifle's weight, usually near the
far end of the gun, and another for squeezing the trigger. You can either fire
when the user moves the trigger hand back and forth, or begin auto-fire when
the 2nd hand gets close to the trigger.
Compared to pistol shooting, the aiming hand is relatively
far from the body, and is thus reliably tracked.
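Since the far hand is the reliably tracked one, a natural choice is to derive the barrel ray from it. A minimal sketch under the same assumptions as before (tracker-supplied joints in millimeters, Vec3 helpers from the first sketch); the minimum hand separation is an illustrative constant.

```cpp
// A world-space ray for the rifle scheme: the barrel runs from the trigger
// hand (rear) through the support hand (front). The front hand is far from
// the body, so it is the better-tracked anchor for the ray origin.
struct AimRay {
    Vec3 origin;     // where the ray cast starts
    Vec3 direction;  // normalized
};

// Returns false when the hands are too close to define a stable direction.
bool ComputeRifleAim(const Vec3& triggerHand, const Vec3& supportHand, AimRay& out) {
    Vec3 barrel = Sub(supportHand, triggerHand);
    float len = Len(barrel);
    if (len < 150.0f) return false;  // assumed minimum separation, in mm
    out.direction = { barrel.x / len, barrel.y / len, barrel.z / len };
    out.origin = supportHand;        // cast outward from the front hand
    return true;
}
```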
“To Infinity and beyond!”: The Buzz Lightyear maneuver
Had he owned a gun, Sheriff Woody would probably use the
pistol scheme, but his lifelong friend Buzz has a much more advanced laser,
implanted directly above his forearm. For gesture gaming considerations, this scheme is quite
successful because the aiming arm is always extended in a 'vision friendly'
pose.
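A sketch of the corresponding aim computation, reusing the AimRay struct and vector helpers from the rifle sketch; the joint names are whatever your tracker exposes for elbow and hand.

```cpp
// Buzz Lightyear scheme: the laser ray runs along the forearm, from the
// elbow joint through the hand joint. Both joints stay away from the torso,
// which keeps the skeleton tracker happy.
bool ComputeForearmRay(const Vec3& elbow, const Vec3& hand, AimRay& out) {
    Vec3 forearm = Sub(hand, elbow);
    float len = Len(forearm);
    if (len < 1e-3f) return false;   // degenerate frame; keep the last aim instead
    out.direction = { forearm.x / len, forearm.y / len, forearm.z / len };
    out.origin = hand;               // the laser sits just above the forearm
    return true;
}
```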
Buzz's other abilities, such as foldable wings and rocket
boosters, definitely cry out for someone to create a gesture game. Any volunteers?
AngryBotsNI implements the Buzz Lightyear laser scheme.
Coming up next: The Clone Wars!
Very awesome post. Good thing I saw this, else I'd be doing a lot of trial and error (which I think would still happen anyway..)
Now the question -- has anybody tried incorporating user-created physical tools? A paper gun.. or cannon, perhaps? Could it be crafted in such a way that the color/material used would help the Kinect accurately track its location for point-and-shoot purposes?
(Thanks John!)
This idea was always on the table - but unfortunately, the CV libraries, such as the Kinect SDK or NITE/OpenNI, are not really designed for such usage. If you hold physical items, it is unlikely the algorithms will separate them from the player, and this will result in wrong tracking.
I guess doing something like that will force you into writing the computer vision algorithms yourself - and that is far from trivial (unless CV is the focus of your work, of course).
How about clothing? Is there any type/color of clothing users can wear to help the Kinect track better? I see some Dance Revolution videos where the users wear black suits.. wonder if there's any sense to that.
**and yes, I envision designing Kinect-optimized outfits with sponsor logos and all**
I've tried ---
- blue boxing gloves (fail.. your explanation above applies here, I guess)
- red hand wraps (seem to track better)
- black hand wraps (kinda complicate tracking a bit)
- TV remote (surprisingly does work a bit better.. seems to work with Fruit Ninja)
Thanks again in advance!!
In order for the depth computation to work well, you need a matte surface that reflects the near-IR light pattern projected by the PrimeSensor. Since it's not visible light, it's hard to tell from a material's visible color whether it reflects the pattern nicely. To be sure, you can switch to the IR stream using OpenNI, and you will see the actual image captured by the IR sensor, before depth is generated (see the sketch at the end of this reply).
Aside from the spectrum, the surface shape and texture might destroy the pattern if it's too wrinkled (imagine what happens if you project an image onto hair or fur), or if the object is smaller than the pattern itself (like your fingertips when you are 10 feet away).
Lastly, the CV algorithms prefer tight clothes, so they won't get confused by skirts, sleeves, jackets, etc.
Your best bet could be bright, tight clothes without shiny parts.
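Something like this should dump the raw IR frame with the OpenNI 1.x C++ wrapper. I'm writing the calls from memory, so double check them against your OpenNI headers; also, as far as I remember, on PrimeSense hardware the IR and depth streams don't run together, so create the IR node on its own.

```cpp
#include <XnCppWrapper.h>  // OpenNI 1.x C++ API
#include <cstdio>

int main() {
    xn::Context context;
    if (context.Init() != XN_STATUS_OK) return 1;

    // Create an IR node instead of (not alongside) the depth node.
    xn::IRGenerator ir;
    if (ir.Create(context) != XN_STATUS_OK) return 1;

    context.StartGeneratingAll();
    context.WaitOneUpdateAll(ir);    // block until a fresh IR frame arrives

    xn::IRMetaData md;
    ir.GetMetaData(md);
    const XnIRPixel* pixels = md.Data();  // 16-bit IR intensity per pixel
    std::printf("IR frame %ux%u, center intensity = %u\n",
                md.XRes(), md.YRes(),
                (unsigned)pixels[(md.YRes() / 2) * md.XRes() + md.XRes() / 2]);

    context.Release();
    return 0;
}
```

A bright, steady intensity at the spot where your material sits means it reflects the pattern well.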
Will take note of that. Thanks again :)
Hi. I can't find AngryBotsNI anywhere. The OpenNI Arena link doesn't work. Could you please provide an alternate download link?