SignSense Documentation

System Requirements

Hardware requirements:

  • Processor: Dual-core 2.66 GHz or faster

  • Memory: 2 GB minimum, more preferred

  • Graphics: DX10 capable graphics card, 1680 x 1050 minimum resolution

  • Dedicated USB 2.0 bus

  • Kinect for Windows sensor

Recording Gestures

The gesture recorder receives the data stream from the attached sensor and processes the model into a compact sequence of data points. Recording can be done in two modes: “Seated” and “Standing”. In seated mode the body can be close to the sensor, and only upper body joint positions are processed. The Length field shows the number of individual frames in the sequence. The Status field shows the sensor status. Recording happens at the frame rate imposed by the sensor; for Kinect for Windows it is 20 frames per second. The Smooth parameter reduces jitter of the point positions (select “Small”) by averaging values in adjacent frames.
Recorded gestures are added to the drop-down list in the upper left corner of the main UI. Recording a new gesture creates a list entry with the new name plus an appended YYYYMD timestamp. Gesture names can be changed later by using the “Save As” function to save the gesture under a desired name. Each gesture is stored as a file with the .REC extension in the “\SignSense” folder inside the user's “My Documents” location. Once the needed set of gestures is captured, work with the Editor can start: visualizing and trimming the parts needed for recognition patterns.


Recording Tips

  • It is OK to start recording at any moment: only the frames with body movement get recorded.

  • After recording, the beginning and the end of the record will contain unneeded footage of the user entering or leaving the capture position. Those parts can be cut away in editing.

  • Perform a gesture several times. Later you will be able to see which take is best and cut everything else away using the Editor.

  • When moving, make small pauses at positions where the trajectory line makes an angle. Avoid fast movements.

  • Stand in front of the sensor at a distance of about 3 meters. Ideally the sensor should be placed at eye level.

  • You can adjust the sensor angle by manually tilting the sensor head, or by changing the Sensor Tilt Angle in the Record dialog (the value is in degrees).

Gesture Editing

Once a gesture is recorded, it can be viewed in detail for visual analysis and for selecting the most useful gesture fragment on which to establish a recognition pattern. The editing facilities are provided in the “Editor” tab of the SignSense utility.

Skeleton points

The human body in the central part of the UI represents the skeleton model, consisting of bones delimited by points. Gesture tracking works by collecting the positions of those points and translating human body movements through their changes. In this window, any number of skeleton point check-boxes can be selected, and those points will be traced in the output window to the left. Typically a gesture is a limited effort of a certain part of the body; for example, a “wave” gesture involves the palms, wrists and elbows, while other body parts either stay static or make minor reflected movements. Visualizing and editing exactly the relevant pattern in a gesture is the task of the Editor.
Multi select mode is intended solely for visualization of any number of skeleton points. It can be used to watch the trajectories and projections of any number of points. To define a recognition pattern and generate the code for gesture match detection, use one of the three following modes:
Single select enforces that only one skeleton point is selected at a time. Once another skeleton point is selected, the selection is cleared from the previously selected one. Unchecking and re-checking the “Single select” check-box clears all previously selected points. The “Single select” mode is currently required by the analyzer for generating a recognition pattern.
Line select allows selection of a single skeleton point and draws a line to the symmetrical point on the other side of the skeleton. The recognition pattern will be based on the length and spatial angle of the created line. This mode is suitable for any gestures, including circles and round curves.
Body select is a particular case of “Line select”, but the line connects the user-selected skeleton point to the “shoulder center” point. This is the body's center of mass and the most stable point of the human body, which makes it a good reference for relating the movement of body parts.


Show options

Show skeleton switches the visibility of skeleton points (yellow balls) on and off.
Show Axis Plane controls the two planes in the output window: the dark red vertical plane (Wall) and the green horizontal plane (Floor). These planes provide cues about how the body and joints are positioned and move. For convenience, they can be shown or hidden.
Show Projection toggles white trails of the selected skeleton points as they are cast onto the Wall and Floor planes. This helps in understanding the skeleton movement by adding a flat view of the amplitude of movements. In particular, studying projections can give a good clue that some bones in a gesture move mostly horizontally or mostly vertically.
Show Arm shows the line participating in gesture detection in the Line Select and Body Select modes.

Time table

Below the skeleton view there is a scale with a red line and two markers at both ends. This is the gesture Player, which contains the sequence of frames representing the recorded gesture. The Player scale allows you to play back a gesture, or to select an arbitrary position on the scale and go frame-by-frame. The rectangular selector can be dragged to select the desired frame position. The Prev and Next buttons advance the frame backwards and forwards. The field with numbers, “0 : 143” in the example, shows “current frame : total frames”. Loop mode in the player engages continuous playback of the selected part or the entire recording. By default, in “Single Play” mode, each press of the “Play” button plays once to the end.


The scale markers delimit the red line, which is the “selected” part of the gesture sequence. The markers can be slid along the scale with the mouse. After they are set at the desired positions, the “Trim” button with scissors cuts away the trailing ends before the “starting” marker on the left and after the “ending” marker on the right, keeping the selected area. The field with numbers, “73 – 153” in the example, shows the “begin – end” frame numbers of the selected part. After editing, the active gesture in the “Gestures” drop-down list gets the “Changed” status in the text box after the gesture name. The changed gesture can now be saved.

Trajectory options

While working with a gesture in the editor, the following visualization modes are available.
Show all trajectory – Shows the whole trajectory contained in the timetable for the selected data points. The entire trajectory is shown as a yellow line, while the selection set by the “begin” and “end” markers on the timetable is shown in red.
Show stepped trajectory – The “Stepped” view splits the trajectory into interleaved yellow and red pieces. The yellow pieces are the basis for gesture analysis and recognition; the recognition algorithm takes them into account to catch the matching gesture and trigger a recognition event.
Show selected trajectory – Limits the trajectory line to the boundary set by the “begin” and “end” markers on the timetable.
Hide trajectory – Set this choice to clear the trajectory view.
Show play trajectory – A useful option that shows the path of skeleton point movement as it advances in time. The trajectory is essentially the trail that the moving skeleton point leaves, as if it were a painting brush.

Gesture Analyzer

The area in the upper right corner is populated with lines showing the phases of the gesture as deduced by the mathematical processing of the SignSense utility. A gesture is presented as a sequence of paths with calculated dimensions and angles. Each fragment is presented as a separate line in the analysis window. Selecting one line sets the markers on the timeline and highlights the trajectory in the body model window, so that the selected fragment is red and the remainder is yellow.
This representation creates the recognition pattern that matches the gesture when the skeleton points in question demonstrate a similar movement trajectory. How close the match for a gesture must be can be controlled by adjusting the spatial roughness (Sphere) and temporal roughness (Pause) parameters. Increasing those values causes the Analyzer to apply wider margins for dimensional parameters and to split the gesture into fewer phases. The Sphere parameter influences geometric precision by splitting the paths of the gesture into a smaller or larger number of pieces (Spheres), while Pause relates to the timing aspect. The trajectory detection fragments can be fine-tuned by selecting the Type of movement among the XY, Z and XYZ modes. Each mode defines which axes are involved in the trajectory match for that fragment.
Once the gesture is recorded and the needed edits are done, the recognition pattern can be generated, together with the fragment of C# source code responsible for triggering events in the user project on a gesture match. The Scatter parameter controls the overall precision of how close a match must be for detection. Higher values mean less precision; lower values require a closer match to the exact record. The default value is 40%.
The Time break gesture parameter defines when a halted gesture should be dismissed: after the given delay with no further movement, the incomplete gesture will no longer complete. It is meant for situations when the user may start drawing another gesture, to avoid a match against the previously incomplete one.

1. Select the target gesture in the “Gestures” list.
2. In the Gesture properties tab showing the skeleton model, check the “Single Select”, “Line Select” or “Body Select” box.
3. Check one or more skeleton point boxes you wish to use as the basis of recognition.
4. Lines with recognition parameters should appear. Adjust spatial roughness (“Sphere”) and temporal roughness (“Pause”) if needed.
5. Check the boxes near each line and notice the code appearing in the right bottom window. The Test button brings up a Recognizer dialog where gesture detection can be tested.
6. Press the To clipboard button. C# code is generated and placed in the clipboard buffer. Select a place in your source project and press “Control+V”, or use the mouse context menu and select “Paste”.
7. The C# code is now inside your project. The first line sets the target event, which will be raised by the SignSense run-time component when it detects the defined gesture.
8. Customize the event name and bind it to any custom function, or implement any logic of your choice for how your project should react to the gesture.
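The exact generated fragment depends on the gesture, but its general shape can be sketched as follows. This is an illustration only: the event name OnWaveDetected, the InitGestures method and the lambda handler are hypothetical placeholders, not the literal generator output; only the signSenseGestureManager type comes from this manual.

```csharp
// Hypothetical sketch of a pasted trigger fragment; names are placeholders.
private signSenseGestureManager _manager = new signSenseGestureManager();

// The first line of the generated code sets the target event that the
// SignSense run-time raises when the gesture is matched.
public event EventHandler OnWaveDetected;

private void InitGestures()
{
    // Step 8: bind the event to your own logic.
    OnWaveDetected += (sender, e) =>
    {
        Console.WriteLine("Wave gesture detected");
    };
}
```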

Note: code generation is functional in the Full version of the product. The Trial version produces mock-up output.


Useful hint: when recording gestures, move your body at moderate speed and make short stops at positions where the trajectory makes sharp angles. This establishes a clearer base for the analyzer to work with. Later on, the gesture can be performed at a faster speed once detection is working.
Important note: when using multiple gesture triggers, bear in mind that two gestures may seem to be detected one shortly after another. This can happen because one gesture is treated as a continuation of another, previously detected one. For example, consider the gestures “Z letter” and “triangle”. When doing the “Z” gesture, its ending part practically coincides with the triangle shape, minus one finishing slope line. Therefore, just moving the hand to the position where you intend to start drawing a triangle may result in detection of a triangle, because part of it was already drawn as the letter Z. To avoid this, define more distinct gestures or add a pause at the beginning of a gesture you wish to separate.

API Utilization

To utilize the features of SignSense, include ProjectKinect.dll and GestureLibrary.dll as references in the project and add the following using directives:
using GestureLibrary.Api;
using GestureLibrary.senseGesture;
using Microsoft.Kinect;
using sense.KinectLibrary.Api;
using sense.KinectLibrary.Impl;

Note that you do not need to distribute *.rec files with your project; they are meant only for convenience during testing and debugging.

Gesture Triggers

For detection of a gesture, include the code that appeared in the Generator window into the part of your project responsible for initialization. This tunes the SignSense API to look for the gestures in question.
Bear in mind that you only need one instance of the signSenseGestureManager object. In generated code the line “private signSenseGestureManager _manager = new signSenseGestureManager();” appears for each gesture; however, when you define several gestures, only the first instance of the definition is needed, and each further gesture trigger will use the same object.
Release the sensor cleanly on application shutdown via a _driver.Stop(); call, as demonstrated in the example, to avoid initialization problems on subsequent runs.
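Putting these two rules together, an initialization and shutdown skeleton might look like the following sketch. The method names InitAllGestures and OnShutdown are illustrative, and _driver refers to the driver object from the generated example:

```csharp
// Sketch: one shared manager for all gesture triggers.
private signSenseGestureManager _manager = new signSenseGestureManager();

private void InitAllGestures()
{
    // Paste the generated fragment for each gesture here, but keep only
    // the first _manager declaration; further triggers reuse the object.
}

private void OnShutdown()
{
    // Clean sensor release, so the next run initializes without problems.
    _driver.Stop();
}
```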

Steady State

One particular type of gesture is a Pose: a static position of the body with no movement. In the current Beta release there is no dedicated API for detecting a steady state. However, if you wish to catch a certain orientation of the body, you can bind only the Pause element of the desired skeleton point as a gesture. In this case an event will be sent to the code whenever there is no movement of the selected point. The duration of the steady state is defined by the first parameter of the signSensePause() API call, which is roughly a number of milliseconds. Once the steady state has triggered an event, the signSenseAngleXY() or signSenseGetJoint() API functions can be used to find the position of any skeleton points or joint angles in Polling API mode.
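As a sketch of this approach, the handler for a steady-state event could poll the held position. The handler name OnPoseHeld and the raised-hand check are hypothetical; the Pause element binding itself comes from the generated recognition pattern, and _manager is the signSenseGestureManager instance:

```csharp
// Illustrative handler invoked when the bound Pause element fires,
// i.e. the selected skeleton point has not moved for the set duration.
private void OnPoseHeld(object sender, EventArgs e)
{
    // Poll the current wrist position (0 = the most recent moment).
    IJoint joint = _manager.signSenseGetJoint(senseJointType.WristRight, 0);
    int posY = (int)(1000 * joint.Y);   // roughly millimeters
    // Decide which pose this is, e.g. hand held above the sensor origin.
    bool handRaised = posY > 0;
}
```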

Polling API

It is possible to fetch the coordinates of skeleton points and joint orientations in polling mode by calling the SignSense framework for their angles, lengths and coordinates. Recognition mode (catching a gesture) and Polling mode work independently and can be used in parallel in the custom project.
Get Position API sample
In this example, position values are obtained roughly in millimeters (after multiplying by 1000). The origin of the coordinate system is the position of the sensor. If you need coordinates of skeleton points relative to the player's body itself, the best way is to fetch the coordinates of the skeleton's center of mass point (ShoulderCenter) and adjust the coordinates of the target skeleton point by subtracting the center values from them.
// Initialize framework object
private signSenseGestureManager _manager = new signSenseGestureManager();
// Work on WristRight joint and set 0 time as the most recent moment
IJoint joint = _manager.signSenseGetJoint(senseJointType.WristRight, 0);
// Get individual positions
int posX = (int)(1000 * joint.X);
int posY = (int)(1000 * joint.Y);
int posZ = (int)(1000 * joint.Z);
String positionText = "X:" + posX + " Y:" + posY + " Z:" + posZ;
// -----------------------------
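Building on the hint above, body-relative coordinates can be obtained by subtracting the ShoulderCenter position. This is a sketch that assumes ShoulderCenter is a member of senseJointType, matching the joint name used in this manual:

```csharp
// Fetch the body center of mass and the target point at the same moment.
IJoint center = _manager.signSenseGetJoint(senseJointType.ShoulderCenter, 0);
IJoint wrist  = _manager.signSenseGetJoint(senseJointType.WristRight, 0);

// Wrist position relative to the body, roughly in millimeters,
// independent of where the player stands in front of the sensor.
int relX = (int)(1000 * (wrist.X - center.X));
int relY = (int)(1000 * (wrist.Y - center.Y));
int relZ = (int)(1000 * (wrist.Z - center.Z));
```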

Get Angle & Length API sample
This is the example for getting the angle and length between ElbowRight and WristRight joints.
// Initialize framework object
private signSenseGestureManager _manager = new signSenseGestureManager();
// Prepare variables
float angle = 0;
float len = 0;
// Fetch values to angle and length variables at 0 time as the most recent moment
_manager.signSenseAngleXY(senseJointType.ElbowRight, senseJointType.WristRight, 0, ref angle, ref len);
String info = "angle: " + angle.ToString("F0") + " len:" + len.ToString("F0");
// -----------------------------

Sample application

The “SampleRecognize” source code project included in the package is a simple example application. It contains several gesture detection events and demonstrates a simple way of integrating the SignSense framework into a custom project. You can find the example in the “\Sample” folder under the SignSense Studio installation folder. A ready-to-launch executable is in the “Sample\bin\Debug” folder.
Sample demonstrates how to efficiently initialize the sensor, establish gesture detection triggers and do position polling.
The sample application involves detection of the following gestures. Please open the links to see the exact animations showing how to perform the gestures.
(Use right click -> “Open in new window”.)

  • Letter Z drawn by right hand
  • Square drawn by right hand
  • Upward pointing triangle
  • Step with the right knee moving up

The “Start Position Polling” button engages a mode in which the right hand position is mapped to a virtual pointer moving on the screen. Initially, the code waits for the user to provide a reference arm length by calibrating the distance between the “shoulder center” and “wrist right” skeleton points. For the first 5 seconds the message “Calibrating…” appears, and the user is expected to stand in a “T-pose” in the sensor view (stand straight with arms spread to the sides). The application fetches the maximum distance between the joints during the calibration loop and then calculates the ratio between sensor coordinates and screen pixels. After that, a red dot appears on the screen and follows the movement of the right arm.
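The calibration step can be sketched as follows. This is not the sample's literal code: the screenWidth variable, the half-screen mapping and the loop structure are assumptions, and only signSenseAngleXY comes from the Polling API described earlier.

```csharp
// During the 5-second "Calibrating..." phase, track the maximum
// shoulder-center-to-wrist distance seen while the user holds a T-pose.
float maxArmLen = 0f;
float angle = 0f, len = 0f;
// ... executed repeatedly inside the calibration loop:
_manager.signSenseAngleXY(senseJointType.ShoulderCenter,
                          senseJointType.WristRight, 0, ref angle, ref len);
if (len > maxArmLen) maxArmLen = len;

// After calibration: scale factor between sensor units and screen pixels,
// assuming the full arm length should map to about half the screen width.
float ratio = (screenWidth / 2f) / maxArmLen;
```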