SignSense® Gesture Studio aims to make access to gesture control and programming quick and easy for game developers and infotainment application developers.
Since the creation of the first computers, there has been a continuous evolution of methods to interact with them. From switches to keyboards, and from mice to touch screens, interactivity has become more natural and more human. Interactive technologies continue to evolve rapidly, creating new ways of human-machine interaction. Gesture recognition has become a reality and is on the verge of becoming a familiar way of working. The leading platform in gesture-sensing hardware, Microsoft Kinect, is a powerful tool for seeing and sensing the human body in three dimensions. However, its output is raw data, and programming gesture control with standard Kinect access alone is a laborious and demanding task, requiring time and expertise. SignSense® Gesture Studio is a plug-in for the Kinect SDK that enhances and simplifies application adaptation for gesture control, providing a recorder utility and an application run-time layer for the required sensor operations.
- Capture and play back your own custom gestures
- Visualize trajectory trails frame-by-frame or in movie mode
- Edit, trim and clone recorded gestures
- Test gesture recognition
- Generate C# code loops to detect selected gestures in your project
- Use ready-made basic gestures available out of the box
- Dashboard UI to view gesture trajectories in 3D
- Example application with source code
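To give a feel for the generated C# detection loops mentioned above, here is a rough sketch of what such a loop could look like. All type, method and file names (`GestureRuntime`, `LoadGesture`, `IsGestureDetected`, `SwipeLeft.gst`) are illustrative assumptions, not the actual SignSense API; consult the code the tool generates for your project.

```csharp
// Hypothetical sketch of a generated C# polling loop.
// Every SignSense name below is an assumption for illustration only.
using SignSense.Runtime; // assumed plug-in namespace

class SwipeDetector
{
    static void Main()
    {
        var runtime = new GestureRuntime();   // assumed runtime entry point
        runtime.LoadGesture("SwipeLeft.gst"); // assumed recorded-gesture file
        runtime.Start();

        while (true)
        {
            // Poll the runtime each frame for a match against the loaded gesture.
            if (runtime.IsGestureDetected("SwipeLeft"))
            {
                System.Console.WriteLine("Swipe left detected");
            }
        }
    }
}
```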
How does it work?
The SignSense Gesture Studio plug-in supports Kinect for Windows and provides simplified access to the human-body model and skeleton points. It hooks into the sensor data feed and filters out the meaningful parameters, translating them into data for skeleton points, poses and gestures. It allows gesture recording and playback in two selectable modes: full-body and seated. Data processing occurs in SignSense's runtime component, which works in the background, is controlled by user code via its Public API, and provides skeleton and gesture data either in polling mode or by triggering events on desired gestures.
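As a sketch of the event-driven style described above, the following hypothetical C# fragment subscribes to a gesture event instead of polling. The names (`GestureRuntime`, `TrackingMode.Seated`, `GestureDetected`, `e.Name`) are assumptions for illustration, not the documented SignSense API.

```csharp
// Hypothetical illustration of event-driven gesture handling.
// All SignSense names here are assumptions for illustration only.
using SignSense.Runtime; // assumed plug-in namespace

class EventDemo
{
    static void Main()
    {
        // Assumed constructor argument selecting the seated tracking mode.
        var runtime = new GestureRuntime(TrackingMode.Seated);

        // Event style: the runtime raises an event when a desired gesture fires,
        // so user code reacts instead of polling every frame.
        runtime.GestureDetected += (sender, e) =>
            System.Console.WriteLine("Gesture: " + e.Name);

        runtime.Start();
        System.Console.ReadLine(); // keep the process alive while events arrive
    }
}
```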