Mobile and PC user interfaces have moved beyond the keyboard and mouse to touch and non-touch interfaces. Capacitive touch interfaces are the norm on smartphones and tablets. Non-touch interfaces are the newer technology, and they will be found in more devices in 2014 (see “Consumer Electronics Take User Interfaces Beyond Your Fingertips” at electronicdesign.com).
3D imaging is one of the non-touch interfaces that is having a major impact even as other 3D technologies such as 3D printers are also taking off. Hollywood, for example, is using dual cameras to make major motion pictures in 3D (see “Prometheus Takes Flight With Cutting-Edge VFX Technology” at electronicdesign.com). The cameras are expensive, and so is the mechanical and electronic support hardware. Low-cost, dual-camera solutions are available, but they tend to be more expensive than the two other 3D imaging technologies.
Top Technologies
Microsoft’s original Kinect sensor for the Xbox 360 gaming console uses one of these 3D imaging technologies, developed by PrimeSense (see “How Microsoft’s PrimeSense-based Kinect Really Works” at electronicdesign.com). The PrimeSense system emits an infrared pattern that is read by a 2D infrared sensor. A system-on-chip (SoC) analyzes the distortion of the pattern reflected back to the sensor to generate the 3D depth field.
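For a feel of the math involved, here is a minimal Python sketch of the structured-light idea: measure how far the projected pattern has shifted in the live infrared image relative to a reference capture, then triangulate depth from that shift. The constants, block-matching search, and reference-plane geometry are illustrative assumptions, not PrimeSense’s actual algorithm or calibration.

```python
import numpy as np

# Illustrative constants; not PrimeSense's actual calibration values
FOCAL_LENGTH_PX = 580.0    # IR camera focal length, pixels
BASELINE_M = 0.075         # emitter-to-sensor baseline, meters
REF_DEPTH_M = 2.0          # distance of the reference-plane capture, meters
BLOCK = 16                 # correlation block size, pixels
MAX_SHIFT = 24             # largest pattern shift searched, pixels

def pattern_shift(live_ir, reference_ir, row, col):
    """Find how far the projected dot pattern has shifted horizontally
    for one block, using a sum-of-absolute-differences search."""
    patch = live_ir[row:row + BLOCK, col:col + BLOCK].astype(np.float32)
    best_shift, best_score = 0, np.inf
    for shift in range(-MAX_SHIFT, MAX_SHIFT + 1):
        c = col + shift
        if c < 0 or c + BLOCK > reference_ir.shape[1]:
            continue
        ref = reference_ir[row:row + BLOCK, c:c + BLOCK].astype(np.float32)
        score = float(np.abs(patch - ref).sum())
        if score < best_score:
            best_score, best_shift = score, shift
    return best_shift

def depth_from_shift(shift_px):
    """Triangulate depth from the pattern shift relative to the reference
    plane: 1/z = 1/z_ref + shift / (f * b). A positive shift means the
    surface is closer than the reference plane."""
    return 1.0 / (1.0 / REF_DEPTH_M + shift_px / (FOCAL_LENGTH_PX * BASELINE_M))
```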
The other 3D imaging technology measures the time it takes light from an emitter to reflect off the scene and return to the sensor (see “Time-Of-Flight 3D Coming To A Device Near You” at electronicdesign.com). This is what SoftKinetic’s DS311 3D sensor does (Fig. 1). Microsoft’s Kinect 2 for the Xbox One uses the time-of-flight approach (see “XBox One And PlayStation 4 Look More Alike” at electronicdesign.com).
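The range calculation itself is simple geometry. The Python sketch below, with assumed modulation values, shows both the direct (pulsed) form and the continuous-wave (phase) form commonly used with demodulating pixels such as CAPD; it is not SoftKinetic’s or Microsoft’s implementation.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def pulse_tof_distance(round_trip_s):
    """Direct time-of-flight: light travels out and back,
    so distance is half the round trip."""
    return C * round_trip_s / 2.0

def phase_tof_distance(phase_rad, mod_freq_hz):
    """Continuous-wave time-of-flight: the phase delay of a modulated
    IR signal encodes distance, wrapping at the ambiguity range."""
    ambiguity_range = C / (2.0 * mod_freq_hz)   # maximum unambiguous distance
    return (phase_rad / (2.0 * math.pi)) * ambiguity_range

# Example: a 30-MHz modulation (an assumed value) gives an unambiguous
# range of about 5 m, in line with the few-meter working ranges quoted
# for this class of sensor.
print(phase_tof_distance(math.pi / 2, 30e6))   # ~1.25 m
```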
Developers have access to SoftKinetic’s DepthSense technology, which the DS311 uses. The module is available to OEMs, so embedded developers can incorporate non-touch 3D sensing into their applications (Fig. 2).
This technology is also inside Creative Technology’s Senz3D (Fig. 3). SoftKinetic has licensed its DepthSense pixel technology to Melexis and Texas Instruments (TI). The 3D Current Assisted Photonic Demodulator (CAPD) sensor is available from SoftKinetic’s partners.
The DS311 combines a 2D color camera with the 3D sensing system to deliver a color image with 3D depth information. The image recognition software also can use the color information to identify and track objects from balls to arms. The DS311 incorporates dual microphones as well to support video conferencing and recording. Video and image recognition can operate at speeds up to 60 frames/s.
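A rough idea of how depth and color can work together: the toy Python sketch below gates pixels by depth to isolate an object in the working range, then reports its centroid and average color for a tracker to follow. It assumes the depth map and color frame are already registered, and it is illustrative only, not SoftKinetic’s recognition pipeline.

```python
import numpy as np

def track_object(depth_m, color_rgb, near=0.15, far=1.0):
    """Toy fusion of a depth map (H x W, meters) and a registered color
    frame (H x W x 3): keep only pixels inside the working range, then
    return the object's image-plane centroid, mean color, and mean depth."""
    mask = (depth_m > near) & (depth_m < far)      # depth gate
    if not mask.any():
        return None                                # nothing in range
    rows, cols = np.nonzero(mask)
    centroid = (rows.mean(), cols.mean())          # image-plane position
    mean_color = color_rgb[mask].mean(axis=0)      # average RGB of the object
    mean_depth = depth_m[mask].mean()              # distance to the object
    return centroid, mean_color, mean_depth
```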
The main difference between the DS311 and the Senz3D is that the DS311 handles hand and finger recognition at a range of 0.15 to 1 m in addition to body recognition at a range of 1.5 to 4.5 m. Designed for hand and finger gesture recognition, the Senz3D replaces a laptop HD camera. The DS311 uses a brighter infrared emitter that supports both ranges, although only one mode can be active at a time. SoftKinetic’s technology and the Senz3D form the reference design for Intel’s Perceptual Computing software development kit (SDK).
Body movement and gesture recognition software are part of the SDK. SoftKinetic provides this technology as well. The hand and finger gesture recognition addresses Senz3D-style deployments. Future laptops and tablets will have this hardware built in. Users will be able to swipe by gesturing in the air instead of touching the screen. Body movement and gesture recognition is geared more toward gaming platforms and applications like Microsoft’s Xbox.
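As a rough illustration of how an in-air swipe might be recognized, the Python sketch below thresholds the speed and travel of a tracked hand position. The thresholds and frame rate are assumptions for the example, not values from Intel’s or SoftKinetic’s SDKs.

```python
def detect_swipe(hand_x_history, frame_rate_hz=60.0,
                 min_speed_m_s=1.0, min_travel_m=0.20):
    """Toy swipe detector: given recent horizontal hand positions (meters)
    reported by a 3D sensor, return 'left' or 'right' when the hand moves
    far enough and fast enough, else None."""
    if len(hand_x_history) < 2:
        return None
    travel = hand_x_history[-1] - hand_x_history[0]
    duration = (len(hand_x_history) - 1) / frame_rate_hz
    speed = abs(travel) / duration
    if abs(travel) >= min_travel_m and speed >= min_speed_m_s:
        return "right" if travel > 0 else "left"
    return None

# Example: ~0.3 m of travel over 10 frames at 60 frames/s -> a fast swipe.
positions = [0.00, 0.03, 0.07, 0.11, 0.15, 0.19, 0.23, 0.26, 0.28, 0.30]
print(detect_swipe(positions))   # "right"
```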
Mobile device application control and gaming are just two applications for 3D time-of-flight sensors and software. 3D scanning for 3D printing is another. They are also useful for robotic applications that need to locate, identify, and avoid objects.
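For 3D scanning and robotics, the depth map is typically back-projected into a point cloud. The Python sketch below uses a generic pinhole camera model; the intrinsics (fx, fy, cx, cy) are placeholders that would normally come from the sensor’s calibration, and the obstacle check is a simplified example rather than any vendor’s API.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth map (H x W, meters) into 3D points using a
    pinhole camera model. Pixels with no depth reading (0) are dropped."""
    h, w = depth_m.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

def nearest_obstacle(points, max_range_m=4.5):
    """Simple robotics use: distance to the closest valid point in range."""
    dists = np.linalg.norm(points, axis=1)
    dists = dists[dists < max_range_m]
    return dists.min() if dists.size else None
```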