As the iPhone and iPad have amply demonstrated, much of Apple’s current hardware depends on accurate detection of direct touch inputs — a finger resting against a screen, or in the Mac’s case, on a trackpad. But as people come to rely on augmented reality for work and entertainment, they’ll need to interact with digital objects that aren’t equipped with physical touch sensors. Today, Apple has patented a key technique to detect touch using depth-mapping cameras and machine learning.
By patent standards, Apple’s depth-based touch detection system is fairly straightforward: External cameras work together in a live environment to create a three-dimensional depth map, measuring the distance of an object — say, a finger — from a touchable surface and then determining when the object touches that surface. Critically, the distance measurement is designed to remain usable even when the cameras change position, relying in part on a trained machine learning model to discern touch inputs.
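The patent doesn’t publish any reference code, but the core idea is easy to sketch. The Swift snippet below is a minimal, hypothetical illustration — not Apple’s implementation — assuming you already have two samples from the depth map: the camera-to-fingertip distance and the camera-to-surface distance at the same point. Every name here (`DepthTouchDetector`, `touchThreshold`, and so on) is invented for the example.

```swift
/// A minimal, hypothetical sketch of depth-based touch detection.
/// Not Apple's patented implementation; the names and the threshold
/// value are illustrative assumptions.
struct DepthTouchDetector {
    /// Gap (in meters) below which a fingertip counts as "touching".
    /// In the patented system this decision is informed in part by a
    /// trained machine learning model rather than a fixed constant.
    let touchThreshold: Float

    /// Classifies a single frame: is the fingertip touching the surface?
    /// - Parameters:
    ///   - fingertipDepth: camera-to-fingertip distance from the depth map (meters)
    ///   - surfaceDepth: camera-to-surface distance at the same point (meters)
    func isTouching(fingertipDepth: Float, surfaceDepth: Float) -> Bool {
        // Because both samples come from the same depth map, their
        // difference is the fingertip-to-surface gap. That relative gap
        // stays meaningful even if the camera itself moves.
        let gap = abs(surfaceDepth - fingertipDepth)
        return gap < touchThreshold
    }
}

let detector = DepthTouchDetector(touchThreshold: 0.005) // 5 mm, an assumed value

// Simulated depth samples for one frame.
let fingertip: Float = 0.412  // 41.2 cm from the camera
let surface: Float = 0.415    // 41.5 cm from the camera

print(detector.isTouching(fingertipDepth: fingertip, surfaceDepth: surface)) // true
```

The key design point the sketch captures is that the system compares two depths from the same map rather than tracking absolute positions, which is why the measurement can tolerate the cameras moving around.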
You can read more about the patent here (via VentureBeat).