Apple is moving closer to making hand gestures a practical input method across its device ecosystem, including future Macs, MacBooks, iPhones, iPads, and Vision products. The company has filed a new U.S. patent application detailing how its systems could distinguish between gesture input and peripheral use, so gestures register only when the user actually intends them.
Smarter Detection of Intent
The patent outlines methods to analyze hand poses to determine whether a user intends to gesture or is simply using a peripheral device like a keyboard or mouse. If a hand is typing or aligned with a flat surface, the system interprets this as peripheral use. In that case, gesture recognition is disabled to prevent false triggers.
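The decision the patent describes can be sketched as a simple classifier over hand-pose signals. This is an illustrative sketch only: the signal names and the rule itself are assumptions, not Apple's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class HandPose:
    is_typing: bool                  # fingers in a typing posture
    aligned_with_surface: bool       # hand flat against a desk/table
    touching_peripheral: bool        # in contact with a keyboard or mouse

def hand_mode(pose: HandPose) -> str:
    """Classify a tracked hand as 'peripheral' or 'gesture' use."""
    if pose.is_typing or pose.aligned_with_surface or pose.touching_peripheral:
        # Peripheral use: gesture recognition is disabled to avoid false triggers.
        return "peripheral"
    # Free hand: pinch/swipe input is allowed.
    return "gesture"
```

Under this sketch, a hand resting on a keyboard is classified as peripheral use even if its pose momentarily resembles a gesture.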
On the other hand, if the hand is free and not engaged with any device, it may be in a “gesture use mode,” allowing input like pinch or swipe gestures to control on-screen elements. Apple also introduces a two-phase gesture model: the system first recognizes the gesture, then either executes the action or cancels it if a conflicting peripheral event follows immediately.
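The two-phase model amounts to recognizing a gesture first and committing it only if no conflicting peripheral event arrives shortly afterward. A minimal sketch, assuming a hypothetical cancellation window (the window length and class names are illustrative, not from the patent):

```python
CANCEL_WINDOW = 0.2  # seconds; hypothetical cancellation window

class GestureArbiter:
    """Phase 1 recognizes and holds a gesture; phase 2 commits or cancels it."""

    def __init__(self):
        self.pending = None  # (gesture_name, recognized_at)

    def recognize(self, gesture: str, now: float) -> None:
        """Phase 1: hold the gesture rather than executing it immediately."""
        self.pending = (gesture, now)

    def peripheral_event(self, now: float) -> None:
        """A keystroke or mouse event inside the window cancels the gesture."""
        if self.pending and now - self.pending[1] <= CANCEL_WINDOW:
            self.pending = None  # the hand was really using a peripheral

    def commit(self, now: float):
        """Phase 2: execute the gesture once the window passes conflict-free."""
        if self.pending and now - self.pending[1] > CANCEL_WINDOW:
            gesture, _ = self.pending
            self.pending = None
            return gesture
        return None
```

For example, a pinch followed 100 ms later by a keystroke would be cancelled, while a pinch with no peripheral activity in the window would execute.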
Gaze Tracking Integration
To refine gesture interpretation, Apple’s system may integrate gaze tracking. By analyzing where a user is looking, it can determine if a gesture is intended for a specific on-screen target. This data could also train neural networks to improve hand mode prediction, making the system more reliable and responsive.
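One way gaze data could bind a gesture to a specific on-screen target is nearest-neighbor matching around the gaze point. The radius threshold and element names below are assumptions for illustration, not details from the filing:

```python
import math

GAZE_RADIUS = 50.0  # points; illustrative threshold for a plausible target

def gaze_target(gaze_xy, elements):
    """Return the on-screen element closest to the gaze point within
    GAZE_RADIUS, or None if the gesture has no plausible target."""
    best, best_dist = None, GAZE_RADIUS
    for name, (x, y) in elements.items():
        dist = math.hypot(x - gaze_xy[0], y - gaze_xy[1])
        if dist <= best_dist:
            best, best_dist = name, dist
    return best
```

A pinch made while looking near a button would then act on that button; a pinch with no element near the gaze point could be ignored or handled globally.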
According to Patently Apple, this work builds on years of development, dating back to Apple’s 2013 acquisition of PrimeSense, the Israeli company behind the original motion-sensing tech. Apple’s recent filings suggest the company is now integrating this advanced input system deeper into its core devices.