There are many advantages to using gesture control in place of capacitive touch-sensing control or more traditional control devices such as buttons or knobs. When wearing gloves, capacitive touch panels will not work, but infra-red gesture control will. While both sensing technologies work in two dimensions (x and y), gesture control adds a third dimension: users can move their hand towards and away from the sensor to execute control.
In applications where safety could be compromised by requiring the user to locate a specific knob or push a specific button, gesture control lets the user operate the machine within a wide active field, where gross movements replace fine ones, giving a simpler and more intuitive interface with the machine.
But how can gesture control be implemented? This Design Note describes a reference design, developed by Vishay, which may provide a blueprint for a wide range of end-product implementations.
The foundation of Vishay’s Gesture Control Sensor Board is the VCNL4020 proximity and ambient-light sensor. Two of Vishay’s VSMF2890RGX01 IR diodes are mounted, one on either side of the sensor, as shown in Figure 2. They are placed on the reverse side of the PCB from the VCNL4020 sensor to minimize cross-talk between the diodes and the sensor. The emitters’ high typical radiant intensity of 80 mW/sr at 200 mA allows hand gestures to be detected at up to 25 cm above the sensor board.
Principle of operation
The detection of gestures is accomplished by comparing the IR signals coming from the two emitters. The emitted IR light is reflected from an object, for example a hand, and detected by the VCNL4020 proximity sensor. In order to differentiate between the signals from the two emitters, the emitters are multiplexed so that only one is pulsed at a time.
The proximity signal is then read out between each pulse via an I2C bus interface. When a hand is in proximity to the board, it will reflect more light from the emitter it is directly located over. If a hand is moved across the board, the reflected-light signal from one emitter will increase before the signal associated with the other emitter.
This time difference of signal strength is analyzed to determine whether a swipe was made and in which direction.
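As an illustration of how this multiplexed readout might be driven from a host microcontroller or single-board computer, the sketch below uses Python and the smbus2 library. The register addresses, bit masks and the select_emitter() helper are assumptions made for this example, not the demo kit’s actual firmware, and should be checked against the VCNL4020 datasheet and the board schematic.

```python
# Minimal acquisition sketch: alternate the two IR emitters and read the
# proximity count between pulses over I2C. Constants below are assumptions.
import time
from smbus2 import SMBus

VCNL4020_ADDR   = 0x13  # assumed 7-bit I2C address of the VCNL4020
REG_COMMAND     = 0x80  # assumed command register
REG_PROX_RESULT = 0x87  # assumed proximity result register, high byte (low byte at 0x88)
PROX_OD         = 0x08  # assumed "proximity on-demand" trigger bit
PROX_RDY        = 0x20  # assumed "proximity data ready" flag

def select_emitter(channel: int) -> None:
    """Hypothetical helper: enable only the left (0) or right (1) IR emitter,
    for example via a GPIO-controlled LED driver. Entirely board-specific."""
    pass

def read_proximity(bus: SMBus) -> int:
    """Trigger a single on-demand proximity measurement and return the 16-bit count."""
    bus.write_byte_data(VCNL4020_ADDR, REG_COMMAND, PROX_OD)
    while not bus.read_byte_data(VCNL4020_ADDR, REG_COMMAND) & PROX_RDY:
        time.sleep(0.0005)
    high, low = bus.read_i2c_block_data(VCNL4020_ADDR, REG_PROX_RESULT, 2)
    return (high << 8) | low

def measurement_cycle(bus: SMBus) -> tuple:
    """One measurement cycle: one proximity reading per emitter, taken back to back."""
    samples = []
    for channel in (0, 1):
        select_emitter(channel)
        samples.append(read_proximity(bus))
    return tuple(samples)  # (left_count, right_count)
```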
The detection of a gesture calls for two main processes:
• the acquisition and preparation of the raw data to be analyzed
• the interpretation of this raw data by the detection algorithm.
If a gesture is recognized, a corresponding event is triggered.
The data used for gesture recognition are acquired in separate data streams, one per emitter. Because the emitters are multiplexed, with only one emitter on at a time, the measurements are staggered. One measurement cycle consists of one set of data from each channel.
The measurements are made in quick succession, so the time between the individual proximity measurements of the two emitters is much shorter than the time required to complete a full measurement cycle, as shown in Figure 3. This means the system can compare the two emitter signals almost in real time.
If the jumps between measurements are too erratic, for example when the sensor is in a noisy environment, several measurements per emitter can be made in each measurement cycle and then averaged, as sketched below. This slows down the measurement rate, but it results in a cleaner signal. This variable can be adjusted in the demo software provided with the board.
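One possible way to implement this averaging, reusing the read helpers sketched earlier; the parameter name samples_per_channel is illustrative, not the name used in the demo software.

```python
def measurement_cycle_averaged(bus, samples_per_channel: int = 4) -> tuple:
    """One measurement cycle with several readings per emitter, averaged per channel."""
    averaged = []
    for channel in (0, 1):
        select_emitter(channel)
        readings = [read_proximity(bus) for _ in range(samples_per_channel)]
        averaged.append(sum(readings) / samples_per_channel)
    return tuple(averaged)  # (left_mean, right_mean)
```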
To be able to analyze and compare the data streams of the two emitter channels with one another, the data of each stream are split into frames of ‘n’ measurements. Each frame contains the most recent n measurements, and with each iteration of the measurement cycle the frame slides forward by one sample. The recognition algorithm analyzes each frame, and once a frame is found to contain a gesture, the next few frames are ignored in order to avoid detecting the same gesture twice.
Furthermore, by ignoring several frames after a valid gesture has been detected, it is possible to reduce false positives from hand movements in the sensor’s field of view that were not intended as a gesture. This variable can be adjusted in the demo software.
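A minimal sketch of this framing and hold-off logic is shown below. The frame length and the number of ignored frames are placeholder values, and detect_gesture() and handle_gesture() stand in for the detection algorithm described next and for an application-specific event handler.

```python
# Sliding frames of n measurements per channel, with a hold-off that skips
# several frames after a detected gesture.
from collections import deque

FRAME_LEN      = 20  # n measurements per frame (illustrative value)
HOLDOFF_FRAMES = 10  # frames ignored after a detected gesture (illustrative value)

left_frame  = deque(maxlen=FRAME_LEN)   # most recent n left-channel counts
right_frame = deque(maxlen=FRAME_LEN)   # most recent n right-channel counts
holdoff = 0

def on_measurement_cycle(left: float, right: float) -> None:
    """Feed one measurement cycle into the sliding frames and run the detector."""
    global holdoff
    left_frame.append(left)
    right_frame.append(right)
    if len(left_frame) < FRAME_LEN:
        return                        # frames not yet full
    if holdoff > 0:
        holdoff -= 1                  # still ignoring frames after the last gesture
        return
    gesture = detect_gesture(list(left_frame), list(right_frame))
    if gesture is not None:
        holdoff = HOLDOFF_FRAMES      # avoid detecting the same swipe twice
        handle_gesture(gesture)       # hypothetical application event handler
```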
Each frame of the acquired signal is analyzed for two parameters: the standard deviation of each signal and the time delay between signals. By comparing the results of this analysis to user-defined thresholds, the algorithm can tell whether a gesture was made, and which kind of gesture it was.
The standard deviation is a measure of the spread of the data within the frame being analyzed. It is calculated using the following formula:

σ = √[ (1/n) · Σᵢ (xᵢ − x̄)² ],  i = 1 … n

where xᵢ is the i-th proximity count in the frame, x̄ is the mean of the frame and n is the number of measurements per frame.
A low standard deviation implies there is no change in the signal and there is either no hand in the sensor’s detection area or the hand is held steady over the sensor and no swipe or push gesture is being made. A high standard deviation implies a large change in the signal, suggesting the movement of a hand across or towards the sensor.
The detection algorithm analyzes the frame for the remaining parameters only if the standard deviation of the signal exceeds a set threshold.
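Expressed as code, this gate might look like the following; the threshold value is a placeholder to be tuned per application, as discussed below.

```python
# Standard-deviation gate: only frames with enough signal spread are analyzed further.
import numpy as np

STD_THRESHOLD = 15.0  # proximity counts; placeholder, adjustable in the demo software

def frame_is_active(frame) -> bool:
    """True if the frame shows enough spread to be worth analyzing further."""
    return float(np.std(frame)) > STD_THRESHOLD
```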
The presence of a sufficient time delay between the signals signifies that a swipe gesture has been made. In the detection algorithm implemented in the demo kit, the time delay is found by measuring the cross correlation between the two signals.
The cross-correlation effectively shifts one of the signals over the other and computes the degree of overlap between them at each shift. By noting the shift at which the cross-correlation reaches its maximum, the time delay between the two signals can be estimated.
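A common way to implement this is a discrete cross-correlation over the frame, as sketched below with numpy. Whether a positive lag corresponds to a left or a right swipe depends on the board orientation and on which emitter feeds which channel, so the sign mapping here is an assumption.

```python
# Estimate the delay between the two channel signals from the peak of their
# cross-correlation (mean-removed to suppress the static offset).
import numpy as np

def estimate_delay(left, right) -> int:
    """Return the lag (in measurement cycles) at which the cross-correlation peaks.
    With this convention, a positive lag means the left signal lags the right one."""
    l = np.asarray(left, dtype=float)  - np.mean(left)
    r = np.asarray(right, dtype=float) - np.mean(right)
    xcorr = np.correlate(l, r, mode="full")        # lags from -(n-1) to +(n-1)
    lags  = np.arange(-(len(r) - 1), len(l))
    return int(lags[np.argmax(xcorr)])
```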
Both a time-delay threshold and the standard-deviation threshold can be adjusted in the demo software, enabling the designer to test and fine-tune them for any given application.
Setting the thresholds
The standard deviation is a measure of the spread of the data within each frame. The movement of the hand over the gesture board will increase the standard deviation of both signals. The detection algorithm will only scan the frames for a time delay between the signals if the standard deviation exceeds a threshold. This threshold should be set higher than any observed signal noise. The further away from the sensor a gesture is performed, the smaller the change in standard deviation, as fewer counts are detected.
The gesture detection range increases with a lower standard deviation threshold, as a smaller change in the signal is required to exceed the threshold.
The time-delay threshold determines the minimum time delay the gesture algorithm must detect before it executes a gesture event, with the sign of the delay indicating whether it was a left swipe (negative delay) or a right swipe (positive delay), as shown in Figure 4. The detectable time difference depends on the measurement rate: at a lower measurement rate (a longer time between measurements) the time resolution of the measurements decreases, and so the time-delay threshold must be decreased accordingly.
The width of the object used to perform the gesture should also be taken into consideration. A thinner object will lead to a higher detected time delay between signals, whereas a wider object will lead to a lower detected time delay.
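Putting the two thresholds together, a frame-level classifier consistent with the description above might look like this. The delay threshold value is illustrative, and the mapping of the delay’s sign to a left or right swipe assumes a particular channel-to-emitter wiring.

```python
# Combine the standard-deviation gate and the time-delay threshold into a
# single per-frame decision, using the helpers sketched above.
DELAY_THRESHOLD = 2  # minimum |delay| in measurement cycles (illustrative value)

def detect_gesture(left, right):
    """Return 'left', 'right', or None for the current pair of frames."""
    if not (frame_is_active(left) and frame_is_active(right)):
        return None                       # too little movement: no gesture
    delay = estimate_delay(left, right)
    if abs(delay) < DELAY_THRESHOLD:
        return None                       # signals effectively simultaneous
    # Negative delay -> left swipe, positive -> right swipe (wiring-dependent).
    return "right" if delay > 0 else "left"
```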
A proven blueprint for gesture control
The implementation described here is both sensitive and robust, and provides for the design of a reliable gesture-control system with the use of components from Vishay that are readily available from Future Electronics.