
WO2016044807A1 - Systems and methods for using hover information to predict touch locations and reduce or eliminate touchdown latency - Google Patents

Systems and methods for using hover information to predict touch locations and reduce or eliminate touchdown latency

Info

Publication number
WO2016044807A1
Authority
WO
WIPO (PCT)
Prior art keywords
user input
current user
input according
touch
event
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2015/051085
Other languages
English (en)
Inventor
Clifton Forlines
Ricardo Jorge Jota COSTA
Daniel Wigdor
Karan Singh
Haijun XIA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tactual Labs Co
Original Assignee
Tactual Labs Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tactual Labs Co
Publication of WO2016044807A1
Anticipated expiration
Legal status: Ceased (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416 Control or interface arrangements specially adapted for digitisers
    • G06F3/0418 Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416 Control or interface arrangements specially adapted for digitisers
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00 Computing arrangements based on specific mathematical models
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/041 Indexing scheme relating to G06F3/041 - G06F3/045
    • G06F2203/04101 2.5D-digitiser, i.e. digitiser detecting the X/Y position of the input means, finger or stylus, also when it does not touch, but is proximate to the digitiser's interaction surface and also measures the distance of the input means within a short range in the Z direction, possibly with a separate measurement setup

Definitions

  • the present invention relates in general to the field of user input, and in particular to systems and methods that include a facility for predicting user input.
  • FIG. 1 shows a side view illustrating phases of a touch motion above a touch surface.
  • FIG. 2 shows a graph illustrating an overlay of all the pre-touch approaches to a northwest target. The blue rectangle represents the interactive surface used in the study.
  • FIG. 3 shows a graph illustrating a side view overlay of all trials, normalized to start and end positions.
  • FIG. 4 shows a trajectory for the eight directions of movement, normalized to start at the same location (center).
  • FIG. 5 shows a graph illustrating final finger approach, as seen from the approaching direction.
  • FIG. 6 shows a graph illustrating trajectory prediction for line, parabola, circle and vertical fits. Future points of the actual trajectory (dots) fit a parabola best.
  • FIG. 7 shows a graph illustrating final finger approach, as seen from the side of the approaching direction.
  • FIG. 8 shows a graph illustrating a parabola fitted in the drop-down plane with (1) an initial point, (2) the angle of movement, and (3) an intersection orthogonal to the display.
  • FIG. 9 shows a graph illustrating a preference curve for each observed trend and average latency preference for all participants.
  • FIG. 10 shows a state diagram illustrating transitions between three states of touch input that model the starting and stopping of actions, based on prediction input.
  • a method for reducing the perceived latency of touch input by employing a model to predict touch events before the finger reaches the touch surface is proposed.
  • a corpus of 3D finger movement data was collected and used to develop a model that predicts at three granularities during different phases of movement: initial direction, final touch location, and time of touchdown.
  • the model predicts the location and time of a touch. Parameters of the model are tuned to the latency of the device to maximize accuracy while guaranteeing performance.
  • a user study of different levels of latency reveals a strong preference for touchdown feedback with imperceptible latency.
  • a form of 'soft' feedback is disclosed, as well as other performance-enhancing uses for this prediction model.
  • a system for reducing or eliminating the apparent latency of an interactive system.
  • apparent latency is defined as the time between an input and the system's soft feedback to that input, which serves only to show a quick response to the user (e.g., pointer movement, UI buttons being depressed), as distinct from the time required to show the hard feedback of an application actually responding to that same input.
  • Methods are disclosed herein for eliminating the apparent latency of tapping actions on a large touchscreen through the development and use of a model of finger movement.
  • the model is used to track the path of a user's finger as it approaches the display and predict the location and time of its landing.
  • the method then signals the application of the impending touch so that it can pre-buffer its response to the touchdown event.
  • a visual response to the touch is triggered at the predicted point before the finger lands on the screen.
  • the timing of the trigger is tuned to the system's processing and display latency, so the feedback is shown to the user at the moment they touch the display. The result is an improvement in the apparent latency as touch and feedback occur simultaneously.
  • a number of sensing techniques have been employed to detect the position of the user prior to touching a display.
  • hover sensing is often simulated using optical tracking tools such as the Vicon motion capture system, as we have done in this work.
  • this approach requires the user to wear or hold objects augmented with markers, and it requires stationary cameras to be deployed.
  • markerless hover sensing has been demonstrated using optical techniques, including through the use of an array of time-of-flight based range finders as well as stereo and optical cameras.
  • Non-optical tracking has also been demonstrated using a number of technologies.
  • acoustic-based sensors such as the "Flock of Birds" tracking employed by Fitzmaurice et al., which enables six degrees of freedom (DOF) position and orientation sensing of physical handheld objects.
  • DOF: degrees of freedom
  • EMR: electro-magnetic resonance
  • EMR is commonly used to track the position and orientation of styli in relation to a digitizer, and employed in creating pen-based user input. Although typically limited to a small range beyond the digitizer in commercial applications, tracking with EMR has been used in much larger volumes.
  • touch sensors employed today are based on projective capacitance. Fundamentally, the technique is capable of sensing the user's presence centimeters away from the digitizer, as is done with the Theremin. In practice, such sensors are augmented with a ground plane, purposefully added to eliminate their ability to detect a user's finger prior to touch. More recently, sensors have been further augmented to detect not only the user's finger above the device, but also its distance from the digitizer.
  • Ng et al. studied the user perception of latency for touch input. For dragging actions with a direct touch device, users were able to detect latency levels as low as 6ms. Jota et al. studied the user performance of latency for touch input and found that dragging task performance is affected if latency levels are above 25ms. In the present disclosure, we focus on eliminating latency of the touchdown moment when the user first touches the screen. Jota et al. found that users are unable to perceive latency of responses to tapping that occur in less than 24ms - we use prediction of touch location to provide soft touchdown feedback within this critical time, effectively eliminating perceptible latency.
  • Tasks were designed according to two independent variables: target direction (8 cardinal directions) and target distance (20.8cm and 30.1cm). The combination of these two variables produces 16 unique gestures. There were four repetitions for each combination of direction and distance. Therefore, a session included a total of 64 actions. The ordering of the trials was randomized within each session. Participants completed 3 sessions and were given a 5-minute break between sessions.
  • FIG. 2 shows an overlay of all the pre-touch approaches to a northwest target.
  • the rectangle represents the interactive surface used in the study.
  • Time & Goals: participants completed each trial with a mean movement time of 416ms (std.: 121ms).
  • Our system had an average end-to-end latency of 80ms: 70ms from the Vicon system, 8ms from the display, and 2ms of processing.
  • our goal was to remove at least 56ms via prediction. Applying our work to other systems will require additional tuning.
  • FIG. 3 shows that all the trajectories have one peak, with a constant climb before and a constant decline after. However, we did not find the peak to be at the same place across trajectories. Instead, the majority of trajectories are asymmetrical: 2.2% have a peak before 30% of the total path, 47.9% have a peak between 30-50% of the total path, 47.1% have a peak between 50-70% of the total path, and 2.8% have a peak after 80% of the total path.
  • the trajectory divides into three phases: lift-off, which is characterized by a positive change in height;
  • continuation, which begins as the user's finger starts to dip vertically; and
  • drop-down, the final plunge towards the screen.
  • FIG. 4 shows trajectory for the eight directions of movement, normalized to start at the same location (center).
  • the solid lines represent the straight-line approach to each target.
  • liftoff direction: as might be expected, the direction of movement of the user's hand above the plane of the screen is roughly co-linear with the target direction, as shown in the figure.
  • Fitting a straight line to this movement, the angle between that line and a straight line from the starting point to the target is, on average, 4.78°, with a standard deviation of 4.51°.
  • this information alone is sufficient to eliminate several potential touch targets.
  • FIG. 5 and FIG. 7 show the trajectory of final approach towards the screen.
  • the direction of movement in the drop-down phase roughly fits a vertical drop to the screen.
  • the final approach when viewed from the side is roughly parabolic. It is clear when examining FIG. 7 that a curve, constrained to intersect on a normal to the plane, will provide a rough fit.
  • a parabola constrained to intersect the screen at a normal, and fit to the hover path, would provide the best fit.
  • Lift-off begins with a user lifting a finger off the touch surface and ends at the highest point of the trajectory (peak). As discussed above, this often ends before the user has reached the halfway point towards their desired target. As is also described, the direction of movement along the plane of the screen can be used to coarsely predict a line along which their intended target is likely to fall. At this early stage, our model provides this line, allowing elimination of targets outside of its bounds (a sketch of this pruning step appears after this list).
  • FIG. 8 shows the parabola fitted in the drop-down plane with (1) an initial point, (2) the angle of movement, and (3) an intersection orthogonal to the display.
  • This parabola is constrained as follows: (1) the plane is fit to the (nearly planar) drop-down trajectory of the touch; (2) the position of the finger at the time of the fit is on the parabola; (3) the angle of movement at the time of the fit is made a tangent to the parabola; (4) the angle of intersection with the display is orthogonal.
  • once the parabola is fit to the data and constrained by these parameters, its intersection with the display comprises the predicted touch point.
  • the fit is made when the drop-down phase begins. This is characterized by two conditions: (1) the finger's proximity to the screen; and (2) the angle of movement relative to the xy plane exceeding a threshold.
  • the landing point in this plane is defined by the intersection of the constrained parabola with the display; a closed-form derivation is sketched after this list.
  • the timing of this phase is tuned based on the overall latency of the system, including that of the hover sensor: the later the prediction is made, the more accurate it will be, but the less time will be available for the system to respond.
  • the goal is to tune the system so that the prediction reaches the application early enough for it to respond immediately, and to have its response shown on the screen at the precise moment the user touches (a timing sketch follows this list).
  • Tasks were designed according to three independent variables: target direction (8 cardinal directions), target distance (25.5cm, 32.4cm, and 39.4cm), and target size (1.6cm, 2.1cm, and 2.6cm). The combination of these three variables produces 72 unique tasks.
  • the order of target size and distance was randomized, with target direction always starting with the south position, and going clockwise for each combination of target size and distance. Participants completed 3 sessions and were given a break after each session.
  • Prediction 1: On average, the final touch point was within 4.25° of the straight-line prediction provided by our model (std.: 4.61°). On average, this was made available 186ms (mean; std.: 77ms) before the user touched the display. We found no significant effect for target size, direction, or distance on prediction accuracy.
  • the finger was, on average, 2.87cm (std.: 1.37cm) away from the display when the prediction was made.
  • Prediction 2: the model is able to predict the touch location, on average, 128ms (std.: 63ms) before touching the display, allowing us to significantly reduce latency. We found no significant effect for target size, direction, or distance on prediction accuracy.
  • Prediction 3: On average, our model predicted the time of the touch within 1.6ms (std.: 20.7ms). This prediction was made, on average, 49ms before the touch was made (std.: 38ms). We found no significant effect for target size, direction, or distance on prediction accuracy.
  • participants were shown a screen with two buttons, each with a different response latency. Before tapping each button once, they were asked to touch and hold a visible starting point until audio feedback, which would occur randomly between 0.7 and 1.0 seconds later, was given. They were then asked to indicate which button they preferred.
  • Tasks were designed with one independent variable, response latency. To limit combinatorial explosion, we decided to provide widget feedback under five different conditions: immediately as a finger prediction is made (0ms after prediction), and then with latencies of 40, 80, 120, and 160ms artificially added to the predicted time, resulting in 10 unique pairs of latency. To remove any possible preference for buttons placed to the left or right, we also flipped the order of the buttons, resulting in 20 total pairs. The ordering of the 20 pairs was randomized within each session. Latency level was also randomly generated. Participants completed 7 sessions of 20 pairs and were given a 1-minute break between sessions, for a total of 2240 trials.
  • the response time is calculated by artificially adding some latency (between 0 and 160ms) to the time of prediction.
  • for the touch time, we take the moment the Surface detected the touch and subtract a known Surface latency of 137ms.
  • the effective latency is the difference between the response time and the touch time.
  • Visible Latency: Four participants preferred visible latency. When asked about the feeling of immediate response, they expressed that they were not yet confident regarding the predictive model and felt that an immediate response wasn't indicative of a successful recognition. Visible latency gave them a feeling of being in control of the system and, therefore, they preferred it to immediate response. This was true even for trials where prediction was employed.
  • touch-controlled applications execute arbitrary application logic in response to input.
  • a 128-200ms prediction horizon provides system designers with the intriguing possibility of kicking off time-consuming programmatic responses to input before the input occurs.
  • a web-browser coupled with our input prediction model would gain a 128-200ms head-start on loading linked pages.
  • Recent analysis has suggested that the median web-page loading time for desktop systems is 2.45s. As such, a head-start could represent a 5-8% improvement in page loading time, without increasing bandwidth usage or server burden.
  • Similar examples include the loading of launched applications and the caching of the contents of a directory.
  • FIG. 10 shows this model: in State 1, related actions can be issued by the input system as predictions (direction, location, and time) of a possible action are received.
  • if the predicted touch never occurs, the input system will stop all actions.
  • if the actual touch target turns out not to be the predicted one, the system may also stop all actions, but this will not add extra latency compared to the traditional three-state model.
  • if the touch sensor confirms the predicted action, the latency of the touch sensor, network, rendering, and all related parts of the procedure will be reduced (a minimal sketch of such a speculative controller follows this list).
  • Our prediction model is not constrained to only solving latency.
  • the approach is rich in motion data and can be used to enrich many UIs.
  • the velocity of a finger can be mapped to pressure, or the approach direction can be mapped to different gestures.
  • Equally important, perhaps, is the possibility to predict when a finger is leaving a display but not landing again inside the interaction surface, effectively indicating that the user is stopping interaction. This can be useful, for example, to remove UI elements from a video application when the user is leaving the interaction region.
  • the model relies on a high fidelity 3D tracking system, currently unavailable for most commercial products.
  • a Vicon tracking system running at 120Hz was used to capture the pre-touch data.
  • because this high-frequency tracking is not realistic for most commercial products, we tested the model at 60Hz, slower than most commercial sensors.
  • although prediction is then delayed by 8ms on average, the later fit has the benefit of increased prediction accuracy, because the finger is closer to the display.
  • the model predicts tapping location when the finger is
  • HACHIStack has a sensing height of 1.05 cm above a screen with 31 ⁇ latency. Retrodepth can track hand motion in a large 3D physical input space of 30x30x30cm.
  • touch may be used to describe events or periods of time in which a user's finger, a stylus, an object or a body part is detected by the sensor. In some embodiments, these detections occur only when the user is in physical contact with a sensor, or a device in which it is embodied. In other embodiments, the sensor may be tuned to allow the detection of "touches" or "contacts" that are hovering a distance above the touch surface or otherwise separated from the touch sensitive device.
  • the term "touch event" and the word "touch," when used as a noun, include a near touch and a near touch event, or any other gesture that can be identified using a sensor.
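The sketches below are illustrative reconstructions of the techniques described in the bullets above, not code from the patent; all function names, thresholds, and parameters are assumptions. The first shows how the lift-off stage can prune candidate targets: a straight line is fit to the finger's motion in the screen (xy) plane, and targets whose bearing deviates from that line by more than a tolerance (chosen here as roughly the 4.78° mean error plus one standard deviation) are discarded.

```python
import numpy as np

def liftoff_direction(xy_samples):
    """Fit a straight line to the finger's motion in the screen (xy) plane.

    xy_samples: (N, 2) array of hover positions sampled from lift-off onward.
    Returns the start point and a unit vector for the movement direction.
    """
    xy_samples = np.asarray(xy_samples, dtype=float)
    start = xy_samples[0]
    # Principal direction of the displacements = least-squares line through them.
    _, _, vt = np.linalg.svd(xy_samples - start, full_matrices=False)
    direction = vt[0]
    # Orient the vector along the overall movement rather than against it.
    if np.dot(direction, xy_samples[-1] - start) < 0:
        direction = -direction
    return start, direction

def prune_targets(targets, start, direction, tolerance_deg=10.0):
    """Discard candidate targets whose bearing from the start point deviates
    from the fitted direction by more than tolerance_deg (an assumed cutoff)."""
    kept = []
    for target in targets:
        offset = np.asarray(target, dtype=float) - start
        distance = np.linalg.norm(offset)
        if distance == 0:
            kept.append(target)
            continue
        cos_angle = np.clip(np.dot(offset / distance, direction), -1.0, 1.0)
        if np.degrees(np.arccos(cos_angle)) <= tolerance_deg:
            kept.append(target)
    return kept
```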
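For the drop-down phase, the constrained parabola has a closed-form landing point. Writing it in the drop-down plane as z = a(s − s0)² (vertex on the display, as constraint (4) requires), and requiring the current sample (s_f, z_f) to lie on it with the current movement direction tangent to it, gives m = dz/ds = 2a(s_f − s0) and z_f = a(s_f − s0)², hence s0 = s_f − 2·z_f/m. This derivation follows from the stated constraints rather than from a formula given in the text; the drop-down plane is approximated here by the vertical plane containing the horizontal velocity, and the proximity and angle thresholds are assumptions.

```python
import math

def dropdown_started(pos, vel, z_threshold=0.05, angle_threshold_deg=45.0):
    """Heuristic start of the drop-down phase (both thresholds are illustrative
    guesses): the finger is close to the screen and dips steeply towards it."""
    speed_xy = math.hypot(vel[0], vel[1])
    descent_angle = math.degrees(math.atan2(-vel[2], speed_xy))  # > 0 when descending
    return pos[2] < z_threshold and descent_angle > angle_threshold_deg

def predict_landing(pos, vel):
    """Predict the touch location (and a rough touch time) from one sample.

    pos = (x, y, z) with z the height above the display, vel = (vx, vy, vz).
    In the drop-down plane the parabola z = a*(s - s0)**2 has its vertex (the
    landing point) on the display; with slope m = dz/ds = vz/|v_xy| at the
    current point:  z = a*(s - s0)**2  and  m = 2*a*(s - s0)  =>  s0 = s - 2*z/m
    """
    x, y, z = pos
    vx, vy, vz = vel
    if vz >= 0:
        return (x, y), None                      # not descending yet: no prediction
    speed_xy = math.hypot(vx, vy)
    if speed_xy < 1e-9:
        return (x, y), z / -vz                   # falling straight down
    ux, uy = vx / speed_xy, vy / speed_xy        # unit direction in the screen plane
    m = vz / speed_xy                            # dz/ds along that direction (< 0)
    s_travel = -2.0 * z / m                      # horizontal distance to the vertex
    landing = (x + s_travel * ux, y + s_travel * uy)
    speed = math.sqrt(speed_xy ** 2 + vz ** 2)
    t_touch = math.hypot(s_travel, z) / speed    # chord length / speed: a lower bound
    return landing, t_touch
```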
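One way to read the timing discussion above: trigger the soft touchdown feedback ahead of the predicted touch by the system's own end-to-end output latency, so the pixels change at the instant the finger lands. A minimal sketch, assuming an 80ms pipeline like the one measured in the study (the function and parameter names are not from the patent):

```python
import threading
import time

SYSTEM_LATENCY_S = 0.080   # assumed end-to-end latency (sensor + processing + display)

def schedule_soft_feedback(predicted_touch_time, show_feedback, now=time.monotonic):
    """Fire show_feedback() SYSTEM_LATENCY_S before the predicted touch time so
    that, once the pipeline has rendered it, the feedback becomes visible at the
    moment the finger actually lands."""
    delay = (predicted_touch_time - now()) - SYSTEM_LATENCY_S
    if delay <= 0:
        show_feedback()                       # already inside the latency window
    else:
        threading.Timer(delay, show_feedback).start()
```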
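The three-state model of FIG. 10 and the pre-buffering ideas (e.g. a browser getting a 128-200ms head start on loading a linked page) map naturally onto a small speculative controller: a prediction starts work, a confirming touch commits it, and a withdrawn finger or a mispredicted target rolls it back. Again a hedged sketch; the callback names are invented for illustration.

```python
from enum import Enum, auto

class TouchState(Enum):
    IDLE = auto()        # no prediction outstanding
    PREDICTED = auto()   # State 1: speculative actions may be issued
    CONFIRMED = auto()   # the predicted touch actually happened

class SpeculativeController:
    def __init__(self, start_action, cancel_action, commit_action):
        self.state = TouchState.IDLE
        self.start_action = start_action    # e.g. begin prefetching a link target
        self.cancel_action = cancel_action  # discard speculative work
        self.commit_action = commit_action  # surface the pre-buffered response
        self.predicted_target = None

    def on_prediction(self, target, touch_time):
        """Hover model predicts direction/location/time of a possible touch."""
        self.predicted_target = target
        self.state = TouchState.PREDICTED
        self.start_action(target, touch_time)

    def on_touch(self, actual_target):
        """Touch sensor reports the real touchdown."""
        if self.state is TouchState.PREDICTED and actual_target == self.predicted_target:
            self.state = TouchState.CONFIRMED
            self.commit_action(actual_target)   # sensor/network/render latency hidden
        else:
            self.cancel_action()                # wrong target: fall back to normal path
            self.state = TouchState.IDLE

    def on_no_touch(self):
        """Predicted touch never happened (finger withdrew): stop all actions."""
        if self.state is TouchState.PREDICTED:
            self.cancel_action()
        self.state = TouchState.IDLE
```

A browser integration, for instance, could pass a page-prefetch routine as start_action, a cache discard as cancel_action, and the navigation commit as commit_action.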

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Algebra (AREA)
  • Computational Linguistics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention concerns a system and method for using a touch sensing system, capable of sensing the location of a finger or object above a touch surface, to inform a touch response system in an electronic device of a predicted future user input event or of motion data before an actual touch event. A current user input is sensed via the touch sensing system and data reflecting hover information is created. A model of user interaction with a touch surface is applied to the data representing the user input to create data reflecting a prediction of a future user input event. In one embodiment, before the predicted user input event occurs, a predicted location and a predicted time at which the predicted future user input event will occur are provided to a touch response system.
PCT/US2015/051085 2014-09-18 2015-09-18 Systems and methods for using hover information to predict touch locations and reduce or eliminate touchdown latency Ceased WO2016044807A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462052323P 2014-09-18 2014-09-18
US62/052,323 2014-09-18

Publications (1)

Publication Number Publication Date
WO2016044807A1 (fr) 2016-03-24

Family

ID=55533938

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/051085 Ceased WO2016044807A1 (fr) 2014-09-18 2015-09-18 Systems and methods for using hover information to predict touch locations and reduce or eliminate touchdown latency

Country Status (2)

Country Link
US (3) US10088952B2 (fr)
WO (1) WO2016044807A1 (fr)


Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9715282B2 (en) * 2013-03-29 2017-07-25 Microsoft Technology Licensing, Llc Closing, starting, and restarting applications
CA2979658C (fr) * 2015-03-13 2024-06-11 Telefonaktiebolaget Lm Ericsson (Publ) Device designed for nomadic operation and method therefor
US9948742B1 (en) * 2015-04-30 2018-04-17 Amazon Technologies, Inc. Predictive caching of media content
US10552752B2 (en) * 2015-11-02 2020-02-04 Microsoft Technology Licensing, Llc Predictive controller for applications
US10091344B2 (en) 2016-03-28 2018-10-02 International Business Machines Corporation Displaying virtual target window on mobile device based on user intent
US10042550B2 (en) * 2016-03-28 2018-08-07 International Business Machines Corporation Displaying virtual target window on mobile device based on directional gesture
US10395412B2 (en) * 2016-12-30 2019-08-27 Microsoft Technology Licensing, Llc Morphing chart animations in a browser
US10304225B2 (en) 2016-12-30 2019-05-28 Microsoft Technology Licensing, Llc Chart-type agnostic scene graph for defining a chart
US11086498B2 (en) 2016-12-30 2021-08-10 Microsoft Technology Licensing, Llc. Server-side chart layout for interactive web application charts
WO2021117268A1 (fr) * 2019-12-13 2021-06-17 プライム・ストラテジー株式会社 Method for controlling automatic display of web content
US11354969B2 (en) * 2019-12-20 2022-06-07 Igt Touch input prediction using gesture input at gaming devices, and related devices, systems, and methods
US20210390483A1 (en) * 2020-06-10 2021-12-16 Tableau Software, LLC Interactive forecast modeling based on visualizations
EP4204929A1 (fr) * 2020-08-28 2023-07-05 Apple Inc. Detection of user-object contacts using physiological data
KR20230153417A (ko) 2021-03-03 2023-11-06 가디언 글라스, 엘엘씨 Systems and/or methods for generating and detecting changes in electric fields
US11620019B1 (en) * 2021-10-25 2023-04-04 Amazon Technologies, Inc. Adaptive predictions of contact points on a screen
JP7284844B1 (ja) * 2022-03-08 2023-05-31 レノボ・シンガポール・プライベート・リミテッド Information processing device and control method
TWI843345B (zh) * 2022-12-20 2024-05-21 大陸商北京集創北方科技股份有限公司 Lagrangian touch trajectory fitting method, touch module, and information processing device


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011170834A (ja) * 2010-01-19 2011-09-01 Sony Corp Information processing device, operation prediction method, and operation prediction program
US20140198062A1 (en) 2011-03-01 2014-07-17 Printechnologics Gmbh Input Element for Operating a Touch-Screen
US20140240242A1 (en) * 2013-02-26 2014-08-28 Honeywell International Inc. System and method for interacting with a touch screen interface utilizing a hover gesture controller

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120169646A1 (en) * 2010-12-29 2012-07-05 Microsoft Corporation Touch event anticipation in a computing device
US20130181908A1 (en) * 2012-01-13 2013-07-18 Microsoft Corporation Predictive compensation for a latency of an input device
US20130222329A1 (en) * 2012-02-29 2013-08-29 Lars-Johan Olof LARSBY Graphical user interface interaction on a touch-sensitive device
US20140143692A1 (en) * 2012-10-05 2014-05-22 Tactual Labs Co. Hybrid systems and methods for low-latency user input processing and feedback
US20140198052A1 (en) * 2013-01-11 2014-07-17 Sony Mobile Communications Inc. Device and method for touch detection on a display panel

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018004757A1 (fr) * 2016-06-29 2018-01-04 Google Llc Hover touch input compensation in augmented and/or virtual reality
US10353478B2 (en) 2016-06-29 2019-07-16 Google Llc Hover touch input compensation in augmented and/or virtual reality
US10353493B2 (en) 2016-09-30 2019-07-16 Microsoft Technology Licensing, Llc Apparatus and method of pen detection at a digitizer

Also Published As

Publication number Publication date
US20160188112A1 (en) 2016-06-30
US10088952B2 (en) 2018-10-02
US10592050B2 (en) 2020-03-17
US10592049B2 (en) 2020-03-17
US20180292946A1 (en) 2018-10-11
US20180292945A1 (en) 2018-10-11

Similar Documents

Publication Publication Date Title
US10592050B2 (en) Systems and methods for using hover information to predict touch locations and reduce or eliminate touchdown latency
Xia et al. Zero-latency tapping: using hover information to predict touch locations and eliminate touchdown latency
US11853477B2 (en) Zonal gaze driven interaction
Lystbæk et al. Exploring gaze for assisting freehand selection-based text entry in ar
US8842084B2 (en) Gesture-based object manipulation methods and devices
US8466934B2 (en) Touchscreen interface
US8407606B1 (en) Allocating control among inputs concurrently engaging an object displayed on a multi-touch device
US9477324B2 (en) Gesture processing
US9891821B2 (en) Method for controlling a control region of a computerized device from a touchpad
US9696867B2 (en) Dynamic user interactions for display control and identifying dominant gestures
CN107665042B (zh) Enhanced virtual touchpad and touchscreen
TWI569171B (zh) Gesture recognition
US20160364138A1 (en) Front touchscreen and back touchpad operated user interface employing semi-persistent button groups
US20170017393A1 (en) Method for controlling interactive objects from a touchpad of a computerized device
US20150134572A1 (en) Systems and methods for providing response to user input information about state changes and predicting future user input
KR20140035358A (ko) Gaze-assisted computer interface
WO2014113454A1 (fr) Dynamic free-space user interactions for machine control
WO2010032268A2 (fr) System and method for controlling graphical objects
US20140210704A1 (en) Gesture recognizing and controlling method and device thereof
US9958946B2 (en) Switching input rails without a release command in a natural user interface
Ens et al. Characterizing user performance with assisted direct off-screen pointing
WO2015013662A1 (fr) Method for controlling a virtual keyboard from a touchpad of a computerized device
US20150268734A1 (en) Gesture recognition method for motion sensing detector
US20170199566A1 (en) Gaze based prediction device and method
WO2015089451A1 (fr) Method for detecting user gestures from different touchpads of a handheld computerized device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15841279

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15841279

Country of ref document: EP

Kind code of ref document: A1