
USRE50598E1 - Artificial reality system having a sliding menu - Google Patents

Artificial reality system having a sliding menu

Info

Publication number
USRE50598E1
Authority
US
United States
Prior art keywords
menu
hand
gesture
artificial reality
reality system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US18/095,946
Inventor
Jonathan Ravasz
Jasper Stevens
Adam Tibor Varga
Etienne Pinchon
Simon Charles Tickner
Jennifer Lynn Spurlock
Kyle Eric Sorge-Toomey
Robert Ellis
Barrett Fox
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Meta Platforms Technologies LLC
Original Assignee
Meta Platforms Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Meta Platforms Technologies LLC filed Critical Meta Platforms Technologies LLC
Priority to US18/095,946 priority Critical patent/USRE50598E1/en
Application granted granted Critical
Publication of USRE50598E1 publication Critical patent/USRE50598E1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Definitions

  • This disclosure generally relates to artificial reality systems, such as virtual reality, mixed reality, and/or augmented reality systems, and more particularly, to user interfaces of artificial reality systems.
  • artificial reality systems are becoming increasingly ubiquitous with applications in many fields such as computer gaming, health and safety, industrial, and education. As a few examples, artificial reality systems are being incorporated into mobile devices, gaming consoles, personal computers, movie theaters, and theme parks. In general, artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof.
  • Typical artificial reality systems include one or more devices for rendering and displaying content to users.
  • an artificial reality system may incorporate a head mounted display (HMD) worn by a user and configured to output artificial reality content to the user.
  • the artificial reality content may include completely-generated content or generated content combined with captured content (e.g., real-world video and/or images).
  • the user typically interacts with the artificial reality system to select content, launch applications or otherwise configure the system.
  • this disclosure describes artificial reality systems and, more specifically, graphical user interface elements and techniques for presenting and controlling the user interface elements within an artificial reality environment.
  • artificial reality systems are described that generate and render graphical user interface elements for display to a user in response to detection of one or more pre-defined gestures by the user, such as particular motions, configurations, positions, and/or orientations of the user's hands, fingers, thumbs or arms, or a combination of pre-defined gestures.
  • the artificial reality system may further trigger generation and rendering of the graphical user interface elements in response to detection of particular gestures in combination with other conditions, such as the position and orientation of the particular gestures in a physical environment relative to a current field of view of the user, which may be determined by real-time gaze tracking of the user, or relative to a pose of an HMD worn by the user.
  • the artificial reality system may generate and present the graphical user interface elements as overlay elements with respect to the artificial reality content currently being rendered within the display of the artificial reality system.
  • the graphical user interface elements may, for example, be a graphical user interface, such as a menu or sub-menu with which the user interacts to operate the artificial reality system, or individual graphical user interface elements selectable and manipulatable by a user, such as toggle elements, drop-down elements, menu selection elements, two-dimensional or three-dimensional shapes, graphical input keys or keyboards, content display windows and the like.
  • a technical problem with some HMDs is the lack of input devices that can be used to interact with aspects of the artificial reality system, for example, to position a selection user interface element within a menu.
  • the artificial reality system can use both hands of a user to provide user interaction with menus or icons.
  • a technical problem with this type of interaction is that one hand can occlude the other hand, making it difficult for the artificial reality system to accurately determine the intent of the user.
  • some users may have a disability that may prevent them from using both hands to interact with the artificial reality system.
  • some aspects include a menu that can be activated and interacted with using one hand.
  • in response to detecting a menu activation gesture of a hand, the artificial reality system may cause a menu to be rendered.
  • a menu sliding gesture (e.g., horizontal motion) of the hand may be used to cause a slidably engageable user interface (UI) element to move in one dimension of the menu.
  • motion of the hand substantially orthogonal to the menu sliding gesture may cause the menu to be repositioned.
  • the implementation of the artificial reality system does not require use of both hands or use of other input devices in order to interact with the artificial reality system and thus this technical improvement over conventional artificial reality implementations may provide one or more practical applications, such as providing ease of use, providing the ability for persons with disabilities related to the use of one hand to interact with the system, and the ability to accurately determine user interaction with a menu or other user interface elements.
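The one-handed interaction just described can be made concrete with a short sketch. This is only an illustration of the described behavior, not the patented implementation; the names `MenuState` and `apply_hand_motion`, and the choice of x as the sliding dimension, are assumptions.

```python
from dataclasses import dataclass


@dataclass
class MenuState:
    menu_y: float = 0.0          # menu position along the axis orthogonal to the sliding dimension
    slider_offset: float = 0.0   # position of the slidably engageable UI element along the menu


def apply_hand_motion(state: MenuState, hand_dx: float, hand_dy: float) -> MenuState:
    """Apply one frame of hand motion while the menu activation gesture is held.

    Motion along the menu's sliding dimension (x) moves the slider while the menu
    stays stationary in that direction; orthogonal motion (y) repositions the menu.
    """
    state.slider_offset += hand_dx   # menu sliding gesture, e.g., horizontal motion
    state.menu_y += hand_dy          # substantially orthogonal motion repositions the menu
    return state


if __name__ == "__main__":
    s = MenuState()
    apply_hand_motion(s, hand_dx=0.10, hand_dy=0.0)    # slide the UI element along the menu
    apply_hand_motion(s, hand_dx=0.0, hand_dy=-0.05)   # drag the whole menu downward
    print(s)  # MenuState(menu_y=-0.05, slider_offset=0.1)
```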
  • an artificial reality system includes an image capture device configured to capture image data; a head mounted device (HMD) configured to output artificial reality content; a gesture detector configured to identify, from the image data, a menu activation gesture comprising a configuration of a hand in a substantially upturned orientation of the hand and a pinching configuration of a thumb and a finger of the hand; a UI engine configured to, in response to the menu activation gesture, generate a menu interface and a slidably engageable UI element at a first position relative to the menu interface; and a rendering engine configured to render the artificial reality content, the menu interface, and the slidably engageable UI element for display at the HMD.
  • a method includes obtaining, by an artificial reality system including a head mounted device (HMD), image data via an image capture device; identifying, by the artificial reality system from the image data, a menu activation gesture, the menu activation gesture comprising a configuration of a hand in a substantially upturned orientation of the hand and a pinching configuration of a thumb and a finger of the hand; generating, by the artificial reality system in response to the menu activation gesture, a menu interface and a slidably engageable UI element at a first position relative to the menu interface; and rendering, by the artificial reality system, artificial reality content, the menu interface, and the slidably engageable UI element for display at the HMD.
  • a non-transitory, computer-readable medium comprises instructions that, when executed, cause one or more processors of an artificial reality system to capture image data via an image capture device; identify, from the image data, a menu activation gesture comprising a configuration of the hand; in response to the menu activation gesture, generate a menu interface and a slidably engageable UI element at a first position relative to the menu interface; identify, subsequent to the menu activation gesture, a menu sliding gesture comprising the configuration of the hand in combination with a motion of the hand; in response to the menu sliding gesture, translate the slidably engageable UI element to a second position relative to the menu interface; and render artificial reality content, the menu interface, and the slidably engageable UI element for display at a head mounted device (HMD).
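As a rough illustration of how the components enumerated above might fit together, the following sketch wires a hand pose derived from image data through a gesture detector, a UI engine, and a rendering engine. All class names, fields, and thresholds here are hypothetical, not the claimed system's actual interfaces.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class HandPose:
    palm_up: bool             # hand in a substantially upturned orientation
    pinch_distance_m: float   # distance between thumb tip and finger tip, in meters


class GestureDetector:
    PINCH_THRESHOLD_M = 0.02  # assumed tolerance for the pinching configuration

    def is_menu_activation(self, hand: Optional[HandPose]) -> bool:
        """Menu activation gesture: upturned hand with thumb and a finger pinched."""
        return (
            hand is not None
            and hand.palm_up
            and hand.pinch_distance_m < self.PINCH_THRESHOLD_M
        )


class UIEngine:
    def generate_menu(self) -> dict:
        # Menu interface plus a slidably engageable UI element at a first position.
        return {"items": ["Home", "Settings", "Quit"], "slider_position": 0.0}


class RenderingEngine:
    def render(self, ar_content: str, ui: Optional[dict]) -> None:
        print(f"render: {ar_content}" + (f" + {ui}" if ui else ""))


def frame(detector: GestureDetector, ui_engine: UIEngine,
          renderer: RenderingEngine, hand: Optional[HandPose]) -> None:
    ui = ui_engine.generate_menu() if detector.is_menu_activation(hand) else None
    renderer.render("artificial reality content", ui)


if __name__ == "__main__":
    frame(GestureDetector(), UIEngine(), RenderingEngine(),
          HandPose(palm_up=True, pinch_distance_m=0.01))
```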
  • FIG. 1 A is an illustration depicting an example artificial reality system that presents and controls user interface elements within an artificial reality environment in accordance with the techniques of the disclosure.
  • FIG. 1 B is an illustration depicting another example artificial reality system in accordance with the techniques of the disclosure.
  • FIG. 2 is an illustration depicting an example HMD that operates in accordance with the techniques of the disclosure.
  • FIG. 3 is a block diagram showing example implementations of a console and an HMD of the artificial reality systems of FIGS. 1 A, 1 B .
  • FIG. 4 is a block diagram depicting an example in which gesture detection and user interface generation is performed by the HMD of the artificial reality systems of FIGS. 1 A, 1 B in accordance with the techniques of the disclosure.
  • FIG. 5 is a flowchart illustrating operations of an example method for activating a menu prompt or a UI menu in accordance with aspects of the disclosure.
  • FIG. 6 is a flowchart illustrating operations of an example method for positioning and interacting with a UI menu in accordance with aspects of the disclosure.
  • FIGS. 7 A- 7 G are example HMD displays illustrating positioning and interacting with UI menus in accordance with aspects of the disclosure.
  • FIG. 8 is an example HMD display illustrating a menu prompt in accordance with aspects of the disclosure.
  • FIG. 1 A is an illustration depicting an example artificial reality system 10 that presents and controls user interface elements within an artificial reality environment in accordance with the techniques of the disclosure.
  • artificial reality system 10 generates and renders graphical user interface elements to a user 110 in response to one or more detected gestures performed by user 110 . That is, as described herein, artificial reality system 10 presents one or more graphical user interface elements 124 , 126 in response to detecting one or more particular gestures performed by user 110 , such as particular motions, configurations, locations, and/or orientations of the user's hands, fingers, thumbs or arms.
  • artificial reality system 10 presents and controls user interface elements specifically designed for user interaction and manipulation within an artificial reality environment, such as specialized toggle elements, drop-down elements, menu selection elements, graphical input keys or keyboards, content display windows and the like.
  • artificial reality system 10 includes head mounted device (HMD) 112 , console 106 and, in some examples, one or more external sensors 90 .
  • HMD 112 is typically worn by user 110 and includes an electronic display and optical assembly for presenting artificial reality content 122 to user 110 .
  • HMD 112 includes one or more sensors (e.g., accelerometers) for tracking motion of the HMD 112 and may include one or more image capture devices 138 , e.g., cameras, line scanners and the like, for capturing image data of the surrounding physical environment.
  • console 106 is shown as a single computing device, such as a gaming console, workstation, a desktop computer, or a laptop.
  • console 106 may be distributed across a plurality of computing devices, such as a distributed computing network, a data center, or a cloud computing system.
  • Console 106 , HMD 112 , and sensors 90 may, as shown in this example, be communicatively coupled via network 104 , which may be a wired or wireless network, such as WiFi, a mesh network or a short-range wireless communication medium.
  • network 104 may be a wired or wireless network, such as WiFi, a mesh network or a short-range wireless communication medium.
  • while HMD 112 is shown in this example as in communication with, e.g., tethered to or in wireless communication with, console 106 , in some implementations HMD 112 operates as a stand-alone, mobile artificial reality system.
  • artificial reality system 10 uses information captured from a real-world, 3D physical environment to render artificial reality content 122 for display to user 110 .
  • user 110 views the artificial reality content 122 constructed and rendered by an artificial reality application executing on console 106 and/or HMD 112 .
  • artificial reality content 122 may be a consumer gaming application in which user 110 is rendered as avatar 120 with one or more virtual objects 128 A, 128 B.
  • artificial reality content 122 may comprise a mixture of real-world imagery and virtual objects, e.g., mixed reality and/or augmented reality.
  • artificial reality content 122 may be, e.g., a video conferencing application, a navigation application, an educational application, training or simulation applications, or other types of applications that implement artificial reality.
  • the artificial reality application constructs artificial reality content 122 for display to user 110 by tracking and computing pose information for a frame of reference, typically a viewing perspective of HMD 112 .
  • the artificial reality application uses HMD 112 as a frame of reference, and based on a current field of view 130 as determined by a current estimated pose of HMD 112 , the artificial reality application renders 3D artificial reality content which, in some examples, may be overlaid, at least in part, upon the real-world, 3D physical environment of user 110 .
  • the artificial reality application uses sensed data received from HMD 112 , such as movement information and user commands, and, in some examples, data from any external sensors 90 , such as external cameras, to capture 3D information within the real world, physical environment, such as motion by user 110 and/or feature tracking information with respect to user 110 . Based on the sensed data, the artificial reality application determines a current pose for the frame of reference of HMD 112 and, in accordance with the current pose, renders the artificial reality content 122 .
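A toy example of using the tracked pose as the frame of reference: the sketch below expresses a world-space point in the HMD's frame, reducing the pose to a position plus a yaw angle for brevity. The function name and the simplified pose representation are assumptions, not the system's actual pose model.

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]


def world_to_hmd(point_w: Vec3, hmd_pos: Vec3, hmd_yaw_rad: float) -> Vec3:
    """Express a world-space point in the HMD's frame of reference, assuming the pose
    is reduced to a position plus a yaw rotation about the y axis (+z forward)."""
    px, py, pz = (p - h for p, h in zip(point_w, hmd_pos))
    c, s = math.cos(-hmd_yaw_rad), math.sin(-hmd_yaw_rad)
    return (c * px + s * pz, py, -s * px + c * pz)


if __name__ == "__main__":
    # A virtual object one meter ahead of the world origin, seen from an HMD at the
    # origin looking straight ahead, lands one meter in front of the user.
    print(world_to_hmd((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), 0.0))
```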
  • the artificial reality application detects gestures performed by user 110 and, in response to detecting one or more particular gestures, generates one or more user interface elements, e.g., UI menu 124 and UI element 126 , which may be overlaid on underlying artificial reality content 122 being presented to the user.
  • user interface elements 124 , 126 may be viewed as part of the artificial reality content 122 being presented to the user in the artificial reality environment.
  • artificial reality system 10 dynamically presents one or more graphical user interface elements 124 , 126 in response to detecting one or more particular gestures by user 110 , such as particular motions, configurations, positions, and/or orientations of the user's hands, fingers, thumbs or arms.
  • Example configurations of a user's hand may include a fist, one or more digits extended, the relative and/or absolute positions and orientations of one or more of the individual digits of the hand, the shape of the palm of the hand, and so forth.
  • the user interface elements may, for example, be a graphical user interface, such as a menu or sub-menu with which user 110 interacts to operate the artificial reality system, or individual user interface elements selectable and manipulatable by user 110 , such as icon elements, toggle elements, drop-down elements, menu selection elements, two-dimensional or three-dimensional shapes, graphical input keys or keyboards, content display windows and the like. While depicted as a two-dimensional element, for example, UI element 126 may be a two-dimensional or three-dimensional shape that is manipulatable by a user performing gestures to translate, scale, and/or rotate the shape in the artificial reality environment.
  • artificial reality system 10 may trigger generation and rendering of graphical user interface elements 124 , 126 in response to other conditions, such as a current state of one or more applications being executed by the system, or the position and orientation of the particular detected gestures in a physical environment in relation to a current field of view 130 of user 110 , as may be determined by real-time gaze tracking of the user, or other conditions.
  • image capture devices 138 of HMD 112 capture image data representative of objects in the real world, physical environment that are within a field of view 130 of image capture devices 138 .
  • Field of view 130 typically corresponds with the viewing perspective of HMD 112 .
  • the artificial reality application renders the portions of hand 132 of user 110 that are within field of view 130 as a virtual hand 136 within artificial reality content 122 .
  • the artificial reality application may present a real-world image of hand 132 and/or arm 134 of user 110 within artificial reality content 122 comprising mixed reality and/or augmented reality.
  • user 110 is able to view the portions of their hand 132 and/or arm 134 that are within field of view 130 as objects within artificial reality content 122 .
  • the artificial reality application may not render representations of the hand 132 or arm 134 of the user.
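For illustration, here is a minimal check of whether a tracked 3D point (for example, the hand to be rendered as a virtual hand) falls within a camera's field of view. The symmetric pyramidal FOV model and the angle values are assumptions.

```python
import math


def in_field_of_view(point_cam, h_fov_deg=90.0, v_fov_deg=70.0) -> bool:
    """Return True if a 3D point (in the camera frame, +z forward) lies inside a
    symmetric pyramidal field of view of the image capture device."""
    x, y, z = point_cam
    if z <= 0.0:                       # behind the image capture device
        return False
    half_h = math.radians(h_fov_deg) / 2.0
    half_v = math.radians(v_fov_deg) / 2.0
    return abs(math.atan2(x, z)) <= half_h and abs(math.atan2(y, z)) <= half_v


if __name__ == "__main__":
    print(in_field_of_view((0.1, -0.05, 0.5)))   # hand in front of the HMD -> True
    print(in_field_of_view((0.1, -0.05, -0.5)))  # behind the HMD -> False
```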
  • artificial reality system 10 performs object recognition within image data captured by image capture devices 138 of HMD 112 to identify hand 132 , including optionally identifying individual fingers or the thumb, and/or all or portions of arm 134 of user 110 . Further, artificial reality system 10 tracks the position, orientation, and configuration of hand 132 (optionally including particular digits of the hand) and/or portions of arm 134 over a sliding window of time. The artificial reality application analyzes any tracked motions, configurations, positions, and/or orientations of hand 132 and/or portions of arm 134 to identify one or more gestures performed by particular objects, e.g., hand 132 (including particular digits of the hand) and/or portions of arm 134 of user 110 .
  • the artificial reality application may compare the motions, configurations, positions and/or orientations of hand 132 and/or portions of arm 134 to gesture definitions stored in a gesture library of artificial reality system 10 , where each gesture in the gesture library may be mapped to one or more actions.
  • detecting movement may include tracking positions of one or more of the digits (individual fingers and thumb) of hand 132 , including whether any of a defined combination of the digits (such as an index finger and thumb) are brought together to touch or approximately touch in the physical environment.
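A sketch of the "touch or approximately touch" test, assuming the tracker provides 3D fingertip positions; the 1.5 cm tolerance is an arbitrary illustrative value.

```python
import math

TOUCH_THRESHOLD_M = 0.015  # assumed tolerance for "touch or approximately touch"


def digits_touching(tip_a, tip_b, threshold=TOUCH_THRESHOLD_M) -> bool:
    """True when two tracked digit tips (e.g., index finger and thumb) are brought
    together to touch or approximately touch in the physical environment."""
    return math.dist(tip_a, tip_b) < threshold


if __name__ == "__main__":
    thumb_tip = (0.02, 0.00, 0.40)   # meters, in the HMD frame (illustrative values)
    index_tip = (0.03, 0.01, 0.40)
    print(digits_touching(thumb_tip, index_tip))  # ~1.4 cm apart -> True
```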
  • detecting movement may include tracking an orientation of hand 132 (e.g., fingers pointing toward HMD 112 or away from HMD 112 ) and/or an orientation of arm 134 (i.e., the normal of the arm facing toward HMD 112 ) relative to the current pose of HMD 112 .
  • the position and orientation of hand 132 (or a portion thereof) may alternatively be referred to as the pose of hand 132 (or a portion thereof).
  • the artificial reality application may analyze configurations, positions, and/or orientations of hand 132 and/or arm 134 to identify a gesture that includes hand 132 and/or arm 134 being held in one or more specific configurations, positions, and/or orientations for at least a threshold period of time.
  • one or more particular positions at which hand 132 and/or arm 134 are being held substantially stationary within field of view 130 for at least a configurable period of time may be used by artificial reality system 10 as an indication that user 110 is attempting to perform a gesture intended to trigger a desired response by the artificial reality application, such as triggering display of a particular type of user interface element 124 , 126 , such as a menu.
  • one or more particular configurations of the fingers and/or palms of hand 132 and/or arm 134 being maintained within field of view 130 for at least a configurable period of time may be used by artificial reality system 10 as an indication that user 110 is attempting to perform a gesture.
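The hold-to-trigger behavior described above might look roughly like the following: a small state machine that confirms a gesture only after the hand has remained in the target configuration, substantially stationary, for a configurable period. The timing and drift thresholds are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class HoldDetector:
    """Confirm a gesture only after the hand has stayed in the target configuration,
    substantially stationary, for at least `hold_seconds` (values are assumptions)."""
    hold_seconds: float = 0.5
    max_drift_m: float = 0.03
    _anchor: Optional[Vec3] = None
    _start_t: Optional[float] = None

    def update(self, t: float, in_config: bool, hand_pos: Vec3) -> bool:
        if not in_config:
            self._anchor, self._start_t = None, None
            return False
        if self._anchor is None:
            self._anchor, self._start_t = hand_pos, t
            return False
        drift = max(abs(a - b) for a, b in zip(hand_pos, self._anchor))
        if drift > self.max_drift_m:          # moved too much: restart the hold timer
            self._anchor, self._start_t = hand_pos, t
            return False
        return (t - self._start_t) >= self.hold_seconds


if __name__ == "__main__":
    d = HoldDetector()
    for t in (0.0, 0.2, 0.4, 0.6):
        print(t, d.update(t, in_config=True, hand_pos=(0.0, 0.0, 0.5)))
    # prints False, False, False, then True once the 0.5 s hold is satisfied
```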
  • artificial reality system 10 may identify a left hand and/or arm of user 110 or both right and left hands and/or arms of user 110 . In this way, artificial reality system 10 may detect single-handed gestures performed by either hand, double-handed gestures, or arm-based gestures within the physical environment, and generate associated user interface elements in response to the detected gestures.
  • the artificial reality application determines whether an identified gesture corresponds to a gesture defined by one of a plurality of entries in a gesture library of console 106 and/or HMD 112 .
  • each of the entries in the gesture library may define a different gesture as a specific motion, configuration, position, and/or orientation of a user's hand, digit (finger or thumb) and/or arm over time, or a combination of such properties.
  • each of the defined gestures may be associated with a desired response in the form of one or more actions to be performed by the artificial reality application.
  • one or more of the defined gestures in the gesture library may trigger the generation, transformation, and/or configuration of one or more user interface elements, e.g., UI menu 124 , to be rendered and overlaid on artificial reality content 122 , where the gesture may define a location and/or orientation of UI menu 124 in artificial reality content 122 .
  • one or more of the defined gestures may indicate an interaction by user 110 with a particular user interface element, e.g., selection of UI element 126 of UI menu 124 , to trigger a change to the presented user interface, presentation of a sub-menu of the presented user interface, or the like.
  • the artificial reality application may analyze configurations, positions, and/or orientations of hand 132 and/or arm 134 to identify a menu activation gesture that includes hand 132 being held in a specific configuration and orientation for at least a threshold period of time.
  • the menu activation gesture may, for example, be a hand being held in a substantially upward position while a finger and thumb of the hand are in a pinching configuration.
  • the menu activation gesture may comprise a finger and the thumb of the hand positioned in a pinching configuration irrespective of the orientation of the hand.
  • a menu sliding gesture may cause a virtual hand that moves in accordance with the user's hand to slide along a dimension of the UI menu 124 while the menu remains stationary in the sliding direction.
  • Motion in directions other than the menu sliding gesture may cause the UI menu 124 to be repositioned based on the motion.
  • the menu sliding gesture may be motion of the user's hand 132 in a horizontal direction while maintaining the menu activation gesture.
  • the virtual hand 136 may move along the horizontal dimension while the menu remains stationary in the horizontal direction.
  • the artificial reality application generates a slidably engageable UI element (not shown in FIG. 1 ) in addition to, or alternatively to, the virtual hand 136 . Movement in the vertical direction may cause the UI menu 124 to be repositioned.
  • the menu sliding gesture while maintaining the menu activation gesture may cause the artificial reality application to render an indication that a particular menu item of the UI menu 124 would be selected if the user were to perform a selection gesture without further performing the menu sliding gesture to slide the virtual hand 132 , e.g., to a different location proximate to a different menu item of the UI menu 124 . That particular menu item is primed for selection by the user.
  • the indication may be a location of the virtual hand 132 or a slidably engageable UI element being proximate to the menu item; highlighting of the menu item with a different color, for instance; enlargement of the menu item; or some other indication.
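As one possible way to compute which menu item is primed, the sketch below picks the item nearest the slidably engageable UI element along the menu's sliding dimension; the fixed item spacing and layout are assumptions for illustration.

```python
from typing import List


def primed_item(items: List[str], slider_x: float, item_spacing: float = 0.1) -> int:
    """Return the index of the menu item nearest the slidably engageable UI element,
    i.e., the item a selection gesture would select right now. Items are assumed to
    be laid out along the sliding dimension at a fixed spacing."""
    idx = round(slider_x / item_spacing)
    return max(0, min(len(items) - 1, idx))


if __name__ == "__main__":
    menu = ["Home", "Friends", "Settings", "Quit"]
    i = primed_item(menu, slider_x=0.22)
    # Render an indication, e.g., highlight or enlarge the primed item.
    print([f"[{m}]" if k == i else m for k, m in enumerate(menu)])
```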
  • artificial reality systems as described herein may provide a high-quality artificial reality experience to a user, such as user 110 , of the artificial reality application by generating and rendering user interface elements overlaid on the artificial reality content based on detection of intuitive, yet distinctive, gestures performed by the user.
  • the techniques may provide the user with intuitive user input in the form of gestures by which the user may activate a menu interface and subsequently translate, along a dimension of the menu, a slidably engageable UI element or other indication of the menu item primed for selection by the user.
  • systems as described herein may be configured to detect certain gestures based on hand and arm movements that are defined to avoid tracking occlusion. Tracking occlusion may occur when one hand of the user at least partially overlaps the other hand, making it difficult to accurately track the individual digits (fingers and thumb) on each hand, as well as the position and orientation of each hand.
  • Systems as described herein, therefore, may be configured to primarily detect single-handed or single arm-based gestures. The use of single-handed or single arm-based gestures may further provide enhanced accessibility to users having large- and fine-motor skill limitations.
  • systems as described herein may be configured to detect double-handed or double arm-based gestures in which the hands of the user do not interact or overlap with each other.
  • systems as described herein may be configured to detect gestures that provide self-haptic feedback to the user. For example, a thumb and one or more fingers on each hand of the user may touch or approximately touch in the physical world as part of a pre-defined gesture indicating an interaction with a particular user interface element in the artificial reality content. The touch between the thumb and one or more fingers of the user's hand may provide the user with a simulation of the sensation felt by the user when interacting directly with a physical user input object, such as a button on a physical keyboard or other physical input device.
  • FIG. 1 B is an illustration depicting another example artificial reality system 20 in accordance with the techniques of the disclosure. Similar to artificial reality system 10 of FIG. 1 A , in some examples, artificial reality system 20 of FIG. 1 B may present and control user interface elements specifically designed for user interaction and manipulation within an artificial reality environment. Artificial reality system 20 may also, in various examples, generate and render certain graphical user interface elements to a user in response to detection of one or more particular gestures of the user.
  • artificial reality system 20 includes external cameras 102 A and 102 B (collectively, “external cameras 102 ”), HMDs 112 A- 112 C (collectively, “HMDs 112 ”), controllers 114 A and 114 B (collectively, “controllers 114 ”), console 106 , and sensors 90 .
  • artificial reality system 20 represents a multi-user environment in which an artificial reality application executing on console 106 and/or HMDs 112 presents artificial reality content to each of users 110 A- 110 C (collectively, “users 110 ”) based on a current viewing perspective of a corresponding frame of reference for the respective user.
  • the artificial reality application constructs artificial content by tracking and computing pose information for a frame of reference for each of HMDs 112 .
  • Artificial reality system 20 uses data received from cameras 102 , HMDs 112 , and controllers 114 to capture 3D information within the real world environment, such as motion by users 110 and/or tracking information with respect to users 110 and objects 108 , for use in computing updated pose information for a corresponding frame of reference of HMDs 112 .
  • the artificial reality application may render, based on a current viewing perspective determined for HMD 112 C, artificial reality content 122 having virtual objects 128 A- 128 C (collectively, “virtual objects 128 ”) as spatially overlaid upon real world objects 108 A- 108 C (collectively, “real world objects 108 ”). Further, from the perspective of HMD 112 C, artificial reality system 20 renders avatars 120 A, 120 B based upon the estimated positions for users 110 A, 110 B, respectively.
  • Each of HMDs 112 concurrently operates within artificial reality system 20 .
  • each of users 110 may be a “player” or “participant” in the artificial reality application, and any of users 110 may be a “spectator” or “observer” in the artificial reality application.
  • HMD 112 C may operate substantially similar to HMD 112 of FIG. 1 A by tracking hand 132 and/or arm 134 of user 110 C, and rendering the portions of hand 132 that are within field of view 130 as virtual hand 136 within artificial reality content 122 .
  • HMD 112 A may also operate substantially similar to HMD 112 of FIG. 1 A.
  • HMD 112 B may receive user inputs from controllers 114 held by user 110 B. Controllers 114 may be in communication with HMD 112 B using near-field communication or short-range wireless communication such as Bluetooth, using wired communication links, or using other types of communication links.
  • console 106 and/or HMD 112 C of artificial reality system 20 generates and renders user interface elements 124 , 126 , which may be overlaid upon the artificial reality content 122 displayed to user 110 C.
  • console 106 and/or HMD 112 C may trigger the generation and dynamic display of the user interface elements 124 , 126 based on detection, via pose tracking, of intuitive, yet distinctive, gestures performed by user 110 C.
  • artificial reality system 20 may dynamically present one or more graphical user interface elements 124 , 126 in response to detecting one or more particular gestures by user 110 C, such as particular motions, configurations, positions, and/or orientations of the user's hands, fingers, thumbs or arms.
  • input data from external cameras 102 may be used to track and detect particular motions, configurations, positions, and/or orientations of hands and arms of users 110 , such as hand 132 of user 110 C, including movements of individual and/or combinations of digits (fingers, thumb) of the hand.
  • the artificial reality application can run on console 106 , and can utilize image capture devices 102 A and 102 B to analyze configurations, positions, and/or orientations of hand 132 B to identify menu prompt gestures, menu activation gestures, menu sliding gestures, selection gestures, or menu positioning motions, etc. that may be performed by a user of HMD 112 A.
  • HMD 112 C can utilize image capture device 138 to analyze configurations, positions, and/or orientations of hand 132 C to identify menu prompt gestures, menu activation gestures, menu sliding gestures, selection gestures, or menu positioning motions, etc., that may be performed by a user of HMD 112 C.
  • the artificial reality application may render UI menu 124 and virtual hand 136 , responsive to such gestures, in a manner similar to that described above with respect to FIG. 1 A .
  • FIG. 2 is an illustration depicting an example HMD 112 configured to operate in accordance with the techniques of the disclosure.
  • HMD 112 of FIG. 2 may be an example of any of HMDs 112 of FIGS. 1 A and 1 B .
  • HMD 112 may be part of an artificial reality system, such as artificial reality systems 10 , 20 of FIGS. 1 A, 1 B , or may operate as a stand-alone, mobile artificial reality system configured to implement the techniques described herein.
  • HMD 112 includes a front rigid body and a band to secure HMD 112 to a user.
  • HMD 112 includes an interior-facing electronic display 203 configured to present artificial reality content to the user.
  • Electronic display 203 may be any suitable display technology, such as liquid crystal displays (LCD), quantum dot display, dot matrix displays, light emitting diode (LED) displays, organic light-emitting diode (OLED) displays, cathode ray tube (CRT) displays, e-ink, or monochrome, color, or any other type of display capable of generating visual output.
  • the electronic display is a stereoscopic display for providing separate images to each eye of the user.
  • the known orientation and position of display 203 relative to the front rigid body of HMD 112 is used as a frame of reference, also referred to as a local origin, when tracking the position and orientation of HMD 112 for rendering artificial reality content according to a current viewing perspective of HMD 112 and the user.
  • HMD 112 may take the form of other wearable head mounted displays, such as glasses or goggles.
  • HMD 112 further includes one or more motion sensors 206 , such as one or more accelerometers (also referred to as inertial measurement units or “IMUs”) that output data indicative of current acceleration of HMD 112 , GPS sensors that output data indicative of a location of HMD 112 , radar or sonar that output data indicative of distances of HMD 112 from various objects, or other sensors that provide indications of a location or orientation of HMD 112 or other objects within a physical environment.
  • HMD 112 may include integrated image capture devices 138 A and 138 B (collectively, “image capture devices 138 ”), such as video cameras, laser scanners, Doppler radar scanners, depth scanners, or the like, configured to output image data representative of the physical environment. More specifically, image capture devices 138 capture image data representative of objects in the physical environment that are within a field of view 130 A, 130 B of image capture devices 138 , which typically corresponds with the viewing perspective of HMD 112 .
  • HMD 112 includes an internal control unit 210 , which may include an internal power source and one or more printed-circuit boards having one or more processors, memory, and hardware to provide an operating environment for executing programmable operations to process sensed data and present artificial reality content on display 203 .
  • control unit 210 is configured to, based on the sensed data, identify a specific gesture or combination of gestures performed by the user and, in response, perform an action. For example, in response to one identified gesture, control unit 210 may generate and render a specific user interface element overlaid on artificial reality content for display on electronic display 203 . As explained herein, in accordance with the techniques of the disclosure, control unit 210 may perform object recognition within image data captured by image capture devices 138 to identify a hand 132 , fingers, thumb, arm or another part of the user, and track movements, positions, configuration, etc., of the identified part(s) to identify pre-defined gestures performed by the user.
  • control unit 210 In response to identifying a pre-defined gesture, control unit 210 takes some action, such as selecting an option from an option set associated with a user interface element, translating the gesture into input (e.g., characters), launching an application or otherwise displaying content, and the like. In some examples, control unit 210 dynamically generates and presents a user interface element, such as a menu, in response to detecting a pre-defined gesture specified as a “trigger” for revealing a user interface. In other examples, control unit 210 performs such functions in response to direction from an external device, such as console 106 , which may perform, object recognition, motion tracking and gesture detection, or any part thereof.
  • control unit 210 can utilize image capture devices 138 A and 138 B to analyze configurations, positions, movements, and/or orientations of hand 132 and/or arm 134 to identify a menu prompt gesture, menu activation gesture, menu sliding gesture, selection gesture, or menu positioning motions, etc., that may be performed by users of HMD 112 .
  • the control unit 210 can render a UI menu, slidably engageable UI element, and/or virtual hand based on detection of the menu prompt gesture, menu activation gesture, menu sliding gesture, selection gesture, and menu positioning motions.
  • FIG. 3 is a block diagram showing example implementations of console 106 and HMD 112 of artificial reality system 10 , 20 of FIGS. 1 A, 1 B .
  • console 106 performs pose tracking, gesture detection, and user interface generation and rendering for HMD 112 in accordance with the techniques described herein based on sensed data, such as motion data and image data received from HMD 112 and/or external sensors.
  • HMD 112 includes one or more processors 302 and memory 304 that, in some examples, provide a computer platform for executing an operating system 305 , which may be an embedded, real-time multitasking operating system, for instance, or other type of operating system.
  • operating system 305 provides a multitasking operating environment for executing one or more software components 307 , including application engine 340 .
  • processors 302 are coupled to electronic display 203 , motion sensors 206 and image capture devices 138 .
  • processors 302 and memory 304 may be separate, discrete components.
  • memory 304 may be on-chip memory collocated with processors 302 within a single integrated circuit.
  • console 106 is a computing device that processes image and tracking information received from cameras 102 ( FIG. 1 B ) and/or HMD 112 to perform gesture detection and user interface generation for HMD 112 .
  • console 106 is a single computing device, such as a workstation, a desktop computer, a laptop, or gaming system.
  • at least a portion of console 106 such as processors 312 and/or memory 314 , may be distributed across a cloud computing system, a data center, or across a network, such as the Internet, another public or private communications network, for instance, broadband, cellular, Wi-Fi, and/or other types of communication networks for transmitting data between computing systems, servers, and computing devices.
  • console 106 includes one or more processors 312 and memory 314 that, in some examples, provide a computer platform for executing an operating system 316 , which may be an embedded, real-time multitasking operating system, for instance, or other type of operating system.
  • operating system 316 provides a multitasking operating environment for executing one or more software components 317 .
  • Processors 312 are coupled to one or more I/O interfaces 315 , which provide one or more I/O interfaces for communicating with external devices, such as a keyboard, game controllers, display devices, image capture devices, HMDs, and the like.
  • the one or more I/O interfaces 315 may include one or more wired or wireless network interface controllers (NICs) for communicating with a network, such as network 104 .
  • processors 302 , 312 may comprise any one or more of a multi-core processor, a controller, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or equivalent discrete or integrated logic circuitry.
  • Memory 304 , 314 may comprise any form of memory for storing data and executable software instructions, such as random-access memory (RAM), read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), and flash memory.
  • Software applications 317 of console 106 operate to provide an overall artificial reality application.
  • software applications 317 include application engine 320 , rendering engine 322 , gesture detector 324 , pose tracker 326 , and user interface engine 328 .
  • application engine 320 includes functionality to provide and present an artificial reality application, e.g., a teleconference application, a gaming application, a navigation application, an educational application, training or simulation applications, and the like.
  • Application engine 320 may include, for example, one or more software packages, software libraries, hardware drivers, and/or Application Program Interfaces (APIs) for implementing an artificial reality application on console 106 .
  • Responsive to control by application engine 320 , rendering engine 322 renders 3D artificial reality content for display to the user by application engine 340 of HMD 112 .
  • Application engine 320 and rendering engine 322 construct the artificial content for display to user 110 in accordance with current pose information for a frame of reference, typically a viewing perspective of HMD 112 , as determined by pose tracker 326 . Based on the current viewing perspective, rendering engine 322 constructs the 3D, artificial reality content which may in some cases be overlaid, at least in part, upon the real-world 3D environment of user 110 .
  • pose tracker 326 operates on sensed data received from HMD 112 , such as movement information and user commands, and, in some examples, data from any external sensors 90 ( FIGS. 1 A, 1 B ), such as external cameras, to capture 3D information within the real world environment, such as motion by user 110 and/or feature tracking information with respect to user 110 .
  • pose tracker 326 determines a current pose for the frame of reference of HMD 112 and, in accordance with the current pose, constructs the artificial reality content for communication, via the one or more I/O interfaces 315 , to HMD 112 for display to user 110 .
  • gesture detector 324 analyzes the tracked motions, configurations, positions, and/or orientations of objects (e.g., hands, arms, wrists, fingers, palms, thumbs) of the user to identify one or more gestures performed by user 110 . More specifically, gesture detector 324 analyzes objects recognized within image data captured by image capture devices 138 of HMD 112 and/or sensors 90 and external cameras 102 to identify a hand and/or arm of user 110 , and track movements of the hand and/or arm relative to HMD 112 to identify gestures performed by user 110 .
  • Gesture detector 324 may track movement, including changes to position and orientation, of the hand, digits, and/or arm based on the captured image data, and compare motion vectors of the objects to one or more entries in gesture library 330 to detect a gesture or combination of gestures performed by user 110 .
  • Some entries in gesture library 330 may each define a gesture as a series or pattern of motion, such as a relative path or spatial translations and rotations of a user's hand, specific fingers, thumbs, wrists and/or arms.
  • Some entries in gesture library 330 may each define a gesture as a configuration, position, and/or orientation of the user's hand and/or arms (or portions thereof) at a particular time, or over a period of time. Other examples of type of gestures are possible.
  • each of the entries in gesture library 330 may specify, for the defined gesture or series of gestures, conditions that are required for the gesture or series of gestures to trigger an action, such as spatial relationships to a current field of view of HMD 112 , spatial relationships to the particular region currently being observed by the user, as may be determined by real-time gaze tracking of the individual, types of artificial content being displayed, types of applications being executed, and the like.
  • Each of the entries in gesture library 330 further may specify, for each of the defined gestures or combinations/series of gestures, a desired response or action to be performed by software applications 317 .
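A gesture library entry along these lines could be modeled as a recognition predicate, a list of additional trigger conditions, and an action to perform, as in the hypothetical sketch below (the types and the `dispatch` helper are illustrative, not the described system's API).

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Illustrative type: a "state" is whatever tracked data the detector produces per frame.
State = Dict[str, object]


@dataclass
class GestureEntry:
    """One entry in a gesture library: how to recognize the gesture, what extra
    conditions must hold (field of view, gaze, application state, ...), and the
    response to perform when it triggers."""
    matches: Callable[[State], bool]
    conditions: List[Callable[[State], bool]] = field(default_factory=list)
    action: Callable[[], None] = lambda: None


def dispatch(library: Dict[str, GestureEntry], state: State) -> None:
    for name, entry in library.items():
        if entry.matches(state) and all(cond(state) for cond in entry.conditions):
            entry.action()


if __name__ == "__main__":
    library = {
        "menu_activation": GestureEntry(
            matches=lambda s: s["palm_up"] and s["pinching"],
            conditions=[lambda s: s["hand_in_fov"]],
            action=lambda: print("UI engine: generate menu + slidably engageable element"),
        ),
    }
    dispatch(library, {"palm_up": True, "pinching": True, "hand_in_fov": True})
```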
  • certain specialized gestures may be pre-defined such that, in response to detecting one of the pre-defined gestures, user interface engine 328 dynamically generates a user interface as an overlay to artificial reality content being displayed to the user, thereby allowing the user 110 to easily invoke a user interface for configuring HMD 112 and/or console 106 even while interacting with artificial reality content.
  • certain gestures may be associated with other actions, such as providing input, selecting objects, launching applications, and the like.
  • gesture library 330 may include entries that describe a menu prompt gesture, menu activation gesture, a menu sliding gesture, a selection gesture, and menu positioning motions.
  • Gesture detector 324 may process image data from image capture devices 138 to analyze configurations, positions, motions, and/or orientations of a user's hand to identify a menu prompt gesture, menu activation gesture, menu sliding gesture, selection gesture, and menu positioning motions etc. that may be performed by users.
  • the rendering engine 322 can render a menu and virtual hand based on detection of the menu prompt gesture, menu activation gesture, menu sliding gesture, and menu positioning motions.
  • the user interface engine 328 can define the menu that is displayed and can control actions that are performed in response to selections caused by selection gestures.
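One simple way a user interface engine might bind menu items to the actions performed when a selection gesture fires on the primed item is sketched below; the data layout and names are assumptions.

```python
from typing import Callable, List, Tuple

MenuItem = Tuple[str, Callable[[], None]]  # label plus the action to run on selection


def on_selection_gesture(menu: List[MenuItem], primed_index: int) -> None:
    """Invoked when a selection gesture is identified by the gesture detector;
    runs the action of the menu item currently primed for selection."""
    label, action = menu[primed_index]
    print(f"selected: {label}")
    action()


if __name__ == "__main__":
    menu: List[MenuItem] = [
        ("Launch app", lambda: print("launching application")),
        ("Settings", lambda: print("opening settings sub-menu")),
        ("Quit", lambda: print("closing menu")),
    ]
    on_selection_gesture(menu, primed_index=1)
```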
  • FIG. 4 is a block diagram depicting an example in which gesture detection and user interface generation is performed by HMD 112 of the artificial reality systems of FIGS. 1 A, 1 B in accordance with the techniques of the disclosure.
  • HMD 112 includes one or more processors 302 and memory 304 that, in some examples, provide a computer platform for executing an operating system 305 , which may be an embedded, real-time multitasking operating system, for instance, or other type of operating system.
  • operating system 305 provides a multitasking operating environment for executing one or more software components 417 .
  • processor(s) 302 are coupled to electronic display 203 , motion sensors 206 , and image capture devices 138 .
  • software components 417 operate to provide an overall artificial reality application.
  • software applications 417 include application engine 440 , rendering engine 422 , gesture detector 424 , pose tracker 426 , and user interface engine 428 .
  • software components 417 operate similar to the counterpart components of console 106 of FIG. 3 (e.g., application engine 320 , rendering engine 322 , gesture detector 324 , pose tracker 326 , and user interface engine 328 ) to construct user interface elements overlaid on, or as part of, the artificial content for display to user 110 in accordance with detected gestures of user 110 .
  • rendering engine 422 constructs the 3D, artificial reality content which may be overlaid, at least in part, upon the real-world, physical environment of user 110 .
  • gesture detector 424 analyzes the tracked motions, configurations, positions, and/or orientations of objects (e.g., hands, arms, wrists, fingers, palms, thumbs) of the user to identify one or more gestures performed by user 110 .
  • user interface engine 428 generates user interface elements as part of, e.g., overlaid upon, the artificial reality content to be displayed to user 110 and/or performs actions based on one or more gestures or combinations of gestures of user 110 detected by gesture detector 424 .
  • gesture detector 424 analyzes objects recognized within image data captured by image capture devices 138 of HMD 112 and/or sensors 90 or external cameras 102 to identify a hand and/or arm of user 110 , and track movements of the hand and/or arm relative to HMD 112 to identify gestures performed by user 110 .
  • Gesture detector 424 may track movement, including changes to position and orientation, of the hand, digits, and/or arm based on the captured image data, and compare motion vectors of the objects to one or more entries in gesture library 430 to detect a gesture or combination of gestures performed by user 110 .
  • Gesture library 430 is similar to gesture library 330 of FIG. 3 .
  • Each of the entries in gesture library 430 may specify, for the defined gesture or series of gestures, conditions that are required for the gesture to trigger an action, such as spatial relationships to a current field of view of HMD 112 , spatial relationships to the particular region currently being observed by the user, as may be determined by real-time gaze tracking of the individual, types of artificial content being displayed, types of applications being executed, and the like.
  • In response to detecting a matching gesture or combination of gestures, HMD 112 performs the response or action assigned to the matching entry in gesture library 430 .
  • certain specialized gestures may be pre-defined such that, in response to gesture detector 424 detecting one of the pre-defined gestures, user interface engine 428 dynamically generates a user interface as an overlay to artificial reality content being displayed to the user, thereby allowing the user 110 to easily invoke a user interface for configuring HMD 112 while viewing artificial reality content.
  • user interface engine 428 and/or application engine 440 may receive input, select values or parameters associated with user interface elements, launch applications, modify configurable settings, send messages, start or stop processes or perform other actions.
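One condition an entry may specify is a spatial relationship to the current field of view of HMD 112. A minimal way to approximate such a check is a viewing-cone test, sketched below; the 60-degree half-angle and the vector representation are assumptions made for the example, not values from the disclosure.

```python
import math


def in_field_of_view(hand_pos, hmd_pos, hmd_forward, half_angle_deg=60.0):
    """Return True if the hand lies within a viewing cone around the HMD's forward axis."""
    to_hand = [h - p for h, p in zip(hand_pos, hmd_pos)]
    norm = math.sqrt(sum(c * c for c in to_hand)) or 1e-9
    fwd_norm = math.sqrt(sum(c * c for c in hmd_forward)) or 1e-9
    cos_angle = sum(a * b for a, b in zip(to_hand, hmd_forward)) / (norm * fwd_norm)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle)))) <= half_angle_deg


# Example: a hand slightly to the right of straight ahead falls inside a 60-degree cone.
print(in_field_of_view(hand_pos=(0.3, -0.2, 1.0), hmd_pos=(0, 0, 0), hmd_forward=(0, 0, 1)))
```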
  • gesture library 430 may include entries that describe a menu prompt gesture, menu activation gesture, a menu sliding gesture, menu positioning motions, and a selection gesture.
  • Gesture detector 424 can utilize image data from image capture devices 138 to analyze configurations, positions, and/or orientations of a user's hand to identify a menu prompt gesture, menu activation gesture, menu sliding gesture, selection gesture, or menu positioning motions, etc., that may be performed by users.
  • the rendering engine 422 can render a UI menu, slidably engageable element, and/or virtual hand based on detection of the menu activation gesture, menu sliding gesture, selection gesture, and menu positioning motions.
  • the user interface engine 428 can define the menu that is displayed and can control actions performed by application engine 440 in response to selections caused by selection gestures.
  • FIGS. 5 and 6 are flowcharts illustrating example methods for activating menu prompts and menus, and for determining positioning and user interaction with menus.
  • the operations illustrated in FIGS. 5 and 6 may be performed by one or more components of an artificial reality system, such as artificial reality systems 10 , 20 of FIGS. 1 A, 1 B .
  • some or all of the operations may be performed by one or more of gesture detector ( 324 , 424 of FIGS. 3 and 4 ), a user interface engine ( 328 , 428 of FIGS. 3 and 4 ), and a rendering engine ( 322 , 422 of FIGS. 3 and 4 ).
  • FIG. 5 is a flowchart illustrating operations of an example method for activating a menu prompt or a menu interface in accordance with aspects of the disclosure.
  • the artificial reality system may determine a current configuration of a hand ( 502 ).
  • the configuration may include an orientation of the hand and positioning of digits of the hand with respect to one another.
  • image data may be captured and analyzed to determine the configuration of the hand.
  • Other sensor data may be used in addition to, or instead of, image data to determine the configuration of the hand.
  • the artificial reality system may determine if the current configuration of the hand indicates that the user is performing a menu prompt gesture ( 504 ).
  • the artificial reality system can be configurable (for example, by the user) to determine a configuration of the left hand or the right hand.
  • the artificial reality system can utilize data describing the current configuration of the hand and data in one or more entries of a gesture library that specify particular gestures to determine if the current configuration of the hand is a menu prompt gesture.
  • the menu prompt gesture can be a configuration of the hand in which the hand is in a substantially upturned orientation, and a finger and the thumb of the user's hand are positioned such that a space exists between the finger and the thumb.
  • the finger and the thumb of the user's hand may form a “C” shape or pincer shape, where the finger and the thumb do not touch at the ends.
  • the artificial reality system may render a menu prompt ( 506 ).
  • the menu prompt is rendered in proximity to a virtual hand representing the orientation of the user's hand.
  • the menu prompt may be a UI element located between the virtual finger and virtual thumb of the virtual hand corresponding to the finger and thumb of the user performing the menu prompt gesture.
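The menu prompt gesture described above combines a substantially upturned hand with an open gap between a finger and the thumb. The sketch below expresses that as two simple checks; the 30-degree tilt tolerance and the 2-10 cm gap range are invented placeholders, not values taken from the disclosure.

```python
import math


def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def is_menu_prompt_gesture(palm_normal, thumb_tip, index_tip,
                           up=(0.0, 1.0, 0.0), max_tilt_deg=30.0,
                           min_gap_m=0.02, max_gap_m=0.10):
    """Hypothetical check: palm facing roughly up, with an open gap between thumb and index tips."""
    norm = math.sqrt(sum(c * c for c in palm_normal)) or 1e-9
    cos_tilt = sum(a * b for a, b in zip(palm_normal, up)) / norm
    upturned = math.degrees(math.acos(max(-1.0, min(1.0, cos_tilt)))) <= max_tilt_deg
    gap = distance(thumb_tip, index_tip)
    return upturned and (min_gap_m <= gap <= max_gap_m)


# Example: palm facing up with a 5 cm gap between thumb and index fingertips.
print(is_menu_prompt_gesture((0.0, 1.0, 0.0), (0.00, 1.0, 0.40), (0.05, 1.0, 0.40)))
```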
  • FIG. 8 is an example HMD display 800 illustrating a menu prompt 810 in accordance with aspects of the disclosure.
  • the user of the artificial reality system has placed their hand in a substantially upturned orientation with a space between the index finger and the thumb. (Other fingers besides the index finger may be used for the menu prompt and menu activation gestures.)
  • the artificial reality system can determine the position and orientation of the hand and can render a virtual hand 136 to match the orientation of the user's hand and finger positioning.
  • the artificial reality system can detect that the user has performed a menu prompt gesture with their hand based on the configuration of the hand.
  • the artificial reality system can render a menu prompt 810 between the index finger and thumb of the virtual hand.
  • the menu prompt 810 can be a user interface element that serves as an indicator or reminder (i.e., a prompt) to the user that the user can perform an action with the thumb and index finger (e.g., a pinching action) to place the user's hand in a menu activation gesture to cause the artificial reality system to provide a menu to the user.
  • the menu prompt 810 can include a line extending between the index finger and the thumb.
  • the menu prompt 810 can include a virtual object positioned between the thumb and the index finger.
  • the menu prompt 810 can include highlighting the index finger and/or the thumb.
  • Other types of user interface elements can be rendered as a menu prompt 810 . For example, arrows may be used to indicate the direction that the user's index finger and thumb should be moved in order to activate a menu.
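Because the prompt is anchored between the virtual index finger and thumb, its render position can simply be the midpoint of the two fingertip positions. The helper below is a hypothetical illustration of that placement, not an implementation from the disclosure.

```python
def menu_prompt_anchor(thumb_tip, index_tip):
    """Midpoint between the thumb and index fingertips, used here as the prompt's render position."""
    return tuple((t + i) / 2.0 for t, i in zip(thumb_tip, index_tip))


# Example: a prompt rendered roughly halfway between the two fingertips, ~ (0.08, 1.12, 0.41).
print(menu_prompt_anchor((0.05, 1.10, 0.40), (0.11, 1.14, 0.42)))
```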
  • the artificial reality system may determine a new current configuration of the hand ( 502 ). There may be many other operations performed by the artificial reality system in between rendering the menu prompt and determining a new configuration of the hand.
  • the artificial reality system can determine if the current configuration of the hand indicates the user is performing a menu activation gesture.
  • the artificial reality system can utilize data describing the current configuration of the hand and data in one or more entries of the gesture library to determine if the current configuration of the hand is a menu activation gesture.
  • the menu activation gesture can be a configuration of the hand in which the hand is in a substantially upturned orientation, and a finger and the thumb of the user's hand are positioned in a pinching configuration.
  • the menu activation gesture may comprise a finger and thumb positioned in a pinching configuration irrespective of the orientation of the hand.
  • the artificial reality system can render a UI menu ( 510 ).
  • the menu is rendered in proximity to a virtual hand representing the orientation of the user's hand.
  • the artificial reality system may render the UI menu responsive to detecting a menu activation gesture only if the artificial reality system first detected a menu prompt gesture.
  • in other aspects, the menu prompt gesture is not a prerequisite.
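Whether a prompt must precede activation is thus an implementation choice, and the prompt or menu is removed when the hand leaves the qualifying configuration. One way to express that behavior is a small state machine; the state names and the configuration flag below are hypothetical, a sketch rather than the patented implementation.

```python
from enum import Enum, auto


class MenuState(Enum):
    HIDDEN = auto()
    PROMPT = auto()   # menu prompt rendered between thumb and finger
    ACTIVE = auto()   # UI menu rendered


def next_state(state, prompt_gesture, activation_gesture, require_prompt_first=False):
    """Illustrative per-frame update; whether a prompt must precede activation is configurable."""
    if activation_gesture:
        if state in (MenuState.PROMPT, MenuState.ACTIVE) or not require_prompt_first:
            return MenuState.ACTIVE
        return state
    if prompt_gesture:
        return MenuState.PROMPT
    return MenuState.HIDDEN   # hand no longer in a qualifying configuration: remove prompt/menu
```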
  • FIG. 7 A is an example HMD display 700 depicting a UI menu 124 in accordance with aspects of the disclosure.
  • the user of the artificial reality system has placed their hand in a substantially upturned orientation with the index finger and thumb of the hand in a pinching configuration.
  • the artificial reality system can determine the position and orientation of the hand and can render a virtual hand 136 to represent the orientation of the user's hand and finger positioning.
  • the artificial reality system can detect that the user has performed a menu activation gesture with their hand based on the configuration of the hand.
  • the artificial reality system can render a UI menu 124 in proximity to the virtual hand.
  • the UI menu 124 can include one or more UI elements 126 that are arrayed along a dimension of the UI menu 124 .
  • the one or more UI elements 126 can be menu items arrayed along a horizontal dimension of a coordinate space such as a viewing space or display space.
  • a coordinate axis 704 is shown solely to illustrate the coordinate space. The coordinate axis 704 need not be presented on the actual display.
  • the horizontal dimension is along the X axis
  • the vertical dimension is along the Y axis
  • depth is along the Z axis.
  • the menu activation gesture can include the user placing their hand in a substantially upturned orientation.
  • the artificial reality system can detect that a vector 702 normal to the palm or other surface of the hand is also substantially normal to the plane formed by the X axis and Z axis.
  • the vector 702 can be considered substantially normal if the vector 702 is within thirty degrees of normal to the plane formed by the X axis and Z axis (illustrated by dashed lines). Other thresholds besides thirty degrees can be used in one or more aspects.
  • a slidably engageable UI element 706 may be rendered in proximity to the virtual hand 136 .
  • the slidably engageable UI element 706 is a circle.
  • Other graphical elements such as spheres, triangles, squares, etc., or virtual hand 136 alone, can serve as the slidably engageable UI element 706 .
  • a finger or fingers of virtual hand 136 can be highlighted to indicate that a highlighted portion of a finger or fingers is the slidably engageable UI element.
  • FIG. 7 B is an example HMD display 740 illustrating a UI menu and slidably engageable UI element in accordance with aspects of the disclosure.
  • the user has performed a menu sliding gesture so as to cause the artificial reality system to render the slidably engageable UI element 706 at a position in proximity to menu item 708 .
  • the menu item 708 in proximity to the slidably engageable UI element 706 can be highlighted or otherwise augmented or modified to indicate that the menu item 708 will be selected upon the user performing a selection gesture.
  • a label 710 can be provided in proximity to the menu element 708 in addition to, or instead of highlighting the menu item 708 .
  • Highlighting menu element 708 can indicate that menu element 708 will be selected if the user performs a selection gesture.
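Priming a menu item for selection amounts to finding the item closest to the slidably engageable element along the menu's dimension. A minimal sketch, assuming menu items are laid out at known x-coordinates and using an invented snap distance:

```python
def primed_item_index(slider_x, item_x_positions, max_snap_distance=0.06):
    """Index of the menu item nearest the slider along the menu axis, or None if none is close enough."""
    if not item_x_positions:
        return None
    best = min(range(len(item_x_positions)), key=lambda i: abs(item_x_positions[i] - slider_x))
    return best if abs(item_x_positions[best] - slider_x) <= max_snap_distance else None


# Example: a slider at x=0.21 is closest to the item at x=0.20, so that item would be highlighted.
print(primed_item_index(0.21, [0.00, 0.10, 0.20, 0.30]))  # 2
```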
  • the selection gesture can be a movement of a finger of the hand other than the fingers in the pinching configuration, or a release of the pinching configuration.
  • the selection gesture can be a movement of the hand in a direction that is substantially normal to the plane of the UI menu.
  • substantially normal to a plane may indicate within 0-2 degrees of the normal to the plane, within 0-5 degrees of the normal to the plane, within 0-10 degrees of the normal to the plane, within 0-20 degrees of the normal to the plane, or within 0-30 degrees of the normal to the plane.
  • the selection gesture can be reconfiguring the thumb and the finger of the hand to no longer be in the pinching configuration.
  • the selection gesture may be a motion or reconfiguration of a different finger (e.g., the pinky finger), such as to curl or extend. Detection of the selection gesture may cause the artificial reality system to perform some action.
  • the selection gesture may cause an application to be instantiated, or can cause a currently running application to be brought into the foreground of the display of the HMD, or in some cases may cause the artificial reality system to perform some action within the particular executing artificial reality application.
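The selection gesture can therefore take several forms: releasing the pinch, moving the hand substantially along the normal of the menu plane, or moving another digit. The sketch below combines the first two as a single check; the pinch-release gap, the 30-degree angle tolerance, and the 2 cm displacement threshold are assumptions chosen for illustration.

```python
import math


def is_selection_gesture(pinch_gap_m, hand_delta, menu_normal,
                         release_gap_m=0.04, max_angle_deg=30.0, min_push_m=0.02):
    """True if the pinch has been released, or the hand moved roughly along the menu-plane normal."""
    if pinch_gap_m > release_gap_m:                      # thumb and finger no longer pinching
        return True
    mag = math.sqrt(sum(c * c for c in hand_delta))
    if mag < min_push_m:                                 # not enough motion to count as a push
        return False
    n_mag = math.sqrt(sum(c * c for c in menu_normal)) or 1e-9
    cos_a = abs(sum(a * b for a, b in zip(hand_delta, menu_normal))) / (mag * n_mag)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a)))) <= max_angle_deg
```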
  • a sequence of gestures may be used to trigger display of a menu 124 , position the slidably engageable UI element 706 over, or in proximity to a menu element 708 of the menu 124 , and select the menu element 708 .
  • a user can perform a menu activation gesture (e.g., position the fingers of a hand in a pinching configuration) to cause a menu 124 to be displayed by the HMD.
  • the user can perform a menu sliding gesture (e.g., move their hand while maintaining the pinching configuration) to cause a slidably engageable UI element 706 to slide along the menu 124 in accordance with the motion of the hand.
  • the user may then perform a selection gesture (e.g., release the finger and thumb from the pinching configuration) to select the menu element 708 indicated by the slidably engageable UI element 706 .
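Putting the three gestures together, the sequence described above reduces to: show the menu on activation, map horizontal hand motion onto the slider while the pinch is held, and select the primed item when the pinch is released. The simplified driver loop below is a hypothetical sketch of that sequence, not the patented implementation.

```python
def run_menu_interaction(frames, item_x_positions):
    """frames: iterable of (pinching: bool, hand_x: float). Returns the selected item index or None."""
    menu_shown = False
    slider_x = None
    last_hand_x = None
    for pinching, hand_x in frames:
        if pinching and not menu_shown:
            menu_shown, slider_x, last_hand_x = True, item_x_positions[0], hand_x   # menu activation
        elif pinching and menu_shown:
            slider_x += hand_x - last_hand_x                                        # menu sliding
            last_hand_x = hand_x
        elif menu_shown:                                                            # pinch released: select
            return min(range(len(item_x_positions)),
                       key=lambda i: abs(item_x_positions[i] - slider_x))
    return None


# Example: activate, slide right by 0.2, then release -> selects the item near x=0.20 (index 2).
print(run_menu_interaction([(True, 0.0), (True, 0.1), (True, 0.2), (False, 0.2)],
                           [0.0, 0.1, 0.2, 0.3]))
```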
  • artificial reality system may determine a new current configuration of the hand ( 502 ). There may be many other operations performed by the artificial reality system in between rendering the UI menu 124 and determining a new configuration of the hand.
  • the artificial reality system can determine if a UI menu 124 or menu prompt 810 has been displayed. If so, the artificial reality system can remove the UI menu 124 or menu prompt 810 from the display ( 514 ) because the user's hand is no longer in the appropriate configuration to display the UI menu 124 or menu prompt 810 .
  • After removing the UI menu 124 or menu prompt 810 , the flow returns to determine a new current configuration of the hand ( 502 ). There may be many other operations performed by the artificial reality system in between removing the UI menu 124 or menu prompt 810 and determining a new configuration of the hand.
  • FIG. 6 is a flowchart illustrating operations of an example method for positioning and interacting with a UI menu in accordance with aspects of the disclosure.
  • the artificial reality system may determine a position of the hand ( 602 ).
  • the position can be determined from image data captured from image sensors or from other types of sensors coupled with the artificial reality system.
  • the artificial reality system may determine if the UI menu is currently active (i.e., is being rendered and displayed via the HMD) ( 604 ). If the UI menu is not currently active, flow can return to determining an updated position of the hand ( 602 ). There may be many other operations performed by the artificial reality system in between determining if the UI menu is active and determining an updated position of the hand.
  • the artificial reality system can determine if the user has performed a menu sliding gesture ( 606 ).
  • the menu sliding gesture can be substantially horizontal motion of the hand while the menu is active (e.g. while the user's hand is performing a menu activation gesture). For example, the artificial reality system can compare a previous position of the hand with a current position of the hand to determine if a menu sliding gesture has occurred. If the menu sliding gesture is detected, then the artificial reality system can translate the virtual hand and/or slidably engageable interface element along the UI menu 124 in accordance with the menu sliding gesture ( 608 ). If the menu items are oriented vertically, then the menu sliding gesture can be substantially vertical motion of the hand.
  • FIG. 7 C is an example HMD display 750 illustrating a UI menu and menu sliding gesture in accordance with aspects of the disclosure.
  • the menu sliding gesture can be motion of the user's hand along a horizontal dimension of the UI menu 124 (e.g., motion parallel to an X axis).
  • the user has moved their hand (while maintaining the menu activation gesture) along a horizontal dimension of the UI menu 124 .
  • the artificial reality system can reposition the virtual hand 136 and the slidably engageable UI element 706 in accordance with the motion of the user's hand such that the virtual hand 136 and slidably engageable UI element 706 are in proximity to menu item 712 .
  • the artificial reality system can remove highlighting from menu element 708 and can highlight menu element 712 to indicate that menu element 712 will be selected if the user performs the selection gesture.
  • a label 714 can be displayed in addition to, or instead of highlighting the menu item 712 .
  • the artificial reality system does not highlight menu items.
  • the artificial reality system does not render UI element 706 .
  • the UI menu 124 remains in the same position in the horizontal direction as it was in prior to the menu sliding gesture. In other words, the UI menu 124 is horizontally stationary while the virtual hand 136 and slidably engageable UI element 706 move along the horizontal dimension of the UI menu 124 responsive to the user performing the menu sliding gesture.
  • the artificial reality system can determine if non-horizontal motion by the user's hand has occurred ( 610 ). For example, the artificial reality system can determine if there has been motion in a vertical direction (i.e., motion parallel to the Y axis) and/or front-to-back or back-to-front motion (i.e., motion parallel to the Z axis). If non-horizontal motion of the user's hand is detected, the artificial reality system can translate the position of the virtual hand, slidably engageable UI element, and UI menu in accordance with the non-horizontal motion. In examples where the UI menu items are arrayed vertically, the non-vertical motion of the user's hand constitutes the “other movement” of the hand.
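The division of labor described above, where motion along the menu's axis moves the slider while the menu stays put and motion off that axis drags the whole menu, can be sketched as a per-frame position update. The function and parameter names are hypothetical; a horizontal menu uses axis index 0 (X), a vertical one would use 1 (Y).

```python
def apply_hand_motion(hand_delta, slider_offset, menu_origin, menu_axis=0):
    """Split a 3D hand displacement: the component along menu_axis moves the slider,
    while the remaining components translate the entire menu."""
    new_slider_offset = slider_offset + hand_delta[menu_axis]
    new_menu_origin = tuple(
        o + (d if i != menu_axis else 0.0)
        for i, (o, d) in enumerate(zip(menu_origin, hand_delta))
    )
    return new_slider_offset, new_menu_origin


# Example: horizontal motion (0.1 in X) slides along the menu; vertical motion (0.05 in Y) moves the menu.
print(apply_hand_motion((0.1, 0.05, 0.0), slider_offset=0.0, menu_origin=(0.0, 1.2, 0.5)))
# -> (0.1, (0.0, 1.25, 0.5))
```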
  • FIG. 7 D is an example HMD display 760 illustrating a UI menu after vertical motion has been detected.
  • the artificial reality system can detect the vertical motion, and can translate the position of the UI menu 124 , virtual hand 136 and slidably engageable UI element 706 based on the detected vertical motion.
  • the virtual hand 136 and slidably engageable UI element 706 remain in their previous position with respect to the UI menu 124 .
  • the same menu element 712 remains highlighted as the vertical motion occurs.
  • FIG. 7 E is an example HMD display 770 illustrating a UI menu after back-to-front motion has been detected (i.e., the user has moved their hand closer to themselves along the Z axis).
  • the artificial reality system can detect the back-to-front motion and can translate the position of the UI menu 124 , virtual hand 136 and slidably engageable UI element 706 based on the detected back-to-front motion.
  • the virtual hand 136 , UI menu 124 , and slidably engageable UI element 706 appear larger, and thus closer to the user.
  • the virtual hand 136 and slidably engageable UI element 706 remain in their previous position with respect to the UI menu 124 .
  • the same menu element 712 remains highlighted as the motion along the Z axis occurs.
  • the artificial reality system can render the virtual hand and slidably engageable UI element in proximity to the UI menu 124 based on the current position of the user's hand. Flow can then return to determine an updated position of the hand ( 602 ). There may be many other operations performed by the artificial reality system in between rendering the UI menu and determining an updated position of the hand.
  • FIG. 7 F is an example HMD display 780 illustrating a UI menu 124 and UI icon array 720 in accordance with aspects of the disclosure.
  • the menu items of a UI menu 124 can correspond to applications.
  • UI menu 124 can be divided into two portions 716 and 718 .
  • the menu elements in portion 716 can represent favorite applications, and the menu elements in portion 718 can represent applications currently running within the artificial reality system.
  • the artificial reality system can present an icon array 720 of icons representing applications available or running in the artificial reality system.
  • the images on the individual icons in icon array 720 can represent a current display of the corresponding application, or an image associated with the application.
  • FIG. 7 G is an example HMD display 790 illustrating a UI menu 124 and UI icon array 720 in accordance with aspects of the disclosure.
  • the virtual hand 136 and slidably engageable UI element 706 are in proximity to a menu item 724 .
  • a three-dimensional highlighting is used, where the menu item in proximity to the slidably engageable UI element 706 can be brought forward, thereby making the image appear larger to the user.
  • the icon 722 corresponding to the menu item 724 can also be highlighted. In this example, the border of the icon 722 is highlighted.
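The split between favorite applications and running applications, and the matching highlight in the icon array, can be modeled with a small data structure. Everything below (class and field names) is a hypothetical illustration of that pairing rather than the disclosed implementation.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class AppMenuItem:
    app_id: str
    favorite: bool         # True -> rendered in the favorites portion; False -> running-apps portion
    icon_index: int        # index of the matching icon in the icon array


def highlighted_icon(items: List[AppMenuItem], primed_index: Optional[int]) -> Optional[int]:
    """Icon-array index to highlight for the currently primed menu item, if any."""
    if primed_index is None:
        return None
    return items[primed_index].icon_index


items = [AppMenuItem("browser", True, 0), AppMenuItem("chess", False, 3)]
print(highlighted_icon(items, 1))  # 3 -> highlight the border of the corresponding icon
```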
  • the discussion above has presented aspects of the artificial reality system in which the UI menu is configured in a horizontal direction.
  • the UI menu can be configured in a vertical direction.
  • vertical motion of the hand can cause the slidably engageable UI element and virtual hand to move along the vertical dimension of the UI menu while the UI menu remains stationary in the vertical direction.
  • Non-vertical motion (i.e., horizontal motion or front-to-back motion) can cause translation of the position of the UI menu in accordance with the non-vertical motion.
  • Various aspects of the described techniques may be implemented within one or more processors, including one or more microprocessors, DSPs, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components.
  • a control unit comprising hardware may also perform one or more of the techniques of this disclosure.
  • Such hardware, software, and firmware may be implemented within the same device or within separate devices to support the various operations and functions described in this disclosure.
  • any of the described units, modules or components may be implemented together or separately as discrete but interoperable logic devices. Depiction of different features as modules or units is intended to highlight different functional aspects and does not necessarily imply that such modules or units must be realized by separate hardware or software components. Rather, functionality associated with one or more modules or units may be performed by separate hardware or software components or integrated within common or separate hardware or software components.
  • Computer readable storage media may include random access memory (RAM), read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), flash memory, a hard disk, a CD-ROM, a floppy disk, a cassette, magnetic media, optical media, or other computer readable media.
  • artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof.
  • Artificial reality content may include completely generated content or generated content combined with captured content (e.g., real-world photographs).
  • the artificial reality content may include video, audio, haptic feedback, or some combination thereof, and any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer).
  • artificial reality may be associated with applications, products, accessories, services, or some combination thereof, that are, e.g., used to create content in an artificial reality and/or used in (e.g., perform activities in) an artificial reality.
  • the artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head mounted device (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system can include a menu that can be activated and interacted with using one hand. In response to detecting a menu activation gesture performed using one hand, the artificial reality system can cause a menu to be rendered. A menu sliding gesture (e.g., horizontal motion) of the hand can be used to cause a slidably engageable user interface (UI) element to move along a horizontal dimension of the menu while horizontal positioning of the UI menu is held constant. Motion of the hand orthogonal to the menu sliding gesture (e.g., non-horizontal motion) can cause the menu to be repositioned. The implementation of the artificial reality system does not require use of both hands or use of other input devices in order to interact with the artificial reality system.

Description

TECHNICAL FIELD
This is an application for reissue of U.S. Pat. No. 10,890,983.
This disclosure generally relates to artificial reality systems, such as virtual reality, mixed reality, and/or augmented reality systems, and more particularly, to user interfaces of artificial reality systems.
BACKGROUND
Artificial reality systems are becoming increasingly ubiquitous with applications in many fields such as computer gaming, health and safety, industrial, and education. As a few examples, artificial reality systems are being incorporated into mobile devices, gaming consoles, personal computers, movie theaters, and theme parks. In general, artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof.
Typical artificial reality systems include one or more devices for rendering and displaying content to users. As one example, an artificial reality system may incorporate a head mounted display (HMD) worn by a user and configured to output artificial reality content to the user. The artificial reality content may include completely-generated content or generated content combined with captured content (e.g., real-world video and/or images). During operation, the user typically interacts with the artificial reality system to select content, launch applications or otherwise configure the system.
SUMMARY
In general, this disclosure describes artificial reality systems and, more specifically, graphical user interface elements and techniques for presenting and controlling the user interface elements within an artificial reality environment.
For example, artificial reality systems are described that generate and render graphical user interface elements for display to a user in response to detection of one or more pre-defined gestures by the user, such as particular motions, configurations, positions, and/or orientations of the user's hands, fingers, thumbs or arms, or a combination of pre-defined gestures. In some examples, the artificial reality system may further trigger generation and rendering of the graphical user interface elements in response to detection of particular gestures in combination with other conditions, such as the position and orientation of the particular gestures in a physical environment relative to a current field of view of the user, which may be determined by real-time gaze tracking of the user, or relative to a pose of an HMD worn by the user.
In some examples, the artificial reality system may generate and present the graphical user interface elements as overlay elements with respect to the artificial reality content currently being rendered within the display of the artificial reality system. The graphical user interface elements may, for example, be a graphical user interface, such as a menu or sub-menu with which the user interacts to operate the artificial reality system, or individual graphical user interface elements selectable and manipulatable by a user, such as toggle elements, drop-down elements, menu selection elements, two-dimensional or three-dimensional shapes, graphical input keys or keyboards, content display windows and the like.
A technical problem with some HMDs is the lack of input devices that can be used to interact with aspects of the artificial reality system, for example, to position a selection user interface element within a menu. In some systems, the artificial reality system can use both hands of a user to provide user interaction with menus or icons. However, a technical problem with this type of interaction is that one hand can occlude the other hand, making it difficult for the artificial reality system to accurately determine the intent of the user. Additionally, some users may have a disability that may prevent them from using both hands to interact with the artificial reality system. As a technical solution to the aforementioned technical problems, some aspects include a menu that can be activated and interacted with using one hand. In response to detecting a menu activation gesture performed using one hand, the artificial reality system may cause a menu to be rendered. A menu sliding gesture (e.g., horizontal motion) of the hand may be used to cause a slidably engageable user interface (UI) element to move along a horizontal dimension of the menu while horizontal positioning of the menu is held constant. In some aspects, motion of the hand substantially orthogonal to the menu sliding gesture (e.g., non-horizontal motion) may cause the menu to be repositioned. The implementation of the artificial reality system does not require use of both hands or use of other input devices in order to interact with the artificial reality system and thus this technical improvement over conventional artificial reality implementations may provide one or more practical applications, such as providing ease of use, providing the ability for persons with disabilities related to the use of one hand to interact with the system, and the ability to accurately determine user interaction with a menu or other user interface elements.
In one or more example aspects, an artificial reality system includes an image capture device configured to capture image data; a head mounted device (HMD) configured to output artificial reality content; a gesture detector configured to identify, from the image data, a menu activation gesture comprising a configuration of a hand in a substantially upturned orientation of the hand and a pinching configuration of a thumb and a finger of the hand; a UI engine configured to, in response to the menu activation gesture, generate a menu interface and a slidably engageable UI element at a first position relative to the menu interface; and a rendering engine configured to render the artificial reality content, the menu interface, and the slidably engageable UI element for display at the HMD.
In one or more further example aspects, a method includes obtaining, by an artificial reality system including a head mounted device (HMD), image data via an image capture device; identifying, by the artificial reality system from the image data, a menu activation gesture, the menu activation gesture comprising a configuration of a hand in a substantially upturned orientation of the hand and a pinching configuration of a thumb and a finger of the hand; generating, by the artificial reality system in response to the menu activation gesture, a menu interface and a slidably engageable UI element at a first position relative to the menu interface; and rendering, by the artificial reality system, artificial reality content, the menu interface, and the slidably engageable UI element for display at the HMD.
In one or more additional example aspects, a non-transitory, computer-readable medium comprises instructions that, when executed, cause one or more processors of an artificial reality system to capture image data via an image capture device; identify, from the image data, a menu activation gesture comprising a configuration of the hand; in response to the menu activation gesture, generate a menu interface and a slidably engageable UI element at a first position relative to the menu interface; identify, subsequent to the menu activation gesture, a menu sliding gesture comprising the configuration of the hand in combination with a motion of the hand; in response to the menu sliding gesture, translate the slidably engageable UI element to a second position relative to the menu interface; and render artificial reality content, the menu interface, and the slidably engageable UI element for display at a head mounted device (HMD).
The details of one or more examples of the techniques of this disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1A is an illustration depicting an example artificial reality system that presents and controls user interface elements within an artificial reality environment in accordance with the techniques of the disclosure.
FIG. 1B is an illustration depicting another example artificial reality system in accordance with the techniques of the disclosure.
FIG. 2 is an illustration depicting an example HMD that operates in accordance with the techniques of the disclosure.
FIG. 3 is a block diagram showing example implementations of a console and an HMD of the artificial reality systems of FIGS. 1A, 1B.
FIG. 4 is a block diagram depicting an example in which gesture detection and user interface generation is performed by the HMD of the artificial reality systems of FIGS. 1A, 1B in accordance with the techniques of the disclosure.
FIG. 5 is a flowchart illustrating operations of an example method for activating a menu prompt or a UI menu in accordance with aspects of the disclosure.
FIG. 6 is a flowchart illustrating operations of an example method for positioning and interacting with a UI menu in accordance with aspects of the disclosure.
FIGS. 7A-7G are example HMD displays illustrating positioning and interacting with UI menus in accordance with aspects of the disclosure.
FIG. 8 is an example HMD display illustrating a menu prompt in accordance with aspects of the disclosure.
Like reference characters refer to like elements throughout the figures and description.
DETAILED DESCRIPTION
FIG. 1A is an illustration depicting an example artificial reality system 10 that presents and controls user interface elements within an artificial reality environment in accordance with the techniques of the disclosure. In some example implementations, artificial reality system 10 generates and renders graphical user interface elements to a user 110 in response to one or more detected gestures performed by user 110. That is, as described herein, artificial reality system 10 presents one or more graphical user interface elements 124, 126 in response to detecting one or more particular gestures performed by user 110, such as particular motions, configurations, locations, and/or orientations of the user's hands, fingers, thumbs or arms. In other examples, artificial reality system 10 presents and controls user interface elements specifically designed for user interaction and manipulation within an artificial reality environment, such as specialized toggle elements, drop-down elements, menu selection elements, graphical input keys or keyboards, content display windows and the like.
In the example of FIG. 1A, artificial reality system 10 includes head mounted device (HMD) 112, console 106 and, in some examples, one or more external sensors 90. As shown, HMD 112 is typically worn by user 110 and includes an electronic display and optical assembly for presenting artificial reality content 122 to user 110. In addition, HMD 112 includes one or more sensors (e.g., accelerometers) for tracking motion of the HMD 112 and may include one or more image capture devices 138, e.g., cameras, line scanners and the like, for capturing image data of the surrounding physical environment. In this example, console 106 is shown as a single computing device, such as a gaming console, workstation, a desktop computer, or a laptop. In other examples, console 106 may be distributed across a plurality of computing devices, such as a distributed computing network, a data center, or a cloud computing system. Console 106, HMD 112, and sensors 90 may, as shown in this example, be communicatively coupled via network 104, which may be a wired or wireless network, such as WiFi, a mesh network or a short-range wireless communication medium. Although HMD 112 is shown in this example as in communication with, e.g., tethered to or in wireless communication with, console 106, in some implementations HMD 112 operates as a stand-alone, mobile artificial reality system.
In general, artificial reality system 10 uses information captured from a real-world, 3D physical environment to render artificial reality content 122 for display to user 110. In the example of FIG. 1A, user 110 views the artificial reality content 122 constructed and rendered by an artificial reality application executing on console 106 and/or HMD 112. As one example, artificial reality content 122 may be a consumer gaming application in which user 110 is rendered as avatar 120 with one or more virtual objects 128A, 128B. In some examples, artificial reality content 122 may comprise a mixture of real-world imagery and virtual objects, e.g., mixed reality and/or augmented reality. In other examples, artificial reality content 122 may be, e.g., a video conferencing application, a navigation application, an educational application, training or simulation applications, or other types of applications that implement artificial reality.
During operation, the artificial reality application constructs artificial reality content 122 for display to user 110 by tracking and computing pose information for a frame of reference, typically a viewing perspective of HMD 112. Using HMD 112 as a frame of reference, and based on a current field of view 130 as determined by a current estimated pose of HMD 112, the artificial reality application renders 3D artificial reality content which, in some examples, may be overlaid, at least in part, upon the real-world, 3D physical environment of user 110. During this process, the artificial reality application uses sensed data received from HMD 112, such as movement information and user commands, and, in some examples, data from any external sensors 90, such as external cameras, to capture 3D information within the real world, physical environment, such as motion by user 110 and/or feature tracking information with respect to user 110. Based on the sensed data, the artificial reality application determines a current pose for the frame of reference of HMD 112 and, in accordance with the current pose, renders the artificial reality content 122.
Moreover, in accordance with the techniques of this disclosure, based on the sensed data, the artificial reality application detects gestures performed by user 110 and, in response to detecting one or more particular gestures, generates one or more user interface elements, e.g., UI menu 124 and UI element 126, which may be overlaid on underlying artificial reality content 122 being presented to the user. In this respect, user interface elements 124, 126 may be viewed as part of the artificial reality content 122 being presented to the user in the artificial reality environment. In this way, artificial reality system 10 dynamically presents one or more graphical user interface elements 124, 126 in response to detecting one or more particular gestures by user 110, such as particular motions, configurations, positions, and/or orientations of the user's hands, fingers, thumbs or arms. Example configurations of a user's hand may include a fist, one or more digits extended, the relative and/or absolute positions and orientations of one or more of the individual digits of the hand, the shape of the palm of the hand, and so forth. The user interface elements may, for example, be a graphical user interface, such as a menu or sub-menu with which user 110 interacts to operate the artificial reality system, or individual user interface elements selectable and manipulatable by user 110, such as icon elements, toggle elements, drop-down elements, menu selection elements, two-dimensional or three-dimensional shapes, graphical input keys or keyboards, content display windows and the like. While depicted as a two-dimensional element, for example, UI element 126 may be a two-dimensional or three-dimensional shape that is manipulatable by a user performing gestures to translate, scale, and/or rotate the shape in the artificial reality environment.
Moreover, as described herein, in some examples, artificial reality system 10 may trigger generation and rendering of graphical user interface elements 124, 126 in response to other conditions, such as a current state of one or more applications being executed by the system, or the position and orientation of the particular detected gestures in a physical environment in relation to a current field of view 130 of user 110, as may be determined by real-time gaze tracking of the user, or other conditions.
More specifically, as further described herein, image capture devices 138 of HMD 112 capture image data representative of objects in the real world, physical environment that are within a field of view 130 of image capture devices 138. Field of view 130 typically corresponds with the viewing perspective of HMD 112. In some examples, such as the illustrated example of FIG. 1A, the artificial reality application renders the portions of hand 132 of user 110 that are within field of view 130 as a virtual hand 136 within artificial reality content 122. In other examples, the artificial reality application may present a real-world image of hand 132 and/or arm 134 of user 110 within artificial reality content 122 comprising mixed reality and/or augmented reality. In either example, user 110 is able to view the portions of their hand 132 and/or arm 134 that are within field of view 130 as objects within artificial reality content 122.
In other examples, the artificial reality application may not render representations of the hand 132 or arm 134 of the user.
In any case, during operation, artificial reality system 10 performs object recognition within image data captured by image capture devices 138 of HMD 112 to identify hand 132, including optionally identifying individual fingers or the thumb, and/or all or portions of arm 134 of user 110. Further, artificial reality system 10 tracks the position, orientation, and configuration of hand 132 (optionally including particular digits of the hand) and/or portions of arm 134 over a sliding window of time. The artificial reality application analyzes any tracked motions, configurations, positions, and/or orientations of hand 132 and/or portions of arm 134 to identify one or more gestures performed by particular objects, e.g., hand 132 (including particular digits of the hand) and/or portions of arm 134 of user 110. To detect the gesture(s), the artificial reality application may compare the motions, configurations, positions and/or orientations of hand 132 and/or portions of arm 134 to gesture definitions stored in a gesture library of artificial reality system 10, where each gesture in the gesture library may be mapped to one or more actions. In some examples, detecting movement may include tracking positions of one or more of the digits (individual fingers and thumb) of hand 132, including whether any of a defined combination of the digits (such as an index finger and thumb) are brought together to touch or approximately touch in the physical environment. In other examples, detecting movement may include tracking an orientation of hand 132 (e.g., fingers pointing toward HMD 112 or away from HMD 112) and/or an orientation of arm 134 (i.e., the normal of the arm facing toward HMD 112) relative to the current pose of HMD 112. The position and orientation of hand 132 (or a portion thereof) thereof may alternatively be referred to as the pose of hand 132 (or a portion thereof).
Moreover, the artificial reality application may analyze configurations, positions, and/or orientations of hand 132 and/or arm 134 to identify a gesture that includes hand 132 and/or arm 134 being held in one or more specific configurations, positions, and/or orientations for at least a threshold period of time. As examples, one or more particular positions at which hand 132 and/or arm 134 are being held substantially stationary within field of view 130 for at least a configurable period of time may be used by artificial reality system 10 as an indication that user 110 is attempting to perform a gesture intended to trigger a desired response by the artificial reality application, such as triggering display of a particular type of user interface element 124, 126, such as a menu. As another example, one or more particular configurations of the fingers and/or palms of hand 132 and/or arm 134 being maintained within field of view 130 for at least a configurable period of time may be used by artificial reality system 10 as an indication that user 110 is attempting to perform a gesture. Although only right hand 132 and right arm 134 of user 110 are illustrated in FIG. 1A, in other examples, artificial reality system 10 may identify a left hand and/or arm of user 110 or both right and left hands and/or arms of user 110. In this way, artificial reality system 10 may detect single-handed gestures performed by either hand, double-handed gestures, or arm-based gestures within the physical environment, and generate associated user interface elements in response to the detected gestures.
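The requirement that a configuration be held substantially stationary for at least a configurable period of time can be tracked with a simple timer that resets whenever the configuration breaks or the hand drifts too far. The sketch below is illustrative only; the class name and the 0.5-second and 3 cm thresholds are placeholders, not values from the disclosure.

```python
class HoldDetector:
    """Reports True once a hand configuration has been held nearly still for hold_seconds."""

    def __init__(self, hold_seconds=0.5, max_drift_m=0.03):
        self.hold_seconds = hold_seconds
        self.max_drift_m = max_drift_m
        self.start_time = None
        self.start_pos = None

    def update(self, configuration_matches, hand_pos, timestamp):
        drifted = (self.start_pos is not None and
                   sum((a - b) ** 2 for a, b in zip(hand_pos, self.start_pos)) ** 0.5 > self.max_drift_m)
        if not configuration_matches or drifted:
            self.start_time, self.start_pos = None, None   # reset the hold timer
            return False
        if self.start_time is None:
            self.start_time, self.start_pos = timestamp, hand_pos
        return timestamp - self.start_time >= self.hold_seconds
```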
In accordance with the techniques of this disclosure, the artificial reality application determines whether an identified gesture corresponds to a gesture defined by one of a plurality of entries in a gesture library of console 106 and/or HMD 112. As described in more detail below, each of the entries in the gesture library may define a different gesture as a specific motion, configuration, position, and/or orientation of a user's hand, digit (finger or thumb) and/or arm over time, or a combination of such properties. In addition, each of the defined gestures may be associated with a desired response in the form of one or more actions to be performed by the artificial reality application. As one example, one or more of the defined gestures in the gesture library may trigger the generation, transformation, and/or configuration of one or more user interface elements, e.g., UI menu 124, to be rendered and overlaid on artificial reality content 122, where the gesture may define a location and/or orientation of UI menu 124 in artificial reality content 122. As another example, one or more of the defined gestures may indicate an interaction by user 110 with a particular user interface element, e.g., selection of UI element 126 of UI menu 124, to trigger a change to the presented user interface, presentation of a sub-menu of the presented user interface, or the like.
In some aspects, the artificial reality application may analyze configurations, positions, and/or orientations of hand 132 and/or arm 134 to identify a menu activation gesture that includes hand 132 being held in a specific configuration and orientation for at least a threshold period of time. In some aspects, the menu activation gesture may, for example, be a hand being held in a substantially upward position while a finger and thumb of the hand are in a pinching configuration. In some aspects, the menu activation gesture may comprise a finger and the thumb of the hand positioned in a pinching configuration irrespective of the orientation of the hand. A menu sliding gesture may cause a virtual hand that moves in accordance with the user's hand to slide along a dimension of the UI menu 124 while the menu remains stationary in the sliding direction. Motion in directions other than the menu sliding gesture may cause the UI menu 124 to be repositioned based on the motion. As an example, the menu sliding gesture may be motion of the user's hand 132 in a horizontal direction while maintaining the menu activation gesture. The virtual hand 136 may move along the horizontal dimension while the menu remains stationary in the horizontal direction. In some examples, the artificial reality application generates a slidably engageable UI element (not shown in FIG. 1 ) in addition to, or alternatively to, the virtual hand 136. Movement in the vertical direction may cause the UI menu 124 to be repositioned.
The menu sliding gesture while maintaining the menu activation gesture may cause the artificial reality application to render an indication that a particular menu item of the UI menu 124 would be selected if the user were to perform a selection gesture without further performing the menu sliding gesture to slide the virtual hand 136, e.g., to a different location proximate to a different menu item of the UI menu 124. That particular menu item is primed for selection by the user. The indication may be a location of the virtual hand 136 or a slidably engageable UI element being proximate to the menu item; highlighting of the menu item with a different color, for instance; enlargement of the menu item; or some other indication.
Accordingly, the techniques of the disclosure provide specific technical improvements to the computer-related field of rendering and displaying content by an artificial reality system. For example, artificial reality systems as described herein may provide a high-quality artificial reality experience to a user, such as user 110, of the artificial reality application by generating and rendering user interface elements overlaid on the artificial reality content based on detection of intuitive, yet distinctive, gestures performed by the user. More specifically, the techniques may provide the user with intuitive user input in the form of gestures by which the user may activate a menu interface and subsequently translate, along a dimension of the menu, a slidably engageable UI element or other indication of the menu item primed for selection by the user.
Further, systems as described herein may be configured to detect certain gestures based on hand and arm movements that are defined to avoid tracking occlusion. Tracking occlusion may occur when one hand of the user at least partially overlaps the other hand, making it difficult to accurately track the individual digits (fingers and thumb) on each hand, as well as the position and orientation of each hand. Systems as described herein, therefore, may be configured to primarily detect single-handed or single arm-based gestures. The use of single-handed or single arm-based gestures may further provide enhanced accessibility to users having large- and fine-motor skill limitations. Furthermore, systems as described herein may be configured to detect double-handed or double arm-based gestures in which the hands of the user do not interact or overlap with each other.
In addition, systems as described herein may be configured to detect gestures that provide self-haptic feedback to the user. For example, a thumb and one or more fingers on each hand of the user may touch or approximately touch in the physical world as part of a pre-defined gesture indicating an interaction with a particular user interface element in the artificial reality content. The touch between the thumb and one or more fingers of the user's hand may provide the user with a simulation of the sensation felt by the user when interacting directly with a physical user input object, such as a button on a physical keyboard or other physical input device.
FIG. 1B is an illustration depicting another example artificial reality system 20 in accordance with the techniques of the disclosure. Similar to artificial reality system 10 of FIG. 1A, in some examples, artificial reality system 20 of FIG. 1B may present and control user interface elements specifically designed for user interaction and manipulation within an artificial reality environment. Artificial reality system 20 may also, in various examples, generate and render certain graphical user interface elements to a user in response to detection of one or more particular gestures of the user.
In the example of FIG. 1B, artificial reality system 20 includes external cameras 102A and 102B (collectively, “external cameras 102”), HMDs 112A-112C (collectively, “HMDs 112”), controllers 114A and 114B (collectively, “controllers 114”), console 106, and sensors 90. As shown in FIG. 1B, artificial reality system 20 represents a multi-user environment in which an artificial reality application executing on console 106 and/or HMDs 112 presents artificial reality content to each of users 110A-110C (collectively, “users 110”) based on a current viewing perspective of a corresponding frame of reference for the respective user. That is, in this example, the artificial reality application constructs artificial content by tracking and computing pose information for a frame of reference for each of HMDs 112. Artificial reality system 20 uses data received from cameras 102, HMDs 112, and controllers 114 to capture 3D information within the real world environment, such as motion by users 110 and/or tracking information with respect to users 110 and objects 108, for use in computing updated pose information for a corresponding frame of reference of HMDs 112. As one example, the artificial reality application may render, based on a current viewing perspective determined for HMD 112C, artificial reality content 122 having virtual objects 128A-128C (collectively, “virtual objects 128”) as spatially overlaid upon real world objects 108A-108C (collectively, “real world objects 108”). Further, from the perspective of HMD 112C, artificial reality system 20 renders avatars 120A, 120B based upon the estimated positions for users 110A, 110B, respectively.
Each of HMDs 112 concurrently operates within artificial reality system 20. In the example of FIG. 1B, each of users 110 may be a “player” or “participant” in the artificial reality application, and any of users 110 may be a “spectator” or “observer” in the artificial reality application. HMD 112C may operate substantially similar to HMD 112 of FIG. 1A by tracking hand 132 and/or arm 134 of user 110C, and rendering the portions of hand 132 that are within field of view 130 as virtual hand 136 within artificial reality content 122. HMD 112A may also operate substantially similar to HMD 112 of FIG. 1A and receive user inputs in the form of gestures performed by hands 132A, 132B of user 110A. HMD 112B may receive user inputs from controllers 114 held by user 110B. Controllers 114 may be in communication with HMD 112B using near-field communication or short-range wireless communication such as Bluetooth, using wired communication links, or using other types of communication links.
In a manner similar to the examples discussed above with respect to FIG. 1A, console 106 and/or HMD 112C of artificial reality system 20 generates and renders user interface elements 124, 126, which may be overlaid upon the artificial reality content 122 displayed to user 110C. Moreover, console 106 and/or HMD 112C may trigger the generation and dynamic display of the user interface elements 124, 126 based on detection, via pose tracking, of intuitive, yet distinctive, gestures performed by user 110C. For example, artificial reality system 20 may dynamically present one or more graphical user interface elements 124, 126 in response to detecting one or more particular gestures by user 110C, such as particular motions, configurations, positions, and/or orientations of the user's hands, fingers, thumbs or arms. As shown in FIG. 1B, in addition to or alternatively to image data captured via camera 138 of HMD 112C, input data from external cameras 102 may be used to track and detect particular motions, configurations, positions, and/or orientations of hands and arms of users 110, such as hand 132 of user 110C, including movements of individual and/or combinations of digits (fingers, thumb) of the hand.
In some aspects, the artificial reality application can run on console 106, and can utilize image capture devices 102A and 102B to analyze configurations, positions, and/or orientations of hand 132B to identify menu prompt gestures, menu activation gestures, menu sliding gestures, selection gestures, or menu positioning motions, etc. that may be performed by a user of HMD 112A. Similarly, HMD 112C can utilize image capture device 138 to analyze configurations, positions, and/or orientations of hand 132C to identify menu prompt gestures, menu activation gestures, menu sliding gestures, selection gestures, or menu positioning motions, etc., that may be performed by a user of HMD 112C. The artificial reality application may render UI menu 124 and virtual hand 136, responsive to such gestures, in a manner similar to that described above with respect to FIG. 1A.
FIG. 2 is an illustration depicting an example HMD 112 configured to operate in accordance with the techniques of the disclosure. HMD 112 of FIG. 2 may be an example of any of HMDs 112 of FIGS. 1A and 1B. HMD 112 may be part of an artificial reality system, such as artificial reality systems 10, 20 of FIGS. 1A, 1B, or may operate as a stand-alone, mobile artificial reality system configured to implement the techniques described herein.
In this example, HMD 112 includes a front rigid body and a band to secure HMD 112 to a user. In addition, HMD 112 includes an interior-facing electronic display 203 configured to present artificial reality content to the user. Electronic display 203 may be any suitable display technology, such as liquid crystal displays (LCD), quantum dot display, dot matrix displays, light emitting diode (LED) displays, organic light-emitting diode (OLED) displays, cathode ray tube (CRT) displays, e-ink, or monochrome, color, or any other type of display capable of generating visual output. In some examples, the electronic display is a stereoscopic display for providing separate images to each eye of the user. In some examples, the known orientation and position of display 203 relative to the front rigid body of HMD 112 is used as a frame of reference, also referred to as a local origin, when tracking the position and orientation of HMD 112 for rendering artificial reality content according to a current viewing perspective of HMD 112 and the user. In other examples, HMD 112 may take the form of other wearable head mounted displays, such as glasses or goggles.
As further shown in FIG. 2 , in this example, HMD 112 further includes one or more motion sensors 206, such as one or more accelerometers (also referred to as inertial measurement units or “IMUs”) that output data indicative of current acceleration of HMD 112, GPS sensors that output data indicative of a location of HMD 112, radar or sonar that output data indicative of distances of HMD 112 from various objects, or other sensors that provide indications of a location or orientation of HMD 112 or other objects within a physical environment. Moreover, HMD 112 may include integrated image capture devices 138A and 138B (collectively, “image capture devices 138”), such as video cameras, laser scanners, Doppler radar scanners, depth scanners, or the like, configured to output image data representative of the physical environment. More specifically, image capture devices 138 capture image data representative of objects in the physical environment that are within a field of view 130A, 130B of image capture devices 138, which typically corresponds with the viewing perspective of HMD 112. HMD 112 includes an internal control unit 210, which may include an internal power source and one or more printed-circuit boards having one or more processors, memory, and hardware to provide an operating environment for executing programmable operations to process sensed data and present artificial reality content on display 203.
In one example, in accordance with the techniques described herein, control unit 210 is configured to, based on the sensed data, identify a specific gesture or combination of gestures performed by the user and, in response, perform an action. For example, in response to one identified gesture, control unit 210 may generate and render a specific user interface element overlaid on artificial reality content for display on electronic display 203. As explained herein, in accordance with the techniques of the disclosure, control unit 210 may perform object recognition within image data captured by image capture devices 138 to identify a hand 132, fingers, thumb, arm or another part of the user, and track movements, positions, configuration, etc., of the identified part(s) to identify pre-defined gestures performed by the user. In response to identifying a pre-defined gesture, control unit 210 takes some action, such as selecting an option from an option set associated with a user interface element, translating the gesture into input (e.g., characters), launching an application or otherwise displaying content, and the like. In some examples, control unit 210 dynamically generates and presents a user interface element, such as a menu, in response to detecting a pre-defined gesture specified as a “trigger” for revealing a user interface. In other examples, control unit 210 performs such functions in response to direction from an external device, such as console 106, which may perform object recognition, motion tracking and gesture detection, or any part thereof.
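The recognize-track-match-act flow described above can be sketched as a simple per-frame loop. The Python below is an illustrative sketch only and is not part of the disclosure; the `HandState` fields, the `tracker` object, and the action names are hypothetical stand-ins for whatever hand-tracking output and UI actions a particular implementation provides.

```python
# Illustrative per-frame control loop for gesture-driven UI (hypothetical names).
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class HandState:
    """Minimal tracked-hand summary; individual keypoints omitted for brevity."""
    is_pinching: bool   # thumb and fingertip within a contact threshold
    palm_up: bool       # palm normal roughly opposing gravity
    position: tuple     # wrist position in HMD/world coordinates

def classify_gesture(hand: Optional[HandState]) -> Optional[str]:
    """Map a tracked hand configuration to a named gesture, if any."""
    if hand is None:
        return None
    if hand.palm_up and hand.is_pinching:
        return "menu_activation"
    if hand.palm_up and not hand.is_pinching:
        return "menu_prompt"
    return None

def control_loop(frames, tracker, actions: Dict[str, Callable[[HandState], None]]):
    """For each captured frame: track the hand, classify, and dispatch an action."""
    for frame in frames:
        hand = tracker.track(frame)      # object recognition + pose tracking
        gesture = classify_gesture(hand)
        if gesture in actions:
            actions[gesture](hand)       # e.g., render a menu prompt or menu
```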
As an example, control unit 210 can utilize image capture devices 138A and 138B to analyze configurations, positions, movements, and/or orientations of hand 132 and/or arm 134 to identify a menu prompt gesture, menu activation gesture, menu sliding gesture, selection gesture, or menu positioning motions, etc., that may be performed by users of HMD 112. The control unit 210 can render a UI menu, slidably engageable UI element, and/or virtual hand based on detection of the menu prompt gesture, menu activation gesture, menu sliding gesture, selection gesture, and menu positioning motions.
FIG. 3 is a block diagram showing example implementations of console 106 and HMD 112 of artificial reality system 10, 20 of FIGS. 1A, 1B. In the example of FIG. 3 , console 106 performs pose tracking, gesture detection, and user interface generation and rendering for HMD 112 in accordance with the techniques described herein based on sensed data, such as motion data and image data received from HMD 112 and/or external sensors.
In this example, HMD 112 includes one or more processors 302 and memory 304 that, in some examples, provide a computer platform for executing an operating system 305, which may be an embedded, real-time multitasking operating system, for instance, or other type of operating system. In turn, operating system 305 provides a multitasking operating environment for executing one or more software components 307, including application engine 340. As discussed with respect to the example of FIG. 2 , processors 302 are coupled to electronic display 203, motion sensors 206 and image capture devices 138. In some examples, processors 302 and memory 304 may be separate, discrete components. In other examples, memory 304 may be on-chip memory collocated with processors 302 within a single integrated circuit.
In general, console 106 is a computing device that processes image and tracking information received from cameras 102 (FIG. 1B) and/or HMD 112 to perform gesture detection and user interface generation for HMD 112. In some examples, console 106 is a single computing device, such as a workstation, a desktop computer, a laptop, or gaming system. In some examples, at least a portion of console 106, such as processors 312 and/or memory 314, may be distributed across a cloud computing system, a data center, or across a network, such as the Internet, another public or private communications network, for instance, broadband, cellular, Wi-Fi, and/or other types of communication networks for transmitting data between computing systems, servers, and computing devices.
In the example of FIG. 3 , console 106 includes one or more processors 312 and memory 314 that, in some examples, provide a computer platform for executing an operating system 316, which may be an embedded, real-time multitasking operating system, for instance, or other type of operating system. In turn, operating system 316 provides a multitasking operating environment for executing one or more software components 317. Processors 312 are coupled to one or more I/O interfaces 315, which provides one or more I/O interfaces for communicating with external devices, such as a keyboard, game controllers, display devices, image capture devices, HMDs, and the like. Moreover, the one or more I/O interfaces 315 may include one or more wired or wireless network interface controllers (NICs) for communicating with a network, such as network 104. Each of processors 302, 312 may comprise any one or more of a multi-core processor, a controller, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or equivalent discrete or integrated logic circuitry. Memory 304, 314 may comprise any form of memory for storing data and executable software instructions, such as random-access memory (RAM), read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), and flash memory.
Software applications 317 of console 106 operate to provide an overall artificial reality application. In this example, software applications 317 include application engine 320, rendering engine 322, gesture detector 324, pose tracker 326, and user interface engine 328.
In general, application engine 320 includes functionality to provide and present an artificial reality application, e.g., a teleconference application, a gaming application, a navigation application, an educational application, training or simulation applications, and the like. Application engine 320 may include, for example, one or more software packages, software libraries, hardware drivers, and/or Application Program Interfaces (APIs) for implementing an artificial reality application on console 106. Responsive to control by application engine 320, rendering engine 322 generates 3D artificial reality content for display to the user by application engine 340 of HMD 112.
Application engine 320 and rendering engine 322 construct the artificial content for display to user 110 in accordance with current pose information for a frame of reference, typically a viewing perspective of HMD 112, as determined by pose tracker 326. Based on the current viewing perspective, rendering engine 322 constructs the 3D, artificial reality content which may in some cases be overlaid, at least in part, upon the real-world 3D environment of user 110. During this process, pose tracker 326 operates on sensed data received from HMD 112, such as movement information and user commands, and, in some examples, data from any external sensors 90 (FIGS. 1A, 1B), such as external cameras, to capture 3D information within the real world environment, such as motion by user 110 and/or feature tracking information with respect to user 110. Based on the sensed data, pose tracker 326 determines a current pose for the frame of reference of HMD 112 and, in accordance with the current pose, constructs the artificial reality content for communication, via the one or more I/O interfaces 315, to HMD 112 for display to user 110.
Moreover, based on the sensed data, gesture detector 324 analyzes the tracked motions, configurations, positions, and/or orientations of objects (e.g., hands, arms, wrists, fingers, palms, thumbs) of the user to identify one or more gestures performed by user 110. More specifically, gesture detector 324 analyzes objects recognized within image data captured by image capture devices 138 of HMD 112 and/or sensors 90 and external cameras 102 to identify a hand and/or arm of user 110, and track movements of the hand and/or arm relative to HMD 112 to identify gestures performed by user 110. Gesture detector 324 may track movement, including changes to position and orientation, of the hand, digits, and/or arm based on the captured image data, and compare motion vectors of the objects to one or more entries in gesture library 330 to detect a gesture or combination of gestures performed by user 110. Some entries in gesture library 330 may each define a gesture as a series or pattern of motion, such as a relative path or spatial translations and rotations of a user's hand, specific fingers, thumbs, wrists and/or arms. Some entries in gesture library 330 may each define a gesture as a configuration, position, and/or orientation of the user's hand and/or arms (or portions thereof) at a particular time, or over a period of time. Other types of gestures are possible. In addition, each of the entries in gesture library 330 may specify, for the defined gesture or series of gestures, conditions that are required for the gesture or series of gestures to trigger an action, such as spatial relationships to a current field of view of HMD 112, spatial relationships to the particular region currently being observed by the user, as may be determined by real-time gaze tracking of the individual, types of artificial content being displayed, types of applications being executed, and the like.
Each of the entries in gesture library 330 further may specify, for each of the defined gestures or combinations/series of gestures, a desired response or action to be performed by software applications 317. For example, in accordance with the techniques of this disclosure, certain specialized gestures may be pre-defined such that, in response to detecting one of the pre-defined gestures, user interface engine 328 dynamically generates a user interface as an overlay to artificial reality content being displayed to the user, thereby allowing the user 110 to easily invoke a user interface for configuring HMD 112 and/or console 106 even while interacting with artificial reality content. In other examples, certain gestures may be associated with other actions, such as providing input, selecting objects, launching applications, and the like.
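A gesture library of this kind can be modeled as a list of entries, each pairing a configuration test with trigger conditions and a response. The sketch below is a hypothetical illustration, not the patent's implementation; the entry fields, thresholds, and the dictionary-based hand state are assumptions made for the example.

```python
# Hypothetical gesture-library entry: a configuration test plus trigger conditions.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class GestureEntry:
    name: str
    matches: Callable[[dict], bool]              # test over tracked hand/arm state
    required_in_view: bool = True                # example trigger condition
    action: Optional[Callable[[], None]] = None  # desired response when detected

@dataclass
class GestureLibrary:
    entries: List[GestureEntry] = field(default_factory=list)

    def detect(self, hand_state: dict, hand_in_view: bool) -> Optional[GestureEntry]:
        """Return the first entry whose configuration test and conditions are met."""
        for entry in self.entries:
            if entry.required_in_view and not hand_in_view:
                continue
            if entry.matches(hand_state):
                return entry
        return None

# Example: register a pinch-based menu activation entry with illustrative thresholds.
library = GestureLibrary()
library.entries.append(GestureEntry(
    name="menu_activation",
    matches=lambda s: s.get("palm_up", False) and s.get("pinch_distance", 1.0) < 0.02,
))
```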
As an example, gesture library 330 may include entries that describe a menu prompt gesture, menu activation gesture, a menu sliding gesture, a selection gesture, and menu positioning motions. Gesture detector 324 may process image data from image capture devices 138 to analyze configurations, positions, motions, and/or orientations of a user's hand to identify a menu prompt gesture, menu activation gesture, menu sliding gesture, selection gesture, and menu positioning motions, etc., that may be performed by users. The rendering engine 322 can render a menu and virtual hand based on detection of the menu prompt gesture, menu activation gesture, menu sliding gesture, and menu positioning motions. The user interface engine 328 can define the menu that is displayed and can control actions that are performed in response to selections caused by selection gestures.
FIG. 4 is a block diagram depicting an example in which gesture detection and user interface generation is performed by HMD 112 of the artificial reality systems of FIGS. 1A, 1B in accordance with the techniques of the disclosure.
In this example, similar to FIG. 3 , HMD 112 includes one or more processors 302 and memory 304 that, in some examples, provide a computer platform for executing an operating system 305, which may be an embedded, real-time multitasking operating system, for instance, or other type of operating system. In turn, operating system 305 provides a multitasking operating environment for executing one or more software components 417. Moreover, processor(s) 302 are coupled to electronic display 203, motion sensors 206, and image capture devices 138.
In the example of FIG. 4 , software components 417 operate to provide an overall artificial reality application. In this example, software applications 417 include application engine 440, rendering engine 422, gesture detector 424, pose tracker 426, and user interface engine 428. In various examples, software components 417 operate similar to the counterpart components of console 106 of FIG. 3 (e.g., application engine 320, rendering engine 322, gesture detector 324, pose tracker 326, and user interface engine 328) to construct user interface elements overlaid on, or as part of, the artificial content for display to user 110 in accordance with detected gestures of user 110. In some examples, rendering engine 422 constructs the 3D, artificial reality content which may be overlaid, at least in part, upon the real-world, physical environment of user 110.
Similar to the examples described with respect to FIG. 3 , based on the sensed data, gesture detector 424 analyzes the tracked motions, configurations, positions, and/or orientations of objects (e.g., hands, arms, wrists, fingers, palms, thumbs) of the user to identify one or more gestures performed by user 110. In accordance with the techniques of the disclosure, user interface engine 428 generates user interface elements as part of, e.g., overlaid upon, the artificial reality content to be displayed to user 110 and/or performs actions based on one or more gestures or combinations of gestures of user 110 detected by gesture detector 424. More specifically, gesture detector 424 analyzes objects recognized within image data captured by image capture devices 138 of HMD 112 and/or sensors 90 or external cameras 102 to identify a hand and/or arm of user 110, and track movements of the hand and/or arm relative to HMD 112 to identify gestures performed by user 110. Gesture detector 424 may track movement, including changes to position and orientation, of the hand, digits, and/or arm based on the captured image data, and compare motion vectors of the objects to one or more entries in gesture library 430 to detect a gesture or combination of gestures performed by user 110.
Gesture library 430 is similar to gesture library 330 of FIG. 3 . Each of the entries in gesture library 430 may specify, for the defined gesture or series of gestures, conditions that are required for the gesture to trigger an action, such as spatial relationships to a current field of view of HMD 112, spatial relationships to the particular region currently being observed by the user, as may be determined by real-time gaze tracking of the individual, types of artificial content being displayed, types of applications being executed, and the like.
In response to detecting a matching gesture or combination of gestures, HMD 112 performs the response or action assigned to the matching entry in gesture library 430. For example, in accordance with the techniques of this disclosure, certain specialized gestures may be pre-defined such that, in response to gesture detector 424 detecting one of the pre-defined gestures, user interface engine 428 dynamically generates a user interface as an overlay to artificial reality content being displayed to the user, thereby allowing the user 110 to easily invoke a user interface for configuring HMD 112 while viewing artificial reality content. In other examples, in response to gesture detector 424 detecting one of the pre-defined gestures, user interface engine 428 and/or application engine 440 may receive input, select values or parameters associated with user interface elements, launch applications, modify configurable settings, send messages, start or stop processes or perform other actions.
As an example, gesture library 430 may include entries that describe a menu prompt gesture, menu activation gesture, a menu sliding gesture, menu positioning motions, and a selection gesture. Gesture detector 424 can utilize image data from image capture devices 138 to analyze configurations, positions, and/or orientations of a user's hand to identify a menu prompt gesture, menu activation gesture, menu sliding gesture, selection gesture, or menu positioning motions, etc., that may be performed by users. The rendering engine 422 can render a UI menu, slidably engageable element, and/or virtual hand based on detection of the menu activation gesture, menu sliding gesture, selection gesture, and menu positioning motions. The user interface engine 428 can define the menu that is displayed and can control actions performed by application engine 440 in response to selections caused by selection gestures.
FIGS. 5 and 6 are flowcharts illustrating example methods for activating menu prompts and menus, and for determining positioning and user interaction with menus. The operations illustrated in FIGS. 5 and 6 may be performed by one or more components of an artificial reality system, such as artificial reality systems 10, 20 of FIGS. 1A, 1B. For instance, some or all of the operations may be performed by one or more of a gesture detector (324, 424 of FIGS. 3 and 4 ), a user interface engine (328, 428 of FIGS. 3 and 4 ), and a rendering engine (322, 422 of FIGS. 3 and 4 ).
FIG. 5 is a flowchart illustrating operations of an example method for activating a menu prompt or a menu interface in accordance with aspects of the disclosure. As noted above, certain configurations of a hand may be detected and used to trigger activation of a menu interface or menu prompt. The artificial reality system may determine a current configuration of a hand (502). The configuration may include an orientation of the hand and positioning of digits of the hand with respect to one another. In one or more aspects, image data may be captured and analyzed to determine the configuration of the hand. Other sensor data may be used in addition to, or instead of, image data to determine the configuration of the hand.
The artificial reality system may determine if the current configuration of the hand indicates that the user is performing a menu prompt gesture (504). In one or more aspects, the artificial reality system can be configurable (for example, by the user) to determine a configuration of the left hand or the right hand. In one or more aspects, the artificial reality system can utilize data describing the current configuration of the hand and data in one or more entries of a gesture library that specify particular gestures to determine if the current configuration of the hand is a menu prompt gesture. In one or more aspects, the menu prompt gesture can be a configuration of the hand in which the hand is in a substantially upturned orientation, and a finger and the thumb of the user's hand are positioned such that a space exists between the finger and the thumb. For the menu prompt gesture, the finger and the thumb of the user's hand may form a “C” shape or pincer shape, where the finger and the thumb do not touch at the ends. Those of skill in the art having the benefit of the disclosure will appreciate that other configurations of a hand can be used as a menu prompt gesture.
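As a concrete illustration of such a configuration test, the sketch below checks for a substantially upturned palm with an open gap between the thumb and a finger. It is a minimal sketch under assumed conventions: the palm normal and fingertip positions are assumed to be available from hand tracking in world coordinates, and the 30-degree and gap thresholds are illustrative, not values mandated by the disclosure.

```python
# Sketch of a menu-prompt-gesture test (illustrative thresholds, hypothetical inputs).
import numpy as np

def is_menu_prompt(palm_normal, thumb_tip, index_tip,
                   up=np.array([0.0, 1.0, 0.0]),
                   max_angle_deg=30.0, min_gap_m=0.03, max_gap_m=0.12):
    """Upturned hand with an open gap (a 'C' shape) between thumb and finger."""
    n = np.asarray(palm_normal, dtype=float)
    n = n / np.linalg.norm(n)
    # Hand is "substantially upturned" if the palm normal is close to world up.
    angle = np.degrees(np.arccos(np.clip(np.dot(n, up), -1.0, 1.0)))
    if angle > max_angle_deg:
        return False
    # Finger and thumb form a pincer with a visible space between the tips.
    gap = np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip))
    return min_gap_m <= gap <= max_gap_m
```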
In response to determining that the user has performed the menu prompt gesture, the artificial reality system may render a menu prompt (506). In one or more aspects, the menu prompt is rendered in proximity to a virtual hand representing the orientation of the user's hand. The menu prompt may be a UI element located between the virtual finger and virtual thumb of the virtual hand corresponding to the finger and thumb of the user performing the menu prompt gesture.
FIG. 8 is an example HMD display 800 illustrating a menu prompt 810 in accordance with aspects of the disclosure. In the example illustrated in FIG. 8 , the user of the artificial reality system has placed their hand in a substantially upturned orientation with a space between the index finger and the thumb. (Other fingers besides the index finger may be used for the menu prompt and menu activation gestures.) The artificial reality system can determine the position and orientation of the hand and can render a virtual hand 136 to match the orientation of the user's hand and finger positioning. In addition, the artificial reality system can detect that the user has performed a menu prompt gesture with their hand based on the configuration of the hand. In response to detecting the menu prompt gesture, the artificial reality system can render a menu prompt 810 between the index finger and thumb of the virtual hand. The menu prompt 810 can be a user interface element that serves as an indicator or reminder (i.e., a prompt) to the user that the user can perform an action with the thumb and index finger (e.g., a pinching action) to place the user's hand in a menu activation gesture to cause the artificial reality system to provide a menu to the user. In some aspects, the menu prompt 810 can include a line extending between the index finger and the thumb. In some aspects, the menu prompt 810 can include a virtual object positioned between the thumb and the index finger. In some aspects, the menu prompt 810 can include highlighting the index finger and/or the thumb. Other types of user interface elements can be rendered as a menu prompt 810. For example, arrows may be used to indicate the direction that the user's index finger and thumb should be moved in order to activate a menu.
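Placing the prompt element between the virtual thumb and fingertip can be as simple as anchoring it at the midpoint of the two tracked tips. The helper below is a hypothetical sketch of that placement, not an API from the disclosure.

```python
# Place a menu prompt element between the virtual thumb and index fingertip
# (a minimal sketch; the prompt is a line segment plus a midpoint anchor).
import numpy as np

def menu_prompt_geometry(thumb_tip, index_tip):
    """Return endpoints for a prompt line and an anchor point for a prompt object."""
    a = np.asarray(thumb_tip, dtype=float)
    b = np.asarray(index_tip, dtype=float)
    midpoint = (a + b) / 2.0   # anchor for a small virtual object or label
    return {"line": (a, b), "anchor": midpoint}
```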
Returning to FIG. 5 , after rendering the menu prompt, the artificial reality system may determine a new current configuration of the hand (502). There may be many other operations performed by the artificial reality system in between rendering the menu prompt and determining a new configuration of the hand.
If the current configuration of the hand does not match a menu prompt gesture, the artificial reality system can determine if the current configuration of the hand indicates the user is performing a menu activation gesture. In one or more aspects, the artificial reality system can utilize data describing the current configuration of the hand and data in one or more entries of the gesture library to determine if the current configuration of the hand is a menu activation gesture. In one or more aspects, the menu activation gesture can be a configuration of the hand in which the hand is in a substantially upturned orientation, and a finger and the thumb of the user's hand are positioned in a pinching configuration. Those of skill in the art having the benefit of the disclosure will appreciate that other configurations of a hand can be used as a menu activation gesture. For example, the menu activation gesture may comprise a finger and thumb positioned in a pinching configuration irrespective of the orientation of the hand.
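A menu activation test can mirror the prompt test, replacing the open gap with a pinch (fingertips touching or nearly touching). Again, this is a hedged sketch: the keypoint convention and the pinch and orientation thresholds are assumptions, and the optional `require_upturned` flag reflects the variant noted above in which the orientation of the hand is ignored.

```python
# Sketch of a menu-activation-gesture test: upturned hand with thumb and finger pinched.
# Thresholds and the keypoint convention are assumptions, not values from the disclosure.
import numpy as np

def is_menu_activation(palm_normal, thumb_tip, index_tip,
                       up=np.array([0.0, 1.0, 0.0]),
                       max_angle_deg=30.0, pinch_threshold_m=0.02,
                       require_upturned=True):
    n = np.asarray(palm_normal, dtype=float)
    n = n / np.linalg.norm(n)
    angle = np.degrees(np.arccos(np.clip(np.dot(n, up), -1.0, 1.0)))
    if require_upturned and angle > max_angle_deg:
        return False
    # Pinching configuration: thumb tip and fingertip touch or nearly touch.
    gap = np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip))
    return gap <= pinch_threshold_m
```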
In response to determining that the user has performed the menu activation gesture, the artificial reality system can render a UI menu (510). In one or more aspects, the menu is rendered in proximity to a virtual hand representing the orientation of the user's hand. In some aspects, the artificial reality system may render the UI menu responsive to detecting a menu activation gesture only if the artificial reality system first detected a menu prompt gesture. In some aspects, the menu prompt gesture is not a prerequisite.
FIG. 7A is an example HMD display 700 depicting a UI menu 124 in accordance with aspects of the disclosure. In the example illustrated in FIG. 7A, the user of the artificial reality system has placed their hand in a substantially upturned orientation with the index finger and thumb of the hand in a pinching configuration. (Again, the index finger is being used as one example of a finger of the hand.) The artificial reality system can determine the position and orientation of the hand and can render a virtual hand 136 to represent the orientation of the user's hand and finger positioning. In addition, the artificial reality system can detect that the user has performed a menu activation gesture with their hand based on the configuration of the hand. In response to detecting the menu activation gesture, the artificial reality system can render a UI menu 124 in proximity to the virtual hand. The UI menu 124 can include one or more UI elements 126 that are arrayed along a dimension of the UI menu 124. In one or more aspects, the one or more UI elements 126 can be menu items arrayed along a horizontal dimension of a coordinate space such as a viewing space or display space. In the example illustrated in FIG. 7A, a coordinate axis 704 is shown solely to illustrate the coordinate space. The coordinate axis 704 need not be presented on the actual display. In the examples illustrated in FIGS. 7A-7G, the horizontal dimension is along the X axis, the vertical dimension is along the Y axis, and depth is along the Z axis.
As noted above, the menu activation gesture can include the user placing their hand in a substantially upturned orientation. For example, the artificial reality system can detect that a vector 702 normal to the palm or other surface of the hand is also substantially normal to the plane formed by the X axis and Z axis. In one or more aspects, the vector 702 can be considered substantially normal if the vector 702 is within thirty degrees of normal to the plane formed by the X axis and Z axis (illustrated by dashed lines). Other thresholds besides thirty degrees can be used in one or more aspects.
In one or more aspects, a slidably engageable UI element 706 may be rendered in proximity to the virtual hand 136. In the example illustrated in FIG. 7A, the slidably engageable UI element 706 is a circle. Other graphical elements such as spheres, triangles, squares, etc., or virtual hand 136 alone, can serve as the slidably engageable UI element 706. Additionally, a finger or fingers of virtual hand 136 can be highlighted to indicate that a highlighted portion of a finger or fingers is the slidably engageable UI element.
FIG. 7B is an example HMD display 740 illustrating a UI menu and slidably engageable UI element in accordance with aspects of the disclosure. In the example illustrated in FIG. 7B, the user has performed a menu sliding gesture so as to cause the artificial reality system to render the slidably engageable UI element 706 at a position in proximity to menu item 708. In one or more aspects, the menu item 708 in proximity to the slidably engageable UI element 706 can be highlighted or otherwise augmented or modified to indicate that the menu item 708 will be selected upon the user performing a selection gesture. A label 710 can be provided in proximity to the menu item 708 in addition to, or instead of, highlighting the menu item 708. Various highlighting mechanisms, including border highlighting, background highlighting, blinking, enlargement, etc., may be used to highlight the menu item 708. Highlighting menu item 708 can indicate that menu item 708 will be selected if the user performs a selection gesture. In one or more aspects, the selection gesture can be movement of a finger of the hand that is different from the finger in the pinching configuration. In one or more aspects, the selection gesture can be a movement of the hand in a direction that is substantially normal to the plane of the UI menu. As used herein, "substantially normal" to a plane may indicate within 0-2 degrees of the normal to the plane, within 0-5 degrees of the normal to the plane, within 0-10 degrees of the normal to the plane, within 0-20 degrees of the normal to the plane, or within 0-30 degrees of the normal to the plane. In one or more aspects, the selection gesture can be reconfiguring the thumb and the finger of the hand to no longer be in the pinching configuration. In one or more aspects, the selection gesture may be a motion or reconfiguration of a different finger (e.g., the pinky finger), such as to curl or extend. Detection of the selection gesture may cause the artificial reality system to perform some action. For example, the selection gesture may cause an application to be instantiated, or can cause a currently running application to be brought into the foreground of the display of the HMD, or in some cases may cause the artificial reality system to perform some action within the particular executing artificial reality application.
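Determining which menu item the slidably engageable UI element is "in proximity to" can be done by snapping the element's offset along the menu to the nearest item position. The helper below is an illustrative sketch that assumes evenly spaced items; the spacing parameter and layout convention are assumptions.

```python
# Sketch: map the slidably engageable element's horizontal offset along the menu
# to the index of the menu item that would be highlighted/selected.
def highlighted_item_index(slider_x, menu_origin_x, item_spacing, item_count):
    """Return the index of the menu item nearest the slider, clamped to the menu."""
    raw = round((slider_x - menu_origin_x) / item_spacing)
    return max(0, min(item_count - 1, int(raw)))
```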
Thus, a sequence of gestures may be used to trigger display of a menu 124, position the slidably engageable UI element 706 over, or in proximity to a menu element 708 of the menu 124, and select the menu element 708. In an example implementation, a user can perform a menu activation gesture (e.g., position the fingers of a hand in a pinching configuration) to cause a menu 124 to be displayed by the HMD. The user can perform a menu sliding gesture (e.g., move their hand while maintaining the pinching configuration) to cause a slidably engageable UI element 706 to slide along the menu 124 in accordance with the motion of the hand. The user may then perform a selection gesture (e.g., release the finger and thumb from the pinching configuration) to select the menu element 708 indicated by the slidably engageable UI element 706.
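That sequence can be summarized as a small state machine driven each frame by the gesture detector's output. The sketch below is illustrative only; the state names and returned action strings are assumptions, and a real system would also handle tracking loss and other transitions.

```python
# Minimal sketch of the prompt/activation/selection sequence as a state machine.
from enum import Enum, auto

class MenuState(Enum):
    IDLE = auto()        # no prompt or menu shown
    PROMPTED = auto()    # menu prompt rendered between thumb and finger
    MENU_OPEN = auto()   # pinch held; menu and slidable element rendered

def step(state, prompt_gesture, activation_gesture, selection_gesture):
    """Advance the menu state for one frame of gesture-detector output."""
    if state == MenuState.IDLE and prompt_gesture:
        return MenuState.PROMPTED, "render_prompt"
    if state in (MenuState.IDLE, MenuState.PROMPTED) and activation_gesture:
        return MenuState.MENU_OPEN, "render_menu"
    if state == MenuState.MENU_OPEN and selection_gesture:
        return MenuState.IDLE, "select_highlighted_item"
    if state == MenuState.MENU_OPEN and not activation_gesture:
        return MenuState.IDLE, "dismiss_menu"
    if state == MenuState.PROMPTED and not prompt_gesture:
        return MenuState.IDLE, "dismiss_prompt"
    return state, None
```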
Returning to FIG. 5 , after rendering the UI menu 124, the artificial reality system may determine a new current configuration of the hand (502). There may be many other operations performed by the artificial reality system in between rendering the UI menu 124 and determining a new configuration of the hand.
If the artificial reality system detects that the user's hand is no longer performing a menu prompt gesture or a menu activation gesture, then the artificial reality system can determine if a UI menu 124 or menu prompt 810 has been displayed. If so, the artificial reality system can remove the UI menu 124 or menu prompt 810 from the display (514) because the user's hand is no longer in the appropriate configuration to display the UI menu 124 or menu prompt 810.
After removing the UI menu 124 or menu prompt 810, the flow returns to determine a new current configuration of the hand (502). There may be many other operations performed by the artificial reality system in between removing the UI menu 124 or menu prompt 810 and determining a new configuration of the hand.
FIG. 6 is a flowchart illustrating operations of an example method for positioning and interacting with a UI menu in accordance with aspects of the disclosure. The artificial reality system may determine a position of the hand (602). For example, the position can be determined from image data captured from image sensors or from other types of sensors coupled with the artificial reality system.
The artificial reality system may determine if the UI menu is currently active (i.e., is being rendered and displayed via the HMD) (604). If the UI menu is not currently active, flow can return to determining an updated position of the hand (602). There may be many other operations performed by the artificial reality system in between determining if the UI menu is active and determining an updated position of the hand.
If the UI menu is active, then the artificial reality system can determine if the user has performed a menu sliding gesture (606). In some aspects, the menu sliding gesture can be substantially horizontal motion of the hand while the menu is active (e.g., while the user's hand is performing a menu activation gesture). For example, the artificial reality system can compare a previous position of the hand with a current position of the hand to determine if a menu sliding gesture has occurred. If the menu sliding gesture is detected, then the artificial reality system can translate the virtual hand and/or slidably engageable interface element along the UI menu 124 in accordance with the menu sliding gesture (608). If the menu items are oriented vertically, then the menu sliding gesture can be substantially vertical motion of the hand. If the menu sliding gesture is not detected, the artificial reality system can determine if other motion of the hand is detected that is not the menu sliding gesture (610). "Substantially" vertical and horizontal may be within 5 degrees, 10 degrees, 20 degrees, or 30 degrees of vertical or horizontal.
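A minimal sketch of the sliding step, assuming frame-to-frame hand positions in a coordinate frame whose first axis is the menu's horizontal dimension, is shown below; the small-motion threshold is an illustrative assumption rather than a value from the disclosure.

```python
# Sketch of the menu sliding gesture while the menu is active: substantially
# horizontal hand motion translates the slidable element along the menu,
# while the menu itself stays put in that dimension.
import numpy as np

def apply_menu_sliding(prev_hand_pos, curr_hand_pos, slider_x,
                       horizontal_axis=0, min_motion_m=0.005):
    """Return the updated slider offset if the frame-to-frame motion is a slide."""
    delta = np.asarray(curr_hand_pos, dtype=float) - np.asarray(prev_hand_pos, dtype=float)
    horizontal = delta[horizontal_axis]
    if abs(horizontal) < min_motion_m:
        return slider_x            # too small to count as a sliding gesture
    return slider_x + horizontal   # menu origin is not changed here
```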
FIG. 7C is an example HMD display 750 illustrating a UI menu and menu sliding gesture in accordance with aspects of the disclosure. In one or more aspects, the menu sliding gesture can be motion of the user's hand along a horizontal dimension of the UI menu 124 (e.g., motion parallel to an X axis). In the example illustrated in FIG. 7C, the user has moved their hand (while maintaining the menu activation gesture) along a horizontal dimension of the UI menu 124. The artificial reality system can reposition the virtual hand 136 and the slidably engageable UI element 706 in accordance with the motion of the user's hand such that the virtual hand 136 and slidably engageable UI element 706 are in proximity to menu item 712. The artificial reality system can remove highlighting from menu element 708 and can highlight menu element 712 to indicate that menu element 712 will be selected if the user performs the selection gesture. A label 714 can be displayed in addition to, or instead of highlighting the menu item 712. In some examples, the artificial reality system does not highlight menu items. In some examples, the artificial reality system does not render UI element 706. In one or more aspects, the UI menu 124 remains in the same position in the horizontal direction as it was in prior to the menu sliding gesture. In other words, the UI menu 124 is horizontally stationary while the virtual hand 136 and slidably engageable UI element 706 move along the horizontal dimension of the UI menu 124 responsive to the user performing the menu sliding gesture.
Returning to FIG. 6 , after determining whether or not horizontal motion has been detected, the artificial reality system can determine if non-horizontal motion by the user's hand has occurred (610). For example, the artificial reality system can determine if there has been motion in a vertical direction (i.e., motion parallel to the Y axis) and/or front-to-back or back-to-front motion (i.e., motion parallel to the Z axis). If non-horizontal motion of the user's hand is detected, the artificial reality system can translate the position of the virtual hand, slidably engageable UI element, and UI menu in accordance with the non-horizontal motion. In examples where the UI menu items are arrayed vertically, the non-vertical motion of the user's hand constitutes the "other movement" of the hand.
FIG. 7D is an example HMD display 760 illustrating a UI menu after vertical motion has been detected. In the example illustrated in FIG. 7D, the user has moved their hand downward. The artificial reality system can detect the vertical motion, and can translate the position of the UI menu 124, virtual hand 136 and slidably engageable UI element 706 based on the detected vertical motion. In some aspects, if there is no horizontal motion detected in addition to the vertical motion, the virtual hand 136 and slidably engageable UI element 706 remain in their previous position with respect to the UI menu 124. Thus, the same menu element 712 remains highlighted as the vertical motion occurs.
FIG. 7E is an example HMD display 770 illustrating a UI menu after back-to-front motion has been detected (i.e., the user has moved their hand closer to themselves along the Z axis). The artificial reality system can detect the back-to-front motion and can translate the position of the UI menu 124, virtual hand 136 and slidably engageable UI element 706 based on the detected back-to-front motion. Thus, the position of the virtual hand, UI menu 124, and slidably engageable UI element 706 appear to be larger, and thus closer to the user. In some aspects, if there is no horizontal motion detected in addition to the motion along the Z axis, the virtual hand 136 and slidably engageable UI element 706 remain in their previous position with respect to the UI menu 124. Thus, the same menu element 712 remains highlighted as the motion along the Z axis occurs.
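The combined behavior of FIGS. 7C-7E can be sketched as a per-frame decomposition of the hand's motion: the component along the menu's horizontal dimension moves the slidably engageable element, while the remaining components move the whole menu (and the element with it). The function below is a hypothetical sketch under that coordinate convention.

```python
# Sketch: split frame-to-frame hand motion into a horizontal component that
# slides the element along the menu and a non-horizontal component that
# translates the menu itself (X = horizontal, Y = vertical, Z = depth).
import numpy as np

def update_menu_layout(hand_delta, menu_origin, slider_offset_x):
    """Apply one frame of hand motion to the menu origin and the slider offset."""
    d = np.asarray(hand_delta, dtype=float)           # (dx, dy, dz)
    slider_offset_x += d[0]                           # horizontal motion: slide along menu
    menu_origin = np.asarray(menu_origin, dtype=float) + np.array([0.0, d[1], d[2]])
    return menu_origin, slider_offset_x               # menu stays fixed horizontally
```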
Returning to FIG. 6 , after the UI menu, virtual hand, and slidably engageable UI element have been translated according to motion of the user's hand (if any), the artificial reality system can render the virtual hand and slidably engageable UI element in proximity to the UI menu 124 based on the current position of the user's hand. Flow can then return to determine an updated position of the hand (602). There may be many other operations performed by the artificial reality system in between rendering the UI menu and determining an updated position of the hand.
FIG. 7F is an example HMD display 780 illustrating a UI menu 124 and UI icon array 720 in accordance with aspects of the disclosure. In some aspects, the menu items of a UI menu 124 can correspond to applications. In one or more aspects, UI menu 124 can be divided into two portions 716 and 718. The menu elements in portion 716 can represent favorite applications, and the menu elements in portion 718 can represent applications currently running within the artificial reality system. Further, in some aspects, the artificial reality system can present an icon array 720 of icons representing applications available or running in the artificial reality system. The images on the individual icons in icon array 720 can represent a current display of the corresponding application, or an image associated with the application.
FIG. 7G is an example HMD display 790 illustrating a UI menu 124 and UI icon array 720 in accordance with aspects of the disclosure. In the example illustrated in FIG. 7G, the virtual hand 136 and slidably engageable UI element 706 are in proximity to a menu item 724. In this example, a three-dimensional highlighting is used, where the menu item in proximity to the slidably engageable UI element 706 can be brought forward, thereby making the image appear larger to the user. In addition, the icon 722 corresponding to the menu item 724 can also be highlighted. In this example, the border of the icon 722 is highlighted.
The discussion above has presented aspects of the artificial reality system in which the UI menu is configured in a horizontal direction. In other aspects, the UI menu can be configured in a vertical direction. In such aspects, vertical motion of the hand can cause the slidably engageable UI element and virtual hand to move along the vertical dimension of the UI menu while the UI menu remains stationary in the vertical direction. Non-vertical motion (i.e., horizontal motion or front-to-back motion) can cause translation of the position of the UI menu in accordance with the non-vertical motion.
The techniques described in this disclosure may be implemented, at least in part, in hardware, software, firmware or any combination thereof. For example, various aspects of the described techniques may be implemented within one or more processors, including one or more microprocessors, DSPs, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components. The term “processor” or “processing circuitry” may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry. A control unit comprising hardware may also perform one or more of the techniques of this disclosure.
Such hardware, software, and firmware may be implemented within the same device or within separate devices to support the various operations and functions described in this disclosure. In addition, any of the described units, modules or components may be implemented together or separately as discrete but interoperable logic devices. Depiction of different features as modules or units is intended to highlight different functional aspects and does not necessarily imply that such modules or units must be realized by separate hardware or software components. Rather, functionality associated with one or more modules or units may be performed by separate hardware or software components or integrated within common or separate hardware or software components.
The techniques described in this disclosure may also be embodied or encoded in a computer-readable medium, such as a computer-readable storage medium, containing instructions. Instructions embedded or encoded in a computer-readable storage medium may cause a programmable processor, or other processor, to perform the method, e.g., when the instructions are executed. Computer readable storage media may include random access memory (RAM), read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), flash memory, a hard disk, a CD-ROM, a floppy disk, a cassette, magnetic media, optical media, or other computer readable media.
As described by way of various examples herein, the techniques of the disclosure may include or be implemented in conjunction with an artificial reality system. As described, artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured content (e.g., real-world photographs). The artificial reality content may include video, audio, haptic feedback, or some combination thereof, and any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Additionally, in some embodiments, artificial reality may be associated with applications, products, accessories, services, or some combination thereof, that are, e.g., used to create content in an artificial reality and/or used in (e.g., perform activities in) an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head mounted device (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.

Claims (60)

What is claimed is:
1. An artificial reality system comprising:
an image capture device configured to capture image data;
a head mounted display (HMD) configured to output artificial reality content;
a gesture detector comprising processing circuitry configured to identify, from the image data, a menu activation gesture comprising a configuration of a hand in a substantially upturned orientation of the hand and a pinching configuration of a thumb and a finger of the hand and identify, from the image data and subsequent to the menu activation gesture, a menu sliding gesture comprising the configuration of the hand in combination with a motion of the hand;
a user interface (UI) engine configured to, in response to the menu activation gesture, generate a menu interface and a slidably engageable UI element at a first position relative to the menu interface, and in response to the menu sliding gesture that, via at least the configuration of the hand, engages the slidably engageable UI element, translate the slidably engageable UI element to a second position relative to the menu interface by sliding the slidably engageable UI element such that each motion of the hand, in at least one dimension, is translated to a motion of the menu sliding gesture that causes the sliding of the slidably engageable UI element; and
a rendering engine configured to render the artificial reality content, the menu interface, and the translation of the slidably engageable UI element from the first position relative to the user menu interface to the second position relative to the user menu interface for display at the HMD.
2. The artificial reality system of claim 1,
wherein the menu interface comprises one or more menu items arrayed along a dimension of the menu interface, and
wherein the UI engine is configured to highlight one of the menu items according to a position of the slidably engageable UI element relative to the menu interface.
3. The artificial reality system of claim 2, wherein the one or more menu items correspond to respective applications executing on the artificial reality system.
4. The artificial reality system of claim 1,
wherein the menu interface comprises one or more menu items arrayed along a dimension of the menu interface,
wherein to translate the slidably engageable UI element to the second position relative to the menu interface, the UI engine is configured to slide the slidably engageable UI element along the dimension of the menu interface to the second position relative to the menu interface.
5. The artificial reality system of claim 1,
wherein the menu sliding gesture comprises motion of the hand in a substantially first direction,
wherein the gesture detector is configured to identify, from the image data, motion of the hand in a substantially second direction subsequent to the menu activation gesture, the substantially second direction being substantially orthogonal to the substantially first direction,
wherein the UI engine is further configured to, in response to the motion of the hand in the substantially second direction, translate the slidably engageable UI element and the menu interface while retaining a position of the slidably engageable UI element relative to the menu interface.
6. The artificial reality system of claim 1, further comprising:
an application engine comprising processing circuitry for execution of one or more artificial reality applications,
wherein the gesture detector is configured to identify, from the image data, a selection gesture subsequent to the menu sliding gesture, and
wherein the application engine is configured to perform an action in response to the selection gesture.
7. The artificial reality system of claim 6, wherein the selection gesture comprises one of (1) movement of a different finger of the hand, (2) translation of the hand in a direction that is substantially normal to the menu interface, or (3) reconfiguring the thumb and the finger of the hand to no longer be in the pinching configuration.
8. The artificial reality system of claim 1,
wherein the gesture detector is further configured to identify, from the image data, a menu prompt gesture prior to the menu activation gesture, and
wherein the UI engine is further configured to generate a menu prompt element in response to the menu prompt gesture.
9. The artificial reality system of claim 8, wherein the menu prompt gesture comprises the hand configured in a substantially upturned position with a space between a thumb and a finger, and
wherein the UI engine generates the menu prompt element in the space between a thumb and a finger of a virtual hand.
10. The artificial reality system of claim 9, wherein the menu prompt element comprises a line between the thumb and the finger of the virtual hand.
11. The artificial reality system of claim 1, wherein the image capture device is integrated with the HMD.
12. A method comprising:
obtaining, by an artificial reality system including a head mounted display (HMD), image data via an image capture device;
identifying, by the artificial reality system from the image data, a menu activation gesture, the menu activation gesture comprising a configuration of a hand in a substantially upturned orientation of the hand and a pinching configuration of a thumb and a finger of the hand;
generating, by the artificial reality system in response to the menu activation gesture, a menu interface and a slidably engageable UI element at a first position relative to the menu interface;
identifying, from the image data and subsequent to the menu activation gesture, a menu sliding gesture comprising the configuration of the hand in combination with a motion of the hand;
in response to the menu sliding gesture that, via at least the configuration of the hand, engages the slidably engageable UI element, translate the slidably engageable UI element to a second position relative to the menu interface, by sliding the slidably engageable UI element such that each motion of the hand, in at least one dimension, is translated to a motion of the menu sliding gesture that causes the sliding of the slidably engageable UI element; and
rendering, by the artificial reality system, artificial reality content, the menu interface, and the translation of the slidably engageable UI element from the first position relative to the menu interface to the second position relative to the user menu interface for display at the HMD.
13. The method of claim 12, wherein the menu interface comprises one or more menu items arrayed along a dimension of the menu interface, the method further comprising:
highlighting, by the artificial reality system, one of the menu items according to a position of the slidably engageable UI element relative to the menu interface.
14. The method of claim 12, wherein translating the slidably engageable UI element to the second position relative to the menu interface comprises sliding the slidably engageable UI element along a dimension of the menu interface to the second position relative to the menu interface.
15. The method of claim 12,
wherein the menu sliding gesture comprises motion of the hand in a substantially first direction, the method further comprising:
identifying, by the artificial reality system from the image data, motion of the hand in a substantially second direction subsequent to the menu activation gesture, the substantially second direction being substantially orthogonal to the substantially first direction; and
translating, by the artificial reality system in response to the motion of the hand in the substantially second direction, the slidably engageable UI element and the menu interface while retaining a position of the slidably engageable UI element relative to the menu interface.
16. The method of claim 12, further comprising:
identifying, by the artificial reality system from the image data, a menu prompt gesture; and
generating, by the artificial reality system, a menu prompt element in response to the menu prompt gesture.
17. A non-transitory, computer-readable medium comprising instructions that, when executed, cause one or more processors of an artificial reality system to:
capture image data via an image capture device;
identify, from the image data, a menu activation gesture comprising a configuration of a hand;
in response to the menu activation gesture, generate a menu interface and a slidably engageable UI element at a first position relative to the menu interface;
identify, subsequent to the menu activation gesture, a menu sliding gesture comprising the configuration of the hand in combination with a motion of the hand;
in response to the menu sliding gesture that, via at least the configuration of the hand, engages the slidably engageable UI element, translate the slidably engageable UI element to a second position relative to the menu interface by sliding the slidably engageable UI element such that each motion of the hand, in at least one dimension, is translated to a motion of the menu sliding gesture that causes the sliding of the slidably engageable UI element; and
render artificial reality content, the menu interface, and the slidably engageable UI element for display at a head mounted display (HMD).
18. The non-transitory, computer-readable medium of claim 17, wherein the instructions further cause the one or more processors to:
identify, from the image data, a menu prompt gesture comprising the hand configured in a substantially upturned position with a space between a thumb and a finger; and
generate, in the space between a thumb and a finger of a virtual hand, a menu prompt element in response to the menu prompt gesture.
19. The non-transitory, computer-readable medium of claim 18, wherein the menu prompt element comprises a line between the thumb and the finger of the virtual hand.
20. The non-transitory, computer-readable medium of claim 17, wherein the menu interface comprises one or more menu items, and wherein the instructions further cause the one or more processors to:
modify the appearance of one of the menu items according to a position of the slidably engageable UI element relative to the menu interface.
21. The non-transitory, computer-readable medium of claim 20, wherein the one or more menu items correspond to respective applications executing on the artificial reality system.
22. The non-transitory, computer-readable medium of claim 17,
wherein the menu interface comprises one or more menu items arrayed along a dimension of the menu interface, and
wherein the slidably engageable UI element is rendered translating along the dimension of the menu interface from the first position relative to the menu interface to the second position relative to the menu interface.
23. The non-transitory, computer-readable medium of claim 22, wherein the one or more menu items correspond to respective applications executing on the artificial reality system.
24. The non-transitory, computer-readable medium of claim 17, wherein the menu sliding gesture comprises motion of the hand in a substantially first direction, and wherein the instructions further cause the one or more processors to:
identify, subsequent to the menu activation gesture, motion of the hand in a substantially second direction, the substantially second direction being substantially orthogonal to the substantially first direction; and
translate, in response to the motion of the hand in the substantially second direction, the slidably engageable UI element and the menu interface while retaining a position of the slidably engageable UI element relative to the menu interface.
25. The non-transitory, computer-readable medium of claim 17, wherein the instructions further cause the one or more processors to:
identify, subsequent to the menu sliding gesture, a selection gesture; and
perform an action in response to the selection gesture.
26. The non-transitory, computer-readable medium of claim 25, wherein the menu interface comprises one or more menu items, the second position relates to a first menu item of the one or more menu items, and the performed action is associated with the first menu item.
27. The non-transitory, computer-readable medium of claim 25,
wherein the configuration of the hand of the menu activation gesture comprises the hand in a substantially upturned orientation and a pinching configuration of a thumb and a finger of the hand; and
wherein the selection gesture comprises one of (1) movement of a different finger of the hand, (2) translation of the hand in a direction that is substantially normal to the menu interface, or (3) reconfiguring the thumb and the finger of the hand to not be in a pinching configuration.
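Claim 27 enumerates three alternative selection gestures. A hypothetical per-frame classifier distinguishing them might look like the following; the inputs and thresholds are assumptions for illustration, not features of the disclosure.

```python
from typing import Optional
import numpy as np

def classify_selection(pinch_released: bool,
                       other_finger_delta: float,
                       hand_delta: np.ndarray,
                       menu_normal: np.ndarray,
                       finger_threshold: float = 0.02,
                       push_threshold: float = 0.03) -> Optional[str]:
    """Report which of the three recited selection gestures, if any, the
    latest hand-tracking update resembles."""
    if pinch_released:                                   # (3) thumb and finger no longer pinching
        return "pinch_release"
    if abs(other_finger_delta) > finger_threshold:       # (1) movement of a different finger
        return "other_finger_motion"
    normal = menu_normal / np.linalg.norm(menu_normal)
    if abs(float(np.dot(hand_delta, normal))) > push_threshold:  # (2) translation normal to the menu
        return "push_toward_menu"
    return None
```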
28. The non-transitory, computer-readable medium of claim 17, wherein the instructions further cause the one or more processors to:
render a virtual hand for display at the HMD.
29. The non-transitory, computer-readable medium of claim 28, wherein the virtual hand comprises one or more digits.
30. A method comprising:
capturing image data via an image capture device;
identifying, from the image data, a menu activation gesture comprising a configuration of a hand;
in response to the menu activation gesture, generating a menu interface and a slidably engageable UI element at a first position relative to the menu interface;
identifying, subsequent to the menu activation gesture, a menu sliding gesture comprising the configuration of the hand in combination with a motion of the hand; and
in response to the menu sliding gesture that, via at least the configuration of the hand, engages the slidably engageable UI element, translating the slidably engageable UI element to a second position relative to the menu interface, and rendering the menu interface and the slidably engageable UI element for display at a head mounted display (HMD),
wherein the slidably engageable UI element is translated to the second position by sliding the slidably engageable UI element such that each motion of the hand, in at least one dimension, is translated to a motion of the menu sliding gesture that causes the sliding of the slidably engageable UI element.
31. The method of claim 30, further comprising:
identifying a menu prompt gesture; and
generating a menu prompt element in response to the menu prompt gesture.
32. The method of claim 31, wherein the menu prompt gesture comprises the hand configured in a substantially upturned position with a space between a thumb and a finger.
33. The method of claim 31, wherein the menu prompt element is generated in the space between a thumb and a finger of a virtual hand.
34. The method of claim 33, wherein the menu prompt element comprises a line between the thumb and the finger of the virtual hand.
35. The method of claim 30, wherein the menu interface comprises one or more menu items, the method further comprising:
modifying the appearance of one of the menu items according to a position of the slidably engageable UI element relative to the menu interface.
36. The method of claim 35, wherein the one or more menu items correspond to respective applications executing on an artificial reality system.
37. The method of claim 30,
wherein the menu interface comprises one or more menu items arrayed along a dimension of the menu interface, and
wherein translating the slidably engageable UI element to the second position relative to the menu interface comprises translating the slidably engageable UI element along the dimension of the menu interface from the first position relative to the menu interface to the second position relative to the menu interface.
38. The method of claim 37, wherein the one or more menu items correspond to respective applications executing on an artificial reality system.
39. The method of claim 30, wherein the menu sliding gesture comprises motion of the hand in a substantially first direction, the method further comprising:
identifying, subsequent to the menu activation gesture, motion of the hand in a substantially second direction, the substantially second direction being substantially orthogonal to the substantially first direction; and
translating, in response to the motion of the hand in the substantially second direction, the slidably engageable UI element and the menu interface while retaining a position of the slidably engageable UI element relative to the menu interface.
40. The method of claim 30, further comprising:
identifying, subsequent to the menu sliding gesture, a selection gesture; and
performing an action in response to the selection gesture.
41. The method of claim 40, wherein the menu interface comprises one or more menu items, the second position relates to a first menu item of the one or more menu items, and the performed action is associated with the first menu item.
42. The method of claim 40,
wherein the configuration of the hand of the menu activation gesture comprises the hand in a substantially upturned orientation and a pinching configuration of a thumb and a finger of the hand; and
wherein the selection gesture comprises one of (1) movement of a different finger of the hand, (2) translation of the hand in a direction that is substantially normal to the menu interface, or (3) reconfiguring a thumb and a finger of the hand to not be in a pinching configuration.
43. The method of claim 30, further comprising:
rendering a virtual hand for display at the HMD.
44. The method of claim 43, wherein the virtual hand comprises one or more digits.
45. An artificial reality system comprising:
an image capture device configured to capture image data;
a head mounted display (HMD) configured to output artificial reality content;
a gesture detector comprising processing circuitry configured to identify, from the image data, a menu activation gesture comprising a configuration of a hand and identify, subsequent to the menu activation gesture, a menu sliding gesture comprising the configuration of the hand in combination with a motion of the hand;
a user interface (UI) engine comprising processing circuitry configured to, in response to the menu activation gesture, generate a menu interface and a slidably engageable UI element at a first position relative to the menu interface, and in response to the menu sliding gesture that, via at least the configuration of the hand, engages the slidably engageable UI element, translate the slidably engageable UI element to a second position relative to the menu interface by sliding the slidably engageable UI element such that each motion of the hand, in at least one dimension, is translated to a motion of the menu sliding gesture that causes the sliding of the slidably engageable UI element; and
a rendering engine comprising processing circuitry configured to render the menu interface and the slidably engageable UI element for display at the HMD.
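Claim 45 recites three cooperating components: a gesture detector, a UI engine, and a rendering engine. The toy controller below sketches how such components could be wired into a per-frame update loop; the callables, state fields, and item names are placeholders rather than the claimed implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class MenuState:
    active: bool = False
    slider_offset: float = 0.0                     # slider position along the menu dimension
    items: List[str] = field(default_factory=list)

class SlidingMenuController:
    """Toy wiring of gesture detection and UI state: the two detector
    callables stand in for whatever hand-tracking pipeline the HMD provides,
    and the returned MenuState is what a rendering engine would draw."""

    def __init__(self,
                 detect_activation: Callable[[object], bool],
                 detect_slide: Callable[[object], float]):
        self.detect_activation = detect_activation   # frame -> activation gesture seen?
        self.detect_slide = detect_slide             # frame -> displacement along the menu axis
        self.state = MenuState(items=["app_a", "app_b", "app_c"])

    def update(self, frame) -> MenuState:
        if not self.state.active:
            if self.detect_activation(frame):
                self.state.active = True             # menu activation gesture detected
                self.state.slider_offset = 0.0       # slider starts at its first position
        else:
            # Menu sliding gesture: hand motion in one dimension moves the slider.
            self.state.slider_offset += self.detect_slide(frame)
        return self.state                            # handed off for rendering at the HMD
```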
46. The artificial reality system of claim 45,
wherein the gesture detector is further configured to identify a menu prompt gesture, and
wherein the UI engine is further configured to generate a menu prompt element in response to the menu prompt gesture.
47. The artificial reality system of claim 46, wherein the menu prompt gesture comprises the hand configured in a substantially upturned position with a space between a thumb and a finger.
48. The artificial reality system of claim 46, wherein the UI engine generates the menu prompt element in the space between a thumb and a finger of a virtual hand.
49. The artificial reality system of claim 48, wherein the menu prompt element comprises a line between the thumb and the finger of the virtual hand.
50. The artificial reality system of claim 45,
wherein the menu interface comprises one or more menu items, and
wherein the UI engine is configured to modify the appearance of one of the menu items according to a position of the slidably engageable UI element relative to the menu interface.
51. The artificial reality system of claim 50, wherein the one or more menu items correspond to respective applications executing on the artificial reality system.
52. The artificial reality system of claim 45,
wherein the menu interface comprises one or more menu items arrayed along a dimension of the menu interface,
wherein to translate the slidably engageable UI element to the second position relative to the menu interface, the UI engine is configured to translate the slidably engageable UI element along the dimension of the menu interface from the first position relative to the menu interface to the second position relative to the menu interface.
53. The artificial reality system of claim 52, wherein the one or more menu items correspond to respective applications executing on the artificial reality system.
54. The artificial reality system of claim 45,
wherein the menu sliding gesture comprises motion of the hand in a substantially first direction,
wherein the gesture detector is configured to identify motion of the hand in a substantially second direction subsequent to the menu activation gesture, the substantially second direction being substantially orthogonal to the substantially first direction, and
wherein the UI engine is further configured to, in response to the motion of the hand in the substantially second direction, translate the slidably engageable UI element and the menu interface while retaining a position of the slidably engageable UI element relative to the menu interface.
55. The artificial reality system of claim 45, further comprising:
an application engine comprising processing circuitry for execution of one or more artificial reality applications,
wherein the gesture detector is configured to identify a selection gesture subsequent to the menu sliding gesture, and
wherein the application engine is configured to perform an action in response to the selection gesture.
56. The artificial reality system of claim 55, wherein the menu interface comprises one or more menu items, the second position relates to a first menu item of the one or more menu items, and the performed action is associated with the first menu item.
57. The artificial reality system of claim 55,
wherein the configuration of the hand of the menu activation gesture comprises the hand in a substantially upturned orientation and a pinching configuration of a thumb and a finger of the hand; and
wherein the selection gesture comprises one of (1) movement of a different finger of the hand, (2) translation of the hand in a direction that is substantially normal to the menu interface, or (3) reconfiguring the thumb and the finger of the hand to not be in a pinching configuration.
58. The artificial reality system of claim 45, wherein the image capture device is integrated with the HMD.
59. The artificial reality system of claim 45, wherein the rendering engine is further configured to render a virtual hand for display at the HMD.
60. The artificial reality system of claim 59, wherein the virtual hand comprises one or more digits.

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/095,946 USRE50598E1 (en) 2019-06-07 2023-01-11 Artificial reality system having a sliding menu

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/434,919 US10890983B2 (en) 2019-06-07 2019-06-07 Artificial reality system having a sliding menu
US18/095,946 USRE50598E1 (en) 2019-06-07 2023-01-11 Artificial reality system having a sliding menu

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/434,919 Reissue US10890983B2 (en) 2019-06-07 2019-06-07 Artificial reality system having a sliding menu

Publications (1)

Publication Number Publication Date
USRE50598E1 true USRE50598E1 (en) 2025-09-23

Family

ID=71842836

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/434,919 Ceased US10890983B2 (en) 2019-06-07 2019-06-07 Artificial reality system having a sliding menu
US18/095,946 Active USRE50598E1 (en) 2019-06-07 2023-01-11 Artificial reality system having a sliding menu

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/434,919 Ceased US10890983B2 (en) 2019-06-07 2019-06-07 Artificial reality system having a sliding menu

Country Status (7)

Country Link
US (2) US10890983B2 (en)
EP (1) EP3980870A1 (en)
JP (1) JP2022535316A (en)
KR (1) KR20220016274A (en)
CN (2) CN113853575B (en)
TW (1) TW202113555A (en)
WO (1) WO2020247550A1 (en)

Families Citing this family (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10890983B2 (en) 2019-06-07 2021-01-12 Facebook Technologies, Llc Artificial reality system having a sliding menu
US11275453B1 (en) 2019-09-30 2022-03-15 Snap Inc. Smart ring for manipulating virtual objects displayed by a wearable device
US11277597B1 (en) 2020-03-31 2022-03-15 Snap Inc. Marker-based guided AR experience
US11455078B1 (en) * 2020-03-31 2022-09-27 Snap Inc. Spatial navigation and creation interface
US11798429B1 (en) 2020-05-04 2023-10-24 Snap Inc. Virtual tutorials for musical instruments with finger tracking in augmented reality
US11520399B2 (en) 2020-05-26 2022-12-06 Snap Inc. Interactive augmented reality experiences using positional tracking
CN116507997A (en) 2020-09-11 2023-07-28 苹果公司 Method for displaying user interface in environment, corresponding electronic device, and computer-readable storage medium
US11925863B2 (en) * 2020-09-18 2024-03-12 Snap Inc. Tracking hand gestures for interactive game control in augmented reality
US12032803B2 (en) * 2020-09-23 2024-07-09 Apple Inc. Devices, methods, and graphical user interfaces for interacting with three-dimensional environments
US11562528B2 (en) 2020-09-25 2023-01-24 Apple Inc. Devices, methods, and graphical user interfaces for interacting with three-dimensional environments
CN117555417B (en) 2020-09-25 2024-07-19 苹果公司 Method for adjusting and/or controlling immersion associated with a user interface
KR20230054733A (en) 2020-09-25 2023-04-25 애플 인크. Methods for interacting with virtual controls and/or affordance for moving virtual objects in virtual environments
JP2023543799A (en) 2020-09-25 2023-10-18 アップル インコーポレイテッド How to navigate the user interface
AU2021347112B2 (en) 2020-09-25 2023-11-23 Apple Inc. Methods for manipulating objects in an environment
US11921931B2 (en) * 2020-12-17 2024-03-05 Huawei Technologies Co., Ltd. Methods and systems for multi-precision discrete control of a user interface control element of a gesture-controlled device
US11782577B2 (en) 2020-12-22 2023-10-10 Snap Inc. Media content player on an eyewear device
US12229342B2 (en) 2020-12-22 2025-02-18 Snap Inc. Gesture control on an eyewear device
US11797162B2 (en) * 2020-12-22 2023-10-24 Snap Inc. 3D painting on an eyewear device
EP4268066A1 (en) 2020-12-22 2023-11-01 Snap Inc. Media content player on an eyewear device
KR20230124732A (en) * 2020-12-29 2023-08-25 스냅 인코포레이티드 Fine hand gestures to control virtual and graphical elements
KR20230124077A (en) 2020-12-30 2023-08-24 스냅 인코포레이티드 Augmented reality precision tracking and display
US11740313B2 (en) 2020-12-30 2023-08-29 Snap Inc. Augmented reality precision tracking and display
CN116670627A (en) 2020-12-31 2023-08-29 苹果公司 Methods for Grouping User Interfaces in Environments
US11954242B2 (en) 2021-01-04 2024-04-09 Apple Inc. Devices, methods, and graphical user interfaces for interacting with three-dimensional environments
US20220229524A1 (en) * 2021-01-20 2022-07-21 Apple Inc. Methods for interacting with objects in an environment
US11995230B2 (en) * 2021-02-11 2024-05-28 Apple Inc. Methods for presenting and sharing content in an environment
US11531402B1 (en) 2021-02-25 2022-12-20 Snap Inc. Bimanual gestures for controlling virtual and graphical elements
EP4320502A1 (en) 2021-04-08 2024-02-14 Snap, Inc. Bimanual interactions between mapped hand regions for controlling virtual and graphical elements
EP4323852A1 (en) 2021-04-13 2024-02-21 Apple Inc. Methods for providing an immersive experience in an environment
US11861070B2 (en) 2021-04-19 2024-01-02 Snap Inc. Hand gestures for animating and controlling virtual and graphical elements
JP7707629B2 (en) * 2021-04-27 2025-07-15 富士フイルムビジネスイノベーション株式会社 Information processing device, information processing program, and information processing system
CN113282169B (en) * 2021-05-08 2023-04-07 青岛小鸟看看科技有限公司 Interaction method and device of head-mounted display equipment and head-mounted display equipment
CN117980962A (en) 2021-09-23 2024-05-03 苹果公司 Apparatus, method and graphical user interface for content application
EP4388397A1 (en) 2021-09-25 2024-06-26 Apple Inc. Devices, methods, and graphical user interfaces for presenting virtual objects in virtual environments
TWI764838B (en) * 2021-09-27 2022-05-11 國立臺中科技大學 A synchronous live teaching device integrating real-time recording and screenshot functions
EP4427117A1 (en) * 2021-11-04 2024-09-11 Microsoft Technology Licensing, LLC Multi-factor intention determination for augmented reality (ar) environment control
US12067159B2 (en) 2021-11-04 2024-08-20 Microsoft Technology Licensing, Llc. Multi-factor intention determination for augmented reality (AR) environment control
WO2023080957A1 (en) * 2021-11-04 2023-05-11 Microsoft Technology Licensing, Llc. Multi-factor intention determination for augmented reality (ar) environment control
US11928264B2 (en) * 2021-12-16 2024-03-12 Lenovo (Singapore) Pte. Ltd. Fixed user interface navigation
US12272005B2 (en) 2022-02-28 2025-04-08 Apple Inc. System and method of three-dimensional immersive applications in multi-user communication sessions
CN116795203A (en) * 2022-03-17 2023-09-22 北京字跳网络技术有限公司 Control method and device based on virtual reality and electronic equipment
WO2023196258A1 (en) 2022-04-04 2023-10-12 Apple Inc. Methods for quick message response and dictation in a three-dimensional environment
US12394167B1 (en) 2022-06-30 2025-08-19 Apple Inc. Window resizing and virtual object rearrangement in 3D environments
US12236512B2 (en) 2022-08-23 2025-02-25 Snap Inc. Avatar call on an eyewear device
US12112011B2 (en) 2022-09-16 2024-10-08 Apple Inc. System and method of application-based three-dimensional refinement in multi-user communication sessions
US12148078B2 (en) 2022-09-16 2024-11-19 Apple Inc. System and method of spatial groups in multi-user communication sessions
US12099653B2 (en) 2022-09-22 2024-09-24 Apple Inc. User interface response based on gaze-holding event assessment
US12405704B1 (en) 2022-09-23 2025-09-02 Apple Inc. Interpreting user movement as direct touch user interface interactions
US12400414B2 (en) 2023-02-08 2025-08-26 Meta Platforms Technologies, Llc Facilitating system user interface (UI) interactions in an artificial reality (XR) environment
US12387449B1 (en) 2023-02-08 2025-08-12 Meta Platforms Technologies, Llc Facilitating system user interface (UI) interactions in an artificial reality (XR) environment
US20240264660A1 (en) * 2023-02-08 2024-08-08 Meta Platforms Technologies, Llc Facilitating User Interface Interactions in an Artificial Reality Environment
US12108012B2 (en) 2023-02-27 2024-10-01 Apple Inc. System and method of managing spatial states and display modes in multi-user communication sessions
US12118200B1 (en) 2023-06-02 2024-10-15 Apple Inc. Fuzzy hit testing
US12113948B1 (en) 2023-06-04 2024-10-08 Apple Inc. Systems and methods of managing spatial groups in multi-user communication sessions
KR20250024322A (en) * 2023-08-11 2025-02-18 삼성전자주식회사 Head mounted display device for displaying interface and operating method for the same
WO2025160361A1 (en) * 2024-01-26 2025-07-31 Google Llc Gesture navigation

Citations (193)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040236541A1 (en) 1997-05-12 2004-11-25 Kramer James F. System and method for constraining a graphical hand from penetrating simulated graphical objects
US6842175B1 (en) 1999-04-22 2005-01-11 Fraunhofer Usa, Inc. Tools for interacting with virtual environments
US20080089587A1 (en) 2006-10-11 2008-04-17 Samsung Electronics Co.; Ltd Hand gesture recognition input system and method for a mobile phone
US20090077504A1 (en) 2007-09-14 2009-03-19 Matthew Bell Processing of Gesture-Based User Interactions
US7701439B2 (en) 2006-07-13 2010-04-20 Northrop Grumman Corporation Gesture recognition simulation system and method
US20100306716A1 (en) 2009-05-29 2010-12-02 Microsoft Corporation Extending standard gestures
US20110239155A1 (en) 2007-01-05 2011-09-29 Greg Christie Gestures for Controlling, Manipulating, and Editing of Media Files Using Touch Sensitive Devices
US20110267265A1 (en) 2010-04-30 2011-11-03 Verizon Patent And Licensing, Inc. Spatial-input-based cursor projection systems and methods
US20120071892A1 (en) * 2010-09-21 2012-03-22 Intuitive Surgical Operations, Inc. Method and system for hand presence detection in a minimally invasive surgical system
US20120069168A1 (en) 2010-09-17 2012-03-22 Sony Corporation Gesture recognition system for tv control
US20120113223A1 (en) 2010-11-05 2012-05-10 Microsoft Corporation User Interaction in Augmented Reality
US20120143358A1 (en) 2009-10-27 2012-06-07 Harmonix Music Systems, Inc. Movement based recognition and evaluation
US20120188279A1 (en) 2009-09-29 2012-07-26 Kent Demaine Multi-Sensor Proximity-Based Immersion System and Method
US20120206345A1 (en) 2011-02-16 2012-08-16 Microsoft Corporation Push actuation of interface controls
US20120218395A1 (en) 2011-02-25 2012-08-30 Microsoft Corporation User interface presentation and interactions
US20120218183A1 (en) 2009-09-21 2012-08-30 Extreme Reality Ltd. Methods circuits device systems and associated computer executable code for facilitating interfacing with a computing platform display screen
US20120249740A1 (en) * 2011-03-30 2012-10-04 Tae-Yon Lee Three-dimensional image sensors, cameras, and imaging systems
US20120249741A1 (en) * 2011-03-29 2012-10-04 Giuliano Maciocci Anchoring virtual images to real world surfaces in augmented reality systems
US20120275686A1 (en) 2011-04-29 2012-11-01 Microsoft Corporation Inferring spatial object descriptions from spatial gestures
US20120293544A1 (en) 2011-05-18 2012-11-22 Kabushiki Kaisha Toshiba Image display apparatus and method of selecting image region using the same
KR20120136719A (en) 2011-06-09 2012-12-20 안지윤 The method of pointing and controlling objects on screen at long range using 3d positions of eyes and hands
US20130063345A1 (en) 2010-07-20 2013-03-14 Shigenori Maeda Gesture input device and gesture input method
US20130125066A1 (en) 2011-11-14 2013-05-16 Microsoft Corporation Adaptive Area Cursor
US20130147793A1 (en) 2011-12-09 2013-06-13 Seongyeom JEON Mobile terminal and controlling method thereof
US20130182902A1 (en) 2012-01-17 2013-07-18 David Holz Systems and methods for capturing motion in three-dimensional space
US20130265220A1 (en) 2012-04-09 2013-10-10 Omek Interactive, Ltd. System and method for combining three-dimensional tracking with a three-dimensional display for a user interface
US8558759B1 (en) 2011-07-08 2013-10-15 Google Inc. Hand gestures to signify what is important
US20140007484A1 (en) 2012-07-03 2014-01-09 Andrei Erdoss Ocular cross dominance impediment corrective apparatus for use with a shoulder-mounted firearm
US20140125598A1 (en) 2012-11-05 2014-05-08 Synaptics Incorporated User interface systems and methods for managing multiple regions
US20140204002A1 (en) 2013-01-21 2014-07-24 Rotem Bennet Virtual interaction with image projection
WO2014119258A1 (en) 2013-01-31 2014-08-07 パナソニック株式会社 Information processing method and information processing device
US20140236996A1 (en) 2011-09-30 2014-08-21 Rakuten, Inc. Search device, search method, recording medium, and program
US8836768B1 (en) 2012-09-04 2014-09-16 Aquifi, Inc. Method and system enabling natural user interface gestures with user wearable glasses
US20140306891A1 (en) 2013-04-12 2014-10-16 Stephen G. Latta Holographic object feedback
US20140357366A1 (en) 2011-09-14 2014-12-04 Bandai Namco Games Inc. Method for implementing game, storage medium, game device, and computer
US20140375691A1 (en) 2011-11-11 2014-12-25 Sony Corporation Information processing apparatus, information processing method, and program
US8947351B1 (en) 2011-09-27 2015-02-03 Amazon Technologies, Inc. Point of view determinations for finger tracking
US20150040040A1 (en) 2013-08-05 2015-02-05 Alexandru Balan Two-hand interaction with natural user interface
US20150035746A1 (en) 2011-12-27 2015-02-05 Andy Cockburn User Interface Device
US20150062160A1 (en) 2013-08-30 2015-03-05 Ares Sakamoto Wearable user device enhanced display system
US20150110285A1 (en) 2013-10-21 2015-04-23 Harman International Industries, Inc. Modifying an audio panorama to indicate the presence of danger or other events of interest
JP2015100032A (en) 2013-11-19 2015-05-28 株式会社Nttドコモ Video display device, video presentation method, and program
US20150153833A1 (en) 2012-07-13 2015-06-04 Softkinetic Software Method and system for human-to-computer gesture based simultaneous interactions using singular points of interest on a hand
US20150160736A1 (en) 2013-12-11 2015-06-11 Sony Corporation Information processing apparatus, information processing method and program
US20150169076A1 (en) 2013-12-16 2015-06-18 Leap Motion, Inc. User-defined virtual interaction space and manipulation of virtual cameras with vectors
US20150181679A1 (en) 2013-12-23 2015-06-25 Sharp Laboratories Of America, Inc. Task light based system and gesture control
US20150178985A1 (en) 2013-12-23 2015-06-25 Harman International Industries, Incorporated Virtual three-dimensional instrument cluster with three-dimensional navigation system
US20150206321A1 (en) 2014-01-23 2015-07-23 Michael J. Scavezze Automated content scrolling
US20150220150A1 (en) 2012-02-14 2015-08-06 Google Inc. Virtual touch user interface system and methods
US9117274B2 (en) 2011-08-01 2015-08-25 Fuji Xerox Co., Ltd. System and method for interactive markerless paper documents in 3D space with mobile cameras and projectors
US20150243100A1 (en) * 2013-07-12 2015-08-27 Magic Leap, Inc. Method and system for determining user input based on totem
US20150261659A1 (en) 2014-03-12 2015-09-17 Bjoern BADER Usability testing of applications by assessing gesture inputs
US20150269783A1 (en) 2014-03-21 2015-09-24 Samsung Electronics Co., Ltd. Method and wearable device for providing a virtual input interface
JP2015176439A (en) 2014-03-17 2015-10-05 オムロン株式会社 Multimedia device, control method of multimedia device, and control program of multimedia device
JP2015192436A (en) 2014-03-28 2015-11-02 キヤノン株式会社 Transmission terminal, reception terminal, transmission/reception system and program therefor
WO2015192117A1 (en) 2014-06-14 2015-12-17 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US9292089B1 (en) 2011-08-24 2016-03-22 Amazon Technologies, Inc. Gestural object selection
US20160110052A1 (en) 2014-10-20 2016-04-21 Samsung Electronics Co., Ltd. Apparatus and method of drawing and solving figure content
US20160147308A1 (en) 2013-07-10 2016-05-26 Real View Imaging Ltd. Three dimensional user interface
US20160170603A1 (en) 2014-12-10 2016-06-16 Microsoft Technology Licensing, Llc Natural user interface camera calibration
US20160217614A1 (en) * 2015-01-28 2016-07-28 CCP hf. Method and System for Receiving Gesture Input Via Virtual Control Objects
US9406277B1 (en) 2013-05-29 2016-08-02 Amazon Technologies, Inc. Control of spectral range intensity in media devices
US9477368B1 (en) 2009-03-31 2016-10-25 Google Inc. System and method of indicating the distance or the surface of an image of a geographical object
US20160378291A1 (en) 2015-06-26 2016-12-29 Haworth, Inc. Object group processing and selection gestures for grouping objects in a collaboration system
WO2017009707A1 (en) 2015-07-13 2017-01-19 Quan Xiao Apparatus and method for hybrid type of input of buttons/keys and "finger writing" and low profile/variable geometry hand-based controller
US20170050542A1 (en) 2014-04-25 2017-02-23 Mitsubishi Electric Corporation Automatic adjuster, automatic adjusting system and automatic adjusting method
US20170060230A1 (en) 2015-08-26 2017-03-02 Google Inc. Dynamic switching and merging of head, gesture and touch input in virtual reality
US20170109936A1 (en) 2015-10-20 2017-04-20 Magic Leap, Inc. Selecting virtual objects in a three-dimensional space
US20170139478A1 (en) 2014-08-01 2017-05-18 Starship Vending-Machine Corp. Method and apparatus for providing interface recognizing movement in accordance with user's view
US20170139481A1 (en) 2015-11-12 2017-05-18 Oculus Vr, Llc Method and apparatus for detecting hand gestures with a handheld controller
CN106716332A (en) 2014-09-24 2017-05-24 微软技术许可有限责任公司 Gesture navigation for secondary user interface
US20170192513A1 (en) 2015-12-31 2017-07-06 Microsoft Technology Licensing, Llc Electrical device for hand gestures detection
US20170206691A1 (en) 2014-03-14 2017-07-20 Magic Leap, Inc. Augmented reality systems and methods utilizing reflections
US20170228130A1 (en) * 2016-02-09 2017-08-10 Unity IPR ApS Systems and methods for a virtual reality editor
US20170262063A1 (en) 2014-11-27 2017-09-14 Erghis Technologies Ab Method and System for Gesture Based Control Device
US20170263033A1 (en) 2016-03-10 2017-09-14 FlyInside, Inc. Contextual Virtual Reality Interaction
US20170278304A1 (en) 2016-03-24 2017-09-28 Qualcomm Incorporated Spatial relationships for integration of visual images of physical environment into virtual reality
US20170287225A1 (en) 2016-03-31 2017-10-05 Magic Leap, Inc. Interactions with 3d virtual objects using poses and multiple-dof controllers
US20170296363A1 (en) 2016-04-15 2017-10-19 Board Of Regents, The University Of Texas System Systems, apparatuses and methods for controlling prosthetic devices by gestures and other modalities
US20170308166A1 (en) 2014-12-31 2017-10-26 Sony Interactive Entertainment Inc. Signal generation and detector systems and methods for determining positions of fingers of a user
US9817472B2 (en) 2012-11-05 2017-11-14 Samsung Electronics Co., Ltd. Display apparatus and control method thereof
US20170329515A1 (en) 2016-05-10 2017-11-16 Google Inc. Volumetric virtual reality keyboard methods, user interface, and interactions
US20170337742A1 (en) 2016-05-20 2017-11-23 Magic Leap, Inc. Contextual awareness of user interface menus
US20170364198A1 (en) 2016-06-21 2017-12-21 Samsung Electronics Co., Ltd. Remote hover touch system and method
US20180004283A1 (en) 2016-06-29 2018-01-04 Cheyne Rory Quin Mathey-Owens Selection of objects in three-dimensional space
US20180033204A1 (en) 2016-07-26 2018-02-01 Rouslan Lyubomirov DIMITROV System and method for displaying computer-based content in a virtual or augmented environment
US20180039341A1 (en) 2016-08-03 2018-02-08 Google Inc. Methods and systems for determining positional data for three-dimensional interactions inside virtual reality environments
US20180046245A1 (en) 2016-08-11 2018-02-15 Microsoft Technology Licensing, Llc Mediation of interaction methodologies in immersive environments
US20180059901A1 (en) 2016-08-23 2018-03-01 Gullicksen Brothers, LLC Controlling objects using virtual rays
WO2018067508A1 (en) 2016-10-04 2018-04-12 Facebook, Inc. Controls and interfaces for user interactions in virtual spaces
US20180107278A1 (en) 2016-10-14 2018-04-19 Intel Corporation Gesture-controlled virtual reality systems and methods of controlling the same
US20180113599A1 (en) 2016-10-26 2018-04-26 Alibaba Group Holding Limited Performing virtual reality input
US20180157398A1 (en) 2016-12-05 2018-06-07 Magic Leap, Inc. Virtual user input controls in a mixed reality environment
US20180188816A1 (en) 2017-01-04 2018-07-05 Htc Corporation Controller for finger gesture recognition and method for recognizing finger gesture
US10042430B2 (en) 2013-01-15 2018-08-07 Leap Motion, Inc. Free-space user interface and control using virtual constructs
CN108536273A (en) 2017-03-01 2018-09-14 天津锋时互动科技有限公司深圳分公司 Man-machine menu mutual method and system based on gesture
US20180303446A1 (en) 2017-04-21 2018-10-25 Hans Schweizer Medical imaging device and method for supporting a person using a medical imaging device
US20180307303A1 (en) 2017-04-19 2018-10-25 Magic Leap, Inc. Multimodal task execution and text editing for a wearable system
US20180322701A1 (en) 2017-05-04 2018-11-08 Microsoft Technology Licensing, Llc Syndication of direct and indirect interactions in a computer-mediated reality environment
US20180323992A1 (en) 2015-08-21 2018-11-08 Samsung Electronics Company, Ltd. User-Configurable Interactive Region Monitoring
US20180329492A1 (en) 2017-05-09 2018-11-15 Microsoft Technology Licensing, Llc Parallax correction for touch-screen display
US20180335925A1 (en) 2014-12-19 2018-11-22 Hewlett-Packard Development Company, L.P. 3d visualization
US20180357780A1 (en) 2017-06-09 2018-12-13 Sony Interactive Entertainment Inc. Optimized shadows in a foveated rendering system
WO2018235371A1 (en) 2017-06-20 2018-12-27 ソニー株式会社 INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
US20190018498A1 (en) * 2017-07-12 2019-01-17 Unity IPR ApS Methods and systems for displaying ui elements in mixed reality environments
US10187936B1 (en) 2013-01-07 2019-01-22 Amazon Technologies, Inc. Non-linear lighting system brightness control for a user device
US20190050071A1 (en) 2017-08-14 2019-02-14 Industrial Technology Research Institute Transparent display device and control method using the same
US20190057531A1 (en) 2017-08-16 2019-02-21 Microsoft Technology Licensing, Llc Repositioning user perspectives in virtual reality environments
US10220303B1 (en) 2013-03-15 2019-03-05 Harmonix Music Systems, Inc. Gesture-based music game
US20190094981A1 (en) 2014-06-14 2019-03-28 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US10248284B2 (en) 2015-11-16 2019-04-02 Atheer, Inc. Method and apparatus for interface control with prompt and feedback
US20190107894A1 (en) 2017-10-07 2019-04-11 Tata Consultancy Services Limited System and method for deep learning based hand gesture recognition in first person view
US20190120593A1 (en) 2017-10-19 2019-04-25 SMWT Ltd. Visual Aid
US20190130653A1 (en) * 2016-06-02 2019-05-02 Audi Ag Method for operating a display system and display system
US20190129607A1 (en) 2017-11-02 2019-05-02 Samsung Electronics Co., Ltd. Method and device for performing remote control
US20190138107A1 (en) 2016-10-11 2019-05-09 Valve Corporation Virtual reality hand gesture generation
US20190146599A1 (en) 2017-11-13 2019-05-16 Arkio Ehf. Virtual/augmented reality modeling application for architecture
US10318100B2 (en) 2013-10-16 2019-06-11 Atheer, Inc. Method and apparatus for addressing obstruction in an interface
US20190213792A1 (en) 2018-01-11 2019-07-11 Microsoft Technology Licensing, Llc Providing Body-Anchored Mixed-Reality Experiences
US20190258318A1 (en) 2016-06-28 2019-08-22 Huawei Technologies Co., Ltd. Terminal for controlling electronic device and processing method thereof
US20190265828A1 (en) 2016-09-23 2019-08-29 Apple Inc. Devices, Methods, and User Interfaces for Interacting with a Position Indicator within Displayed Text via Proximity-based Inputs
US20190279424A1 (en) 2018-03-07 2019-09-12 California Institute Of Technology Collaborative augmented reality system
US20190278376A1 (en) 2011-06-23 2019-09-12 Intel Corporation System and method for close-range movement tracking
US20190286231A1 (en) 2014-07-25 2019-09-19 Microsoft Technology Licensing, Llc Gaze-based object placement within a virtual reality environment
US20190290999A1 (en) 2016-10-11 2019-09-26 Valve Corporation Holding and Releasing Virtual Objects
US20190318640A1 (en) 2017-08-21 2019-10-17 Precisionos Technology Inc. Medical virtual reality, mixed reality or agumented reality surgical system
US20190325651A1 (en) 2016-10-11 2019-10-24 Valve Corporation Holding and Releasing Virtual Objects
US10473935B1 (en) 2016-08-10 2019-11-12 Meta View, Inc. Systems and methods to provide views of virtual content in an interactive space
US20190347865A1 (en) 2014-09-18 2019-11-14 Google Inc. Three-dimensional drawing inside virtual reality environment
US20190362562A1 (en) 2018-05-25 2019-11-28 Leap Motion, Inc. Throwable Interface for Augmented Reality and Virtual Reality Environments
US20190361521A1 (en) 2018-05-22 2019-11-28 Microsoft Technology Licensing, Llc Accelerated gaze-supported manual cursor control
US20190369391A1 (en) 2018-05-31 2019-12-05 Renault Innovation Silicon Valley Three dimensional augmented reality involving a vehicle
US20190377416A1 (en) 2018-06-07 2019-12-12 Facebook, Inc. Picture-Taking Within Virtual Reality
US20190391710A1 (en) 2017-09-25 2019-12-26 Tencent Technology (Shenzhen) Company Limited Information interaction method and apparatus, storage medium, and electronic apparatus
WO2019245681A1 (en) 2018-06-20 2019-12-26 Valve Corporation Virtual reality hand gesture generation
US20200001172A1 (en) 2018-06-27 2020-01-02 Facebook Technologies, Llc Capacitive sensing assembly for detecting proximity of user to a controller device
US20200012341A1 (en) 2018-07-09 2020-01-09 Microsoft Technology Licensing, Llc Systems and methods for using eye gaze to bend and snap targeting rays for remote interaction
US10536691B2 (en) 2016-10-04 2020-01-14 Facebook, Inc. Controls and interfaces for user interactions in virtual spaces
US20200082629A1 (en) 2018-09-06 2020-03-12 Curious Company, LLC Controlling presentation of hidden information
US10595011B2 (en) 2014-08-28 2020-03-17 Samsung Electronics Co., Ltd Method and apparatus for configuring screen for virtual reality
US20200097091A1 (en) 2018-09-25 2020-03-26 XRSpace CO., LTD. Method and Apparatus of Interactive Display Based on Gesture Recognition
US20200097077A1 (en) 2018-09-26 2020-03-26 Rockwell Automation Technologies, Inc. Augmented reality interaction techniques
US10607413B1 (en) * 2015-09-08 2020-03-31 Ultrahaptics IP Two Limited Systems and methods of rerendering image hands to create a realistic grab experience in virtual reality/augmented reality environments
US20200129850A1 (en) 2017-04-28 2020-04-30 Sony Interactive Entertainment Inc. Information processing device, control method of information processing device, and program
US20200159337A1 (en) 2018-11-19 2020-05-21 Kenrick Cheng-kuo Kin Systems and methods for transitioning between modes of tracking real-world objects for artificial reality interfaces
US10691233B2 (en) 2016-10-11 2020-06-23 Valve Corporation Sensor fusion algorithms for a handheld controller that includes a force sensing resistor (FSR)
US20200225813A1 (en) * 2019-01-11 2020-07-16 Microsoft Technology Licensing, Llc Context-aware system menu behavior for mixed reality
US20200225758A1 (en) 2019-01-11 2020-07-16 Microsoft Technology Licensing, Llc Augmented two-stage hand gesture input
US20200225736A1 (en) 2019-01-12 2020-07-16 Microsoft Technology Licensing, Llc Discrete and continuous gestures for enabling hand rays
US20200225757A1 (en) 2019-01-11 2020-07-16 Microsoft Technology Licensing, Llc Hand motion and orientation-aware buttons and grabbable objects in mixed reality
US20200226814A1 (en) 2019-01-11 2020-07-16 Microsoft Technology Licensing, Llc Holographic palm raycasting for targeting virtual objects
US20200225830A1 (en) 2019-01-11 2020-07-16 Microsoft Technology Licensing, Llc Near interaction mode for far virtual object
US20200286299A1 (en) 2019-03-06 2020-09-10 Microsoft Technology Licensing, Llc Snapping virtual object to target surface
US20200379576A1 (en) 2017-08-10 2020-12-03 Google Llc Context-sensitive hand interaction
US20200387287A1 (en) 2019-06-07 2020-12-10 Facebook Technologies, Llc Detecting input in artificial reality systems based on a pinch and pull gesture
US20200387228A1 (en) 2019-06-07 2020-12-10 Facebook Technologies, Llc Artificial reality system having a sliding menu
US20200388247A1 (en) 2019-06-07 2020-12-10 Facebook Technologies, Llc Corner-identifiying gesture-driven user interface element gating for artificial reality systems
US20210076091A1 (en) 2017-08-29 2021-03-11 Makoto Shohara Image capturing apparatus, image display system, and operation method
US10957059B1 (en) 2016-09-08 2021-03-23 Facebook Technologies, Llc Multi-pattern depth camera assembly
US10956724B1 (en) 2019-09-10 2021-03-23 Facebook Technologies, Llc Utilizing a hybrid model to recognize fast and precise hand inputs in a virtual environment
US20210096726A1 (en) 2019-09-27 2021-04-01 Apple Inc. Devices, Methods, and Graphical User Interfaces for Interacting with Three-Dimensional Environments
US20210134065A1 (en) 2019-10-30 2021-05-06 Purdue Research Foundation System and method for generating asynchronous augmented reality instructions
US20210141461A1 (en) 2017-01-04 2021-05-13 Htc Corporation Controller for finger gesture recognition and method for recognizing finger gesture
US20210183135A1 (en) 2019-12-12 2021-06-17 Facebook Technologies, Llc Feed-forward collision avoidance for artificial reality environments
US20210208698A1 (en) 2018-05-31 2021-07-08 Purple Tambourine Limited Interacting with a virtual environment using a pointing controller
US11079753B1 (en) 2018-01-07 2021-08-03 Matthew Roy Self-driving vehicle with remote user supervision and temporary override
US20210373672A1 (en) 2020-05-29 2021-12-02 Microsoft Technology Licensing, Llc Hand gesture-based emojis
US11221730B2 (en) 2017-07-11 2022-01-11 Logitech Europe S.A. Input device for VR/AR applications
US20220084279A1 (en) 2020-09-11 2022-03-17 Apple Inc. Methods for manipulating objects in an environment
US20220121344A1 (en) 2020-09-25 2022-04-21 Apple Inc. Methods for interacting with virtual controls and/or an affordance for moving virtual objects in virtual environments
CN110134234B (en) 2019-04-24 2022-05-10 山东文旅云智能科技有限公司 A method and device for positioning a three-dimensional object
US20220156999A1 (en) 2020-11-18 2022-05-19 Snap Inc. Personalized avatar real-time motion capture
US20220163800A1 (en) 2020-11-25 2022-05-26 Sony Interactive Entertainment Inc. Image based finger tracking plus controller tracking
US20220198755A1 (en) 2020-12-22 2022-06-23 Facebook Technologies, Llc Virtual reality locomotion via hand gesture
WO2022146938A1 (en) 2020-12-31 2022-07-07 Sterling Labs Llc Method of manipulating user interfaces in an environment
US20220262080A1 (en) 2021-02-16 2022-08-18 Apple Inc. Interfaces for presenting avatars in three-dimensional environments
US11514650B2 (en) 2019-12-03 2022-11-29 Samsung Electronics Co., Ltd. Electronic apparatus and method for controlling thereof
US20230031913A1 (en) 2020-01-17 2023-02-02 Sony Group Corporation Information processing device, information processing method, computer program, and augmented reality system
US20230040610A1 (en) 2021-08-06 2023-02-09 Apple Inc. Object placement for electronic devices
EP4145397A1 (en) 2020-04-30 2023-03-08 Virtualwindow Co., Ltd. Communication terminal device, communication method, and software program
US20230072423A1 (en) 2018-01-25 2023-03-09 Meta Platforms Technologies, Llc Wearable electronic devices and extended reality systems including neuromuscular sensors
US20230139337A1 (en) 2021-07-28 2023-05-04 Multinarity Ltd Controlling duty cycle in wearable extended reality appliances
CN116339737A (en) 2023-05-26 2023-06-27 阿里巴巴(中国)有限公司 XR application editing method, device and storage medium
US20230274512A1 (en) 2021-02-08 2023-08-31 Multinarity Ltd. Coordinating cursor movement between a physical surface and a virtual surface
US20230343052A1 (en) 2020-08-31 2023-10-26 Sony Group Corporation Information processing apparatus, information processing method, and program
US20240019938A1 (en) 2022-06-17 2024-01-18 Meta Platforms Technologies, Llc Systems for detecting gestures performed within activation-threshold distances of artificial-reality objects to cause operations at physical electronic devices, and methods of use thereof
US20240028129A1 (en) 2022-06-17 2024-01-25 Meta Platforms Technologies, Llc Systems for detecting in-air and surface gestures available for use in an artificial-reality environment using sensors at a wrist-wearable device, and methods of use thereof
US11991222B1 (en) 2023-05-02 2024-05-21 Meta Platforms Technologies, Llc Persistent call control user interface element in an artificial reality environment
US20240264660A1 (en) 2023-02-08 2024-08-08 Meta Platforms Technologies, Llc Facilitating User Interface Interactions in an Artificial Reality Environment
US20240265656A1 (en) 2023-02-08 2024-08-08 Meta Platforms Technologies, Llc Facilitating System User Interface (UI) Interactions in an Artificial Reality (XR) Environment
US20240281071A1 (en) 2023-02-16 2024-08-22 Meta Platforms Technologies, Llc Simultaneous Controller and Touch Interactions
US20250054243A1 (en) 2023-08-11 2025-02-13 Meta Platforms Technologies, Llc Two-Dimensional User Interface Content Overlay for an Artificial Reality Environment
US20250068297A1 (en) 2023-08-23 2025-02-27 Meta Platforms Technologies, Llc Gesture-Engaged Virtual Menu for Controlling Actions on an Artificial Reality Device

Patent Citations (205)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040236541A1 (en) 1997-05-12 2004-11-25 Kramer James F. System and method for constraining a graphical hand from penetrating simulated graphical objects
US6842175B1 (en) 1999-04-22 2005-01-11 Fraunhofer Usa, Inc. Tools for interacting with virtual environments
US7701439B2 (en) 2006-07-13 2010-04-20 Northrop Grumman Corporation Gesture recognition simulation system and method
US20080089587A1 (en) 2006-10-11 2008-04-17 Samsung Electronics Co.; Ltd Hand gesture recognition input system and method for a mobile phone
US20110239155A1 (en) 2007-01-05 2011-09-29 Greg Christie Gestures for Controlling, Manipulating, and Editing of Media Files Using Touch Sensitive Devices
US20090077504A1 (en) 2007-09-14 2009-03-19 Matthew Bell Processing of Gesture-Based User Interactions
US9477368B1 (en) 2009-03-31 2016-10-25 Google Inc. System and method of indicating the distance or the surface of an image of a geographical object
US20100306716A1 (en) 2009-05-29 2010-12-02 Microsoft Corporation Extending standard gestures
US20120218183A1 (en) 2009-09-21 2012-08-30 Extreme Reality Ltd. Methods circuits device systems and associated computer executable code for facilitating interfacing with a computing platform display screen
US20120188279A1 (en) 2009-09-29 2012-07-26 Kent Demaine Multi-Sensor Proximity-Based Immersion System and Method
US20120143358A1 (en) 2009-10-27 2012-06-07 Harmonix Music Systems, Inc. Movement based recognition and evaluation
US20110267265A1 (en) 2010-04-30 2011-11-03 Verizon Patent And Licensing, Inc. Spatial-input-based cursor projection systems and methods
US20130063345A1 (en) 2010-07-20 2013-03-14 Shigenori Maeda Gesture input device and gesture input method
US20120069168A1 (en) 2010-09-17 2012-03-22 Sony Corporation Gesture recognition system for tv control
US20120071892A1 (en) * 2010-09-21 2012-03-22 Intuitive Surgical Operations, Inc. Method and system for hand presence detection in a minimally invasive surgical system
US20120113223A1 (en) 2010-11-05 2012-05-10 Microsoft Corporation User Interaction in Augmented Reality
US20120206345A1 (en) 2011-02-16 2012-08-16 Microsoft Corporation Push actuation of interface controls
US20120218395A1 (en) 2011-02-25 2012-08-30 Microsoft Corporation User interface presentation and interactions
US20120249741A1 (en) * 2011-03-29 2012-10-04 Giuliano Maciocci Anchoring virtual images to real world surfaces in augmented reality systems
US20120249740A1 (en) * 2011-03-30 2012-10-04 Tae-Yon Lee Three-dimensional image sensors, cameras, and imaging systems
US20120275686A1 (en) 2011-04-29 2012-11-01 Microsoft Corporation Inferring spatial object descriptions from spatial gestures
US20120293544A1 (en) 2011-05-18 2012-11-22 Kabushiki Kaisha Toshiba Image display apparatus and method of selecting image region using the same
KR20120136719A (en) 2011-06-09 2012-12-20 안지윤 The method of pointing and controlling objects on screen at long range using 3d positions of eyes and hands
US20190278376A1 (en) 2011-06-23 2019-09-12 Intel Corporation System and method for close-range movement tracking
US8558759B1 (en) 2011-07-08 2013-10-15 Google Inc. Hand gestures to signify what is important
US9117274B2 (en) 2011-08-01 2015-08-25 Fuji Xerox Co., Ltd. System and method for interactive markerless paper documents in 3D space with mobile cameras and projectors
US9292089B1 (en) 2011-08-24 2016-03-22 Amazon Technologies, Inc. Gestural object selection
US20140357366A1 (en) 2011-09-14 2014-12-04 Bandai Namco Games Inc. Method for implementing game, storage medium, game device, and computer
US8947351B1 (en) 2011-09-27 2015-02-03 Amazon Technologies, Inc. Point of view determinations for finger tracking
US20140236996A1 (en) 2011-09-30 2014-08-21 Rakuten, Inc. Search device, search method, recording medium, and program
US20140375691A1 (en) 2011-11-11 2014-12-25 Sony Corporation Information processing apparatus, information processing method, and program
US20130125066A1 (en) 2011-11-14 2013-05-16 Microsoft Corporation Adaptive Area Cursor
US20130147793A1 (en) 2011-12-09 2013-06-13 Seongyeom JEON Mobile terminal and controlling method thereof
US20150035746A1 (en) 2011-12-27 2015-02-05 Andy Cockburn User Interface Device
US20130182902A1 (en) 2012-01-17 2013-07-18 David Holz Systems and methods for capturing motion in three-dimensional space
US20150220150A1 (en) 2012-02-14 2015-08-06 Google Inc. Virtual touch user interface system and methods
US20130265220A1 (en) 2012-04-09 2013-10-10 Omek Interactive, Ltd. System and method for combining three-dimensional tracking with a three-dimensional display for a user interface
US9477303B2 (en) * 2012-04-09 2016-10-25 Intel Corporation System and method for combining three-dimensional tracking with a three-dimensional display for a user interface
US20140007484A1 (en) 2012-07-03 2014-01-09 Andrei Erdoss Ocular cross dominance impediment corrective apparatus for use with a shoulder-mounted firearm
US20150153833A1 (en) 2012-07-13 2015-06-04 Softkinetic Software Method and system for human-to-computer gesture based simultaneous interactions using singular points of interest on a hand
US8836768B1 (en) 2012-09-04 2014-09-16 Aquifi, Inc. Method and system enabling natural user interface gestures with user wearable glasses
US20140125598A1 (en) 2012-11-05 2014-05-08 Synaptics Incorporated User interface systems and methods for managing multiple regions
US9817472B2 (en) 2012-11-05 2017-11-14 Samsung Electronics Co., Ltd. Display apparatus and control method thereof
US10187936B1 (en) 2013-01-07 2019-01-22 Amazon Technologies, Inc. Non-linear lighting system brightness control for a user device
US10042430B2 (en) 2013-01-15 2018-08-07 Leap Motion, Inc. Free-space user interface and control using virtual constructs
US20140204002A1 (en) 2013-01-21 2014-07-24 Rotem Bennet Virtual interaction with image projection
US20150054742A1 (en) 2013-01-31 2015-02-26 Panasonic Intellectual Property Corporation of Ame Information processing method and information processing apparatus
WO2014119258A1 (en) 2013-01-31 2014-08-07 パナソニック株式会社 Information processing method and information processing device
US10220303B1 (en) 2013-03-15 2019-03-05 Harmonix Music Systems, Inc. Gesture-based music game
US20140306891A1 (en) 2013-04-12 2014-10-16 Stephen G. Latta Holographic object feedback
US9406277B1 (en) 2013-05-29 2016-08-02 Amazon Technologies, Inc. Control of spectral range intensity in media devices
US20160147308A1 (en) 2013-07-10 2016-05-26 Real View Imaging Ltd. Three dimensional user interface
US20150243100A1 (en) * 2013-07-12 2015-08-27 Magic Leap, Inc. Method and system for determining user input based on totem
US20150040040A1 (en) 2013-08-05 2015-02-05 Alexandru Balan Two-hand interaction with natural user interface
US20150062160A1 (en) 2013-08-30 2015-03-05 Ares Sakamoto Wearable user device enhanced display system
US10318100B2 (en) 2013-10-16 2019-06-11 Atheer, Inc. Method and apparatus for addressing obstruction in an interface
US20150110285A1 (en) 2013-10-21 2015-04-23 Harman International Industries, Inc. Modifying an audio panorama to indicate the presence of danger or other events of interest
JP2015100032A (en) 2013-11-19 2015-05-28 株式会社Nttドコモ Video display device, video presentation method, and program
US20150160736A1 (en) 2013-12-11 2015-06-11 Sony Corporation Information processing apparatus, information processing method and program
US20150169076A1 (en) 2013-12-16 2015-06-18 Leap Motion, Inc. User-defined virtual interaction space and manipulation of virtual cameras with vectors
US20150178985A1 (en) 2013-12-23 2015-06-25 Harman International Industries, Incorporated Virtual three-dimensional instrument cluster with three-dimensional navigation system
US20150181679A1 (en) 2013-12-23 2015-06-25 Sharp Laboratories Of America, Inc. Task light based system and gesture control
US20150206321A1 (en) 2014-01-23 2015-07-23 Michael J. Scavezze Automated content scrolling
US20150261659A1 (en) 2014-03-12 2015-09-17 Bjoern BADER Usability testing of applications by assessing gesture inputs
US20170206691A1 (en) 2014-03-14 2017-07-20 Magic Leap, Inc. Augmented reality systems and methods utilizing reflections
JP2015176439A (en) 2014-03-17 2015-10-05 Omron Corporation Multimedia device, control method of multimedia device, and control program of multimedia device
US20150269783A1 (en) 2014-03-21 2015-09-24 Samsung Electronics Co., Ltd. Method and wearable device for providing a virtual input interface
JP2015192436A (en) 2014-03-28 2015-11-02 Canon Inc. Transmission terminal, reception terminal, transmission/reception system and program therefor
US20170050542A1 (en) 2014-04-25 2017-02-23 Mitsubishi Electric Corporation Automatic adjuster, automatic adjusting system and automatic adjusting method
US20190094981A1 (en) 2014-06-14 2019-03-28 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
EP3155560A1 (en) 2014-06-14 2017-04-19 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
EP3155560B1 (en) 2014-06-14 2020-05-20 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
WO2015192117A1 (en) 2014-06-14 2015-12-17 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
JP2017529635A (en) 2014-06-14 2017-10-05 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US20190286231A1 (en) 2014-07-25 2019-09-19 Microsoft Technology Licensing, Llc Gaze-based object placement within a virtual reality environment
US20170139478A1 (en) 2014-08-01 2017-05-18 Starship Vending-Machine Corp. Method and apparatus for providing interface recognizing movement in accordance with user's view
US10595011B2 (en) 2014-08-28 2020-03-17 Samsung Electronics Co., Ltd Method and apparatus for configuring screen for virtual reality
US20190347865A1 (en) 2014-09-18 2019-11-14 Google Inc. Three-dimensional drawing inside virtual reality environment
CN106716332A (en) 2014-09-24 2017-05-24 微软技术许可有限责任公司 Gesture navigation for secondary user interface
US20160110052A1 (en) 2014-10-20 2016-04-21 Samsung Electronics Co., Ltd. Apparatus and method of drawing and solving figure content
US20170262063A1 (en) 2014-11-27 2017-09-14 Erghis Technologies Ab Method and System for Gesture Based Control Device
US20160170603A1 (en) 2014-12-10 2016-06-16 Microsoft Technology Licensing, Llc Natural user interface camera calibration
US20180335925A1 (en) 2014-12-19 2018-11-22 Hewlett-Packard Development Company, L.P. 3d visualization
US20170308166A1 (en) 2014-12-31 2017-10-26 Sony Interactive Entertainment Inc. Signal generation and detector systems and methods for determining positions of fingers of a user
US20160217614A1 (en) * 2015-01-28 2016-07-28 CCP hf. Method and System for Receiving Gesture Input Via Virtual Control Objects
US20160378291A1 (en) 2015-06-26 2016-12-29 Haworth, Inc. Object group processing and selection gestures for grouping objects in a collaboration system
WO2017009707A1 (en) 2015-07-13 2017-01-19 Quan Xiao Apparatus and method for hybrid type of input of buttons/keys and "finger writing" and low profile/variable geometry hand-based controller
US20180323992A1 (en) 2015-08-21 2018-11-08 Samsung Electronics Company, Ltd. User-Configurable Interactive Region Monitoring
US20170060230A1 (en) 2015-08-26 2017-03-02 Google Inc. Dynamic switching and merging of head, gesture and touch input in virtual reality
US10607413B1 (en) * 2015-09-08 2020-03-31 Ultrahaptics IP Two Limited Systems and methods of rerendering image hands to create a realistic grab experience in virtual reality/augmented reality environments
US20170109936A1 (en) 2015-10-20 2017-04-20 Magic Leap, Inc. Selecting virtual objects in a three-dimensional space
US20170139481A1 (en) 2015-11-12 2017-05-18 Oculus Vr, Llc Method and apparatus for detecting hand gestures with a handheld controller
US10248284B2 (en) 2015-11-16 2019-04-02 Atheer, Inc. Method and apparatus for interface control with prompt and feedback
US20170192513A1 (en) 2015-12-31 2017-07-06 Microsoft Technology Licensing, Llc Electrical device for hand gestures detection
US20170228130A1 (en) * 2016-02-09 2017-08-10 Unity IPR ApS Systems and methods for a virtual reality editor
US20170263033A1 (en) 2016-03-10 2017-09-14 FlyInside, Inc. Contextual Virtual Reality Interaction
US20170278304A1 (en) 2016-03-24 2017-09-28 Qualcomm Incorporated Spatial relationships for integration of visual images of physical environment into virtual reality
US20170287225A1 (en) 2016-03-31 2017-10-05 Magic Leap, Inc. Interactions with 3d virtual objects using poses and multiple-dof controllers
US20170296363A1 (en) 2016-04-15 2017-10-19 Board Of Regents, The University Of Texas System Systems, apparatuses and methods for controlling prosthetic devices by gestures and other modalities
US20170329515A1 (en) 2016-05-10 2017-11-16 Google Inc. Volumetric virtual reality keyboard methods, user interface, and interactions
US20170337742A1 (en) 2016-05-20 2017-11-23 Magic Leap, Inc. Contextual awareness of user interface menus
US20190130653A1 (en) * 2016-06-02 2019-05-02 Audi Ag Method for operating a display system and display system
US20170364198A1 (en) 2016-06-21 2017-12-21 Samsung Electronics Co., Ltd. Remote hover touch system and method
US20190258318A1 (en) 2016-06-28 2019-08-22 Huawei Technologies Co., Ltd. Terminal for controlling electronic device and processing method thereof
US20180004283A1 (en) 2016-06-29 2018-01-04 Cheyne Rory Quin Mathey-Owens Selection of objects in three-dimensional space
US20180033204A1 (en) 2016-07-26 2018-02-01 Rouslan Lyubomirov DIMITROV System and method for displaying computer-based content in a virtual or augmented environment
US20180039341A1 (en) 2016-08-03 2018-02-08 Google Inc. Methods and systems for determining positional data for three-dimensional interactions inside virtual reality environments
US10473935B1 (en) 2016-08-10 2019-11-12 Meta View, Inc. Systems and methods to provide views of virtual content in an interactive space
US20180046245A1 (en) 2016-08-11 2018-02-15 Microsoft Technology Licensing, Llc Mediation of interaction methodologies in immersive environments
US20180059901A1 (en) 2016-08-23 2018-03-01 Gullicksen Brothers, LLC Controlling objects using virtual rays
US10957059B1 (en) 2016-09-08 2021-03-23 Facebook Technologies, Llc Multi-pattern depth camera assembly
US20190265828A1 (en) 2016-09-23 2019-08-29 Apple Inc. Devices, Methods, and User Interfaces for Interacting with a Position Indicator within Displayed Text via Proximity-based Inputs
US10536691B2 (en) 2016-10-04 2020-01-14 Facebook, Inc. Controls and interfaces for user interactions in virtual spaces
WO2018067508A1 (en) 2016-10-04 2018-04-12 Facebook, Inc. Controls and interfaces for user interactions in virtual spaces
US20190325651A1 (en) 2016-10-11 2019-10-24 Valve Corporation Holding and Releasing Virtual Objects
US20190138107A1 (en) 2016-10-11 2019-05-09 Valve Corporation Virtual reality hand gesture generation
US20190290999A1 (en) 2016-10-11 2019-09-26 Valve Corporation Holding and Releasing Virtual Objects
US10691233B2 (en) 2016-10-11 2020-06-23 Valve Corporation Sensor fusion algorithms for a handheld controller that includes a force sensing resistor (FSR)
US20180107278A1 (en) 2016-10-14 2018-04-19 Intel Corporation Gesture-controlled virtual reality systems and methods of controlling the same
US20180113599A1 (en) 2016-10-26 2018-04-26 Alibaba Group Holding Limited Performing virtual reality input
US20180157398A1 (en) 2016-12-05 2018-06-07 Magic Leap, Inc. Virtual user input controls in a mixed reality environment
US20210141461A1 (en) 2017-01-04 2021-05-13 Htc Corporation Controller for finger gesture recognition and method for recognizing finger gesture
US20180188816A1 (en) 2017-01-04 2018-07-05 Htc Corporation Controller for finger gesture recognition and method for recognizing finger gesture
US11307671B2 (en) 2017-01-04 2022-04-19 Htc Corporation Controller for finger gesture recognition and method for recognizing finger gesture
CN108536273A (en) 2017-03-01 2018-09-14 天津锋时互动科技有限公司深圳分公司 Human-machine menu interaction method and system based on gestures
US20180307303A1 (en) 2017-04-19 2018-10-25 Magic Leap, Inc. Multimodal task execution and text editing for a wearable system
US20180303446A1 (en) 2017-04-21 2018-10-25 Hans Schweizer Medical imaging device and method for supporting a person using a medical imaging device
US20200129850A1 (en) 2017-04-28 2020-04-30 Sony Interactive Entertainment Inc. Information processing device, control method of information processing device, and program
US20180322701A1 (en) 2017-05-04 2018-11-08 Microsoft Technology Licensing, Llc Syndication of direct and indirect interactions in a computer-mediated reality environment
US20180329492A1 (en) 2017-05-09 2018-11-15 Microsoft Technology Licensing, Llc Parallax correction for touch-screen display
US20180357780A1 (en) 2017-06-09 2018-12-13 Sony Interactive Entertainment Inc. Optimized shadows in a foveated rendering system
WO2018235371A1 (en) 2017-06-20 2018-12-27 Sony Corporation Information processing apparatus, information processing method, and recording medium
US20200218423A1 (en) 2017-06-20 2020-07-09 Sony Corporation Information processing apparatus, information processing method, and recording medium
US11221730B2 (en) 2017-07-11 2022-01-11 Logitech Europe S.A. Input device for VR/AR applications
US20190018498A1 (en) * 2017-07-12 2019-01-17 Unity IPR ApS Methods and systems for displaying ui elements in mixed reality environments
US20200379576A1 (en) 2017-08-10 2020-12-03 Google Llc Context-sensitive hand interaction
US20190050071A1 (en) 2017-08-14 2019-02-14 Industrial Technology Research Institute Transparent display device and control method using the same
US20190057531A1 (en) 2017-08-16 2019-02-21 Microsoft Technology Licensing, Llc Repositioning user perspectives in virtual reality environments
US10521944B2 (en) 2017-08-16 2019-12-31 Microsoft Technology Licensing, Llc Repositioning user perspectives in virtual reality environments
US20190318640 (en) 2017-08-21 2019-10-17 Precisionos Technology Inc. Medical virtual reality, mixed reality or augmented reality surgical system
US20210076091A1 (en) 2017-08-29 2021-03-11 Makoto Shohara Image capturing apparatus, image display system, and operation method
US20190391710A1 (en) 2017-09-25 2019-12-26 Tencent Technology (Shenzhen) Company Limited Information interaction method and apparatus, storage medium, and electronic apparatus
US20190107894A1 (en) 2017-10-07 2019-04-11 Tata Consultancy Services Limited System and method for deep learning based hand gesture recognition in first person view
US20190120593A1 (en) 2017-10-19 2019-04-25 SMWT Ltd. Visual Aid
US20190129607A1 (en) 2017-11-02 2019-05-02 Samsung Electronics Co., Ltd. Method and device for performing remote control
US20190146599A1 (en) 2017-11-13 2019-05-16 Arkio Ehf. Virtual/augmented reality modeling application for architecture
US11079753B1 (en) 2018-01-07 2021-08-03 Matthew Roy Self-driving vehicle with remote user supervision and temporary override
US20190213792A1 (en) 2018-01-11 2019-07-11 Microsoft Technology Licensing, Llc Providing Body-Anchored Mixed-Reality Experiences
US20230072423A1 (en) 2018-01-25 2023-03-09 Meta Platforms Technologies, Llc Wearable electronic devices and extended reality systems including neuromuscular sensors
US20190279424A1 (en) 2018-03-07 2019-09-12 California Institute Of Technology Collaborative augmented reality system
US20190361521A1 (en) 2018-05-22 2019-11-28 Microsoft Technology Licensing, Llc Accelerated gaze-supported manual cursor control
US20190362562A1 (en) 2018-05-25 2019-11-28 Leap Motion, Inc. Throwable Interface for Augmented Reality and Virtual Reality Environments
US20190369391A1 (en) 2018-05-31 2019-12-05 Renault Innovation Silicon Valley Three dimensional augmented reality involving a vehicle
US20210208698A1 (en) 2018-05-31 2021-07-08 Purple Tambourine Limited Interacting with a virtual environment using a pointing controller
US20190377416A1 (en) 2018-06-07 2019-12-12 Facebook, Inc. Picture-Taking Within Virtual Reality
WO2019245681A1 (en) 2018-06-20 2019-12-26 Valve Corporation Virtual reality hand gesture generation
US20200001172A1 (en) 2018-06-27 2020-01-02 Facebook Technologies, Llc Capacitive sensing assembly for detecting proximity of user to a controller device
US20200012341A1 (en) 2018-07-09 2020-01-09 Microsoft Technology Licensing, Llc Systems and methods for using eye gaze to bend and snap targeting rays for remote interaction
US20200082629A1 (en) 2018-09-06 2020-03-12 Curious Company, LLC Controlling presentation of hidden information
US20200097091A1 (en) 2018-09-25 2020-03-26 XRSpace CO., LTD. Method and Apparatus of Interactive Display Based on Gesture Recognition
US20200097077A1 (en) 2018-09-26 2020-03-26 Rockwell Automation Technologies, Inc. Augmented reality interaction techniques
US20200159337A1 (en) 2018-11-19 2020-05-21 Kenrick Cheng-kuo Kin Systems and methods for transitioning between modes of tracking real-world objects for artificial reality interfaces
US20200225830A1 (en) 2019-01-11 2020-07-16 Microsoft Technology Licensing, Llc Near interaction mode for far virtual object
US20200225813A1 (en) * 2019-01-11 2020-07-16 Microsoft Technology Licensing, Llc Context-aware system menu behavior for mixed reality
US20200226814A1 (en) 2019-01-11 2020-07-16 Microsoft Technology Licensing, Llc Holographic palm raycasting for targeting virtual objects
US20200225757A1 (en) 2019-01-11 2020-07-16 Microsoft Technology Licensing, Llc Hand motion and orientation-aware buttons and grabbable objects in mixed reality
US20200225758A1 (en) 2019-01-11 2020-07-16 Microsoft Technology Licensing, Llc Augmented two-stage hand gesture input
US20200225736A1 (en) 2019-01-12 2020-07-16 Microsoft Technology Licensing, Llc Discrete and continuous gestures for enabling hand rays
US20200286299A1 (en) 2019-03-06 2020-09-10 Microsoft Technology Licensing, Llc Snapping virtual object to target surface
CN110134234B (en) 2019-04-24 2022-05-10 山东文旅云智能科技有限公司 A method and device for positioning a three-dimensional object
US20200388247A1 (en) 2019-06-07 2020-12-10 Facebook Technologies, Llc Corner-identifying gesture-driven user interface element gating for artificial reality systems
US20220244834A1 (en) 2019-06-07 2022-08-04 Facebook Technologies, Llc Detecting input in artificial reality systems based on a pinch and pull gesture
US20200387287A1 (en) 2019-06-07 2020-12-10 Facebook Technologies, Llc Detecting input in artificial reality systems based on a pinch and pull gesture
US10890983B2 (en) 2019-06-07 2021-01-12 Facebook Technologies, Llc Artificial reality system having a sliding menu
US20200387228A1 (en) 2019-06-07 2020-12-10 Facebook Technologies, Llc Artificial reality system having a sliding menu
US10956724B1 (en) 2019-09-10 2021-03-23 Facebook Technologies, Llc Utilizing a hybrid model to recognize fast and precise hand inputs in a virtual environment
US20210096726A1 (en) 2019-09-27 2021-04-01 Apple Inc. Devices, Methods, and Graphical User Interfaces for Interacting with Three-Dimensional Environments
US20210134065A1 (en) 2019-10-30 2021-05-06 Purdue Research Foundation System and method for generating asynchronous augmented reality instructions
US11514650B2 (en) 2019-12-03 2022-11-29 Samsung Electronics Co., Ltd. Electronic apparatus and method for controlling thereof
US20210183135A1 (en) 2019-12-12 2021-06-17 Facebook Technologies, Llc Feed-forward collision avoidance for artificial reality environments
US20230031913A1 (en) 2020-01-17 2023-02-02 Sony Group Corporation Information processing device, information processing method, computer program, and augmented reality system
EP4145397A1 (en) 2020-04-30 2023-03-08 Virtualwindow Co., Ltd. Communication terminal device, communication method, and software program
US20210373672A1 (en) 2020-05-29 2021-12-02 Microsoft Technology Licensing, Llc Hand gesture-based emojis
US20230343052A1 (en) 2020-08-31 2023-10-26 Sony Group Corporation Information processing apparatus, information processing method, and program
US20220084279A1 (en) 2020-09-11 2022-03-17 Apple Inc. Methods for manipulating objects in an environment
US20220121344A1 (en) 2020-09-25 2022-04-21 Apple Inc. Methods for interacting with virtual controls and/or an affordance for moving virtual objects in virtual environments
US20220156999A1 (en) 2020-11-18 2022-05-19 Snap Inc. Personalized avatar real-time motion capture
US20220163800A1 (en) 2020-11-25 2022-05-26 Sony Interactive Entertainment Inc. Image based finger tracking plus controller tracking
US20220198755A1 (en) 2020-12-22 2022-06-23 Facebook Technologies, Llc Virtual reality locomotion via hand gesture
WO2022146938A1 (en) 2020-12-31 2022-07-07 Sterling Labs Llc Method of manipulating user interfaces in an environment
US20230274512A1 (en) 2021-02-08 2023-08-31 Multinarity Ltd. Coordinating cursor movement between a physical surface and a virtual surface
US20220262080A1 (en) 2021-02-16 2022-08-18 Apple Inc. Interfaces for presenting avatars in three-dimensional environments
US20230139337A1 (en) 2021-07-28 2023-05-04 Multinarity Ltd Controlling duty cycle in wearable extended reality appliances
US20230040610A1 (en) 2021-08-06 2023-02-09 Apple Inc. Object placement for electronic devices
US20240019938A1 (en) 2022-06-17 2024-01-18 Meta Platforms Technologies, Llc Systems for detecting gestures performed within activation-threshold distances of artificial-reality objects to cause operations at physical electronic devices, and methods of use thereof
US20240028129A1 (en) 2022-06-17 2024-01-25 Meta Platforms Technologies, Llc Systems for detecting in-air and surface gestures available for use in an artificial-reality environment using sensors at a wrist-wearable device, and methods of use thereof
US20240264660A1 (en) 2023-02-08 2024-08-08 Meta Platforms Technologies, Llc Facilitating User Interface Interactions in an Artificial Reality Environment
US20240265656A1 (en) 2023-02-08 2024-08-08 Meta Platforms Technologies, Llc Facilitating System User Interface (UI) Interactions in an Artificial Reality (XR) Environment
US20240281071A1 (en) 2023-02-16 2024-08-22 Meta Platforms Technologies, Llc Simultaneous Controller and Touch Interactions
US20240281070A1 (en) 2023-02-16 2024-08-22 Meta Platforms Technologies, Llc Simultaneous Controller and Touch Interactions
US11991222B1 (en) 2023-05-02 2024-05-21 Meta Platforms Technologies, Llc Persistent call control user interface element in an artificial reality environment
US20240372901A1 (en) 2023-05-02 2024-11-07 Meta Platforms Technologies, Llc Persistent Call Control User Interface Element in an Artificial Reality Environment
CN116339737A (en) 2023-05-26 2023-06-27 阿里巴巴(中国)有限公司 XR application editing method, device and storage medium
US20250054243A1 (en) 2023-08-11 2025-02-13 Meta Platforms Technologies, Llc Two-Dimensional User Interface Content Overlay for an Artificial Reality Environment
US20250068297A1 (en) 2023-08-23 2025-02-27 Meta Platforms Technologies, Llc Gesture-Engaged Virtual Menu for Controlling Actions on an Artificial Reality Device

Non-Patent Citations (31)

* Cited by examiner, † Cited by third party
Title
Argelaguet F., et al., "A Survey of 3D Object Selection Techniques for Virtual Environments," Computers & Graphics, 2013, vol. 37, No. 3, pp. 121-136.
Cardoso J., "Comparison of Gesture, Gamepad, and Gaze-Based Locomotion for VR Worlds," Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, Nov. 2, 2016, pp. 319-320.
European Search Report for European Patent Application No. 24154976.5, dated Jun. 7, 2024, 8 pages.
European Search Report for European Patent Application No. 24155225.6, dated May 2, 2024, 10 pages.
Fox B., et al., "Designing Singlehanded Shortcuts for VR & AR," May 10, 2018, Retrieved from the Internet: URL: https://www.roadtovr.com/leap-motion-designing-single-handed-shortcuts-for-vr-ar/, [Retrieved on Oct. 27, 2020], 18 pages.
Hincapie-Ramos J.D., et al., "GyroWand: IMU-Based Raycasting for Augmented Reality Head-Mounted Displays," Proceedings of the 3rd Association for Computing Machinery (ACM) Symposium on Spatial User Interaction, Los Angeles, CA, USA, Aug. 8-9, 2015, pp. 89-98.
Huang Y., et al., "Evaluation of a Hybrid of Hand Gesture and Controller Inputs in Virtual Reality," International Journal of Human-Computer Interaction, Aug. 26, 2020, vol. 37, No. 2, pp. 169-180.
International Preliminary Report on Patentability for International Application No. PCT/US2020/051763, mailed Mar. 31, 2022, 10 pages.
International Search Report and Written Opinion for International Application No. PCT/US2020/051763, mailed Feb. 3, 2021, 11 Pages.
International Search Report and Written Opinion for International Application No. PCT/US2021/063536 mailed Mar. 22, 2022, 12 pages.
International Search Report and Written Opinion for International Application No. PCT/US2023/017990, mailed Jul. 10, 2023, 9 pages.
International Search Report and Written Opinion for International Application No. PCT/US2024/036906, mailed Oct. 17, 2024, 14 pages.
International Search Report and Written Opinion for International Application No. PCT/US2024/040999, mailed Dec. 2, 2024, 10 pages.
International Search Report and Written Opinion of International Application No. PCT/US2020/035998, mailed Sep. 30, 2020, 16 pages.
Lang B., "Leap Motion Virtual Wearable AR Prototype is a Potent Glimpse at the Future of Your Smartphone," Mar. 24, 2018, Retrieved from the Internet: URL: https://www.roadtovr.com/leap-motion-virtual-wearable-ar-prototype-glimpse-of-future-smartphone/, [Retrieved on Oct. 27, 2020], 6 pages.
Lee M.S., et al., "A Computer Vision System for on-Screen Item Selection by Finger Pointing," In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2001, vol. 1, 8 pages.
Mardanbegi D., et al., "EyeSeeThrough: Unifying Tool Selection and Application in Virtual Environments," In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019, pp. 474-483.
Matsuda K., "Augmented City 3D [Official]," YouTube, Aug. 20, 2010, Retrieved from the Internet: URL: https://www.youtube.com/watch?v=3TL80ScTLIM, 1 page.
Mayer S., et al., "The Effect of Offset Correction and Cursor on Mid-Air Pointing in Real and Virtual Environments," Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, Apr. 21-26, 2018, pp. 1-13.
Mine M.R., et al., "Moving Objects in Space: Exploiting Proprioception in Virtual-Environment Interaction," In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, 1997, pp. 19-26.
Newton A., "Immersive Menus Demo," YouTube, Oct. 8, 2017, Retrieved from the Internet: URL: https://www.youtube.com/watch?v=_ow1RboHJDY, 1 page.
Office Action mailed Apr. 9, 2024 for Japanese Patent Application No. 2021-555497, filed on Jun. 3, 2020, 4 pages.
Office Action mailed Feb. 27, 2024 for Chinese Application No. 202080035337.X, filed Jun. 3, 2020, 8 pages.
Office Action mailed Sep. 19, 2023 for European Patent Application No. 20747255.6, filed on Dec. 7, 2021, 6 pages.
Olwal A., et al., "The Flexible Pointer: An Interaction Technique for Selection in Augmented and Virtual Reality," Proceedings of ACM Symposium on User Interface Software and Technology (UIST), Vancouver, BC, Nov. 2-5, 2003, pp. 81-82.
Pfeuffer, et al., "Gaze + Pinch Interaction in Virtual Reality," Proceedings of the 5th Symposium on Spatial User Interaction, SUI '17, Oct. 16, 2017, pp. 99-108.
Prosecution History of U.S. Appl. No. 16/434,919, dated Apr. 2, 2020 through Dec. 15, 2020, 46 pages.
Renner P., et al., "Ray Casting", Central Facility Labs [Online], [Retrieved on Apr. 7, 2020], 2 pages, Retrieved from the Internet: URL:https://www.techfak.uni-bielefeld.de/˜tpfeiffe/lehre/VirtualReality/interaction/ray_casting.html.
Schweigert R., et al., "EyePointing: A Gaze-Based Selection Technique," Proceedings of Mensch und Computer, Hamburg, Germany, Sep. 8-11, 2019, pp. 719-723.
Tomberlin M., et al., "Gauntlet: Travel Technique for Immersive Environments using Non-Dominant Hand," IEEE Virtual Reality (VR), Mar. 18, 2017, pp. 299-300.
"Unity Gets Toolkit for Common AR/VR Interactions," Unity XR Interaction Toolkit Preview [Online], Dec. 19, 2019 [Retrieved on Apr. 7, 2020], 1 page, Retrieved from the Internet: URL: http://youtu.be/ZPhv4qmT9EQ.

Also Published As

Publication number Publication date
WO2020247550A1 (en) 2020-12-10
JP2022535316A (en) 2022-08-08
TW202113555A (en) 2021-04-01
CN113853575B (en) 2024-10-29
US10890983B2 (en) 2021-01-12
CN119336210A (en) 2025-01-21
EP3980870A1 (en) 2022-04-13
US20200387228A1 (en) 2020-12-10
CN113853575A (en) 2021-12-28
KR20220016274A (en) 2022-02-08

Similar Documents

Publication Publication Date Title
USRE50598E1 (en) Artificial reality system having a sliding menu
US12099693B2 (en) Detecting input in artificial reality systems based on a pinch and pull gesture
US11003307B1 (en) Artificial reality systems with drawer simulation gesture for gating user interface elements
EP3997552B1 (en) Virtual user interface using a peripheral device in artificial reality environments
US11043192B2 (en) Corner-identifiying gesture-driven user interface element gating for artificial reality systems
US20200387214A1 (en) Artificial reality system having a self-haptic virtual keyboard
US11422669B1 (en) Detecting input using a stylus in artificial reality systems based on a stylus movement after a stylus selection action
US10921879B2 (en) Artificial reality systems with personal assistant element for gating user interface elements
US11023035B1 (en) Virtual pinboard interaction using a peripheral device in artificial reality environments
US10976804B1 (en) Pointer-based interaction with a virtual surface using a peripheral device in artificial reality environments
US10990240B1 (en) Artificial reality system having movable application content items in containers
US20200387286A1 (en) Arm gaze-driven user interface element gating for artificial reality systems
US11086475B1 (en) Artificial reality systems with hand gesture-contained content window
US10955929B2 (en) Artificial reality system having a digit-mapped self-haptic input method
US10852839B1 (en) Artificial reality systems with detachable personal assistant for gating user interface elements
US11023036B1 (en) Virtual drawing surface interaction using a peripheral device in artificial reality environments
US11816757B1 (en) Device-side capture of data representative of an artificial reality environment

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY