EP4577893A1 - Gaze adjusted avatars for immersive reality applications - Google Patents
Gaze adjusted avatars for immersive reality applications
- Publication number
- EP4577893A1 (application EP23769024.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- avatar
- transmitter
- receiver
- gaze direction
- fixation point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- H04N7/157—Conference systems defining a virtual conference space and using avatars or agents
Definitions
- the present disclosure is related to real-time adjustments of three-dimensional human representations for immersive reality applications. More specifically, the present disclosure is related to real-time gaze locking of avatars for social presence in virtual and augmented reality applications.
- Virtual reality, augmented reality, mixed reality, and other immersive reality applications are becoming popular with users.
- the realistic representation of human faces is a computationally intensive operation, often relayed to a remote server communicatively coupled with an enhanced reality headset either directly or via a mobile device paired with it.
- these strategies may result in a time lag (e.g., latency) that creates awkward artifacts in the virtual world.
- One of the most notorious artifacts created by this latency is a delayed eye gaze on a transmitter/receiver avatar. This creates misunderstandings, unmet expectations, and anxiety for two (or more) people trying to maintain a virtual conversation.
- in a second embodiment, a system includes a memory storing multiple instructions, and one or more processors configured to execute the instructions to cause the system to perform a process.
- the process includes verifying that a visual tracking of a transmitter avatar is active, and adjusting, in a receiver device, a gaze direction of the transmitter avatar to a fixation point. To adjust the gaze direction of the transmitter avatar, the one or more processors execute instructions to estimate a coordinate of the fixation point in a receiver frame at a later time, and to rotate, in the receiver device, two gaze directions from a transmitter device to track the fixation point.
- a non-transitory, computer-readable medium stores instructions which, when executed by a processor, cause a computer to execute a method.
- the method includes verifying, in a receiver device, that a visual tracking of a transmitter avatar is active in a transmitter device, and adjusting, in the receiver device, a gaze direction of the transmitter avatar to a fixation point. Adjusting the gaze direction of the transmitter avatar includes estimating a coordinate of the fixation point in a receiver frame at a later time, and rotating, in the receiver device, two eyeballs of the transmitter avatar to point in a direction of the fixation point.
- a system includes a first means to store instructions and a second means to execute the instructions to cause the system to perform a method.
- the method includes verifying, in a receiver device, that a visual tracking of a transmitter avatar is active in a transmitter device, and adjusting, in the receiver device, a gaze direction of the transmitter avatar to a fixation point. Adjusting the gaze direction of the transmitter avatar includes estimating a coordinate of the fixation point in a receiver frame at a later time, and rotating, in the receiver device, two eyeballs of the transmitter avatar to point in a direction of the fixation point.
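Read together, the verification and adjustment steps above amount to a short geometric routine on the receiver side. The Python sketch below illustrates the two sub-steps — extrapolating the fixation point to a later time and rotating each eyeball toward it. The function names, the constant-velocity predictor, and all numeric values are assumptions made for illustration, not the patent's implementation.

```python
import numpy as np

def predict_fixation_point(p0, v0, dt):
    """Extrapolate the fixation point to a later time t + dt in the
    receiver frame, assuming (hypothetically) constant-velocity motion."""
    return p0 + v0 * dt

def gaze_rotation(eye_center, old_dir, fixation_point):
    """Rotation matrix turning the unit gaze vector old_dir so that it
    points from eye_center toward fixation_point (Rodrigues' formula)."""
    new_dir = fixation_point - eye_center
    new_dir = new_dir / np.linalg.norm(new_dir)
    axis = np.cross(old_dir, new_dir)
    s, c = np.linalg.norm(axis), np.dot(old_dir, new_dir)
    if s < 1e-8:                 # directions already (anti-)parallel;
        return np.eye(3)         # the 180-degree case is ignored in this sketch
    k = axis / s
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

# Rotate both eyeballs of the transmitter avatar toward the predicted point.
fixation = predict_fixation_point(np.array([0.0, 1.6, 2.0]),   # last known point
                                  np.array([0.1, 0.0, 0.0]),   # estimated velocity
                                  dt=0.1)                      # look-ahead (s)
for eye_center in (np.array([-0.03, 1.65, 0.0]), np.array([0.03, 1.65, 0.0])):
    R = gaze_rotation(eye_center, np.array([0.0, 0.0, 1.0]), fixation)
```

In a fuller pipeline, the predicted point would come from the estimated relative position between the transmitter and receiver avatars at the later time, as the claims below describe.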
- verifying that a visual tracking of a transmitter avatar is currently in place comprises identifying a conversation between the transmitter avatar and a receiver avatar.
- verifying that a visual tracking of a transmitter avatar is currently in place comprises querying the transmitter device for a visual tracking signal.
- verifying that a visual tracking of a transmitter avatar is currently in place comprises verifying that a value of a function of an angular difference between the gaze direction of the transmitter avatar and the fixation point is greater than a pre-selected threshold.
- the one or more processors execute instructions to visually track the transmitter avatar with a gaze lock between the transmitter avatar and a receiver avatar, and to estimate the coordinate of the fixation point the one or more processors execute instructions to estimate a relative position between the transmitter avatar and the receiver avatar at the later time.
- the one or more processors execute instructions to update the gaze direction at the later time to a new gaze direction received from the transmitter device when a distance between the gaze direction and the new gaze direction is greater than a preselected threshold.
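Both threshold tests above — the verification test on the angular difference to the fixation point, and the update test between the old and newly received gaze directions — can be sketched in a few lines. Treating the "distance" as the angle between unit vectors, and the threshold value itself, are assumptions; the claims leave the exact function open.

```python
import numpy as np

ANGLE_THRESHOLD = np.deg2rad(2.0)   # hypothetical pre-selected threshold

def angular_difference(u, v):
    """Angle between two unit gaze-direction vectors, in radians."""
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def maybe_update_gaze(current_dir, new_dir_from_transmitter):
    """Adopt the newly received gaze direction only when it deviates
    enough from the locally adjusted one to justify a visible
    correction; otherwise keep the local estimate and avoid jitter."""
    if angular_difference(current_dir, new_dir_from_transmitter) > ANGLE_THRESHOLD:
        return new_dir_from_transmitter
    return current_dir
```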
- the one or more processors further execute instructions to adjust a head direction of the transmitter avatar based on the fixation point at the later time.
- the one or more processors further execute instructions to update a world state at the later time based on a dynamic condition of the world state at a current time.
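One hedged reading of this world-state update is simple dead reckoning: extrapolate each dynamic object from its condition at the current time to the later time. The dictionary layout and field names below are illustrative assumptions only.

```python
import numpy as np

def update_world_state(state, dt):
    """Advance every dynamic object from its condition at the current
    time to the later time t + dt (simple dead reckoning; a hypothetical
    stand-in for the world-state update described above)."""
    return {
        obj_id: {"pos": obj["pos"] + obj["vel"] * dt, "vel": obj["vel"]}
        for obj_id, obj in state.items()
    }

# Example: a single tracked object drifting along x.
world = {"flower": {"pos": np.array([1.0, 0.0, 2.0]),
                    "vel": np.array([0.0, 0.0, -0.2])}}
world_later = update_world_state(world, dt=0.1)
```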
- FIG. 1 illustrates a receiver and a transmitter participating in a virtual conversation using enhanced reality headsets configured for gaze locking in real time, according to some embodiments.
- FIG. 4 illustrates world status parameters for a function to verify a gaze lock status of a transmitter avatar, according to some embodiments.
- FIG. 5 is a flow chart illustrating steps in a method for gaze locking a transmitter avatar in real time for immersive reality applications, according to some embodiments.
- FIG. 6 is a block diagram illustrating an exemplary computer system with which headsets and other client devices, and the method in FIG. 5, can be implemented.
- embodiments as disclosed herein provide a receiver headset with mechanisms and procedures to adjust the gaze of a transmitter avatar locally, without having to communicate remotely with a mobile device, a server, and ultimately the transmitter headset.
- techniques as disclosed herein may overcome a noisy eye tracking detection system by accurately estimating the gaze fixation of a transmitter/receiver during a virtual conversation.
- the eye tracking device can determine a gaze direction of receiver 101/transmitter 102.
- Microphones 121 may record a speech 107-1 from receiver 101 or a speech 107-2 from transmitter 102 (hereinafter, collectively referred to as “speeches 107”).
- architecture 10 transmits speeches 107 to remote server 130 via network 150, where the session may be captured in full and fed into virtual world 170.
- each one of the devices illustrated in architecture 10 may include a memory storing instructions and one or more processors configured to execute the instructions to cause each device to participate, at least partially, in methods as disclosed herein.
- an object of interest for receiver 101 may be the eyes or the face of avatar 112, the flower, or even a moving object at a distance (e.g., a car, a train, a plane, another avatar, and the like). Accordingly, it is desirable to maintain gaze 115-1 focused on the flower for as long as the context of the conversation in virtual world 170 calls for. Reciprocally, it may be desirable to keep a gaze 115-2 of transmitter avatar 112 on the face (e.g., the eyes) of receiver avatar 111, within the context of the conversation that receiver 101 and transmitter 102 are having in virtual world 170.
- the processing and transmission of datasets 103 incurs the inherent latency of network 150. Moreover, errors are expected to occur as the data transmission involves at least three devices communicating over network 150 (e.g., headsets 100, mobile devices 110, and remote server 130). Accordingly, it is desirable that headset 100-1 with receiver 101 decide to adjust the gaze locking of transmitter avatar 112 before receiving an accurate rendition of that avatar from headset 100-2 through network 150.
- This adjustment, being a local change made by headset 100-1, is a real-time gaze lock that gives the conversation in virtual world 170 a highly realistic impression.
- headset 100-1 may be configured to analyze the context of the conversation from speeches 107, extracting meaning, intention, and other explicit or implicit content from the conversation between receiver and transmitter.
- a user may interact with client device 110 via an input device 214 and an output device 216.
- Input device 214 may include a mouse, a keyboard, a pointer, a touchscreen, a microphone, a joystick, a virtual joystick, and the like.
- input device 214 may include cameras, microphones, and sensors, such as touch sensors, acoustic sensors, inertial motion units (IMUs), and other sensors configured to provide input data to a VR/AR headset.
- input device 214 may include an eye tracking device to detect the position of a user’s pupil in a VR/AR headset.
- Output device 216 may be a screen display, a touchscreen, a speaker, and the like.
- Client device 110 may include a memory 220-1 and a processor 212-1.
- Memory 220-1 may include an application 222 and a GUI 225, configured to run in client device 110 and couple with input device 214 and output device 216.
- Application 222 may be downloaded by the user from server 130 and may be hosted by server 130.
- client device 110 is a VR/AR headset and application 222 is an immersive reality application.
- client device 110 is a mobile phone used to collect a video or picture and upload it to server 130 using a video or image collection application 222, to be stored in training database 152.
- Memory 220-1 may also include an eye tracking module 242 that receives input from the eye tracking device in a headset and identifies a gaze direction of the headset user, in a virtual world (e.g., virtual world 170). Further, in some embodiments, eye tracking module 242 may also capture a pupil position and gaze direction of an avatar in the virtual world. Accordingly, eye tracking module 242 may correlate the gaze direction of a “receiver” avatar (e.g., a “local” avatar modeled on the user of client device 110) with the gaze direction of a “transmitter” avatar (e.g., an avatar that interacts with the receiver avatar and that is modeled on a user of a remote device 110).
- the gaze direction of the transmitter avatar may be provided to client device 110 from the remote client device used by the transmitter, via server 130.
- a network delay may result in an outdated signal received by client device 110 regarding the gaze direction of the transmitter avatar.
- eye tracking module 242 may use a neural network tool 248 trained to predict a correct gaze direction of the transmitter avatar in the virtual world. In some embodiments, eye tracking module 242 is triggered to replace the current feed for the transmitter avatar gaze with a locally predicted gaze when a mismatch between the two becomes larger than a pre-selected threshold.
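A sketch of that trigger logic follows. Here, `model.predict` stands in for neural network tool 248 (its inputs and outputs are assumptions), and the mismatch is measured as the angle between the remote feed and the local prediction.

```python
import numpy as np

MISMATCH_THRESHOLD = np.deg2rad(5.0)   # hypothetical pre-selected threshold

def transmitter_gaze(remote_feed_dir, local_features, model):
    """Keep the (possibly stale) gaze direction from the network feed
    unless it disagrees with the locally predicted direction by more
    than the threshold, in which case the local prediction replaces it."""
    predicted = model.predict(local_features)          # unit gaze vector
    mismatch = np.arccos(np.clip(np.dot(remote_feed_dir, predicted), -1.0, 1.0))
    return predicted if mismatch > MISMATCH_THRESHOLD else remote_feed_dir
```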
- neural network tool 248 may be part of one or more machine learning models stored in database 152.
- Database 152 includes training archives and other data files that may be used by eye tracking module 242 to accurately predict a gaze direction of the transmitter avatar.
- at least one or more training archives or machine learning models may be stored in either one of memories 220, and client device 110 may have access to them through application 222.
- An angular distance, ϑ, may be defined as:
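A definition consistent with the surrounding description — stated here as an assumption, since the claims only name a function of the angular difference — is the angle between the transmitter avatar's unit gaze vector \(\hat{g}\) and the unit vector \(\hat{f}\) pointing from the eye toward the fixation point:

```latex
\vartheta = \arccos\left(\hat{g} \cdot \hat{f}\right), \qquad 0 \le \vartheta \le \pi
```

This ϑ is the quantity compared against the pre-selected thresholds in the verification and update steps described above.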
- Instructions may also be implemented in computer languages such as array languages, aspect-oriented languages, assembly languages, authoring languages, command line interface languages, compiled languages, concurrent languages, curly-bracket languages, dataflow languages, data-structured languages, declarative languages, esoteric languages, extension languages, fourth-generation languages, functional languages, interactive mode languages, interpreted languages, iterative languages, list-based languages, little languages, logic-based languages, machine languages, macro languages, metaprogramming languages, multiparadigm languages, numerical analysis, non-English-based languages, object-oriented class-based languages, object-oriented prototype-based languages, off-side rule languages, procedural languages, reflective languages, rule-based languages, scripting languages, stack-based languages, synchronous languages, syntax handling languages, visual languages, Wirth languages, and XML-based languages.
- Memory 604 may also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 602.
- a computer program as discussed herein does not necessarily correspond to a file in a file system.
- a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
- a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
- the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
- Computer system 600 further includes a data storage device 606 such as a magnetic disk or optical disk, coupled with bus 608 for storing information and instructions.
- Computer system 600 may be coupled via input/output module 610 to various devices.
- Input/output module 610 can be any input/output module.
- Exemplary input/output modules 610 include data ports such as USB ports.
- the input/output module 610 is configured to connect to a communications module 612.
- Exemplary communications modules 612 include networking interface cards, such as Ethernet cards and modems.
- input/output module 610 is configured to connect to a plurality of devices, such as an input device 614 and/or an output device 616.
- Exemplary input devices 614 include a keyboard and a pointing device, e.g., a mouse or a trackball, by which a consumer can provide input to the computer system 600.
- Other kinds of input devices 614 can be used to provide for interaction with a consumer as well, such as a tactile input device, visual input device, audio input device, or brain-computer interface device.
- feedback provided to the consumer can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the consumer can be received in any form, including acoustic, speech, tactile, or brain wave input.
- Exemplary output devices 616 include display devices, such as an LCD (liquid crystal display) monitor, for displaying information to the consumer.
- headsets and client devices 110 can be implemented, at least partially, using a computer system 600 in response to processor 602 executing one or more sequences of one or more instructions contained in memory 604. Such instructions may be read into memory 604 from another machine-readable medium, such as data storage device 606. Execution of the sequences of instructions contained in main memory 604 causes processor 602 to perform the process steps described herein. One or more processors in a multi-processing arrangement may also be employed to execute the sequences of instructions contained in memory 604. In alternative aspects, hard-wired circuitry may be used in place of or in combination with software instructions to implement various aspects of the present disclosure. Thus, aspects of the present disclosure are not limited to any specific combination of hardware circuitry and software.
- a computing system that includes a back end component, e.g., a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical consumer interface or a Web browser through which a consumer can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components.
- the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network.
- the communication network can include, for example, any one or more of a LAN, a WAN, the Internet, and the like.
- the communication network can include, but is not limited to, for example, any one or more of the following network topologies, including a bus network, a star network, a ring network, a mesh network, a star-bus network, tree or hierarchical network, or the like.
- the communications modules can be, for example, modems or Ethernet cards.
- Computer system 600 can include clients and servers.
- a client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
- Computer system 600 can be, for example, and without limitation, a desktop computer, laptop computer, or tablet computer.
- Computer system 600 can also be embedded in another device, for example, and without limitation, a mobile telephone, a PDA, a mobile audio player, a Global Positioning System (GPS) receiver, a video game console, and/or a television set top box.
- the terms “machine-readable storage medium” or “computer-readable medium” as used herein refer to any medium or media that participates in providing instructions to processor 602 for execution. Such a medium may take many forms, including, but not limited to, non-volatile media, volatile media, and transmission media.
- Non-volatile media include, for example, optical or magnetic disks, such as data storage device 606.
- Volatile media include dynamic memory, such as memory 604.
- Transmission media include coaxial cables, copper wire, and fiber optics, including the wires forming bus 608.
- Machine-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, a DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH EPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
- the machine-readable storage medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter affecting a machine-readable propagated signal, or a combination of one or more of them.
- a method may be an operation, an instruction, or a function and vice versa.
- a clause may be amended to include some or all of the words (e.g., instructions, operations, functions, or components) recited in either one or more clauses, one or more words, one or more sentences, one or more phrases, one or more paragraphs, and/or one or more clauses.
- the phrase “at least one of” preceding a series of items, with the terms “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list (e.g., each item).
- the phrase “at least one of” does not require selection of at least one item; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items.
- phrases “at least one of A, B, and C” or “at least one of A, B, or C” each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.
- the word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments. Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof and alike are for convenience only and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology.
- a disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations.
- a disclosure relating to such phrase(s) may provide one or more examples.
- a phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases.
- a reference to an element in the singular is not intended to mean “one and only one” unless specifically stated, but rather “one or more.” Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. The term “some” refers to one or more.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Ophthalmology & Optometry (AREA)
- Signal Processing (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A method for updating a gaze direction for a transmitter avatar in a receiver headset is provided. The method includes verifying, in a receiver device, that a visual tracking of a transmitter avatar is active in a transmitter device, and adjusting, in the receiver device, a gaze direction of the transmitter avatar to a fixation point. Adjusting the gaze direction of the transmitter avatar includes estimating a coordinate of the fixation point in a receiver frame at a later time, and rotating, in the receiver device, two eyeballs of the transmitter avatar to point in a direction of the fixation point. A headset, a memory in the headset storing instructions, and a processor configured to execute the instructions to implement the above method are also provided.
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202263399997P | 2022-08-22 | 2022-08-22 | |
| US18/067,603 US12032737B2 (en) | 2022-08-22 | 2022-12-16 | Gaze adjusted avatars for immersive reality applications |
| PCT/US2023/030832 WO2024044193A1 (fr) | 2023-08-22 | Gaze adjusted avatars for immersive reality applications |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP4577893A1 (fr) | 2025-07-02 |
Family
ID=88020851
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP23769024.3A Pending EP4577893A1 (fr) | Gaze adjusted avatars for immersive reality applications |
Country Status (3)
| Country | Link |
|---|---|
| EP (1) | EP4577893A1 (fr) |
| CN (1) | CN119404167A (fr) |
| WO (1) | WO2024044193A1 (fr) |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10325396B2 (en) * | 2017-02-14 | 2019-06-18 | Linden Research, Inc. | Virtual reality presentation of eye movement and eye contact |
| JP7389032B2 (ja) * | 2017-12-14 | 2023-11-29 | マジック リープ, インコーポレイテッド | 仮想アバタのコンテキストベースのレンダリング |
- 2023
- 2023-08-22 WO PCT/US2023/030832 patent/WO2024044193A1/fr not_active Ceased
- 2023-08-22 EP EP23769024.3A patent/EP4577893A1/fr active Pending
- 2023-08-22 CN CN202380046736.XA patent/CN119404167A/zh active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| WO2024044193A1 (fr) | 2024-02-29 |
| CN119404167A (zh) | 2025-02-07 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20230350497A1 (en) | Scrolling and navigation in virtual reality | |
| US20240355064A1 (en) | Overlaying visual content using model adaptation | |
| US20240355065A1 (en) | Dynamic model adaptation customized for individual users | |
| US12056824B2 (en) | Simulated control for 3-dimensional human poses in virtual reality environments | |
| US11734888B2 (en) | Real-time 3D facial animation from binocular video | |
| US12482188B2 (en) | Simulated control for 3-dimensional human poses in virtual reality environments | |
| KR20250105422A (ko) | 증강 현실 디바이스들을 사용하는 아바타 이동의 사용자 제어 | |
| US12032737B2 (en) | Gaze adjusted avatars for immersive reality applications | |
| EP4345755A1 (fr) | Transfert d'expression pour avatars stylisés | |
| EP4577893A1 (fr) | Gaze adjusted avatars for immersive reality applications | |
| US20240064413A1 (en) | Comfortable multiplexed lighting for modeling relightable avatars | |
| US12101549B2 (en) | Camera control using system sensor data | |
| US20240177331A1 (en) | Computer-based posture assessment and correction | |
| US12299821B2 (en) | Solution of body-garment collisions in avatars for immersive reality applications | |
| US20250200893A1 (en) | Contextual time-based digital representations | |
| US20230252822A1 (en) | Sign language detection for smart glasses | |
| US20250111604A1 (en) | Optimizing level of detail generation in virtual environments | |
| US20230046341A1 (en) | World lock spatial audio processing | |
| US12468439B1 (en) | Hand scale factor estimation from mobile interactions | |
| US20250254269A1 (en) | Virtual meeting background freeze | |
| US20240242455A1 (en) | Stylizing animatable head avatars | |
| US20230004220A1 (en) | Dynamic uniformity compensation for foveated imaging in virtual reality and augmented reality headsets | |
| US20250267240A1 (en) | Detecting the presence of a virtual meeting participant | |
| US20250356453A1 (en) | Automated display zoom based on movement of user | |
| JP2025170221A (ja) | Adapting simulated character interactions to different forms and interaction scenarios |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: UNKNOWN |
| | STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| | PUAI | Public reference made under Article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
| | STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| | 17P | Request for examination filed | Effective date: 20250227 |
| | AK | Designated contracting states | Kind code of ref document: A1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | DAV | Request for validation of the European patent (deleted) | |
| | DAX | Request for extension of the European patent (deleted) | |