WO2025035402A1 - Authentication of user in the metaverse - Google Patents


Info

Publication number
WO2025035402A1
WO2025035402A1 (PCT/CN2023/113196)
Authority
WO
WIPO (PCT)
Prior art keywords
user
virtual character
attribute
determining
pattern image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2023/113196
Other languages
French (fr)
Inventor
Stefan Metz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TrinamiX GmbH
Original Assignee
TrinamiX GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TrinamiX GmbH filed Critical TrinamiX GmbH
Priority to PCT/CN2023/113196 priority Critical patent/WO2025035402A1/en
Publication of WO2025035402A1 publication Critical patent/WO2025035402A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication

Definitions

  • the invention relates to a method for verifying at least one attribute of a virtual character in a virtual environment.
  • the present invention further relates to a device and/or a system, a computer program and a non-transient computer-readable medium.
  • the devices, methods and uses according to the present invention specifically may be employed for example in various areas of daily life, security technology, gaming, traffic technology, production technology, photography such as digital photography or video photography for arts, documentation or technical purposes, safety technology, information technology, agriculture, crop protection, maintenance, cosmetics, medical technology or in sciences.
  • other applications are also possible.
  • a real life user is typically represented by a virtual character, e.g. an avatar.
  • virtual characters are typically easy to create; thereby, it is easy to fake being a specific real life user, particularly in case no verification is employed.
  • a method for verifying at least one attribute of a virtual character in a virtual environment associated with a user is disclosed.
  • the method for verifying at least one attribute of a virtual character in a virtual environment associated with a user comprises method steps which may be performed in the given order. A different order, however, may also be feasible.
  • two or more of the method steps may be performed simultaneously. Thereby the method steps may at least partly overlap in time.
  • the method steps may be performed once or repeatedly. Thus, one or more or even all of the method steps may be performed once or repeatedly.
  • the method may comprise additional method steps, which are not listed herein.
  • the method may be a computer-implemented method.
  • at least one of the method steps preferably any one of the method steps may be performed by using a device and/or system comprising at least one processor for executing the steps.
  • the term "computer implemented method” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to a method, which involves at least one apparatus, specifically a computer, or a plurality of apparatus, particularly connected via a computer network.
  • the plurality of apparatus may be connected, particularly for transmitting data, via a network by using at least one connection interface at any one of the apparatuses of the plurality of apparatus.
  • the computer-implemented method may be implemented as at least one computer program that may be provided on a storage medium carrying the computer program, whereby at least one of the steps of the computer-implemented method, specifically at least one of steps, are performed by using the at least one computer program. Preferably any one of the steps may be performed using the at least one computer program.
  • the at least one computer program may be accessible by an apparatus which may be adapted for performing the method via a network, such as via an in-house network, via internet, or via a cloud.
  • the present method can, thus, be performed on a programmable apparatus, which is configured for this purpose, such as by providing a computer program, which is configured for such a purpose.
  • the method comprises:
  • the method comprises receiving a request to verify at least one attribute of the virtual character.
  • the term “requesting” is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to a first unit querying an item, such as data, from a second unit.
  • the requesting specifically may take place electronically, e.g. by sending an electronic request, such as via at least one telecommunications network.
  • the request may be provided by a providing server of the virtual environment.
  • the request may be received by a device and/or system configured for verifying at least one attribute of the virtual character.
  • receiving or any grammatical variation thereof, as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, e.g. to a device and/or entity getting the received item, specifically by using a connection interface.
  • the item may be provided by a further device and/or entity.
  • virtual character is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to at least one virtual representation of a user, particularly in a virtual environment.
  • the virtual character may be a two-dimensional model and/or a three-dimensional model, such as an “avatar” .
  • the virtual character may allow the user to interact with a virtual environment and/or at least one further virtual character of a further user.
  • virtual environment is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to a computer program configured for allowing at least one user to interact in a computing environment.
  • at least one virtual character of the user and/or at least one virtual character of the further user may be present.
  • the interaction of the user and the further user may be performed via the at least one virtual character of the user and the at least one virtual character of the further user.
  • the virtual environment may be a virtual world and/or a metaverse.
  • a “virtual world” may be a computer-simulated environment in which a plurality of users can interact with each other and with at least one surrounding object via virtual characters.
  • the term "metaverse” may refer to a collective virtual space, which comprises a plurality of virtual worlds.
  • the term “attribute” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to a characteristic property of the virtual character.
  • the attribute may comprise at least one recognition value configured for allowing to recognize the virtual character of the user and/or to distinguish the virtual character of the user from a virtual character of a further user.
  • the at least one attribute of the virtual character of the user may be verified in a manner that, by identifying the virtual character of the user, the user may be recognized.
  • the at least one attribute may be at least a portion of an appearance of the virtual character.
  • the at least a portion of an appearance may be a face of the virtual character and/or a fingerprint of the virtual character.
  • the at least one attribute may be at least one item of personal data of the virtual character.
  • personal data refers to at least one item of information allowing for identifying the virtual character of the user.
  • the identity of the virtual character may be a name of the virtual character and/or a unique identification code, specifically in form of a plain text, such as a human-readable text, or in form of a machine-readable code, such as a QR code.
  • the identification number may be a passport number and/or another physical or digital ID number.
  • the at least one attribute of the virtual character may correspond to at least one attribute of the user.
  • the at least one attribute of the virtual character and the at least one attribute of the user may be the same.
  • the user may have the same at least one attribute in the real world.
  • the “real world” may be the physical reality in which the user is living.
  • the user and the virtual character may have the same name.
  • the user and the virtual character may have the same face. Thereby the identification of the user that is controlling the virtual character may be particularly easy.
  • verifying is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to a confirmation of the correctness of something by checking it.
  • verifying of the at least one attribute of the virtual character may be performed for confirming that the at least one attribute of the virtual character allows for the recognition of the correct user in the virtual environment.
  • at least one further user may be prevented from obtaining the at least one attribute of the virtual character of the user and falsely pretending to be the user.
  • Verifying may comprise at least one of:
  • the method comprises receiving at least one pattern image showing the user associated with the virtual character while the user is being illuminated by patterned infrared illumination.
  • the pattern image may be provided by an image generation unit.
  • the method may comprise requesting the pattern image from the image generation unit.
  • the device and/or system configured for verifying at least one attribute of the virtual character may receive the pattern image.
  • the method may comprise illuminating the user with at least one infrared light pattern, particularly by using at least one pattern illumination source.
  • the method may comprise capturing the at least one pattern image while the user is being illuminated by patterned infrared illumination, particularly by using an image generation unit.
  • the term “associated with” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to a term used to indicate a relationship between at least two arbitrary elements.
  • a first element may be associated with a second element in a manner that said first element is assigned to said second element.
  • the relationship between the first and second elements may be at least one of: a relationship by predefinition, a relationship by at least one property the first and second elements have in common, a relationship due to identical or similar causes or origins of the first and second elements, a relationship due to the first element causing or evoking the second element, a relationship due to the second element causing or evoking the first element.
  • the virtual character may be of and/or assigned to the user.
  • pattern image is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to an image generated by the image generation unit while illuminating with the infrared light pattern, e.g. on an object and/or a user.
  • the pattern image may comprise an image showing a user, in particular at least parts of the face of the user, while the user is being illuminated with the infrared light pattern, particularly on a respective area of interest comprised by the image.
  • the pattern image may be generated by imaging and/or recording light reflected by an object and/or user, which is illuminated by the infrared light pattern.
  • the pattern image showing the user may comprise at least a portion of the illuminated infrared light pattern on at least a portion of the user.
  • the illumination by the pattern illumination source and the imaging by using the optical sensor may be synchronized, e.g. by using at least one control unit of the device and/or system.
  • the term “illuminate” is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to the process of exposing at least one element to light.
  • the term “illumination source” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to an arbitrary device configured for generating or providing light in the sense of the above-mentioned definition.
  • pattern illumination source is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to an arbitrary device configured for generating or providing at least one light pattern, in particular at least one infrared light pattern.
  • the term “light pattern” also referred to as “patterned infrared illumination” , as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to at least one arbitrary pattern comprising a plurality of light spots.
  • the light spot may be at least partially spatially extended. At least one spot or any spot may have an arbitrary shape. In some cases, a circular shape of at least one spot or any spot may be preferred.
  • the spots may be arranged by considering a structure of a display. Typically, an arrangement of an OLED-pixel-structure of the display may be considered.
  • the term “infrared light pattern” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a light pattern comprising spots in the infrared spectral range.
  • the infrared light pattern may be a near infrared light pattern.
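As an illustration of such a light pattern, the following Python sketch generates spot centres on a regular rectangular grid. This is only a hypothetical example; the patent does not prescribe a grid layout, and a deployed pattern illumination source may well project a pseudo-random spot arrangement instead.

```python
import numpy as np

def spot_pattern(rows: int, cols: int, pitch: float) -> np.ndarray:
    """Return an (N, 2) array of spot centres (x, y) for a regular grid.

    Hypothetical sketch of a periodic light pattern comprising a
    plurality of spots; `pitch` is the spacing between neighbouring
    spot centres in arbitrary units.
    """
    ys, xs = np.mgrid[0:rows, 0:cols]
    return np.stack([xs.ravel() * pitch, ys.ravel() * pitch], axis=1)

pattern = spot_pattern(rows=4, cols=5, pitch=2.0)  # 20 spot centres
```

A projector would then place one (e.g. circular) infrared spot at each returned coordinate.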
  • the method comprises determining if the user corresponds to a living human based on the at least one pattern image, particularly in a validation process. “Determining” may be or may comprise evaluating. “Based on” may be or may comprise considering.
  • living human is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to an individual of the species Homo sapiens, wherein the individual is currently alive.
  • Determining if the user corresponds to a living human based on the at least one pattern image may comprise extracting material data from the at least one pattern image, particularly in a validation step. Particularly thereby, it may be determined that the user is a human.
  • the material data may comprise an item of information on the type of material of the user detected in the pattern image. Extracting material data from the pattern image may be or may comprise generating the material type and/or data derived from the material type.
  • the pattern image may be provided to a receiving model for determining the material data from the at least one pattern image.
  • the material data may be received from the providing model for determining the material data from the at least one pattern image.
  • providing is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to making the provided item available, e.g. to a device and/or entity, specifically by using a connection interface.
  • the item may be provided by a further device and/or entity.
  • the device and/or entity may request the item. The request may be received by the further device and/or entity.
  • Providing the at least one pattern image to a receiving model for determining the material data from the at least one pattern image may comprise receiving the at least one pattern image by the receiving model for determining the material data from the at least one pattern image.
  • the at least one pattern image may be received by at least one input layer comprised by the receiving model for determining the material data from the at least one pattern image.
  • the model for determining the material data from the at least one pattern image may comprise at least one mechanistic model for determining the material data from the at least one pattern image.
  • mechanistic model also referred to as “deterministic model” , as used herein, is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to a model that reflects physical phenomena in mathematical form, e.g., including first-principles models.
  • a mechanistic model may comprise a set of equations that describe an interaction between the material and the patterned electromagnetic radiation thereby resulting in a condition measure, a vital sign measure or the like.
  • the mechanistic model for determining the material data from the at least one pattern image may be calibrated by using calibration data in a regression procedure.
  • regression procedure is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to a statistical method employed for establishing a relationship between a dependent variable, such as the material data, and one or more independent variables, such as the pattern image. The goal of regression is to predict the value of the dependent variable based on the input values of the independent variables.
  • calibration as used herein, is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to the process of optimizing at least one parameter of the mechanistic model in a manner that the dependent variable may be evaluated more precisely.
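The calibration of a mechanistic model by regression, as described above, can be sketched as follows. The assumed linear relation between a pattern-image feature and the material data, and all numeric values, are illustrative; a real mechanistic model would use its own physically motivated equations.

```python
import numpy as np

# Hypothetical calibration data: feature values extracted from pattern
# images (independent variable) and known material data obtained from
# reference measurements (dependent variable).
features = np.array([0.2, 0.5, 0.8, 1.1])
material = np.array([1.1, 1.9, 2.8, 3.6])

# Optimize the free parameters (slope a, offset b) of an assumed
# mechanistic relation  material = a * feature + b  by least squares.
A = np.stack([features, np.ones_like(features)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, material, rcond=None)

# The calibrated model predicts material data for a new feature value.
predicted = a * 0.65 + b
```

After calibration, the dependent variable (the material data) can be evaluated more precisely from a newly measured feature, which is exactly the goal of the regression procedure defined above.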
  • the calibration data may comprise:
  • the model for determining the material data from the at least one pattern image comprises at least one data-driven model for determining the material data from the at least one pattern image.
  • data-driven model is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to a classification model comprising at least one machine-learning architecture and a plurality of model parameters.
  • the data-driven model may be parametrized in a training by using training data.
  • the training may be a process of finding a best parameter combination of the plurality of model parameters.
  • the training is carried out to improve the capability of the machine learning algorithm to obtain a representative result, such as the material data, by evaluating an input, such as the pattern image.
  • the training data may comprise one or more training data sets.
  • the one or more data sets may each comprise input data and a known representative result derivable by evaluating the input data by using the data-driven model.
  • the data-driven model for determining the material data from the at least one pattern image may be trained by using training data in a training procedure, particularly wherein the training data comprises, particularly a plurality of training data sets each comprising:
  • Retraining may be included when referring to training herein.
  • the at least one data-driven model for determining the material data from the at least one pattern image may comprise at least one of:
  • the component comprised by the data-driven model for determining the material data is an encoder, specifically an auto-encoder.
  • the “convolutional neural network” may comprise at least one convolutional layer and/or at least one pooling layer.
  • convolutional neural networks may reduce the dimensionality of a partial image and/or an image by applying a convolution, e.g. based on a convolutional layer, and/or by pooling. Applying a convolution may be suitable for selecting features related to material information of the pattern image.
  • the “encoder-decoder structure” refers to a machine-learning architecture used in sequence-to-sequence tasks.
  • the “encoder” may be used for processing the input data and transforming it into a context vector representation.
  • the context vector is fed into a “decoder” .
  • the decoder generates an output.
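The dimensionality reduction by convolution and pooling described above can be sketched in plain NumPy. This is a minimal illustration of the two operations, not the patent's data-driven model; a trained network would stack many such layers with learned kernels in a deep-learning framework.

```python
import numpy as np

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img: np.ndarray, size: int = 2) -> np.ndarray:
    """Non-overlapping max pooling, reducing each spatial dimension."""
    h, w = img.shape
    h, w = h - h % size, w - w % size
    return img[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.random.rand(8, 8)          # stand-in for a pattern-image patch
edge = np.array([[1.0, -1.0]])      # toy kernel responding to intensity edges
feat = max_pool(conv2d(img, edge))  # (8, 7) conv output pooled to (4, 3)
```

The convolution selects local intensity structure (here, horizontal edges) while the pooling discards spatial detail, which is how a convolutional layer plus pooling layer reduces the dimensionality of the pattern image before classification.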
  • Determining if the user corresponds to a living human based on the at least one pattern image may comprise determining at least one blood perfusion measure. Particularly thereby, it may be determined that the human is living.
  • blood perfusion measure is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to a blood volume flow through a given volume or mass of tissue.
  • the blood perfusion measure may be given in units of ml/ml/s or ml/100 g/min.
  • the blood perfusion measure may represent a local blood flow through the at least one capillary network and one or more extracellular spaces in a body tissue.
  • Determining the at least one blood perfusion measure may comprise determining at least one speckle contrast of the pattern image.
  • determining the at least one blood perfusion measure may comprise determining a blood perfusion measure based on the determined at least one speckle contrast.
  • speckle contrast is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to a degree of a variation in a speckle pattern generated by coherent light.
  • the speckle pattern may be generated by the pattern illumination source, particularly on the user, more particularly by flood light and/or pattern light scattered on the user.
  • a speckle contrast may represent a measure for a mean contrast of an intensity distribution within an area of a speckle pattern.
  • a speckle contrast may equal or may comprise a speckle contrast value.
  • a speckle contrast value may be distributed between 0 and 1.
  • the blood perfusion measure is determined based on the speckle contrast.
  • the blood perfusion measure may depend on the determined speckle contrast. If the speckle contrast changes, the blood perfusion measure derived from the speckle contrast may change accordingly.
  • a blood perfusion measure may be a single number or value that may represent a likelihood that the object is a living subject.
  • the complete pattern image may be used for determining the speckle contrast.
  • a section of the pattern image may be used.
  • the section of the pattern image preferably represents a smaller area of the pattern image than the area of the complete pattern image.
  • the section of the pattern image may be obtained by cropping the pattern image.
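The speckle-contrast computation described above can be sketched as follows. The definition K = σ/μ over an image region is standard for laser speckle analysis; the mapping from contrast to a perfusion proxy shown here (1/K² − 1) is one common flow-related quantity and is an assumption, not a formula stated by the patent.

```python
import numpy as np

def speckle_contrast(region: np.ndarray) -> float:
    """Speckle contrast K = sigma / mu of the intensity in a region.

    K approaches 1 for a fully developed static speckle pattern and
    drops toward 0 when motion (e.g. blood flow) blurs the speckles
    during the exposure.
    """
    mu = float(region.mean())
    return float(region.std()) / mu if mu > 0 else 0.0

def perfusion_measure(region: np.ndarray) -> float:
    """Hypothetical perfusion proxy: lower contrast -> higher perfusion."""
    k = speckle_contrast(region)
    return 1.0 / k**2 - 1.0 if k > 0 else float("inf")

# A cropped section of the pattern image may be used instead of the
# complete pattern image:
image = np.random.rand(64, 64)
section = image[16:48, 16:48]      # 32x32 region of interest
k = speckle_contrast(section)
```

A single perfusion value per image (or per section) can then serve as the likelihood-style measure, discussed above, that the imaged object is a living subject.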
  • the method comprises providing an indication that the at least one attribute of the virtual character is verified, based on the determining if the user corresponds to a living human based on the at least one pattern image.
  • the indication that the at least one attribute of the virtual character is verified may be provided by the device and/or system configured for verifying at least one attribute of the virtual character.
  • the indication that the at least one attribute of the virtual character is verified may be provided to the server of the virtual environment.
  • indication is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to a flag that is given to an arbitrary piece of information.
  • Providing the indication that the at least one attribute of the virtual character is verified may comprise making the indication perceptible for at least one further virtual character in the virtual environment in a manner that the further user is aware that the at least one attribute of the virtual character is verified.
  • the virtual character may be flagged by at least one of a visible, an auditive and/or a haptic confirmation sign.
  • the confirmation sign may be an arbitrary two-dimensional sign and/or the displaying of the at least one attribute on the virtual character.
  • making the indication perceptible for at least one further virtual character in the virtual environment in a manner that the further user is aware that the user is verified may comprise displaying the verified at least one attribute of the virtual character on the virtual character.
  • Providing the indication that the at least one attribute of the virtual character is verified may comprise making the indication perceptible for at least one further virtual character in the virtual environment in a manner that the further user is aware that the user is authenticated.
  • the method may further comprise a step of allowing the user to use and/or to control the virtual character in the virtual environment.
  • the user may be allowed to use and/or control the virtual character in the virtual environment in case of at least one of:
  • the method step of allowing the user to use and/or to control the virtual character may be performed by the server of the virtual environment.
  • the method may further comprise a step of declining the user from using and/or controlling the virtual character in the virtual environment.
  • the user may be declined from using and/or controlling the virtual character in the virtual environment in case of at least one of:
  • the method step of declining the user to use and/or to control the virtual character may be performed by the server of the virtual environment.
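The allow/decline decision described in the steps above can be sketched as a simple server-side check. All names and the data model are illustrative assumptions; the patent does not prescribe a concrete interface.

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    """Outcome of verifying an attribute of a virtual character.

    Hypothetical data model: one flag from the pattern-image liveness
    check and one from the flood-image identity check.
    """
    is_living_human: bool
    identity_verified: bool

def decide_access(result: VerificationResult) -> str:
    """Allow the user to use and/or control the virtual character only
    when both the liveness check and the identity check have passed;
    decline otherwise."""
    if result.is_living_human and result.identity_verified:
        return "allow"
    return "decline"
```

The server of the virtual environment could call such a function after receiving the indication from the verifying device and/or system.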
  • the method may comprise:
  • providing the indication that the at least one attribute of the virtual character is verified may further be based on the determining if the identity of the user corresponds to a verified identity.
  • the flood image may be provided by the image generation unit. Particularly for initiating providing of the flood image by the image generation unit, the method may comprise requesting the flood image from the image generation unit.
  • the device and/or system configured for verifying at least one attribute of the virtual character may receive the flood image.
  • Determining if the identity of the user corresponds to a verified identity based on the at least one flood image may be performed for authenticating the user.
  • the term “authenticating” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to verifying an identity of a user.
  • authentication may comprise distinguishing the user from other humans or objects.
  • the authentication may comprise verifying the identity of a respective user and/or assigning an identity to a user.
  • the authentication may comprise generating and/or providing identity information.
  • the identity information may be validated by the authentication.
  • the identity information may be and/or may comprise at least one identity token.
  • an image of a face recorded by the image generation unit may be verified to be an image of the user’s face.
  • the authenticating may be performed using at least one authentication process.
  • the authentication process may comprise a plurality of steps.
  • the authentication process may comprise performing at least one face detection.
  • the face detection step may comprise analyzing the flood image.
  • the authentication process may comprise identifying.
  • the identifying may comprise assigning an identity to a detected face and/or verifying an identity of the user.
  • the identifying may comprise performing a face verification of the imaged face to be the user’s face.
  • the identifying the user may comprise matching the flood image, e.g. showing a contour of parts of the user, in particular parts of the user’s face, with a template.
  • a similarity between at least one image feature vector obtained from the flood image and at least one template feature vector may be considered and/or evaluated.
  • the template vector may be obtained from a template image.
  • the template image may be generated in an enrollment process.
  • the term “enrollment process” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to at least one step of registering, particularly to a service.
  • the template image may be generated under secure conditions in a manner that it is guaranteed that the generated template image shows the user.
  • the enrollment process may comprise at least one step of: capturing the template image; recording personal data; selecting the at least one attribute; generating the virtual character.
  • the at least one template feature vector and/or the template image may be received, particularly from a providing server, particularly of the virtual environment, by using a connection interface, and/or
  • the at least one template feature vector may be stored, particularly on a memory, more particularly of the server and/or the device and/or system configured for verifying at least one attribute of the virtual character.
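The template-matching step described above (comparing an image feature vector obtained from the flood image against a stored template feature vector) can be sketched as follows. This is a minimal illustration, not the disclosed method: the cosine-similarity metric, the function names and the threshold of 0.8 are assumptions introduced here, and real feature vectors would come from a face-recognition model applied to the flood image and to the template image captured during enrollment.

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    """Similarity between an image feature vector and a template feature vector."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identity_is_verified(image_vec, template_vec, threshold: float = 0.8) -> bool:
    """Treat the identity as verified when the similarity between the flood-image
    feature vector and the enrolled template exceeds a (hypothetical) threshold."""
    return cosine_similarity(image_vec, template_vec) >= threshold
```

Here the feature vectors are treated as opaque numeric vectors; in practice the template vector would be stored on the memory or received from the providing server via the connection interface, as described above.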
  • the term “memory” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to at least one electronic storage space configured for storing data, instructions, and programs.
  • the stored data, instruction and/or programs may be forwarded for processing to a processor.
  • the memory may be or may comprise at least one of: a Random Access Memory; a Read-Only Memory; a Cache Memory; a Hard Disk Drive; a Solid State Drive; a Virtual Memory.
  • the method may comprise:
  • the term “flood illumination source” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to substantially continuous spatial illumination, in particular diffuse and/or uniform illumination.
  • the flood light has a wavelength in the infrared range, in particular in the near infrared range.
  • the flood illumination source may comprise at least one LED or at least one VCSEL, preferably a plurality of VCSELs.
  • the term “substantially continuous spatial illumination” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to uniform spatial illumination, wherein areas of non-uniformity are possible.
  • the area illuminated by the flood illumination source, e.g. covering a user, a portion of the user and/or a face of the user, may be contiguous. Power may be spread over the whole field of illumination.
  • illumination provided by the light pattern may comprise at least two contiguous areas, in particular a plurality of contiguous areas, and/or power may be concentrated in small (compared to the whole field of illumination) areas of the field of illumination.
  • the infrared flood illumination may be suitable for illuminating a contiguous area, in particular one contiguous area.
  • the infrared pattern illumination may be suitable for illuminating at least two contiguous areas.
  • the term “flood image” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to an image generated by the image generation unit while the illumination source is emitting infrared flood light, e.g. onto an object and/or a user.
  • the flood image may comprise an image showing a user, in particular the face of the user, while the user is being illuminated with the flood light.
  • the flood image may be generated by imaging and/or recording light reflected by an object and/or user which is illuminated by the flood light.
  • the flood image showing the user may comprise at least a portion of the flood light on at least a portion of the user.
  • the illumination by the flood illumination source and the imaging by using the optical sensor may be synchronized, e.g. by using at least one control unit of the device and/or system.
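The synchronization between illumination and imaging mentioned above can be sketched as a minimal control unit. The `illuminator` and `sensor` driver objects and their `on()`, `off()` and `expose()` methods are hypothetical interfaces invented for illustration only:

```python
class CaptureController:
    """Illustrative control unit that keeps the flood (or pattern) illumination
    switched on for exactly the duration of the sensor exposure."""

    def __init__(self, illuminator, sensor):
        self.illuminator = illuminator  # hypothetical illumination-source driver
        self.sensor = sensor            # hypothetical optical-sensor driver

    def capture_image(self, exposure_s: float = 0.01):
        # Switch the illumination on, expose, and always switch it off again,
        # so that illumination and imaging stay synchronized.
        self.illuminator.on()
        try:
            return self.sensor.expose(exposure_s)
        finally:
            self.illuminator.off()
```

The `try`/`finally` pattern guarantees the source is switched off even if the exposure fails, which is one simple way a control unit can enforce the synchronization.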
  • the term “image generation unit” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to at least one unit configured for capturing at least one image, particularly for generating image data.
  • the term “capturing” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to generating and/or determining and/or recording at least one image by using the image generation unit. Capturing may comprise recording a single image and/or a plurality of images such as a sequence of images.
  • capturing may comprise recording continuously a sequence of images such as a video or a movie.
  • the image generation may be initiated by a user action or may automatically be initiated, e.g. once the presence of at least one object or user within a field of view and/or within a predetermined sector of the field of view of the image generation unit is automatically detected.
  • the pattern illumination source may be covered at least partially by a transparent display and/or the image generation unit may be covered at least partially by a transparent display.
  • the flood illumination source may be covered at least partially by a transparent display.
  • the term “display” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to an arbitrary shaped device configured for displaying an item of information.
  • the item of information may be arbitrary information such as at least one image, at least one diagram, at least one histogram, at least one graphic, text, numbers, at least one sign, an operating menu, and the like.
  • the display may be or may comprise at least one screen.
  • the display may have an arbitrary shape, e.g. a rectangular shape.
  • the display may be a front display.
  • the term “at least partially transparent” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to a property of the display to allow light, in particular of a certain wavelength range, e.g. in the infrared spectral region, in particular in the near infrared spectral region, to pass at least partially through.
  • the display may be semitransparent in the near infrared region.
  • the display may have a transparency of 20 % to 50 % in the near infrared region.
  • the display may have a different transparency for other wavelength ranges.
  • the present invention may propose a device and/or system comprising the image generation unit and two illumination sources that can be placed behind the display of a device.
  • the transparent area(s) of the display can allow for operation of the device and/or system behind the display.
  • the display is an at least partially transparent display, as described above.
  • the display may have a reduced pixel density and/or a reduced pixel size and/or may comprise at least one transparent conducting path.
  • the transparent area(s) of the display may have a pixel density of 360-440 PPI (pixels per inch).
  • Other areas of the display, e.g. non-transparent areas, may have pixel densities higher than 400 PPI, e.g. a pixel density of 460-500 PPI.
  • a device and/or system particularly configured for verifying at least one attribute of the virtual character.
  • the device and/or system comprises:
  • - a memory storing instructions that, when executed by the processor, configure the device and/or system to perform the steps of any one of the methods as disclosed elsewhere herein.
  • the term “processor” as generally used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to an arbitrary logic circuitry configured for performing basic operations of a computer or system, and/or, generally, to a device which is configured for performing calculations or logic operations.
  • the processing unit may be configured for processing basic instructions that drive the computer or system.
  • the processing unit may comprise at least one arithmetic logic unit (ALU) , at least one floating-point unit (FPU) , such as a math co-processor or a numeric coprocessor, a plurality of registers, specifically registers configured for supplying operands to the ALU and storing results of operations, and a memory, such as an L1 and L2 cache memory.
  • the processing unit may be a multi-core processor.
  • the processing unit may be or may comprise a central processing unit (CPU) .
  • the processing unit may be or may comprise a microprocessor, thus specifically the processing unit’s elements may be contained in one single integrated circuitry (IC) chip.
  • the processing unit may be or may comprise one or more application-specific integrated circuits (ASICs) and/or one or more field-programmable gate arrays (FPGAs) or the like.
  • the processing unit specifically may be configured, such as by software programming, for performing one or more evaluation operations.
  • the device may be selected from the group consisting of: a television device; a game console; a personal computer; a mobile device, particularly a cell phone, and/or a smart phone, and/or a tablet computer, and/or a laptop, and/or a tablet, and/or a virtual reality device, and/or a wearable, such as a smart watch; or another type of portable computer.
  • the device and/or system further may comprise at least one of:
  • the term “connection interface” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
  • the term specifically may refer, without limitation, to an item or an element forming a boundary configured for transmitting information or data.
  • the connection interface may be configured for transmitting information from a computational device, e.g. a computer, such as to provide data, e.g. to another device.
  • the connection interface may be configured for transmitting information to a computational device, e.g. to a computer, such as to receive data.
  • the connection interface may specifically be configured for transmitting or exchanging data.
  • the connection interface may provide a data transfer connection, e.g. Bluetooth, NFC, or inductive coupling.
  • the connection interface may be or may comprise at least one port comprising one or more of a network or internet port, a USB-port, and a disk drive.
  • a computer program comprising instructions which, when the program is executed by a computer, cause the computer to perform the method as disclosed elsewhere herein.
  • the computer may be a device and/or a system as disclosed elsewhere herein.
  • the computer program may be stored on a computer-readable data carrier and/or on a computer-readable storage medium.
  • the computer program may be executed on at least one processor comprised by the device and/or system configured for verifying at least one attribute of the virtual character.
  • a non-transitory computer-readable storage medium including instructions that when executed by a computer, cause the computer to perform the method as disclosed elsewhere herein.
  • the computer may be a device and/or a system as disclosed elsewhere herein.
  • the “computer-readable storage medium” specifically may refer to non-transitory data storage means, such as a hardware storage medium having stored thereon computer-executable instructions.
  • the stored computer-executable instructions may be associated with the computer program.
  • the computer-readable data carrier or storage medium specifically may be or may comprise a storage medium such as a random-access memory (RAM) and/or a read-only memory (ROM) .
  • for a use of a pattern image for verifying an attribute of a virtual character in a virtual environment, reference may be made to any definition, Embodiment, claim and/or aspect as disclosed herein.
  • program code means in order to perform the method according to the present invention in one or more of the embodiments enclosed herein when the program is executed on a computer or computer network.
  • the program code means may be stored on a computer-readable data carrier and/or on a computer-readable storage medium.
  • a data carrier having a data structure stored thereon, which, after loading into a computer or computer network, such as into a working memory or main memory of the computer or computer network, may execute the method according to one or more of the embodiments disclosed herein.
  • a computer program product with program code means stored on a machine-readable carrier, in order to perform the method according to one or more of the embodiments disclosed herein, when the program is executed on a computer or computer network.
  • a computer program product refers to the program as a tradable product.
  • the product may generally exist in an arbitrary format, such as in a paper format, or on a computer-readable data carrier and/or on a computer-readable storage medium.
  • the computer program product may be distributed over a data network.
  • Non-transitory computer-readable medium including instructions that, when executed by one or more processors, cause the one or more processors to perform the method according to one or more of the embodiments disclosed herein.
  • modulated data signal which contains instructions readable by a computer system or computer network, for performing the method according to one or more of the embodiments disclosed herein.
  • one or more of the method steps or even all of the method steps of the method according to one or more of the embodiments disclosed herein may be performed by using a computer or computer network.
  • any of the method steps including provision and/or manipulation of data may be performed by using a computer or computer network.
  • these method steps may include any of the method steps, typically except for method steps requiring manual work, such as providing the samples and/or certain aspects of performing the actual measurements.
  • a computer or computer network comprising at least one processor, wherein the processor is adapted to perform the method according to one of the embodiments described in this description,
  • a data structure is stored on the storage medium and wherein the data structure is adapted to perform the method according to one of the embodiments described in this description after having been loaded into a main and/or working storage of a computer or of a computer network, and
  • program code means can be stored or are stored on a storage medium, for performing the method according to one of the embodiments described in this description, if the program code means are executed on a computer or on a computer network.
  • the terms “have” , “comprise” or “include” or any arbitrary grammatical variations thereof are used in a non-exclusive way. Thus, these terms may both refer to a situation in which, besides the feature introduced by these terms, no further features are present in the entity described in this context and to a situation in which one or more further features are present.
  • the expressions “A has B” , “A comprises B” and “A includes B” may both refer to a situation in which, besides B, no other element is present in A (i.e. a situation in which A solely and exclusively consists of B) and to a situation in which, besides B, one or more further elements are present in entity A, such as element C, elements C and D or even further elements.
  • the terms “at least one” , “one or more” or similar expressions indicating that a feature or element may be present once or more than once typically are used only once when introducing the respective feature or element. In most cases, when referring to the respective feature or element, the expressions “at least one” or “one or more” are not repeated, notwithstanding the fact that the respective feature or element may be present once or more than once.
  • the terms “preferably” , “more preferably” , “particularly” , “more particularly” , “specifically” , “more specifically” or similar terms are used in conjunction with optional features, without restricting alternative possibilities.
  • features introduced by these terms are optional features and are not intended to restrict the scope of the claims in any way.
  • the invention may, as the skilled person will recognize, be performed by using alternative features.
  • features introduced by “in an embodiment of the invention” or similar expressions are intended to be optional features, without any restriction regarding alternative embodiments of the invention, without any restrictions regarding the scope of the invention and without any restriction regarding the possibility of combining the features introduced in such way with other optional or non-optional features of the invention.
  • the present disclosure exhibits several advantages, such as a fast, reliable and secure verification of a virtual character that offers a counterfeit protection, particularly if the disclosed verification procedure is combined with an anti-spoofing procedure.
  • a real-name authentication may be provided, e.g. for business transactions.
  • a virtual character such as an avatar may be created with the real name of the user. This may be achieved in a separate enrollment process.
  • the facial data generated in the enrollment process may be stored for at least one subsequent security check.
  • the virtual character may be highlighted in the virtual environment, such as the metaverse, as a verified person. Control of the avatar may only be possible for a verified person.
  • the verification may comprise a spoof-proof facial authentication.
  • the face of the virtual character may be the real copy of the face of the user in case the user is authenticated.
  • the device used for the authentication may be used to authenticate the user by evaluating the template of the data of the virtual character without any correlation to real-life names. This may be used for interactions where a “real-person” check may be necessary and a unique feature like a real copy of the face and/or the fingerprint is shown, but where the real name should be hidden.
  • Embodiment 1 A method for verifying at least one attribute of a virtual character in a virtual environment associated with a user, the method comprising:
  • Embodiment 2 The method according to the preceding Embodiment, wherein the method is a computer-implemented method,
  • any one of the method steps are performed by using a device and/or system comprising at least one processor for executing the steps.
  • Embodiment 3 The method according to any one of the preceding Embodiments, wherein the method comprises:
  • Embodiment 4 The method according to the preceding Embodiment, wherein the pattern illumination source is covered at least partially by a transparent display and/or wherein the image generation unit is covered at least partially by a transparent display.
  • Embodiment 5 The method according to any one of the preceding Embodiments, wherein the material data comprises an item of information on the type of material of the user detected in the pattern image.
  • Embodiment 6 The method according to any one of the preceding Embodiments, wherein determining if the user corresponds to a living human based on the at least one pattern image comprises extracting material data from the at least one pattern image.
  • Embodiment 7 The method according to the preceding Embodiment, wherein, for extracting the material data from the at least one pattern image,
  • the pattern image is provided to a receiving model for determining the material data from the at least one pattern image, and/or
  • the material data is received from the providing model for determining the material data from the at least one pattern image.
  • Embodiment 8 The method according to the preceding Embodiment, wherein providing the at least one pattern image to a receiving model for determining the material data from the at least one pattern image comprises receiving the at least one pattern image by the receiving model for determining the material data from the at least one pattern image,
  • the at least one pattern image is received by at least one input layer comprised by the receiving model for determining the material data from the at least one pattern image.
  • Embodiment 9 The method according to any one of the two preceding Embodiments, wherein the model for determining the material data from the at least one pattern image comprises at least one mechanistic model for determining the material data from the at least one pattern image.
  • Embodiment 10 The method according to the preceding Embodiment, wherein the mechanistic model for determining the material data from the at least one pattern image is calibrated by using calibration data in a regression procedure, particularly wherein the calibration data comprises
  • Embodiment 11 The method according to any one of the four preceding Embodiments, wherein the model for determining the material data from the at least one pattern image comprises at least one data-driven model for determining the material data from the at least one pattern image.
  • Embodiment 12 The method according to the preceding Embodiment, wherein the at least one data-driven model for determining the material data from the at least one pattern image comprises at least one of:
  • the component comprised by the data-driven model for determining the material data is an encoder, specifically an auto-encoder.
  • Embodiment 13 The method according to any one of the two preceding Embodiments, wherein the data-driven model for determining the material data from the at least one pattern image is trained by using training data in a training procedure, particularly wherein the training data comprises
  • Embodiment 14 The method according to any one of the preceding Embodiments, wherein determining if the user corresponds to a living human based on the at least one pattern image comprises determining at least one blood perfusion measure.
  • Embodiment 15 The method according to the preceding Embodiment, wherein determining at least one blood perfusion measure comprises
  • Embodiment 16 The method according to any one of the preceding Embodiments, wherein the patterned infrared illumination is coherent patterned infrared illumination.
  • Embodiment 17 The method according to any one of the preceding Embodiments, wherein the infrared illumination is within a range between 750 nm and 1100 nm.
  • Embodiment 18 The method according to any one of the preceding Embodiments, wherein the method comprises
  • providing an indication on that the at least one attribute of the virtual character is verified is further based on the determining if the identity of the user corresponds to a verified identity.
  • Embodiment 19 The method according to any one of the preceding Embodiments, wherein the method comprises:
  • Embodiment 20 The method according to the preceding Embodiment, wherein the flood illumination source is covered at least partially by a transparent display.
  • Embodiment 21 The method according to the preceding Embodiment, wherein, for determining if the identity of the user corresponds to a verified identity based on the at least one flood image, a similarity between at least one image feature vector obtained from the flood image and at least one template feature vector is considered.
  • Embodiment 22 The method according to the preceding Embodiment, wherein the template vector is obtained from a template image.
  • Embodiment 23 The method according to the preceding Embodiment, wherein the template image is generated in an enrollment process.
  • Embodiment 24 The method according to the preceding Embodiment, wherein, for determining if the identity of the user corresponds to a verified identity,
  • the at least one template feature vector and/or the template image is received, particularly from a providing server by using a connection interface, and/or
  • the at least one template feature vector is stored, particularly on a memory.
  • Embodiment 25 The method according to any one of the preceding Embodiments, wherein the at least one attribute is at least one of:
  • identity of the virtual character is at least one of:
  • a unique identification code, specifically in form of a plain text, such as a human-readable text, or in form of a machine-readable code, such as a QR code.
  • Embodiment 26 The method according to any one of the preceding Embodiments, wherein providing the indication on that the at least one attribute of the virtual character is verified comprises making the indication perceptible for at least one further virtual character in the virtual environment in a manner that the further user is aware that the at least one attribute of the virtual character is verified.
  • Embodiment 27 The method according to the preceding Embodiment, wherein making the indication perceptible for at least one further virtual character in the virtual environment in a manner that the further user is aware that the user is verified comprises displaying the verified at least one attribute of the virtual character on the virtual character.
  • Embodiment 28 The method according to any one of the preceding Embodiments, wherein the at least one attribute of the virtual character corresponds to at least one attribute of the user in a manner that the user has the same at least one attribute in the real world.
  • Embodiment 29 A device and/or system comprising
  • Embodiment 30 The device and/or system according to the preceding Embodiment, wherein the device and/or system further comprises at least one of:
  • Embodiment 31 A computer program comprising instructions which, when the program is executed by a computer, cause the computer to perform the method according to any one of the method Embodiments.
  • Embodiment 32 The computer program according to the preceding Embodiment, wherein the computer is a device and/or a system according to any one of the preceding Embodiments referring to a device and/or a system.
  • Embodiment 33 A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a computer, cause the computer to perform the method according to any one of the method Embodiments.
  • Embodiment 34 The non-transitory computer-readable storage medium according to the preceding Embodiment, wherein the computer is a device and/or a system according to any one of the preceding Embodiments referring to a device and/or a system.
  • Embodiment 35 Use of a pattern image for verifying an attribute of a virtual character in a virtual environment.
  • Figure 1 shows an exemplary method for verifying at least one attribute of a virtual character in a virtual environment associated with a user
  • Figure 2 shows an exemplary device and/or system.
  • Figure 1 shows an exemplary method 110 for verifying at least one attribute of a virtual character in a virtual environment associated with a user.
  • the at least one attribute may be at least a portion of an appearance of the virtual character, specifically wherein the at least a portion of an appearance is at least one of: a face of the virtual character, a fingerprint of the virtual character.
  • the at least one attribute may be personal data of the virtual character, specifically wherein the personal data of the virtual character is at least one of: a name of the virtual character; a unique identification code, specifically in form of a plain text, such as a human-readable text, or in form of a machine-readable code, such as a QR code.
  • the at least one attribute of the virtual character may correspond to at least one attribute of the user.
  • the method comprises:
  • step 114 receiving at least one pattern image showing the user associated with the virtual character while the user is being illuminated by patterned infrared illumination
  • step 118 providing an indication on that the at least one attribute of the virtual character is verified based on the determining if the user corresponds to a living human based on the at least one pattern image.
  • Providing the indication, in step 118, on that the at least one attribute of the virtual character is verified may comprise making the indication perceptible for at least one further virtual character in the virtual environment in a manner that the further user is aware that the at least one attribute of the virtual character is verified.
  • Making the indication perceptible for at least one further virtual character in the virtual environment in a manner that the further user is aware that the user is verified may comprise displaying the verified at least one attribute of the virtual character on the virtual character.
  • Providing the indication, in step 118, on that the at least one attribute of the virtual character is verified may comprise making the indication perceptible for at least one further virtual character in the virtual environment in a manner that the further user is aware that the user is authenticated.
  • the method 110 may be a computer-implemented method. Alternatively or in addition, at least one of the method steps, preferably any one of the method steps may be performed by using a device 120 and/or system 122 comprising at least one processor 124 for executing the steps.
  • the method 110 may comprise illuminating the user, in a step 126, with at least one infrared light pattern, particularly by using at least one pattern illumination source 128.
  • the method may comprise capturing the at least one pattern image, in a step 130, while the user is being illuminated by patterned infrared illumination, particularly by using an image generation unit 132.
  • in this way, the at least one pattern image received in step 114 may be generated for being provided.
  • determining if the user corresponds to a living human based on the at least one pattern image may comprise extracting material data from the at least one pattern image.
  • the material data may comprise an item of information on the type of material of the user detected in the pattern image.
  • the pattern image may be provided, in a step 136, to a receiving model for determining the material data from the at least one pattern image.
  • the material data may be received, in a step 138, from the providing model for determining the material data from the at least one pattern image.
  • Providing the at least one pattern image, in the step 136, to a receiving model for determining the material data from the at least one pattern image may comprise receiving the at least one pattern image by the receiving model for determining the material data from the at least one pattern image.
  • the at least one pattern image may be received by at least one input layer comprised by the receiving model for determining the material data from the at least one pattern image.
  • the model for determining the material data from the at least one pattern image may comprise at least one mechanistic model for determining the material data from the at least one pattern image.
  • the mechanistic model for determining the material data from the at least one pattern image may be calibrated by using calibration data in a regression procedure.
  • the calibration data may comprise
  • the model for determining the material data from the at least one pattern image comprises at least one data-driven model for determining the material data from the at least one pattern image.
  • the at least one data-driven model for determining the material data from the at least one pattern image may comprise at least one of:
  • the component comprised by the data-driven model for determining the material data is an encoder, specifically an auto-encoder.
  • the data-driven model for determining the material data from the at least one pattern image may be trained by using training data in a training procedure, particularly wherein the training data comprises
  • determining if the user corresponds to a living human based on the at least one pattern image may comprise determining at least one blood perfusion measure, in a step 140. Determining the at least one blood perfusion measure, in the step 140, may comprise determining at least one speckle contrast of the pattern image. Alternatively or in addition, determining the at least one blood perfusion measure, in the step 140, may comprise determining a blood perfusion measure based on the determined at least one speckle contrast.
  • the method 110 may comprise
  • receiving, in a step 142, at least one flood image showing the user associated with the virtual character while the user is being illuminated by infrared flood light;
  • providing the indication that the at least one attribute of the virtual character is verified may further be based on the determining if the identity of the user corresponds to a verified identity.
  • the method may comprise:
  • illuminating the user, in a step 146, with infrared flood light, particularly by using at least one flood illumination source 148;
  • the at least one flood image that is received in step 142 may be generated in order to be provided.
  • a similarity between at least one image feature vector obtained from the flood image and at least one template feature vector may be considered, in a step 152.
  • the template vector may be obtained from a template image.
  • the template image may be generated in an enrollment process. Performing the enrollment process may be comprised by the method in a step 154.
  • in a step 144, for determining if the identity of the user corresponds to a verified identity,
  • the at least one template feature vector and/or the template image may be received, in a step 156, particularly from a providing server by using a connection interface, and/or
  • the at least one template feature vector may be stored, in a step 158, particularly on a memory 160.
  • the method may further comprise a step 162 of allowing the user to use and/or to control the virtual character in the virtual environment.
  • the user may be allowed to use and/or control the virtual character in the virtual environment in case of at least one of:
  • the method step of allowing the user to use and/or to control the virtual character may be performed by the server of the virtual environment.
  • the method may further comprise a step 164 of declining to allow the user to use and/or to control the virtual character in the virtual environment.
  • the user may be declined the use and/or the control of the virtual character in the virtual environment in case of at least one of:
  • a device and/or system comprises:
  • a memory 160 storing instructions that, when executed by the at least one processor 124, configure the device and/or system to perform the steps of any one of the methods of the method claims.
  • the device and/or system further may comprise at least one of:
  • a connection interface 166.
  • the pattern illumination source 128 may be covered at least partially by a transparent display 134 and/or the image generation unit 132 may be covered at least partially by a transparent display 134.
  • the flood illumination source 148 may be covered at least partially by a transparent display 134.
  • a computer program comprising instructions which, when the program is executed by a computer, cause the computer to perform the method according to any one of the method claims.
  • the computer may be a device and/or a system as disclosed elsewhere herein.
  • a non-transitory computer-readable storage medium including instructions that, when executed by a computer, cause the computer to perform the method according to any one of the method claims.
  • the computer may be a device and/or a system as disclosed elsewhere herein.
  • List of reference numbers
110 method for verifying at least one attribute of a virtual character
112 receiving a request to verify
114 receiving at least one pattern image
116 determining if the user corresponds to a living human
118 providing an indication that the at least one attribute is verified
120 device
122 system
124 processor
126 illuminating the user
128 pattern illumination source
130 capturing the at least one pattern image
132 image generation unit
134 display
136 providing the pattern image
138 receiving the material data
140 determining at least one blood perfusion measure
142 receiving at least one flood image
144 determining the identity of the user
146 illuminating the user
148 flood illumination source
150 capturing the at least one flood image
152 considering a similarity between feature vectors
154 performing enrollment process
156 receiving one or more feature vectors
158 storing a template feature vector
160 memory
162 allowing the user to use and/or to control the virtual character
164 declining the user to use and/or to control the virtual character
166 connection interface
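The similarity consideration of step 152 (comparing an image feature vector from the flood image against a template feature vector) can be illustrated with a small sketch. The cosine metric, the vector contents and the 0.8 threshold are assumptions for illustration; the disclosure does not prescribe a particular similarity measure or threshold:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identity_matches(image_vector, template_vector, threshold=0.8):
    """Accept the identity if the similarity exceeds a chosen threshold."""
    return cosine_similarity(image_vector, template_vector) >= threshold
```

In this sketch, the template vector would come from the enrollment process (step 154) and the image vector from the flood image received in step 142.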

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method (110) for verifying at least one attribute of a virtual character in a virtual environment associated with a user, the method (110) comprising: -receiving a request (112) to verify at least one attribute of the virtual character, -receiving at least one pattern image (114) showing the user associated with the virtual character while the user is being illuminated by patterned infrared illumination, -determining if the user corresponds to a living human (116) based on the at least one pattern image, -providing an indication (118) that the at least one attribute of the virtual character is verified based on the determining if the user corresponds to a living human based on the at least one pattern image.

Description

AUTHENTICATION OF USER IN THE METAVERSE

Field of the invention
The invention relates to a method for verifying at least one attribute of a virtual character in a virtual environment. The present invention further relates to a device and/or a system, a computer program and a non-transitory computer-readable medium.
The devices, methods and uses according to the present invention specifically may be employed for example in various areas of daily life, security technology, gaming, traffic technology, production technology, photography such as digital photography or video photography for arts, documentation or technical purposes, safety technology, information technology, agriculture, crop protection, maintenance, cosmetics, medical technology or in sciences. However, other applications are also possible.
Prior art
Many daily interactions seem to shift from the real world into virtual environments, e.g. the metaverse. This may apply to both social and business interactions. In virtual environments, a real-life user is typically represented by a virtual character, e.g. an avatar. Such virtual characters are typically easy to create and, thereby, it is easy to fake being a specific real-life user, particularly if no verification is employed.
Problem addressed by the invention
It is therefore an object of the present invention to provide methods, devices and systems addressing the above-mentioned technical challenges of known devices and methods. Specifically, it is an object of the present invention to provide devices and methods which allow a fast, reliable and secure verification of a virtual character and offer counterfeit protection.
Summary of the invention
This problem is solved by the invention with the features of the independent patent claims. Advantageous developments of the invention, which can be realized individually or in  combination, are presented in the dependent claims and/or in the following specification and detailed embodiments.
In a first aspect of the present invention, a method for verifying at least one attribute of a virtual character in a virtual environment associated with a user is disclosed. The method comprises method steps, which may be performed in the given order. A different order, however, may also be feasible. Further, two or more of the method steps may be performed simultaneously; thereby, the method steps may at least partly overlap in time. Further, one or more or even all of the method steps may be performed once or repeatedly. The method may comprise additional method steps, which are not listed herein.
The method may be a computer-implemented method. Alternatively or in addition, at least one of the method steps, preferably any one of the method steps, may be performed by using a device and/or system comprising at least one processor for executing the steps. The term “computer-implemented method” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a method which involves at least one apparatus, specifically a computer, or a plurality of apparatus, particularly connected via a computer network. The plurality of apparatus may be connected, particularly for transmitting data, via a network by using at least one connection interface at any one of the apparatuses of the plurality of apparatus. The computer-implemented method may be implemented as at least one computer program that may be provided on a storage medium carrying the computer program, whereby at least one of the steps of the computer-implemented method, preferably any one of the steps, is performed by using the at least one computer program. Alternatively, the at least one computer program may be accessible by an apparatus which may be adapted for performing the method via a network, such as via an in-house network, via the internet, or via a cloud. With particular regard to the present invention, the present method can, thus, be performed on a programmable apparatus, which is configured for this purpose, such as by providing a computer program, which is configured for such a purpose.
The method comprises:
- receiving a request to verify at least one attribute of the virtual character,
- receiving at least one pattern image showing the user associated with the virtual character while the user is being illuminated by patterned infrared illumination,
- determining if the user corresponds to a living human based on the at least one pattern image,
- providing an indication that the at least one attribute of the virtual character is verified based on the determining if the user corresponds to a living human based on the at least one pattern image.
For this aspect, reference may be made to any definition, Embodiment, claim and/or aspect as disclosed herein.
As already disclosed, the method comprises receiving a request to verify at least one attribute of the virtual character.
The term “requesting” , or any grammatical variation thereof, as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a first unit querying an item, such as data, from a second unit. The requesting specifically may take place electronically, e.g. by sending an electronic request, such as via at least one telecommunications network. The request may be provided by a providing server of the virtual environment. The request may be received by a device and/or system configured for verifying at least one attribute of the virtual character.
The term “receiving” , or any grammatical variation thereof, as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to, e.g. to a device and/or entity, getting the received item, specifically by using a connection interface. The item may be provided by a further device and/or entity.
The term “virtual character" as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to at least one virtual representation of a user, particularly in a virtual environment. The virtual character may be a two-dimensional model and/or a three-dimensional model, such as an “avatar” . The virtual  character may allow the user to interact with a virtual environment and/or at least one further virtual character of a further user.
The term “virtual environment" as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a computer program configured for allowing at least one user to interact in a computing environment. In the computing environment, at least one virtual character of the user and/or at least one virtual character of the further user may be present. The interaction of the user and the further user may be performed via the at least one virtual character of the user and the at least one virtual character of the further user. The virtual environment may be a virtual world and/or a metaverse. A “virtual world” may be a computer-simulated environment in which a plurality of users can interact with each other and the at least one surrounding object via virtual characters. The term "metaverse" may refer to a collective virtual space, which comprises a plurality of virtual worlds.
The term “attribute" as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a characteristic property of the virtual character. The attribute may comprise at least one recognition value configured for allowing to recognize the virtual character of the user and/or to distinguish the virtual character of the user from a virtual character of a further user. The at least one attribute of the virtual character of the user may be verified in a manner that by identifying the virtual character of the user the user may be recognized.
The at least one attribute may be at least a portion of an appearance of the virtual character. The at least a portion of an appearance may be a face of the virtual character or a fingerprint of the virtual character. Alternatively or in addition, the at least one attribute may be at least one item of personal data of the virtual character. The term “personal data” refers to at least one item of information allowing for identifying the virtual character of the user. The identity of the virtual character may be a name of the virtual character and/or a unique identification code, specifically in form of a plain text, such as a human-readable text, or in form of a machine-readable code, such as a QR code. The identification code may be a passport number and/or another physical or digital ID number.
With particular regard to the present invention, the at least one attribute of the virtual character may correspond to at least one attribute of the user. For corresponding to the at least one attribute of the user, the at least one attribute of the virtual character and the at least one attribute of the user may be the same. The user may have the same at least one attribute in the real world. The “real world” may be the physical reality in which the user is living. Exemplarily, the user and the virtual character may have the same name. As a further example, the user and the virtual character may have the same face. Thereby the identification of the user that is controlling the virtual character may be particularly easy.
The term "verifying" as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a confirmation of the correctness of something by checking it. With particular regard to the present invention, verifying of the at least one attribute of the virtual character may be performed for confirming that the at least one attribute of the virtual character allows for the recognition of the correct user in the virtual environment. Particularly in this way, at least one further user may be prevented from obtaining the at least one attribute of the virtual character of the user and falsely pretending to be the user. Verifying may comprise at least one of:
- determining if the user corresponds to a living human based on the at least one pattern image, particularly in an authentication process, and
- determining if the identity of the user corresponds to a verified identity, particularly in a validation process.
As already disclosed, the method comprises receiving at least one pattern image showing the user associated with the virtual character while the user is being illuminated by patterned infrared illumination. The pattern image may be provided by an image generation unit. Particularly for initiating providing of the pattern image by the image generation unit, the method may comprise requesting the pattern image from the image generation unit. The device  and/or system configured for verifying at least one attribute of the virtual character may receive the pattern image.
The method may comprise illuminating the user with at least one infrared light pattern, particularly by using at least one pattern illumination source. Alternatively or in addition, the method may comprise capturing the at least one pattern image while the user is being illuminated by patterned infrared illumination, particularly by using an image generation unit.
The term “associated with" as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a term used to indicate a relationship between at least two arbitrary elements. A first element may be associated with a second element in a manner that said first element is as-signed to said second element. The relationship between the first and second elements, as an example, may be at least one of: a relationship by predefinition, a relationship by at least one property the first and second elements have in common, a relationship due to identical or similar causes or origins of the first and second elements, a relationship due to the first element causing or evoking the second element, a relationship due to the second element causing or evoking the first element. For the user to be associated with the virtual character, the virtual character may be of and/or assigned to the user.
The term “pattern image” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an image generated by the image generation unit while illuminating with the infrared light pattern, e.g. on an object and/or a user. The pattern image may comprise an image showing a user, in particular at least parts of the face of the user, while the user is being illuminated with the infrared light pattern, particularly on a respective area of interest comprised by the image. The pattern image may be generated by imaging and/or recording light reflected by an object and/or user, which is illuminated by the infrared light pattern. The pattern image showing the user may comprise at least a portion of the illuminated infrared light pattern on at least a portion of the user. For example, the illumination by the pattern illumination source and the imaging by the image generation unit may be synchronized, e.g. by using at least one control unit of the device and/or system.
The term “illuminate” , as used herein, is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to the process of exposing at least one element to light. The term “illumination source” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an arbitrary device configured for generating or providing light in the sense of the above-mentioned definition.
The term “pattern illumination source” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an arbitrary device configured for generating or providing at least one light pattern, in particular at least one infrared light pattern. The term “light pattern” , also referred to as “patterned infrared illumination” , as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to at least one arbitrary pattern comprising a plurality of light spots. The light spot may be at least partially spatially extended. At least one spot or any spot may have an arbitrary shape. In some cases, a circular shape of at least one spot or any spot may be preferred. The spots may be arranged by considering a structure of a display. Typically, an arrangement of an OLED-pixel-structure of the display may be considered. The term “infrared light pattern” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a light pattern comprising spots in the infrared spectral range. The infrared light pattern may be a near infrared light pattern.
The method comprises determining if the user corresponds to a living human based on the at least one pattern image, particularly in an authentication process. “Determining” may be or may comprise evaluating. “Based on” may be or may comprise considering.
The term “living human” , as used herein, is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special  or customized meaning. The term specifically may refer, without limitation, to an individual of the species Homo sapiens, wherein the individual is currently alive.
Determining if the user corresponds to a living human based on the at least one pattern image may comprise extracting material data from the at least one pattern image, particularly in an authentication step. Particularly thereby, it may be determined that the user is a human. The material data may comprise an item of information on the type of material of the user detected in the pattern image. Extracting material data from the pattern image may be or may comprise generating the material type and/or data derived from the material type.
Determining if the user corresponds to a living human based on the at least one pattern image may comprise determining if the extracted material data corresponds to desired material data. Desired material data may refer to predetermined material data; in an example, the desired material data may be skin. Determining if the material data corresponds to the desired material data may comprise comparing the material data with the desired material data. Comparing the material data with the desired material data may comprise determining a similarity of the extracted material data and the desired material data. In the example, skin as desired material data may be compared with non-skin material, such as silicone, as extracted material data, and the result may be a declination, since silicone or other non-skin material differs from skin.
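The comparison of extracted material data against desired material data can be sketched as follows. The score-based representation of the material data and the 0.9 threshold are hypothetical choices for illustration; the disclosure leaves the concrete form of the material data open:

```python
def material_matches(material_scores, desired="skin", threshold=0.9):
    """Check whether the extracted material data corresponds to the
    desired material data.

    material_scores: mapping from material label to a confidence in [0, 1],
    e.g. as produced by a material-classification model (hypothetical form).
    Returns True (acceptance) when the desired material is detected with
    sufficient confidence, False (declination) otherwise.
    """
    return material_scores.get(desired, 0.0) >= threshold
```

With this sketch, real skin detected with high confidence would be accepted, while a silicone mask classified as non-skin material would be declined.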
For extracting the material data from the at least one pattern image, the pattern image may be provided to a model for determining the material data from the at least one pattern image. Alternatively or in addition, for extracting the material data from the at least one pattern image, the material data may be received from the model for determining the material data from the at least one pattern image.
The term “providing” , or any grammatical variation thereof, as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to making the provided item available, e.g. to a device and/or entity, specifically by using a connection interface. The item may be provided by a further device and/or entity. For  providing the item, the device and/or entity may request the item. The request may be received by the further device and/or entity.
Providing the at least one pattern image to the model for determining the material data from the at least one pattern image may comprise receiving the at least one pattern image by the model for determining the material data from the at least one pattern image. Alternatively or in addition, the at least one pattern image may be received by at least one input layer comprised by the model for determining the material data from the at least one pattern image.
The model for determining the material data from the at least one pattern image may comprise at least one mechanistic model for determining the material data from the at least one pattern image.
The term “mechanistic model” , also referred to as “deterministic model” , as used herein, is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a model that reflects physical phenomena in mathematical form, e.g., including first-principles models. A mechanistic model may comprise a set of equations that describe an interaction between the material and the patterned electromagnetic radiation thereby resulting in a condition measure, a vital sign measure or the like.
The mechanistic model for determining the material data from the at least one pattern image may be calibrated by using calibration data in a regression procedure.
The term “regression procedure” as used herein, is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a statistical method employed for establishing a relationship between a dependent variable, such as the material data, and one or more independent variables, such as the pattern image. The goal of regression is to predict the value of the dependent variable based on the input values of the independent variables. The term “calibration” as used herein, is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation,  to the process of optimizing at least one parameter of the mechanistic model in a manner that the dependent variable may be evaluated more precisely.
The calibration data may comprise:
- at least one pattern image showing the user;
- at least one known material data of the user in the at least one pattern image.
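The calibration of a mechanistic model by regression, as outlined above, can be sketched with a linear least-squares fit. The scalar image feature, the numeric values and the linear model form are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

# Calibration pairs: a scalar feature extracted from pattern images
# (e.g. a normalized spot-intensity ratio -- hypothetical) and the known
# material measure from the calibration data.
features = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
known_material_measure = np.array([0.21, 0.52, 0.79, 1.11, 1.40])

# Least-squares regression: find slope a and intercept b of
# measure = a * feature + b, i.e. optimize the model parameters.
A = np.vstack([features, np.ones_like(features)]).T
(a, b), *_ = np.linalg.lstsq(A, known_material_measure, rcond=None)

def calibrated_model(feature):
    """Mechanistic model with parameters calibrated by regression."""
    return a * feature + b
```

The same scheme extends to nonlinear mechanistic models, where the regression optimizes the model parameters instead of a slope and intercept.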
The model for determining the material data from the at least one pattern image may comprise at least one data-driven model for determining the material data from the at least one pattern image.
The term “data-driven model” , as used herein, is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a classification model comprising at least one machine-learning architecture and a plurality of model parameters. The data-driven model may be parametrized in a training by using training data. The training may be a process of finding a best parameter combination of the plurality of model parameters. The training is carried out to improve the capability of the machine learning algorithm to obtain a representative result, such as the material data, by evaluating an input, such as the pattern image.
The training data may comprise one or more training data sets. The one or more data sets may each comprise input data and a known representative result derivable by evaluating the input data by using the data-driven model. The data-driven model for determining the material data from the at least one pattern image may be trained by using training data in a training procedure, particularly wherein the training data comprises a plurality of training data sets, each comprising:
- at least one pattern image showing the user;
- at least one known material data of the user in the at least one pattern image.
Retraining may be included when referring to training herein.
The at least one data-driven model for determining the material data from the at least one pattern image may comprise at least one of:
- at least one convolutional neural network;
- at least one component of an encoder-decoder structure, particularly wherein the component comprised by the data-driven model for determining the material data is an encoder, specifically an auto-encoder.
The “convolutional neural network” may comprise at least one convolutional layer and/or at least one pooling layer. Typically, convolutional neural networks may reduce the dimensionality of a partial image and/or an image by applying a convolution, e.g. based on a convolutional layer, and/or by pooling. Applying a convolution may be suitable for selecting features related to material information of the pattern image.
Typically, the “encoder-decoder structure” refers to a machine-learning architecture used in sequence-to-sequence tasks. The “encoder” may be used for processing the input data and transforming it into a context vector representation. The context vector is fed into a “decoder” . The decoder generates an output.
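The dimensionality reduction by convolution and pooling described above can be illustrated in plain Python (a real model would use a deep-learning framework; the kernel here is a hypothetical stand-in for learned weights, and, as is conventional in CNN implementations, the "convolution" is computed as a cross-correlation):

```python
def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation: output shrinks by kernel size - 1."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def max_pool2(fmap):
    """2x2 max pooling halves each spatial dimension (for even sizes)."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# A 6x6 "pattern image" reduces to 4x4 after convolution, then 2x2 after pooling.
img = [[float(i + j) for j in range(6)] for i in range(6)]
kernel = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]  # center tap only
feature_map = max_pool2(conv2d_valid(img, kernel))
```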
Determining if the user corresponds to a living human based on the at least one pattern image may comprise determining at least one blood perfusion measure. In particular, it may thereby be determined that the human is living.
The term “blood perfusion measure" as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a blood volume flow through a given volume or mass of tissue. Typically, the blood perfusion measure may be given in units of ml/ml/s or ml/100 g/min. The blood perfusion measure may represent a local blood flow through the at least one capillary network and one or more extracellular spaces in a body tissue.
Determining the at least one blood perfusion measure may comprise determining at least one speckle contrast of the pattern image. Alternatively or in addition, determining the at least one blood perfusion measure may comprise deriving the blood perfusion measure from the determined at least one speckle contrast.
The term “speckle contrast” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a degree of a variation in a speckle pattern generated by coherent light. The speckle pattern may be generated by the pattern illumination source, particularly on the user, more particularly by flood light and/or pattern light scattered on the user. A speckle contrast may represent a measure for a mean contrast of an intensity distribution within an area of a speckle pattern. In particular, a speckle contrast K over an area of the speckle pattern may be expressed as a ratio of the standard deviation σ to the mean speckle intensity <I>, i.e., K = σ/<I>. A speckle contrast may equal or may comprise a speckle contrast value. Typically, a speckle contrast value may be distributed between 0 and 1. The blood perfusion measure is determined based on the speckle contrast and may thus depend on the determined speckle contrast. If the speckle contrast changes, the blood perfusion measure derived from the speckle contrast may change accordingly. A blood perfusion measure may be a single number or value that may represent a likelihood that the object is a living subject. Preferably, for determining the speckle contrast, the complete pattern image may be used. Alternatively, for determining the speckle contrast, a section of the pattern image may be used. The section of the pattern image preferably represents a smaller area than the complete pattern image. The section of the pattern image may be obtained by cropping the pattern image.
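A minimal sketch of the speckle contrast K = σ/<I> over a (flattened) image section, together with one common simplified relation from laser speckle contrast analysis in which perfusion scales with 1/(T·K²); the proportionality constant is omitted and the sample intensities are hypothetical:

```python
from math import sqrt

def speckle_contrast(intensities):
    """K = sigma / <I> over a flattened region of the pattern image."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((v - mean) ** 2 for v in intensities) / n
    return sqrt(var) / mean

def perfusion_index(contrast, exposure_time=1.0):
    """Simplified laser-speckle relation: perfusion ~ 1 / (T * K^2).

    Lower contrast (more blurring by moving blood cells) maps to higher
    perfusion; the proportionality constant is deliberately omitted.
    """
    return 1.0 / (exposure_time * contrast * contrast)

# Hypothetical pixel intensities from a cropped section of a pattern image:
K = speckle_contrast([1.0, 3.0, 1.0, 3.0])
perfusion = perfusion_index(K)
```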
The method comprises providing an indication that the at least one attribute of the virtual character is verified, based on the determining if the user corresponds to a living human based on the at least one pattern image. The indication that the at least one attribute of the virtual character is verified may be provided by the device and/or system configured for verifying at least one attribute of the virtual character. The indication that the at least one attribute of the virtual character is verified may be provided to the server of the virtual environment.
The term “indication" as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a flag that is given to an arbitrary piece of information.
Providing the indication that the at least one attribute of the virtual character is verified may comprise making the indication perceptible to at least one further virtual character in the virtual environment in a manner that the further user is aware that the at least one attribute of the virtual character is verified.
For making the indication perceptible, the virtual character may be flagged by at least one visible, auditory and/or haptic confirmation sign. The confirmation sign may be an arbitrary two-dimensional sign and/or a display of the at least one attribute on the virtual character.
Consequently, making the indication perceptible to at least one further virtual character in the virtual environment in a manner that the further user is aware that the user is verified may comprise displaying the verified at least one attribute of the virtual character on the virtual character.
Providing the indication that the at least one attribute of the virtual character is verified may comprise making the indication perceptible to at least one further virtual character in the virtual environment in a manner that the further user is aware that the user is authenticated.
Particularly when the indication is provided, the method may further comprise a step of allowing the user to use and/or to control the virtual character in the virtual environment. The user may be allowed to use and/or control the virtual character in the virtual environment in case at least one of the following applies:
- it is determined that the user corresponds to a living human, and
- it is determined that the identity of the user corresponds to a verified identity.
The method step of allowing the user to use and/or to control the virtual character may be performed by the server of the virtual environment.
Particularly when the indication is not provided, the method may further comprise a step of declining the user the use and/or the control of the virtual character in the virtual environment. The user may be declined the use and/or control of the virtual character in the virtual environment in case at least one of the following applies:
- it is determined that the user does not correspond to a living human, and
- it is determined that the identity of the user does not correspond to a verified identity.

The method step of declining the user the use and/or the control of the virtual character may be performed by the server of the virtual environment.
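The allow/decline logic above can be sketched as a single decision function; the dictionary keys and the `require_both` switch are illustrative additions, not part of the described method:

```python
def verification_decision(is_living, identity_verified, require_both=False):
    """Combine the liveness and identity checks into one decision.

    The text gates control on 'at least one of' the checks; a stricter
    deployment could require both, hence the hypothetical
    `require_both` switch.
    """
    checks = (is_living, identity_verified)
    granted = all(checks) if require_both else any(checks)
    return {
        "indication_provided": granted,
        "control_allowed": granted,
        "checks": {"living_human": is_living,
                   "verified_identity": identity_verified},
    }
```

With the default setting, a user who passes only the liveness check is still allowed to control the virtual character; with `require_both=True`, both checks must succeed.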
The method may comprise:
- receiving at least one flood image showing the user associated with the virtual character while the user is being illuminated by infrared flood light;
- determining if the identity of the user corresponds to a verified identity based on the at least one flood image, particularly in an authentication process,
wherein providing an indication that the at least one attribute of the virtual character is verified is further based on the determining if the identity of the user corresponds to a verified identity.
The flood image may be provided by the image generation unit. In particular, for initiating the provision of the flood image, the method may comprise requesting the flood image from the image generation unit. The device and/or system configured for verifying at least one attribute of the virtual character may receive the flood image.
Determining if the identity of the user corresponds to a verified identity based on the at least one flood image may be performed for authenticating the user. The term “authenticating” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to verifying an identity of a user. Specifically, authentication may comprise distinguishing the user from other humans or objects. The authentication may comprise verifying the identity of a respective user and/or assigning an identity to a user. The authentication may comprise generating and/or providing identity information. The identity information may be validated by the authentication. For example, the identity information may be and/or may comprise at least one identity token. In case of successful authentication, an image of a face recorded by the image generation unit may be verified to be an image of the user’s face.
The authenticating may be performed using at least one authentication process. The authentication process may comprise a plurality of steps. For example, the authentication process may comprise performing at least one face detection. The face detection step may comprise analyzing the flood image. In addition, for example, the authentication process may comprise identifying. The identifying may comprise assigning an identity to a detected face and/or verifying an identity of the user. The identifying may comprise performing a face verification of the imaged face to be the user’s face. The identifying of the user may comprise matching the flood image, e.g. showing a contour of parts of the user, in particular parts of the user’s face, with a template. For matching the flood image with a template, a similarity between at least one image feature vector obtained from the flood image and at least one template feature vector may be considered and/or evaluated. The template feature vector may be obtained from a template image.
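The template-matching step may, for example, use a cosine similarity between the image feature vector and the template feature vector; the threshold of 0.8 is an arbitrary illustrative value, not taken from the description:

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def matches_template(image_vec, template_vec, threshold=0.8):
    """Identity counts as verified when the similarity between the image
    feature vector and the enrolled template feature vector exceeds a
    threshold (a hypothetical decision rule for illustration)."""
    return cosine_similarity(image_vec, template_vec) >= threshold
```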
The template image may be generated in an enrollment process. The term “enrollment process" as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to at least one step of registering, particularly to a service. In the enrollment process, the template image may be generated under secure conditions in a manner that it is guaranteed that the generated template image shows the user. The enrollment process may comprise at least one step of: capturing the template image; recording personal data; selecting the at least one attribute; generating the virtual character.
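The four enrollment steps listed above might be captured in a record such as the following sketch, in which the field names and the identifier scheme are hypothetical:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class EnrollmentRecord:
    """Hypothetical record produced by the enrollment process; the
    fields mirror the four steps listed in the text."""
    template_image: bytes
    personal_data: dict
    selected_attributes: list = field(default_factory=list)
    virtual_character_id: str = ""

def enroll(template_image, personal_data, attributes):
    # A real system would perform this under the secure conditions the
    # text requires, guaranteeing the template image shows the user.
    record = EnrollmentRecord(template_image, dict(personal_data),
                              list(attributes))
    digest = hashlib.sha256(
        repr(sorted(personal_data.items())).encode()).hexdigest()
    record.virtual_character_id = "vc-" + digest[:8]
    return record
```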
For determining if the identity of the user corresponds to a verified identity,
- the at least one template feature vector and/or the template image may be received, particularly from a providing server, particularly of the virtual environment, by using a connection interface, and/or
- the at least one template feature vector may be stored, particularly on a memory, more particularly of the server and/or the device and/or system configured for verifying at least one attribute of the virtual character.
The term “memory" as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to at least one electronic storage space configured for storing data, instructions, and programs. The stored data, instructions and/or programs may be forwarded for processing to a processor. The memory may be or may comprise at least one of: a Random Access Memory; a Read-Only Memory; a Cache Memory; a Hard Disk Drive; a Solid State Drive; a Virtual Memory.
The method may comprise:
- illuminating the user with at least one infrared flood light by using at least one flood illumination source;
- capturing the at least one flood image by using the image generation unit.
The term “flood illumination source” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to at least one arbitrary device configured for providing substantially continuous spatial illumination. The term “flood light” as used herein, is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to substantially continuous spatial illumination, in particular diffuse and/or uniform illumination. The flood light has a wavelength in the infrared range, in particular in the near infrared range. The flood illumination source may comprise at least one LED or at least one VCSEL, preferably a plurality of VCSELs. The term “substantially continuous spatial illumination” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to uniform spatial illumination, wherein areas of non-uniformity are possible. The area, e.g. covering a user, a portion of the user and/or a face of the user, illuminated by the flood illumination source may be contiguous. Power may be spread over a whole field of illumination. In contrast, illumination provided by the light pattern may comprise at least two contiguous areas, in particular a plurality of contiguous areas, and/or power may be concentrated in small (compared to the whole field of illumination) areas of the field of illumination. The infrared flood illumination may be suitable for illuminating a contiguous area, in particular one contiguous area.
The infrared pattern illumination may be suitable for illuminating at least two contiguous areas.
The term “flood image” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an image generated by the image generation unit while the illumination source is emitting infrared flood light, e.g. onto an object and/or a user. The flood image may comprise an image showing a user, in particular the face of the user, while the user is being illuminated with the flood light. The flood image may be generated by imaging and/or recording light reflected by an object and/or user which is illuminated by the flood light. The flood image showing the user may comprise at least a portion of the flood light on at least a portion of the user. For example, the illumination by the flood illumination source and the imaging by using the optical sensor may be synchronized, e.g. by using at least one control unit of the device and/or system.
The term “image generation unit” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to at least one unit configured for capturing at least one image, particularly for generating image data. The term “capturing” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to generating and/or determining and/or recording at least one image by using the image generation unit. Capturing may comprise recording a single image and/or a plurality of images such as a sequence of images. For example, capturing may comprise recording continuously a sequence of images such as a video or a movie. The image generation may be initiated by a user action or may automatically be initiated, e.g. once the presence of at least one object or user within a field of view and/or within a predetermined sector of the field of view of the image generation unit is automatically detected.
The pattern illumination source may be covered at least partially by a transparent display and/or the image generation unit may be covered at least partially by a transparent display. The flood illumination source may be covered at least partially by a transparent display.
The term “display” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an arbitrary shaped device configured for displaying an item of information. The item of information may be arbitrary information such as at least one image, at least one diagram, at least one histogram, at least one graphic, text, numbers, at least one sign, an operating menu, and the like. The display may be or may comprise at least one screen. The display may have an arbitrary shape, e.g. a rectangular shape. The display may be a front display.
The term “at least partially transparent” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a property of the display to allow light, in particular of a certain wavelength range, e.g. in the infrared spectral region, in particular in the near infrared spectral region, to pass at least partially through. For example, the display may be semitransparent in the near infrared region. For example, the display may have a transparency of 20% to 50% in the near infrared region. The display may have a different transparency for other wavelength ranges. The present invention may propose a device and/or system comprising the image generation unit and two illumination sources that can be placed behind the display of a device. The transparent area(s) of the display can allow for operation of the device and/or system behind the display. The display is an at least partially transparent display, as described above. The display may have a reduced pixel density and/or a reduced pixel size and/or may comprise at least one transparent conducting path. The transparent area(s) of the display may have a pixel density of 360-440 PPI (pixels per inch). Other areas of the display, e.g. non-transparent areas, may have pixel densities higher than 400 PPI, e.g. a pixel density of 460-500 PPI.
In a further aspect, a device and/or system, particularly configured for verifying at least one attribute of the virtual character, is disclosed. The device and/or system comprises:
- a processor; and
- a memory storing instructions that, when executed by the processor, configure the device and/or system to perform the steps of any one of the methods as disclosed elsewhere herein.
For this aspect, reference may be made to any definition, Embodiment, claim and/or aspect as disclosed herein.
The term “processor” as generally used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an arbitrary logic circuitry configured for performing basic operations of a computer or system, and/or, generally, to a device which is configured for performing calculations or logic operations. In particular, the processor may be configured for processing basic instructions that drive the computer or system. As an example, the processor may comprise at least one arithmetic logic unit (ALU), at least one floating-point unit (FPU), such as a math co-processor or a numeric coprocessor, a plurality of registers, specifically registers configured for supplying operands to the ALU and storing results of operations, and a memory, such as an L1 and L2 cache memory. In particular, the processor may be a multi-core processor. Specifically, the processor may be or may comprise a central processing unit (CPU). Additionally or alternatively, the processor may be or may comprise a microprocessor; thus, specifically, the processor’s elements may be contained in one single integrated circuit (IC) chip. Additionally or alternatively, the processor may be or may comprise one or more application-specific integrated circuits (ASICs) and/or one or more field-programmable gate arrays (FPGAs) or the like. The processor specifically may be configured, such as by software programming, for performing one or more evaluation operations.
The device may be selected from the group consisting of: a television device; a game console; a personal computer; a mobile device, particularly a cell phone, and/or a smart phone, and/or a tablet computer, and/or a laptop, and/or a virtual reality device, and/or a wearable, such as a smart watch; or another type of portable computer.
The device and/or system further may comprise at least one of:
- the pattern illumination source;
- the flood illumination source;
- the image generation unit;
- the transparent display;
- the memory;
- a connection interface.
The term “connection interface” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an item or an element forming a boundary configured for transmitting information or data. In particular, the connection interface may be configured for transmitting information from a computational device, e.g. a computer, such as to provide data, e.g. to another device. Additionally or alternatively, the connection interface may be configured for transmitting information to a computational device, e.g. to a computer, such as to receive data. The connection interface may specifically be configured for transmitting or exchanging data. In particular, the connection interface may provide a data transfer connection, e.g. Bluetooth, NFC, or inductive coupling. As an example, the connection interface may be or may comprise at least one port comprising one or more of a network or internet port, a USB-port, and a disk drive.
In a further aspect, a computer program is disclosed, comprising instructions which, when the program is executed by a computer, cause the computer to perform the method as disclosed elsewhere herein. For this aspect, reference may be made to any definition, Embodiment, claim and/or aspect as disclosed herein. The computer may be a device and/or a system as disclosed elsewhere herein. Specifically, the computer program may be stored on a computer-readable data carrier and/or on a computer-readable storage medium. The computer program may be executed on at least one processor comprised by the device and/or system configured for verifying at least one attribute of the virtual character.
In a further aspect, a non-transitory computer-readable storage medium is disclosed, the computer-readable storage medium including instructions that, when executed by a computer, cause the computer to perform the method as disclosed elsewhere herein. For this aspect, reference may be made to any definition, Embodiment, claim and/or aspect as disclosed herein. The computer may be a device and/or a system as disclosed elsewhere herein.
As used herein, the “computer-readable storage medium” specifically may refer to non-transitory data storage means, such as a hardware storage medium having stored thereon computer-executable instructions. The stored computer-executable instructions may be associated with the computer program. The computer-readable data carrier or storage medium specifically may be or may comprise a storage medium such as a random-access memory (RAM) and/or a read-only memory (ROM).
In a further aspect, a use of a pattern image for verifying an attribute of a virtual character in a virtual environment is disclosed. For this aspect, reference may be made to any definition, Embodiment, claim and/or aspect as disclosed herein.
Further disclosed and proposed herein is a computer program product having program code means, in order to perform the method according to the present invention in one or more of the embodiments disclosed herein when the program is executed on a computer or computer network. Specifically, the program code means may be stored on a computer-readable data carrier and/or on a computer-readable storage medium.
Further disclosed and proposed herein is a data carrier having a data structure stored thereon, which, after loading into a computer or computer network, such as into a working memory or main memory of the computer or computer network, may execute the method according to one or more of the embodiments disclosed herein.
Further disclosed and proposed herein is a computer program product with program code means stored on a machine-readable carrier, in order to perform the method according to one or more of the embodiments disclosed herein, when the program is executed on a computer or computer network. As used herein, a computer program product refers to the program as a tradable product. The product may generally exist in an arbitrary format, such as in a paper format, or on a computer-readable data carrier and/or on a computer-readable storage medium. Specifically, the computer program product may be distributed over a data network.
Further disclosed and proposed herein is a non-transient computer-readable medium including instructions that, when executed by one or more processors, cause the one or more processors to perform the method according to one or more of the embodiments disclosed herein.
Finally, disclosed and proposed herein is a modulated data signal which contains instructions readable by a computer system or computer network, for performing the method according to one or more of the embodiments disclosed herein.
Referring to the computer-implemented aspects of the invention, one or more of the method steps or even all of the method steps of the method according to one or more of the embodiments disclosed herein may be performed by using a computer or computer network. Thus, generally, any of the method steps including provision and/or manipulation of data may be performed by using a computer or computer network. Generally, these method steps may include any of the method steps, typically except for method steps requiring manual work, such as providing the samples and/or certain aspects of performing the actual measurements.
Specifically, further disclosed herein are:
- a computer or computer network comprising at least one processor, wherein the processor is adapted to perform the method according to one of the embodiments described in this description,
- a computer loadable data structure that is adapted to perform the method according to one of the embodiments described in this description while the data structure is being executed on a computer,
- a computer program, wherein the computer program is adapted to perform the method according to one of the embodiments described in this description while the program is being executed on a computer,
- a computer program comprising program means for performing the method according to one of the embodiments described in this description while the computer program is being executed on a computer or on a computer network,
- a computer program comprising program means according to the preceding embodiment, wherein the program means are stored on a storage medium readable to a computer,
- a storage medium, wherein a data structure is stored on the storage medium and wherein the data structure is adapted to perform the method according to one of the embodiments described in this description after having been loaded into a main and/or working storage of a computer or of a computer network, and
- a computer program product having program code means, wherein the program code means can be stored or are stored on a storage medium, for performing the method according to one of the embodiments described in this description, if the program code means are executed on a computer or on a computer network.
As used herein, the terms “have” , “comprise” or “include” or any arbitrary grammatical variations thereof are used in a non-exclusive way. Thus, these terms may both refer to a situation in which, besides the feature introduced by these terms, no further features are present in the entity described in this context and to a situation in which one or more further features are present. As an example, the expressions “A has B” , “A comprises B” and “A includes B” may both refer to a situation in which, besides B, no other element is present in A (i.e. a situation in which A solely and exclusively consists of B) and to a situation in which, besides B, one or more further elements are present in entity A, such as element C, elements C and D or even further elements.
Further, it shall be noted that the terms “at least one”, “one or more” or similar expressions indicating that a feature or element may be present once or more than once typically are used only once when introducing the respective feature or element. In most cases, when referring to the respective feature or element, the expressions “at least one” or “one or more” are not repeated, notwithstanding the fact that the respective feature or element may be present once or more than once.
Further, as used herein, the terms "preferably" , "more preferably" , "particularly" , "more particularly" , "specifically" , "more specifically" or similar terms are used in conjunction with optional features, without restricting alternative possibilities. Thus, features introduced by these terms are optional features and are not intended to restrict the scope of the claims in any way. The invention may, as the skilled person will recognize, be performed by using alternative features. Similarly, features introduced by "in an embodiment of the invention" or similar expressions are intended to be optional features, without any restriction regarding alternative embodiments of the invention, without any restrictions regarding the scope of the invention and without any restriction regarding the possibility of combining the features introduced in such way with other optional or non-optional features of the invention.
The present disclosure exhibits several advantages, such as fast, reliable and secure verification of a virtual character that offers counterfeit protection, particularly if the disclosed verification procedure is combined with an anti-spoofing procedure.
Several scenarios may be imagined. To begin with, a real-name authentication may be provided, e.g. for business transactions. In this context, a virtual character, such as an avatar, may be created with the real name of the user. This may be achieved in a separate enrollment process. The facial data generated in the enrollment process may be stored for at least one subsequent security check. The virtual character may be highlighted in the virtual environment, such as the metaverse, as a verified person. Control of the avatar may only be possible for a verified person. The verification may comprise a spoof-proof facial authentication.
In a further scenario, the face of the virtual character may be the real copy of the face of the user in case the user is authenticated. In this case, the device used for the authentication may be used to authenticate the user by evaluating the template of the data of the virtual character without any correlation to real-life names. This may be used for interactions where a “real-person” check may be necessary and a unique feature like a real copy of the face and/or the fingerprint is shown, but where the real name should be hidden.
Overall, in the context of the present invention, the following embodiments are regarded as preferred:
Embodiment 1. A method for verifying at least one attribute of a virtual character in a virtual environment associated with a user, the method comprising:
- receiving a request to verify at least one attribute of the virtual character,
- receiving at least one pattern image showing the user associated with the virtual character while the user is being illuminated by patterned infrared illumination,
- determining if the user corresponds to a living human based on the at least one pattern image,
- providing an indication that the at least one attribute of the virtual character is verified based on the determining if the user corresponds to a living human based on the at least one pattern image.
Embodiment 2. The method according to the preceding Embodiment, wherein the method is a computer-implemented method,
optionally wherein at least one of the method steps, preferably each of the method steps, is performed by using a device and/or system comprising at least one processor for executing the steps.
Embodiment 3. The method according to any one of the preceding Embodiments, wherein the method comprises:
- illuminating the user with at least one infrared light pattern, particularly by using at least one pattern illumination source;
- capturing the at least one pattern image while the user is being illuminated by patterned infrared illumination, particularly by using an image generation unit.
Embodiment 4. The method according to the preceding Embodiment, wherein the pattern illumination source is covered at least partially by a transparent display and/or wherein the image generation unit is covered at least partially by a transparent display.
Embodiment 5. The method according to any one of the preceding Embodiments, wherein material data extracted from the at least one pattern image comprises an item of information on the type of material of the user detected in the pattern image.
Embodiment 6. The method according to any one of the preceding Embodiments, wherein determining if the user corresponds to a living human based on the at least one pattern image comprises extracting material data from the at least one pattern image.
Embodiment 7. The method according to the preceding Embodiment, wherein, for extracting the material data from the at least one pattern image,
- the pattern image is provided to a model for determining the material data from the at least one pattern image, and/or
- the material data is received from the model for determining the material data from the at least one pattern image.
Embodiment 8. The method according to the preceding Embodiment, wherein providing the at least one pattern image to the model for determining the material data from the at least one pattern image comprises receiving the at least one pattern image by the model for determining the material data from the at least one pattern image,
optionally wherein the at least one pattern image is received by at least one input layer comprised by the model for determining the material data from the at least one pattern image.
Embodiment 9. The method according to any one of the two preceding Embodiments, wherein the model for determining the material data from the at least one pattern image comprises at least one mechanistic model for determining the material data from the at least one pattern image.
Embodiment 10. The method according to the preceding Embodiment, wherein the mechanistic model for determining the material data from the at least one pattern image is calibrated by using calibration data in a regression procedure, particularly wherein the calibration data comprises
- at least one pattern image showing the user;
- at least one known material data of the user in the at least one pattern image.
Embodiment 11. The method according to any one of the four preceding Embodiments, wherein the model for determining the material data from the at least one pattern image comprises at least one data-driven model for determining the material data from the at least one pattern image.
Embodiment 12. The method according to the preceding Embodiment, wherein the at least one data-driven model for determining the material data from the at least one pattern image comprises at least one of:
- at least one convolutional neural network;
- at least one component of an encoder-decoder structure, particularly wherein the component comprised by the data-driven model for determining the material data is an encoder, specifically an auto-encoder.
Embodiment 13. The method according to any one of the two preceding Embodiments, wherein the data-driven model for determining the material data from the at least one pattern image is trained by using training data in a training procedure, particularly wherein the training data comprises
- at least one pattern image showing the user;
- at least one known material data of the user in the at least one pattern image.
Embodiment 14. The method according to any one of the preceding Embodiments, wherein determining if the user corresponds to a living human based on the at least one pattern image comprises determining at least one blood perfusion measure.
Embodiment 15. The method according to the preceding Embodiment, wherein determining at least one blood perfusion measure comprises
- determining at least one speckle contrast of the pattern image, and
- determining a blood perfusion measure based on the determined at least one speckle contrast.
Embodiment 16. The method according to any one of the preceding Embodiments, wherein the patterned infrared illumination is coherent patterned infrared illumination.
Embodiment 17. The method according to any one of the preceding Embodiments, wherein the infrared illumination is within a range between 750 nm and 1100 nm.
Embodiment 18. The method according to any one of the preceding Embodiments, wherein the method comprises
- receiving at least one flood image showing the user associated with the virtual character while the user is being illuminated by infrared flood light;
- determining if the identity of the user corresponds to a verified identity based on the at least one flood image,
wherein providing an indication that the at least one attribute of the virtual character is verified is further based on the determining if the identity of the user corresponds to a verified identity.
Embodiment 19. The method according to any one of the preceding Embodiments, wherein the method comprises:
- illuminating the user with at least one infrared flood light by using at least one flood illumination source;
- capturing the at least one flood image by using the image generation unit.
Embodiment 20. The method according to the preceding Embodiment, wherein the flood illumination source is covered at least partially by a transparent display.
Embodiment 21. The method according to the preceding Embodiment, wherein, for determining if the identity of the user corresponds to a verified identity based on the at least one flood image, a similarity between at least one image feature vector obtained from the flood image and at least one template feature vector is considered.
Embodiment 22. The method according to the preceding Embodiment, wherein the template vector is obtained from a template image.
Embodiment 23. The method according to the preceding Embodiment, wherein the template image is generated in an enrollment process.
Embodiment 24. The method according to the preceding Embodiment, wherein, for determining if the identity of the user corresponds to a verified identity,
- the at least one template feature vector and/or the template image is received, particularly from a providing server by using a connection interface, and/or
- the at least one template feature vector is stored, particularly on a memory.
Embodiment 25. The method according to any one of the preceding Embodiments, wherein the at least one attribute is at least one of:
- at least a portion of an appearance of the virtual character,
specifically wherein the at least a portion of an appearance is at least one of:
· a face of the virtual character,
· a fingerprint of the virtual character,
- personal data of the virtual character,
specifically wherein the personal data of the virtual character is at least one of:
· a name of the virtual character,
· a unique identification code, specifically in form of a plain text, such as a human-readable text, or in form of a machine-readable code, such as a QR code.
Embodiment 26. The method according to any one of the preceding Embodiments, wherein providing the indication that the at least one attribute of the virtual character is verified comprises making the indication perceptible to at least one further virtual character in the virtual environment in a manner that the further user is aware that the at least one attribute of the virtual character is verified.
Embodiment 27. The method according to the preceding Embodiment, wherein making the indication perceptible to at least one further virtual character in the virtual environment in a manner that the further user is aware that the user is verified comprises displaying the verified at least one attribute of the virtual character on the virtual character.
Embodiment 28. The method according to any one of the preceding Embodiments, wherein the at least one attribute of the virtual character corresponds to at least one attribute of the user in a manner that the user has the same at least one attribute in the real world.
Embodiment 29. A device and/or system comprising
- a processor; and
- a memory storing instructions that, when executed by the processor, configure the device and/or system to perform the steps of the method of any one of the preceding method Embodiments.
Embodiment 30. The device and/or system according to the preceding Embodiment, wherein the device and/or system further comprises at least one of:
- the pattern illumination source;
- the flood illumination source;
- the image generation unit;
- the transparent display;
- the memory;
- a connection interface.
Embodiment 31. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to perform the method according to any one of the method Embodiments.
Embodiment 32. The computer program according to the preceding Embodiment, wherein the computer is a device and/or a system according to any one of the preceding Embodiments referring to a device and/or a system.
Embodiment 33. A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a computer, cause the computer to perform the method according to any one of the method Embodiments.
Embodiment 34. The non-transitory computer-readable storage medium according to the preceding Embodiment, wherein the computer is a device and/or a system according to any one of the preceding Embodiments referring to a device and/or a system.
Embodiment 35. Use of a pattern image for verifying an attribute of a virtual character in a virtual environment.
Brief description of the figures
Further optional details and features of the invention are evident from the description of preferred exemplary embodiments which follows in conjunction with the dependent claims. In this context, the particular features may be implemented in an isolated fashion or in combination with other features. The invention is not restricted to the exemplary embodiments. The exemplary embodiments are shown schematically in the figures. Identical reference numerals in the individual figures refer to identical elements or elements with identical function, or elements which correspond to one another with regard to their functions.
Specifically, in the figures:
Figure 1 shows an exemplary method for verifying at least one attribute of a virtual character in a virtual environment associated with a user; and
Figure 2 shows an exemplary device and/or system.
Detailed description of the embodiments:
Figure 1 shows an exemplary method 110 for verifying at least one attribute of a virtual character in a virtual environment associated with a user.
The at least one attribute may be at least a portion of an appearance of the virtual character, specifically wherein the at least a portion of an appearance is at least one of: a face of the virtual character, a fingerprint of the virtual character. Alternatively or in addition, the at least one attribute may be personal data of the virtual character, specifically wherein the personal data of the virtual character is at least one of: a name of the virtual character; a unique identification  code, specifically in form of a plain text, such as a human-readable text, or in form of a machine-readable code, such as a QR code. The at least one attribute of the virtual character may correspond to at least one attribute of the user.
The method comprises:
- in a step 112, receiving a request to verify at least one attribute of the virtual character,
- in a step 114, receiving at least one pattern image showing the user associated with the virtual character while the user is being illuminated by patterned infrared illumination,
- in a step 116, determining if the user corresponds to a living human based on the at least one pattern image,
- in a step 118, providing an indication that the at least one attribute of the virtual character is verified based on the determining if the user corresponds to a living human based on the at least one pattern image.
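For illustration only, the flow of steps 112 to 118 may be sketched as follows; the function and parameter names (`verify_attribute`, `liveness_check`) are hypothetical and not part of the disclosure, and the liveness check is passed in as a stand-in for the pattern-image analysis of step 116:

```python
def verify_attribute(request, pattern_image, liveness_check):
    """Illustrative sketch of steps 112-118.

    `liveness_check` stands in for the pattern-image analysis of step 116;
    any concrete implementation is outside this sketch.
    """
    attribute = request.get("attribute")  # step 112: the requested attribute
    if attribute is None or pattern_image is None:
        return None  # nothing to verify without a request and a pattern image
    if not liveness_check(pattern_image):  # step 116: living-human test
        return None  # no indication is provided
    # step 118: indication that the requested attribute is verified
    return {"attribute": attribute, "verified": True}
```

A caller would supply the captured pattern image (step 114) together with the concrete liveness test; the returned dictionary plays the role of the indication that is made perceptible in the virtual environment.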
Providing the indication, in step 118, that the at least one attribute of the virtual character is verified may comprise making the indication perceptible to at least one further virtual character in the virtual environment in a manner that the further user is aware that the at least one attribute of the virtual character is verified. Making the indication perceptible to at least one further virtual character in the virtual environment in a manner that the further user is aware that the user is verified may comprise displaying the verified at least one attribute of the virtual character on the virtual character.
Providing the indication, in step 118, that the at least one attribute of the virtual character is verified may comprise making the indication perceptible to at least one further virtual character in the virtual environment in a manner that the further user is aware that the user is authenticated.
The method 110 may be a computer-implemented method. Alternatively or in addition, at least one of the method steps, preferably each of the method steps, may be performed by using a device 120 and/or system 122 comprising at least one processor 124 for executing the steps.
The method 110 may comprise illuminating the user, in a step 126, with at least one infrared light pattern, particularly by using at least one pattern illumination source 128. Alternatively or in addition, the method may comprise capturing the at least one pattern image, in a step 130, while the user is being illuminated by patterned infrared illumination, particularly by using an image generation unit 132. In particular, performing at least one of the steps 126, 130 generates the at least one pattern image that is received in step 114.
In the step 116, determining if the user corresponds to a living human based on the at least one pattern image may comprise extracting material data from the at least one pattern image. The material data may comprise an item of information on the type of material of the user detected in the pattern image. For extracting the material data from the at least one pattern image, the pattern image may be provided, in a step 136, to a model for determining the material data from the at least one pattern image. Alternatively or in addition, for extracting the material data from the at least one pattern image, the material data may be received, in a step 138, from the model for determining the material data from the at least one pattern image.
Providing the at least one pattern image, in the step 136, to the model for determining the material data from the at least one pattern image may comprise receiving the at least one pattern image by the model for determining the material data from the at least one pattern image. Alternatively or in addition, the at least one pattern image may be received by at least one input layer comprised by the model for determining the material data from the at least one pattern image.
The model for determining the material data from the at least one pattern image may comprise at least one mechanistic model for determining the material data from the at least one pattern image. The mechanistic model for determining the material data from the at least one pattern image may be calibrated by using calibration data in a regression procedure. The calibration data may comprise
- at least one pattern image showing the user;
- at least one known material data of the user in the at least one pattern image.
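The regression calibration of the mechanistic model described above may be illustrated with a one-parameter least-squares fit; the linear form, the scalar feature, and the function name are assumptions for illustration, as the disclosure does not specify the model or its inputs:

```python
def calibrate_linear(features, known_material):
    """Least-squares fit of known_material ~ slope * feature + intercept.

    A minimal stand-in for calibrating a mechanistic model against
    calibration data (pattern-image features paired with known material data).
    """
    n = len(features)
    mean_x = sum(features) / n
    mean_y = sum(known_material) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(features, known_material))
        / sum((x - mean_x) ** 2 for x in features)
    )
    intercept = mean_y - slope * mean_x
    return slope, intercept
```

Once calibrated, the slope and intercept map a feature extracted from a new pattern image to a material estimate.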
The model for determining the material data from the at least one pattern image may comprise at least one data-driven model for determining the material data from the at least one pattern image. The at least one data-driven model for determining the material data from the at least one pattern image may comprise at least one of:
- at least one convolutional neural network;
- at least one component of an encoder-decoder structure, particularly wherein the component comprised by the data-driven model for determining the material data is an encoder, specifically an auto-encoder.
The data-driven model for determining the material data from the at least one pattern image may be trained by using training data in a training procedure, particularly wherein the training data comprises
- at least one pattern image showing the user;
- at least one known material data of the user in the at least one pattern image.
In the step 116, determining if the user corresponds to a living human based on the at least one pattern image may comprise determining at least one blood perfusion measure, in a step 140. Determining the at least one blood perfusion measure, in the step 140, may comprise determining at least one speckle contrast of the pattern image. Alternatively or in addition, determining the at least one blood perfusion measure, in the step 140, may comprise determining a blood perfusion measure based on the determined at least one speckle contrast.
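The speckle-contrast route of steps 116 and 140 may be sketched as follows; defining the contrast as K = σ_I / ⟨I⟩ and taking perfusion to grow as 1/K² is a common laser-speckle heuristic used here for illustration, not a quantity defined by the disclosure:

```python
from statistics import fmean, pstdev

def speckle_contrast(intensities):
    """Global speckle contrast K = sigma_I / mean_I of the sampled intensities."""
    return pstdev(intensities) / fmean(intensities)

def perfusion_measure(intensities):
    """Moving scatterers (flowing blood) blur the speckle pattern and lower K,
    so a perfusion measure is commonly taken to grow as 1 / K**2."""
    k = speckle_contrast(intensities)
    return 1.0 / (k * k)
```

In practice the contrast would be evaluated over local windows of the pattern image rather than globally, so that perfused skin regions can be distinguished from static spoofing material.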
The method 110 may comprise
- in a step 142, receiving at least one flood image showing the user associated with the virtual character while the user is being illuminated by infrared flood light;
- in a step 144, determining if the identity of the user corresponds to a verified identity based on the at least one flood image,
wherein providing an indication that the at least one attribute of the virtual character is verified is further based on the determining if the identity of the user corresponds to a verified identity.
The method may comprise:
- illuminating the user, in a step 146, with at least one infrared flood light by using at least one flood illumination source 148;
- capturing the at least one flood image, in a step 150, by using the image generation unit 132.
In particular, performing at least one of the steps 146, 150 generates the at least one flood image that is received in step 142.
For determining if the identity of the user corresponds to a verified identity, in the step 144, based on the at least one flood image, a similarity between at least one image feature vector obtained from the flood image and at least one template feature vector may be considered, in a step 152. The at least one template feature vector may be obtained from a template image. The template image may be generated in an enrollment process. The method may comprise performing the enrollment process in a step 154.
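The similarity check of step 152 may, for example, compare the image feature vector with the enrolled template feature vector by cosine similarity; both the choice of metric and the threshold value are assumptions made for this sketch:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identity_verified(image_vector, template_vector, threshold=0.8):
    """Step 144: the identity counts as verified when the flood-image feature
    vector is sufficiently close to the enrolled template vector."""
    return cosine_similarity(image_vector, template_vector) >= threshold
```

The template vector would be computed once during enrollment (step 154) and stored (step 158), so that only the fresh flood-image vector needs to be computed at verification time.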
In step 144, for determining if the identity of the user corresponds to a verified identity,
- the at least one template feature vector and/or the template image may be received, in a step 156, particularly from a providing server by using a connection interface, and/or
- the at least one template feature vector may be stored, in a step 158, particularly on a memory 160.
Particularly when the indication, in step 118, is provided, the method may further comprise a step 162 of allowing the user to use and/or to control the virtual character in the virtual environment. The user may be allowed to use and/or control the virtual character in the virtual environment in case of at least one of:
- it is determined that the user corresponds to a living human, and
- it is determined that the identity of the user corresponds to a verified identity.
The method step of allowing the user to use and/or to control the virtual character may be performed by the server of the virtual environment.
Particularly when the indication, in step 118, is not provided, the method may further comprise a step 164 of denying the user the use and/or control of the virtual character in the virtual environment. The user may be denied the use and/or control of the virtual character in the virtual environment in case of at least one of:
- it is determined that the user does not correspond to a living human, and
- it is determined that the identity of the user does not correspond to a verified identity.
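Steps 162 and 164 amount to a simple gate on the two checks; whether both checks are required or one suffices is a policy choice of the virtual-environment server, so it is kept as a parameter in this illustrative sketch:

```python
def access_decision(is_living_human, identity_verified, require_both=True):
    """Allow (step 162) or decline (step 164) control of the virtual character.

    `require_both` is an assumed policy switch: when True, both the liveness
    check and the identity check must pass; when False, either suffices.
    """
    if require_both:
        granted = is_living_human and identity_verified
    else:
        granted = is_living_human or identity_verified
    return "allow" if granted else "decline"
```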
In a further aspect, a device and/or system is disclosed. The device and/or system comprises:
- a processor 124; and
- a memory 160 storing instructions that, when executed by the processor, configure the device and/or system to perform the steps of the method of any one of the method claims.
The device and/or system further may comprise at least one of:
- the pattern illumination source 128;
- the flood illumination source 148;
- the image generation unit 132;
- the display 134;
- a connection interface 166.
The pattern illumination source 128 may be covered at least partially by a transparent display 134 and/or the image generation unit 132 may be covered at least partially by a transparent display 134. The flood illumination source 148 may be covered at least partially by a transparent display 134.
In a further aspect, a computer program is disclosed comprising instructions which, when the program is executed by a computer, cause the computer to perform the method according to any one of the method claims. The computer may be a device and/or a system as disclosed elsewhere herein. In a further aspect, a non-transitory computer-readable storage medium is disclosed, the computer-readable storage medium including instructions that, when executed by a computer, cause the computer to perform the method according to any one of the method claims. The computer may be a device and/or a system as disclosed elsewhere herein. In a further aspect, a use of a pattern image for verifying an attribute of a virtual character in a virtual environment is disclosed.
List of reference numbers
110 method for verifying at least one attribute of a virtual character
112 receiving a request to verify
114 receiving at least one pattern image
116 determining if the user corresponds to a living human
118 providing an indication that the at least one attribute is verified
120 device
122 system
124 processor
126 illuminating the user
128 pattern illumination source
130 capturing the at least one pattern image
132 image generation unit
134 display
136 providing the pattern image
138 receiving the material data
140 determining at least one blood perfusion measure
142 receiving at least one flood image
144 determining the identity of the user
146 illuminating the user
148 flood illumination source
150 capturing the at least one flood image
152 considering a similarity between feature vectors
154 performing enrollment process
156 receiving one or more feature vectors
158 storing a template feature vector
160 memory
162 allowing the user to use and/or to control the virtual character
164 denying the user the use and/or control of the virtual character
166 connection interface

Claims (15)

  1. A method (110) for verifying at least one attribute of a virtual character in a virtual environment associated with a user, the method (110) comprising:
    - receiving a request (112) to verify at least one attribute of the virtual character,
    - receiving at least one pattern image (114) showing the user associated with the virtual character while the user is being illuminated by patterned infrared illumination,
    - determining if the user corresponds to a living human (116) based on the at least one pattern image,
    - providing an indication (118) that the at least one attribute of the virtual character is verified based on the determining if the user corresponds to a living human based on the at least one pattern image.
  2. The method (110) according to the preceding claim, wherein determining if the user corresponds to a living human based on the at least one pattern image comprises extracting material data from the at least one pattern image.
  3. The method (110) according to any one of the preceding claims, wherein determining if the user corresponds to a living human based on the at least one pattern image comprises determining (140) at least one blood perfusion measure.
  4. The method (110) according to any one of the preceding claims, wherein the method (110) comprises:
    - receiving at least one flood image (142) showing the user associated with the virtual character while the user is being illuminated by infrared flood light;
    - determining if the identity of the user (144) corresponds to a verified identity based on the at least one flood image,
    wherein providing an indication (118) that the at least one attribute of the virtual character is verified is further based on the determining if the identity of the user corresponds to a verified identity.
  5. The method (110) according to the preceding claim, wherein, for determining if the identity of the user (144) corresponds to a verified identity based on the at least one flood image, a similarity between at least one image feature vector obtained from the flood image and at least one template feature vector is considered, wherein the template vector is obtained from a template image.
  6. The method (110) according to the preceding claim, wherein the template image is generated in an enrollment process.
  7. The method (110) according to any one of the preceding claims, wherein the at least one attribute is
    at least a portion of an appearance of the virtual character,
    specifically wherein the at least a portion of an appearance is at least one of:
    · a face of the virtual character,
    · a fingerprint of the virtual character.
  8. The method (110) according to any one of the preceding claims, wherein the at least one attribute is
    personal data of the virtual character,
    specifically wherein the personal data of the virtual character is at least one of:
    · a name of the virtual character,
    · a unique identification code, specifically in form of a plain text, such as a human-readable text, or in form of a machine-readable code, such as a QR code.
  9. The method (110) according to any one of the preceding claims, wherein providing the indication (118) that the at least one attribute of the virtual character is verified comprises:
    making the indication perceptible to at least one further virtual character in the virtual environment in a manner that the further user is aware that the at least one attribute of the virtual character is verified.
  10. The method (110) according to the preceding claim, wherein making the indication perceptible to at least one further virtual character in the virtual environment in a manner that the further user is aware that the user is verified comprises:
    displaying the verified at least one attribute of the virtual character on the virtual character.
  11. The method (110) according to any one of the preceding claims, wherein the at least one attribute of the virtual character corresponds to at least one attribute of the user in a manner that the user has the same at least one attribute in the real world.
  12. A device (120) and/or system (122) comprising:
    - a processor (124) ; and
    - a memory (160) storing instructions that, when executed by the processor (124), configure the device (120) and/or system (122) to perform the steps of the method (110) of any one of the preceding method claims.
  13. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to perform the method (110) according to any one of the method (110) claims.
  14. A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a computer, cause the computer to perform the method (110) according to any one of the method (110) claims.
  15. Use of a pattern image for verifying an attribute of a virtual character in a virtual environment.
PCT/CN2023/113196 2023-08-15 2023-08-15 Authentication of user in the metaverse Pending WO2025035402A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2023/113196 WO2025035402A1 (en) 2023-08-15 2023-08-15 Authentication of user in the metaverse

Publications (1)

Publication Number Publication Date
WO2025035402A1 true WO2025035402A1 (en) 2025-02-20

Family

ID=94631862

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/113196 Pending WO2025035402A1 (en) 2023-08-15 2023-08-15 Authentication of user in the metaverse

Country Status (1)

Country Link
WO (1) WO2025035402A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018125563A1 (en) * 2016-12-30 2018-07-05 Tobii Ab Identification, authentication, and/or guiding of a user using gaze information
CN114144781A (en) * 2019-05-17 2022-03-04 Q5Id公司 Identity verification and management system
WO2022183070A1 (en) * 2021-02-26 2022-09-01 Dreamchain Corporation Systems and methods for a tokenized virtual persona for use with a plurality of software applications
WO2023010715A1 (en) * 2021-08-06 2023-02-09 完美世界(北京)软件科技发展有限公司 Game account control method, apparatus and device

Similar Documents

Publication Publication Date Title
US11973877B2 (en) Systems and methods for secure tokenized credentials
Kramer et al. Face morphing attacks: Investigating detection with humans and computers
Galbally et al. Three‐dimensional and two‐and‐a‐half‐dimensional face recognition spoofing using three‐dimensional printed models
Feng et al. Towards racially unbiased skin tone estimation via scene disambiguation
CN108389053B (en) Payment method, payment device, electronic equipment and readable storage medium
US20250094553A1 (en) Face authentication including material data extracted from image
WO2018153311A1 (en) Virtual reality scene-based business verification method and device
KR20210062381A (en) Liveness test method and liveness test apparatus, biometrics authentication method and biometrics authentication apparatus
US12388620B2 (en) Systems, methods, and devices for generating digital and cryptographic assets by mapping bodies for n-dimensional monitoring using mobile image devices
JP6792986B2 (en) Biometric device
KR102640356B1 (en) Control method of system for non-face-to-face identification using artificial intelligence model leareed with face template data
JP2023150898A (en) Authentication system and authentication method
US11475648B2 (en) System and method for providing eyewear try-on and recommendation services using truedepth camera
WO2025035402A1 (en) Authentication of user in the metaverse
KR101968810B1 (en) System and method for biometric behavior context-based human recognition
CN112989902B (en) In vivo testing method and in vivo testing device
WO2025040591A1 (en) Skin roughness as security feature for face unlock
JP2025508407A (en) Image manipulation for determining materials information.
EP4530666A1 (en) 2in1 projector with polarized vcsels and beam splitter
CN114723451A (en) Payment electronic equipment control method and device, electronic equipment and storage medium
JP7100774B1 (en) Authentication system, authentication method, and program
WO2025176821A1 (en) Method for authenticating a user of a device
CN120296270A (en) Authentication method, first service system, second service system and storage medium
Badovinac et al. Biometric Authentication Model Based on Transformation of Face Image into a PIN Number Usable During the Covid-19 Pandemic
WO2025172524A1 (en) Beam profile analysis in combination with tof sensors

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23948832

Country of ref document: EP

Kind code of ref document: A1