
US12260025B2 - Information processing apparatus, information processing method, and program - Google Patents

Information processing apparatus, information processing method, and program

Info

Publication number
US12260025B2
US12260025B2 US18/571,510 US202218571510A
Authority
US
United States
Prior art keywords
hold
side object
virtual object
virtual
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US18/571,510
Other versions
US20240281072A1 (en)
Inventor
Keijiroh Nagano
Tsuyoshi Ishikawa
Shin Shiroma
Daisuke Tajima
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Group Corp
Original Assignee
Sony Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corp filed Critical Sony Group Corp
Assigned to Sony Group Corporation reassignment Sony Group Corporation ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SHIROMA, SHIN, ISHIKAWA, TSUYOSHI, NAGANO, KEIJIROH, TAJIMA, DAISUKE
Publication of US20240281072A1 publication Critical patent/US20240281072A1/en
Application granted granted Critical
Publication of US12260025B2 publication Critical patent/US12260025B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/211Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/212Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25Output arrangements for video game devices
    • A63F13/28Output arrangements for video game devices responding to control signals received from the game device for affecting ambient conditions, e.g. for vibrating players' seats, activating scent dispensers or affecting temperature or light
    • A63F13/285Generating tactile feedback signals via the game input device, e.g. force feedback
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/214Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2016Rotation, translation, scaling

Definitions

  • the present technology relates to an information processing apparatus, an information processing method, and a program that can be applied to creation of a virtual space such as a virtual reality (VR) space and an augmented reality (AR) space.
  • Patent Literature 1 has disclosed a technology that enables a user to make an intuitive operation on an object in a virtual reality space and enables reduction of a necessary amount of calculation for achieving such an operation.
  • an information processing apparatus includes a hold determination unit.
  • the hold determination unit determines, in a case where a rate of decrease of a distance between a first hold-side object and a second hold-side object that set a virtual object as a pinch-and-hold target becomes smaller than a predetermined threshold value, that the virtual object has been held by the first hold-side object and the second hold-side object.
  • With this information processing apparatus, it is determined that the virtual object has been held in a case where the rate of decrease of the distance between the first hold-side object and the second hold-side object becomes smaller than the predetermined threshold value. Accordingly, a high-quality virtual experience can be achieved.
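  • As an illustration only (not part of the patent description), the following Python sketch shows the hold determination described above under simplifying assumptions; the class name HoldDeterminator, the per-frame update interface, and the numeric thresholds are hypothetical.

```python
# A minimal sketch: the virtual object is regarded as held when the rate of
# decrease of the distance D between the two hold-side objects falls below a
# near-zero threshold. All names, the per-frame interface, and the numeric
# values are illustrative assumptions, not the patent's implementation.

class HoldDeterminator:
    def __init__(self, rate_threshold=0.01, window_frames=5, frame_rate=60.0):
        self.rate_threshold = rate_threshold      # near-zero threshold [m/s]
        self.window_frames = window_frames        # frames used to estimate the rate
        self.frame_rate = frame_rate              # sampling rate of hand tracking
        self.history = []                         # recent values of the distance D [m]
        self.held = False
        self.hold_determination_distance = None   # D1, stored when the hold is determined

    def update(self, distance_d):
        """Call once per frame with the current distance D, after it has been
        detected that the two objects attempt to pinch and hold the target."""
        self.history.append(distance_d)
        if self.held or len(self.history) <= self.window_frames:
            return self.held
        past = self.history[-1 - self.window_frames]
        dt = self.window_frames / self.frame_rate
        decrease_rate = (past - distance_d) / dt   # positive while D is shrinking
        # The decrease of D has (almost) stopped -> determine that the object is held.
        if 0.0 <= decrease_rate < self.rate_threshold:
            self.held = True
            self.hold_determination_distance = distance_d
        return self.held
```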
  • the information processing apparatus may further include a storage unit and a cancel determination unit.
  • the storage unit stores a distance between the first hold-side object and the second hold-side object when the hold determination unit determines that the virtual object has been held, as a hold determination distance.
  • the cancel determination unit determines whether or not the hold of the virtual object by the first hold-side object and the second hold-side object has been cancelled on the basis of the stored hold determination distance.
  • the cancel determination unit may determine that the hold has been cancelled in a case where the distance between the first hold-side object and the second hold-side object becomes larger than a distance obtained by adding a predetermined threshold value to the hold determination distance.
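  • A corresponding sketch of the cancel (release) determination described above follows; the function name, the keyword argument release_margin, and its default value are hypothetical.

```python
# A minimal sketch of the hold-cancel (release) determination: the hold is
# regarded as cancelled when the current distance D becomes larger than the
# stored hold determination distance D1 plus a predetermined threshold.
# Names and the default margin are illustrative assumptions.

def is_hold_cancelled(distance_d, hold_determination_distance_d1, release_margin=0.01):
    """True when D exceeds D1 + threshold (i.e., the release determination distance)."""
    return distance_d > hold_determination_distance_d1 + release_margin
```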
  • the first hold-side object may be one or more predetermined fingers of a user.
  • the second hold-side object is one or more other fingers different from the one or more predetermined fingers of the user.
  • the hold determination unit may set two objects which attempt to pinch and hold the virtual object as the first hold-side object and the second hold-side object.
  • the hold determination unit may set, in a case where a user has directed a finger pad of each of two fingers towards the virtual object, the two fingers as the first hold-side object and the second hold-side object, respectively.
  • the hold determination unit may set, in a case where a user has directed portions of two adjacent fingers, which face each other, towards the virtual object, the two fingers as the first hold-side object and the second hold-side object, respectively.
  • the hold determination unit may set two objects whose distance to the virtual object is smaller than the predetermined threshold value as the first hold-side object and the second hold-side object, respectively.
  • the hold determination unit may set two objects determined to have collided with the virtual object as the first hold-side object and the second hold-side object, respectively.
  • the hold determination unit may set the first hold-side object and the second hold-side object on the basis of at least one of a shape or a size of the virtual object.
  • the hold determination unit may set a thumb finger of a user and one or more other fingers of the same hand as the thumb finger as the first hold-side object and the second hold-side object.
  • the hold determination unit executes, in a case where a plurality of fingers is set as the first hold-side object or the second hold-side object, hold determination of the virtual object by using a position of a center of weight of the plurality of fingers as a reference.
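  • A simple sketch of using the position of the center of weight of a plurality of fingers as the reference, as mentioned above, is shown below; the function name and the tuple-based vector type are assumptions.

```python
# A minimal sketch: when several fingers act as one hold-side object, the
# position used for the hold determination is the centroid ("center of
# weight") of their positions. Names and types are illustrative assumptions.

from typing import Sequence, Tuple

Vec3 = Tuple[float, float, float]

def center_of_weight(finger_positions: Sequence[Vec3]) -> Vec3:
    """Average of the 3D positions of the fingers assigned to one hold-side object."""
    n = len(finger_positions)
    return (
        sum(p[0] for p in finger_positions) / n,
        sum(p[1] for p in finger_positions) / n,
        sum(p[2] for p in finger_positions) / n,
    )
```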
  • the information processing apparatus may further include a notification control unit that notifies a user of at least one of the fact that the virtual object has been held by the first hold-side object and the second hold-side object or the fact that the hold of the virtual object by the first hold-side object and the second hold-side object has been cancelled.
  • the notification control unit may execute notification by at least one of presentation of a sense of touch, display of a virtual image, or output of a sound.
  • the notification control unit may display, in a case where the first hold-side object or the second hold-side object which holds the virtual object is spaced apart from the virtual object when the hold determination unit determines that the virtual object has been held, a virtual image of the hold-side object spaced apart from the virtual object in such a manner that the virtual image is in contact with the virtual object.
  • the information processing apparatus may further include a storage unit that stores a distance between the first hold-side object and the second hold-side object when the hold determination unit determines that the virtual object has been held, as a hold determination distance.
  • the hold determination unit may determine, in a case where the distance between the first hold-side object and the second hold-side object becomes equal to the hold determination distance stored at the time when the same virtual object is pinched and held in the past, that the virtual object has been held by the first hold-side object and the second hold-side object.
  • the information processing apparatus may further include a storage unit that stores, with respect to each of a plurality of users, a distance between the first hold-side object and the second hold-side object when the hold determination unit determines that the virtual object has been held, as a hold determination distance.
  • the hold determination unit may determine, in a case where the distance between the first hold-side object and the second hold-side object becomes equal to the hold determination distance stored at the time when another user pinches and holds the same virtual object, that the virtual object has been held by the first hold-side object and the second hold-side object.
  • the information processing apparatus may further include a storage unit that stores a distance between the first hold-side object and the second hold-side object when an actual object corresponding to the virtual object has been pinched and held by the first hold-side object and the second hold-side object as an actual hold distance.
  • the hold determination unit may determine, in a case where the distance between the first hold-side object and the second hold-side object becomes equal to the actual hold distance, that the virtual object has been held by the first hold-side object and the second hold-side object.
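  • The variations above (reusing a hold determination distance stored for the same virtual object, for another user, or for the corresponding actual object) can be sketched as a simple lookup; the keying scheme, the tolerance used for "becomes equal to", and all names are hypothetical.

```python
# A minimal sketch of storing and reusing hold determination distances, keyed
# by virtual object and user. The keying scheme and the tolerance are
# illustrative assumptions.

class HoldDistanceStore:
    def __init__(self, tolerance=0.005):
        self.tolerance = tolerance   # "equal" within this margin [m]
        self.distances = {}          # (object_id, user_id) -> stored distance [m]

    def store(self, object_id, user_id, distance):
        self.distances[(object_id, user_id)] = distance

    def matches_stored_distance(self, object_id, distance, user_id=None):
        """True if the current distance equals a stored hold determination distance
        for this virtual object (for any user when user_id is None)."""
        for (obj, usr), stored in self.distances.items():
            if obj != object_id:
                continue
            if user_id is not None and usr != user_id:
                continue
            if abs(distance - stored) <= self.tolerance:
                return True
        return False
```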
  • An information processing method is an information processing method to be executed by a computer system and includes determining, in a case where a rate of decrease of a distance between a first hold-side object and a second hold-side object that set a virtual object as a pinch-and-hold target becomes smaller than a predetermined threshold value, that the virtual object has been held by the first hold-side object and the second hold-side object.
  • A program according to an embodiment of the present technology causes a computer system to execute the step of determining, in a case where a rate of decrease of a distance between a first hold-side object and a second hold-side object that set a virtual object as a pinch-and-hold target becomes smaller than a predetermined threshold value, that the virtual object has been held by the first hold-side object and the second hold-side object.
  • FIG. 1 A schematic view for outlining an AR providing system according to an embodiment.
  • FIG. 2 A perspective view showing an outer appearance example of an HMD.
  • FIG. 3 A block diagram showing a functional configuration example of the HMD.
  • FIG. 4 A view for describing a basic operation of hold determination and release determination on a virtual object.
  • FIG. 5 A flowchart showing a processing example of the hold determination/release determination.
  • FIG. 6 A schematic view for describing each step shown in FIG. 5 .
  • FIG. 7 A schematic view for describing each step shown in FIG. 5 .
  • FIG. 8 A schematic view for describing each step shown in FIG. 5 .
  • FIG. 9 A schematic view showing another example of a hold operation of a user.
  • FIG. 10 A schematic view showing another example of the hold operation of the user.
  • FIG. 11 A schematic view for describing another embodiment of the hold determination.
  • FIG. 12 A schematic view showing an example of notifying the user.
  • FIG. 13 A schematic view for describing a display example of virtual images of a first hold-side object and a second hold-side object.
  • FIG. 14 A schematic view showing an example of a wearable controller.
  • FIG. 15 A block diagram showing a hardware configuration example of a computer that can be applied to the present technology.
  • FIG. 1 is a schematic view for outlining an AR providing system according to an embodiment of the present technology.
  • An AR providing system 1 corresponds to an embodiment of an information processing system according to the present technology.
  • the AR providing system 1 includes a head-mounted display (HMD) 2 .
  • the HMD 2 is put on the head of a user 3 for use.
  • the HMD 2 is an eyeglasses-type device with a see-through display.
  • the HMD 2 is also called AR glasses.
  • the HMD 2 reproduces virtual content to the user 3 .
  • This can provide an AR space (virtual space) to the user 3 using the HMD 2 .
  • the user 3 can experience various AR worlds by using the HMD 2 .
  • the reproduction of the virtual content includes displaying the virtual object to be superimposed on the real world. Moreover, the reproduction of the virtual content includes outputting virtual sounds. In addition, a smell, a sense of touch, and the like may be virtually provided to the user 3 .
  • two users 3 a and 3 b respectively wear HMDs 2 a and 2 b and use this AR providing system 1 .
  • a table 4 a is placed as an actual object 4 in the real world.
  • a ball 4 b is placed on the table 4 a as an actual object.
  • a cubic box 5 a and a model car 5 b are placed as virtual objects 5 .
  • the model car 5 b travels between the ball 4 b as the actual object 4 and the box 5 a as the virtual object 5 .
  • the users 3 a and 3 b can grasp and move the box 5 a as the virtual object 5 with their hands.
  • a traveling route of the model car 5 b changes in accordance with a change in position of the box 5 a , and the users can enjoy its state.
  • the model car 5 b and the box 5 a are objects displayed by the HMD 2 as virtual images, and those are not actually placed on the table 4 a.
  • the users 3 a and 3 b perceive as if the model car 5 b and the box 5 a were placed on the table 4 a as shown in FIG. 1 .
  • this AR providing system 1 enables such an AR experience.
  • the AR world that can be experienced by applying the present technology is not limited, and a variety of AR worlds can be experienced.
  • any virtual image such as computer graphics (CG) of a character and the like, a photograph, and letters can be displayed as the virtual object 5 .
  • any sound such as a character's voice, a siren sound, and a door close sound effect can be output as the virtual sound.
  • the number of users 3 who can experience the virtual space is also not limited.
  • the HMD 2 functions as an embodiment of an information processing apparatus according to the present technology.
  • FIG. 2 is a perspective view showing an outer appearance example of the HMD 2 .
  • the HMD 2 has a frame 6 , a left eye lens 7 a and a right eye lens 7 b , a left eye display 8 a and a right eye display 8 b , a left eye camera 9 a and a right eye camera 9 b , and an outside camera 10 .
  • the frame 6 has an eyeglasses-like shape and has a rim portion 11 and temples 12 .
  • the rim portion 11 is a portion arranged in front of the left and right eyes of the user 3 and respectively supports the left eye lens 7 a and the right eye lens 7 b.
  • the temples 12 extend rearward to both ears of the user 3 from both ends of the rim portion 11 . End portions of the temples 12 are attached to both ears.
  • the rim portion 11 and the temples 12 are made of a material such as synthetic resin or metal, for example.
  • the left eye lens 7 a and the right eye lens 7 b are respectively arranged in front of the left and right eyes of the user 3 , covering at least a part of the field of view of the user 3 .
  • the respective lenses are designed to correct the eyesight of the user 3 .
  • the present technology is not limited thereto, and so-called non-prescription lenses may be used.
  • the left eye display 8 a and the right eye display 8 b are see-through displays.
  • the left eye display 8 a and the right eye display 8 b are respectively arranged to cover a partial region of the left and right eye lenses 7 a and 7 b . That is, the left and right eye displays 8 a and 8 b are respectively arranged in front of the left and right eyes of the user 3 .
  • the left and right eye displays 8 a and 8 b respectively display left and right eye images or the like.
  • the user 3 wearing the HMD 2 can visually recognize a real scene and images displayed on the respective displays 8 a and 8 b at the same time.
  • the user 3 can experience augmented reality (AR) or the like.
  • Each of the displays 8 a and 8 b displays, for example, the virtual object 5 .
  • Examples of the left and right eye displays 8 a and 8 b include a see-through organic EL display and a liquid-crystal display (LCD).
  • the configurations of the left and right eye displays 8 a and 8 b are not limited.
  • For example, a see-through display using any method, e.g., a display method of projecting an image on a transparent screen or a display method of displaying an image through a prism or the like, may be used as appropriate.
  • the left eye camera 9 a and the right eye camera 9 b are provided in the frame 6 as appropriate so as to be capable of imaging the left and right eyes of the user 3 .
  • For example, gaze information about the gaze of the user 3 or the like can be detected on the basis of left and right eye images captured by the left eye camera 9 a and the right eye camera 9 b.
  • Examples of the left and right eye cameras 9 a and 9 b include a digital camera provided with an image sensor such as a complementary metal-oxide semiconductor (CMOS) sensor and a charge coupled device (CCD) sensor. Moreover, for example, an infrared camera with infrared light such as an infrared LED may be used.
  • the left eye lens 7 a and the right eye lens 7 b will be sometimes both referred to as lenses 7 and the left eye display 8 a and the right eye display 8 b will be sometimes both referred to as see-through displays 8 .
  • the left eye camera 9 a and the right eye camera 9 b will be sometimes both referred to as inside cameras 9 .
  • the see-through display 8 corresponds to a display unit.
  • the outside camera 10 is arranged in the middle of the frame 6 (rim portion 11 ), oriented outward (opposite to the user 3 ).
  • the outside camera 10 is capable of imaging a real space included in the field of view of the user 3 . Therefore, the outside camera 10 is capable of generating a captured image of the real space.
  • the outside camera images an area, which is an area on the front side as viewed from the user 3 and includes a display region of the see-through display 8 . That is, the outside camera 10 images the real space to include an area that the user 3 can see through the display region.
  • Examples of the outside camera 10 include a digital camera with an image sensor such as a CMOS sensor or a CCD sensor.
  • the area that the user 3 can see through the display region is an area where the virtual object 5 can be superimposed on the real world.
  • the area will be referred to as an effective field of view of the user 3 .
  • the effective field of view can also be said to be an angle of view where the virtual object 5 can be displayed.
  • FIG. 3 is a block diagram showing a functional configuration example of the HMD 2 .
  • the HMD 2 further includes a loudspeaker 14 , a vibration unit 15 , a communication unit 16 , a connector 17 , an operation button 18 , a sensor unit 19 , a storage unit 20 , and a controller 21 .
  • the loudspeaker 14 is provided at a predetermined position in the frame 6 .
  • the configuration of the loudspeaker 14 is not limited.
  • the loudspeaker 14 capable of outputting stereo sound, monaural sound, or the like may be used as appropriate.
  • the vibration unit 15 is provided inside the frame 6 and generates a vibration.
  • the vibration unit 15 is any vibration motor capable of generating a notifying vibration, for example. By driving the vibration unit 15 , a sense of touch can be presented to the user 3 .
  • the vibration unit 15 functions as an embodiment of a tactile presentation unit.
  • the communication unit 16 is a module for network communication or near-field communication with the other device.
  • For example, a wireless LAN communication module such as WiFi or a Bluetooth (registered trademark) module is provided.
  • the connector 17 is a terminal for connection with the other device.
  • For example, a universal serial bus (USB) terminal or a high-definition multimedia interface (HDMI) (registered trademark) terminal is provided.
  • a charging terminal of a charging stand (cradle) is connected to the connector 17 for charging.
  • the operation button 18 is provided at a predetermined position in the frame 6 , for example.
  • the operation button 18 enables power-on/off operations and operations related to various functions of the HMD 2 , such as image display and audio output-related functions and a network communication function.
  • the sensor unit 19 includes a 9-axis sensor 22 , a GPS 23 , a ranging sensor 24 , and a microphone 25 .
  • the 9-axis sensor 22 includes a 3-axis acceleration sensor, a 3-axis gyro sensor, and a 3-axis compass sensor.
  • the 9-axis sensor 22 is capable of detecting accelerations, angular velocities, and directions of the HMD 2 in three axes.
  • an inertial measurement unit (IMU) sensor having any configuration may be used.
  • the GPS 23 acquires information about a current position of the HMD 2 .
  • Detection results of the 9-axis sensor 22 and the GPS 23 are used for detecting attitude and position of the user 3 (HMD 2 ) and a movement (motion) of the user 3 , for example.
  • These sensors are provided at, for example, predetermined positions in the frame 6 .
  • the ranging sensor 24 is capable of acquiring three-dimensional information (distance to a detection target).
  • Examples of the ranging sensor 24 include light detection and ranging/laser imaging detection and ranging (LiDAR), a laser ranging sensor, a stereo camera, a time-of-flight (ToF) sensor, an ultrasonic sensor, and a structured-light ranging sensor.
  • Alternatively, a sensor having both the functions of an image sensor and a ranging sensor may be used.
  • the ranging sensor 24 is mounted, for example, using the front side of the user 3 as a detection direction. That is, the ranging sensor 24 is mounted to be capable of measuring a distance in the real space included in the field of view of the user 3 .
  • the present technology is not limited thereto, and the ranging sensor 24 may be mounted to be capable of measuring a distance in a 360-degree entire periphery around the user 3 set as a center.
  • the microphone 25 detects information about a sound in the periphery of the user 3 .
  • the microphone 25 detects, for example, a speech uttered by the user 3 as appropriate. This enables the user 3 to, for example, enjoy an AR experience during phone call or input an operation on the HMD 2 by voice input.
  • the type of sensor provided as the sensor unit 19 is not limited. Any sensor may be provided. For example, a thermal sensor and a humidity sensor capable of measuring temperature and humidity of an environment where the HMD 2 is used may be provided. Moreover, a biometric sensor capable of detecting biometric information of the user 3 may be provided. Examples of the biometric sensor include an electroencephalography sensor, an electromyography sensor, a pulse sensor, a sweat sensor, a thermal sensor, a blood flow sensor, and a body movement sensor.
  • the inside camera 9 and the outside camera 10 can also be considered as a part of the sensor unit 19 .
  • the storage unit 20 is a storage device such as a nonvolatile memory. Examples of the storage unit include a hard disk drive (HDD) and a solid state drive (SSD). In addition, any computer-readable non-transitory storage medium may be used.
  • the storage unit 20 stores a control program for comprehensively controlling operations of the HMD 2 .
  • the storage unit 20 stores various types of information about an AR application for providing the AR space.
  • the storage unit 20 stores, for example, various types of information and data according to the present technology, e.g., content data such as the virtual object and the virtual sound.
  • the storage unit 20 stores a hold determination distance, a release determination distance, and the like to be described later.
  • a method for installing the control program, the content data, and the like to the HMD 2 is not limited.
  • the AR application according to the present technology may be provided as a cloud service.
  • the HMD 2 installs an application program for configuring the AR space from a content provider server in a network.
  • the present technology is not limited to such an aspect.
  • the controller 21 controls the operation of each block of the HMD 2 .
  • the controller 21 has hardware circuits necessary for a computer, e.g., a CPU and memories (RAM, ROM).
  • the CPU executes a program according to the present technology stored in the storage unit 20 or the memory, thereby executing various types of processing.
  • Examples of the controller 21 include a programmable logic device (PLD) such as a field programmable gate array (FPGA) and another device such as an application specific integrated circuit (ASIC).
  • the CPU of the controller 21 executes the program according to the present technology (e.g., an application program), thereby achieving, as functional blocks, an information acquisition unit 26 , an object recognition unit 27 , a hold determination unit 28 , a release determination unit 29 , a notification control unit 30 , an AR reproduction control unit 31 , and a sending control unit 32 .
  • these functional blocks execute an information processing method according to the present embodiment. It should be noted that dedicated hardware such as an integrated circuit (IC) may be used as appropriate for achieving each functional block.
  • the information acquisition unit 26 is capable of, for example, acquiring various types of information from each unit of the HMD 2 and outputting them to each functional block of the controller 21 .
  • the information acquisition unit 26 is capable of, for example, acquiring image information captured by the inside camera 9 and the outside camera 10 , detection results (sensor information) of the respective sensors of the sensor unit 19 , and the like.
  • the information acquisition unit 26 is also capable of acquiring various types of information received from the other device via the communication unit 16 .
  • the information acquisition unit is also capable of receiving various types of information about the AR application that the user 3 experiences from a predetermined server in the network, for example.
  • the information acquisition unit may be capable of acquiring, for example, images from cameras provided in walls, poles, etc. in the real world for which the AR world has been created and sensor information of ranging sensors, etc.
  • the object recognition unit 27 is capable of executing recognition processing on image information captured by the inside camera 9 and the outside camera and detection results from the respective sensors of the sensor unit 19 , thereby recognizing the actual object 4 and outputting a recognition result (recognition information).
  • the recognition result by the object recognition unit 27 includes any information, e.g., a position of the actual object 4 , a state of the actual object 4 , and a motion of the actual object 4 .
  • the object recognition unit 27 is capable of outputting various types of information, e.g., an amount of movement of the actual object 4 , a distance of the actual object 4 from a predetermined position, an attitude of the actual object 4 , or the presence/absence of an input operation by the actual object 4 as the recognition result.
  • a region (actual object region) determined to include the actual object 4 is extracted from, for example, two-dimensional image data, three-dimensional depth image data, or point cloud data as the recognition processing.
  • a predetermined recognition algorithm is executed by inputting data about the extracted actual object region and a recognition result is output.
  • a recognition algorithm may be executed by inputting full two-dimensional image data, three-dimensional depth image data, or the like and a recognition result related to the actual object 4 may be output.
  • recognition processing using a rule-based algorithm is executed as the recognition processing.
  • recognition information can be generated by, for example, processing of executing matching processing or the like with a model image of the actual object and processing of determining a position of the actual object region on the data with a marker image or the like.
  • the recognition information may be generated by referring to table information from the actual object region data.
  • any recognition processing using a rule-based algorithm may be employed.
  • recognition processing using a machine learning algorithm may be executed as the recognition processing.
  • Any machine learning algorithm using, for example, a deep neural network (DNN) can be used.
  • estimation information can be generated by, for example, inputting actual object region data and executing a machine learning algorithm, and those can be output as a recognition result.
  • the use of artificial intelligence (AI) for deep learning can improve the object recognition estimation accuracy.
  • a label for recognition information wished to be acquired is set to actual object region data for learning, and data set for learning is generated.
  • a program with learned parameters is generated as a learned model by learning based on a machine learning algorithm with the data set for learning.
  • the learned model is used for outputting a recognition result in response to the input of the actual object region data.
  • a three-dimensional position of each feature point in the actual object may be determined by inputting three-dimensional information of the actual object region.
  • pose estimation may be executed with a machine learning algorithm in a case where the actual object is an entire body or a body part. It should be noted that the pose estimation is also called bone estimation or skeleton estimation.
  • any algorithm may be used for the recognition processing.
  • a machine learning algorithm may be applied to any processing in the present disclosure.
  • the object recognition unit 27 defines a coordinate system with respect to a space in the effective field of view on which the virtual object 5 can be superimposed.
  • For example, a coordinate value (e.g., an XYZ coordinate value) of an absolute coordinate system (world coordinate system) may be used as position information. Alternatively, a coordinate value (e.g., an xyz coordinate value or a uvd coordinate value) of a relative coordinate system using a predetermined point as a basis (point of origin) may be used. In a case of using the relative coordinate system, any point of origin may be set as the basis.
  • For example, a relative coordinate system is defined having, as its origin, world coordinates or the like of a predetermined portion of the table 4 a as the actual object 4 shown in FIG. 1 .
  • Information about position, orientation, and the like of the actual object 4 located within the effective field-of-view is acquired by using the defined coordinate system as appropriate.
  • any other method may be used as a method of defining the position information.
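  • As a simple illustration of the absolute and relative coordinate systems described above, the following sketch converts a position between a world frame and a relative frame with a chosen origin; the assumption that the two frames share the same orientation is a simplification introduced here.

```python
# A minimal sketch of expressing a position either in the absolute (world)
# coordinate system or in a relative coordinate system whose point of origin
# is a predetermined point (e.g., a portion of the table 4a). For simplicity,
# the two frames are assumed to share the same orientation.

def world_to_relative(point_world, origin_world):
    return tuple(p - o for p, o in zip(point_world, origin_world))

def relative_to_world(point_relative, origin_world):
    return tuple(p + o for p, o in zip(point_relative, origin_world))
```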
  • the object recognition unit 27 may estimate a self-position of the user 3 (HMD 2 ).
  • the self-position includes position and attitude of the HMD 2 .
  • By the self-position estimation, a position of the HMD 2 and attitude information regarding which direction the HMD 2 faces can be calculated.
  • the self-position of the HMD 2 is calculated on the basis of, for example, detection results from the sensor unit 19 and images captured by the inside camera 9 and the outside camera 10 .
  • position coordinates in a three-dimensional coordinate system (XYZ coordinate system) defined by the object recognition unit 27 are calculated as the self-position of the HMD 2 .
  • For example, a pitch angle, a roll angle, and a yaw angle about a predetermined reference axis extending on the front side of the user 3 (HMD 2 ) are calculated as the attitude, with the X-axis set as a pitch axis, the Y-axis set as a roll axis, and the Z-axis set as a yaw axis.
  • An algorithm for estimating the self-position of the HMD 2 is also not limited. Any algorithm such as simultaneous localization and mapping (SLAM) may be used. In addition, any machine learning algorithm and the like may be used.
  • Three-dimensional coordinates of the periphery may be defined on the basis of the estimated self-position of the user 3 (HMD 2 ).
  • a self-position estimation unit may be configured as a functional block different from the object recognition unit 27 .
  • the object recognition unit 27 executes the recognition processing by setting the hands of the users 3 a and 3 b as targets. Then, the object recognition unit 27 acquires various types of information related to the hands. It should be noted that the hands include fingers.
  • the object recognition unit 27 is capable of acquiring, as the recognition result, for example, positions of the right hand and the left hand (including a position relationship between them), attitudes of the right hand and the left hand (including orientations of the hands), motions of the right hand and the left hand (including movement velocity), the presence/absence of an input operation with the right hand and the left hand, and the like.
  • the object recognition unit 27 is capable of acquiring various types of information related to the fingers of each of the right hand and the left hand.
  • the object recognition unit 27 is capable of acquiring various types of information such as positions (including a position relationship between them), attitudes (including orientations of the fingers), and motions (including movement velocity), for example, with respect to each of the thumb finger, the index finger, the middle finger, the ring finger, and the little finger.
  • the object recognition unit 27 is capable of determining any input operation such as a touch operation, a drag operation, a scroll operation, and a pinch operation.
  • the object recognition unit 27 is capable of determining a gesture such as “rock” (a state in which the hand is closed), “scissors” (a state in which only the index finger and the middle finger are extended), “paper” (a state in which the hand is opened), and “pistol” (a state in which only the index finger and the thumb finger are extended).
  • the object recognition unit 27 is also capable of acquiring information about directions in which the finger pads are directed, whether each joint of the finger is extended or bent, and by what angle the joint is bent in a case where it is bent with respect to each of the thumb finger, the index finger, the middle finger, the ring finger, and the little finger.
  • the hold determination unit 28 and the release determination unit 29 will be described later.
  • the notification control unit 30 controls the operation of each device in the HMD 2 to notify the user 3 of various types of information.
  • the notification control unit 30 is capable of notifying the user 3 of the information by, for example, presentation of a sense of touch, display of a virtual image, or output of a sound.
  • the notification control unit 30 controls the operation of the vibration unit 15 to execute the notification of the information by the presentation of a sense of touch.
  • the notification control unit 30 controls the left and right eye displays 8 a and 8 b to execute the notification of the information by the display of a virtual image.
  • the notification control unit 30 controls the loudspeaker 14 to execute the notification of the information by the output of a sound.
  • any method may be employed as a method of notifying the user 3 of the information.
  • a light source device such as an LED may be mounted on the HMD 2 and lighting of the device may be controlled.
  • the AR reproduction control unit 31 controls the reproduction of the virtual content to the user 3 .
  • For example, how the virtual object 5 will move, how the virtual sound will be heard, and the like are determined in accordance with the world view of the AR world. Then, the virtual object 5 is displayed on the see-through display 8 so as to achieve the determined contents. Moreover, the loudspeaker 14 outputs the virtual sound.
  • a display position of the virtual object 5 is calculated on the basis of, for example, the three-dimensional coordinates defined by the object recognition unit 27 .
  • the calculated display position (three-dimensional coordinates) is converted into two-dimensional coordinates (display coordinates on the see-through display 8 ) by a homography or the like.
  • the virtual object 5 is displayed at the converted display coordinates. This achieves an AR space where the virtual object 5 is located at a desired position in the real space.
  • a generation position of the virtual sound is calculated on the basis of, for example, the three-dimensional coordinates defined by the object recognition unit 27 .
  • An AR space where the virtual sound can be heard from a desired position (desired direction) in the real space is achieved by controlling the loudspeaker 14 to adjust localization of the sound.
  • a specific algorithm for reproducing the virtual content is not limited. Any control may be executed.
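  • As an illustration of converting a calculated three-dimensional display position into two-dimensional display coordinates, a pinhole-style projection is sketched below; it stands in for the homography mentioned above, and the intrinsic parameters are hypothetical values.

```python
# A minimal sketch of projecting a 3D display position into 2D display
# coordinates. A pinhole-style projection is used here in place of the
# homography mentioned above; fx, fy, cx, cy are hypothetical intrinsics.

def project_to_display(point_camera, fx=500.0, fy=500.0, cx=640.0, cy=360.0):
    """point_camera: (X, Y, Z) in the display/camera coordinate system, with Z > 0."""
    x, y, z = point_camera
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v)
```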
  • the sending control unit 32 is capable of sending various types of information to the other device via the communication unit 16 .
  • the sending control unit 32 is capable of sending to the other device the information and the like stored in the storage unit 20 , for example, the information acquired by the information acquisition unit 26 , the recognition result acquired by the object recognition unit 27 , and the display position of the virtual object 5 calculated by the AR reproduction control unit 31 .
  • the HMDs 2 a and 2 b illustrated in FIG. 1 are capable of sending and receiving various types of information and data to/from each other.
  • the hold determination unit 28 corresponds to an embodiment of a hold determination unit according to the present technology.
  • the release determination unit 29 corresponds to an embodiment of a cancel determination unit according to the present technology.
  • the storage unit 20 corresponds to an embodiment of a storage unit according to the present technology.
  • FIG. 4 is a view for describing a basic operation of hold determination and release determination on the virtual object 5 according to the present embodiment.
  • the hold determination is processing of determining whether or not the virtual object 5 has been pinched and held.
  • the release determination is processing of determining whether or not the pinched and held virtual object 5 has been released.
  • the release determination can also be said to be processing of determining whether or not the hold of the virtual object 5 has been cancelled.
  • the pinch-and-hold operation can also be said to be grip.
  • the expression “pinch/hold” can also be used.
  • the hold determination can also be said to be grip determination.
  • the hold determination can also be said to be pinch determination/hold determination.
  • a distance D between a first hold-side object 34 and a second hold-side object 35 that set the virtual object 5 as a pinch-and-hold target is calculated.
  • the first hold-side object 34 and the second hold-side object 35 that set the virtual object 5 as the pinch-and-hold target typically correspond to two objects that attempt to pinch and hold the virtual object 5 .
  • the present technology is not limited thereto, and for example two objects set in advance to hold the virtual object 5 may be employed.
  • what objects are set as the first hold-side object 34 and the second hold-side object 35 is not limited.
  • each of the first hold-side object 34 and the second hold-side object 35 may be the actual object 4 or may be the virtual object 5 .
  • each of the first hold-side object 34 and the second hold-side object 35 may be a pair of the actual object 4 and the virtual object 5 .
  • respective positions of the first hold-side object 34 and the second hold-side object 35 are calculated at a predetermined frame rate. Accordingly, the distance D can be calculated for each frame.
  • a difference between a distance D acquired for the current frame and a distance D acquired in a past frame preceding it by several frames is divided by a time equivalent to a predetermined number of frames. Accordingly, the rate of decrease of the distance D can be calculated.
  • another algorithm may be used.
  • a nearly zero value is set as the predetermined threshold value.
  • zero may be set as the predetermined threshold value.
  • Any threshold value with which it can be determined that the decrease in distance D between the first hold-side object 34 and the second hold-side object 35 that attempt to pinch and hold the virtual object 5 has stopped, i.e., the movement of the first hold-side object 34 and the second hold-side object 35 has stopped may be employed.
  • each of the first hold-side object 34 and the second hold-side object 35 can slightly move (e.g., shake). Therefore, setting the threshold value to be slightly larger than zero can prevent such a slight movement or the like from influencing the hold determination.
  • the threshold value slightly larger than zero can also be said to be a threshold value with which it can be determined that the movement of the first hold-side object 34 and the second hold-side object 35 has substantially stopped.
  • the rate of decrease of the distance D can also be said to be finger-to-finger velocity.
  • velocity in a direction in which the distance D decreases and velocity in a direction in which the distance D increases may be distinguished from each other with positive and negative reference signs, such that those can be both calculated.
  • a rate of change of the distance D can also be referred to as the finger-to-finger velocity.
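  • The finger-to-finger distance D and the signed finger-to-finger velocity described above can be sketched as follows; the function names, the fixed frame interval, and the frame rate are assumptions.

```python
# A minimal sketch of the quantities described above: the finger-to-finger
# distance D and a signed finger-to-finger velocity estimated from a value of
# D several frames in the past. Names and numeric values are illustrative.

import math

def finger_distance(pos_a, pos_b):
    """Euclidean distance D between the two hold-side objects (e.g., finger pads)."""
    return math.dist(pos_a, pos_b)

def finger_to_finger_velocity(distance_now, distance_past, frames_elapsed, frame_rate=60.0):
    """Signed rate of change of D: negative while the fingers approach each other
    (D decreasing) and positive while they separate. The rate of decrease used
    for the hold determination is the magnitude of this value while D decreases."""
    dt = frames_elapsed / frame_rate
    return (distance_now - distance_past) / dt
```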
  • B of FIG. 4 is a schematic view showing a state when it is determined that the virtual object 5 has been held.
  • In this state, each of the first hold-side object 34 and the second hold-side object 35 is held in contact with the virtual object 5 .
  • Each of the first hold-side object 34 and the second hold-side object 35 can be slightly spaced apart from the virtual object 5 when it is actually determined that the virtual object 5 has been held.
  • each of the first hold-side object 34 and the second hold-side object 35 can also slightly overlap the virtual object 5 . That is, the first hold-side object 34 and the second hold-side object 35 can also slightly cave in the virtual object 5 .
  • a distance between the first hold-side object 34 and the second hold-side object 35 when it is determined that the virtual object 5 has been held is first stored as a hold determination distance D 1 as shown in B of FIG. 4 .
  • Whether or not the hold of the virtual object 5 by the first hold-side object 34 and the second hold-side object 35 has been cancelled is determined on the basis of the stored hold determination distance D 1 .
  • A distance obtained by adding a predetermined threshold value to the hold determination distance D 1 is used as a reference distance for the hold cancel determination, i.e., the release determination (hereinafter, referred to as a release determination distance D 2 ).
  • the user 3 can pinch and hold the virtual object 5 with his or her hand(s) (fingers). Moreover, the user 3 can also release the pinched and held virtual object 5 .
  • A case where the user 3 pinches and holds the virtual object 5 with two fingers will be taken as an example. That is, the user 3 can hold the virtual object 5 with any two of a total of ten fingers: the thumb finger, the index finger, the middle finger, the ring finger, and the little finger of the right hand and the thumb finger, the index finger, the middle finger, the ring finger, and the little finger of the left hand.
  • FIG. 5 is a flowchart showing a processing example of the hold determination/release determination according to the present embodiment.
  • FIGS. 6 to 8 are schematic views for describing each step shown in FIG. 5 .
  • the processing shown in FIG. 5 is processing executed by the HMD 2 of the user 3 . Moreover, the processing shown in FIG. 5 is repeated at a predetermined frame rate, for example. As a matter of course, the present technology is not limited to the case where the processing is executed for each frame.
  • the hold determination unit 28 acquires a position of each finger of the user 3 (in Step 101 ). For example, on the basis of a recognition result by the object recognition unit 27 , the hold determination unit 28 is capable of acquiring a position of each finger of the user 3 .
  • a position of a finger pad of a DIP joint is acquired as the finger position.
  • a position of a center of weight of each finger may be acquired as the finger position.
  • information for defining the finger position is not limited.
  • the hold determination unit 28 determines whether or not the user 3 has held the virtual object 5 (in Step 102 ).
  • In a case where it is determined that the virtual object 5 has not been held (No in Step 102 ), the hold determination unit 28 determines whether or not the user attempts to pinch and hold the virtual object 5 with the two fingers (in Step 103 ).
  • Steps 102 and 103 can be executed, for example, on the basis of a recognition result by the object recognition unit 27 .
  • Some variations of the determination in Step 103 will be described. One of the following variations may be employed or a plurality of variations may be employed in combination.
  • the user directs finger pads S 1 of a thumb finger 37 and an index finger 38 of the right hand towards the virtual object 5 .
  • the user directs the finger pad S 1 of the index finger 38 only towards the virtual object 5 .
  • the user directs portions of the index finger 38 and a middle finger 39 of the right hand, which face each other, towards the virtual object 5 . That is, a portion 38 a of the index finger 38 , which faces the middle finger 39 , and a portion 39 a of the middle finger 39 , which faces the index finger 38 , are directed towards the virtual object 5 .
  • a position of the finger pad S 1 of the DIP joint of each finger is acquired as a position of each finger.
  • the distance to the virtual object 5 from the position of the pad S 1 of the thumb finger 37 is a distance D 3 between the thumb finger 37 and the virtual object 5 and is smaller than the predetermined threshold value.
  • the distance to the virtual object 5 from the position of the pad S 1 of the index finger 38 is a distance D 3 between the index finger 38 and the virtual object 5 and is smaller than the predetermined threshold value.
  • the threshold value related to the distance may be about 3 cm, for example.
  • the present technology is not limited thereto, and any value may be set as the threshold value related to the distance.
  • the collision determination can be executed on the basis of a collider 43 set to the virtual object 5 , for example, as shown in B of FIG. 7 .
  • colliders may be respectively set to the thumb finger 37 and the index finger 38 and the collision determination may be executed by the use of these colliders.
  • any algorithm may be used for the determination in Step 103 .
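  • One possible form of the Step 103 check described above (a finger pad directed towards the virtual object and within a distance threshold of about 3 cm) is sketched below; the vector helpers, the dot-product facing test, and the thresholds are assumptions, not the patent's algorithm.

```python
# A minimal sketch of one candidate test for Step 103: a finger is a hold
# candidate when its pad S1 faces the virtual object and lies within a
# distance threshold (about 3 cm in the example above). Helpers, thresholds,
# and the facing test are illustrative assumptions.

import math

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def is_hold_candidate(pad_position, pad_normal, object_center,
                      distance_threshold=0.03, facing_threshold=0.5):
    """pad_normal: unit vector pointing outward from the finger pad S1."""
    to_object = _sub(object_center, pad_position)
    distance = math.sqrt(sum(c * c for c in to_object))
    if distance >= distance_threshold:
        return False
    if distance < 1e-6:
        return True  # the pad is effectively touching the object
    # Facing test: the pad normal points roughly towards the object.
    direction = tuple(c / distance for c in to_object)
    facing = sum(n * d for n, d in zip(pad_normal, direction))
    return facing > facing_threshold
```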
  • the hold determination unit 28 sets the two objects which attempt to pinch and hold the virtual object 5 as the first hold-side object 34 and the second hold-side object 35 shown in FIG. 4 .
  • In Step 103 , in a case where the two fingers which attempt to pinch and hold the virtual object 5 are detected, these two fingers are set as the first hold-side object 34 and the second hold-side object 35 .
  • the thumb finger 37 and the index finger 38 are set as the first hold-side object 34 and the second hold-side object 35 .
  • the hold determination unit 28 sets these two fingers as the first hold-side object 34 and the second hold-side object 35 , respectively.
  • the index finger 38 and the middle finger 39 are set as the first hold-side object 34 and the second hold-side object 35 .
  • the hold determination unit 28 sets these two fingers as the first hold-side object 34 and the second hold-side object 35 , respectively.
  • the thumb finger 37 and the index finger 38 are set as the first hold-side object 34 and the second hold-side object 35 .
  • the hold determination unit 28 sets two objects whose distance to the virtual object 5 is smaller than the predetermined threshold value as the first hold-side object 34 and the second hold-side object 35 , respectively.
  • the thumb finger 37 and the index finger 38 are set as the first hold-side object 34 and the second hold-side object 35 .
  • the hold determination unit 28 sets two objects determined to have collided with the virtual object 5 as the first hold-side object 34 and the second hold-side object 35 , respectively.
  • In a case where it is determined in Step 103 that the user does not attempt to pinch and hold the virtual object 5 with the two fingers, the processing returns to Step 102 .
  • In a case where it is determined in Step 103 that the user attempts to pinch and hold the virtual object 5 with the two fingers, the hold determination unit 28 acquires a rate of decrease of the finger-to-finger distance (the distance D between the two fingers shown in FIG. 4 ) (in Step 104 ).
  • a difference between a finger-to-finger distance D acquired for the current frame and a finger-to-finger distance D acquired several frames before is divided by a time equivalent to several frames.
  • any other method may be used as a method of calculating the rate of decrease of the finger-to-finger distance D.
  • a position of the finger pad S 1 of the DIP joint of each finger of the user 3 is acquired as the finger position.
  • a distance between the finger pads S 1 of the thumb finger 37 and the index finger 38 which attempt to pinch and hold the virtual object 5 is acquired as a finger-to-finger distance D.
  • a rate of decrease of the finger-to-finger distance D is acquired.
  • the hold determination unit 28 determines whether or not the rate of decrease of the finger-to-finger distance D becomes equal to or smaller than a predetermined threshold value (in Step 105 ). Any value may be set as the predetermined threshold value. Moreover, the hold determination unit 28 may determine whether or not the rate of decrease becomes smaller than the predetermined threshold value.
  • the hold determination unit 28 determines that the virtual object 5 has been held by the two fingers (in Step 106 ).
  • the user 3 feels the thickness of the virtual object 5 and stops the operation of making the thumb finger 37 and the index finger 38 approach each other at the finger-to-finger distance D where the user feels that the user has gripped the virtual object 5 .
  • the hold determination unit 28 is capable of determining that the virtual object 5 has been held in accordance with a hold operation made by the user 3 following his or her feeling. As a result, a high-quality virtual experience can be achieved.
  • the distance between the two fingers when it is determined that the virtual object 5 has been held is stored as the hold determination distance D 1 (see FIG. 4 ). Then, the release determination distance D 2 (see FIG. 4 ) is set on the basis of the hold determination distance D 1 (in Step 107 ).
  • the release determination distance D 2 is a distance obtained by adding a predetermined threshold value ⁇ to the hold determination distance D 1 . Any value may be set as the predetermined threshold value ⁇ .
  • the release determination distance D 2 is stored in the storage unit 20 . Then, the processing returns to Step 101 .
  • in a case where it is determined in Step 102 that the user 3 has held the virtual object 5 (Yes in Step 102), the release determination unit 29 executes the release determination.
  • the release determination unit 29 determines whether or not the finger-to-finger distance D between the two fingers is equal to or larger than the release determination distance D 2 (in Step 108 ).
  • the release determination unit 29 determines that the virtual object 5 has not been released. Then, the processing returns to Step 101 . In this case, the hold of the virtual object 5 by the two fingers is maintained.
  • the release determination unit 29 determines that the virtual object 5 has been released. That is, the release determination unit 29 determines that the hold of the virtual object 5 by the two fingers has been cancelled (in Step 109 ).
  • the hold determination distance D 1 is a finger-to-finger distance D at which the user 3 feels that the user 3 has gripped the virtual object 5.
  • the release determination distance D 2 set on the basis of the hold determination distance D 1 is also a determination distance following the feeling of the user 3 .
  • for releasing the virtual object 5, the user 3 makes an operation of extending the distance between the two fingers from the finger-to-finger distance D (hold determination distance D 1 ) at which the user feels that the user has pinched and held the virtual object 5.
  • the release determination unit 29 is capable of determining that the virtual object 5 has been released in accordance with such an operation made by the user 3 following his or her feeling. As a result, a high-quality virtual experience can be achieved.
  • the hold state is maintained until it is determined that it has been released even if the finger-to-finger distance D is increased or reduced.
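Putting Steps 105 to 109 together, a minimal sketch of the hold/release state machine could look like the following. The threshold for the rate of decrease and the margin α are placeholder values, and the caller is assumed to invoke update only while the two hold-side objects are set in Step 103.

    class HoldReleaseDeterminer:
        """Sketch of the hold determination (Steps 105-107) and the release determination (Steps 108-109)."""

        def __init__(self, rate_threshold=0.01, alpha=0.01):
            self.rate_threshold = rate_threshold         # threshold for the rate of decrease (Step 105)
            self.alpha = alpha                           # margin added to D1 to obtain D2 (Step 107)
            self.holding = False
            self.hold_determination_distance = None      # D1
            self.release_determination_distance = None   # D2

        def update(self, finger_distance, rate_of_decrease):
            if not self.holding:
                # Steps 105/106: the fingers have stopped approaching each other
                if rate_of_decrease <= self.rate_threshold:
                    self.holding = True
                    self.hold_determination_distance = finger_distance
                    self.release_determination_distance = finger_distance + self.alpha
            else:
                # Steps 108/109: the fingers have been opened beyond D2
                if finger_distance >= self.release_determination_distance:
                    self.holding = False
                    self.hold_determination_distance = None
                    self.release_determination_distance = None
            return self.holding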
  • the positions of the respective fingers may not be acquired and may be lost.
  • the virtual object 5 and the hand holding it may depart from the effective field-of-view for example when the user 3 swings around the hand holding the virtual object 5 .
  • the position of each finger may be lost.
  • execution of the release determination by the release determination unit 29 is limited. In other words, the release determination is not executed.
  • a finger-to-finger distance D is calculated anew and the release determination is executed after the lost finger position is restored.
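A small sketch of this guard, assuming the hypothetical helper compute_finger_distance, the rate_of_decrease function sketched earlier, and a determiner object like the one sketched above:

    def try_release_determination(finger_positions, determiner):
        """Keep the current hold state while any finger position is lost (a sketch)."""
        if any(position is None for position in finger_positions):
            # tracking is lost, e.g., the hand left the effective field of view:
            # do not execute the release determination until tracking is restored
            return determiner.holding
        distance = compute_finger_distance(*finger_positions)  # hypothetical helper
        return determiner.update(distance, rate_of_decrease(distance))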
  • the AR reproduction control unit 31 controls reproduction of the virtual content to the user 3 as appropriate on the basis of a determination result by the hold determination unit 28 and a determination result by the release determination unit 29 .
  • a virtual image in which the virtual object 5 held by the fingers of the user 3 moves in accordance with a movement of the hand of the user 3 is displayed.
  • a virtual sound is output from the virtual object 5 held by the hand of the user 3 .
  • the threshold value may be adjusted as appropriate on the basis of the movement of the hand (fingers) of the user 3 .
  • the threshold value in Step 105 is set to be large.
  • the threshold value in Step 105 is set to be small.
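For illustration only, such an adjustment could scale the Step 105 threshold with the measured hand speed; whether a faster movement should give a larger or a smaller threshold is a design decision not stated here, so the positive gain below is purely an assumption.

    def adjusted_rate_threshold(base_threshold, hand_speed, gain=0.5):
        """Scale the Step 105 threshold with the hand speed (direction of scaling is an assumption)."""
        return base_threshold * (1.0 + gain * hand_speed)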
  • the determination as to whether the user attempts to hold the virtual object 5 precedes the hold determination. Accordingly, it is possible to prevent the hold determination from being executed on two objects that do not attempt to hold the virtual object 5. As a result, the usability for the user 3 can be improved.
  • FIGS. 9 and 10 are schematic views showing other examples of the hold operation by the user 3 .
  • the present technology can also be applied in a case where the virtual object 5 is pinched and held by an index finger 38 R of the right hand and an index finger 38 L of the left hand as shown in FIG. 9 .
  • the present technology can also be applied in a case where the virtual object 5 has been held by any finger of the right hand and any finger of the left hand.
  • the same side-surface portion of the virtual object 5 may be held by a plurality of fingers.
  • a lower portion of the virtual object 5 is held by the thumb finger 37 and an upper portion of the virtual object 5 is held by three fingers, i.e., the index finger 38 , the middle finger 39 , and a ring finger 40 .
  • the first hold-side object 34 and the second hold-side object 35 shown in FIG. 4 are set in any combination, for example as follows.
  • the first hold-side object 34 . . . one or more predetermined fingers of the user 3 .
  • the second hold-side object 35 . . . one or more other fingers different from the one or more predetermined fingers of the user 3 .
  • the thumb finger 37 of the user 3 and one or more other fingers (the index finger 38 , the middle finger 39 , and the ring finger 40 ) of the same hand as the thumb finger 37 are set as the first hold-side object 34 and the second hold-side object 35 .
  • the number of fingers that pinch and hold the virtual object 5 is set in advance.
  • a setting is made so that the virtual object 5 should be held by the thumb finger 37 and the three other fingers.
  • the index finger 38 , the middle finger 39 , and the ring finger 40 may be specified as the three other fingers.
  • in Step 103 of FIG. 5 , it is determined that the user attempts to pinch and hold the virtual object 5, for example, in a case where all the finger pads S 1 of the thumb finger 37 and the three other fingers have been directed towards the virtual object 5.
  • in a case where the index finger 38, the middle finger 39, and the ring finger 40 have been specified as the three other fingers, it is determined that the user attempts to pinch and hold the virtual object 5 when the user has directed the finger pads S 1 of all of the thumb finger 37, the index finger 38, the middle finger 39, and the ring finger 40 towards the virtual object 5.
  • in a case where the index finger 38, the middle finger 39, and the ring finger 40 have been specified as the three other fingers, it is determined that the user attempts to pinch and hold the virtual object 5 when the distance to the virtual object 5 from all of the thumb finger 37, the index finger 38, the middle finger 39, and the ring finger 40 is smaller than the predetermined threshold value.
  • in a case where the index finger 38, the middle finger 39, and the ring finger 40 have been specified as the three other fingers, it is determined that the user attempts to pinch and hold the virtual object 5 when all of the thumb finger 37, the index finger 38, the middle finger 39, and the ring finger 40 are determined to have collided with the virtual object 5.
  • the hold determination of the virtual object 5 can be executed by using, for example, a position of a center of weight of the plurality of fingers as a reference.
  • Steps 104 and 105 in FIG. 5 can be executed by setting a distance between a position of a center of weight of the index finger 38 , the middle finger 39 , and the ring finger 40 and a position of the thumb finger 37 as the finger-to-finger distance D.
  • a center of weight of three positions i.e., the position of the index finger 38 (e.g., the position of the finger pad S 1 ), the position of the middle finger 39 (e.g., the position of the finger pad S 1 ), and the position of the ring finger 40 (e.g., the position of the finger pad S 1 ) may be employed as the position of the center of weight of the index finger 38 , the middle finger 39 , and the ring finger 40 .
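For example, the center-of-weight reference could be computed as follows (a sketch; a simple unweighted centroid of the finger pad positions is assumed):

    import numpy as np

    def multi_finger_distance(thumb_pad_position, other_finger_pad_positions):
        """Distance between the thumb pad and the centroid of the other fingers' pad positions."""
        centroid = np.mean(np.asarray(other_finger_pad_positions, dtype=float), axis=0)
        return float(np.linalg.norm(np.asarray(thumb_pad_position, dtype=float) - centroid))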
  • the present technology is not limited thereto.
  • Steps 104 and 105 in FIG. 5 may be executed by setting a distance between the position of the selected representative finger and the position of the thumb finger 37 as the finger-to-finger distance D.
  • a setting may be made so that the hold determination always uses the position of the index finger 38 . That is, the finger to be used for the determination may be specified.
  • FIG. 11 is a schematic view for describing another embodiment of the hold determination.
  • the thumb finger 37 and one or more other fingers of the same hand as the thumb finger 37 are set as the first hold-side object 34 and the second hold-side object 35 .
  • the number of one or more other fingers is set on the basis of a length L of the virtual object 5 . That is, the number of fingers necessary for the hold determination of the virtual object 5 is specified in accordance with the length L of the virtual object 5 .
  • the size of a hold target surface 41 to be held by the one or more other fingers, measured in the direction from the index finger 38 towards the little finger, is defined as the length L of the virtual object 5.
  • the present technology is not limited thereto.
  • the number of other fingers that hold the virtual object 5 together with the thumb finger 37 is specified to be one.
  • the hold determination and the release determination are executed on the thumb finger 37 and the single other finger.
  • although an example in which the index finger 38 is used is shown in A of FIG. 11 , another finger may be used.
  • the number of other fingers that hold the virtual object 5 together with the thumb finger 37 is specified to be two.
  • the hold determination and the release determination are executed on the thumb finger 37 and the two other fingers.
  • although an example in which the index finger 38 and the middle finger 39 are used is shown in B of FIG. 11 , other fingers may be used.
  • the number of other fingers that hold the virtual object 5 together with the thumb finger 37 is specified to be three.
  • the hold determination and the release determination are executed on the thumb finger 37 and the three other fingers.
  • although an example in which the index finger 38, the middle finger 39, and the ring finger 40 are used is shown in C of FIG. 11 , other fingers may be used. The number of fingers used for these determinations can be selected, for example as sketched below.
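A sketch of such a selection, where the boundary lengths separating the one-, two-, and three-finger cases are placeholder values:

    def required_other_fingers(length_l, one_finger_max=0.03, two_finger_max=0.06):
        """Map the length L of the hold target surface 41 to the number of other fingers."""
        if length_l < one_finger_max:
            return 1   # thumb + one other finger (A of FIG. 11)
        if length_l < two_finger_max:
            return 2   # thumb + two other fingers (B of FIG. 11)
        return 3       # thumb + three other fingers (C of FIG. 11)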
  • the hold determination unit 28 may set the first hold-side object 34 and the second hold-side object 35 on the basis of the size of the virtual object 5 .
  • the hold determination unit 28 may set the first hold-side object 34 and the second hold-side object 35 on the basis of the shape of the virtual object 5 .
  • any method may be employed as a method of setting the first hold-side object 34 and the second hold-side object 35 .
  • two objects closest to the virtual object 5 may be set as the first hold-side object 34 and the second hold-side object 35 .
  • two objects focused by the user 3 may be detected by eye tracking for example and may be set as the first hold-side object 34 and the second hold-side object 35 .
  • the release determination may be executed on each of the plurality of fingers. Then, in a case where all the fingers have released the virtual object 5 , the virtual object 5 may be determined to have been released.
  • the release determination may be executed by using a position of a center of weight of the plurality of fingers as a reference. Alternatively, a representative finger may be selected from among the plurality of fingers, and the release determination may be executed by using the position of the selected representative finger as a reference.
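As a sketch of the first variation (release only when every hold-side finger has been released), assuming a per-finger release determination distance is available:

    def all_fingers_released(finger_distances, release_determination_distances):
        """Determine release only when every hold-side finger has opened beyond its D2."""
        return all(distance >= d2
                   for distance, d2 in zip(finger_distances, release_determination_distances))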
  • the notification control unit 30 notifies the user 3 of the fact that the hold of the virtual object 5 by the first hold-side object 34 and the second hold-side object 35 has been cancelled.
  • the notification control unit 30 is capable of notifying the user 3 of the fact that the virtual object 5 has been held and the fact that the hold of the virtual object 5 has been cancelled by presentation of a sense of touch, display of a virtual image, or output of a sound or the like.
  • the notification control unit 30 controls the operation of the vibration unit 15 so that it can notify of the fact that the virtual object 5 has been held and the fact that the hold of the virtual object 5 has been cancelled by presentation of a sense of touch such as a vibration.
  • the notification control unit 30 controls the left and right eye displays 8 a and 8 b so that it can notify of the fact that the virtual object has been held and the fact that the hold of the virtual object 5 has been cancelled by display of a virtual image.
  • the notification control unit 30 controls the loudspeaker 14 so that it can notify of the fact that the hold of the virtual object 5 has been cancelled by output of a sound.
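For illustration, a notification dispatcher along these lines could look as follows; the device objects and their methods (vibrate, show_text, play, and so on) are hypothetical stand-ins for the vibration unit 15, the displays 8 a and 8 b, and the loudspeaker 14.

    def notify_hold_state(held, vibration_unit, display, loudspeaker):
        """Give tactile, visual, and audio feedback on hold and release (a sketch)."""
        if held:
            vibration_unit.vibrate(pattern="hold")   # present a sense of touch
            display.show_text("OK!")                 # display a virtual image
            loudspeaker.play("grasped.wav")          # output a sound
        else:
            vibration_unit.stop()
            display.clear_text()
            loudspeaker.play("released.wav")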
  • FIG. 12 is a schematic view showing an example of notifying the user 3 .
  • Displaying the virtual image in this manner enables visual feedback to the user 3 .
  • the user 3 can easily know that the user 3 has successfully held the virtual object 5 , and a high-quality virtual experience can be achieved.
  • the release determination unit 29 has determined that the hold of the virtual object 5 by the thumb finger 37 and the index finger 38 has been cancelled.
  • the virtual representation lighting up the thumb finger 37 and the index finger 38, the virtual representation lighting up the contour of the thumb finger 37 and the index finger 38, or the display of the text image "OK!" is cancelled.
  • the user 3 can easily know that the user 3 has released the virtual object 5 .
  • Any method may be employed as a method of notifying of the fact that the virtual object 5 has been held and the fact that the hold of the virtual object 5 has been cancelled.
  • a vibration in a particular pattern may be generated for notifying the user of the fact that the virtual object 5 has been held.
  • the vibration may be stopped for notifying the user of the fact that the hold has been cancelled.
  • the voice saying “You have grasped the model car (virtual object 5 )” or “You have released the model car (virtual object 5 )” may be output from the loudspeaker 14 .
  • when the user grips the actual object 4 in the real world, the user receives haptic information fed back from the actual object 4.
  • when the user releases the actual object 4, the user perceives that the user has released it because the user no longer receives the haptic information. However, the user does not receive such a haptic feedback from the virtual object 5.
  • various devices provided in the HMD 2 perform feedback using a visual representation, a sound, and a sense of touch. Accordingly, the user can easily know the fact that the virtual object 5 has been held and the fact that the virtual object 5 has been released, and a high-quality virtual experience can be achieved.
  • the feedback to the user 3 may be performed from a device other than the HMD 2 .
  • FIG. 13 is a schematic view for describing a display example of virtual images of the first hold-side object 34 and the second hold-side object 35 .
  • A of FIG. 13 is a schematic view showing a state when it is determined that the virtual object 5 has been held.
  • the index finger 38 L of the left hand is spaced apart from the virtual object 5 .
  • the index finger 38 R of the right hand overlaps the virtual object 5 and caves in the virtual object 5 .
  • the finger(s) can be spaced apart from the virtual object 5 or can overlap the virtual object 5 .
  • the notification control unit 30 displays the virtual image of the hold-side object spaced apart from the virtual object 5 in such a manner that it is held in contact with the surface of the virtual object 5 .
  • the notification control unit 30 displays the virtual image of the hold-side object overlapping the virtual object 5 in such a manner that it is held in contact with the surface of the virtual object 5 .
  • a model image 42 L of the index finger 38 L of the left hand and a model image 42 R of the index finger 38 R of the right hand are generated as virtual images.
  • the model images 42 L and 42 R are displayed in such a manner that they are held in contact with the surface of the virtual object 5.
  • the index finger 38 L of the left hand and the index finger 38 R of the right hand that are the actual objects 4 are hidden from the effective field-of-view.
  • an image of the background or the like is made to overlap the index finger 38 L of the left hand and the index finger 38 R of the right hand.
  • any image processing method may be employed for hiding the actual object 4 .
  • Displaying the virtual images of the first hold-side object 34 and the second hold-side object 35 in this manner can overcome for example a gap between the actual hand and the virtual object 5 .
  • a high-quality virtual experience can be achieved.
  • a functional block that displays the virtual images of the first hold-side object 34 and the second hold-side object 35 may be configured to be separate from the notification control unit 30.
  • that is, the display of these virtual images may be executed as an operation separate from the operation of notifying of the fact that the virtual object 5 has been held and the fact that the hold of the virtual object 5 has been cancelled.
  • as described above, in the AR providing system 1 and the HMD 2, in a case where the rate of decrease of the distance between the first hold-side object 34 and the second hold-side object 35 becomes smaller than the predetermined threshold value, it is determined that the virtual object 5 has been held. Accordingly, a high-quality virtual experience can be achieved.
  • a method of achieving the grip (hold/pinch) operation with respect to the virtual object 5 can be a method of determining that the virtual object 5 has been gripped in a case where the two fingers are held in contact with the virtual object 5 .
  • a method determining that the virtual object 5 is released when the contact of the fingers with the virtual object 5 is cancelled can also be used. Also in this case, the user may not successfully cancel the grip even though the user feels that the user has moved his or her fingers off the virtual object 5, or the user may release the virtual object 5 even though the user does not attempt to do so.
  • a method of performing grip determination by setting a friction coefficient between the virtual object 5 and the fingers can also be used.
  • with such a method, the grip may become unstable, so the virtual object 5 can slip and drop.
  • with these methods, it is difficult to perform a natural grip operation and release operation on the virtual object 5.
  • when the user 3 attempts to grasp (pinch) the virtual object 5 having a certain size, the user 3 adjusts the finger-to-finger distance D, using his or her feeling in accordance with the size of the virtual object 5.
  • the hold determination and the release determination can be executed in accordance with the finger-to-finger distance D following the feeling of each user 3 .
  • the virtual object 5 does not slip and drop from the fingers even when the user moves the virtual object 5 after the user grasps the virtual object 5 and the user can release the virtual object 5 only by slightly opening the fingers following his or her feeling even without largely opening the fingers.
  • FIG. 14 is a schematic view showing an example of a wearable controller.
  • A of FIG. 14 is a schematic view showing an outer appearance of the wearable controller on a palm side.
  • B of FIG. 14 is a schematic view showing an outer appearance of the wearable controller on a back-of-hand side.
  • a wearable controller 44 is configured as a so-called palm vest device. The wearable controller 44 is put on the hand of the user 3 for use.
  • various devices such as cameras, a 9-axis sensor, a GPS, a distance measurement sensor, a microphone, an IR sensor, and an optical marker are mounted on predetermined positions of the wearable controller 44 .
  • the cameras are respectively placed on the palm side and the back-of-hand side so that the cameras can image the fingers.
  • Recognition processing of the hand of the user 3 can be executed on the basis of images of the fingers captured by the cameras, detection results (sensor information) from the respective sensors, a sensing result of IR light reflected by the optical marker, and the like.
  • various types of information such as positions, attitudes, and movements of the hand and the respective fingers can be acquired.
  • determination of an input operation such as a touch operation, determination of a gesture with the hand, and the like can be executed.
  • the user 3 can perform a variety of gesture inputs or an operation with respect to the virtual object 5 with his or her hand.
  • a plurality of vibrators is mounted at predetermined positions of the wearable controller 44 as the tactile presentation unit. By the vibrator being driven, senses of touch in various patterns can be presented to the hand of the user 3 . It should be noted that specific configurations of the vibrators are not limited, and any configuration may be employed.
  • the embodiment of the present technology can also be achieved with such a wearable controller 44 .
  • the AR providing system 1 as shown in FIG. 1 can also be achieved without the HMD 2 .
  • a projector for displaying the virtual object 5 on the table 4 a is disposed above or below the table 4 a .
  • the user 3 wears the wearable controller 44 shown in FIG. 14 and pinches and holds the box 5 a that is the virtual object 5 .
  • a recognition result of each finger of the user 3 can be acquired on the basis of the image and the sensor information from the wearable controller 44 .
  • the hold determination and the release determination according to the present technology as illustrated in FIG. 5 can be executed, and a high-quality AR providing system 1 can be achieved.
  • the wearable controller 44 may notify the user 3 of a result of the hold determination and a result of the release determination.
  • a haptic feedback can be performed for example by driving the plurality of vibrators.
  • a visual feedback and an audio feedback may be performed by mounting a display unit and a loudspeaker.
  • the hold determination and the release determination may be executed by another computer (e.g., server apparatus) connected to be capable of communicating with the wearable controller 44 .
  • the other computer functions as an embodiment of the information processing apparatus according to the present technology.
  • the wearable controller 44 may execute the hold determination and the release determination.
  • the wearable controller 44 functions as an embodiment of the information processing apparatus according to the present technology.
  • cooperation of the wearable controller 44 with the other computer may achieve the information processing apparatus according to the present technology and may execute the information processing method according to the present technology.
  • a VR providing system may be configured as an embodiment of the information processing system according to the present technology.
  • the user 3 wears an immersive HMD 2 configured to cover the field-of-view of the user 3 . Then, the user 3 operates a corresponding virtual object movable in accordance with a movement of the user's hand (fingers) within the VR space by moving the user's hand (fingers).
  • the corresponding virtual object may be a model image of the user's hand (fingers).
  • the present technology is not limited thereto, and the corresponding virtual object may be a hand (fingers) of a character or robot or may be a virtual image of a tool such as a crane or tongs.
  • the present technology can be applied for example in a case where the user pinches and holds the virtual object 5 with the corresponding virtual object or a case where the user releases the virtual object 5 .
  • any device may be used for achieving the virtual space.
  • the present technology is not limited to the devices such as the HMD and the projector described above, and the virtual space may be achieved by using a smartphone, a tablet terminal, a personal computer (PC), or the like.
  • the types of the actual object 4 and the virtual object 5 set as the first hold-side object 34 and the second hold-side object 35 are not limited.
  • the present technology can be applied to any object capable of executing the operation of pinching and holding the virtual object 5, such as chopsticks, tweezers, a crane, tongs, the entire hand, the entire arm, the entire leg, and the toes.
  • the hold determination distance D 1 when the user 3 pinches and holds the same virtual object 5 in the past may be used for the hold determination of the virtual object 5 .
  • in a case where the distance between the first hold-side object 34 and the second hold-side object 35 becomes equal to the hold determination distance D 1 stored when the same virtual object 5 was pinched and held in the past, the hold determination unit 28 may determine that the virtual object 5 has been held by the first hold-side object 34 and the second hold-side object 35.
  • when the user 3 grips the virtual object 5, the user 3 often actively or naturally remembers the finger-to-finger distance D at that time. Thus, it can be estimated that the grip operation for the same virtual object 5 will be likely to be performed at substantially the same finger-to-finger distance D.
  • the hold determination distance D 1 of the virtual object 5 is stored with respect to each of the plurality of users 3 .
  • the hold determination distance D 1 set for the other user 3 can be received and stored, for example, by the HMDs 2 communicating with each other.
  • the hold determination distance D 1 when the other user 3 pinches and holds the same virtual object 5 may be used for the hold determination of the virtual object 5 .
  • in a case where the distance between the first hold-side object 34 and the second hold-side object 35 becomes equal to the hold determination distance D 1 stored when the other user 3 pinched and held the same virtual object 5, the hold determination unit 28 may determine that the virtual object 5 has been held by the first hold-side object 34 and the second hold-side object 35.
  • there is also a case where the actual object 4 and the virtual object 5 of the same type are present in the virtual space.
  • the actual object 4 of the model car and the virtual object 5 displayed as a virtual image of the model car may be placed on the table 4 a .
  • the model car of the actual object 4 and the model car of the virtual object 5 have the same size.
  • the user 3 can enjoy an application that the model car of the virtual object 5 travels in various ways in the situation where the plurality of model cars is placed.
  • the distance between the first hold-side object 34 and the second hold-side object 35 (referred to as an actual hold distance) when the actual object 4 corresponding to the virtual object 5 is pinched and held may be used for the hold determination of the virtual object 5 .
  • an actual hold distance is calculated by the object recognition unit 27 of the HMD 2 when the actual object 4 is pinched and held and the actual hold distance is stored in the storage unit 20 .
  • in a case where the distance between the first hold-side object 34 and the second hold-side object 35 becomes equal to the stored actual hold distance, the hold determination unit 28 determines that the virtual object 5 has been held by the first hold-side object 34 and the second hold-side object 35.
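Since a measured finger-to-finger distance rarely matches a stored value exactly, one way to read "becomes equal to" in practice is a small tolerance band, as in the sketch below (the tolerance value is an assumption):

    def held_by_stored_distance(finger_distance, stored_distance, tolerance=0.005):
        """Treat the distances as equal when they are within a small tolerance."""
        return abs(finger_distance - stored_distance) <= tolerance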
  • the HMD 2 functions as the embodiment of the information processing apparatus according to the present technology.
  • the present technology is not limited thereto. Any computer such as a PC connected to the HMD 2 via a network or the like may achieve the embodiment of the information processing apparatus according to the present technology and may execute the information processing method according to the present technology. Moreover, cooperation of the HMD 2 with the computer in the network may achieve the embodiment of the information processing apparatus according to the present technology and may execute the information processing method according to the present technology.
  • another computer connected to the HMD 2 may achieve some or all of the functional blocks achieved by the controller 21 shown in FIG. 3 .
  • FIG. 15 is a block diagram showing a hardware configuration example of a computer 60 that can be applied to the present technology.
  • the computer 60 includes a CPU 61 , a read only memory (ROM) 62 , a RAM 63 , an input/output interface 65 , and a bus 64 for connecting them to one another.
  • a display unit 66 , an input unit 67 , a storage unit 68 , a communication unit 69 , and a drive unit 70 are connected to the input/output interface 65 .
  • the display unit 66 is a display device using liquid-crystals or EL, for example.
  • the input unit 67 is, for example, a keyboard, a pointing device, a touch panel, or another operation apparatus. In a case where the input unit 67 includes a touch panel, the display unit 66 can be integral with the touch panel.
  • the storage unit 68 is a nonvolatile storage device, and is, for example, an HDD, a flash memory, or another solid-state memory.
  • the drive unit 70 is a device capable of driving a removable recording medium 71 , e.g., an optical recording medium or a magnetic recording tape.
  • the communication unit 69 is a modem, a router, or another communication apparatus that is connectable to a LAN, a WAN, or the like and is for communicating with the other device.
  • the communication unit 69 may perform wired communication or may perform wireless communication.
  • the communication unit 69 is often used separately from the computer 60.
  • the information processing method according to the present technology is achieved by loading a program configuring the software, which has been stored in the ROM 62 or the like, to the RAM 63 and executing it.
  • the program is installed to the computer 60 via the removable recording medium 71, for example.
  • the program may be installed to the computer 60 via a global network or the like.
  • any computer-readable non-transitory storage medium may be used.
  • Cooperation of a plurality of computers connected to be capable of communicating via the network or the like may execute the information processing method and the program according to the present technology or create the information processing apparatus according to the present technology.
  • the information processing method and the program according to the present technology can be executed not only in a computer system configured by a single computer but also in a computer system in which a plurality of computers cooperatively operates.
  • the system means a set of a plurality of components (apparatuses, modules (parts), and the like) and it does not matter whether or not all the components are housed in the same casing. Therefore, both a plurality of apparatuses housed in separate casings and connected to one another via a network and a single apparatus having a plurality of modules housed in a single casing are the system.
  • Executing the information processing method and the program according to the present technology by the computer system includes both executing, by a single computer, the determination as to whether the user attempts to pinch and hold the object, the hold determination, the release determination, the notification control, and the reproduction control of the virtual content, for example, and executing these processes by different computers. Moreover, executing the respective processes by a predetermined computer includes causing another computer to execute some or all of those processes and acquiring the results.
  • the information processing method and the program according to the present technology can also be applied to a cloud computing configuration in which a plurality of apparatuses shares and cooperatively processes a single function via a network.
  • the concepts that define the shape, the size, the position relationship, the state, and the like such as “center”, “middle”, “uniform”, “equal”, the “same”, “orthogonal”, “parallel”, “symmetric”, “extending”, “axial”, “columnar”, “cylindrical”, “ring-shaped”, and “annular” are concepts including “substantially center”, “substantially middle”, “substantially uniform”, “substantially equal”, “substantially the same”, “substantially orthogonal”, “substantially parallel”, “substantially symmetric”, “substantially extending”, “substantially axial”, “substantially columnar”, “substantially cylindrical”, “substantially ring-shaped”, “substantially annular”, and the like.
  • a predetermined range e.g., ⁇ 10% range
  • states included in a predetermined range using “completely center”, “completely middle”, “completely uniform”, “completely equal”, “completely the same”, “completely orthogonal”, “completely parallel”, “completely symmetric”, “completely extending”, “completely axial”, “completely columnar”, “completely cylindrical”, “completely ring-shaped”, “completely annular”, and the like as the bases are also included.
  • the comparative expressions are expressions encompassing both a concept including a case where it is equal to A and a concept not including a case where it is equal to A.
  • “larger than A” is not limited to the case where not including “equal to A”, and also includes “A or more”.
  • “smaller than A” is not limited to “less than A”, and also includes “A or less”.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Cardiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An information processing apparatus according to an embodiment of the present technology includes a hold determination unit. The hold determination unit determines, in a case where a rate of decrease of a distance between a first hold-side object and a second hold-side object that set a virtual object as a pinch-and-hold target becomes smaller than a predetermined threshold value, that the virtual object has been held by the first hold-side object and the second hold-side object. Accordingly, a high-quality virtual experience can be achieved.

Description

CROSS REFERENCE TO PRIOR APPLICATION
This application is a National Stage Patent Application of PCT International Patent Application No. PCT/JP2022/010069 (filed on Mar. 8, 2022) under 35 U.S.C. § 371, which claims priority to Japanese Patent Application No. 2021-113612 (filed on Jul. 8, 2021), which are all hereby incorporated by reference in their entirety.
TECHNICAL FIELD
The present technology relates to an information processing apparatus, an information processing method, and a program that can be applied to creation of a virtual space such as a virtual reality (VR) space and an augmented reality (AR) space.
BACKGROUND ART
Patent Literature 1 has disclosed a technology that enables a user to make an intuitive operation on an object in a virtual reality space and enables reduction of a necessary amount of calculation for achieving such an operation.
CITATION LIST Patent Literature
    • Patent Literature 1: WO 2017/204120
DISCLOSURE OF INVENTION Technical Problem
It is thus desirable to provide a technology capable of achieving a high-quality virtual experience.
In view of the above-mentioned circumstances, it is an objective of the present technology to provide an information processing apparatus, an information processing method, and a program that are capable of achieving a high-quality virtual experience.
Solution to Problem
In order to accomplish the above-mentioned objective, an information processing apparatus according to an embodiment of the present technology includes a hold determination unit.
The hold determination unit determines, in a case where a rate of decrease of a distance between a first hold-side object and a second hold-side object that set a virtual object as a pinch-and-hold target becomes smaller than a predetermined threshold value, that the virtual object has been held by the first hold-side object and the second hold-side object.
In this information processing apparatus, it is determined that the virtual object has been held in a case where the rate of decrease of the distance between the first hold-side object and the second hold-side object becomes smaller than the predetermined threshold value. Accordingly, a high-quality virtual experience can be achieved.
The information processing apparatus may further include a storage unit and a cancel determination unit.
The storage unit stores a distance between the first hold-side object and the second hold-side object when the hold determination unit determines that the virtual object has been held, as a hold determination distance.
The cancel determination unit determines whether or not the hold of the virtual object by the first hold-side object and the second hold-side object has been cancelled on the basis of the stored hold determination distance.
The cancel determination unit may determine that the hold has been cancelled in a case where the distance between the first hold-side object and the second hold-side object becomes larger than a distance obtained by adding a predetermined threshold value to the hold determination distance.
The first hold-side object may be one or more predetermined fingers of a user. In this case, the second hold-side object is one or more other fingers different from the one or more predetermined fingers of the user.
The hold determination unit may set two objects which attempt to pinch and hold the virtual object as the first hold-side object and the second hold-side object.
The hold determination unit may set, in a case where a user has directed a finger pad of each of two fingers towards the virtual object, the two fingers as the first hold-side object and the second hold-side object, respectively.
The hold determination unit may set, in a case where a user has directed portions of two adjacent fingers, which face each other, towards the virtual object, the two fingers as the first hold-side object and the second hold-side object, respectively.
The hold determination unit may set two objects whose distance to the virtual object is smaller than the predetermined threshold value as the first hold-side object and the second hold-side object, respectively.
The hold determination unit may set two objects determined to have collided with the virtual object as the first hold-side object and the second hold-side object, respectively.
The hold determination unit may set the first hold-side object and the second hold-side object on the basis of at least one of a shape or a size of the virtual object.
The hold determination unit may set a thumb finger of a user and one or more other fingers of the same hand as the thumb finger as the first hold-side object and the second hold-side object.
The hold determination unit executes, in a case where a plurality of fingers is set as the first hold-side object or the second hold-side object, hold determination of the virtual object by using a position of a center of weight of the plurality of fingers as a reference.
The information processing apparatus may further include a notification control unit that notifies a user of at least one of the fact that the virtual object has been held by the first hold-side object and the second hold-side object or the fact that the hold of the virtual object by the first hold-side object and the second hold-side object has been cancelled.
The notification control unit may execute notification by at least one of presentation of a sense of touch, display of a virtual image, or output of a sound.
The notification control unit may display, in a case where the first hold-side object or the second hold-side object which holds the virtual object is spaced apart from the virtual object when the hold determination unit determines that the virtual object has been held, a virtual image of the hold-side object spaced apart from the virtual object in such a manner that the virtual image is in contact with the virtual object.
The information processing apparatus may further include a storage unit that stores a distance between the first hold-side object and the second hold-side object when the hold determination unit determines that the virtual object has been held, as a hold determination distance. In this case, the hold determination unit may determine, in a case where the distance between the first hold-side object and the second hold-side object becomes equal to the hold determination distance stored at the time when the same virtual object is pinched and held in the past, that the virtual object has been held by the first hold-side object and the second hold-side object.
The information processing apparatus may further include a storage unit that stores, with respect to each of a plurality of users, a distance between the first hold-side object and the second hold-side object when the hold determination unit determines that the virtual object has been held, as a hold determination distance. In this case, the hold determination unit may determine, in a case where the distance between the first hold-side object and the second hold-side object becomes equal to the hold determination distance stored at the time when another user pinches and holds the same virtual object, that the virtual object has been held by the first hold-side object and the second hold-side object.
The information processing apparatus may further include a storage unit that stores a distance between the first hold-side object and the second hold-side object when an actual object corresponding to the virtual object has been pinched and held by the first hold-side object and the second hold-side object as an actual hold distance. In this case, the hold determination unit may determine, in a case where the distance between the first hold-side object and the second hold-side object becomes equal to the actual hold distance, that the virtual object has been held by the first hold-side object and the second hold-side object.
An information processing method according to an embodiment of the present technology is an information processing method to be executed by a computer system and includes determining, in a case where a rate of decrease of a distance between a first hold-side object and a second hold-side object that set a virtual object as a pinch-and-hold target becomes smaller than a predetermined threshold value, that the virtual object has been held by the first hold-side object and the second hold-side object.
A program according to an embodiment of the present technology causes a computer system to execute the following step.
A step of determining, in a case where a rate of decrease of a distance between a first hold-side object and a second hold-side object that set a virtual object as a pinch-and-hold target becomes smaller than a predetermined threshold value, that the virtual object has been held by the first hold-side object and the second hold-side object.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 A schematic view for outlining an AR providing system according to an embodiment.
FIG. 2 A perspective view showing an outer appearance example of an HMD.
FIG. 3 A block diagram showing a functional configuration example of the HMD.
FIG. 4 A view for describing a basic operation of hold determination and release determination on a virtual object.
FIG. 5 A flowchart showing a processing example of the hold determination/release determination.
FIG. 6 A schematic view for describing each step shown in FIG. 5 .
FIG. 7 A schematic view for describing each step shown in FIG. 5 .
FIG. 8 A schematic view for describing each step shown in FIG. 5 .
FIG. 9 A schematic view showing another example of a hold operation of a user.
FIG. 10 A schematic view showing another example of the hold operation of the user.
FIG. 11 A schematic view for describing another embodiment of the hold determination.
FIG. 12 A schematic view showing an example of notifying the user.
FIG. 13 A schematic view for describing a display example of virtual images of a first hold-side object and a second hold-side object.
FIG. 14 A schematic view showing an example of a wearable controller.
FIG. 15 A block diagram showing a hardware configuration example of a computer that can be applied to the present technology.
MODE(S) FOR CARRYING OUT THE INVENTION
Hereinafter, embodiments according to the present technology will be described with reference to the drawings.
[AR Providing System]
FIG. 1 is a schematic view for outlining an AR providing system according to an embodiment of the present technology. An AR providing system 1 corresponds to an embodiment of an information processing system according to the present technology.
The AR providing system 1 includes a head-mounted display (HMD) 2.
As shown in FIG. 1 , the HMD 2 is put on the head of a user 3 for use.
The HMD 2 is an eyeglasses-type device with a see-through display. The HMD 2 is also called AR glasses.
In the present embodiment, the HMD 2 reproduces virtual content to the user 3. This can provide an AR space (virtual space) to the user 3 using the HMD 2. The user 3 can experience various AR worlds by using the HMD 2.
The reproduction of the virtual content includes displaying the virtual object to be superimposed on the real world. Moreover, the reproduction of the virtual content includes outputting virtual sounds. In addition, a smell, a sense of touch, and the like may be virtually provided to the user 3.
In the example shown in FIG. 1 , two users 3 a and 3 b respectively wear HMDs 2 a and 2 b and use this AR providing system 1.
Specifically, a table 4 a is placed as an actual object 4 in the real world. Moreover, a ball 4 b is placed on the table 4 a as an actual object.
Using positions of these actual objects 4 as references, a cubic box 5 a and a model car 5 b are placed as virtual objects 5. The model car 5 b travels between the ball 4 b as the actual object 4 and the box 5 a as the virtual object 5.
The users 3 a and 3 b can grasp and move the box 5 a as the virtual object 5 with their hands. A traveling route of the model car 5 b changes in accordance with a change in position of the box 5 a, and the users can enjoy its state.
The model car 5 b and the box 5 a are objects displayed by the HMD 2 as virtual images, and those are not actually placed on the table 4 a.
On the other hand, the users 3 a and 3 b perceive as if the model car 5 b and the box 5 a were placed on the table 4 a as shown in FIG. 1 .
Using this AR providing system 1 enables such an AR experience. As a matter of course, the AR world that can be experienced by applying the present technology is not limited, and a variety of AR worlds can be experienced.
For example, any virtual image such as computer graphics (CG) of a character and the like, a photograph, and letters can be displayed as the virtual object 5.
Moreover, any sound such as a character's voice, a siren sound, and a door close sound effect can be output as the virtual sound.
Moreover, the number of users 3 who can experience the virtual space is also not limited.
In the present embodiment, the HMD 2 functions as an embodiment of an information processing apparatus according to the present technology.
[HMD]
FIG. 2 is a perspective view showing an outer appearance example of the HMD 2.
The HMD 2 has a frame 6, a left eye lens 7 a and a right eye lens 7 b, a left eye display 8 a and a right eye display 8 b, a left eye camera 9 a and a right eye camera 9 b, and an outside camera 10.
The frame 6 has an eyeglasses-like shape and has a rim portion 11 and temples 12.
The rim portion 11 is a portion arranged in front of the left and right eyes of the user 3 and respectively supports the left eye lens 7 a and the right eye lens 7 b.
The temples 12 extend rearward towards both ears of the user 3 from both ends of the rim portion 11. End portions of the temples 12 are attached to both ears. The rim portion 11 and the temples 12 are made of a material such as a synthetic resin or metal, for example.
The left eye lens 7 a and the right eye lens 7 b are respectively arranged in front of the left and right eyes of the user 3, covering at least a part of the field of view of the user 3.
Typically, the respective lenses are designed to correct the eyesight of the user 3. As a matter of course, the present technology is not limited thereto, and so-called non-prescription lenses may be used.
The left eye display 8 a and the right eye display 8 b are see-through displays. The left eye display 8 a and the right eye display 8 b are respectively arranged to cover a partial region of the left and right eye lenses 7 a and 7 b. That is, the left and right eye displays 8 a and 8 b are respectively arranged in front of the left and right eyes of the user 3.
The left and right eye displays 8 a and 8 b respectively display left and right eye images or the like. The user 3 wearing the HMD 2 can visually recognize a real scene and images displayed on the respective displays 8 a and 8 b at the same time.
Accordingly, the user 3 can experience augmented reality (AR) or the like.
Each of the displays 8 a and 8 b displays, for example, the virtual object 5.
Examples of the left and right eye displays 8 a and 8 b include a see-through organic EL display and a liquid-crystal display (LCD).
In addition, specific configurations of the left and right eye displays 8 a and 8 b are not limited. A see-through display using any method, e.g., a display method of projecting an image on a transparent screen or a display method of displaying an image through a prism or the like may be used as appropriate.
The left eye camera 9 a and the right eye camera 9 b are provided in the frame 6 as appropriate so as to be capable of imaging the left and right eyes of the user 3. The left eye camera 9 a and the right eye camera 9 b are capable of detecting gaze information about the gaze of the user 3 or the like, for example, on the basis of left and right eye images captured by the left and right eye cameras 9 a and 9 b.
Examples of the left and right eye cameras 9 a and 9 b include a digital camera provided with an image sensor such as a complementary metal-oxide semiconductor (CMOS) sensor and a charge coupled device (CCD) sensor. Moreover, for example, an infrared camera with infrared light such as an infrared LED may be used.
Hereinafter, the left eye lens 7 a and the right eye lens 7 b will be sometimes both referred to as lenses 7 and the left eye display 8 a and the right eye display 8 b will be sometimes both referred to as see-through displays 8. Moreover, the left eye camera 9 a and the right eye camera 9 b will be sometimes both referred to as inside cameras 9. In the present embodiment, the see-through display 8 corresponds to a display unit.
The outside camera 10 is arranged in the middle of the frame 6 (rim portion 11), oriented outward (opposite to the user 3). The outside camera 10 is capable of imaging a real space included in the field of view of the user 3. Therefore, the outside camera 10 is capable of generating a captured image of the real space.
In the present embodiment, the outside camera 10 images an area on the front side as viewed from the user 3, which includes a display region of the see-through display 8. That is, the outside camera 10 images the real space to include an area that the user 3 can see through the display region. Examples of the outside camera 10 include a digital camera with an image sensor such as a CMOS sensor and a CCD sensor.
It should be noted that the area that the user 3 can see through the display region is an area where the virtual object 5 can be superimposed on the real world. In the present disclosure, the area will be referred to as an effective field of view of the user 3. The effective field of view can also be said to be an angle of view where the virtual object 5 can be displayed.
FIG. 3 is a block diagram showing a functional configuration example of the HMD 2.
As shown in FIG. 3 , the HMD 2 further includes a loudspeaker 14, a vibration unit 15, a communication unit 16, a connector 17, an operation button 18, a sensor unit 19, a storage unit 20, and a controller 21.
The loudspeaker 14 is provided at a predetermined position in the frame 6. The configuration of the loudspeaker 14 is not limited. For example, the loudspeaker 14 capable of outputting stereo sound, monaural sound, or the like may be used as appropriate.
The vibration unit 15 is provided inside the frame 6 and generates a vibration. The vibration unit 15 is any vibration motor capable of generating a notifying vibration, for example. By driving the vibration unit 15, a sense of touch can be presented to the user 3.
The vibration unit 15 functions as an embodiment of a tactile presentation unit.
The communication unit 16 is a module for network communication or near-field communication with the other device. For example, a wireless LAN module such as WiFi or a near-field communication module such as Bluetooth (registered trademark) is provided.
The connector 17 is a terminal for connection with the other device. A terminal, e.g., a universal serial bus (USB) or high-definition multimedia interface (HDMI) (registered trademark) is provided. Moreover, a charging terminal of a charging stand (cradle) is connected to the connector 17 for charging.
The operation button 18 is provided at a predetermined position in the frame 6, for example. The operation button 18 enables power-on/off operations and operations related to various functions of the HMD 2, such as image display and audio output-related functions and a network communication function.
The sensor unit 19 includes a 9-axis sensor 22, a GPS 23, a ranging sensor 24, and a microphone 25.
The 9-axis sensor 22 includes a 3-axis acceleration sensor, a 3-axis gyro sensor, and a 3-axis compass sensor. The 9-axis sensor 22 is capable of detecting accelerations, angular velocities, and directions of the HMD 2 in three axes. In addition, an inertial measurement unit (IMU) sensor having any configuration may be used.
The GPS 23 acquires information about a current position of the HMD 2. Detection results of the 9-axis sensor 22 and the GPS 23 are used for detecting attitude and position of the user 3 (HMD 2) and a movement (motion) of the user 3, for example. These sensors are provided at, for example, predetermined positions in the frame 6.
The ranging sensor 24 is capable of acquiring three-dimensional information (distance to a detection target). Examples of the ranging sensor 24 include a light detection and ranging or laser imaging detection and ranging (LiDAR) sensor, a laser ranging sensor, a stereo camera, a time of flight (ToF) sensor, an ultrasonic sensor, and a structured-light ranging sensor. Alternatively, a sensor having both functions of an image sensor and a ranging sensor may be used.
The ranging sensor 24 is mounted, for example, using the front side of the user 3 as a detection direction. That is, the ranging sensor 24 is mounted to be capable of measuring a distance in the real space included in the field of view of the user 3. The present technology is not limited thereto, and the ranging sensor 24 may be mounted to be capable of measuring a distance in a 360-degree entire periphery around the user 3 set as a center.
The microphone 25 detects information about a sound in the periphery of the user 3. The microphone 25 detects, for example, a speech uttered by the user 3 as appropriate. This enables the user 3 to, for example, enjoy an AR experience during a phone call or input an operation to the HMD 2 by voice input.
The type of sensor provided as the sensor unit 19 is not limited. Any sensor may be provided. For example, a thermal sensor and a humidity sensor capable of measuring temperature and humidity of an environment where the HMD 2 is used may be provided. Moreover, a biometric sensor capable of detecting biometric information of the user 3 may be provided. Examples of the biometric sensor include an electroencephalography sensor, an electromyography sensor, a pulse sensor, a sweat sensor, a thermal sensor, a blood flow sensor, and a body movement sensor.
The inside camera 9 and the outside camera 10 can also be considered as a part of the sensor unit 19.
The storage unit 20 is a storage device such as a nonvolatile memory. Examples of the storage unit 20 include a hard disk drive (HDD) and a solid state drive (SSD). In addition, any computer-readable non-transitory storage medium may be used.
The storage unit 20 stores a control program for comprehensively controlling operations of the HMD 2.
Moreover, the storage unit 20 stores various types of information about an AR application for providing the AR space. The storage unit 20 stores, for example, various types of information and data according to the present technology, e.g., content data such as the virtual object and the virtual sound.
Moreover, the storage unit 20 stores a hold determination distance, a release determination distance, and the like to be described later.
A method for installing the control program, the content data, and the like to the HMD 2 is not limited.
For example, the AR application according to the present technology may be provided as a cloud service. In this case, the HMD 2 installs an application program for configuring the AR space from a content provider server in a network. As a matter of course, the present technology is not limited to such an aspect.
The controller 21 controls the operation of each block of the HMD 2. The controller 21 has hardware circuits necessary for a computer, e.g., a CPU and memories (RAM, ROM). The CPU executes a program according to the present technology stored in the storage unit 20 or the memory, thereby executing various types of processing.
Examples of the controller 21 include a programmable logic device (PLD) such as a field programmable gate array (FPGA) and another device such as an application specific integrated circuit (ASIC).
In the present embodiment, the CPU of the controller 21 executes the program according to the present technology (e.g., an application program), thereby achieving, as functional blocks, an information acquisition unit 26, an object recognition unit 27, a hold determination unit 28, a release determination unit 29, a notification control unit 30, an AR reproduction control unit 31, and a sending control unit 32.
Then, these functional blocks execute an information processing method according to the present embodiment. It should be noted that dedicated hardware such as an integrated circuit (IC) may be used as appropriate for achieving each functional block.
The information acquisition unit 26 is capable of, for example, acquiring various types of information from each unit of the HMD 2 and outputting them to each functional block of the controller 21.
The information acquisition unit 26 is capable of, for example, acquiring image information captured by the inside camera 9 and the outside camera 10, detection results (sensor information) of the respective sensors of the sensor unit 19, and the like.
Moreover, the information acquisition unit 26 is also capable of acquiring various types of information received from the other device via the communication unit 16.
The information acquisition unit 26 is also capable of receiving various types of information about the AR application that the user 3 experiences from a predetermined server in the network, for example.
Moreover, the information acquisition unit 26 may be capable of acquiring, for example, images from cameras provided on walls, poles, and the like in the real world for which the AR world has been created, and sensor information of ranging sensors and the like.
The object recognition unit 27 is capable of executing recognition processing on image information captured by the inside camera 9 and the outside camera 10 and detection results from the respective sensors of the sensor unit 19, thereby recognizing the actual object 4 and outputting a recognition result (recognition information).
The recognition result by the object recognition unit 27 includes any information, e.g., a position of the actual object 4, a state of the actual object 4, and a motion of the actual object 4. The object recognition unit 27 is capable of outputting various types of information, e.g., an amount of movement of the actual object 4, a distance of the actual object 4 from a predetermined position, an attitude of the actual object 4, or the presence/absence of an input operation by the actual object 4 as the recognition result.
In the recognition processing, for example, a region (actual object region) determined to include the actual object 4 is extracted from two-dimensional image data, three-dimensional depth image data, or point cloud data. A predetermined recognition algorithm is executed by inputting data about the extracted actual object region, and a recognition result is output.
The present technology is not limited to such processing. A recognition algorithm may be executed by inputting full two-dimensional image data, three-dimensional depth image data, or the like and a recognition result related to the actual object 4 may be output.
For example, recognition processing using a rule-based algorithm is executed. With respect to the actual object region data, the recognition information can be generated by, for example, processing of executing matching with a model image of the actual object or processing of determining a position of the actual object region on the data with a marker image or the like. Alternatively, the recognition information may be generated by referring to table information from the actual object region data. In addition, any recognition processing using a rule-based algorithm may be employed.
Moreover, recognition processing using a machine learning algorithm may be executed as the recognition processing. Any machine learning algorithm using, for example, a deep neural network (DNN) can be used.
Various types of estimation information can be generated by, for example, inputting actual object region data and executing a machine learning algorithm, and those can be output as a recognition result. For example, the use of artificial intelligence (AI) for deep learning can improve the object recognition estimation accuracy.
For example, a label for the recognition information wished to be acquired is set to actual object region data for learning, and a data set for learning is generated. A program with learned parameters is generated as a learned model by learning based on a machine learning algorithm with the data set for learning. The learned model is used for outputting a recognition result in response to the input of the actual object region data.
For example, a three-dimensional position of each feature point in the actual object may be determined by inputting three-dimensional information of the actual object region. Moreover, pose estimation may be executed with a machine learning algorithm in a case where the actual object is an entire body or a body part. It should be noted that the pose estimation is also called bone estimation or skeleton estimation.
In addition, any algorithm may be used for the recognition processing.
It should be noted that a machine learning algorithm may be applied to any processing in the present disclosure.
In the present embodiment, the object recognition unit 27 defines a coordinate system with respect to a space in the effective field of view on which the virtual object 5 can be superimposed.
For example, a coordinate value (e.g., XYZ coordinate value) defined by an absolute coordinate system (world coordinate system) may be used. Alternatively, a coordinate value (e.g., xyz coordinate value or uvd coordinate value) defined by a relative coordinate system using a predetermined point as a basis (point of origin) may be used. In a case of using the relative coordinate system, any point of origin may be set as the basis.
For example, a relative coordinate system is defined having world coordinates or the like of a predetermined portion of the table 4 a as the actual object 4 shown in FIG. 1 as its origin. Information about position, orientation, and the like of the actual object 4 located within the effective field-of-view is acquired by using the defined coordinate system as appropriate. As a matter of course, any other method may be used as a method of defining the position information.
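Merely as an illustrative sketch, a position expressed in the world coordinate system can be converted into such a relative coordinate system as follows (Python; the function name is hypothetical, and any rotation of the relative coordinate axes is ignored for simplicity):

    import numpy as np

    def to_relative_coordinates(point_world, origin_world):
        # Express a world-coordinate position relative to a predetermined point of
        # the actual object 4 (e.g., a portion of the table 4a) used as the origin.
        return np.asarray(point_world, dtype=float) - np.asarray(origin_world, dtype=float)

    # Usage example (values are hypothetical):
    # finger_in_table_coords = to_relative_coordinates([1.2, 0.9, 0.4], table_corner_world)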
The object recognition unit 27 may estimate a self-position of the user 3 (HMD 2).
For example, the self-position includes position and attitude of the HMD 2. For example, the self-position estimation can calculate a position of the HMD 2 and attitude information regarding which direction the HMD 2 faces.
The self-position of the HMD 2 is calculated on the basis of, for example, detection results from the sensor unit 19 and images captured by the inside camera 9 and the outside camera 10.
For example, position coordinates in a three-dimensional coordinate system (XYZ coordinate system) defined by the object recognition unit 27 are calculated as the self-position of the HMD 2. Moreover, in a case where the X-axis is set as a pitch axis, the Y-axis is set as a roll axis, and the Z-axis is set as a yaw axis, a pitch angle, a roll angle, and a yaw angle about a predetermined reference axis extending on the front side of the user 3 (HMD 2) are calculated.
As a matter of course, specific forms and the like of the position information and the attitude information of the user 3 (HMD 2) are not limited.
An algorithm for estimating the self-position of the HMD 2 is also not limited. Any algorithm such as simultaneous localization and mapping (SLAM) may be used. In addition, any machine learning algorithm and the like may be used.
Three-dimensional coordinates of the periphery may be defined on the basis of the estimated self-position of the user 3 (HMD 2).
Moreover, a self-position estimation unit may be configured as a functional block different from the object recognition unit 27.
In the present embodiment, the object recognition unit 27 executes the recognition processing by setting the hands of the users 3 a and 3 b as targets. Then, the object recognition unit 27 acquires various types of information related to the hands. It should be noted that the hands include fingers.
The object recognition unit 27 is capable of acquiring, as the recognition result, for example, positions of the right hand and the left hand (including a position relationship between them), attitudes of the right hand and the left hand (including orientations of the hands), motions of the right hand and the left hand (including movement velocity), the presence/absence of an input operation with the right hand and the left hand, and the like.
Moreover, the object recognition unit 27 is capable of acquiring various types of information related to the fingers of each of the right hand and the left hand.
The object recognition unit 27 is capable of acquiring various types of information such as positions (including a position relationship between them), attitudes (including orientations of the fingers), and motions (including movement velocity), for example, with respect to each of the thumb finger, the index finger, the middle finger, the ring finger, and the little finger.
For example, the object recognition unit 27 is capable of determining any input operation such as a touch operation, a drag operation, a scroll operation, and a pinch operation.
Moreover, the object recognition unit 27 is capable of determining a gesture such as “rock” (a state in which the hand is closed), “scissors” (a state in which only the index finger and the middle finger are extended), “paper” (a state in which the hand is opened), and “pistol” (a state in which only the index finger and the thumb finger are extended).
Moreover, the object recognition unit 27 is also capable of acquiring information about directions in which the finger pads are directed, whether each joint of the finger is extended or bent, and by what angle the joint is bent in a case where it is bent with respect to each of the thumb finger, the index finger, the middle finger, the ring finger, and the little finger.
The hold determination unit 28 and the release determination unit 29 will be described later.
The notification control unit 30 controls the operation of each device in the HMD 2 to notify the user 3 of various types of information.
The notification control unit 30 is capable of notifying the user 3 of the information by, for example, presentation of a sense of touch, display of a virtual image, or output of a sound.
For example, the notification control unit 30 controls the operation of the vibration unit 15 to execute the notification of the information by the presentation of a sense of touch.
Moreover, the notification control unit 30 controls the left and right eye displays 8 a and 8 b to execute the notification of the information by the display of a virtual image.
Moreover, the notification control unit 30 controls the loudspeaker 14 to execute the notification of the information by the output of a sound.
In addition, any method may be employed as a method of notifying the user 3 of the information. For example, a light source device (illumination device) such as an LED may be mounted on the HMD 2 and lighting of the device may be controlled.
The AR reproduction control unit 31 controls the reproduction of the virtual content to the user 3.
How the virtual object 5 will move, how the virtual sound will be heard, and the like in accordance with the world view of the AR world, for example, are determined. Then, the virtual object 5 is displayed on the see-through display 8 so as to achieve the determined contents. Moreover, the loudspeaker 14 outputs the virtual sound.
As to the display of the virtual object 5, a display position of the virtual object 5 is calculated on the basis of, for example, the three-dimensional coordinates defined by the object recognition unit 27. The calculated display position (three-dimensional coordinates) is converted into two-dimensional coordinates (display coordinates on the see-through display 8) by a homography or the like. The virtual object 5 is displayed at the converted display coordinates. This achieves an AR space where the virtual object 5 is located at a desired position in the real space.
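Although the specific conversion depends on the display and is not limited, the following sketch illustrates one generic way of converting a three-dimensional display position into two-dimensional display coordinates by a perspective projection (Python; the matrices and names are assumptions for illustration only, not necessarily the homography actually used):

    import numpy as np

    def project_to_display(point_world, view_matrix, projection_matrix, width, height):
        # Generic perspective projection of a 3-D display position to 2-D display coordinates.
        # view_matrix / projection_matrix are assumed 4x4 matrices of the see-through display.
        p = np.asarray([*point_world, 1.0], dtype=float)
        clip = projection_matrix @ (view_matrix @ p)
        ndc = clip[:3] / clip[3]                       # normalized device coordinates
        u = (ndc[0] * 0.5 + 0.5) * width
        v = (1.0 - (ndc[1] * 0.5 + 0.5)) * height      # top-left origin assumed
        return u, v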
As to the output of the virtual sound, a generation position of the virtual sound (position of a virtual sound source) is calculated on the basis of, for example, the three-dimensional coordinates defined by the object recognition unit 27. An AR space where the virtual sound can be heard from a desired position (desired direction) in the real space is achieved by controlling the loudspeaker 14 to adjust localization of the sound.
A specific algorithm for reproducing the virtual content is not limited. Any control may be executed.
The sending control unit 32 is capable of sending various types of information to the other device via the communication unit 16. For example, the sending control unit 32 is capable of sending, to the other device, the information stored in the storage unit 20, the information acquired by the information acquisition unit 26, the recognition result acquired by the object recognition unit 27, the display position of the virtual object 5 calculated by the AR reproduction control unit 31, and the like.
For example, the HMDs 2 a and 2 b illustrated in FIG. 1 are capable of sending and receiving various types of information and data to/from each other.
In the present embodiment, the hold determination unit 28 corresponds to an embodiment of a hold determination unit according to the present technology.
The release determination unit 29 corresponds to an embodiment of a cancel determination unit according to the present technology.
The notification control unit 30 corresponds to an embodiment of a notification control unit according to the present technology.
The storage unit 20 corresponds to an embodiment of a storage unit according to the present technology.
[Hold Determination/Release Determination on Virtual Object]
FIG. 4 is a view for describing a basic operation of hold determination and release determination on the virtual object 5 according to the present embodiment.
The hold determination is processing of determining whether or not the virtual object 5 has been pinched and held. The release determination is processing of determining whether or not the pinched and held virtual object 5 has been released. The release determination can also be said to be processing of determining whether or not the hold of the virtual object 5 has been cancelled.
Moreover, the pinch-and-hold operation can also be referred to as a grip operation, and the expression "pinch/hold" can also be used. Thus, the hold determination can also be referred to as grip determination or pinch/hold determination.
As shown in A of FIG. 4 , in the present embodiment, a distance D between a first hold-side object 34 and a second hold-side object 35 that set the virtual object 5 as a pinch-and-hold target is calculated.
The first hold-side object 34 and the second hold-side object 35 that set the virtual object 5 as the pinch-and-hold target typically correspond to two objects that attempt to pinch and hold the virtual object 5. The present technology is not limited thereto, and for example two objects set in advance to hold the virtual object 5 may be employed. In addition, what objects are set as the first hold-side object 34 and the second hold-side object 35 is not limited.
Moreover, each of the first hold-side object 34 and the second hold-side object 35 may be the actual object 4 or may be the virtual object 5. Alternatively, each of the first hold-side object 34 and the second hold-side object 35 may be a pair of the actual object 4 and the virtual object 5.
In the present embodiment, in a case where a rate of decrease of the distance D between the first hold-side object 34 and the second hold-side object 35 that attempt to pinch and hold the virtual object 5 becomes smaller than a predetermined threshold value, it is determined that the virtual object 5 has been held by the first hold-side object 34 and the second hold-side object 35.
For example, respective positions of the first hold-side object 34 and the second hold-side object 35 are calculated at a predetermined frame rate. Accordingly, the distance D can be calculated for each frame.
For example, a difference between a distance D acquired for the current frame and a distance D acquired in a past frame preceding it by several frames is divided by a time equivalent to a predetermined number of frames. Accordingly, the rate of decrease of the distance D can be calculated. As a matter of course, another algorithm may be used.
For example, a nearly zero value is set as the predetermined threshold value. As a matter of course, zero may be set as the predetermined threshold value.
Any threshold value with which it can be determined that the decrease in distance D between the first hold-side object 34 and the second hold-side object 35 that attempt to pinch and hold the virtual object 5 has stopped, i.e., the movement of the first hold-side object 34 and the second hold-side object 35 has stopped may be employed.
For example, also after a movement of reducing the distance D in order to pinch and hold the virtual object 5 stops, each of the first hold-side object 34 and the second hold-side object 35 can slightly move (e.g., shake). Therefore, setting the threshold value to be slightly larger than zero can prevent such a slight movement or the like from influencing the hold determination.
The threshold value slightly larger than zero can also be said to be a threshold value with which it can be determined that the movement of the first hold-side object 34 and the second hold-side object 35 has substantially stopped.
Moreover, the rate of decrease of the distance D can also be said to be finger-to-finger velocity. Moreover, velocity in a direction in which the distance D decreases and velocity in a direction in which the distance D increases may be distinguished from each other with positive and negative reference signs, such that those can be both calculated. In this case, a rate of change of the distance D can also be referred to as the finger-to-finger velocity.
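As a purely illustrative sketch of the above calculation, the rate of decrease of the distance D (finger-to-finger velocity) and the hold determination threshold might be handled as follows (Python; the frame rate, the frame window, and the threshold value are hypothetical):

    import numpy as np

    FRAME_RATE = 60            # frames per second (assumed)
    WINDOW = 5                 # compare with the distance acquired 5 frames earlier (assumed)
    DECREASE_THRESHOLD = 0.01  # m/s, a value slightly larger than zero (assumed)

    def distance_between(p_first, p_second):
        # Distance D between the first hold-side object and the second hold-side object
        return float(np.linalg.norm(np.asarray(p_first, dtype=float) - np.asarray(p_second, dtype=float)))

    def decrease_rate(distance_history):
        # distance_history: list of distances D, one entry per frame (newest last)
        if len(distance_history) <= WINDOW:
            return None
        d_now = distance_history[-1]
        d_past = distance_history[-1 - WINDOW]
        return (d_past - d_now) * FRAME_RATE / WINDOW   # positive while D is decreasing

    def hold_determined(distance_history):
        # In the processing of FIG. 5 described below, this check is reached only after it is
        # determined that the user attempts to pinch and hold the virtual object.
        rate = decrease_rate(distance_history)
        return rate is not None and rate <= DECREASE_THRESHOLD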
B of FIG. 4 is a schematic view showing a state when it is determined that the virtual object 5 has been held. In B of FIG. 4 , the first hold-side object 34 and the second hold-side object 35 are held in contact with the virtual object 5.
Each of the first hold-side object 34 and the second hold-side object 35 can be slightly spaced apart from the virtual object 5 when it is actually determined that the virtual object 5 has been held. In addition, each of the first hold-side object 34 and the second hold-side object 35 can also slightly overlap the virtual object 5. That is, the first hold-side object 34 and the second hold-side object 35 can also slightly cave in the virtual object 5.
As to the release determination on the virtual object 5, a distance between the first hold-side object 34 and the second hold-side object 35 when it is determined that the virtual object 5 has been held is first stored as a hold determination distance D1 as shown in B of FIG. 4 .
Whether or not the hold of the virtual object 5 by the first hold-side object 34 and the second hold-side object 35 has been cancelled is determined on the basis of the stored hold determination distance D1.
Specifically, as shown in C of FIG. 4 , in a case where the distance D between the first hold-side object 34 and the second hold-side object 35 becomes larger than a distance D2 obtained by adding a predetermined threshold value α to the hold determination distance D1, it is determined that the hold has been cancelled.
The distance D2 is a reference distance for the hold cancel determination, i.e., release determination (hereinafter, referred to as a release determination distance D2 with the same reference sign).
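The relationship among the hold determination distance D1, the threshold value α, and the release determination distance D2 may be sketched as follows (Python; the value of α is a hypothetical example):

    ALPHA = 0.02  # threshold value α in metres (hypothetical example)

    def set_release_determination_distance(hold_determination_distance):
        # D2 = D1 + α, stored when it is determined that the virtual object has been held
        return hold_determination_distance + ALPHA

    def hold_cancelled(current_distance, release_determination_distance):
        # The hold is determined to have been cancelled once D reaches D2 or more
        return current_distance >= release_determination_distance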
Hereinafter, specific embodiments of the hold determination/release determination according to the present technology will be described by taking the AR providing system 1 illustrated in FIG. 1 as an example.
As described above, in the AR providing system 1, the user 3 can pinch and hold the virtual object 5 with his or her hand(s) (fingers). Moreover, the user 3 can also release the pinched and held virtual object 5.
Here, for the sake of description, a case where the user 3 pinches and holds the virtual object 5 with two fingers will be taken as an example. That is, the user 3 can hold the virtual object 5 with any two fingers of five fingers of the thumb finger, the index finger, the middle finger, the ring finger, and the little finger of the right hand and five fingers of the thumb finger, the index finger, the middle finger, the ring finger, and the little finger of the left hand, a total of ten fingers.
FIG. 5 is a flowchart showing a processing example of the hold determination/release determination according to the present embodiment.
FIGS. 6 to 8 are schematic views for describing each step shown in FIG. 5 .
The processing shown in FIG. 5 is processing executed by the HMD 2 of the user 3. Moreover, the processing shown in FIG. 5 is repeated at a predetermined frame rate, for example. As a matter of course, the present technology is not limited to the case where the processing is executed for each frame.
The hold determination unit 28 acquires a position of each finger of the user 3 (in Step 101). For example, on the basis of a recognition result by the object recognition unit 27, the hold determination unit 28 is capable of acquiring a position of each finger of the user 3.
For example, with respect to each finger, a position of a finger pad of a DIP joint is acquired as the finger position. Alternatively, for example a position of a center of weight of each finger may be acquired as the finger position. In addition, information for defining the finger position is not limited.
The hold determination unit 28 determines whether or not the user 3 has held the virtual object 5 (in Step 102).
In a case where the user 3 has not held the virtual object 5 with the two fingers (No in Step 102), the hold determination unit 28 determines whether or not the user attempts to pinch and hold the virtual object 5 with the two fingers (in Step 103).
The determination in Steps 102 and 103 can be executed, for example, on the basis of a recognition result by the object recognition unit 27.
Some variations of the determination in Step 103 will be described. One of the following variations may be employed or a plurality of variations may be employed in combination.
For example, in a case where the user 3 directs the finger pads of the two fingers towards the virtual object 5, it is determined that the user attempts to pinch and hold the virtual object 5 with the two fingers.
In the example shown in A of FIG. 6 , the user directs finger pads S1 of a thumb finger 37 and an index finger 38 of the right hand towards the virtual object 5. In this case, it is determined that the user attempts to pinch and hold the virtual object 5 with the thumb finger 37 and the index finger 38.
For example, suppose that the user directs only the finger pad S1 of the index finger 38 towards the virtual object 5. In this case, it is determined that the user does not attempt to pinch and hold the virtual object 5 because the virtual object 5 cannot be pinched and held with a single finger.
Otherwise, in a case where the user 3 directs portions of the two adjacent fingers, which face each other, towards the virtual object 5, it is determined that the user attempts to pinch and hold the virtual object 5 with the two fingers.
In the example shown in B of FIG. 6 , the user directs portions of the index finger 38 and a middle finger 39 of the right hand, which face each other, towards the virtual object 5. That is, a portion 38 a of the index finger 38, which faces the middle finger 39, and a portion 39 a of the middle finger 39, which faces the index finger 38, are directed towards the virtual object 5.
In this case, it is determined that the user attempts to pinch and hold the virtual object 5 with the index finger 38 and the middle finger 39.
Otherwise, in a case where there are two objects whose distance to the virtual object 5 is smaller than a predetermined threshold value, it is determined that the user attempts to pinch and hold the virtual object 5 with these two objects.
In the example shown in A of FIG. 7 , a position of the finger pad S1 of the DIP joint of each finger is acquired as a position of each finger.
The distance to the virtual object 5 from the position of the pad S1 of the thumb finger 37 is a distance D3 between the thumb finger 37 and the virtual object 5 and is smaller than the predetermined threshold value.
The distance to the virtual object 5 from the position of the pad S1 of the index finger 38 is a distance D3 between the index finger 38 and the virtual object 5 and is smaller than the predetermined threshold value.
In this case, it is determined that the user attempts to pinch and hold the virtual object 5 with the thumb finger 37 and the index finger 38.
It should be noted that the threshold value related to the distance may be about 3 cm, for example. The present technology is not limited thereto, and any value may be set as the threshold value related to the distance.
Otherwise, in a case where two objects are determined to have collided with the virtual object 5, it is determined that the user attempts to pinch and hold the virtual object 5 with these two objects.
The collision determination can be executed on the basis of a collider 43 set to the virtual object 5 for example as shown in B of FIG. 7 .
In B of FIG. 7 , the thumb finger 37 and the collider 43 set to the virtual object 5 collide with each other. Moreover, the index finger 38 and the collider 43 set to the virtual object 5 collide with each other.
In this case, it is determined that the user attempts to pinch and hold the virtual object 5 with the thumb finger 37 and the index finger 38.
It should be noted that colliders may be respectively set to the thumb finger 37 and the index finger 38 and the collision determination may be executed by the use of these colliders.
In addition, any algorithm may be used for the determination in Step 103.
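As one hedged illustration of the distance-based variation described above, fingers near the virtual object 5 might be detected as follows (Python; the 3-cm threshold follows the example given above, and the names and structure are assumptions):

    import numpy as np

    PINCH_CANDIDATE_DISTANCE = 0.03  # about 3 cm, as exemplified above (metres, assumed)

    def fingers_near_object(finger_pad_positions, object_position):
        # finger_pad_positions: dict mapping a finger name to the position of its finger pad S1
        near = []
        for name, position in finger_pad_positions.items():
            d = float(np.linalg.norm(np.asarray(position, dtype=float) - np.asarray(object_position, dtype=float)))
            if d < PINCH_CANDIDATE_DISTANCE:
                near.append(name)
        return near

    def attempts_to_pinch(finger_pad_positions, object_position):
        # The user is regarded as attempting to pinch and hold when two fingers are near the object
        return len(fingers_near_object(finger_pad_positions, object_position)) >= 2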
In the present embodiment, the hold determination unit 28 sets the two objects which attempt to pinch and hold the virtual object 5 as the first hold-side object 34 and the second hold-side object 35 shown in FIG. 4 .
Therefore, in Step 103, in a case where the two fingers which attempt to pinch and hold the virtual object 5 are detected, these two fingers are set as the first hold-side object 34 and the second hold-side object 35.
For example, in the example shown in A of FIG. 6 , the thumb finger 37 and the index finger 38 are set as the first hold-side object 34 and the second hold-side object 35.
That is, in a case where the user 3 directs the finger pads of the two fingers towards the virtual object 5, the hold determination unit 28 sets these two fingers as the first hold-side object 34 and the second hold-side object 35, respectively.
In the example shown in B of FIG. 6 , the index finger 38 and the middle finger 39 are set as the first hold-side object 34 and the second hold-side object 35.
That is, in a case where the user 3 directs portions of the two adjacent fingers, which face each other, towards the virtual object 5, the hold determination unit 28 sets these two fingers as the first hold-side object 34 and the second hold-side object 35, respectively.
In the example shown in A of FIG. 7 , the thumb finger 37 and the index finger 38 are set as the first hold-side object 34 and the second hold-side object 35.
That is, the hold determination unit 28 sets two objects whose distance to the virtual object 5 is smaller than the predetermined threshold value as the first hold-side object 34 and the second hold-side object 35, respectively.
In the example shown in B of FIG. 7 , the thumb finger 37 and the index finger 38 are set as the first hold-side object 34 and the second hold-side object 35.
That is, the hold determination unit 28 sets two objects determined to have collided with the virtual object 5 as the first hold-side object 34 and the second hold-side object 35, respectively.
In Step 103, in a case where it is determined that the user does not attempt to pinch and hold the virtual object 5 with the two fingers, the processing returns to Step 102.
In Step 103, in a case where it is determined that the user attempts to pinch and hold the virtual object 5 with the two fingers, the hold determination unit 28 acquires a rate of decrease of the finger-to-finger distance (the distance D between the two fingers shown in FIG. 4 ) (in Step 104).
For example, a difference between a finger-to-finger distance D acquired for the current frame and a finger-to-finger distance D acquired several frames before is divided by a time equivalent to several frames. As a matter of course, any other method may be used as a method of calculating the rate of decrease of the finger-to-finger distance D.
In the example shown in A of FIG. 8 , a position of the finger pad S1 of the DIP joint of each finger of the user 3 is acquired as the finger position. Then, a distance between the finger pads S1 of the thumb finger 37 and the index finger 38 which attempt to pinch and hold the virtual object 5 is acquired as a finger-to-finger distance D. A rate of decrease of the finger-to-finger distance D is acquired.
The hold determination unit 28 determines whether or not the rate of decrease of the finger-to-finger distance D becomes equal to or smaller than a predetermined threshold value (in Step 105). Any value may be set as the predetermined threshold value. Moreover, the hold determination unit 28 may determine whether or not the rate of decrease becomes smaller than the predetermined threshold value.
In a case where the rate of decrease of the finger-to-finger distance D does not become equal to or smaller than the predetermined threshold value (No in Step 105), the processing returns to Step 101.
In a case where the rate of decrease of the finger-to-finger distance D becomes equal to or smaller than the predetermined threshold value, the hold determination unit 28 determines that the virtual object 5 has been held by the two fingers (in Step 106).
As shown in B of FIG. 8 , the user 3 feels the thickness of the virtual object 5 and stops the operation of making the thumb finger 37 and the index finger 38 approach each other at the finger-to-finger distance D where the user feels that the user has gripped the virtual object 5.
As shown in C of FIG. 8 , the hold determination unit 28 is capable of determining that the virtual object 5 has been held in accordance with a hold operation made by the user 3 following his or her feeling. As a result, a high-quality virtual experience can be achieved.
In a case where it is determined that the virtual object 5 has been held by the two fingers, then the distance between the two fingers when it is determined that the virtual object 5 has been held is stored as the hold determination distance D1 (see FIG. 4 ). Then, the release determination distance D2 (see FIG. 4 ) is set on the basis of the hold determination distance D1 (in Step 107).
The release determination distance D2 is a distance obtained by adding a predetermined threshold value α to the hold determination distance D1. Any value may be set as the predetermined threshold value α.
The release determination distance D2 is stored in the storage unit 20. Then, the processing returns to Step 101.
In Step 102, in a case where it is determined that the user 3 has held the virtual object 5 (Yes in Step 102), the release determination unit 29 executes release determination.
The release determination unit 29 determines whether or not the finger-to-finger distance D between the two fingers is equal to or larger than the release determination distance D2 (in Step 108).
In a case where the finger-to-finger distance D is not equal to or larger than the release determination distance D2 (No in Step 108), the release determination unit 29 determines that the virtual object 5 has not been released. Then, the processing returns to Step 101. In this case, the hold of the virtual object 5 by the two fingers is maintained.
In a case where the finger-to-finger distance D is equal to or larger than the release determination distance D2 (Yes in Step 108), the release determination unit 29 determines that the virtual object 5 has been released. That is, the release determination unit 29 determines that the hold of the virtual object 5 by the two fingers has been cancelled (in Step 109).
The hold determination distance D1 is a finger-to-finger distance D at which the user 3 feels that the user 3 has gripped the virtual object 5. Thus, the release determination distance D2 set on the basis of the hold determination distance D1 is also a determination distance following the feeling of the user 3.
For releasing the virtual object 5, the user 3 makes an operation of extending the distance between the two fingers from the finger-to-finger distance D (hold determination distance D1) following the feeling that the user has pinched and held the virtual object 5.
The release determination unit 29 is capable of determining that the virtual object 5 has been released in accordance with such an operation made by the user 3 following his or her feeling. As a result, a high-quality virtual experience can be achieved.
Moreover, the hold state is maintained until it is determined that it has been released even if the finger-to-finger distance D is increased or reduced. Thus, it is possible to sufficiently prevent a determination that it has been released from being made because of small noise. Thus, it is possible to sufficiently prevent the virtual object 5 from dropping even though the user 3 does not wish to release it.
In Step 101, the positions of the respective fingers (in particular, the fingers holding the virtual object 5) may not be acquired and may be lost. For example, the virtual object 5 and the hand holding it may depart from the effective field-of-view for example when the user 3 swings around the hand holding the virtual object 5. In such a case for example, the position of each finger may be lost.
In this case, execution of the release determination by the release determination unit 29 is limited. In other words, the release determination is not executed. A finger-to-finger distance D is calculated anew and the release determination is executed after the lost finger position is restored.
Accordingly, the hold of the virtual object 5 can be maintained even in a case where the position of each finger is lost. It is possible to prevent for example a case where the virtual object 5 undesirably drops without noticing while the user 3 is swinging around the hand holding the virtual object 5.
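The overall flow of FIG. 5 , including the handling of a lost finger position, may be summarized by the following simplified state machine (Python; the structure and the parameter values are illustrative assumptions, not a definitive implementation of the controller 21):

    class HoldReleaseStateMachine:
        def __init__(self, alpha=0.02, decrease_threshold=0.01):
            self.alpha = alpha                            # threshold value α for the release determination
            self.decrease_threshold = decrease_threshold  # threshold for the rate of decrease of D
            self.holding = False
            self.release_distance = None                  # release determination distance D2

        def update(self, finger_distance, rate_of_decrease, attempting_to_pinch, tracking_ok):
            # Called once per frame.
            # finger_distance: finger-to-finger distance D (None if the finger positions are lost)
            if self.holding:
                if not tracking_ok or finger_distance is None:
                    return "hold maintained (release determination limited)"
                if finger_distance >= self.release_distance:
                    self.holding = False
                    self.release_distance = None
                    return "released"
                return "hold maintained"
            if attempting_to_pinch and rate_of_decrease is not None \
                    and rate_of_decrease <= self.decrease_threshold:
                self.holding = True
                self.release_distance = finger_distance + self.alpha   # D2 = D1 + α
                return "held"
            return "not held"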
The AR reproduction control unit 31 controls reproduction of the virtual content to the user 3 as appropriate on the basis of a determination result by the hold determination unit 28 and a determination result by the release determination unit 29.
For example, a virtual image in which the virtual object 5 held by the fingers of the user 3 moves in accordance with a movement of the hand of the user 3 is displayed. A virtual sound is output from the virtual object 5 held by the hand of the user 3.
Moreover, a virtual image in which the virtual object held by the fingers of the user 3 drops is displayed. Otherwise, a virtual sound when the hand of the user 3 drops is output.
In addition, any virtual representation may be achieved.
As to a threshold value that is a reference for the hold determination in Step 105, the threshold value may be adjusted as appropriate on the basis of the movement of the hand (fingers) of the user 3.
For example, in a case where the movement of the hand including the two fingers determined to attempt to pinch and hold the virtual object 5 is large (in a case where the moving velocity is high), the threshold value in Step 105 is set to be large. In a case where the movement of the hand including the two fingers is small (in a case where the moving velocity is low), the threshold value in Step 105 is set to be small.
In a case where the movement of the hand is large, a change in the finger-to-finger distance D will likely occur even when the user has stopped the movement of the fingers in order to hold the virtual object 5. On the other hand, in a case where the movement of the hand is small, a change in the finger-to-finger distance D when the user has stopped the movement of the fingers in order to hold the virtual object 5 will hardly occur or such a change will be very small.
Therefore, increasing the threshold value as the movement of the hand increases enables high-accuracy hold determination according to the movement of the hand. For example, an operation of catching the virtual object 5 while moving the hand can be achieved with high accuracy.
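Merely as an example of such an adjustment, the threshold used in Step 105 could be made a monotonically increasing function of the moving velocity of the hand (Python; the base value and gain are hypothetical):

    def adaptive_decrease_threshold(hand_speed, base_threshold=0.01, gain=0.05):
        # The larger the movement of the hand, the larger the threshold for the hold determination
        return base_threshold + gain * hand_speed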
Moreover, in the present embodiment, the determination as to whether the user attempts to hold the virtual object 5 precedes the hold determination. Accordingly, it is possible to prevent the hold determination from being executed on two objects that do not attempt to hold the virtual object 5. As a result, the usability for the user 3 can be improved.
FIGS. 9 and 10 are schematic views showing other examples of the hold operation by the user 3.
The present technology can also be applied in a case where the virtual object 5 is pinched and held by an index finger 38R of the right hand and an index finger 38L of the left hand as shown in FIG. 9 .
As a matter of course, the present technology can also be applied in a case where the virtual object 5 has been held by any finger of the right hand and any finger of the left hand.
As shown in FIG. 10 , the same side-surface portion of the virtual object 5 may be held by a plurality of fingers.
In the example shown in FIG. 10 , a lower portion of the virtual object 5 is held by the thumb finger 37 and an upper portion of the virtual object 5 is held by three fingers, i.e., the index finger 38, the middle finger 39, and a ring finger 40.
Thus, the present technology can also be applied in a case where the first hold-side object 34 and the second hold-side object 35 shown in FIG. 4 are set in any combination as follows.
The first hold-side object 34 . . . one or more predetermined fingers of the user 3.
The second hold-side object 35 . . . one or more other fingers different from the one or more predetermined fingers of the user 3.
In the example shown in FIG. 10 , the thumb finger 37 of the user 3 and one or more other fingers (the index finger 38, the middle finger 39, and the ring finger 40) of the same hand as the thumb finger 37 are set as the first hold-side object 34 and the second hold-side object 35.
For example, the number of fingers that pinch and hold the virtual object 5 is set in advance. In the example shown in FIG. 10 , a setting is made so that the virtual object 5 should be held by the thumb finger 37 and the three other fingers. The index finger 38, the middle finger 39, and the ring finger 40 may be specified as the three other fingers.
In Step 103 of FIG. 5 , it is determined that the user attempts to pinch and hold the virtual object 5, for example, in a case where all the finger pads S1 of the thumb finger 37 and the three other fingers have been directed towards the virtual object 5.
In an instance where the index finger 38, the middle finger 39, and the ring finger 40 have been specified as the three other fingers, it is determined that the user attempts to pinch and hold the virtual object 5 in a case where the user has directed the finger pads S1 of all of the thumb finger 37, the index finger 38, the middle finger 39, and the ring finger 40 towards the virtual object 5.
Otherwise, in a case where the distance to the virtual object 5 from all of the thumb finger 37 and the three other fingers is smaller than the predetermined threshold value, it is determined that the user attempts to pinch and hold the virtual object 5.
In an instance where the index finger 38, the middle finger 39, and the ring finger 40 have been specified as the three other fingers, it is determined that the user attempts to pinch and hold the virtual object 5 in a case where the distance to the virtual object 5 from all of the thumb finger 37, the index finger 38, the middle finger 39, and the ring finger 40 is smaller than the predetermined threshold value.
Otherwise, in a case where all of the thumb finger 37 and the three other fingers are determined to have collided with the virtual object 5, it is determined that the user attempts to pinch and hold the virtual object 5.
In an instance where the index finger 38, the middle finger 39, and the ring finger 40 have been specified as the three other fingers, it is determined that the user attempts to pinch and hold the virtual object 5 in a case where all of the thumb finger 37, the index finger 38, the middle finger 39, and the ring finger 40 are determined to have collided with the virtual object 5.
In an instance where a plurality of fingers has been set as the first hold-side object 34 or the second hold-side object 35, the hold determination of the virtual object 5 can be executed by using, for example, a position of a center of weight of the plurality of fingers as a reference.
For example, in the example shown in FIG. 10 , Steps 104 and 105 in FIG. 5 can be executed by setting a distance between a position of a center of weight of the index finger 38, the middle finger 39, and the ring finger 40 and a position of the thumb finger 37 as the finger-to-finger distance D.
For example, a center of weight of three positions, i.e., the position of the index finger 38 (e.g., the position of the finger pad S1), the position of the middle finger 39 (e.g., the position of the finger pad S1), and the position of the ring finger 40 (e.g., the position of the finger pad S1) may be employed as the position of the center of weight of the index finger 38, the middle finger 39, and the ring finger 40. As a matter of course, the present technology is not limited thereto.
Otherwise, a representative finger is selected from among the index finger 38, the middle finger 39, and the ring finger 40. Then, Steps 104 and 105 in FIG. 5 may be executed by setting a distance between the position of the selected representative finger and the position of the thumb finger 37 as the finger-to-finger distance D.
Otherwise, a setting may be made so that the hold determination always uses the position of the index finger 38. That is, the finger to be used for the determination may be specified.
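For the case where a plurality of fingers is set as one hold-side object, the finger-to-finger distance D based on the position of the center of weight described above may be sketched as follows (Python; the names are assumptions):

    import numpy as np

    def center_of_weight(finger_pad_positions):
        # Mean of the finger pad positions of the plurality of fingers (e.g., index, middle, ring)
        return np.mean(np.asarray(finger_pad_positions, dtype=float), axis=0)

    def finger_to_finger_distance(thumb_position, other_finger_positions):
        # Distance D between the thumb and the center of weight of the other fingers
        reference = center_of_weight(other_finger_positions)
        return float(np.linalg.norm(np.asarray(thumb_position, dtype=float) - reference))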
FIG. 11 is a schematic view for describing another embodiment of the hold determination.
In the example shown in FIG. 11 , the thumb finger 37 and one or more other fingers of the same hand as the thumb finger 37 are set as the first hold-side object 34 and the second hold-side object 35.
Moreover, as shown in A to C of FIG. 11 , the number of one or more other fingers is set on the basis of a length L of the virtual object 5. That is, the number of fingers necessary for the hold determination of the virtual object 5 is specified in accordance with the length L of the virtual object 5.
In the example shown in FIG. 11 , a size of a hold target surface 41 to be held by the one or more other fingers in a direction to the little finger from the index finger 38 is defined as the length L of the virtual object 5. As a matter of course, the present technology is not limited thereto.
In A of FIG. 11 , assuming that the length L of the virtual object 5 is smaller than a first threshold (e.g., 15 mm), the number of other fingers that hold the virtual object 5 together with the thumb finger 37 is specified to be one. Thus, the hold determination and the release determination are executed on the thumb finger 37 and the single other finger. Although the case where the index finger 38 is used is shown in A of FIG. 11 , another finger may be used.
In B of FIG. 11 , assuming that the length L of the virtual object 5 is within a range of the first threshold (e.g., 15 mm) to a second threshold (e.g., 30 mm), the number of other fingers that hold the virtual object 5 together with the thumb finger 37 is specified to be two. Thus, the hold determination and the release determination are executed on the thumb finger 37 and the two other fingers. Although the case where the index finger 38 and the middle finger 39 are used is shown in B of FIG. 11 , other fingers may be used.
In C of FIG. 11 , assuming that the length L of the virtual object 5 is larger than the second threshold (e.g., 30 mm), the number of other fingers that hold the virtual object 5 together with the thumb finger 37 is specified to be three. Thus, the hold determination and the release determination are executed on the thumb finger 37 and the three other fingers. Although the case where the index finger 38, the middle finger 39, and the ring finger 40 are used is shown in C of FIG. 11 , other fingers may be used.
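The selection of the number of other fingers according to the length L may be sketched as follows (Python; the first and second thresholds follow the examples of 15 mm and 30 mm given above):

    def required_other_fingers(length_mm, first_threshold=15.0, second_threshold=30.0):
        # Number of fingers, besides the thumb, used for the hold/release determination
        if length_mm < first_threshold:
            return 1
        if length_mm <= second_threshold:
            return 2
        return 3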
In this manner, the hold determination unit 28 may set the first hold-side object 34 and the second hold-side object 35 on the basis of the size of the virtual object 5.
Otherwise, the hold determination unit 28 may set the first hold-side object 34 and the second hold-side object 35 on the basis of the shape of the virtual object 5.
In addition, any method may be employed as a method of setting the first hold-side object 34 and the second hold-side object 35. For example, two objects closest to the virtual object 5 may be set as the first hold-side object 34 and the second hold-side object 35.
Otherwise, two objects focused by the user 3 may be detected by eye tracking for example and may be set as the first hold-side object 34 and the second hold-side object 35.
As to the release determination in the instance where the plurality of fingers has been set as the first hold-side object 34 or the second hold-side object 35, the release determination may be executed on each of the plurality of fingers. Then, in a case where all the fingers have released the virtual object 5, the virtual object 5 may be determined to have been released.
Otherwise, as described above, the release determination may be executed by using a position of a center of weight of the plurality of fingers as a reference. Otherwise, a representative finger is selected from among the plurality of fingers. Then, the release determination may be executed by using the position of the selected representative finger as a reference.
[Notification of Hold Determination and Release Determination to User]
In the present embodiment, the notification control unit 30 notifies the user 3 of the fact that the virtual object 5 has been held by the first hold-side object 34 and the second hold-side object 35.
Moreover, the notification control unit 30 notifies the user 3 of the fact that the hold of the virtual object 5 by the first hold-side object 34 and the second hold-side object 35 has been cancelled.
It is also possible to employ a configuration in which the user is notified of only one of the fact that the virtual object 5 has been held and the fact that the hold of the virtual object 5 has been cancelled.
The notification control unit 30 is capable of notifying the user 3 of the fact that the virtual object 5 has been held and the fact that the hold of the virtual object 5 has been cancelled by presentation of a sense of touch, display of a virtual image, or output of a sound or the like.
For example, the notification control unit 30 controls the operation of the vibration unit 15 so that it can notify of the fact that the virtual object 5 has been held and the fact that the hold of the virtual object 5 has been cancelled by presentation of a sense of touch such as a vibration.
Moreover, the notification control unit 30 controls the left and right eye displays 8 a and 8 b so that it can notify of the fact that the virtual object 5 has been held and the fact that the hold of the virtual object 5 has been cancelled by display of a virtual image.
Moreover, the notification control unit 30 controls the loudspeaker 14 so that it can notify of the fact that the hold of the virtual object 5 has been cancelled by output of a sound.
In addition, any method may be employed.
FIG. 12 is a schematic view showing an example of notifying the user 3.
In the example shown in A of FIG. 12 , a virtual image in which the thumb finger 37 and the index finger 38 pinching and holding the virtual object 5 are lit up is displayed.
In the example shown in B of FIG. 12 , a virtual image in which the contour of the thumb finger 37 and the index finger 38 pinching and holding the virtual object 5 is lit up is displayed.
In the example shown in C of FIG. 12 , the text image “OK!!” is displayed as the virtual image.
Displaying the virtual image in this manner enables visual feedback to the user 3. As a result, the user 3 can easily know that the user 3 has successfully held the virtual object 5, and a high-quality virtual experience can be achieved.
Suppose that, in the examples of A to C of FIG. 12 , the release determination unit 29 determines that the hold of the virtual object 5 by the thumb finger 37 and the index finger 38 has been cancelled. In this case, the virtual representation lighting up the thumb finger 37 and the index finger 38, the virtual representation lighting up the contour of the thumb finger 37 and the index finger 38, or the display of the text image "OK!!" is cancelled.
Accordingly, the user 3 can easily know that the user 3 has released the virtual object 5.
Any method may be employed as a method of notifying of the fact that the virtual object 5 has been held and the fact that the hold of the virtual object 5 has been cancelled.
For example, a vibration in a particular pattern may be generated for notifying the user of the fact that the virtual object 5 has been held. The vibration may be stopped for notifying the user of the fact that the hold has been cancelled.
Otherwise, the voice saying “You have grasped the model car (virtual object 5)” or “You have released the model car (virtual object 5)” may be output from the loudspeaker 14.
When the user grips the actual object 4 in the real world, the user receives haptic information fed back from the actual object 4. When the user releases the actual object 4, the user perceives that the user has released it because the user receives no haptic information. However, the user does not receive a haptic feedback associated with it from the virtual object 5.
In the present embodiment, various devices provided in the HMD 2 perform feedback using a visual representation, a sound, and a sense of touch. Accordingly, the user can easily know the fact that the virtual object 5 has been held and the fact that the virtual object 5 has been released, and a high-quality virtual experience can be achieved.
As a matter of course, the feedback to the user 3 may be performed from a device other than the HMD 2.
FIG. 13 is a schematic view for describing a display example of virtual images of the first hold-side object 34 and the second hold-side object 35.
A of FIG. 13 is a schematic view showing a state when it is determined that the virtual object 5 has been held.
B of FIG. 13 is a schematic view of a state in which virtual images of the first hold-side object 34 and the second hold-side object 35 have been displayed.
As shown in A of FIG. 13 , when it is determined that the virtual object 5 has been held, the first hold-side object 34 or the second hold-side object 35 holding the virtual object 5 may be spaced apart from the virtual object 5.
Moreover, when it is determined that the virtual object 5 has been held, the first hold-side object 34 or the second hold-side object 35 holding the virtual object 5 may overlap the virtual object 5.
In the example shown in A of FIG. 13 , the index finger 38L of the left hand is spaced apart from the virtual object 5. Moreover, the index finger 38R of the right hand overlaps the virtual object 5 and caves in the virtual object 5.
For example, in a case where the finger-to-finger distance D is larger than the width of the virtual object 5 or in a case where the finger-to-finger distance D is smaller than the width of the virtual object 5, the finger(s) can be spaced apart from the virtual object 5 or can overlap the virtual object 5.
As shown in B of FIG. 13 , in the present embodiment, the notification control unit 30 displays the virtual image of the hold-side object spaced apart from the virtual object 5 in such a manner that it is held in contact with the surface of the virtual object 5.
Moreover, the notification control unit 30 displays the virtual image of the hold-side object overlapping the virtual object 5 in such a manner that it is held in contact with the surface of the virtual object 5.
In the example shown in B of FIG. 13 , a model image 42L of the index finger 38L of the left hand and a model image 42R of the index finger 38R of the right hand are generated as virtual images. The model images 42L and 42R are displayed in such a manner that they are held in contact with the surface of the virtual object 5.
At this time, the index finger 38L of the left hand and the index finger 38R of the right hand that are the actual objects 4 are hidden from the effective field-of-view. For example, an image of the background or the like is made to overlap the index finger 38L of the left hand and the index finger 38R of the right hand. In addition, any image processing method may be employed for hiding the actual object 4.
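As a purely hypothetical sketch of such a display, the position of the model image of a finger could be snapped to the nearest point on the surface of the virtual object 5, here approximated by an axis-aligned box (Python; the shape approximation and the names are assumptions):

    import numpy as np

    def snap_to_box_surface(finger_position, box_center, box_half_extents):
        # Returns the point on the box surface closest to the finger position, so that the
        # model image of the finger is displayed in contact with the virtual object's surface.
        p = np.asarray(finger_position, dtype=float)
        c = np.asarray(box_center, dtype=float)
        h = np.asarray(box_half_extents, dtype=float)
        local = np.clip(p - c, -h, h)            # nearest point of the (closed) box
        if np.all(np.abs(local) < h):            # finger is inside the box: push out to the closest face
            axis = int(np.argmin(h - np.abs(local)))
            local[axis] = h[axis] if local[axis] >= 0 else -h[axis]
        return c + local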
Displaying the virtual images of the first hold-side object 34 and the second hold-side object 35 in this manner can conceal, for example, a gap between the actual hand and the virtual object 5. As a result, a high-quality virtual experience can be achieved.
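As one hypothetical way of realizing such a display for a box-shaped virtual object, the tracked fingertip position could be projected onto the nearest point of the object's surface and the model image drawn there; the following sketch is an assumption and not the disclosed implementation.

```python
# Hypothetical sketch: place the finger model image on the surface of an
# axis-aligned, box-shaped virtual object by projecting the tracked fingertip
# onto the nearest surface point (used both when the real finger is spaced
# apart from the object and when it sinks into it).
import numpy as np

def nearest_point_on_box_surface(fingertip, box_min, box_max):
    p = np.asarray(fingertip, dtype=float)
    lo = np.asarray(box_min, dtype=float)
    hi = np.asarray(box_max, dtype=float)
    q = np.clip(p, lo, hi)          # nearest point inside (or on) the box
    if np.any(q != p):
        return q                    # fingertip was outside: the clamped point lies on the surface
    # Fingertip is inside the object: push it out through the closest face.
    dist_lo = p - lo
    dist_hi = hi - p
    axis = int(np.argmin(np.minimum(dist_lo, dist_hi)))
    q = p.copy()
    q[axis] = lo[axis] if dist_lo[axis] < dist_hi[axis] else hi[axis]
    return q

# Example: draw the model image at the projected position instead of the raw one.
# model_position = nearest_point_on_box_surface(tracked_tip, box_min_corner, box_max_corner)
```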
It should be noted that the display of the virtual images of the first hold-side object 34 and the second hold-side object 35 may be handled by a block other than the notification control unit 30, since this display can be regarded as an operation separate from notifying the user that the virtual object 5 has been held or that the hold of the virtual object 5 has been cancelled.
Hereinabove, in the AR providing system 1 and the HMD 2 according to the present embodiment, in a case where the rate of decrease of the distance between the first hold-side object 34 and the second hold-side object 35 becomes smaller than the predetermined threshold value, it is determined that the virtual object 5 has been held. Accordingly, a high-quality virtual experience can be achieved.
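Purely as a minimal sketch of this determination, assuming that the distance D is sampled once per frame and that a rate threshold is given, the hold could be detected as follows; the function and the default values are assumptions, not the disclosed implementation.

```python
# Minimal sketch (assumptions: per-frame sampling of D, a rate threshold in m/s).
# It is assumed that this check is evaluated only while the two objects are set
# as the first and second hold-side objects attempting to pinch the virtual object.
def check_hold(d_prev, d_curr, dt, rate_threshold=0.01):
    """
    d_prev, d_curr : distance D between the first and second hold-side objects
                     at the previous and current frames (metres)
    dt             : frame interval (seconds)
    Returns (held, hold_determination_distance): held is True when the rate of
    decrease of D has become smaller than the threshold, i.e. the two hold-side
    objects have (almost) stopped closing.
    """
    rate_of_decrease = (d_prev - d_curr) / dt   # positive while the objects close
    if 0.0 <= rate_of_decrease < rate_threshold:
        return True, d_curr                     # d_curr is stored as the hold determination distance D1
    return False, None
```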
One possible method of achieving the grip (pinch-and-hold) operation with respect to the virtual object 5 is to determine that the virtual object 5 has been gripped when two fingers come into contact with the virtual object 5.
With this method, there can be a gap between the situation in which contact of the fingers with the virtual object 5 is determined and the feeling of the user 3 who attempts to grip it. In such a case, the user 3 may not successfully grasp the virtual object 5 even though the user 3 feels that he or she has touched and grasped it, or the user 3 may touch the virtual object 5 with the fingers even though the user 3 feels that he or she has not touched it yet.
Moreover, as to the release determination, a method of determining that the virtual object 5 has been released when the contact of the fingers with the virtual object 5 is cancelled can be used. Also in this case, the user may not successfully cancel the grip even though the user feels that he or she has taken the fingers off the virtual object 5, or the user may release the virtual object 5 without intending to do so.
In such a case, it is difficult to perform natural grip operation and release operation on the virtual object 5.
A method of performing grip determination by setting a friction coefficient between the virtual object 5 and the fingers can also be used. However, the grip may become unstable, and the virtual object 5 can slip and drop. Thus, it is difficult to perform natural grip and release operations on the virtual object 5.
By applying the present technology, it is possible to determine that the virtual object 5 has been held in accordance with the hold operation made by the user 3 following his or her feeling. Moreover, it is possible to determine that the hold of the virtual object 5 has been cancelled in accordance with the release operation made by the user 3 following his or her feeling.
When the user 3 attempts to grasp (pinch) the virtual object 5 having a certain size, the user 3 adjusts the finger-to-finger distance D, using his or her feeling in accordance with the size of the virtual object 5. The hold determination and the release determination can be executed in accordance with the finger-to-finger distance D following the feeling of each user 3.
Accordingly, this leads to a reduction in the number of grasp mistakes and a reduction in the time taken until the user grasps the object. Moreover, the virtual object 5 does not slip and drop from the fingers even when the user moves it after grasping it, and the user can release the virtual object 5 simply by slightly opening the fingers following his or her feeling, without having to open them widely.
Since the user can intuitively grip the virtual object 5 in accordance with the size of the virtual object 5 in this manner, an enhanced virtual experience can be achieved.
[Wearable Controller]
FIG. 14 is a schematic view showing an example of a wearable controller.
A of FIG. 14 is a schematic view showing an outer appearance of the wearable controller on a palm side.
B of FIG. 14 is a schematic view showing an outer appearance of the wearable controller on a back-of-hand side.
A wearable controller 44 is configured as a so-called palm vest device. The wearable controller 44 is put on the hand of the user 3 for use.
Although the illustration is omitted, various devices such as cameras, a 9-axis sensor, a GPS, a distance measurement sensor, a microphone, an IR sensor, and an optical marker are mounted on predetermined positions of the wearable controller 44.
For example, the cameras are respectively placed on the palm side and the back-of-hand side so that the cameras can image the fingers. Recognition processing of the hand of the user 3 can be executed on the basis of images of the fingers captured by the cameras, detection results (sensor information) from the respective sensors, a sensing result of IR light reflected by the optical marker, and the like.
Therefore, various types of information such as positions, attitudes, and movements of the hand and the respective fingers can be acquired. Moreover, determination of an input operation such as a touch operation, determination of a gesture with the hand, and the like can be executed.
The user 3 can perform a variety of gesture inputs or an operation with respect to the virtual object 5 with his or her hand.
Moreover, although the illustration is omitted, a plurality of vibrators is mounted at predetermined positions of the wearable controller 44 as the tactile presentation unit. By driving these vibrators, senses of touch in various patterns can be presented to the hand of the user 3. It should be noted that the specific configuration of the vibrators is not limited, and any configuration may be employed.
The embodiment of the present technology can also be achieved with such a wearable controller 44.
For example, the AR providing system 1 as shown in FIG. 1 can also be achieved without the HMD 2.
A projector for displaying the virtual object 5 on the table 4 a is disposed above or below the table 4 a. The user 3 wears the wearable controller 44 shown in FIG. 14 and pinches and holds the box 5 a that is the virtual object 5.
A recognition result of each finger of the user 3 can be acquired on the basis of the image and the sensor information from the wearable controller 44. Thus, the hold determination and the release determination according to the present technology as illustrated in FIG. 5 can be executed, and a high-quality AR providing system 1 can be achieved.
Moreover, the wearable controller 44 may notify the user 3 of a result of the hold determination and a result of the release determination. A haptic feedback can be performed for example by driving the plurality of vibrators. As a matter of course, a visual feedback and an audio feedback may be performed by mounting a display unit and a loudspeaker.
It should be noted that the hold determination and the release determination may be executed by another computer (e.g., server apparatus) connected to be capable of communicating with the wearable controller 44. In this case, the other computer functions as an embodiment of the information processing apparatus according to the present technology.
As a matter of course, the wearable controller 44 may execute the hold determination and the release determination. In this case, the wearable controller 44 functions as an embodiment of the information processing apparatus according to the present technology.
In addition, cooperation of the wearable controller 44 with the other computer may achieve the information processing apparatus according to the present technology and may execute the information processing method according to the present technology.
[VR Providing System]
A VR providing system may be configured as an embodiment of the information processing system according to the present technology.
For example, the user 3 wears an immersive HMD 2 configured to cover the field-of-view of the user 3. Then, by moving his or her hand (fingers), the user 3 operates a corresponding virtual object that moves within the VR space in accordance with the movement of the hand (fingers).
For example, the corresponding virtual object may be a model image of the user's hand (fingers). As a matter of course, the present technology is not limited thereto, and the corresponding virtual object may be a hand (fingers) of a character or robot, or may be a virtual image of a tool or the like such as a crane or tongs.
The present technology can be applied for example in a case where the user pinches and holds the virtual object 5 with the corresponding virtual object or a case where the user releases the virtual object 5.
For example, whether two corresponding virtual objects attempt to pinch and hold the virtual object 5 is determined, and if so, those objects are set as the first hold-side object 34 and the second hold-side object 35.
In a case where the rate of decrease of the distance between the first hold-side object 34 and the second hold-side object 35 becomes smaller than a predetermined threshold value, it is determined that the virtual object 5 has been held by the first hold-side object 34 and the second hold-side object 35.
In a case where the distance between the first hold-side object 34 and the second hold-side object 35 becomes larger than the release determination distance D2 set on the basis of the hold determination distance D1, it is determined that the hold of the virtual object 5 by the first hold-side object 34 and the second hold-side object 35 has been cancelled.
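Under the same assumptions, the release (cancel) determination reduces to comparing the current distance with the release determination distance D2 obtained by adding a predetermined threshold to the stored hold determination distance D1, roughly as sketched below.

```python
# Minimal sketch (assumption): D2 is D1 plus a predetermined margin.
def check_release(d_curr, d1, release_margin=0.01):
    """
    d_curr         : current distance D between the two hold-side objects
    d1             : hold determination distance stored when the hold was determined
    release_margin : predetermined threshold added to D1 (value is an assumption)
    Returns True when the hold is regarded as cancelled.
    """
    d2 = d1 + release_margin        # release determination distance D2
    return d_curr > d2
```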
In addition, the various techniques described above may also be applied.
In addition, any device may be used for achieving the virtual space.
The present technology is not limited to the devices such as the HMD and the projector described above, and the virtual space may be achieved by using a smartphone, a tablet terminal, a personal computer (PC), or the like.
<Other Embodiments>
The present technology is not limited to the above-mentioned embodiments, and various other embodiments can be achieved.
The types of the actual object 4 and the virtual object 5 set as the first hold-side object 34 and the second hold-side object 35 are not limited.
The present technology can be applied to any object capable of executing the operation of pinching and holding the virtual object 5, such as chopsticks, tweezers, a crane, tongs, the entire hand, the entire arm, the entire leg, and the toes.
The hold determination distance D1 obtained when the user 3 pinched and held the same virtual object 5 in the past may be used for the hold determination of the virtual object 5.
That is, in a case where the distance D between the first hold-side object 34 and the second hold-side object 35 becomes equal to the hold determination distance D1 stored when the user 3 pinched and held the same virtual object 5 in the past, the hold determination unit 28 may determine that the virtual object 5 has been held by the first hold-side object 34 and the second hold-side object 35.
Once the user 3 grips the virtual object 5, the user 3 often actively or naturally remembers the finger-to-finger distance D at that time. Thus, it can be estimated that the grip operation for the same virtual object 5 will likely be performed at substantially the same finger-to-finger distance D.
Using the hold determination distance D1 when the same virtual object 5 is pinched in the past enables hold determination according to such a natural grip operation, and a high-quality virtual experience can be achieved.
In a case where a plurality of users 3 is present in the virtual space as shown in FIG. 1, the hold determination distance D1 of the virtual object 5 is stored with respect to each of the plurality of users 3. The hold determination distance D1 set for another user 3 can be received and stored, for example, by the HMDs 2 communicating with each other.
In this case, the hold determination distance D1 when the other user 3 pinches and holds the same virtual object 5 may be used for the hold determination of the virtual object 5.
That is, in a case where the distance D between the first hold-side object 34 and the second hold-side object 35 becomes equal to the hold determination distance D1 when the other user 3 pinches and holds the same virtual object 5, the hold determination unit 28 may determine that the virtual object 5 has been held by the first hold-side object 34 and the second hold-side object 35.
When the user attempts to grip a virtual object 5 that has been gripped by another user 3 located in front of the user within the same virtual space, there is a high possibility that the user has seen the other user 3 gripping that virtual object 5 and knows the finger-to-finger distance D at which the other user 3 gripped it. It can therefore be estimated that the user 3 will likely perform the grip operation at substantially the same finger-to-finger distance D as the one the user 3 has seen.
Using the hold determination distance D1 when the other user 3 pinches the same virtual object 5 enables hold determination according to such a natural grip operation, and a high-quality virtual experience can be achieved.
There can be a case where the same type of actual object 4 and virtual object 5 are present in the virtual space. For example, in the example shown in FIG. 1, the actual object 4 of the model car and the virtual object 5 displayed as a virtual image of the model car may be placed on the table 4 a. The model car of the actual object 4 and the model car of the virtual object 5 have the same size. The user 3 can enjoy an application in which the model car of the virtual object 5 travels in various ways in a situation where the plurality of model cars is placed.
For example, in a case where such a virtual space has been achieved, the distance between the first hold-side object 34 and the second hold-side object 35 when the actual object 4 corresponding to the virtual object 5 is pinched and held (referred to as an actual hold distance) may be used for the hold determination of the virtual object 5.
For example, the actual hold distance is calculated by the object recognition unit 27 of the HMD 2 when the actual object 4 is pinched and held, and the calculated actual hold distance is stored in the storage unit 20.
In a case where the distance between the first hold-side object 34 and the second hold-side object 35 becomes equal to the actual hold distance, the hold determination unit 28 determines that the virtual object 5 has been held by the first hold-side object 34 and the second hold-side object 35.
In a case where objects with the same size are present in the real and virtual worlds, it is conceivable that, after the user first grips the actual object 4 in the real world, the user will likely perform the grip operation on the virtual object 5 of the same size at substantially the same finger-to-finger distance D.
Using the actual hold distance when the user grips the actual object 4 enables hold determination corresponding to such a natural grip operation, and a high-quality virtual experience can be achieved.
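The three variations described above (the user's own past hold determination distance D1, another user's D1, and the actual hold distance) share the same comparison; the following sketch assumes, purely for illustration, that such reference distances are stored per user and per virtual object and that "equal to" is evaluated with a small tolerance.

```python
# Hypothetical sketch: hold determination against a stored reference distance.
reference_distances = {}   # e.g. {(user_id, object_id): stored distance in metres}

def store_reference(user_id, object_id, distance):
    """Store a reference distance: own past D1, another user's D1, or an actual hold distance."""
    reference_distances[(user_id, object_id)] = distance

def is_held_by_reference(d_curr, user_id, object_id, tolerance=0.005):
    """True when D becomes (substantially) equal to the stored reference distance."""
    ref = reference_distances.get((user_id, object_id))
    if ref is None:
        return False
    return abs(d_curr - ref) <= tolerance
```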
In the above description, the case where the HMD 2 functions as the embodiment of the information processing apparatus according to the present technology has been taken as the example. The present technology is not limited thereto. Any computer such as a PC connected to the HMD 2 via a network or the like may achieve the embodiment of the information processing apparatus according to the present technology and may execute the information processing method according to the present technology. Moreover, cooperation of the HMD 2 with the computer in the network may achieve the embodiment of the information processing apparatus according to the present technology and may execute the information processing method according to the present technology.
For example, another computer connected to the HMD 2 may achieve some or all of the functional blocks achieved by the controller 21 shown in FIG. 3 .
FIG. 15 is a block diagram showing a hardware configuration example of a computer 60 that can be applied to the present technology.
The computer 60 includes a CPU 61, a read only memory (ROM) 62, a RAM 63, an input/output interface 65, and a bus 64 for connecting them to one another. A display unit 66, an input unit 67, a storage unit 68, a communication unit 69, and a drive unit 70, for example, are connected to the input/output interface 65.
The display unit 66 is a display device using liquid-crystals or EL, for example. The input unit 67 is, for example, a keyboard, a pointing device, a touch panel, or another operation apparatus. In a case where the input unit 67 includes a touch panel, the display unit 66 can be integral with the touch panel.
The storage unit 68 is a nonvolatile storage device, and is, for example, an HDD, a flash memory, or another solid-state memory. The drive unit 70 is a device capable of driving a removable recording medium 71, e.g., an optical recording medium or a magnetic recording tape.
The communication unit 69 is a modem, a router, or another communication apparatus that is connectable to a LAN, a WAN, or the like and is used for communicating with other devices. The communication unit 69 may perform wired communication or wireless communication. The communication unit 69 is often used separately from the computer 60.
The information processing by the computer 60 having the above-mentioned hardware configuration is achieved by cooperation of software stored in the storage unit 68, the ROM 62, or the like with the hardware resources of the computer 60. Specifically, the information processing method according to the present technology is achieved by loading a program configuring the software, which has been stored in the ROM 62 or the like, into the RAM 63 and executing it.
The program is installed in the computer 60 via the recording medium 61, for example. Alternatively, the program may be installed in the computer 60 via a global network or the like. In addition, any computer-readable non-transitory storage medium may be used.
The information processing method and the program according to the present technology may be executed, and the information processing apparatus according to the present technology may be built, by cooperation of a plurality of computers connected to be capable of communicating with one another via a network or the like.
That is, the information processing method and the program according to the present technology can be executed not only in a computer system configured by a single computer but also in a computer system in which a plurality of computers cooperatively operates.
It should be noted that in the present disclosure, the system means a set of a plurality of components (apparatuses, modules (parts), and the like) and it does not matter whether or not all the components are housed in the same casing. Therefore, both a plurality of apparatuses housed in separate casings and connected to one another via a network and a single apparatus having a plurality of modules housed in a single casing are the system.
Executing the information processing method and the program according to the present technology by the computer system includes both executing, by a single computer, the determination as to whether the user attempts to pinch and hold the object, the hold determination, the release determination, the notification control, and the reproduction control of the virtual content, for example, and executing these processes by different computers. Moreover, executing the respective processes by a predetermined computer includes causing another computer to execute some or all of those processes and acquiring the results.
That is, the information processing method and the program according to the present technology can also be applied to a cloud computing configuration in which a plurality of apparatuses shares and cooperatively processes a single function via a network.
The configurations such as the AR providing system, the HMD, the virtual object, and the wearable controller, the respective processing flows, and the like described above with reference to the drawings are merely embodiments, and can be modified as appropriate without departing from the gist of the present technology. That is, any other configurations, algorithms, and the like for carrying out the present technology may be employed.
In the present disclosure, for the sake of easy understanding, the wordings, “substantially”, “almost”, and “about” are used as appropriate. However, no clear difference is defined between a case with the wordings, “substantially”, “almost”, and “about” and a case without these wordings.
That is, in the present disclosure, it is assumed that the concepts that define the shape, the size, the position relationship, the state, and the like such as “center”, “middle”, “uniform”, “equal”, the “same”, “orthogonal”, “parallel”, “symmetric”, “extending”, “axial”, “columnar”, “cylindrical”, “ring-shaped”, and “annular” are concepts including “substantially center”, “substantially middle”, “substantially uniform”, “substantially equal”, “substantially the same”, “substantially orthogonal”, “substantially parallel”, “substantially symmetric”, “substantially extending”, “substantially axial”, “substantially columnar”, “substantially cylindrical”, “substantially ring-shaped”, “substantially annular”, and the like.
For example, states included in a predetermined range (e.g., ±10% range) using “completely center”, “completely middle”, “completely uniform”, “completely equal”, “completely the same”, “completely orthogonal”, “completely parallel”, “completely symmetric”, “completely extending”, “completely axial”, “completely columnar”, “completely cylindrical”, “completely ring-shaped”, “completely annular”, and the like as the bases are also included.
Therefore, even a case where the wordings "substantially", "almost", and "about" are not added can include concepts that can be expressed by adding so-called "substantially", "almost", "about", and the like. On the contrary, states expressed with "substantially", "almost", "about", and the like do not necessarily exclude complete states.
In the present disclosure, comparative expressions such as "larger than A" or "smaller than A" encompass both a concept including the case where it is equal to A and a concept not including the case where it is equal to A. For example, "larger than A" is not limited to the case not including "equal to A", and also includes "A or more". Moreover, "smaller than A" is not limited to "less than A", and also includes "A or less".
For carrying out the present technology, specific settings and the like only need to be employed as appropriate on the basis of the concepts included in “larger than A” and “smaller than A” so as to provide the above-mentioned effects.
At least two of the features according to the present technology, which have been described above, may be combined. That is, the various features described in the respective embodiments may be combined across the respective embodiments as appropriate. Moreover, the above-mentioned various effects are merely exemplary and not limitative, and other effects may be provided.
It should be noted that the present technology can also take the following configurations.
    • (1) An information processing apparatus, including
      • a hold determination unit that determines, in a case where a rate of decrease of a distance between a first hold-side object and a second hold-side object that set a virtual object as a pinch-and-hold target becomes smaller than a predetermined threshold value, that the virtual object has been held by the first hold-side object and the second hold-side object.
    • (2) The information processing apparatus according to (1), further including
      • a storage unit that stores a distance between the first hold-side object and the second hold-side object when the hold determination unit determines that the virtual object has been held, as a hold determination distance; and
      • a cancel determination unit that determines whether or not the hold of the virtual object by the first hold-side object and the second hold-side object has been cancelled on the basis of the stored hold determination distance.
    • (3) The information processing apparatus according to (2), in which
      • the cancel determination unit determines that the hold has been cancelled in a case where the distance between the first hold-side object and the second hold-side object becomes larger than a distance obtained by adding a predetermined threshold value to the hold determination distance.
    • (4) The information processing apparatus according to any one of (1) to (3), in which
      • the first hold-side object is one or more predetermined fingers of a user, and
      • the second hold-side object is one or more other fingers different from the one or more predetermined fingers of the user.
    • (5) The information processing apparatus according to any one of (1) to (4), in which
      • the hold determination unit sets two objects which attempt to pinch and hold the virtual object as the first hold-side object and the second hold-side object.
    • (6) The information processing apparatus according to any one of (1) to (5), in which
      • the hold determination unit sets, in a case where a user has directed a finger pad of each of two fingers towards the virtual object, the two fingers as the first hold-side object and the second hold-side object, respectively.
    • (7) The information processing apparatus according to any one of (1) to (6), in which
      • the hold determination unit sets, in a case where a user has directed portions of two adjacent fingers, which face each other, towards the virtual object, the two fingers as the first hold-side object and the second hold-side object, respectively.
    • (8) The information processing apparatus according to any one of (1) to (7), in which
      • the hold determination unit sets two objects whose distance to the virtual object is smaller than the predetermined threshold value as the first hold-side object and the second hold-side object, respectively.
    • (9) The information processing apparatus according to any one of (1) to (8), in which
      • the hold determination unit sets two objects determined to have collided with the virtual object as the first hold-side object and the second hold-side object, respectively.
    • (10) The information processing apparatus according to any one of (1) to (9), in which
      • the hold determination unit sets the first hold-side object and the second hold-side object on the basis of at least one of a shape or a size of the virtual object.
    • (11) The information processing apparatus according to any one of (1) to (10), in which
      • the hold determination unit sets a thumb finger of a user and one or more other fingers of the same hand as the thumb finger as the first hold-side object and the second hold-side object.
    • (12) The information processing apparatus according to (11), in which
      • the hold determination unit executes, in a case where a plurality of fingers is set as the first hold-side object or the second hold-side object, hold determination of the virtual object by using a position of a center of weight of the plurality of fingers as a reference.
    • (13) The information processing apparatus according to any one of (1) to (12), further including
      • a notification control unit that notifies a user of at least one of the fact that the virtual object has been held by the first hold-side object and the second hold-side object or the fact that the hold of the virtual object by the first hold-side object and the second hold-side object has been cancelled.
    • (14) The information processing apparatus according to (13), in which
      • the notification control unit executes notification by at least one of presentation of a sense of touch, display of a virtual image, or output of a sound.
    • (15) The information processing apparatus according to (14), in which
      • the notification control unit displays, in a case where the first hold-side object or the second hold-side object which holds the virtual object is spaced apart from the virtual object when the hold determination unit determines that the virtual object has been held, a virtual image of the hold-side object spaced apart from the virtual object in such a manner that the virtual image is in contact with the virtual object.
    • (16) The information processing apparatus according to any one of (1) to (15), further including
      • a storage unit that stores a distance between the first hold-side object and the second hold-side object when the hold determination unit determines that the virtual object has been held, as a hold determination distance, in which
      • the hold determination unit determines, in a case where the distance between the first hold-side object and the second hold-side object becomes equal to the hold determination distance stored at the time when the same virtual object is pinched and held in the past, that the virtual object has been held by the first hold-side object and the second hold-side object.
    • (17) The information processing apparatus according to any one of (1) to (16), further including
      • a storage unit that stores, with respect to each of a plurality of users, a distance between the first hold-side object and the second hold-side object when the hold determination unit determines that the virtual object has been held, as a hold determination distance, in which
      • the hold determination unit determines, in a case where the distance between the first hold-side object and the second hold-side object becomes equal to the hold determination distance stored at the time when another user pinches and holds the same virtual object, that the virtual object has been held by the first hold-side object and the second hold-side object.
    • (18) The information processing apparatus according to any one of (1) to (17), further including
      • a storage unit that stores a distance between the first hold-side object and the second hold-side object when an actual object corresponding to the virtual object has been pinched and held by the first hold-side object and the second hold-side object as an actual hold distance, in which
      • the hold determination unit determines, in a case where the distance between the first hold-side object and the second hold-side object becomes equal to the actual hold distance, that the virtual object has been held by the first hold-side object and the second hold-side object.
    • (19) An information processing method to be executed by a computer system, including
      • determining, in a case where a rate of decrease of a distance between a first hold-side object and a second hold-side object that set a virtual object as a pinch-and-hold target becomes smaller than a predetermined threshold value, that the virtual object has been held by the first hold-side object and the second hold-side object.
    • (20) A program that causes a computer system to execute
      • a step of determining, in a case where a rate of decrease of a distance between a first hold-side object and a second hold-side object that set a virtual object as a pinch-and-hold target becomes smaller than a predetermined threshold value, that the virtual object has been held by the first hold-side object and the second hold-side object.
REFERENCE SIGNS LIST
    • D finger-to-finger distance
    • D1 hold determination distance
    • D2 release determination distance
    • D3 distance to virtual object
    • S1 finger pad
    • 1 AR providing system
    • 2 HMD
    • 3 user
    • 4 actual object
    • 5 virtual object
    • 20 storage unit
    • 21 controller
    • 26 information acquisition unit
    • 27 object recognition unit
    • 28 hold determination unit
    • 29 release determination unit
    • 31 AR reproduction control unit
    • 32 sending control unit
    • 34 first hold-side object
    • 35 second hold-side object
    • 37 thumb finger
    • 38 index finger
    • 39 middle finger
    • 44 ring finger
    • 44 wearable controller
    • 60 computer

Claims (20)

The invention claimed is:
1. An information processing apparatus, comprising
a hold determination unit configured to determine, in a case where a rate of decrease of a distance between a first hold-side object and a second hold-side object that set a virtual object as a pinch-and-hold target becomes smaller than a predetermined threshold value, that the virtual object has been held by the first hold-side object and the second hold-side object,
wherein the hold determination unit is implemented via at least one processor.
2. The information processing apparatus according to claim 1, further comprising
a non-transitory computer-readable storage unit configured to store a distance between the first hold-side object and the second hold-side object when the hold determination unit determines that the virtual object has been held, as a hold determination distance; and
a cancel determination unit configured to determine whether or not the hold of the virtual object by the first hold-side object and the second hold-side object has been cancelled on a basis of the stored hold determination distance,
wherein the cancel determination unit is implemented via at least one processor.
3. The information processing apparatus according to claim 2, wherein
the cancel determination unit determines that the hold has been cancelled in a case where the distance between the first hold-side object and the second hold-side object becomes larger than a distance obtained by adding a predetermined threshold value to the hold determination distance.
4. The information processing apparatus according to claim 1, wherein
the first hold-side object is one or more predetermined fingers of a user, and
the second hold-side object is one or more other fingers different from the one or more predetermined fingers of the user.
5. The information processing apparatus according to claim 1, wherein
the hold determination unit sets two objects which attempt to pinch and hold the virtual object as the first hold-side object and the second hold-side object.
6. The information processing apparatus according to claim 1, wherein
the hold determination unit sets, in a case where a user has directed a finger pad of each of two fingers towards the virtual object, the two fingers as the first hold-side object and the second hold-side object, respectively.
7. The information processing apparatus according to claim 1, wherein
the hold determination unit sets, in a case where a user has directed portions of two adjacent fingers, which face each other, towards the virtual object, the two fingers as the first hold-side object and the second hold-side object, respectively.
8. The information processing apparatus according to claim 1, wherein
the hold determination unit sets two objects whose distance to the virtual object is smaller than the predetermined threshold value as the first hold-side object and the second hold-side object, respectively.
9. The information processing apparatus according to claim 1, wherein
the hold determination unit sets two objects determined to have collided with the virtual object as the first hold-side object and the second hold-side object, respectively.
10. The information processing apparatus according to claim 1, wherein
the hold determination unit sets the first hold-side object and the second hold-side object on a basis of at least one of a shape or a size of the virtual object.
11. The information processing apparatus according to claim 1, wherein
the hold determination unit sets a thumb finger of a user and one or more other fingers of the same hand as the thumb finger as the first hold-side object and the second hold-side object.
12. The information processing apparatus according to claim 11, wherein
the hold determination unit executes, in a case where a plurality of fingers is set as the first hold-side object or the second hold-side object, hold determination of the virtual object by using a position of a center of weight of the plurality of fingers as a reference.
13. The information processing apparatus according to claim 1, further comprising
a notification control unit configured to notify a user of at least one of the fact that the virtual object has been held by the first hold-side object and the second hold-side object or the fact that the hold of the virtual object by the first hold-side object and the second hold-side object has been cancelled,
wherein the notification control unit is implemented via at least one processor.
14. The information processing apparatus according to claim 13, wherein
the notification control unit executes notification by at least one of presentation of a sense of touch, display of a virtual image, or output of a sound.
15. The information processing apparatus according to claim 14, wherein
the notification control unit displays, in a case where the first hold-side object or the second hold-side object which holds the virtual object is spaced apart from the virtual object when the hold determination unit determines that the virtual object has been held, a virtual image of the hold-side object spaced apart from the virtual object in such a manner that the virtual image is in contact with the virtual object.
16. The information processing apparatus according to claim 1, further comprising
a non-transitory computer-readable storage unit configured to store a distance between the first hold-side object and the second hold-side object when the hold determination unit determines that the virtual object has been held, as a hold determination distance, wherein
the hold determination unit determines, in a case where the distance between the first hold-side object and the second hold-side object becomes equal to the hold determination distance stored at the time when the same virtual object is pinched and held in the past, that the virtual object has been held by the first hold-side object and the second hold-side object.
17. The information processing apparatus according to claim 1, further comprising
a non-transitory computer-readable storage unit configured to store, with respect to each of a plurality of users, a distance between the first hold-side object and the second hold-side object when the hold determination unit determines that the virtual object has been held, as a hold determination distance, wherein
the hold determination unit determines, in a case where the distance between the first hold-side object and the second hold-side object becomes equal to the hold determination distance stored at the time when another user pinches and holds the same virtual object, that the virtual object has been held by the first hold-side object and the second hold-side object.
18. The information processing apparatus according to claim 1, further comprising
a non-transitory computer-readable storage unit configured to store a distance between the first hold-side object and the second hold-side object when an actual object corresponding to the virtual object has been pinched and held by the first hold-side object and the second hold-side object as an actual hold distance, wherein
the hold determination unit determines, in a case where the distance between the first hold-side object and the second hold-side object becomes equal to the actual hold distance, that the virtual object has been held by the first hold-side object and the second hold-side object.
19. An information processing method to be executed by a computer system, comprising
determining, in a case where a rate of decrease of a distance between a first hold-side object and a second hold-side object that set a virtual object as a pinch-and-hold target becomes smaller than a predetermined threshold value, that the virtual object has been held by the first hold-side object and the second hold-side object.
20. A non-transitory computer-readable medium having embodied thereon a program, which when executed by at least one processor of a computer system causes the computer system to execute a method, the method comprising:
determining, in a case where a rate of decrease of a distance between a first hold-side object and a second hold-side object that set a virtual object as a pinch-and-hold target becomes smaller than a predetermined threshold value, that the virtual object has been held by the first hold-side object and the second hold-side object.
US18/571,510 2021-07-08 2022-03-08 Information processing apparatus, information processing method, and program Active US12260025B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2021-113612 2021-07-08
JP2021113612 2021-07-08
PCT/JP2022/010069 WO2023281819A1 (en) 2021-07-08 2022-03-08 Information processing device for determining retention of object

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/010069 A-371-Of-International WO2023281819A1 (en) 2021-07-08 2022-03-08 Information processing device for determining retention of object

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US19/063,989 Continuation US20250216953A1 (en) 2021-07-08 2025-02-26 Information processing apparatus, information processing method, and program

Publications (2)

Publication Number Publication Date
US20240281072A1 US20240281072A1 (en) 2024-08-22
US12260025B2 true US12260025B2 (en) 2025-03-25

Family

ID=84801639

Family Applications (2)

Application Number Title Priority Date Filing Date
US18/571,510 Active US12260025B2 (en) 2021-07-08 2022-03-08 Information processing apparatus, information processing method, and program
US19/063,989 Pending US20250216953A1 (en) 2021-07-08 2025-02-26 Information processing apparatus, information processing method, and program

Family Applications After (1)

Application Number Title Priority Date Filing Date
US19/063,989 Pending US20250216953A1 (en) 2021-07-08 2025-02-26 Information processing apparatus, information processing method, and program

Country Status (5)

Country Link
US (2) US12260025B2 (en)
JP (1) JP7754170B2 (en)
CN (1) CN117642775A (en)
DE (1) DE112022003443T5 (en)
WO (1) WO2023281819A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20250216953A1 (en) * 2021-07-08 2025-07-03 Sony Group Corporation Information processing apparatus, information processing method, and program

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12456263B2 (en) 2022-09-09 2025-10-28 Snap Inc. Cursor functionality for augmented reality content in messaging systems

Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5335557A (en) * 1991-11-26 1994-08-09 Taizo Yasutake Touch sensitive input control device
US5483261A (en) * 1992-02-14 1996-01-09 Itu Research, Inc. Graphical input controller and method with rear screen image detection
US6597347B1 (en) * 1991-11-26 2003-07-22 Itu Research Inc. Methods and apparatus for providing touch-sensitive input in multiple degrees of freedom
US20050124401A1 (en) * 2003-12-08 2005-06-09 Nintendo Co., Ltd. Game apparatus and storage medium storing game program
US20060197750A1 (en) * 2005-03-04 2006-09-07 Apple Computer, Inc. Hand held electronic device with multiple touch sensing devices
US20090184936A1 (en) * 2008-01-22 2009-07-23 Mathematical Inventing - Slicon Valley 3D touchpad
US7731387B2 (en) * 2004-10-04 2010-06-08 Koninklijke Philips Electronics N.V. Lighting device with user interface for light control
US20110227871A1 (en) * 2010-03-22 2011-09-22 Mattel, Inc. Electronic Device and the Input and Output of Data
US20110260998A1 (en) * 2010-04-23 2011-10-27 Ludwig Lester F Piecewise-linear and piecewise-affine transformations for high dimensional touchpad (hdtp) output decoupling and corrections
US20160110078A1 (en) * 2011-06-09 2016-04-21 C/O Sony Corporation Information processing device, information processing method and program
US20160109947A1 (en) * 2012-01-04 2016-04-21 Tobii Ab System for gaze interaction
US20160216761A1 (en) * 2012-01-04 2016-07-28 Tobii Ab System for gaze interaction
WO2017204120A1 (en) 2016-05-25 2017-11-30 株式会社ソニー・インタラクティブエンタテインメント Image processing apparatus, image processing method, and program
JP2018032217A (en) 2016-08-24 2018-03-01 株式会社コロプラ Information processing method, program enabling computer to execute method and computer
US10222889B2 (en) * 2015-06-03 2019-03-05 Microsoft Technology Licensing, Llc Force inputs and cursor control
WO2019082854A1 (en) 2017-10-26 2019-05-02 株式会社ソニー・インタラクティブエンタテインメント Image processing device, image processing method, and program
WO2019082853A1 (en) 2017-10-26 2019-05-02 株式会社ソニー・インタラクティブエンタテインメント Image processing device, image processing method, and program
JP2020046981A (en) 2018-09-19 2020-03-26 株式会社コロプラ Program, information processing device, and information processing method
US20200218349A1 (en) 2019-01-03 2020-07-09 J-Mex Inc. Operating method for wearable device interacting with operated device in virtual reality and operating device thereof
US11016626B2 (en) * 2019-02-12 2021-05-25 Alps Alpine Co., Ltd. Input device
US11068118B2 (en) * 2013-09-27 2021-07-20 Sensel, Inc. Touch sensor detector system and method
US11137830B1 (en) * 2017-09-22 2021-10-05 Apple Inc. Interactive computing system and control device
US11334239B2 (en) * 2009-01-23 2022-05-17 Samsung Electronics Co., Ltd. Mobile terminal having dual touch screen and method of controlling content therein
US11449224B2 (en) * 2008-01-04 2022-09-20 Apple Inc. Selective rejection of touch contacts in an edge region of a touch surface
US20230186431A1 (en) * 2021-12-09 2023-06-15 Htc Corporation Method for adjusting displayed content, host, and computer readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2826431B2 (en) * 1993-01-06 1998-11-18 日本電気株式会社 Grasping judgment device
US20140157206A1 (en) * 2012-11-30 2014-06-05 Samsung Electronics Co., Ltd. Mobile device providing 3d interface and gesture controlling method thereof
DE112022003443T5 (en) * 2021-07-08 2024-05-23 Sony Group Corporation DATA PROCESSING EQUIPMENT, DATA PROCESSING METHOD AND PROGRAM


Also Published As

Publication number Publication date
JP7754170B2 (en) 2025-10-15
JPWO2023281819A1 (en) 2023-01-12
US20250216953A1 (en) 2025-07-03
DE112022003443T5 (en) 2024-05-23
WO2023281819A1 (en) 2023-01-12
CN117642775A (en) 2024-03-01
US20240281072A1 (en) 2024-08-22

Similar Documents

Publication Publication Date Title
US11755122B2 (en) Hand gesture-based emojis
US10339723B2 (en) Generating virtual notation surfaces with gestures in an augmented and/or virtual reality environment
EP3548995B1 (en) End of session detection in an augmented and/or virtual reality environment
EP3504608B1 (en) Manipulating virtual objects with six degree-of-freedom controllers in an augmented and/or virtual reality environment
CN108073276B (en) Dragging virtual elements for augmented and/or virtual reality environments
US20250216953A1 (en) Information processing apparatus, information processing method, and program
US20140306891A1 (en) Holographic object feedback
US20230333645A1 (en) Method and device for processing user input for multiple devices
JP6212666B1 (en) Information processing method, program, virtual space distribution system, and apparatus
JP2018195172A (en) Information processing method, information processing program, and information processing device
CN117716327A (en) Method and apparatus for managing interactions of a user interface with physical objects
US20230011453A1 (en) Artificial Reality Teleportation Via Hand Gestures
JP2018147458A (en) Information processing method, program, virtual space distribution system, and device
WO2021106552A1 (en) Information processing device, information processing method, and program
WO2023021757A1 (en) Information processing device, information processing method, and program
Rose et al. CAPTURE SHORTCUTS FOR SMART GLASSES USING ELECTROMYOGRAPHY
WO2021131950A1 (en) Information processing device, information processing method, and program

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: SONY GROUP CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NAGANO, KEIJIROH;ISHIKAWA, TSUYOSHI;SHIROMA, SHIN;AND OTHERS;SIGNING DATES FROM 20231220 TO 20231221;REEL/FRAME:065943/0801

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE