
WO2023281803A1 - Information processing device, information processing method, and recording medium


Info

Publication number
WO2023281803A1
WO2023281803A1 (PCT/JP2022/007217; JP2022007217W)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual space
information
virtual
information processing
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2022/007217
Other languages
English (en)
Japanese (ja)
Inventor
孝悌 清水
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Group Corp
Original Assignee
Sony Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corp filed Critical Sony Group Corp
Publication of WO2023281803A1
Legal status: Ceased

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering

Definitions

  • the present disclosure relates to an information processing device, an information processing method, and a storage medium.
  • VR: Virtual Reality; HMD: Head Mounted Display
  • Patent Literature 1 listed below discloses a technique for arranging a virtual object such as a pillar in the vicinity of the boundary between a live video displayed in the virtual space and the virtual space.
  • However, the virtual object in this technique merely hides the boundary between the live video and the virtual space.
  • the present disclosure proposes a new and improved information processing device, information processing method, and storage medium that can further improve entertainment in virtual space.
  • According to the present disclosure, there is provided an information processing device including: an acquisition unit that acquires feature information from content included in a first virtual object arranged in a virtual space; and a video generation unit that controls, based on the feature information acquired by the acquisition unit, display of a second virtual object at least around the content arranged in the virtual space.
  • According to the present disclosure, there is also provided an information processing method executed by a computer, including: acquiring feature information from content included in a first virtual object placed in a virtual space; and controlling, based on the acquired feature information, display of a second virtual object at least around the content placed in the virtual space.
  • According to the present disclosure, there is further provided a storage medium that non-transitorily stores a computer-executable program, the program implementing: an acquisition function that acquires feature information from content included in a first virtual object arranged in a virtual space; and a display control function that controls, based on the feature information acquired by the acquisition function, display of a second virtual object at least around the content arranged in the virtual space.
  • FIG. 1 is an explanatory diagram for explaining a configuration example of an information processing system according to the present disclosure.
  • FIG. 2 is an explanatory diagram for explaining an example of a virtual space V to which an XR live performance is distributed.
  • FIG. 3 is an explanatory diagram for explaining a functional configuration example of a distribution system 10 according to the present disclosure.
  • FIG. 4 is an explanatory diagram for explaining a functional configuration example of the HMD 40 according to the present disclosure.
  • FIG. 5 is an explanatory diagram for explaining a video example of the first embodiment according to the present disclosure.
  • FIG. 6 is an explanatory diagram for explaining an example of operation processing of the first embodiment according to the present disclosure.
  • FIG. 7 is an explanatory diagram for explaining a video example of the second embodiment according to the present disclosure.
  • FIG. 8 is an explanatory diagram for explaining a video example of the second embodiment according to the present disclosure.
  • FIG. 9 is an explanatory diagram for explaining an example of operation processing of the second embodiment according to the present disclosure.
  • FIG. 10 is an explanatory diagram for explaining a video example of the third embodiment according to the present disclosure.
  • FIG. 11 is an explanatory diagram for explaining an example of operation processing of the third embodiment according to the present disclosure.
  • FIG. 12 is an explanatory diagram for explaining a modification according to the present disclosure.
  • FIG. 13 is an explanatory diagram showing the hardware configuration of the HMD 40 according to the present disclosure.
  • FIG. 1 is an explanatory diagram for explaining a configuration example of an information processing system according to the present disclosure.
  • the information processing system according to the present disclosure includes a network 1, a distribution system 10, and an HMD 40.
  • a network 1 is a wired or wireless transmission path for information transmitted from devices connected to the network 1.
  • the network 1 may include a public line network such as the Internet, a telephone line network, a satellite communication network, various LANs (Local Area Networks) including Ethernet (registered trademark), WANs (Wide Area Networks), and the like.
  • the network 1 may also include a dedicated line network such as IP-VPN (Internet Protocol-Virtual Private Network).
  • the distribution system 10 and the HMD 40 are connected via the network 1.
  • the HMD 40A used by a certain user and the HMD 40B used by another user are also connected via the network 1, respectively.
  • the distribution system 10 is a system used by a user on the side of a distributor who distributes various types of event information such as live video and live audio (hereinafter sometimes referred to as a distribution user). For example, the distribution system 10 transmits to the HMD 40 a virtual space including live video captured by a camera and live audio captured by a sensor, which will be described later.
  • XR: Extended Reality
  • the HMD 40 is an example of an information processing device, and is a terminal used by a user who views live video and live audio in virtual space.
  • the HMD 40 is, for example, a head-mounted display to which applications such as VR (Virtual Reality) or AR (Augmented Reality) are applied.
  • a user wearing the HMD 40 can use an input device such as a hand controller to move the avatar, which is the alter ego of the user, in the virtual space.
  • the user wearing the HMD 40 can view the live video from the first-person viewpoint based on the eyes of the avatar on the display of the HMD 40.
  • the HMD 40 according to the present disclosure acquires feature information from content included in virtual objects placed in the virtual space.
  • the HMD 40 according to the present disclosure controls display of images representing the virtual space based on the acquired feature information.
  • display control includes displaying a virtual object in the virtual space, dynamically displaying the virtual object in the virtual space, and the like. Various details such as images representing virtual objects, content, and virtual space will be described later.
  • FIG. 2 is an explanatory diagram for explaining an example of the virtual space V in which XR live is delivered.
  • a virtual space V in which a certain XR live is delivered includes, for example, a two-dimensional screen VC1, a two-dimensional video CT1, and an avatar U participating in the virtual space.
  • the two-dimensional screen VC1 is the first virtual object arranged to output the two-dimensional image CT1.
  • the two-dimensional image CT1 is an example of content, and is a live image captured by a camera.
  • the two-dimensional image CT1 may include images of the performer P1 and of equipment and facilities such as the speaker C1, as shown in FIG. 2.
  • the two-dimensional image CT1 is fitted into the two-dimensional screen VC1 arranged in the virtual space V.
  • the two-dimensional image CT1 may be subjected to image processing according to the size and shape of the two-dimensional screen VC1.
  • the content includes the above-described two-dimensional image CT1 and various types of information such as sound information output in conjunction with the two-dimensional image CT1.
  • the avatar U is the alter ego of a user participating in the virtual space V where the XR live performance is delivered. For example, when a user joins the virtual space V where the XR live performance is distributed, that user's avatar U1 is displayed in the virtual space V. In addition, when other users (for example, two users) also join the virtual space V, avatars U2 corresponding to the number of those other users are displayed in the virtual space V. Each avatar can move and act based on the corresponding user's operation.
  • The virtual space V in which the XR live performance is distributed has been described above, but the virtual space V according to the present disclosure is not limited to such an example.
  • For example, virtual avatars such as NPCs (Non Player Characters) may be placed in the virtual space V.
  • In addition, various virtual objects such as buildings and mobile objects may be placed in the virtual space V.
  • FIG. 3 is an explanatory diagram for explaining a functional configuration example of the distribution system 10 according to the present disclosure.
  • the distribution system 10 according to the present disclosure includes a camera 20, a sensor 25, and a distribution server 30.
  • the camera 20 captures an image of an object and obtains an image.
  • the camera 20 captures a live venue and acquires live video including performers and equipment or facilities at the live venue.
  • the video captured by the camera 20 is transmitted to the distribution server 30 by any communication method.
  • the sensor 25 is a sensor that acquires live sound at a live venue. Further, the sensor 25 may be a sensor that acquires body position information of the performer. Various information (for example, live sound) acquired by the sensor 25 is transmitted to the distribution server 30 by any communication method.
  • the distribution server 30 is a server used by the distributor.
  • the distribution server 30 includes a control unit 310 and a communication unit 320, as shown in FIG.
  • control unit 310 controls the overall operation of the distribution server 30.
  • the control unit 310 synchronizes the live video and live audio received from the camera 20 and the sensor 25 and causes the communication unit 320 to transmit them to the HMD 40 used by the user.
  • under the control of the control unit 310, the communication unit 320 performs various communications with the HMDs 40 participating in the virtual space where the XR live performance is distributed. For example, the communication unit 320 streams to the HMD 40 a virtual space including live video captured by the camera 20 and live audio captured by the sensor 25.
  • FIG. 4 is an explanatory diagram for explaining a functional configuration example of the HMD 40 according to the present disclosure.
  • the HMD 40 according to the present disclosure includes a communication unit 410, a storage unit 420, a display unit 430, an operation unit 440, and a control unit 450.
  • the communication unit 410 performs various communications with the distribution server 30 and other HMDs 40 under the control of the control unit 450.
  • a plurality of HMDs 40 participating in the virtual space V to which the same XR live performance is distributed are bi-directionally connected via the network 1.
  • Storage unit 420 holds software and various data.
  • the storage unit 420 holds learning data obtained by learning using pairs of object types and object feature information as teacher data.
  • the object feature information may include information about the shape of the object and information about the color of the object.
  • the object to be learned as teacher data is not limited to an object.
  • the storage unit 420 may hold learning data obtained by learning using sets of various objects such as performers and lights and feature information of the objects as teacher data.
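  • As an illustrative sketch (not part of the disclosure), the following Python code shows one way such pairs of object types and feature information could drive a simple recognizer; the class name, feature layout, and cosine matching are all assumptions.

```python
# Hypothetical sketch (not from the patent): using pairs of object types and
# feature information as teacher data for a nearest-neighbor recognizer.
import numpy as np

class ObjectFeatureStore:
    def __init__(self) -> None:
        self.types: list[str] = []
        self.features: list[np.ndarray] = []

    def add_teacher_pair(self, obj_type: str, feature: np.ndarray) -> None:
        """Register one (object type, feature information) teacher pair."""
        self.types.append(obj_type)
        self.features.append(feature / (np.linalg.norm(feature) + 1e-9))

    def classify(self, feature: np.ndarray) -> str:
        """Return the learned type whose feature is most similar (cosine)."""
        q = feature / (np.linalg.norm(feature) + 1e-9)
        sims = [float(q @ f) for f in self.features]
        return self.types[int(np.argmax(sims))]

store = ObjectFeatureStore()
store.add_teacher_pair("speaker", np.array([0.9, 0.1, 0.3, 0.7]))  # shape + color
store.add_teacher_pair("light", np.array([0.1, 0.9, 0.8, 0.2]))
print(store.classify(np.array([0.85, 0.15, 0.25, 0.6])))  # -> "speaker"
```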
  • the storage unit 420 holds three-dimensional data representing images representing the virtual space, such as virtual objects and particles. Specific examples of virtual objects and particles will be described later.
  • the display unit 430 displays an image of the virtual space from the first-person viewpoint based on the eyes of the avatar. Note that the first-person viewpoint image based on the avatar's eyes is generated based on sensor data acquired by the orientation sensor mounted on the HMD 40.
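  • For illustration only, the following sketch shows one conventional way a first-person view could be derived from the orientation sensor's output; the patent does not specify this math, and the quaternion convention (w, x, y, z) and function names are assumptions.

```python
# Illustrative sketch: building a view matrix from an HMD orientation
# quaternion and the avatar's eye position. Conventions are assumptions.
import numpy as np

def quat_to_rotation_matrix(q: np.ndarray) -> np.ndarray:
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def view_matrix(orientation: np.ndarray, eye: np.ndarray) -> np.ndarray:
    """Build a 4x4 view matrix from the avatar eye pose."""
    r = quat_to_rotation_matrix(orientation)
    m = np.eye(4)
    m[:3, :3] = r.T          # inverse rotation
    m[:3, 3] = -r.T @ eye    # inverse translation
    return m
```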
  • the functions of the display unit 430 are implemented by, for example, a CRT (Cathode Ray Tube) display device, a liquid crystal display (LCD) device, or an OLED (Organic Light Emitting Diode) device.
  • the display unit 430 may display various images such as a screen for selecting a virtual space to participate in and an image from a bird's-eye view of the virtual space in which the participant participates.
  • the operation unit 440 controls movement and actions of the avatar. Functions of the operation unit 440 are implemented by a device such as a hand controller. Note that the avatar's action includes, for example, various actions of the avatar such as waving and jumping.
  • control unit 450 controls overall operations of the HMD 40 according to the present disclosure. As shown in FIG. 4, the control unit 450 includes a color information detection unit 451, an object/body recognition unit 455, a correction processing unit 459, a correlation processing unit 463, a music analysis processing unit 467, and a space drawing processing unit 471.
  • the color information detection unit 451 is an example of an acquisition unit and detects color information of a 2D image. For example, the color information detection unit acquires color information from the background image of the two-dimensional image. More specifically, the color information detection unit 451 acquires color information near the boundary between the 2D video and the virtual space in the background video included in the 2D video. It should be noted that the background image here indicates an area in which the performer and the object included in the two-dimensional image do not exist.
  • the color information detection unit 451 may detect color information of an object included in the two-dimensional image.
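  • The following is a minimal sketch of such boundary color sampling, assuming the two-dimensional video frame is available as an RGB array together with a foreground mask; the strip width and averaging are illustrative choices, not taken from the patent.

```python
# Minimal sketch: mean background color near the left and right edges of
# the 2D video, skipping pixels covered by performers or objects.
import numpy as np

def boundary_colors(frame: np.ndarray, fg_mask: np.ndarray,
                    strip: int = 16) -> tuple[np.ndarray, np.ndarray]:
    """frame: (H, W, 3) uint8 RGB frame of the 2D video CT1.
    fg_mask: (H, W) bool array, True where a performer or object is shown."""
    bg = ~fg_mask
    left = frame[:, :strip][bg[:, :strip]]      # background pixels, left strip
    right = frame[:, -strip:][bg[:, -strip:]]   # background pixels, right strip
    # fall back to the full strip when the mask leaves no background pixels
    if left.size == 0:
        left = frame[:, :strip].reshape(-1, 3)
    if right.size == 0:
        right = frame[:, -strip:].reshape(-1, 3)
    return left.mean(axis=0), right.mean(axis=0)
```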
  • the object/body recognition unit 455 is an example of an acquisition unit, and acquires feature information from a two-dimensional image. For example, the object/body recognition unit 455 recognizes objects and performers included in the two-dimensional video. The object/body recognition unit 455 also acquires feature information from objects and performers included in the two-dimensional video, and recognizes the objects and performers based on the feature information and learning data held in the storage unit 420.
  • the correction processing unit 459 performs correction processing such as shape interpolation and noise removal on the images of the objects and performers recognized by the object/body recognition unit 455.
  • the correlation processing unit 463 calculates the degree of correlation between the image of the object or the performer corrected by the correction processing unit 459 and the three-dimensional data of the virtual objects held by the storage unit 420.
  • the music analysis processing unit 467 is an example of an acquisition unit, and acquires sound information of live audio included in the two-dimensional video output in the virtual space.
  • the music analysis processing unit 467 analyzes the acquired sound information and acquires various kinds of information such as rhythm, tempo, volume change (crescendo and decrescendo) of the live sound.
  • the space drawing processing unit 471 is an example of a video generation unit, and controls display of a video representing the virtual space based on the feature information acquired by the object/body recognition unit 455.
  • a specific example of the video representing the virtual space will be described later.
  • the space drawing processing unit 471 may control the display of the image representing the virtual space based on the color information of the background image detected by the color information detection unit 451.
  • the space drawing processing unit 471 may control the display of the video representing the virtual space based on the sound information acquired by the music analysis processing unit 467 .
  • in the first embodiment, the space drawing processing unit 471 controls display of a video that expresses the virtual space using three-dimensional data of a virtual object similar to an object displayed in the two-dimensional video.
  • FIG. 5 is an explanatory diagram for explaining a video example of the first embodiment according to the present disclosure.
  • the object/body recognition unit 455 recognizes an object included in the two-dimensional image CT1.
  • the object/body recognition unit 455 recognizes the speaker C1 installed at the live venue.
  • the correlation processing unit 463 then calculates the degree of correlation between the speaker C1 and each of the virtual objects held by the storage unit 420.
  • the space drawing processing unit 471 controls display of the three-dimensional data of a virtual object whose degree of correlation, as calculated by the correlation processing unit 463, satisfies a predetermined criterion, as the video of the second virtual object, and arranges it in the virtual space V.
  • the space drawing processing unit 471 controls the display of the speaker model VC2 having a high degree of correlation with the speaker C1 as the second virtual object. This allows the user to feel more like they are participating in a live performance.
  • a virtual object whose degree of correlation satisfies a predetermined criterion may be, for example, a virtual object whose degree of correlation is equal to or greater than a predetermined value.
  • the space drawing processing unit 471 may control the display of the video of the speaker model VC2 in an area that does not interfere with other virtual objects placed in the virtual space V.
  • the number of speaker models VC2 whose display is controlled may be one or more.
  • objects included in the two-dimensional image CT1 may change over time. When the speaker C1 disappears from the two-dimensional image CT1, the space drawing processing unit 471 may erase the once-displayed speaker model VC2 from the virtual space, or may leave it displayed. Further, the space drawing processing unit 471 may erase the video of the speaker model VC2 when the speaker C1 has not been displayed in the two-dimensional image CT1 for a certain period of time.
  • FIG. 6 is an explanatory diagram for explaining an operation processing example of the first embodiment according to the present disclosure.
  • the object/body recognition unit 455 recognizes an object on a two-dimensional image (S101). For example, when a speaker is included in the two-dimensional image, the object/body recognition unit 455 recognizes the object type of the speaker on the two-dimensional image as “speaker”.
  • the color information detection unit 451 acquires the color information of the object (S105).
  • the correlation processing unit 463 calculates the degree of correlation between the type and color information of the recognized object and each virtual object held by the storage unit 420 together with its color information (S109). At this time, the correlation processing unit 463 may determine a virtual object having a high degree of similarity to the object based on the degrees of correlation for the type of the object and for the color information of the object, each multiplied by a prepared weighting parameter. More specifically, when determining the degree of similarity, the weighting parameter for the type of object may be set larger than the weighting parameter set for the color information of the object. As a result, it is possible to improve the determination accuracy of the type of the virtual object, which has a particularly large impact on the user's visual recognition.
  • the control unit 450 determines whether or not there is a virtual object similar to the recognized object (S113). If it is determined that there is a similar virtual object (S113/Yes), the process proceeds to S117; if it is determined that there is no similar virtual object (S113/No), the control unit 450 according to the present disclosure ends the processing.
  • control unit 450 acquires position information of each virtual object in the virtual space (S117).
  • the space drawing processing unit 471 arranges the virtual objects determined to be similar in S113 at positions that do not interfere with each virtual object (S121), and the control unit 450 according to the present disclosure ends the processing.
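  • A hedged sketch of the weighted matching in S109 and S113 follows; the feature representations, weight values, and threshold are illustrative, since the patent states only that the weighting parameter for the object type may be set larger than that for the color information.

```python
# Sketch of the weighted correlation in S109/S113; all numbers are assumptions.
import numpy as np

TYPE_WEIGHT = 0.7    # weighting parameter for object type (set larger)
COLOR_WEIGHT = 0.3   # weighting parameter for color information

def color_correlation(c1, c2) -> float:
    """Similarity of two RGB colors in [0, 1]."""
    diff = np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float))
    return 1.0 - diff / np.sqrt(3 * 255.0**2)

def best_virtual_object(obj_type, obj_color, catalog, threshold=0.8):
    """Pick the stored virtual object most similar to the recognized object.

    catalog: iterable of (model_id, model_type, model_color) tuples held by
             the storage unit 420. Returns a model_id, or None when no
             correlation meets the criterion (the S113/No branch).
    """
    best_id, best_score = None, threshold
    for model_id, model_type, model_color in catalog:
        type_corr = 1.0 if model_type == obj_type else 0.0
        score = (TYPE_WEIGHT * type_corr
                 + COLOR_WEIGHT * color_correlation(obj_color, model_color))
        if score >= best_score:
            best_id, best_score = model_id, score
    return best_id
```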
  • The first embodiment according to the present disclosure has been described above. Next, a second embodiment according to the present disclosure will be described with reference to FIGS. 7 to 9.
  • in the second embodiment, the space drawing processing unit 471 controls, as a video expressing the virtual space, the display of a video of virtual objects (for example, lights) or particles having a color similar to the color indicated by the color information of the background image of the dynamically changing two-dimensional video.
  • FIG. 7 is an explanatory diagram for explaining a video example of the second embodiment according to the present disclosure.
  • the color information detection unit 451 detects color information from the background image of the two-dimensional image CT1.
  • the space drawing processing unit 471 controls the display of the video representing the virtual space. More specifically, the space drawing processing unit 471 controls display of an image having a similar color to the color indicated by the color information acquired by the color information detection unit 451.
  • the color information detection unit 451 acquires color information near the boundary between the two-dimensional image CT1 and the virtual space V. Then, the space drawing processing unit 471 controls the display of the image VC3 of the light emitting light having the same color as the color indicated by the color information as the third virtual object. As a result, the visibility of the boundary between the two-dimensional image CT1 and the virtual space V is reduced, and the user can feel more immersed in the virtual space V.
  • the space drawing processing unit 471 may control the display of the light image VC3 so as to hide the boundaries with the virtual space V at both ends of the two-dimensional screen VC1, as shown in FIG.
  • the space drawing processing unit 471 controls the display of a video based on the color information near the left-edge boundary between the two-dimensional image CT1 and the virtual space V so as to hide the boundary with the virtual space V at the left edge of the two-dimensional screen VC1.
  • likewise, the space drawing processing unit 471 controls the display of a video based on the color information near the right-edge boundary between the two-dimensional image CT1 and the virtual space V so as to hide the boundary with the virtual space V at the right edge of the two-dimensional screen VC1.
  • as a result, even when the left and right sides of the two-dimensional screen VC1 have different background colors, the visibility of the boundary between the two-dimensional image CT1 and the virtual space V is reduced, and the user can feel more immersed in the virtual space V.
  • the video based on the color information acquired by the color information detection unit 451 is not limited to the video of a virtual object such as a light; it may be, for example, a video of particles such as light particles.
  • FIG. 8 is an explanatory diagram for explaining a video example of the second embodiment according to the present disclosure.
  • the space drawing processing unit 471 may control the display of the particle image E1 so as to hide the boundaries with the virtual space V at both ends of the two-dimensional screen VC1.
  • the space drawing processing unit 471 may control display of an image obtained by combining the light image VC3 and the particle image E1 described above.
  • FIG. 9 is an explanatory diagram for explaining an operation processing example of the second embodiment according to the present disclosure.
  • the color information detection unit 451 acquires color information from the background image included in the two-dimensional image (S201).
  • the space drawing processing unit 471 controls the display of lights that emit light of the same color as the color indicated by the color information so as to hide the boundaries at both ends of the two-dimensional image (S205).
  • control unit 450 determines whether or not the degree of similarity between the background color of the 2D video included in the 2D screen and the background color of the virtual space is equal to or greater than a threshold (S209). If the degree of similarity is greater than or equal to the threshold (S209/Yes), the process ends, and if the degree of similarity is less than the threshold (S209/No), the process proceeds to S213.
  • the space drawing processing unit 471 expands the irradiation range of the light and controls the display of the resulting light video (S213). Then, the processing of S209 to S213 is repeated until it is determined in S209 that the degree of similarity is equal to or greater than the threshold.
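  • The following sketch mirrors the S205-S213 loop under a simplified assumption that the color seen just outside the screen edge is a mix of the virtual space's background color and the light's color, with the light's share growing as its irradiation range expands; all numeric values are illustrative.

```python
# Simplified model of the S205-S213 loop; the blending model and constants
# are assumptions, only the control flow follows the flowchart.
import numpy as np

def color_similarity(c1: np.ndarray, c2: np.ndarray) -> float:
    """Similarity of two RGB colors in [0, 1]."""
    return 1.0 - np.linalg.norm(c1 - c2) / np.sqrt(3 * 255.0**2)

def expand_light_until_hidden(edge_color: np.ndarray,
                              space_color: np.ndarray,
                              threshold: float = 0.9,
                              step: float = 0.1,
                              max_range: float = 5.0) -> float:
    """Return the irradiation range at which the boundary blends in."""
    light_range = 1.0                                # S205: initial light
    while light_range <= max_range:
        mix = min(light_range / max_range, 1.0)      # light's contribution
        blended = (1 - mix) * space_color + mix * edge_color
        if color_similarity(edge_color, blended) >= threshold:  # S209
            return light_range
        light_range += step                          # S213: expand range
    return light_range

print(expand_light_until_hidden(np.array([200.0, 40.0, 40.0]),
                                np.array([10.0, 10.0, 30.0])))
```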
  • The second embodiment according to the present disclosure has been described above. Next, a third embodiment according to the present disclosure will be described with reference to FIGS. 10 and 11.
  • in the third embodiment, the space drawing processing unit 471 controls, as videos expressing the virtual space, the display of videos such as NPC actions, virtual object actions, and particles, based on the sound information acquired by the music analysis processing unit 467.
  • FIG. 10 is an explanatory diagram for explaining a video example of the third embodiment according to the present disclosure.
  • the music analysis processing unit 467 analyzes the sound information M1 of the live audio included in the two-dimensional image CT1 output in the virtual space V, and acquires various information such as the rhythm of the live audio.
  • in FIG. 10, the sound information M1 is illustrated as a musical note mark for convenience; in practice, the sound information M1 is contained in the audio, so it is not displayed in the video.
  • the space drawing processing unit 471 may control the display of the video of the virtual object that moves in response to the rhythm of the live sound as the video of the fourth virtual object.
  • the video of the virtual object that moves in response to the rhythm of sound may be, for example, a video in which the glow stick VC4 shown in FIG. 10 appears to be swung from side to side.
  • the space drawing processing unit 471 may also control, as a video representing the virtual space V, the display of a video E2 of particles, such as light particles, that move in response to the rhythm of the sound.
  • FIG. 11 is an explanatory diagram for explaining an operation processing example of the third embodiment according to the present disclosure.
  • the music analysis processing unit 467 acquires sound information output in conjunction with a two-dimensional image (S301).
  • the music analysis processing unit 467 analyzes the rhythm, volume, etc. from the acquired sound information (S305).
  • the space drawing processing unit 471 displays the virtual object corresponding to the analysis result in S305 in the virtual space (S309), and the control unit 450 according to the present disclosure ends the processing.
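  • As a sketch of S301 to S309, the following code uses librosa's beat tracker (an assumed choice; the patent names no library) to estimate the tempo and derive a side-to-side swing angle for a glow-stick model such as VC4.

```python
# Illustrative sketch of S301-S309: estimate the beat of the live audio and
# swing a virtual object in time with it. librosa is an assumption.
import numpy as np
import librosa

def beat_period_seconds(audio_path: str) -> float:
    """Estimate one beat's duration from the live audio (S301/S305)."""
    y, sr = librosa.load(audio_path)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    tempo = float(np.atleast_1d(tempo)[0])  # tempo may be scalar or array
    return 60.0 / tempo

def swing_angle(t: float, beat_period: float, max_deg: float = 30.0) -> float:
    """Side-to-side angle of the glow stick VC4 at time t (S309)."""
    phase = (t % beat_period) / beat_period  # position within the beat, 0..1
    return max_deg * np.sin(2.0 * np.pi * phase)
```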
  • in the embodiments described above, the two-dimensional screen is a planar screen, but the two-dimensional screen according to the present disclosure is not limited to such an example.
  • a modification according to the present disclosure will be described with reference to FIG. 12.
  • FIG. 12 is an explanatory diagram for explaining a modification according to the present disclosure.
  • the two-dimensional screen VC1 may have a curved shape as shown in FIG.
  • the two-dimensional screen VC1 may have a curved shape with a curvature radius corresponding to a predetermined area range on the virtual space V.
  • the predetermined area range may be the crowd area PA1 as shown in FIG. 12. This makes it possible to reduce the effects of video distortion that may occur depending on the viewing position in the crowd area PA1, and provides the user with the feeling of participating in an actual live performance.
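  • A purely geometric sketch of this idea follows (an assumption, not taken from the patent): centering the screen's curvature on the crowd area keeps the viewing distance nearly constant across that area.

```python
# Geometric sketch (assumption): choose the radius of curvature so the
# screen stays roughly equidistant from every viewer in the crowd area PA1.
def screen_curvature(crowd_center_dist: float, crowd_radius: float):
    """Return (radius of curvature, worst-case relative distance spread).

    crowd_center_dist: distance from the screen's midpoint to the center
                       of the crowd area PA1.
    crowd_radius:      radius of the crowd area PA1.
    """
    radius = crowd_center_dist                  # center of curvature at PA1's center
    worst_spread = 2.0 * crowd_radius / radius  # distance varies within +-crowd_radius
    return radius, worst_spread

radius, spread = screen_curvature(crowd_center_dist=12.0, crowd_radius=3.0)
print(f"radius of curvature: {radius} m, worst-case distance spread: {spread:.0%}")
```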
  • <<Hardware configuration example>> The embodiments according to the present disclosure have been described above. The information processing described above is realized by cooperation between software and the hardware of the HMD 40 described below. Note that the hardware configuration described below can also be applied to the camera 20, the sensor 25, or the distribution server 30 included in the distribution system 10.
  • FIG. 13 is an explanatory diagram showing the hardware configuration of the HMD 40 according to the present disclosure.
  • the HMD 40 includes a CPU (Central Processing Unit) 4201, a ROM (Read Only Memory) 4202, a RAM (Random Access Memory) 4203, an input device 4208, an output device 4210, a storage device 4211, a drive 4212, an imaging device 4213, and a communication device 4215.
  • the CPU 4201 functions as an arithmetic processing device and a control device, and controls the overall operation of the HMD 40 according to various programs.
  • the CPU 4201 may be a microprocessor.
  • the ROM 4202 stores programs and calculation parameters used by the CPU 4201.
  • the RAM 4203 temporarily stores programs used in the execution of the CPU 4201, parameters that change as appropriate during the execution, and the like. These are interconnected by a host bus comprising a CPU bus or the like.
  • the input device 4208 includes input means for the user to input information, such as a hand controller, mouse, keyboard, touch panel, buttons, microphone, switches, and levers, and an input control circuit that generates an input signal based on the user's input and outputs the signal to the CPU 4201.
  • a user using the HMD 40 can input various data to the HMD 40 and instruct processing operations by operating the input device 4208.
  • Output devices 4210 include display devices such as liquid crystal display (LCD) devices, OLED devices, and lamps, for example.
  • output device 4210 includes audio output devices such as speakers and headphones.
  • the display device displays an image of an avatar viewpoint in virtual space.
  • the audio output device converts audio data including live audio into audio and outputs the audio.
  • the storage device 4211 is a data storage device configured as an example of the storage unit 420 of the HMD 40 according to the present disclosure.
  • the storage device 4211 may include a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, a deletion device that deletes data recorded on the storage medium, and the like.
  • the storage device 4211 stores programs executed by the CPU 4201 and various data.
  • a drive 4212 is a reader/writer for storage media, and is built in or externally attached to the HMD 40 .
  • the drive 4212 reads information recorded in the removable storage medium 44 such as a semiconductor memory and outputs it to the RAM 4203.
  • Drive 4212 can also write information to removable storage medium 44 .
  • the imaging device 4213 includes an imaging optical system such as an imaging lens and a zoom lens for condensing light, and a signal conversion element such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor).
  • the imaging optical system collects light emitted from a subject to form a subject image on the signal conversion element, and the signal conversion element converts the formed subject image into an electrical image signal.
  • the communication device 4215 is, for example, a communication interface configured with a communication device or the like for connecting to the network 1. The communication device 4215 may be a wireless-LAN-compatible communication device, an LTE-compatible communication device, or a wired communication device that performs wired communication.
  • note that part of the processing of the HMD 40 described above may be executed by a server configured separately from the HMD 40.
  • for example, the HMD 40 may transmit information about the two-dimensional video and audio in the virtual space to the server, and the server may execute various processes such as recognition of objects included in the two-dimensional video and analysis of rhythms included in the sound information. This can reduce the processing load on the HMD 40.
  • the object/body recognition unit 455 may recognize the audience. Then, the space drawing processing unit 471 may control display of videos such as particles that move according to the amount of movement of the audience.
  • each step in the processing of the HMD 40 in this specification does not necessarily have to be processed in chronological order according to the order described as the flowchart.
  • each step in the processing of the HMD 40 may be processed in an order different from the order described as the flowchart or in parallel.
  • it is also possible to write a computer program for causing hardware such as the CPU, ROM, and RAM built into the HMD 40, the camera 20, the sensor 25, and the distribution server 30 to exhibit functions equivalent to each configuration of the HMD 40, the camera 20, the sensor 25, and the distribution server 30 described above.
  • a storage medium storing the computer program is also provided.
  • (1) An information processing device including: an acquisition unit that acquires feature information from content included in a first virtual object placed in a virtual space; and a video generation unit that controls, based on the feature information acquired by the acquisition unit, display of a second virtual object at least around the content arranged in the virtual space.
  • (2) The information processing device according to (1), wherein the acquisition unit acquires the feature information from an object included in the content.
  • (3) The information processing device according to (2), wherein the acquisition unit recognizes an object based on the acquired feature information, and the video generation unit controls display of a virtual object whose degree of correlation with the object recognized by the acquisition unit satisfies a predetermined criterion as the video of the second virtual object.
  • (4) The information processing device according to (3), wherein a virtual object whose degree of correlation satisfies the predetermined criterion is a virtual object whose degree of correlation is equal to or greater than a predetermined value.
  • (5) The information processing device according to any one of (2) to (4), wherein the video generation unit controls display of the video of the second virtual object in an area that does not interfere with other virtual objects placed in the virtual space.
  • (6) The information processing device according to any one of (1) to (5), wherein the acquisition unit acquires color information from a background video included in the content, and the video generation unit controls display of a video representing the virtual space based on the color information of the background video acquired by the acquisition unit.
  • (7) The information processing device according to (6), wherein the video generation unit controls display of a video having a color similar to the color indicated by the color information of the background video acquired by the acquisition unit.
  • (8) The information processing device according to (7), wherein the video generation unit controls, based on the color information of the background video acquired by the acquisition unit, display of a video of a third virtual object emitting light of a color similar to the color indicated by the color information of the background video.
  • (9) The information processing device according to any one of (6) to (8), wherein the video generation unit controls display of a video based on the color information of the background video acquired by the acquisition unit so as to hide boundaries with the virtual space at both ends of the first virtual object.
  • (10) The information processing device according to any one of (6) to (9), wherein the color information of the background video indicates a color near the boundary between the content and the virtual space.
  • (11) The information processing device according to (10), wherein the video generation unit controls display of a video based on color information near the left-edge boundary between the content and the virtual space so as to hide the boundary with the virtual space at the left edge of the first virtual object, and controls display of a video based on color information near the right-edge boundary between the content and the virtual space so as to hide the boundary with the virtual space at the right edge of the first virtual object.
  • (12) The information processing device according to any one of (1) to (11), wherein the acquisition unit acquires sound information included in the content output in the virtual space, and the video generation unit controls display of a video representing the virtual space based on the sound information acquired by the acquisition unit.
  • (13) The information processing device according to (12), wherein the video generation unit controls display of a video of a fourth virtual object representing the virtual space based on the sound information acquired by the acquisition unit.
  • (14) The information processing device according to (13), wherein the video generation unit controls display of a video that moves in response to the rhythm of the sound acquired by the acquisition unit.
  • (15) The information processing device according to any one of (1) to (14), wherein the first virtual object is a screen having a curved shape.
  • (16) The information processing device according to (15), wherein the curved shape has a radius of curvature corresponding to a predetermined area range in the virtual space.
  • (17) A computer-implemented information processing method including: acquiring feature information from content included in a first virtual object placed in a virtual space; and controlling, based on the acquired feature information, display of a second virtual object at least around the content arranged in the virtual space.
  • (18) A storage medium that non-transitorily stores a computer-executable program, the program implementing: an acquisition function that acquires feature information from content included in a first virtual object placed in a virtual space; and a display control function that controls, based on the feature information acquired by the acquisition function, display of a second virtual object at least around the content placed in the virtual space.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The problem addressed by the present invention is to further improve entertainment properties in a virtual space. To this end, the invention provides an information processing device comprising: an acquisition unit for acquiring feature information from content included in a first virtual object arranged in a virtual space; and a video generation unit for controlling, on the basis of the feature information acquired by the acquisition unit, the display of a second virtual object in at least the surroundings of the content arranged in the virtual space.
PCT/JP2022/007217 2021-07-08 2022-02-22 Information processing device, information processing method, and recording medium Ceased WO2023281803A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021113660 2021-07-08
JP2021-113660 2021-07-08

Publications (1)

Publication Number Publication Date
WO2023281803A1 (fr)

Family

ID=84801708

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/007217 Ceased WO2023281803A1 (fr) Information processing device, information processing method, and recording medium

Country Status (1)

Country Link
WO (1) WO2023281803A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018122167A1 (fr) * 2016-12-30 2018-07-05 Thomson Licensing Dispositif et procédé de génération de contenus virtuels dynamiques flexibles en réalité mixte
JP2019536339A (ja) * 2016-10-25 2019-12-12 株式会社ソニー・インタラクティブエンタテインメント ビデオコンテンツの同期の方法および装置
JP2020017176A (ja) * 2018-07-27 2020-01-30 株式会社Nttドコモ 情報処理装置
WO2020129115A1 (fr) * 2018-12-17 2020-06-25 株式会社ソニー・インタラクティブエンタテインメント Système de traitement d'informations, procédé de traitement d'informations et programme informatique

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019536339A (ja) * 2016-10-25 2019-12-12 株式会社ソニー・インタラクティブエンタテインメント ビデオコンテンツの同期の方法および装置
WO2018122167A1 (fr) * 2016-12-30 2018-07-05 Thomson Licensing Dispositif et procédé de génération de contenus virtuels dynamiques flexibles en réalité mixte
JP2020017176A (ja) * 2018-07-27 2020-01-30 株式会社Nttドコモ 情報処理装置
WO2020129115A1 (fr) * 2018-12-17 2020-06-25 株式会社ソニー・インタラクティブエンタテインメント Système de traitement d'informations, procédé de traitement d'informations et programme informatique

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: ""Jumping TV" where characters in the TV jump out of the screen and move around", GIGAZINE, 30 May 2014 (2014-05-30), XP055897657, Retrieved from the Internet <URL:https://gigazine.net/news/20140530-ar-spring-tv> [retrieved on 20220304] *
EYECANDYLAB: "augmen.tv by eyecandylab // The AR Lens for Video", YOUTUBE, XP093022663, Retrieved from the Internet <URL:https://www.youtube.com/watch?v=QZVsi7Y5YZ0> [retrieved on 20230209] *
SHOJI RYOICHI: ""MR CM" which people and characters pop out of commercial images. Demonstration experiment on BS Nippon Television program; in particular, drawings "Image of MR CM"", 5 March 2019 (2019-03-05), XP093022662, Retrieved from the Internet <URL:https://av.watch.impress.co.jp/docs/news/1173046.html> [retrieved on 20230209] *
SUGIHARA KENJI: "Keywords you should know Second Screen; in particular "How the TV and second screen integration works"", 1 January 2013 (2013-01-01), pages 409 - 412, XP093022661, Retrieved from the Internet <URL:https://www.ite.or.jp/contents/keywords/FILE-20160413114522.pdf> [retrieved on 20230209] *
YAMAMOTO SUSUMU, TSUTSUGUCHI KEN, HIDENORI TANAKA, SHINGO ANDO, ATSUSHI KATAYAMA: "Visual SyncAR: Augmented Reality which Synchronizes Video and Overlaid Information", THE JOURNAL OF THE INSTITUTE OF IMAGE ELECTRONICS ENGINEERS OF JAPAN, vol. 43, no. 3, 1 January 2014 (2014-01-01), pages 397 - 403, XP093022660, DOI: 10.11371/iieej.43.397 *

Similar Documents

Publication Publication Date Title
JP7366196B2 Massive simultaneous remote digital presence world
US12322014B2 Interactive virtual reality broadcast systems and methods
JP4921550B2 Method for giving emotional characteristics to computer-generated avatars during gameplay
TWI531396B Natural user input for driving interactive stories
TWI647593B Simulated environment display system and method
TWI486904B Rhythm visualization method, system, and computer-readable recording medium
CN114631127A Few-shot synthesis of talking heads
WO2022209129A1 Information processing device, information processing method, and program
CN106303555A Mixed-reality-based live broadcast method, apparatus, and system
JP6688378B1 Content distribution system, distribution device, reception device, and program
CN106730815A Easily implemented somatosensory interaction method and system
WO2008087621A1 Apparatus and method for animating emotionally responsive virtual objects
JP2016045814A Virtual reality service providing system and virtual reality service providing method
WO2023035897A1 Video data generation method and apparatus, electronic device, and readable storage medium
WO2017094527A1 Moving image generation system and moving image display system
JP2023067708A Terminal, information processing method, program, and recording medium
CN114625468B Augmented reality picture display method and apparatus, computer device, and storage medium
JP2014164537A Virtual reality service providing system and virtual reality service providing method
EP4306192A1 Information processing device, information processing terminal, information processing method, and program
EP4307238A1 Information processing device, information processing method, and system
CN114356090B Control method and apparatus, computer device, and storage medium
WO2023281803A1 Information processing device, information processing method, and recording medium
WO2021065694A1 Information processing method and system
WO2024009653A1 Information processing device, information processing method, and information processing system
CN114283232A Picture display method and apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22837225

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22837225

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP