
WO2024195562A1 - Information processing device, information processing method, and program - Google Patents


Info

Publication number
WO2024195562A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
displacement
information processing
user
speed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2024/008737
Other languages
French (fr)
Japanese (ja)
Inventor
Yasuaki Takahashi (康昭 高橋)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Group Corp
Original Assignee
Sony Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corp filed Critical Sony Group Corp
Publication of WO2024195562A1 publication Critical patent/WO2024195562A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/34 Control arrangements or circuits for visual indicators for rolling or scrolling
    • G09G5/36 Control arrangements or circuits for visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/37 Details of the operation on graphic patterns
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • This technology relates to information processing devices, information processing methods, and programs, and in particular to the technical field of image display.
  • Patent Document 1 mentions that in such a system, when an image is cut out from a 360° video and displayed, the speed of movement of the displayed image is limited to prevent visual sickness.
  • This disclosure therefore proposes technology that prevents visually induced motion sickness while minimizing the loss of real-time viewing for the viewer.
  • the information processing device according to the present technology is equipped with a display control unit that performs a cut-out process of cutting out, from a first image captured by an imaging device worn by an imaging user, a range with an angle of view narrower than that of the first image to create a second image for observation by an observing user, and that varies, in accordance with situational information, the displacement speed of the displayed image based on the second image that is caused by a change in the cut-out range of the second image when the first image is displaced in the front direction.
  • the cut-out position can be made to follow the change in the front direction of the first image, for example, the front direction of an imaging device that captures the first image.
  • the speed of change in the displayed image is variably set according to the situation in order to reduce discomfort felt by the viewer of the second image.
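As a rough illustration of the variable displacement-speed control described above, the following Python sketch clamps the per-frame movement of the display image's front direction to a situation-dependent limit. The function names and the specific speed values are assumptions for illustration only, not taken from the disclosure.

```python
def clamp_displacement(prev_yaw_deg, target_yaw_deg, max_speed_deg_s, dt_s):
    """Move the display front direction toward the target yaw, but advance it
    by at most max_speed_deg_s * dt_s per frame (angles in degrees)."""
    # shortest signed arc from prev to target, in (-180, 180]
    delta = (target_yaw_deg - prev_yaw_deg + 180.0) % 360.0 - 180.0
    max_step = max_speed_deg_s * dt_s
    step = max(-max_step, min(max_step, delta))
    return (prev_yaw_deg + step) % 360.0

def max_speed_for(situation):
    """Situational information selects the speed limit; the keys and values
    here are hypothetical examples, not from the disclosure."""
    limits = {"walking": 30.0, "standing": 60.0, "viewer_scrolling": 120.0}
    return limits.get(situation, 60.0)
```

With a 60°/s limit and a 30 ms frame, a sudden 90° turn by the imaging user advances the display front by at most 1.8° per frame, so the displayed scene catches up gradually instead of jumping.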
  • FIG. 1 is an explanatory diagram of a system configuration according to an embodiment of the present technology.
  • FIG. 2 is an explanatory diagram of an example of a display device used in the system of the embodiment.
  • FIGS. 5A and 5B are explanatory diagrams of displacement of a display image in the system according to the embodiment.
  • FIG. 1 is a block diagram of a configuration example of a system according to an embodiment.
  • FIG. 2 is a block diagram of an information processing device used in the system of the embodiment.
  • FIG. 11 is a flowchart of a process relating to an image motion speed according to an embodiment.
  • FIG. 11 is an explanatory diagram of a speed limit on the viewer side for the movement of a captured image according to an embodiment.
  • FIG. 11 is an explanatory diagram of a speed limit when a viewer instructs a movement in the same direction as the movement of a captured image in the embodiment.
  • FIG. 11 is an explanatory diagram of a speed limit when a viewer instructs a reverse movement to the movement of a captured image in the embodiment.
  • FIG. 11 is a flowchart of an example of processing according to status information according to an embodiment.
  • FIG. 13 is a flowchart of a modified example of the process relating to the image motion speed according to the embodiment.
  • the term “image” includes both moving images and still images. Furthermore, “image” means not only an image in the state displayed on a display device, but also image data in a signal processing process, in a transmission process, or in a state recorded on a recording medium.
  • FIG. 1 shows a schematic diagram of an information processing system 1.
  • the information processing system 1 includes an imaging device 40, an information processing device 10, an information processing device 20, and an output device 30.
  • the information processing system 1 may also include a content server 90.
  • the information processing device 10, the information processing device 20, and the content server 90 are connected to each other via a network N11 so as to be able to transmit and receive information to and from each other.
  • the type of the network N11 is not particularly limited.
  • the network N11 may be configured as a so-called wireless network, such as a network based on the Wi-Fi (registered trademark) standard.
  • the network N11 may be configured as the Internet, a dedicated line, a local area network (LAN), a wide area network (WAN), or the like.
  • the network N11 may include a plurality of networks, and a part of the network may be configured as a wired network.
  • the imaging device 40 includes an imaging section 41, and the imaging section 41 captures images (e.g., moving images or still images) of the environment surrounding the imaging user Ua.
  • the imaging device 40 is configured as a head-mounted type, and holds the imaging unit 41 at a predetermined position on the head of a user Ua (hereinafter referred to as "imaging user Ua") who wears the imaging device 40 and captures images.
  • the imaging unit 41 includes, for example, an imaging element and an optical system (for example, a lens, etc.) for forming a subject image on the imaging element.
  • examples of the imaging element include a charge coupled device (CCD) image sensor and a complementary metal oxide semiconductor (CMOS) image sensor.
  • the imaging device 40 may also include multiple imaging units 41 (e.g., imaging units 41a and 41b) that are arranged to capture images in different directions, for example, based on the imaging user Ua.
  • by synthesizing the images captured by each of the multiple imaging units 41 through image processing or the like according to the angle of view of each imaging unit 41, it is possible to obtain an image that captures a wider range than the angle of view of each individual imaging unit 41, such as a full-sky image, i.e., a 360° panoramic image.
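The synthesis step described above can be sketched in miniature. The sketch assumes two cameras facing opposite directions whose images have already been warped to 180° equirectangular halves; real stitching would additionally blend the overlapping angles of view.

```python
import numpy as np

def synthesize(front_half, back_half):
    """Toy synthesis: concatenate two pre-warped 180-degree equirectangular
    halves side by side into a single 360-degree panorama array."""
    return np.concatenate([front_half, back_half], axis=1)
```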
  • alternatively, one or more of the imaging units 41 may be equipped with a wide-angle lens so that images in each direction based on the imaging user Ua are captured by the one or more imaging units 41.
  • the imaging unit 41 may be one that captures wide-angle images using a fisheye lens.
  • images captured by one or more imaging units 41 and obtained through synthesis processing and the like in the information processing device 10 can be a variety of images, such as a full-sphere image captured by two fisheye lenses, a hemispherical image captured by one fisheye lens, and a panoramic image (either a full 360° panorama or a panorama covering a range of less than 360°).
  • images transmitted from the information processing device 10 will be referred to as "captured images.”
  • for this captured image, it is sufficient that it has a wider angle of view than the image displayed on the output device 30 (referred to as the "display image" to distinguish it from the captured image).
  • a partial area of such a captured image is cut out and used as the display image.
  • the imaging device 40 is not limited to a head-worn wearable device, but may take a variety of forms, such as a neckband type worn around the neck of the imaging user Ua, a pendant type, a glasses type, a shoulder-worn type, or a general camera held by the imaging user Ua.
  • a type in which the imaging direction changes depending on the behavior of the imaging user Ua is assumed.
  • the imaging direction refers to the front direction of the imaging angle of view. If the captured image is not a 360° image, the imaging direction may refer to the optical axis direction of the incident light being captured. Alternatively, in the case of a head-worn imaging device 40, the imaging direction may be the front direction of the head of the imaging user Ua.
  • the imaging device 40 is configured to be capable of transmitting and receiving information to and from the information processing device 10 held by the imaging user Ua, for example, via a wireless or wired communication path. Based on this configuration, the imaging device 40 transmits images captured by each of the imaging units 41a and 41b to the information processing device 10.
  • the information processing device 10 acquires images captured by the imaging units 41a and 41b from the imaging device 40.
  • the information processing device 10 generates an image such as a panoramic image by synthesizing the images captured by the imaging units 41a and 41b.
  • the information processing device 10 also generates information on the behavior of the imaging user Ua, such as head rotation, i.e., in this case, information indicating the position and direction (front direction) of the viewpoint of the captured image, and associates it with the captured image.
  • the behavior of the imaging user Ua is one aspect of the imaging user Ua's operation that affects the displayed image.
  • the head behavior is one aspect of a position-unspecified operation that does not specify the position of image displacement.
  • the information processing device 10 may recognize changes in the position and direction of the viewpoint associated with the rotation of the head of the imaging user Ua, and generate a panoramic image, etc. so that rotation of the image associated with the change in the position and direction of the viewpoint is suppressed.
  • Changes in the position and orientation of the viewpoint can be recognized as changes in the position and orientation of the imaging device 40 based on the detection results of an acceleration sensor or an angular velocity sensor provided in the imaging device 40, for example.
  • changes in the position and orientation of the viewpoint can be recognized by performing image analysis processing on the images captured by the imaging units 41a and 41b, respectively.
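The rotation suppression described above can be illustrated for the yaw axis alone: in an equirectangular panorama, a yaw rotation of the viewpoint corresponds to a horizontal pixel shift, so shifting the image by the opposite amount keeps the scene fixed. This is a minimal sketch under that assumption; suppressing pitch and roll would require a full spherical rotation of the panorama.

```python
import numpy as np

def stabilize_yaw(panorama, head_yaw_deg):
    """Counter-shift an equirectangular panorama so that a yaw rotation of
    the capturing head does not rotate the scene seen by the viewer."""
    h, w = panorama.shape[:2]
    shift_px = int(round(head_yaw_deg / 360.0 * w))  # yaw -> column shift
    return np.roll(panorama, -shift_px, axis=1)
```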
  • the content server 90 acquires, via the network N11, captured images (e.g., panoramic images) based on the results of imaging by the imaging device 40 from the information processing device 10 held by the imaging user Ua, and distributes the acquired images to the information processing device 20 held by another user (observing user Ub).
  • the observing user Ub refers to a user who views the images captured by the imaging user Ua. There may be one or more observing users Ub.
  • the content server 90 may also temporarily or permanently store the captured images acquired from the information processing device 10 in, for example, the storage unit 95, and distribute the captured images stored in the storage unit 95 to the information processing device 20. With this configuration, it becomes possible to transmit captured images based on the imaging results of the imaging device 40 synchronously or asynchronously from the information processing device 10 held by the imaging user Ua to the information processing device 20 held by the observing user Ub.
  • the output device 30 is configured as a so-called head mounted display (HMD) equipped with a display unit 31 such as a display, and presents an image via the display unit 31 to the observation user Ub wearing the output device 30.
  • FIG. 2A shows an example of an output device 30 applied in the information processing system 1.
  • the output device 30 is configured to be worn on the user's head and to hold a display unit 31 (e.g., a display panel) for displaying an image in front of the user's eyes.
  • head-mounted display devices that can be used as the output device 30 include immersive HMDs, see-through HMDs, video see-through HMDs, and retinal projection HMDs.
  • when an immersive HMD is worn on the user's head or face, it is worn so as to cover the user's eyes, and a display unit such as a display is held in front of the user's eyes. Therefore, it is difficult for a user wearing an immersive HMD to directly view the outside scenery (i.e., the scenery of the real world), and only the image displayed on the display unit comes into view. With this configuration, the immersive HMD can give a sense of immersion to the user viewing the image.
  • the output device 30 shown in Figure 2A corresponds to an immersive HMD.
  • a see-through HMD holds a virtual image optical system consisting of a transparent light guide section in front of the user's eyes using, for example, a half mirror or a transparent light guide plate, and displays an image inside the virtual image optical system. Therefore, a user wearing a see-through HMD can view the outside scenery even while viewing an image displayed inside the virtual image optical system.
  • a see-through HMD is a so-called glasses-type wearable device in which the part equivalent to the lenses of glasses is configured as a virtual image optical system.
  • video see-through HMDs are worn over the user's eyes, with a display unit such as a display held in front of the user's eyes.
  • video see-through HMDs have an imaging unit for capturing images of the surrounding scenery, and the image of the scenery captured by the imaging unit in the user's line of sight is displayed on the display unit.
  • a retinal projection HMD has a projection unit held in front of the user's eyes, and the image is projected from the projection unit towards the user's eyes so that it is superimposed on the external scenery. More specifically, in a retinal projection HMD, an image is projected directly from the projection unit onto the retina of the user's eye, and the image is formed on the retina. This configuration allows users with myopia or hyperopia to view clearer images. Furthermore, a user wearing a retinal projection HMD can view the external scenery even while viewing an image projected from the projection unit.
  • the output device 30 applied to the information processing system 1 may be a monitor display device as shown in FIG. 2B, or a display screen of a portable information processing device such as a smartphone as shown in FIG. 2C.
  • the output device 30 may also be a display device composed of a projection device and a screen (not shown), or a large spherical display formed in a dome-shaped space that the observing user Ub views inside the dome.
  • the information processing device 20 on the observing user Ub's side acquires an image (e.g., a panoramic image) based on the imaging results by the imaging device 40 from the content server 90 via the network N11.
  • the information processing device 20 then displays the acquired image on the display unit 31 of the output device 30.
  • the observing user Ub wearing the output device 30 can view an image of the environment surrounding the imaging user Ua wearing the imaging device 40, for example, via the display unit 31 of the output device 30.
  • the information processing device 20 may also receive notification of the detection results of the head movement (head posture) of the observing user Ub from the output device 30. With this configuration, the information processing device 20 may recognize changes in the position and direction of the viewpoint of the observing user Ub based on the head movement of the observing user Ub, and present an image corresponding to the recognized change in viewpoint to the observing user Ub via the output device 30 (display unit 31).
  • FIG. 3 shows an example of an operation for presenting an image according to a change in the viewpoint of the observing user Ub.
  • the example shown in FIG. 3 shows an example of simulating a situation in which the observing user Ub views an image v0 that is expanded wider than the observing user Ub's field of view, such as a panoramic image, via an output device 30 (not shown) while moving the viewpoint as if looking around.
  • image v0 may be a moving image or a still image.
  • Image v0 here corresponds to a captured image based on the imaging results by the imaging device 40.
  • the information processing device 20 recognizes changes in the viewpoint of the observing user Ub by detecting the movement of the head of the observing user Ub.
  • the information processing device 20 then cuts out an image v11 of a portion of the image v0, which is expanded wider than the field of view of the observing user Ub, that corresponds to the position and direction of the recognized viewpoint, based on a pre-set field of view angle, and presents the cut-out image v11 to the observing user Ub via the output device 30. That is, the image v11 shown in FIG. 3 is a schematic representation of the image presented on the display unit 31 of the output device 30, and is the display image that is actually viewed by the observing user Ub.
  • the viewing angle for extracting image v11 may be fixed, or may be changeable based on user operation, etc. For example, when a portion of image v11 is to be further enlarged and presented to observing user Ub, information processing device 20 may control image v11 to be relatively smaller in size with respect to image v0 by setting the viewing angle narrower.
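The cut-out of image v11 according to the viewpoint direction and viewing angle might be sketched as follows. A real viewer would reproject each display pixel onto the sphere; this flat column crop ignores equirectangular distortion for brevity, and the function name is illustrative.

```python
import numpy as np

def cut_out(panorama, yaw_deg, fov_deg):
    """Crop a horizontal field of view centered on the given yaw from an
    equirectangular panorama, wrapping around the 360-degree seam."""
    h, w = panorama.shape[:2]
    center = int(round((yaw_deg % 360.0) / 360.0 * w))
    half = int(round(fov_deg / 360.0 * w / 2))
    cols = [(center + dx) % w for dx in range(-half, half)]
    return panorama[:, cols]
```

Narrowing `fov_deg` makes the cut-out smaller relative to v0, which corresponds to the enlargement behavior described above.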
  • the output device 30 may be provided with a detection unit (detection unit 33 in FIG. 4) such as an acceleration sensor or an angular velocity sensor (gyro sensor) and configured to detect the behavior of the observing user Ub, such as head movement (head posture).
  • the output device 30 may detect the yaw, pitch, and roll components of the user's head movement.
  • the output device 30 notifies the information processing device 20 of the detection result of the head movement of the observing user Ub.
  • detection of head movement here means detection of a change in the line of sight of the observing user Ub.
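A minimal sketch of turning the detection unit's angular-velocity output into a head posture (yaw, pitch, roll) by integration. Actual devices fuse gyro and accelerometer data (for example with a complementary or Kalman filter) because pure integration drifts; this sketch only illustrates the data flow.

```python
def integrate_gyro(angles_deg, rates_deg_s, dt_s):
    """One integration step: advance (yaw, pitch, roll) by the angular
    velocities measured over a dt_s-second interval."""
    yaw, pitch, roll = angles_deg
    dyaw, dpitch, droll = rates_deg_s
    return (yaw + dyaw * dt_s, pitch + dpitch * dt_s, roll + droll * dt_s)
```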
  • the display image is not necessarily displaced in conjunction with the movement of the head of the observing user Ub.
  • an acceleration sensor or an angular velocity sensor may be mounted on a device (such as the information processing device 20) held or worn by the observing user Ub to detect the behavior of the observing user Ub, such as the movement of the arm.
  • an acceleration sensor or an angular velocity sensor may be prepared as a sensing device separate from the output device 30 and the information processing device 20 and attached to the observing user Ub to detect the behavior of the observing user Ub.
  • the output device 30 or the information processing device 20 may be provided with an operator or GUI (Graphical User Interface) that allows the observing user Ub to operate the viewing direction, and the displayed image may be displaced in accordance with that operation.
  • the method of detecting the behavior of the observing user Ub is not limited to a method based on the detection results of various sensors provided in the output device 30 or the information processing device 20.
  • a configuration for detecting the head and body movements of the observing user Ub may be provided outside the output device 30.
  • an image of the observing user Ub may be captured by an imaging device, and the captured image may be analyzed to detect the head and body movements of the observing user Ub.
  • various sensors such as so-called optical sensors may be provided outside the output device 30, and the head movements of the observing user Ub may be detected by sensing the observing user Ub with the sensors.
  • the observing user Ub can see the view from the viewpoint of the imaging user Ua, or can see the view around the imaging user Ua as if he is looking around while moving his own viewpoint, allowing him to experience a very realistic image as if he were in the same place as the imaging user Ua. Therefore, for example, by presenting an image (panoramic image) based on the imaging results by the imaging device 40 as image v0, the observing user Ub can have the experience of sharing a space with the imaging user Ua who is wearing the imaging device 40.
  • the output device 30 may be provided with a configuration for extracting a partial image from the panoramic image according to the position and direction of the viewpoint of the observing user Ub and presenting the extracted partial image on the display unit 31.
  • the information processing device 20 and the output device 30 may be configured integrally.
  • the cut-out range of the display image within the captured image is displaced according to the behavior of the observing user Ub, but the captured image itself is also displaced in its front direction according to the behavior of the imaging user Ua, for example, head movement. This is because the imaging direction of the imaging unit 41 changes. Therefore, even if the observing user Ub does not change his/her line of sight, i.e., the cut-out range does not change, the entire scene viewed by the observing user Ub as the display image will be displaced in the up/down/left/right directions due to the behavior of the imaging user Ua, for example, shaking his/her head up/down/left/right.
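The combined displacement described above can be expressed as a one-line relation: the net angular motion of the scene seen by the observing user is the change in the capture front direction minus the change in the cut-out direction. The function name is illustrative, not from the disclosure.

```python
def apparent_scene_motion(capture_front_delta_deg, cutout_delta_deg):
    """Net angular displacement (degrees) of the scene seen by the observing
    user. Moving the cut-out window together with the capture front
    direction cancels the apparent motion."""
    return capture_front_delta_deg - cutout_delta_deg
```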
  • a captured image (such as a panoramic image) based on the imaging results by the imaging device 40 is transmitted from the information processing device 10 to the information processing device 20 via the content server 90, but the configuration is not necessarily limited to this.
  • the image based on the imaging results by the imaging device 40 may be transmitted directly from the information processing device 10 to the information processing device 20 without going through the content server 90.
  • Figure 4 is a block diagram showing an example of the functional configuration of the information processing system 1.
  • the imaging device 40 includes an imaging section 41 (in the figure, multiple imaging sections 41a and 41b) and a detection section 43. Note that the example in which the imaging device 40 includes two imaging sections 41a and 41b is just an example, and the imaging device 40 may include one imaging section 41 or three or more imaging sections 41.
  • Each of the imaging units 41a and 41b captures an image of the surroundings of the imaging device 40 (i.e., an image of the surroundings of the imaging user Ua wearing the imaging device 40) and outputs the captured image to the information processing device 10.
  • the detection unit 43 is composed of, for example, an acceleration sensor and an angular velocity sensor (gyro sensor), detects changes in the position and attitude of the imaging device 40, and outputs the detection results to the information processing device 10.
  • the information processing device 10 includes a communication unit 101, a recognition unit 103, and an image synthesis unit 105.
  • the communication unit 101 is a communication interface that enables each component in the information processing device 10 to communicate with an external device (e.g., the content server 90 or the information processing device 20) via the network N11.
  • the configuration of the communication unit 101 can be changed as appropriate depending on the type of network N11.
  • the communication unit 101 may include a baseband processor, an RF circuit, and the like.
  • the recognition unit 103 acquires the detection results of changes in the position and posture of the imaging device 40 from the detection unit 43, and recognizes the head direction (in other words, the line of sight of the imaging user Ua) determined from the head position and posture of the imaging user Ua wearing the imaging device 40 based on the acquired detection results. The recognition unit 103 then outputs information indicating the recognition results of the head position and posture of the imaging user Ua to the image synthesis unit 105.
  • the image synthesis unit 105 acquires from each of the multiple imaging units 41 an image captured by that imaging unit 41.
  • the image synthesis unit 105 synthesizes the images captured by each of the multiple imaging units 41 according to the imaging direction of each imaging unit 41 and the angle of view of that imaging unit 41 to generate an image (e.g., a panoramic image) that captures a wider range than the angle of view of each imaging unit 41.
  • the front direction of the panoramic image generated by the image synthesis unit 105 changes according to changes in the posture of the head of the imaging user Ua.
  • the image synthesis unit 105 is described as generating a captured image as a panoramic image, the captured image generated by the image synthesis unit 105 is not necessarily limited to a panoramic image.
  • the captured image is an image having a wider angle of view than the display image on the observing user Ub side, and may be, for example, an image of 180 degrees horizontally, or a hemispherical image or panoramic image in which the subject scene is displayed over 360 degrees in the front, back, left, and right directions.
  • although the configuration described here includes the image synthesis unit 105, image synthesis processing is not necessary when capturing images using, for example, a single imaging unit 41. In that sense, there are also examples in which the image synthesis unit 105 is not provided.
  • the image synthesis unit 105 also calculates, based on the information indicating the recognition result of the position and posture of the head of the imaging user Ua, information indicating the direction and magnitude of the acceleration applied to the head as acceleration information. Note that, at this time, the image synthesis unit 105 may calculate the acceleration information by excluding the translational motion component of the head movement of the imaging user Ua and taking into account only the rotational motion.
  • the acceleration information may be one-axis (for example, yaw direction) or two-axis (for example, yaw and pitch directions) information, or may be three-axis (yaw, pitch, and roll) information.
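One hedged reading of the acceleration information described above is a per-axis angular acceleration estimated by finite differences over successive orientation samples, which by construction reflects only rotational motion and excludes the translational component:

```python
def angular_acceleration(theta_deg, dt_s):
    """Central-difference angular acceleration (deg/s^2) for one axis from
    three successive angle samples taken dt_s seconds apart."""
    a, b, c = theta_deg
    return (c - 2.0 * b + a) / (dt_s * dt_s)
```

Applying this independently per axis gives one-axis (yaw), two-axis (yaw, pitch), or three-axis (yaw, pitch, roll) acceleration information as mentioned above.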
  • the image synthesis unit 105 associates the captured image, such as the generated panoramic image, with the calculated acceleration information and transmits them to the content server 90 connected via the network N11.
  • Regarding the timing at which the above-described imaging device 40 and information processing device 10 perform the process of acquiring an image (e.g., a panoramic image) of the surroundings of the imaging user Ua and the process of transmitting the image, the devices may capture an image of the surroundings of the imaging user Ua in real time, generate a panoramic image based on the captured image, and transmit the panoramic image to the content server 90.
  • the imaging device 40 and the information processing device 10 may record the captured image in a recording unit (not shown), and at a desired timing (for example, at the timing of receiving an operation from the imaging user Ua), read out the recorded captured image from the recording unit and transmit it from the communication unit 101.
  • In this case, the information processing device 10 records the captured image in the recording unit in association with acceleration information indicating the recognition result of the position and posture of the head of the imaging user Ua, and transmits the acceleration information together with the captured image at the time of transmission.
  • the information processing device 10 may transmit the panoramic image to the content server 90 in synchronization with the generation of the panoramic image, or may associate the acceleration information with the generation of the panoramic image and record it in a recording unit not shown, and transmit the captured image and the acceleration information to the content server 90 at a later point in time.
  • the content server 90 includes a communication unit 901 and a content control unit 903.
  • the content server 90 may also be configured to be able to access the storage unit 95.
  • the communication unit 901 is a communication interface that enables each component in the content server 90 to communicate with an external device (e.g., information processing devices 10 and 20) via the network N11.
  • the communication unit 901 may have a configuration similar to that of the communication unit 101 described above. In the following explanation, when each component in the content server 90 transmits or receives information to or from an external device via the network N11, it is assumed that the information is transmitted or received via the communication unit 901, even if no special explanation is given.
  • the content control unit 903 acquires captured images, such as panoramic images, generated by the information processing device 10 connected via the network N11 and associated with acceleration information indicating the direction and magnitude of acceleration applied to the head of the imaging user Ua, that is, images of the surroundings of the imaging user Ua.
  • the content control unit 903 distributes the captured image associated with the acceleration information acquired from the information processing device 10 to the other information processing device 20 .
  • the content control unit 903 may also temporarily or permanently store the captured image acquired from the information processing device 10 in the storage unit 95 and distribute the captured image stored in the storage unit 95 to the information processing device 20.
  • The storage unit 95 is configured from a storage device and stores various data such as the captured images and acceleration information described above.
  • The storage unit 95 may be included in the content server 90. With this configuration, the content control unit 903 can transmit captured images acquired from the information processing device 10 to another information processing device 20 synchronously or asynchronously.
  • the output device 30 includes a display unit 31, a detection unit 33, and a sensation induction unit 35.
  • the display unit 31 corresponds to the display unit 31 described with reference to Fig. 1 and Fig. 2.
  • the sensation induction unit 35 is configured to present a force sense to the observing user Ub wearing the output device 30.
  • the detection unit 33 is composed of, for example, an acceleration sensor or an angular velocity sensor (gyro sensor), detects changes in the position and attitude of the output device 30 , and outputs the detection result to the information processing device 20 .
  • the information processing device 20 includes a communication unit 201, an analysis unit 203, a display control unit 205, an input unit 206, and a force feedback control unit 207.
  • the communication unit 201 is a communication interface that enables each component in the information processing device 20 to communicate with an external device (e.g., the content server 90 or the information processing device 10) via the network N11.
  • the communication unit 201 may have a configuration similar to the communication unit 101 or the communication unit 901 described above.
  • In the following explanation, when each component in the information processing device 20 transmits or receives information to or from an external device via the network N11, it is assumed that the information is transmitted or received via the communication unit 201, even if no special explanation is given.
  • the analysis unit 203 obtains the captured image to be presented to the observing user Ub from the content server 90.
  • the captured image is associated with acceleration information indicating the direction and magnitude of acceleration applied to the head of the imaging user Ua holding the information processing device 10 by the information processing device 10 that generated the captured image.
  • the analysis unit 203 also acquires detection results of changes in the position and posture of the output device 30 from the detection unit 33, and recognizes the position and posture of the head of the observing user Ub wearing the output device 30 (in other words, the position and direction of the viewpoint) based on the acquired detection results. Then, the analysis unit 203 outputs the acquired captured image and information indicating the recognition results of the position and posture of the head of the observing user Ub to the display control unit 205.
  • the analysis unit 203 also recognizes the direction and magnitude of the acceleration applied to the head of the imaging user Ua based on the acceleration information associated with the captured image. By using the recognition result, the analysis unit 203 recognizes a relative change in the head direction (line of sight) of the observing user Ub (i.e., the user wearing the output device 30) with respect to the head direction (line of sight) of the imaging user Ua (i.e., the user wearing the imaging device 40). That is, the analysis unit 203 is able to calculate acceleration information indicating the direction and magnitude of the acceleration applied to the head of the observing user Ub when simulating the acceleration applied to the head of the imaging user Ua to the head of the observing user Ub.
  • the analysis unit 203 outputs the acceleration information indicating the direction and magnitude of the acceleration applied to the head of the observing user Ub to the force sense control unit 207.
  • the imaging user Ua wears the imaging device 40 on his/her head as described above, and the imaging unit 41 moves together with the head of the imaging user Ua. Therefore, the display control unit 205 regards the direction of the head of the imaging user Ua as the direction of the line of sight. In the case where the imaging user Ua wears the imaging unit 41 on, for example, the shoulder instead of on the head, the direction of the imaging user Ua's body may be regarded as the direction of the line of sight.
  • the display control unit 205 acquires the captured image acquired from the analysis unit 203 and information indicating the recognition result of the position and posture of the head of the observing user Ub. Based on the information indicating the recognition result, the display control unit 205 cuts out and extracts an image corresponding to the head direction (line of sight) of the observing user Ub from the acquired captured image based on a preset viewing angle. Then, the display control unit 205 displays the cut-out image as a display image on the display unit 31.
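  • As a simplified illustration of the cut-out described above (extracting the range corresponding to the head direction and the preset viewing angle from a panoramic image), the following Python sketch maps a yaw angle to pixel columns of the panorama. The panorama width, the 360-degree-to-pixel mapping, and the function name are assumptions for illustration only, not part of the embodiment:

```python
# Hypothetical sketch of the cut-out range calculation performed by the
# display control unit 205. PANORAMA_WIDTH is an assumed value.
PANORAMA_WIDTH = 3600  # pixels covering 360 degrees (assumption)

def cutout_columns(yaw_deg, view_angle_deg=90.0):
    """Return the (start, end) pixel columns of the cut-out range for a
    given head yaw, wrapping around the 360-degree panorama."""
    px_per_deg = PANORAMA_WIDTH / 360.0
    center = (yaw_deg % 360.0) * px_per_deg
    half = view_angle_deg * px_per_deg / 2.0
    start = (center - half) % PANORAMA_WIDTH
    end = (center + half) % PANORAMA_WIDTH
    return int(start), int(end)
```

  • For example, a yaw of 90 degrees with a 90-degree viewing angle selects columns 450 to 1350; when the range crosses the panorama seam (e.g., yaw 0), start is greater than end and the two segments would be concatenated.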
  • the information processing device 20 can present to the observing user Ub an image in which the surroundings of the imaging user Ua are captured, for example, a panoramic image, in a direction corresponding to the position and posture of the head of the observing user Ub.
  • the display control unit 205 can regard the direction of the head of the observing user Ub as the direction of the line of sight, for example, but may also detect the line of sight of the observing user Ub itself.
  • the input unit 206 is an input device that allows various inputs by the observing user Ub. Various forms are assumed, such as an operator such as a key or switch, a touch panel, a trackball, a mouse, a joystick, or a pointer.
  • the input unit 206 is an input device that allows the observing user Ub to instruct the displacement of the displayed image. For example, it allows a position non-designated operation that moves the displayed image left and right without designating a position, and a position designation operation that designates a certain point in the panoramic image and displaces the displayed image toward that point.
  • the above-mentioned head behavior or some other behavior of the observing user Ub is also one aspect of the position designation operation or the position non-designation operation. For example, the observing user moving his/her head can be considered as the position non-designation operation, and the observing user Ub's behavior of pointing in a certain direction can be considered as the position designation operation.
  • the display control unit 205 can also change the cut-out range from the captured image based on the operation information from the input unit 206 and display the cut-out image on the display unit 31.
  • the information processing device 20 can provide the observing user Ub with an image in a direction according to the operation of the observing user Ub.
  • the force sense control unit 207 acquires acceleration information indicating the direction and magnitude of acceleration applied to the head of the observing user Ub from the analysis unit 203. The force sense control unit 207 then presents a force sense to the observing user Ub by driving the body sensation induction unit 35 according to the direction and magnitude of acceleration indicated by the acceleration information. With this configuration, the information processing device 20 can present a force sense to the observing user Ub in conjunction with the image presented to the observing user Ub via the display unit 31.
  • the output device 30 and information processing device 20 may execute the process of acquiring the captured image from the content server 90 and the process of presenting the captured image and presenting the force feedback synchronously or asynchronously.
  • As long as the information processing device 20 can recognize the direction and magnitude of the force sensation presented to the observing user Ub wearing the output device 30, in other words, the direction and magnitude of the acceleration applied to the head of the observing user Ub, the content of the information associated with the captured image transmitted from the information processing device 10 is not particularly limited.
  • the information processing device 10 may associate information indicating the recognition result of the position and posture of the head of the imaging user Ua with the imaging image to be transmitted.
  • the information processing device 20 may calculate the direction and magnitude of the acceleration applied to the head of the imaging user Ua based on the information acquired from the information processing device 10.
  • the information processing device 20 may indirectly recognize a change in the position or posture of the imaging device 40 that is the image capture source of the omnidirectional image (and thus a change in the position or posture of the head of the imaging user Ua) by performing image analysis on the captured image. In this case, only the captured image may be transmitted to the information processing device 20.
  • the information processing device 20 may recognize the observing user Ub wearing the output device 30, and change the magnitude of the presented force sensation, i.e., the magnitude of acceleration, depending on the recognized observing user Ub.
  • the functional configuration of the information processing system 1 is not necessarily limited to the example shown in Fig. 4.
  • a part of the configuration of the information processing device 10 may be provided on the imaging device 40 side, or the imaging device 40 and the information processing device 10 may be configured integrally.
  • a part of the configuration of the information processing device 20 may be provided on the output device 30, or the output device 30 and the information processing device 20 may be configured integrally.
  • the information processing device 10 may transmit the generated panoramic image directly to the information processing device 20 without going through the content server 90.
  • In the above description, the display control unit 205 that cuts out the display image is provided on the information processing device 20 side, but a configuration in which the display control unit 205 is provided on the content server 90 or the information processing device 10 is also possible.
  • the information processing device 70 is a device capable of information processing, particularly image processing, such as a computer device.
  • Specific examples of the information processing device 70 include personal computers, workstations, mobile terminal devices such as smartphones and tablets, video editing devices, etc.
  • the information processing device 70 may also be a computer device configured as a server device or a computing device in cloud computing.
  • the CPU (Central Processing Unit) 71 of the information processing device 70 shown in FIG. 5 executes various processes according to programs stored in a ROM (Read Only Memory) 72 or a non-volatile memory unit 74 such as an EEP-ROM (Electrically Erasable Programmable Read-Only Memory), or programs loaded from a storage unit 79 to a RAM (Random Access Memory) 73.
  • the RAM 73 also stores data necessary for the CPU 71 to execute various processes as appropriate.
  • the image processing unit 85 is configured as a processor that performs various types of image processing, such as image synthesis, spherical image generation, image clipping, image analysis, image signal processing including color/brightness adjustment and color conversion, and image editing, or a combination of these processes.
  • When this information processing device 70 is used as the information processing device 20 on the observing user Ub side, the image processing unit 85 is a processor that executes the processing of the display control unit 205 and the analysis unit 203.
  • When it is used as the information processing device 10 on the imaging user Ua side, the image processing unit 85 is a processor that executes the processing of the image synthesis unit 105.
  • This image processing unit 85 can be realized, for example, by a CPU separate from the CPU 71, a graphics processing unit (GPU), a general-purpose computing on graphics processing units (GPGPU), an artificial intelligence (AI) processor, or the like.
  • the image processing unit 85 may be provided as a function within the CPU 71 .
  • the CPU 71, ROM 72, RAM 73, non-volatile memory unit 74, and image processing unit 85 are interconnected via a bus 83.
  • the input/output interface 75 is also connected to this bus 83.
  • An input unit 76 consisting of operators and operation devices is connected to the input/output interface 75.
  • the input unit 76 may be various operators and operation devices such as a keyboard, a mouse, a key, a trackball, a dial, a touch panel, a touch pad, a remote controller, or the like.
  • An operation by the user is detected by the input unit 76 , and a signal corresponding to the input operation is interpreted by the CPU 71 .
  • a microphone may also be used as the input unit 76. Voice uttered by the user may also be input as operation information.
  • When this information processing device 70 is used as the information processing device 20 on the observing user Ub side, the input unit 76 corresponds to the input unit 206 in FIG. 4.
  • the input/output interface 75 is connected, either integrally or separately, to a display unit 77 formed of an LCD (Liquid Crystal Display) or an organic EL (electro-luminescence) panel, or the like, and an audio output unit 78 formed of a speaker, or the like.
  • the display unit 77 is a display unit that performs various displays, and is configured, for example, by a display device provided in the housing of the information processing device 70, or a separate display device connected to the information processing device 70, or the like.
  • the display unit 77 displays various images, operation menus, icons, messages, etc., that is, GUI, on the display screen based on instructions from the CPU 71 .
  • The display unit 77 may be used as the output device 30 shown in FIGS. 1 and 2.
  • The input/output interface 75 may also be connected to a storage unit 79, which may be configured using a hard disk drive (HDD) or a solid state drive (SSD), and a communication unit 80.
  • the storage unit 79 can store various data and programs.
  • a database can also be configured in the storage unit 79.
  • the communication unit 80 performs communication processing via a transmission path such as the Internet, and communication with various devices such as an external DB, an editing device, and an information processing device via wired/wireless communication, bus communication, etc.
  • This communication unit 80 can function as the communication unit 201, communication unit 101, or communication unit 901 described in FIG. 4.
  • a drive 81 is also connected to the input/output interface 75 as required, and a removable recording medium 82 such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory is appropriately mounted thereon.
  • the drive 81 allows video data, various computer programs, and the like to be read from the removable recording medium 82.
  • the read data is stored in the storage unit 79, and the video and audio contained in the data are output on the display unit 77 and the audio output unit 78.
  • the computer programs, etc. read from the removable recording medium 82 are installed in the storage unit 79 as necessary.
  • software for the processing of this embodiment can be installed via network communication by the communication unit 80 or via a removable recording medium 82.
  • the software may be stored in advance in the ROM 72, the storage unit 79, etc.
  • FIG. 6 shows an example of processing by the display control unit 205 that controls the displacement of the display image viewed by the observing user Ub, that is, the image cut out from a captured image such as a panoramic image.
  • this processing example is an example of processing that slows down the displacement speed of the displayed image so that the observing user Ub does not experience visual sickness due to displacement of the cut-out range.
  • the displacement speed is varied according to situation information so as not to impair real-time performance as much as possible. For example, in a situation where visual sickness is likely to occur, the displacement of the displayed image is executed slowly, but in a situation where visual sickness is unlikely to occur, the delay in the displayed image is suppressed.
  • In step S101, the display control unit 205 sets the speed of displacement in the received captured image as a speed v.
  • The speed v also includes a component indicating the direction of displacement; that is, it is handled as a vector.
  • For displacement in the left-right direction, the value of the speed v is set with one side of the left-right direction as a + value and the other as a - value.
  • When two axes are handled, the speed v for each axis can be set with one side of the left-right direction as a + value and the other as a - value in the x-axis direction, and one side of the up-down direction as a + value and the other as a - value in the y-axis direction.
  • the display control unit 205 may obtain the velocity v, for example, from acceleration information associated with the captured image, or may obtain the velocity v by calculating the amount of displacement of the same subject by matching between frames of the captured image.
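  • As a simplified illustration of obtaining the speed v by matching between frames of the captured image, the following Python sketch estimates the displacement of a 1-D brightness profile by brute-force matching. The function name, the profile representation, and the search range are assumptions for illustration; an actual implementation would match 2-D image features between frames:

```python
def estimate_shift(prev_row, cur_row):
    """Estimate the horizontal displacement (in samples) between two
    frames by brute-force matching of a 1-D brightness profile.
    Simplified illustration; real systems match 2-D features."""
    n = len(prev_row)
    best_shift, best_err = 0, float("inf")
    # Try candidate shifts and keep the one with minimum matching error.
    for shift in range(-(n // 4), n // 4 + 1):
        err, count = 0.0, 0
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                err += (prev_row[i] - cur_row[j]) ** 2
                count += 1
        err /= count
        if err < best_err:
            best_err, best_shift = err, shift
    return best_shift
```

  • Dividing the estimated per-frame shift by the frame period gives the displacement speed of the same subject between frames.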
  • When there is no user input, the display control unit 205 proceeds to step S103 and sets the upper speed limit vmax to a speed vmax1. Note that while the above-mentioned speed v is a vector, the upper speed limit vmax here is a scalar and takes a positive value.
  • the speed vmax1 assigned as the upper speed limit value vmax is a value set as the upper limit of the speed at which image sickness is unlikely to occur even if the displayed image is displaced regardless of the will of the observing user Ub, and is a value that is slower than the speed vmax2 described later.
  • In step S104, the display control unit 205 calculates the cut-out range of each of the subsequent frames according to the speed v, with the speed upper limit vmax set to the speed vmax1. That is, the display control unit 205 calculates the cut-out range of each of the subsequent frames, taking into consideration the amount of displacement of the image per frame period due to the speed v and the direction of the displacement.
  • the cut-out range of each frame is set so that a displacement amount equivalent to the speed v occurs per frame.
  • For example, when the imaging user Ua slowly turns his head to the left at a speed less than the speed vmax1, the front of the display image seen by the observing user Ub is displaced to the left in the same manner as the imaging user Ua changes his line of sight.
  • Since the captured image directly reflects the displacement in the imaging direction due to the behavior of the imaging user Ua, if the cut-out range were fixed in each frame, a displacement equivalent to the speed v would also occur in the displayed image viewed by the observing user Ub.
  • the cut-out range is set to shift for each frame depending on information on the speed and direction of the movement of the imaging user Ua, so that a displacement equivalent to a speed v occurs in the displayed image viewed by the observing user Ub.
  • When it is determined that the speed v exceeds the speed vmax1, that is, when the imaging user Ua turns his head to the left at a speed exceeding the speed vmax1, the front of the display image seen by the observing user Ub is displaced to the left at the speed vmax1, which is slower than the line-of-sight change speed (speed v) of the imaging user Ua, that is, in a delayed state. In step S104, the display control unit 205 causes the frames to progress at the speed vmax1 or less during the frame period until the display image catches up with the amount of displacement (change in scenery) at the speed v due to the behavior of the imaging user Ua. Therefore, the cut-out range according to the speed v is set in a state where the speed component is limited to the speed vmax1.
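  • The catch-up behavior described above can be sketched as follows in Python, under the assumption that the cut-out center and the front direction are expressed as 2-D angles and that vmax is given per frame; the function names and the representation are illustrative, not part of the embodiment:

```python
import math

def clamp_speed(v, vmax):
    """Limit the magnitude of the 2-D displacement vector v (per frame)
    to the scalar upper limit vmax while preserving its direction."""
    mag = math.hypot(v[0], v[1])
    if mag <= vmax:
        return v
    scale = vmax / mag
    return (v[0] * scale, v[1] * scale)

def advance_cutout(center, target, vmax):
    """Move the cut-out center toward the target front direction by at
    most vmax per frame; the display lags until it catches up."""
    step = clamp_speed((target[0] - center[0], target[1] - center[1]), vmax)
    return (center[0] + step[0], center[1] + step[1])
```

  • Called once per frame, the cut-out center follows the front direction at the speed v when v is at most vmax1, and is limited to vmax1 otherwise, producing the delayed displacement described above.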
  • In the figure, an arrow 51 indicates the front direction of the imaging user Ua, that is, the front direction of a captured image such as a panoramic image. It is assumed that the imaging user Ua is capturing images at a live music venue, and that in front of the imaging user Ua are lined up, from the left, as subjects 50: a piano player, a guitar player, a drummer, and a saxophone player.
  • The normal cropped image 54 shows a state in which the delay processing for preventing motion sickness is not performed. In the normal cropped image 54, the guitar player is shown in the center of the displayed image.
  • the speed of displacement of the display image is limited to a speed vmax1.
  • the cut-out center 57 is set to be somewhere between the guitar player and the drummer.
  • When the speed-variable cut-out image 58 is used as the display image, the center will be located between the guitar player and the drummer.
  • the movement of the display image from time t0 to time t1 is slower than the movement in the line of sight of the imaging user Ua.
  • the displacement speed of the displayed image is similarly limited to speed vmax1, so that the displacement of the displayed image becomes slower.
  • The captured image has its front direction shifted from the guitar player to the saxophone player by times t1 (FIG. 7) and t2 (FIG. 8), whereas the displayed image has its front direction shifted from the guitar player to the saxophone player over the period from time t1 to time t4 (FIG. 9). That is, even if the imaging user Ua moves his/her head quickly, the entire scene of the display image visually recognized by the observing user Ub changes slowly. In particular, when the process proceeds to steps S103 and S104, the observing user Ub has not performed an input (position designation operation or position non-designation operation) to instruct the change, so the change in the display image is an unexpected change. For this reason, fast movement of the display image is likely to induce motion sickness, and therefore the change speed of the display image is limited to a low value.
  • In step S111, the display control unit 205 calculates the cutout reference position using the speed v set in step S101.
  • That is, a cutout range is set so that the display image seen by the observing user Ub is not displaced even if the front direction is displaced by the movement of the imaging user Ua, and this is used as the reference position for the displacement due to the input of the observing user Ub. In other words, the position displaced by an amount equivalent to the speed (-v), canceling out the displacement at the speed v (i.e., the position before the displacement due to the movement of the imaging user Ua), becomes the cutout reference position.
  • In step S112, the display control unit 205 branches the process depending on whether the user input was a position designation operation.
  • a position designation operation here refers to an operation in which the observing user Ub designates a certain cut-out range (the center position of the cut-out range).
  • For example, an operation in which a specific position or subject is designated on the projection surface, or an operation in which an amount of displacement is designated, such as "rotate left 90 degrees", is a position designation operation as referred to here. In other words, it is an operation in which the destination of the displacement is designated. On the other hand, an operation in which continuous displacement is designated, such as a drag operation or a scroll operation, is not a position designation operation as referred to here, but a position non-designation operation.
  • the display control unit 205 sets a speed v of the displacement of the display image based on the difference between the designated position and the current position in step S113. That is, the display control unit 205 changes the speed v set in step S101 to a speed based on the difference between the designated position and the current position.
  • the speed v is set as a vector from the current position to the designated position as a target.
  • the display control unit 205 sets the upper speed limit vmax to the speed vmax2. As described above, vmax2>vmax1.
  • the display control unit 205 calculates the cropping range for each subsequent frame such that the display image is displaced toward the specified position according to the speed v, with the upper speed limit vmax set to speed vmax2.
  • the display control unit 205 sets the cutout range of each frame so that a displacement amount equivalent to the speed v occurs for each frame. For example, if the distance between the position specified by the position specification operation and the current position is short and the transition can be made at a speed equal to or less than vmax2, the cutout range of each frame is set so that the display image transitions at that speed v. On the other hand, when the speed v exceeds the speed vmax2, the display control unit 205 sets the cutout range of each frame so that the display image transitions at the speed vmax2. Therefore, the cut-out range according to the speed v is set in a state where the speed component is limited at the speed vmax2.
  • Since vmax2 > vmax1, the displayed image is allowed to be displaced faster than when there is no user input. This is because the displacement of the displayed image occurs due to the observing user Ub's own operation, so there is little risk of visual sickness occurring.
  • the upper speed limit vmax is set to a relatively high speed vmax2, but the displayed image is displaced in a direction and by an amount of displacement that strictly conforms to the operation of the observing user Ub. This is because, since the observing user Ub is aware of the displacement, there is little possibility that visual sickness will occur even if the delay amount of the displacement speed is reduced, and also because the operation of the observing user Ub can be responded to more responsively.
  • In the case of a position non-designation operation, the display control unit 205 proceeds from step S112 to step S115, and replaces the speed v set in step S101 with a speed v according to the operation amount of the drag operation, etc.
  • the replaced velocity v is set as a vector including components of the operation direction and operation speed of the drag operation.
  • No upper limit is set on the speed component (that is, the absolute value of the speed v) for the displacement caused by the position non-designation operation; in other words, the displacement speed due to an operation such as a drag operation is used as-is to set the cut-out range.
  • In this case, the forward displacement of the captured image due to the behavior of the imaging user Ua is not reflected in the displayed image, and the displayed image is displaced only in the direction and by the amount of the operation, such as a drag operation.
  • Since the observing user Ub specifies the speed and direction himself and the possibility of image-induced sickness is extremely low, the image is displaced according to the amount of operation. This makes it possible to realize a display with good responsiveness to operations.
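  • The branching in steps S112 to S115 can be sketched as follows in Python. The concrete values of VMAX1 and VMAX2, the input labels, and the function name are assumptions for illustration; the embodiment only requires that vmax2 is greater than vmax1 and that no limit applies to position non-designation operations:

```python
# Illustrative upper-limit selection corresponding to steps S112 to S115.
VMAX1 = 1.0  # limit with no user input (assumed value, degrees/frame)
VMAX2 = 5.0  # limit for a position designation operation (assumed value)

def select_upper_limit(user_input):
    """Return the speed upper limit vmax for the current situation, or
    None when no clamp is applied (position non-designation operation)."""
    if user_input is None:
        return VMAX1  # displacement caused only by the imaging user Ua
    if user_input == "position_designation":
        return VMAX2  # displacement toward a point designated by Ub
    return None       # drag/scroll: the operation speed is used as-is
```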
  • As described above, the speed of the displacement of the displayed image is varied depending on the situation, namely whether the displacement is due to the behavior of the imaging user Ua or due to input by the observing user Ub, but it may also be changed depending on other situation information.
  • FIG. 15 shows the setting changes according to the output device 30.
  • The display control unit 205 checks whether or not information about the output device 30 has been acquired. If such information has been acquired, the process proceeds to step S202, where the settings of one or both of the speed vmax2 and the speed vmax1 are changed.
  • the output device 30 may be an HMD type, a large monitor display, a relatively small display provided on a smartphone, etc. There are cases where information on the type of these devices can be acquired. Also, there are cases where it is possible to obtain information on the current display mode, such as whether the displayed image is being displayed as a window occupying part of the screen or being displayed on the entire screen.
  • FIG. 16 shows the change of settings according to user information.
  • the display control unit 205 checks whether or not user information of the observing user Ub has been input. If the predetermined user information has been input, the process proceeds to step S211, where the settings of one or both of the speeds vmax2 and vmax1 are changed.
  • For example, the observing user Ub inputs whether or not he or she is prone to visual motion sickness by operating the input unit 206.
  • the display control unit 205 changes the speed setting based on this input information. For example, if the observing user Ub is not prone to motion sickness, the values of the speeds vmax2 and vmax1 are increased. In other words, the values are updated to reduce the delay. On the other hand, if the observing user Ub is prone to motion sickness, the values of the speeds vmax2 and vmax1 are reduced. In other words, emphasis is placed on preventing motion sickness even if the real-time performance is suppressed.
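The user-information update of FIG. 16 might be sketched as follows; the scale factors (0.5 and 1.5) are illustrative assumptions:

```python
# Hypothetical sketch of the FIG. 16 flow: update the upper speed limits
# from the observing user's self-reported susceptibility to motion
# sickness. The scale factors are illustrative assumptions.

def adjust_limits_for_user(vmax1, vmax2, prone_to_sickness):
    """Return updated (vmax1, vmax2).

    Not prone -> raise both limits (reduce the display delay).
    Prone     -> lower both limits (favor sickness prevention over
                 real-time performance).
    """
    factor = 0.5 if prone_to_sickness else 1.5
    return vmax1 * factor, vmax2 * factor
```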
  • FIG. 17 shows how settings are changed depending on image contents.
  • the display control unit 205 analyzes the captured image in step S221, and determines whether or not to change the settings based on the analysis result in step S222. If it is determined that the settings are to be changed, the display control unit 205 proceeds to step S223, where it changes one or both of the settings of the speed vmax2 and the speed vmax1.
  • FIG. 18 shows how settings are changed depending on the viewing environment.
  • the display control unit 205 determines whether or not information on the viewing environment has been acquired. If information on the viewing environment has been acquired, in step S232, the display control unit 205 compares the acquired information with the previously acquired information on the viewing environment to check whether or not there has been a change in the environment. If it is determined that there is an environmental change, the display control unit 205 proceeds to step S223, and changes the settings of one or both of the speed vmax2 and the speed vmax1.
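The environment check of FIG. 18 might be sketched as follows. Using ambient room brightness (lux) as the environment measure, and the particular thresholds and limit values, are illustrative assumptions:

```python
# Hypothetical sketch of the FIG. 18 flow: compare newly acquired viewing
# environment information with the previously acquired information, and
# change the limits only when the environment has changed.

def environment_changed(previous_lux, current_lux, threshold=20.0):
    """True when brightness changed enough to warrant a settings update."""
    return abs(current_lux - previous_lux) >= threshold

def limits_for_brightness(lux):
    """Return (vmax1, vmax2); a dark room is assumed to be more prone
    to visually induced motion sickness, so the limits are lowered."""
    if lux < 50.0:
        return 5.0, 15.0
    return 10.0, 30.0
```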
  • FIG. 19 shows a setting change according to an output other than visual.
  • the display control unit 205 determines whether there is any output other than visual output, such as audio, vibration, or force output to the observing user Ub.
  • the display control unit 205 judges the information related to the sound, vibration, force sense, etc. in step S242. For example, it judges the volume, vibration level, force sense level, etc. Then, depending on the judgment result, the display control unit 205 changes the setting of one or both of the speed vmax2 and the speed vmax1 in step S243.
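The non-visual-output judgment of FIG. 19 (steps S242 and S243) might be sketched as follows. Whether such output makes visual sickness more or less likely is situation-dependent; the sketch simply assumes, for illustration, that stronger accompanying output calls for lower limits, with levels normalized to 0.0–1.0:

```python
# Hypothetical sketch of the FIG. 19 flow: judge the level of audio,
# vibration, and haptic output accompanying the display image, and
# scale the speed limits accordingly. Thresholds and factors are
# illustrative assumptions.

def adjust_limits_for_outputs(vmax1, vmax2, volume=0.0, vibration=0.0, haptic=0.0):
    level = max(volume, vibration, haptic)  # strongest accompanying output
    if level > 0.7:
        factor = 0.5    # strong output: lower the limits
    elif level > 0.3:
        factor = 0.75   # moderate output
    else:
        factor = 1.0    # weak or no output: keep the settings
    return vmax1 * factor, vmax2 * factor
```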
  • FIG. 20 shows a more specific example of the calculation of the cutout range.
  • In step S101, the display control unit 205 sets the speed of displacement in the captured image to speed v, and in step S151 calculates the cut-out reference position P0.
  • this sets a cut-out range that prevents displacement of the displayed image seen by the observing user Ub even if the forward displacement caused by the capturing user Ua occurs, and sets this as the reference position for displacement.
  • the cut-out reference position P0 is set to a position equivalent to -v. In other words, this is set to the position where the forward displacement caused by the capturing user Ua is cancelled.
  • In step S102, the display control unit 205 branches the process depending on whether or not a displacement operation has been input by the observing user Ub. If no displacement operation has been input, a speed v' is set in step S152. In this case, the speed v' is set to a speed equivalent to the difference between the gaze position of the imaging user Ua and the current position.
  • the gaze position of the imaging user Ua is the position of the center point in the front direction of the captured image, and the current position is the gaze position before the imaging user Ua moves.
  • the display control unit 205 then proceeds to step S103, sets the upper speed limit vmax to the speed vmax1, and proceeds to step S155.
  • If it is determined in step S102 that there is an input from the observing user Ub, the display control unit 205 branches the process in step S112 depending on whether the input is a position designation operation or a position non-designation operation.
  • In step S113A, the display control unit 205 sets the speed v' of the displacement of the displayed image based on the difference between the specified position and the current position. Then, in step S114, the display control unit 205 sets the upper speed limit vmax to speed vmax2, and proceeds to step S155.
  • In step S155, the display control unit 205 calculates the cut-out range using the cut-out reference position P0, the speed v' set in step S152, step S113A, or step S115A, and the upper speed limit vmax set in step S103, step S114, or step S116.
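The calculation of FIG. 20 might be sketched in one dimension as follows: the reference position P0 cancels the forward displacement caused by the capturing user Ua, and the cut-out position then moves toward the target position at a speed v' clamped to the applicable upper limit vmax. The function names, one-dimensional simplification, and per-frame update are illustrative assumptions:

```python
# One-dimensional sketch of the FIG. 20 cut-out calculation. Positions
# are horizontal offsets (e.g., in pixels) of the cut-out centre within
# the captured image; everything here is an illustrative assumption.

def clamp(value, limit):
    """Clamp value into the range [-limit, +limit]."""
    return max(-limit, min(limit, value))

def next_cutout_position(current, captured_shift, target, vmax, dt=1.0):
    """Return the next cut-out centre position.

    current        -- current cut-out centre in the captured image
    captured_shift -- this frame's displacement of the captured image
                      caused by the capturing user Ua (speed v * dt)
    target         -- position the display should reach: the gaze
                      position of Ua, or a position specified by Ub
    vmax           -- upper speed limit (vmax1 without an operation by
                      Ub, vmax2 when Ub performs an operation)
    """
    # Reference position P0 (step S151): a position equivalent to -v,
    # i.e. the forward displacement caused by Ua is cancelled.
    p0 = current - captured_shift
    # Speed v' from the difference to the target (steps S152 / S113A),
    # clamped to the upper limit vmax (steps S103 / S114, then S155).
    v_dash = clamp((target - p0) / dt, vmax)
    return p0 + v_dash * dt
```

With vmax = vmax1 the displayed image follows the capturing user's gaze only at a bounded, sickness-safe rate; with the faster vmax2 it converges more responsively on the position specified by the observing user Ub.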
  • As described above, the information processing device 20 of the embodiment performs a cut-out process to cut out a range with a narrower angle of view than the angle of view of a captured image (first image) captured by an imaging device worn by the imaging user Ua, and to create a display image (second image) for observation by the observing user Ub, and is equipped with a display control unit 205 that varies, in accordance with situation information, the displacement speed of the display image due to a change in the cut-out range when the captured image is displaced in the forward direction during the cut-out process.
  • the captured image is an image captured by the imaging device 40 worn by the imaging user Ua
  • The displacement of the captured image in the forward direction occurs due to the behavior of the imaging user Ua.
  • This system cuts out a display image from an image captured by an imaging device 40 worn by an imaging user Ua, for example, on the head.
  • the displacement speed of the display image is reduced so as not to cause motion sickness, and the displacement speed is varied according to the situation, so that the imaging user Ua does not need to be conscious of preventing motion sickness in the observing user Ub.
  • This allows the imaging user Ua to capture images freely, and the system can provide the observing user Ub with images that are unconsciously captured to better reflect the local situation.
  • the display control unit 205 variably sets the displacement speed of the displayed image within a range equal to or less than the displacement speed of the captured image, regardless of whether or not the captured image is displaced in the forward direction (see Figures 6 and 20). Making the displacement of the displayed image slower than the displacement of the actual captured image is effective in preventing image-induced motion sickness, but depending on the situation, the delay in the displacement of the displayed image relative to the captured image may be reduced or eliminated.
  • the displacement speed within a range equal to or less than the displacement speed of the captured image, it is preferable to prevent image-induced motion sickness while maintaining real-time performance as much as possible.
  • the processes in FIGS. 6 and 20 can also be applied when there is a displacement in the front direction of the captured image.
  • the situation information includes the presence or absence of a displacement operation on the display image of the observing user Ub.
  • the display control unit 205 makes the displacement speed of the display image when the displacement operation of the observing user Ub has been performed faster than the displacement speed of the display image when the displacement operation has not been performed.
  • the observing user Ub spontaneously displaces the displayed image, so that the observing user is unlikely to get motion sickness even if there is movement in the display. Therefore, when an operation is performed by the observing user Ub, the degree of delay in displacing the displayed image is made smaller than when there is no operation, thereby preventing unnecessary delays from occurring.
  • the display control unit 205 assumes that there is no displacement in the forward direction of the captured image and sets the cut-out range in accordance with the displacement operation of the observing user Ub (see steps S110 to S117 in FIG. 6).
  • the displacement is canceled and the cropping range is set according to the operation of the observing user Ub. This allows the observing user Ub to view the display image that displaces according to his/her own operation, regardless of the behavior of the imaging user Ua.
  • the display control unit 205 sets the displacement speed of the display image in the absence of a displacement operation by the observing user Ub to a first upper speed limit value (speed vmax1) or less. Also, in the case where a user viewing the display image performs an operation to specify a cutout range as a displacement operation, an example has been given in which the displacement speed according to the displacement distance of the display image is set to a second upper speed limit value (vmax2) or less, which is a speed faster than the first upper speed limit value (see steps S113, S114, and S117 in FIG. 6).
  • the speed is determined according to the difference in distance from the position of the current displayed image to the position of the cut-out region, and the speed is set to be equal to or less than speed vmax2.
  • Speed vmax2 is a speed value faster than speed vmax1.
  • the operation of specifying the cut-out range may be, for example, an operation in which the observing user Ub specifies a point on an image or a specific subject, etc., and the display control unit 205 may set a predetermined range around the point, etc. as the cut-out range.
  • the display control unit 205 sets the displacement speed of the displayed image to a speed corresponding to the non-position designation operation (see steps S115, S116, and S117 in Figure 6).
  • the cropping range is set so that the display image is displaced at a speed and in a direction according to the position-unspecified operation, even if there is a displacement in the front direction of the captured image. This allows the observing user Ub to displace the display image as desired by a dragging operation, etc., regardless of the behavior of the capturing user Ua.
  • the display control unit 205 sets the cropping range so that the image is displaced in a direction corresponding to the displacement operation.
  • the direction in which the observing user Ub performs a displacement operation on the displayed image is different from the displacement direction of the captured image due to the behavior of the imaging user Ua
  • the displayed image is displaced in a direction corresponding to the displacement operation of the observing user Ub. This allows the observing user Ub to displace the displayed image as desired by operation, regardless of the behavior of the imaging user Ua.
  • the display control unit 205 sets a cut-out reference position equivalent to the cut-out range of the displayed image in the case where there is no displacement of the captured image in the forward direction, and variably sets the displacement speed of the displayed image based on the cut-out reference position and the displacement amount according to the displacement operation. This makes it possible to displace the display image while canceling the movement of the entire screen caused by the movement of the imaging user Ua.
  • the situation information includes information about the output device 30 that displays the display image.
  • the displacement speed of the display image is set according to information such as the type, size, and display mode of the output device 30 having the display unit 31 viewed by the observing user Ub.
  • the upper speed limit vmax is changed to a value of a speed vmax2 or a speed vmax1. This makes it possible to control the displacement speed of the display image according to a display device that is likely to cause visual sickness and a display device that is unlikely to cause visual sickness.
  • the situation information includes the input information of the observing user Ub.
  • the observing user Ub inputs whether he or she is prone to motion sickness.
  • the display control unit 205 changes the value of the speed vmax2 or the speed vmax1 applied as the upper speed limit vmax. This makes it possible to control the displacement speed of the displayed image according to the individual user's susceptibility to motion sickness.
  • the situation information includes information about the image contents of the display image. For example, whether or not a person is susceptible to motion sickness varies depending on the brightness, dynamic range, close view/distant view, high frequency components, etc. of the displayed image. Therefore, the display control unit 205 changes the values of the speed vmax2 and the speed vmax1, for example, according to information about the image content. This makes it possible to control the displacement speed according to the susceptibility of the displayed image to motion sickness.
  • the situation information includes information about the viewing environment of the displayed image. For example, the value of the speed vmax2 or the speed vmax1 is changed depending on the brightness of the room the display device is placed in. This makes it possible to control the displacement speed depending on whether the environment is prone to visually-induced motion sickness.
  • the situation information includes information regarding extra-visual information that is output together with the display image. For example, when audio, vibration, haptic output, etc. are to be provided together with the displayed image, the values of the speed vmax2 and the speed vmax1 are changed according to the level of these outputs. This makes it possible to control the displacement speed according to whether or not the situation is prone to visual sickness.
  • The processing by the display control unit 205, i.e., the processing of varying the displacement speed of the display image according to the situation information, may be executed by the information processing device 10 on the imaging user Ua side, and the display image after the speed setting processing may be transmitted to the information processing device 20 on the observing user Ub side via the content server 90.
  • the content server 90 may perform processing by the display control unit 205 and transmit the display image after the speed setting processing to the information processing device 20 .
  • the program of the embodiment is a program that causes a processor such as a CPU or a DSP, or a device including these, to execute the process of FIG. 6 described above or the processes of FIGS.
  • the program of the embodiment is a program that causes the information processing device 20 (or the information processing device 10, or the content server 90) to perform a cut-out process to cut out a range with a narrower angle of view than the first image from a first image captured by an imaging device worn by the imaging user Ua, and to create a second image for observation by the observing user Ub, and also to execute display control to vary the displacement speed of the displayed image based on the second image, which is caused by changing the cut-out range of the second image when there is a displacement of the first image in the forward direction, in accordance with situational information.
  • Such a program can realize the information processing device 20 according to the embodiment.
  • The programs of the above-described embodiments can be pre-recorded in an HDD as a recording medium built into a device such as a computer device, or in a ROM in a microcomputer having a CPU.
  • Such programs can also be temporarily or permanently stored (recorded) in removable recording media such as flexible disks, CD-ROMs (Compact Disc Read Only Memory), MO (Magneto Optical) disks, DVDs (Digital Versatile Discs), Blu-ray Discs (registered trademark), magnetic disks, semiconductor memories, and memory cards.
  • removable recording media can be provided as so-called package software.
  • Such a program can be installed in a personal computer or the like from a removable recording medium, or can be downloaded from a download site via a network such as a LAN (Local Area Network) or the Internet.
  • Such a program is suitable for widespread provision of the information processing device 10 of the embodiment.
  • For example, personal computers, communication devices, mobile terminal devices such as smartphones and tablets, mobile phones, game devices, video devices, PDAs (Personal Digital Assistants), and the like can function as the information processing device 10 of the present disclosure.
  • An information processing device comprising: a display control unit that performs a cut-out process for cutting out a range of an angle of view narrower than that of a first image from a first image captured by an imaging device worn by an imaging user to create a second image for observation by an observing user; and that varies a displacement speed of a displayed image based on the second image, which is caused by changing the cut-out range of the second image, in accordance with situation information.
  • The information processing device according to (2) above, wherein the display control unit variably sets a displacement speed of the display image caused by changing the cut-out position of the second image within a range equal to or less than a displacement speed of the first image.
  • The information processing device according to (2) or (3) above, wherein the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and the display control unit makes a displacement speed of the display image when the displacement operation is performed faster than a displacement speed of the display image when the displacement operation is not performed.
  • The information processing device according to any one of (2) to (4) above, wherein the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and the display control unit sets a displacement speed of the display image in the absence of the displacement operation to a first upper speed limit value or less, and sets a displacement speed of the display image when the displacement operation is performed to a second upper speed limit value or less that is faster than the first upper speed limit value.
  • The information processing device according to any one of (2) to (5) above, wherein the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and the display control unit, when the displacement operation is performed, sets the cropping range of the second image according to the displacement operation, assuming that there is no displacement of the first image in the forward direction.
  • The information processing device according to any one of (2) to (6) above, wherein the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and the display control unit sets a displacement speed of the display image in the absence of the displacement operation to a first upper speed limit value or less, and, when the observing user performs an operation to specify a cut-out range of the second image as the displacement operation, sets a displacement speed corresponding to the displacement distance of the displayed image to be equal to or lower than a second upper speed limit value that is faster than the first upper speed limit value.
  • The information processing device according to any one of (2) to (7) above, wherein the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and the display control unit, when the observing user performs a position non-designation operation as the displacement operation, sets a displacement speed of the displayed image to a speed corresponding to the position non-designation operation.
  • The information processing device according to any one of (2) to (8) above, wherein the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and the display control unit, when the displacement operation includes an operation instructing a displacement in a direction different from the displacement direction of the first image, sets the cut-out range of the second image so that it is displaced in a direction corresponding to the displacement operation.
  • The information processing device according to any one of (2) to (9) above, wherein the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and the display control unit, when there is a displacement of the first image in the forward direction, sets a cut-out reference position corresponding to the cut-out range of the second image in the case where there is no displacement of the first image in the forward direction, and variably sets a displacement speed of the displayed image based on the cut-out reference position and a displacement amount corresponding to the displacement operation.
  • the status information includes information of a display device that displays a display image based on the second image.
  • The information processing device according to any one of (2) to (11) above, wherein the situation information includes input information of an observing user observing a display image based on the second image.
  • The information processing device according to any one of (2) to (12) above, wherein the status information includes information regarding the image content of the display image based on the second image.
  • The information processing device according to any one of (2) to (13) above, wherein the situation information includes information regarding a viewing environment of an image displayed by the second image.
  • The information processing device according to any one of (2) to (14) above, wherein the status information includes information regarding a non-visual output that is output together with an image displayed by the second image.
  • An information processing method in which an information processing device performs a cut-out process for cutting out a range of a narrower angle of view than a first image from the first image captured by an imaging device worn by an imaging user to create a second image for observation by an observing user, and performs display control for varying, in accordance with situational information, the displacement speed of a displayed image based on the second image, which is caused by a change in the cut-out range of the second image when the first image is displaced in a forward direction.
  • a program that causes an information processing device to perform a cut-out process for cutting out a range of a narrower angle of view than a first image from a first image captured by an imaging device worn by an imaging user to create a second image for observation by an observing user, and to execute display control for varying the displacement speed of a displayed image based on the second image, which is caused by a change in the cut-out range of the second image when the first image is displaced in a forward direction, in accordance with situational information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

This information processing device comprises a display control unit that: performs a cut-out process in which a range with an angle of view narrower than a first image, captured by an imaging device worn by an image-capturing user, is cut out from the first image to serve as a second image for an observing user to observe; and varies, in accordance with situation information, the displacement speed of a displayed image based on the second image, the displacement being produced by changes in the cut-out range of the second image when there has been forward-direction displacement of the first image.

Description

Information processing device, information processing method, and program

This technology relates to information processing devices, information processing methods, and programs, and in particular to the technical field of image display.

In order to transmit a person's experience to others as it is, a system has been proposed that transmits first-person perspective images captured by a wearable device such as a head-mounted camera, allowing others to experience the first-person perspective images.
Patent Document 1 below mentions that, in such a system, when an image is cut out from a 360° video and displayed, the speed of movement of the displayed image is limited to prevent visual sickness.

WO2015/122108

When a person wearing a camera moves the camera quickly during real-time image distribution, such as live streaming, the image seen by the viewing user moves quickly across the entire screen, which can cause visually induced motion sickness. For this reason, the user on the camera side must avoid moving the camera quickly while capturing, for example by being careful about the movement of the head on which the camera is worn. However, it is difficult to completely prevent visually induced motion sickness. Moreover, being too careful prevents free image capture.

On the other hand, as in Patent Document 1, visually induced motion sickness can be prevented by slowing down the movement of the image displayed on the viewing side; however, to the extent that the speed is limited, a time delay occurs before the image in front of the camera is displayed, and real-time performance is impaired.

This disclosure therefore proposes technology that prevents visually induced motion sickness while minimizing the loss of real-time performance on the viewer side.

The information processing device according to the present technology performs a cut-out process for cutting out a range of an angle of view narrower than that of a first image, from a first image captured by an imaging device worn by an imaging user, to create a second image for observation by an observing user, and includes a display control unit that varies, in accordance with situation information, the displacement speed of a displayed image based on the second image, which is caused by a change in the cut-out range of the second image when the first image is displaced in the forward direction.
In a system that cuts out a second image from a first image and displays it, the cut-out position can be made to follow when the front direction of the first image, for example the front direction of the imaging device capturing the first image, is displaced. In this case, the displacement speed of the displayed image is variably set according to the situation in order to reduce discomfort felt by the viewer of the second image.

FIG. 1 is an explanatory diagram of a system configuration according to an embodiment of the present technology.
FIG. 2 is an explanatory diagram of an example of a display device used in the system of the embodiment.
FIG. 3 is an explanatory diagram of displacement of a display image in the system of the embodiment.
FIG. 4 is a block diagram of a configuration example of the system of the embodiment.
FIG. 5 is a block diagram of an information processing device used in the system of the embodiment.
FIG. 6 is a flowchart of processing relating to image motion speed according to the embodiment.
FIG. 7 is an explanatory diagram of a speed limit on the viewing side with respect to the movement of a captured image according to the embodiment.
FIG. 8 is an explanatory diagram of a speed limit on the viewing side with respect to the movement of a captured image according to the embodiment.
FIG. 9 is an explanatory diagram of a speed limit on the viewing side with respect to the movement of a captured image according to the embodiment.
FIG. 10 is an explanatory diagram of a speed limit on the viewing side with respect to the movement of a captured image according to the embodiment.
FIG. 11 is an explanatory diagram of a speed limit when the viewing side instructs movement in the same direction as the movement of the captured image according to the embodiment.
FIG. 12 is an explanatory diagram of a speed limit when the viewing side instructs movement in the same direction as the movement of the captured image according to the embodiment.
FIG. 13 is an explanatory diagram of a speed limit when the viewing side instructs movement in the direction opposite to the movement of the captured image according to the embodiment.
FIG. 14 is an explanatory diagram of a speed limit when the viewing side instructs movement in the direction opposite to the movement of the captured image according to the embodiment.
FIG. 15 is a flowchart of an example of processing according to situation information according to the embodiment.
FIG. 16 is a flowchart of an example of processing according to situation information according to the embodiment.
FIG. 17 is a flowchart of an example of processing according to situation information according to the embodiment.
FIG. 18 is a flowchart of an example of processing according to situation information according to the embodiment.
FIG. 19 is a flowchart of an example of processing according to situation information according to the embodiment.
FIG. 20 is a flowchart of a modified example of the processing relating to image motion speed according to the embodiment.

The embodiments will be described below in the following order.
<1. Configuration of information processing system>
<2. Configuration of information processing device>
<3. Processing example>
<4. Summary and modifications>

 なお本開示において「画像」とは動画、静止画のいずれをも含む用語とする。また「画像」という用語はディスプレイデバイスに表示されている状態のみではなく、信号処理過程や、伝送過程や、記録媒体に記録されている状態の画像データも含む意味で用いる。
In this disclosure, the term "image" encompasses both moving images and still images. The term "image" is also used to refer not only to an image in the state of being displayed on a display device, but also to image data in the course of signal processing or transmission, or in the state of being recorded on a recording medium.

<1.情報処理システムの構成>
 図1を参照して、本開示の実施の形態に係る情報処理システムの構成の一例について説明する。
<1. Configuration of information processing system>
An example of a configuration of an information processing system according to an embodiment of the present disclosure will be described with reference to FIG. 1.

 図1は情報処理システム1を概略的に示している。情報処理システム1は、撮像デバイス40、情報処理装置10、情報処理装置20、出力デバイス30を含む。また情報処理システム1は、コンテンツサーバ90を含んでもよい。 FIG. 1 shows a schematic diagram of an information processing system 1. The information processing system 1 includes an imaging device 40, an information processing device 10, an information processing device 20, and an output device 30. The information processing system 1 may also include a content server 90.

 情報処理装置10、情報処理装置20、コンテンツサーバ90は、ネットワークN11を介して互いに情報を送受信可能に接続されている。
 なお、ネットワークN11の種別は特に限定されない。具体的な一例として、ネットワークN11は、Wi-Fi(登録商標)規格に基づくネットワークのような、所謂無線のネットワークにより構成されていてもよい。また、他の一例として、ネットワークN11は、インターネット、専用線、LAN(Local Area Network)、WAN(Wide Area Network)等により構成されていてもよい。また、ネットワークN11は、複数のネットワークを含んでもよく、一部が有線のネットワークとして構成されていてもよい。
The information processing device 10, the information processing device 20, and the content server 90 are connected to each other via a network N11 so as to be able to transmit and receive information to and from each other.
The type of the network N11 is not particularly limited. As a specific example, the network N11 may be configured as a so-called wireless network, such as a network based on the Wi-Fi (registered trademark) standard. As another example, the network N11 may be configured as the Internet, a dedicated line, a local area network (LAN), a wide area network (WAN), or the like. The network N11 may include a plurality of networks, and a part of the network may be configured as a wired network.

 撮像デバイス40は、撮像部41を含み、当該撮像部41により撮像ユーザUaの周囲の環境の画像(例えば動画や静止画)を撮像する。
 例えば撮像デバイス40は頭部装着型として構成され、撮像デバイス40を装着して撮像を行うユーザUa(以下「撮像ユーザUa」と表記)の頭部に対して所定の位置に撮像部41を保持する。撮像部41は、例えば、撮像素子と、当該撮像素子に対して被写体像を結像するための光学系(例えば、レンズ等)を含む。撮像素子としては、例えば、CCD(Charge Coupled Device)やCMOS(Complementary Metal Oxide Semiconductor)等が挙げられる。
The imaging device 40 includes an imaging section 41, and the imaging section 41 captures images (e.g., moving images or still images) of the environment surrounding the imaging user Ua.
For example, the imaging device 40 is configured as a head-mounted type, and holds the imaging unit 41 at a predetermined position on the head of a user Ua (hereinafter referred to as "imaging user Ua") who wears the imaging device 40 and captures images. The imaging unit 41 includes, for example, an imaging element and an optical system (for example, a lens, etc.) for forming a subject image on the imaging element. Examples of the imaging element include a charge coupled device (CCD) and a complementary metal oxide semiconductor (CMOS).

　また撮像デバイス40は、例えば撮像ユーザUaを基準として互いに異なる方向を撮像するように設けられた複数の撮像部41(例えば、撮像部41a及び41b)を含んでもよい。複数の撮像部41それぞれにより撮像された画像を、当該複数の撮像部41それぞれの画角に応じて画像処理等により合成することで、例えば360°パノラマ画像である全天周画像のように、各撮像部41の画角よりもより広い範囲が撮像された画像を取得することが可能となる。 The imaging device 40 may also include multiple imaging units 41 (e.g., imaging units 41a and 41b) that are arranged to capture images in different directions, for example, based on the imaging user Ua. By synthesizing the images captured by each of the multiple imaging units 41 through image processing or the like according to the angle of view of each of the multiple imaging units 41, it is possible to obtain an image that captures a wider range than the angle of view of each imaging unit 41, such as a panoramic image covering the full 360° surroundings.

 また1又は複数の撮像部41は、より広角のレンズが装着されているとよく、撮像ユーザUaを基準とした各方向の画像が、1又は複数の撮像部41により撮像されるようにするとよい。撮像部41は魚眼レンズにより広角撮像を行うものでもよい。 Furthermore, it is preferable that one or more of the imaging units 41 are equipped with a wider-angle lens, and images in each direction based on the imaging user Ua are captured by one or more of the imaging units 41. The imaging unit 41 may be one that captures wide-angle images using a fisheye lens.

　以降の説明では、複数の撮像部41a、41bのそれぞれにより撮像された画像に基づき全天周画像を合成する場合を例にして説明するが、必ずしも同態様のみに限定されるものではない。 In the following description, a case where a panoramic image is synthesized based on the images captured by each of the multiple imaging units 41a and 41b will be described as an example, but the embodiment is not necessarily limited to this configuration.

　例えば1又は複数の撮像部41で撮像され、情報処理装置10での合成処理等を含めて得られる画像(出力する画像)としては、2つの魚眼レンズによる全天球画像、1つの魚眼レンズによる半天球画像、パノラマ画像(全天周画像及び360°未満の範囲のパノラマ画像)など、多様な画像が想定される。このような情報処理装置10から送信される画像を説明上「撮像画像」と呼ぶこととする。この撮像画像としては、少なくとも、出力デバイス30で表示される画像(撮像画像との区別のため「表示画像」とする)の画角よりも広い画角の画像を出力するようにすればよい。出力デバイス側では、そのような撮像画像の一部の領域を切り出して表示画像とする。 For example, images captured by one or more imaging units 41 and obtained through synthesis processing and the like in the information processing device 10 (output images) can be a variety of images, such as a full-spherical image captured by two fisheye lenses, a hemispherical image captured by one fisheye lens, and a panoramic image (a full 360° panoramic image or a panoramic image covering less than 360°). For the sake of explanation, such images transmitted from the information processing device 10 will be referred to as "captured images." As this captured image, it is sufficient to output an image with a wider angle of view than the image displayed on the output device 30 (called a "display image" to distinguish it from the captured image). On the output device side, a partial area of such a captured image is cut out and used as the display image.

 また撮像デバイス40は頭部装着型のウェアラブルデバイスに限らず、撮像ユーザUaの首周りに装着するネックバンド型、ペンダント型、めがね型、肩に装着するタイプ、或いは撮像ユーザUaが手持ちで撮像する一般的なカメラなど、多様な形態が想定される。本実施の形態としては、撮像ユーザUaの挙動によって、撮像方向が変位するタイプのものを想定する。撮像方向とは撮像画角の正面方向のことを指す。撮像画像が360°画像ではない場合は、撮像方向とは撮像される入射光の光軸方向のこととしてもよい。或いは頭部装着型の撮像デバイス40の場合などは、撮像方向とは、撮像ユーザUaの頭部の正面方向ということもできる。 The imaging device 40 is not limited to a head-worn wearable device, but may take a variety of forms, such as a neckband type worn around the neck of the imaging user Ua, a pendant type, a glasses type, a shoulder-worn type, or a general camera held by the imaging user Ua. In this embodiment, a type in which the imaging direction changes depending on the behavior of the imaging user Ua is assumed. The imaging direction refers to the front direction of the imaging angle of view. If the captured image is not a 360° image, the imaging direction may refer to the optical axis direction of the incident light being captured. Alternatively, in the case of a head-worn imaging device 40, the imaging direction may be the front direction of the head of the imaging user Ua.

 撮像デバイス40は、例えば無線または有線の通信経路を介して、撮像ユーザUaが保持する情報処理装置10と互いに情報を送受信可能に構成されている。このような構成に基づき、撮像デバイス40は、撮像部41a、41bのそれぞれにより撮像された画像を情報処理装置10に送信する。 The imaging device 40 is configured to be capable of transmitting and receiving information to and from the information processing device 10 held by the imaging user Ua, for example, via a wireless or wired communication path. Based on this configuration, the imaging device 40 transmits images captured by each of the imaging units 41a and 41b to the information processing device 10.

 情報処理装置10は、撮像デバイス40から、撮像部41a、41bのそれぞれにより撮像された画像を取得する。情報処理装置10は、撮像部41a、41bのそれぞれにより撮像された画像を合成することで全天周画像等の撮像画像を生成する。 The information processing device 10 acquires images captured by the imaging units 41a and 41b from the imaging device 40. The information processing device 10 generates an image such as a panoramic image by synthesizing the images captured by the imaging units 41a and 41b.

 情報処理装置10は、撮像ユーザUaの頭部の回転等の挙動の情報、つまりこの場合は撮像画像の視点の位置や向き(正面方向)の変位を示す情報も生成し、撮像画像に関連づける。撮像ユーザUaの頭部の回転等の挙動は、表示画像に影響を与える撮像ユーザUaの操作の一態様となる。特に頭部の挙動などは、画像変位の位置を指定しない位置非指定操作の一態様となる。
 なお情報処理装置10は、撮像ユーザUaの頭部の回転等に伴う視点の位置や向きの変化を認識し、当該視点の位置や向きの変化に伴う画像の回転が抑制されるように、全天周画像等を生成してもよい。
The information processing device 10 also generates information on the behavior of the imaging user Ua, such as head rotation, i.e., in this case, information indicating the position and direction (front direction) of the viewpoint of the captured image, and associates it with the captured image. The behavior of the imaging user Ua, such as head rotation, is one aspect of the imaging user Ua's operation that affects the displayed image. In particular, the head behavior is one aspect of a position-unspecified operation that does not specify the position of image displacement.
The information processing device 10 may recognize changes in the position and direction of the viewpoint associated with the rotation of the head of the imaging user Ua, and generate a panoramic image, etc. so that rotation of the image associated with the change in the position and direction of the viewpoint is suppressed.
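The rotation suppression mentioned here can be illustrated with a minimal sketch. It assumes the captured image is laid out as an equirectangular panorama, in which 360° of yaw maps linearly onto the image width; the function name and the sign convention are illustrative assumptions and not part of the disclosure.

```python
def compensate_yaw(row, yaw_deg):
    """Cancel the imaging user's head yaw by rotating the columns of one
    row of an equirectangular panorama. 360 degrees of yaw span the full
    image width, so the image wraps around at its edges."""
    width = len(row)
    shift = round(yaw_deg / 360.0 * width) % width
    # Shifting the columns opposite to the head rotation keeps the
    # scene's front direction fixed for the viewer.
    return row[shift:] + row[:shift]

# A toy 8-column panorama (45 degrees per column): a 90-degree head turn
# is undone by rotating the columns by two positions.
print(compensate_yaw([0, 1, 2, 3, 4, 5, 6, 7], 90.0))  # → [2, 3, 4, 5, 6, 7, 0, 1]
```

Because the modulo operation wraps the shift, the same function handles rotations in either direction across the 0°/360° seam.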

 視点の位置や向きの変化については、例えば、撮像デバイス40に加速度センサや角速度センサを設けることで、当該センサの検知結果に基づき、当該撮像デバイス40の位置や姿勢の変化として認識することが可能である。また、他の一例として、撮像部41a及び41bそれぞれにより撮像された画像に対して、画像解析処理を施すことで、視点の位置や向きの変化が認識されてもよい。 Changes in the position and orientation of the viewpoint can be recognized as changes in the position and orientation of the imaging device 40 based on the detection results of an acceleration sensor or an angular velocity sensor provided in the imaging device 40, for example. As another example, changes in the position and orientation of the viewpoint can be recognized by performing image analysis processing on the images captured by the imaging units 41a and 41b, respectively.
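As a rough sketch of the sensor-based approach, the change in viewpoint orientation can be estimated by integrating angular-velocity samples from an angular velocity sensor. This is a simplified single-axis illustration under assumed names; a practical implementation would fuse gyro and acceleration data to limit drift.

```python
def integrate_yaw(rates_dps, dt_s, initial_yaw_deg=0.0):
    """Estimate viewpoint yaw by Euler-integrating angular-velocity
    samples (in degrees per second) taken at a fixed interval dt_s."""
    yaw = initial_yaw_deg
    for rate in rates_dps:
        yaw += rate * dt_s
    return yaw % 360.0

# Half a second of 100 Hz samples while the head turns at 60 deg/s:
print(round(integrate_yaw([60.0] * 50, dt_s=0.01), 6))  # → 30.0
```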

 そして、情報処理装置10は、生成した撮像画像、例えば全天周画像を、ネットワークN11を介してコンテンツサーバ90に送信する。 Then, the information processing device 10 transmits the generated captured image, for example a panoramic image, to the content server 90 via the network N11.

 なお、情報処理装置10のうち少なくとも一部の構成が、撮像デバイス40に設けられていてもよい。例えば、全天周画像を生成するための構成が撮像デバイス40に設けられていてもよい。また、情報処理装置10と撮像デバイス40とが一体的に構成されていてもよい。 In addition, at least a part of the configuration of the information processing device 10 may be provided in the imaging device 40. For example, a configuration for generating a panoramic image may be provided in the imaging device 40. In addition, the information processing device 10 and the imaging device 40 may be configured integrally.

 コンテンツサーバ90は、撮像ユーザUaが保持する情報処理装置10から、撮像デバイス40による撮像結果に基づく撮像画像(例えば全天周画像)を、ネットワークN11を介して取得し、取得した当該画像を他のユーザ(観察ユーザUb)が保持する情報処理装置20に配信する。なお観察ユーザUbとは、撮像ユーザUa側で撮像された画像を視聴する側のユーザを指す。観察ユーザUbは一人の場合も複数人の場合もある。 The content server 90 acquires, via the network N11, captured images (e.g., panoramic images) based on the results of imaging by the imaging device 40 from the information processing device 10 held by the imaging user Ua, and distributes the acquired images to the information processing device 20 held by another user (observing user Ub). Note that the observing user Ub refers to a user who views the images captured by the imaging user Ua. There may be one or more observing users Ub.

 また、コンテンツサーバ90は、情報処理装置10から取得した撮像画像を、例えば、記憶部95に一時的または恒久的に記憶し、記憶部95に記憶された撮像画像を、情報処理装置20に配信してもよい。このような構成により、撮像デバイス40による撮像結果に基づく撮像画像を、撮像ユーザUaが保持する情報処理装置10から、観察ユーザUbが保持する情報処理装置20に、同期的または非同期的に伝送することが可能となる。 The content server 90 may also temporarily or permanently store the captured images acquired from the information processing device 10 in, for example, the storage unit 95, and distribute the captured images stored in the storage unit 95 to the information processing device 20. With this configuration, it becomes possible to transmit captured images based on the imaging results of the imaging device 40 synchronously or asynchronously from the information processing device 10 held by the imaging user Ua to the information processing device 20 held by the observing user Ub.

 出力デバイス30は、ディスプレイ等の表示部31を備えた所謂頭部装着型の表示装置(HMD:Head Mounted Display)として構成されており、出力デバイス30を装着した観察ユーザUbに対して、表示部31を介して画像を提示する。 The output device 30 is configured as a so-called head mounted display (HMD) equipped with a display unit 31 such as a display, and presents an image via the display unit 31 to the observation user Ub wearing the output device 30.

 例えば図2Aは、情報処理システム1において適用される出力デバイス30の一例を示している。出力デバイス30は、ユーザの頭部に装着されることで、当該ユーザの眼前に、画像を表示するための表示部31(例えば、表示パネル)を保持するように構成されている。なお、出力デバイス30として適用可能な頭部装着型の表示装置(HMD)には、没入型HMD、シースルー型HMD、ビデオシースルー型HMD、及び網膜投射型HMDが含まれる。 For example, FIG. 2A shows an example of an output device 30 applied in the information processing system 1. The output device 30 is configured to be worn on the user's head and to hold a display unit 31 (e.g., a display panel) for displaying an image in front of the user's eyes. Note that head-mounted display devices (HMDs) that can be used as the output device 30 include immersive HMDs, see-through HMDs, video see-through HMDs, and retinal projection HMDs.

 没入型HMDは、ユーザの頭部または顔部に装着された場合に、ユーザの眼を覆うように装着され、ユーザの眼前にディスプレイ等の表示部が保持される。そのため、没入型HMDを装着したユーザは、外部の風景(即ち、現実世界の風景)を直接視野に入れることが困難であり、表示部に表示された映像のみが視界に入ることとなる。このような構成により、没入型HMDは、画像を視聴しているユーザに対して没入感を与えることが可能となる。なお図2Aに示す出力デバイス30は、没入型HMDに相当する。 When an immersive HMD is worn on the user's head or face, it is worn so as to cover the user's eyes, and a display unit such as a display is held in front of the user's eyes. Therefore, it is difficult for a user wearing an immersive HMD to directly view the outside scenery (i.e., the scenery of the real world), and only the image displayed on the display unit comes into view. With this configuration, the immersive HMD can give a sense of immersion to the user viewing the image. Note that the output device 30 shown in Figure 2A corresponds to an immersive HMD.

 シースルー型HMDは、例えばハーフミラーや透明な導光板を用いて、透明な導光部等からなる虚像光学系をユーザの眼前に保持し、当該虚像光学系の内側に画像を表示させる。そのため、シースルー型HMDを装着したユーザは、虚像光学系の内側に表示された画像を視聴している間も、外部の風景を視野に入れることが可能となる。なお、シースルー型HMDの具体的な一例として、メガネのレンズに相当する部分を虚像光学系として構成した、所謂メガネ型のウェアラブルデバイスが挙げられる。 A see-through HMD holds a virtual image optical system consisting of a transparent light guide section in front of the user's eyes using, for example, a half mirror or a transparent light guide plate, and displays an image inside the virtual image optical system. Therefore, a user wearing a see-through HMD can view the outside scenery even while viewing an image displayed inside the virtual image optical system. One specific example of a see-through HMD is a so-called glasses-type wearable device in which the part equivalent to the lenses of glasses is configured as a virtual image optical system.

 ビデオシースルー型HMDは、没入型HMDと同様に、ユーザの眼を覆うように装着され、ユーザの眼前にディスプレイ等の表示部が保持される。一方で、ビデオシースルー型HMDは、周囲の風景を撮像するための撮像部を有し、当該撮像部により撮像されたユーザの視線方向の風景の画像を表示部に表示させる。このような構成により、ビデオシースルー型HMDを装着したユーザは、外部の風景を直接視野に入れることは困難ではあるが、表示部に表示された画像により、外部の風景を確認することが可能となる。 Like immersive HMDs, video see-through HMDs are worn over the user's eyes, with a display unit such as a display held in front of the user's eyes. On the other hand, video see-through HMDs have an imaging unit for capturing images of the surrounding scenery, and the image of the scenery captured by the imaging unit in the user's line of sight is displayed on the display unit. With this configuration, a user wearing a video see-through HMD can check the outside scenery from the image displayed on the display unit, although it is difficult for the user to directly view the outside scenery.

 網膜投射型HMDは、ユーザの眼前に投影部が保持されており、当該投影部からユーザの眼に向けて、外部の風景に対して画像が重畳するように当該画像が投影される。より具体的には、網膜投射型HMDでは、ユーザの眼の網膜に対して、投影部から画像が直接投射され、当該画像が網膜上で結像する。このような構成により、近視や遠視のユーザの場合においても、より鮮明な映像を視聴することが可能となる。また、網膜投射型HMDを装着したユーザは、投影部から投影される画像を視聴している間も、外部の風景を視野に入れることが可能となる。 A retinal projection HMD has a projection unit held in front of the user's eyes, and the image is projected from the projection unit towards the user's eyes so that it is superimposed on the external scenery. More specifically, in a retinal projection HMD, an image is projected directly from the projection unit onto the retina of the user's eye, and the image is formed on the retina. This configuration allows users with myopia or hyperopia to view clearer images. Furthermore, a user wearing a retinal projection HMD can view the external scenery even while viewing an image projected from the projection unit.

 ここまでHMDタイプを説明したが、情報処理システム1において適用される出力デバイス30としては図2Bに示すように、モニタディスプレイ装置でもよいし、図2Cのようにスマートフォン等の携帯型情報処理装置による表示画面であってもよい。さらには図示しない投影機器及びスクリーンで構成される表示装置や、ドーム状の空間に形成され、観察ユーザUbがドーム内に入って視聴する天球型の大型のディスプレイも出力デバイス30とすることができる。 So far, the HMD type has been described, but the output device 30 applied to the information processing system 1 may be a monitor display device as shown in FIG. 2B, or a display screen of a portable information processing device such as a smartphone as shown in FIG. 2C. Furthermore, the output device 30 may also be a display device composed of a projection device and a screen (not shown), or a large spherical display formed in a dome-shaped space that the observing user Ub views inside the dome.

 観察ユーザUb側の情報処理装置20は、ネットワークN11を介してコンテンツサーバ90から、撮像デバイス40による撮像結果に基づく撮像画像(例えば全天周画像)を取得する。そして、情報処理装置20は、取得した画像を出力デバイス30の表示部31に表示させる。このような構成により、出力デバイス30を装着した観察ユーザUbは、例えば、出力デバイス30の表示部31を介して、撮像デバイス40を装着した撮像ユーザUaの周囲の環境の画像を視聴することが可能となる。 The information processing device 20 on the observing user Ub's side acquires an image (e.g., a panoramic image) based on the imaging results by the imaging device 40 from the content server 90 via the network N11. The information processing device 20 then displays the acquired image on the display unit 31 of the output device 30. With this configuration, the observing user Ub wearing the output device 30 can view an image of the environment surrounding the imaging user Ua wearing the imaging device 40, for example, via the display unit 31 of the output device 30.

 また情報処理装置20は、観察ユーザUbの頭部の動き(頭部の姿勢)の検知結果の通知を出力デバイス30から受けてもよい。このような構成により、情報処理装置20は、観察ユーザUbの頭部の動きに基づき、観察ユーザUbの視点の位置や向きの変化を認識し、認識した視点の変化に応じた画像を、出力デバイス30(表示部31)を介して観察ユーザUbに提示してもよい。 The information processing device 20 may also receive notification of the detection results of the head movement (head posture) of the observing user Ub from the output device 30. With this configuration, the information processing device 20 may recognize changes in the position and direction of the viewpoint of the observing user Ub based on the head movement of the observing user Ub, and present an image corresponding to the recognized change in viewpoint to the observing user Ub via the output device 30 (display unit 31).

　例えば図3は観察ユーザUbの視点の変化に応じた画像を提示するための動作の一例を示している。図3に示す例では、全天周画像のように、観察ユーザUbの視野よりも広く展開された画像v0を、観察ユーザUbが見回すように視点を移動させながら、出力デバイス30(図示せず)を介して参照する態様を模擬した場合の一例を示している。なお画像v0は、動画像であってもよいし静止画像であってもよい。ここでいう画像v0は撮像デバイス40による撮像結果に基づく撮像画像に相当する。 For example, FIG. 3 shows an example of an operation for presenting an image according to a change in the viewpoint of the observing user Ub. The example shown in FIG. 3 simulates a situation in which the observing user Ub views an image v0 that is expanded wider than the observing user Ub's field of view, such as a panoramic image, via an output device 30 (not shown) while moving the viewpoint as if looking around. Note that image v0 may be a moving image or a still image. Image v0 here corresponds to a captured image based on the imaging results by the imaging device 40.

 情報処理装置20は、観察ユーザUbの頭部の動きを検出することで、観察ユーザUbの視点の変化を認識する。そして情報処理装置20は、観察ユーザUbの視野よりも広く展開された画像v0のうち、認識した視点の位置や向きに応じた部分の画像v11を、あらかじめ設定された視野角に基づき切り出し、切り出した画像v11を、出力デバイス30を介して観察ユーザUbに提示する。即ち、図3に示された画像v11は、出力デバイス30の表示部31に提示される画像を模式的に示しているもので、観察ユーザUbが実際に視認する表示画像である。 The information processing device 20 recognizes changes in the viewpoint of the observing user Ub by detecting the movement of the head of the observing user Ub. The information processing device 20 then cuts out an image v11 of a portion of the image v0, which is expanded wider than the field of view of the observing user Ub, that corresponds to the position and direction of the recognized viewpoint, based on a pre-set field of view angle, and presents the cut-out image v11 to the observing user Ub via the output device 30. That is, the image v11 shown in FIG. 3 is a schematic representation of the image presented on the display unit 31 of the output device 30, and is the display image that is actually viewed by the observing user Ub.

 なお画像v11を抽出するための視野角は、固定であってもよいし、ユーザ操作等に基づき変更できるようにしてもよい。例えば、画像v11中の一部をさらに拡大して観察ユーザUbに提示する場合には、情報処理装置20は、視野角をより狭く設定することで、画像v0に対する画像v11の大きさが相対的に小さくなるように制御してもよい。 The viewing angle for extracting image v11 may be fixed, or may be changeable based on user operation, etc. For example, when a portion of image v11 is to be further enlarged and presented to observing user Ub, information processing device 20 may control image v11 to be relatively smaller in size with respect to image v0 by setting the viewing angle narrower.
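The relationship between the viewing angle and the size of the cut-out region can be sketched as follows. The layout (an equirectangular 360°×180° captured image) and all names are assumptions for illustration; as the text notes, narrowing the field of view shrinks the cut-out region relative to v0, which yields a magnified view when the region is scaled to the display.

```python
def crop_rect(pano_w, pano_h, yaw_deg, pitch_deg, fov_h_deg, fov_v_deg):
    """Return (left, top, width, height) of the display region cut out
    of a 360x180-degree captured image, centered on the viewpoint
    (yaw, pitch). The left edge wraps around horizontally."""
    w = round(fov_h_deg / 360.0 * pano_w)
    h = round(fov_v_deg / 180.0 * pano_h)
    cx = (yaw_deg % 360.0) / 360.0 * pano_w
    cy = (pitch_deg + 90.0) / 180.0 * pano_h          # pitch 0 = horizon
    left = round(cx - w / 2) % pano_w                 # horizontal wrap-around
    top = max(0, min(pano_h - h, round(cy - h / 2)))  # clamp vertically
    return left, top, w, h

# Looking straight ahead with a 90x60-degree field of view:
print(crop_rect(3600, 1800, 0, 0, 90, 60))     # → (3150, 600, 900, 600)
# Halving the horizontal viewing angle halves the cut-out width (zoom-in):
print(crop_rect(3600, 1800, 0, 0, 45, 60)[2])  # → 450
```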

 このような処理のために、出力デバイス30には、例えば加速度センサや角速度センサ(ジャイロセンサ)等の検知部(図4の検知部33)が設けられ、観察ユーザUbの挙動、例えば頭部の動き(頭部の姿勢)を検知可能に構成されている場合がある。例えば出力デバイス30が図2AのようなHMDの場合、出力デバイス30は、ユーザの頭部の動きとして、ヨー(yaw)方向、ピッチ(pitch)方向、及びロール(roll)方向それぞれの成分を検出してもよい。出力デバイス30は、観察ユーザUbの頭部の動きの検知結果を情報処理装置20に通知する。なお、ここでいう頭部の動きの検出とは観察ユーザUbの視線の変化の検出という意味がある。 For such processing, the output device 30 may be provided with a detection unit (detection unit 33 in FIG. 4) such as an acceleration sensor or an angular velocity sensor (gyro sensor) and configured to detect the behavior of the observing user Ub, such as head movement (head posture). For example, if the output device 30 is an HMD as shown in FIG. 2A, the output device 30 may detect the yaw, pitch, and roll components of the user's head movement. The output device 30 notifies the information processing device 20 of the detection result of the head movement of the observing user Ub. Note that detection of head movement here means detection of a change in the line of sight of the observing user Ub.

 なお、必ずしも観察ユーザUbの頭部の動きに連動して表示画像を変位させるものに限られない。
 例えば出力デバイス30の種別、機種タイプなどにより、観察ユーザUbの腕の方向に連動して表示画像を変位させるものも考えられる。例えば観察ユーザUbが保持したり装着したりする機器(情報処理装置20等)に加速度センサや角速度センサが搭載され、観察ユーザUbの挙動、例えば腕の動き等を検知してもよい。
 さらに出力デバイス30や情報処理装置20とは別体のセンシングデバイスとして、加速度センサや角速度センサが用意され、観察ユーザUbに装着されて、観察ユーザUbの挙動を検知できるようにしてもよい。
It should be noted that the display image is not necessarily displaced in conjunction with the movement of the head of the observing user Ub.
For example, it is possible to displace the display image in conjunction with the direction of the arm of the observing user Ub depending on the type and model of the output device 30. For example, an acceleration sensor or an angular velocity sensor may be mounted on a device (such as the information processing device 20) held or worn by the observing user Ub to detect the behavior of the observing user Ub, such as the movement of the arm.
Furthermore, an acceleration sensor or an angular velocity sensor may be prepared as a sensing device separate from the output device 30 and the information processing device 20 and attached to the observing user Ub to detect the behavior of the observing user Ub.

 例えばこれらのように観察ユーザUbの挙動をセンシングできるようにすることで、観察ユーザUbの挙動に応じて表示画像の正面方向を変位させることができる。但し観察ユーザUbの挙動を検知する構成を備えることは情報処理システム1において必須ではない。例えば出力デバイス30や情報処理装置20において、視聴方向を観察ユーザUbが操作できる操作子やGUI(Graphical User Interface)が設けられるようにし、その操作に応じて表示画像の変位が行われるようにしてもよい。 For example, by being able to sense the behavior of the observing user Ub in this manner, it is possible to displace the forward direction of the displayed image in accordance with the behavior of the observing user Ub. However, it is not essential for the information processing system 1 to be provided with a configuration for detecting the behavior of the observing user Ub. For example, the output device 30 or the information processing device 20 may be provided with an operator or GUI (Graphical User Interface) that allows the observing user Ub to operate the viewing direction, and the displayed image may be displaced in accordance with that operation.

 また、観察ユーザUbの挙動を検出する方法については、出力デバイス30や情報処理装置20に設けられた各種センサの検出結果に基づく方法には限定されない。例えば、観察ユーザUbの頭部や身体の動きを検出する構成を、出力デバイス30の外部に設けてもよい。具体的な一例として、撮像装置により観察ユーザUbの画像を撮像し、撮像された画像を解析することで、観察ユーザUbの頭部や身体の動きを検出してもよい。また、出力デバイス30の外部に所謂光センサのような各種センサを設け、当該センサにより観察ユーザUbをセンシングすることで、観察ユーザUbの頭部の動きを検出してもよい。 Furthermore, the method of detecting the behavior of the observing user Ub is not limited to a method based on the detection results of various sensors provided in the output device 30 or the information processing device 20. For example, a configuration for detecting the head and body movements of the observing user Ub may be provided outside the output device 30. As a specific example, an image of the observing user Ub may be captured by an imaging device, and the captured image may be analyzed to detect the head and body movements of the observing user Ub. Furthermore, various sensors such as so-called optical sensors may be provided outside the output device 30, and the head movements of the observing user Ub may be detected by sensing the observing user Ub with the sensors.

 以上のような情報処理システム1の構成により、観察ユーザUbは、撮像ユーザUaの視点の光景を見ることや、自身の視点を移動させながら見回すように撮像ユーザUaの周囲の光景を見たりすることができ、あたかも撮像ユーザUaと同じ場所にいるかのような臨場感あふれる画像を体験することが可能となる。そのため、例えば、画像v0として、撮像デバイス40による撮像結果に基づく画像(全天周画像)が提示されることで、観察ユーザUbは、あたかも撮像デバイス40を装着した撮像ユーザUaと空間を共有しているような体験を得ることが可能となる。 With the information processing system 1 configured as described above, the observing user Ub can see the view from the viewpoint of the imaging user Ua, or can see the view around the imaging user Ua as if he is looking around while moving his own viewpoint, allowing him to experience a very realistic image as if he were in the same place as the imaging user Ua. Therefore, for example, by presenting an image (panoramic image) based on the imaging results by the imaging device 40 as image v0, the observing user Ub can have the experience of sharing a space with the imaging user Ua who is wearing the imaging device 40.

 なお情報処理装置20のうち少なくとも一部の構成が、出力デバイス30に設けられていてもよい。例えば、全天周画像から、観察ユーザUbの視点の位置や向きに応じた部分画像を抽出し、抽出した部分画像を表示部31に提示するための構成が、出力デバイス30に設けられていてもよい。また、情報処理装置20と出力デバイス30とが一体的に構成されていてもよい。 In addition, at least a part of the configuration of the information processing device 20 may be provided in the output device 30. For example, the output device 30 may be provided with a configuration for extracting a partial image from the panoramic image according to the position and direction of the viewpoint of the observing user Ub and presenting the extracted partial image on the display unit 31. In addition, the information processing device 20 and the output device 30 may be configured integrally.

　ところで上記では観察ユーザUbの挙動により撮像画像内での表示画像の切り出し範囲が変位することを述べたが、撮像画像自体は、撮像ユーザUaの挙動、例えば頭部の動きに応じて正面方向が変位する。撮像部41の撮像方向が変化するためである。従って仮に観察ユーザUbが視線を変えなくても、つまり切り出し範囲が変わらなくても、撮像ユーザUaの挙動、例えば頭部を上下左右方向に振ることによって、観察ユーザUbが表示画像として視認する光景全体が上下左右方向に変位することになる。 As described above, the cut-out range of the display image within the captured image is displaced according to the behavior of the observing user Ub; however, the front direction of the captured image itself is displaced according to the behavior of the imaging user Ua, for example, head movement, because the imaging direction of the imaging unit 41 changes. Therefore, even if the observing user Ub does not change his or her line of sight, that is, even if the cut-out range does not change, the entire scene that the observing user Ub sees as the display image will be displaced up, down, left, or right when the imaging user Ua, for example, shakes his or her head in those directions.

 また上記に説明した例では、撮像デバイス40による撮像結果に基づく撮像画像(全天周画像等)がコンテンツサーバ90を介して情報処理装置10から情報処理装置20に送信される例について説明したが、必ずしも同構成のみには限定されない。例えば撮像デバイス40による撮像結果に基づく画像が、コンテンツサーバ90を介さずに、情報処理装置10から情報処理装置20に対して直接送信される構成であってもよい。 In the above example, a captured image (such as a panoramic image) based on the imaging results by the imaging device 40 is transmitted from the information processing device 10 to the information processing device 20 via the content server 90, but this is not necessarily limited to this configuration. For example, the image based on the imaging results by the imaging device 40 may be transmitted directly from the information processing device 10 to the information processing device 20 without going through the content server 90.

 続いて情報処理システム1の機能構成について説明する。図4は情報処理システム1の機能構成の一例を示したブロック図である。 Next, the functional configuration of the information processing system 1 will be described. Figure 4 is a block diagram showing an example of the functional configuration of the information processing system 1.

 撮像デバイス40は、撮像部41(図では複数の撮像部41a及び41b)と、検知部43とを含む。なお撮像デバイス40が2つの撮像部41a及び41bを含む例は一例で、撮像部41は1つでもよいし、3以上の撮像部41を含んでもよい。 The imaging device 40 includes an imaging section 41 (in the figure, multiple imaging sections 41a and 41b) and a detection section 43. Note that the example in which the imaging device 40 includes two imaging sections 41a and 41b is just an example, and the imaging device 40 may include one imaging section 41 or three or more imaging sections 41.

 撮像部41a及び41bのそれぞれは、撮像デバイス40の周囲の画像(即ち、撮像デバイス40を装着する撮像ユーザUaの周囲の画像)を撮像し、撮像した画像を情報処理装置10に出力する。 Each of the imaging units 41a and 41b captures an image of the surroundings of the imaging device 40 (i.e., an image of the surroundings of the imaging user Ua wearing the imaging device 40) and outputs the captured image to the information processing device 10.

 検知部43は、例えば、加速度センサや角速度センサ(ジャイロセンサ)から成り、撮像デバイス40の位置や姿勢の変化を検知し、検知結果を情報処理装置10に出力する。 The detection unit 43 is composed of, for example, an acceleration sensor and an angular velocity sensor (gyro sensor), detects changes in the position and attitude of the imaging device 40, and outputs the detection results to the information processing device 10.

 情報処理装置10は、通信部101と、認識部103と、画像合成部105とを含む。通信部101は、情報処理装置10内の各構成が、ネットワークN11を介して外部装置(例えば、コンテンツサーバ90や情報処理装置20)と通信を行うための通信インタフェースである。 The information processing device 10 includes a communication unit 101, a recognition unit 103, and an image synthesis unit 105. The communication unit 101 is a communication interface that enables each component in the information processing device 10 to communicate with an external device (e.g., the content server 90 or the information processing device 20) via the network N11.

 なお通信部101の構成は、ネットワークN11の種別に応じて適宜変更することが可能である。例えば、ネットワークN11が無線のネットワークの場合には、通信部101は、ベースバンドプロセッサやRF回路等を含んでもよい。以降の説明では、情報処理装置10内の各構成が、ネットワークN11を介して外部装置と情報の送受信を行う場合には、特に説明が無い場合においても、通信部101を介して当該情報の送受信を行うものとする。 The configuration of the communication unit 101 can be changed as appropriate depending on the type of network N11. For example, if the network N11 is a wireless network, the communication unit 101 may include a baseband processor, an RF circuit, and the like. In the following explanation, when each component in the information processing device 10 transmits and receives information to and from an external device via the network N11, it is assumed that the information is transmitted and received via the communication unit 101, even if no special explanation is given.

 認識部103は、検知部43から撮像デバイス40の位置や姿勢の変化の検知結果を取得し、取得した検知結果に基づき当該撮像デバイス40を装着する撮像ユーザUaの頭部の位置や姿勢から判定される頭部の向き(換言すると、撮像ユーザUaの視線)を認識する。そして、認識部103は、撮像ユーザUaの頭部の位置や姿勢の認識結果を示す情報を画像合成部105に出力する。 The recognition unit 103 acquires the detection results of changes in the position and posture of the imaging device 40 from the detection unit 43, and recognizes the head direction (in other words, the line of sight of the imaging user Ua) determined from the head position and posture of the imaging user Ua wearing the imaging device 40 based on the acquired detection results. The recognition unit 103 then outputs information indicating the recognition results of the head position and posture of the imaging user Ua to the image synthesis unit 105.

 画像合成部105は、複数の撮像部41それぞれから、当該撮像部41により撮像された画像を取得する。画像合成部105は、複数の撮像部41のそれぞれにより撮像された画像を、各撮像部41の撮像方向と当該撮像部41の画角とに応じて合成することで、各撮像部41の画角よりもより広い範囲が撮像された撮像画像(例えば全天周画像)を生成する。撮像部41が撮像ユーザUaの頭部に装着されていることにより、画像合成部105により生成された全天周画像の正面方向は、撮像ユーザUaの頭部の姿勢の変化に応じて変位する。 The image synthesis unit 105 acquires from each of the multiple imaging units 41 an image captured by that imaging unit 41. The image synthesis unit 105 synthesizes the images captured by each of the multiple imaging units 41 according to the imaging direction of each imaging unit 41 and the angle of view of that imaging unit 41 to generate an image (e.g., a panoramic image) that captures a wider range than the angle of view of each imaging unit 41. As the imaging unit 41 is attached to the head of the imaging user Ua, the front direction of the panoramic image generated by the image synthesis unit 105 changes according to changes in the posture of the head of the imaging user Ua.
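As a toy illustration of this synthesis step (not the actual pipeline of the image synthesis unit 105, which would involve lens-distortion correction and reprojection), two strips whose angles of view overlap can be joined by blending the shared columns. The function name, the list-of-columns representation, and the averaging "blend" are all assumptions for illustration.

```python
def synthesize_panorama(front, back, overlap):
    """Join two horizontal strips (e.g., from imaging units 41a and 41b)
    into one wider strip. The last `overlap` columns of `front` and the
    first `overlap` columns of `back` are assumed to show the same part
    of the scene and are averaged as a crude blend. (The wrap-around
    seam of a true 360-degree panorama is ignored here.)"""
    if overlap == 0:
        return front + back
    blended = [(f + b) / 2 for f, b in zip(front[-overlap:], back[:overlap])]
    return front[:-overlap] + blended + back[overlap:]

# Two 4-column strips overlapping by one column:
print(synthesize_panorama([1, 2, 3, 4], [4, 6, 7, 8], overlap=1))
# → [1, 2, 3, 4.0, 6, 7, 8]
```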

 なお画像合成部105が全天周画像としての撮像画像を生成するものとして説明するが、画像合成部105が生成する撮像画像は、必ずしも全天周画像には限定されない。撮像画像は、少なくとも観察ユーザUb側の表示画像よりも広い画角の画像であるとし、例えば水平180度の画像などでもよいし、前後左右360度にわたって被写体光景が表示される半天球画像や全天球画像などでもよい。
 また画像合成部105を有する構成としているが、例えば1つの撮像部41で撮像する場合には画像の合成処理は必要ない。その意味では画像合成部105を設けない例もある。
Although the image synthesis unit 105 is described as generating a captured image as a panoramic image, the captured image generated by the image synthesis unit 105 is not necessarily limited to a panoramic image. The captured image is assumed to be an image having at least a wider angle of view than the display image on the observing user Ub side, and may be, for example, an image of 180 degrees horizontally, or a hemispherical image or a full spherical image in which the subject scene is displayed over 360 degrees in the front, back, left, and right directions.
Although the configuration includes the image synthesis unit 105, image synthesis processing is not necessary when capturing images using, for example, a single image capturing unit 41. In that sense, there are also examples in which the image synthesis unit 105 is not provided.

 また画像合成部105は、撮像ユーザUaの頭部の位置や姿勢の認識結果を示す情報に基づき、当該頭部に対して加わる加速度の向きや大きさを示す情報を、加速度情報として算出する。なお、このとき画像合成部105は、撮像ユーザUaの頭部の動きのうち、並進運動の成分を除き、回転運動のみを考慮して加速度情報を算出してよい。
 加速度情報は1軸(例えばヨー方向)或いは2軸(例えばヨー方向とピッチ方向)の情報でもよいが、3軸(ヨー方向、ピッチ方向、ロール方向)の情報であってもよい。
The image synthesis unit 105 also calculates, based on the information indicating the recognition result of the position and posture of the head of the imaging user Ua, information indicating the direction and magnitude of the acceleration applied to the head as acceleration information. Note that, at this time, the image synthesis unit 105 may calculate the acceleration information by excluding the translational motion component of the head movement of the imaging user Ua and taking into account only the rotational motion.
The acceleration information may be one-axis (for example, yaw direction) or two-axis (for example, yaw and pitch directions) information, or may be three-axis (yaw, pitch, and roll) information.
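One way such acceleration information could be derived from orientation samples alone, ignoring translational movement as noted above, is a second finite difference over (yaw, pitch, roll) samples. The following is a hedged sketch; the function name and the fixed sampling interval are assumptions, not the embodiment's actual calculation.

```python
def angular_acceleration(samples, dt):
    """Second finite difference over (yaw, pitch, roll) orientation samples.

    `samples` is a sequence of (yaw, pitch, roll) tuples in degrees taken
    every `dt` seconds; only rotation is considered, so any translational
    movement of the head is ignored.  Returns per-axis angular acceleration
    in degrees per second squared for the most recent sample.
    """
    if len(samples) < 3:
        return (0.0, 0.0, 0.0)
    older, prev, latest = samples[-3], samples[-2], samples[-1]
    return tuple(
        (latest[i] - 2.0 * prev[i] + older[i]) / (dt * dt) for i in range(3)
    )
```

A head turning at constant angular velocity yields zero acceleration on every axis, while a change of turning speed yields a non-zero yaw component.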

 そして、画像合成部105は、生成した全天周画像等の撮像画像と、算出した加速度情報とを関連付けて、ネットワークN11を介して接続されたコンテンツサーバ90に送信する。 Then, the image synthesis unit 105 associates the captured image, such as the generated panoramic image, with the calculated acceleration information and transmits them to the content server 90 connected via the network N11.

 なお、上述した撮像デバイス40及び情報処理装置10による、撮像ユーザUaの周囲の画像(例えば、全天周画像)の取得に係る処理や、当該画像の送信に係る処理のそれぞれが実行されるタイミングは特に限定されない。具体的な一例として、撮像デバイス40及び情報処理装置10は、リアルタイムで、撮像ユーザUaの周囲の画像を撮像し、撮像した画像に基づき全天周画像を生成して、当該全天周画像をコンテンツサーバ90に送信してもよい。
　また、他の一例として、撮像デバイス40及び情報処理装置10は、撮像画像を図示しない記録部に記録しておき、所望のタイミング(例えば、撮像ユーザUaからの操作を受けたタイミング)で、記録されていた撮像画像を記録部から読み出して通信部101から送信するようにしてもよい。この場合、情報処理装置10は、撮像画像に撮像ユーザUaの頭部の位置や姿勢の認識結果を示す加速度情報を関連づけて記録部に記録しておき、送信するときは、撮像画像と共に加速度情報を送信する。
　また情報処理装置10は、全天周画像の生成に同期して当該全天周画像をコンテンツサーバ90に送信してもよいし、全天周画像の生成時に、当該全天周画像に加速度情報を関連づけて図示しない記録部に記録し、後の時点で撮像画像及び加速度情報をコンテンツサーバ90に送信してもよい。
There is no particular limitation on the timing at which the above-described imaging device 40 and information processing device 10 perform the process of acquiring an image (e.g., a panoramic image) of the surroundings of the imaging user Ua and the process of transmitting the image. As a specific example, the imaging device 40 and the information processing device 10 may capture an image of the surroundings of the imaging user Ua in real time, generate a panoramic image based on the captured image, and transmit the panoramic image to the content server 90.
As another example, the imaging device 40 and the information processing device 10 may record the captured image in a recording unit (not shown), and at a desired timing (for example, at the timing of receiving an operation from the imaging user Ua), read out the recorded captured image from the recording unit and transmit it from the communication unit 101. In this case, the information processing device 10 records the captured image in the recording unit in association with acceleration information indicating the recognition result of the position and posture of the head of the imaging user Ua, and transmits the acceleration information together with the captured image at the time of transmission.
In addition, the information processing device 10 may transmit the panoramic image to the content server 90 in synchronization with the generation of the panoramic image, or may record the panoramic image in a recording unit (not shown) in association with the acceleration information at the time of its generation, and transmit the captured image and the acceleration information to the content server 90 at a later point in time.

 コンテンツサーバ90は、通信部901と、コンテンツ制御部903とを含む。また、コンテンツサーバ90は、記憶部95に対してアクセス可能に構成されていてもよい。 The content server 90 includes a communication unit 901 and a content control unit 903. The content server 90 may also be configured to be able to access the storage unit 95.

 通信部901は、コンテンツサーバ90内の各構成が、ネットワークN11を介して外部装置(例えば、情報処理装置10及び20)と通信を行うための通信インタフェースである。なお、通信部901は、前述した通信部101と同様の構成をとり得る。また、以降の説明では、コンテンツサーバ90内の各構成が、ネットワークN11を介して外部装置と情報の送受信を行う場合には、特に説明が無い場合においても、通信部901を介して当該情報の送受信を行うものとする。 The communication unit 901 is a communication interface that enables each component in the content server 90 to communicate with an external device (e.g., information processing devices 10 and 20) via the network N11. The communication unit 901 may have a configuration similar to that of the communication unit 101 described above. In the following explanation, when each component in the content server 90 transmits or receives information to or from an external device via the network N11, it is assumed that the information is transmitted or received via the communication unit 901, even if no special explanation is given.

 コンテンツ制御部903は、ネットワークN11を介して接続された情報処理装置10から、当該情報処理装置10により生成され、撮像ユーザUaの頭部に対して加わる加速度の向きや大きさを示す加速度情報が関連付けられた全天周画像等の撮像画像、即ち、撮像ユーザUaの周囲の画像を取得する。 The content control unit 903 acquires captured images, such as panoramic images, generated by the information processing device 10 connected via the network N11 and associated with acceleration information indicating the direction and magnitude of acceleration applied to the head of the imaging user Ua, that is, images of the surroundings of the imaging user Ua.

 またコンテンツ制御部903は、情報処理装置10から取得した加速度情報が関連付けられた撮像画像を、他の情報処理装置20に配信する。
 またコンテンツ制御部903は、情報処理装置10から取得した撮像画像を、記憶部95に一時的または恒久的に記憶し、当該記憶部95に記憶された撮像画像を情報処理装置20に配信してもよい。
Furthermore, the content control unit 903 distributes the captured image associated with the acceleration information acquired from the information processing device 10 to the other information processing device 20 .
The content control unit 903 may also temporarily or permanently store the captured image acquired from the information processing device 10 in the storage unit 95 and distribute the captured image stored in the storage unit 95 to the information processing device 20.

 記憶部95はストレージ装置により構成され、上述した撮像画像や加速度情報のような各種データを記憶する。なお、記憶部95は、コンテンツサーバ90に含まれていてもよい。このような構成により、コンテンツ制御部903は、情報処理装置10から取得した撮像画像を、他の情報処理装置20に、同期的または非同期的に伝送することが可能となる。 The memory unit 95 is configured from a storage device and stores various data such as the captured images and acceleration information described above. The memory unit 95 may be included in the content server 90. With this configuration, the content control unit 903 can transmit captured images acquired from the information processing device 10 to another information processing device 20 synchronously or asynchronously.

 出力デバイス30は、表示部31と、検知部33と、体感導入部35とを含む。なお、表示部31は、図1及び図2を参照して説明した表示部31に相当する。また、体感導入部35は、出力デバイス30を装着している観察ユーザUbに対して力覚を提示するための構成である。
 検知部33は、例えば、加速度センサや角速度センサ(ジャイロセンサ)から成り、出力デバイス30の位置や姿勢の変化を検知し、検知結果を情報処理装置20に出力する。
The output device 30 includes a display unit 31, a detection unit 33, and a sensation induction unit 35. The display unit 31 corresponds to the display unit 31 described with reference to Fig. 1 and Fig. 2. The sensation induction unit 35 is configured to present a force sense to the observing user Ub wearing the output device 30.
The detection unit 33 is composed of, for example, an acceleration sensor or an angular velocity sensor (gyro sensor), detects changes in the position and attitude of the output device 30 , and outputs the detection result to the information processing device 20 .

 情報処理装置20は、通信部201と、解析部203と、表示制御部205と、入力部206と、力覚制御部207とを含む。 The information processing device 20 includes a communication unit 201, an analysis unit 203, a display control unit 205, an input unit 206, and a force feedback control unit 207.

 通信部201は、情報処理装置20内の各構成が、ネットワークN11を介して外部装置(例えば、コンテンツサーバ90や情報処理装置10)と通信を行うための通信インタフェースである。なお、通信部201は、前述した通信部101や通信部901と同様の構成をとり得る。また、以降の説明では、情報処理装置20内の各構成が、ネットワークN11を介して外部装置と情報の送受信を行う場合には、特に説明が無い場合においても、通信部201を介して当該情報の送受信を行うものとする。 The communication unit 201 is a communication interface that enables each component in the information processing device 20 to communicate with an external device (e.g., the content server 90 or the information processing device 10) via the network N11. The communication unit 201 may have a configuration similar to the communication unit 101 or the communication unit 901 described above. In the following explanation, when each component in the information processing device 20 transmits or receives information to or from an external device via the network N11, it is assumed that the information is transmitted or received via the communication unit 201, even if no special explanation is given.

 解析部203は、観察ユーザUbに対して提示する撮像画像をコンテンツサーバ90から取得する。なお、撮像画像には、撮像画像の生成元である情報処理装置10による、当該情報処理装置10を保持する撮像ユーザUaの頭部に対して加わった加速度の向きや大きさを示す加速度情報が関連付けられている。 The analysis unit 203 obtains the captured image to be presented to the observing user Ub from the content server 90. The captured image is associated with acceleration information indicating the direction and magnitude of acceleration applied to the head of the imaging user Ua holding the information processing device 10 by the information processing device 10 that generated the captured image.

 また解析部203は、検知部33から出力デバイス30の位置や姿勢の変化の検知結果を取得し、取得した検知結果に基づき当該出力デバイス30を装着する観察ユーザUbの頭部の位置や姿勢(換言すると、視点の位置や向き)を認識する。
 そして、解析部203は、取得した撮像画像と、観察ユーザUbの頭部の位置や姿勢の認識結果を示す情報とを、表示制御部205に出力する。
The analysis unit 203 also acquires detection results of changes in the position and posture of the output device 30 from the detection unit 33, and recognizes the position and posture of the head of the observing user Ub wearing the output device 30 (in other words, the position and direction of the viewpoint) based on the acquired detection results.
Then, the analysis unit 203 outputs the acquired captured image and information indicating the recognition results of the position and posture of the head of the observing user Ub to the display control unit 205.

 また、解析部203は、撮像画像に関連付けられた加速度情報に基づき、撮像ユーザUaの頭部に加わった加速度の向きや大きさを認識する。当該認識結果を利用することで、解析部203は、撮像ユーザUa(即ち、撮像デバイス40を装着したユーザ)の頭部の向き(視線)に対する、観察ユーザUb(即ち、出力デバイス30を装着したユーザ)の頭部の向き(視線)の相対的な変化を認識する。即ち、解析部203は、撮像ユーザUaの頭部に加わった加速度を、観察ユーザUbの頭部に対して模擬する場合における、観察ユーザUbの頭部に対して加える加速度の向きや大きさを示す加速度情報を算出することが可能となる。そして、解析部203は、観察ユーザUbの頭部に対して加える加速度の向きや大きさを示す加速度情報を、力覚制御部207に出力する。
 なお、撮像ユーザUa側では例えば上述のように頭部に撮像デバイス40を装着しており、撮像部41は撮像ユーザUaの頭部と一体に変位する。そのため表示制御部205は、撮像ユーザUaの頭部の向きが、視線の向きであるとみなすこととする。
 撮像ユーザUaが頭部ではなく、例えば肩などに撮像部41を装着する構成の場合には、撮像ユーザUaの身体の向きを視線の向きとみなすようにすればよい。
The analysis unit 203 also recognizes the direction and magnitude of the acceleration applied to the head of the imaging user Ua based on the acceleration information associated with the captured image. By using the recognition result, the analysis unit 203 recognizes a relative change in the head direction (line of sight) of the observing user Ub (i.e., the user wearing the output device 30) with respect to the head direction (line of sight) of the imaging user Ua (i.e., the user wearing the imaging device 40). That is, the analysis unit 203 is able to calculate acceleration information indicating the direction and magnitude of the acceleration applied to the head of the observing user Ub when simulating the acceleration applied to the head of the imaging user Ua to the head of the observing user Ub. Then, the analysis unit 203 outputs the acceleration information indicating the direction and magnitude of the acceleration applied to the head of the observing user Ub to the force sense control unit 207.
In addition, the imaging user Ua wears the imaging device 40 on his/her head as described above, and the imaging unit 41 moves together with the head of the imaging user Ua. Therefore, the display control unit 205 regards the direction of the head of the imaging user Ua as the direction of the line of sight.
In the case where the imaging user Ua wears the imaging unit 41 on, for example, the shoulder instead of on the head, the direction of the imaging user Ua's body may be regarded as the direction of the line of sight.

 表示制御部205は、解析部203から取得された撮像画像と、観察ユーザUbの頭部の位置や姿勢の認識結果を示す情報とを取得する。表示制御部205は、認識結果を示す情報に基づき、取得した撮像画像から、観察ユーザUbの頭部の向き(視線)に応じた画像を、あらかじめ設定された視野角に基づき切り出して抽出する。そして表示制御部205は、切り出した画像を表示画像として表示部31において表示させる。このような構成により、情報処理装置20は、撮像ユーザUaの周囲が撮像された撮像画像、例えば全天周画像のうちで、観察ユーザUbの頭部の位置や姿勢に応じた方向の画像を観察ユーザUbに提示することが可能となる。
 なお表示制御部205は、例えば観察ユーザUbについても頭部の向きを視線の向きとみなすことができるが、観察ユーザUbの視線自体を検出するようにしてもよい。
The display control unit 205 acquires the captured image acquired from the analysis unit 203 and information indicating the recognition result of the position and posture of the head of the observing user Ub. Based on the information indicating the recognition result, the display control unit 205 cuts out and extracts an image corresponding to the head direction (line of sight) of the observing user Ub from the acquired captured image based on a preset viewing angle. Then, the display control unit 205 displays the cut-out image as a display image on the display unit 31. With this configuration, the information processing device 20 can present to the observing user Ub an image in which the surroundings of the imaging user Ua are captured, for example, a panoramic image, in a direction corresponding to the position and posture of the head of the observing user Ub.
Note that the display control unit 205 can regard the direction of the head of the observing user Ub as the direction of the line of sight, for example, but may also detect the line of sight of the observing user Ub itself.
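The cutting-out described here can be sketched as selecting a wrap-around range of columns from the wide-angle image according to the viewing direction and the preset viewing angle. The function below is an illustrative assumption (one degree per column, horizontal direction only), not the embodiment's actual extraction.

```python
import numpy as np

def crop_viewport(pano, yaw_deg, fov_deg):
    """Cut a display image out of a 360-degree strip.

    `pano` is a (height, 360) array covering one degree per column,
    `yaw_deg` is the viewing direction derived from the head posture,
    and `fov_deg` is the preset viewing angle.  Columns wrap around
    at the 360-degree seam.
    """
    left = int(round(yaw_deg - fov_deg / 2.0))
    columns = [(left + i) % 360 for i in range(int(fov_deg))]
    return pano[:, columns]
```

As the head direction (yaw) changes, the same call yields a different cut-out range from the same wide-angle image, which is the behavior attributed to the display control unit 205 above.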

 入力部206は観察ユーザUbによる各種の入力が可能な入力デバイスである。キー、スイッチ等の操作子、タッチパネル、トラックボール、マウス、ジョイスティック、ポインタのような各種の形態が想定される。ここでは観察ユーザUbが、表示画像の変位を指示できる入力デバイスとする。例えば位置を指定せずに表示画像を左右に移動させる位置非指定操作や、全天周画像内の或る特定箇所を指定して、そこに向けて表示画像を変位させる位置指定操作等を可能とする。
 なお、上述の観察ユーザUbの頭部の挙動や何らかの挙動についても、位置指定操作や位置非指定操作の一態様となる。例えば観察ユーザが頭部を動かすことは、位置非指定操作とすることができ、また観察ユーザUbが、或る方向を指さすような挙動は位置指定操作とすることができる。
The input unit 206 is an input device that allows various inputs by the observing user Ub. Various forms are assumed, such as an operator such as a key or switch, a touch panel, a trackball, a mouse, a joystick, or a pointer. Here, the input unit 206 is an input device that allows the observing user Ub to instruct the displacement of the displayed image. For example, it allows a position non-designated operation that moves the displayed image left and right without designating a position, and a position designation operation that designates a certain point in the panoramic image and displaces the displayed image toward that point.
Note that the above-mentioned head behavior or some other behavior of the observing user Ub is also one aspect of the position designation operation or the position non-designation operation. For example, the observing user moving his/her head can be considered as the position non-designation operation, and the observing user Ub's behavior of pointing in a certain direction can be considered as the position designation operation.

 表示制御部205は、この入力部206からの操作情報によっても、撮像画像からの切り出し範囲を変更しながら、切り出した画像を表示部31に表示させることもできる。このような構成により、情報処理装置20は、観察ユーザUbの操作に応じた方向の画像を観察ユーザUbに提供することが可能になる。 The display control unit 205 can also change the cut-out range from the captured image based on the operation information from the input unit 206 and display the cut-out image on the display unit 31. With this configuration, the information processing device 20 can provide the observing user Ub with an image in a direction according to the operation of the observing user Ub.

 力覚制御部207は、解析部203から、観察ユーザUbの頭部に対して加える加速度の向きや大きさを示す加速度情報を取得する。そして、力覚制御部207は、当該加速度情報が示す加速度の向きや大きさに応じて、体感導入部35を駆動させることで、観察ユーザUbに対して力覚を提示する。このような構成により、情報処理装置20は、表示部31を介して観察ユーザUbに提示された画像に連動するように、観察ユーザUbに対して力覚を提示することが可能となる。 The force sense control unit 207 acquires acceleration information indicating the direction and magnitude of acceleration applied to the head of the observing user Ub from the analysis unit 203. The force sense control unit 207 then presents a force sense to the observing user Ub by driving the body sensation induction unit 35 according to the direction and magnitude of acceleration indicated by the acceleration information. With this configuration, the information processing device 20 can present a force sense to the observing user Ub in conjunction with the image presented to the observing user Ub via the display unit 31.
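As a hedged sketch of how the direction and magnitude of the acceleration might be turned into a drive command for the body sensation induction unit 35, the function below scales the magnitude by a per-user gain, clips it to an assumed actuator output range, and preserves the direction. The interface is hypothetical and is not the embodiment's actual drive control.

```python
import math

def haptic_command(accel, user_gain=1.0, max_output=1.0):
    """Map an acceleration vector to an actuator drive level and direction.

    `accel` is an (x, y) acceleration; the magnitude is scaled by a
    per-user gain and clipped to the actuator's output range, while
    the direction of the acceleration is preserved (in radians).
    """
    magnitude = math.hypot(accel[0], accel[1]) * user_gain
    direction = math.atan2(accel[1], accel[0])
    return min(magnitude, max_output), direction
```

The per-user gain reflects the point, noted later, that the way a presented force sensation is felt may differ from user to user.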

 なお、上述した出力デバイス30及び情報処理装置20による、コンテンツサーバ90からの撮像画像の取得に係る処理と、撮像画像の提示や力覚の提示に係る処理とのそれぞれが実行されるタイミングは特に限定されない。具体的な一例として、出力デバイス30及び情報処理装置20は、コンテンツサーバ90からの撮像画像の取得に係る処理と、当該撮像画像の提示や力覚の提示に係る処理とを同期的に実行してもよいし、非同期的に実行してもよい。 Note that there is no particular limitation on the timing at which the above-mentioned output device 30 and information processing device 20 execute the process of acquiring the captured image from the content server 90 and the process of presenting the captured image and presenting the force feedback. As a specific example, the output device 30 and information processing device 20 may execute the process of acquiring the captured image from the content server 90 and the process of presenting the captured image and presenting the force feedback synchronously or asynchronously.

 また情報処理装置20が、出力デバイス30を装着する観察ユーザUbに対して提示する力覚の方向や大きさ、換言すると観察ユーザUbの頭部に加える加速度の向きや大きさを認識できれば、情報処理装置10から送信される撮像画像に関連付けられる情報の内容は特に限定されない。 Furthermore, as long as the information processing device 20 can recognize the direction and magnitude of the force sensation presented to the observing user Ub wearing the output device 30, in other words, the direction and magnitude of the acceleration applied to the head of the observing user Ub, the content of the information associated with the captured image transmitted from the information processing device 10 is not particularly limited.

 具体的な一例として、情報処理装置10は、撮像ユーザUaの頭部の位置や姿勢の認識結果を示す情報を、送信対象となる撮像画像に関連付けてもよい。この場合には、例えば、情報処理装置20が、情報処理装置10から取得した当該情報に基づき、撮像ユーザUaの頭部に対して加わった加速度の向きや大きさを算出すればよい。
 また、他の一例として、情報処理装置20は、撮像画像に対して画像解析を施すことで、当該全天周画像の撮像元である撮像デバイス40の位置や姿勢の変化(ひいては、撮像ユーザUaの頭部の位置や姿勢の変化)を間接的に認識してもよい。この場合には、情報処理装置20には、撮像画像のみが送信されてもよい。
As a specific example, the information processing device 10 may associate information indicating the recognition result of the position and posture of the head of the imaging user Ua with the imaging image to be transmitted. In this case, for example, the information processing device 20 may calculate the direction and magnitude of the acceleration applied to the head of the imaging user Ua based on the information acquired from the information processing device 10.
As another example, the information processing device 20 may indirectly recognize a change in the position or posture of the imaging device 40 that is the image capture source of the omnidirectional image (and thus a change in the position or posture of the head of the imaging user Ua) by performing image analysis on the captured image. In this case, only the captured image may be transmitted to the information processing device 20.

 また提示された力覚の感じ方は、ユーザによって異なる場合が想定される。そのため、情報処理装置20は、出力デバイス30を装着する観察ユーザUbを認識し、認識した観察ユーザUbに応じて提示する力覚の大きさ、即ち、加速度の大きさを変更してもよい。 It is also assumed that the way the presented force sensation is felt may differ depending on the user. Therefore, the information processing device 20 may recognize the observing user Ub wearing the output device 30, and change the magnitude of the presented force sensation, i.e., the magnitude of acceleration, depending on the recognized observing user Ub.

 以上、図4を参照して、情報処理システム1の機能構成の一例について説明したが、
情報処理システム1の機能構成は、必ずしも図4に示した例のみは限定されない。例えば情報処理装置10の一部の構成が撮像デバイス40側に設けられていてもよいし、撮像デバイス40と情報処理装置10とが一体的に構成されていてもよい。同様に、情報処理装置20の一部の構成が出力デバイス30に設けられていてもよいし、出力デバイス30と情報処理装置20とが一体的に構成されていてもよい。また、情報処理装置10は、生成した全天周画像を、コンテンツサーバ90を介さずに、情報処理装置20に対して直接送信してもよい。
An example of the functional configuration of the information processing system 1 has been described above with reference to FIG. 4.
The functional configuration of the information processing system 1 is not necessarily limited to the example shown in Fig. 4. For example, a part of the configuration of the information processing device 10 may be provided on the imaging device 40 side, or the imaging device 40 and the information processing device 10 may be configured integrally. Similarly, a part of the configuration of the information processing device 20 may be provided on the output device 30, or the output device 30 and the information processing device 20 may be configured integrally. In addition, the information processing device 10 may transmit the generated panoramic image directly to the information processing device 20 without going through the content server 90.

 また表示画像の切り出しを行う表示制御部205は情報処理装置20側に備えるようにしたが、表示制御部205を、コンテンツサーバ90や情報処理装置10が備える構成例も可能である。
Furthermore, the display control unit 205 that cuts out the display image is provided on the information processing device 20 side, but a configuration example in which the display control unit 205 is provided on the content server 90 or the information processing device 10 is also possible.

<2.情報処理装置の構成>
 次に、情報処理装置20、情報処理装置10、コンテンツサーバ90などとして用いることができる情報処理装置70の構成例を図5で説明する。
 情報処理装置70は、コンピュータ機器など、情報処理、特に画像処理が可能な機器である。この情報処理装置70としては、具体的には、パーソナルコンピュータ、ワークステーション、スマートフォンやタブレット等の携帯端末装置、ビデオ編集装置等が想定される。また情報処理装置70は、クラウドコンピューティングにおけるサーバ装置や演算装置として構成されるコンピュータ装置であってもよい。
2. Configuration of information processing device
Next, a configuration example of an information processing device 70 that can be used as the information processing device 20, the information processing device 10, the content server 90, and the like will be described with reference to FIG. 5.
The information processing device 70 is a device capable of information processing, particularly image processing, such as a computer device. Specific examples of the information processing device 70 include personal computers, workstations, mobile terminal devices such as smartphones and tablets, video editing devices, etc. The information processing device 70 may also be a computer device configured as a server device or a computing device in cloud computing.

 図5に示す情報処理装置70のCPU(Central Processing Unit)71は、ROM(Read Only Memory)72や例えばEEP-ROM(Electrically Erasable Programmable Read-Only Memory)などの不揮発性メモリ部74に記憶されているプログラム、または記憶部79からRAM(Random Access Memory)73にロードされたプログラムに従って各種の処理を実行する。RAM73にはまた、CPU71が各種の処理を実行する上において必要なデータなども適宜記憶される。 The CPU (Central Processing Unit) 71 of the information processing device 70 shown in FIG. 5 executes various processes according to programs stored in a ROM (Read Only Memory) 72 or a non-volatile memory unit 74 such as an EEP-ROM (Electrically Erasable Programmable Read-Only Memory), or programs loaded from a storage unit 79 to a RAM (Random Access Memory) 73. The RAM 73 also stores data necessary for the CPU 71 to execute various processes as appropriate.

 画像処理部85は各種の画像処理を行うプロセッサとして構成される。例えば画像合成、全天球画像生成、画像の切り出し処理、画像解析処理、色・輝度調整処理や色変換処理を含む画像信号処理、画像編集処理などのいずれか、或いは複数の処理を行うことができるプロセッサとされる。
 例えばこの情報処理装置70が観察ユーザUb側の情報処理装置20として用いられる場合、画像処理部85は表示制御部205や解析部203の処理を実行するプロセッサとされる。
 また例えば情報処理装置70が撮像ユーザUa側の情報処理装置10として用いられる場合、画像処理部85は画像合成部105の処理を実行するプロセッサとされる。
The image processing unit 85 is configured as a processor that performs various types of image processing, such as image synthesis, spherical image generation, image clipping, image analysis, image signal processing including color/brightness adjustment and color conversion, and image editing, or a combination of these processes.
For example, when this information processing device 70 is used as the information processing device 20 on the observing user Ub side, the image processing unit 85 is a processor that executes the processing of the display control unit 205 and the analysis unit 203 .
Furthermore, for example, when the information processing device 70 is used as the information processing device 10 on the imaging user Ua side, the image processing unit 85 is a processor that executes the processing of the image synthesis unit 105 .

 この画像処理部85は例えば、CPU71とは別体のCPU、GPU(Graphics Processing Unit)、GPGPU(General-purpose computing on graphics processing units)、AI(artificial intelligence)プロセッサ等により実現できる。
 なお画像処理部85はCPU71内の機能として設けられてもよい。
This image processing unit 85 can be realized, for example, by a CPU separate from the CPU 71, a graphics processing unit (GPU), a general-purpose computing on graphics processing units (GPGPU), an artificial intelligence (AI) processor, or the like.
The image processing unit 85 may be provided as a function within the CPU 71 .

 CPU71、ROM72、RAM73、不揮発性メモリ部74、画像処理部85は、バス83を介して相互に接続されている。このバス83にはまた、入出力インタフェース75も接続されている。 The CPU 71, ROM 72, RAM 73, non-volatile memory unit 74, and image processing unit 85 are interconnected via a bus 83. The input/output interface 75 is also connected to this bus 83.

 入出力インタフェース75には、操作子や操作デバイスよりなる入力部76が接続される。例えば入力部76としては、キーボード、マウス、キー、トラックボール、ダイヤル、タッチパネル、タッチパッド、リモートコントローラ等の各種の操作子や操作デバイスが想定される。
 入力部76によりユーザの操作が検知され、入力された操作に応じた信号はCPU71によって解釈される。
 入力部76としてはマイクロフォンも想定される。ユーザの発する音声を操作情報として入力することもできる。
 例えばこの情報処理装置70が観察ユーザUb側の情報処理装置20として用いられる場合、入力部76は図4の入力部206に相当する。
An input unit 76 consisting of operators and operation devices is connected to the input/output interface 75. For example, the input unit 76 may be various operators and operation devices such as a keyboard, a mouse, a key, a trackball, a dial, a touch panel, a touch pad, a remote controller, or the like.
An operation by the user is detected by the input unit 76 , and a signal corresponding to the input operation is interpreted by the CPU 71 .
A microphone may also be used as the input unit 76. Voice uttered by the user may also be input as operation information.
For example, when this information processing device 70 is used as the information processing device 20 on the observing user Ub side, the input unit 76 corresponds to the input unit 206 in FIG. 4.

 また入出力インタフェース75には、LCD(Liquid Crystal Display)或いは有機EL(electro-luminescence)パネルなどよりなる表示部77や、スピーカなどよりなる音声出力部78が一体又は別体として接続される。
 表示部77は各種表示を行う表示部であり、例えば情報処理装置70の筐体に設けられるディスプレイデバイスや、情報処理装置70に接続される別体のディスプレイデバイス等により構成される。
 表示部77は、CPU71の指示に基づいて表示画面上に各種の画像、操作メニュー、アイコン、メッセージ等、即ちGUIとしての表示を行う。
 表示部77が図1,図4における出力デバイス30として用いられる場合もある。
Further, the input/output interface 75 is connected, either integrally or separately, to a display unit 77 formed of an LCD (Liquid Crystal Display) or an organic EL (electro-luminescence) panel, or the like, and an audio output unit 78 formed of a speaker, or the like.
The display unit 77 is a display unit that performs various displays, and is configured, for example, by a display device provided in the housing of the information processing device 70, or a separate display device connected to the information processing device 70, or the like.
The display unit 77 displays various images, operation menus, icons, messages, etc., that is, GUI, on the display screen based on instructions from the CPU 71 .
The display unit 77 may be used as the output device 30 in FIGS. 1 and 4.

 入出力インタフェース75には、HDD(Hard Disk Drive)やSSD(Solid State Drive)などより構成される記憶部79や通信部80が接続される場合もある。 The input/output interface 75 may also be connected to a memory unit 79 and a communication unit 80, which may be configured using a hard disk drive (HDD) or solid state drive (SSD).

 記憶部79は、各種のデータやプログラムを記憶することができる。記憶部79においてDBを構成することもできる。 The storage unit 79 can store various data and programs. A database can also be configured in the storage unit 79.

 通信部80は、インターネット等の伝送路を介しての通信処理や、外部のDB、編集装置、情報処理装置等の各種機器との有線/無線通信、バス通信などによる通信を行う。この通信部80は、図4で説明した通信部201,通信部101,或いは通信部901として機能することができる。 The communication unit 80 performs communication processing via a transmission path such as the Internet, and communication with various devices such as an external DB, an editing device, and an information processing device via wired/wireless communication, bus communication, etc. This communication unit 80 can function as the communication unit 201, communication unit 101, or communication unit 901 described in FIG. 4.

 入出力インタフェース75にはまた、必要に応じてドライブ81が接続され、磁気ディスク、光ディスク、光磁気ディスク、或いは半導体メモリなどのリムーバブル記録媒体82が適宜装着される。
 ドライブ81により、リムーバブル記録媒体82からは映像データや、各種のコンピュータプログラムなどを読み出すことができる。読み出されたデータは記憶部79に記憶されたり、データに含まれる映像や音声が表示部77や音声出力部78で出力されたりする。またリムーバブル記録媒体82から読み出されたコンピュータプログラム等は必要に応じて記憶部79にインストールされる。
A drive 81 is also connected to the input/output interface 75 as required, and a removable recording medium 82 such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory is appropriately mounted thereon.
The drive 81 allows video data, various computer programs, and the like to be read from the removable recording medium 82. The read data is stored in the storage unit 79, and the video and audio contained in the data are output on the display unit 77 and the audio output unit 78. In addition, the computer programs, etc. read from the removable recording medium 82 are installed in the storage unit 79 as necessary.

 この情報処理装置70では、例えば本実施の形態の処理のためのソフトウェアを、通信部80によるネットワーク通信やリムーバブル記録媒体82を介してインストールすることができる。或いは当該ソフトウェアは予めROM72や記憶部79等に記憶されていてもよい。
In this information processing device 70, for example, software for the processing of this embodiment can be installed via network communication by the communication unit 80 or via a removable recording medium 82. Alternatively, the software may be stored in advance in the ROM 72, the storage unit 79, etc.

<3.処理例>
 以下、実施の形態の情報処理装置20の処理例、特には表示制御部205の処理例を説明する。
 なお、表示制御部205がコンテンツサーバ90に設けられる場合、以下の処理例はコンテンツサーバ90で実行される。また表示制御部205が情報処理装置10に設けられる場合は、以下の処理例は情報処理装置10で実行されることになる。
<3. Processing example>
Hereinafter, a processing example of the information processing device 20 according to the embodiment, in particular, a processing example of the display control unit 205 will be described.
When the display control unit 205 is provided in the content server 90, the following processing example is executed in the content server 90. When the display control unit 205 is provided in the information processing device 10, the following processing example is executed in the information processing device 10.

 図6は観察ユーザUbが視認する表示画像、つまり全天周画像等の撮像画像から切り出された画像の変位を制御する表示制御部205の処理例である。特には、この処理例は、切り出し範囲の変位による画像酔いを観察ユーザUbに生じさせないように、表示画像の変位速度を遅くする処理例である。但し、単に表示画像の変位速度を遅くするのではなく、状況情報に応じて変位速度を可変することで、なるべくリアルタイム性も損なわないようにする。例えば画像酔いしやすい状況では表示画像の変位をゆっくり実行させるが、画像酔いを生じさせにくい状況では、表示画像の遅延を抑えるようにする。 FIG. 6 shows an example of processing by the display control unit 205 that controls the displacement of the display image viewed by the observing user Ub, that is, the image cut out from a captured image such as a panoramic image. In particular, this processing example is an example of processing that slows down the displacement speed of the displayed image so that the observing user Ub does not experience visual sickness due to displacement of the cut-out range. However, rather than simply slowing down the displacement speed of the displayed image, the displacement speed is varied according to situation information so as not to impair real-time performance as much as possible. For example, in a situation where visual sickness is likely to occur, the displacement of the displayed image is executed slowly, but in a situation where visual sickness is unlikely to occur, the delay in the displayed image is suppressed.
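The idea of varying the displacement speed according to the situation can be sketched as moving the display direction toward its target while clamping the per-frame step to a situation-dependent maximum speed. The function and its parameters below are illustrative assumptions, not the processing of FIG. 6 itself.

```python
def step_display_direction(current_deg, target_deg, max_speed_deg_s, dt):
    """Move the display direction toward the target at a limited speed.

    The displacement per frame is clamped to max_speed_deg_s * dt, so a
    small maximum speed gives the slow, sickness-avoiding displacement
    and a large one approaches real-time tracking of the target.
    """
    # Shortest signed angular difference, in [-180, 180).
    diff = (target_deg - current_deg + 180.0) % 360.0 - 180.0
    limit = max_speed_deg_s * dt
    step = max(-limit, min(limit, diff))
    return (current_deg + step) % 360.0
```

Calling this once per frame with a low maximum speed in sickness-prone situations, and a high one otherwise, reproduces the trade-off between comfort and real-time performance described above.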

 In step S101, the display control unit 205 sets the speed of displacement in the received captured image as a speed v. For example, when the front direction of the captured image is displaced by a movement of the head of the imaging user Ua wearing the imaging unit 41, the speed of that displacement is taken as the speed v. When there is no head movement of the imaging user Ua, v = 0.
 Note that the speed v also includes the direction of displacement as a component. When only horizontal (left-right) displacement is considered, the value of v is set by treating one of the left/right directions as positive and the other as negative. When both the left-right direction (x-axis) and the up-down direction (y-axis) are considered, a speed v may be set per axis, with one of left/right positive and the other negative on the x-axis, and one of up/down positive and the other negative on the y-axis.
 The display control unit 205 may obtain the speed v, for example, from acceleration information associated with the captured image, or by calculating the displacement amount of the same subject through matching between frames of the captured image.
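As one way to picture the frame-matching option, the per-axis signed speed can be estimated from the displacement of a matched subject between two consecutive frames. The following is a minimal sketch under assumed names and a hypothetical coordinate system, not the implementation of the embodiment:

```python
def displacement_velocity(prev_pos, curr_pos, frame_period):
    """Estimate the signed displacement velocity (vx, vy) of the captured
    image from the position (x, y) of the same subject in two consecutive
    frames. Sign convention per the text: one direction on each axis is
    treated as +, the opposite as -."""
    vx = (curr_pos[0] - prev_pos[0]) / frame_period
    vy = (curr_pos[1] - prev_pos[1]) / frame_period
    return (vx, vy)
```

For example, a subject that moved from (0, 0) to (2, -1) over half a frame period would yield the per-axis speeds (4, -2) in these assumed units.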

 In step S102, the display control unit 205 branches the process depending on whether a displacement operation has been input by the observing user Ub. The input here refers to an input instructing displacement of the cut-out range of the displayed image, for example detection of a movement of the head of the observing user Ub. Alternatively, the input may be an operation through the input unit 206 instructing displacement of the displayed image.

 If the observing user Ub is not moving his or her head to change the line-of-sight direction, and is not performing a position non-designation operation or a position designation operation such as a drag operation with an operating device such as a mouse, the display control unit 205 proceeds to step S103 and sets the upper speed limit vmax to a speed vmax1. Note that while the speed v described above is a vector, the upper speed limit vmax here is a scalar that takes a positive value.
 The speed vmax1 assigned to the upper speed limit vmax is a value set as the upper limit of the speed at which motion sickness is unlikely to occur even when the displayed image is displaced independently of the will of the observing user Ub, and is lower than the speed vmax2 described later.

 In step S104, the display control unit 205 calculates the cut-out range of each subsequent frame according to the speed v, with the upper speed limit vmax set to vmax1. That is, it calculates the cut-out range of each subsequent frame taking into account the displacement amount of the image per frame period at speed v and the direction of the displacement.
 When the displacement amount per frame period is considered, if the speed v is equal to or less than vmax1, the cut-out range of each frame is set so that a displacement corresponding to the speed v occurs every frame. Therefore, if the imaging user Ua slowly turns his or her head to the left at a speed below vmax1, the front of the display image seen by the observing user Ub shifts to the scene on the left, just as the imaging user Ua's line of sight changed.
 Note that when the captured image directly reflects the displacement of the imaging direction caused by the behavior of the imaging user Ua, fixing the cut-out range in each frame causes the displayed image viewed by the observing user Ub to also be displaced at a speed corresponding to v.
 On the other hand, when the information processing device 10 generates the captured image so that rotation of the image accompanying a change in the line of sight, such as rotation of the imaging user Ua's head, is suppressed, a displacement corresponding to the speed v is produced in the displayed image viewed by the observing user Ub by setting the cut-out range to shift frame by frame according to information on the speed and direction of the movement of the imaging user Ua.

 When it is determined in step S104 that the speed v exceeds vmax1, that is, when the imaging user Ua turns his or her head to the left at a speed exceeding vmax1, the front of the display image seen by the observing user Ub is displaced toward the scene on the left at the speed vmax1, slower than the line-of-sight change speed (speed v) of the imaging user Ua, that is, in a delayed state. In other words, in step S104 the display control unit 205 causes the frames to progress at a speed of vmax1 or less during the frame period until the display image catches up with the displacement (change of scene) produced at the speed v by the behavior of the imaging user Ua.
 Accordingly, the cut-out range set according to the speed v is set with its speed component limited at vmax1.
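The limiting and gradual catch-up described for step S104 can be sketched as follows. This is a minimal illustration under an assumed 1-D angular coordinate and assumed function names, not the implementation of the embodiment:

```python
import math

def clamp_speed(v, vmax):
    """Limit the magnitude of a signed 1-D displacement speed v to the
    scalar upper limit vmax, preserving its sign (the limiter of step S104)."""
    if abs(v) <= vmax:
        return v
    return math.copysign(vmax, v)

def step_cutout_center(current, target, vmax):
    """Advance the cut-out centre one frame toward the target position,
    moving at most vmax per frame, so that the display image lags behind
    a fast head movement and catches up gradually."""
    delta = target - current
    if abs(delta) <= vmax:
        return target
    return current + math.copysign(vmax, delta)
```

With these sketches, a head turn of 10 units at a per-frame limit of 3 would take four frames of displayed motion before the view catches up, which is the delayed behavior illustrated in FIGS. 7 to 10.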

 This state will be explained schematically with reference to FIGS. 7, 8, and 9. In each figure, an arrow 51 indicates the front direction of the imaging user Ua, that is, the front direction of a captured image such as a panoramic image.
 It is assumed that the imaging user Ua is capturing images at a live music venue, and that in front of the imaging user Ua a piano player, a guitar player, a drummer, and a saxophone player are lined up from the left as subjects 50.

 At time t=t0 in FIG. 7, the guitar player is in the front direction of the line of sight of the imaging user Ua. In this case, on the projection surface 52 of the panoramic image that is the captured image, the position of the guitar player on the front side becomes the cut-out center for the displayed image. In the figure, the normal cut-out image 54 shows a state in which the delay processing for preventing motion sickness is not performed. In the normal cut-out image 54, the guitar player appears at the center of the displayed image.

 On the other hand, the projection surface 56 and the speed-variable cut-out image 58 show the case where the delay processing is performed. At time t=t0 there is no displacement of the captured image, so the position of the guitar player on the projection surface 56 becomes the cut-out center for the displayed image, and the speed-variable cut-out image 58 is the same as the normal cut-out image 54.

 Suppose that at time t=t1 in FIG. 8 the head of the imaging user Ua moves quickly and the line of sight turns toward the saxophone player.
 If the cut-out center 53 were obtained directly at the speed v of that movement, the position of the saxophone player, now in front on the projection surface 52, would become the cut-out center 53, and the saxophone player would be at the center of the normal cut-out image 54. In other words, the scene of the displayed image seen by the observing user Ub would flow quickly sideways, which easily causes motion sickness.

 Therefore, in such a case, the displacement speed of the displayed image is limited to vmax1. For example, even though the saxophone player is now in front, as on the projection surface 56, the cut-out center 57 is set to a position between the guitar player and the drummer. As a result, when the speed-variable cut-out image 58 is used as the displayed image, the area between the guitar player and the drummer is at the center. That is, the movement of the displayed image from time t0 to time t1 is slower than the movement of the line of sight of the imaging user Ua.

 At time t=t2 in FIG. 9, the imaging user Ua is assumed to be still looking at the saxophone player. Because the displacement speed of the displayed image is limited to vmax1, even though the saxophone player is in front, as on the projection surface 56, the cut-out center 57 is set around the drummer. As a result, when the speed-variable cut-out image 58 is used as the displayed image, the drummer is positioned at the center.

 At subsequent times t3 and t4, the displacement speed of the displayed image is likewise limited to vmax1, so the displacement of the displayed image remains slow; for example, as shown in FIG. 10, at time t4 the saxophone player comes to the front in the speed-variable cut-out image 58 (that is, the displayed image).

 In this way, while the front direction of the captured image shifts from the guitar player to the saxophone player between times t0 (FIG. 7) and t1 (FIG. 8), the front direction of the displayed image takes until time t4 (FIG. 10) to shift from the guitar player to the saxophone player.
 That is, even if the imaging user Ua moves his or her head quickly, the entire scene of the display image viewed by the observing user Ub is displaced slowly. In particular, when the process proceeds to steps S103 and S104, the observing user Ub has not performed an input instructing the displacement (a position designation or position non-designation operation), so the displacement of the displayed image is unexpected. Fast movement of the displayed image is therefore likely to induce motion sickness, and for this reason the displacement speed of the displayed image is limited to a low value.

 If there is an input by the observing user Ub in step S102 of FIG. 6, the display control unit 205 proceeds to step S110 and first sets the upper speed limit vmax = 0. This is processing that treats the displacement speed of the displayed image due to the displacement of the captured image as 0, even if there is movement in the captured image and the speed v has been set accordingly in step S101. In other words, it is processing for treating the captured image as having no movement.

 Then, in step S111, the display control unit 205 calculates a cut-out reference position using the speed v set in step S101. That is, a cut-out range is set such that the display image seen by the observing user Ub is not displaced even when the front direction is displaced by the imaging user Ua, and this is used as the reference position for displacement by the input of the observing user Ub. The position displaced by a speed corresponding to (-v) against the displacement at speed v (that is, the position before the displacement caused by the movement of the imaging user Ua) becomes the cut-out reference position.
 For example, even if the head (line-of-sight direction) of the imaging user Ua turns to the left, moving the cut-out range to the right by the same displacement amount keeps the display image seen by the observing user Ub unchanged. The center of the cut-out range that prevents the display image from being displaced by the influence of the imaging user Ua in this way is the cut-out reference position.
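The compensation of step S111 amounts to shifting the cut-out centre opposite to the captured image's own displacement. A minimal sketch, assuming a hypothetical 1-D angular coordinate and per-frame displacement values (the function name is an assumption):

```python
def cutout_reference(prev_center, camera_displacement):
    """Shift the cut-out centre by the negative of the captured image's
    per-frame displacement (speed -v), so that the observer's view stays
    fixed despite the imaging user's head movement."""
    return prev_center - camera_displacement
```

For instance, if the captured image's front direction turns by +10 units in a frame, the cut-out centre moves by -10 units and the scene shown to the observing user Ub does not change.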

 In step S112, the display control unit 205 branches the process depending on whether the user input was a position designation operation. A position designation operation here refers to an operation in which the observing user Ub designates a certain cut-out range (the center position of a cut-out range): for example, designating a specific position or subject on the projection surface, or designating a displacement amount such as "rotate 90 degrees to the left". That is, it is an operation in which the displacement destination is designated. In contrast, an operation that instructs displacement continuously, such as a drag operation or a scroll operation, is not a position designation operation but a position non-designation operation.

 In the case of a position designation operation, the display control unit 205 sets, in step S113, the displacement speed v of the displayed image from the difference between the designated position and the current position. That is, the display control unit 205 changes the speed v set in step S101 to a speed based on the difference between the designated position and the current position. The speed v is set as a vector directed from the current position toward the designated position.
 In step S114, the display control unit 205 sets the upper speed limit vmax to the speed vmax2. As described above, vmax2 > vmax1.
 Then, in step S117, the display control unit 205 calculates the cut-out range of each subsequent frame such that the display image is displaced toward the designated position according to the speed v, with the upper speed limit vmax set to vmax2.

 If the speed v is equal to or less than vmax2, the display control unit 205 sets the cut-out range of each frame so that a displacement corresponding to the speed v occurs every frame. For example, when the position designated by the position designation operation is close to the current position and the transition can be made at a speed of vmax2 or less, the cut-out range of each frame is set so that the display image transitions at that speed v.
 On the other hand, when the speed v exceeds vmax2, the display control unit 205 sets the cut-out range of each frame so that the display image transitions at the speed vmax2.
 Accordingly, the cut-out range set according to the speed v is set with its speed component limited at vmax2.

 Here, vmax2 > vmax1 means that the displayed image is displaced at a higher speed than in the case without user input. This is because the displacement of the displayed image is caused by the observing user Ub's own operation, so there is little risk of motion sickness.

 This will be explained with reference to FIGS. 7, 11, and 12.
 Suppose that, from the state at time t=t0 in FIG. 7 described above, the front direction of the line of sight of the imaging user Ua changes from the direction of the guitar player to the direction of the saxophone player at times t1 and t2 in FIGS. 11 and 12. If the observing user Ub has performed a position designation operation, however, this change of line of sight is not reflected in the displayed image.
 Suppose instead that the observing user Ub designates the position of the saxophone player. Then, based on that operation, the speed-variable cut-out image 58 is displaced at a speed v that does not exceed vmax2. For example, at time t1 the image is displaced until the drummer is at the center of the displayed image as in FIG. 11, and at time t2 until the saxophone player is at the center as in FIG. 12.

 In the examples of FIGS. 7, 8, 9, and 10, the displayed image caught up with the state in which the saxophone player is at the front at time t4, whereas when a position designation operation is performed, the saxophone player comes to the front at the earlier time t2.

 The above shows the case where the displacement of the line-of-sight direction of the imaging user Ua and the displacement toward the position designated by the observing user Ub are in the same direction, but the designated position may also be one reached by a displacement in the direction opposite to that of the imaging user Ua's line of sight. Even in such a case, the display is displaced strictly toward the designated position.

 This will be explained with reference to FIGS. 7, 13, and 14.
 Suppose that, from the state at time t=t0 in FIG. 7 described above, the front direction of the line of sight of the imaging user Ua changes from the direction of the guitar player to the direction of the saxophone player at times t1 and t2 in FIGS. 13 and 14, but the observing user Ub has performed a position designation operation designating the position of the piano player.
 Then, based on that operation, the speed-variable cut-out image 58 is displaced at a speed v that does not exceed vmax2. For example, at time t1 the image is displaced until the midpoint between the guitar player and the piano player is at the center of the displayed image as in FIG. 13, and at time t2 until the piano player is at the center as in FIG. 14.

 That is, through steps S114 and S117 of FIG. 6, the upper speed limit vmax is set to the relatively high speed vmax2, while the displayed image is displaced strictly in the direction and by the amount that follow the operation of the observing user Ub.
 This is because the observing user Ub is aware of the displacement, so motion sickness is unlikely to occur even if the delay in the displacement speed is reduced, and also because the display can respond more quickly to the operation of the observing user Ub.

 If the input by the observing user Ub is not a position designation operation, that is, if it is a position non-designation operation whose target position is indefinite, such as a drag operation or a scroll operation, the display control unit 205 proceeds from step S112 to step S115 and replaces the speed v set in step S101 with a speed v corresponding to the operation amount of the drag operation or the like. The replaced speed v is set as a vector including components of the operation direction and the operation speed of the drag operation.
 In step S116, the display control unit 205 sets the upper speed limit vmax = |v|. That is, the speed component, the absolute value of the speed v, has its upper limit set to the displacement speed of the position non-designation operation itself; this means there is effectively no upper limit, and the displacement speed of the drag or similar operation is used as-is for setting the cut-out range.

 Then, in step S117, the display control unit 205 calculates the cut-out range of each subsequent frame so that the display image is displaced in the direction designated by the operation with the upper speed limit vmax = |v|, that is, at the displacement speed corresponding to the operation.

 In this case as well, the displacement of the front direction of the captured image caused by the behavior of the imaging user Ua is not reflected in the displayed image; the displayed image is displaced only by the operation amount, such as that of a drag operation, in the operation direction. Since the observing user Ub specifies the speed and direction himself or herself, the possibility of motion sickness is extremely small, so the image is displaced according to the operation amount. This realizes a display that responds well to operations.
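The branching of FIG. 6 over the three input situations can be summarized in a small sketch. This is an illustrative reconstruction; the encoding of the input state and the function name are assumptions, not the implementation of the embodiment:

```python
def upper_speed_limit(user_input, vmax1, vmax2):
    """Return the upper speed limit vmax chosen in Fig. 6.
    user_input: None for no input (step S103), the string 'designate'
    for a position designation operation (step S114), or a signed drag
    speed (float) for a position non-designation operation (step S116)."""
    if user_input is None:
        return vmax1                 # unexpected displacement: keep slow
    if user_input == 'designate':
        return vmax2                 # user-initiated: allow faster motion
    return abs(user_input)           # drag/scroll: follow the operation itself
```

With vmax2 > vmax1, the sketch reproduces the ordering of the three cases: slowest when the observer gives no input, faster when a destination is designated, and effectively unlimited when the observer drags the view directly.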

 The processing of FIG. 6 above has described an example in which the behavior of the imaging user Ua causes a displacement of the front direction of the captured image. In that case, the displacement speed of the displayed image differs according to situation information such as whether there is an input from the observing user Ub and whether the input is a position designation operation.
 In this way, the displacement speed is lowered in situations where motion sickness is likely to occur, while the reduction of the displacement speed is suppressed in situations where motion sickness is unlikely, realizing a display that is as real-time and as responsive to operations as possible.

 In FIG. 6, the displacement speed of the displayed image is varied according to whether the displacement originates from the behavior of the imaging user Ua or from an input by the observing user Ub, but it may also be varied according to other situation information.

 For example, on the premise that the processing of FIG. 6 is performed, the settings of the speeds vmax2 and vmax1 may be changed by the processing of FIGS. 15 to 19 below.

 FIG. 15 shows a setting change according to the output device 30.
 In step S201, the display control unit 205 checks whether information about the output device 30 has been acquired.
 If it has been acquired, the process proceeds to step S202, where the setting of one or both of the speeds vmax2 and vmax1 is changed.

 As shown in FIG. 2, the output device 30 may be an HMD type, a large monitor display, a relatively small display provided on a smartphone, or the like, and information on the device type may be obtainable.
 Information on the current display mode may also be obtainable, such as whether the displayed image is shown in a window occupying part of the screen or in full-screen display.

 The speed settings are changed based on this information about the output device.
 For example, when a small monitor device or a smartphone display is used as the output device 30, or when the image is displayed in an on-screen window smaller than a predetermined size, motion sickness is unlikely to occur, so the values of vmax2 and vmax1 are raised, that is, updated in the direction of suppressing delay.

 On the other hand, when a large monitor device or display is used as the output device 30, or when a window larger than a predetermined size is displayed, motion sickness is likely to occur, so the values of vmax2 and vmax1 are lowered. That is, the tendency to delay displacement for the purpose of preventing motion sickness is increased.
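As an illustration of the direction of these adjustments (the scaling factors and the function name are assumptions for illustration, not values from the embodiment):

```python
def adjust_limits_for_display(vmax1, vmax2, small_display):
    """Raise both speed limits when the display is small (motion sickness
    is unlikely) and lower them when it is large, in the spirit of the
    Fig. 15 example. The factors are illustrative assumptions."""
    factor = 1.25 if small_display else 0.8
    return vmax1 * factor, vmax2 * factor
```

The same scaling pattern applies to the user-information, image-content, viewing-environment, and non-visual-output adjustments of FIGS. 16 to 19; only the condition that selects the direction of the change differs.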

 FIG. 16 shows a setting change according to user information.
 In step S210, the display control unit 205 checks whether user information of the observing user Ub has been input.
 If predetermined user information has been input, the process proceeds to step S211, where the setting of one or both of the speeds vmax2 and vmax1 is changed.

 For example, the observing user Ub inputs, by operating the input unit 206 or the like, whether he or she is prone to motion sickness. The display control unit 205 changes the speed settings based on this input.
 For example, if the observing user Ub is not prone to motion sickness, the values of vmax2 and vmax1 are raised, that is, updated in the direction of suppressing delay.
 On the other hand, if the observing user Ub is prone to motion sickness, the values of vmax2 and vmax1 are lowered; that is, preventing motion sickness is prioritized even at the expense of real-time performance.

 FIG. 17 shows a setting change according to image content.
 In step S221, the display control unit 205 analyzes the captured image, and in step S222 determines, based on the analysis result, whether to change the settings. If it determines that the settings are to be changed, the display control unit 205 proceeds to step S223 and changes the setting of one or both of the speeds vmax2 and vmax1.

 For example, by analyzing the captured image it can be determined that the image is of a dark scene, is a distant view, has a small dynamic range, contains few high-frequency components, and so on. These are image contents that are unlikely to cause motion sickness, so when the captured image falls under any of them, the values of vmax2 and vmax1 are raised.

 On the other hand, images of bright scenes, images of close subjects, images containing meaningful character information (characters the observing user Ub would want to read), images with a large dynamic range, images with many high-frequency components, and the like are likely to cause motion sickness, so when the captured image falls under any of them, the values of vmax2 and vmax1 are lowered.
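A minimal heuristic in the spirit of FIG. 17 could flag such content from a few image statistics; the feature names and all thresholds below are illustrative assumptions, not values from the embodiment:

```python
def content_raises_sickness_risk(mean_brightness, dynamic_range,
                                 high_freq_ratio,
                                 bright_th=0.5, dr_th=0.6, hf_th=0.4):
    """Treat bright, wide-dynamic-range, or high-frequency-rich content
    as more likely to cause motion sickness (lower vmax1/vmax2); dark,
    flat, low-detail content passes as low risk (raise vmax1/vmax2)."""
    return (mean_brightness > bright_th
            or dynamic_range > dr_th
            or high_freq_ratio > hf_th)
```

A dark, low-contrast, low-detail frame would be classified as low risk, while a bright frame would be classified as high risk even if its other statistics are low.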

 FIG. 18 shows a setting change according to the viewing environment.
 In step S231, the display control unit 205 determines whether information on the viewing environment has been acquired. If it has, in step S232 the display control unit 205 compares it with the previously acquired viewing environment information to check whether the environment has changed.
 If it determines that the environment has changed, the display control unit 205 proceeds to step S233 and changes the setting of one or both of the speeds vmax2 and vmax1.

 例えば明るい部屋でモニタディスプレイを見ている場合は、画像酔いが生じにくいが、暗い部屋でもモニタディスプレイで明るい画像をみていると、画像酔いが生じやすい。そこで明るい視聴環境である場合は、速度vmax2、速度vmax1の値を上げる。また暗い視聴環境である場合、速度vmax2、速度vmax1の値を下げるようにする。 For example, when a monitor display is viewed in a bright room, motion sickness is unlikely to occur, but viewing a bright image on a monitor display in a dark room readily causes motion sickness. Accordingly, when the viewing environment is bright, the values of the speeds vmax2 and vmax1 are increased, and when the viewing environment is dark, the values of the speeds vmax2 and vmax1 are decreased.

 なお単に暗い視聴環境というだけなく、暗い視聴環境で、かつ表示画像の画像内容が比較的明るいものである場合に、速度vmax2、速度vmax1の値を下げるようにしてもよい。 Note that the values of the speeds vmax2 and vmax1 may be reduced not merely when the viewing environment is dark, but when the viewing environment is dark and the content of the displayed image is relatively bright.
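The environment-dependent rule described in the preceding paragraphs might be sketched as follows. This is an illustrative example only; the normalized brightness values, the 0.5 thresholds, and the doubled reduction for the dark-room, bright-image case are hypothetical assumptions.

```python
def adjust_limits_for_environment(room_brightness, image_brightness,
                                  vmax1, vmax2, step=0.2):
    """room_brightness / image_brightness assumed normalized to [0, 1].
    Bright room -> raise the limits; dark room -> lower them, with an
    extra reduction when the displayed image is relatively bright."""
    if room_brightness >= 0.5:
        return vmax1 * (1 + step), vmax2 * (1 + step)
    if image_brightness >= 0.5:
        # dark room and relatively bright image: most sickness-prone case
        return vmax1 * (1 - 2 * step), vmax2 * (1 - 2 * step)
    return vmax1 * (1 - step), vmax2 * (1 - step)
```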

 図19は視覚以外の出力に応じた設定変更を示している。
 表示制御部205はステップS241で視覚以外の出力があるか否かを判定する。例えば観察ユーザUbに対して出力される音声、振動、力覚等である。
FIG. 19 shows how the settings are changed according to output other than visual output.
In step S241, the display control unit 205 determines whether there is any output other than visual output, such as audio, vibration, or force output to the observing user Ub.

 これらの視覚以外の出力がある場合は、表示制御部205はステップS242で、それら音声、振動、力覚等に関する情報の判定を行う。例えば音量、振動のレベル、力覚のレベルなどを判定する。そして判定結果に応じて表示制御部205は、ステップS243で速度vmax2、速度vmax1の一方又は両方の設定を変更する。 If there is any output other than the visual sense, the display control unit 205 judges the information related to the sound, vibration, force sense, etc. in step S242. For example, it judges the volume, vibration level, force sense level, etc. Then, depending on the judgment result, the display control unit 205 changes the setting of one or both of the speed vmax2 and the speed vmax1 in step S243.

 例えば音量が大きい場合や、力覚や振動が変位と同時に観察ユーザUbに与えられるような場合は、画像酔いしにくい場合がある。そこで、視覚以外の出力が、画像酔いを抑えるように影響するものかどうかを判定し、判定結果により速度vmax2、速度vmax1の値を上げるようにする。 For example, if the volume is high, or if the observing user Ub is given a sense of force or vibration at the same time as the displacement, then motion sickness may be less likely to occur. Therefore, it is determined whether output other than vision has an effect on suppressing motion sickness, and the values of speed vmax2 and speed vmax1 are increased based on the result of the determination.
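The adjustment based on non-visual output might be sketched as follows. Again this is illustrative only; the normalized output levels and the 0.5 threshold are hypothetical, standing in for the judgment of volume, vibration level, and force-sense level in step S242.

```python
def adjust_limits_for_other_outputs(volume, vibration, force_sense,
                                    vmax1, vmax2, step=0.2):
    """Hypothetical rule: strong non-visual cues delivered together with
    the displacement (loud audio, vibration, force sense) make motion
    sickness less likely, so the limits may be raised. Levels are
    assumed normalized to [0, 1]."""
    if max(volume, vibration, force_sense) >= 0.5:
        return vmax1 * (1 + step), vmax2 * (1 + step)
    return vmax1, vmax2  # weak or absent cues: leave the limits unchanged
```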

 以上の例のように各種の状況情報を鑑みて、表示画像の変位速度を可変制御することで、必要に応じて遅延を大きくし、一方で画像酔いが生じ難い状況であればリアルタイム性を優先すると言うことが可能になる。 As in the above examples, by variably controlling the displacement speed of the displayed image in view of various kinds of situation information, it becomes possible to increase the delay where necessary and, conversely, to prioritize real-time performance in situations where motion sickness is unlikely to occur.

 上述の図6の変形例を図20で説明する。図6と同一の処理は同一のステップ番号を付して重複説明を避ける。図20は、切り出し範囲の算出をより具体化した例である。 A modified example of the above-mentioned FIG. 6 is explained in FIG. 20. The same processes as those in FIG. 6 are given the same step numbers to avoid duplicate explanations. FIG. 20 shows a more specific example of the calculation of the cutout range.

 表示制御部205はステップS101で撮像画像における変位の速度を速度vとしたら、ステップS151で切り出し基準位置P0を算出する。これは図6のステップS111と同様に、撮像ユーザUaによる正面方向が変位しても、観察ユーザUbが見る表示画像の変位が生じないようにする切り出し範囲を設定し、変位の基準の位置とするものである。具体的には切り出し基準位置P0=-v相当の位置とする。つまり撮像ユーザUaによる正面方向の変位がキャンセルされた位置とする。 In step S101, the display control unit 205 sets the speed of displacement in the captured image to speed v, and in step S151 calculates the cut-out reference position P0. As with step S111 in FIG. 6, this sets a cut-out range that prevents displacement of the displayed image seen by the observing user Ub even if the forward displacement caused by the capturing user Ua occurs, and sets this as the reference position for displacement. Specifically, the cut-out reference position P0 is set to a position equivalent to -v. In other words, this is set to the position where the forward displacement caused by the capturing user Ua is cancelled.

 ステップS102で表示制御部205は、観察ユーザUbによる変位操作の入力があったか否かにより処理を分岐する。変位操作入力がない場合は、ステップS152で速度v’を設定する。この場合、撮像ユーザUaの視線位置と、現在位置の差分に相当する速度を速度v’とする。撮像ユーザUaの視線位置は撮像画像の正面方向の中心点の位置であり、現在位置とは、撮像ユーザUaが動く前の視線位置である。
 そして表示制御部205はステップS103に進み、速度上限値vmaxを速度vmax1に設定して、ステップS155に進む。
In step S102, the display control unit 205 branches the process depending on whether or not a displacement operation has been input by the observing user Ub. If no displacement operation has been input, a speed v' is set in step S152. In this case, the speed v' is set to a speed equivalent to the difference between the gaze position of the imaging user Ua and the current position. The gaze position of the imaging user Ua is the position of the center point in the front direction of the captured image, and the current position is the gaze position before the imaging user Ua moves.
The display control unit 205 then proceeds to step S103, sets the upper speed limit vmax to the speed vmax1, and proceeds to step S155.

 ステップS102で観察ユーザUbによる入力があると判定した場合、表示制御部205はステップS112で入力が、位置指定操作か位置非指定操作かにより処理を分岐する。 If it is determined in step S102 that there is an input from the observing user Ub, the display control unit 205 branches the process in step S112 depending on whether the input is a position designation operation or a position non-designation operation.

 位置指定操作の場合は、表示制御部205はステップS113Aで、指定位置と現在位置の差分により表示画像の変位の速度v’を設定する。そして表示制御部205はステップS114で速度上限値vmaxを速度vmax2に設定し、ステップS155に進む。 In the case of a position specification operation, in step S113A, the display control unit 205 sets the speed v' of the displacement of the displayed image based on the difference between the specified position and the current position. Then, in step S114, the display control unit 205 sets the upper speed limit vmax to speed vmax2, and proceeds to step S155.

 位置非指定操作の場合は、表示制御部205はステップS115Aで、ドラッグ操作等の操作量に応じた速度v’を設定する。そしてステップS117で表示制御部205は、速度上限値vmax=|v|とし、ステップS155に進む。 In the case of a non-position-specified operation, in step S115A, the display control unit 205 sets a speed v' according to the amount of operation such as a drag operation. Then, in step S117, the display control unit 205 sets the upper speed limit vmax = |v| and proceeds to step S155.

 ステップS155で表示制御部205は、切り出し基準位置P0と、ステップS152、ステップS113A、又はステップS115Aのいずれかで設定した速度v’と、ステップS103、ステップS114、又はステップS116のいずれかで設定した速度上限値vmaxを用いて切り出し範囲を算出する。 In step S155, the display control unit 205 calculates the cut-out range using the cut-out reference position P0, the speed v' set in either step S152, step S113A, or step S115A, and the upper speed limit vmax set in either step S103, step S114, or step S116.

 具体的には、P0+min(max(v’,-vmax),vmax)とする。vmaxは速度上限値としての正の値であり、-vmaxは負側の速度上限値である。
 つまり速度v’と-vmaxの大きい方を選択し、選択した方と速度上限値vmaxのうちの小さい方の速度に相当する変位量を、切り出し基準位置P0に加える。そして、その求められた速度の分だけ変位した位置を中心として切り出し範囲を求める。
Specifically, it is set as P0 + min(max(v', -vmax), vmax), where vmax is the (positive) upper speed limit and -vmax is the corresponding limit on the negative side.
That is, the larger of the speeds v' and -vmax is selected, and the displacement amount equivalent to the smaller of the selected speed and the upper speed limit vmax is added to the cut-out reference position P0. Then, the cut-out range is determined with the position displaced by the calculated speed as the center.
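The expression P0 + min(max(v', -vmax), vmax) is simply the cut-out reference position plus the requested displacement clamped to the range [-vmax, +vmax]. A minimal one-dimensional sketch, with purely illustrative numbers:

```python
def cutout_center(p0, v_prime, vmax):
    """Cut-out centre per P0 + min(max(v', -vmax), vmax): the requested
    per-frame displacement v' is clamped to [-vmax, +vmax] before being
    added to the cut-out reference position P0 (vmax must be positive)."""
    return p0 + min(max(v_prime, -vmax), vmax)

# Three regimes (positions and speeds in arbitrary units):
print(cutout_center(100, 3, 5))    # within the limit -> 103
print(cutout_center(100, 9, 5))    # capped at +vmax  -> 105
print(cutout_center(100, -9, 5))   # capped at -vmax  -> 95
```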

 これによりステップS152,S103を経た場合は、撮像ユーザUaの動きによる速度v’と速度上限値vmax(=vmax1)のうちの小さい方の速度による変位量となる切り出し範囲が求められる。
 またステップS113A,S114を経た場合は、観察ユーザUbの位置指定位置に対する現在位置からの変位量に応じた速度v’と速度上限値vmax(=vmax2)のうちの小さい方の速度による変位量となる切り出し範囲が求められる。
 またステップS115A,S116を経た場合は、観察ユーザUbの位置非指定位置の操作量に応じた速度v’による変位量となる切り出し範囲が求められる。
As a result, when steps S152 and S103 have been performed, a cut-out range is obtained whose displacement corresponds to the smaller of the speed v' due to the movement of the image capturing user Ua and the upper speed limit vmax (= vmax1).
When steps S113A and S114 have been performed, a cut-out range is obtained whose displacement corresponds to the smaller of the speed v', which reflects the displacement from the current position to the position designated by the observing user Ub, and the upper speed limit vmax (= vmax2).
When steps S115A and S116 have been performed, a cut-out range is obtained whose displacement corresponds to the speed v' according to the operation amount of the observing user Ub's position non-designation operation.

<4.まとめ及び変形例>
 以上の実施の形態によれば次のような効果が得られる。
4. Summary and Modifications
According to the above embodiment, the following effects can be obtained.

 実施の形態の情報処理装置20は、撮像ユーザUaが装着した撮像装置で撮像された撮像画像(第1画像)から、撮像画像の画角よりも狭い画角の範囲を切り出して観察ユーザUbが観察するための表示画像(第2画像)とする切り出し処理を行うとともに、切り出し処理において撮像画像の正面方向の変位があったときの切り出し範囲の変更による表示画像の変位速度を、状況情報に応じて可変する表示制御部205を備えている。
 これにより撮像ユーザUaの動き(位置非指定操作)による撮像画像の正面方向の変位があった際に、観察ユーザUbに視認される表示画像(速度可変切り出し画像58)の動きは、状況に応じて変位速度が変えられる。従って画像酔いを生じさせないような表示画像の変位速度低減を行いつつ、画像酔いが生じにくい状況では、変位速度の低下を抑え、あまり時間遅れが生じないようにすることができる。つまり画像酔いの防止を行いつつ、画像酔いが生じにくい場合は撮像画像の変位に対する表示画像の遅れをできるだけ少なくし、リアルタイム性を損ないにくくするということが実現できる。
The information processing device 20 of the embodiment performs a cut-out process to cut out a range with a narrower angle of view than the angle of view of the captured image from an captured image (first image) captured by an imaging device worn by the imaging user Ua, and to create a display image (second image) for observation by the observing user Ub, and is equipped with a display control unit 205 that varies the displacement speed of the display image due to a change in the cut-out range when the captured image is displaced in the forward direction during the cut-out process, in accordance with situation information.
As a result, when the captured image is displaced in the front direction by the movement of the image capturing user Ua (a position non-designation operation), the displacement speed of the display image (the speed-variable cut-out image 58) viewed by the observing user Ub is varied according to the situation. The displacement speed of the display image can thus be reduced so as not to cause motion sickness, while in situations where motion sickness is unlikely to occur the reduction is suppressed so that no large time delay arises. In other words, motion sickness is prevented while, where it is unlikely, the delay of the display image relative to the displacement of the captured image is kept as small as possible so that real-time performance is not impaired.

 実施の形態では、撮像画像は撮像ユーザUaが装着した撮像デバイス40で撮像された画像であり、撮像画像の正面方向の変位は、撮像ユーザUaの位置非指定操作によって発生する例を挙げた。
 撮像ユーザUaが例えば頭部などに装着した撮像デバイス40によって得られる撮像画像に対して、表示画像の切り出しを行うシステムである。この場合に、画像酔いを生じさせないような表示画像の変位速度低減が行われることや、状況に応じて変位速度が可変されることで、撮像ユーザUaは、観察ユーザUbの画像酔いの防止を意識して行動する必要がなくなる。これにより撮像ユーザUaは自由な挙動で撮像を行うことができ、より現地の状況を無意識に撮られた画像を観察ユーザUbに提供できるシステムとなる。
In the embodiment, an example has been given in which the captured image is an image captured by the imaging device 40 worn by the imaging user Ua, and the displacement of the captured image in the forward direction occurs due to a position non-designation operation by the imaging user Ua.
This is a system that cuts out a display image from an image captured by an imaging device 40 worn by the image capturing user Ua, for example on the head. In this case, because the displacement speed of the display image is reduced so as not to cause motion sickness and is varied according to the situation, the image capturing user Ua does not need to act with the prevention of the observing user Ub's motion sickness in mind. The image capturing user Ua can therefore capture images while behaving freely, and the system can provide the observing user Ub with images captured without conscious restraint that better convey the situation on site.

 実施の形態では、表示制御部205は、撮像画像の正面方向の変位の有無に関わらず、表示画像の変位速度を、撮像画像の変位の速度以下の範囲で可変設定するものとした(図6,図20参照)。
 表示画像の変位を実際の撮像画像の変位よりも遅くすることで、画像酔いの防止に有効であるが、状況によっては、表示画像の撮像画像に対する変位の遅れを小さくしたり、或いは遅れがない状態としたりする。撮像画像の変位速度以下の範囲で可変することで、画像酔いを防止しつつ、リアルタイム性をなるべく維持することに好適となる。
 なお図6,図20の処理は撮像画像の正面方向の変位があったときに適用する処理とすることもできる。
In the embodiment, the display control unit 205 variably sets the displacement speed of the displayed image within a range equal to or less than the displacement speed of the captured image, regardless of whether or not the captured image is displaced in the forward direction (see Figures 6 and 20).
Making the displacement of the display image slower than that of the actual captured image is effective in preventing motion sickness, but depending on the situation, the delay of the display image relative to the captured image may be reduced or eliminated. Varying the displacement speed within a range not exceeding the displacement speed of the captured image is well suited to preventing motion sickness while maintaining real-time performance as much as possible.
The processes in FIGS. 6 and 20 can also be applied when there is a displacement in the front direction of the captured image.

 実施の形態では、状況情報には、観察ユーザUbの表示画像に対する変位操作の有無が含まれるものとした。そして表示制御部205は、観察ユーザUbの変位操作があった場合の表示画像の変位速度は、変位操作がなかった場合の表示画像の変位速度よりも速くなるようにする例を挙げた。
 観察ユーザUbの操作があったときは、観察ユーザUbが自発的に表示画像の変位を生じさせるため、表示に動きがあっても画像酔いはしにくい。そこで、観察ユーザUbの操作があった場合には、操作がない場合よりも、表示画像の変位を遅らせる程度を小さくし、これにより不要な遅れが生じないようにしている。
In the embodiment, the situation information includes the presence or absence of a displacement operation on the display image of the observing user Ub. Then, an example has been given in which the display control unit 205 makes the displacement speed of the display image when the displacement operation of the observing user Ub has been performed faster than the displacement speed of the display image when the displacement operation has not been performed.
When an operation is performed by the observing user Ub, the observing user Ub spontaneously displaces the displayed image, so that the observing user is unlikely to get motion sickness even if there is movement in the display. Therefore, when an operation is performed by the observing user Ub, the degree of delay in displacing the displayed image is made smaller than when there is no operation, thereby preventing unnecessary delays from occurring.

 実施の形態では、表示制御部205は、観察ユーザUbの変位操作があった場合は、撮像画像の正面方向の変位がないものとして観察ユーザUbの変位操作に応じて切り出し範囲を設定するものとした(図6のステップS110からS117参照)。
 観察ユーザUbの操作があったときは、例えば撮像ユーザUaの挙動による撮像画像の変位(正面方向の変位)があっても、それをキャンセルした状態で、観察ユーザUbの操作に応じて切り出し範囲を設定する。これにより観察ユーザUbは、撮像ユーザUaの挙動にかかわらず、自分の操作に応じて変位する表示画像を見ることができる。
In the embodiment, when the observing user Ub performs a displacement operation, the display control unit 205 assumes that there is no displacement in the forward direction of the captured image and sets the cut-out range in accordance with the displacement operation of the observing user Ub (see steps S110 to S117 in FIG. 6).
When an operation is performed by the observing user Ub, even if there is a displacement of the captured image (displacement in the front direction) due to the behavior of the imaging user Ua, the displacement is canceled and the cropping range is set according to the operation of the observing user Ub. This allows the observing user Ub to view the display image that displaces according to his/her own operation, regardless of the behavior of the imaging user Ua.

 実施の形態では、表示制御部205は、観察ユーザUbの変位操作がなかった場合の表示画像の変位速度を、第1の速度上限値(速度vmax1)以下に設定する。また変位操作として、表示画像を視認するユーザが切り出し範囲を指定する操作を行った場合は、表示画像の変位距離に応じた変位速度を、第1の速度上限値より速い速度である第2の速度上限値(vmax2)以下に設定する例を挙げた(図6のステップS113,S114,S117参照)。
 観察ユーザUbが切り出し範囲を直接又は間接的に指定する操作があったときは、現在の表示画像の位置から切り出し領域の位置の距離差に応じて速度が決まるが、その速度は速度vmax2以下とする。速度vmax2は速度vmax1よりも速い速度の値である。つまり観察ユーザUbの自発的な操作による表示画像の変位となるため、表示画像の変位を遅らせる程度を小さくし、遅れを少なくする。
 なお切り出し範囲を指定する操作としては、例えば観察ユーザUbが或る画像上のポイントや特定の被写体等を指定する操作などとし、表示制御部205は、そのポイント等の周囲の所定範囲を切り出し範囲とすればよい。
In the embodiment, the display control unit 205 sets the displacement speed of the display image in the absence of a displacement operation by the observing user Ub to a first upper speed limit value (speed vmax1) or less. Also, in the case where a user viewing the display image performs an operation to specify a cutout range as a displacement operation, an example has been given in which the displacement speed according to the displacement distance of the display image is set to a second upper speed limit value (vmax2) or less, which is a speed faster than the first upper speed limit value (see steps S113, S114, and S117 in FIG. 6).
When the observing user Ub performs an operation to directly or indirectly specify the cut-out range, the speed is determined according to the difference in distance from the position of the current displayed image to the position of the cut-out region, and the speed is set to be equal to or less than speed vmax2. Speed vmax2 is a speed value faster than speed vmax1. In other words, since the displacement of the displayed image is caused by a voluntary operation of the observing user Ub, the degree to which the displacement of the displayed image is delayed is reduced, and the delay is reduced.
The operation of specifying the cut-out range may be, for example, an operation in which the observing user Ub specifies a point on an image or a specific subject, etc., and the display control unit 205 may set a predetermined range around the point, etc. as the cut-out range.
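The selection of the speed cap by the observing user's operation state, as summarized above, might be sketched as the following dispatcher. The function, its operation labels, and its signature are hypothetical illustrations, not the embodiment's actual implementation.

```python
def select_speed_limit(operation, vmax1, vmax2, v_captured):
    """Choose the upper speed limit vmax from the observing user's
    operation state, mirroring the branches described for Fig. 6."""
    if operation is None:                 # no displacement operation
        return vmax1                      # strongest damping (vmax1)
    if operation == "position":           # cut-out range directly specified
        return vmax2                      # vmax2 > vmax1: smaller delay
    if operation == "non_position":       # drag / swipe / scroll
        return abs(v_captured)            # vmax = |v|: cap at the captured
                                          # image's own displacement speed
    raise ValueError(f"unknown operation: {operation!r}")
```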

 実施の形態では、表示制御部205は、変位操作として、観察ユーザUbが位置非指定操作を行った場合は、表示画像の変位速度が、その位置非指定操作に応じた速度となるようにする例を挙げた(図6のステップS115,S116,S117参照)。
 観察ユーザUbがドラッグ、スワイプ、スクロールなどの操作で表示画像を移動させる位置非指定操作を行った場合は、撮像画像の正面方向の変位があったとしても、その位置非指定操作に従った速度及び方向で表示画像が変位するように切り出し範囲を設定する。これにより、観察ユーザUbは、撮像ユーザUaの挙動にかかわらず、ドラッグ操作等で任意に表示画像を変位させることができる。
In the embodiment, an example was given in which, when the observing user Ub performs a non-position designation operation as a displacement operation, the display control unit 205 sets the displacement speed of the displayed image to a speed corresponding to the non-position designation operation (see steps S115, S116, and S117 in Figure 6).
When the observing user Ub performs a position-unspecified operation to move the display image by an operation such as dragging, swiping, scrolling, etc., the cropping range is set so that the display image is displaced at a speed and in a direction according to the position-unspecified operation, even if there is a displacement in the front direction of the captured image. This allows the observing user Ub to displace the display image as desired by a dragging operation, etc., regardless of the behavior of the capturing user Ua.

 実施の形態では、表示制御部205は、変位操作として、撮像画像の変位方向とは異なる方向への変位を指示する操作があった場合は、変位操作に応じた方向へ変位されるように切り出し範囲を設定する例を挙げた。つまり観察ユーザUbが表示画像に対する変位操作を行った方向が、撮像ユーザUaの挙動による撮像画像の変位方向とは異なる方向であった場合は、観察ユーザUbの変位操作に応じた方向に表示画像が変位するようにする。これにより観察ユーザUbは、撮像ユーザUaの挙動にかかわらず、操作により任意に表示画像を変位させることができる。 In the embodiment, an example has been given in which, when a displacement operation is performed to instruct a displacement in a direction different from the displacement direction of the captured image, the display control unit 205 sets the cropping range so that the image is displaced in a direction corresponding to the displacement operation. In other words, when the direction in which the observing user Ub performs a displacement operation on the displayed image is different from the displacement direction of the captured image due to the behavior of the imaging user Ua, the displayed image is displaced in a direction corresponding to the displacement operation of the observing user Ub. This allows the observing user Ub to displace the displayed image as desired by operation, regardless of the behavior of the imaging user Ua.

 実施の形態では、表示制御部205は、撮像画像の正面方向の変位があるときに、撮像画像の正面方向の変位がないとした場合での表示画像の切り出し範囲に相当する切り出し基準位置を設定し、切り出し基準位置と変位操作に応じた変位量に基づいて表示画像の変位速度を可変設定する例を挙げた。
 これにより撮像ユーザUaの動きに起因する全画面動きをキャンセルした状態で、表示画像の変位を行うようにすることができる。
In the embodiment, an example is given in which, when there is a displacement of the captured image in the forward direction, the display control unit 205 sets a cut-out reference position equivalent to the cut-out range of the displayed image in the case where there is no displacement of the captured image in the forward direction, and variably sets the displacement speed of the displayed image based on the cut-out reference position and the displacement amount according to the displacement operation.
This makes it possible to displace the display image while canceling the movement of the entire screen caused by the movement of the imaging user Ua.

 実施の形態では、状況情報には表示画像を表示させる出力デバイス30の情報が含まれるとした。
 観察ユーザUbが視聴する表示部31を有する出力デバイス30の種類、サイズ、表示態様などの情報に応じて、表示画像の変位速度を設定する。例えば速度上限値vmaxを速度vmax2や速度vmax1の値を変更する。これにより、画像酔いしやすい表示デバイス、画像酔いしにくい表示デバイスに応じた表示画像の変位速度制御が可能になる。
In the embodiment, the situation information includes information about the output device 30 that displays the display image.
The displacement speed of the display image is set according to information such as the type, size, and display mode of the output device 30 having the display unit 31 viewed by the observing user Ub. For example, the values of the speeds vmax2 and vmax1 applied as the upper speed limit vmax are changed. This makes it possible to control the displacement speed of the display image according to whether the display device is one on which motion sickness occurs easily or one on which it does not.

 実施の形態では、状況情報には、観察ユーザUbの入力情報が含まれるものとした。
 例えば観察ユーザUbが、自分が画像酔いしやすいか、しにくいか等を入力する。これに応じて表示制御部205は例えば速度上限値vmaxとして適用される速度vmax2や速度vmax1の値を変更する。これによりユーザ個人の画像酔いのしやすさに応じた表示画像の変位速度制御が可能になる。
In the embodiment, the situation information includes the input information of the observing user Ub.
For example, the observing user Ub inputs whether he or she is prone to motion sickness. In response to this, the display control unit 205 changes the value of the speed vmax2 or the speed vmax1 applied as the upper speed limit vmax. This makes it possible to control the displacement speed of the displayed image according to the individual user's susceptibility to motion sickness.

 実施の形態では、状況情報には、表示画像の画像内容に関する情報が含まれるものとした。
 例えば表示画像の明暗、ダイナミックレンジ、近景/遠景、高周波成分などにより画像酔いしやすいか、しにくいかが異なる。そこで表示制御部205はこれらの画像内容に関する情報に応じて例えば速度vmax2や速度vmax1の値を変更する。これにより表示画像の画像酔いのしやすさに応じて変位速度制御が可能になる。
In the embodiment, the situation information includes information about the image contents of the display image.
For example, whether or not a person is susceptible to motion sickness varies depending on the brightness, dynamic range, close view/distant view, high frequency components, etc. of the displayed image. Therefore, the display control unit 205 changes the values of the speed vmax2 and the speed vmax1, for example, according to information about the image content. This makes it possible to control the displacement speed according to the susceptibility of the displayed image to motion sickness.

 実施の形態では、状況情報には、表示画像の視聴環境に関する情報が含まれるものとした。
 例えば表示デバイスを置いている部屋の明暗に応じて例えば速度vmax2や速度vmax1の値を変更する。これにより画像酔いしやすい環境か否かに応じて変位速度制御が可能になる。
In the embodiment, the situation information includes information about the viewing environment of the displayed image.
For example, the value of the speed vmax2 or the speed vmax1 is changed depending on the brightness of the room the display device is placed in. This makes it possible to control the displacement speed depending on whether the environment is prone to visually-induced motion sickness.

 実施の形態では、状況情報には、表示画像とともに出力される視覚外情報に関する情報が含まれるものとした。
 例えば表示画像とともに、音声、振動、触覚出力等を行う場合は、それらの出力の程度に応じて例えば速度vmax2や速度vmax1の値を変更する。これにより画像酔いしやすい状況か否かに応じて変位速度制御が可能になる。
In the embodiment, the situation information includes information regarding extra-visual information that is output together with the display image.
For example, when audio, vibration, haptic output, etc. are to be provided together with the displayed image, the values of the speed vmax2 and the speed vmax1 are changed according to the level of these outputs. This makes it possible to control the displacement speed according to whether or not the situation is prone to visual sickness.

 なお表示制御部205による処理、つまり状況情報に応じて表示画像の変位速度を可変する処理は、撮像ユーザUa側の情報処理装置10において実行し、速度設定処理後の表示画像を、コンテンツサーバ90を介して観察ユーザUb側の情報処理装置20に送信してもよい。
 或いはコンテンツサーバ90が表示制御部205による処理を行って、速度設定処理後の表示画像を情報処理装置20に送信してもよい。
The processing by the display control unit 205, i.e., the processing of varying the displacement speed of the display image according to the situation information, may be executed by the information processing device 10 on the imaging user Ua side, and the display image after the speed setting processing may be transmitted to the information processing device 20 on the observing user Ub side via the content server 90.
Alternatively, the content server 90 may perform processing by the display control unit 205 and transmit the display image after the speed setting processing to the information processing device 20 .

 実施の形態のプログラムは、上述の図6の処理や、図15から図19のような処理を、例えばCPU、DSP等のプロセッサ、或いはこれらを含むデバイスに実行させるプログラムである。
 即ち実施の形態のプログラムは、情報処理装置20(或いは情報処理装置10、或いはコンテンツサーバ90)に、撮像ユーザUaが装着した撮像装置で撮像された第1画像から、第1画像よりも狭い画角の範囲を切り出して観察ユーザUbが観察するための第2画像とする切り出し処理を行うとともに、第1画像の正面方向の変位があったときの第2画像の切り出し範囲の変更によって生ずる、第2画像に基づく表示画像の変位速度を、状況情報に応じて可変する表示制御を実行させるプログラムである。
 このようなプログラムにより、実施の形態の情報処理装置20等を実現できる。
The program of the embodiment is a program that causes a processor such as a CPU or a DSP, or a device including these, to execute the process of FIG. 6 described above and the processes of FIGS. 15 to 19.
In other words, the program of the embodiment is a program that causes the information processing device 20 (or the information processing device 10, or the content server 90) to perform a cut-out process to cut out a range with a narrower angle of view than the first image from a first image captured by an imaging device worn by the imaging user Ua, and to create a second image for observation by the observing user Ub, and also to execute display control to vary the displacement speed of the displayed image based on the second image, which is caused by changing the cut-out range of the second image when there is a displacement of the first image in the forward direction, in accordance with situational information.
Such a program can realize the information processing device 20 according to the embodiment.

 以上のような実施の形態のプログラムは、コンピュータ装置等の機器に内蔵されている記録媒体としてのHDDや、CPUを有するマイクロコンピュータ内のROM等に予め記録しておくことができる。また、このようなプログラムは、フレキシブルディスク、CD-ROM(Compact Disc Read Only Memory)、MO(Magneto Optical)ディスク、DVD(Digital Versatile Disc)、ブルーレイディスク(Blu-ray Disc(登録商標))、磁気ディスク、半導体メモリ、メモリカードなどのリムーバブル記録媒体に、一時的あるいは永続的に格納(記録)しておくことができる。このようなリムーバブル記録媒体は、いわゆるパッケージソフトウェアとして提供することができる。
 また、このようなプログラムは、リムーバブル記録媒体からパーソナルコンピュータ等にインストールする他、ダウンロードサイトから、LAN(Local Area Network)、インターネットなどのネットワークを介してダウンロードすることもできる。
The programs of the above-described embodiments can be pre-recorded in an HDD as a recording medium built into a device such as a computer device, or in a ROM in a microcomputer having a CPU. Such programs can also be stored (recorded) temporarily or permanently on removable recording media such as flexible disks, CD-ROMs (Compact Disc Read Only Memory), MO (Magneto Optical) disks, DVDs (Digital Versatile Discs), Blu-ray Discs (registered trademark), magnetic disks, semiconductor memories, and memory cards. Such removable recording media can be provided as so-called package software.
Such a program can be installed in a personal computer or the like from a removable recording medium, or can be downloaded from a download site via a network such as a LAN (Local Area Network) or the Internet.

 またこのようなプログラムによれば、実施の形態の情報処理装置10等の広範な提供に適している。例えばパーソナルコンピュータ、通信機器、スマートフォンやタブレット等の携帯端末装置、携帯電話機、ゲーム機器、ビデオ機器、PDA(Personal Digital Assistant)等にプログラムをダウンロードすることで、これらの装置を本開示の情報処理装置10等として機能させることができる。 Furthermore, such a program is suitable for widespread provision of the information processing device 10 of the embodiment. For example, by downloading the program to personal computers, communication devices, mobile terminal devices such as smartphones and tablets, mobile phones, game devices, video devices, PDAs (Personal Digital Assistants), etc., these devices can function as the information processing device 10 of the present disclosure.

 なお、本明細書に記載された効果はあくまでも例示であって限定されるものではなく、また他の効果があってもよい。 Note that the effects described in this specification are merely examples and are not limiting, and other effects may also be present.

 なお本技術は以下のような構成も採ることができる。
 (1)
 撮像ユーザが装着した撮像装置で撮像された第1画像から、前記第1画像よりも狭い画角の範囲を切り出して観察ユーザが観察するための第2画像とする切り出し処理を行うとともに、前記第2画像の切り出し範囲の変更によって生ずる、前記第2画像に基づく表示画像の変位速度を、状況情報に応じて可変する表示制御部を備える
 情報処理装置。
 (2)
 前記第1画像の正面方向の変位は、前記撮像ユーザの位置非指定操作によって発生する
 上記(1)に記載の情報処理装置。
 (3)
 前記表示制御部は、
 前記第2画像の切り出し位置の変更による表示画像の変位速度を、前記第1画像の変位の速度以下の範囲で可変設定する
 上記(2)に記載の情報処理装置。
 (4)
 前記状況情報には、前記第2画像による表示画像を観察する観察ユーザの表示画像に対する変位操作の有無が含まれ、
 前記表示制御部は、
 前記変位操作がある場合の表示画像の変位速度は、前記変位操作がない場合の表示画像の変位速度よりも速くする
 上記(2)又は(3)に記載の情報処理装置。
 (5)
 前記状況情報には、前記第2画像による表示画像を観察する観察ユーザの表示画像に対する変位操作の有無が含まれ、
 前記表示制御部は、
 前記変位操作がない場合の表示画像の変位速度を第1の速度上限値以下に設定し、
 前記変位操作がある場合の表示画像の変位速度を前記第1の速度上限値より速い速度である第2の速度上限値以下に設定する
 上記(2)から(4)のいずれかに記載の情報処理装置。
 (6)
 前記状況情報には、前記第2画像による表示画像を観察する観察ユーザの表示画像に対する変位操作の有無が含まれ、
 前記表示制御部は、
 前記変位操作がある場合は、前記第1画像の正面方向の変位がないものとして前記変位操作に応じて前記第2画像の切り出し範囲を設定する
 上記(2)から(5)のいずれかに記載の情報処理装置。
 (7)
 前記状況情報には、前記第2画像による表示画像を観察する観察ユーザの表示画像に対する変位操作の有無が含まれ、
 前記表示制御部は、
 前記変位操作がない場合の表示画像の変位速度を、第1の速度上限値以下に設定し、
 前記変位操作として、前記観察ユーザが前記第2画像の切り出し範囲を指定する操作を行った場合は、表示画像の変位距離に応じた変位速度を、前記第1の速度上限値より速い速度である第2の速度上限値以下に設定する
 上記(2)から(6)のいずれかに記載の情報処理装置。
 (8)
 前記状況情報には、前記第2画像による表示画像を観察する観察ユーザの表示画像に対する変位操作の有無が含まれ、
 前記表示制御部は、
 前記変位操作として、前記観察ユーザが位置非指定操作を行っていた場合は、表示画像の変位速度を、前記位置非指定操作に応じた速度とする
 上記(2)から(7)のいずれかに記載の情報処理装置。
 (9)
 前記状況情報には、前記第2画像による表示画像を観察する観察ユーザの表示画像に対する変位操作の有無が含まれ、
 前記表示制御部は、
 前記変位操作として、前記第1画像の変位方向とは異なる方向への変位を指示する操作がある場合は、前記変位操作に応じた方向へ変位されるように前記第2画像の切り出し範囲を設定する
 上記(2)から(8)のいずれかに記載の情報処理装置。
 (10)
 前記状況情報には、前記第2画像による表示画像を観察する観察ユーザの表示画像に対する変位操作の有無が含まれ、
 前記表示制御部は、
 前記第1画像の正面方向の変位があるときに、前記第1画像の正面方向の変位がないとした場合での前記第2画像の切り出し範囲に相当する切り出し基準位置を設定し、前記切り出し基準位置と前記変位操作に応じた変位量に基づいて表示画像の変位速度を可変設定する
 上記(2)から(9)のいずれかに記載の情報処理装置。
 (11)
 前記状況情報には、前記第2画像による表示画像を表示させる表示装置の情報が含まれる
 上記(2)から(10)のいずれかに記載の情報処理装置。
 (12)
 前記状況情報には、前記第2画像による表示画像を観察する観察ユーザの入力情報が含まれる
 上記(2)から(11)のいずれかに記載の情報処理装置。
 (13)
 前記状況情報には、前記第2画像による表示画像の画像内容に関する情報が含まれる
 上記(2)から(12)のいずれかに記載の情報処理装置。
 (14)
 前記状況情報には、前記第2画像による表示画像の視聴環境に関する情報が含まれる
 上記(2)から(13)のいずれかに記載の情報処理装置。
 (15)
 前記状況情報には、前記第2画像による表示画像とともに出力される視覚以外の出力に関する情報が含まれる
 上記(2)から(14)のいずれかに記載の情報処理装置。
 (16)
 情報処理装置が、
 撮像ユーザが装着した撮像装置で撮像された第1画像から、前記第1画像よりも狭い画角の範囲を切り出して観察ユーザが観察するための第2画像とする切り出し処理を行うとともに、前記第1画像の正面方向の変位があったときの前記第2画像の切り出し範囲の変更によって生ずる、前記第2画像に基づく表示画像の変位速度を、状況情報に応じて可変する表示制御を行う
 情報処理方法。
 (17)
 撮像ユーザが装着した撮像装置で撮像された第1画像から、前記第1画像よりも狭い画角の範囲を切り出して観察ユーザが観察するための第2画像とする切り出し処理を行うとともに、前記第1画像の正面方向の変位があったときの前記第2画像の切り出し範囲の変更によって生ずる、前記第2画像に基づく表示画像の変位速度を、状況情報に応じて可変する表示制御を
 情報処理装置に実行させるプログラム。
The present technology can also be configured as follows.
(1)
An information processing device comprising: a display control unit that performs a cut-out process for cutting out a range of an angle of view narrower than that of a first image from a first image captured by an imaging device worn by an imaging user to create a second image for observation by an observing user; and that varies a displacement speed of a displayed image based on the second image, which is caused by changing the cut-out range of the second image, in accordance with situation information.
(2)
The information processing device according to (1), wherein the displacement in the front direction of the first image occurs due to a position non-designation operation of the image capturing user.
(3)
The information processing device according to (2) above, wherein
the display control unit variably sets a displacement speed of the display image caused by changing the cut-out position of the second image within a range equal to or less than the displacement speed of the first image.
(4)
The information processing device according to (2) or (3) above, wherein
the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and
the display control unit makes a displacement speed of the display image when the displacement operation is performed faster than a displacement speed of the display image when the displacement operation is not performed.
(5)
The information processing device according to any one of (2) to (4) above, wherein
the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and
the display control unit sets a displacement speed of the display image in the absence of the displacement operation to a first upper speed limit value or less, and sets a displacement speed of the display image when the displacement operation is performed to a second upper speed limit value or less, the second upper speed limit value being faster than the first upper speed limit value.
(6)
The information processing device according to any one of (2) to (5) above, wherein
the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and
the display control unit, when the displacement operation is performed, sets the cut-out range of the second image in accordance with the displacement operation, assuming that there is no displacement of the first image in the forward direction.
(7)
The information processing device according to any one of (2) to (6) above, wherein
the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and
the display control unit sets a displacement speed of the display image in the absence of the displacement operation to a first upper speed limit value or less, and, when the observing user performs an operation to designate the cut-out range of the second image as the displacement operation, sets a displacement speed corresponding to the displacement distance of the display image to a second upper speed limit value or less, the second upper speed limit value being faster than the first upper speed limit value.
(8)
The information processing device according to any one of (2) to (7) above, wherein
the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and
the display control unit, when the observing user performs a position non-designation operation as the displacement operation, sets a displacement speed of the display image to a speed corresponding to the position non-designation operation.
(9)
The information processing device according to any one of (2) to (8) above, wherein
the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and
the display control unit, when the displacement operation includes an operation instructing a displacement in a direction different from the displacement direction of the first image, sets the cut-out range of the second image such that the display image is displaced in a direction corresponding to the displacement operation.
(10)
The information processing device according to any one of (2) to (9) above, wherein
the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and
the display control unit, when there is a displacement of the first image in the forward direction, sets a cut-out reference position corresponding to the cut-out range of the second image in a case where there is no displacement of the first image in the forward direction, and variably sets a displacement speed of the display image based on the cut-out reference position and a displacement amount corresponding to the displacement operation.
(11)
The information processing device according to any one of (2) to (10), wherein the situation information includes information about a display device that displays the display image based on the second image.
(12)
The information processing device according to any one of (2) to (11) above, wherein the situation information includes input information of an observing user observing a display image based on the second image.
(13)
The information processing device according to any one of (2) to (12) above, wherein the situation information includes information regarding the image content of the display image based on the second image.
(14)
The information processing device according to any one of (2) to (13) above, wherein the situation information includes information regarding a viewing environment of the display image based on the second image.
(15)
The information processing device according to any one of (2) to (14) above, wherein the situation information includes information regarding a non-visual output that is output together with the display image based on the second image.
(16)
An information processing method in which an information processing device:
performs a cut-out process for cutting out, from a first image captured by an imaging device worn by an imaging user, a range of a narrower angle of view than the first image to create a second image for observation by an observing user; and
performs display control for varying, in accordance with situation information, the displacement speed of a display image based on the second image, the displacement speed being caused by a change in the cut-out range of the second image when the first image is displaced in the forward direction.
(17)
A program that causes an information processing device to perform a cut-out process for cutting out, from a first image captured by an imaging device worn by an imaging user, a range of a narrower angle of view than the first image to create a second image for observation by an observing user, and to execute display control for varying, in accordance with situation information, the displacement speed of a display image based on the second image, the displacement speed being caused by a change in the cut-out range of the second image when the first image is displaced in the forward direction.
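The speed control described in configurations (1) to (5) can be sketched in a few lines: the cut-out centre of the second image follows a target direction (the forward direction of the first image, or a position the observing user designates), and the displacement speed of the display image is clamped to an upper limit that is higher while the observing user is performing a displacement operation. This is an illustrative model only, not the claimed implementation; all function names and constants below are assumptions.

```python
# Illustrative sketch of the displacement-speed control in configurations
# (1)-(5). The names and constants are assumptions, not from the application.

V_LIMIT_NO_OP = 30.0  # first upper speed limit (deg/s), no displacement operation
V_LIMIT_OP = 90.0     # second, faster upper speed limit (deg/s) during an operation

def next_cutout_center(current_deg, target_deg, source_speed_deg_s,
                       observer_operating, dt):
    """Advance the cut-out centre toward the target at a clamped speed."""
    if observer_operating:
        # Configurations (4)/(5): a faster limit applies during an operation.
        limit = V_LIMIT_OP
    else:
        # Configuration (3): without an operation, the display image must not
        # move faster than the first image itself is displacing.
        limit = min(V_LIMIT_NO_OP, source_speed_deg_s)
    error = target_deg - current_deg
    # Clamp the per-frame step to the permitted speed times the frame interval.
    step = max(-limit * dt, min(limit * dt, error))
    return current_deg + step
```

With this model, a fast head turn by the imaging user is followed only gradually when the observer is passive, but tracks much more quickly once the observer issues a displacement operation.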

REFERENCE SIGNS LIST
1 Information processing system
10, 20 Information processing device
30 Output device
31 Display unit
33 Detection unit
40 Imaging device
41, 41a, 41b Imaging unit
90 Content server
95 Storage unit
101 Communication unit
103 Image synthesis unit
105 Recognition unit
201 Communication unit
203 Analysis unit
205 Display control unit
206 Input unit
207 Force sense control unit
901 Communication unit
903 Content control unit
Ua Imaging user
Ub Observing user

Claims (17)

An information processing device comprising: a display control unit that performs a cut-out process for cutting out a range of an angle of view narrower than that of a first image from a first image captured by an imaging device worn by an imaging user to create a second image for observation by an observing user; and that varies a displacement speed of a displayed image based on the second image, which is caused by changing the cut-out range of the second image, in accordance with situation information.
The information processing device according to claim 1, wherein the displacement of the first image in the forward direction occurs due to a position non-designation operation by the imaging user.
The information processing device according to claim 2, wherein
the display control unit variably sets a speed at which the display image is displaced by changing the cut-out position of the second image within a range equal to or lower than a speed at which the first image is displaced.
The information processing device according to claim 2, wherein
the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and
the display control unit makes a displacement speed of the display image when the displacement operation is performed faster than a displacement speed of the display image when the displacement operation is not performed.
The information processing device according to claim 2, wherein
the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and
the display control unit sets a displacement speed of the display image in the absence of the displacement operation to a first upper speed limit value or less, and sets a displacement speed of the display image when the displacement operation is performed to a second upper speed limit value or less, the second upper speed limit value being faster than the first upper speed limit value.
The information processing device according to claim 2, wherein
the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and
the display control unit, when the displacement operation is performed, sets the cut-out range of the second image in accordance with the displacement operation, assuming that there is no displacement of the first image in the forward direction.
The information processing device according to claim 2, wherein
the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and
the display control unit sets a displacement speed of the display image in the absence of the displacement operation to a first upper speed limit value or less, and, when the observing user performs an operation to designate the cut-out range of the second image as the displacement operation, sets a displacement speed corresponding to the displacement distance of the display image to a second upper speed limit value or less, the second upper speed limit value being faster than the first upper speed limit value.
The information processing device according to claim 2, wherein
the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and
the display control unit, when the observing user performs a position non-designation operation as the displacement operation, sets a displacement speed of the display image to a speed corresponding to the position non-designation operation.
The information processing device according to claim 2, wherein
the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and
the display control unit, when the displacement operation includes an operation instructing a displacement in a direction different from the displacement direction of the first image, sets the cut-out range of the second image such that the display image is displaced in a direction corresponding to the displacement operation.
The information processing device according to claim 2, wherein
the situation information includes whether or not an observing user who observes a display image based on the second image performs a displacement operation on the display image, and
the display control unit, when there is a displacement of the first image in the forward direction, sets a cut-out reference position corresponding to the cut-out range of the second image in a case where there is no displacement of the first image in the forward direction, and variably sets a displacement speed of the display image based on the cut-out reference position and a displacement amount corresponding to the displacement operation.
The information processing device according to claim 2, wherein the situation information includes information about a display device that displays the display image based on the second image.
The information processing apparatus according to claim 2 , wherein the situation information includes input information of an observing user who observes a display image based on the second image.
The information processing device according to claim 2, wherein the situation information includes information regarding the image content of the display image based on the second image.
The information processing device according to claim 2 , wherein the situation information includes information related to a viewing environment of the display image based on the second image.
The information processing device according to claim 2, wherein the situation information includes information regarding a non-visual output that is output together with the display image based on the second image.
An information processing method in which an information processing device:
performs a cut-out process for cutting out, from a first image captured by an imaging device worn by an imaging user, a range of a narrower angle of view than the first image to create a second image for observation by an observing user; and
performs display control for varying, in accordance with situation information, the displacement speed of a display image based on the second image, the displacement speed being caused by a change in the cut-out range of the second image when the first image is displaced in the forward direction.
A program that causes an information processing device to perform a cut-out process for cutting out, from a first image captured by an imaging device worn by an imaging user, a range of a narrower angle of view than the first image to create a second image for observation by an observing user, and to execute display control for varying, in accordance with situation information, the displacement speed of a display image based on the second image, the displacement speed being caused by a change in the cut-out range of the second image when the first image is displaced in the forward direction.
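The cut-out process recited in claim 1 can be illustrated with a minimal sketch: given a wide first image, compute the pixel range of a rectangle with a narrower angle of view centred on a pan angle, which serves as the second image. The equirectangular geometry, function name, and parameters below are illustrative assumptions, not the claimed implementation.

```python
# Hedged sketch of the claim-1 cut-out process: map a desired angle of view
# and centre direction to pixel columns of a wide (assumed equirectangular)
# first image. Names and geometry are assumptions, not from the application.

def cutout_range(src_width_px, src_fov_deg, out_fov_deg, center_deg):
    """Return (left, right) pixel columns of the cut-out in the first image."""
    if out_fov_deg > src_fov_deg:
        raise ValueError("second image must have a narrower angle of view")
    px_per_deg = src_width_px / src_fov_deg   # horizontal pixels per degree
    half = out_fov_deg * px_per_deg / 2.0     # half-width of the cut-out
    center_px = (center_deg % src_fov_deg) * px_per_deg
    # Wrap around the panorama seam so the range stays within the source image.
    left = int(center_px - half) % src_width_px
    right = int(center_px + half) % src_width_px
    return left, right
```

Shifting `center_deg` frame by frame (for example with a speed-clamped update) is what produces the displacement of the display image that the claims regulate.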
PCT/JP2024/008737 2023-03-17 2024-03-07 Information processing device, information processing method, and program Pending WO2024195562A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2023043443 2023-03-17
JP2023-043443 2023-03-17

Publications (1)

Publication Number Publication Date
WO2024195562A1 2024-09-26

Family

ID=92841949

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/008737 Pending WO2024195562A1 (en) 2023-03-17 2024-03-07 Information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2024195562A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001333880A (en) * 2000-05-29 2001-12-04 Fuji Photo Optical Co Ltd Electronic endoscope
JP2002034909A (en) * 2000-07-28 2002-02-05 Fuji Photo Optical Co Ltd Endoscope apparatus for being attached to finger
JP2007537010A * 2004-05-14 2007-12-20 Wilson-Cook Medical Inc. Endoscope storage device
US20130076944A1 * 2011-09-26 2013-03-28 Sony Mobile Communications Japan, Inc. Image photography apparatus
WO2014077046A1 * 2012-11-13 2014-05-22 Sony Corporation Image display device and image display method, mobile body device, image display system, and computer program
WO2019026929A1 * 2017-08-03 2019-02-07 Sony Olympus Medical Solutions Inc. Medical observation device


Similar Documents

Publication Publication Date Title
US10009542B2 (en) Systems and methods for environment content sharing
US9143693B1 (en) Systems and methods for push-button slow motion
US11320655B2 (en) Graphic interface for real-time vision enhancement
US11487354B2 (en) Information processing apparatus, information processing method, and program
JP6642430B2 (en) Information processing apparatus, information processing method, and image display system
US12449946B2 (en) Methods for displaying user interface elements relative to media content
JP6130478B1 (en) Program and computer
US11151804B2 (en) Information processing device, information processing method, and program
US20240320930A1 (en) Devices, methods, and graphical user interfaces for capturing media with a camera application
JP6969577B2 (en) Information processing equipment, information processing methods, and programs
CN115623149A (en) Video recording and playback system and method
US10771707B2 (en) Information processing device and information processing method
WO2019187732A1 (en) Information processing device, information processing method, and program
US20230060453A1 (en) Electronic device and operation method thereof
CN115617160A (en) Video processing and playback system and method
CN111902859B (en) Information processing device, information processing method and program
US20220191577A1 (en) Changing Resource Utilization associated with a Media Object based on an Engagement Score
JP2017121082A (en) Program and computer
JP7392723B2 (en) Information processing device, information processing method, and program
JP2025114850A (en) Entertainment Systems and Programs
US20240019928A1 (en) Gaze and Head Pose Interaction
WO2024195562A1 (en) Information processing device, information processing method, and program
JP7589268B2 (en) program
US12429945B2 (en) Information processing apparatus and adjustment screen display method
US20250111596A1 (en) Dynamic Transparency of User Representations

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24774701

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE