
WO2020115824A1 - Virtual space display system - Google Patents

Virtual space display system Download PDF

Info

Publication number
WO2020115824A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
virtual space
head
motion
detection unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2018/044631
Other languages
French (fr)
Japanese (ja)
Inventor
西川隼矢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rockin' Pool Inc
Original Assignee
Rockin' Pool Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rockin' Pool Inc filed Critical Rockin' Pool Inc
Priority to PCT/JP2018/044631 priority Critical patent/WO2020115824A1/en
Publication of WO2020115824A1 publication Critical patent/WO2020115824A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • The present invention relates to a virtual space display system, and more particularly to a virtual space display system for displaying, as images, a virtual space and an object arranged in the virtual space on a display device mounted on the head of a user in a water area.
  • A technology for displaying a virtual space generated by VR (Virtual Reality) technology on a head mounted display (Head Mounted Display: HMD) worn on the user's head has become widespread.
  • Patent Document 1 discloses a virtual space display system that displays a virtual space on the HMD of a user who wears an HMD and floats in a water area.
  • According to the virtual space display system of Patent Document 1, the motion of the head of a user suspended in the water in a prone position is detected, and an image of the virtual space is generated according to the detected head motion, so the user can enjoy the image of the virtual space while feeling a sense of floating in the water.
  • If, when the user views the image of the virtual space in the water area, not only the movement of the head but also the sensations that the user receives from the water area can be reflected in the virtual space, interaction between the image of the virtual space and the user is realized, and the user's interest is enhanced.
  • the present invention has been made in view of the above circumstances, and an object of the present invention is to provide a virtual space display system capable of improving the interest of the user.
  • To achieve the above object, a virtual space display system according to the present invention displays, as images on a display device mounted on the head of a user in a water area, a virtual space and an object arranged in the virtual space, and includes: a head motion detection unit that detects motion of the user's head; a resistance detection unit that is attached to an arbitrary body part of the user and detects the resistance the user receives from the water area at that body part; and a virtual space processing program that transforms the virtual space and the object based on the head motion detected by the head motion detection unit and the resistance detected by the resistance detection unit.
  • This virtual space display system is characterized by including a gyro sensor and an acceleration sensor that are attached to an arbitrary body part of the user and detect movements of the body part in the water area.
  • According to this virtual space display system, the virtual space and the objects displayed as images on the display device mounted on the user's head are transformed based on the motion of the user's head detected by the head motion detection unit and the resistance the user receives from the water area detected by the resistance detection unit.
  • Since the virtual space and the object are transformed based on the user's actions, interaction between the user and the images of the virtual space and the object is realized, and the user's interest is enhanced.
  • In particular, since a resistance detection unit that detects the resistance received from the water area is provided, the specific forces a user in the water area receives from the water, such as resistance and buoyancy, can be reflected in the virtual space and the object, and the user's interest is expected to improve further.
  • Furthermore, the virtual space processing program of this virtual space display system transforms the virtual space and the object based on the head motion of one user detected by a head motion detection unit and the resistance that user receives from the water area detected by a resistance detection unit, and on the head motion of another user detected by a head motion detection unit and the resistance that other user receives from the water area detected by a resistance detection unit.
  • In this way, the virtual space and the object displayed as images on the display device mounted on each user's head are transformed based on the head motions of the plurality of users and the resistance each user receives from the water area.
  • Since the virtual space and the object are transformed based on each user's actions, interaction between each user and the images of the virtual space and the object is realized, as is interaction between the users, so the interest of each of the plurality of users is further enhanced.
  • According to the present invention, since the virtual space and the object are transformed based on the user's actions, interaction between the user and the images of the virtual space and the object is realized, and the user's interest is enhanced.
  • Next, a virtual space display system according to an embodiment of the present invention will be described with reference to FIGS. 1 to 6.
  • FIG. 1 is a diagram for explaining the outline of the configuration of the virtual space display system according to this embodiment.
  • The virtual space display system 10 includes a server 20 that generates a virtual space 11 and the like, a head mounted display (Head Mounted Display: HMD) 30, which is a display device worn by a user 1 located in a water area W and on which the virtual space 11 generated by the server 20 is displayed, and an operation unit 40 connected to the server 20 and attached to the hands of the user 1.
  • The user 1 in the present embodiment may be floating in the water area W, or may use the system with his or her legs resting on the bottom of the water area (or on a member such as a platform placed on the bottom).
  • In the virtual space 11, a virtual sphere 12, which is a spherical object expressed in three dimensions, and a virtual hand 13, which is an object representing a hand, a body part of the user 1, in three dimensions, are placed.
  • the virtual hand 13 is operated by the operation unit 40 attached to the hand of the user 1 in the present embodiment.
  • FIG. 2 is a block diagram for explaining the outline of the configuration of the server 20 according to this embodiment.
  • the server 20 includes a processor 21, a memory 22, a storage 23, a transmission/reception unit 24, and an input/output unit 25 as main components, which are electrically connected to each other via a bus 26.
  • the processor 21 is an arithmetic device that controls the operation of the server 20, controls the transmission and reception of data between each element, and performs the processing necessary for executing an application program.
  • In the present embodiment, the processor 21 is, for example, a CPU (Central Processing Unit), and performs each process by executing application programs stored in the storage 23 (described later) and loaded into the memory 22.
  • The memory 22 includes a main storage device configured by a volatile storage device such as a DRAM (Dynamic Random Access Memory) and an auxiliary storage device configured by a non-volatile storage device such as a flash memory or an HDD (Hard Disc Drive).
  • the memory 22 is used as a work area of the processor 21, and also stores a BIOS (Basic Input/Output System) executed when the server 20 is started, and various setting information.
  • the storage 23 stores application programs and data used for various processes.
  • In the present embodiment, the virtual space processing program 23A is stored there; the outline of its configuration will be described later.
  • the transmission/reception unit 24 connects the server 20 to the network, and in the present embodiment, the HMD 30 and the operation unit 40 are connected to the transmission/reception unit 24 via the network.
  • the transmission/reception unit 24 may have a short-range communication interface such as Bluetooth (registered trademark) or BLE (Bluetooth Low Energy).
  • the input/output unit 25 may be connected with an information input device such as a keyboard and a mouse or an output device such as a display, if necessary.
  • the bus 26 transmits, for example, an address signal, a data signal, and various control signals among the connected processor 21, memory 22, storage 23, transmission/reception unit 24, and input/output unit 25.
  • The HMD 30 is formed as a helmet-type device worn on the head of the user 1, and includes a smartphone storage unit 31 and a respirator 32.
  • The smartphone storage unit 31 is formed at a portion corresponding to the position of the eyes of the user 1 and stores a smartphone 33 that is connected to the server 20 via the network.
  • The respirator 32 is formed as a tube or pipe that extends upward from, and opens at, a portion corresponding to the position of the mouth of the user 1, and air is inhaled and exhaled through it according to the breathing of the user 1.
  • FIG. 3 is a block diagram illustrating an outline of the configuration of the smartphone 33 according to the present embodiment.
  • the smartphone 33 includes a display 33A and a head movement detection unit 33B.
  • the virtual space 11, the virtual sphere 12, and the virtual hand 13 generated by the server 20 are displayed on the display 33A as images.
  • This display 33A is mounted as a display of the HMD 30 in the present embodiment.
  • The head motion detection unit 33B includes a geomagnetic sensor 33Ba, a gyro sensor 33Bb, and an acceleration sensor 33Bc, and detects the motion of the head of the user 1 wearing the HMD 30 in which the smartphone 33 is stored to generate a head motion signal.
  • In particular, the gyro sensor 33Bb detects, over time, the angular velocities of the HMD 30 around three axes in accordance with the movement of the head of the user 1 and the corresponding movement of the HMD 30, from which the change over time of the angle (tilt) around each axis can be determined.
  • In the present embodiment, when the user 1 wears the HMD 30 with the smartphone 33 stored in the smartphone storage unit 31, the user 1 sees only the display 33A of the smartphone 33.
  • As a result, the user 1 loses all view of the outside world and can thus be completely immersed in the virtual space 11 displayed on the display 33A of the smartphone 33.
  • The user 1, immersed in the virtual space 11, can interact with the virtual sphere 12 using the virtual hand 13, which moves in conjunction with the movement of his or her own hand.
  • the operation unit 40 has a glove shape that can be worn on the hand of the user 1, and is attached to both hands of the user 1 in the present embodiment.
  • the first motion detection unit 41 includes a resistance sensor 41a, which is a resistance detection unit, a gyro sensor 41b, and an acceleration sensor 41c.
  • Water has four characteristic elements: buoyancy, resistance, water pressure, and water temperature.
  • Of these, the present invention identifies the movement or displacement of the user 1 by detecting resistance. If necessary, not only the resistance but also the water pressure resulting from differences in water depth may be detected.
  • The "detection unit" may be provided as a standalone unit, or may be built into a controller (operation unit) or the like.
  • The resistance sensor 41a is a sensor that detects the resistance the user 1 receives from the water area W and, in cooperation with the gyro sensor 41b and the acceleration sensor 41c, detects the movement of the hand of the user 1 to generate a first motion signal.
  • The sensor according to the present embodiment is not limited to this; any sensor, such as a strain sensor, a torque sensor, or a semiconductor resistance sensor, may be used as long as it can obtain information on changes such as the movement, displacement, or speed of an object and can be used underwater (or has been processed so that it can be used in water).
  • the second motion detection unit 42 is composed of a motion capture mechanism including a sensor that detects the motion of the finger of the user 1, and detects the motion of the finger of the user 1 to generate a second motion signal.
  • FIG. 5 is a block diagram for explaining the outline of the configuration of the virtual space processing program 23A according to this embodiment.
  • the virtual space processing program 23A includes a video generation unit 23Aa, a video transformation signal generation unit 23Ab, and a feedback signal generation unit 23Ac.
  • The video generation unit 23Aa generates the virtual space 11, the virtual sphere 12, and the virtual hand 13 based on preset virtual space generation data D, and places the virtual sphere 12 and the virtual hand 13 in the virtual space 11.
  • Furthermore, the video generation unit 23Aa transforms the virtual space 11, the virtual sphere 12, and the virtual hand 13 based on the video transformation signal generated by the video transformation signal generation unit 23Ab described below.
  • The video transformation signal generation unit 23Ab generates the video transformation signal S3 based on the head motion signal S0 generated by the head motion detection unit 33B of the smartphone 33, the first motion signal S1 generated by the first motion detection unit 41 of the operation unit 40, and the second motion signal S2 generated by the second motion detection unit 42.
  • The feedback signal generation unit 23Ac generates the feedback signal S4 based on the head motion signal S0 generated by the head motion detection unit 33B of the smartphone 33, the first motion signal S1 generated by the first motion detection unit 41 of the operation unit 40, and the second motion signal S2 generated by the second motion detection unit 42.
  • With the user 1 wearing the HMD 30 and the operation unit 40 floating in the water area W, the video generation unit 23Aa of the virtual space processing program 23A generates the virtual space 11, the virtual sphere 12, and the virtual hand 13 based on the preset virtual space generation data D, places the virtual sphere 12 and the virtual hand 13 in the virtual space 11, and displays them as images on the HMD 30 (the display 33A of the smartphone 33).
  • In this state, when the user 1 floating in the water area W moves his or her head, the head motion detection unit 33B detects the motion of the head of the user 1 and generates the head motion signal S0 according to that motion.
  • When the user 1 moves a hand, the resistance sensor 41a of the first motion detection unit 41 detects the resistance that the hand of the user 1 receives from the water area W and, in cooperation with the gyro sensor 41b and the acceleration sensor 41c, detects the motion of the hand of the user 1 to generate the first motion signal S1.
  • Similarly, when the user 1 moves a finger, the second motion detection unit 42 detects the motion of the finger of the user 1 and generates the second motion signal S2.
  • When the head motion signal S0, the first motion signal S1, and the second motion signal S2 generated in this way are input to the video transformation signal generation unit 23Ab, the video transformation signal generation unit 23Ab generates the video transformation signal S3.
  • When the video transformation signal S3 is generated, the virtual space 11, the virtual sphere 12, and the virtual hand 13 generated by the video generation unit 23Aa are transformed by the video generation unit 23Aa, and the images of the transformed virtual space 11, virtual sphere 12, and virtual hand 13 are displayed on the HMD 30.
  • Meanwhile, when the head motion signal S0, the first motion signal S1, and the second motion signal S2 are input to the feedback signal generation unit 23Ac, the feedback signal generation unit 23Ac generates the feedback signal S4.
  • In this way, the virtual space 11, the virtual sphere 12, and the virtual hand 13 generated by the video generation unit 23Aa are transformed based on the head motion signal S0 generated by the head motion detection unit 33B, the first motion signal S1 generated by the resistance sensor 41a and the other sensors of the first motion detection unit 41, and the second motion signal S2 generated by the second motion detection unit 42.
  • Therefore, since the virtual space 11, the virtual sphere 12, and the virtual hand 13 are transformed based on the motion of the user 1, interaction between the user 1 and the images of the virtual space 11, the virtual sphere 12, and the virtual hand 13 is realized, and the interest of the user 1 is enhanced.
  • In particular, the resistance sensor 41a, which detects the resistance received from the water area W, cooperates with the gyro sensor 41b and the acceleration sensor 41c to detect the motion of the hand of the user 1 and generate the first motion signal S1, so the specific forces (buoyancy, resistance, and the like) that the user 1 floating in the water area W receives from the water area W can be reflected in the virtual space 11, the virtual sphere 12, and the virtual hand 13.
  • FIG. 6 is a diagram illustrating an outline of the operation when the virtual space display system 10 is used by a plurality of users. As illustrated, one user 1 and another user 2, floating in different water areas W1 and W2, each wear an HMD 30 and an operation unit 40 that can communicate with the server 20, and the virtual space 11, the virtual sphere 12, the virtual hand 13, and the virtual hand 14 of the user 2 are displayed on each HMD 30.
  • In this state, the head motion signal S0, the first motion signal S1, and the second motion signal S2 are generated for each of the users 1 and 2 according to the motions of the heads, hands, and fingers of the users 1 and 2 floating in the water areas W1 and W2, and the video transformation signal generation unit 23Ab generates a video transformation signal S3 for each user 1 (2).
  • When the video transformation signal S3 for each user 1 (2) is generated, the virtual sphere 12 and the virtual hand 13 (14) are transformed in the virtual space 11, and the images of the transformed virtual space 11, virtual sphere 12, and virtual hand 13 (14) are displayed on the HMD 30 of each user 1 (2).
  • Meanwhile, when the head motion signal S0, the first motion signal S1, and the second motion signal S2 are generated according to the motions of the head, hands, and fingers of each user 1 (2) floating in the water areas W1 and W2, the feedback signal generation unit 23Ac generates a feedback signal S4 for each user 1 (2).
  • In this way, the virtual space 11, the virtual sphere 12, and the virtual hand 13 (14) generated by the video generation unit 23Aa are transformed based on the head motion signal S0, the first motion signal S1, and the second motion signal S2 generated for each of the plurality of users 1 (2) according to their respective motions.
  • Therefore, since the virtual space 11, the virtual sphere 12, and the virtual hand 13 (14) are transformed based on the motions of each user 1 (2), interaction between each user 1 (2) and the images of the virtual space 11, the virtual sphere 12, and the virtual hand 13 (14) is realized, as is interaction between the user 1 and the user 2, so the interest of each user 1 (2) is further enhanced.
  • the present invention is not limited to the above-mentioned embodiments, and various modifications can be made without departing from the spirit of the invention.
  • In the above embodiment, the operation unit 40 is attached to the hands of the user 1, but it may be attached to another body part, such as a leg.
  • The case in which the virtual space display system 10 is used by the user 1 or by a plurality of users 1 (2) has been described, but the number of users is not particularly limited.
  • The case in which the display of the HMD 30 is implemented by the display 33A of the smartphone 33 stored in the HMD 30 has been described, but the display may instead be built into the HMD 30.
  • The water areas W, W1, and W2 may be pools and various similar water tanks, seas, lakes, and the like, or may be pools or bathtubs at home or any other facility in which the body below the neck can be immersed; any body of water may be used.
  • The above-described HMD 30 is mainly a device related to VR (Virtual Reality) technology, but the present invention can also be applied in the fields of AR (Augmented Reality) technology and MR (Mixed Reality) technology, for example.
  • In this case, for example, the position of the user (including the displacement of a part of the body) may be tracked by position tracking technology, and processing according to that position may be performed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

[Problem] To provide a virtual space display system capable of increasing the appeal to a user. [Solution] A virtual space display system which displays, as images on a display device mounted on a head part of a user floating in a water area, a virtual space and an object arranged in the virtual space. The virtual space display system is provided with: a head part motion detection unit which detects motion of the head part of the user; a resistance detection unit that is mounted on an arbitrary body part of the user and detects resistance that the user receives from the water area at the body part; and a virtual space processing program which transforms the virtual space and the object on the basis of the motion of the head part of the user detected by the head part motion detection unit and the resistance that the user receives from the water area and detected by the resistance detection unit.

Description

Virtual space display system

 The present invention relates to a virtual space display system, and more particularly to a virtual space display system for displaying, as images, a virtual space and an object arranged in the virtual space on a display device mounted on the head of a user in a water area.

 A technology for displaying a virtual space generated by VR (Virtual Reality) technology on a head mounted display (Head Mounted Display: HMD) worn on the user's head has become widespread.

 Patent Document 1 discloses a virtual space display system that displays a virtual space on the HMD of a user who wears the HMD and floats in a water area.

 According to the virtual space display system of Patent Document 1, the motion of the head of a user suspended in the water in a prone position is detected, and an image of the virtual space is generated according to the detected head motion, so the user can enjoy the image of the virtual space while feeling a sense of floating in the water.

Japanese Patent Laid-Open No. 2018-41180

 When the user views the image of the virtual space in the water area, if not only the movement of the head but also the sensations that the user receives from the water area can be reflected in the virtual space, interaction between the image of the virtual space and the user is realized, and the user's interest is enhanced.

 The present invention has been made in view of the above circumstances, and an object of the present invention is to provide a virtual space display system capable of enhancing the interest of the user.

 To achieve the above object, a virtual space display system according to the present invention displays, as images on a display device mounted on the head of a user in a water area, a virtual space and an object arranged in the virtual space, and includes: a head motion detection unit that detects motion of the user's head; a resistance detection unit that is attached to an arbitrary body part of the user and detects the resistance the user receives from the water area at that body part; and a virtual space processing program that transforms the virtual space and the object based on the head motion detected by the head motion detection unit and the resistance detected by the resistance detection unit.

 This virtual space display system includes a gyro sensor and an acceleration sensor that are attached to an arbitrary body part of the user and detect movements of the body part in the water area.

 According to this virtual space display system, the virtual space and the objects displayed as images on the display device mounted on the user's head are transformed based on the motion of the user's head detected by the head motion detection unit and the resistance the user receives from the water area detected by the resistance detection unit.

 Since the virtual space and the object are transformed based on the user's actions, interaction between the user and the images of the virtual space and the object is realized, and the user's interest is enhanced.

 In particular, since a resistance detection unit that detects the resistance received from the water area is provided, the specific forces a user in the water area receives from the water, such as resistance and buoyancy, can be reflected in the virtual space and the object, and the user's interest is expected to improve further.

 Furthermore, the virtual space processing program of this virtual space display system transforms the virtual space and the object based on the head motion of one user detected by a head motion detection unit that detects the motion of the head of the one user and the resistance detected by a resistance detection unit that detects the resistance the one user receives from the water area, and on the head motion of another user detected by a head motion detection unit that detects the motion of the head of the other user and the resistance detected by a resistance detection unit that detects the resistance the other user receives from the water area.

 In this way, the virtual space and the object displayed as images on the display device mounted on each user's head are transformed based on the head motions of the plurality of users detected by the head motion detection units and the resistance each user receives from the water area detected by the resistance detection units.

 Therefore, since the virtual space and the object are transformed based on each user's actions, interaction between each user and the images of the virtual space and the object is realized, as is interaction between the users, so the interest of each of the plurality of users is further enhanced.

 According to the present invention, since the virtual space and the object are transformed based on the user's actions, interaction between the user and the images of the virtual space and the object is realized, and the user's interest is enhanced.

FIG. 1 is a diagram illustrating an outline of the configuration of the virtual space display system according to an embodiment of the present invention. FIG. 2 is a block diagram illustrating an outline of the configuration of the server of the virtual space display system according to the embodiment. FIG. 3 is a block diagram illustrating an outline of the configuration of the smartphone stored in the head mounted display of the virtual space display system according to the embodiment. FIG. 4 is a block diagram illustrating an outline of the configuration of the operation unit of the virtual space display system according to the embodiment. FIG. 5 is a block diagram illustrating an outline of the configuration of the virtual space processing program according to the embodiment. FIG. 6 is a diagram illustrating an outline of the operation when the virtual space display system according to the embodiment is used by a plurality of users.

 Next, a virtual space display system according to an embodiment of the present invention will be described with reference to FIGS. 1 to 6.

 FIG. 1 is a diagram illustrating an outline of the configuration of the virtual space display system according to the present embodiment. As illustrated, the virtual space display system 10 includes a server 20 that generates a virtual space 11 and the like, a head mounted display (Head Mounted Display: HMD) 30, which is a display device worn by a user 1 located in a water area W and on which the virtual space 11 generated by the server 20 is displayed, and an operation unit 40 connected to the server 20 and attached to the hands of the user 1. The user 1 in the present embodiment may be floating in the water area W, or may use the system with his or her legs resting on the bottom of the water area (or on a member such as a platform placed on the bottom).

 In the present embodiment, a virtual sphere 12, which is a spherical object expressed in three dimensions, and a virtual hand 13, which is an object representing a hand, a body part of the user 1, in three dimensions, are placed in the virtual space 11.

 In the present embodiment, the virtual hand 13 is operated by the operation unit 40 attached to the hand of the user 1.

 FIG. 2 is a block diagram illustrating an outline of the configuration of the server 20 according to the present embodiment. As illustrated, the server 20 includes a processor 21, a memory 22, a storage 23, a transmission/reception unit 24, and an input/output unit 25 as its main components, which are electrically connected to each other via a bus 26.

 The processor 21 is an arithmetic device that controls the operation of the server 20, controls the transmission and reception of data between the elements, and performs the processing necessary for executing application programs.

 In the present embodiment, the processor 21 is, for example, a CPU (Central Processing Unit), and performs each process by executing application programs stored in the storage 23 (described later) and loaded into the memory 22.

 The memory 22 includes a main storage device configured by a volatile storage device such as a DRAM (Dynamic Random Access Memory) and an auxiliary storage device configured by a non-volatile storage device such as a flash memory or an HDD (Hard Disc Drive).

 The memory 22 is used as a work area of the processor 21, and also stores the BIOS (Basic Input/Output System) executed when the server 20 is started, various setting information, and the like.

 The storage 23 stores application programs and data used for various processes; in the present embodiment, the virtual space processing program 23A is stored there. The outline of the configuration of the virtual space processing program 23A will be described later.

 The transmission/reception unit 24 connects the server 20 to a network; in the present embodiment, the HMD 30 and the operation unit 40 are connected to the transmission/reception unit 24 via the network.

 The transmission/reception unit 24 may include a short-range communication interface such as Bluetooth (registered trademark) or BLE (Bluetooth Low Energy).

 The input/output unit 25 may be connected, as necessary, to information input devices such as a keyboard and a mouse and to output devices such as a display.

 The bus 26 transmits, for example, address signals, data signals, and various control signals among the connected processor 21, memory 22, storage 23, transmission/reception unit 24, and input/output unit 25.

 As shown in FIG. 1, in the present embodiment the HMD 30 is formed as a helmet-type device worn on the head of the user 1, and includes a smartphone storage unit 31 and a respirator 32.

 In the present embodiment, the smartphone storage unit 31 is formed at a portion corresponding to the position of the eyes of the user 1, and stores a smartphone 33 that is connected to the server 20 via the network.

 In the present embodiment, the respirator 32 is formed as a tube or pipe that extends upward from, and opens at, a portion corresponding to the position of the mouth of the user 1, and air is inhaled and exhaled through it according to the breathing of the user 1.

 FIG. 3 is a block diagram illustrating an outline of the configuration of the smartphone 33 according to the present embodiment. As illustrated, the smartphone 33 includes a display 33A and a head motion detection unit 33B.

 In the present embodiment, the virtual space 11, the virtual sphere 12, and the virtual hand 13 generated by the server 20 are displayed as images on the display 33A. In the present embodiment, this display 33A serves as the display of the HMD 30.

 In the present embodiment, the head motion detection unit 33B includes a geomagnetic sensor 33Ba, a gyro sensor 33Bb, and an acceleration sensor 33Bc, and detects the motion of the head of the user 1 wearing the HMD 30 in which the smartphone 33 is stored to generate a head motion signal.

 In particular, the gyro sensor 33Bb detects, over time, the angular velocities of the HMD 30 around three axes in accordance with the movement of the head of the user 1 and the corresponding movement of the HMD 30, from which the change over time of the angle (tilt) around each axis can be determined.
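
 As an illustration of this kind of processing, the following sketch accumulates three-axis gyro samples into tilt angles over time. It is only a minimal sketch of the idea described above: the sampling interval, the signal layout, and the simple Euler integration are assumptions made for the example, not the patent's actual implementation.

```python
# Minimal sketch: estimating HMD tilt from three-axis gyro samples.
# The sampling interval and the plain Euler integration are illustrative
# assumptions, not values taken from the patent.

def integrate_gyro(angular_velocities, dt=0.01):
    """Accumulate angular velocity samples (rad/s per axis) into angles (rad).

    angular_velocities: iterable of (wx, wy, wz) tuples sampled every dt seconds.
    Returns the (roll, pitch, yaw) angle after each sample.
    """
    roll = pitch = yaw = 0.0
    angles = []
    for wx, wy, wz in angular_velocities:
        roll += wx * dt     # angle change around the x axis during dt
        pitch += wy * dt    # angle change around the y axis during dt
        yaw += wz * dt      # angle change around the z axis during dt
        angles.append((roll, pitch, yaw))
    return angles


if __name__ == "__main__":
    # A head turning slowly to the right for one second (100 samples at 0.5 rad/s).
    samples = [(0.0, 0.0, 0.5)] * 100
    print(integrate_gyro(samples)[-1])  # -> approximately (0.0, 0.0, 0.5)
```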

 When the user 1 wears the HMD 30 with the smartphone 33 stored in the smartphone storage unit 31, in the present embodiment, the user 1 sees only the display 33A of the smartphone 33.

 As a result, the user 1 loses all view of the outside world and can thus be completely immersed in the virtual space 11 displayed on the display 33A of the smartphone 33.

 In the present embodiment, the user 1, immersed in the virtual space 11, can interact with the virtual sphere 12 using the virtual hand 13, which moves in conjunction with the movement of his or her own hand.

 As shown in FIG. 1, the operation unit 40 has a glove shape that can be worn on the hands of the user 1, and in the present embodiment is attached to both hands of the user 1.

 FIG. 4 is a block diagram illustrating an outline of the configuration of the operation unit 40 according to the present embodiment. As illustrated, the operation unit 40 includes a first motion detection unit 41 that detects the motion of the hands of the user 1, a second motion detection unit 42 that detects the motion of the fingers of the user 1, a motor 43 that is driven based on the finger motions detected by the first motion detection unit 41 and the second motion detection unit 42, and a control unit 44 that controls the various operations of the operation unit 40.

 In the present embodiment, the first motion detection unit 41 includes a resistance sensor 41a, which is a resistance detection unit, a gyro sensor 41b, and an acceleration sensor 41c. Water has four characteristic elements: buoyancy, resistance, water pressure, and water temperature. Of these, the present invention identifies the movement or displacement of the user 1 by detecting resistance. If necessary, not only the resistance but also the water pressure resulting from differences in water depth may be detected. The "detection unit" may be provided as a standalone unit, or may be built into a controller (operation unit) or the like.

 In the present embodiment, the resistance sensor 41a is a sensor that detects the resistance the user 1 receives from the water area W and, in cooperation with the gyro sensor 41b and the acceleration sensor 41c, detects the movement of the hand of the user 1 to generate a first motion signal. The sensor according to the present embodiment is not limited to this; any sensor, such as a strain sensor, a torque sensor, or a semiconductor resistance sensor, may be used as long as it can obtain information on changes such as the movement, displacement, or speed of an object and can be used underwater (or has been processed so that it can be used in water).
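
 For illustration, the sketch below infers hand speed from the drag measured by a resistance sensor and packages it together with gyro and accelerometer readings into a single first-motion-signal record, in the spirit of the sensor cooperation described above. The quadratic drag model, the constants, and the field names are assumptions made for the example only.

```python
# Minimal sketch: inferring hand speed from measured drag and fusing it with
# gyro/accelerometer readings into a first motion signal. The drag model and
# all constants are illustrative assumptions, not values from the patent.

import math

def speed_from_drag(drag_force, drag_coefficient=0.5, density=1000.0, area=0.01):
    """Estimate hand speed (m/s) from drag force using F = 0.5 * rho * Cd * A * v^2."""
    if drag_force <= 0.0:
        return 0.0
    return math.sqrt(2.0 * drag_force / (density * drag_coefficient * area))

def make_first_motion_signal(drag_force, gyro, accel):
    """Combine the three hand-mounted sensor readings into one record (signal S1)."""
    return {
        "hand_speed": speed_from_drag(drag_force),  # derived from the resistance sensor
        "angular_velocity": gyro,                   # (wx, wy, wz) in rad/s
        "acceleration": accel,                      # (ax, ay, az) in m/s^2
    }

if __name__ == "__main__":
    s1 = make_first_motion_signal(2.0, (0.0, 0.1, 0.0), (0.0, -9.8, 0.3))
    print(s1)  # hand_speed is roughly 0.9 m/s for 2 N of drag with these constants
```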

 The second motion detection unit 42 is configured as a motion capture mechanism including a sensor that detects the motion of the fingers of the user 1, and detects the motion of the fingers of the user 1 to generate a second motion signal.

 FIG. 5 is a block diagram illustrating an outline of the configuration of the virtual space processing program 23A according to the present embodiment. As illustrated, the virtual space processing program 23A includes a video generation unit 23Aa, a video transformation signal generation unit 23Ab, and a feedback signal generation unit 23Ac.

 In the present embodiment, the video generation unit 23Aa generates the virtual space 11, the virtual sphere 12, and the virtual hand 13 based on preset virtual space generation data D, and places the virtual sphere 12 and the virtual hand 13 in the virtual space 11.

 Furthermore, the video generation unit 23Aa transforms the virtual space 11, the virtual sphere 12, and the virtual hand 13 based on the video transformation signal generated by the video transformation signal generation unit 23Ab described below.

 In the present embodiment, the video transformation signal generation unit 23Ab generates the video transformation signal S3 based on the head motion signal S0 generated by the head motion detection unit 33B of the smartphone 33, the first motion signal S1 generated by the first motion detection unit 41 of the operation unit 40, and the second motion signal S2 generated by the second motion detection unit 42.

 In the present embodiment, the feedback signal generation unit 23Ac generates the feedback signal S4 based on the head motion signal S0 generated by the head motion detection unit 33B of the smartphone 33, the first motion signal S1 generated by the first motion detection unit 41 of the operation unit 40, and the second motion signal S2 generated by the second motion detection unit 42.
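
 As a rough illustration of how these two generation units might be wired together, the sketch below derives both a transformation signal and a feedback signal from the three motion signals. The contents of the signals and the simple contact test against the virtual sphere are invented for the example and are not taken from the patent.

```python
# Minimal sketch of the processing flow: head motion signal S0, first motion
# signal S1 and second motion signal S2 are turned into a video transformation
# signal S3 and a feedback signal S4. The data shapes and the contact test are
# illustrative assumptions.

def generate_transformation_signal(s0, s1, s2):
    """S3: how the camera, the virtual hand and the fingers should move this frame."""
    return {
        "camera_rotation": s0["head_angles"],      # the view follows the head
        "hand_translation": s1["hand_velocity"],   # the virtual hand follows the hand
        "finger_pose": s2["finger_angles"],        # the fingers follow the glove
    }

def generate_feedback_signal(s1, hand_position, sphere_position, sphere_radius=0.1):
    """S4: motor torque command, non-zero while the virtual hand touches the sphere."""
    distance = sum((h - s) ** 2 for h, s in zip(hand_position, sphere_position)) ** 0.5
    touching = distance <= sphere_radius
    # Resist the motion in proportion to how fast the hand is moving.
    torque = 0.5 * abs(s1["hand_velocity"][0]) if touching else 0.0
    return {"motor_torque": torque, "touching": touching}

if __name__ == "__main__":
    s0 = {"head_angles": (0.0, 0.1, 0.0)}
    s1 = {"hand_velocity": (0.4, 0.0, 0.0)}
    s2 = {"finger_angles": [0.2] * 5}
    print(generate_transformation_signal(s0, s1, s2))
    print(generate_feedback_signal(s1, hand_position=(0.05, 0.0, 0.0),
                                   sphere_position=(0.0, 0.0, 0.0)))
```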

 Next, an outline of the operation of the virtual space display system 10 will be described. In the following description, the state in which the user 1 floats in the water area W is mainly described, but the present embodiment is not limited to this, and a part of the body may be in contact with the bottom of the water area (or with a structure such as a platform in the water area).

 With the user 1 wearing the HMD 30 and the operation unit 40 floating in the water area W, the video generation unit 23Aa of the virtual space processing program 23A generates the virtual space 11, the virtual sphere 12, and the virtual hand 13 based on the preset virtual space generation data D, places the virtual sphere 12 and the virtual hand 13 in the virtual space 11, and displays them as images on the HMD 30 (the display 33A of the smartphone 33).

 In this state, when the user 1 floating in the water area W moves his or her head, the head motion detection unit 33B detects the motion of the head of the user 1 and generates the head motion signal S0 according to that motion.

 When the user 1 floating in the water area W moves a hand, the resistance sensor 41a of the first motion detection unit 41 detects the resistance that the hand of the user 1 receives from the water area W and, in cooperation with the gyro sensor 41b and the acceleration sensor 41c, detects the motion of the hand of the user 1 to generate the first motion signal S1.

 Similarly, when the user 1 floating in the water area W moves a finger, the second motion detection unit 42 detects the motion of the finger of the user 1 and generates the second motion signal S2.

 When the head motion signal S0, the first motion signal S1, and the second motion signal S2 generated in this way are input to the video transformation signal generation unit 23Ab, the video transformation signal generation unit 23Ab generates the video transformation signal S3.

 When the video transformation signal S3 is generated, the virtual space 11, the virtual sphere 12, and the virtual hand 13 generated by the video generation unit 23Aa are transformed by the video generation unit 23Aa, and the images of the transformed virtual space 11, virtual sphere 12, and virtual hand 13 are displayed on the HMD 30.

 Meanwhile, when the head motion signal S0, the first motion signal S1, and the second motion signal S2 are input to the feedback signal generation unit 23Ac, the feedback signal generation unit 23Ac generates the feedback signal S4.

 When the feedback signal S4 is generated, the motor 43 of the operation unit 40 is driven, and the torque of the motor 43 limits the amount of movement of the hands and fingers of the user 1, so that, for example, when the user 1 touches the virtual sphere 12 with the virtual hand 13, a pseudo tactile sensation of touching the virtual sphere 12 is fed back to the user 1 via the operation unit 40.
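
 A glove-side counterpart might look like the sketch below: the control unit receives the feedback signal S4 and scales down the finger movement while the virtual hand is in contact with the virtual sphere. The clamping rule and the numbers are assumptions for the illustration only.

```python
# Minimal sketch of the glove-side control: the commanded motor torque from the
# feedback signal S4 limits how far the finger can move this frame. The linear
# clamping rule and the constants are illustrative assumptions.

def limit_finger_motion(requested_delta, motor_torque, max_torque=1.0):
    """Scale down the requested finger movement as the commanded torque rises.

    requested_delta: how far the user is trying to close the finger this frame.
    motor_torque: torque commanded by the feedback signal S4 (0 .. max_torque).
    """
    resistance = min(max(motor_torque / max_torque, 0.0), 1.0)
    return requested_delta * (1.0 - resistance)

if __name__ == "__main__":
    # Free movement: no torque, the finger moves as requested.
    print(limit_finger_motion(0.05, motor_torque=0.0))   # -> 0.05
    # Touching the virtual sphere: high torque, the movement is mostly blocked.
    print(limit_finger_motion(0.05, motor_torque=0.8))   # -> 0.01 (approximately)
```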

 In this way, the virtual space 11, the virtual sphere 12, and the virtual hand 13 generated by the video generation unit 23Aa are transformed based on the head motion signal S0 generated by the head motion detection unit 33B, the first motion signal S1 generated by the resistance sensor 41a and the other sensors of the first motion detection unit 41, and the second motion signal S2 generated by the second motion detection unit 42.

 Therefore, since the virtual space 11, the virtual sphere 12, and the virtual hand 13 are transformed based on the motion of the user 1, interaction between the user 1 and the images of the virtual space 11, the virtual sphere 12, and the virtual hand 13 is realized, and the interest of the user 1 is enhanced.

 In particular, in the present embodiment, the resistance sensor 41a, which detects the resistance received from the water area W, cooperates with the gyro sensor 41b and the acceleration sensor 41c to detect the motion of the hand of the user 1 and generate the first motion signal S1, so the specific forces (buoyancy, resistance, and the like) that the user 1 floating in the water area W receives from the water area W can be reflected in the virtual space 11, the virtual sphere 12, and the virtual hand 13.

 Therefore, the user's interest is expected to improve further.

 FIG. 6 is a diagram illustrating an outline of the operation when the virtual space display system 10 is used by a plurality of users. As illustrated, one user 1 and another user 2, floating in different water areas W1 and W2, each wear an HMD 30 and an operation unit 40 that can communicate with the server 20, and the virtual space 11, the virtual sphere 12, the virtual hand 13, and the virtual hand 14 of the user 2 are displayed on each HMD 30.

 In this state, the head motion signal S0, the first motion signal S1, and the second motion signal S2 are generated for each of the users 1 and 2 according to the motions of the heads, hands, and fingers of the users 1 and 2 floating in the water areas W1 and W2, and the video transformation signal generation unit 23Ab generates a video transformation signal S3 for each user 1 (2).
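
 A sketch of how the server might run the same processing for several users is shown below: each user's S0, S1 and S2 produce that user's own S3, so every HMD can show the shared scene from its wearer's point of view. The per-user loop and the data shapes are assumptions made for the illustration.

```python
# Minimal sketch: generating a per-user video transformation signal S3 from the
# per-user motion signals S0, S1 and S2 in a shared session. The signal
# contents are illustrative assumptions.

def transform_for_user(signals):
    """Derive one user's S3 from that user's S0, S1 and S2."""
    return {
        "camera_rotation": signals["S0"],   # head motion drives that user's view
        "hand_translation": signals["S1"],  # hand motion drives that user's virtual hand
        "finger_pose": signals["S2"],       # finger motion drives the finger pose
    }

def process_session(per_user_signals):
    """Produce an S3 for every user sharing the virtual space."""
    return {user: transform_for_user(signals)
            for user, signals in per_user_signals.items()}

if __name__ == "__main__":
    session = {
        "user1": {"S0": (0.0, 0.2, 0.0), "S1": (0.1, 0.0, 0.0), "S2": [0.3] * 5},
        "user2": {"S0": (0.1, 0.0, 0.0), "S1": (0.0, 0.1, 0.0), "S2": [0.1] * 5},
    }
    for user, s3 in process_session(session).items():
        print(user, s3)
```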

 When the video transformation signal S3 for each user 1 (2) is generated, the virtual sphere 12 and the virtual hand 13 (14) are transformed in the virtual space 11, and the images of the transformed virtual space 11, virtual sphere 12, and virtual hand 13 (14) are displayed on the HMD 30 of each user 1 (2).

 Meanwhile, when the head motion signal S0, the first motion signal S1, and the second motion signal S2 are generated according to the motions of the head, hands, and fingers of each user 1 (2) floating in the water areas W1 and W2, the feedback signal generation unit 23Ac generates a feedback signal S4 for each user 1 (2).

 When the feedback signal S4 for each user 1 (2) is generated, the motor 43 of the operation unit 40 is driven, and the torque of the motor 43 limits the amount of movement of the hands and fingers of the user 1 (2), so that a pseudo tactile sensation of the virtual hand 13 (14) in the virtual space 11 is fed back to the user 1 (2) via the operation unit 40.

 In this way, the virtual space 11, the virtual sphere 12, and the virtual hand 13 (14) generated by the video generation unit 23Aa are transformed based on the head motion signal S0, the first motion signal S1, and the second motion signal S2 generated for each of the plurality of users 1 (2) according to their respective motions.

 Therefore, since the virtual space 11, the virtual sphere 12, and the virtual hand 13 (14) are transformed based on the motions of each user 1 (2), interaction between each user 1 (2) and the images of the virtual space 11, the virtual sphere 12, and the virtual hand 13 (14) is realized, as is interaction between the user 1 and the user 2, so the interest of each user 1 (2) is further enhanced.

 The present invention is not limited to the above embodiment, and various modifications can be made without departing from the spirit of the invention. In the above embodiment, the case in which the operation unit 40 is attached to the hands of the user 1 (2) has been described, but it may be attached to another body part, such as a leg.

 In the above embodiment, the case in which the virtual space display system 10 is used by the user 1 or by a plurality of users 1 (2) has been described, but the number of users is not particularly limited.

 In the above embodiment, the case in which the display of the HMD 30 is implemented by the display 33A of the smartphone 33 stored in the HMD 30 has been described, but the display may instead be built into the HMD 30.

 In the above embodiment, the case in which the water areas W1 and W2 are different water areas has been described, but they may be the same water area.

 Furthermore, the water areas W, W1, and W2 may be pools and various similar water tanks, seas, lakes, and the like, or may be pools or bathtubs at home or any other facility in which the body below the neck can be immersed; any body of water may be used. The above-described HMD 30 is mainly a device related to VR (Virtual Reality) technology, but the present invention can also be applied in the fields of AR (Augmented Reality) technology and MR (Mixed Reality) technology, for example. In this case, for example, the position of the user (including the displacement of a part of the body) may be tracked by position tracking technology, and processing according to that position may be performed.

1, 2  User
10  Virtual space display system
11  Virtual space
12  Virtual sphere (object)
13, 14  Virtual hand (object)
20  Server
23A  Virtual space processing program
23Aa  Video generation unit
23Ab  Video transformation signal generation unit
23Ac  Feedback signal generation unit
30  HMD (display device)
33  Smartphone
33A  Display
33B  Head motion detection unit
40  Operation unit
41  First motion detection unit
41a  Resistance sensor (resistance detection unit)
41b  Gyro sensor
41c  Acceleration sensor
42  Second motion detection unit
S0  Head motion signal
S1  First motion signal
S2  Second motion signal
S3  Video transformation signal
S4  Feedback signal
W, W1, W2  Water area

Claims (3)

1. A virtual space display system for displaying, as an image, a virtual space and an object arranged in the virtual space on a display device mounted on the head of a user who is in a water area, the system comprising:
 a head motion detection unit that detects a motion of the head of the user;
 a resistance detection unit that is attached to an arbitrary body part of the user and detects resistance that the user receives from the water area at that body part; and
 a virtual space processing program that transforms the virtual space and the object based on the motion of the head of the user detected by the head motion detection unit and the resistance that the user receives from the water area detected by the resistance detection unit.

2. The virtual space display system according to claim 1, further comprising a sensor that is attached to an arbitrary body part of the user and detects a motion of that body part in the water area.

3. The virtual space display system according to claim 1 or 2, wherein the virtual space processing program transforms the virtual space and the object based on:
 the motion of the head of one user detected by a head motion detection unit that detects the motion of the head of the one user, and the resistance detected by a resistance detection unit that detects the resistance the one user receives from the water area; and
 the motion of the head of another user detected by a head motion detection unit that detects the motion of the head of the other user, and the resistance detected by a resistance detection unit that detects the resistance the other user receives from the water area.
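 To make the data flow recited in claim 1 concrete, the following schematic sketch reads a head motion detection unit and a resistance detection unit, transforms a minimal scene state from both inputs, and returns what would be shown on the head-mounted display. The stub classes, the random sensor readings, and the damping rule are assumptions for illustration and are not part of the claims.

    from __future__ import annotations
    import random


    class HeadMotionDetector:
        """Stub for the head motion detection unit (reference numeral 33B)."""

        def read(self) -> tuple[float, float, float]:
            return (random.uniform(-1, 1), random.uniform(-1, 1), 0.0)  # yaw, pitch, roll


    class ResistanceDetector:
        """Stub for the resistance detection unit worn on an arbitrary body part."""

        def read(self) -> float:
            return random.uniform(0.0, 1.0)  # resistance received from the water area


    def frame(head: HeadMotionDetector, water: ResistanceDetector, scene: dict) -> dict:
        """One pass of the claimed flow: detect head motion and water resistance,
        transform the scene, and return the state to be shown on the display device."""
        head_pose = head.read()
        resistance = water.read()
        scene["view"] = head_pose                                    # the view follows the head
        scene["object_speed"] = scene.get("object_speed", 1.0) / (1.0 + resistance)
        return scene


    if __name__ == "__main__":
        print(frame(HeadMotionDetector(), ResistanceDetector(), {}))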
PCT/JP2018/044631 2018-12-04 2018-12-04 Virtual space display system Ceased WO2020115824A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/044631 WO2020115824A1 (en) 2018-12-04 2018-12-04 Virtual space display system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/044631 WO2020115824A1 (en) 2018-12-04 2018-12-04 Virtual space display system

Publications (1)

Publication Number Publication Date
WO2020115824A1 (en)

Family

ID=70973853

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/044631 Ceased WO2020115824A1 (en) 2018-12-04 2018-12-04 Virtual space display system

Country Status (1)

Country Link
WO (1) WO2020115824A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022190178A (en) * 2021-06-13 2022-12-23 株式会社Rockin′Pool Exercise system in pool

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08266685A (en) * 1995-03-31 1996-10-15 Sony Corp Diver support information centralized display device
JP2010541306A (en) * 2007-08-27 2010-12-24 シャオ,チュエン Method and apparatus for simulating somatosensory experience in space
JP2018041180A (en) * 2016-09-06 2018-03-15 株式会社マクスマラシステムズ Virtual space display system

Similar Documents

Publication Publication Date Title
US20190018567A1 (en) Input device for vr/ar applications
Frati et al. Using Kinect for hand tracking and rendering in wearable haptics
CN103765352B (en) Spherical three-dimensional controller
JP2022535315A (en) Artificial reality system with self-tactile virtual keyboard
JP6250592B2 (en) Head mounted display, information processing apparatus, display control method, and program
KR101827912B1 (en) Walkable virtual reality apparatus
JP2016115122A (en) Head-mounted display system, method of displaying on head-mounted display, and program
WO2007053116A1 (en) Virtual interface system
KR20220016984A (en) Artificial reality system with digit-mapped self-haptic input method
WO2018093900A1 (en) Input controller stabilization techniques for virtual reality systems
JP2022184958A (en) animation production system
US20250238083A1 (en) Device for intuitive dexterous touch and feel interaction in virtual worlds
Young et al. An arm-mounted inertial controller for 6DOF input: Design and evaluation
JP6518931B1 (en) Virtual space display system
WO2020115824A1 (en) Virtual space display system
JP6964302B2 (en) Animation production method
Chow 3D spatial interaction with the Wii remote for head-mounted display virtual reality
JP2020077069A (en) Feedback generating device
JP2016115329A (en) Head-mounted display system, method of displaying on head-mounted display, and program
JP2024066390A (en) VR device for adult content that provides tactile sensations
JP7584250B2 (en) Animation Production System
CN117251058B (en) A control method for a multi-information somatosensory interactive system
CN117572965B (en) A multi-information somatosensory interactive glove system for virtual reality system
JP7115695B2 (en) animation production system
JP7218873B6 (en) Animation production system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18942386

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 30.08.2021)

NENP Non-entry into the national phase

Ref country code: JP

122 Ep: pct application non-entry in european phase

Ref document number: 18942386

Country of ref document: EP

Kind code of ref document: A1