WO2023001019A1 - Mixed reality apparatus and device, information processing method, and storage medium - Google Patents
- Publication number
- WO2023001019A1 (PCT/CN2022/105084; priority CN2022105084W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- user
- diving
- display
- mode
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
Definitions
- Embodiments of the present disclosure relate to, but are not limited to, the technical field of information processing, and in particular to a mixed reality apparatus and device, an information processing method, and a storage medium.
- An embodiment of the present disclosure provides a mixed reality device, including a processing module, a display module, and a diving information collection module, wherein:
- the diving information collection module is configured to collect diving information and send it to the processing module, the diving information including one or more of the user's head information, the user's eye information, and the environmental information of the underwater environment where the user is located;
- the processing module is configured to implement, based on the diving information, any function in an augmented reality (AR) diving mode and a virtual reality (VR) diving mode, wherein the AR diving mode includes one or more of:
- a first function mode for monitoring whether an abnormal state occurs in the underwater environment where the user is located,
- a second function mode for monitoring whether the user is in an abnormal state,
- a third function mode for displaying the introduction information of a gaze object, and
- a fourth function mode for initiating diving interaction;
- the VR diving mode includes one or more of a fifth function mode for responding to diving interactions and a sixth function mode for displaying diving tracks; and
- the display module is configured to display either an augmented reality picture or a virtual reality picture.
- An embodiment of the present disclosure provides an information processing method applied to the mixed reality device described in the above embodiment. The method includes: acquiring diving information through the diving information collection module, the diving information including one or more of the user's head information, the user's eye information, and the environmental information of the underwater environment where the user is located; and implementing, based on the diving information, any one of the augmented reality (AR) diving mode and the virtual reality (VR) diving mode.
- The AR diving mode includes one or more of: a first function mode for monitoring whether an abnormal state occurs in the underwater environment where the user is located, a second function mode for monitoring whether the user is in an abnormal state, a third function mode for displaying the introduction information of a gaze object, and a fourth function mode for initiating diving interaction.
- The VR diving mode includes one or more of a fifth function mode for responding to diving interaction and a sixth function mode for displaying diving tracks. The display module is controlled to display either an augmented reality picture or a virtual reality picture.
- An embodiment of the present disclosure provides a mixed reality device, including a processor and a memory storing a computer program runnable on the processor, wherein the steps of the information processing method described in the above embodiments are implemented when the processor executes the program.
- An embodiment of the present disclosure provides a computer-readable storage medium, including a stored program, wherein, when the program runs, the device where the storage medium is located is controlled to execute the steps of the information processing method described in the above embodiments.
- FIG. 1 is a schematic structural diagram of a mixed reality diving system in an exemplary embodiment of the present disclosure
- FIG. 2 is a schematic structural diagram of a mixed reality device in an exemplary embodiment of the present disclosure
- FIG. 3 is a schematic flowchart of an information processing method in an exemplary embodiment of the present disclosure
- FIG. 4 is a schematic structural diagram of a mixed reality device in an exemplary embodiment of the present disclosure.
- Ordinal numerals such as “first”, “second”, or “third” are used to distinguish constituent elements and do not imply a limitation on quantity.
- In the exemplary embodiments of the present disclosure, the terms “installed”, “connected”, and “coupled” should be interpreted in a broad sense unless otherwise clearly specified and limited. For example, a connection may be fixed, detachable, or integral; mechanical or electrical; direct, indirect through an intermediate piece, or an internal communication between two components. Those of ordinary skill in the art can understand the meanings of these terms in the present disclosure according to the actual situation.
- The term “module” as used herein may refer to any known or later-developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code capable of executing the function associated with that element.
- The terms “interface” and “user interface” as used herein may refer to a medium interface for interaction and information exchange between an application program or an operating system and a user, which converts between the internal form of information and a form acceptable to the user.
- A commonly used form of user interface is the graphical user interface (Graphic User Interface, GUI), which refers to a user interface, related to computer operations, that is displayed in a graphical way.
- the user interface may include visual interface elements such as icons, windows, buttons, and dialog boxes.
- The following abbreviations are used: MR (mixed reality), AR (augmented reality), VR (virtual reality).
- MR technology is a development of virtual reality technology.
- MR technology can introduce real-scene information into a virtual environment and set up an interactive feedback loop among the virtual world, the real world, and the user, which enhances the realism of the user experience; it is characterized by authenticity, real-time interaction, and conception.
- An embodiment of the present disclosure provides a mixed reality diving system.
- the mixed reality diving system can be used in diving activities such as sightseeing, survey, salvage, repair and underwater engineering.
- FIG. 1 is a schematic structural diagram of a mixed reality diving system in an exemplary embodiment of the present disclosure.
- The mixed reality diving system may include a terminal 11 and N mixed reality devices, wherein each mixed reality device can be communicatively connected to the terminal 11.
- N is a positive integer greater than or equal to 1.
- the N mixed reality devices may include: mixed reality device 121 , mixed reality device 122 , . . . , and mixed reality device 12N.
- the terminal may be an electronic device such as a server, a smart phone, a tablet computer, a notebook computer, or a desktop computer.
- The server is configured to process one or more types of information transmitted by the processing module of the mixed reality device, and to feed back the content to be displayed to the processing module of the mixed reality device.
- The server is configured to process the environmental information so as to determine, based on it, whether the underwater environment where the user is located is in an abnormal state, and to generate and issue warning information when an abnormal state occurs; or, when a diver is in an abnormal state, to send the user's location information to rescuers so that divers in an abnormal state can be rescued.
- The information processed by the server differs according to the functions of the diving modes provided by the mixed reality device.
- the embodiments of the present disclosure do not limit this.
- the terminal may have multiple graphics card ports, each mixed reality device may be communicatively connected to the terminal through one graphics card port, and each graphics card port has a port identifier.
- The graphics card port may be a high-definition multimedia interface (High Definition Multimedia Interface, HDMI) or a digital display interface (DisplayPort, DP).
- the embodiments of the present disclosure do not limit this.
- The mixed reality diving system may further include a plurality of wireless signal transmitters arranged in one-to-one correspondence with at least two graphics card ports of the terminal; the plurality of mixed reality devices are wirelessly connected to the plurality of wireless signal transmitters in one-to-one correspondence, so that multiple mixed reality devices are in wireless communication with the terminal.
- the port identifier corresponding to each mixed reality device may be the port identifier of the graphics card port connected to the mixed reality device, or the port identifier of the graphics card port where the wireless signal transmitter connected to the mixed reality device is located.
- the mixed reality device may be a wearable display device.
- the wearable display device may include a head-mounted display device or an ear-mounted display device.
- the wearable display device may be MR diving glasses or an MR diving helmet.
- the embodiments of the present disclosure do not limit this.
- An embodiment of the present disclosure provides a mixed reality device.
- the mixed reality device can be used in diving activities such as sightseeing, survey, salvage, repair and underwater engineering.
- FIG. 2 is a schematic structural diagram of a mixed reality device in an exemplary embodiment of the present disclosure.
- The mixed reality device 12 may include a processing module 21, a display module 22, and a diving information collection module 23, wherein the processing module 21 is connected with the display module 22 and the diving information collection module 23 respectively;
- the diving information collection module 23 is configured to collect diving information and send it to the processing module 21, wherein the diving information may include one or more of the user's head information, the user's eye information, and the environmental information of the underwater environment where the user is located;
- the processing module 21 is configured to implement, based on the diving information, any function in the augmented reality (AR) diving mode and the virtual reality (VR) diving mode, wherein the AR diving mode may include one or more of: a first function mode for monitoring whether an abnormal state occurs in the underwater environment where the user is located, a second function mode for monitoring whether the user is in an abnormal state, a third function mode for displaying the introduction information of a gaze object, and a fourth function mode for initiating diving interaction; the VR diving mode may include one or more of a fifth function mode for responding to diving interactions and a sixth function mode for displaying diving tracks;
- the display module 22 is configured to display any one of the augmented reality picture corresponding to the AR diving mode and the virtual reality picture corresponding to the VR diving mode.
- the user may refer to a diver who wears the mixed reality device and conducts diving activities in an underwater environment.
- When the user wears the mixed reality device to carry out diving activities in an underwater environment, the device collects diving information through the diving information collection module and, based on the collected diving information, can realize through the processing module any of the functions of the AR diving mode and the VR diving mode. This realizes an intelligent diving device with rich functions, which facilitates the development and progress of diving activities.
- An abnormal state occurring in the underwater environment where the user is located may include: dangerous objects or dangerous environments in the vicinity of the user (for example, within a preset distance centered on the user's position) that may threaten the user's safety.
- dangerous objects may include: dangerous animals and plants or obstacles.
- a hazardous environment may include water velocity exceeding a preset threshold, etc.
- the exemplary embodiments of the present disclosure do not limit this.
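The abnormal-environment determination sketched above can be illustrated with a simple rule check. The following is a minimal sketch, assuming an illustrative environment-record shape and threshold values that are not taken from the disclosure:

```python
# Hedged sketch: checking an underwater environment for an abnormal state.
# The record shape of `environment` and both thresholds are assumptions.

PRESET_DISTANCE = 10.0      # metres around the user's position (assumed)
MAX_WATER_VELOCITY = 2.0    # preset water-velocity threshold in m/s (assumed)

def environment_abnormal(environment):
    """Return a list of warning strings; an empty list means no abnormal state."""
    warnings = []
    for obj in environment.get("objects", []):
        # Dangerous animals/plants or obstacles within the preset distance.
        if obj["dangerous"] and obj["distance"] <= PRESET_DISTANCE:
            warnings.append(f"dangerous object nearby: {obj['name']}")
    # Dangerous environment: water velocity exceeding the preset threshold.
    if environment.get("water_velocity", 0.0) > MAX_WATER_VELOCITY:
        warnings.append("water velocity exceeds preset threshold")
    return warnings
```

In this sketch the server would run such a check on the received environmental information and attach the resulting warnings to the augmented reality picture.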
- The user being in an abnormal state may include the user experiencing physical discomfort, for example, being in a state of fatigue or fainting.
- the exemplary embodiments of the present disclosure do not limit this.
- the gaze object may include: one or more of at least one underwater object in the underwater environment and the underwater environment itself.
- the introduction information of the gaze object may include: one or more of text, image, video and other information.
- Diving interaction may refer to a diver sharing what he or she sees (for example, an underwater environment, or an object in an underwater environment) with one or more other divers, or to a diver inviting one or more other divers to conduct diving activities together in the same water area, etc.
- the exemplary embodiments of the present disclosure do not limit this.
- the diving information collection module may include: a sensor for collecting diving information.
- the diving information collection module can collect diving information in real time at preset time intervals.
- the preset time interval may be 1s (second), 2s or 3s, etc.
- the embodiments of the present disclosure do not limit this.
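The periodic collection described above could look like the following minimal sketch; the `sensor_read` callable and the injectable `sleep` stand in for the real sensor hardware and timing, and are illustrative assumptions rather than the device's actual driver interface:

```python
import time

def collect_diving_info(sensor_read, interval_s=1.0, samples=3, sleep=time.sleep):
    """Poll the diving-information sensors every `interval_s` seconds.

    `sensor_read` is a caller-supplied callable standing in for the sensor
    hardware. The 1 s/2 s/3 s interval values follow the text above;
    everything else here is an assumption.
    """
    readings = []
    for _ in range(samples):
        readings.append(sensor_read())  # one sample of diving information
        sleep(interval_s)               # wait out the preset interval
    return readings
```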
- The diving information collection module 23 may include a head information collection module 231, an eye information collection module 232, and an environment information collection module 233, wherein the head information collection module 231 is configured to collect the user's head information and send it to the processing module 21;
- the eye information collection module 232 is configured to collect the user's eye information and send it to the processing module 21;
- the environment information collection module 233 is configured to collect the environmental information of the underwater environment where the user is located and send it to the processing module 21.
- the user's head information may include user's head posture information.
- the head information collection module may include, but is not limited to, an attitude sensor.
- the attitude sensor is a high-performance three-dimensional motion attitude measuring device based on Micro-Electro-Mechanical System (MEMS) technology, which usually includes a three-axis gyroscope, a three-axis accelerometer, and a three-axis electronic compass, etc.
- The attitude sensor can use these motion sensors to collect the user's head posture information.
- the head information collection module can also be implemented by other sensors, which is not limited in this embodiment of the present disclosure.
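As one illustration of how such motion sensors might be fused into head posture information, the sketch below applies a standard complementary filter to gyroscope and accelerometer readings. This is a common general technique, not the method specified in the disclosure, and the filter coefficient is an assumption:

```python
import math

def update_pitch(pitch, gyro_rate, accel_z, accel_y, dt, alpha=0.98):
    """One complementary-filter step for head pitch.

    Short-term changes come from integrating the gyroscope rate;
    long-term drift is corrected using the gravity direction measured
    by the accelerometer.
    """
    gyro_pitch = pitch + gyro_rate * dt          # gyro integration
    accel_pitch = math.atan2(accel_y, accel_z)   # gravity-based estimate
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```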
- the user's eye information may include: user's eye image information.
- the eye information collection module may include, but is not limited to, a camera using an image sensor.
- the camera may be a camera using a complementary metal oxide semiconductor (Complementary Metal Oxide Semiconductor, CMOS) image sensor.
- the eye information collection module can also be implemented by other sensors, which is not limited in this embodiment of the present disclosure.
- the environmental information of the underwater environment where the user is located may include: one or more of environmental image information, environmental depth information, and environmental location information of the underwater environment where the user is located.
- the embodiments of the present disclosure do not limit this.
- the environmental information collection module may include, but is not limited to, a camera using an image sensor.
- The environmental information collection module may be a wide-angle camera (Wide Angle Camera), a fisheye camera (Fisheye Camera), or a depth camera (Depth Camera), etc.
- the environmental information collection module may be a CMOS camera or the like.
- the environmental information collection module may include a first camera for collecting environmental image information of the underwater environment where the user is located and a second camera for collecting environmental depth information of the underwater environment where the user is located.
- By scanning the underwater environment where the user is located, the environmental information collection module can collect the environmental image information and environmental depth information of that environment and can perform simultaneous localization and mapping (SLAM, Simultaneous Localization And Mapping).
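As a toy illustration of the mapping half of SLAM, the sketch below marks the grid cell hit by one depth ray as occupied. Real SLAM also estimates the pose itself and handles loop closure; the 2-D grid simplification, cell size, and ray model here are all assumptions:

```python
import math

def update_map(occupied, pose, depth, bearing, cell=0.5):
    """Mark the grid cell hit by one depth ray as occupied.

    pose    -- (x, y, heading) of the diver in map coordinates
    depth   -- measured range along the ray, in metres
    bearing -- ray angle relative to the heading, in radians
    """
    x, y, heading = pose
    hx = x + depth * math.cos(heading + bearing)  # world x of the hit point
    hy = y + depth * math.sin(heading + bearing)  # world y of the hit point
    occupied.add((round(hx / cell), round(hy / cell)))
    return occupied
```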
- the environmental information collection module may also include other sensors, for example, a positioning sensor for collecting environmental position information of the underwater environment where the user is located, which is not limited in this embodiment of the present disclosure.
- Taking as an example that the environmental information collection module includes a first camera for collecting the environmental image information of the underwater environment where the user is located, the eye information collection module includes a third camera for collecting the user's eye image information, and the mixed reality device is a head-mounted display device: the third camera can be set on the inside of the head-mounted display device body, directed towards the user's eyes, and the first camera can be set on the outside of the head-mounted display device body, directed towards the underwater environment where the user is located.
- The processing module may include, but is not limited to, a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, an application-specific integrated circuit, etc.
- the general-purpose processor may be a microprocessor (Micro Processor Unit, MPU) or the processor may be any conventional processor or the like.
- the embodiments of the present disclosure do not limit this.
- The display module may include at least one display component or a device including a display component, for example, a mixed reality display, a head-mounted display, and the like.
- the display module may be an organic light emitting diode (Organic Light Emitting Diode, OLED) display, a quantum dot light emitting diode (Quantum-dot Light Emitting Diodes, QLED) display, and the like.
- Taking as an example a mixed reality diving system including a server and multiple mixed reality devices, each mixed reality device including a head information collection module, an eye information collection module, and an environment information collection module, the different working modes of the mixed reality device provided in the exemplary embodiments are described in detail below.
- the working modes of the mixed reality device may include: an AR diving mode and a VR diving mode.
- the following describes how to switch the working mode of the mixed reality device between the AR diving mode and the VR diving mode.
- The processing module is configured to: acquire first head information; when the first head information meets a first preset condition, control the display module to display a first confirmation interface (for example, an interface for confirming whether to switch the working mode) and acquire first eye information; determine, based on the first eye information, the user's gaze area on the first confirmation interface; when it is determined that the gaze area is a preset first display area, switch the working mode from one of the AR diving mode and the VR diving mode to the other; or, when it is determined that the gaze area is a preset second display area, keep the working mode unchanged.
- The display of the first confirmation interface is triggered by the collected head information, and whether to switch the working mode is confirmed through the collected eye information, so that the user can conveniently operate the mixed reality device. This facilitates the use and operation of the diving device and improves its convenience.
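The switching flow above can be sketched as a small decision function; the region names (`"confirm"`, `"cancel"`) and the shape of the helper callables below are illustrative assumptions:

```python
AR, VR = "AR_diving", "VR_diving"

def handle_head_info(mode, head_meets_condition, gaze_region_of):
    """Return the (possibly switched) working mode.

    head_meets_condition -- bool: first head information meets the
                            first preset condition
    gaze_region_of       -- callable returning "confirm" or "cancel",
                            standing in for the preset first and second
                            display areas of the confirmation interface
    """
    if not head_meets_condition:
        return mode                       # no confirmation interface shown
    region = gaze_region_of()             # collect first eye information
    if region == "confirm":               # preset first display area
        return VR if mode == AR else AR   # toggle AR <-> VR
    return mode                           # preset second area: keep mode
```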
- The first preset condition may be that the user's head is in a head-down state; that the user's head is in a head-up state; that the user shakes his or her head one or more times within a preset time (for example, the shaking amplitude of the user's head in a first direction is greater than a preset first threshold); that the user nods one or more times within the preset time (for example, the shaking amplitude of the user's head in a second direction is greater than a preset second threshold); or that the user's head makes a circular movement within the preset time (for example, the shaking amplitude in both the first and second directions is greater than a preset third threshold).
- Taking as an example that the first preset condition is that the user performs a head-shaking action within the preset time: if all the first head information collected within the preset time indicates that the shaking of the user's head in the first direction is greater than the preset threshold, it can be determined that the first head information meets the first preset condition; the first confirmation interface can then pop up, so that the user can choose whether to switch the working mode through eye movements.
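The gesture conditions listed above might be checked along the following lines; reducing a motion window to two peak amplitudes and the ordering of the checks are illustrative assumptions:

```python
def classify_head_gesture(first_dir, second_dir, t1, t2, t3):
    """Classify one window of head motion into the gestures listed above.

    first_dir/second_dir -- peak shaking amplitudes in the first and
                            second directions within the preset time
    t1, t2, t3           -- the preset first/second/third thresholds
    """
    if first_dir > t3 and second_dir > t3:
        return "circle"   # motion in both directions: circular movement
    if first_dir > t1:
        return "shake"    # head-shaking in the first direction
    if second_dir > t2:
        return "nod"      # nodding in the second direction
    return "none"
```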
- the processing module is configured to initialize the mixed reality device, and set the working mode of the mixed reality device to AR diving mode.
- the embodiments of the present disclosure do not limit this.
- the first eye information may be eye image information of the user.
- The processing module is configured to determine the user's eyeball orientation according to the user's eye image information, determine the user's line-of-sight direction according to the eyeball orientation, and determine the area of the first confirmation interface located in the line-of-sight direction as the user's gaze area on the first confirmation interface.
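The gaze-area determination just described might be sketched as follows, with the simplifying assumption that the interface regions can be represented as horizontal angle intervals around the line-of-sight direction:

```python
def gaze_area(eye_angle, regions):
    """Map a line-of-sight angle to the interface region it falls in.

    regions -- {name: (min_angle, max_angle)} on the confirmation
               interface; returns None if no region is being gazed at.
    """
    for name, (lo, hi) in regions.items():
        if lo <= eye_angle < hi:
            return name
    return None

# Assumed layout of the first and second display areas (radians):
REGIONS = {"first_area": (-0.5, 0.0), "second_area": (0.0, 0.5)}
```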
- When the working mode is switched from the AR diving mode to the VR diving mode, the first confirmation interface may be a user interface implemented based on AR technology; when the working mode is switched from the VR diving mode to the AR diving mode, the first confirmation interface may be a user interface implemented based on VR technology.
- the following describes multiple functional modes in which the working mode of the mixed reality device is the AR diving mode.
- In the first function mode for monitoring whether the underwater environment where the user is located is in an abnormal state, the processing module is configured to: obtain first environmental information; send the first environmental information to the server so that the server determines, based on it, whether the underwater environment where the user is located is in an abnormal state; receive an augmented reality picture sent by the server containing warning information indicating that the underwater environment is in an abnormal state; and control the display module to display the augmented reality picture containing the warning information.
- An abnormal state occurring in the underwater environment where the user is located may include: dangerous objects or dangerous environments in the vicinity of the user (for example, within a preset distance centered on the user's position) that may threaten the user's safety.
- dangerous objects may include: dangerous animals and plants, obstacles, and the like.
- a hazardous environment may include water velocity exceeding a preset threshold, etc.
- the exemplary embodiments of the present disclosure do not limit this.
- the warning information may include one or more of information about dangerous objects threatening the user's life in the underwater environment where the user is located and navigation information for instructing the user to travel along the first target route.
- the first target route is a route capable of avoiding dangerous objects.
- When the warning information includes both the information about dangerous objects threatening the user's safety in the underwater environment and the navigation information for instructing the user to travel along the first target route, the processing module is further configured to: control the display module to display the augmented reality picture containing the information about the dangerous object and obtain second eye information; determine, based on the second eye information, whether the user is gazing at the information about the dangerous object; and, when it is determined that the user is gazing at the information about the dangerous object, control the display module to display an augmented reality picture containing the navigation information.
- The mixed reality device can display the information about dangerous objects to the user and, after confirming that the user has seen it, display navigation information so that the user avoids the dangerous objects while traveling.
- Once the user has paid attention to the information about a dangerous object, the navigation information for avoiding it can be displayed. Thus, the personal safety of divers in the underwater environment can be effectively guaranteed, their sense of safety improved, and the interest in diving activities enhanced.
- the second eye information may be eye image information of the user.
- The processing module is configured to determine the user's eyeball orientation according to the user's eye image information, determine the user's line-of-sight direction according to the eyeball orientation, and determine the area located in the line-of-sight direction as the user's gaze area.
- In the second function mode for monitoring whether the user is in an abnormal state, the processing module is configured to: acquire third eye information; when the third eye information meets a second preset condition, obtain second environment information; determine the user's location information based on the second environment information; and send the user's location information to the server, so that the server sends it to other mixed reality devices to request that the divers using those devices rescue the user.
- When the diver remains still for an abnormally long time accompanied by abnormal eye gaze, the device can automatically switch to a distress mode. This makes it convenient to call rescuers in time: the user's location information is sent to other mixed reality devices, and distress signals are sent to surrounding divers for timely rescue. This facilitates the timely rescue of divers who have accidents, reduces their risk, lowers the incidence of diving accidents, and improves the safety of the diving device.
- Whether the user is in an abnormal state may include whether the user is experiencing physical discomfort, for example, whether the user is in a state of fatigue or fainting.
- the exemplary embodiments of the present disclosure do not limit this.
- The second preset condition may be that the user performs one of the following eye movements, including but not limited to: the user's gaze stays in the same area for more than a preset time, the number of times the user blinks within the preset time is less than a preset threshold, or the user keeps the eyes closed throughout the preset time, etc.
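A checker for the second preset condition could be sketched as below; the threshold values and the pre-aggregated inputs (gaze dwell time, blink count, eye-closure duration) are illustrative assumptions:

```python
def second_condition_met(dwell_s, blink_count, eyes_closed_s,
                         max_dwell=30.0, min_blinks=2, max_closed=10.0):
    """True if any of the listed abnormal eye movements is observed:
    prolonged dwell in one gaze area, too few blinks in the window,
    or continuously closed eyes."""
    return (dwell_s > max_dwell
            or blink_count < min_blinks
            or eyes_closed_s > max_closed)
```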
- the following takes the collected eye information of the user including the user's eye image information as an example
- when it is detected that the eye image information does not include the iris boundary image area of the user's eyeball, it can be determined that the user performs an eye-closing action
- when it is detected, through multiple pieces of eye image information collected within a preset time, that the iris area of the user's eyeball gradually decreases and then gradually increases, it can be determined that the user blinks.
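The two detections above can be sketched over a window of per-frame iris areas. This is an assumed simplification: real iris segmentation and the dip fraction used here are not specified in the disclosure.

```python
def classify_eye_event(iris_areas, closed_frac=0.2):
    """Classify a short window of per-frame visible iris areas (pixels).

    No iris visible in any frame -> sustained eye closure; a dip
    (decrease then recovery) below closed_frac of the peak -> a blink;
    otherwise the eyes are treated as open. closed_frac is an
    illustrative assumption."""
    peak = max(iris_areas)
    if peak == 0:
        return "closed"          # no iris boundary area in any frame
    lo = min(iris_areas)
    i = iris_areas.index(lo)
    # The dip must be interior to the window: area decreases, then
    # increases again, matching the blink description above.
    if lo < closed_frac * peak and 0 < i < len(iris_areas) - 1:
        return "blink"
    return "open"
```

For example, a window like `[100, 60, 5, 70, 100]` would be classified as a blink, while `[0, 0, 0]` would count as eye closure.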
- the processing module is further configured to receive the rescuer's information sent by the server; and control the display module to display the rescuer's information.
- the processing module is configured to obtain third environment information and fourth eye information in the third functional mode for displaying the introduction information of the gaze object; based on the fourth eye information, obtain, from the third environment information, the identification information of the user's gaze object in the underwater environment; send the identification information of the gaze object to the server, so that the server returns the introduction information of the gaze object; and receive the augmented reality picture containing the introduction information of the gaze object sent by the server and control the display module to display it.
- divers can use eye gaze control to display the introduction information of the gaze object, which helps divers gain a deep understanding of underwater objects and environments.
- the gaze object may include: one or more of at least one underwater object in the underwater environment and the underwater environment itself.
- underwater objects may include: animals, plants, or rocks.
- the underwater environment may include: ocean trenches or volcanoes.
- the exemplary embodiments of the present disclosure do not limit this.
- the fourth eye information may be eye image information of the user.
- the processing module is configured to determine the user's eyeball orientation according to the user's eye image information; determine the user's line of sight direction according to the eyeball orientation; and determine the information of the third environment information located in the line of sight direction as the identification information of the user's gaze object in the underwater environment.
- the identification information of the gazing object may include: image information of the gazing object.
- the image information of the gazing object is extracted from the environment image information, and the image information of the gazing object is determined as the identification information of the gazing object.
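The extraction step above can be sketched as cropping the environment image around the point where the line of sight meets it. This is an illustrative sketch; the patch size, coordinate convention, and nested-list image representation are assumptions, not the disclosed method.

```python
def crop_gaze_object(env_image, gaze_xy, half=64):
    """Extract the gaze object's image information from the environment
    image.

    env_image: 2-D list of pixels (rows of columns); gaze_xy: (row, col)
    where the user's line of sight intersects the image. Returns the
    patch of half-width `half` around the gaze point, clamped to the
    image bounds, to serve as the gaze object's identification
    information."""
    r, c = gaze_xy
    rows, cols = len(env_image), len(env_image[0])
    r0, r1 = max(0, r - half), min(rows, r + half)
    c0, c1 = max(0, c - half), min(cols, c + half)
    return [row[c0:c1] for row in env_image[r0:r1]]
```

The resulting patch would then be sent to the server, which matches it against known underwater objects and returns the introduction information.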
- the introduction information of the gaze object may include: one or more of information such as text, image, and video.
- the introduction information of the gaze object may include: name, subject, classification, morphological characteristics, living habits, or protection level, etc.
- the exemplary embodiments of the present disclosure do not limit this.
- the processing module is configured to acquire second head information in the fourth functional mode for initiating diving interaction; when the second head information meets the third preset condition, control the display module to display the second confirmation interface; acquire fifth eye information; determine the user's gaze area on the second confirmation interface based on the fifth eye information; and when it is determined that the user's gaze area on the second confirmation interface is the preset third display area, send a request message to the server for requesting initiation of diving interaction.
- the user can conveniently operate the mixed reality device through head and eye movements, so as to initiate diving interactive activities.
- multiple divers can act as a whole, facilitating real-time sharing of the pictures they see (for example, the underwater environment they have visited, and the animals and plants they have watched in that environment), which can enhance the fun of diving.
- the third preset condition may be that the user's head is in a head-down state, the user's head is in a head-up state, the user's head shakes one or more times within a preset time (for example, the shaking amplitude of the user's head in the first direction is greater than a preset first threshold), the user's head nods one or more times within the preset time (for example, the shaking amplitude of the user's head in the second direction is greater than a preset second threshold), or the user's head makes a circular movement within the preset time (for example, the shaking amplitude of the user's head in both the first direction and the second direction is greater than a preset third threshold), etc.
- taking as an example that the second head information is the preset head information indicating that the user performs a nodding action within the preset time, that is, the head information collected within the preset time all indicates that the shaking of the user's head in the second direction is greater than the preset threshold, it can be determined that the second head information meets the third preset condition; thus, the second confirmation interface can pop up, so that the user can choose to initiate diving interaction through eye movements. For example, when the gaze area of the user's eyes on the second confirmation interface is the preset third display area, it indicates that the user chooses to initiate a diving interactive activity.
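The head-gesture conditions above can be sketched as threshold tests on per-swing amplitudes in the two directions. This is an illustrative sketch; the amplitude units, threshold values, and the classification names are all assumptions.

```python
def head_gesture(amplitudes, axis_thresholds):
    """Classify head motion within the preset window.

    amplitudes: list of (first_dir_amp, second_dir_amp) head swings.
    A swing exceeding only the first-direction threshold counts as a
    shake; only the second-direction threshold, a nod; both, part of
    a circular motion."""
    first_t, second_t = axis_thresholds
    shakes = sum(1 for a1, a2 in amplitudes if a1 > first_t and a2 <= second_t)
    nods = sum(1 for a1, a2 in amplitudes if a2 > second_t and a1 <= first_t)
    circles = sum(1 for a1, a2 in amplitudes if a1 > first_t and a2 > second_t)
    if circles:
        return "circle"
    if nods and not shakes:
        return "nod"
    if shakes and not nods:
        return "shake"
    return "none"
```

Under this sketch, a window of swings classified as `"nod"` would satisfy the third preset condition in the nodding example above and trigger the second confirmation interface.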
- the second confirmation interface may be a user interface implemented based on AR technology.
- the fifth eye information may be eye image information of the user.
- the processing module is configured to determine the user's eyeball orientation according to the user's eye image information; determine the user's line of sight direction according to the eyeball orientation; and determine the area of the second confirmation interface located in the line of sight direction as the user's gaze area on the second confirmation interface.
- the processing module is further configured to receive the augmented reality picture containing the diver information list sent by the server, and control the display module to display it; acquire sixth eye information; based on the sixth eye information, obtain, from the diver information list, the information of the target diver that the user is gazing at; and send the information of the target diver to the server, so that the server sends a request message for inviting diving interaction to the mixed reality device corresponding to the target diver.
- the user can select, through eye movements, the target divers to be invited to conduct diving interaction in a targeted manner.
- the diver information list may include: information about one or more divers in the underwater environment within a preset distance from the location of the interaction initiator.
- the diver information in the diver information list may include: location information of the diver or introduction information (for example, name, image information, etc.) of the diver.
- the number of target divers may be one or more, for example, two, three, four, and so on.
- the embodiments of the present disclosure do not limit this.
- the request message for inviting diving interaction may include: introduction information of the object to be shared, where the object to be shared includes one or more of the underwater environment of the interaction initiator and the underwater objects the interaction initiator is gazing at, and the introduction information of the object to be shared includes one or more of text, image, and video.
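A hedged sketch of the invite-request payload described above is given below. The field names and the JSON encoding are assumptions for illustration only; the disclosure does not specify a wire format.

```python
import json

def build_invite_request(initiator_id, targets, shared_objects):
    """Build an illustrative diving-interaction invite message.

    shared_objects: list of dicts, each carrying introduction
    information for one object to be shared (any of 'text', 'image',
    'video' entries)."""
    return json.dumps({
        "type": "dive_interaction_invite",
        "initiator": initiator_id,
        "targets": targets,
        "shared_objects": shared_objects,
    })
```

The server would forward such a message to the mixed reality device of each target diver, which then renders the fourth confirmation interface.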
- the processing module is further configured to receive the augmented reality picture containing the position information of at least one of the target divers sent by the server, and control the display module to display it. In this way, it is convenient for the initiator of the diving interaction to know the location of the receiver of the diving interaction, which can enhance the fun of diving.
- the processing module is further configured to receive an augmented reality picture containing the updated position information of at least one diver, and control the display module to display it. In this way, while waiting for the diving interaction recipient, the recipient's location can be updated in real time.
- the sixth eye information may be eye image information of the user.
- the processing module is configured to determine the user's eyeball orientation according to the user's eye image information; determine the user's line of sight direction according to the eyeball orientation; and determine the diver information located in the line of sight direction as the information of the target diver that the user is gazing at.
- the processing module is further configured to display the third confirmation interface when it is determined that the user's gaze area on the second confirmation interface is the preset fourth display area; acquire seventh eye information; based on the seventh eye information, determine the user's gaze area on the third confirmation interface; and when it is determined that the gaze area on the third confirmation interface is the preset fifth display area, end the current working mode.
- the user can conveniently operate the mixed reality device through the head and eyes, so as to end the current working mode, for example, end the diving interactive activity.
- the third confirmation interface may be a user interface implemented based on AR technology.
- the seventh eye information may be eye image information of the user.
- the processing module is configured to determine the user's eyeball orientation according to the user's eye image information; determine the user's line of sight direction according to the eyeball orientation; and determine the area of the third confirmation interface located in the line of sight direction as the user's gaze area on the third confirmation interface.
- a plurality of functional modes of the VR diving mode, as the working mode of the mixed reality device, are described below.
- the processing module is configured to, in the fifth functional mode for responding to diving interaction, control the display module to display a fourth confirmation interface in response to a request message sent by the server for confirming whether to perform diving interaction; obtain eighth eye information; determine the user's gaze area on the fourth confirmation interface based on the eighth eye information; when it is determined that the user's gaze area on the fourth confirmation interface is the preset sixth display area, receive the virtual reality picture containing the introduction information of the object to be shared sent by the server; and control the display module to display that picture. In this way, when a diving interaction invitation is received, the fourth confirmation interface can be displayed, so that the user can conveniently operate it through eye movements to determine whether to accept the diving interaction.
- the object to be shared includes one or more of the underwater environment of the initiator of the diving interaction and the underwater object that the initiator is gazing at, and the introduction information of the object to be shared includes one or more of text, image, and video.
- the fourth confirmation interface may be a user interface implemented based on VR technology.
- the processing module is further configured to send a response message for accepting diving interaction to the server, so that the server sends second navigation information for instructing the user to travel along a second target route; and receive the virtual reality picture containing the second navigation information sent by the server and control the display module to display it, wherein the second target route is a route from the position of the diving interaction acceptor to the position of the diving interaction initiator.
- the recipient of the diving interaction can travel to the underwater environment where the initiator of the diving interaction is located and interact there, thereby enhancing the fun of diving.
- it helps the diving interaction recipient to quickly enter the underwater environment where the diving interaction initiator is located, saving a lot of time and cost.
- the processing module is further configured to acquire third head information; when the third head information meets the fourth preset condition, control the display module to display the fifth confirmation interface; acquire ninth eye information; based on the ninth eye information, determine the user's gaze area on the fifth confirmation interface; and when it is determined that the gaze area is the preset seventh display area, send a response message accepting diving interaction to the server, so that the server sends the second navigation information for instructing the user to travel along the second target route. In this way, the user can control, through the head and eyes, whether to send the response message accepting diving interaction to the server.
- the fourth preset condition may be that the user's head is in a head-down state, the user's head is in a head-up state, the user's head shakes one or more times within a preset time (for example, the shaking amplitude of the user's head in the first direction is greater than a preset first threshold), the user's head nods one or more times within the preset time (for example, the shaking amplitude of the user's head in the second direction is greater than a preset second threshold), or the user's head makes a circular movement within the preset time (for example, the shaking amplitude of the user's head in both the first direction and the second direction is greater than a preset third threshold), etc.
- taking as an example that the third head information is preset head information indicating that the user has performed a nodding action within a preset time, it can be determined that the third head information meets the fourth preset condition, and the fifth confirmation interface can pop up so that the user can confirm, through eye movements, whether to send a response message accepting diving interaction to the server.
- the ninth eye information may be eye image information of the user.
- the processing module is configured to determine the user's eyeball orientation according to the user's eye image information; determine the user's line of sight direction according to the eyeball orientation; and determine the area of the fifth confirmation interface located in the line of sight direction as the user's gaze area on the fifth confirmation interface.
- the fifth confirmation interface (that is, the interface for confirming whether to travel to the area where the initiator of the diving interaction is located for the diving interaction) may be a user interface implemented based on VR technology.
- the processing module is configured to obtain fifth environment information in the sixth functional mode for displaying diving tracks; determine the user's current location information based on the fifth environment information; generate, based on the current location information and historical location information, a virtual reality picture including the user's diving track; and control the display module to display that picture.
- the VR diving mode can present a zoomed-out overall view, which helps divers form an overall awareness, makes the diving direction clearer, and enables divers to quickly reach the next diving destination, saving a lot of diving energy and time.
- the fifth environment information may be environment location information.
- the environmental location information of the underwater environment where the user is currently located may be used as the user's current location information. In this way, the user's location information is obtained.
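The diving-track generation described above can be sketched as maintaining an ordered sequence of positions. This is an illustrative sketch; the `(x, y, depth)` tuple representation and the de-duplication of repeated positions are assumptions.

```python
def diving_track(history, current):
    """Append the user's current position (derived from the environment
    location information) to the historical positions, yielding the
    ordered track to render in the virtual reality picture.

    history: list of earlier (x, y, depth) positions; current: the
    position just determined. Consecutive duplicates are skipped so a
    stationary diver does not inflate the track."""
    track = list(history)
    if not track or track[-1] != current:
        track.append(current)
    return track
```

The resulting list of positions would then be drawn as a polyline in the VR picture showing where the diver has been.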
- the embodiment of the present disclosure also provides an information processing method, which can be applied to the mixed reality device in one or more of the above exemplary embodiments.
- Fig. 3 is a schematic flowchart of an information processing method in an exemplary embodiment of the present disclosure. As shown in Figure 3, the information processing method may include:
- Step 31 Obtain diving information through the diving information collection module, and the diving information includes: one or more of the user's head information, the user's eye information, and the environmental information of the underwater environment in which the user is located;
- Step 32 Based on the diving information, realize the function of any one of the augmented reality AR diving mode and the virtual reality VR diving mode;
- the AR diving mode includes: one or more of the first functional mode for monitoring whether the underwater environment where the user is located is in an abnormal state, the second functional mode for monitoring whether the user is in an abnormal state, the third functional mode for displaying the introduction information of the gaze object, and the fourth functional mode for initiating diving interaction.
- the VR diving mode includes: one or more of the fifth functional mode for responding to diving interaction and the sixth functional mode for displaying diving tracks.
- Step 33 Control the display module to display any one of augmented reality images and virtual reality images.
- step 32 may include:
- Step 3211 Obtain the first head information
- Step 3212 When the first head information meets the first preset condition, control the display module to display the first confirmation interface, and acquire the first eye information;
- Step 3213 Based on the first eye information, determine the gaze area of the user on the first confirmation interface;
- Step 3214 When it is determined that the gaze area of the user on the first confirmation interface is the preset first display area, switch the working mode from one of the AR diving mode and the VR diving mode to the other of the AR diving mode and the VR diving mode One; or, when it is determined that the gaze area of the user on the first confirmation interface is the preset second display area, keep the working mode unchanged.
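Step 3214 can be sketched as a simple mode toggle keyed on the gaze area. The mode and area names below are illustrative labels, not identifiers from the disclosure.

```python
def next_mode(current_mode, gaze_area):
    """Step 3214 as a toggle: gazing at the first display area switches
    between the AR and VR diving modes; any other area (including the
    second display area) keeps the working mode unchanged."""
    if gaze_area == "first_display_area":
        return "VR" if current_mode == "AR" else "AR"
    return current_mode
```

For example, a user in AR diving mode who gazes at the first display area of the first confirmation interface would be switched to VR diving mode, and vice versa.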
- step 32 may include:
- Step 3221 In the first function mode, obtain the first environment information
- Step 3222 Send the first environmental information to the server, so that the server determines whether the underwater environment where the user is in is abnormal based on the first environmental information;
- Step 3223 receiving the augmented reality image sent by the server and including the warning information indicating that the underwater environment where the user is in is abnormal;
- Step 3224 Control the display module to display the augmented reality screen containing the warning information.
- the warning information includes: one or more of information about dangerous objects threatening the user's life in the underwater environment where the user is located and navigation information for instructing the user to travel along the first target route.
- the first target route is a route capable of avoiding dangerous objects.
- step 3224 may include: controlling the display module to display an augmented reality picture containing information about dangerous objects, and acquiring second eye information; based on the second eye information, determining whether the user is looking at the dangerous object information; when it is determined that the user is gazing at the information of the dangerous object, the display module is controlled to display an augmented reality picture containing navigation information.
- step 32 may include:
- Step 3231 In the second function mode, obtain the third eye information
- Step 3232 When the third eye information meets the second preset condition that the user is in an abnormal state, acquire the second environment information;
- Step 3233 Determine the location information of the user based on the second environment information
- Step 3234 Send the user's location information to the server, so that the server sends the user's location information to other mixed reality devices, so as to request rescue from divers using other mixed reality devices.
- step 32 may further include:
- Step 3235 Receive the information of rescuers sent by the server
- Step 3236 Control the display module to display information about rescuers.
- step 32 may include:
- Step 3241 In the third function mode, acquire the third environment information and the fourth eye information;
- Step 3242 Based on the fourth eye information, obtain the identification information of the user's gaze object in the underwater environment from the third environmental information;
- Step 3243 Send the identification information of the gaze object to the server, so that the server sends the introduction information of the gaze object;
- Step 3244 Receive the augmented reality image including the introduction information of the gaze object sent by the server, and control the display module to display the augmented reality image including the introduction information of the gaze object.
- step 32 may include:
- Step 3251 In the fourth function mode, obtain the second head information
- Step 3252 When the second head information meets the third preset condition, control the display module to display the second confirmation interface;
- Step 3253 Obtain the fifth eye information; determine the gaze area of the user on the second confirmation interface based on the fifth eye information;
- Step 3254a When it is determined that the gaze area of the user on the second confirmation interface is the preset third display area, send a request message for requesting initiation of diving interaction to the server.
- step 32 may further include:
- Step 3255a Receive the augmented reality screen containing the information list of divers sent by the server, and control the display module to display the augmented reality screen containing the information list of divers;
- Step 3256a Obtain the sixth eye information
- Step 3257a Based on the sixth eye information, obtain the information of the target diver that the user is looking at from the diver information list;
- Step 3258a Send the information of the target diver to the server, so that the server sends a request message for inviting diving interaction to the mixed reality device corresponding to the target diver.
- the request message for inviting diving interaction includes: introduction information of the object to be shared, where the object to be shared includes one or more of the underwater environment and underwater objects, and the introduction information of the object to be shared includes one or more of text, image, and video.
- step 32 may further include:
- Step 3259a Receive the augmented reality picture including the location information of at least one diver among the target divers sent by the server, and control the display module to display the augmented reality picture including the location information of at least one diver.
- step 32 may include:
- Step 3254b When it is determined that the gaze area of the user on the second confirmation interface is the preset fourth display area, display the third confirmation interface;
- Step 3255b Obtain the seventh eye information
- Step 3256b Determine the gaze area of the user on the third confirmation interface based on the seventh eye information
- Step 3257b When it is determined that the gaze area of the user on the third confirmation interface is the preset fifth display area, end the current working mode.
- step 32 may include:
- Step 3261 In the fifth function mode, in response to the request message sent by the server for inviting diving interaction, control the display module to display the fourth confirmation interface;
- Step 3262 Obtain the eighth eye information
- Step 3263 Based on the eighth eye information, determine the gaze area of the user on the fourth confirmation interface;
- Step 3264 When it is determined that the user's gaze area on the fourth confirmation interface is the preset sixth display area, receive the virtual reality screen containing the introduction information of the object to be shared sent by the server;
- Step 3265 Control the display module to display the virtual reality screen containing the introduction information of the object to be shared.
- the objects to be shared include: one or more of underwater environments and underwater objects, and the introduction information of the objects to be shared includes: one or more of text, images and videos.
- step 32 may further include:
- Step 3266 Send a response message of accepting diving interaction to the server, so that the server sends the second navigation information for instructing the user to travel along the second target route;
- Step 3267 Receive the virtual reality image containing the second navigation information sent by the server, and control the display module to display the virtual reality image containing the second navigation information.
- the second target route is a route from the location of the diving interaction recipient to the location of the diving interaction initiator.
- step 3266 may include: acquiring third head information; when the third head information meets the fourth preset condition, controlling the display module to display the fifth confirmation interface; acquiring ninth eye information; based on the ninth eye information, determining the user's gaze area on the fifth confirmation interface; and when it is determined that the gaze area is the preset seventh display area, sending a response message accepting diving interaction to the server, so that the server issues second navigation information for instructing the user to travel along the second target route. In this way, the user can control, through the head and eyes, whether to send the response message accepting diving interaction to the server.
- step 32 may include:
- Step 3271 In the sixth function mode, obtain fifth environmental information
- Step 3272 Determine the current location information of the user based on the fifth environmental information
- Step 3273 Based on the user's current location information and historical location information, generate a virtual reality screen including the user's diving track;
- Step 3274 Control the display module to display the virtual reality screen including the user's diving track.
- the following takes as an example that the AR diving mode includes: the first functional mode for monitoring whether the underwater environment where the user is located is in an abnormal state, the second functional mode for monitoring whether the user is in an abnormal state, the third functional mode for displaying the introduction information of the gaze object, and the fourth functional mode for initiating diving interaction; and that the VR diving mode includes: the fifth functional mode for responding to diving interaction and the sixth functional mode for displaying diving tracks. An application scenario of the above information processing method in an exemplary embodiment is described.
- the information processing method may include the following processes:
- Step 1 The processing module controls the display module to display the initialization setting interface, and the user can operate the initialization setting interface through eye movements and select the initial working mode.
- the initial working mode can be selected as AR diving mode.
- Step 2 The head information collection module collects the first head information and sends it to the processing module.
- when the first head information meets the first preset condition, the processing module controls the display module to display the first confirmation interface (that is, the confirmation switching interface); or, when the first head information does not meet the first preset condition, the current working mode is kept.
- Step 3 The eye information collection module acquires the first eye information of the diver; when the processing module determines that the user's gaze area (i.e., eye gaze information) on the first confirmation interface is the preset first display area, the processing module switches the working mode from the AR diving mode to the VR diving mode and can continue to perform step 20; or, when the processing module determines that the user's gaze area on the first confirmation interface is the preset second display area, the working mode is kept unchanged and step 4 is performed.
- Step 4 The environmental information collection module (for example, including multiple cameras) collects the environmental image information (for example, the first environmental information) of the underwater environment where the user is located in real time and transmits it to the processing module.
- Step 5 The processing module obtains the first environmental information in real time and sends it to the server, and the server returns the processing result for the user's current environment: when the server determines, based on the first environmental information, that there are dangerous objects (such as dangerous creatures or obstacles) threatening the user's life safety in the underwater environment where the user is located, it indicates that the underwater environment is in an abnormal state, and step 6 is performed; or, when the processing result indicates that the underwater environment where the user is located is not in an abnormal state, step 8 is performed;
- Step 6 The processing module obtains the augmented reality picture sent by the server containing warning information indicating that dangerous objects threatening the user's life safety appear in the underwater environment where the user is located (the warning information may include: the location and size of the dangerous object), and controls the display module to display the augmented reality picture containing the warning information as a warning reminder;
- Step 7: After controlling the display module to display the augmented reality picture containing the dangerous-object information, the processing module obtains the second eye information collected by the eye information collection module. When the processing module determines from the second eye information that the diver has gazed at the dangerous-object information, it controls the display module to display an augmented reality picture containing navigation information, and continues to perform steps 4 to 7 to monitor whether the underwater environment where the user is located is abnormal;
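The monitoring loop of Steps 5 to 7 amounts to a small display decision: an abnormal environment triggers a warning picture, and once the diver has gazed at the dangerous object the picture switches to navigation. A minimal sketch, with screen names as illustrative assumptions:

```python
# Hypothetical sketch of the display decision in Steps 5-7. The screen
# identifiers ("plain", "warning", "navigation") are assumptions; the
# branch structure follows the described flow.

def choose_ar_screen(environment_abnormal, gazed_at_danger):
    if not environment_abnormal:
        return "plain"        # Step 8 branch: nothing to warn about
    if gazed_at_danger:
        return "navigation"   # Step 7: show navigation information
    return "warning"          # Step 6: show the warning reminder
```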
- Step 8: The eye information collection module acquires the diver's third eye information and sends it to the processing module;
- Step 9: The processing module records eyeball activity duration data in real time. When the third eye information meets the second preset condition indicating that the user is in an abnormal state (for example, the user has not blinked, or has not opened the eyes, for longer than a preset duration), step 10 is performed; or, when the third eye information does not meet the second preset condition, step 13 is performed;
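The second preset condition in Step 9 can be sketched as a simple threshold check on eye-state durations. The threshold values below are illustrative assumptions, not values from the disclosure:

```python
# Hypothetical sketch of the Step 9 abnormal-state check: the user is
# treated as abnormal when the eyes have stayed closed, or no blink has
# occurred, for longer than a preset duration. Thresholds are assumed.

def user_state_abnormal(eyes_closed_s, since_last_blink_s,
                        closed_limit_s=10.0, blink_limit_s=60.0):
    return (eyes_closed_s > closed_limit_s
            or since_last_blink_s > blink_limit_s)
```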
- Step 10: The environmental information collection module acquires the second environmental information (including environmental location information) and sends it to the processing module, so that the processing module calculates the location of the user in the abnormal state based on the second environmental information and sends the user's location information to the server;
- Step 11: The server obtains a list of divers around the user in the abnormal state and sends a distress message to the processing modules corresponding to the divers in the list, so as to request rescue from the other divers;
- Step 12: The processing module receives the rescuer's information sent by the server, controls the display module to display the rescuer's information and distance, and performs step 28;
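The server-side part of Steps 10 to 12 — finding divers near the user in distress — might look like the following. The position representation, diver names, and search radius are illustrative assumptions:

```python
import math

# Hypothetical sketch of Step 11: the server filters divers within a
# search radius of the user in distress and returns them nearest first,
# so a distress message can be sent to each one's processing module.

def nearby_divers(user_pos, divers, radius_m=100.0):
    found = []
    for name, pos in divers.items():
        d = math.dist(user_pos, pos)   # Euclidean distance, Python 3.8+
        if d <= radius_m:
            found.append((name, d))
    return sorted(found, key=lambda item: item[1])
```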
- Step 13: Based on the fourth eye information obtained by the eye information collection module and the third environmental information collected by the environmental information collection module, the processing module calculates identification information of the object the diver is gazing at, obtains introduction information of the gazed object from the server, and controls the display module to display it, so that the diver can gain a deeper understanding of the underwater organisms and environment;
- Step 14: When the second head information obtained by the processing module in the AR diving mode meets the third preset condition (for example, indicating that the diver has nodded), the processing module controls the display module to display the second confirmation interface (that is, an interface for confirming whether to share the view), and the user confirms through eye interaction whether to share the current field-of-view picture.
- When the processing module determines that the user's gaze area on the second confirmation interface is the preset third display area, confirming the choice to initiate a diving interaction to share the view, step 15 is performed; or, when the processing module determines that the user's gaze area on the second confirmation interface is the preset fourth display area, a third confirmation interface is displayed and step 19 is performed to confirm whether to end the current working mode;
- Step 15: The processing module sends a request message to the server requesting to initiate a diving interaction, and the server, in response to the request message, sends the acquired list of divers in the VR diving mode in the diving area to the processing module corresponding to the initiator of the diving interaction;
- Step 16: The processing module controls the display module to display an augmented reality picture containing the list of divers. The initiator of the diving interaction can select diving interaction objects with the eyes, and the information of the target divers is sent to the server; the server then sends a one-to-one or one-to-many request message inviting diving interaction and receives feedback on whether the invitation is accepted;
- Step 17: After the processing module corresponding to the initiator of the diving interaction receives the feedback information from the server, the initiator can choose whether to wait for the arrival of the diving companions before interacting. When the initiator chooses to wait, step 18 is performed; or, when the initiator chooses not to wait, step 2 is performed;
- Step 18: The server updates the location information of the divers who accepted the invitation in real time, calculates the distance, and feeds it back to the processing module of the initiator of the diving interaction so as to update the display interface;
- Step 19: When the processing module determines based on the seventh eye information that the user's gaze area on the third confirmation interface is the preset fifth display area, step 28 is performed; or, when it determines that the gaze area is not the preset fifth display area, step 2 is performed;
- Step 20: The processing module monitors whether a request message inviting diving interaction is received from the server. If so, the processing module controls the display module to display a fourth confirmation interface (for example, an interface for confirming whether to display the introduction information of the object shared by the initiator of the diving interaction).
- Step 21: Based on the acquired eighth eye information, when the processing module determines that the user's gaze area on the fourth confirmation interface is the preset sixth display area, indicating that the user accepts the diving interaction, it receives from the server a virtual reality picture containing the introduction information of the shared object; after controlling the display module to display that picture, step 22 is performed. Or, if the user chooses not to accept, step 24 is performed;
- Step 22: When the third head information obtained by the processing module in the AR diving mode meets the fourth preset condition (for example, indicating that the diver has nodded), the processing module controls the display module to display the fifth confirmation interface (for example, an interface for confirming whether to travel to the area where the initiator of the diving interaction is located). The user then confirms through eye interaction whether to join the diving interaction invitation: when the user chooses to join, step 23 is performed so as to travel to the area where the initiator is located; or, when the user chooses not to join, step 24 is performed;
- Step 23: The processing module sends a response message accepting the diving interaction to the server, so that the server issues second navigation information instructing the user to travel along a second target route (for example, including the planned path between the initiator and the recipient). The processing module of the diving interaction recipient receives from the server a virtual reality picture containing the second navigation information and controls the recipient's display module to display it as a route prompt;
- Step 24: The processing module obtains the fifth environmental information collected by the environmental information collection module and sends it to the server.
- Step 25: Based on the fifth environmental information and pre-stored historical environmental information, the server records all the diver's travel points, marks them as the traveled diving track, and generates a virtual reality picture containing the user's diving track;
- Step 26: The server sends the virtual reality picture containing the user's diving track to the processing module;
- Step 27: The processing module controls the display module to display the virtual reality picture containing the user's diving track sent by the server, after which step 2 may be performed;
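The track bookkeeping of Steps 25 to 27 can be sketched as merging newly reported travel points with the pre-stored history and marking the result as traveled. The point representation and the duplicate-skipping rule are illustrative assumptions:

```python
# Hypothetical sketch of Step 25: the server appends newly reported
# travel points to the stored history (skipping immediate repeats) and
# marks every point as part of the traveled diving track.

def update_diving_track(history, new_points):
    track = list(history)
    for p in new_points:
        if not track or track[-1] != p:   # skip an immediately repeated point
            track.append(p)
    return [{"point": p, "status": "traveled"} for p in track]
```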
- Step 28 End the dive.
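Taken together, the steps above describe a mode flow that can be modeled as a small state machine. Only a few of the transitions are shown below, and the state and event names are illustrative assumptions:

```python
# Minimal sketch of part of the Step 1-28 mode flow as a state machine.
# States and events are assumed names, not terms from the disclosure.

TRANSITIONS = {
    ("AR", "gaze_first_area"): "VR",            # Step 3: switch modes
    ("AR", "environment_abnormal"): "AR_WARN",  # Steps 5-6: warning picture
    ("AR_WARN", "gazed_at_danger"): "AR_NAV",   # Step 7: navigation picture
    ("VR", "gaze_fifth_area"): "ENDED",         # Steps 19 and 28: end dive
}

def next_mode(mode, event):
    # Unknown (mode, event) pairs leave the working mode unchanged.
    return TRANSITIONS.get((mode, event), mode)
```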
- An embodiment of the present disclosure also provides a mixed reality device, including a processor and a memory storing a computer program that can run on the processor, wherein the processor, when executing the program, implements the steps of the information processing method in one or more of the above embodiments.
- The mixed reality device 40 may include: at least one processor 401, at least one memory 402, and a bus 403 connected to the processor 401; the processor 401 and the memory 402 communicate with each other through the bus 403, and the processor 401 is used to call the program instructions in the memory 402 so as to execute the steps of the information processing method in one or more of the above embodiments.
- The above-mentioned processor may be a CPU, another general-purpose processor, a digital signal processor (DSP), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or an application-specific integrated circuit.
- A general-purpose processor may be an MPU, or the processor may be any conventional processor or the like.
- the embodiments of the present disclosure do not limit this.
- The above-mentioned memory may include non-permanent memory in a computer-readable storage medium, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (Flash RAM); the memory includes at least one memory chip.
- the bus may also include a power bus, a control bus, a status signal bus, and the like.
- the various buses are labeled as bus 403 in FIG. 4 for clarity of illustration.
- the embodiments of the present disclosure do not limit this.
- The processing performed by the mixed reality device may be completed by an integrated logic circuit of hardware in a processor or by instructions in the form of software. That is, the method steps in the embodiments of the present disclosure may be implemented by a hardware processor, or by a combination of hardware and software modules in the processor.
- the software module may be located in storage media such as random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers, and the like.
- the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware. To avoid repetition, no detailed description is given here.
- Embodiments of the present disclosure further provide a computer-readable storage medium including a stored program, wherein, when the program runs, the device where the storage medium is located is controlled to execute the steps of the information processing method in one or more of the above-mentioned embodiments.
- the above-mentioned computer-readable storage medium may be, for example: ROM/RAM, magnetic disk, optical disk, and the like.
- The embodiments of the present disclosure do not limit this.
- The functional modules/units in the system and the device can be implemented as software, firmware, hardware, or an appropriate combination thereof.
- The division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed by several physical components in cooperation.
- Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit.
- Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
- Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer-readable instructions, data structures, program modules, or other data.
- Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
- Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Abstract
Description
This application claims priority to the Chinese patent application filed with the China Patent Office on July 23, 2021, with application number 202110836392.5 and entitled "Mixed reality apparatus and device, information processing method, and storage medium", the content of which is incorporated into this application by reference.
Embodiments of the present disclosure relate to, but are not limited to, the technical field of information processing, and in particular to a mixed reality apparatus and device, an information processing method, and a storage medium.
Divers usually carry diving devices when conducting diving activities such as sightseeing, surveying, salvage, repair, and underwater engineering in an underwater environment. However, limited by factors such as the single function and insufficiently intelligent operation of the diving devices used in underwater environments, most of the public cannot experience the fun of diving activities, which is not conducive to promoting the development and progress of diving. It is therefore necessary to provide an intelligent diving device with rich functions.
Summary of the Invention
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the protection scope of the claims.
In a first aspect, an embodiment of the present disclosure provides a mixed reality apparatus, including: a processing module, a display module, and a diving information collection module, wherein,
the diving information collection module is configured to collect diving information and send the diving information to the processing module, wherein the diving information includes one or more of: the user's head information, the user's eye information, and environmental information of the underwater environment where the user is located;
the processing module is configured to implement, based on the diving information, the functions of either an augmented reality (AR) diving mode or a virtual reality (VR) diving mode, wherein the AR diving mode includes one or more of: a first function mode for monitoring whether the underwater environment where the user is located is in an abnormal state, a second function mode for monitoring whether the user is in an abnormal state, a third function mode for displaying introduction information of a gazed object, and a fourth function mode for initiating a diving interaction; and the VR diving mode includes one or more of: a fifth function mode for responding to a diving interaction and a sixth function mode for displaying a diving track;
the display module is configured to display either an augmented reality picture or a virtual reality picture.
In a second aspect, an embodiment of the present disclosure provides an information processing method applied to the mixed reality apparatus described in the above embodiments. The method includes: acquiring diving information through a diving information collection module, the diving information including one or more of the user's head information, the user's eye information, and environmental information of the underwater environment where the user is located; implementing, based on the diving information, the functions of either an augmented reality (AR) diving mode or a virtual reality (VR) diving mode, wherein the AR diving mode includes one or more of a first function mode for monitoring whether the underwater environment where the user is located is in an abnormal state, a second function mode for monitoring whether the user is in an abnormal state, a third function mode for displaying introduction information of a gazed object, and a fourth function mode for initiating a diving interaction, and the VR diving mode includes one or more of a fifth function mode for responding to a diving interaction and a sixth function mode for displaying a diving track; and controlling a display module to display either an augmented reality picture or a virtual reality picture.
In a third aspect, an embodiment of the present disclosure provides a mixed reality device, including: a processor and a memory storing a computer program that can run on the processor, wherein the processor, when executing the program, implements the steps of the information processing method described in the above embodiments.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium including a stored program, wherein, when the program runs, the device where the storage medium is located is controlled to execute the steps of the information processing method described in the above embodiments.
Additional features and advantages of the present disclosure will be set forth in the description that follows and, in part, will be apparent from the description or may be learned by practice of the disclosure. Other advantages of the present disclosure can be realized and obtained through the solutions described in the specification and the accompanying drawings.
Other aspects will become apparent upon reading and understanding the accompanying drawings and detailed description.
The accompanying drawings are provided for an understanding of the technical solutions of the present disclosure and constitute a part of the specification. Together with the embodiments of the present disclosure, they serve to explain the technical solutions of the present disclosure and do not constitute a limitation on them. The shape and size of each component in the drawings do not reflect the true scale; they are intended only to illustrate the content of the present disclosure.
FIG. 1 is a schematic structural diagram of a mixed reality diving system in an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of a mixed reality apparatus in an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of an information processing method in an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a mixed reality device in an exemplary embodiment of the present disclosure.
Several embodiments are described herein, but the description is exemplary rather than limiting, and more embodiments and implementations are possible within the scope of the embodiments described herein. Although many possible combinations of features are shown in the accompanying drawings and discussed in the exemplary embodiments, many other combinations of the disclosed features are possible. Unless expressly limited, any feature or element of any embodiment may be used in combination with, or in place of, any other feature or element of any other embodiment.
In describing representative embodiments, the specification may have presented a method and/or process as a particular sequence of steps. However, to the extent that the method or process does not depend on the particular order of steps described herein, it should not be limited to that particular order. As those of ordinary skill in the art will appreciate, other sequences of steps are also possible. Therefore, the particular order of the steps set forth in the specification should not be construed as a limitation on the claims. Furthermore, claims directed to the method and/or process should not be limited to performing their steps in the order written; those skilled in the art will readily appreciate that the order may be varied while remaining within the spirit and scope of the embodiments of the present disclosure.
In the drawings of the present disclosure, the size of each component, or the thickness of a layer or region, is sometimes exaggerated for clarity. Therefore, one aspect of the present disclosure is not necessarily limited to these dimensions, and the shape and size of each component in the drawings do not reflect the true scale. Furthermore, the drawings schematically show ideal examples, and one aspect of the present disclosure is not limited to the shapes, numerical values, or the like shown in the drawings.
In the exemplary embodiments of the present disclosure, ordinal numbers such as "first", "second", or "third" are used to avoid confusion between constituent elements, not to impose a limitation in terms of quantity.
In the exemplary embodiments of the present disclosure, for convenience, terms indicating orientation or positional relationships, such as "middle", "upper", "lower", "front", "rear", "vertical", "horizontal", "top", "bottom", "inner", or "outer", are used to describe the positional relationships of constituent elements with reference to the drawings. They are used only to facilitate the description of the specification and simplify the description, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they therefore cannot be understood as limiting the present disclosure. The positional relationships of the constituent elements change appropriately according to the direction in which each constituent element is described, so the terms are not limited to those used in the specification and may be replaced appropriately according to the circumstances.
In the exemplary embodiments of the present disclosure, unless otherwise expressly specified and limited, the terms "install", "connect", and "couple" should be understood broadly. For example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate piece, or an internal communication between two elements. Those of ordinary skill in the art can understand the meanings of these terms in the present disclosure according to the actual situation.
In the exemplary embodiments of the present disclosure, the term "module" may refer to any known or later-developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code capable of performing the function associated with the element in question.
In the exemplary embodiments of the present disclosure, the terms "interface" and "user interface" may refer to a medium interface for interaction and information exchange between an application program or operating system and a user, which realizes the conversion between an internal form of information and a form acceptable to the user. A common form of user interface is the graphical user interface (GUI), which refers to a user interface related to operations that is displayed graphically. The user interface may include visual interface elements such as icons, windows, buttons, and dialog boxes.
Mixed reality (MR) technology is in fact a combination of augmented reality (AR) technology and virtual reality (VR) technology. Using MR technology, users can see the real world (a feature of AR technology) while also seeing virtual objects (a feature of VR technology). MR technology is thus a development of virtual reality technology: by introducing real-scene information into a virtual environment, it sets up an interactive information feedback loop between the virtual world, the real world, and the user, which can enhance the realism of the user experience. It is characterized by authenticity, real-time interactivity, and imagination.
An embodiment of the present disclosure provides a mixed reality diving system. In practical applications, the mixed reality diving system can be used in diving activities such as sightseeing, surveying, salvage, repair, and underwater engineering.
FIG. 1 is a schematic structural diagram of the mixed reality diving system in an exemplary embodiment of the present disclosure. As shown in FIG. 1, the mixed reality diving system may include a terminal 11 and N mixed reality apparatuses, where each mixed reality apparatus is communicatively connectable to the terminal 11, and N is a positive integer greater than or equal to 1. For example, as shown in FIG. 1, the N mixed reality apparatuses may include mixed reality apparatus 121, mixed reality apparatus 122, ..., and mixed reality apparatus 12N.
In an exemplary embodiment, the terminal may be an electronic device such as a server, a smartphone, a tablet computer, a notebook computer, or a desktop computer. The embodiments of the present disclosure do not limit this.
In an exemplary embodiment, taking a server as the terminal, the server is configured to process and respond to one or more types of information transmitted by the processing module of the mixed reality apparatus, and to feed back to the processing module the content to be displayed. For example, the server is configured to process the environmental information to determine, based on it, whether the underwater environment where the user is located is in an abnormal state, so as to generate and issue warning information when an abnormal state occurs; or, when a diver is in an abnormal state, to send the user's location information to rescuers so that the diver in the abnormal state can be rescued. The information processed by the server differs according to the function of the diving mode implemented by the mixed reality apparatus. The embodiments of the present disclosure do not limit this.
在一种示例性实施例中,终端可以具有多个显卡端口,每一个混合现实装置可以通过一个显卡端口与终端通信连接,且每一个显卡端口具有一个端口标识。例如,显卡端口可以为高清晰度多媒体接口(High Definition Multimedia Interface,HDMI)或者高清数字显示接口(Display Port,DP)等。这里,本公开实施例对此不做限定。In an exemplary embodiment, the terminal may have multiple graphics card ports, each mixed reality device may be communicatively connected to the terminal through one graphics card port, and each graphics card port has a port identifier. For example, the graphics card port may be a high-definition multimedia interface (High Definition Multimedia Interface, HDMI) or a high-definition digital display interface (Display Port, DP). Here, the embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the mixed reality diving system may further include multiple wireless signal transmitters arranged in one-to-one correspondence in at least two graphics card ports of the terminal. The multiple mixed reality devices are wirelessly connected to the multiple wireless signal transmitters in one-to-one correspondence, so that the multiple mixed reality devices are in wireless communication connection with the terminal.
For example, the port identifier corresponding to each mixed reality device may be the port identifier of the graphics card port to which the mixed reality device is connected, or the port identifier of the graphics card port in which the wireless signal transmitter connected to the mixed reality device is located.
In an exemplary embodiment, the mixed reality device may be a wearable display device. For example, the wearable display device may include a head-mounted display device or an ear-mounted display device. For example, the wearable display device may be MR diving goggles or an MR diving helmet. The embodiments of the present disclosure do not limit this.
An embodiment of the present disclosure provides a mixed reality device. In practical applications, the mixed reality device can be applied in diving activities such as sightseeing, surveying, salvage, repair, and underwater engineering.
FIG. 2 is a schematic structural diagram of a mixed reality device in an exemplary embodiment of the present disclosure. As shown in FIG. 2, the mixed reality device 12 may include: a processing module 21, a display module 22, and a diving information collection module 23, wherein the processing module 21 is connected to the display module 22 and the diving information collection module 23, respectively.
The diving information collection module 23 is configured to collect diving information and send the diving information to the processing module 21, wherein the diving information may include one or more of: head information of the user, eye information of the user, and environment information of the underwater environment in which the user is located.
The processing module 21 is configured to implement, based on the diving information, the function of either an augmented reality (AR) diving mode or a virtual reality (VR) diving mode, wherein the AR diving mode may include one or more of: a first functional mode for monitoring whether the underwater environment in which the user is located is in an abnormal state, a second functional mode for monitoring whether the user is in an abnormal state, a third functional mode for displaying introduction information of a gaze object, and a fourth functional mode for initiating a diving interaction; and the VR diving mode may include one or more of: a fifth functional mode for responding to a diving interaction and a sixth functional mode for displaying a diving trajectory.
The display module 22 is configured to display either an augmented reality picture corresponding to the AR diving mode or a virtual reality picture corresponding to the VR diving mode.
Here, the user may refer to a diver who wears the mixed reality device while performing diving activities in an underwater environment.
In this way, when the user wears the mixed reality device provided by the embodiments of the present disclosure to perform diving activities in an underwater environment, the diving information collection module collects diving information, and the processing module, based on the collected diving information, can implement the function of either the AR diving mode or the VR diving mode. An intelligent diving device with rich functions can thereby be realized, which facilitates the development and progress of diving activities.
In an exemplary embodiment, an abnormal state of the underwater environment in which the user is located may include: the appearance, in the vicinity of the user (for example, within a region at a preset distance around the user's position), of a dangerous object or a dangerous environment that may threaten the user's life. For example, dangerous objects may include dangerous animals, dangerous plants, or obstacles. For example, a dangerous environment may include a water flow speed exceeding a preset threshold. The exemplary embodiments of the present disclosure do not limit this.
In an exemplary embodiment, an abnormal state of the user may include physical discomfort of the user, for example, the user being in a fatigued state or a fainting state. The exemplary embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the gaze object may include one or more of: at least one underwater object in the underwater environment, and the underwater environment itself. For example, the introduction information of the gaze object may include one or more of text, images, and video.
In an exemplary embodiment, a diving interaction may refer to one diver sharing what he or she sees (for example, the underwater environment, or an object in the underwater environment) with one or more other divers, or may refer to one diver inviting one or more other divers to dive together in the same body of water. The exemplary embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the diving information collection module may include a sensor for collecting diving information. For example, the diving information collection module may collect diving information in real time at a preset time interval. For example, the preset time interval may be 1 s (second), 2 s, or 3 s. The embodiments of the present disclosure do not limit this.
In an exemplary embodiment, taking the case where the diving information includes the user's head information, the user's eye information, and the environment information of the underwater environment in which the user is located as an example, as shown in FIG. 2, the diving information collection module 23 may include: a head information collection module 231, an eye information collection module 232, and an environment information collection module 233, wherein the head information collection module 231 is configured to collect the user's head information and send the head information to the processing module 21; the eye information collection module 232 is configured to collect the user's eye information and send the eye information to the processing module 21; and the environment information collection module 233 is configured to collect the environment information of the underwater environment in which the user is located and send the environment information to the processing module 21.
In an exemplary embodiment, the user's head information may include head posture information of the user.
In an exemplary embodiment, the head information collection module may include, but is not limited to, an attitude sensor. For example, the attitude sensor is a high-performance three-dimensional motion attitude measuring device based on Micro-Electro-Mechanical System (MEMS) technology, which typically includes motion sensors such as a three-axis gyroscope, a three-axis accelerometer, and a three-axis electronic compass; the attitude sensor can use these motion sensors to collect the user's head posture information. Of course, the head information collection module may also be implemented by other sensors, which is not limited in the embodiments of the present disclosure.
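The disclosure does not specify how such an attitude sensor fuses its gyroscope and accelerometer readings into a head posture estimate. One common technique, shown below as a minimal illustrative sketch (the function name, the filter coefficient `alpha`, and the sensor conventions are assumptions, not part of the disclosure), is a complementary filter for pitch and roll:

```python
import math

def complementary_filter(pitch, roll, gyro, accel, dt, alpha=0.98):
    """Fuse gyroscope rates and the accelerometer gravity direction
    into a head pitch/roll estimate (angles in radians).

    gyro:  (gx, gy, gz) angular rates in rad/s
    accel: (ax, ay, az) accelerations in m/s^2
    alpha: weight of the integrated gyro estimate (assumed value)
    """
    gx, gy, _ = gyro
    ax, ay, az = accel
    # Integrate gyro rates for a short-term, fast-reacting estimate.
    pitch_gyro = pitch + gx * dt
    roll_gyro = roll + gy * dt
    # Derive a drift-free long-term reference from the gravity vector.
    pitch_acc = math.atan2(ay, math.sqrt(ax * ax + az * az))
    roll_acc = math.atan2(-ax, az)
    # Blend: the gyro tracks fast head motion, the accel corrects drift.
    pitch = alpha * pitch_gyro + (1 - alpha) * pitch_acc
    roll = alpha * roll_gyro + (1 - alpha) * roll_acc
    return pitch, roll

# A stationary, level head: gravity along +z, no rotation rates.
p, r = 0.0, 0.0
for _ in range(100):
    p, r = complementary_filter(p, r, (0, 0, 0), (0, 0, 9.81), 0.01)
```

In a full implementation, the three-axis electronic compass would supply a yaw reference in the same way the accelerometer supplies the pitch/roll reference here.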
In an exemplary embodiment, the user's eye information may include eye image information of the user.
In an exemplary embodiment, the eye information collection module may include, but is not limited to, a camera using an image sensor. For example, the camera may be a camera using a Complementary Metal Oxide Semiconductor (CMOS) image sensor. Of course, the eye information collection module may also be implemented by other sensors, which is not limited in the embodiments of the present disclosure.
In an exemplary embodiment, the environment information of the underwater environment in which the user is located may include one or more of: environment image information, environment depth information, and environment position information of the underwater environment. The embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the environment information collection module may include, but is not limited to, a camera using an image sensor. For example, the environment information collection module may be a wide-angle camera, a fisheye camera, or a depth camera. For example, the environment information collection module may be a CMOS camera. For example, the environment information collection module may include a first camera for collecting environment image information of the underwater environment in which the user is located and a second camera for collecting environment depth information of that underwater environment. In this way, by performing an environment scan of the underwater environment through the environment information collection module, the environment image information and environment depth information of the underwater environment can be collected, and Simultaneous Localization And Mapping (SLAM) can be performed. Of course, the environment information collection module may also include other sensors, for example, a positioning sensor for collecting environment position information of the underwater environment in which the user is located, which is not limited in the embodiments of the present disclosure.
In an exemplary embodiment, taking the case where the environment information collection module includes a first camera for collecting environment image information of the underwater environment in which the user is located, the eye information collection module includes a third camera for collecting the user's eye image information, and the mixed reality device is a head-mounted display device as an example, the third camera may be arranged on the inner side of the head-mounted display device body, and the first camera may be arranged on the outer side of the head-mounted display device body. When the head-mounted display device body is worn by the user, the third camera faces the user's eyes and the first camera faces the underwater environment in which the user is located, so that both the user's eye image information and the environment image information of the underwater environment can be collected.
In an exemplary embodiment, the processing module may include, but is not limited to, a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or an application-specific integrated circuit. The general-purpose processor may be a microprocessor (Micro Processor Unit, MPU), or the processor may be any conventional processor. The embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the display module may include at least one display assembly, or a device containing a display assembly, for example, a mixed reality display or a head-mounted display. For example, the display module may be an Organic Light Emitting Diode (OLED) display, a Quantum-dot Light Emitting Diode (QLED) display, or the like. The embodiments of the present disclosure do not limit this.
In the following, taking the case where the mixed reality diving system includes a server and multiple mixed reality devices as an example, and where each mixed reality device includes a head information collection module, an eye information collection module, and an environment information collection module, the different working modes of the mixed reality device provided in the exemplary embodiments of the present disclosure are described in detail.
In an exemplary embodiment, the working modes of the mixed reality device may include an AR diving mode and a VR diving mode.
The following describes how the working mode of the mixed reality device is switched between the AR diving mode and the VR diving mode.
In an exemplary embodiment, the processing module is configured to: acquire first head information; when the first head information meets a first preset condition, control the display module to display a first confirmation interface (for example, an interface for confirming whether to switch the working mode) and acquire first eye information; determine, based on the first eye information, the user's gaze area on the first confirmation interface; when it is determined that the user's gaze area on the first confirmation interface is a preset first display area, switch the working mode from one of the AR diving mode and the VR diving mode to the other; or, when it is determined that the user's gaze area on the first confirmation interface is a preset second display area, keep the working mode unchanged. In this way, the display of the first confirmation interface is triggered by the collected head information of the user, and whether to switch the working mode is confirmed by the collected eye information of the user, so that the user can conveniently operate the mixed reality device with the head and eyes. This facilitates the use and operation of the diving device and improves its convenience of use.
In an exemplary embodiment, the first preset condition may be that the user's head is in a lowered state, that the user's head is in a raised state, that the user's head performs one or more head-shaking actions within a preset time (for example, the shaking amplitude of the user's head in a first direction is greater than a preset first threshold), that the user's head performs one or more nodding actions within a preset time (for example, the shaking amplitude of the user's head in a second direction is greater than a preset second threshold), or that the user's head performs a circling action within a preset time (for example, the shaking amplitude of the user's head in both the first direction and the second direction is greater than a preset third threshold). For example, when the first head information is preset head information indicating that the user performs a head-shaking action within the preset time, that is, the first head information collected within the preset time all indicates that the shaking amplitude of the user's head in the first direction is greater than the preset threshold, it can be determined that the first head information meets the first preset condition; the first confirmation interface can then pop up so that the user can choose, through eye movements, whether to switch the working mode.
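The trigger-and-confirm flow above can be sketched as a small decision function. This is an illustrative assumption about one possible implementation: the amplitude threshold, the field name `yaw_amplitude`, and the mapping of gaze areas to decisions are invented for the example and are not specified in the disclosure.

```python
SHAKE_THRESHOLD = 0.3   # assumed amplitude threshold in the first direction

def meets_first_preset_condition(head_samples):
    """The condition is met when every head-information sample collected
    within the preset time shows a first-direction shaking amplitude
    greater than the threshold (the head-shake example in the text)."""
    return bool(head_samples) and all(
        abs(s["yaw_amplitude"]) > SHAKE_THRESHOLD for s in head_samples
    )

def next_mode(current_mode, head_samples, gaze_area):
    """Return the working mode after one trigger/confirmation cycle.

    gaze_area: the region of the first confirmation interface the user
    gazes at ('first' = confirm the switch, 'second' = keep the mode).
    """
    if not meets_first_preset_condition(head_samples):
        return current_mode                  # no confirmation interface shown
    if gaze_area == "first":                 # preset first display area
        return "VR" if current_mode == "AR" else "AR"
    return current_mode                      # preset second display area

shake = [{"yaw_amplitude": 0.5}] * 3
assert next_mode("AR", shake, "first") == "VR"   # gaze confirms the switch
assert next_mode("AR", shake, "second") == "AR"  # gaze cancels the switch
```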
In an exemplary embodiment, the processing module is configured to initialize the mixed reality device and set the working mode of the mixed reality device to the AR diving mode. The embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the first eye information may be eye image information of the user. For example, the processing module is configured to determine the user's eyeball orientation according to the user's eye image information; determine the user's line-of-sight direction according to the eyeball orientation; and determine the area of the first confirmation interface located in the line-of-sight direction as the user's gaze area on the first confirmation interface.
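A minimal sketch of the eyeball-orientation-to-gaze-area mapping follows. The linear gaze model, the screen width, and the left/right split of the confirmation interface are assumptions for illustration; the disclosure only states that the area lying in the line-of-sight direction is taken as the gaze area.

```python
def gaze_area(eye_offset_x, screen_width=1920, first_area_ratio=0.5):
    """Map a horizontal eyeball offset (normalized to [-1, 1], e.g.
    derived from the pupil position in the eye image) to a gaze point
    on the confirmation interface, then to a display area.

    Assumed linear model: offset -1 maps to the left edge, +1 to the
    right edge; the left half is the 'first display area' (confirm the
    switch) and the right half the 'second display area' (keep mode).
    """
    x = (eye_offset_x + 1) / 2 * screen_width   # gaze point in pixels
    return "first" if x < screen_width * first_area_ratio else "second"

assert gaze_area(-0.8) == "first"    # looking left -> confirm area
assert gaze_area(0.8) == "second"    # looking right -> cancel area
```

A real gaze tracker would calibrate this mapping per user rather than assume a fixed linear relation.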
In an exemplary embodiment, when the working mode is switched from the AR diving mode to the VR diving mode, the first confirmation interface may be a user interface implemented based on AR technology; or, when the working mode is switched from the VR diving mode to the AR diving mode, the first confirmation interface may be a user interface implemented based on VR technology.
The following describes the multiple functional modes available when the working mode of the mixed reality device is the AR diving mode.
In an exemplary embodiment, the processing module is configured to, in the first functional mode for monitoring whether the underwater environment in which the user is located is in an abnormal state: acquire first environment information; send the first environment information to the server, so that the server determines, based on the first environment information, whether the underwater environment in which the user is located is in an abnormal state; receive an augmented reality picture sent by the server that contains warning information indicating that the underwater environment in which the user is located is in an abnormal state; and control the display module to display the augmented reality picture containing the warning information. In this way, whether the underwater environment in which the user is located is in an abnormal state can be monitored, and the user can be warned by the warning information when an abnormal state occurs. The personal safety of divers can thereby be protected, their sense of security improved, and their interest in diving activities enhanced.
In an exemplary embodiment, an abnormal state of the underwater environment in which the user is located may include: the appearance, in the vicinity of the user (for example, within a region at a preset distance around the user's position), of a dangerous object or a dangerous environment that may threaten the user's life. For example, dangerous objects may include dangerous animals, dangerous plants, or obstacles. For example, a dangerous environment may include a water flow speed exceeding a preset threshold. The exemplary embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the warning information may include one or more of: information about a dangerous object in the underwater environment that threatens the user's life, and navigation information for instructing the user to travel along a first target route, wherein the first target route is a route capable of avoiding the dangerous object.
In an exemplary embodiment, taking the case where the warning information includes both information about a dangerous object in the underwater environment that threatens the user's life and navigation information for instructing the user to travel along the first target route as an example, the processing module is further configured to: control the display module to display an augmented reality picture containing the information about the dangerous object, and acquire second eye information; determine, based on the second eye information, whether the user has gazed at the information about the dangerous object; and, when it is determined that the user has gazed at the information about the dangerous object, control the display module to display an augmented reality picture containing the navigation information. In this way, during the user's dive, when a dangerous object appears in the underwater environment in which the user is located, the mixed reality device can display the information about the dangerous object to the user, and, after confirming that the user has seen that information, display the navigation information so that the user can avoid the dangerous object while traveling. Thus, when a dangerous object is detected in the underwater environment in which the user is located, it can be ensured that the user notices the information about the dangerous object, and navigation information for avoiding the dangerous object can be displayed to the user. The personal safety of divers in the underwater environment can thereby be effectively protected, their sense of security improved, and their interest in diving activities enhanced.
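The two-stage warning flow (show the danger information first, then show the navigation only after the gaze confirms the user has seen it) can be sketched as a small state machine. The stage names and the function signature are illustrative assumptions:

```python
from enum import Enum, auto

class WarningStage(Enum):
    IDLE = auto()
    SHOW_DANGER_INFO = auto()   # AR picture with the dangerous-object info
    SHOW_NAVIGATION = auto()    # AR picture with the avoidance route

def advance(stage, danger_detected, user_gazed_at_info):
    """One step of the two-stage warning flow described above."""
    if stage is WarningStage.IDLE and danger_detected:
        return WarningStage.SHOW_DANGER_INFO
    if stage is WarningStage.SHOW_DANGER_INFO and user_gazed_at_info:
        return WarningStage.SHOW_NAVIGATION   # only after gaze is confirmed
    return stage

stage = WarningStage.IDLE
stage = advance(stage, danger_detected=True, user_gazed_at_info=False)
assert stage is WarningStage.SHOW_DANGER_INFO   # danger shown, not yet seen
stage = advance(stage, danger_detected=True, user_gazed_at_info=True)
assert stage is WarningStage.SHOW_NAVIGATION    # gaze confirmed, route shown
```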
In an exemplary embodiment, the second eye information may be eye image information of the user. For example, the processing module is configured to determine the user's eyeball orientation according to the user's eye image information; determine the user's line-of-sight direction according to the eyeball orientation; and determine, according to whether the displayed information about the dangerous object is located in the line-of-sight direction, whether the user has gazed at that information.
In an exemplary embodiment, the processing module is configured to, in the second functional mode for monitoring whether the user is in an abnormal state: acquire third eye information; when the third eye information meets a second preset condition indicating that the user is in an abnormal state, acquire second environment information; determine the user's position information based on the second environment information; and send the user's position information to the server, so that the server sends the user's position information to other mixed reality devices to request that divers using the other mixed reality devices perform a rescue. In this way, when it is detected, based on the user's eye information, that the diver has remained still for an abnormally long time accompanied by an abnormal eye-gaze state, the device can automatically switch to a distress mode, making it possible to call rescuers in time and to send the user's position information to other mixed reality devices; sending a distress signal to nearby divers facilitates timely rescue. This makes it easier to rescue a diver who has had an accident, reduces the diver's risk, lowers the incidence of diving accidents, and improves the safety of the diving device.
In an exemplary embodiment, whether the user is in an abnormal state may include whether the user is experiencing physical discomfort, for example, whether the user is in a fatigued state or a fainting state. The exemplary embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the second preset condition may be that the user performs eye movements including, but not limited to: the user's line of sight staying in the same area for longer than a preset time, the number of the user's blinks within a preset time being less than a preset threshold, or the user keeping the eyes closed continuously for a preset time. For example, taking the case where the collected eye information of the user includes the user's eye image information as an example, when it is detected that the eye image information contains no iris boundary image region of the user's eyeball, it can be determined that the user is performing an eye-closing action; or, when it is detected, from multiple pieces of eye image information collected within a preset time, that the iris area of the user's eyeball gradually decreases and then gradually increases, it can be determined that the user is performing a blinking action.
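The iris-area-based blink criterion above can be sketched as follows. The area values, the `closed_ratio` threshold, and the minimum blink count are assumptions for illustration; the disclosure only describes the decrease-then-increase pattern and the "fewer blinks than a preset threshold" condition.

```python
def count_blinks(iris_areas, open_area=1.0, closed_ratio=0.2):
    """Count blinks in a time series of per-frame visible iris areas.

    A blink is counted when the iris area drops below
    closed_ratio * open_area and then recovers above it, matching the
    'gradually decreases, then gradually increases' criterion.
    """
    threshold = open_area * closed_ratio
    blinks, eye_closed = 0, False
    for area in iris_areas:
        if not eye_closed and area < threshold:
            eye_closed = True        # iris has (mostly) disappeared
        elif eye_closed and area >= threshold:
            eye_closed = False       # iris visible again: one blink done
            blinks += 1
    return blinks

def meets_second_preset_condition(iris_areas, min_blinks=2):
    """Illustrative check: too few blinks within the observation
    window suggests the user may be in an abnormal state."""
    return count_blinks(iris_areas) < min_blinks

# One blink in the window, below the assumed minimum of two.
series = [1.0, 0.6, 0.1, 0.05, 0.4, 1.0, 1.0, 1.0]
assert count_blinks(series) == 1
assert meets_second_preset_condition(series)
```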
In an exemplary embodiment, the processing module is further configured to receive information about the rescuers sent by the server, and to control the display module to display the rescuers' information.
In an exemplary embodiment, the processing module is configured to, in the third functional mode for displaying introduction information of a gaze object: acquire third environment information and fourth eye information; obtain, based on the fourth eye information, identification information of the user's gaze object in the underwater environment from the third environment information; send the identification information of the gaze object to the server, so that the server sends back introduction information of the gaze object; and receive the augmented reality picture containing the introduction information of the gaze object sent by the server and control the display module to display it. In this way, a diver can control, by eye gaze, the display of introduction information about the gaze object, which helps divers gain an in-depth understanding of underwater objects and environments.
In an exemplary embodiment, the gaze object may include one or more of: at least one underwater object in the underwater environment, and the underwater environment itself. For example, underwater objects may include animals, plants, or rocks. For example, the underwater environment may include an ocean trench or a volcano. The exemplary embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the fourth eye information may be eye image information of the user. For example, the processing module is configured to determine the user's eyeball orientation according to the user's eye image information; determine the user's line-of-sight direction according to the eyeball orientation; and determine the information, within the third environment information, that is located in the line-of-sight direction as the identification information of the user's gaze object in the underwater environment.
In an exemplary embodiment, taking the case where the collected environment information includes environment image information as an example, the identification information of the gaze object may include image information of the gaze object. For example, based on an image recognition algorithm, the image information of the gaze object is extracted from the environment image information, and that image information is determined as the identification information of the gaze object.
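A minimal sketch of extracting the gaze object's image information from the environment image is shown below, using a plain 2-D list as a stand-in for a camera frame. The patch size and the idea of cropping around the gaze point are illustrative assumptions; the disclosure only states that an image recognition algorithm extracts the gaze object's image information.

```python
def crop_gaze_region(image, gaze_xy, half=1):
    """Extract a square patch around the gaze point from an environment
    image (a 2-D list of pixel values). The patch stands in for the
    gaze object's image information that would be sent to the server.
    """
    h, w = len(image), len(image[0])
    gx, gy = gaze_xy
    x0, x1 = max(0, gx - half), min(w, gx + half + 1)
    y0, y1 = max(0, gy - half), min(h, gy + half + 1)
    return [row[x0:x1] for row in image[y0:y1]]

# 4x4 frame; the diver's gaze lands near a bright underwater object (9s).
img = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
patch = crop_gaze_region(img, gaze_xy=(1, 1))
assert patch == [[0, 0, 0], [0, 9, 9], [0, 9, 9]]
```

In practice the crop would feed an object detector or classifier rather than being used directly as the identification information.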
In an exemplary embodiment, the introduction information of the gaze object may include one or more of text, images, and video. For example, taking the case where the target object is a marine animal as an example, the introduction information of the gaze object may include its name, family, classification, morphological features, living habits, or protection level. The exemplary embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the processing module is configured to, in the fourth functional mode for initiating a diving interaction: acquire second head information; when the second head information meets a third preset condition, control the display module to display a second confirmation interface; acquire fifth eye information; determine, based on the fifth eye information, the user's gaze area on the second confirmation interface; and, when it is determined that the user's gaze area on the second confirmation interface is a preset third display area, send a request message to the server for initiating a diving interaction. In this way, the user can conveniently operate the mixed reality device with the head and eyes to initiate a diving interaction activity. By initiating a diving interaction activity, multiple divers can act as a group and conveniently share what they see in real time (for example, the underwater environments they have visited, or the animals and plants they have observed in the underwater environment), thereby making diving more enjoyable.
In an exemplary embodiment, the third preset condition may be that the user's head is in a lowered state, that the user's head is in a raised state, that the user's head performs one or more head-shaking actions within a preset time (for example, the swing amplitude of the user's head in the first direction is greater than a preset first threshold), that the user's head performs one or more nodding actions within a preset time (for example, the swing amplitude of the user's head in the second direction is greater than a preset second threshold), or that the user's head performs a circling action within a preset time (for example, the swing amplitudes of the user's head in both the first direction and the second direction are greater than a preset third threshold). For example, when the second head information is preset head information indicating that the user performs one nodding action within the preset time, that is, the head information collected within the preset time all indicates that the swing amplitude of the user's head in the second direction is greater than the preset threshold, it can be determined that the second head information meets the third preset condition, so that the second confirmation interface can pop up for the user to choose, with eye movements, whether to initiate diving interaction. For example, when the gaze area of the user's eyes on the second confirmation interface is the preset third display area, this indicates that the user chooses to initiate a diving interaction activity.
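The threshold-based gesture test above can be sketched in code. The following is a minimal illustration, not part of the disclosure: the threshold values, the sample structure, and the rule that a nod triggers the third preset condition are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical thresholds standing in for the preset first/second/third
# thresholds of the disclosure; real values are not specified there.
SHAKE_THRESHOLD_DEG = 15.0   # first direction (e.g. yaw)
NOD_THRESHOLD_DEG = 10.0     # second direction (e.g. pitch)
CIRCLE_THRESHOLD_DEG = 12.0  # both directions

@dataclass
class HeadSample:
    yaw_amplitude: float    # swing amplitude in the first direction
    pitch_amplitude: float  # swing amplitude in the second direction

def classify_head_gesture(samples: List[HeadSample]) -> str:
    """Classify the head information collected within the preset time."""
    max_yaw = max(s.yaw_amplitude for s in samples)
    max_pitch = max(s.pitch_amplitude for s in samples)
    if max_yaw > CIRCLE_THRESHOLD_DEG and max_pitch > CIRCLE_THRESHOLD_DEG:
        return "circle"   # large swings in both directions
    if max_pitch > NOD_THRESHOLD_DEG:
        return "nod"      # swing mainly in the second direction
    if max_yaw > SHAKE_THRESHOLD_DEG:
        return "shake"    # swing mainly in the first direction
    return "none"

def meets_third_preset_condition(samples: List[HeadSample]) -> bool:
    # In this sketch a nod triggers the second confirmation interface,
    # matching the example given in the text.
    return classify_head_gesture(samples) == "nod"
```

The same structure would serve for the fourth preset condition described later, with the gesture-to-action mapping swapped out.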
In an exemplary embodiment, the second confirmation interface may be a user interface implemented based on AR technology.
In an exemplary embodiment, the fifth eye information may be eye image information of the user. For example, the processing module is configured to determine the orientation of the user's eyeballs according to the user's eye image information; determine the user's line-of-sight direction according to the eyeball orientation; and determine the area of the second confirmation interface lying in the line-of-sight direction as the gaze area of the user on the second confirmation interface.
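The eye-image → eyeball-orientation → line-of-sight → gaze-area pipeline recurs throughout the disclosure. The sketch below illustrates only the last two steps under assumed geometry: the linear mapping from orientation angles to screen coordinates and the named rectangular regions are hypothetical stand-ins for a calibrated eye-tracking model and a real interface layout.

```python
from typing import Dict, Optional, Tuple

# Each named display region is an (x_min, y_min, x_max, y_max) rectangle
# in normalized screen coordinates; the layout below is hypothetical.
Region = Tuple[float, float, float, float]

def gaze_point(yaw_deg: float, pitch_deg: float) -> Tuple[float, float]:
    """Map an estimated eyeball orientation to a point on the display.
    A linear model assuming +/-30 degrees spans the screen in each axis;
    a real device would use a calibrated model instead."""
    return 0.5 + yaw_deg / 60.0, 0.5 + pitch_deg / 60.0

def gaze_region(yaw_deg: float, pitch_deg: float,
                regions: Dict[str, Region]) -> Optional[str]:
    """Return the name of the display region lying in the line of sight,
    or None if the gaze falls outside every region."""
    x, y = gaze_point(yaw_deg, pitch_deg)
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

# Illustrative layout of a confirmation interface.
CONFIRM_UI = {
    "confirm": (0.0, 0.4, 0.4, 0.6),  # e.g. the preset third display area
    "cancel": (0.6, 0.4, 1.0, 0.6),   # e.g. the preset fourth display area
}
```

The sixth, seventh, eighth, and ninth eye information described later would feed the same function, only with a different `regions` layout per interface.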
In an exemplary embodiment, the processing module is further configured to receive an augmented reality picture containing a diver information list sent by the server and control the display module to display it; acquire the sixth eye information; obtain, based on the sixth eye information, the information of the target diver the user is gazing at from the diver information list; and send the information of the target diver to the server, so that the server sends a request message inviting diving interaction to the mixed reality device corresponding to the target diver. In this way, the user can use the eyes to select, in a targeted manner, the target diver to be invited for diving interaction.
In an exemplary embodiment, the diver information list may include information about one or more divers in the underwater environment within a preset distance centered on the position of the interaction initiator. For example, the diver information in the list may include the diver's location information or the diver's introduction information (for example, name, image information, etc.). The embodiments of the present disclosure are not limited in this respect.
In an exemplary embodiment, the number of target divers may be one or more, for example, two, three, or four. The embodiments of the present disclosure are not limited in this respect.
In an exemplary embodiment, the request message inviting diving interaction may include introduction information of the object to be shared. The object to be shared includes one or more of the underwater environment in which the interaction initiator is located and the underwater objects the interaction initiator has gazed at, and the introduction information of the object to be shared includes one or more of text, image, and video. In this way, multiple divers can act as a whole and share what they see in real time (for example, the underwater environment they have traveled through, and the animals and plants they have observed in it), which enriches diving activities and improves the diving experience.
In an exemplary embodiment, the processing module is further configured to receive an augmented reality picture sent by the server containing the position information of at least one of the target divers, and control the display module to display it. In this way, the initiator of the diving interaction can know the position of the recipient of the diving interaction, which enhances the fun of diving.

In an exemplary embodiment, the processing module is further configured to receive an augmented reality picture containing the updated position information of the at least one diver, and control the display module to display it. In this way, while waiting for the recipient of the diving interaction, the recipient's position can be updated in real time.
In an exemplary embodiment, the sixth eye information may be eye image information of the user. For example, the processing module is configured to determine the orientation of the user's eyeballs according to the user's eye image information; determine the user's line-of-sight direction according to the eyeball orientation; and determine the diver information in the diver information list lying in the line-of-sight direction as the information of the target diver the user is gazing at.
In an exemplary embodiment, the processing module is further configured to display the third confirmation interface when it is determined that the gaze area of the user on the second confirmation interface is the preset fourth display area; acquire the seventh eye information; determine, based on the seventh eye information, the gaze area of the user on the third confirmation interface; and end the current working mode when it is determined that the gaze area of the user on the third confirmation interface is the preset fifth display area. In this way, the user can conveniently operate the mixed reality device with the head and eyes to end the current working mode, for example, to end a diving interaction activity.
In an exemplary embodiment, the third confirmation interface may be a user interface implemented based on AR technology.

In an exemplary embodiment, the seventh eye information may be eye image information of the user. For example, the processing module is configured to determine the orientation of the user's eyeballs according to the user's eye image information; determine the user's line-of-sight direction according to the eyeball orientation; and determine the area of the third confirmation interface lying in the line-of-sight direction as the gaze area of the user on the third confirmation interface.
The following describes the functional modes available when the working mode of the mixed reality device is the VR diving mode.
In an exemplary embodiment, the processing module is configured to, in the fifth functional mode for responding to diving interaction, control the display module to display the fourth confirmation interface in response to a request message sent by the server for confirming whether to carry out diving interaction; acquire the eighth eye information; determine, based on the eighth eye information, the gaze area of the user on the fourth confirmation interface; when it is determined that the gaze area of the user on the fourth confirmation interface is the preset sixth display area, receive a virtual reality picture sent by the server containing the introduction information of the object to be shared; and control the display module to display it. In this way, when a diving interaction invitation is received, the fourth confirmation interface can be displayed so that the user can conveniently operate it with the eyes and decide whether to accept the diving interaction. By accepting a diving interaction activity, multiple divers can share what they see in real time (for example, the underwater environment they have traveled through, and the animals and plants they have observed in it), which enhances the fun of diving. Moreover, displaying the introduction information of the object to be shared through VR technology introduces virtual underwater environment information into the real underwater environment, which improves the user's diving experience.
In an exemplary embodiment, the object to be shared includes one or more of the underwater environment in which the initiator of the diving interaction is located and the underwater objects the interaction initiator has gazed at, and the introduction information of the object to be shared includes one or more of text, image, and video. In this way, multiple divers can act as a whole and share what they see in real time, which enriches diving activities and improves the diving experience.

In an exemplary embodiment, the fourth confirmation interface may be a user interface implemented based on VR technology.
In an exemplary embodiment, the processing module is further configured to send a response message accepting the diving interaction to the server, so that the server delivers second navigation information for instructing the user to travel along a second target route; and receive a virtual reality picture sent by the server containing the second navigation information and control the display module to display it, where the second target route is a route from the position of the recipient of the diving interaction to the position of the initiator of the diving interaction. In this way, during the diving interaction, displaying the virtual reality picture containing the second navigation information allows the recipient of the diving interaction to travel to the underwater environment where the initiator is located and engage in contact interaction, which enhances the fun of diving. Moreover, it helps the recipient quickly reach the underwater environment where the initiator is located, greatly saving time.
In an exemplary embodiment, the processing module is further configured to acquire the third head information; when the third head information meets the fourth preset condition, control the display module to display the fifth confirmation interface; acquire the ninth eye information; determine, based on the ninth eye information, the gaze area of the user on the fifth confirmation interface; and, when it is determined that the gaze area of the user on the fifth confirmation interface is the preset seventh display area, send a response message accepting the diving interaction to the server, so that the server delivers the second navigation information for instructing the user to travel along the second target route. In this way, the user can control, with the head and eyes, whether to send a response message accepting the diving interaction to the server.
In an exemplary embodiment, the fourth preset condition may be that the user's head is in a lowered state, that the user's head is in a raised state, that the user's head performs one or more head-shaking actions within a preset time (for example, the swing amplitude of the user's head in the first direction is greater than a preset first threshold), that the user's head performs one or more nodding actions within a preset time (for example, the swing amplitude of the user's head in the second direction is greater than a preset second threshold), or that the user's head performs a circling action within a preset time (for example, the swing amplitudes of the user's head in both the first direction and the second direction are greater than a preset third threshold). For example, when the third head information is preset head information indicating that the user has performed one nodding action within the preset time, it can be determined that the third head information meets the fourth preset condition, so that the fifth confirmation interface can pop up for the user to confirm, with eye movements, whether to send a response message accepting the diving interaction to the server.
In an exemplary embodiment, the ninth eye information may be eye image information of the user. For example, the processing module is configured to determine the orientation of the user's eyeballs according to the user's eye image information; determine the user's line-of-sight direction according to the eyeball orientation; and determine the area of the fifth confirmation interface lying in the line-of-sight direction as the gaze area of the user on the fifth confirmation interface.

In an exemplary embodiment, the fifth confirmation interface (that is, the interface for confirming whether to travel to the area where the initiator of the diving interaction is located) may be a user interface implemented based on VR technology.
In an exemplary embodiment, the processing module is configured to, in the sixth functional mode for displaying the diving track, acquire the fifth environment information; determine the user's current location information based on the fifth environment information; generate, based on the user's current location information and historical location information, a virtual reality picture containing the user's diving track; and control the display module to display it. In this way, the diver's current location can be obtained, making it convenient for divers to mark the routes they have traveled and special landmarks. The VR diving mode can also shrink the overall view, helping divers form an overall awareness so that the diving direction is better guided, allowing them to quickly reach the next diving destination and greatly saving diving energy and time.
In an exemplary embodiment, the fifth environment information may be environment location information. In this way, the environment location information of the underwater environment in which the user is currently located can be used as the user's current location information, thereby obtaining the user's location.
The embodiments of the present disclosure further provide an information processing method, which can be applied to the mixed reality device of any one or more of the above exemplary embodiments.

Fig. 3 is a schematic flowchart of an information processing method in an exemplary embodiment of the present disclosure. As shown in Fig. 3, the information processing method may include:
Step 31: acquire diving information through the diving information collection module, the diving information including one or more of the user's head information, the user's eye information, and environment information of the underwater environment in which the user is located;
Step 32: based on the diving information, implement the functions of either of an augmented reality (AR) diving mode and a virtual reality (VR) diving mode;
The AR diving mode includes one or more of a first functional mode for monitoring whether the underwater environment in which the user is located is in an abnormal state, a second functional mode for monitoring whether the user is in an abnormal state, a third functional mode for displaying introduction information of the gaze object, and a fourth functional mode for initiating diving interaction; the VR diving mode includes one or more of a fifth functional mode for responding to diving interaction and a sixth functional mode for displaying the diving track.
Step 33: control the display module to display either of an augmented reality picture and a virtual reality picture.
In an exemplary embodiment, step 32 may include:

Step 3211: acquire the first head information;

Step 3212: when the first head information meets the first preset condition, control the display module to display the first confirmation interface and acquire the first eye information;

Step 3213: determine, based on the first eye information, the gaze area of the user on the first confirmation interface;
Step 3214: when it is determined that the gaze area of the user on the first confirmation interface is the preset first display area, switch the working mode from one of the AR diving mode and the VR diving mode to the other; or, when it is determined that the gaze area of the user on the first confirmation interface is the preset second display area, keep the working mode unchanged.
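The mode toggle of step 3214 can be summarized as a small decision function. The mode and area names below are illustrative labels, not identifiers from the disclosure.

```python
def next_working_mode(current_mode: str, gaze_area: str) -> str:
    """Step 3214: decide the working mode from the gaze area on the
    first confirmation interface. Modes are "AR" and "VR"."""
    if gaze_area == "first_display_area":
        # First branch: toggle between the two diving modes.
        return "VR" if current_mode == "AR" else "AR"
    # Second branch (and any other gaze area): keep the mode unchanged.
    return current_mode
```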
In an exemplary embodiment, step 32 may include:

Step 3221: in the first functional mode, acquire the first environment information;

Step 3222: send the first environment information to the server, so that the server determines, based on the first environment information, whether the underwater environment in which the user is located is in an abnormal state;

Step 3223: receive an augmented reality picture sent by the server containing warning information indicating that the underwater environment in which the user is located is in an abnormal state;

Step 3224: control the display module to display the augmented reality picture containing the warning information.

In an exemplary embodiment, the warning information includes one or more of information about a dangerous object in the user's underwater environment that threatens the user's life and navigation information for instructing the user to travel along a first target route, where the first target route is a route that avoids the dangerous object.
In an exemplary embodiment, step 3224 may include: controlling the display module to display an augmented reality picture containing the information about the dangerous object and acquiring the second eye information; determining, based on the second eye information, whether the user has gazed at the information about the dangerous object; and, when it is determined that the user has gazed at the information about the dangerous object, controlling the display module to display an augmented reality picture containing the navigation information.
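The two-stage display in this refinement of step 3224 (danger information first, navigation only after the user has gazed at it) can be sketched as follows; the string "pictures" merely stand in for real AR render calls, which the disclosure does not detail.

```python
from typing import List

def warning_screens(danger_info: str, navigation_info: str,
                    gazed_at_danger: bool) -> List[str]:
    """Return the ordered list of AR pictures to display: the danger
    information is always shown; the navigation overlay is added only
    once the user's gaze has been detected on the danger information."""
    screens = [f"AR: danger {danger_info}"]
    if gazed_at_danger:
        screens.append(f"AR: navigate {navigation_info}")
    return screens
```

Gating the navigation on a confirmed gaze keeps the display uncluttered until the diver has actually registered the threat.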
In an exemplary embodiment, step 32 may include:

Step 3231: in the second functional mode, acquire the third eye information;

Step 3232: when the third eye information meets the second preset condition indicating that the user is in an abnormal state, acquire the second environment information;

Step 3233: determine the user's location information based on the second environment information;

Step 3234: send the user's location information to the server, so that the server forwards it to other mixed reality devices to request rescue by the divers using those devices.
In an exemplary embodiment, after step 3234, step 32 may further include:

Step 3235: receive the rescuer information sent by the server;

Step 3236: control the display module to display the rescuer information.
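Steps 3232 to 3234 can be illustrated with a sketch that treats prolonged eye closure as the second preset condition. The closure threshold, the position tuple, and the `RescueRequest` structure are all assumptions made for the example, since the disclosure does not specify what the abnormal-state condition or the location message look like.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class RescueRequest:
    """Illustrative location message sent to the server for rescue."""
    user_id: str
    latitude: float
    longitude: float
    depth_m: float

# Assumed threshold: eyes closed longer than this stands in for the
# second preset condition of the disclosure.
EYES_CLOSED_LIMIT_S = 5.0

def build_rescue_request(user_id: str, eyes_closed_seconds: float,
                         position: Tuple[float, float, float]) -> Optional[RescueRequest]:
    """Steps 3232-3234: once the abnormal-state condition holds, derive
    the user's location from the environment information and report it."""
    if eyes_closed_seconds <= EYES_CLOSED_LIMIT_S:
        return None  # condition not met: nothing to send
    lat, lon, depth = position
    return RescueRequest(user_id, lat, lon, depth)
```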
In an exemplary embodiment, step 32 may include:

Step 3241: in the third functional mode, acquire the third environment information and the fourth eye information;

Step 3242: obtain, based on the fourth eye information and from the third environment information, the identification information of the object the user is gazing at in the underwater environment;

Step 3243: send the identification information of the gaze object to the server, so that the server returns the introduction information of the gaze object;

Step 3244: receive an augmented reality picture sent by the server containing the introduction information of the gaze object, and control the display module to display it.
In an exemplary embodiment, step 32 may include:

Step 3251: in the fourth functional mode, acquire the second head information;

Step 3252: when the second head information meets the third preset condition, control the display module to display the second confirmation interface;

Step 3253: acquire the fifth eye information, and determine, based on the fifth eye information, the gaze area of the user on the second confirmation interface;

Step 3254a: when it is determined that the gaze area of the user on the second confirmation interface is the preset third display area, send a request message to the server for requesting initiation of diving interaction.
In an exemplary embodiment, after step 3254a, step 32 may further include:

Step 3255a: receive an augmented reality picture sent by the server containing the diver information list, and control the display module to display it;

Step 3256a: acquire the sixth eye information;

Step 3257a: obtain, based on the sixth eye information, the information of the target diver the user is gazing at from the diver information list;

Step 3258a: send the information of the target diver to the server, so that the server sends a request message inviting diving interaction to the mixed reality device corresponding to the target diver.
In an exemplary embodiment, the request message inviting diving interaction includes introduction information of the object to be shared. The object to be shared includes one or more of the underwater environment and underwater objects, and the introduction information of the object to be shared includes one or more of text, image, and video.
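The contents of the invitation request message can be modeled as simple data structures. The field names below are illustrative only; the disclosure specifies the kinds of information carried, not a message format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ShareIntro:
    """Introduction information of one object to be shared:
    one or more of text, images, and videos."""
    text: str = ""
    image_urls: List[str] = field(default_factory=list)
    video_urls: List[str] = field(default_factory=list)

@dataclass
class DiveInvitation:
    """Request message inviting diving interaction (illustrative fields)."""
    initiator_id: str
    target_diver_ids: List[str]
    shared_objects: List[ShareIntro]

def build_invitation(initiator_id: str, targets: List[str],
                     intros: List[ShareIntro]) -> DiveInvitation:
    # Copy the lists so later mutation by the caller cannot alter the message.
    return DiveInvitation(initiator_id, list(targets), list(intros))
```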
In an exemplary embodiment, after step 3258a, step 32 may further include:

Step 3259a: receive an augmented reality picture sent by the server containing the position information of at least one of the target divers, and control the display module to display it.
In an exemplary embodiment, after step 3253, step 32 may include:

Step 3254b: when it is determined that the gaze area of the user on the second confirmation interface is the preset fourth display area, display the third confirmation interface;

Step 3255b: acquire the seventh eye information;

Step 3256b: determine, based on the seventh eye information, the gaze area of the user on the third confirmation interface;

Step 3257b: when it is determined that the gaze area of the user on the third confirmation interface is the preset fifth display area, end the current working mode.
In an exemplary embodiment, step 32 may include:

Step 3261: in the fifth functional mode, in response to a request message sent by the server inviting diving interaction, control the display module to display the fourth confirmation interface;

Step 3262: acquire the eighth eye information;

Step 3263: determine, based on the eighth eye information, the gaze area of the user on the fourth confirmation interface;

Step 3264: when it is determined that the gaze area of the user on the fourth confirmation interface is the preset sixth display area, receive a virtual reality picture sent by the server containing the introduction information of the object to be shared;

Step 3265: control the display module to display the virtual reality picture containing the introduction information of the object to be shared.

The object to be shared includes one or more of the underwater environment and underwater objects, and the introduction information of the object to be shared includes one or more of text, image, and video.
In an exemplary embodiment, after step 3265, step 32 may further include:

Step 3266: send a response message accepting the diving interaction to the server, so that the server delivers second navigation information for instructing the user to travel along a second target route;

Step 3267: receive a virtual reality picture sent by the server containing the second navigation information, and control the display module to display it.

The second target route is a route from the position of the recipient of the diving interaction to the position of the initiator of the diving interaction.
In an exemplary embodiment, step 3266 may include: acquiring the third head information; when the third head information meets the fourth preset condition, controlling the display module to display the fifth confirmation interface; acquiring the ninth eye information; determining, based on the ninth eye information, the gaze area of the user on the fifth confirmation interface; and, when it is determined that the gaze area of the user on the fifth confirmation interface is the preset seventh display area, sending a response message accepting the diving interaction to the server, so that the server delivers the second navigation information for instructing the user to travel along the second target route. In this way, the user can control, with the head and eyes, whether to send a response message accepting the diving interaction to the server.
在一种示例性实施例中,步骤32可以包括:In an exemplary embodiment, step 32 may include:
步骤3271:在第六功能模式,获取第五环境信息;Step 3271: In the sixth function mode, obtain fifth environmental information;
步骤3272:基于第五环境信息,确定用户的当前位置信息;Step 3272: Determine the current location information of the user based on the fifth environmental information;
步骤3273:基于用户的当前位置信息和历史位置信息,生成包含用户潜水轨迹的虚拟现实画面;Step 3273: Based on the user's current location information and historical location information, generate a virtual reality screen including the user's diving track;
步骤3274:控制显示模块显示包含用户潜水轨迹的虚拟现实画面。Step 3274: Control the display module to display the virtual reality screen including the user's diving track.
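Steps 3271 to 3274 above can be sketched as follows. This is a minimal illustration under stated assumptions: the localization of step 3272 is stubbed out, and all names (`estimate_position`, `render_track_frame`, etc.) are hypothetical, not part of the disclosure:

```python
# Illustrative sketch of steps 3271-3274: derive the user's current position
# from environment information and extend the stored diving track with it.

def estimate_position(environment_info):
    """Stand-in for step 3272: derive a position from environment information.
    A real device would localize from camera imagery and sensors."""
    return environment_info["position"]

def update_dive_track(environment_info, history):
    """Step 3273 input: append the current position to the historical track."""
    current = estimate_position(environment_info)
    return history + [current]

def render_track_frame(track):
    """Stand-in for the VR picture of steps 3273/3274: a plain description."""
    return {"type": "vr_frame", "track": track, "points": len(track)}

# Example usage: two historical positions plus one newly estimated one
history = [(0.0, 0.0, -5.0), (1.0, 0.5, -5.5)]
track = update_dive_track({"position": (2.0, 1.0, -6.0)}, history)
frame = render_track_frame(track)
```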
下面以AR潜水模式包括:用于监测用户所处的水下环境是否出现异常状态的第一功能模式、用于监测用户是否出现异常状态的第二功能模式、用于展示注视对象的介绍信息的第三功能模式和用于发起潜水互动的第四功能模式,VR潜水模式包括:用于响应潜水互动的第五功能模式和用于展示潜水轨迹的第六功能模式为例,以示例性实施例对上述信息处理方法的应用场景进行说明。The following takes as an example the case where the AR diving mode includes: a first functional mode for monitoring whether the underwater environment in which the user is located is in an abnormal state, a second functional mode for monitoring whether the user is in an abnormal state, a third functional mode for displaying the introduction information of a gazed object, and a fourth functional mode for initiating a diving interaction; and the VR diving mode includes: a fifth functional mode for responding to a diving interaction and a sixth functional mode for displaying the diving track. Application scenarios of the above information processing method are described below through exemplary embodiments.
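The mode structure enumerated above can be summarized as a small dispatch table. This is an illustrative sketch only; the handler names are placeholders for the per-step logic described in the text, not identifiers from the disclosure:

```python
# Coarse sketch of the example scenario: each working mode (AR/VR diving)
# bundles numbered functional modes, and a dispatcher routes to the active one.
FUNCTION_MODES = {
    "AR": {1: "monitor_environment", 2: "monitor_user",
           3: "show_gaze_info",      4: "initiate_interaction"},
    "VR": {5: "respond_interaction", 6: "show_dive_track"},
}

def dispatch(work_mode, function_mode):
    """Return the handler name for the active mode, or None if the
    functional mode does not belong to the given working mode."""
    return FUNCTION_MODES.get(work_mode, {}).get(function_mode)
```

Keeping the mapping explicit makes the constraint in the text visible: the fifth and sixth functional modes are only reachable from the VR diving mode.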
在一种示例性实施例中,信息处理方法可以包括以下过程:In an exemplary embodiment, the information processing method may include the following processes:
步骤1:处理模块控制显示模块显示初始化设置界面,用户通过眼部可以对初始化设置界面进行操作,选择初始化工作模式。例如,初始化工作模式可以选为AR潜水模式。Step 1: The processing module controls the display module to display the initialization setting interface, and the user can operate the initialization setting interface through eyes and select the initialization working mode. For example, the initial working mode can be selected as AR diving mode.
步骤2:头部信息采集模块采集第一头部信息,并将第一头部信息发送至处理模块。当第一头部信息符合第一预设条件(例如,在第一方向上用户的头部的晃动幅度大于预设第一阈值)时,处理模块控制显示模块显示第一确认界面(即确认切换界面);或者,当第一头部信息不符合第一预设条件时,保持当前的工作模式。Step 2: The head information collection module collects first head information and sends it to the processing module. When the first head information meets the first preset condition (for example, the shaking amplitude of the user's head in the first direction is greater than the preset first threshold), the processing module controls the display module to display the first confirmation interface (i.e., the switch-confirmation interface); or, when the first head information does not meet the first preset condition, the current working mode is maintained.
步骤3:眼部信息采集模块获取潜水人员的第一眼部信息;当处理模块确定用户在第一确认界面的注视区域(即眼部注视信息)为用于控制切换工作模式的预设第一显示区域时,处理模块将工作模式从AR潜水模式切换至VR潜水模式,可以继续执行步骤20;或者,当处理模块确定用户在第一确认界面的注视区域为预设第二显示区域时,保持工作模式不变,可以继续执行步骤4。Step 3: The eye information collection module acquires the diver's first eye information. When the processing module determines that the user's gaze area on the first confirmation interface (i.e., the eye gaze information) is the preset first display area used to control switching of the working mode, the processing module switches the working mode from the AR diving mode to the VR diving mode and may proceed to step 20; or, when the processing module determines that the user's gaze area on the first confirmation interface is the preset second display area, the working mode is kept unchanged and step 4 may be performed next.
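The mode-switch decision of steps 2 and 3 can be condensed into one function. This is a hedged sketch: the threshold value and the region names are illustrative assumptions, not values from the disclosure:

```python
# Minimal sketch of the steps 1-3 flow: a head shake beyond a threshold opens
# the switch-confirmation interface, and the gaze region decides whether to
# toggle between the AR and VR diving modes.
FIRST_THRESHOLD = 0.6  # assumed shake-amplitude threshold (first preset condition)

def next_mode(current_mode, head_info, gaze_area):
    """Return the working mode after one confirmation round."""
    if head_info.get("shake_amplitude", 0.0) <= FIRST_THRESHOLD:
        return current_mode                # first preset condition not met: no change
    if gaze_area == "first_display_area":  # user confirmed the switch
        return "VR" if current_mode == "AR" else "AR"
    return current_mode                    # user gazed at the second display area
```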
步骤4:环境信息采集模块(例如,包括多个摄像头)实时采集用户所处的水下环境的环境图像信息(例如,第一环境信息)并传送至处理模块。Step 4: The environmental information collection module (for example, including multiple cameras) collects, in real time, environmental image information (for example, the first environmental information) of the underwater environment in which the user is located, and transmits it to the processing module.
步骤5:处理模块实时获取第一环境信息并发送给服务器,服务器返回给处理器当前所处环境的处理结果:当服务器基于第一环境信息,计算出用户所处的水下环境中出现威胁用户生命安全的危险物体(例如,危险生物或者障碍物)时,表明用户所处的水下环境出现异常状态,则执行步骤6,或者,当前所处环境的处理结果表明用户所处的水下环境未出现异常状态,则执行步骤8;Step 5: The processing module acquires the first environmental information in real time and sends it to the server, and the server returns a processing result for the current environment: when, based on the first environmental information, the server determines that a dangerous object threatening the user's life (for example, a dangerous creature or an obstacle) is present in the underwater environment in which the user is located, indicating that the underwater environment is in an abnormal state, step 6 is performed; or, when the processing result indicates that the underwater environment is not in an abnormal state, step 8 is performed;
步骤6:处理模块获取服务器发送的包含用于指示用户所处的水下环境出现威胁用户生命安全的危险物体的警示信息的增强现实画面,其中,警示信息可以包括:危险物体的位置及大小,控制显示模块显示包含警示信息的增强现实画面,以进行警示提醒;Step 6: The processing module obtains, from the server, an augmented reality picture containing warning information indicating that a dangerous object threatening the user's life is present in the underwater environment in which the user is located, where the warning information may include the position and size of the dangerous object, and controls the display module to display the augmented reality picture containing the warning information as a warning reminder;
步骤7:在控制显示模块显示包含危险物体的信息的增强现实画面之后,处理模块获取由眼部信息采集模块所采集的第二眼部信息,当处理模块基于第二眼部信息确定潜水人员注视危险物体的信息后,处理模块控制显示模块显示包含导航信息的增强现实画面。继续执行步骤4至步骤7监测用户所处的水下环境是否出现异常状态;Step 7: After controlling the display module to display the augmented reality picture containing the information about the dangerous object, the processing module obtains the second eye information collected by the eye information collection module; after the processing module determines, based on the second eye information, that the diver has gazed at the information about the dangerous object, the processing module controls the display module to display an augmented reality picture containing navigation information. Steps 4 to 7 continue to be performed to monitor whether the underwater environment in which the user is located is in an abnormal state;
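The warning-then-navigation transition of steps 5 to 7 can be sketched as a small overlay selector. The detection itself is stubbed out, and every name here is an illustrative assumption, not part of the disclosure:

```python
# Illustrative sketch of steps 5-7: the server's environment analysis flags a
# dangerous object, the headset first shows a warning overlay, and once the
# user's gaze lands on the flagged object, a navigation overlay replaces it.

def analyze_environment(environment_info):
    """Stand-in for the server-side step 5: report any dangerous object."""
    return environment_info.get("dangerous_object")  # None when all is safe

def next_overlay(environment_info, gaze_target):
    """Choose the AR overlay: warning first, then navigation once the danger is seen."""
    danger = analyze_environment(environment_info)
    if danger is None:
        return {"overlay": "none"}
    if gaze_target == danger["id"]:
        return {"overlay": "navigation", "avoid": danger["id"]}
    return {"overlay": "warning", "object": danger}

# Example usage: a flagged object with position and size, as in step 6
danger_env = {"dangerous_object": {"id": "obstacle-1",
                                   "position": (3.0, 0.0, -4.0), "size": 2.5}}
```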
步骤8:眼部信息采集模块获取潜水人员的第三眼部信息并发送至处理模块;Step 8: The eye information collection module acquires the diver's third eye information and sends it to the processing module;
步骤9:处理模块实时记录眼球活跃时长数据,当第三眼部信息符合用户处于异常状态的第二预设条件(例如,用户连续不眨眼或不睁眼的时长超过预设时长)时,执行步骤10,或者,当第三眼部信息不符合用户处于异常状态的第二预设条件,执行步骤13;Step 9: The processing module records eye-activity duration data in real time. When the third eye information meets the second preset condition indicating that the user is in an abnormal state (for example, the user has not blinked, or has not opened the eyes, for longer than a preset duration), step 10 is performed; or, when the third eye information does not meet the second preset condition, step 13 is performed;
步骤10:环境信息采集模块获取第二环境信息(包括环境位置信息)并发送至处理模块,以使处理模块基于第二环境信息,计算处于异常状态的用户的位置信息,将用户的位置信息发送至服务器;Step 10: The environmental information collection module acquires the second environmental information (including environmental location information) and sends it to the processing module, so that the processing module calculates the location information of the user in an abnormal state based on the second environmental information, and sends the user's location information to the server;
步骤11:服务器获取处于异常状态的用户的周围潜水人员信息列表,并对周围潜水人员信息列表对应的处理模块发送求救信息,以请求其它潜水人员进行救援;Step 11: The server obtains the information list of surrounding divers of the user in an abnormal state, and sends a distress message to the processing module corresponding to the information list of surrounding divers, so as to request other divers to rescue;
步骤12:处理模块接收到服务器发送的救援人员的信息,控制显示模块显示救援人员信息及距离,执行步骤28;Step 12: The processing module receives the rescuer information sent by the server, controls the display module to display the rescuer information and distance, and performs step 28;
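The abnormal-state detection and distress fan-out of steps 9 to 12 can be sketched as follows. This is a hedged illustration: the duration value and all function names are assumptions, and the transport to other divers is abstracted as a callable:

```python
# Sketch of steps 9-12: flag an abnormal user state from eye activity (no
# blink / eyes not opened beyond a preset duration), then fan a distress
# message out to the divers on the surrounding-divers list.
MAX_STILL_SECONDS = 10.0  # assumed duration for the second preset condition

def is_abnormal(eye_samples, now):
    """True when no blink or eye-open event occurred within the preset duration.
    eye_samples is a list of (timestamp, event) pairs."""
    last_activity = max((t for t, event in eye_samples
                         if event in ("blink", "open")), default=None)
    if last_activity is None:
        return True
    return (now - last_activity) > MAX_STILL_SECONDS

def send_distress(user_position, nearby_divers, send):
    """Step 11: forward a distress message to every diver in the list."""
    for diver_id in nearby_divers:
        send(diver_id, {"type": "distress", "position": user_position})
    return len(nearby_divers)
```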
步骤13:根据眼部信息采集模块获取的第四眼部信息以及环境信息采集模块采集的第三环境信息,处理模块计算潜水人员所注视的注视对象的标识信息,并从服务器获取注视对象的介绍信息,控制显示模块进行显示,以方便潜水人员深入了解水底生物及环境;Step 13: Based on the fourth eye information obtained by the eye information collection module and the third environmental information collected by the environmental information collection module, the processing module determines the identification information of the object the diver is gazing at, obtains the introduction information of the gazed object from the server, and controls the display module to display it, so that the diver can gain a deeper understanding of underwater creatures and the environment;
步骤14:当处理模块在AR潜水模式下获取到的第二头部信息符合第三预设条件(例如,表征潜水人员进行了点头动作)时,处理模块控制显示模块显示第二确认界面(即确认共享视图界面),用户通过眼部交互后,可确认是否共享此时视场的视图。当确定用户在第二确认界面的注视区域为预设第三显示区域时,确认选择发起潜水互动,以共享视图,则执行步骤15,或者,当处理模块确定用户在第二确认界面的注视区域为预设第四显示区域时,显示第三确认界面,执行步骤19,以确认是否结束当前工作模式;Step 14: When the second head information obtained by the processing module in the AR diving mode meets the third preset condition (for example, indicating that the diver has nodded), the processing module controls the display module to display the second confirmation interface (i.e., the share-view confirmation interface); through eye interaction, the user can then confirm whether to share the current field-of-view. When it is determined that the user's gaze area on the second confirmation interface is the preset third display area, confirming the choice to initiate a diving interaction and share the view, step 15 is performed; or, when the processing module determines that the user's gaze area on the second confirmation interface is the preset fourth display area, the third confirmation interface is displayed and step 19 is performed to confirm whether to end the current working mode;
步骤15:处理模块向服务器发送用于请求发起潜水互动的请求消息,服务器响应于该请求消息,将获取到的该潜水领域中处于VR潜水模式的潜水人员信息列表发送给潜水互动发起者对应的处理模块;Step 15: The processing module sends a request message to the server requesting initiation of a diving interaction; in response to the request message, the server obtains a list of divers in the diving area who are in the VR diving mode and sends it to the processing module corresponding to the initiator of the diving interaction;
步骤16:处理模块控制显示模块显示包含潜水人员信息列表的增强现实画面,潜水互动发起者可通过眼部选择潜水互动对象,并将目标潜水人员的信息发送至服务器,服务器将进行一对一或者一对多发送用于邀请进行潜水互动的请求消息,并接受所反馈的是否接受邀请的结果;Step 16: The processing module controls the display module to display an augmented reality picture containing the diver information list. The initiator of the diving interaction can select diving-interaction partners with the eyes, and the information of the target divers is sent to the server; the server then sends the request message inviting the diving interaction one-to-one or one-to-many and receives the replies indicating whether the invitation is accepted;
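The one-to-one / one-to-many invitation fan-out of step 16 can be sketched as follows. The transport is abstracted as a callable and every name here is a hypothetical assumption, not an identifier from the disclosure:

```python
# Sketch of the step-16 server behavior: send a diving-interaction invitation
# to each selected diver (one target = one-to-one, several = one-to-many)
# and collect the accept/decline replies.

def invite_divers(initiator, targets, ask):
    """Send an invitation to each target; return {diver_id: accepted?}.
    `ask` stands in for the request/reply round trip to a diver's device."""
    results = {}
    for diver_id in targets:
        results[diver_id] = ask(diver_id, {"type": "dive_invite",
                                           "from": initiator})
    return results

# Example usage: only diver_B accepts in this illustrative round trip
replies = invite_divers("diver_A", ["diver_B", "diver_C"],
                        lambda diver_id, msg: diver_id == "diver_B")
```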
步骤17:潜水互动发起者对应的处理模块接收到服务器的反馈信息后,可通过选择是否等待潜水同伴的到来进行潜水互动,当潜水互动发起者选择等待,则执行步骤18,或者,当潜水互动发起者选择不等待,则执行步骤2;Step 17: After the processing module corresponding to the initiator of the diving interaction receives the server's feedback, the initiator can choose whether to wait for the diving companion to arrive before starting the interaction. When the initiator chooses to wait, step 18 is performed; or, when the initiator chooses not to wait, step 2 is performed;
步骤18:服务器实时更新接受邀请的潜水人员的位置信息,计算出距离并反馈至潜水互动发起者的处理模块,以便进行显示界面更新;Step 18: The server updates the location information of the divers who accepted the invitation in real time, calculates the distances, and feeds them back to the initiator's processing module so that the display interface can be updated;
步骤19:当处理模块基于第七眼部信息,确定用户在第三确认界面的注视区域为预设第五显示区域时,则执行步骤28,或者,确定用户在第三确认界面的注视区域不为预设第五显示区域,则继续执行步骤2;Step 19: When, based on the seventh eye information, the processing module determines that the user's gaze area on the third confirmation interface is the preset fifth display area, step 28 is performed; or, when the gaze area is determined not to be the preset fifth display area, step 2 continues to be performed;
步骤20:处理模块监测是否接收到服务器发送的用于邀请进行潜水互动的请求消息。当接收到该请求消息,处理模块控制显示模块显示第四确认界面(例如,确认是否显示潜水互动发起方所共享待分享对象的介绍信息的界面)。Step 20: The processing module monitors whether a request message for inviting diving interaction sent by the server is received. When receiving the request message, the processing module controls the display module to display a fourth confirmation interface (for example, an interface for confirming whether to display the introduction information of the object to be shared shared by the initiator of the diving interaction).
步骤21:处理模块基于获取到的第八眼部信息,确定用户在第四确认界面的注视区域为预设第六显示区域,表明用户选择接受潜水互动,则接收服务器发送的包含待分享对象的介绍信息的虚拟现实画面;控制显示模块显示包含待分享对象的介绍信息的虚拟现实画面之后,执行步骤22,或者,若用户选择不接受,则执行步骤24;Step 21: Based on the acquired eighth eye information, the processing module determines that the user's gaze area on the fourth confirmation interface is the preset sixth display area, indicating that the user has chosen to accept the diving interaction, and then receives the virtual reality picture containing the introduction information of the object to be shared sent by the server; after controlling the display module to display this picture, step 22 is performed; or, if the user chooses not to accept, step 24 is performed;
步骤22:当处理模块在AR潜水模式下获取到的第三头部信息符合第四预设条件(例如,表征潜水人员进行了点头动作)时,处理模块控制显示模块显示第五确认界面(例如,确认是否游历至潜水互动发起方所在区域进行潜水互动的界面)。之后,通过用户的眼部进行交互,确认是否加入接触式潜水互动邀请,当用户选择加入邀请,则执行步骤23,以便游历至潜水互动发起方所在区域,或者,当用户选择不加入邀请,则执行步骤24;Step 22: When the third head information obtained by the processing module in the AR diving mode meets the fourth preset condition (for example, indicating that the diver has nodded), the processing module controls the display module to display the fifth confirmation interface (for example, an interface confirming whether to travel to the area where the initiator of the diving interaction is located). The user then confirms, through eye interaction, whether to join the in-person diving interaction invitation: when the user chooses to join, step 23 is performed so as to travel to the initiator's area; or, when the user chooses not to join, step 24 is performed;
步骤23:处理模块向服务器发送接受潜水互动的响应消息,以使服务器下发用于指示用户沿第二目标线路行进的第二导航信息(例如,包括发起方与接受方之间的路径规划)。潜水互动接受方的处理模块接收服务器发送的包含第二导航信息的虚拟现实画面,并控制潜水互动接受方的显示模块显示包含第二导航信息的虚拟现实画面,以进行路线提示;Step 23: The processing module sends a response message of accepting diving interaction to the server, so that the server issues second navigation information for instructing the user to travel along the second target route (for example, including path planning between the initiator and the recipient) . The processing module of the diving interaction recipient receives the virtual reality screen containing the second navigation information sent by the server, and controls the display module of the diving interaction recipient to display the virtual reality screen containing the second navigation information for route prompting;
步骤24:处理模块获取环境信息采集模块所采集的第五环境信息并发送给服务器。Step 24: The processing module obtains the fifth environmental information collected by the environmental information collection module and sends it to the server.
步骤25:服务器基于第五环境信息和预先存储的历史环境信息,记录潜水人员的所有游历点并标识成已游历的潜水轨迹,生成包含用户潜水轨迹的虚拟现实画面;Step 25: Based on the fifth environmental information and the pre-stored historical environmental information, the server records all the diver's travel points and marks them as traveled diving tracks, and generates a virtual reality screen containing the user's diving tracks;
步骤26:服务器将包含用户潜水轨迹的虚拟现实画面发送给处理模块;Step 26: the server sends the virtual reality screen containing the user's diving track to the processing module;
步骤27:处理模块控制显示模块显示包含用户潜水轨迹的虚拟现实画面,之后,可以执行步骤2;Step 27: The processing module controls the display module to display the virtual reality picture containing the user's diving track; afterwards, step 2 may be performed;
步骤28:结束潜水。Step 28: End the dive.
以上方法实施例的描述,与上述装置实施例的描述是类似的,具有同装置实施例相似的有益效果。对于本公开方法实施例中未披露的技术细节,本领域的技术人员请参照本公开装置实施例中的描述而理解,在此不再赘述。The description of the above method embodiment is similar to the description of the above device embodiment, and has similar beneficial effects as the device embodiment. For the technical details not disclosed in the method embodiments of the present disclosure, those skilled in the art can refer to the descriptions in the device embodiments of the present disclosure to understand, and details are not repeated here.
本公开实施例还提供一种混合现实设备,包括:处理器以及存储有可在处理器上运行的计算机程序的存储器,其中,处理器执行程序时实现上述一个或多个实施例中的信息处理方法的步骤。An embodiment of the present disclosure further provides a mixed reality device, including: a processor and a memory storing a computer program executable on the processor, where the processor, when executing the program, implements the steps of the information processing method in one or more of the above embodiments.
在一种示例性实施例中,如图4所示,该混合现实设备40可以包括:至少一个处理器401;以及与处理器401连接的至少一个存储器402、总线403;其中,处理器401、存储器402通过总线403完成相互间的通信;处理器401用于调用存储器402中的程序指令,以执行上述一个或多个实施例中的信息处理方法的步骤。In an exemplary embodiment, as shown in FIG. 4, the mixed reality device 40 may include: at least one processor 401; and at least one memory 402 and a bus 403 connected to the processor 401, where the processor 401 and the memory 402 communicate with each other via the bus 403, and the processor 401 is configured to call the program instructions in the memory 402 to perform the steps of the information processing method in one or more of the above embodiments.
在一种示例性实施例中,上述处理器可以是CPU、其他通用处理器、DSP、现场可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件或者专用集成电路等。通用处理器可以是MPU或者该处理器可以是任何常规的处理器等。这里,本公开实施例对此不做限定。In an exemplary embodiment, the above processor may be a CPU, another general-purpose processor, a DSP, a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, an application-specific integrated circuit, or the like. The general-purpose processor may be an MPU, or the processor may be any conventional processor. The embodiments of the present disclosure are not limited in this respect.
在一种示例性实施例中,上述存储器可能包括计算机可读存储介质中的非永久性存储器,随机存储器(Random Access Memory,RAM)和/或非易失性内存等形式,如只读存储器(Read Only Memory,ROM)或闪存(Flash RAM),存储器包括至少一个存储芯片。这里,本公开实施例对此不做限定。In an exemplary embodiment, the above memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer-readable storage medium, such as read-only memory (ROM) or flash memory (Flash RAM); the memory includes at least one memory chip. The embodiments of the present disclosure are not limited in this respect.
在一种示例性实施例中,总线除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图4中将各种总线都标为总线403。这里,本公开实施例对此不做限定。In an exemplary embodiment, besides a data bus, the bus may also include a power bus, a control bus, a status signal bus, and the like. However, for clarity of illustration, the various buses are all labeled as the bus 403 in FIG. 4. The embodiments of the present disclosure are not limited in this respect.
在实现过程中,混合现实设备所执行的处理可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。即本公开实施例的方法步骤可以体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。During implementation, the processing performed by the mixed reality device may be completed by an integrated logic circuit of hardware in a processor or instructions in the form of software. That is, the method steps in the embodiments of the present disclosure may be implemented by a hardware processor, or by a combination of hardware and software modules in the processor. The software module may be located in storage media such as random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers, and the like. The storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware. To avoid repetition, no detailed description is given here.
本公开实施例还提供一种计算机可读存储介质,包括存储的程序,其中,在程序运行时控制存储介质所在的设备执行上述一个或多个实施例中的信息处理方法的步骤。An embodiment of the present disclosure further provides a computer-readable storage medium including a stored program, where, when the program runs, the device on which the storage medium resides is controlled to perform the steps of the information processing method in one or more of the above embodiments.
在一种示例性实施例中,上述计算机可读存储介质可以如:ROM/RAM、磁碟、光盘等。这里,本公开实施例对此不做限定。In an exemplary embodiment, the above computer-readable storage medium may be, for example, a ROM/RAM, a magnetic disk, an optical disk, or the like. The embodiments of the present disclosure are not limited in this respect.
以上设备或计算机可读存储介质实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本公开设备或计算机可读存储介质实施例中未披露的技术细节,请参照本公开方法实施例的描述而理解。在此不再赘述。The above description of the device or computer-readable storage medium embodiments is similar to that of the method embodiments and has similar beneficial effects. For technical details not disclosed in the device or computer-readable storage medium embodiments of the present disclosure, please refer to the description of the method embodiments of the present disclosure. Details are not repeated here.
本领域普通技术人员可以理解,上文中所公开方法中的全部或某些步骤、系统、装置中的功能模块/单元可以被实施为软件、固件、硬件及其适当的组合。在硬件实施方式中,在以上描述中提及的功能模块/单元之间的划分不一定对应于物理组件的划分;例如,一个物理组件可以具有多个功能,或者一个功能或步骤可以由若干物理组件合作执行。某些组件或所有组件可以被实施为由处理器,如数字信号处理器或微处理器执行的软件,或者被实施为硬件,或者被实施为集成电路,如专用集成电路。这样的软件可以分布在计算机可读介质上,计算机可读介质可以包括计算机存储介质(或非暂时性介质)和通信介质(或暂时性介质)。如本领域普通技术人员公知的,术语计算机存储介质包括在用于存储信息(诸如计算机可读指令、数据结构、程序模块或其他数据)的任何方法或技术中实施的易失性和非易失性、可移除和不可移除介质。计算机存储介质包括但不限于RAM、ROM、EEPROM、闪存或其他存储器技术、CD-ROM、数字多功能盘(DVD)或其他光盘存储、磁盒、磁带、磁盘存储或其他磁存储装置、或者可以用于存储期望的信息并且可以被计算机访问的任何其他的介质。此外,本领域普通技术人员公知的是,通信介质通常包含计算机可读指令、数据结构、程序模块或者诸如载波或其他传输机制之类的调制数据信号中的其他数据,并且可包括任何信息递送介质。Those of ordinary skill in the art will understand that all or some of the steps of the methods disclosed above, and the functional modules/units of the systems and apparatuses, may be implemented as software, firmware, hardware, or appropriate combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. In addition, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
虽然本公开所揭露的实施方式如上,但上述的内容仅为便于理解本公开而采用的实施方式,并非用以限定本公开。任何本公开所属领域内的技术人员,在不脱离本公开所揭露的精神和范围的前提下,可以在实施的形式及细节上进行任何的修改与变化,但本公开的专利保护范围,仍须以所附的权利要求书所界定的范围为准。Although the embodiments disclosed in the present disclosure are described above, the above content is provided only to facilitate understanding of the present disclosure and is not intended to limit it. Any person skilled in the art to which this disclosure belongs may make modifications and changes in the form and details of implementation without departing from the spirit and scope disclosed herein, but the scope of patent protection of the present disclosure shall still be subject to the scope defined by the appended claims.
Claims (20)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110836392.5 | 2021-07-23 | ||
| CN202110836392.5A CN115686183A (en) | 2021-07-23 | 2021-07-23 | Mixed reality device and equipment, information processing method and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023001019A1 true WO2023001019A1 (en) | 2023-01-26 |
Family
ID=84978883
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2022/105084 Ceased WO2023001019A1 (en) | 2021-07-23 | 2022-07-12 | Mixed reality apparatus and device, information processing method, and storage medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN115686183A (en) |
| WO (1) | WO2023001019A1 (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5301668A (en) * | 1991-06-20 | 1994-04-12 | Hales Lynn B | Field of view underwater diving computer monitoring and display system |
| US20110055746A1 (en) * | 2007-05-15 | 2011-03-03 | Divenav, Inc | Scuba diving device providing underwater navigation and communication capability |
| US20170011557A1 (en) * | 2015-07-06 | 2017-01-12 | Samsung Electronics Co., Ltd | Method for providing augmented reality and virtual reality and electronic device using the same |
| CN207946596U (en) * | 2018-02-11 | 2018-10-09 | 亮风台(上海)信息科技有限公司 | A kind of diving face mirror |
| WO2019117325A1 (en) * | 2017-12-12 | 2019-06-20 | 전자부품연구원 | Mixed reality-based non-submerged surface supply diving virtual training apparatus and system |
- 2021-07-23: CN application CN202110836392.5A filed (published as CN115686183A; status: pending)
- 2022-07-12: PCT application PCT/CN2022/105084 filed (published as WO2023001019A1; status: ceased)
Also Published As
| Publication number | Publication date |
|---|---|
| CN115686183A (en) | 2023-02-03 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22845184 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.05.2024) |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 22845184 Country of ref document: EP Kind code of ref document: A1 |