
WO2022099853A1 - Head-mounted display device, obstacle avoidance method therefor, and computer-readable storage medium - Google Patents

Head-mounted display device, obstacle avoidance method therefor, and computer-readable storage medium

Info

Publication number
WO2022099853A1
WO2022099853A1 (PCT/CN2020/136949)
Authority
WO
WIPO (PCT)
Prior art keywords
obstacle
head
display device
mounted display
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2020/136949
Other languages
English (en)
Chinese (zh)
Inventor
孙立致
葛祥军
樊迪生
姜浩
姜滨
迟小羽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Inc
Original Assignee
Goertek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Inc filed Critical Goertek Inc
Publication of WO2022099853A1 publication Critical patent/WO2022099853A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3626 Details of the output of route guidance instructions
    • G01C21/3661 Guidance output on an external device, e.g. car radio
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3626 Details of the output of route guidance instructions
    • G01C21/3632 Guidance using simplified or iconic instructions, e.g. using arrows
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3691 Retrieval, searching and output of information related to real-time traffic, weather, or environmental conditions
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted

Definitions

  • the present invention relates to the technical field of head-mounted display devices, and in particular, to an obstacle avoidance method, a head-mounted display device and a computer-readable storage medium.
  • the main purpose of the present invention is to provide an obstacle avoidance method applied to a head-mounted display device, which aims to avoid collision between a user of the head-mounted display device and surrounding objects, and ensure the safety of the user when using the head-mounted display device.
  • the present invention provides an obstacle avoidance method, which is applied to a head-mounted display device, and the obstacle avoidance method includes the following steps:
  • the image identification information is displayed according to the display characteristic parameter.
  • the step of acquiring obstacle information in the space where the head-mounted display device is located includes:
  • the expected collision time is determined as the obstacle information.
  • the step of acquiring a position parameter of the obstacle relative to the head-mounted display device includes:
  • the location parameter is determined according to image feature information corresponding to the feature image.
  • the step of obtaining the estimated collision time corresponding to the obstacle includes:
  • the predicted collision time is determined according to the motion characteristic parameter and the distance.
  • the position parameter includes a distance
  • the step of determining the display characteristic parameter of the image identification information according to the obstacle information includes:
  • the depth of the displayed color increases as the distance or the expected collision time decreases.
  • the step of determining the display color of the image identification information according to the distance or the expected collision time includes:
  • the position parameter includes a distance
  • the step of determining the display characteristic parameter of the image identification information according to the obstacle information includes:
  • the contrast increases as the distance or the predicted time to impact decreases.
  • the obstacle information includes a position parameter
  • the position parameter includes a direction of the obstacle relative to the head-mounted display device
  • the step of determining the display characteristic parameter of the image identification information according to the obstacle information includes:
  • the display area is determined as the display characteristic parameter.
  • the step of acquiring the image identification information corresponding to the obstacle includes:
  • the image outline is determined as the image identification information.
  • the obstacle information includes the distance of the obstacle relative to the head-mounted display device
  • before the step of displaying the image identification information on the current display screen of the head-mounted display device according to the display characteristic parameter, the method further includes: judging whether the distance is less than or equal to a set distance; and if the distance is less than or equal to the set distance, the step of displaying the image identification information according to the display characteristic parameter on the current display screen of the head-mounted display device is performed.
  • the present application also proposes a head-mounted display device, the head-mounted display device including: a memory, a processor, and an obstacle avoidance program stored on the memory and executable on the processor, wherein when the obstacle avoidance program is executed by the processor, the steps of the obstacle avoidance method described in any of the above are implemented.
  • the present application also proposes a computer-readable storage medium, where an obstacle avoidance program is stored on the computer-readable storage medium, and when the obstacle avoidance program is executed by a processor, the steps of the obstacle avoidance method described in any of the above are implemented.
  • the present invention proposes an obstacle avoidance method applied to a head-mounted display device.
  • the method obtains obstacle information in the space where the head-mounted display device is located and the image identification information corresponding to the obstacle, determines the display characteristic parameters of the image identification information according to the obtained obstacle information, and displays the image identification information on the current display screen of the head-mounted display device according to the display characteristic parameters.
  • in this way, the user can learn in time, from the image identification information on the display screen, the situation of obstacles in the actual scene in the space where the head-mounted display device is located, take effective measures in time to avoid colliding with surrounding objects, and remain safe while using the head-mounted display device.
  • FIG. 1 is a schematic diagram of a hardware structure involved in the operation of an embodiment of a head-mounted display device according to the present invention
  • FIG. 2 is a schematic flowchart of an embodiment of an obstacle avoidance method according to the present invention.
  • FIG. 3 is a schematic diagram of the refinement process of step S10 in FIG. 2;
  • FIG. 4 is a schematic flowchart of another embodiment of the obstacle avoidance method of the present invention.
  • FIG. 5 is a schematic flowchart of another embodiment of the obstacle avoidance method of the present invention.
  • FIG. 6 is a schematic flowchart of still another embodiment of the obstacle avoidance method of the present invention.
  • FIG. 7 is a schematic flowchart of another embodiment of the obstacle avoidance method of the present invention.
  • the main solutions of the embodiments of the present invention are: acquiring information on obstacles in the space where the head-mounted display device is located, and acquiring image identification information corresponding to the obstacles; determining the display characteristic parameter of the image identification information according to the obstacle information; and displaying, on the current display screen of the head-mounted display device, the image identification information according to the display characteristic parameter.
  • the present invention provides the above solution, aiming at avoiding collision between the user of the head-mounted display device and surrounding objects, and ensuring the safety of the user when using the head-mounted display device.
  • An embodiment of the present invention provides a head-mounted display device, which mainly refers to a head-mounted device that can be used to display a virtual reality picture.
  • the head-mounted display device may include a helmet, glasses, etc. with a video playback function.
  • the head-mounted display device includes: a processor 1001 (eg, a CPU), a memory 1002 and the like.
  • the memory 1002 may be high-speed RAM memory, or may be non-volatile memory, such as disk memory.
  • the memory 1002 may also be a storage device independent of the aforementioned processor 1001 .
  • the memory 1002 is connected to the processor 1001 .
  • the processor 1001 can work with the detection device 1 in the space where the head-mounted display device is located.
  • the detection device 1 is used to detect obstacle information in the space where the head-mounted display device is located (such as the distance of the obstacle relative to the device, the direction of the obstacle relative to the device, the speed of movement, and/or the direction of movement, etc.).
  • the detection device 1 can be installed in the head-mounted display device, or can be provided outside the device independently of the head-mounted display device.
  • the detection device 1 may specifically include a depth camera, an ultrasonic ranging device, an infrared ranging device, a radar, and the like.
  • the structure shown in FIG. 1 does not constitute a limitation on the device, which may include more or fewer components than shown, or combine some components, or use a different arrangement of components.
  • the memory 1002, which is a computer-readable storage medium, may include an obstacle avoidance program.
  • the processor 1001 may be configured to call the obstacle avoidance program stored in the memory 1002, and execute the relevant steps of the obstacle avoidance method in the following embodiments.
  • An embodiment of the present invention further provides an obstacle avoidance method, which is applied to the above-mentioned head-mounted display device.
  • the obstacle avoidance method includes:
  • Step S10 obtaining obstacle information in the space where the head-mounted display device is located, and obtaining image identification information corresponding to the obstacle;
  • the obstacle information specifically refers to characteristic information representing objects in the space where the head-mounted display device is located that may cause obstacles to the user.
  • the obstacle information specifically includes a position parameter (e.g., distance and/or direction) of the obstacle relative to the head-mounted display device, the estimated time when the user of the head-mounted display device collides with the obstacle, the type of the obstacle, and the like.
  • the obstacle information in the space where the head-mounted display device is located can be collected in real time.
  • the obstacle information can be obtained by collecting an image of the space where the head-mounted display device is located or by analyzing the distance detection information of objects in the space where the head-mounted display device is located.
  • the image identification information specifically refers to feature information used to identify obstacles in the displayed image of the head-mounted display device.
  • the image identification information may include text, images, lines, and the like.
  • the image identification information may be preset information, or may be information determined based on real-time monitoring of the obstacle.
  • the image identification information is specifically a line used to represent the shape of the obstacle in the image displayed by the head-mounted display device.
  • the contour feature information of the obstacle may be acquired; the image contour line of the obstacle may be generated according to the contour feature information; and the image contour line may be determined as the image identification information.
  • the contour feature information here can be obtained based on the analysis of the obstacle information monitored in real time.
  • the contour feature information obtained from this analysis is then fitted to obtain the image contour line that is displayed on the head-mounted display device as the image identification information.
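
To make the contour-fitting step concrete, here is a minimal Python sketch of how an obstacle contour might be derived from a depth image using OpenCV 4; the function name, the 3 m range cutoff, and the area threshold are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch: deriving image contour lines for nearby obstacles from a depth image.
import cv2
import numpy as np

def obstacle_contours(depth_image: np.ndarray, max_range_m: float = 3.0):
    """Return contour polylines of objects closer than max_range_m.

    depth_image: HxW array of per-pixel distances in metres (0 = no data).
    """
    # Mask pixels that belong to nearby objects (potential obstacles).
    mask = ((depth_image > 0) & (depth_image < max_range_m)).astype(np.uint8) * 255
    # Clean up sensor noise before contour fitting.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # Fit external contours; these become the "image identification information".
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Discard tiny fragments that are unlikely to be real obstacles.
    return [c for c in contours if cv2.contourArea(c) > 500.0]
```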
  • Step S20 determining the display characteristic parameter of the image identification information according to the obstacle information
  • the display characteristic parameter specifically refers to the characteristic parameter when the image identification information corresponding to the obstacle is displayed on the display screen of the head-mounted display device.
  • the display characteristic parameters may specifically include characteristic parameters such as display color, contrast, and display position.
  • the corresponding relationship between the obstacle information and the display feature parameters can be preset, which can be set by the user according to their own needs, or can be the default configuration of the system. Based on this, the corresponding relationship can be obtained, and the feature parameter corresponding to the currently obtained obstacle information is determined as the display feature parameter here by using the corresponding relationship.
  • Step S30 on the current display screen of the head-mounted display device, display the image identification information according to the display characteristic parameter.
  • the image identification information corresponding to the display feature parameters is superimposed and displayed on the current display screen of the head-mounted display device, that is, the head-mounted display device displays the image identification information while maintaining the virtual reality screen display.
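
As a rough illustration of how steps S10, S20 and S30 fit together per frame, the following Python sketch stubs out the sensing and rendering stages as caller-supplied functions; all names here are assumptions for illustration, not identifiers from the patent.

```python
# Hypothetical sketch of the per-frame flow S10 -> S20 -> S30; only the control flow is concrete.
from dataclasses import dataclass

@dataclass
class ObstacleInfo:
    distance_m: float    # position parameter: distance to the obstacle
    bearing_deg: float   # position parameter: direction of the obstacle
    ttc_s: float         # expected collision time

def obstacle_avoidance_frame(read_sensors, extract_identification,
                             choose_display_params, overlay, frame):
    info = read_sensors()                 # S10: acquire obstacle information
    ident = extract_identification(info)  # S10: e.g. an image contour line
    params = choose_display_params(info)  # S20: colour, contrast, display area
    return overlay(frame, ident, params)  # S30: superimpose on the current screen
```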
  • An actual scene is used to illustrate the solution of this embodiment: when the head-mounted display device currently displays a picture of a real environment and there is a table in the environment, the outline of the image of the table in the picture of the real environment can be identified by lines.
  • when the distance between the user of the headset and the table is smaller, the outline of the table image is darker; when the distance is greater, the outline of the table image is lighter.
  • the user can know the current distance from the table from the depth of the line in the real picture.
  • the obstacle refers to the table
  • the obstacle information refers to the distance between the table and the user
  • the image identification information corresponding to the obstacle refers to the outline of the table image.
  • the display feature parameter specifically refers to the depth of the contour line.
  • the solution of this embodiment is described with another actual scene: when the head-mounted display device is currently displaying a virtual picture and there is a table in the real scene, the table image captured by the head-mounted display device can be obtained, and the table image, or the contour lines extracted from the table image, are superimposed and displayed on the virtual screen.
  • the closer the distance between the user of the head-mounted display device and the table, the higher the clarity of the table image or the table outline displayed on the virtual screen; the farther the distance, the lower the clarity.
  • the user can therefore know the current distance from the table from the sharpness of the table image or the table outline displayed on the virtual screen.
  • the obstacle refers to the table
  • the obstacle information refers to the distance between the table and the user
  • the image identification information corresponding to the obstacle refers to the table image or the table contour lines
  • the display feature parameter specifically refers to the level of clarity.
  • An obstacle avoidance method applied to a head-mounted display device proposed by an embodiment of the present invention obtains the obstacle information in the space where the head-mounted display device is located and the image identification information corresponding to the obstacle, determines the display characteristic parameters of the image identification information according to the obtained obstacle information, and displays the image identification information on the current display screen of the head-mounted display device according to the display characteristic parameters.
  • in this way, the user can learn in time, from the image identification information on the display screen, the situation of obstacles in the actual scene in the space where the head-mounted display device is located, take effective measures in time to avoid colliding with surrounding objects, and remain safe while using the head-mounted display device.
  • because the image identification information is superimposed on the screen currently displayed by the device, obstacle avoidance is achieved while preserving the user's viewing experience with the head-mounted display device.
  • the step S10 includes:
  • Step S11 obtaining a position parameter of the obstacle relative to the head-mounted display device, or obtaining an estimated collision time corresponding to the obstacle;
  • the position parameter specifically refers to a parameter of the orientation and/or distance of the obstacle relative to the head-mounted display device.
  • the location parameter may characterize the location of the obstacle relative to the user of the head mounted display device.
  • the position parameter may be detected by an image acquisition module and/or a ranging module provided on the head-mounted display device.
  • the location parameters here can be obtained by acquiring data collected by a depth camera, an ultrasonic ranging module, an infrared ranging module, or a radar on the head-mounted display device, and analyzing the acquired data.
  • the process of obtaining the position parameters of the obstacle relative to the head-mounted display device is as follows: obtaining a depth image of the space where the head-mounted display device is located; determining a feature image of the obstacle from the depth image; and determining the position parameter according to the image feature information corresponding to the feature image.
  • the depth image refers to an image whose pixel value is the distance (depth) from the image collector on the head-mounted display device to each point in the space where the head-mounted display device is located, which can directly reflect the geometric shape of the visible surface of the obstacle.
  • based on a preset conversion relationship between the image coordinates corresponding to the depth image and the spatial coordinates, the direction of the obstacle relative to the head-mounted display device in the space where the device is located can be determined.
  • the obtained distance and/or direction of the obstacle relative to the head-mounted display device can be used as a position parameter of the obstacle relative to the head-mounted display device.
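
The patent does not spell out the coordinate conversion, so the following is a common illustration rather than the claimed method: a minimal sketch, assuming a pinhole camera model with known intrinsics (fx, fy, cx, cy), of how a depth-image pixel could be turned into a distance and direction relative to the headset.

```python
# Hypothetical sketch: pixel (u, v) plus its depth value -> distance and horizontal bearing.
import math

def position_parameter(u: int, v: int, depth_m: float,
                       fx: float, fy: float, cx: float, cy: float):
    # Back-project the pixel into camera coordinates (metres).
    x = (u - cx) * depth_m / fx   # right of the optical axis
    y = (v - cy) * depth_m / fy   # below the optical axis
    z = depth_m                   # forward along the optical axis
    distance = math.sqrt(x * x + y * y + z * z)
    # Horizontal bearing: negative = obstacle to the left, positive = to the right.
    bearing_deg = math.degrees(math.atan2(x, z))
    return distance, bearing_deg
```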
  • the estimated time to collision specifically refers to the estimated time interval between the current state of the user of the head-mounted display device and the time when the user collides with the obstacle.
  • the distance of the obstacle relative to the head-mounted display device and the motion characteristic parameter of the user of the head-mounted display device can be obtained; the estimated time to collision is then determined according to the motion characteristic parameter and the distance.
  • Motion feature parameters specifically refer to the parameters that characterize the current motion state of the user.
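
A minimal sketch of the collision-time estimate, under the assumption that the motion characteristic parameter reduces to a closing speed along the direction of the obstacle; the function name is illustrative.

```python
# Hypothetical sketch: expected collision time from distance and closing speed.
def estimated_collision_time(distance_m: float, closing_speed_mps: float) -> float:
    """Return the expected collision time in seconds (infinity if not approaching)."""
    if closing_speed_mps <= 0.0:
        return float("inf")   # user is stationary or moving away from the obstacle
    return distance_m / closing_speed_mps
```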
  • Step S12 determining the position parameter or the estimated collision time as the obstacle information.
  • An actual scene is used to illustrate the solution of this embodiment: when the user uses the head-mounted display device and there is a table in the real scene, the depth image captured by the depth camera on the head-mounted display device is obtained, the depth parameters corresponding to the table image in the depth image are extracted, and the depth parameters are converted to obtain the distance between the table and the user. After the distance is obtained, the user's movement speed can be further detected by the acceleration sensor in the head-mounted display device, and the time required for the user to move from the current position to the location of the table can be calculated from the distance and the movement speed.
  • the obstacle refers to the table
  • the position parameter refers to the distance between the user and the table
  • the estimated collision time is the calculated time required for the user to move from the current position to the position of the table
  • the obstacle information can be the distance between the user and the table, or the calculated time required for the user to move from the current position to the position of the table.
  • the position parameter of the obstacle relative to the head-mounted display device, or the expected collision time corresponding to the obstacle, is obtained as the obstacle information used to determine the display characteristic parameter of the image identification information, so that the user can be alerted to the obstacle while using the head-mounted display device.
  • the position parameter includes the distance of the obstacle relative to the head-mounted display device.
  • the step S20 includes:
  • Step S21 determining the display color of the image identification information according to the distance or the expected collision time
  • Step S22 determining that the display color is the display characteristic parameter.
  • Different distances or different estimated times of impact may correspond to display colors with different characteristics.
  • different distances or different expected collision times can be represented by display colors with the same color system but different depths. Specifically, the depth of the displayed color tends to increase as the distance or the expected collision time decreases.
  • An actual scene is used to illustrate the solution of this embodiment: there is a table in the actual scene, the outline of the table can be displayed on the display screen of the head-mounted display device, and the distance between the table and the user, or the estimated collision time, is obtained based on the method in the above embodiment; the display color of the table outline is then determined accordingly.
  • the obstacle refers to the table
  • the table outline is the image identification information
  • the display feature parameter refers to the display color of the table outline (such as red, green, dark yellow, or light yellow).
  • the distance or the estimated collision time may be divided into at least two numerical intervals in advance, and each numerical interval may be correspondingly set with a different color.
  • the first distance interval corresponds to the associated red color as the first set color
  • the second distance interval corresponds to the associated green color as the second preset color, wherein the distance in the first distance interval is smaller than the distance in the second distance interval
  • the first time interval corresponds to the associated red as the third set color
  • the second time interval corresponds to the associated green as the fourth set color, wherein the time length in the first time interval is smaller than the time length in the second time interval.
  • the numerical range in which the distance or the estimated collision time is located is determined; the set color corresponding to the numerical range is obtained as the display color; wherein, different numerical ranges correspond to different set colors. For example, if the distance of the current obstacle is in the first distance interval, red is determined as the display color of the image identification information; for another example, if the current estimated collision time is in the second time interval, green is determined as the display color of the image identification information.
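
A minimal sketch of the interval-to-color lookup described above (steps S21/S22); the red and green choices mirror the example, but the numeric interval boundaries are assumptions added for illustration.

```python
# Hypothetical sketch: map distance or estimated collision time onto a set colour per interval.
def display_color(distance_m: float = None, ttc_s: float = None,
                  near_m: float = 1.0, soon_s: float = 2.0):
    RED, GREEN = (255, 0, 0), (0, 200, 0)
    if distance_m is not None:
        # first (nearer) distance interval -> first set colour, second interval -> second set colour
        return RED if distance_m <= near_m else GREEN
    if ttc_s is not None:
        # first (shorter) time interval -> third set colour, second interval -> fourth set colour
        return RED if ttc_s <= soon_s else GREEN
    return GREEN
```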
  • the user can accurately judge the distance between himself and the obstacle, or the estimated time until he collides with the obstacle, through the change of the color of the image identification information on the display screen, thereby effectively preventing the user from colliding with obstacles in the space while using the head-mounted display device and ensuring the safety of the user when using the head-mounted display device.
  • the position parameter includes the distance of the obstacle relative to the head-mounted display device.
  • the step S20 includes:
  • Step S23 determining the contrast of the image identification information relative to the currently displayed screen according to the distance or the expected collision time; wherein, the contrast increases as the distance or the expected collision time decreases;
  • Step S24 determining that the contrast is the display characteristic parameter.
  • Different distances or different estimated collision times have different contrasts of the corresponding image identification information relative to the currently displayed screen.
  • the smaller the distance between the obstacle and the head-mounted display device, the greater the contrast of the image identification information relative to the currently displayed screen, and the clearer the image identification information viewed by the user on the display screen; the greater the distance between the obstacle and the head-mounted display device, the smaller the contrast of the image identification information relative to the currently displayed screen, and the less clear the image identification information viewed by the user on the display screen.
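
A minimal sketch of steps S23/S24, under the assumption that the contrast is realized as an alpha-blend weight for the overlaid outline; the clamping range and output range are illustrative.

```python
# Hypothetical sketch: closer obstacle -> higher overlay contrast (alpha weight).
def overlay_contrast(distance_m: float, min_d: float = 0.5, max_d: float = 3.0) -> float:
    """Return a contrast weight in [0.1, 1.0]; rises as the distance shrinks."""
    d = min(max(distance_m, min_d), max_d)   # clamp distance into [min_d, max_d]
    t = (max_d - d) / (max_d - min_d)        # 0 at max_d, 1 at min_d
    return 0.1 + 0.9 * t
```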
  • a practical scenario is used to illustrate the solution of this embodiment: for example, there is a table in the real scene, the outline of the table can be displayed on the display screen of the head-mounted display device, and the distance between the table and the user, or the estimated time of collision, is obtained based on the method in the above embodiment. The contrast between the table outline and the other images displayed by the headset is A1 when the distance is D1, and A2 when the distance is D2, where D1 is less than D2 and A1 is greater than A2.
  • the obstacle refers to the table
  • the table outline is the image identification information
  • the display feature parameter refers to the contrast between the table outline and the other images displayed by the head-mounted display device (e.g., A1, A2).
  • in this way, the user is effectively prevented from colliding with obstacles in the space while using the head-mounted display device, ensuring the safety of the user when using the head-mounted display device.
  • steps S23 and S24 in this embodiment may be performed synchronously with steps S21 and S22 in the above-mentioned embodiment according to actual needs; alternatively, steps S21 and S22 may be skipped when steps S23 and S24 are performed, or steps S23 and S24 may be skipped when steps S21 and S22 are performed.
  • in addition, the obtained estimated time of collision may not be used to determine the display characteristic parameters corresponding to the image identification information, but may instead be displayed directly on the current display screen. For example, when the distance of the obstacle relative to the head-mounted display device is less than or equal to a first set distance, the display characteristic parameters of the image identification information are determined according to the position parameter of the obstacle relative to the head-mounted display device, and the image identification information is displayed on the current display screen of the device according to the determined display characteristic parameters.
  • further, when the distance of the obstacle relative to the head-mounted display device is less than or equal to a second set distance (the second set distance being less than the first set distance), that is, when the user is about to collide with an object, the estimated collision time can also be displayed on the current display screen of the device in addition to the image identification information displayed according to the display feature parameters determined from the position parameter, so as to give the user a double prompt of the dangerous state and further prevent the user from colliding with objects in the space.
  • the obstacle information includes a position parameter
  • the position parameter includes the direction of the obstacle relative to the head-mounted display device.
  • the step S20 includes:
  • Step S25 determining the display area of the image identification information in the current display screen according to the direction
  • Step S26 determining that the display area is the display characteristic parameter.
  • different directions correspond to different display areas.
  • for example, when the direction of the obstacle relative to the head-mounted display device is downward, the current display screen can be divided into upper and lower image areas, and the image identification information is displayed in the lower image area;
  • when the direction of the obstacle relative to the head-mounted display device is to the left, the current display screen can be divided into left and right image areas, and the image identification information can be displayed in the left image area.
  • the display screen can also be divided into more display areas, and one of the display areas representing the direction of the obstacle relative to the user is selected as the display area of the image identification information based on the direction of the obstacle relative to the head-mounted display device.
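
A minimal sketch of steps S25/S26, assuming the direction is available as a horizontal bearing in degrees; the half-screen split and the 15-degree band treated as "roughly ahead" are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: choose the display area of the identification from the obstacle's bearing.
def display_area(bearing_deg: float, screen_w: int, screen_h: int):
    """Return (x, y, w, h) of the region where the identification is drawn."""
    if bearing_deg < -15.0:                      # obstacle on the left
        return (0, 0, screen_w // 2, screen_h)   # left half of the screen
    if bearing_deg > 15.0:                       # obstacle on the right
        return (screen_w // 2, 0, screen_w // 2, screen_h)
    return (0, screen_h // 2, screen_w, screen_h // 2)  # roughly ahead: lower half
```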
  • An actual scene is used to illustrate the solution of this embodiment: there is a table in the actual scene; after acquiring the image of the actual scene captured by the head-mounted display device, the table image and its position within the real-scene image are identified, and the orientation of the table relative to the user is determined from that image position.
  • for example, when the table image is in the left image area of the real-scene image, the table is on the left relative to the user, and when the table image is in the right image area of the real-scene image, the table is on the right relative to the user; accordingly, when the table is on the left relative to the user, the outline of the table image is displayed in the left area of the current display screen of the head-mounted display device, and when the table is on the right relative to the user, the outline of the table image is displayed in the right area of the current display screen.
  • the obstacle refers to the table
  • the outline of the table is the image identification information
  • the direction of the obstacle relative to the head-mounted display device refers to the direction of the table relative to the user (e.g., whether the table is located on the left or the right of the user), and the display area refers to the area of the current screen of the head-mounted display device in which the table outline is displayed (e.g., the left area or the right area of the current screen).
  • the user can accurately judge the direction of the obstacle relative to himself through the display position of the image identification information on the current display screen, thereby further effectively preventing the user from colliding with obstacles in the space while using the head-mounted display device and ensuring the safety of the user when using the head-mounted display device.
  • step S25 and step S26 in this embodiment may be performed synchronously with step S21 and step S22, or with step S23 and step S24, in the above-mentioned embodiments according to actual needs; alternatively, step S23 and step S24 may be performed without step S21, step S22, step S25 and step S26; step S21 and step S22 may be performed without step S23, step S24, step S25 and step S26; and step S25 and step S26 may be performed without step S21, step S22, step S23 and step S24.
  • the obstacle information includes the distance of the obstacle relative to the head-mounted display device. Referring to FIG. 7, before the step S30, the method further includes:
  • Step S01 judging whether the distance is less than or equal to the set distance
  • if the distance is less than or equal to the set distance, step S30 is executed; if the distance is greater than the set distance, step S40 is executed.
  • the set distance here specifically refers to a safe distance between the object and the head-mounted display device that can still ensure the user does not collide with the object, and the specific value can be set according to the actual situation.
  • Step S40 controlling the head-mounted display device not to display the image identification information on the current display screen.
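
A minimal sketch of the gating in steps S01/S30/S40; the 2 m value is taken from the worked example below, and the draw_fn callback is a hypothetical stand-in for the device's overlay routine.

```python
# Hypothetical sketch: only superimpose the contour when the obstacle is within the set distance.
SET_DISTANCE_M = 2.0   # illustrative value, taken from the example that follows

def update_overlay(frame, contour, distance_m, draw_fn):
    if distance_m <= SET_DISTANCE_M:
        draw_fn(frame, contour)   # S30: display the image identification information
    # else: S40 - keep the original display screen, draw nothing
    return frame
```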
  • An actual scene is used to illustrate the solution of this embodiment: there is a table in the actual scene, and the distance between the table and the user is obtained by the method in the above embodiment; if the obtained distance is less than or equal to 2 m, the outline of the table is displayed on the current screen of the head-mounted display device; if the obtained distance is greater than 2 m, the outline of the table is not displayed on the current screen, and the head-mounted display device maintains the original display state.
  • the obstacle refers to a table
  • the outline of the table is image identification information
  • the set distance refers to 2m.
  • through step S01, the image identification information representing the obstacle is displayed only when the distance between the user of the head-mounted display device and the obstacle is small enough that the object in the space poses a safety threat to the user, thereby ensuring the user's safety while preserving the viewing experience of using the head-mounted display device.
  • an embodiment of the present invention further provides a computer-readable storage medium, where an obstacle avoidance program is stored on the computer-readable storage medium, and when the obstacle avoidance program is executed by a processor, the relevant steps of any of the above obstacle avoidance method embodiments are implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Ecology (AREA)
  • Environmental & Geological Engineering (AREA)
  • Environmental Sciences (AREA)
  • Optics & Photonics (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to an obstacle avoidance method, a head-mounted display device and a computer-readable storage medium. The method is applied to a head-mounted display device and comprises: acquiring information on an obstacle in the space where the head-mounted display device is located, and acquiring image identification information corresponding to the obstacle (S10); determining display characteristic parameters of the image identification information according to the obstacle information (S20); and displaying, on the current display screen of the head-mounted display device, the image identification information according to the display characteristic parameters (S30). The method aims to avoid a collision between a user of the head-mounted display device and surrounding objects, and to ensure the safety of the user when using the head-mounted display device.
PCT/CN2020/136949 2020-11-13 2020-12-16 Head-mounted display device, obstacle avoidance method therefor, and computer-readable storage medium Ceased WO2022099853A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011269260.0A CN112362077A (zh) 2020-11-13 2020-11-13 Head-mounted display device, obstacle avoidance method therefor, and computer-readable storage medium
CN202011269260.0 2020-11-13

Publications (1)

Publication Number Publication Date
WO2022099853A1 (fr)

Family

ID=74514712

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/136949 Ceased WO2022099853A1 (fr) 2020-11-13 2020-12-16 Head-mounted display device, obstacle avoidance method therefor, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112362077A (fr)
WO (1) WO2022099853A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116153021A (zh) * 2022-12-26 2023-05-23 歌尔科技有限公司 Collision warning method and apparatus, head-mounted device, head-mounted system, and medium

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113158845A (zh) * 2021-04-02 2021-07-23 歌尔光学科技有限公司 Gesture recognition method, head-mounted display device, and non-volatile storage medium
CN115617153A (zh) * 2021-07-13 2023-01-17 北京有竹居网络技术有限公司 Collision detection method and apparatus, electronic device, and storage medium
CN113920688A (zh) * 2021-11-24 2022-01-11 青岛歌尔声学科技有限公司 Collision warning method and apparatus, VR head-mounted device, and storage medium
CN114527880B (zh) * 2022-02-25 2024-07-02 歌尔科技有限公司 Spatial position recognition method, apparatus, device, and storage medium
CN115147747A (zh) * 2022-06-07 2022-10-04 海信视像科技股份有限公司 Display method for safety prompt information, and virtual display device
CN115984691A (zh) * 2022-12-25 2023-04-18 北京梦想绽放科技有限公司 Obstacle avoidance warning method for a virtual reality device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080040040A1 (en) * 2006-08-08 2008-02-14 Takanori Goto Obstacle avoidance method and obstacle-avoidable mobile apparatus
WO2016021747A1 (fr) * 2014-08-05 2016-02-11 엘지전자 주식회사 Head-mounted display and control method therefor
CN106646876A (zh) * 2016-11-25 2017-05-10 捷开通讯(深圳)有限公司 Head-mounted display system and safety prompting method therefor
CN108614635A (zh) * 2016-12-12 2018-10-02 北京康得新创科技股份有限公司 Virtual reality device, and control method and apparatus for a virtual reality device
CN109813317A (zh) * 2019-01-30 2019-05-28 京东方科技集团股份有限公司 Obstacle avoidance method, electronic device, and virtual reality device
CN111260789A (zh) * 2020-01-07 2020-06-09 青岛小鸟看看科技有限公司 Obstacle avoidance method, virtual reality head-mounted device, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104216520B (zh) * 2014-09-09 2018-07-06 联想(北京)有限公司 Information processing method and electronic device
CN105807920A (zh) * 2016-03-03 2016-07-27 北京小鸟看看科技有限公司 Virtual reality device, and method and apparatus for monitoring ground obstacles in its usage scenario
CN106295581A (zh) * 2016-08-15 2017-01-04 联想(北京)有限公司 Obstacle detection method and apparatus, and virtual reality device
CN108521808B (zh) * 2017-10-31 2021-12-07 深圳市大疆创新科技有限公司 Obstacle information display method, display device, unmanned aerial vehicle, and system
CN109730910B (zh) * 2018-11-30 2021-07-13 深圳市智瞻科技有限公司 Visual assistance system for travel, and assistance device, method, and readable storage medium therefor

Also Published As

Publication number Publication date
CN112362077A (zh) 2021-02-12

Similar Documents

Publication Publication Date Title
WO2022099853A1 (fr) Head-mounted display device, obstacle avoidance method therefor, and computer-readable storage medium
US10489981B2 (en) Information processing device, information processing method, and program for controlling display of a virtual object
JP6596883B2 (ja) Head-mounted display, head-mounted display control method, and computer program
JP6747504B2 (ja) Information processing apparatus, information processing method, and program
US9667952B2 (en) Calibration for directional display device
WO2021103987A1 (fr) Control method for sweeping robot, sweeping robot, and storage medium
US9442561B2 (en) Display direction control for directional display device
US9791934B2 (en) Priority control for directional display device
KR20170026164A (ko) 가상 현실 디스플레이 장치 및 그 장치의 표시 방법
JP2016515242A5 (fr)
WO2017169273A1 (fr) Information processing device, information processing method, and program
CN108789500B (zh) Human-machine safety protection system and safety protection method
US20180165853A1 (en) Head-mounted display apparatus and virtual object display system
TWI489326B (zh) Method and system for determining an operation zone
JP6221292B2 (ja) Concentration determination program, concentration determination device, and concentration determination method
WO2017169272A1 (fr) Information processing device, information processing method, and program
WO2020016970A1 (fr) Information processing device, information processing method, and program
JP2019102828A (ja) Image processing apparatus, image processing method, and image processing program
JP6977823B2 (ja) Information processing apparatus, control method, and program
TWI486054B (zh) A portrait processing device, a three-dimensional image display device, a method and a program
KR20200042782A (ko) Three-dimensional model generation apparatus and image display method thereof
US10642349B2 (en) Information processing apparatus
CN116153021B (zh) Collision warning method and apparatus, head-mounted device, head-mounted system, and medium
TWI851973B (zh) Virtual window configuration device, virtual window configuration method, and virtual window configuration system
US20240380876A1 (en) Information processing apparatus, information processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20961407

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20961407

Country of ref document: EP

Kind code of ref document: A1