
WO2023005007A1 - Method and system for vr collision detection - Google Patents

Info

Publication number
WO2023005007A1
Authority
WO
WIPO (PCT)
Prior art keywords
collision detection
detected
collision
weight coefficient
devices
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2021/124685
Other languages
French (fr)
Chinese (zh)
Inventor
尚家乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Inc
Original Assignee
Goertek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Inc filed Critical Goertek Inc
Publication of WO2023005007A1 publication Critical patent/WO2023005007A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models

Definitions

  • The present application relates to the technical field of collision detection and, more specifically, to a VR collision detection method and system.
  • Virtual reality (VR) uses computer technology as the core of modern high-tech means to generate a virtual environment; with special input/output devices, users interact naturally with objects in the virtual world and receive visual, auditory, and tactile sensations matching those of the real world.
  • Because VR head-mounted display devices close the wearer off from the surroundings, collisions can occur when multiple people interact in VR within the same scene; virtual reality collision detection has therefore become a key technology for judging the performance of VR products.
  • The purpose of this application is to provide a VR collision detection method and system to solve the problems of high cost, high power consumption, low safety, and poor precision in existing collision detection.
  • The VR collision detection method provided by this application includes: performing a first collision detection on all VR devices to be detected within the same scene range based on a first collision detection mode, and obtaining a corresponding first collision detection result; performing a second collision detection on the VR devices in the first collision detection result based on a second collision detection mode, and obtaining a corresponding second collision detection result; and performing a third collision detection on the VR devices in the second collision detection result based on a third collision detection mode, and obtaining a final collision detection result between the VR devices.
  • The detection process of the first collision detection mode includes: establishing a BLE connection with a VR device to be detected based on the ID information scanned from the broadcast frame that the device broadcasts, the broadcast frame being constructed from the device's degree-of-freedom (DOF) pose information and a preset ID; requesting the corresponding first public key from the server based on the ID information, and forming an encrypted packet from the first public key and the acquired position vector label of the current detection point; based on the result of the VR device's verification of the encrypted packet, obtaining the DOF pose information sent by the device and rendering its preliminary position from that information; and determining, from the preliminary position, the first collision detection result between the current detection point and the VR device to be detected.
  • The detection process of the second collision detection mode includes: increasing the broadcast frequency of the first collision detection mode by a preset amount; performing high-frequency collision detection on any VR device in the first collision detection result using the detection process of the first mode at the increased frequency, and obtaining a corresponding high-frequency collision detection result; and emitting detection rays from the infrared sensor at the current detection point to perform infrared collision detection on any VR device in the high-frequency result, obtaining the second collision detection result.
  • The process of obtaining the DOF pose information sent by the VR device to be detected includes: the VR device extracts the position vector label from the encrypted packet and judges whether the distance between the detection point and itself satisfies a preset collision distance; if it does, the VR device generates its second public key and derives the corresponding first shared key from the first public key in the encrypted packet; the detection point then derives the corresponding second shared key from the VR device's second public key and obtains the DOF pose information using that shared key.
  • The detection process of the third collision detection mode includes: generating corresponding enclosing vertices based on the distance sensors of the VR devices in the second collision result and the devices' outlines; forming corresponding OBB (oriented bounding box) boxes from the enclosing vertices; and, based on the OBB collision detection method, obtaining the collision result between any two OBB boxes as the final collision detection result between the corresponding VR devices.
  • In an optional technical solution, before the first collision detection is performed on all VR devices to be detected within the same scene range, the method further includes: performing parameter initialization on the VR devices based on the light and environment texture of the scene, so as to determine each device's inertial weight coefficient and visual weight coefficient; and determining the device's DOF pose information based on those two coefficients.
  • The parameter initialization process includes: obtaining the light conditions of the scene from the light sensor of the VR device to be detected; if the light conditions meet the requirement of a first preset threshold, acquiring an environment picture of the scene with the device's visual sensor; extracting point features and line features of the scene from the environment picture; and, if both the point features and line features meet the requirement of a second preset threshold, increasing the visual weight coefficient by a preset ratio and decreasing the inertial weight coefficient.
  • Conversely, if the light conditions do not meet the first preset threshold, the inertial weight coefficient is increased by the preset ratio and the visual weight coefficient is decreased; the same adjustment is made if the point features and line features do not meet the second preset threshold.
  • The DOF pose information of the VR device to be detected is expressed as v = αP + βI, where v represents the DOF pose information, α represents the visual weight coefficient, β represents the inertial weight coefficient, P represents the 6-DOF pose information acquired by the visual sensor, and I represents the 6-DOF pose information of the inertial sensor.
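As a rough illustration of this weighted fusion, the following Python sketch blends the two 6-DOF pose vectors; the pose values and weights are hypothetical examples, not data from the application.

```python
# Weighted fusion of visual and inertial 6-DOF poses: v = alpha * P + beta * I.
# All numeric values below are made-up examples for illustration only.

def fuse_pose(P, I, alpha, beta):
    """Blend visual pose P and inertial pose I using the two weight coefficients."""
    return [alpha * p + beta * i for p, i in zip(P, I)]

visual = [1.0, 2.0, 0.5, 0.0, 0.1, 0.2]    # P: (x, y, z, roll, pitch, yaw), visual sensor
inertial = [1.1, 1.9, 0.5, 0.0, 0.1, 0.3]  # I: same layout, inertial sensor

fused = fuse_pose(visual, inertial, alpha=0.7, beta=0.3)
assert len(fused) == 6  # still a 6-DOF pose after fusion
```

With α + β = 1, each fused component stays between the visual and inertial readings, which is what lets the initialization step trade trust between the two sensors.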
  • A VR collision detection system is also provided, including: a first collision detection unit configured to perform a first collision detection on all VR devices to be detected within the same scene range based on the first collision detection mode and obtain a corresponding first collision detection result; a second collision detection unit configured to perform a second collision detection on the VR devices in the first collision detection result based on the second collision detection mode and obtain a corresponding second collision detection result; and a third collision detection unit configured to perform a third collision detection on the VR devices in the second collision detection result based on the third collision detection mode and obtain a final collision detection result between the VR devices.
  • FIG. 1 is a flowchart of a VR collision detection method according to an embodiment of the present application
  • FIG. 2 is a flowchart of encryption and decryption of the encrypted packet according to an embodiment of the present application
  • FIG. 3 is a flowchart of parameter initialization according to an embodiment of the present application.
  • FIG. 4 is a detailed flowchart of a VR collision detection method according to an embodiment of the present application.
  • FIG. 5 is a logic block diagram of a VR collision detection system according to an embodiment of the present application.
  • Fig. 1 shows a flowchart of a VR collision detection method according to an embodiment of the present application.
  • the VR collision detection method of the embodiment of the present application includes:
  • S110 Perform a first collision detection on all VR devices to be detected within the same scene range based on the first collision detection mode, and obtain corresponding first collision detection results.
  • the detection process of the first collision detection mode may include:
  • The broadcast frame is constructed based on the degree-of-freedom pose information of the VR device to be detected and the preset ID.
  • The detected party (that is, the VR device to be detected, the same below) first combines its degree-of-freedom pose information, determined from the inertial weight coefficient and the visual weight coefficient, with the preset ID into BLE broadcast frames for low-frequency, low-power broadcasting.
  • The detecting party (that is, the VR device at the current detection point, the same below) also performs low-frequency scanning.
  • When the scan finds a detected party nearby, the detecting party actively establishes a BLE connection with it according to the ID information in the broadcast, requests from the cloud server the first public key generated for the corresponding ID based on the ECDH algorithm, forms a position vector label from the location of the current detection point in the scene, encrypts the label together with the first public key over the BLE channel, and transmits the result to the detected party.
  • the encryption and decryption process designed based on the ECDH algorithm can refer to the specific example shown in FIG. 2 .
  • the process of obtaining the DOF pose information sent by the VR device to be detected further includes:
  • The VR device to be detected extracts the position vector label from the encrypted packet and judges whether the distance between the detection point and itself satisfies the preset collision distance, that is, whether the detected party is geographically close. If it is far away, the BLE connection can be terminated directly, ending the position detection process.
  • If the distance is within the preset collision distance, the detected party generates its second public key and derives the corresponding first shared key from the first public key in the encrypted packet; it then sends the second public key to the detecting party, which derives the corresponding second shared key from it.
  • With the first and second shared keys, encrypted data transmission between the detecting party and the detected party can be realized, and the degree-of-freedom pose information is obtained via the second shared key.
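The key agreement described above can be illustrated with a toy Diffie-Hellman exchange. The application uses ECDH over an elliptic curve with keys issued via a cloud server; this sketch uses a small multiplicative group, with hypothetical parameters, purely to show why both parties derive the same shared key.

```python
import hashlib
import secrets

# Toy Diffie-Hellman over a small prime group (NOT the application's actual
# ECDH curve). It only demonstrates that the two derived shared keys match.
P_MOD = 0xFFFFFFFB  # 2**32 - 5, a prime; real ECDH would use a standard curve
G = 5               # group generator (demo value)

def keypair():
    """Generate a (private, public) pair in the demo group."""
    priv = secrets.randbelow(P_MOD - 2) + 1
    return priv, pow(G, priv, P_MOD)

detector_priv, first_public_key = keypair()   # "first public key" (detecting party)
detected_priv, second_public_key = keypair()  # "second public key" (detected party)

# Each side combines its own private key with the other side's public key:
first_shared = pow(second_public_key, detector_priv, P_MOD)   # detecting party
second_shared = pow(first_public_key, detected_priv, P_MOD)   # detected party
assert first_shared == second_shared  # identical, so either can key the BLE channel

session_key = hashlib.sha256(str(first_shared).encode()).digest()
```

The pose information can then be encrypted under `session_key` for transmission, which is the role the first and second shared keys play in the description above.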
  • The detecting party can then render the other party's preliminary position in the VR environment from the detected party's degree-of-freedom pose information, and from that preliminary position determine the first collision detection result between the current detection point and the VR device to be detected.
  • When the detecting party finds that its distance to a detected party is less than the preset collision distance, a possible collision between the two can be determined; the first collision detection result therefore includes all detected parties the detecting party can detect. This process constitutes the first level of collision detection and initially identifies possible collision targets.
  • S120 Perform a second collision detection on the VR devices in the first collision detection result based on the second collision detection mode, and acquire a corresponding second collision detection result.
  • the detection process of the second collision detection mode further includes:
  • The infrared sensor at the current detection point emits detection rays and performs infrared collision detection on any VR device in the high-frequency collision detection result, obtaining the second collision detection result.
  • In this mode, the BLE broadcast frequency can be increased, and collision detection is performed again according to the process of the first collision detection mode to confirm the detection results more accurately.
  • The detecting party then further uses the infrared sensor to emit detection rays and, through ray detection, narrows the first collision detection result down to a second collision detection result.
  • Because the second collision detection result is a higher-precision screening of the first, its range may be smaller than that of the first collision detection result.
  • S130 Perform a third collision detection on the VR devices in the second collision detection result based on the third collision detection mode, and obtain a final collision detection result between the VR devices.
  • the detection process of the third collision detection mode includes:
  • The collision detection result between any two OBB (oriented bounding box) boxes is taken as the final collision detection result between the corresponding VR devices.
  • Each device can form, through its distance sensors, the enclosing vertices surrounding itself and thus its OBB bounding box. When any two OBB bounding boxes overlap, it can be determined that the two devices have collided; at this point, information such as the collision angle and position can be derived from the OBB bounding boxes, and an early warning can be issued before the collision occurs.
  • The OBB bounding box can also reserve a certain safety margin: when the bounding boxes overlap, the two VR devices are judged to collide in the virtual scene while a safe physical distance is still maintained, ensuring user safety.
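The OBB overlap check at the heart of this mode is commonly implemented with a separating-axis test. The 2D sketch below uses hypothetical box positions (the application builds 3D boxes from distance-sensor vertices) to show the idea:

```python
import math

# Minimal 2D separating-axis test between two oriented bounding boxes (OBBs).
# Box centres, extents, and angles below are illustrative placeholders.

def obb_corners(cx, cy, hw, hh, angle):
    """Corners of a box centred at (cx, cy), half-extents hw/hh, rotated by angle."""
    c, s = math.cos(angle), math.sin(angle)
    return [(cx + c * x - s * y, cy + s * x + c * y)
            for x, y in ((hw, hh), (-hw, hh), (-hw, -hh), (hw, -hh))]

def obbs_collide(a, b):
    """Separating axis theorem: boxes overlap iff no edge normal separates them."""
    for corners in (a, b):
        for i in range(4):
            ex = corners[(i + 1) % 4][0] - corners[i][0]
            ey = corners[(i + 1) % 4][1] - corners[i][1]
            ax, ay = -ey, ex  # edge normal = candidate separating axis
            pa = [x * ax + y * ay for x, y in a]
            pb = [x * ax + y * ay for x, y in b]
            if max(pa) < min(pb) or max(pb) < min(pa):
                return False  # a separating axis exists -> no collision
    return True

near = obbs_collide(obb_corners(0.0, 0.0, 1.0, 0.5, 0.3),
                    obb_corners(0.9, 0.1, 1.0, 0.5, -0.4))
far = obbs_collide(obb_corners(0.0, 0.0, 1.0, 0.5, 0.3),
                   obb_corners(5.0, 5.0, 1.0, 0.5, 0.0))
assert near and not far  # overlapping boxes collide; distant ones do not
```

Enlarging `hw`/`hh` slightly is one way to realize the safety margin mentioned above: the test then fires before the physical outlines actually touch.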
  • Before the first collision detection is performed on all VR devices to be detected within the same scene range, the method may further include: performing parameter initialization on each VR device based on the light and environment texture of the scene to determine its inertial weight coefficient and visual weight coefficient; the device's degree-of-freedom pose information is then determined from those coefficients.
  • FIG. 3 shows a schematic process of parameter initialization according to an embodiment of the present application.
  • The parameter initialization process further includes: starting the initialization configuration in a new environment; obtaining the light conditions of the scene from the light sensor of the VR device to be detected; if the light conditions meet the requirement of the first preset threshold, acquiring an environment picture of the scene with the device's visual sensor; extracting the point features and line features of the scene from the picture; and, if both feature types meet the requirement of the second preset threshold, increasing the visual weight coefficient and decreasing the inertial weight coefficient.
  • If the light conditions do not meet the first preset threshold, the inertial weight coefficient is increased by the preset ratio and the visual weight coefficient is decreased; the same adjustment is made if the point features and line features do not meet the second preset threshold.
  • In this way, the VR device can increase the inertial weight coefficient when the light is too dark or too bright and increase the visual weight coefficient when the light is moderate, avoiding the impact of environmental factors such as light and texture on the positioning accuracy of a single sensor.
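A minimal sketch of these initialization rules follows, assuming placeholder thresholds and a fixed adjustment ratio; the application does not disclose concrete values for any of them.

```python
# Adjust the visual / inertial weights from light and image-feature checks.
# Every numeric constant here is an assumed placeholder, not a patent value.
LIGHT_RANGE = (50.0, 5000.0)  # "first preset threshold": acceptable lux range (assumed)
MIN_FEATURES = 80             # "second preset threshold": point + line feature count (assumed)
RATIO = 0.1                   # "preset ratio" for the weight adjustment (assumed)

def init_weights(lux, n_features, alpha=0.5, beta=0.5):
    """Return (visual weight alpha, inertial weight beta) after initialization."""
    good_light = LIGHT_RANGE[0] <= lux <= LIGHT_RANGE[1]
    if good_light and n_features >= MIN_FEATURES:
        return alpha + RATIO, beta - RATIO  # rich, well-lit scene: trust vision more
    return alpha - RATIO, beta + RATIO      # too dark/bright or feature-poor: trust the IMU

alpha, beta = init_weights(lux=300.0, n_features=200)
assert alpha > beta  # moderate light and many features -> vision dominates
```

The returned pair feeds directly into the fusion v = αP + βI, so a dim or texture-poor room automatically shifts the pose estimate toward the inertial sensor.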
  • The degree-of-freedom pose information of the VR device to be detected is expressed as v = αP + βI, where v represents the degree-of-freedom pose information, α represents the visual weight coefficient, β represents the inertial weight coefficient, P represents the 6-DOF pose information acquired by the visual sensor, and I represents the 6-DOF pose information of the inertial sensor.
  • Visual-inertial sensor fusion positioning can thus dynamically adjust the weights according to the combination of ambient light sensing and image feature point recognition, fusing visual and inertial data more accurately and improving the accuracy of collision detection.
  • FIG. 4 shows a detailed schematic flow of a VR collision detection method according to an embodiment of the present application.
  • the VR collision detection method of the embodiment of the present application includes:
  • When the detected party is scanned by the detecting party, it verifies the first public key sent by the detecting party; when the verification passes, it generates its corresponding second public key and derives the first shared key.
  • The degree-of-freedom information is then averaged as needed and encrypted for transmission.
  • On its side, the detecting party continues low-frequency scanning until a VR device is scanned, establishes a BLE connection channel, requests the corresponding first public key from the cloud server using the ID in the broadcast, and sends it, together with its own geographic label, to the detected party;
  • The detecting party obtains the 6-DOF pose information and name code information of the detected party using the second shared key, performs low-frequency model rendering in the virtual reality scene, determines the rough position, and completes the first-level collision detection mode; during this process, the parsed pose information of the detected party can be uploaded to the server so that the server can build a virtual map across multiple VR devices;
  • The detecting party and detected party in this application can both be understood as VR devices within the same scene range; each VR device can act as either party, and collision detection takes place between any two VR devices, so no special limitation is placed on which device is the detecting party and which the detected party.
  • the present application further provides a VR collision detection system.
  • FIG. 5 shows a logic block diagram of a VR collision detection system according to an embodiment of the present application.
  • the VR collision detection system 200 of the embodiment of the present application includes:
  • the first collision detection unit 210 is configured to perform a first collision detection on all VR devices to be detected within the same scene range based on the first collision detection mode, and obtain a corresponding first collision detection result;
  • the second collision detection unit 220 is configured to perform a second collision detection on the VR device in the first collision detection result based on the second collision detection mode, and obtain a corresponding second collision detection result;
  • the third collision detection unit 230 is configured to perform a third collision detection on the VR devices in the second collision detection result based on the third collision detection mode, and obtain a final collision detection result between the VR devices.
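The three units above amount to a cascading filter in which each mode only re-examines the devices the previous, cheaper mode flagged. A compact sketch (the detector callables are hypothetical stand-ins for the BLE, infrared, and OBB checks):

```python
# Three-level cascade: mode 1 (BLE) is cheap and broad, mode 2 (infrared) is
# more precise, mode 3 (OBB) yields the final result. Checks are placeholders.
def detect_collisions(devices, ble_check, infrared_check, obb_check):
    """Run the three collision-detection modes in order, narrowing candidates."""
    first = [d for d in devices if ble_check(d)]       # first collision detection result
    second = [d for d in first if infrared_check(d)]   # second collision detection result
    return [d for d in second if obb_check(d)]         # final collision detection result

result = detect_collisions(
    ["hmd-1", "hmd-2", "hmd-3"],
    ble_check=lambda d: d in {"hmd-1", "hmd-2"},   # hmd-3 outside BLE collision range
    infrared_check=lambda d: d in {"hmd-1"},       # IR rays only hit hmd-1
    obb_check=lambda d: True,                      # surviving OBB boxes overlap
)
assert result == ["hmd-1"]
```

The design choice the cascade reflects is power economy: the low-frequency BLE stage runs continuously, while the higher-precision (and higher-cost) infrared and OBB stages run only for the few devices that are already plausibly close.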

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)

Abstract

Provided in the present application are a method and a system for VR collision detection. The method comprises: on the basis of a first collision detection mode, performing first collision detection on all VR devices to be detected within the scope of the same scene, and acquiring the corresponding first collision detection result; on the basis of a second collision detection mode, performing second collision detection on the VR devices in the first collision detection result, and acquiring a corresponding second collision detection result; and on the basis of a third collision detection mode, performing third collision detection on the VR devices in the second collision detection result, and acquiring a final collision detection result between the VR devices. Using the described application, VR virtual collision detection can be achieved with low costs, low power consumption and high precision.

Description

VR collision detection method and system

This application claims priority to Chinese patent application No. 202110871726.2, entitled "VR collision detection method and system", filed with the Chinese Patent Office on July 30, 2021, the entire contents of which are incorporated herein by reference.

Technical Field

The present application relates to the technical field of collision detection and, more specifically, to a VR collision detection method and system.

Background

Virtual reality (VR) uses computer technology as the core of modern high-tech means to generate a virtual environment; with special input/output devices, users interact naturally with objects in the virtual world and receive visual, auditory, and tactile sensations matching those of the real world. With the rapid development of 5G technology worldwide, the VR market is also becoming increasingly popular.

At present, due to the closed nature of VR head-mounted display devices, collisions can occur when multiple people interact in VR within the same scene; virtual reality collision detection has accordingly become a key technology for judging the performance of VR products.

However, existing VR devices mostly rely on expensive external auxiliary positioning equipment, which brings high cost and power consumption; moreover, when multiple people wearing VR headsets share the same area, collision detection accuracy is poor and safety is low, degrading the user experience.

Summary of the Invention

In view of the above problems, the purpose of this application is to provide a VR collision detection method and system to solve the problems of high cost, high power consumption, low safety, and poor precision in existing collision detection.

The VR collision detection method provided by this application includes: performing a first collision detection on all VR devices to be detected within the same scene range based on a first collision detection mode, and obtaining a corresponding first collision detection result; performing a second collision detection on the VR devices in the first collision detection result based on a second collision detection mode, and obtaining a corresponding second collision detection result; and performing a third collision detection on the VR devices in the second collision detection result based on a third collision detection mode, and obtaining a final collision detection result between the VR devices.

In addition, in an optional technical solution, the detection process of the first collision detection mode includes: establishing a BLE connection with a VR device to be detected based on the ID information scanned from the broadcast frame that the device broadcasts, the broadcast frame being constructed from the device's degree-of-freedom (DOF) pose information and a preset ID; requesting the corresponding first public key from the server based on the ID information, and forming an encrypted packet from the first public key and the acquired position vector label of the current detection point; based on the result of the VR device's verification of the encrypted packet, obtaining the DOF pose information sent by the device and rendering its preliminary position from that information; and determining, from the preliminary position, the first collision detection result between the current detection point and the VR device to be detected.

In addition, in an optional technical solution, the detection process of the second collision detection mode includes: increasing the broadcast frequency of the first collision detection mode by a preset amount; performing high-frequency collision detection on any VR device in the first collision detection result using the detection process of the first mode at the increased frequency, and obtaining a corresponding high-frequency collision detection result; and emitting detection rays from the infrared sensor at the current detection point to perform infrared collision detection on any VR device in the high-frequency result, obtaining the second collision detection result.

In addition, in an optional technical solution, based on the result of the VR device's verification of the encrypted packet, the process of obtaining the DOF pose information sent by the VR device to be detected includes: the VR device extracts the position vector label from the encrypted packet and judges whether the distance between the detection point and itself satisfies a preset collision distance; if it does, the VR device generates its second public key and derives the corresponding first shared key from the first public key in the encrypted packet; the detection point then derives the corresponding second shared key from the VR device's second public key and obtains the DOF pose information using that shared key.

In addition, in an optional technical solution, the detection process of the third collision detection mode includes: generating corresponding enclosing vertices based on the distance sensors of the VR devices in the second collision result and the devices' outlines; forming corresponding OBB (oriented bounding box) boxes from the enclosing vertices; and, based on the OBB collision detection method, obtaining the collision result between any two OBB boxes as the final collision detection result between the corresponding VR devices.

In addition, in an optional technical solution, before the first collision detection is performed on all VR devices to be detected within the same scene range based on the first collision detection mode, the method further includes: performing parameter initialization on the VR devices based on the light and environment texture of the scene, so as to determine each device's inertial weight coefficient and visual weight coefficient; and determining the device's DOF pose information based on those two coefficients.

In addition, in an optional technical solution, the parameter initialization process includes: obtaining the light conditions of the scene from the light sensor of the VR device to be detected; if the light conditions meet the requirement of a first preset threshold, acquiring an environment picture of the scene with the device's visual sensor; extracting point features and line features of the scene from the environment picture; and, if both the point features and line features meet the requirement of a second preset threshold, increasing the visual weight coefficient by a preset ratio and decreasing the inertial weight coefficient.

In addition, in an optional technical solution, if the light conditions do not meet the first preset threshold, the inertial weight coefficient is increased by the preset ratio and the visual weight coefficient is decreased; the same adjustment is made if the point features and line features do not meet the second preset threshold.

此外，可选的技术方案是，待检测VR设备的自由度位姿信息的表达公式为：In addition, in an optional technical solution, the degree-of-freedom pose information of the VR device to be detected is expressed as:

v = α·P + β·I

其中，v表示自由度位姿信息，α表示视觉权重系数，β表示惯性权重系数，P表示视觉传感器获取的6自由度位姿信息，I表示惯性传感器的6自由度位姿信息。Here, v represents the degree-of-freedom pose information, α represents the visual weight coefficient, β represents the inertial weight coefficient, P represents the 6-DoF pose information acquired by the visual sensor, and I represents the 6-DoF pose information of the inertial sensor.

根据本申请的另一方面，提供一种VR碰撞检测系统，包括：第一次碰撞检测单元，用于基于第一碰撞检测模式对同一场景范围内的所有待检测VR设备进行第一次碰撞检测，并获取对应的第一碰撞检测结果；第二次碰撞检测单元，用于基于第二碰撞检测模式对第一碰撞检测结果内的VR设备进行第二次碰撞检测，并获取对应的第二碰撞检测结果；第三次碰撞检测单元，用于基于第三碰撞检测模式对第二碰撞检测结果内的VR设备进行第三次碰撞检测，并获取VR设备之间的最终碰撞检测结果。According to another aspect of the present application, a VR collision detection system is provided, including: a first collision detection unit, configured to perform a first collision detection on all VR devices to be detected within the same scene based on the first collision detection mode, and obtain a corresponding first collision detection result; a second collision detection unit, configured to perform a second collision detection on the VR devices in the first collision detection result based on the second collision detection mode, and obtain a corresponding second collision detection result; and a third collision detection unit, configured to perform a third collision detection on the VR devices in the second collision detection result based on the third collision detection mode, and obtain a final collision detection result between the VR devices.

利用上述VR碰撞检测方法及系统，逐级别地通过第一次碰撞检测、第二次碰撞检测和第三次碰撞检测对同一场景范围内的所有待检测VR设备进行碰撞检测，能够在降低检测功耗以及检测成本的同时，提高检测效率及安全性，满足用户的高质量体验。With the above VR collision detection method and system, collision detection is performed level by level on all VR devices to be detected within the same scene through the first, second and third collision detections, which can reduce detection power consumption and cost while improving detection efficiency and safety, providing users with a high-quality experience.

为了实现上述以及相关目的,本申请的一个或多个方面包括后面将详细说明的特征。下面的说明以及附图详细说明了本申请的某些示例性方面。然而,这些方面指示的仅仅是可使用本申请的原理的各种方式中的一些方式。此外,本申请旨在包括所有这些方面以及它们的等同物。In order to achieve the above and related objects, one or more aspects of the present application include features that will be described in detail later. The following description and accompanying drawings detail certain exemplary aspects of the present application. These aspects are indicative, however, of but a few of the various ways in which the principles of the present application may be employed. Furthermore, this application is intended to cover all such aspects and their equivalents.

附图说明Description of drawings

通过参考以下结合附图的说明,并且随着对本申请的更全面理解,本申请的其它目的及结果将更加明白及易于理解。在附图中:By referring to the following descriptions in conjunction with the accompanying drawings, and with a more comprehensive understanding of the application, other objectives and results of the application will become clearer and easier to understand. In the attached picture:

图1为根据本申请实施例的VR碰撞检测方法的流程图;FIG. 1 is a flowchart of a VR collision detection method according to an embodiment of the present application;

图2为根据本申请实施例的加密包的加解密流程图;Fig. 2 is the encryption and decryption flow chart of the encrypted package according to the embodiment of the present application;

图3为根据本申请实施例的参数初始化流程图;FIG. 3 is a flow chart of parameter initialization according to an embodiment of the present application;

图4为根据本申请实施例的VR碰撞检测方法的详细流程图;FIG. 4 is a detailed flowchart of a VR collision detection method according to an embodiment of the present application;

图5为根据本申请实施例的VR碰撞检测系统的逻辑框图。Fig. 5 is a logic block diagram of a VR collision detection system according to an embodiment of the present application.

在所有附图中相同的标号指示相似或相应的特征或功能。The same reference numerals indicate similar or corresponding features or functions throughout the drawings.

具体实施方式Detailed ways

在下面的描述中,出于说明的目的,为了提供对一个或多个实施例的全面理解,阐述了许多具体细节。然而,很明显,也可以在没有这些具体细节的情况下实现这些实施例。在其它例子中,为了便于描述一个或多个实施例,公知的结构和设备以方框图的形式示出。In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more embodiments. It may be evident, however, that these embodiments may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing one or more embodiments.

在本申请的描述中,需要理解的是,术语“中心”、“纵向”、“横向”、“长度”、“宽度”、“厚度”、“上”、“下”、“前”、“后”、“左”、“右”、“竖直”、“水平”、“顶”、“底”“内”、“外”、“顺时针”、“逆时针”、“轴向”、“径向”、“周向”等指示的方位或位置关系为基于附图所示的方位或位置关系,仅是为了便于描述本申请和简化描述,而不是指示或暗示所指的装置或元件必须具有特定的方位、以特定的方位构造和操作,因此不能理解为对本申请的限制。In the description of the present application, it should be understood that the terms "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", " Back", "Left", "Right", "Vertical", "Horizontal", "Top", "Bottom", "Inner", "Outer", "Clockwise", "Counterclockwise", "Axial", The orientation or positional relationship indicated by "radial", "circumferential", etc. is based on the orientation or positional relationship shown in the drawings, and is only for the convenience of describing the application and simplifying the description, rather than indicating or implying the referred device or element Must be in a particular orientation, constructed, and operate in a particular orientation, and thus should not be construed as limiting of the application.

为详细描述本申请的VR碰撞检测方法及系统,以下将结合附图对本申请的具体实施例进行详细描述。In order to describe the VR collision detection method and system of the present application in detail, specific embodiments of the present application will be described in detail below in conjunction with the accompanying drawings.

图1示出了根据本申请实施例的VR碰撞检测方法的流程图。Fig. 1 shows a flowchart of a VR collision detection method according to an embodiment of the present application.

如图1所示,本申请实施例的VR碰撞检测方法,包括:As shown in Figure 1, the VR collision detection method of the embodiment of the present application includes:

S110:基于第一碰撞检测模式对同一场景范围内的所有待检测VR设备进行第一次碰撞检测,并获取对应的第一碰撞检测结果。S110: Perform a first collision detection on all VR devices to be detected within the same scene range based on the first collision detection mode, and obtain corresponding first collision detection results.

其中,第一碰撞检测模式的检测过程可以包括:Wherein, the detection process of the first collision detection mode may include:

1、基于扫描到的待检测VR设备所广播的广播帧中的ID信息,建立与待检测VR设备之间的BLE连接,广播帧基于待检测VR设备的自由度位姿信息以及预设ID构建;1. Based on the scanned ID information in the broadcast frame broadcast by the VR device to be detected, establish a BLE connection with the VR device to be detected. The broadcast frame is constructed based on the degree of freedom pose information of the VR device to be detected and the preset ID ;

2、基于ID信息向服务器请求对应的第一公钥,并基于第一公钥以及获取的当前检测点的位置向量标签形成加密包;2. Request the server for the corresponding first public key based on the ID information, and form an encrypted packet based on the first public key and the acquired position vector label of the current detection point;

3、基于待检测VR设备对加密包的验证结果,获取待检测VR设备发送的自由度位姿信息,并根据自由度位姿信息渲染与待检测VR设备相对应的初步位置;3. Obtain the DOF pose information sent by the VR device to be detected based on the verification result of the encrypted package by the VR device to be detected, and render the preliminary position corresponding to the VR device to be detected according to the DOF pose information;

4、基于初步位置确定当前检测点与待检测VR设备的第一碰撞检测结果。4. Determine the first collision detection result between the current detection point and the VR device to be detected based on the preliminary position.

具体地，在第一碰撞检测模式的检测过程中，被检测方（即待检测VR设备，下同）会首先将基于惯性权重系数和视觉权重系数确定的待检测VR设备的自由度位姿信息以及预设ID组合成BLE的广播帧进行低频低功耗广播，此时的检测方（即当前检测点的VR设备，下同）也会进行低频扫描，当扫描到周围存在被检测方时，检测方会主动根据广播中的ID信息建立与被检测方之间的BLE连接，并向云端服务器请求对应ID基于ECDH算法生成的第一公钥，然后将场景内的当前检测点的地理位置信息形成位置向量标签，同第一公钥一起在BLE通道中加密后传递至被检测方。Specifically, in the detection process of the first collision detection mode, the detected party (i.e., the VR device to be detected, the same below) first combines its degree-of-freedom pose information, determined based on the inertial weight coefficient and the visual weight coefficient, with a preset ID into a BLE broadcast frame for low-frequency, low-power broadcasting. Meanwhile, the detecting party (i.e., the VR device at the current detection point, the same below) performs low-frequency scanning; when a detected party is scanned nearby, the detecting party actively establishes a BLE connection with it according to the ID information in the broadcast, requests from the cloud server the first public key generated for the corresponding ID based on the ECDH algorithm, forms a position vector label from the geographical position of the current detection point in the scene, and transmits it together with the first public key, encrypted, to the detected party over the BLE channel.
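As an illustrative sketch only — the patent does not specify a payload layout, so the 16-bit service ID, the field order, and the `struct` format string below are assumptions — packing the preset ID together with the 6-DoF pose into a broadcast payload might look like:

```python
import struct

def build_broadcast_frame(service_id, pose6dof):
    # Pack a hypothetical BLE advertising payload: a 16-bit preset service ID
    # followed by six little-endian floats (x, y, z, roll, pitch, yaw).
    return struct.pack("<H6f", service_id, *pose6dof)

def parse_broadcast_frame(frame):
    # Inverse of build_broadcast_frame: recover the ID and the 6-DoF pose.
    fields = struct.unpack("<H6f", frame)
    return fields[0], list(fields[1:])

frame = build_broadcast_frame(0x1A2B, (0.1, 0.2, 0.3, 0.0, 0.0, 1.57))
service_id, pose = parse_broadcast_frame(frame)
```

In an actual implementation the payload would sit inside a standard BLE advertising PDU, and the pose fields would be encrypted as the paragraph above describes.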

其中,基于ECDH算法设计的加解密流程可参考图2中所示具体示例。Wherein, the encryption and decryption process designed based on the ECDH algorithm can refer to the specific example shown in FIG. 2 .

如图2所示,基于待检测VR设备对加密包的验证结果,获取待检测VR设备发送的自由度位姿信息的过程进一步包括:As shown in Figure 2, based on the verification result of the encrypted packet by the VR device to be detected, the process of obtaining the DOF pose information sent by the VR device to be detected further includes:

首先，待检测VR设备提取加密包中的位置向量标签，并判断检测点与待检测VR设备之间的距离是否满足预设碰撞距离，该预设碰撞距离主要用于判断当前检测点的地理位置是否与被检测方的地理位置相近，如果离得较远，可直接终止BLE连接，结束位置检测过程。First, the VR device to be detected extracts the position vector label from the encrypted packet and judges whether the distance between the detection point and the VR device to be detected satisfies a preset collision distance. This preset collision distance is mainly used to judge whether the geographical position of the current detection point is close to that of the detected party; if they are far apart, the BLE connection can be terminated directly to end the position detection process.
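The distance-screening step above can be sketched as follows; the 2-metre preset collision distance is an assumed value for illustration, since the patent leaves it unspecified:

```python
import math

def within_collision_range(tag_a, tag_b, preset_distance=2.0):
    # Compare two position-vector tags (x, y, z) against the preset
    # collision distance; preset_distance=2.0 is a hypothetical value.
    return math.dist(tag_a, tag_b) <= preset_distance

# A far-away device lets the detected party terminate the BLE connection early.
keep_connection = within_collision_range((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
```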

如果检测点与待检测VR设备之间的距离满足预设碰撞距离，则在被检测方生成待检测VR设备的第二公钥，并基于加密包中的第一公钥生成对应的第一公共密钥，然后将被检测方的第二公钥发送至检测方，检测方基于待检测VR设备的第二公钥生成对应的第二公共密钥，通过第一公钥、第二公钥、第一公共密钥、第二公共密钥就可实现检测方和被检测方之间的数据加密传输，并根据第二公共密钥获取自由度位姿信息。If the distance between the detection point and the VR device to be detected satisfies the preset collision distance, the detected party generates a second public key for the VR device to be detected and derives a corresponding first shared key based on the first public key in the encrypted packet; the second public key of the detected party is then sent to the detecting party, which derives a corresponding second shared key based on that second public key. Through the first public key, the second public key, the first shared key and the second shared key, encrypted data transmission between the detecting party and the detected party is achieved, and the degree-of-freedom pose information is obtained according to the second shared key.
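The key exchange described above can be illustrated with a toy Diffie–Hellman sketch. Real ECDH, as named in the patent, operates on elliptic-curve points with production-grade parameters; the plain modular-exponentiation form and the demo-sized prime below are simplifications for illustration only:

```python
import secrets

P_PRIME = (1 << 64) - 59   # a 64-bit prime — demo size only, not secure
G = 5                      # public generator

def keypair():
    # Each side keeps `priv` secret and publishes `pub`.
    priv = secrets.randbelow(P_PRIME - 2) + 1
    return priv, pow(G, priv, P_PRIME)

detector_priv, detector_pub = keypair()   # detecting party: "first public key"
detected_priv, detected_pub = keypair()   # detected party: "second public key"

# Each side combines its own private key with the peer's public key, so both
# derive the same shared key without it ever crossing the BLE channel.
shared_on_detector = pow(detected_pub, detector_priv, P_PRIME)
shared_on_detected = pow(detector_pub, detected_priv, P_PRIME)
```

This is the property the exchange relies on: the pose data can then be encrypted under the common shared key even though only public keys were transmitted.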

最终，检测方可根据被检测方的自由度位姿信息在VR环境中渲染出对方的初始位置，根据该初始位置即可确定当前检测点与待检测VR设备的第一碰撞检测结果。一般情况下，当检测方检测到与被检测方之间的距离小于预设碰撞距离时，即可确定二者之间存在碰撞可能，第一碰撞检测结果中包含所有检测方能够检测到的被检测方，该过程属于碰撞的第一层级检测，初步确定可能的碰撞目标。Finally, the detecting party can render the initial position of the other party in the VR environment according to the degree-of-freedom pose information of the detected party, and the first collision detection result between the current detection point and the VR device to be detected can be determined from that initial position. Generally, when the detecting party detects that its distance to a detected party is less than the preset collision distance, a possible collision between the two can be determined; the first collision detection result contains all detected parties that the detecting party can detect. This process belongs to the first level of collision detection and preliminarily determines possible collision targets.

S120:基于第二碰撞检测模式对第一碰撞检测结果内的VR设备进行第二次碰撞检测,并获取对应的第二碰撞检测结果。S120: Perform a second collision detection on the VR devices in the first collision detection result based on the second collision detection mode, and acquire a corresponding second collision detection result.

该步骤中,第二碰撞检测模式的检测过程进一步包括:In this step, the detection process of the second collision detection mode further includes:

1、按照预设幅度提高第一碰撞检测模式中的广播的频率;1. Increase the frequency of the broadcast in the first collision detection mode according to the preset range;

2、基于提高后的广播频率根据第一碰撞检测模式对第一碰撞检测结果内的任意VR设备进行高频碰撞检测,获取对应的高频碰撞检测结果;2. Based on the increased broadcast frequency, perform high-frequency collision detection on any VR device in the first collision detection result according to the first collision detection mode, and obtain the corresponding high-frequency collision detection result;

3、基于当前检测点的红外传感器发射检测射线,对高频碰撞检测结果内的任意VR设备进行红外碰撞检测,并获取第二碰撞检测结果。3. The infrared sensor based on the current detection point emits detection rays, and performs infrared collision detection on any VR device in the high-frequency collision detection result, and obtains the second collision detection result.

具体地，当第一碰撞检测模式结束后，且第一碰撞检测结果存在VR设备时，即第一碰撞检测结果为存在可能发生碰撞的VR设备时，可提高BLE广播频率，并按照第一碰撞检测模式的流程，再次进行碰撞检测，从而更精确地确认检测结果，当高频BLE广播同样确认可能会发生碰撞的VR设备后，检测方会进一步利用红外传感器发射检测射线，通过射线检测的方法基于第一碰撞检测结果进一步确定第二碰撞检测结果。Specifically, after the first collision detection mode ends, if a VR device exists in the first collision detection result, i.e., the result indicates a VR device that may collide, the BLE broadcast frequency can be increased and collision detection performed again following the flow of the first collision detection mode, so as to confirm the detection result more precisely. Once the high-frequency BLE broadcast likewise confirms a VR device that may collide, the detecting party further uses an infrared sensor to emit detection rays, and through this ray-detection method determines the second collision detection result on the basis of the first collision detection result.
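As a sketch of a ray-based check — the patent does not give the detection geometry, so approximating the peer device by a bounding sphere is an assumption made here purely for illustration:

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    # Does a detection ray from `origin` along `direction` hit a sphere
    # (center, radius) standing in for another VR device?
    norm = math.sqrt(sum(d * d for d in direction))
    d = [c / norm for c in direction]                 # unit ray direction
    oc = [c - o for c, o in zip(center, origin)]      # origin -> sphere centre
    tca = sum(a * b for a, b in zip(oc, d))           # projection on the ray
    if tca < 0:
        return False                                  # sphere lies behind emitter
    dist2 = sum(a * a for a in oc) - tca * tca        # squared ray-centre distance
    return dist2 <= radius * radius
```

A positive hit on any ray would promote the device from the first collision detection result into the second.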

其中,如果第一碰撞检测结果为零,即不存在VR设备可能与检测方发生碰撞,则无需进行第二次以及第三次碰撞检测,进而能够降低功耗。此外,第二碰撞检测结果是对第一碰撞检测结果的更高精度的检测筛选,因此,第二碰撞检测结果的范围可能会小于第一碰撞检测结果。Wherein, if the result of the first collision detection is zero, that is, there is no possible collision between the VR device and the detection party, the second and third collision detections are unnecessary, thereby reducing power consumption. In addition, the second collision detection result is a higher-precision detection screening of the first collision detection result, therefore, the range of the second collision detection result may be smaller than the first collision detection result.

S130:基于第三碰撞检测模式对第二碰撞检测结果内的VR设备进行第三次碰撞检测,并获取VR设备之间的最终碰撞检测结果。S130: Perform a third collision detection on the VR devices in the second collision detection result based on the third collision detection mode, and obtain a final collision detection result between the VR devices.

其中,第三碰撞检测模式的检测过程包括:Wherein, the detection process of the third collision detection mode includes:

1、基于第二碰撞检测结果内的VR设备中的距离传感器以及VR设备的外形轮廓,生成对应的包围顶点;1. Based on the distance sensor in the VR device in the second collision detection result and the outline of the VR device, generate corresponding enclosing vertices;

2、基于包围顶点形成对应的OBB方向包围盒;2. Form the corresponding OBB direction bounding box based on the surrounding vertices;

3、基于OBB碰撞检测方法,获取任意两OBB方向包围盒之间的碰撞检测结果,作为对应的VR设备之间的最终碰撞检测结果。3. Based on the OBB collision detection method, the collision detection results between any two OBB direction bounding boxes are obtained as the final collision detection results between the corresponding VR devices.

具体地，由于在每个VR设备上均设置有多个距离传感器，当第二碰撞检测结果中存在目标VR设备时，可通过对应的距离传感器，形成包围自身的包围顶点以及OBB方向包围盒，当任意两个OBB方向包围盒存在重叠时，即可确定二者发生碰撞，此时可基于OBB方向包围盒确定碰撞角度以及位置等信息，并在发生碰撞之前给出预警。Specifically, since multiple distance sensors are provided on each VR device, when a target VR device exists in the second collision detection result, the enclosing vertices surrounding the device and its OBB bounding box can be formed through the corresponding distance sensors. When any two OBB bounding boxes overlap, a collision between the two can be determined; at this point, information such as the collision angle and position can be determined based on the OBB bounding boxes, and an early warning is given before the collision occurs.

需要说明的是，该OBB方向包围盒可预留一定的安全边界，当OBB方向包围盒存在重叠时，确定虚拟场景中的两个VR设备发生碰撞，但是现实场景中，二者可能还存在一定的安全距离，确保用户安全。It should be noted that a certain safety margin can be reserved for the OBB bounding box: when the OBB bounding boxes overlap, the two VR devices are determined to have collided in the virtual scene, while in the real scene a certain safety distance may still remain between them, ensuring user safety.
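The overlap test between two oriented bounding boxes can be sketched with the separating-axis theorem. The 2D form below (a ground-plane projection) is a simplification for illustration; a full 3D OBB test additionally checks the cross-product axes:

```python
import math

def obb_corners(cx, cy, hw, hh, angle):
    # Four corners of an oriented box given centre, half-extents and rotation.
    c, s = math.cos(angle), math.sin(angle)
    return [(cx + dx * c - dy * s, cy + dx * s + dy * c)
            for dx, dy in ((hw, hh), (-hw, hh), (-hw, -hh), (hw, -hh))]

def _project(corners, axis):
    dots = [x * axis[0] + y * axis[1] for x, y in corners]
    return min(dots), max(dots)

def obb_overlap(a, b):
    # Separating-axis test for two 2D OBBs given as (cx, cy, hw, hh, angle):
    # if the projections onto every face axis of both boxes overlap, the boxes
    # intersect; one non-overlapping axis proves separation.
    ca, cb = obb_corners(*a), obb_corners(*b)
    for angle in (a[4], b[4]):
        for ax in ((math.cos(angle), math.sin(angle)),
                   (-math.sin(angle), math.cos(angle))):
            amin, amax = _project(ca, ax)
            bmin, bmax = _project(cb, ax)
            if amax < bmin or bmax < amin:
                return False   # found a separating axis -> no collision
    return True
```

The safety margin mentioned above would simply enlarge the half-extents `hw`/`hh` before running the test.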

在本申请的一个具体实施方式中，在基于第一碰撞检测模式对同一场景范围内的所有待检测VR设备进行第一次碰撞检测之前，还可以包括：基于场景的光线和环境纹理，对待检测VR设备进行参数初始化处理，以确定待检测VR设备的惯性权重系数和视觉权重系数；最终，基于惯性权重系数和视觉权重系数，确定待检测VR设备的自由度位姿信息。In a specific embodiment of the present application, before the first collision detection is performed on all VR devices to be detected within the same scene based on the first collision detection mode, the method may further include: performing parameter initialization on the VR device to be detected based on the light and environment texture of the scene, to determine the inertial weight coefficient and visual weight coefficient of the VR device to be detected; and finally, determining the degree-of-freedom pose information of the VR device to be detected based on the inertial weight coefficient and the visual weight coefficient.

具体地,图3示出了根据本申请实施例的参数初始化示意流程。Specifically, FIG. 3 shows a schematic process of parameter initialization according to an embodiment of the present application.

如图3所示，参数初始化处理过程进一步包括：在新环境下启动初始化配置流程，然后基于待检测VR设备的光传感器，获取场景的光线条件；如光线条件满足第一预设阈值的要求，则基于待检测VR设备的视觉传感器获取场景的环境图片；基于环境图片提取场景中的点特征和线特征；如点特征和线特征均满足第二预设阈值的要求，则按预设比例增大视觉权重系数，并减小惯性权重系数。As shown in Figure 3, the parameter initialization process further includes: starting the initialization configuration process in a new environment, then acquiring the light conditions of the scene based on the light sensor of the VR device to be detected; if the light conditions meet the requirement of the first preset threshold, acquiring an environment picture of the scene based on the visual sensor of the VR device to be detected; extracting point features and line features in the scene based on the environment picture; and if both the point features and the line features meet the requirement of the second preset threshold, increasing the visual weight coefficient and decreasing the inertial weight coefficient according to a preset ratio.

此外，如光线条件不满足第一预设阈值的要求，则按照预设比例提高惯性权重系数，并减小视觉权重系数；如点特征和线特征不满足第二预设阈值的要求，则按照预设比例提高惯性权重系数，并减小视觉权重系数。VR设备能够在光线太暗或者太亮的情况下，提高惯性权重系数，而在光线适中的情况下，提高视觉权重系数，从而避免光线、纹理等环境因素对单一传感器定位精度产生的影响。In addition, if the light conditions do not meet the requirement of the first preset threshold, the inertial weight coefficient is increased and the visual weight coefficient decreased according to the preset ratio; if the point features and line features do not meet the requirement of the second preset threshold, the inertial weight coefficient is likewise increased and the visual weight coefficient decreased according to the preset ratio. The VR device can thus raise the inertial weight coefficient when the light is too dark or too bright, and raise the visual weight coefficient when the light is moderate, avoiding the impact of environmental factors such as light and texture on the positioning accuracy of a single sensor.
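A sketch of this weight-adjustment rule follows. All thresholds — the lux range, the feature-count minima and the 20 % adjustment ratio — are assumed values; the patent only requires that such thresholds and a preset ratio exist:

```python
def init_weights(lux, n_points, n_lines, ratio=0.2,
                 lux_range=(100.0, 2000.0), feat_min=(50, 10)):
    # Start from equal visual/inertial weights, then shift them according to
    # the scene's light level and the richness of point/line features.
    alpha, beta = 0.5, 0.5            # visual, inertial
    light_ok = lux_range[0] <= lux <= lux_range[1]
    feats_ok = n_points >= feat_min[0] and n_lines >= feat_min[1]
    if light_ok and feats_ok:
        alpha, beta = alpha * (1 + ratio), beta * (1 - ratio)   # trust vision more
    else:
        alpha, beta = alpha * (1 - ratio), beta * (1 + ratio)   # trust inertia more
    total = alpha + beta              # renormalise so the weights sum to 1
    return alpha / total, beta / total
```

For example, a moderately lit, feature-rich room raises the visual weight, while a dark or texture-poor room raises the inertial weight.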

作为具体示例，待检测VR设备的自由度位姿信息的表达公式为：As a specific example, the degree-of-freedom pose information of the VR device to be detected is expressed as:

v = α·P + β·I

其中，v表示自由度位姿信息，α表示视觉权重系数，β表示惯性权重系数，P表示视觉传感器获取的6自由度位姿信息，I表示惯性传感器的6自由度位姿信息。视觉与惯性传感器进行融合定位，能够根据环境光传感器和图像特征点识别相结合的方法动态调节权重，从而更加精准地融合视觉和惯性数据，提高碰撞检测精度。Here, v represents the degree-of-freedom pose information, α represents the visual weight coefficient, β represents the inertial weight coefficient, P represents the 6-DoF pose information acquired by the visual sensor, and I represents the 6-DoF pose information of the inertial sensor. Fusion positioning of the visual and inertial sensors can dynamically adjust the weights by combining the ambient light sensor with image feature point recognition, thereby fusing visual and inertial data more accurately and improving collision detection precision.
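The fusion can be sketched directly as a component-wise weighted sum of the two 6-DoF vectors. Note that a production system would typically blend orientations with quaternions rather than raw Euler angles; the element-wise form here simply mirrors the formula as stated:

```python
def fuse_pose(alpha, beta, visual_pose, inertial_pose):
    # v = alpha * P + beta * I, applied per component of the
    # 6-DoF vectors (x, y, z, roll, pitch, yaw).
    assert len(visual_pose) == len(inertial_pose) == 6
    return [alpha * p + beta * i
            for p, i in zip(visual_pose, inertial_pose)]

v = fuse_pose(0.6, 0.4, [1.0] * 6, [0.0] * 6)
```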

作为具体示例,图4示出了根据本申请实施例的VR碰撞检测方法的详细示意流程。As a specific example, FIG. 4 shows a detailed schematic flow of a VR collision detection method according to an embodiment of the present application.

如图4所示,本申请实施例的VR碰撞检测方法,包括:As shown in Figure 4, the VR collision detection method of the embodiment of the present application includes:

1、对视觉传感器和惯性传感器的权重系数进行初始化,然后将碰撞检测服务ID以及加密的6自由度位姿信息通过广播帧的形式进行低频BLE广播;1. Initialize the weight coefficients of the visual sensor and inertial sensor, and then broadcast the collision detection service ID and the encrypted 6-DOF pose information in the form of a broadcast frame through low-frequency BLE;

2、当被检测方被检测方检测到时，对检测方发送的第一公钥进行验证，并在验证通过时，获取对应的被检测方的第二公钥，并获取第一公共密钥；2. When the detected party is detected by the detecting party, it verifies the first public key sent by the detecting party and, when verification passes, obtains the corresponding second public key of the detected party and derives the first shared key;

3、基于第一公共密钥对其自由度信息进行平均采样并加密传输；3. Based on the first shared key, its degree-of-freedom information is uniformly sampled and transmitted in encrypted form;

4、对于检测方而言，会持续进行低频扫描，直至扫描到VR设备后，建立BLE连接通道，并根据广播中的ID向云端服务器请求对应的第一公钥，并结合自己的地理标签发送至被检测方；4. The detecting party keeps performing low-frequency scanning; once a VR device is scanned, it establishes a BLE connection channel, requests the corresponding first public key from the cloud server according to the ID in the broadcast, and sends it to the detected party together with its own geographic tag;

5、检测方根据第二公共密钥获取被检测方的6自由度位姿信息和名称代号等信息，在虚拟现实场景中进行低频模型渲染，确定粗略的位置，完成第一层级的第一碰撞检测模式；其中，在此过程中，解析出来的被检测方的位姿信息，可上传至服务器，供服务器构建多VR设备间的虚拟地图；5. The detecting party obtains the 6-DoF pose information, name code and other information of the detected party according to the second shared key, performs low-frequency model rendering in the virtual reality scene to determine a rough position, completing the first-level first collision detection mode; during this process, the parsed pose information of the detected party can be uploaded to the server, allowing the server to build a virtual map among multiple VR devices;

6、进行第二层级的第二碰撞检测模式,通过高频BLE进行广播,并重复执行上述步骤,然后通过主红外线传感器进行射线包围检测;6. Carry out the second collision detection mode of the second level, broadcast through high-frequency BLE, and repeat the above steps, and then perform ray surround detection through the main infrared sensor;

7、进行第三层级的第三碰撞检测模式,确定OBB方向包围盒,并根据OBB方向包围盒返回最终的碰撞检测结果,并进行碰撞预警。7. Carry out the third collision detection mode of the third level, determine the OBB direction bounding box, and return the final collision detection result according to the OBB direction bounding box, and perform collision warning.

需要说明的是，为方便描述，本申请所涉及的检测方和被检测方均可理解为同一场景范围内的VR设备，每个VR设备均可理解为检测方和被检测方，碰撞检测也是任意两个VR设备之间的检测，因此并不对检测方和被检测方进行特殊限定。It should be noted that, for convenience of description, the detecting party and the detected party in this application can both be understood as VR devices within the same scene, and each VR device can act as either the detecting party or the detected party; collision detection is detection between any two VR devices, so no special limitation is placed on which device is the detecting party and which is the detected party.

与上述VR碰撞检测方法相对应,本申请还提供一种VR碰撞检测系统。Corresponding to the aforementioned VR collision detection method, the present application further provides a VR collision detection system.

具体地，图5示出了根据本申请实施例的VR碰撞检测系统的逻辑框图。Specifically, FIG. 5 shows a logic block diagram of the VR collision detection system according to an embodiment of the present application.

如图5所示，本申请实施例的VR碰撞检测系统200，包括：As shown in FIG. 5, the VR collision detection system 200 of the embodiment of the present application includes:

第一次碰撞检测单元210,用于基于第一碰撞检测模式对同一场景范围内的所有待检测VR设备进行第一次碰撞检测,并获取对应的第一碰撞检测结果;The first collision detection unit 210 is configured to perform a first collision detection on all VR devices to be detected within the same scene range based on the first collision detection mode, and obtain a corresponding first collision detection result;

第二次碰撞检测单元220,用于基于第二碰撞检测模式对第一碰撞检测结果内的VR设备进行第二次碰撞检测,并获取对应的第二碰撞检测结果;The second collision detection unit 220 is configured to perform a second collision detection on the VR device in the first collision detection result based on the second collision detection mode, and obtain a corresponding second collision detection result;

第三次碰撞检测单元230,用于基于第三碰撞检测模式对第二碰撞检测结果内的VR设备进行第三次碰撞检测,并获取VR设备之间的最终碰撞检测结果。The third collision detection unit 230 is configured to perform a third collision detection on the VR devices in the second collision detection result based on the third collision detection mode, and obtain a final collision detection result between the VR devices.

需要说明的是,上述VR碰撞检测系统的实施例可参考VR碰撞检测方法实施例中的描述,此处不再一一赘述。It should be noted that, for the embodiments of the above VR collision detection system, reference may be made to the description in the embodiments of the VR collision detection method, and details will not be repeated here.

根据上述本申请的VR碰撞检测方法及系统,具有以下优点:According to the above-mentioned VR collision detection method and system of the present application, it has the following advantages:

1、能够在同一场景下实现低功耗、低成本的多人VR碰撞检测。1. It can realize low-power and low-cost multi-person VR collision detection in the same scene.

2、结合地理位置信息和BLE加解密流程,能够更为安全和有效的进行碰撞检测相关信息的同步,实现虚拟位姿信息的融合。2. Combined with geographic location information and BLE encryption and decryption process, it can more safely and effectively synchronize information related to collision detection, and realize the fusion of virtual pose information.

3、能够根据VR头显的真实使用场景调整视觉和惯性传感器在位姿定位过程中的比重,从而克服多样化使用场景的限制,更为精准和稳定的实现融合定位。3. It is possible to adjust the proportion of visual and inertial sensors in the pose positioning process according to the real use scene of the VR head display, so as to overcome the limitations of diverse use scenes and achieve more accurate and stable fusion positioning.

4、通过多次碰撞检测,实现多层级的检测过程,确保检测结果的准确。4. Through multiple collision detection, a multi-level detection process is realized to ensure the accuracy of the detection results.

如上参照图1至图5以示例的方式描述了根据本申请的VR碰撞检测方法及系统。但是，本领域技术人员应当理解，对于上述本申请所提出的VR碰撞检测方法及系统，还可以在不脱离本申请内容的基础上做出各种改进。因此，本申请的保护范围应当由所附的权利要求书的内容确定。The VR collision detection method and system according to the present application have been described above by way of example with reference to FIG. 1 to FIG. 5. However, those skilled in the art should understand that various improvements can be made to the VR collision detection method and system proposed in the present application without departing from the content of the present application. Therefore, the protection scope of this application should be determined by the contents of the appended claims.

Claims (10)

一种VR碰撞检测方法,其特征在于,所述方法包括:A VR collision detection method, characterized in that the method comprises: 基于第一碰撞检测模式对同一场景范围内的所有待检测VR设备进行第一次碰撞检测,并获取对应的第一碰撞检测结果;Perform the first collision detection on all the VR devices to be detected within the same scene range based on the first collision detection mode, and obtain the corresponding first collision detection results; 基于第二碰撞检测模式对所述第一碰撞检测结果内的VR设备进行第二次碰撞检测,并获取对应的第二碰撞检测结果;Performing a second collision detection on the VR devices in the first collision detection result based on the second collision detection mode, and obtaining a corresponding second collision detection result; 基于第三碰撞检测模式对所述第二碰撞检测结果内的VR设备进行第三次碰撞检测,并获取所述VR设备之间的最终碰撞检测结果。Perform a third collision detection on the VR devices in the second collision detection result based on the third collision detection mode, and obtain a final collision detection result between the VR devices. 如权利要求1所述的VR碰撞检测方法,其特征在于,所述第一碰撞检测模式的检测过程包括:The VR collision detection method according to claim 1, wherein the detection process of the first collision detection mode comprises: 基于扫描到的待检测VR设备所广播的广播帧中的ID信息,建立与所述待检测VR设备之间的BLE连接,所述广播帧基于所述待检测VR设备的自由度位姿信息以及预设ID构建;Based on the scanned ID information in the broadcast frame broadcast by the VR device to be detected, establish a BLE connection with the VR device to be detected, the broadcast frame is based on the degree of freedom pose information of the VR device to be detected and Preset ID construction; 基于所述ID信息向服务器请求对应的第一公钥,并基于所述第一公钥以及获取的当前检测点的位置向量标签形成加密包;Requesting a corresponding first public key from the server based on the ID information, and forming an encrypted packet based on the first public key and the obtained position vector label of the current detection point; 基于所述待检测VR设备对所述加密包的验证结果,获取所述待检测VR设备发送的所述自由度位姿信息,并根据所述自由度位姿信息渲染与所述待检测VR设备相对应的初步位置;Obtain the DOF pose information sent by the VR device to be detected based on the verification result of the encrypted packet by the VR device to be detected, and render and render the VR device 
to be detected according to the DOF pose information. the corresponding initial position; 基于所述初步位置确定所述当前检测点与所述待检测VR设备的所述第一碰撞检测结果。Determine the first collision detection result between the current detection point and the VR device to be detected based on the preliminary position. 如权利要求2所述的VR碰撞检测方法,其特征在于,所述第二碰撞检测模式的检测过程包括:The VR collision detection method according to claim 2, wherein the detection process of the second collision detection mode comprises: 按照预设幅度提高所述第一碰撞检测模式中的广播的频率;increasing the frequency of the broadcast in the first collision detection mode according to a preset range; 基于提高后的广播频率根据所述第一碰撞检测模式对所述第一碰撞检测结果内的任意VR设备进行高频碰撞检测,并获取对应的高频碰撞检测结果;Perform high-frequency collision detection on any VR device in the first collision detection result according to the first collision detection mode based on the increased broadcast frequency, and obtain a corresponding high-frequency collision detection result; 基于所述当前检测点的红外传感器发射检测射线,对所述高频碰撞检测结果内的任意VR设备进行红外碰撞检测,并获取所述第二碰撞检测结果。The infrared sensor based on the current detection point emits a detection ray, performs infrared collision detection on any VR device in the high-frequency collision detection result, and obtains the second collision detection result. 
如权利要求2所述的VR碰撞检测方法,其特征在于,基于所述待检测VR设备对所述加密包的验证结果,获取所述待检测VR设备发送的所述自由度位姿信息的过程包括:The VR collision detection method according to claim 2, characterized in that, based on the verification result of the encrypted packet by the VR device to be detected, the process of obtaining the DOF pose information sent by the VR device to be detected include: 所述待检测VR设备提取所述加密包中的位置向量标签,并判断所述检测点与所述待检测VR设备之间的距离是否满足预设碰撞距离;The VR device to be detected extracts the position vector label in the encrypted packet, and judges whether the distance between the detection point and the VR device to be detected meets a preset collision distance; 如果所述检测点与所述待检测VR设备之间的距离满足所述预设碰撞距离,则生成所述待检测VR设备的第二公钥,并基于所述加密包中的所述第一公钥生成对应的第一公共密钥;If the distance between the detection point and the VR device to be detected satisfies the preset collision distance, generate a second public key of the VR device to be detected, and based on the first The public key generates a corresponding first public key; 所述检测点基于所述待检测VR设备的所述第二公钥生成对应的第二公共密钥,并根据所述第二公共密钥获取所述自由度位姿信息。The detection point generates a corresponding second public key based on the second public key of the VR device to be detected, and obtains the degree-of-freedom pose information according to the second public key. 如权利要求1所述的VR碰撞检测方法,其特征在于,所述第三碰撞检测模式的检测过程包括:The VR collision detection method according to claim 1, wherein the detection process of the third collision detection mode comprises: 基于所述第二碰撞结果内的VR设备中的距离传感器以及所述VR设备的外形轮廓,生成对应的包围顶点;Generate corresponding enclosing vertices based on the distance sensor in the VR device in the second collision result and the outline of the VR device; 基于所述包围顶点形成对应的OBB方向包围盒;forming a corresponding OBB direction bounding box based on the surrounding vertices; 基于OBB碰撞检测方法,获取任意两OBB方向包围盒之间的碰撞检测结果,作为对应的所述VR设备之间的最终碰撞检测结果。Based on the OBB collision detection method, the collision detection results between any two OBB direction bounding boxes are obtained as the final collision detection results between the corresponding VR devices. 
The VR collision detection method according to claim 1, wherein before performing the first collision detection on all VR devices to be detected within the same scene based on the first collision detection mode, the method further comprises:

performing parameter initialization on the VR devices to be detected based on the light and environmental texture of the scene, so as to determine an inertial weight coefficient and a visual weight coefficient for each VR device to be detected;

determining the DOF pose information of the VR device to be detected based on the inertial weight coefficient and the visual weight coefficient.

The VR collision detection method according to claim 6, wherein the parameter initialization process comprises:

acquiring the light conditions of the scene based on a light sensor of the VR device to be detected;

if the light conditions meet the requirement of a first preset threshold, acquiring an environment picture of the scene based on a visual sensor of the VR device to be detected;

extracting point features and line features of the scene based on the environment picture;

if both the point features and the line features meet the requirement of a second preset threshold, increasing the visual weight coefficient by a preset ratio and decreasing the inertial weight coefficient.
The VR collision detection method according to claim 7, wherein:

if the light conditions do not meet the requirement of the first preset threshold, increasing the inertial weight coefficient by the preset ratio and decreasing the visual weight coefficient;

if the point features and the line features do not meet the requirement of the second preset threshold, increasing the inertial weight coefficient by the preset ratio and decreasing the visual weight coefficient.

The VR collision detection method according to claim 7, wherein the DOF pose information of the VR device to be detected is expressed as:

v = αP + βI

where v denotes the DOF pose information, α denotes the visual weight coefficient, β denotes the inertial weight coefficient, P denotes the 6-DOF pose information acquired by the visual sensor, and I denotes the 6-DOF pose information of the inertial sensor.
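The weight-adjustment logic of claims 7-8 and the fusion formula of claim 9 can be sketched as follows. This is an illustrative reading, not the patent's implementation; the function names, the complementary-weight constraint, and the default ratio of 0.1 are assumptions.

```python
def fuse_pose(visual_pose, inertial_pose, alpha, beta):
    """Weighted 6-DOF fusion, v = alpha * P + beta * I (claim 9)."""
    return [alpha * p + beta * i for p, i in zip(visual_pose, inertial_pose)]

def adjust_weights(alpha, light_ok, features_ok, ratio=0.1):
    """Shift weight toward the visual sensor when lighting and the point/line
    features pass their thresholds (claim 7); otherwise shift weight toward
    the inertial sensor (claim 8). The weights are kept complementary so the
    fusion stays a convex combination -- an assumption, not stated in the claims."""
    alpha = alpha + ratio if (light_ok and features_ok) else alpha - ratio
    alpha = min(max(alpha, 0.0), 1.0)  # clamp to [0, 1]
    return alpha, 1.0 - alpha
```

For example, starting from equal weights in a well-lit, feature-rich scene, one adjustment step yields α = 0.6 and β = 0.4, and the fused pose lies 60% of the way toward the visual estimate.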
A VR collision detection system, comprising:

a first collision detection unit, configured to perform a first collision detection on all VR devices to be detected within the same scene based on a first collision detection mode, and obtain a corresponding first collision detection result;

a second collision detection unit, configured to perform a second collision detection on the VR devices within the first collision detection result based on a second collision detection mode, and obtain a corresponding second collision detection result;

a third collision detection unit, configured to perform a third collision detection on the VR devices within the second collision detection result based on a third collision detection mode, and obtain a final collision detection result between the VR devices.
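The three units above form a coarse-to-fine filter: each mode only re-examines the devices flagged by the previous, cheaper mode. A minimal sketch of that control flow (the function names and predicate signatures are hypothetical, not from the patent):

```python
def staged_collision_detection(devices, broadcast_hit, infrared_hit, obb_hit):
    """Run the three detection modes as successive filters over the device set.

    Each predicate takes a device and returns True if that mode still
    considers it a collision candidate.
    """
    first = [d for d in devices if broadcast_hit(d)]   # first mode: broadcast ranging
    second = [d for d in first if infrared_hit(d)]     # second mode: high-frequency + infrared
    return [d for d in second if obb_hit(d)]           # third mode: OBB bounding boxes
```

With devices represented here simply by their distances and thresholds of 10, 5, and 2 for the three modes, only devices closer than 2 survive all three stages.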
PCT/CN2021/124685 2021-07-30 2021-10-19 Method and system for vr collision detection Ceased WO2023005007A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110871726.2 2021-07-30
CN202110871726.2A CN113838215B (en) 2021-07-30 2021-07-30 VR collision detection method and system

Publications (1)

Publication Number Publication Date
WO2023005007A1 true WO2023005007A1 (en) 2023-02-02

Family

ID=78963058

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/124685 Ceased WO2023005007A1 (en) 2021-07-30 2021-10-19 Method and system for vr collision detection

Country Status (2)

Country Link
CN (1) CN113838215B (en)
WO (1) WO2023005007A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070081695A1 (en) * 2005-10-04 2007-04-12 Eric Foxlin Tracking objects with markers
CN110969687A (en) * 2019-11-29 2020-04-07 中国商用飞机有限责任公司北京民用飞机技术研究中心 Collision detection method, device, equipment and medium
CN111062135A (en) * 2019-12-18 2020-04-24 哈尔滨理工大学 An Accurate Collision Detection Method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6791549B2 (en) * 2001-12-21 2004-09-14 Vrcontext S.A. Systems and methods for simulating frames of complex virtual environments
CN102368280A (en) * 2011-10-21 2012-03-07 北京航空航天大学 Virtual assembly-oriented collision detection method based on AABB (Axis Aligned Bounding Box)-OBB (Oriented Bounding Box) mixed bounding box
US10078919B2 (en) * 2016-03-31 2018-09-18 Magic Leap, Inc. Interactions with 3D virtual objects using poses and multiple-DOF controllers
CN107270900A (en) * 2017-07-25 2017-10-20 广州阿路比电子科技有限公司 A kind of 6DOF locus and the detecting system and method for posture
CN110865650B (en) * 2019-11-19 2022-12-20 武汉工程大学 Adaptive estimation method of UAV pose based on active vision
CN111652908A (en) * 2020-04-17 2020-09-11 国网山西省电力公司晋中供电公司 An operation collision detection method for virtual reality scene


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Meiping Wu; Liyun Shi; Zhijun Ren: "A Hierarchical Collision Detection Algorithm for VA", 2010 Chinese Control and Decision Conference (CCDC), IEEE, Piscataway, NJ, USA, 26 May 2010, pages 4315-4319, XP031699606, ISBN: 978-1-4244-5181-4 *

Also Published As

Publication number Publication date
CN113838215B (en) 2024-09-24
CN113838215A (en) 2021-12-24

Similar Documents

Publication Publication Date Title
US20250342642A1 (en) Distributed processing in computer generated reality system
JP4198054B2 (en) 3D video conferencing system
US10740431B2 (en) Apparatus and method of five dimensional (5D) video stabilization with camera and gyroscope fusion
TW202028928A (en) Cross layer traffic optimization for split xr
CN102467661B (en) Multimedia device and method for controlling the same
JP2022053334A (en) Distribution device, distribution system, distribution method and distribution program
CN103310683B (en) Intelligent glasses and based on the voice intercommunicating system of intelligent glasses and method
CN111602139A (en) Image processing method and device, control terminal and mobile device
CN109615659A (en) A method and device for obtaining camera parameters of a vehicle-mounted multi-camera surround view system
JP5500513B2 (en) 3D (3D) video for 2D (2D) video messenger applications
KR101007679B1 (en) Distortion image generator and method for curved screen
CN110958390B (en) Image processing method and related device
CN107563304A (en) Terminal equipment unlocking method and device, and terminal equipment
CN105898271A (en) 360-degree panoramic video playing method, playing module and mobile terminal
TW200948043A (en) Method and image-processing device for hole filling
JPWO2018021065A1 (en) Image processing apparatus and image processing method
US9118903B2 (en) Device and method for 2D to 3D conversion
US20190156511A1 (en) Region of interest image generating device
CN107438161A (en) Shooting picture processing method, device and terminal
CN112991439B (en) Method, device, electronic device and medium for locating target object
CN108694389A (en) Safety verification method and electronic equipment based on front dual cameras
JP2018194985A (en) Image processing apparatus, image processing method and image processing program
WO2023005007A1 (en) Method and system for vr collision detection
US20240015264A1 (en) System for broadcasting volumetric videoconferences in 3d animated virtual environment with audio information, and procedure for operating said device
CN106991376A (en) With reference to the side face verification method and device and electronic installation of depth information

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21951579

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07/06/2024)

122 Ep: pct application non-entry in european phase

Ref document number: 21951579

Country of ref document: EP

Kind code of ref document: A1