
WO2021210725A1 - Apparatus and method for processing point cloud information - Google Patents

Apparatus and method for processing point cloud information

Info

Publication number
WO2021210725A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
point cloud
coordinate system
cloud information
additional images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2020/007969
Other languages
English (en)
Korean (ko)
Inventor
이형민
조규성
박재완
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Maxst Co Ltd
Original Assignee
Maxst Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Maxst Co Ltd filed Critical Maxst Co Ltd
Publication of WO2021210725A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • Disclosed embodiments relate to technology for processing point cloud information for a three-dimensional space.
  • The SfM algorithm consists of extracting feature points from images, matching the feature points between images, and reconstructing a three-dimensional point cloud by triangulating the matched feature points. Various types of SfM algorithms exist, differing in the details of each of these steps, as illustrated in the sketch below.
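  • As a rough illustration of the three steps above, the following Python sketch reconstructs a small point cloud from two overlapping photographs with OpenCV. The file names and the camera intrinsic matrix K are placeholder assumptions, and the recovered points live in an arbitrary coordinate system, which is precisely the limitation the disclosed embodiments address.

```python
import cv2
import numpy as np

# Two overlapping views of the scene (placeholder file names) and assumed
# camera intrinsics K; in practice K comes from calibration.
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# 1) Extract feature points (with descriptors) from each image.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2) Match feature points between the images.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 3) Recover the relative pose and triangulate matches into 3D points.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T  # N x 3 points, arbitrary coordinate system
```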
  • The disclosed embodiments process previously generated 3D point cloud information so that it can indicate locations in real space relative to the direction of gravity.
  • An apparatus for processing point cloud information according to an embodiment includes a point cloud information acquisition unit configured to acquire 3D point cloud information for a 3D space; an additional information acquisition unit configured to acquire one or more additional images obtained by photographing at least a portion of the 3D space, and direction information indicating the direction of gravity in the coordinate system in which the one or more additional images were captured; and a processing unit configured to transform one axis of the coordinate system of the 3D point cloud information to coincide with the direction of gravity based on the one or more additional images and the direction information, and to display the 3D point cloud information using the transformed coordinate system of the 3D point cloud information.
  • The additional information acquisition unit may obtain sensing information in which the direction of gravity is detected, and may obtain the direction information by reflecting the sensing information in the coordinate system in which the one or more additional images were captured.
  • The processing unit may extract a plurality of feature points from the one or more additional images, map each of the feature points to a point in the 3D point cloud information, calculate from the mapping result transformation relationship information between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information, and transform one axis of the coordinate system of the 3D point cloud information to coincide with the direction of gravity based on the transformation relationship information.
  • The transformation relationship information may include a rotation relationship between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information.
  • The rotation relationship may be expressed as a three-dimensional rotation matrix, a quaternion, Euler angles, an axis-angle representation, or in other forms.
  • A method for processing point cloud information according to an embodiment includes acquiring 3D point cloud information for a 3D space; acquiring one or more additional images obtained by photographing at least a portion of the 3D space, and direction information indicating the direction of gravity in the coordinate system in which the one or more additional images were captured; transforming one axis of the coordinate system of the 3D point cloud information to coincide with the direction of gravity based on the one or more additional images and the direction information; and displaying the 3D point cloud information using the transformed coordinate system of the 3D point cloud information.
  • The acquiring of the additional images and the direction information may include obtaining sensing information in which the direction of gravity is detected, and obtaining the direction information by reflecting the sensing information in the coordinate system in which the one or more additional images were captured.
  • The transforming may include extracting a plurality of feature points from the one or more additional images, mapping each of the feature points to a point in the 3D point cloud information, calculating from the mapping result transformation relationship information between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information, and transforming one axis of the coordinate system of the 3D point cloud information to coincide with the direction of gravity based on the transformation relationship information.
  • The transformation relationship information may include a rotation relationship between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information.
  • The rotation relationship may be expressed as a three-dimensional rotation matrix, a quaternion, Euler angles, an axis-angle representation, or in other forms.
  • When a computer program stored in a non-transitory computer-readable storage medium according to an embodiment is executed by a computing device having one or more processors, the program may include one or more instructions for acquiring 3D point cloud information for a 3D space; acquiring one or more additional images obtained by photographing at least a portion of the 3D space, and direction information indicating the direction of gravity in the coordinate system in which the one or more additional images were captured; transforming one axis of the coordinate system of the 3D point cloud information to coincide with the direction of gravity based on the one or more additional images and the direction information; and displaying the 3D point cloud information using the transformed coordinate system of the 3D point cloud information.
  • According to the disclosed embodiments, point cloud information for a three-dimensional space can be expressed in a coordinate system that uses the direction of gravity as one of its axes, so the point cloud information can be utilized scalably across various applications.
  • FIG. 1 is a block diagram illustrating an apparatus for processing point cloud information according to an embodiment
  • FIG. 2 is a flowchart illustrating a method of processing point cloud information according to an embodiment
  • FIG. 3 is a flowchart illustrating step 206 in more detail according to an embodiment.
  • FIG. 4 is a block diagram illustrating a computing environment including a computing device suitable for use in example embodiments.
  • An apparatus 100 for processing point cloud information includes a point cloud information acquisition unit 102, an additional information acquisition unit 104, and a processing unit 106.
  • the point cloud information acquisition unit 102 acquires 3D point cloud information for a 3D space.
  • the 'three-dimensional space' may mean an area having an arbitrary range in an outdoor or indoor environment.
  • '3D point cloud information' refers to information obtained by reconstructing a corresponding 3D space based on a 2D image obtained by photographing the above-described 3D space.
  • the 3D point cloud information may include a plurality of points corresponding to structures such as buildings, objects, and living things in the above-described 3D space and a descriptor corresponding to each point.
  • The descriptor may be a vector expressing the characteristics of the surroundings of each point in the three-dimensional space, as sketched below.
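  • For illustration only, such point cloud information might be held as parallel arrays, one coordinate row and one descriptor row per point; the descriptor width below is an assumption (it depends on the feature type, e.g. 128 floats for SIFT-style descriptors).

```python
import numpy as np

# Hypothetical in-memory layout for 3D point cloud information: row i of
# cloud_points holds the (x, y, z) of point i, and row i of cloud_descriptors
# holds the vector describing the surroundings of that point.
num_points = 10_000
cloud_points = np.zeros((num_points, 3), dtype=np.float32)
cloud_descriptors = np.zeros((num_points, 128), dtype=np.float32)  # width assumed
```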
  • The point cloud information acquisition unit 102 may acquire 3D point cloud information for the 3D space using a predetermined point cloud generation algorithm, for example the SfM algorithm described above. In another embodiment, the point cloud information acquisition unit 102 may acquire the 3D point cloud information by receiving 3D point cloud information computed by another computing device over a wired or wireless communication means.
  • The three-dimensional point cloud information acquired by the point cloud information acquisition unit 102 is expressed in a coordinate system with arbitrary orientation, not one aligned with the actual direction of gravity. That is, the coordinates of each point included in the 3D point cloud information have only relative meaning, and it is impossible to determine from those coordinates alone which position in real space each point indicates.
  • the additional information obtaining unit 104 obtains one or more additional images obtained by photographing at least a portion of the three-dimensional space, and direction information indicating a direction of gravity on a coordinate system in which the one or more additional images are photographed.
  • The 'coordinate system in which the additional image was captured' is determined by the position and inclination (orientation) of the image capturing means at the time of capture, and its origin is the position at which the image was captured.
  • The additional information acquisition unit 104 may obtain the additional images by capturing at least a portion of the above-described three-dimensional space with an image capturing means such as a camera, by downloading images previously uploaded to a web site, by extracting images through web crawling, or by obtaining images through a satellite map service provided by a web site (e.g., Google's Street View). That is, images obtained from the web may be used without directly photographing the three-dimensional space.
  • the additional information acquisition unit 104 may acquire the direction information by acquiring sensing information in which the direction of gravity is detected, and reflecting the sensing information in a coordinate system in which the one or more additional images are captured.
  • the 'sensing information' is information obtained by detecting the direction of gravity by using separate software or a separate sensor when an additional image is acquired, and may be a virtual coordinate system including information on the direction of gravity.
  • the sensing information may be generated through the method (1) or (2) below, but is not limited thereto, and the method of generating the sensing information may be changed according to software or a sensor.
  • (1) The floor is detected using software that detects floors in 3D space (e.g., Apple's ARKit, Google's ARCore). When the software then forms a coordinate system based on the detected floor, one axis of that coordinate system coincides with the direction of gravity.
  • (2) The direction of gravity is detected in the three-dimensional space using an accelerometer, gyroscope, or geomagnetic sensor built into the image capturing means. The sensor may then form a coordinate system in which the detected direction of gravity coincides with one axis, or an arbitrary coordinate system that contains a unit vector of the detected gravity direction.
  • The principle by which the additional information acquisition unit 104 reflects the sensing information in the coordinate system in which the additional image was captured is as follows.
  • The sensing information generated through method (1) or (2) is either a coordinate system in which one axis coincides with the direction of gravity, or an arbitrary coordinate system that contains a unit vector in the direction of gravity. Call this the 'sensed coordinate system'.
  • In the former case, the additional information acquisition unit 104 transforms the 'coordinate system in which the additional image was captured' so that one of its axes points in the same direction as the gravity-aligned axis of the 'sensed coordinate system'.
  • In the latter case, the additional information acquisition unit 104 transforms the 'coordinate system in which the additional image was captured' so that one of its axes points in the direction indicated by the unit vector, as sketched below.
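  • A minimal sketch of this reflection step, assuming the sensing information has been reduced to a gravity unit vector g_cam expressed in the coordinate system in which the additional image was captured, and assuming (arbitrarily) that the -y axis is the one that should coincide with gravity:

```python
import numpy as np

def align_axis_to_gravity(g_cam: np.ndarray) -> np.ndarray:
    """Return a rotation R with R @ axis_down == g_cam, i.e. a rotation that
    turns the frame's assumed 'down' axis onto the sensed gravity direction."""
    axis_down = np.array([0.0, -1.0, 0.0])  # which axis is 'down' is an assumption
    v = np.cross(axis_down, g_cam)
    c = float(np.dot(axis_down, g_cam))
    if np.isclose(c, -1.0):                 # vectors opposite: 180-degree turn
        return np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues-style alignment of axis_down onto g_cam.
    return np.eye(3) + vx + vx @ vx / (1.0 + c)
```

  Applying this rotation (or its inverse, depending on conventions) to the coordinate system in which the additional image was captured yields a frame with one axis coinciding with the sensed direction of gravity, which is what the direction information encodes.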
  • The processing unit 106 transforms one axis of the coordinate system of the three-dimensional point cloud information to coincide with the direction of gravity, based on the one or more additional images and the direction information obtained through the additional information acquisition unit 104.
  • the processing unit 106 may extract a plurality of feature points from the one or more additional images.
  • the processing unit 106 may extract an end point of a line segment, a corner of a polygon, or the like indicating the characteristics of the additional image as a feature point.
  • The processing unit 106 may use any one of various feature point extraction algorithms, such as scale-invariant feature transform (SIFT), speeded-up robust features (SURF), features from accelerated segment test (FAST), or oriented FAST and rotated BRIEF (ORB), but the present invention is not limited thereto.
  • The processing unit 106 may map each of the plurality of feature points extracted from the one or more additional images to a point in the 3D point cloud information. In an embodiment, the processing unit 106 may perform the mapping by matching the descriptor of each extracted feature point against the descriptor of each point in the 3D point cloud information, as sketched below.
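  • A sketch of this mapping step with OpenCV, assuming SIFT features (one of the algorithms named above) and assuming the cloud stores SIFT-style float descriptors so the two sides are directly comparable; cloud_points and cloud_descriptors are the hypothetical arrays from the earlier illustration:

```python
import cv2
import numpy as np

# Extract feature points from one additional image and match their
# descriptors against the descriptors stored with the point cloud.
image = cv2.imread("additional_view.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
keypoints, image_desc = sift.detectAndCompute(image, None)

matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = matcher.match(image_desc, cloud_descriptors)

# Each match pairs a 2D feature in the image with a 3D point in the cloud.
image_points = np.float32([keypoints[m.queryIdx].pt for m in matches])
object_points = np.float32([cloud_points[m.trainIdx] for m in matches])
```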
  • the processing unit 106 may calculate transformation relationship information between the coordinate system in which the one or more additional images are captured and the coordinate system of the 3D point cloud information from the mapping result.
  • The transformation relationship information may include relationship information about the coordinate axis directions of the coordinate system in which the one or more additional images were captured and the coordinate system of the three-dimensional point cloud information, and may further include relationship information about the positions of the origins of the two coordinate systems, but is not limited thereto.
  • the transformation relationship information may include a rotation relationship between a coordinate system in which the one or more additional images are captured and a coordinate system of 3D point cloud information.
  • The rotation relationship may be expressed as a three-dimensional rotation matrix, a quaternion, Euler angles, an axis-angle representation, or in other forms.
  • The 'three-dimensional rotation matrix' refers to a matrix that rotates a coordinate system in three-dimensional space about the origin according to its element values.
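  • As an aside, the representations named above all encode the same rotation relationship and can be converted into one another, as this small SciPy sketch illustrates (the particular rotation is arbitrary):

```python
from scipy.spatial.transform import Rotation

# One rotation relationship, a 90-degree turn about the z axis, given first
# as a 3x3 rotation matrix and then read out in the other forms.
R = Rotation.from_matrix([[0.0, -1.0, 0.0],
                          [1.0,  0.0, 0.0],
                          [0.0,  0.0, 1.0]])
print(R.as_quat())                      # quaternion (x, y, z, w)
print(R.as_euler("zyx", degrees=True))  # Euler angles
print(R.as_rotvec())                    # axis-angle (axis scaled by angle, rad)
```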
  • The processing unit 106 may use the mapping pairs generated as a result of the mapping, each pairing a feature point in the one or more additional images with a point in the three-dimensional point cloud information, to calculate a rotation matrix between the coordinate system in which the additional images were captured and the coordinate system of the 3D point cloud information.
  • the processing unit 106 may use various perspective-n-point (PnP) algorithms to calculate the transformation relation information from the mapping result.
  • Applicable PnP algorithms include the P3P algorithm and the efficient PnP (EPnP) algorithm, among others; any algorithm may be used as long as the transformation relationship information can be calculated from the mapping result, as in the sketch below.
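  • A sketch of this calculation with OpenCV's PnP solvers, reusing the hypothetical object_points, image_points, and intrinsics K from the earlier sketches; EPnP is chosen only as an example of an applicable algorithm, and the RANSAC wrapper guards against the incorrect pairs that descriptor matching typically produces:

```python
import cv2
import numpy as np

# Estimate the transformation relationship between the point cloud's
# coordinate system and the coordinate system in which the additional image
# was captured, from the 2D-3D mapping pairs.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, None,  # None: no lens distortion assumed
    flags=cv2.SOLVEPNP_EPNP,
)
R_cloud_to_cam, _ = cv2.Rodrigues(rvec)    # the rotation relationship, 3x3
```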
  • the processing unit 106 may transform one axis of the coordinate system of the 3D point cloud information to match the direction of gravity based on the transformation relationship information.
  • the processing unit 106 may rotate the coordinate system of the 3D point cloud information based on the rotation relationship so that one axis of the coordinate system of the 3D point cloud information coincides with the direction of gravity of the coordinate system in which the additional image is captured.
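  • Putting the pieces together, a sketch of this final rotation under the same assumptions as the earlier snippets (R_cloud_to_cam from the PnP sketch, align_axis_to_gravity and g_cam from the sensing sketch); the exact composition depends on the axis and handedness conventions chosen:

```python
import numpy as np

# Map cloud coordinates into the capture frame, then into a frame whose
# assumed 'down' axis coincides with the sensed direction of gravity.
R_gravity = align_axis_to_gravity(g_cam)   # 'down' axis -> g_cam
R_align = R_gravity.T @ R_cloud_to_cam     # cloud -> camera -> gravity-aligned
gravity_aligned_points = (R_align @ cloud_points.T).T
```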
  • The processing unit 106 displays the 3D point cloud information using the transformed coordinate system, so that it can be determined where in real space the coordinates of each point in the 3D point cloud information lie.
  • Accordingly, the 3D point cloud information can be readily utilized in industrial fields such as virtual reality and autonomous driving.
  • FIG. 2 is a flowchart 200 for explaining a method of processing point cloud information according to an embodiment.
  • the method illustrated in FIG. 2 may be performed, for example, by the above-described point cloud information processing apparatus 100 .
  • In step 202, the point cloud information acquisition unit 102 acquires 3D point cloud information for a 3D space.
  • In step 204, the additional information acquisition unit 104 acquires one or more additional images obtained by photographing at least a portion of the three-dimensional space, and direction information indicating the direction of gravity in the coordinate system in which the one or more additional images were captured.
  • In step 206, the processing unit 106 transforms one axis of the coordinate system of the three-dimensional point cloud information to coincide with the direction of gravity, based on the additional images and direction information obtained in step 204.
  • In step 208, the processing unit 106 displays the 3D point cloud information using the transformed coordinate system.
  • Although the method is described above as a series of steps 202 to 208, at least some of the steps may be performed in a different order, combined with other steps, omitted, or divided into sub-steps, and one or more steps not shown may be added.
  • FIG. 3 is a flowchart 300 for explaining step 206 in more detail according to an embodiment.
  • the method shown in FIG. 3 may be performed, for example, by the above-described processing unit 106, but is not necessarily limited thereto.
  • In step 302, the processing unit 106 may extract a plurality of feature points from the one or more additional images acquired by the additional information acquisition unit 104.
  • In step 304, the processing unit 106 may map each of the extracted feature points to a point in the 3D point cloud information.
  • In step 306, the processing unit 106 may calculate transformation relationship information between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information from the mapping result.
  • In step 308, the processing unit 106 may transform one axis of the coordinate system of the 3D point cloud information to coincide with the direction of gravity based on the transformation relationship information.
  • Although the method is described above as a series of steps 302 to 308, at least some of the steps may be performed in a different order, combined with other steps, omitted, or divided into sub-steps, and one or more steps not shown may be added.
  • Each component may have functions and capabilities in addition to those described below, and additional components beyond those described below may be included.
  • The illustrated computing environment 10 includes a computing device 12.
  • The computing device 12 may be the point cloud information processing apparatus 100.
  • The computing device 12 includes at least one processor 14, a computer-readable storage medium 16, and a communication bus 18.
  • the processor 14 may cause the computing device 12 to operate in accordance with the exemplary embodiments discussed above.
  • the processor 14 may execute one or more programs stored in the computer-readable storage medium 16 .
  • The one or more programs may include one or more computer-executable instructions that, when executed by the processor 14, configure the computing device 12 to perform operations in accordance with the exemplary embodiments.
  • Computer-readable storage medium 16 is configured to store computer-executable instructions or program code, program data, and/or other suitable form of information.
  • the program 20 stored in the computer readable storage medium 16 includes a set of instructions executable by the processor 14 .
  • The computer-readable storage medium 16 may be memory (volatile memory such as random access memory, non-volatile memory, or a suitable combination thereof), one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, other forms of storage media that can be accessed by the computing device 12 and can store the desired information, or a suitable combination thereof.
  • The communication bus 18 interconnects various other components of the computing device 12, including the processor 14 and the computer-readable storage medium 16.
  • Computing device 12 may also include one or more input/output interfaces 22 and one or more network communication interfaces 26 that provide interfaces for one or more input/output devices 24 .
  • the input/output interface 22 and the network communication interface 26 are coupled to the communication bus 18 .
  • Input/output device 24 may be coupled to other components of computing device 12 via input/output interface 22 .
  • Exemplary input/output devices 24 may include input devices such as a pointing device (a mouse, trackpad, or the like), a keyboard, a touch input device (a touchpad, touchscreen, or the like), a voice or sound input device, various types of sensor devices, and/or imaging devices, and output devices such as display devices, printers, speakers, and/or network cards.
  • The exemplary input/output device 24 may be included in the computing device 12 as a component of the computing device 12, or may be connected to the computing device 12 as a separate device distinct from it.
  • an embodiment of the present invention may include a program for performing the methods described in this specification on a computer, and a computer-readable recording medium including the program.
  • the computer-readable recording medium may include program instructions, local data files, local data structures, etc. alone or in combination.
  • the medium may be specially designed and configured for the present invention, or may be commonly used in the field of computer software.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape, optical recording media such as CD-ROMs and DVDs, and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of the program may include high-level language codes that can be executed by a computer using an interpreter or the like as well as machine language codes such as those generated by a compiler.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to an apparatus and method for processing point cloud information. According to an embodiment, an apparatus for processing point cloud information comprises: a point cloud information acquisition unit for acquiring 3D point cloud information about a 3D space; an additional information acquisition unit for acquiring one or more additional images obtained by photographing at least a portion of the 3D space, and direction information indicating the direction of gravity in the coordinate system in which the one or more additional images were captured; and a processing unit for transforming, on the basis of the one or more additional images and the direction information, the coordinate system of the 3D point cloud information such that one axis thereof coincides with the direction of gravity, and for displaying the 3D point cloud information using the transformed coordinate system of the 3D point cloud information.
PCT/KR2020/007969 2020-04-14 2020-06-19 Apparatus and method for processing point cloud information Ceased WO2021210725A1

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200045137A KR102158316B1 2020-04-14 2020-04-14 Apparatus and method for processing point cloud information
KR10-2020-0045137 2020-04-14

Publications (1)

Publication Number Publication Date
WO2021210725A1 2021-10-21

Family

ID=72707886

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/007969 2020-04-14 2020-06-19 Apparatus and method for processing point cloud information Ceased WO2021210725A1

Country Status (2)

Country Link
KR (1) KR102158316B1 (fr)
WO (1) WO2021210725A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102339625B1 * 2021-01-29 2021-12-16 Maxst Co., Ltd. Apparatus and method for updating a spatial map

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140031345A * 2012-01-13 2014-03-12 SoftKinetic Software Automatic scene calibration
JP2014186565A * 2013-03-25 2014-10-02 Geo Technical Laboratory Co Ltd Three-dimensional point cloud analysis method
JP2015125685A * 2013-12-27 2015-07-06 KDDI Corp. Spatial structure estimation device, spatial structure estimation method, and spatial structure estimation program
KR20150082358A * 2012-11-02 2015-07-15 Qualcomm Incorporated Determining a reference coordinate system
JP2016186488A * 2011-04-13 2016-10-27 Topcon Corp. Three-dimensional data processing device, three-dimensional data processing system, three-dimensional data processing method, and program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102477031B1 2018-04-20 2022-12-14 Samsung Electronics Co., Ltd. Method and apparatus for processing three-dimensional data

Also Published As

Publication number Publication date
KR102158316B1 (ko) 2020-09-21

Similar Documents

Publication Publication Date Title
WO2022050473A1 Apparatus and method for estimating camera pose
WO2019164379A1 Method and system for facial recognition
KR20210097145A Systems and methods for providing digital assets in artificial environments through loosely coupled relocalization and asset management services
WO2015174729A1 Augmented reality providing method and system for providing spatial information, and recording medium and file distribution system
WO2010027193A2 Spatially correlated display of three-dimensional content on display components having arbitrary positions
WO2021187793A1 Electronic device for detecting a 3D object based on fusion of a camera and a radar sensor, and operating method thereof
WO2011034308A2 Method and system for matching panoramic images using a graph structure, and computer-readable recording medium
US10672191B1 Technologies for anchoring computer generated objects within augmented reality
WO2011040710A2 Method, terminal, and computer-readable recording medium for performing a visual search based on the movement or position of the terminal
WO2020017890A1 System and method for 3D association of detected objects
WO2021112382A1 Apparatus and method for dynamic multi-camera rectification using depth cameras
WO2019229301A1 Solution for generating a virtual reality representation
CN111612842 Method and apparatus for generating a pose estimation model
WO2021167189A1 Method and device for generating fusion information based on multi-sensor data for 360-degree detection and recognition of surrounding objects
WO2021025364A1 Method and system for using lidar and a camera to improve depth information about an image feature point
WO2019221340A1 Method and system for calculating spatial coordinates of a region of interest, and non-transitory computer-readable recording medium
US10600202B2 Information processing device and method, and program
WO2018026094A1 Method and system for automatically generating an orthophoto texture using DEM data
WO2021221334A1 Device for generating a color palette formed on the basis of GPS information and a lidar signal, and control method therefor
WO2011034305A2 Method and system for hierarchical matching of building images, and computer-readable recording medium
WO2011078596A2 Method, system, and computer-readable recording medium for adaptively performing image matching according to conditions
CN112102479 Augmented reality method and apparatus based on model alignment, storage medium, and electronic device
WO2021125578A1 Method and system for position recognition based on visual information processing
WO2021206200A1 Device and method for processing point cloud information
WO2021210725A1 Apparatus and method for processing point cloud information

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20931000

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.03.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20931000

Country of ref document: EP

Kind code of ref document: A1