WO2021210725A1 - Apparatus and method for processing point cloud information - Google Patents

Apparatus and method for processing point cloud information

Info

Publication number
WO2021210725A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
point cloud
coordinate system
cloud information
additional images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2020/007969
Other languages
French (fr)
Korean (ko)
Inventor
이형민
조규성
박재완
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Maxst Co Ltd
Original Assignee
Maxst Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Maxst Co Ltd
Publication of WO2021210725A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/10: Geometric effects
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

An apparatus and a method for processing point cloud information are disclosed. An apparatus for processing point cloud information, according to one embodiment, comprises: a point cloud information acquisition unit for acquiring 3D point cloud information about a 3D space; an additional information acquisition unit for acquiring at least one additional image obtained by photographing at least a part of the 3D space, and direction information indicating the direction of gravity in a coordinate system in which the at least one additional image is captured; and a processing unit for transforming, on the basis of the at least one additional image and the direction information, the coordinate system of the 3D point cloud information so that one axis thereof coincides with the direction of gravity, and displaying the 3D point cloud information by using the transformed coordinate system of the 3D point cloud information.

Description

Apparatus and method for processing point cloud information

The disclosed embodiments relate to techniques for processing point cloud information for a three-dimensional space.

The Structure-from-Motion (SfM) algorithm is one method for reconstructing the 3D structure of an area, such as a large indoor or outdoor space, from 2D images taken of that area. The SfM algorithm consists of extracting feature points from the images, matching feature points across images, and reconstructing a 3D point cloud by triangulating the matched feature points. Various SfM variants exist, differing in the details of each step.
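The SfM steps listed above map naturally onto standard computer vision library calls. Below is a minimal two-view sketch in Python with OpenCV, shown only to make the three steps concrete; the image file names and the intrinsic matrix K are placeholder assumptions, and a real SfM pipeline would process many views and refine the result with bundle adjustment.

```python
import cv2
import numpy as np

# Two-view sketch of the SfM steps: extract features, match them,
# and triangulate the matches into a 3D point cloud.
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[1000.0, 0, 640.0], [0, 1000.0, 360.0], [0, 0, 1.0]])  # assumed intrinsics

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)      # step 1: feature extraction
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)               # step 2: feature matching
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Step 3: recover the relative pose and triangulate the matched points.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T                  # homogeneous -> (N, 3) points
```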

However, the coordinates of the points in a 3D point cloud reconstructed by the SfM algorithm described above only convey the points' positions relative to one another, so they cannot clearly indicate which locations the points correspond to in real space.

The disclosed embodiments process previously generated 3D point cloud information so that it can indicate locations in real space according to the direction of gravity.

An apparatus for processing point cloud information according to an embodiment includes: a point cloud information acquisition unit that acquires 3D point cloud information for a 3D space; an additional information acquisition unit that acquires one or more additional images capturing at least part of the 3D space, together with direction information indicating the direction of gravity in the coordinate system in which the one or more additional images were captured; and a processing unit that, based on the one or more additional images and the direction information, transforms the coordinate system of the 3D point cloud information so that one of its axes coincides with the direction of gravity, and expresses the 3D point cloud information using the transformed coordinate system.

The additional information acquisition unit may acquire sensing information in which the direction of gravity has been detected, and may obtain the direction information by applying the sensing information to the coordinate system in which the one or more additional images were captured.

The processing unit may extract a plurality of feature points from the one or more additional images, map each of the feature points to points in the 3D point cloud information, compute from the mapping result transformation relationship information between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information, and transform, based on the transformation relationship information, the coordinate system of the 3D point cloud information so that one of its axes coincides with the direction of gravity.

The transformation relationship information may include a rotation relationship between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information. The rotation relationship may be expressed as a 3D rotation matrix, a quaternion, Euler angles, an axis-angle representation, or by other means.

A method for processing point cloud information according to an embodiment includes: acquiring 3D point cloud information for a 3D space; acquiring one or more additional images capturing at least part of the 3D space, together with direction information indicating the direction of gravity in the coordinate system in which the one or more additional images were captured; transforming, based on the one or more additional images and the direction information, the coordinate system of the 3D point cloud information so that one of its axes coincides with the direction of gravity; and expressing the 3D point cloud information using the transformed coordinate system.

Acquiring the additional images and the direction information may include acquiring sensing information in which the direction of gravity has been detected, and obtaining the direction information by applying the sensing information to the coordinate system in which the one or more additional images were captured.

The transforming may include: extracting a plurality of feature points from the one or more additional images; mapping each of the feature points to points in the 3D point cloud information; computing, from the mapping result, transformation relationship information between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information; and transforming, based on the transformation relationship information, the coordinate system of the 3D point cloud information so that one of its axes coincides with the direction of gravity.

The transformation relationship information may include a rotation relationship between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information. The rotation relationship may be expressed as a 3D rotation matrix, a quaternion, Euler angles, an axis-angle representation, or by other means.

A computer program stored in a non-transitory computer-readable storage medium according to an embodiment may include one or more instructions that, when the program is executed by a computing device having one or more processors, cause the device to: acquire 3D point cloud information for a 3D space; acquire one or more additional images capturing at least part of the 3D space, together with direction information indicating the direction of gravity in the coordinate system in which the one or more additional images were captured; transform, based on the one or more additional images and the direction information, the coordinate system of the 3D point cloud information so that one of its axes coincides with the direction of gravity; and express the 3D point cloud information using the transformed coordinate system.

According to the disclosed embodiments, point cloud information for the 3D space of a target area is processed using images captured in that area together with gravity-direction information in the coordinate systems in which those images were captured. The point cloud information can therefore be expressed in a coordinate system that takes the direction of gravity as one of its axes, which allows it to be used scalably in a variety of applications.

FIG. 1 is a block diagram illustrating an apparatus for processing point cloud information according to an embodiment.

FIG. 2 is a flowchart illustrating a method of processing point cloud information according to an embodiment.

FIG. 3 is a flowchart illustrating step 206 in more detail according to an embodiment.

FIG. 4 is a block diagram illustrating a computing environment including a computing device suitable for use in example embodiments.

Hereinafter, specific embodiments are described with reference to the drawings. The following detailed description is provided to aid a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, it is merely illustrative, and the disclosed embodiments are not limited to it.

In describing the embodiments, detailed descriptions of known related technologies are omitted where it is judged that they could unnecessarily obscure the subject matter of the disclosed embodiments. The terms used below are defined in consideration of their functions in the disclosed embodiments and may vary according to the intentions or customs of users and operators; their definitions should therefore be based on the content of this specification as a whole. The terminology used in the detailed description is intended only to describe the embodiments and should in no way be limiting. Unless clearly used otherwise, singular expressions include the plural. In this description, expressions such as 'comprising' or 'including' are intended to indicate certain features, numbers, steps, operations, elements, or parts or combinations thereof, and should not be interpreted as excluding the presence or possibility of one or more features, numbers, steps, operations, elements, or parts or combinations thereof other than those described.

FIG. 1 is a block diagram illustrating an apparatus 100 for processing point cloud information according to an embodiment. Referring to FIG. 1, the apparatus 100 includes a point cloud information acquisition unit 102, an additional information acquisition unit 104, and a processing unit 106.

The point cloud information acquisition unit 102 acquires 3D point cloud information for a 3D space.

In the disclosed embodiments, a '3D space' may mean an area of arbitrary extent in an outdoor or indoor environment. '3D point cloud information' means information that reconstructs such a 3D space from 2D images taken of it.

The 3D point cloud information may include a plurality of points corresponding to structures such as buildings, objects, and living things in the 3D space, together with a descriptor for each point. Specifically, a descriptor may be a vector expressing the characteristics of the neighborhood of each point in the 3D space.

In one embodiment, the point cloud information acquisition unit 102 may obtain the 3D point cloud information for the 3D space using a point cloud generation algorithm, for example the SfM algorithm described above. In another embodiment, the point cloud information acquisition unit 102 may obtain the 3D point cloud information by receiving point cloud information computed by another computing device over wired or wireless communication.

The 3D point cloud information acquired by the point cloud information acquisition unit 102 is expressed in a coordinate system with an arbitrary orientation that does not coincide with the actual direction of gravity. That is, the coordinates of the points in the 3D point cloud information are meaningful only relative to one another, and it is impossible to determine from those coordinates which positions the points occupy in real space.

The additional information acquisition unit 104 acquires one or more additional images capturing at least part of the 3D space, together with direction information indicating the direction of gravity in the coordinate system in which the one or more additional images were captured.

Here, the 'coordinate system in which an additional image was captured' may be determined by the position and tilt (orientation) of the imaging device at the moment of capture, and its origin may correspond to the position from which the image was taken.

In one embodiment, the additional information acquisition unit 104 may obtain additional images by photographing at least part of the 3D space with an imaging device such as a camera, by downloading images previously uploaded to a website, by extracting images through web crawling, or by obtaining images through a satellite map service provided by a website (for example, Google's Street View). That is, images obtained from the web can be used without directly photographing the 3D space.

In one embodiment, the additional information acquisition unit 104 may acquire sensing information in which the direction of gravity has been detected, and obtain the direction information by applying the sensing information to the coordinate system in which the one or more additional images were captured.

Here, 'sensing information' is information on the direction of gravity detected using separate software or a separate sensor when an additional image is acquired, and may take the form of a virtual coordinate system containing information on the direction of gravity. For example, the sensing information may be generated by method (1) or (2) below, but it is not necessarily limited to these; the way the sensing information is generated may vary with the software or sensor used.

(1) The floor is detected using software that detects floors in 3D space (for example, Apple's ARKit or Google's ARCore). When the software then forms a coordinate system based on the detected floor, one axis of that coordinate system coincides with the direction of gravity.

(2) The direction of gravity is detected in 3D space using an accelerometer, gyroscope, or geomagnetic sensor built into the imaging device. The sensor may then form a coordinate system in which one axis coincides with the detected direction of gravity, or form an arbitrary coordinate system whose coordinate space contains a unit vector in the detected direction of gravity.
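The document does not prescribe how raw sensor readings are turned into a gravity direction. As one hedged illustration of approach (2), the sketch below averages accelerometer samples taken while the device is roughly stationary; the function name and input format are assumptions made for illustration.

```python
import numpy as np

def gravity_direction(accel_samples):
    """Estimate a gravity unit vector from accelerometer readings.

    accel_samples: (N, 3) array of readings (m/s^2) in the device frame,
    taken while the device is roughly stationary. A stationary accelerometer
    measures the reaction to gravity, so gravity points opposite to the
    mean measured acceleration.
    """
    mean_accel = np.asarray(accel_samples, dtype=float).mean(axis=0)
    g = -mean_accel
    return g / np.linalg.norm(g)

# Example: a device lying flat measures roughly +9.81 m/s^2 on its z axis,
# so the estimated gravity direction is close to (0, 0, -1).
print(gravity_direction([[0.0, 0.0, 9.81], [0.02, -0.01, 9.79]]))
```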

Given the descriptions of the 'coordinate system in which an additional image was captured' and the 'sensing information' above, the principle by which the additional information acquisition unit 104 applies the sensing information to the coordinate system in which an additional image was captured is as follows.

The sensing information generated by method (1) or (2) is either a coordinate system in which one axis coincides with the direction of gravity, or an arbitrary coordinate system whose coordinate space contains a unit vector in the direction of gravity. Call this coordinate system the 'sensed coordinate system'.

If the sensed coordinate system is a coordinate system in which one axis coincides with the direction of gravity, the additional information acquisition unit 104 transforms the coordinate system in which the additional image was captured so that one of its axes points in the same direction as the gravity-aligned axis of the sensed coordinate system.

If, on the other hand, the sensed coordinate system is an arbitrary coordinate system containing a unit vector in the direction of gravity, the additional information acquisition unit 104 transforms the coordinate system in which the additional image was captured so that one of its axes points in the direction indicated by that unit vector.
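The text leaves this transform unspecified. One common construction, sketched below, computes the rotation that maps a chosen coordinate axis onto a sensed gravity unit vector via Rodrigues' formula; the helper name rotation_aligning is hypothetical.

```python
import numpy as np

def rotation_aligning(axis, target):
    """Rotation matrix that rotates unit vector `axis` onto unit vector `target`.

    A sketch of one way to align a chosen coordinate axis (e.g. a camera
    frame's z-axis) with a sensed gravity unit vector.
    """
    a = axis / np.linalg.norm(axis)
    b = target / np.linalg.norm(target)
    v = np.cross(a, b)          # rotation axis (unnormalized)
    c = float(np.dot(a, b))     # cosine of the rotation angle
    if np.isclose(c, -1.0):     # opposite vectors: rotate 180 degrees
        ortho = np.eye(3)[np.argmin(np.abs(a))]  # any direction not parallel to a
        v = np.cross(a, ortho)
        v /= np.linalg.norm(v)
        return 2.0 * np.outer(v, v) - np.eye(3)
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])
    # Rodrigues' formula with sin/cos folded into v and c: (1 - c)/|v|^2 = 1/(1 + c)
    return np.eye(3) + K + K @ K / (1.0 + c)
```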

The processing unit 106 transforms the coordinate system of the 3D point cloud information so that one of its axes coincides with the direction of gravity, based on the one or more additional images and the direction information obtained through the additional information acquisition unit 104.

In one embodiment, the processing unit 106 may extract a plurality of feature points from the one or more additional images.

Specifically, the processing unit 106 may extract, as feature points, endpoints of line segments, corners of polygons, and other elements that characterize an additional image. To extract feature points, the processing unit 106 may use any feature extraction algorithm such as SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), FAST (Features from Accelerated Segment Test), or ORB (Oriented FAST and Rotated BRIEF), although it is not necessarily limited to these.
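As a concrete illustration of one of the listed options, the following sketch extracts ORB feature points with OpenCV; the file name is a placeholder, and any of the other named detectors could be substituted.

```python
import cv2

# Load one additional image (placeholder file name) and extract ORB features.
image = cv2.imread("additional_image.jpg", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=2000)
keypoints, descriptors = orb.detectAndCompute(image, None)
# keypoints hold 2D feature locations (line endpoints, corners, ...);
# descriptors hold one binary vector per keypoint.
```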

The processing unit 106 may also map each of the feature points extracted from the one or more additional images to points in the 3D point cloud information. In one embodiment, the processing unit 106 may perform this mapping by matching the descriptor of each extracted feature point against the descriptors of the points in the 3D point cloud information.
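A sketch of how such descriptor matching might look with OpenCV's brute-force matcher, assuming the point cloud's points carry ORB-style binary descriptors in hypothetical arrays cloud_descriptors and cloud_points; keypoints and descriptors come from the extraction sketch above.

```python
import cv2
import numpy as np

# cloud_descriptors (M, 32, uint8) and cloud_points (M, 3) are hypothetical
# arrays holding the point cloud's per-point descriptors and coordinates.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(descriptors, cloud_descriptors, k=2)

image_points, object_points = [], []
for pair in matches:
    if len(pair) < 2:
        continue
    m, n = pair
    if m.distance < 0.75 * n.distance:   # ratio test: keep only distinctive matches
        image_points.append(keypoints[m.queryIdx].pt)    # 2D pixel coordinates
        object_points.append(cloud_points[m.trainIdx])   # mapped 3D point
image_points = np.asarray(image_points, dtype=np.float32)
object_points = np.asarray(object_points, dtype=np.float32)
```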

Next, the processing unit 106 may compute, from the mapping result, transformation relationship information between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information.

Specifically, the transformation relationship information may include information on the relation between the coordinate-axis directions of the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information, but it is not necessarily limited to this: it may further include information on the relation between the positions of the origins of the two coordinate systems.

In more detail, the transformation relationship information may include a rotation relationship between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information.

Here, the rotation relationship may be expressed as a 3D rotation matrix, a quaternion, Euler angles, an axis-angle representation, or by other means. In particular, a '3D rotation matrix' means a matrix that, based on its element values, rotates a coordinate system about the origin in 3D space.
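The equivalence of these representations can be illustrated with SciPy's Rotation class; the sketch below uses arbitrary example angles and is not tied to any particular data.

```python
from scipy.spatial.transform import Rotation
import numpy as np

# One rotation, four of the representations the text lists.
r = Rotation.from_euler("xyz", [10, 20, 30], degrees=True)  # Euler angles
R = r.as_matrix()      # 3x3 rotation matrix
q = r.as_quat()        # quaternion (x, y, z, w)
rv = r.as_rotvec()     # axis-angle as a rotation vector (axis * angle)

# Applying the matrix rotates a coordinate vector about the origin.
p = np.array([1.0, 0.0, 0.0])
p_rotated = R @ p
```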

For example, when the rotation relationship is a 3D rotation matrix, the processing unit 106 may compute the rotation matrix between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information using the mapping pairs, produced by the mapping, between feature points in the one or more additional images and points in the 3D point cloud information.

In one embodiment, the processing unit 106 may use any of various Perspective-n-Point (PnP) algorithms to compute the transformation relationship information from the mapping result. For example, applicable PnP algorithms include the P3P algorithm and the Efficient PnP (EPnP) algorithm, but the choice is not limited to these; any algorithm may be used as long as the transformation relationship information can be computed from the mapping result.
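Continuing the earlier sketches, the rotation between the two coordinate systems could be recovered with OpenCV's RANSAC-wrapped PnP solver, here with the EPnP flag the text mentions; the intrinsic parameters are placeholders.

```python
import cv2
import numpy as np

fx = fy = 1000.0          # placeholder focal lengths (pixels)
cx, cy = 640.0, 360.0     # placeholder principal point
camera_matrix = np.array([[fx, 0, cx],
                          [0, fy, cy],
                          [0,  0,  1]], dtype=np.float64)

# object_points / image_points are the 2D-3D mapping pairs built above.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, camera_matrix, None,
    flags=cv2.SOLVEPNP_EPNP)   # EPnP, one of the algorithms the text names
R, _ = cv2.Rodrigues(rvec)     # axis-angle vector -> 3x3 rotation matrix
# R and tvec express how point-cloud coordinates map into the camera frame.
```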

Next, based on the transformation relationship information, the processing unit 106 may transform the coordinate system of the 3D point cloud information so that one of its axes coincides with the direction of gravity.

In one embodiment, the processing unit 106 may rotate the coordinate system of the 3D point cloud information based on the rotation relationship so that one axis of that coordinate system coincides with the direction of gravity of the coordinate system in which the additional images were captured.
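Putting the pieces together, a hedged sketch of this final alignment step: the gravity direction sensed in the camera frame is carried into the point cloud's frame through the PnP rotation R, and the whole cloud is rotated so that one axis follows gravity. The frame conventions and the choice of -z as the gravity axis are assumptions.

```python
import numpy as np

# Gravity direction expressed in the camera (additional image) frame,
# e.g. taken from the direction information; the value is an assumption.
gravity_in_camera = np.array([0.0, -1.0, 0.0])

# Carry the gravity direction into the point cloud's frame using the PnP
# rotation R (X_cam = R @ X_cloud, so directions map back through R.T).
gravity_in_cloud = R.T @ gravity_in_camera

# Rotate the whole cloud so its -z axis coincides with gravity;
# rotation_aligning is the helper sketched earlier, cloud_points is (M, 3).
align = rotation_aligning(gravity_in_cloud, np.array([0.0, 0.0, -1.0]))
aligned_points = cloud_points @ align.T
```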

상기 변환을 수행한 후, 가공부(106)는 상기 변환된 3차원 점군 정보의 좌표계를 이용하여 3차원 점군 정보를 나타냄으로써 3차원 점군 정보 내의 각 점의 좌표들이 실제 공간 상에서 어느 위치를 나타내는 것인지 명확히 할 수 있다. 이에 따라 자연스럽게, 3차원 점군 정보가 가상현실, 자율주행 등의 실제 산업 분야에서 용이하게 활용될 수 있게 될 것이다.After performing the transformation, the processing unit 106 displays the 3D point cloud information by using the coordinate system of the transformed 3D point cloud information to determine where the coordinates of each point in the 3D point cloud information represent in real space. can be made clear Accordingly, naturally, 3D point cloud information will be able to be easily utilized in actual industrial fields such as virtual reality and autonomous driving.

도 2는 일 실시예에 따른 점군 정보 가공 방법을 설명하기 위한 흐름도(200)이다. 도 2에 도시된 방법은 예를 들어, 상술한 점군 정보 가공 장치(100)에 의해 수행될 수 있다.2 is a flowchart 200 for explaining a method of processing point cloud information according to an embodiment. The method illustrated in FIG. 2 may be performed, for example, by the above-described point cloud information processing apparatus 100 .

도 2를 참조하면 우선, 단계 202에서, 점군 정보 획득부(102)는 3차원 공간에 대한 3차원 점군 정보를 획득한다.Referring to FIG. 2 , first, in step 202 , the point cloud information acquisition unit 102 acquires 3D point cloud information for a 3D space.

이후 단계 204에서, 추가 정보 획득부(104)는 3차원 공간의 적어도 일부를 촬영한 하나 이상의 추가 이미지 및 상기 하나 이상의 추가 이미지가 촬영된 좌표계 상에서 중력 방향을 나타내는 방향 정보를 획득한다.Subsequently, in step 204, the additional information obtaining unit 104 obtains one or more additional images obtained by photographing at least a portion of the three-dimensional space, and direction information indicating the direction of gravity on the coordinate system in which the one or more additional images are photographed.

이후 단계 206에서, 가공부(106)는 단계 204를 통해 획득된 추가 이미지 및 센서 정보에 기초하여 3차원 점군 정보의 좌표계의 한 축이 중력 방향과 일치되도록 변환한다.Then, in step 206 , the processing unit 106 converts one axis of the coordinate system of the three-dimensional point cloud information to match the direction of gravity based on the additional image and sensor information obtained in step 204 .

이후 단계 208에서, 가공부(106)는 변환된 3차원 점군 정보의 좌표계를 이용하여 3차원 점군 정보를 나타낸다.Thereafter, in step 208, the processing unit 106 displays the 3D point cloud information by using the coordinate system of the transformed 3D point cloud information.

Although the illustrated flowchart divides the method into a plurality of steps 202 to 208, at least some of the steps may be performed in a different order, combined with other steps and performed together, omitted, divided into sub-steps, or supplemented by one or more steps not shown.

FIG. 3 is a flowchart 300 illustrating step 206 in more detail according to an embodiment. The method illustrated in FIG. 3 may be performed, for example, by the above-described processing unit 106, but is not necessarily limited thereto.

Referring to FIG. 3, first, in step 302, the processing unit 106 may extract a plurality of feature points from the one or more additional images acquired by the additional information acquisition unit 104.

Subsequently, in step 304, the processing unit 106 may map each of the extracted feature points to points in the 3D point cloud information.
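An illustrative sketch of steps 302 and 304 follows, assuming OpenCV ORB features and that the reconstruction retained a binary descriptor for each 3D point; cloud_descriptors is an assumed input that this disclosure does not spell out.

    import cv2
    import numpy as np

    image = np.random.randint(0, 256, (480, 640), dtype=np.uint8)   # placeholder
    cloud_points = np.random.rand(500, 3).astype(np.float32)        # placeholder
    cloud_descriptors = np.random.randint(0, 256, (500, 32), dtype=np.uint8)

    orb = cv2.ORB_create(nfeatures=1000)                            # step 302
    keypoints, descriptors = orb.detectAndCompute(image, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    if descriptors is not None:                                     # step 304
        matches = matcher.match(descriptors, cloud_descriptors)
        # 2D-3D correspondences that feed the PnP stage of step 306.
        image_points = np.float32([keypoints[m.queryIdx].pt for m in matches])
        object_points = np.float32([cloud_points[m.trainIdx] for m in matches])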

Subsequently, in step 306, the processing unit 106 may calculate, from the mapping result, transformation relationship information between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information.

Subsequently, in step 308, the processing unit 106 may transform the coordinate system of the 3D point cloud information, based on the transformation relationship information, so that one of its axes coincides with the direction of gravity.

Although the illustrated flowchart divides the method into a plurality of steps 302 to 308, at least some of the steps may be performed in a different order, combined with other steps and performed together, omitted, divided into sub-steps, or supplemented by one or more steps not shown.

FIG. 4 is a block diagram illustrating a computing environment 10 including a computing device suitable for use in exemplary embodiments. In the illustrated embodiment, each component may have functions and capabilities different from those described below, and additional components beyond those described below may be included.

The illustrated computing environment 10 includes a computing device 12. In an embodiment, the computing device 12 may be the point cloud information processing apparatus 100.

The computing device 12 includes at least one processor 14, a computer-readable storage medium 16, and a communication bus 18. The processor 14 may cause the computing device 12 to operate according to the exemplary embodiments mentioned above. For example, the processor 14 may execute one or more programs stored in the computer-readable storage medium 16. The one or more programs may include one or more computer-executable instructions, which, when executed by the processor 14, may be configured to cause the computing device 12 to perform operations according to the exemplary embodiments.

The computer-readable storage medium 16 is configured to store computer-executable instructions or program code, program data, and/or other suitable forms of information. The program 20 stored in the computer-readable storage medium 16 includes a set of instructions executable by the processor 14. In an embodiment, the computer-readable storage medium 16 may be a memory (volatile memory such as random access memory, non-volatile memory, or a suitable combination thereof), one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, another form of storage medium that can be accessed by the computing device 12 and store desired information, or a suitable combination thereof.

The communication bus 18 interconnects various other components of the computing device 12, including the processor 14 and the computer-readable storage medium 16.

The computing device 12 may also include one or more input/output interfaces 22, which provide interfaces for one or more input/output devices 24, and one or more network communication interfaces 26. The input/output interface 22 and the network communication interface 26 are connected to the communication bus 18. The input/output device 24 may be connected to other components of the computing device 12 via the input/output interface 22. Exemplary input/output devices 24 may include input devices such as a pointing device (a mouse, a trackpad, or the like), a keyboard, a touch input device (a touchpad, a touchscreen, or the like), a voice or sound input device, various types of sensor devices, and/or imaging devices, and/or output devices such as a display device, a printer, a speaker, and/or a network card. An exemplary input/output device 24 may be included inside the computing device 12 as one component constituting the computing device 12, or may be connected to the computing device 12 as a separate device distinct from the computing device 12.

Meanwhile, an embodiment of the present invention may include a program for performing the methods described herein on a computer, and a computer-readable recording medium including the program. The computer-readable recording medium may include program instructions, local data files, local data structures, and the like, alone or in combination. The medium may be specially designed and configured for the present invention, or may be commonly available in the field of computer software. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical recording media such as CD-ROMs and DVDs; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of the program may include not only machine language code such as that produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.

Although representative embodiments of the present invention have been described in detail above, those of ordinary skill in the art to which the present invention pertains will understand that various modifications to the above-described embodiments are possible without departing from the scope of the present invention. Therefore, the scope of the present invention should not be limited to the described embodiments, but should be defined by the claims set forth below and their equivalents.

Claims (9)

1. An apparatus for processing point cloud information, comprising: a point cloud information acquisition unit configured to acquire three-dimensional (3D) point cloud information for a 3D space; an additional information acquisition unit configured to acquire one or more additional images obtained by photographing at least a portion of the 3D space and direction information indicating a direction of gravity in a coordinate system in which the one or more additional images were captured; and a processing unit configured to transform, based on the one or more additional images and the direction information, the coordinate system of the 3D point cloud information so that one axis thereof coincides with the direction of gravity, and to represent the 3D point cloud information using the transformed coordinate system of the 3D point cloud information.

2. The apparatus of claim 1, wherein the additional information acquisition unit acquires sensing information in which the direction of gravity is detected, and acquires the direction information by reflecting the sensing information in the coordinate system in which the one or more additional images were captured.

3. The apparatus of claim 1, wherein the processing unit extracts a plurality of feature points from the one or more additional images, maps each of the plurality of feature points to points in the 3D point cloud information, calculates, from the mapping result, transformation relationship information between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information, and transforms the coordinate system of the 3D point cloud information, based on the transformation relationship information, so that one axis thereof coincides with the direction of gravity.

4. The apparatus of claim 3, wherein the transformation relationship information includes a rotational relationship between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information.

5. A method of processing point cloud information, comprising: acquiring 3D point cloud information for a 3D space; acquiring one or more additional images obtained by photographing at least a portion of the 3D space and direction information indicating a direction of gravity in a coordinate system in which the one or more additional images were captured; transforming, based on the one or more additional images and the direction information, the coordinate system of the 3D point cloud information so that one axis thereof coincides with the direction of gravity; and representing the 3D point cloud information using the transformed coordinate system of the 3D point cloud information.

6. The method of claim 5, wherein the acquiring of the additional images and the direction information comprises acquiring sensing information in which the direction of gravity is detected, and acquiring the direction information by reflecting the sensing information in the coordinate system in which the one or more additional images were captured.

7. The method of claim 5, wherein the transforming comprises: extracting a plurality of feature points from the one or more additional images; mapping each of the plurality of feature points to points in the 3D point cloud information; calculating, from the mapping result, transformation relationship information between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information; and transforming the coordinate system of the 3D point cloud information, based on the transformation relationship information, so that one axis thereof coincides with the direction of gravity.

8. The method of claim 7, wherein the transformation relationship information includes a rotational relationship between the coordinate system in which the one or more additional images were captured and the coordinate system of the 3D point cloud information.

9. A computer program stored in a non-transitory computer-readable storage medium, the computer program comprising one or more instructions that, when executed by a computing device having one or more processors, cause the computing device to: acquire 3D point cloud information for a 3D space; acquire one or more additional images obtained by photographing at least a portion of the 3D space and direction information indicating a direction of gravity in a coordinate system in which the one or more additional images were captured; transform, based on the one or more additional images and the direction information, the coordinate system of the 3D point cloud information so that one axis thereof coincides with the direction of gravity; and represent the 3D point cloud information using the transformed coordinate system of the 3D point cloud information.
PCT/KR2020/007969 2020-04-14 2020-06-19 Apparatus and method for processing point cloud information Ceased WO2021210725A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200045137A KR102158316B1 (en) 2020-04-14 2020-04-14 Apparatus and method for processing point cloud
KR10-2020-0045137 2020-04-14

Publications (1)

Publication Number Publication Date
WO2021210725A1 (en)

Family

ID=72707886

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/007969 Ceased WO2021210725A1 (en) 2020-04-14 2020-06-19 Apparatus and method for processing point cloud information

Country Status (2)

Country Link
KR (1) KR102158316B1 (en)
WO (1) WO2021210725A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102339625B1 * 2021-01-29 2021-12-16 Maxst Co., Ltd. Apparatus and method for updating space map

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140031345A (en) * 2012-01-13 2014-03-12 Softkinetic Software Automatic scene calibration
JP2014186565A (en) * 2013-03-25 2014-10-02 Geo Technical Laboratory Co., Ltd. Analysis method of three-dimensional point group
JP2015125685A (en) * 2013-12-27 2015-07-06 KDDI Corporation Spatial structure estimation apparatus, spatial structure estimation method, and spatial structure estimation program
KR20150082358A (en) * 2012-11-02 2015-07-15 Qualcomm Incorporated Reference coordinate system determination
JP2016186488A (en) * 2011-04-13 2016-10-27 Topcon Corporation Three-dimensional data processing apparatus, three-dimensional data processing system, three-dimensional data processing method, and program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102477031B1 2018-04-20 2022-12-14 Samsung Electronics Co., Ltd. Apparatus and method for processing data associated with 3-dimensional data

Also Published As

Publication number Publication date
KR102158316B1 (en) 2020-09-21

Similar Documents

Publication Publication Date Title
US11132841B2 (en) Systems and methods for presenting digital assets within artificial environments via a loosely coupled relocalization service and asset management service
WO2022050473A1 (en) Apparatus and method for estimating camera pose
WO2015174729A1 (en) Augmented reality providing method and system for providing spatial information, and recording medium and file distribution system
WO2010027193A2 (en) Spatially correlated rendering of three-dimensional content on display components having arbitrary positions
WO2011034308A2 (en) Method and system for matching panoramic images using a graph structure, and computer-readable recording medium
US10672191B1 (en) Technologies for anchoring computer generated objects within augmented reality
WO2011040710A2 (en) Method, terminal and computer-readable recording medium for performing visual search based on movement or position of terminal
WO2021112382A1 (en) Apparatus and method for dynamic multi-camera rectification using depth camera
WO2019229301A1 (en) Solution for generating virtual reality representation
EP3776469A1 (en) System and method for 3d association of detected objects
CN111612842A (en) Method and device for generating pose estimation model
WO2021167189A1 (en) Method and device for multi-sensor data-based fusion information generation for 360-degree detection and recognition of surrounding object
WO2021025364A1 (en) Method and system using lidar and camera to enhance depth information about image feature point
WO2019221340A1 (en) Method and system for calculating spatial coordinates of region of interest, and non-transitory computer-readable recording medium
US10600202B2 (en) Information processing device and method, and program
WO2018026094A1 (en) Method and system for automatically generating ortho-photo texture by using dem data
WO2021221334A1 (en) Device for generating color map formed on basis of gps information and lidar signal, and control method for same
WO2011034305A2 (en) Method and system for hierarchically matching images of buildings, and computer-readable recording medium
WO2011078596A2 (en) Method, system, and computer-readable recording medium for adaptively performing image-matching according to conditions
CN112102479A (en) Augmented reality method and device based on model alignment, storage medium and electronic equipment
WO2021125578A1 (en) Position recognition method and system based on visual information processing
WO2021206200A1 (en) Device and method for processing point cloud information
WO2021210725A1 (en) Apparatus and method for processing point cloud information
WO2011083929A2 (en) Method, system, and computer-readable recording medium for providing information on an object using a viewing frustum
WO2011034306A2 (en) Method and system for removing redundancy from among panoramic images, and computer-readable recording medium

Legal Events

Code  Description
121   EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20931000; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase (Ref country code: DE)
32PN  EP: public notification in the EP bulletin as the address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.03.2023))
122   EP: PCT application non-entry in European phase (Ref document number: 20931000; Country of ref document: EP; Kind code of ref document: A1)