WO2021014435A1 - A method for automatically creating a georeferenced model - Google Patents

A method for automatically creating a georeferenced model

Info

Publication number
WO2021014435A1
Authority
WO
WIPO (PCT)
Prior art keywords
multiplicity
georeferenced
images
model
site
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IL2020/050754
Other languages
French (fr)
Inventor
Issam FARRAN
Moti Yanuka
Rony Atoun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Datumate Ltd
Original Assignee
Datumate Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.) 2019-07-20
Filing date 2020-07-06
Publication date 2021-01-28
Application filed by Datumate Ltd
Publication of WO2021014435A1
Anticipated expiration
Legal status: Ceased (current)

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 Interpretation of pictures
    • G01C11/06 Interpretation of pictures by comparison of two or more pictures of the same area
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B25/00 Models for purposes not provided for in G09B23/00, e.g. full-sized devices for demonstration purposes
    • G09B25/06 Models for purposes not provided for in G09B23/00, e.g. full-sized devices for demonstration purposes for surveying; for geography, e.g. relief models

Abstract

A method for automatically creating a georeferenced model including creating a collection of photographically detectable Ground Control Points (GCPs) at a site, photographing the site to produce a first multiplicity of images, processing the first images to generate a first non-georeferenced 3D model of the site, identifying and marking the GCPs on at least two images, thereby providing identified and marked GCPs, employing the first non-georeferenced 3D model and the identified and marked GCPs to produce a first georeferenced 3D model, photographing the site at a second time to produce a second multiplicity of images, processing the second images to generate a second non-georeferenced 3D model of the site, automatically identifying features which appear in both sets of images and designating features in the first multiplicity of images as virtual GCPs and employing the second non-georeferenced 3D model and the virtual GCPs to produce a second georeferenced 3D model.

Description

A METHOD FOR AUTOMATICALLY CREATING
A GEOREFERENCED MODEL
REFERENCE TO RELATED APPLICATIONS
Reference is made to U.S. Provisional Patent Application Serial No. 62/876,608 filed July 20, 2019 and entitled: ALGORITHM FOR GEO-REFERENCING AUTOMATICALLY A POINT CLOUD BASED ON A GEO-REFERENCED CLOUD, SAMPLED AT SOME PREVIOUS DATE, the disclosure of which is hereby incorporated by reference and priority of which is claimed.
FIELD OF THE INVENTION
The present invention relates to geomatics generally and more specifically to geo-referencing of three-dimensional models.
BACKGROUND OF THE INVENTION
Various techniques for geo-referencing of three-dimensional models are known in the art.
SUMMARY OF THE INVENTION
The present invention seeks to provide an improved system and technique for geo-referencing three-dimensional models.
There is thus provided in accordance with a preferred embodiment of the present invention a method for automatically creating a georeferenced model, the method including creating a collection of photographically detectable Ground Control Points (GCPs) at a site, photographing the site at a first time to produce a first multiplicity of partially mutually overlapping images of the site in a manner that the collection of GCPs are discernable in multiple ones of the first multiplicity of partially mutually overlapping images, processing the first multiplicity of images to provide for each of the first multiplicity of images: a camera location and a camera orientation in three dimensions representing a first non-georeferenced 3D model of the site at the first time, identifying and marking each of the GCPs on at least two of the first multiplicity of images, thereby providing identified and marked GCPs, employing the first non-georeferenced 3D model and the identified and marked GCPs to produce a first georeferenced high-precision 3D model, which is anchored to a coordinate system of the collection of photographically detectable GCPs at the site, photographing the site at at least a second time to produce a second multiplicity of at least partially overlapping images of portions of the site, processing the second multiplicity of images to provide for each of the second multiplicity of images: a camera location and a camera orientation in three dimensions representing a second non-georeferenced 3D model of the site at the second time, automatically identifying a multiplicity of features which appear in both the first multiplicity of images and the second multiplicity of images and designating at least some of the multiplicity of features in the first multiplicity of images as virtual GCPs and employing the second non-georeferenced 3D model and the virtual GCPs to produce a second georeferenced high-precision 3D model, which is anchored to the coordinate system.
In accordance with a preferred embodiment of the present invention, the first non-georeferenced model and the first multiplicity of images are represented by a first point cloud. Preferably, the processing the first multiplicity of images also includes finding corresponding reference locations which appear in multiple ones of the multiplicity of at least partially overlapping images.
In accordance with a preferred embodiment of the present invention the employing the first non-georeferenced 3D model and the identified and marked Ground Control Points to produce a first georeferenced high-precision 3D model includes performing bundle adjustment on the first non-georeferenced 3D model.
Preferably, the second non-georeferenced model and the second multiplicity of images are represented by a second point cloud.
In accordance with a preferred embodiment of the present invention the processing of the second multiplicity of images also includes finding corresponding reference locations which appear in multiple ones of the second multiplicity of at least partially overlapping images.
In accordance with a preferred embodiment of the present invention the employing the second non-georeferenced 3D model and the virtual Ground Control Points to produce a second georeferenced high-precision 3D model, which is anchored to the coordinate system includes performing bundle adjustment on the second non-georeferenced 3D model.
Preferably, the finding corresponding reference locations which appear in multiple ones of the first multiplicity of at least partially overlapping images includes performing Structure from Motion (SfM) analysis on the first multiplicity of at least partially overlapping images. Additionally, the performing Structure from Motion (SfM) analysis on the multiplicity of at least partially overlapping images includes employing image metadata from the multiplicity of at least partially overlapping images.
In accordance with a preferred embodiment of the present invention the finding corresponding reference locations which appear in multiple ones of the second multiplicity of at least partially overlapping images includes performing Structure from Motion (SfM) analysis on the second multiplicity of at least partially overlapping images. Additionally, the performing Structure from Motion (SfM) analysis on the second multiplicity of at least partially overlapping images includes employing image metadata from the second multiplicity of at least partially overlapping images.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be understood and appreciated more fully from the following detailed description, taken in conjunction with the drawings in which:
Fig. 1 is a simplified illustration of a site at a first point in time including a collection of photographically detectable Ground Control Points (GCPs) as known in the prior art;
Fig. 2 is a simplified illustration of a first multiplicity of partially overlapping images of the site shown in Fig. 1 taken at the first point in time as known in the prior art;
Fig. 3 is a simplified illustration of a non-georeferenced sparse point cloud representing a first model of the site of Figs. 1 & 2 at the first point in time as known in the prior art;
Fig. 4 is a simplified illustration of a georeferenced dense point cloud representing a first model of the site of Figs. 1 & 2 at the first point in time as known in the prior art;
Fig. 5 is a simplified illustration of the site of Figs. 1 - 4 at a second point in time including a collection of photographically detectable Ground Control Points and photographically detectable candidate Virtual Ground Control Points (VGCPs);
Fig. 6 is a simplified illustration of a second multiplicity of partially overlapping images of the site shown in Fig. 5 taken at the second point in time showing at least some of the photographically detectable candidate Virtual Ground Control Points;
Fig. 7 is a simplified illustration of a non-georeferenced sparse point cloud representing a second model of the site of Figs. 1 - 6 at the second point in time;
Fig. 8 is a simplified illustration of a georeferenced dense point cloud representing a second model of the site of Figs. 1 - 7 at the second point in time;
Figs. 9A and 9B, taken together, are a simplified flow chart illustrating steps in generating a georeferenced dense point cloud representing a first model of the site of Fig. 1 at the first point in time, as known in the prior art;
Fig. 10 is a simplified flow chart illustrating steps in the selection of a plurality of images from among a second multiplicity of images of the site, taken at a second point of time and exemplified in Fig. 6;
Fig. 11 is a simplified flow chart illustrating steps in the extraction and matching of features seen in the first and the second multiplicities of images of the site taken at respective first and second points in time, thereby providing candidate VGCPs; and
Figs. 12A, 12B and 12C are, taken together, a simplified flow chart illustrating steps in the iterative selection of the most robust VGCPs from among the candidate VGCPs.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT
Reference is now made to Figs. 9A and 9B, which, taken together, are a simplified flow chart illustrating steps in generating a georeferenced dense point cloud representing a first model of a site, such as a site shown in Fig. 1, at a first point in time, as known in the prior art.
A conventional technique for generating a georeferenced model of a site exemplified by a georeferenced dense point cloud typically includes the following steps as seen in Figs. 9A and 9B.
As seen in a step 910, markers are posted at distributed places on the ground of the site, wherein the geographic location of those places is measured with high accuracy and recorded in a specific coordinate system, which is a “world” coordinate system and is universally accepted as correct. These places and their geographical locations are considered to be reference points and are termed Ground Control Points (GCPs). Fig. 1 illustrates a typical site 100 having posted therein GCPs which are each designated by reference numeral 102.
In a next step 915, the site is photographed from above at a first point in time and a multiplicity of partially mutually overlapping 2-dimensional photographic images is generated. Photographing a typical site may generate approximately 5000 such images. Examples of such photographic images appear in Fig. 2 and are designated by reference numerals 200.
In a subsequent step 920, Structure from Motion (SfM) analysis on the multiplicity of partially mutually overlapping photographic images is performed by employing conventional feature extraction and feature matching algorithms, to find corresponding reference locations which appear in multiple partially overlapping ones of the multiplicity of partially mutually overlapping photographic images. The SfM analysis may also employ image metadata, such as EXIF information. Typical reference locations are designated by reference numerals 202 in the 2-dimensional photographic images 200 appearing in Fig. 2.
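By way of illustrative example only, the following sketch shows how camera GPS position might be read from the EXIF tags of an aerial image for use as auxiliary input to the SfM analysis. Pillow is an assumed tool choice, not part of the method; the tag handling follows the standard EXIF GPS layout.

```python
# Illustrative sketch (assumed tooling, recent Pillow): reading camera GPS
# position from EXIF metadata, as one possible auxiliary input to SfM.
from PIL import Image

GPS_IFD = 0x8825  # standard EXIF pointer to the GPS sub-directory


def read_gps(path):
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(GPS_IFD)
    if not gps:
        return None

    def to_deg(dms, ref):
        # Latitude/longitude are stored as (degrees, minutes, seconds) rationals.
        d, m, s = (float(v) for v in dms)
        return (-1.0 if ref in ("S", "W") else 1.0) * (d + m / 60.0 + s / 3600.0)

    lat = to_deg(gps[2], gps[1])                # GPSLatitude / GPSLatitudeRef
    lon = to_deg(gps[4], gps[3])                # GPSLongitude / GPSLongitudeRef
    alt = float(gps[6]) if 6 in gps else None   # GPSAltitude, if present
    return lat, lon, alt
```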
In a next step 925, a non-georeferenced 3D model of the site is generated, based on the SfM analysis, which may be exemplified by a Sparse Point Cloud (SPC). A typical SPC representing the site of Fig. 1 appears in Fig. 3. It is noted that the SPC shown in Fig. 3 is not to scale and is in an arbitrary coordinate system.
In a subsequent step 930, the GCPs in a sub-set of the multiplicity of partially mutually overlapping photographic images having a relatively high number of mutually corresponding reference locations, which are known as tie points, are identified and marked. The steps of identifying and marking are currently done manually and are time consuming and costly. The result is a collection of marked pixels representing known GCPs in the sub- set of two-dimensional images.
Subsequently, as seen in step 935, the 3D coordinates of the GCPs are reconstructed in the same arbitrary coordinate system of the non-georeferenced 3D model of the site as exemplified in the sparse point cloud illustrated in Fig. 3.
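By way of illustrative example only, reconstructing the 3D coordinates of a GCP marked in two images with known camera poses, as in step 935, is a triangulation; OpenCV's triangulatePoints is one conventional tool. The simple pinhole projection-matrix construction below is an assumption for illustration.

```python
# Illustrative sketch: triangulating one marked GCP from two images whose
# camera poses (in the arbitrary model frame) are known from the SfM step.
import cv2
import numpy as np


def triangulate_gcp(K, R1, t1, R2, t2, uv1, uv2):
    """K: 3x3 intrinsics; (R_i, t_i): pose of camera i; uv_i: marked pixel."""
    P1 = K @ np.hstack([R1, t1.reshape(3, 1)])
    P2 = K @ np.hstack([R2, t2.reshape(3, 1)])
    X_h = cv2.triangulatePoints(P1, P2,
                                np.float64(uv1).reshape(2, 1),
                                np.float64(uv2).reshape(2, 1))
    return (X_h[:3] / X_h[3]).ravel()   # homogeneous -> Euclidean 3D point
```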
In a next step 940, a rigid 3D transformation, between the 3D coordinates of the identified and marked GCPs in the “world” coordinate system, in which the GCPs were originally measured, and the 3D coordinates of the GCPs in the arbitrary coordinate system of the non-georeferenced 3D model of the site, is calculated, as exemplified in the SPC illustrated in Fig. 3.
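By way of illustrative example only, the transformation of step 940 may be estimated by a closed-form least-squares fit over the paired GCP coordinates (the Kabsch/Umeyama construction). The sketch below includes an optional uniform scale factor, since an SfM reconstruction is scale-ambiguous; the patent speaks only of a rigid transformation, so the scale term is an assumption.

```python
import numpy as np


def fit_transform(world_xyz, model_xyz, with_scale=True):
    """Least-squares fit of model ≈ s * R @ world + t over paired GCPs.
    world_xyz: GCP coordinates as surveyed in the "world" frame;
    model_xyz: the same GCPs reconstructed in the arbitrary model frame."""
    W = np.asarray(world_xyz, float)
    M = np.asarray(model_xyz, float)
    mu_w, mu_m = W.mean(axis=0), M.mean(axis=0)
    Wc, Mc = W - mu_w, M - mu_m
    U, _, Vt = np.linalg.svd(Wc.T @ Mc)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = (U @ Vt).T
    s = np.sum(Mc * (Wc @ R.T)) / np.sum(Wc * Wc) if with_scale else 1.0
    t = mu_m - s * R @ mu_w
    return s, R, t
```

Step 945 then applies the fitted transform forward, model = s * (world @ R.T) + t, and step 955 below uses its inverse, world = ((model - t) @ R) / s.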
In a next step 945, the measured 3D coordinates of the GCPs are transformed from the “world” coordinate system to the arbitrary coordinate system of the non-georeferenced 3D model of the site, as exemplified in the SPC illustrated in Fig. 3, using the calculated rigid 3D transformation. At this stage the locations of the GCPs are marked with their “world” coordinates in the non-georeferenced 3D model of the site as exemplified in the SPC of Fig. 3.
In a subsequent step 950, Sparse Bundle Adjustment (SBA) is performed on the SPC employing the 3D locations of the cameras and using the GCPs as anchor points. This achieves the dual objectives of correction of the non-georeferenced 3D model for drift and curvature and locking the corrected non-georeferenced 3D model solution to “world” coordinates.
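By way of illustrative example only, the sketch below shows a toy bundle adjustment built on a general-purpose least-squares solver, with the GCP anchoring realized as heavily weighted residuals pulling those points toward their known coordinates. Production SBA implementations exploit the sparsity of the Jacobian and a full camera model; the single shared focal length, the residual weighting, and all names here are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def ba_residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs_2d, f,
                 gcp_idx, gcp_xyz, w_anchor):
    # Unpack camera rotations (rotation vectors), translations and 3D points.
    rvecs = params[:n_cams * 3].reshape(n_cams, 3)
    tvecs = params[n_cams * 3:n_cams * 6].reshape(n_cams, 3)
    pts = params[n_cams * 6:].reshape(n_pts, 3)
    # Pinhole projection of each observed point into its observing camera.
    p_cam = Rotation.from_rotvec(rvecs[cam_idx]).apply(pts[pt_idx]) + tvecs[cam_idx]
    reproj = (f * p_cam[:, :2] / p_cam[:, 2:3] - obs_2d).ravel()
    # Anchor terms: a large weight keeps the GCPs pinned to their coordinates.
    anchor = w_anchor * (pts[gcp_idx] - gcp_xyz).ravel()
    return np.concatenate([reproj, anchor])


def run_toy_sba(x0, *args):
    return least_squares(ba_residuals, x0, args=args,
                         method="trf", loss="huber").x
```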
Subsequently, as seen in step 955, the corrected non-georeferenced 3D model exemplified by the SPC is transformed to “world” coordinates, using the inverse of the rigid 3D transformation, thereby creating a geo-referenced model, which may be exemplified by a geo-referenced SPC. Finally, as seen in step 960, a georeferenced Dense Point Cloud (DPC) is generated in the “world” coordinate system by using the rigid 3D transformation to represent the 3D coordinates of the cameras in the “world” coordinate system and then using these coordinates as an input for DPC generation. This georeferenced DPC is illustrated in Fig. 4.
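Continuing the illustrative sketch above, applying the inverse of the fitted transform to the corrected model is a one-line operation; the variable names follow the earlier sketch and are assumptions.

```python
# Sketch of step 955: map the corrected model, exemplified by the SPC, from
# the arbitrary frame back to "world" coordinates using the inverse transform.
world_points = ((model_points - t) @ R) / s
# The camera centers can be mapped the same way before DPC generation (step 960).
world_cams = ((cam_centers - t) @ R) / s
```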
Reference is now made to Fig. 10, which is a simplified flow chart showing steps in generating, at a second point in time, a georeferenced DPC representing a model of the site, which is shown in Fig. 1 at a first point in time, and illustrating steps in the selection of a plurality of images from among a second multiplicity of images of the site taken at a second point of time.
As seen in Fig. 10, there is provided a technique, in accordance with a preferred embodiment of the present invention, for generating, at a second point in time, a second georeferenced model of the site of Fig. 1 exemplified by a second georeferenced DPC. The technique preferably includes the following initial steps seen in Fig. 10.
As seen in a first step 1010, the site is photographed, from above, as exemplified in Fig. 5, as it appears at a second point in time, and a second multiplicity of partially mutually overlapping 2-dimensional photographic images is generated. Photographing a typical site might generate approximately 5000 such images. Examples of such photographic images appear in Fig. 6 and are designated by reference numerals 300.
In a next step 1020, Structure from Motion (SfM) analysis on the second multiplicity of partially mutually overlapping photographic images is performed by employing conventional feature extraction and feature matching algorithms, to find corresponding reference locations which appear in multiple partially overlapping ones of the second multiplicity of partially mutually overlapping photographic images. The SfM analysis may also employ image metadata, such as EXIF information. Typical reference locations are designated by reference numerals 302 in the 2-dimensional photographic images 300 appearing in Fig. 6.
In a subsequent step 1030, a non-georeferenced 3D model of the site, as it appears at the second point of time, is generated, based on the SfM analysis, which may be exemplified by a SPC. A typical SPC representing the site of Fig. 5 appears in Fig. 7. It is noted that the SPC shown in Fig. 7 is not to scale and is in an arbitrary coordinate system and is not georeferenced (NR). The encircled areas 304 shown in Fig. 7 represent areas in which a change in terrain occurred between the first point of time and the second point of time. The circles do not appear in the SPC.
In a next step 1040, a plurality of images, from among the second multiplicity of partially mutually overlapping 2-dimensional photographic images, are selected to serve as images to be used for establishing Virtual Ground Control Points (VGCPs) by employing the geo-referenced model generated in step 955 of Fig. 9, as exemplified by a geo-referenced SPC produced thereby, which constitutes a first georeferenced (GR) data set, and by employing specifically the locations of the GCPs in the first georeferenced (GR) data set.
Finally, as seen in step 1050, for each GCP seen in both the first multiplicity of images and in the second multiplicity of images, the best images, typically the three best images, from among the plurality of images, from among the first multiplicity of partially mutually overlapping 2-dimensional photographic images, are selected. The selection is based on which of the GCPs have the greatest number of tie points which appear in those of the multiple partially overlapping ones of the first multiplicity of partially mutually overlapping photographic images in which the given GCP appears. The best images are then ordered in decreasing order, preferably based on the number of tie points contained therein when only considering the best images. The best images are marked for future use.
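By way of illustrative example only, step 1050 may be realized with simple bookkeeping; the dictionaries below (gcp_to_images, tie_point_count) are hypothetical stand-ins for data produced by the earlier SfM steps, not names from the patent.

```python
def best_images_per_gcp(gcp_to_images, tie_point_count, keep=3):
    """For each GCP seen at both times, rank the first-epoch images showing
    it by their tie-point count and keep the best few, in decreasing order."""
    ranked = {}
    for gcp, images in gcp_to_images.items():
        order = sorted(images, key=lambda im: tie_point_count[im], reverse=True)
        ranked[gcp] = order[:keep]   # e.g. the three best images, marked for later use
    return ranked
```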
Reference is now made to Fig. 11, which is a simplified flow chart illustrating steps in the extraction and matching of features seen in the first and the second multiplicities of images of the site taken at respective first and second points in time, thereby providing candidate Virtual Ground Control Points (VGCPs).
As indicated in Fig. 11, for each of the best images referenced in step 1050 of Fig. 10, starting with the highest ranked image of the best images, the following steps are carried out.
As seen in a first step 1110, the highest ranked image of the best images is designated, typically by the designation A1. In a next step 1120, using EXIF data from the first multiplicity of images used to define the GR dataset, the two images which are closest in camera location to A1 are selected and designated as A2 and A3.
In a subsequent step 1130, the three images from the second multiplicity of images used to define the NR dataset, which are closest in camera location to A1, are selected, using EXIF data, and designated as B1, B2 and B3.
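By way of illustrative example only, the camera-proximity selections of steps 1120 and 1130 reduce to a nearest-neighbour query over camera positions; the sketch assumes positions already parsed from EXIF and expressed in a common metric frame.

```python
import numpy as np


def nearest_cameras(anchor_xyz, cam_xyz, k):
    """Indices of the k cameras closest (3D Euclidean) to anchor_xyz."""
    d = np.linalg.norm(np.asarray(cam_xyz) - np.asarray(anchor_xyz), axis=1)
    return np.argsort(d)[:k]

# A2, A3: the two closest first-epoch cameras to A1 (after excluding A1 itself);
# B1, B2, B3: the three closest second-epoch cameras to A1.
```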
In a next step 1140, for each of the six images, A1, A2, A3, B1, B2 and B3, a 2D feature extraction algorithm, which is scaling and rotation invariant, is employed to extract features that appear in all of the six images. An example of a suitable conventional feature extraction algorithm is Binary Robust Invariant Scalable Keypoints (BRISK), available via the Internet from the Open Source Computer Vision Library (OpenCV), at website opencv.org.
In a final step 1150, a feature matching algorithm is employed to provide matching of multiple features in the following image pairs: A1 and B1, A1 and B2, A1 and B3, B1 and B2, B1 and B3, A1 and A2, A1 and A3. All features that are found in all of the above image pairs are considered to be VGCP candidates. An example of a suitable feature matching algorithm is a Brute Force matcher based on Hamming distance, available via the Internet from the Open Source Computer Vision Library (OpenCV), at website opencv.org.
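By way of illustrative example only, steps 1140 and 1150 may be sketched with the OpenCV primitives named above, BRISK and a Hamming-distance Brute Force matcher. The "found in all pairs" requirement is realized here by chaining consistency through the A1-B1 correspondence, which is one possible reading of the step.

```python
import cv2

PAIRS_FROM_A1 = ("A2", "A3", "B1", "B2", "B3")


def vgcp_candidates(imgs):
    """imgs: dict keyed 'A1','A2','A3','B1','B2','B3' of grayscale images.
    Returns the A1 keypoints that survive matching in every required pair."""
    brisk = cv2.BRISK_create()                     # scale- and rotation-invariant
    kp, des = {}, {}
    for name, img in imgs.items():
        kp[name], des[name] = brisk.detectAndCompute(img, None)

    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def match_map(a, b):                           # index in a -> index in b
        return {m.queryIdx: m.trainIdx for m in bf.match(des[a], des[b])}

    a1_to = {x: match_map("A1", x) for x in PAIRS_FROM_A1}
    b1_to = {x: match_map("B1", x) for x in ("B2", "B3")}

    candidates = []
    for i in range(len(kp["A1"])):
        if not all(i in a1_to[x] for x in PAIRS_FROM_A1):
            continue                               # pairs A1-A2 ... A1-B3
        j = a1_to["B1"][i]                         # chain through B1 for B-pairs
        if all(j in b1_to[x] for x in ("B2", "B3")):
            candidates.append(kp["A1"][i])
    return candidates
```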
Reference is now made to Figs. 12A, 12B, and 12C, which, taken together, are a simplified flow chart illustrating steps in the iterative selection of the most robust VGCPs from among the candidate VGCPs.
As seen in Figs. 12A, 12B and 12C, the following steps take place.
As seen in a first step 1210, the VGCP candidate points are added to the GR dataset as candidate VGCPs. These points have their 3D coordinates in the “world” coordinate system, set by operation of the Sparse Bundle Adjustment (SBA) algorithm in step 950 of Fig. 9, as described above. The 2D coordinates of these points in the arbitrary coordinate system are marked in steps 1140 and 1150 of Fig. 11, as described above.
In a next step 1215, the Sparse Bundle Adjustment (SBA) algorithm is run on the GR dataset including the candidate VGCPs from step 1210. In a subsequent step 1220, the differences, measured as a 3D distance, between the coordinates of the candidate VGCPs in the GR dataset of step 1210, prior to the SBA step 1215 above, and the GR dataset, following SBA step 1215 (Pre and Post the SBA of step 1215), are calculated.
As seen in a next step 1225, the distance of all of the candidate VGCPs is compared to a predetermined threshold. If all of the candidate VGCPs have a distance less than the predetermined threshold, all of the candidate VGCPs are designated as VGCPs and this part of the process is complete and the process continues with step 1300 described hereinbelow.
If not all of the candidate VGCPs have a distance less than the predetermined threshold, as seen in a step 1230, the candidate VGCPs that have a distance greater than the predetermined threshold are discarded and the process continues with the remaining candidate VGCPs.
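By way of illustrative example only, the pre/post comparison of steps 1220 to 1230 is a per-point displacement test; the threshold value is site- and accuracy-dependent and is not specified by the method.

```python
import numpy as np


def split_by_displacement(pre_xyz, post_xyz, threshold):
    """Partition candidate VGCPs by how far the SBA run moved them."""
    drift = np.linalg.norm(np.asarray(post_xyz) - np.asarray(pre_xyz), axis=1)
    stable = drift < threshold          # designated VGCPs (step 1225)
    return stable, drift                # ~stable are discarded (step 1230)
```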
As seen in a next step 1235, the remaining candidate VGCPs are added to the NR dataset as GCPs, so that a subsequent SBA operation does not move them and they serve as anchors to the subsequent SBA. The remaining candidate VGCPs have their 3D coordinates in the “world” coordinate system set by operation of the Sparse Bundle Adjustment (SBA) algorithm in step 950 of Fig. 9, as described above. The 2D coordinates of these points in the arbitrary coordinate system are marked as in steps 1140 and 1150 of Fig. 11, as described above.
In a subsequent step 1240, the Sparse Bundle Adjustment (SBA) algorithm is run on the NR dataset including the GCPs resulting from step 1235.
As seen in a next step 1245, the differences between the coordinates of the candidate VGCPs in the NR dataset of step 1235 following step 1240 and the NR dataset following step 1215 above (Pre and Post the SBA step 1240) are calculated.
It is appreciated that while, as noted above in reference to step 1235, the candidate VGCPs added to the NR dataset as GCPs in step 1235 serve as anchors to the SBA performed in step 1240, an estimated coordinate is calculated for each of these candidate VGCPs after the SBA is performed and therefore there may be a difference between the pre and post SBA step 1240 values of the coordinates of the candidate VGCPs. Subsequently, as seen in a next step 1250, the candidate VGCPs having a distance less than a predetermined threshold are designated as VGCPs in the NR dataset.
In a next step 1255, the candidate VGCPs having a distance greater than the predetermined threshold are designated as candidate VGCPs in the NR dataset resulting from step 1250.
In a subsequent step 1260, the Sparse Bundle Adjustment (SBA) algorithm is run on the NR dataset including the VGCPs resulting from step 1250 and the candidate VGCPs from step 1255. The VGCPs resulting from step 1250 will be treated as actual GCPs in the next SBA.
In a next step 1265, the differences, measured as a 3D distance, between the coordinates of the candidate VGCPs in the NR dataset of step 1255 prior to step 1260 and the NR dataset following step 1260 (Pre and Post the SBA step 1260) are calculated.
As seen in a next step 1270, the candidate VGCPs that have a distance greater than the predetermined threshold are discarded.
As seen in a next step 1275, if none of the candidate VGCPs in the NR dataset have a distance less than the predetermined threshold, this part of the process is complete and the process continues with step 1300 described hereinbelow.
As seen in a next step 1280, if any of the candidate VGCPs in the NR dataset have a distance less than the predetermined threshold, the candidate VGCPs that have a distance less than the predetermined threshold are designated as VGCPs in the NR dataset.
As seen in a subsequent step 1285, the SBA algorithm is run on the NR dataset resulting from step 1280 above. At this stage, all points are VGCPs.
In a next step 1290, the differences, measured as a 3D distance, between the coordinates of the VGCPs in the NR dataset of step 1280 prior to step 1285 and the NR dataset following step 1285 (Pre and Post the SBA step 1285) are calculated to be used as a final filter to remove unqualified VGCPs.
In a next step 1295, the VGCPs that have a distance less than the predetermined threshold are designated as final VGCPs in the NR dataset and saved. The VGCPs that have a distance greater than the predetermined threshold are discarded. As seen in final step 1300, the resulting dataset, either the NR dataset resulting from step 1295 or step 1275 or the GR dataset resulting from step 1225, is now considered to be a GR dataset representing the site at the second point of time and may be used as a reference for creating future GR datasets at future points in time by employing the procedures described hereinabove with reference to Figs. 10 - 12. The resulting dataset is exemplified in a dense point cloud shown in Fig. 8. The encircled areas shown in Fig. 8 correspond to the encircled areas in Fig. 7 and represent areas in which a change in terrain occurred between the first point of time and the second point of time. The circles do not appear in the dense point cloud.
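By way of illustrative summary only, the control flow of Figs. 12A-12C may be condensed as below. run_sba and positions_of are hypothetical hooks onto whatever SBA implementation is used, and the loop follows the spirit of steps 1210-1300 rather than reproducing each branch literally; in particular, drifting candidates here get a further round before removal, whereas steps 1230 and 1270 discard them outright.

```python
import numpy as np


def select_robust_vgcps(dataset, candidates, run_sba, positions_of, threshold):
    """Iteratively promote candidates that stay put under SBA; cf. Figs. 12A-12C."""
    vgcps = []
    while candidates:
        pre = positions_of(dataset, candidates)
        run_sba(dataset, anchors=vgcps)          # promoted points act as GCP anchors
        post = positions_of(dataset, candidates)
        drift = np.linalg.norm(post - pre, axis=1)
        stable = [c for c, d in zip(candidates, drift) if d < threshold]
        if not stable:
            break                                # no candidate converged this round
        vgcps.extend(stable)
        candidates = [c for c, d in zip(candidates, drift) if d >= threshold]
    if vgcps:                                    # final filter, cf. steps 1285-1295
        pre = positions_of(dataset, vgcps)
        run_sba(dataset, anchors=vgcps)
        post = positions_of(dataset, vgcps)
        drift = np.linalg.norm(post - pre, axis=1)
        vgcps = [v for v, d in zip(vgcps, drift) if d < threshold]
    return vgcps
```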
It will be appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described hereinabove. Rather the scope of the present invention includes both combinations and subcombinations of features described herein as well as modifications and variations thereof which are not in the prior art.

Claims

1. A method for automatically creating a georeferenced model, the method comprising:
creating a collection of photographically detectable Ground Control Points (GCPs) at a site;
photographing said site at a first time to produce a first multiplicity of partially mutually overlapping images of the site in a manner that said collection of GCPs are discernable in multiple ones of said first multiplicity of partially mutually overlapping images;
processing said first multiplicity of images to provide for each of said first multiplicity of images: a camera location and a camera orientation in three dimensions representing a first non-georeferenced 3D model of said site at said first time;
identifying and marking each of said GCPs on at least two of said first multiplicity of images, thereby providing identified and marked GCPs;
employing said first non-georeferenced 3D model and said identified and marked GCPs to produce a first georeferenced high-precision 3D model, which is anchored to a coordinate system of said collection of photographically detectable GCPs at said site;
photographing said site at at least a second time to produce a second multiplicity of at least partially overlapping images of portions of the site;
processing said second multiplicity of images to provide for each of said second multiplicity of images: a camera location and a camera orientation in three dimensions representing a second non-georeferenced 3D model of said site at said second time;
automatically identifying a multiplicity of features which appear in both said first multiplicity of images and said second multiplicity of images and designating at least some of said multiplicity of features in said first multiplicity of images as virtual GCPs; and employing said second non-georeferenced 3D model and said virtual GCPs to produce a second georeferenced high-precision 3D model, which is anchored to said coordinate system.
2. A method for automatically creating a georeferenced model according to claim 1 and wherein said first non-georeferenced model and said first multiplicity of images are represented by a first point cloud.
3. A method for automatically creating a georeferenced model according to claim 2 and wherein said processing said first multiplicity of images also comprises finding corresponding reference locations which appear in multiple ones of the multiplicity of at least partially overlapping images.
4. A method for automatically creating a georeferenced model according to claim 1 and wherein said employing said first non-georeferenced 3D model and said identified and marked Ground Control Points to produce a first georeferenced high- precision 3D model comprises performing bundle adjustment on said first non- georeferenced 3D model.
5. A method for automatically creating a georeferenced model according to claim 2 and wherein said second non-georeferenced model and said second multiplicity of images are represented by a second point cloud.
6. A method for automatically creating a georeferenced model according to claim 3 and wherein said processing of said second multiplicity of images also comprises finding corresponding reference locations which appear in multiple ones of said second multiplicity of at least partially overlapping images.
7. A method for automatically creating a georeferenced model according to claim 4 and wherein said employing said second non-georeferenced 3D model and said virtual Ground Control Points to produce a second georeferenced high-precision 3D model, which is anchored to said coordinate system comprises performing bundle adjustment on said second non-georeferenced 3D model.
8. A method for automatically creating a georeferenced model according to claim 3 and wherein said finding corresponding reference locations which appear in multiple ones of the first multiplicity of at least partially overlapping images comprises performing Structure from Motion (SfM) analysis on said first multiplicity of at least partially overlapping images.
9. A method for automatically creating a georeferenced model according to claim 8 and wherein said performing Structure from Motion (SfM) analysis on said multiplicity of at least partially overlapping images comprises employing image metadata from said multiplicity of at least partially overlapping images.
10. A method for automatically creating a georeferenced model according to claim 6 and wherein said finding corresponding reference locations which appear in multiple ones of said second multiplicity of at least partially overlapping images comprises performing Structure from Motion (SfM) analysis on said second multiplicity of at least partially overlapping images.
11. A method for automatically creating a georeferenced model according to claim 10 and wherein said performing Structure from Motion (SfM) analysis on said second multiplicity of at least partially overlapping images comprises employing image metadata from said second multiplicity of at least partially overlapping images.
PCT/IL2020/050754 2019-07-20 2020-07-06 A method for automatically creating a georeferenced model Ceased WO2021014435A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962876608P 2019-07-20 2019-07-20
US62/876,608 2019-07-20

Publications (1)

Publication Number Publication Date
WO2021014435A1 true WO2021014435A1 (en) 2021-01-28

Family

ID=74194028

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2020/050754 Ceased WO2021014435A1 (en) 2019-07-20 2020-07-06 A method for automatically creating a georeferenced model

Country Status (1)

Country Link
WO (1) WO2021014435A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180260626A1 (en) * 2015-08-06 2018-09-13 Accenture Global Services Limited Condition detection using image processing
US20180040137A1 (en) * 2015-09-17 2018-02-08 Skycatch, Inc. Generating georeference information for aerial images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20844553; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17/05/2022))

122 Ep: pct application non-entry in european phase (Ref document number: 20844553; Country of ref document: EP; Kind code of ref document: A1)