
NL2035685B1 - Method for determining accurate coordinates from images with human supervision using a virtual three-dimensional marker - Google Patents


Info

Publication number
NL2035685B1
Authority
NL
Netherlands
Prior art keywords
image
dimensional
images
operator
marker
Prior art date
Application number
NL2035685A
Other languages
Dutch (nl)
Inventor
Boeters Roeland
Pieter Kodde Martinus
Feenstra Jos
Original Assignee
Ingb Geodelta B V
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ingb Geodelta B V filed Critical Ingb Geodelta B V
Priority to NL2035685A priority Critical patent/NL2035685B1/en
Application granted granted Critical
Publication of NL2035685B1 publication Critical patent/NL2035685B1/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 11/02: Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G01C 11/04: Interpretation of pictures
    • G01C 11/06: Interpretation of pictures by comparison of two or more pictures of the same area
    • G01C 11/08: Interpretation of pictures by comparison of two or more pictures of the same area, the pictures not being supported in the same relative position as when they were taken


Abstract

The patent describes an apparatus and method for measuring three-dimensional points-of-interest from overlapping images taken by an image recording device. Operators can select a point-of-interest in one image and detect corresponding points in overlapping images. Additionally, a virtual three-dimensional marker facilitates precise measurements, where the operator can directly perform measurements by changing the height of the marker. There are optional adaptations for panoramic images and stereoscopic displays.

Description

Method for determining accurate coordinates from images with human supervision using a virtual three-dimensional marker
Field of the invention
[0001] The present invention relates to a method for determining the three-dimensional coordinates of objects in the real world from images taken with a calibrated camera system, where coordinates are determined with the intervention of a human operator. The operator using the system manipulates a virtual three-dimensional marker and confirms the position of this marker once its projected position in a multitude of images is correct.
Background art
[0002] The use of images to obtain three-dimensional points is well known in practice, scientific literature and prior inventions. Patent document WO2020181506A1 accurately describes a method to obtain three-dimensional points from a multitude of images. However, the presented approach results in a large set of so-called feature points. These points can be anything and are detected by a computing device based on contrast differences. If the operator observes a point-of-interest in one of these images, there is no way for the operator to obtain the coordinates of this point if it was not detected as one of the feature points. Hence, the coordinates of this point-of-interest can only be derived using interpolation, which degrades the final precision and reliability of the coordinates of the point.
[0003] A well known method for detecting the feature points is documented in US6711293B1. In this method there is no way for the operator to control the location where feature points are detected.
[0004] The invention of document US11049274B2 describes how photogrammetric systems can be employed to generate three-dimensional structures of objects of which prior knowledge about the shape is available. However, in the built environment, this prior knowledge is only available for a limited set of structures, like building roofs, as described in US20200003554A1. For many other objects, a database with known objects shapes is not available and therefore intervention from the human operator with understanding of the scene in the image is needed.
[0005] One solution is documented in JP2007212187A. In this solution a marker is placed in the field and captured in the image. Recognising this marker in the images can be done automatically. However, this eliminates the possibility to obtain coordinates of objects that were not identified before the image acquisition and of objects that cannot be reached physically.
[0006] Traditionally, these limitations were overcome with a so-called stereoplotter, as documented in US4539701A. By combining two images, taken in the same mathematical plane, with similar scale and looking-direction, an apparent three-dimensional model could be created by projecting one image into the left eye of the operator and the second image into the right eye of the operator. The operator could operate a three-dimensional marker by changing the X- and Y-coordinates of the marker as well as the horizontal parallax, creating the apparent horizontal and vertical movement of the marker within the virtual three-dimensional environment. The disadvantage of this approach is that it only works with specialised hardware to create the stereoscopic display. It also only works on images that were taken from the same mathematical plane and the same viewing direction.
[0007] In order to overcome this limitation, the common workflow employs the manual identification of the same point in a multitude of images by the operator, as described in document US5247356A. This approach is labour-intensive and inconvenient.
[0008] Patent document US10019850B2 demonstrates that it is crucial to have intuitive means to work with complicated spatial data. In this specific document an approach is detailed on how to view data, but methods for measuring three dimensional data accurately are missing.
[0009] The present invention overcomes the limitations of the prior art by introducing a three- dimensional measurement marker which can be moved in three dimensions by the operator in order to perform precise and reliable measurements. The present invention therefore simplifies the measurement and makes accurate three-dimensional measurements possible with just two manual actions, being a click in the image and a change of the height value.
Summary of the invention
[0010] In order to manage cities, infrastructure, large industrial plants and other places in the physical environment (1004), accurate maps are needed. These maps typically are stored in a
Geographic Information System (GIS), where objects are represented by points, lines and polygons. Such maps might contain the location of lamp poles and trees, as well as building footprints and road pavements.
[0011] One common way to collect and update the information in these maps is by collecting images from a calibrated camera (1001). These images can be collected from an air plane, a helicopter, an unmanned aerial vehicle, a moving vehicle, a drone or a waterborne platform (1002).
[0012] The images can be taken in any direction (1007), as long as overlap between these images is guaranteed.
[0013] The overlap between images is determined by the distance between the camera (1001) and terrain (1004), the field of view or opening angle (1006) created by the objective lens (1003) and the distance between images.
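As an illustration of this relation (a minimal sketch, not part of the patent; the function name and the nadir-looking, flat-terrain assumptions are ours):

    # Estimate forward overlap between consecutive nadir images from the
    # flying height, the opening angle and the distance between exposures.
    import math

    def forward_overlap(height_m: float, fov_deg: float, base_m: float) -> float:
        # Ground footprint covered by one image at the given distance to the
        # terrain (1004), from the opening angle (1006) of the objective lens (1003).
        footprint = 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)
        return max(0.0, 1.0 - base_m / footprint)

    # Flying at 300 m with a 60 degree opening angle and 70 m between
    # exposures gives roughly 80% overlap.
    print(f"{forward_overlap(300.0, 60.0, 70.0):.0%}")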
[0014] After data collection, the so-called exterior orientation of the images needs to be obtained. The exterior orientation can be measured directly with a Global Navigation Satellite System / Inertial Measurement System (GNSS/INS) device. It is also possible to compute the exterior orientation retroactively using the so-called bundle block adjustment, which is a photogrammetric technique employed to simultaneously refine the 3D coordinates of object points and the orientation parameters of the images in which they are visible. This iterative method optimizes the predicted image observations with respect to the observed image coordinates, resulting in a best-fit solution.
[0015] It is a requirement that the camera is calibrated. The calibration should contain precise information about the focal length of the objective lens (2005), the horizontal offset of the imaging plane (principal point of autocollimation) relative to the main axis through the objective lens (2006), as well as other details such as lens distortion.
[0016] If such calibration was not performed before the image acquisition, it can be included as an on-the-job calibration within the bundle block adjustment.
[0017] Once the images with exterior orientations are available, the images can be displayed on a computer screen (3001). The operator can identify an area of interest, for instance by entering a street address or the known identifier of a nearby object.
[0018] Using the known exterior orientations of all images as well as the focal distance of the objective lens, the images best representing the desired area can be opened and shown to the operator (3002).
[0019] The operator can now start measuring three dimensional points from this image (1005).
Each three dimensional point contains three coordinates: one for the X-direction of the coordinate system, one for the Y-direction of the coordinate system and one for the Z-direction of the coordinate system.
[0020] Once the operator starts the measurement process, a three dimensional marker is created (1010). The three-dimensional marker is able to move in all three directions of the three dimensional coordinate system.
[0021] The operator identifies a point-of-interest in the image shown to the operator. He or she can do this by clicking in the image on the exact location of said point (3005).
[0022] Due to the two-dimensional nature of images, this individual click is not sufficient to obtain the three-dimensional coordinates for the marker. At this time it is only known that the three-dimensional point is somewhere on the line (1008) defined by the perspective centre (2008) of the camera and the representation of the point on the imaging plane (2011). Hence, the third dimension needs to be estimated using any of the following methods.
[0023] One way to estimate the location of the point on the line (1008) between the perspective centre (2008) and the representation of the point on the imaging plane (2011) is by assuming a constant height of all points identified in the image. The intersection of said line with the horizontal plane defined by this height results in the three-dimensional coordinate.
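A minimal sketch of this constant-height method (not from the patent; the names and values are illustrative):

    import numpy as np

    def intersect_ray_with_height(centre: np.ndarray, point_on_ray: np.ndarray,
                                  height: float) -> np.ndarray:
        # Intersect the line (1008) through the perspective centre (2008) and
        # a second point on the viewing ray with the horizontal plane Z = height.
        direction = point_on_ray - centre
        if abs(direction[2]) < 1e-12:
            raise ValueError("ray is parallel to the horizontal plane")
        t = (height - centre[2]) / direction[2]
        return centre + t * direction

    camera = np.array([0.0, 0.0, 300.0])   # perspective centre of the camera
    ray_pt = np.array([0.3, 0.1, 299.0])   # point on the ray towards the terrain
    print(intersect_ray_with_height(camera, ray_pt, 2.0))  # assumed height of 2 m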
[0024] Another way to estimate the location of the point on the line (1008) between the perspective centre (2008) and the representation of the point on the imaging plane (2011) is by using a height model that was previously collected and made available in the system. The intersection of said line with the height model results in the three-dimensional coordinate.
[0025] Another way to estimate the location of the point on the line (1008) between the perspective centre (2008) and the representation of the point on the imaging plane (2011) is projecting said line in one or more overlapping images. This projection of the line is called the epipolar line. Subsequently, an automated matching approach, such as contrast matching, is used to estimate the same point-of-interest along the one or more projected lines. Once at least one other point is determined using this matching process, the intersection between the two lines (1008, 1009) results in the three-dimensional point. A common way to compute these three coordinates is by means of a so-called forward intersection adjustment.
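The forward intersection itself can be sketched as a least-squares problem over the viewing rays (a simplified illustration, not the patent's adjustment; observation weights are ignored):

    import numpy as np

    def forward_intersection(centres, directions):
        # Point minimising the sum of squared distances to all rays
        # (1008, 1009), each given by an origin and a direction.
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for c, d in zip(centres, directions):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to d
            A += P
            b += P @ c
        return np.linalg.solve(A, b)

    centres = [np.array([0.0, 0.0, 300.0]), np.array([70.0, 0.0, 300.0])]
    directions = [np.array([0.1, 0.05, -1.0]), np.array([-0.13, 0.05, -1.0])]
    print(forward_intersection(centres, directions))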
[0026] Another way to estimate the location of the point on the line (1008) between the perspective centre (2008) and the representation of the point on the imaging plane (2011) is assuming that the height of the point is equal to the height of the previous point that was measured.
[0027] Irrespective of the method chosen to obtain the three coordinates, the results might not be the most accurate possible. The operator is therefore additionally presented with at least one (3003) but possibly more (3004) overlapping images showing the same area of interest (3005, 3006, 3007). The position of the three-dimensional marker, which has now three estimated coordinates for the X-, Y- and Z-direction, is shown in each image that is currently visible to the operator (3008, 3009, 3010).
[0028] If the marker position correctly represents said three-dimensional point-of-interest, the projected marker position will be exactly on top of this point in all images where it is presented.
[0029] The operator now has the option to confirm (3019) the current measurement and store the three-dimensional coordinates in the computer system.
[0030] If the operator is not satisfied with the projection location of the marker in the images, the operator can adapt the marker position by only changing its height. In order to do so, the operator has a multitude of options.
[0031] One option is that the operator uses the scroll wheel of the mouse, optionally in combination with a keyboard short-cut, to change the height value.
[0032] One option is that the operator enters a new height value.
[0033] One option is that the operator uses the plus and minus buttons on the keyboard.
[0034] One option is that the operator uses a slider in a computer window to change the height value (3018).
[0035] One option is that the operator restarts the automatic matching between images (3017).
[0036] While correcting the height value, the three-dimensional marker moves up and down along the line (1008) between the perspective centre (2008) and the representation of the point on the imaging plane (2011) of the first image where the point was identified.
[0037] At any time, the operator can observe the projection of the marker in all images presented to the operator.
[0038] Optionally, the operator can choose to change the selection of images presented, by selecting another set of images from a different perspective (3020, 3021).
[0039] Once the operator is satisfied that the marker correctly represents the point-of-interest in all images presented, the operator can confirm (3019) the new position of the three-dimensional marker.
Brief description of the drawings
[0040] The present invention will be discussed in more detail below, with reference to the attached drawings, in which:
[0041] Fig. 1 depicts the general measurement setup from an airplane taking images in a multitude of directions of an urban environment.
[0042] Fig. 2 depicts the basic pinhole camera model that is used in all computations.
[0043] Fig. 3 depicts an environment where the operator can measure points-of-interest using a virtual three-dimensional marker.
[0044] Fig. 4 depicts the measurement setup that is applicable to panoramic images.
Best mode of the invention
[0045] Cities, infrastructure, industrial plants and other structures in our physical environment (1004) comprise many assets that need to be managed or maintained. In order to do this, it is needed to make an inventory of the location and physical state of these assets. This is typically achieved by creating a two-dimensional or three-dimensional map.
[0046] Besides the means of traditional land survey, such maps are typically better achieved by capturing the environment (1004) with a multitude of images.
[0047] Such images are best taken with a camera (1001) equipped with a sensor (2002), typically a Charge-coupled device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS), with a fixed lens (1003) from an air plane, drone, vehicle, tripod, boat or another platform (1002). The images should be taken at such a distance that the Ground Sampling Distance matches the intended precision of the ground coordinates. The images can be taken in any direction relative to the surface or area of interest (1004). Each image should have an overlap of at least 60% but preferably 80% with at least one other image in the set of images.
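As a worked example of this requirement (illustrative numbers, not from the patent): with pixel pitch $p$, focal length $f$ and distance $H$ to the terrain, the Ground Sampling Distance of a nadir image is approximately

$$\mathrm{GSD} = \frac{p \cdot H}{f}$$

so 4 µm pixels behind a 50 mm lens flown at 300 m give $\mathrm{GSD} = (4 \cdot 10^{-6} \cdot 300)/0.05 = 0.024$ m, i.e. about 2.4 cm, which should match the intended precision of the ground coordinates.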
[0048] All images could be taken in either visible, near-infrared or ultra-violet light. Images taken in visible light can be either recorded and processed as colour images or as grey-scale images.
[0049] The shutter speed of the camera should be as fast as possible and such that any form of motion blur is avoided.
[0050] Optionally, the camera can be equipped with a positioning device, such as a GNSS-receiver, to obtain coordinates during the image acquisition.
[0051] Optionally, the camera can be equipped with a device to measure the attitude of the camera, such as an Inertial Measurement System.
[0052] Optionally, well recognisable ground control points can be placed on the surface or object of interest, where for each point the precise coordinates are fixed or determined. Determination of the coordinates can for instance be achieved by means of traditional land survey using for example a theodolite and/or spirit level instrument. Such points should be spaced at such a distance that a high degree of geodetical reliability is achieved in positioning the images. If no GNSS or INS is used, at least three ground control points are needed, of which a minimum of 7 coordinate values, each indicating the X-, Y- or Z-direction, should be fixed.
[0053] Prior to the survey, the operator of the camera should take care that the relations between the aperture (2003), shutter speed and sensor gain are set in such a way that sharp, non-blurred images are obtained with a high degree of contrast. Said settings should be updated on the camera if the light conditions change significantly. Alternatively, auto-exposure with a minimum shutter speed and a maximum aperture can be employed as well.
[0054] The recorded set of images is then analysed by a central processing unit.
[0055] Within the overlaps of all images, potentially corresponding common points are searched for using a process as described in document US6711293B1. Corresponding common points are natural points that are not signalized in the environment with markers. This results in a multitude of corresponding common points between all images. Due to the nature of this process, up to 80% of these potential corresponding common points could be wrongly identified and therefore constitute outliers.
[0056] The well recognisable ground control points are used to signal a certain recognizable location in the environment. These points can be measured automatically or manually.
[0057] While the images as observed from the camera sensor (2002) correspond to a negative plane, all images are converted to a positive plane (2004). All image coordinates of markers and corresponding common points are stored with respect to said virtual positive plane.
[0058] Based on the computed corresponding common points as well as the optional ground control points, the optional GNSS-observations and the optional INS-observations, approximate values should be computed for the position and orientation of every image as well as the three-dimensional coordinates of the points which are represented in the terrain by the corresponding points and the reference markers.
[0059] Approximate values for the focal length (2005) and the principal point of autocollimation (2006, 2007), the offset between the centre of the image (2008) and the intersection of the optical axis through the negative image plane (2009) or the positive image plane (2010), should be obtained with an a-priori camera calibration. Calibrated values should have a precision of at least 0.01 mm standard deviation.
[0060] With approximate values available for all terrain coordinates as well as the position and orientation of all images, the best unbiased values for these parameters can be estimated using a non-linear least squares adjustment. To do this, the image observations, expressed as row and column values in the images, are converted to x- and y-coordinates in millimetres on the camera sensor relative to the centre of the sensor (2008). Each image coordinate on the negative plane (2011) is optionally converted to a coordinate on the positive plane (2012). Each image coordinate is used as an observation in the least squares adjustment.
[0061] Each image coordinate (2011) relates to a terrain coordinate (2013) and an image position and orientation through the collinearity equations:

$$x = -f \, \frac{r_{11}(X - X^0) + r_{21}(Y - Y^0) + r_{31}(Z - Z^0)}{r_{13}(X - X^0) + r_{23}(Y - Y^0) + r_{33}(Z - Z^0)}$$

$$y = -f \, \frac{r_{12}(X - X^0) + r_{22}(Y - Y^0) + r_{32}(Z - Z^0)}{r_{13}(X - X^0) + r_{23}(Y - Y^0) + r_{33}(Z - Z^0)}$$

In these equations, $(X^0, Y^0, Z^0)$ are the coordinates of the image position while $(X, Y, Z)$ are the three-dimensional terrain coordinates of the point (2013). The elements of the rotation matrix $R$ of the camera attitude are denoted by $r_{ij}$. The focal length of the camera is denoted as $-f$. All these elements are treated as unknown parameters in the least squares solution. The observed image coordinates are denoted by $x$ and $y$, which represent the image pixel position converted to millimetres using the known sensor size.
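The projection can be written compactly with the rotation matrix (a minimal sketch, not the patent's implementation; lens distortion and the principal point offset are omitted here):

    import numpy as np

    def collinearity(point: np.ndarray, centre: np.ndarray,
                     R: np.ndarray, f: float) -> tuple[float, float]:
        # Terrain point (2013) to image coordinates: R^T rotates the difference
        # vector into the camera frame, reproducing the numerators and the
        # denominator of the collinearity equations above.
        d = R.T @ (point - centre)
        return -f * d[0] / d[2], -f * d[1] / d[2]

    R = np.eye(3)                                  # camera looking straight down
    print(collinearity(np.array([10.0, 5.0, 2.0]),
                       np.array([0.0, 0.0, 300.0]), R, f=50.0))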
[0062] The non-linear equations are linearized using a first-order Taylor expansion. This linearization is performed using the approximate values computed before. This results in a linear system of equations expressed as:

$$y = Ax + e$$

with $y$ the vector with observations, $x$ the vector with unknown parameters to be estimated, $A$ the model matrix and $e$ the vector with unknown residuals.
[0063] The matrix $A$ should be stored as a sparse matrix.
[0064] The system of equations is further extended with the terrestrial survey observations using their respective observation equations.
[0065] The system of equations is further extended with observed X, Y and Z coordinates of terrain points from GNSS-observations when available.
[0066] The system of equations is further extended with the calibrated focal distance (2005) and principal point of autocollimation (2006) when available.
[0067] Each observation in the $y$-vector is assigned an a-priori variance. This variance, which is the standard deviation squared, is used as a weight for each observation. The variances of all observations are stored on the diagonal of a variance matrix. In order to improve computational efficiency, the median value of all variances is factored out as $\sigma_0^2$, so that the variance matrix of the observations can be written as:

$$D\{y\} = \sigma_0^2 \, Q_y$$
[0068] For best statistical testing, covariance between observations can optionally be added to the adjustment by entering covariance values in the non-diagonal elements of the $Q_y$ matrix. Covariance is to be assumed between the x- and y-image coordinates for best testing results.
[0069] The variance for all image coordinates should be approximately 1/3 of the physical pixel size on the sensor.
[0070] The variance of the calibrated focal distance (2005) and principal point of autocollimation (2006) should be equal to the variance expressed on the calibration report.
[0071] The variance of land survey and GNSS-observations should be estimated from experimentation before commencing the survey.
[0072] The values for the unknown parameters can now be estimated using:

$$\hat{x} = \left(A^T Q_y^{-1} A\right)^{-1} A^T Q_y^{-1} y$$
[0073] The solution of this equation is computed using Cholesky decomposition for sparse matrices or other suitable solvers for linear systems of equations.
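A sketch of this step (illustrative only; scipy's sparse LU factorisation stands in for the sparse Cholesky the text prescribes, and the toy system is our own):

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def solve_normal_equations(A, Qy_inv, y):
        # x_hat = (A^T Qy^-1 A)^-1 A^T Qy^-1 y, keeping the normal matrix sparse.
        N = (A.T @ Qy_inv @ A).tocsc()
        rhs = A.T @ (Qy_inv @ y)
        return spla.splu(N).solve(rhs)

    A = sp.csr_matrix(np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]]))
    Qy_inv = sp.identity(4, format="csr")   # unit weights for the example
    y = np.array([1.0, 2.0, 3.1, -0.9])
    print(solve_normal_equations(A, Qy_inv, y))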
[0074] An iteration threshold is predefined at which the improvement in the non-linear least squares iteration is small enough to assume that the adjustment has converged and the result does not significantly improve any more.
[0075] Due to the linearization with approximate values, the solution needs to be iterated with the outcome of one computation as approximate values for the next iteration. Iteration should continue until no improvement larger than the iteration threshold is obtained in the parameters.
[0076] After completing the final iteration, the $Q_{\hat{x}}$ matrix is computed as:

$$Q_{\hat{x}} = \left(A^T Q_y^{-1} A\right)^{-1}$$

The $Q_{\hat{x}}$ matrix contains the a-posteriori variances and covariances of all computed unknown parameters. In essence this means that this matrix contains the variance for all coordinates as well as the correlation between the parameters. This is crucial information for assessing and testing the significance of a deformation later on.
[0077] While the $(A^T Q_y^{-1} A)$ matrix is typically sparse, the $Q_{\hat{x}}$ matrix generally is not. Due to the large number of parameters, the $Q_{\hat{x}}$ matrix will not normally fit in the memory of a consumer-grade computer. Therefore, a sparse Cholesky decomposition is computed first. From the Cholesky decomposition each column is computed independently. This process can be performed in parallel using multiple cores on a multi-core machine.
[0078] On larger systems, the size of $Q_{\hat{x}}$ can become too large, meaning that storage in memory is no longer possible. As the matrix is symmetric, only one triangle needs to be stored. This should be done in a storage structure that does not require a continuous block of memory, but individual memory blocks per column.
[0079] The matrix with variances and covariances of the adjusted observations is computed as
$$Q_{\hat{y}} = A Q_{\hat{x}} A^T$$

This matrix is by definition even larger than $Q_{\hat{x}}$. However, only storage of the elements on the main diagonal is needed. The computation should proceed in a nested loop for both matrix multiplications to avoid excessive delay when computing large systems. The nested loop runs as shown in the pseudo code below. In this code, m represents the number of observations. RowP is the array of row pointers of the sparse matrix A: each element gives the index into the array Values of the first non-zero element of a row. The array ColumnInd contains the column indices of the non-zero values; its elements correspond to the elements of the array Values. The Values array contains the actual values of the matrix A.

    for (i = 0; i < m; i++) {
        v_Aii = 0;
        for (j = RowP[i]; j < RowP[i+1]; j++) {
            v_AQij = 0;
            for (k = RowP[i]; k < RowP[i+1]; k++) {
                // accumulate (A * Q)[i, ColumnInd[j]], using the symmetry of Q
                v_AQij += Values[k] * Q[ColumnInd[j]][ColumnInd[k]];
            }
            v_Aii += v_AQij * Values[j];
        }
        diagonal[i] = v_Aii;   // i-th diagonal element of A * Q * A^T
    }
[0080] An F-test is performed on the computed results and stored in a log file. With $m$ the number of observations and $n$ the number of unknowns, the F-test is computed from the residuals as:

$$\hat{e} = y - A\hat{x}, \qquad F = \frac{\hat{e}^T Q_y^{-1} \hat{e}}{m - n}$$
[0081] For each observation a w-test is performed using the following equations:

$$\hat{e} = y - A\hat{x}$$

$$Q_{\hat{e}} = Q_y - Q_{\hat{y}}$$

$$w_i = \frac{\hat{e}_i}{s_i}$$

where $\hat{e}_i$ is the i-th element of the vector $\hat{e}$ and $s_i$ the square root of the i-th element on the main diagonal of the matrix $Q_{\hat{e}}$.
[0082] The w-test threshold value can be determined from the standard normal distribution function. Assuming an unreliability factor of 0.1%, the threshold value computed from this distribution is 3.29. Any other threshold can be applied as seen fit for the application.
[0083] If a w-test value is larger than the threshold, the observation is rejected, most probably due to a measurement error.
[0084] The observation with the largest w-test is eliminated from the observation vector after which the entire Bundle Block Adjustment is repeated. This process, called data snooping, is iterated until no other observations with a w-test larger than a certain threshold are present in the data.
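The w-test and data snooping loop can be sketched on a dense toy system (illustrative only; a real bundle block adjustment would recompute the sparse solution of paragraph [0073] at every pass):

    import numpy as np

    def data_snooping(A, Qy, y, threshold=3.29):
        # Repeat the adjustment, rejecting the observation with the largest
        # w-test, until all w-test values are below the threshold.
        keep = np.arange(len(y))
        while True:
            Ak, yk, Qk = A[keep], y[keep], Qy[np.ix_(keep, keep)]
            Qk_inv = np.linalg.inv(Qk)
            Qx = np.linalg.inv(Ak.T @ Qk_inv @ Ak)
            x_hat = Qx @ Ak.T @ Qk_inv @ yk
            e_hat = yk - Ak @ x_hat
            Qe = Qk - Ak @ Qx @ Ak.T          # variances of the residuals
            w = np.abs(e_hat) / np.sqrt(np.diag(Qe))
            if w.max() <= threshold:
                return x_hat, keep
            keep = np.delete(keep, int(np.argmax(w)))

    A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
    Qy = np.eye(5)
    y = np.array([1.0, 2.0, 3.0, -1.0, 9.0])  # last observation is a blunder
    print(data_snooping(A, Qy, y))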
[0085] For large systems a greater quantity of rejected observations than one can optionally be eliminated.
[0086] The process described above is repeated in its entirety for each subsequent epoch. The results for each epoch are stored in a database.
[0087] The result of this process is a sparse three-dimensional point cloud and a set of exterior orientations per image, which comprises the position and attitude per image.
[0088] Once the images with exterior orientations are available, the images can be displayed on a computer screen. A virtual 3D environment can be created where every image is shown at the position and with the attitude of the exterior orientation relative to a base map for reference.
[0089] The operator can select an image by immediately clicking on the image.
[0090] Alternatively, the operator can identify an area of interest, for instance by entering a street address or the known identifier of a nearby object. Such address or identifier can be converted to a set of coordinates using reverse geocoding based on a database.
[0091] Using the known exterior orientations of all images as well as the focal distance of the objective lens, the images best representing the desired area can be opened and shown to the operator.
[0092] The operator can now start measuring three dimensional points from this image. Each three dimensional point contains three coordinates: one for the X-direction of the coordinate system, one for the Y-direction of the coordinate system and one for the Z-direction of the coordinate system.
[0093] Once the operator starts the measurement process, a three dimensional marker is created. The three-dimensional marker is able to move in all three directions of the three dimensional coordinate system.
[0094] The operator identifies the point-of-interest in the image shown to the operator. He or she can do this by clicking in the image on the exact location of said point. Based on the click on the image, the exact position of the click relative to the image coordinate system is determined, taking the principal point of autocollimation into account. This image point location is stored in memory, together with the colour, contrast and brightness information of the surrounding pixels.
[0095] Due to the two-dimensional nature of images, this individual click is not sufficient to obtain the three-dimensional coordinates for the marker. At this time it is only known that the three-dimensional point (2013) is somewhere on the line (1008) defined by the perspective centre (2008) of the camera and the representation of the point on the imaging plane (2011). Hence, the third dimension needs to be estimated using any of the following methods.
[0096] One way to estimate the location of the point on the line (1008) between the perspective centre (2008) and the representation of the point on the imaging plane (2011) is by assuming a constant height of all points identified in the image. The intersection of said line with the horizontal plane defined by this height results in the three-dimensional coordinate.
[0097] Another way to estimate the location of the point on the line (1008) between the perspective centre (2008) and the representation of the point on the imaging plane (2011) is by using a height model that was previously collected and made available in the system. The intersection of said line with the height model results in the three-dimensional coordinate.
[0098] Another way to estimate the location of the point on the line (1008) between the perspective centre (2008) and the representation of the point on the imaging plane (2011) is projecting said line in one or more overlapping images. This projection of the line is called the epipolar line.
[0099] In order to achieve this, a template is created from the selected point. This template can have any size, but presently a size of 25 by 25 pixels appears ideal. In every image where the epipolar line is visible, template matching is performed to find the areas matching the template.
Different modes can be used to determine the match of the template, but normalised cross correlation typically works well. The best matching location per image is returned as a candidate for the corresponding point.
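A sketch with OpenCV (illustrative; the file names and pixel location are hypothetical, and in practice the search would be restricted to a band around the epipolar line):

    import cv2
    import numpy as np

    def best_match(search_img: np.ndarray, template: np.ndarray):
        # Normalised cross-correlation over the search image; returns the
        # top-left corner and score of the best match.
        scores = cv2.matchTemplate(search_img, template, cv2.TM_CCORR_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(scores)
        return max_loc, max_val

    first = cv2.imread("image_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
    second = cv2.imread("image_b.png", cv2.IMREAD_GRAYSCALE)
    r, c = 412, 633                              # clicked pixel in the first image
    template = first[r - 12:r + 13, c - 12:c + 13]   # 25 x 25 template
    print(best_match(second, template))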
[0100] A forward intersection is performed based on the positions found in the template matching, where the matched positions are used as image observations and the exterior orientations as well as the camera calibration remain fixed or added as observations as well.
[0101] Using the forward intersection, all matched positions are tested using the w-test. If the w-test is rejected for one or more observations, the observation with the largest w-test is removed from the set. This continues until all w-tests are accepted and a ground coordinate is computed from the matched template locations.
[0102] The resulting coordinate can be improved further by getting a better match between the initially selected position and the matching areas in the other images.
[0103] One such method involves taking the image where the point was initially identified by the operator, which will be represented as g(y) for the purpose of this description, and taking one of the images where an initial template match was achieved. This second image will be identified by h(z). The best match between these two images can now be identified by least-squares matching.
[0104] The optimal result of a least squares match involves the creation of a symmetric match. This means that both images g(y) and h(z) are assumed to be noisy representations of an unknown underlying signal f(x). Such a method is for instance published by W. Förstner in 2020 (Symmetric Least Squares Matching, Sym-LSM). Once at least one other point is determined using this matching process, the intersection between the two lines results in the three-dimensional point. A common way to compute these three coordinates is by means of a so-called forward intersection adjustment.
[0105] Another way to estimate the location of the point on the line between the perspective centre (2008) and the representation of the point on the imaging plane (2011) is assuming that the height of the point is equal to the height of the previous point that was measured.
[0106] Irrespective of the method chosen to obtain the three coordinates, the results might not be the most accurate possible. The operator is therefore additionally presented with at least one but possibly more overlapping images. The position of the three-dimensional marker, which now has three estimated coordinates for the X-, Y- and Z-direction, is shown in each image that is currently visible to the operator. In order to do that, the point is back-projected into each image, using the exterior orientation as well as the focal length, principal point of autocollimation and lens distortion if present. This projection is performed using the following equations:

$$x_m = -f \, \frac{r_{11}(X_m - X^0) + r_{21}(Y_m - Y^0) + r_{31}(Z_m - Z^0)}{r_{13}(X_m - X^0) + r_{23}(Y_m - Y^0) + r_{33}(Z_m - Z^0)} + x_0$$

$$y_m = -f \, \frac{r_{12}(X_m - X^0) + r_{22}(Y_m - Y^0) + r_{32}(Z_m - Z^0)}{r_{13}(X_m - X^0) + r_{23}(Y_m - Y^0) + r_{33}(Z_m - Z^0)} + y_0$$

In these equations, $(X^0, Y^0, Z^0)$ are the coordinates of the image position. The elements of the rotation matrix $R$ of the camera attitude are denoted by $r_{ij}$. The matrix $R$ in combination with $(X^0, Y^0, Z^0)$ constitutes the exterior orientation. The focal length of the camera is denoted as $-f$ and the principal point as $x_0$ and $y_0$. The virtual marker position is denoted as $(X_m, Y_m, Z_m)$. Using these equations the representation of the marker in each image, denoted as $(x_m, y_m)$, can be computed.
[0107] If the marker position correctly represents that three-dimensional point-of-interest, the projected marker position will be exactly on top of this point in all images where it is presented, which can now visually be confirmed by the operator. In order to double-check this observation, the operator has the option to zoom-in, zoom-out and pan on the image, as well as the option to open additional images where the marker location can be confirmed.
[0108] For further inspection, the operator has the option to change the contrast, brightness and colour settings individually for each image currently on display.
[0109] The operator now has the option to confirm the current measurement and store the three- dimensional coordinates of the virtual marker in the computer system.
[0110] If the operator is not satisfied with the projection location of the marker in the images, the operator can adapt the marker position by only changing its height. In order to do so, the operator has a multitude of options. One option is that the operator uses the scroll wheel of the mouse, optionally in combination with a keyboard short-cut, to change the height value. Another option is that the operator enters a new height value. Alternatively, the operator uses the plus and minus buttons on the keyboard. Another alternative is using a slider in a computer window to change the height value.
[0111] While correcting the height value, the three-dimensional marker moves up and down along the line between the perspective centre (2008) and the representation of the point on the imaging plane (2011) of the first image where the point was identified. This means that with a changing height value, the X- and Y-values of the virtual marker change as well. At every change, the representation of the marker in all available images is recomputed. Hence, at any time, the operator can observe the projection of the marker in all images presented to the operator.
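A minimal sketch of this movement (not from the patent; names and values are illustrative):

    import numpy as np

    def marker_at_height(centre: np.ndarray, marker: np.ndarray,
                         new_height: float) -> np.ndarray:
        # Slide the marker along the ray through the perspective centre (2008)
        # and the current marker position until its Z equals new_height;
        # X and Y change along with Z, as described above.
        direction = marker - centre
        t = (new_height - centre[2]) / direction[2]
        return centre + t * direction

    centre = np.array([0.0, 0.0, 300.0])   # perspective centre of the first image
    marker = np.array([29.8, 14.9, 2.0])   # current virtual marker position
    print(marker_at_height(centre, marker, 4.5))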
[0112] Once the operator is satisfied that the marker correctly represents the point-of-interest in all images presented, the operator can confirm the new position of the three-dimensional marker.
These coordinates will now be attributed to the point position, which can subsequently be sent to a GIS-system.
Description of further embodiments
[0113] In another embodiment of the system, the display integrates a stereographic or holographic display for three-dimensional observation by the operator.
[0114] In another embodiment of the system, the system operates on panoramic or spherical images, where these images are not represented by a perspective projection but in a central spherical projection. In this embodiment the images are projected on a sphere (4001) at a chosen distance (4002) around a centre point (4003). Every point in this image can be represented by a horizontal direction h (4004) and a vertical angle v (4005). Similarly to the use with perspective images, a three-dimensional point (4006) can be estimated by finding the intersection of the line between the centre point and the representation of that point on the sphere with the corresponding line from at least one other panoramic image (4007). The virtual marker (4008) is shown in the panoramic images by projecting the actual location of the marker on the sphere (4009).
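The spherical mapping can be sketched as follows (illustrative; angle conventions vary and these are our assumptions):

    import numpy as np

    def to_angles(point: np.ndarray, centre: np.ndarray):
        # Horizontal direction h (4004) and vertical angle v (4005) of a
        # point as seen from the panorama centre (4003).
        d = point - centre
        h = np.arctan2(d[1], d[0])
        v = np.arctan2(d[2], np.hypot(d[0], d[1]))
        return h, v

    def on_sphere(h, v, centre, radius):
        # Projection of the direction (h, v) onto the sphere (4001) of the
        # chosen radius (4002), e.g. to draw the virtual marker (4009).
        return centre + radius * np.array([np.cos(v) * np.cos(h),
                                           np.cos(v) * np.sin(h),
                                           np.sin(v)])

    centre = np.array([0.0, 0.0, 2.0])
    h, v = to_angles(np.array([10.0, 5.0, 4.0]), centre)
    print(h, v, on_sphere(h, v, centre, radius=1.0))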
[0115] In another embodiment of the system, the operator does not adapt the height of the virtual marker, but instead changes the distance between the marker and the camera of one selected image.
[0116] List of reference signs

1001 = Camera
1002 = Observation platform
1003 = Objective lens
1004 = Terrain
1005 = Three-dimensional point-of-interest
1006 = Opening angle of the camera
1007 = Forward motion of the platform
1008 = Line between perspective centre of the first image and the point-of-interest
1009 = Line between perspective centre of a second image and the point-of-interest
1010 = Position of the virtual marker
2001 = Camera housing
2002 = Image sensor
2003 = Aperture
2004 = Mathematical positive plane
2005 = Focal length
2006 = Principal point of autocollimation in negative plane
2007 = Principal point of autocollimation in positive plane
2008 = Perspective centre of the camera
2009 = Perspective centre of the camera projected on the negative plane
2010 = Perspective centre of the camera projected on the positive plane
2011 = Point P projected on the negative plane
2012 = Point P projected on the positive plane
2013 = Point P in the terrain as a three-dimensional point
3001 = Computer display
3002 = Area in the computer display to show the first image
3003 = Area in the computer display to show a second image
3004 = Area in the computer display to show a third image
3005 = Object of interest captured in the first image
3006 = Object of interest captured in a second image
3007 = Object of interest captured in a third image
3008 = Point-of-interest captured in the first image
3009 = Point-of-interest captured in a second image
3010 = Point-of-interest captured in a third image
3011 = Clickable area to zoom in on the first image
3012 = Clickable area to zoom out on the first image
3013 = Clickable area to zoom in on the second image
3014 = Clickable area to zoom out on the second image
3015 = Clickable area to zoom in on the third image
3016 = Clickable area to zoom out on the third image
3017 = Clickable area to initiate automatic template matching
3018 = Slider to control the height of the virtual marker
3019 = Clickable area to confirm the current position of the virtual marker
3020 = Clickable area to choose an image with another perspective from the left
3021 = Clickable area to choose an image with another perspective from the right
4001 = Sphere on which the panoramic image is projected
4002 = Radius from centre point to the sphere
4003 = Centre point of the panoramic image
4004 = Horizontal direction to the point-of-interest
4005 = Vertical angle to the point-of-interest
4006 = Three-dimensional location of the point-of-interest
4007 = A second panoramic image in which the point-of-interest is visible
4008 = Location of the virtual marker
4009 = Projection of the virtual marker on the panoramic image

Claims (9)

1. An apparatus for measuring three-dimensional points-of-interest (1005) from a plurality of overlapping images taken by an image recording device (1001) attached to a mounting device; the image recording device having a housing (2001); the image recording device further having an objective lens (1003) attached to the housing; the objective lens having an aperture (2003); the image recording device including an image sensor (2002, 7002) disposed in the housing; the image recording device further having a processing unit (7003) for processing image data from the image sensor; a storage device connected to the image recording device for storing image data; a computing device; said computing device being arranged to communicate with a memory storing a computer program comprising instructions and data, which computer program can be executed by the computing device; said apparatus being adapted to record image data of areas to be monitored, optionally the apparatus further comprising a detection unit, the detection unit comprising at least an accelerometer or inertial measurement unit for measuring the attitude of the imaging device, and a GNSS receiver for measuring the position of the imaging device.

2. A method for measuring three-dimensional points with the apparatus according to claim 1, comprising the steps of: capturing a plurality of images of an area to be surveyed, storing the image data of said plurality of images in a storage device, and processing said image data to obtain geographic information data for said area.

3. The method for measuring three-dimensional points according to claim 2, further comprising the step of providing the operator with a display (3001) to select a point-of-interest (3008) in one image and the step of detecting the corresponding points in at least one (3009) but possibly more (3010) overlapping images.

4. The method for measuring three-dimensional points according to claim 2 or claim 3, further comprising the step of presenting the operator with means to improve the quality of the measurement, by providing a virtual three-dimensional marker that represents the coordinates to be measured, where this virtual three-dimensional marker can be moved by the operator through the image in three directions, being the X-direction, the Y-direction and the Z-direction of the coordinate system.

5. The method for measuring three-dimensional points according to claim 4, wherein moving the virtual three-dimensional marker is facilitated by providing the operator with a means to identify a point-of-interest in one image and a means to change the height of that point, wherein the height value given by the operator is transformed into a distance between the aperture of the first camera and the new location of the virtual three-dimensional marker.

6. The method for measuring three-dimensional points according to claim 5, wherein the operator is further assisted by providing a slider, operable by mouse or keyboard, to quickly change the height value of the virtual three-dimensional marker.

7. The method for measuring three-dimensional points according to claims 2 to 6, wherein the images are not regular perspective images but panoramic images in a central spherical projection, in which the images are projected onto a sphere (4001) at a chosen distance (4002) around a centre point (4003), a three-dimensional point (4006) can be estimated by finding the intersection of the line between the centre point and the representation of that point on the sphere with the corresponding line from at least one other panoramic image (4007), and the virtual marker (4008) is shown in the panoramic images by projecting the actual location of the marker onto the sphere (4009).

8. The method for measuring three-dimensional points according to claims 4 to 7, wherein the virtual three-dimensional marker is not moved by changing the height value but instead by changing the distance between the camera of one image and the desired location of the virtual three-dimensional marker, from which the position of the marker is then computed, which can subsequently be back-projected into the images.

9. The method for measuring three-dimensional points according to claims 4 to 8, wherein the virtual three-dimensional marker is presented to the operator via a stereoscopic display, the operator perceiving a stereo environment by projecting two overlapping images into the left and right eye respectively and projecting the marker location in each image, thereby creating the apparent three-dimensional movement of the marker for the operator.
NL2035685A 2023-08-28 2023-08-28 Method for determining accurate coordinates from images with human supervision using a virtual three-dimensional marker NL2035685B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
NL2035685A NL2035685B1 (en) 2023-08-28 2023-08-28 Method for determining accurate coordinates from images with human supervision using a virtual three-dimensional marker


Publications (1)

Publication Number Publication Date
NL2035685B1 true NL2035685B1 (en) 2025-03-11

Family

ID=90097727


Country Status (1)

Country Link
NL (1) NL2035685B1 (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4539701A (en) 1981-10-01 1985-09-03 The Commonwealth Of Australia Photogrammetric stereoplotter
US5247356A (en) 1992-02-14 1993-09-21 Ciampa John A Method and apparatus for mapping and measuring land
US6711293B1 (en) 1999-03-08 2004-03-23 The University Of British Columbia Method and apparatus for identifying scale invariant features in an image and use of same for locating an object in an image
JP2007212187A (en) 2006-02-07 2007-08-23 Mitsubishi Electric Corp Stereo photo measuring device, stereo photo measuring method, and stereo photo measuring program
US20110007948A1 (en) * 2004-04-02 2011-01-13 The Boeing Company System and method for automatic stereo measurement of a point of interest in a scene
US20120062730A1 (en) * 2009-05-19 2012-03-15 Novatel Inc. Aerial camera system and method for correcting distortions in an aerial photograph
US10019850B2 (en) 2013-05-31 2018-07-10 Apple Inc. Adjusting location indicator in 3D maps
US20200003554A1 (en) 2009-05-22 2020-01-02 Pictometry International Corp. System and process for roof measurement using imagery
US20200273150A1 (en) * 2018-07-16 2020-08-27 Mapbox, Inc. Displaying oblique imagery
WO2020181506A1 (en) 2019-03-12 2020-09-17 深圳市大疆创新科技有限公司 Image processing method, apparatus and system
US11049274B2 (en) 2016-11-22 2021-06-29 Lego A/S System for acquiring a 3D digital representation of a physical object

