NL2036195B1 - A method of and device for selecting a subset of photographic data items from a set of photographic data items
A method of and device for selecting a subset of photographic data items from a set of photographic data items
- Publication number
- NL2036195B1
- Authority
- NL
- Netherlands
- Prior art keywords
- photographic data
- data items
- photographic
- metadata
- point
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/51—Indexing; Data structures therefor; Storage structures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
A method of selecting a subset of photographic data items from a set of photographic data items is disclosed. The method is performed by a processor and comprises the steps of: creating a metadata database by cataloguing metadata related to each photographic data item of the set of photographic data items; obtaining a point of interest representing a target to be located, the target being present in a number of photographic data items of the set of photographic data items; selecting a subset of photographic data items from the set of photographic data items based on calculations performed using the metadata in the metadata database, wherein each of the selected photographic data items falls within a geographic boundary defined with reference to the point of interest; and outputting file paths of and metadata related to the selected subset of photographic data items. Unlocking insights from Geo-Data, the present invention further relates to improvements in sustainability and environmental developments: together we create a safe and liveable world.
Description
A METHOD OF AND DEVICE FOR SELECTING A SUBSET OF PHOTOGRAPHIC
DATA ITEMS FROM A SET OF PHOTOGRAPHIC DATA ITEMS
[0001] The present disclosure generally relates to the field of visual survey, and more specifically to a method of selecting a subset of photographic data items from a set of photographic data items and a device for performing the method. Unlocking insights from Geo-Data, the present invention further relates to improvements in sustainability and environmental developments: together we create a safe and liveable world.
[0002] Visual survey, also known as visual inspection or visual assessment, is a method of data collection that involves directly observing and visually evaluating a specific area, object, or phenomenon. It is a fundamental technique used in various fields, including environmental monitoring, engineering, architecture, urban planning, wildlife studies, and more. The primary goal of a visual survey is to gather information and record observations without the need for complex equipment or instruments.
[0003] Visual surveys are quickly becoming a method of choice for many site characterization and asset integrity surveys. In these applications, geospatial photographs are obtained from various sources, including satellites, drones, aerial imagery, or ground-based cameras. It is possible that an application may involve an extremely large number, such as millions, of geospatial photographs.
[0004] Managing a large number of overlapping geospatial photographs is a complex and challenging task that often involves geospatial data processing and management. To facilitate automated on-demand data products such as 3D point clouds, photo mosaics, pseudo-video, and data metrics, it is usually required that photographs containing the object or information of interest are identified first, which involves processing and analysing the large volume of photographs to select or determine the relevant ones. After that, automated processing techniques, including for example 3D reconstruction, may be used to generate 3D point clouds based on the selected photographs, so that further study may be conducted.
[0005] Currently, selection or identification of photographs of interest from a large group of collected photographs is normally done by an expert or specialist with hands-on knowledge about the images to be processed. The selection process is especially costly in terms of time and human resources when the volume of images to be screened and selected is large.
[0006] In consideration of the above, it is desirable to have a method of selecting a smaller number of photographs from a large volume of photographs that improves the efficiency of the selection.
[0007] According to one aspect of the present disclosure, there is presented a method of selecting a subset of photographic data items from a set of photographic data items, the method being performed by a processor and comprising the steps of:
[0008] - creating a metadata database by cataloguing metadata related to each photographic data item of the set of photographic data items;
[0009] - obtaining a point of interest representing a target to be located, the target being present in a number of photographic data items of the set of photographic data items;
[0010] - selecting a subset of photographic data items from the set of photographic data items based on calculation(s) performed using information in the metadata database, wherein each of the selected photographic data items falls within a geographic boundary defined with reference to the point of interest, and
[0011] - outputting file paths of and metadata related to the selected subset of photographic data items.
[0012] Based on the insight of the inventors of the present disclosure, selection of a subset of photographic data items with a smaller number of photographs from a set of photographic data items with a (much) larger number of photographs can be performed in a more efficient way. This is realised by performing the selection based on a designated point of interest and a metadata database comprising properly catalogued metadata related to the set of photographic data items.
[0013] With the designated point of interest and the metadata database, the selection is performed automatically based on calculations performed based on information in the metadata database and a geographic boundary defined with reference to the point of interest. The suitably
defined geographic boundary ensures that the selected photographic data items comprise one or more objects or targets which the related visual survey looks for.
[0014] Based on the above method of the present disclosure, the selection of the photographic data items of interest can be performed in a much shorter time period and with little human intervention. The method therefore allows the selection of a subset of photographs from a large volume of photographs to be performed with improved efficiency in terms of both time and human resources.
[0015] In an example of the present disclosure, the method further comprises the following step prior to the creating step:
[0016] - translating metadata of one or more photographic data items into a designated catalogue format.
[0017] As can be understood by those skilled in the art, metadata of photographic data items obtained using different cameras may be in different formats. To allow easy cataloguing of the metadata of the large volume of photographic data items, the translation step is performed so that a uniform catalogue format is maintained for the metadata of all photographic data items, which facilitates the subsequent cataloguing step.
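By way of a non-limiting illustration only, the following Python sketch shows one conceivable form of such a translation step, in which source-specific metadata is mapped onto a uniform catalogue record; the source names, key names and field names used here are assumptions made for this example and are not prescribed by the present disclosure.

```python
# Illustrative sketch: translate per-source metadata dictionaries into one
# uniform catalogue record. Source and key names are assumptions for this example.

from datetime import datetime, timezone

def translate_to_catalogue_format(raw: dict, source: str) -> dict:
    """Map source-specific metadata keys onto a uniform catalogue record."""
    if source == "rov":                      # e.g. ROV stills with EXIF-like keys (assumed)
        return {
            "path": raw["file_path"],
            "timestamp": raw["DateTimeOriginal"],
            "lat": float(raw["GPSLatitude"]),
            "lon": float(raw["GPSLongitude"]),
            "altitude": float(raw.get("GPSAltitude", 0.0)),
        }
    if source == "drone_csv":                # e.g. a flight log exported as CSV rows (assumed)
        return {
            "path": raw["image"],
            "timestamp": datetime.fromtimestamp(int(raw["epoch"]), tz=timezone.utc).isoformat(),
            "lat": float(raw["latitude"]),
            "lon": float(raw["longitude"]),
            "altitude": float(raw.get("alt_m", 0.0)),
        }
    raise ValueError(f"no ingest driver registered for source '{source}'")
```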
[0018] In an example of the present disclosure, the metadata related to each photographic data item comprises:
[0019] - a timestamp of the photographic data item; and
[0020] - geographic positioning information of the photographic data item.
[0021] The timestamp of each photographic data item indicates when the photograph was captured. This information is used to filter the photographic data items based on time-related criteria such that the selected photographic data items fall within a defined time period. As for the geographic positioning information, it is used to calculate a distance to the point of interest or to decide whether a photograph falls within the defined geographic boundary.
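Purely as an illustrative sketch, the catalogued metadata and a time-based filter could be represented as follows in Python; the field names are assumptions made for this example only.

```python
# Illustrative metadata record and a time-window filter over catalogued items.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class PhotoMetadata:
    path: str            # file path of the photographic data item
    timestamp: datetime  # when the photograph was captured
    lat: float           # latitude of the photo centre point (decimal degrees)
    lon: float           # longitude of the photo centre point (decimal degrees)

def within_time_window(items, start: datetime, end: datetime):
    """Keep only items whose timestamp falls within [start, end]."""
    return [item for item in items if start <= item.timestamp <= end]
```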
[0022] In an example of the present disclosure, the obtaining step comprises receiving, from a user, geographic centre coordinates associated with the point of interest.
[0023] In this case, a user such as an expert supervising the visual survey can decide geographic coordinates related to an object or target of interest and have the geographic coordinates input to the processor running the method of the present disclosure. This is a straightforward and simple process and takes little effort from the user.
[0024] In another example of the present disclosure, the obtaining step comprises:
[0025] - receiving, from a user, a photographic data item selected by the user;
[0026] - determining geographic position of the selected photographic data item;
[0027] - using the determined geographic position as the point of interest.
[0028] As an alternative way, the user can select or designate a photograph comprising the object or target of interest and have the same input to a computer system running the method of the present disclosure. The geographic position related to the selected photograph can be determined using the metadata database, which will be used as the point of interest for subsequent steps.
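One conceivable implementation of this lookup, assuming for illustration that the catalogue is held as a mapping from file name to catalogued metadata, is sketched below.

```python
# Illustrative lookup: resolve the point of interest from a user-selected photo
# by matching its file name against the metadata catalogue.

import os

def point_of_interest_from_photo(selected_path: str, catalogue: dict) -> tuple:
    """Return (lat, lon) of the selected photo, taken from the catalogue.

    `catalogue` is assumed to map file names to records with 'lat'/'lon' keys.
    """
    record = catalogue[os.path.basename(selected_path)]
    return (record["lat"], record["lon"])
```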
[0029] In an example of the present disclosure, the selecting step comprises the steps of:
[0030] - obtaining a photographic proximity threshold;
[0031] - calculating a geographic distance between each of the set of photographic data items and the point of interest based on the metadata of the photographic data items;
[0032] - selecting photographic data items having a geographic distance smaller than the photographic proximity threshold.
[0033] In this example, the geographic boundary is defined as a sphere surrounding the point of interest, as any point on the surface of the sphere is at the same distance from the centre of the sphere, that is, the point of interest. Any photographic data item falling within the sphere is considered to be of interest to the visual survey and is therefore selected. This is realised by calculating the geographic distance between each photographic data item and the point of interest, using the metadata including the coordinates of the photographic data item and those of the point of interest. The calculation is done automatically by a processing device and realised with high efficiency.
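A minimal sketch of such a distance-based selection, assuming coordinates in decimal degrees and using the great-circle (haversine) distance as the geographic distance measure, is given below; other distance formulas could equally be used.

```python
# Illustrative proximity selection: great-circle (haversine) distance between
# each catalogued photo centre and the point of interest, compared to a threshold.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (degrees)."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def select_within_threshold(items, poi_lat, poi_lon, threshold_m):
    """Select catalogued items whose centre lies within threshold_m of the POI."""
    return [item for item in items
            if haversine_m(item["lat"], item["lon"], poi_lat, poi_lon) <= threshold_m]
```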
[0034] In another example of the present disclosure, the selecting step comprises the steps of:
[0035] - defining an area enclosing the point of interest by specifying a series of latitude and longitude points;
[0036] - selecting photographic data items with geographic coordinates falling within the defined area.
[0037] Instead of defining a sphere surrounding the point of interest by specifying a photographic proximity threshold, a spatial area may be defined by a series of latitude and longitude points. This allows the defined area to have a more adaptive or flexible shape. This approach is especially advantageous when combined with the expertise of the user, as a properly defined area can reduce the computational resources required.
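One common way to test whether a photo centre falls within such an area is the even-odd (ray-casting) rule; the sketch below is illustrative only and treats the latitude and longitude values as planar coordinates, which is a reasonable approximation for areas of limited extent.

```python
# Illustrative area selection: even-odd (ray casting) point-in-polygon test
# applied to the photo centre coordinates, treating lat/lon as planar values.

def point_in_polygon(lat, lon, polygon):
    """Return True if (lat, lon) lies inside `polygon`, a list of (lat, lon) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Count crossings of an edge by a ray cast from the point in the +lon direction.
        if (lat1 > lat) != (lat2 > lat):
            lon_cross = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < lon_cross:
                inside = not inside
    return inside

def select_within_area(items, polygon):
    """Select catalogued items whose centre falls within the defined area."""
    return [item for item in items if point_in_polygon(item["lat"], item["lon"], polygon)]
```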
[0038] In an example of the present disclosure, the photographic proximity threshold, or the series of latitude and longitude points are received from a user, and optionally adaptable based on a specific application.
[0039] The easiest way of defining the geographic boundary for selecting the photographic data items of interest is by having a user define an area, for example by specifying the photographic proximity threshold or inputting the latitude and longitude points through a user interface. It is also possible that a future development using machine/computer vision would perform this task much faster, after training datasets are created.
[0040] As can be contemplated by those skilled in the art, the photographic proximity threshold, or the series of latitude and longitude points used for defining the geographic boundary, may also be adapted where necessary, such that the visual survey may focus on, for example, targets of varied size.
[0041] In an example of the present disclosure, the outputting step comprises outputting the file paths of and the metadata related to the selected subset of photographic data items to a further processing module or software.
[0042] The further processing module or software may comprise for example a photogrammetry processing software for product generation, which is used to create representations for subsequent analysis by for example a user such as an expert specialized in a certain technical area.
[0043] As an example, the method described above further comprises the steps of creating at least one of 3D point clouds, photomosaics, videos and report based on the selected subset of photographic data items by the further processing module or software.
[0044] In an example of the present disclosure, the photographic data items are obtained via at least one of drone or airborne photogrammetry, road or rail photogrammetry, and remotely operated vehicle- or autonomous underwater vehicle-based subsea asset inspection.
[0045] As can be contemplated by those skilled in the art, the method of the present disclosure is not limited to one type of photograph obtained via a specific technology; instead, photographs obtained by a variety of different technologies can be processed collectively using the method of the present disclosure. This allows more insight to be derived from the related visual survey projects.
[0046] In a second aspect of the present disclosure, there is presented a device for selecting a subset of photographic data items from a set of photographic data items, the device comprising a processor configured to perform the method according to the first aspect of the present disclosure.
[0047] In a third aspect of the present disclosure, there is presented a method of processing a subset of photographic data items selected, using the method according to the first aspect of
the present disclosure, from a set of photographic data items, the method being performed by a processor and comprising the steps of:
[0048] - receiving the file paths of and the metadata related to the selected photographic data items from a further processor;
[0049] - rendering the selected photographic data items in at least one of 3D point clouds, photomosaics, videos and reports.
[0050] The photographic data items selected using the method of the present disclosure are obtained based on the received file paths, which can then be processed so as to generate products which are readily available for visual survey.
[0051] In an example of the present disclosure, the photographic data items comprise marine survey photographic data items.
[0052] This is a specific application related to marine survey, which normally involves a large volume of photographs and the method of the present disclosure is especially suitable for such an application.
[0053] In a fourth aspect of the present disclosure, there is presented a computer program product comprising a computer readable storage medium storing instructions which, when executed on at least one processor, cause the at least one processor to carry out the method according to the first aspect of the present disclosure.
[0054] The above mentioned and other features and advantages of the disclosure will be best understood from the following description referring to the attached drawings. In the drawings, like reference numerals denote identical parts or parts performing an identical or comparable function or operation.
[0055] In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are therefore not to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:
[0056] FIG. 1 schematically illustrates in a block diagram, a system comprising different modules for implementing the method of selecting a subset of photographic data items from a set of photographic data items, in accordance with an embodiment of the present disclosure.
[0057] FIG. 2 schematically illustrates a metadata database according to an embodiment of the present disclosure.
[0058] FIG. 3 schematically illustrates, in a flow chart type diagram, an embodiment of a method of selecting a subset of photographic data items from a set of photographic data items according to the present disclosure.
[0059] FIG. 4 schematically illustrates an example of selecting a subset of photographs based on a defined proximity threshold.
[0060] FIGs. 5 and 6 show an example of using the method of the present disclosure to select photographs relevant to the point of interest and to generate a point cloud based on the selected photos.
[0061] Embodiments contemplated by the present disclosure will now be described in more detail with reference to the accompanying drawings. The disclosed subject matter should not be construed as limited to only the embodiments set forth herein. Rather, the illustrated embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.
[0062] In the following description, the terms “photo”, “image”, “photograph” and “photographic data item” are used interchangeably.
[0063] The present disclosure describes a method and a system designed to manage the processing of large volumes of photographic data items, such as geo-tagged still images, and their associated metadata. The disclosed method enables the execution of automated processing workflows to systematically select and provide further applications with all of the necessary information to complete a task involving photographic data items of interest or concern for a visual survey project, based on user configurable parameters.
[0064] FIG. 1 schematically illustrates, in a block diagram, a system 10 comprising different modules for implementing the method of selecting a subset of photographic data items from a set of photographic data items, in accordance with an embodiment of the present disclosure.
[0065] The system 10 comprises a raw data processing module 11, a metadata database 12, and a selection module 13. The raw data processing module 11 is communicatively connected
or coupled to the metadata database 12, which in turn is communicatively connected to the selection module 13.
[0066] The raw data processing module 11 is configured to prepare the raw data, including the raw photos, the metadata related to the photos (for example timestamps and geospatial information) and the file paths of the photos, for cataloguing.
[0067] As an example, photographic data collected by a remotely operated vehicle, ROV, may comprise individual image files, each representing a single photograph captured by the ROV's onboard camera system. These images can be in a standard format such as JPEG or TIFF. Each image typically contains metadata that includes a timestamp indicating the date and time when the photograph was taken. This timestamp is used for correlating images with other data and understanding the temporal context of the observations. Geospatial information such as GPS coordinates, depth, and altitude is also obtained. This data is often embedded in the image metadata, allowing the precise location of each photograph to be determined.
[0068] The collected photographs are stored in a storage device, such as one or more servers, for example in a distributed manner, or in the cloud.
[0069] The raw photographic data items can be obtained from various sources, including satellites, drones, aerial imagery, ground-based cameras or cameras deployed on underwater vehicles.
[0070] The images or photos collected are catalogued to create the metadata database 12, which is a catalogue or library of the storage information, that is, the file paths, of all images collected, and of the applicable position information at the time the images were taken.
[0071] Accurately cataloguing the metadata related to the photos taken during a data collection session and storing the same in the metadata database 12 ensures the smooth running of subsequent steps of the method. This can be performed via process steps and tools for importing embedded geographic positioning information from the photos. The geospatial information may also be stored in other files which can be readily imported into the metadata database 12.
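As a non-limiting sketch of how embedded positioning information could be read from a JPEG photo, the fragment below uses the Pillow imaging library, assuming that library is available and that the standard EXIF timestamp and GPS tags are present; sources that store position in separate files would instead be handled by the ingest step described below.

```python
# Illustrative extraction of the EXIF timestamp and GPS position from a JPEG,
# using Pillow. Assumes standard EXIF tags are present and that the GPS
# rationals can be converted with float().

from PIL import Image

EXIF_DATETIME_ORIGINAL = 36867   # EXIF tag: DateTimeOriginal
EXIF_GPS_IFD = 34853             # EXIF tag: GPSInfo

def _dms_to_degrees(dms, ref):
    """Convert EXIF degrees/minutes/seconds rationals to signed decimal degrees."""
    degrees = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -degrees if ref in ("S", "W") else degrees

def read_photo_metadata(path):
    """Return (timestamp_string, lat, lon) read from the image's EXIF data."""
    exif = Image.open(path)._getexif() or {}
    gps = exif.get(EXIF_GPS_IFD, {})
    lat = _dms_to_degrees(gps[2], gps[1])   # GPSLatitude, GPSLatitudeRef
    lon = _dms_to_degrees(gps[4], gps[3])   # GPSLongitude, GPSLongitudeRef
    return exif.get(EXIF_DATETIME_ORIGINAL), lat, lon
```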
[0072] The system 10 may optionally also comprise a data ingest module 14 connected between the raw data processing module 11 and the metadata database 12. The ingest module 14 comprises source-specific ingest drivers for translating inputs, including metadata in different formats, into a standardized catalogue format. The ingest module 14 can update the previously catalogued images with the corrected position.
[0073] The system 10 further comprises a user interface 15 which is configured to receive user input such as coordinates and proximity threshold for specifying a point of interest or for defining a geographic boundary.
[0074] The selection module 13 is communicatively connected or coupled to the metadata database 12 and the user interface 15, and is configured to receive user input and to perform the method of selecting a smaller subset of photos from a (much) larger set of photos. The selection module 13 is further connected to an optional further processing module 16, which is configured to process the selected photos to generate photographic products or reports. The generated products may be displayed on a display 17 connected to the further processing module 16.
[0075] Figure 2 schematically illustrates a metadata database 20 according to an embodiment of the present disclosure. The metadata database 20 comprises a source-agnostic image catalogue 21 of a large number of images and metadata related to each image. As an example, the catalogue may comprise, for each image, image path and timestamp 211, as well as geospatial information 212 of the image, including for example geographic coordinates and altitude of the image. The catalogue may also comprise information 213 related to missions where the images are collected, such as line names and mission names.
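Purely as an illustration, a metadata database of this kind could be held in a relational store such as SQLite; the table and column names below are assumptions made for this example and do not limit the catalogue structure described above.

```python
# Illustrative SQLite schema for the source-agnostic image catalogue: file path,
# timestamp, geospatial information and mission/line identifiers per image.

import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS image_catalogue (
    image_path TEXT PRIMARY KEY,   -- file path of the raw photograph
    captured_at TEXT NOT NULL,     -- timestamp of capture (ISO 8601)
    lat REAL NOT NULL,             -- latitude of the photo centre point
    lon REAL NOT NULL,             -- longitude of the photo centre point
    altitude REAL,                 -- altitude or depth, where available
    mission_name TEXT,             -- mission in which the image was collected
    line_name TEXT                 -- survey line within the mission
);
CREATE INDEX IF NOT EXISTS idx_catalogue_position ON image_catalogue (lat, lon);
"""

def create_catalogue(db_path="catalogue.db"):
    """Create (if needed) and return a connection to the catalogue database."""
    conn = sqlite3.connect(db_path)
    conn.executescript(SCHEMA)
    conn.commit()
    return conn
```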
[0076] The geographic positioning information, often referred to as geospatial or geo-info, associated with a photograph typically includes geographical coordinates of the centre point of the photo. These coordinates indicate the specific latitude and longitude at which the centre of the image was captured.
[0077] Figure 3 schematically illustrates, in a flow chart type diagram, an embodiment of a method 30 of selecting a subset of photographic data items from a set of photographic data items according to the present disclosure.
[0078] When photographs from multiple sources are used, step 31 may be performed first to translate one or more photographs into a designated catalogue format. The designated catalogue format may be the same as the format of photographs collected using a specific technology. It is also possible that the designated catalogue format is different from the formats of all photographs. In this case, the translating step 31 is performed to convert all photographs into the designated catalogue format.
[0079] When all photographs to be processed are in the designated catalogue format, a metadata database is created at step 32. This is realized by cataloguing metadata related to each photographic data item. The cataloguing step can create a metadata database as illustrated in
Figure 2, in which all the metadata related to the photographs, including geotag data, as well as file paths of the raw photographs are stored.
[0080] Once the database is populated with the metadata related to the photographs, the next step 33 is performed to obtain a location or point of interest.
[0081] One approach to providing the point of interest is to have an expert, such as a subject matter expert reviewing the individual photos, select one or more photographs for which he or she wishes to have a representation generated. The representation can be, for example, a point cloud or mosaic representation.
[0082] The selection of a location or point of interest is achieved by receiving a geographic centre coordinate or by receiving a photo selected by the operator in an image viewer. If a photo is selected instead of a geographic coordinate input, the image filename is matched with the image catalogue in the database to find its associated geographic position, which will be used as the point of interest.
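Assuming the illustrative SQLite catalogue sketched earlier, the matching of a selected image filename to its catalogued position could, for example, be performed as follows; the table and column names remain illustrative assumptions.

```python
# Illustrative lookup of the point of interest from a photo chosen in the viewer:
# the filename is matched against the catalogue to retrieve its position.

import sqlite3

def poi_from_selected_image(conn: sqlite3.Connection, filename: str):
    """Return (lat, lon) of the catalogued image whose path ends with `filename`."""
    row = conn.execute(
        "SELECT lat, lon FROM image_catalogue WHERE image_path LIKE ?",
        (f"%{filename}",),
    ).fetchone()
    if row is None:
        raise LookupError(f"image '{filename}' not found in the catalogue")
    return row
```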
[0083] The selection of the point of interest may also be performed automatically by a processor running the method using machine/computer vision, which will rely on training datasets created based on selections made by experts for applications involving different technologies.
[0084] After the point of interest is obtained, at step 34, a subset of photos is selected from the set of photos based on calculations performed using the metadata in the metadata database, wherein each of the selected photos falls within a geographic boundary defined with reference to the point of interest.
[0085] It will be understood by those skilled in the art that the phrase “falling within a geographic boundary” means that the geographic positioning information such as coordinates of a centre point of a photo fall into a defined geographic boundary. It does not mean that the whole photo falls into a defined geographic boundary.
[0086] This allows all the relevant photos, as well as their metadata, within the geographic boundary to be selected. The geographic boundary is user defined.
[0087] The geographic boundary may be defined based on a photographic proximity threshold that is specified by the operator or user. In this case, a spherical region surrounding the point of interest and having a radius equal to the proximity threshold is defined.
Images distanced from the point of interest by a distance smaller than the proximity threshold are selected. The distance between each image and the point of interest is calculated based on the metadata thereof; that is, the distance between the coordinates of a centre point of each image and the point of interest as represented by its coordinates is calculated.
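When the catalogue holds millions of rows, one conceivable way to keep this calculation fast, given here only as an illustrative sketch based on the earlier assumed schema, is to pre-filter candidates with a coarse latitude/longitude bounding box in the database and to apply the exact great-circle distance only to the remaining candidates.

```python
# Illustrative two-stage proximity selection over the SQLite catalogue:
# a coarse lat/lon bounding box in SQL, then an exact haversine check in Python.

import sqlite3
from math import radians, degrees, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    phi1, phi2 = radians(lat1), radians(lat2)
    a = sin(radians(lat2 - lat1) / 2) ** 2 + cos(phi1) * cos(phi2) * sin(radians(lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def select_near_poi(conn: sqlite3.Connection, poi_lat, poi_lon, threshold_m):
    """Return (image_path, lat, lon) rows within threshold_m of the point of interest."""
    dlat = degrees(threshold_m / EARTH_RADIUS_M)            # latitude margin in degrees
    dlon = dlat / max(cos(radians(poi_lat)), 1e-6)          # longitude margin, widened near poles
    candidates = conn.execute(
        "SELECT image_path, lat, lon FROM image_catalogue "
        "WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?",
        (poi_lat - dlat, poi_lat + dlat, poi_lon - dlon, poi_lon + dlon),
    ).fetchall()
    return [(path, lat, lon) for (path, lat, lon) in candidates
            if haversine_m(lat, lon, poi_lat, poi_lon) <= threshold_m]
```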
[0088] As an alternative, the geographic boundary may also be defined by a series of latitude and longitude points: a closed shape surrounding the point of interest is defined based on these points. Photographs having coordinates falling within the defined boundary are selected. The selection is likewise performed using the metadata of the images. This approach allows the defined geographic boundary to have a more flexible shape, which can be used to make the selection more efficient while avoiding the selection of non-relevant images.
[0089] After selecting the subset of images, at step 35, the file paths of the selected subset of photos and the metadata related thereto are output to a processor for further processing. As an example, the metadata related to the selected photographs, including navigation data and other task-specific parameters related to the subset of photographs as well as their storage locations, can be encoded as a command batch that is passed to a photogrammetry processing software for product generation, via for example a custom data processing script. Data products including 3D point clouds, photomosaics, and videos are generated automatically and organized into an appropriate hierarchical filing structure.
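As one non-limiting possibility, the command batch could take the form of a JSON task file handed to the photogrammetry processing software; the field names below are illustrative assumptions rather than the input format of any particular product.

```python
# Illustrative command batch: the selected file paths and their metadata are
# written to a JSON task file that a downstream photogrammetry job can consume.

import json

def write_command_batch(selected_items, poi, threshold_m, out_path="task_batch.json"):
    """Write the selected subset and task parameters as a JSON command batch."""
    batch = {
        "task": "generate_point_cloud",                     # illustrative task name
        "point_of_interest": {"lat": poi[0], "lon": poi[1]},
        "proximity_threshold_m": threshold_m,
        "images": [
            {"path": item["path"], "lat": item["lat"], "lon": item["lon"],
             "timestamp": item["timestamp"]}
            for item in selected_items
        ],
    }
    with open(out_path, "w", encoding="utf-8") as fh:
        json.dump(batch, fh, indent=2)
    return out_path
```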
[0090] The processor or software module receiving the selected photographs and performing the further processing can be the same processor as, or run on a different processor from, the one selecting the photographs.
[0091] Figure 4 schematically illustrates an example 40 of selecting a subset of photographs based on a defined proximity threshold. In Figure 4, a point of interest 41, in the form of centre coordinates or an image of interest, is provided to the selection module (not shown) operating the metadata database 42. In the metadata database 42, metadata including the positions, represented by dots 45, of all the collected photographs is stored. The photograph and its related metadata associated with the point of interest are located at point 43 in Figure 4. Based on a proximity threshold defined by the user, a geographic boundary 44 is defined. Those images at a distance smaller than or equal to the proximity threshold from the point of interest 43 are selected. The selected photos and their related metadata are then output to a processor or software module 45 for further processing.
[0092] As an example, three-dimensional, 3D, reconstruction of autonomous underwater vehicle, AUV, photography consists of combining overlapping images along track as well as across adjacent lines to create dense 3D point clouds. This can cover massive areas, with millions of images. If the whole set of images is to be processed, the task can take hours to finish. Otherwise, a lower point density has to be used to avoid unreasonable processing time, which can compromise a survey result and lead to missing or incorrect identification of objects of interest.
[0093] Especially if a large area only contains a few points of interest, it can be wasteful to process the entire area; focusing on these points of interest is much more efficient. By using the method of the present disclosure, with proper cataloguing of the metadata of the images and by specifying one or more points of interest, a smaller group of images is selected and the reconstruction is limited to the selected images. The processing can therefore be performed in a much shorter period of time, such as several minutes. The time saved can be spent on generating even denser or more detailed point clouds that better show the objects of real interest.
[0094] Figures 5 and 6 show an example of using the method of the present disclosure to select photographs relevant to the point of interest, which is shown in Figure 5 by reference 51. By using the method of the present disclosure, ten adjacent lines (including 1054 images) are automatically combined into a 916 megapixel GeoTIFF covering an area of 22 metres by 35 metres. This is created by simply providing a centre point latitude and longitude and a radius representing a proximity threshold via the user interface. A matching point cloud as shown in Figure 6 is then generated.
[0095] The method described above may be implemented as a software module or suite comprising an interface such as a web interface, an image catalogue or database for storing the collected images, a task managing module for managing tasks related to visual survey to be conducted based on the images, a selection module for performing the selection and outputting the selected images and a processing module for generating the deliverable output. Different modules may be deployed in a distributed way which allows optimal resource allocation to be realised.
[0096] In practice, a user reviewing images in an image viewer moves between the collected images to find a target or object of interest. When the target of interest is found, a task can be created by the user, for example by hitting a hotkey to add a photogrammetry task. The user can in the meanwhile continue reviewing.
[0097] The user also defines the geographic boundary within which images of interest are to be selected, for example by inputting proximity thresholds, coordinates, or latitude and longitude points defining a region surrounding the point of interest.
[0098] The system, upon receiving a new task in the queue, will find that image's geotag and any other images that fall within the defined geographic boundary of that location. This list of images, their associated navigation data and all of the configuration parameters necessary to make a point cloud are passed to a processing module, where they are processed, for example, by multiple computers in a distributed computing model.
[0099] The generated point clouds and other products can be automatically stored in an organized folder structure.
[0100] The method allows for seamless and maximal use of computing resources and improves processing efficiency. As the processing can be done in a much shorter period, it allows for in-field processing of 3D point clouds for images of interest; the total turnaround from identification of an image of interest to a small 3D point cloud deliverable can be realised in about 5 minutes.
[0101] The method can be applied to airborne photogrammetry, road and rail photogrammetry, and subsea asset inspection and characterisation.
[0102] The invention has been described by reference to certain embodiments discussed above. It will be recognized that these embodiments are susceptible to various modifications and alternative forms well known to those of skill in the art.
[0103] Further modifications in addition to those described above may be made to the structures and techniques described herein without departing from the spirit and scope of the invention. Accordingly, although specific embodiments have been described, these are examples only and are not limiting upon the scope of the invention.
Claims (15)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| NL2036195A NL2036195B1 (en) | 2023-11-07 | 2023-11-07 | A method of and device for selecting a subset of photographic data items from a set of photographic data items |
| PCT/EP2024/080367 WO2025098810A1 (en) | 2023-11-07 | 2024-10-28 | A method of and device for selecting a subset of photographic data items from a set of photographic data items |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| NL2036195A NL2036195B1 (en) | 2023-11-07 | 2023-11-07 | A method of and device for selecting a subset of photographic data items from a set of photographic data items |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| NL2036195B1 true NL2036195B1 (en) | 2025-05-19 |
Family
ID=89897493
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| NL2036195A NL2036195B1 (en) | 2023-11-07 | 2023-11-07 | A method of and device for selecting a subset of photographic data items from a set of photographic data items |
Country Status (2)
| Country | Link |
|---|---|
| NL (1) | NL2036195B1 (en) |
| WO (1) | WO2025098810A1 (en) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20210035312A1 (en) * | 2019-07-31 | 2021-02-04 | Memotix Corp C | Methods circuits devices systems and functionally associated machine executable instructions for image acquisition identification localization & subject tracking |
- 2023-11-07: NL NL2036195A patent/NL2036195B1/en (active)
- 2024-10-28: WO PCT/EP2024/080367 patent/WO2025098810A1/en (active, pending)
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20210035312A1 (en) * | 2019-07-31 | 2021-02-04 | Memotix Corp C | Methods circuits devices systems and functionally associated machine executable instructions for image acquisition identification localization & subject tracking |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2025098810A1 (en) | 2025-05-15 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12066979B2 (en) | Intelligent and automated review of industrial asset integrity data | |
| US9892558B2 (en) | Methods for localization using geotagged photographs and three-dimensional visualization | |
| US20230052727A1 (en) | Method and system for detecting physical features of objects | |
| EP3635619A1 (en) | System and method for construction 3d modeling and analysis | |
| US10943321B2 (en) | Method and system for processing image data | |
| Park et al. | Bringing information to the field: automated photo registration and 4D BIM | |
| TWI441094B (en) | Geospatial modeling system using single optical images and associated methods | |
| US20140270524A1 (en) | System and methods for generating quality, verified, and synthesized information | |
| Ashfaq et al. | Synthetic crime scene generation using deep generative networks | |
| Levy et al. | Cyber-archaeology | |
| Cabral et al. | Optimal reconstruction of railway bridges using a machine learning framework based on UAV photogrammetry and LiDAR | |
| White et al. | Near real‐time monitoring of wading birds using uncrewed aircraft systems and computer vision | |
| Naughton et al. | Scaling the annotation of subtidal marine habitats | |
| NL2036195B1 (en) | A method of and device for selecting a subset of photographic data items from a set of photographic data items | |
| GB2566491A (en) | Damage detection and repair system | |
| JP7681826B2 (en) | Inspection device, learning device, inspection method, learning device production method, and program | |
| Bajauri et al. | Developing a geodatabase for efficient uav-based automatic container crane inspection | |
| Tai et al. | RTAIS: road traffic accident information system | |
| Auccahuasi et al. | Large video processing using GPU programming | |
| Sá et al. | Odyssey: A Spatial Data Infrastructure for Archaeology | |
| Ivanov et al. | Innovations, applications, and future perspectives in geospatial information visualization for disaster response: insights from the 2023 Kahramanmaras Earthquake urban search and rescue operations | |
| Teo et al. | The use of UAS for rapid 3D mapping in geomatics education | |
| Slonecker et al. | Automated imagery orthorectification pilot | |
| Razali et al. | Exploring the Potential of Geospatial Virtual Reality In Forensic CSI: An Overview | |
| Bolick et al. | A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion |