EP4000004A1 - Method and system for automatically detecting, locating and identifying objects in a 3d volume - Google Patents
Method and system for automatically detecting, locating and identifying objects in a 3D volume
- Publication number
- EP4000004A1 (application EP20747343.0A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- interest
- objects
- output
- sections
- section
- Prior art date: 2019-07-18
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/12—Bounding box
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Definitions
- the invention relates to the detection, location and automatic identification of objects in a 3D volume.
- the aim of the present invention is to improve this situation; to this end, the present invention provides a method for detecting, locating and identifying objects contained in a complex scene.
- the method comprises the following steps:
- the invention makes it possible to significantly improve the quality of the detection, location and identification of objects in a complex scene: from the 3D volume of the scene to be processed, k 2D sections are obtained, in each of which the objects are detected, located and identified by artificial intelligence and semantic segmentation, and the results are then concatenated in 3D.
- the invention comprises one or more of the following features, which can be used separately, in partial combination or in total combination: the complex scene is first transformed into a volume by 3D imaging; the method further comprises a step of indexing the labels of all the objects of interest in the complex scene;
- the resolution of the process depends on the size of the 2D bounding boxes
- the resolution of the process depends on the size of the 3D bounding boxes.
- the method comprises, in order to concatenate in 3D the results of all the k output 2D sections, for an object of interest among said objects of interest, the following steps: define, for each output 2D section, a local three-dimensional frame, one dimension of which is perpendicular to the plane defined by the 2D section, and associate said frame with said 2D section; identify, in the output 2D sections, subsets or slices of the object of interest; transform each identified subset or slice of the object of interest by a change of coordinate system, from the local three-dimensional frame of the 2D section to which it belongs to a predetermined absolute Cartesian frame of reference; concatenate the transformed subsets or slices into a 3D icon.
- the invention also relates to a system for implementing the method defined above.
- the invention further relates to a computer program comprising program instructions for the execution of a method as defined above, when said program is executed on a computer.
- Figure 1 schematically illustrates the main steps of the process according to the invention
- Figure 1A is an index table of classes of objects of interest
- FIG. 2 schematically represents the steps for obtaining 2D sections
- FIG. 3 diagrammatically represents the steps of detection, location, and automatic identification of objects of interest in each 2D section by specialized artificial intelligence
- Figure 4 shows a bounding box of the object detected according to the method according to the invention.
- Figure 5 shows a segmented icon in accordance with the invention
- Figure 6 shows a 3D volume reconstructed in voxels in accordance with the invention
- Figure 7 shows principal sections in a 3D volume reconstructed by reflective tomography in accordance with the invention
- FIG. 8 represents an example of an OCT (Optical Coherence Tomography) section
- FIG. 9 represents an example of a complex 3D scene containing a camouflaged object reconstructed by reflective tomography from the 2D images;
- Figure 10 shows an example of a 2D section in the 3D scene
- Figure 11 shows the automatic detection and generation of the bounding box of the object camouflaged in the 2D section of the 3D scene
- Figure 12 shows the identification of the camouflaged object in the 2D section of the 3D scene.
- Figure 13 shows the generation of the bounding box and the identification of the object in the 3D scene.
- the invention relates to the automatic detection, location and identification of objects in three-dimensional (3D) imagery forming a 3D volume of voxels (volumetric pixels).
- 3D imagery corresponds to a complex scene in which objects can hide from each other as illustrated in figure 9.
- the three-dimensional volume can be obtained by a reconstruction process based on transmission or fluorescence (Optical Projection Tomography, nuclear imaging or X-Ray Computed Tomography), on reflection (reflection of a laser wave, or solar reflection in the visible band (between 0.4 µm and 0.7 µm), the near infrared (between 0.7 µm and 1 µm) or the SWIR (Short Wave InfraRed, between 1 µm and 3 µm)), or on the thermal emission of the object (thermal imaging between 3 µm and 5 µm and between 8 µm and 12 µm); this three-dimensional reconstruction process is described in the patent "Optronic system and method for producing three-dimensional images dedicated to identification" (US8836762B2, EP2333481B1).
- the Index(n) has the value "n"
- the Index(background) has the value "0".
- the detection, location and identification method according to the invention comprises the following general steps described with reference to Figure 1.
- k 2D sections are made in the reconstructed 3D volume.
- 3D volume → {Section(k)}, k = {1, 2, ..., K}, K being the number of 2D sections made.
- in step 20, for each input 2D section thus obtained, the objects of interest are automatically detected, located and identified by a specialized artificial intelligence (AI) method.
- Section(k) → {Object(k,m), Label(k,m), Boundingbox2D(k,m), Icon2D(k,m)}.
- the AI method is based on deep learning of the "Faster R-CNN (Regions with Convolutional Neural Network features) object classification" type.
- the method applies a semantic segmentation to each Icon2D defined by a bounding box Boundingbox2D.
- the semantic segmentation is performed by deep learning, for example a Mask R-CNN (Mask Regions with Convolutional Neural Network) designed for the semantic segmentation of images.
- Mask R-CNN (Mask Regions with Convolutional Neural Network)
- in step 30, the results of all the 2D sections are finally concatenated in 3D.
- the 3D concatenation of the results of the 2D sections is carried out by the following steps: definition of a local three-dimensional frame associated with each 2D section, one dimension of which is perpendicular to the plane defined by the 2D section; identification of the 2D sections in which subsets or slices of the object of interest have been identified; mathematical transformation (translation and/or rotation) of all the local three-dimensional frames of the retained subsets or slices into a determined absolute Cartesian three-dimensional frame of reference.
- the first precision parameter, also called resolution, concerns the number and angle of the 2D sections; for example, the 2D sections belong to the group formed by principal sections, horizontal sections, vertical sections and oblique sections.
- 2D sections at different angles can provide better detection results and will be used in the 3D concatenation of results which will be described in more detail below.
- Boundingbox2D = [(x1, x2), (y1, y2)].
- Boundingbox3D = [(x1, x2), (y1, y2), (z1, z2)].
- from the 3D volume 11 (reconstructed in voxels), the cutting module 12 generates 2D sections 15 (in pixels) in response to the command from the selection module 13.
- the 2D sections 15 (in pixels) are managed and indexed by the management module 14 in accordance with the indexing table TAB (FIG. 1A).
- the output 2D section 23 generated by the AI method 22 comprises a 2D bounding box 24 surrounding an object of interest 25.
- the size of the 2D bounding box 24 surrounding the object 25 is defined by its coordinates on the abscissa X (x1 and x2) and on the ordinate Y (y1 and y2).
- FIG. 5 shows a 2D icon 50 semantically segmented in accordance with the invention
- object 25 is indexed with the value "1" while the background is indexed with the value "0".
- FIG. 6 shows a 3D volume reconstructed in voxels in accordance with the invention, in which the object of interest 25 has the index "1" while another object of interest has the index "2", within a background volume of index "0".
- FIG. 8 shows an example of an OCT (Optical Coherence Tomography) section in which a lacunar (gap) area is to be identified.
- OCT Optical Coherence Tomography
- the complex scene includes a vehicle camouflaged in the bushes.
- the 2D images are air-to-ground acquisitions of 415x693 pixels.
- Figure 10 shows an example of a 2D section (YZ section) of the 3D scene of Figure 9
- FIG. 11 shows the automatic detection and generation of the bounding box of the camouflaged object in the 2D section of the 3D scene illustrated in Figures 9 and 10.
- the fields of application of the invention are wide, covering the detection, classification, recognition and identification of objects of interest.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Data Mining & Analysis (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
Description
DESCRIPTION
Title of the invention: Method and system for the automatic detection, location and identification of objects in a 3D volume
FIELD OF THE INVENTION
[0001] The invention relates to the automatic detection, location and identification of objects in a 3D volume.
[0002] It applies generally to the field of target detection, to the medical field, to microelectronics and to similar fields. In particular, it addresses application needs encountered in many situations, such as the automatic detection of small fossils (2-3 microns) in large 3D volumes reconstructed by CT scanning of oil-exploration core samples; the identification of camouflaged objects in a complex 3D scene; the identification of a benign pigment disorder likely to progress to carcinoma or melanoma from a three-dimensional skin reconstruction; the identification of "carcinogenic anomalies" in OCT (Optical Coherence Tomography) sections; or the automatic detection, location and identification of cancerous tumors / lacunar areas from three-dimensional reconstructions of CT or MRI (Magnetic Resonance Imaging) scans.
BACKGROUND OF THE INVENTION
[0003] Currently, the application needs mentioned above are most often handled by a domain expert (geophysicist, physicist, radiologist, dermatologist, ...) who identifies the objects of interest in 3D volumes using visualization tools such as MIP (Maximum Intensity Projection).
[0004] However, processing by a domain expert is difficult to set up because the mass of data to be processed is very large. In addition, the expert's identification success rate is limited and rarely exceeds 90%.
SUMMARY OF THE INVENTION
[0005] The aim of the present invention is to improve this situation. [0006] To this end, the present invention provides a method for detecting, locating and identifying objects contained in a complex scene.
[0007] According to a general definition of the invention, the method comprises the following steps:
- from a 3D voxel volume of the complex scene, obtain k 2D sections of the 3D volume;
- for each input 2D section thus obtained, automatically detect, locate and identify objects of interest using a specialized artificial intelligence method arranged to output:
- a label corresponding to each object identified in the current input 2D section;
- a bounding box for each object thus labeled;
- a 2D icon defined by the extracted 2D bounding box;
- for each output 2D section, semantically segment each 2D icon defined by a 2D bounding box; and
- concatenate the results of all k output 2D sections in order to generate the consolidated labels of the objects of interest, the 3D bounding boxes and the segmented 3D icons.
[0008] The invention thus significantly improves the quality of the detection, location and identification of objects in a complex scene: from the 3D volume of the scene to be processed, k 2D sections are obtained, in each of which the objects are detected, located and identified by artificial intelligence and semantic segmentation, and the results are then concatenated in 3D. As a result, objects can be detected, located and identified automatically with very high precision, even when they mask one another.
[0009] According to preferred embodiments, the invention comprises one or more of the following features, which can be used separately, in partial combination or in total combination:
- the complex scene is first transformed into a volume by 3D imaging;
- the method further comprises a step of indexing the labels of all the objects of interest in the complex scene;
- the resolution of the method depends on the number and nature of the 2D sections;
- the resolution of the method depends on the size of the 2D bounding boxes;
- the resolution of the method depends on the size of the 3D bounding boxes.
[0010] Advantageously, in order to concatenate in 3D the results of all k output 2D sections for a given object of interest, the method comprises the following steps: define, for each output 2D section, a local three-dimensional frame, one dimension of which is perpendicular to the plane defined by the 2D section, and associate this frame with the 2D section; identify, in the output 2D sections, subsets or slices of the object of interest; transform each identified subset or slice of the object of interest by a change of coordinate system, from the local three-dimensional frame of the 2D section to which it belongs to a predetermined absolute Cartesian frame of reference; concatenate the transformed subsets or slices into a 3D icon.
[0011] The invention also relates to a system for implementing the method defined above.
[0012] The invention further relates to a computer program comprising program instructions for executing a method as defined above when said program is run on a computer.
[0013] Other features and advantages of the invention will become apparent on reading the following description of a preferred embodiment of the invention, given by way of example and with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] Figure 1 schematically illustrates the main steps of the method according to the invention;
[0015] Figure 1A is an index table of the classes of objects of interest;
[0016] Figure 2 schematically represents the steps for obtaining the 2D sections; [0017] Figure 3 schematically represents the steps of automatic detection, location and identification of the objects of interest in each 2D section by a specialized artificial intelligence;
[0018] Figure 4 shows a bounding box of an object detected according to the method of the invention;
[0019] Figure 5 shows an icon segmented in accordance with the invention;
[0020] Figure 6 shows a 3D volume reconstructed in voxels in accordance with the invention;
[0021] Figure 7 shows principal sections of a 3D volume reconstructed by reflective tomography in accordance with the invention;
[0022] Figure 8 shows an example of an OCT (Optical Coherence Tomography) section;
[0023] Figure 9 shows an example of a complex 3D scene containing a camouflaged object, reconstructed by reflective tomography from 2D images;
[0024] Figure 10 shows an example of a 2D section of the 3D scene;
[0025] Figure 11 shows the automatic detection and generation of the bounding box of the camouflaged object in the 2D section of the 3D scene;
[0026] Figure 12 shows the identification of the camouflaged object in the 2D section of the 3D scene; and
[0027] Figure 13 shows the generation of the bounding box and the identification of the object in the 3D scene.
DETAILED DESCRIPTION OF THE INVENTION
[0028] The invention relates to the automatic detection, location and identification of objects in three-dimensional (3D) imagery forming a 3D volume of voxels (volumetric pixels).
[0029] For example, and without limitation, the 3D imagery corresponds to a complex scene in which objects may mask one another, as illustrated in Figure 9.
[0030] In practice, the three-dimensional volume may be obtained by a reconstruction process based on transmission or fluorescence (Optical Projection Tomography, nuclear imaging or X-Ray Computed Tomography), on reflection (reflection of a laser wave, or solar reflection in the visible band (between 0.4 µm and 0.7 µm), the near infrared (between 0.7 µm and 1 µm) or the SWIR (Short Wave InfraRed, between 1 µm and 3 µm)), or on the thermal emission of the object (thermal imaging between 3 µm and 5 µm and between 8 µm and 12 µm). This three-dimensional reconstruction process is described in the patent "Optronic system and method for producing three-dimensional images dedicated to identification" (US8836762B2, EP2333481B1).
[0031] All the voxels resulting from a three-dimensional reconstruction are used together with their associated intensities, this reconstruction preferably having been obtained by reflection.
[0032] First, with reference to Figure 1A, the classes of objects of interest are indexed.
[0033] A correspondence table TAB "Class Index x Class Label" is thus created for all the classes of objects of interest. For example, at the end of the indexing the following elements are obtained: Class(n) = {Index(n), Label(n)}, n = {1, 2, ..., N}, N being the number of classes of objects of interest.
[0034] For example, Index(n) has the value "n" and Index(background) has the value "0".
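By way of illustration, the table TAB can be held as a simple mapping from class index to class label. The sketch below is in Python; the labels shown are hypothetical examples, since the patent does not fix a particular set of classes:

```python
# Hypothetical class-index table TAB: Index(background) = 0, Index(n) = n.
# The labels are illustrative only; the patent leaves the classes open.
TAB = {
    0: "background",
    1: "vehicle",    # e.g. the camouflaged vehicle of Figure 9
    2: "building",
}
N = len(TAB) - 1     # number of classes of objects of interest (background excluded)
```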
[0035] The detection, location and identification method according to the invention comprises the following general steps, described with reference to Figure 1.
[0036] In a first step, referenced 10, k 2D sections are made in the reconstructed 3D volume: 3D volume → {Section(k)}, k = {1, 2, ..., K}, K being the number of 2D sections made.
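For axis-aligned (principal) sections, step 10 amounts to indexing the voxel array along each axis. The following NumPy sketch is one possible reading of this step, restricted to principal sections; oblique sections, which the invention also allows, would additionally require resampling (e.g. trilinear interpolation) and are not shown:

```python
import numpy as np

def make_sections(volume: np.ndarray) -> dict:
    """Extract the principal 2D sections {Section(k)} of a 3D voxel volume.

    volume is indexed as [x, y, z]; the result maps a section index k to a
    tuple (plane, position, 2D pixel array), so that K = nx + ny + nz.
    """
    nx, ny, nz = volume.shape
    sections, k = {}, 1
    for z in range(nz):                              # XY sections
        sections[k] = ("XY", z, volume[:, :, z]); k += 1
    for y in range(ny):                              # XZ sections
        sections[k] = ("XZ", y, volume[:, y, :]); k += 1
    for x in range(nx):                              # YZ sections
        sections[k] = ("YZ", x, volume[x, :, :]); k += 1
    return sections
```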
[0037] With reference to step 20, for each input 2D section thus obtained, the objects of interest are automatically detected, located and identified by a specialized artificial intelligence (AI) method.
[0038] The AI method outputs the following elements:
- detection of the Objects(k,m) in Section(k), m = {1, 2, ..., M(k)}, M(k) being the number of objects detected in Section(k);
- generation of the Label(k,m) corresponding to each Object(k,m) identified in Section(k);
- generation of the 2D bounding boxes, also called Boundingbox2D(k,m), of each Object(k,m) in Section(k);
- extraction, from Section(k), of the Icons2D(k,m) defined by the 2D bounding boxes Boundingbox2D(k,m).
The following elements are thus obtained: Section(k) → {Object(k,m), Label(k,m), Boundingbox2D(k,m), Icon2D(k,m)}.
[0039] For example, the AI method is based on deep learning of the "Faster R-CNN (Regions with Convolutional Neural Network features) object classification" type.
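As an illustration of step 20, the sketch below runs an off-the-shelf torchvision Faster R-CNN on one 2D section and packages the result as {Label(k,m), Boundingbox2D(k,m), Icon2D(k,m)}. This is only an assumption of how the specialized AI could be wired up: the patent presumes a network trained on the indexed classes of interest, whereas the pretrained COCO weights, the grayscale-to-RGB replication and the 0.5 score threshold used here are placeholders:

```python
import numpy as np
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")   # placeholder: COCO weights
model.eval()

def detect_in_section(section: np.ndarray) -> list:
    """Detect, locate and label objects in one 2D section (H x W, values in [0, 1])."""
    img = torch.as_tensor(section, dtype=torch.float32)
    img = img.unsqueeze(0).repeat(3, 1, 1)           # grayscale slice -> 3 channels
    with torch.no_grad():
        out = model([img])[0]                        # dict: boxes, labels, scores
    results = []
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if score < 0.5:                              # assumed confidence threshold
            continue
        x1, y1, x2, y2 = (int(v) for v in box.tolist())
        results.append({
            "label": int(label),                     # Label(k,m)
            "bbox2d": ((x1, x2), (y1, y2)),          # Boundingbox2D(k,m)
            "icon2d": section[y1:y2, x1:x2],         # Icon2D(k,m), cropped from the section
        })
    return results
```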
[0040] The method then applies a semantic segmentation to each Icon2D defined by a bounding box Boundingbox2D.
[0041] In practice, a SegmentedIcon2D(k,m) of the same size as Icon2D(k,m) is generated, in which each pixel takes either the value of the Index(k,m) of the Object(k,m) identified in Section(k), or the value of Index(background). The output is therefore Icon2D(k,m) → SegmentedIcon2D(k,m).
[0042] For example, the semantic segmentation is performed by deep learning, for example a Mask R-CNN (Mask Regions with Convolutional Neural Network) designed for the semantic segmentation of images.
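Continuing the sketch, the semantic segmentation could be approximated with torchvision's pretrained Mask R-CNN, converting its soft mask into a SegmentedIcon2D indexed as described in [0041]; again, the pretrained weights and the 0.5 binarization threshold are assumptions, not the patent's specialized network:

```python
import numpy as np
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

seg_model = maskrcnn_resnet50_fpn(weights="DEFAULT")     # placeholder weights
seg_model.eval()

def segment_icon(icon2d: np.ndarray, class_index: int) -> np.ndarray:
    """Return SegmentedIcon2D: class_index on the object's pixels, 0 (background) elsewhere."""
    img = torch.as_tensor(icon2d, dtype=torch.float32).unsqueeze(0).repeat(3, 1, 1)
    with torch.no_grad():
        out = seg_model([img])[0]                        # dict with soft "masks"
    segmented = np.zeros(icon2d.shape, dtype=np.int32)   # Index(background) = 0
    if len(out["masks"]) > 0:
        mask = out["masks"][0, 0].numpy() > 0.5          # assumed binarization threshold
        segmented[mask] = class_index                    # Index(k,m) of the identified object
    return segmented
```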
[0043] With reference to step 30, the results of all the 2D sections are finally concatenated in 3D.
[0044] In one set of embodiments of the invention, the 3D concatenation of the results of the 2D sections is carried out by the following steps (a sketch is given below):
- definition of a local three-dimensional frame associated with each 2D section, one dimension of which is perpendicular to the plane defined by the 2D section;
- identification of the 2D sections in which subsets or slices of the object of interest have been identified;
- mathematical transformation (translation and/or rotation) of all the local three-dimensional frames of the retained subsets or slices into a determined absolute Cartesian three-dimensional frame of reference.
[0045] This allows the reconstruction of the three-dimensional object while ensuring continuity at the boundaries of the subsets or slices of the object.
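For axis-aligned principal sections, the change of frame of paragraph [0044] reduces to reinserting the slice position, i.e. a pure translation; the sketch below illustrates this restricted case and also derives a Boundingbox3D from the occupied voxels. Oblique sections would require a full rotation and translation per local frame, which is omitted here:

```python
import numpy as np

def to_absolute(plane: str, position: int, i: int, j: int) -> tuple:
    """Map pixel (i, j) of a principal section to absolute voxel coordinates (x, y, z)."""
    if plane == "XY":
        return (i, j, position)
    if plane == "XZ":
        return (i, position, j)
    if plane == "YZ":
        return (position, i, j)
    raise ValueError(f"unknown plane: {plane}")

def concatenate_3d(slices: list, shape: tuple):
    """Concatenate segmented 2D slices of one object into a segmented 3D icon.

    slices: list of (plane, position, SegmentedIcon2D) tuples; each segmented
    image is assumed here to be placed in full-section coordinates, and at
    least one voxel is assumed occupied.
    Returns the 3D icon and its Boundingbox3D = [(x1,x2),(y1,y2),(z1,z2)].
    """
    icon3d = np.zeros(shape, dtype=np.int32)
    for plane, position, segmented in slices:
        for i, j in zip(*np.nonzero(segmented)):
            x, y, z = to_absolute(plane, position, int(i), int(j))
            icon3d[x, y, z] = segmented[i, j]
    xs, ys, zs = np.nonzero(icon3d)
    bbox3d = ((int(xs.min()), int(xs.max())),
              (int(ys.min()), int(ys.max())),
              (int(zs.min()), int(zs.max())))
    return icon3d, bbox3d
```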
[0046] The output then consists of the following elements:
- concatenation of the Labels(k,m) → generation of the consolidated Labels(n);
- concatenation of the Boundingbox2D(k,m) → generation of the Boundingbox3D(n);
- concatenation of the segmented Icons2D(k,m) → generation of the segmented Icons3D(n);
{Object(n), Label(n), Boundingbox3D(n), SegmentedIcon3D(k,n)}, n belonging to {1, 2, ..., N}, N being the number of classes of objects of interest.
[0047] The method according to the invention has several precision parameters.
[0048] The first precision parameter, also called resolution, concerns the number and angle of the 2D sections; for example, the 2D sections belong to the group formed by principal sections, horizontal sections, vertical sections and oblique sections. The higher the number of 2D sections, the better the detection resolution. In addition, 2D sections at different angles can provide better detection results and serve in the 3D concatenation of the results, described in more detail below.
[0049] The second precision parameter concerns the 2D bounding boxes, Boundingbox2D = [(x1,x2),(y1,y2)]. The smaller the 2D bounding boxes, the better the detection resolution.
[0050] The third precision parameter concerns the 3D bounding boxes, Boundingbox3D = [(x1,x2),(y1,y2),(z1,z2)]. The smaller the 3D bounding boxes, the better the detection resolution.
[0051] Figure 2 shows the modules involved in producing the 2D sections.
[0052] As seen above, the choice and the number of 2D sections affect the resolution of the detection. [0053] From the 3D volume 11 (reconstructed in voxels), the cutting module 12 generates 2D sections 15 (in pixels) in response to the command from the selection module 13. The 2D sections 15 (in pixels) are managed and indexed by the management module 14 in accordance with the indexing table TAB (Figure 1A).
[0054] Figure 3 shows the modules of the AI method 22 applied to each input 2D section 21 thus generated and indexed.
[0055] The output 2D section 23 generated by the AI method 22 comprises a 2D bounding box 24 surrounding an object of interest 25.
[0056] With reference to Figure 4, the size of the 2D bounding box 24 surrounding the object 25 is defined by its coordinates on the abscissa X (x1 and x2) and on the ordinate Y (y1 and y2).
[0057] Figure 5 shows a 2D icon 50 semantically segmented in accordance with the invention. For example, the object 25 is indexed with the value "1" while the background is indexed with the value "0".
[0058] Figure 6 shows a 3D volume reconstructed in voxels in accordance with the invention, in which the object of interest 25 has the index "1" while another object of interest has the index "2", within a background volume of index "0".
[0059] Figure 7 shows principal 2D sections of a 3D volume reconstructed by reflective tomography in accordance with the invention, here along XY, XZ and ZY.
[0060] Figure 8 shows an example of an OCT (Optical Coherence Tomography) section in which a lacunar area is to be identified.
[0061] Figure 9 shows an example of a complex 3D scene containing a camouflaged object, reconstructed by reflective tomography from 2D images.
[0062] For example, the complex scene comprises a vehicle camouflaged in bushes; the 2D images are air-to-ground acquisitions of 415x693 pixels. [0063] Figure 10 shows an example of a 2D section (YZ section) of the 3D scene of Figure 9.
[0064] Figure 11 shows the automatic detection and the generation of the bounding box of the camouflaged object in the 2D section of the 3D scene illustrated in Figures 9 and 10.
[0065] Figure 12 shows the identification of the camouflaged object (index 1) in the 2D section of the 3D scene, with the coordinates of the 2D bounding box.
[0066] Figure 13 shows the generation of the 3D bounding box (with its coordinates) and the identification of the object in the 3D scene.
[0067] The fields of application of the invention are broad, covering the detection, classification, recognition and identification of objects of interest.
Claims
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| FR1908109A FR3098963B1 (en) | 2019-07-18 | 2019-07-18 | METHOD AND SYSTEM FOR THE AUTOMATIC DETECTION, LOCATION AND IDENTIFICATION OF OBJECTS IN A 3D VOLUME |
| PCT/EP2020/069056 WO2021008928A1 (en) | 2019-07-18 | 2020-07-07 | Method and system for automatically detecting, locating and identifying objects in a 3d volume |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP4000004A1 | 2022-05-25 |
Family
ID=69190842
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP20747343.0A (EP4000004A1, withdrawn) | 2019-07-18 | 2020-07-07 | Method and system for automatically detecting, locating and identifying objects in a 3D volume |
Country Status (6)
| Country | Link |
|---|---|
| US (1) | US20220358714A1 (en) |
| EP (1) | EP4000004A1 (en) |
| JP (1) | JP2022540582A (en) |
| KR (1) | KR20220032562A (en) |
| FR (1) | FR3098963B1 (en) |
| WO (1) | WO2021008928A1 (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2635830A (en) * | 2024-09-30 | 2025-05-28 | Vitvio Ltd | Method of determining the position of an object in a 3D volume |
Family Cites Families (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| FR2953313B1 (en) | 2009-11-27 | 2012-09-21 | Thales Sa | OPTRONIC SYSTEM AND METHOD FOR PREPARING THREE-DIMENSIONAL IMAGES FOR IDENTIFICATION |
| US20140369583A1 (en) * | 2013-06-18 | 2014-12-18 | Konica Minolta, Inc. | Ultrasound diagnostic device, ultrasound diagnostic method, and computer-readable medium having recorded program therein |
| US9189689B2 (en) * | 2013-10-30 | 2015-11-17 | Nec Laboratories America, Inc. | Robust scale estimation in real-time monocular SFM for autonomous driving |
| US10293878B2 (en) * | 2015-06-19 | 2019-05-21 | Blubrake S.R.L. | Brake assist system for a cyclist on a bicycle by a haptic feedback |
| WO2018170472A1 (en) * | 2017-03-17 | 2018-09-20 | Honda Motor Co., Ltd. | Joint 3d object detection and orientation estimation via multimodal fusion |
| EP3392832A1 (en) * | 2017-04-21 | 2018-10-24 | General Electric Company | Automated organ risk segmentation machine learning methods and systems |
| US10646999B2 (en) * | 2017-07-20 | 2020-05-12 | Tata Consultancy Services Limited | Systems and methods for detecting grasp poses for handling target objects |
| JP6989688B2 (en) * | 2017-07-21 | 2022-01-05 | トヨタ モーター ヨーロッパ | Methods and systems for training neural networks used for semantic instance segmentation |
| US10586374B2 (en) * | 2017-07-26 | 2020-03-10 | Alvin D. Zimmerman | Bounding volume hierarchy using virtual grid |
| WO2019152027A1 (en) * | 2018-01-31 | 2019-08-08 | Hewlett-Packard Development Company, L.P. | Determine sample points on slices from nurbs models |
| US10779798B2 (en) * | 2018-09-24 | 2020-09-22 | B-K Medical Aps | Ultrasound three-dimensional (3-D) segmentation |
| WO2020106925A1 (en) * | 2018-11-21 | 2020-05-28 | The Trustees Of Columbia University In The City Of New York | Medical imaging based on calibrated post contrast timing |
| CN109816655B (en) * | 2019-02-01 | 2021-05-28 | 华院计算技术(上海)股份有限公司 | Pulmonary nodule image feature detection method based on CT image |
| EP3796210A1 (en) * | 2019-09-19 | 2021-03-24 | Siemens Healthcare GmbH | Spatial distribution of pathological image patterns in 3d image data |
- 2019
- 2019-07-18 FR FR1908109A patent/FR3098963B1/en active Active
- 2020
- 2020-07-07 WO PCT/EP2020/069056 patent/WO2021008928A1/en not_active Ceased
- 2020-07-07 JP JP2022500701A patent/JP2022540582A/en active Pending
- 2020-07-07 EP EP20747343.0A patent/EP4000004A1/en not_active Withdrawn
- 2020-07-07 KR KR1020227001559A patent/KR20220032562A/en not_active Ceased
- 2020-07-07 US US17/621,433 patent/US20220358714A1/en not_active Abandoned
Non-Patent Citations (1)
| Title |
|---|
| BURTON WILLIAM S.: "Applied Deep Learning in Orthopaedics", 30 June 2019 (2019-06-30), pages 1 - 132, XP093105770, Retrieved from the Internet <URL:https://www.proquest.com/dissertations-theses/applied-deep-learning-orthopaedics/docview/2290955447/se-2?accountid=29404> [retrieved on 20231127] * |
Also Published As
| Publication number | Publication date |
|---|---|
| KR20220032562A (en) | 2022-03-15 |
| WO2021008928A1 (en) | 2021-01-21 |
| FR3098963A1 (en) | 2021-01-22 |
| FR3098963B1 (en) | 2022-06-10 |
| US20220358714A1 (en) | 2022-11-10 |
| JP2022540582A (en) | 2022-09-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Horn et al. | Artificial intelligence, 3D documentation, and rock art—approaching and reflecting on the automation of identification and classification of rock art images | |
| US10586337B2 (en) | Producing a segmented image of a scene | |
| Ko et al. | Object-of-interest image segmentation based on human attention and semantic region clustering | |
| Goferman et al. | Context-aware saliency detection | |
| EP3343507A1 (en) | Producing a segmented image of a scene | |
| EP4078522B1 (en) | Method for selecting surface points from a cad model for locating industrial 3d objects, application of this method to the location of industrial 3d objects, and augmented reality system using 3d objects thus located | |
| EP3234914B1 (en) | Method for discrimination and identification of objects of a scene by 3-d imaging | |
| Yi et al. | Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit | |
| FR3109635A1 (en) | Method of detecting at least one geological component of a rock sample | |
| Narayanaswamy et al. | 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors | |
| Bulatov et al. | Classification of airborne 3D point clouds regarding separation of vegetation in complex environments | |
| Mohammadi et al. | 2D/3D information fusion for building extraction from high-resolution satellite stereo images using kernel graph cuts | |
| Bhakuni et al. | Object Detection and Localization in Real-Time Using Image Processing and Deep Learning | |
| EP4000004A1 (en) | Method and system for automatically detecting, locating and identifying objects in a 3d volume | |
| Lee et al. | SAM-Net: LiDAR depth inpainting for 3D static map generation | |
| Jovanov et al. | Adaptive point cloud acquisition and upsampling for automotive lidar | |
| Agrafiotis et al. | Seafloor-invariant caustics removal from underwater imagery | |
| EP3343504A1 (en) | Producing a segmented image using markov random field optimization | |
| EP4042319B1 (en) | Method for object recognition with augmented representation | |
| Abubakr et al. | Learning deep domain-agnostic features from synthetic renders for industrial visual inspection | |
| Stegmann et al. | Few-shot AI segmentation of semiconductor device FIB-SEM tomography data | |
| Zhu et al. | Toward the ghosting phenomenon in a stereo-based map with a collaborative RGB-D repair | |
| Kassel et al. | DeePaste-Inpainting For Pasting | |
| Li et al. | Dynamic wind turbine blade 3D model reconstruction with event camera | |
| Pan et al. | Accuracy improvement of deep learning 3D point cloud instance segmentation |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| | 17P | Request for examination filed | Effective date: 20211217 |
| | AK | Designated contracting states | Kind code of ref document: A1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | DAV | Request for validation of the european patent (deleted) | |
| | DAX | Request for extension of the european patent (deleted) | |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| | 17Q | First examination report despatched | Effective date: 20231130 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| | 18D | Application deemed to be withdrawn | Effective date: 20240601 |