WO2019175620A1 - View based object detection in images - Google Patents
View based object detection in images
- Publication number
- WO2019175620A1 (PCT/IB2018/051592)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- view
- different
- version
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/17—Terrestrial scenes taken from planes or by drones
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/24—Character recognition characterised by the processing or recognition method
- G06V30/248—Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
- G06V30/2504—Coarse or fine approaches, e.g. resolution of ambiguities or multiscale approaches
Definitions
- The first step in producing the sharpened version of an image is to blur the image slightly (computing each pixel from its neighbouring pixels); the original image and the blurred version are then compared one pixel at a time. If an original pixel is brighter than the corresponding pixel of the blurred version it is brightened further, and if it is darker it is darkened further; the resulting image is the sharpened version of the original. Region boundaries and edges are closely related: because there is often a sharp adjustment in intensity at region boundaries, these edges can be used to segment the image into different objects (a minimal code sketch of this sharpening step follows this list).
- An Unmanned Aerial Vehicle (UAV) is an aircraft with no pilot on board; it can be a remotely controlled aircraft (e.g. flown by a pilot at a ground control station) or can fly autonomously based on pre-programmed flight plans or more complex dynamic automation systems. Unmanned Aerial Vehicles are used for detecting various objects and for attacking infiltrated ground targets.
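As referenced above, here is a minimal sketch of the described sharpening step (essentially an unsharp-mask-style operation), assuming a grayscale image held as a NumPy array; the neighbourhood size and strengthening amount are illustrative assumptions, not values fixed by this publication:

```python
import numpy as np
from scipy.ndimage import uniform_filter


def sharpen(image: np.ndarray, size: int = 3, amount: float = 1.0) -> np.ndarray:
    """Blur slightly, compare the original and blurred versions pixel by
    pixel, and push each pixel further in the direction it already differs."""
    original = image.astype(np.float64)
    # Blur slightly: each pixel becomes the mean of its neighbourhood.
    blurred = uniform_filter(original, size=size)
    # Pixels brighter than the blurred version are brightened further;
    # pixels darker than it are darkened further.
    sharpened = original + amount * (original - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```

Edges then appear as large intensity adjustments in the sharpened image, which is what the segmentation into different objects relies on.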
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Remote Sensing (AREA)
- Artificial Intelligence (AREA)
- Astronomy & Astrophysics (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biodiversity & Conservation Biology (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Image Analysis (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The first step in segmenting an image into different objects is producing a sharpened version of the image: the image is blurred slightly, and the original and blurred versions are then compared one pixel at a time. If an original pixel is brighter than the corresponding pixel of the blurred version it is brightened further, and if it is darker it is darkened further; the result is a sharpened version of the original image with thick edges along which it can be segmented into different objects. Each object then presents different salient features in different views, so the salient features detected for an object also narrow down the view in which it is seen, which aids Object Recognition.
Description
View Based Object Detection in Images
In this invention we have different images, each containing different objects. We can perform edge detection and segment an image into its different objects by sharpening the edges of the image. The first step in producing the sharpened version of an image is to blur the image slightly (computing each pixel from its neighbouring pixels); the original image and the blurred version are then compared one pixel at a time. If an original pixel is brighter than the corresponding pixel of the blurred version it is brightened further, and if it is darker it is darkened further; the resulting image is the sharpened version of the original. Region boundaries and edges are closely related: because there is often a sharp adjustment in intensity at region boundaries, these edges are used to segment the image into different objects. Each object presents different salient features in different views, such as the top view, left side view, right side view, rear view and bottom view; hence, from the salient features detected for an object we can also narrow down the view in which the object is seen, which helps in performing Object Recognition. This technique can be used in an Unmanned Aerial Vehicle (UAV), an aircraft with no pilot on board that can be remotely controlled (e.g. flown by a pilot at a ground control station) or can fly autonomously based on pre-programmed flight plans or more complex dynamic automation systems. Unmanned Aerial Vehicles are used for detecting various objects and for attacking infiltrated ground targets.
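As a rough illustration of the remaining two stages described above — segmenting the sharpened image along its edges and narrowing down the view from detected salient features — the following sketch makes several assumptions not specified in this publication: the gradient threshold, the toy per-view feature templates, and the helper names (`segment_objects`, `classify_view`, `VIEW_TEMPLATES`) are all hypothetical.

```python
import numpy as np
from scipy import ndimage

# Hypothetical example: each view is characterised by a set of salient features.
VIEW_TEMPLATES = {
    "top view": {"roof", "wings"},
    "left side view": {"left door", "wheels"},
    "right side view": {"right door", "wheels"},
    "rear view": {"tail", "wheels"},
    "bottom view": {"undercarriage"},
}


def segment_objects(sharpened: np.ndarray, edge_threshold: float = 40.0) -> list:
    """Treat sharp intensity adjustments as region boundaries and return one
    boolean mask per connected non-edge region (one candidate object each)."""
    gy, gx = np.gradient(sharpened.astype(np.float64))
    edges = np.hypot(gx, gy) > edge_threshold   # region-boundary mask
    labels, count = ndimage.label(~edges)       # connected non-edge components
    return [labels == i for i in range(1, count + 1)]


def classify_view(detected_features: set) -> str:
    """Narrow down the view: pick the one whose expected salient features
    overlap most with the features detected for the object."""
    return max(VIEW_TEMPLATES,
               key=lambda view: len(VIEW_TEMPLATES[view] & detected_features))
```

Under these toy templates, `classify_view({"tail", "wheels"})` would return `"rear view"`; a real system would substitute learned feature detectors and templates for the hard-coded sets.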
Claims
1. In this invention we have different images, each containing different objects. We can perform edge detection and segment an image into its different objects by sharpening the edges of the image. The first step in producing the sharpened version of an image is to blur the image slightly (computing each pixel from its neighbouring pixels); the original image and the blurred version are then compared one pixel at a time. If an original pixel is brighter than the corresponding pixel of the blurred version it is brightened further, and if it is darker it is darkened further; the resulting image is the sharpened version of the original. Region boundaries and edges are closely related: because there is often a sharp adjustment in intensity at region boundaries, these edges are used to segment the image into different objects. Each object presents different salient features in different views, such as the top view, left side view, right side view, rear view and bottom view; hence, from the salient features detected for an object we can also narrow down the view in which the object is seen, which helps in performing Object Recognition. This technique can be used in an Unmanned Aerial Vehicle (UAV), an aircraft with no pilot on board that can be remotely controlled (e.g. flown by a pilot at a ground control station) or can fly autonomously based on pre-programmed flight plans or more complex dynamic automation systems. Unmanned Aerial Vehicles are used for detecting various objects and for attacking infiltrated ground targets. The above novel technique of performing View Based Object Detection in images is the claim for this invention.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/IB2018/051592 (WO2019175620A1) | 2018-03-11 | 2018-03-11 | View based object detection in images |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/IB2018/051592 (WO2019175620A1) | 2018-03-11 | 2018-03-11 | View based object detection in images |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019175620A1 (en) | 2019-09-19 |
Family
ID=67907466
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2018/051592 (WO2019175620A1, Ceased) | View based object detection in images | 2018-03-11 | 2018-03-11 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2019175620A1 (en) |
2018
- 2018-03-11: WO application PCT/IB2018/051592 filed (WO2019175620A1); legal status: not active, Ceased
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CA2302759A1 (en) * | 1997-11-05 | 1999-05-14 | British Aerospace Public Limited Company | Automatic target recognition apparatus and process |
| US8391645B2 (en) * | 2003-06-26 | 2013-03-05 | DigitalOptics Corporation Europe Limited | Detecting orientation of digital images using face detection information |
| EP1835460A1 (en) * | 2005-01-07 | 2007-09-19 | Sony Corporation | Image processing system, learning device and method, and program |
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| US11526998B2 (en) | Methods and system for infrared tracking | |
| US20190265734A1 (en) | Method and system for image-based object detection and corresponding movement adjustment maneuvers | |
| CN108255198B (en) | Shooting cradle head control system and control method under unmanned aerial vehicle flight state | |
| US9355463B1 (en) | Method and system for processing a sequence of images to identify, track, and/or target an object on a body of water | |
| US10754354B2 (en) | Hover control | |
| US8446468B1 (en) | Moving object detection using a mobile infrared camera | |
| US11319086B2 (en) | Method and system for aligning a taxi-assist camera | |
| Nagarani et al. | Unmanned Aerial vehicle’s runway landing system with efficient target detection by using morphological fusion for military surveillance system | |
| US20140314270A1 (en) | Detection of floating objects in maritime video using a mobile camera | |
| Mukadam et al. | Detection of landing areas for unmanned aerial vehicles | |
| WO2018039925A1 (en) | Method and system for detecting obstructive object at projected locations within images | |
| US10210389B2 (en) | Detecting and ranging cloud features | |
| CA3091897A1 (en) | Image processing device, flight vehicle, and program | |
| KR20170075444A (en) | Apparatus and method for image processing | |
| CN108257179B (en) | Image processing method | |
| Ogawa et al. | Automated counting wild birds on UAV image using deep learning | |
| WO2019175620A1 (en) | View based object detection in images | |
| WO2020114432A1 (en) | Water detection method and apparatus, and unmanned aerial vehicle | |
| Wagoner et al. | Survey on detection and tracking of UAVs using computer vision | |
| US20200283163A1 (en) | Flight vision system and method for presenting images from the surrounding of an airborne vehicle in a flight vision system | |
| Ruf et al. | Enhancing automated aerial reconnaissance onboard UAVs using sensor data processing-characteristics and pareto front optimization | |
| Ma et al. | Video image clarity algorithm research of USV visual system under the sea fog | |
| CN108961311B (en) | Dual-mode rotor craft target tracking method | |
| Singh et al. | Investigating feasibility of target detection by visual servoing using UAV for oceanic applications | |
| Eaton et al. | Image segmentation for automated taxiing of unmanned aircraft |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18910171; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18910171; Country of ref document: EP; Kind code of ref document: A1 |