GB2447246A - Applying pattern recognition to low resolution, wide FOV images to control further image capture - Google Patents
- Publication number
- GB2447246A (application GB0704325A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- image
- capture device
- image capture
- images
- interest
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G06K9/00228—
-
- G06K9/00604—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
-
- H04N5/23203—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Ophthalmology & Optometry (AREA)
- Collating Specific Patterns (AREA)
- Image Input (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
An image capture device is used to acquire a first, relatively low resolution image of a relatively wide field of view (FOV) containing a plurality of objects. A pattern recognition algorithm is then applied to the first image so as to identify objects of interest, and at least one object of interest is selected from the first image and its coordinates within the image are determined. The determined coordinates are then used to control either the image capture device or an additional image capture device so as to obtain a sequence of second, relatively high resolution images of a relatively narrow field of view containing the at least one object of interest. A corresponding method is also independently claimed. The first image may be time multiplexed with the second images. The imaging device may perform digital zoom, pan and/or tilt to select the narrow field of view; alternatively, optical zoom and electromechanical pan and/or tilt may be used. The system may be applied to face recognition, with face and/or eye locating being performed.
Description
Controlled high resolution sub-image capture with time domain multiplexed high speed full field of view reference video stream for image based biometric applications
Field of the invention
The invention relates to apparatus and methods for capturing high definition, high speed images of object specific targets in a range of biometric applications such as iris recognition and particularly, but not exclusively, for the capture of facial images for facial recognition systems.
Background to the invention
The performance of software based facial recognition systems is determined by many factors. Enhancements in this technology now mean that many facial parameters are taken into consideration by the software. These include facial geometry, specific identifying marks such as moles, and skin colouration and, more specifically, its variation across the facial area. However, one common factor is the requirement for high-resolution, good quality electronic images to enable the software to analyse these features; this is a relatively simple task to achieve if the subject poses at a specific distance from a high definition camera under controlled lighting conditions. In such a defined, posed situation, all of the key elements for good image capture can be controlled.
Additionally, the image transfer time from the image capture device to the computer performing the facial recognition task is not critical. In uncontrolled environments the target may be moving and not looking directly at the camera. This invention describes a technique that acquires good quality images of the facial features of targets in both ideal and non-ideal locations, where the target may be stationary or moving and fast image capture is required. By adapting both the imaged area and the image resolution, the method transmits the minimal data required for good facial recognition. The invention also provides the means to acquire good quality facial images in areas where significant numbers of people are present. The method removes the requirement for moving parts, filtering, encoding or other image processing at the image source. As a result the invention provides optimal image quality of single or multiple facial areas within a viewing region whilst maintaining a high frame rate.

It must also be stated that any image based biometric, regardless of the specifics of the biometric and the light spectrum used, places a similar requirement upon the image capture device. In image based biometric recognition in general, high resolution, noise free images of the area used for analysis are highly beneficial, and multiple instances of these biometric regions may be present in any image captured using the full viewing cone of the capture device. Iris recognition is an example where the eyes form the object specific target within the full facial image. Analysis is generally constrained by time in any practical application; hence high throughput of relevant image data to a biometric recognition engine is of great importance. For reasons of clarity and brevity, we will continue to address the issue of face recognition, though it must be borne in mind that any image-based biometric could be substituted for the face.
In areas with significant target numbers, the opportunity to locate a large number of faces arises provided each face occupies a small area of the available image. However, this prevents recognition due to poor image detail. Within densely populated places, a high-resolution camera image could be captured and processed with high-resolution facial detail, but transmission of such images is slow, and a key element of good facial recognition in such dynamic situations is the use of a sequence of closely time-related images. Face recognition systems make a comparison between a previously taken reference image and current images. Because of this comparison process, different pose positions present problems to the recognition system. To minimise this problem in dynamic situations, a sequence of images taken in quick succession allows the recognition system to select the optimum pose within the sequence. It is therefore important to maintain high frame rates during image capture. This invention uses dynamic object specific digital pan and zoom, where a down sampled, low resolution (typically VGA), high-speed representation of the full high-resolution image is time multiplexed with high resolution sub regions containing facial detail. The data reduction allows multiple sub regions to be transmitted within single frame periods without significant reduction in frame rates. Such images may be derived from 1.3 to 8Mpixel sensors, and this figure may rise as camera technology improves.
The VGA resolution image is first transmitted to the host facial recognition computer, where facial areas are located by head and/or eye location algorithms running on the host. The coordinates of the target specific area are passed back to the camera. The camera applies digital pan and zoom to send a high-resolution subset of the full image area, but transmits only the data containing the areas of the identified targets. Several high-speed transmissions of these areas can be made to optimise the facial position for the image recognition software. This approach provides target area location and high-resolution facial images at frame rates significantly higher than 25fps. This has the potential for capturing and processing crowded areas within the image area at any one time; with suitable lenses this may be achieved at a distance of tens of metres from the camera.
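As an informal illustration of the control loop just described, the following Python sketch shows how a host might alternate between low-resolution reference frames and requested high-resolution windows. The `Camera` and `detect_faces` interfaces are hypothetical placeholders for this sketch, not the actual device API disclosed here.

```python
from dataclasses import dataclass

@dataclass
class Window:
    x0: int  # top-left corner, full-sensor pixel coordinates
    y0: int
    x1: int  # bottom-right corner
    y1: int

def control_loop(camera, detect_faces):
    """camera.get_reference() is assumed to return a ~640x480 down-sampled
    array; camera.get_windows(wins) to return high-resolution crops of the
    listed windows read from the full-resolution sensor."""
    while True:
        reference = camera.get_reference()      # low-res, full field of view
        face_boxes = detect_faces(reference)    # head/eye locator running on the host
        if not face_boxes:
            continue
        # Scale detector output (reference-image coordinates) up to sensor coordinates.
        sx = camera.sensor_width / reference.shape[1]
        sy = camera.sensor_height / reference.shape[0]
        windows = [Window(int(x0 * sx), int(y0 * sy), int(x1 * sx), int(y1 * sy))
                   for (x0, y0, x1, y1) in face_boxes]
        crops = camera.get_windows(windows)     # high-res facial sub-images only
        yield reference, windows, crops         # hand off to the recognition engine
```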
Many existing facial recognition systems employ standard closed circuit TV (CCTV) cameras based on CCD technology to acquire the image for recognition. In security applications the pixel resolution of these cameras is often below the normal maximums of 752(H)x582(V) for PAL and 768(H)x494(V) for NTSC. In non-posed applications, mechanical control of camera tilt, pan and zoom can be employed to optimise the image size, but the response time of these mechanical systems is often too slow to optimise each image capture [1]. Alternatively, a compromise may be made where the image of the facial area is a small subset of the total available image area, allowing target acquisition over a larger area. In this situation the active area is large but the detail is insufficient for good facial recognition. In PAL applications where some level of posing is possible, the facial image size may be around 50% of the total available image (376x291 pixels), and in non-posed applications, facial image sizes of 10% of the total available image (75x58 pixels) may be the only ones available. With a resolution of only 58 pixels in the vertical and a head size of 280mm, the geometry of the face can only be measured to a resolution of 280mm/58 = 4.8mm. This results in an uncertainty on the size of features such as eye sockets of 4.8mm x 2 = 9.6mm. This poor geometric resolution limits the accuracy of the geometric component of the facial recognition system. If skin defects such as moles are considered, these are generally smaller than normal features and the ability to identify these marks is significantly compromised.
An alternative is to use high resolution CMOS or CCD imaging technology that currently produces images with resolutions up to 8Mpixels, a figure that will continue to rise as imaging technology develops. A typical 5 megapixel device at present provides a resolution of 2592(H)x1944(V). A like for like comparison with the aforementioned PAL system gives a 194 pixel resolution in the vertical for an image area of 10% of the total. This results in a geometric resolution of 280mm/194 = 1.4mm under the same conditions. The 8Mpixel versions reduce this further to only 1.1mm. This is a significant improvement but requires a large increase in data transmission between the camera and the processing computer.
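The geometric-resolution figures quoted above can be reproduced with a short calculation. The 8Mpixel sensor geometry (3264x2448) is an assumption made here purely for illustration; the other numbers come from the text.

```python
# Worked version of the geometric-resolution comparison above.
HEAD_HEIGHT_MM = 280          # nominal head size used in the text

def geometric_resolution(vertical_pixels_full, face_fraction=0.10):
    """Smallest facial feature step resolvable when the face occupies
    `face_fraction` of the vertical field of view."""
    face_pixels = vertical_pixels_full * face_fraction
    return HEAD_HEIGHT_MM / face_pixels

print(geometric_resolution(582))    # PAL CCD:       ~4.8 mm per pixel
print(geometric_resolution(1944))   # 5 Mpixel CMOS: ~1.4 mm per pixel
print(geometric_resolution(2448))   # 8 Mpixel CMOS (assumed 3264x2448): ~1.1 mm
```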
In the PAL system described, the total number of pixels is 752x582 = 438K, whilst the CMOS system has a total of 2592x1944 = 5M, which is an 11-fold increase in the data transmission requirement. In addition, the CMOS device produces a 10bit digital value for each pixel, unlike the PAL system, which transmits an analogue representation of the image. As transmission standards such as USB are 8bit or 16bit oriented, 5M x 16bit (80Mbit) is required to transfer a full frame with 10bit colour depth. With standard transmission methods such as USB1 and full speed USB2, only 12Mbit/s speeds are available; this determines a frame transfer time of 6.6 seconds. In many applications this is much too slow. Other transmission formats, such as high speed USB2, transmit data at 480Mbit/s, but this only achieves a frame rate of 6fps. This is significantly slower than the 25fps associated with PAL systems. The problem becomes even more difficult if an 8Mpixel or higher resolution imaging device is employed.
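A back-of-envelope sketch of these transfer times, using the 16bit-per-pixel framing described above; the exact results differ slightly from the rounded figures in the text.

```python
# Frame-transfer times implied by the figures above
# (10-bit pixels padded to 16-bit words for byte-oriented links).
def frame_transfer_seconds(width, height, bits_per_pixel, link_mbit_per_s):
    frame_bits = width * height * bits_per_pixel
    return frame_bits / (link_mbit_per_s * 1e6)

print(frame_transfer_seconds(2592, 1944, 16, 12))    # full-speed USB (12 Mbit/s): ~6.7 s
print(frame_transfer_seconds(2592, 1944, 16, 480))   # high-speed USB2 (480 Mbit/s): ~0.17 s, i.e. ~6 fps
```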
Coding systems may be employed within the imaging device, for example JPEG or MPEG. This will increase the image transfer rate but requires additional hardware within the camera to provide the coding functions and requires decoding at the host computer.
Compression techniques also reduce image quality if high levels of compression are employed; this limits the level of compression available and hence reduces the available frame rate.
Data reduction techniques described in US patent application 8,829,391, 'Adaptive resolution system and method for providing efficient low bit rate transmission of image data for distributed applications', describe complex filtering of the data whereby data in the target area has high resolution whilst data in the surrounding area has a low level of detail. This is based on the log polar mapping technique [2,3] of US patent 5,103,306, 'Digital image compression employing a resolution gradient', a technique highly suited to images containing a single face where a single region of interest exists. However, it becomes much more complex when multiple faces, requiring multiple overlaid foveation patterns, exist within a single image. This increase in complexity significantly increases the computational load and raises the required bandwidth of the transmission if downgraded image quality is to be avoided. The alternative is to restrict the method to single target foveation, selecting only single facial images at any time regardless of the actual number of interest regions present in the image.
This new invention uses multimode transmission with target specific area transmission to achieve high definition, high-speed transmission of facial images. Its implementation requires minimal computation within the video data stream, requiring only the formatting of data to conform to the chosen output format. The embedded controller, operating outside the main video data stream, is consequently able to operate at lower speeds and implements the main control functionality.
High-speed head and eye location algorithms have been available for some time and are used in the early stage of facial recognition to locate and isolate the head from the rest of the image. The resulting image is then processed for recognition. This invention uses head location algorithms running on the host face recognition processing computer to locate the position of a face in a low-resolution (typically 640x480 pixel) image. This image is derived from a high resolution CMOS or CCD imaging sensor, where the high-resolution image has been down sampled from, for example, 5Mpixels to 300Kpixels. The down sampled image is transmitted via a high-speed digital link to the processing host.
If high-speed USB 2 is employed, frame rates of around 100fps can be transmitted at VGA resolution; the actual frame rate is determined by the speed of the image capture device and the number of sub frames transmitted per full frame. On receipt of the low-resolution image, the head is located and the coordinates of the head location are transmitted back to the imaging sensor. The coordinates either define the location of a default window size, at which point the entire default window area is transmitted, or alternatively two coordinates are used, determined by the face location algorithm, that define two diagonally opposed corners. In the case of the opposed corners, a variable window size results where the transmitted image area contains minimal data not relating to the facial region of interest. Image transmission rates in this system are therefore increased due to the high level of redundant data removal. The high-resolution subset of the image area is transmitted back to the host processor, allowing full facial recognition to be applied to the high-resolution image area of the facial region of interest. Once the facial area is located, the coordinates of the facial area may be recalculated from the high-resolution current image. The updated coordinates may then be retransmitted to the imaging system, allowing tracking of the subject within the total available image area.
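The two windowing request forms described here (a fixed default window anchored at one coordinate, or a variable window defined by two opposed corners) could be represented as follows. The message layout and the default window size are illustrative assumptions, not the camera's actual protocol.

```python
from dataclasses import dataclass
from typing import Optional

DEFAULT_WINDOW = (300, 200)   # assumed default width x height in sensor pixels

@dataclass
class WindowRequest:
    x0: int
    y0: int
    x1: Optional[int] = None   # present only for the variable-size (two-corner) form
    y1: Optional[int] = None

    def bounds(self):
        """Resolve to (x0, y0, x1, y1) in full-sensor coordinates."""
        if self.x1 is None or self.y1 is None:
            w, h = DEFAULT_WINDOW
            return self.x0, self.y0, self.x0 + w, self.y0 + h
        return self.x0, self.y0, self.x1, self.y1

# A default-size window anchored at a detected face, and a tight two-corner window:
print(WindowRequest(900, 400).bounds())             # (900, 400, 1200, 600)
print(WindowRequest(900, 400, 1124, 641).bounds())  # (900, 400, 1124, 641)
```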
Due to the high data rate transmission available with transmission media such as USB 2 and the high level of data reduction within this invention, multiple images can be transmitted within each frame period. This allows time division multiplexing to intersperse full area images with high-resolution facial images within a single frame period. The following example is based on a 640x480 pixel reference image with three facial windows. Each facial window is determined from the previous reference image. This allows the system to track and identify the subject in real time with high frame rates.
Full reference image: 640x480x16bit = 4.9Mbit
Facial window 1: 300x200x16bit = 1Mbit
Facial window 2: 600x400x16bit = 3.8Mbit
Facial window 3: 700x450x16bit = 5Mbit
Total transmission required = 14.7Mbit
Data rate available on USB2 = 480Mbit/s
Frame rate = 480/14.7 = 32 frames/sec

These high frame rate, high-resolution images allow multiple images of a subject to be acquired while in motion, providing the opportunity to capture multiple facial images with the optimal facial orientation.
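The frame-budget arithmetic in this example can be checked with a few lines of Python (the small difference from 14.7Mbit comes from summing the per-window rounded values in the text):

```python
# One VGA reference frame plus three facial windows time-multiplexed over high-speed USB 2.
BITS = 16
windows = [(640, 480), (300, 200), (600, 400), (700, 450)]   # reference + 3 facial windows

total_mbit = sum(w * h * BITS for w, h in windows) / 1e6
frames_per_second = 480 / total_mbit          # 480 Mbit/s link budget

print(f"{total_mbit:.1f} Mbit per multiplexed frame")   # ~14.8 Mbit
print(int(frames_per_second), "frames/sec")             # ~32
```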
In applications where the distance between the camera and the host processor is too large for low cost digital transmission to be implemented, the digital interface may be replaced or supplemented by PAL or NTSC analogue transmission. An auxiliary digital communications line is employed to dynamically pass the digital pan, zoom and other commands to the camera system. If the auxiliary communications line is RS485 or RS422, long cable runs may be employed, allowing remote mounting of camera systems; however, frame rates in this configuration are reduced due to the limited transmission bandwidth.
Fig 1 shows the image of the eye region taken with a CCD camera with a resolution of 752x582 pixels. The camera has been rotated by 90 degrees to improve vertical resolution and the resulting image rotated back through 90 degrees. The subject's facial area occupies approximately 25% of the full image area. Fig 2 shows an image of the same eye region taken with a 3.1Mpixel camera with the same total image area. Only the 640x480 pixels in the facial area were transmitted, thus reducing the data transmission demand significantly compared to transmission of the full 3.1Mpixel image. Dynamic object specific systems transmit only the required data; therefore resolution and picture quality are significantly enhanced for a given data transmission rate, which results in much higher accuracy within the facial recognition system.
Fig 3 shows the image from a 3.1Mpixel camera viewing an area where faces may appear. The object specific head and eye location identifies the area of the image that holds a valid facial region and passes the coordinates to the camera. The camera then selects this region and transmits the high definition version of this image area (Fig 4).
This process requires the transmission of a 640x480 pixel reference image and a 224x241 pixel facial image. The resulting data is 5.8Mbit at 16bit colour depth. With 480Mbit/second on high speed USB 2 the transmission time is around 12 milliseconds.
Fig 5 shows an implementation of the invention in which the high-resolution image is converted to a digital bitmap. Sub sampling and windowing are performed by the image sensor under the control of the control microprocessor, which in turn derives the windowing control data from the remote host computer. Image data is transmitted to the host from the camera via a suitable interface such as USB 2 or FireWire. The digital communications may incorporate suitable encoding such as JPEG or MPEG. In this implementation the reference full frame is transmitted first. The host determines the window coordinates, and a sequence of sub frames is requested by the host. Each sub frame is derived from a new image captured by the sensing device and is therefore not captured at exactly the same instant as the reference image.
Fig 6 shows an implementation in which the operation is as in Fig 5 with the addition of a full frame memory. In this implementation the full high-resolution image is stored in memory. Down sampling is then performed by reading subsets of the stored memory. Windowing is also performed by reading subsets of the stored data. This implementation generates a reference image and captures subset images at the same instant in time.
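A minimal sketch of this full-frame-memory variant, assuming the stored frame is available as a NumPy array; the function name and window coordinates are illustrative, not the actual camera firmware.

```python
import numpy as np

def reference_and_windows(full_frame, windows, ref_size=(480, 640)):
    """full_frame: 2-D array holding the stored high-resolution image.
    windows: list of (y0, x0, y1, x1) sub-regions in full-frame coordinates.
    Both outputs are read from the same stored frame, so they share a capture instant."""
    step_y = full_frame.shape[0] // ref_size[0]
    step_x = full_frame.shape[1] // ref_size[1]
    reference = full_frame[::step_y, ::step_x][:ref_size[0], :ref_size[1]]  # decimated read-out
    crops = [full_frame[y0:y1, x0:x1] for (y0, x0, y1, x1) in windows]      # windowed read-out
    return reference, crops

frame = np.zeros((1944, 2592), dtype=np.uint16)          # e.g. one stored 5 Mpixel sensor frame
ref, crops = reference_and_windows(frame, [(400, 900, 641, 1124)])
print(ref.shape, crops[0].shape)                          # (480, 640) (241, 224)
```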
Fig 7 shows an implementation in which the image is transferred to the host via standard analogue transmission media such as PAL or NTSC. In this implementation sub frame transmission speeds remain the same as full frame transmission speeds and must be interposed with the full frame reference image. Image transfer rates in this embodiment are lower than in the digital transmission implementations.

The preferred features of the invention are applicable to all aspects of the invention and may be used in any possible combination.
Throughout the description and claims of this specification, the words "comprise" and "contain" and variations of the words, for example "comprising" and "comprises", mean "including but not limited to", and are not intended to (and do not) exclude other components, integers, moieties, additives or steps.
Throughout the description and claims of this specification, the singular encompasses the plural unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.
References

[1] Alessio Del Bue, Dorin Comaniciu, Visvanathan Ramesh, Carlo Regazzoni, "Smart Cameras With Real-Time Video Object Generation", IEEE International Conference on Image Processing (ICIP'02), Rochester, NY, Vol. 3, 429-432, 2002.

[2] Dorin Comaniciu, Fabio Berton, Visvanathan Ramesh, "Adaptive Resolution System for Distributed Surveillance", Real-Time Imaging, Vol. 8, No. 5, 427-437, 2002.

[3] Dorin Comaniciu, Visvanathan Ramesh, "Robust Detection and Tracking of Human Faces with an Active Camera", IEEE International Workshop on Visual Surveillance, Dublin, Ireland, 11-18, 2000.
Claims (15)
- CLAIMS: 1. A method of image processing for object recognition applications, the method comprising the steps of: using an image capture device to acquire a first, relatively low resolution image of a relatively wide field of view containing a plurality of objects; applying a pattern recognition algorithm to the first image so as to identify objects of interest; selecting at least one object of interest from the first image and determining its coordinates within the image; and using the determined coordinates to control either the image capture device or an additional image capture device so as to obtain a sequence of second, relatively high resolution images of a relatively narrow field of view containing the at least one object of interest.
- 2. A method according to claim 1, wherein the first image is time multiplexed with the sequence of second images.
- 3. A method according to any preceding claim, wherein a single image capture device is used to acquire both the first and the second images.
- 4. A method according to claim 1 or 2, wherein a first image capture device acquires the first image, and a second image capture device acquires the second images.
- 5. A method according to any preceding claim, wherein the image capture device is controlled so as to perform digital zoom and optionally digital pan and/or tilt in order to select the relatively narrow field of view.
- 6. A method according to any one of claims 1 to 4, wherein the image capture device is controlled so as to perform optical zoom and optionally electromechanical pan and/or tilt in order to select the relatively narrow field of view.
- 7. A method according to any preceding claim, wherein only the second images are used for object recognition processing.
- 8. Apparatus for image processing for object recognition applications, the apparatus comprising: an image capture device adapted to acquire a first, relatively low resolution image of a relatively wide field of view containing a plurality of objects; means for applying a pattern recognition algorithm to the first image so as to identify objects of interest; means for selecting at least one object of interest from the first image and determining its coordinates within the image; and means for using the determined coordinates to control either the image capture device or an additional image capture device so as to obtain a sequence of second, relatively high resolution images of a relatively narrow field of view containing the at least one object of interest.
- 9. Apparatus as claimed in claim 8, comprising means for time multiplexing the first image with the sequence of second images.
- 10. Apparatus as claimed in claim 8 or 9, comprising a single image capture device for acquiring both the first and the second images.
- 11. Apparatus as claimed in claim 8 or 9, comprising a first image capture device for acquiring the first image, and a second image capture device for acquiring the second images.
- 12. Apparatus as claimed in any one of claims 8 to 11, wherein the image capture device is controlled so as to perform digital zoom and optionally digital pan and/or tilt in order to select the relatively narrow field of view.
- 13. Apparatus as claimed in any one of claims 8 to 11, wherein the image capture device is controlled so as to perform optical zoom and optionally electromechanical pan and/or tilt in order to select the relatively narrow field of view.
- 14. A method of image processing for object recognition applications substantially as hereinbefore described with reference to or as shown in the accompanying drawings.
- 15. Apparatus for image processing for object recognition applications substantially as hereinbefore described with reference to or as shown in the accompanying drawings.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB0704325.0A GB2447246B (en) | 2007-03-07 | 2007-03-07 | Controlled high resolution sub-image capture with time domain multiplexed high speed full field of view reference video stream for image biometric application |
| PCT/GB2008/050144 WO2008107713A1 (en) | 2007-03-07 | 2008-03-03 | Controlled high resolution sub-image capture with time domain multiplexed high speed full field of view reference video stream for image based biometric applications |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB0704325.0A GB2447246B (en) | 2007-03-07 | 2007-03-07 | Controlled high resolution sub-image capture with time domain multiplexed high speed full field of view reference video stream for image biometric application |
Publications (3)
| Publication Number | Publication Date |
|---|---|
| GB0704325D0 (en) | 2007-04-11 |
| GB2447246A true GB2447246A (en) | 2008-09-10 |
| GB2447246B GB2447246B (en) | 2012-04-18 |
Family
ID=37966027
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB0704325.0A Expired - Fee Related GB2447246B (en) | 2007-03-07 | 2007-03-07 | Controlled high resolution sub-image capture with time domain multiplexed high speed full field of view reference video stream for image biometric application |
Country Status (2)
| Country | Link |
|---|---|
| GB (1) | GB2447246B (en) |
| WO (1) | WO2008107713A1 (en) |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2352110A1 (en) * | 2009-12-01 | 2011-08-03 | HTC Corporation | Object image cropping method, object image cropping system and digital image device |
| EP2453386A1 (en) * | 2010-11-11 | 2012-05-16 | LG Electronics Inc. | Multimedia device, multiple image sensors having different types and method for controlling the same |
| EP2453596A3 (en) * | 2010-11-11 | 2012-08-08 | LG Electronics Inc. | Multimedia device, multiple image sensors having different types and method for controlling the same |
| WO2013093829A3 (en) * | 2011-12-23 | 2013-11-21 | Nokia Corporation | Controlling image capture and/or controlling image processing |
| EP2153374A4 (en) * | 2007-05-24 | 2014-03-05 | Digitaloptics Corp Europe Ltd | METHOD AND APPARATUS FOR IMAGE PROCESSING |
| GB2523356A (en) * | 2014-02-21 | 2015-08-26 | Tobii Technology Ab | Apparatus and method for robust eye/gaze tracking |
| US9886630B2 (en) | 2014-02-21 | 2018-02-06 | Tobii Ab | Apparatus and method for robust eye/gaze tracking |
| EP3534344A1 (en) * | 2018-03-02 | 2019-09-04 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method and program |
| US10572008B2 (en) | 2014-02-21 | 2020-02-25 | Tobii Ab | Apparatus and method for robust eye/gaze tracking |
| WO2022081142A1 (en) * | 2020-10-13 | 2022-04-21 | Google Llc | Distributed sensor data processing using multiple classifiers on multiple devices |
| US11657608B1 (en) | 2019-08-05 | 2023-05-23 | Genetec Inc. | Method and system for video content analysis |
| US12057126B2 (en) | 2020-10-13 | 2024-08-06 | Google Llc | Distributed sensor data processing using multiple classifiers on multiple devices |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101822661B1 (en) | 2011-10-27 | 2018-01-26 | 삼성전자주식회사 | Vision recognition apparatus and method |
| US9667872B2 (en) * | 2012-12-05 | 2017-05-30 | Hewlett-Packard Development Company, L.P. | Camera to capture multiple images at multiple focus positions |
| US9639741B2 (en) | 2014-04-14 | 2017-05-02 | International Business Machines Corporation | Facial recognition with biometric pre-filters |
| CN107292228A (en) * | 2017-05-05 | 2017-10-24 | 珠海数字动力科技股份有限公司 | A kind of method for accelerating face recognition search speed |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO1999045511A1 (en) * | 1998-03-04 | 1999-09-10 | The Trustees Of Columbia University In The City Of New York | A combined wide angle and narrow angle imaging system and method for surveillance and monitoring |
| US20030071891A1 (en) * | 2001-08-09 | 2003-04-17 | Geng Z. Jason | Method and apparatus for an omni-directional video surveillance system |
| WO2007014216A2 (en) * | 2005-07-22 | 2007-02-01 | Cernium Corporation | Directed attention digital video recordation |
| GB2431312A (en) * | 2004-07-19 | 2007-04-18 | Grandeye Ltd | Automatically expanding the zoom capability of a wide-angle video camera |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6714665B1 (en) * | 1994-09-02 | 2004-03-30 | Sarnoff Corporation | Fully automated iris recognition system utilizing wide and narrow fields of view |
| CA2359269A1 (en) * | 2001-10-17 | 2003-04-17 | Biodentity Systems Corporation | Face imaging system for recordal and automated identity confirmation |
| JP2005173787A (en) * | 2003-12-09 | 2005-06-30 | Fujitsu Ltd | Image processing device that detects and recognizes moving objects |
-
2007
- 2007-03-07 GB GB0704325.0A patent/GB2447246B/en not_active Expired - Fee Related
-
2008
- 2008-03-03 WO PCT/GB2008/050144 patent/WO2008107713A1/en active Application Filing
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO1999045511A1 (en) * | 1998-03-04 | 1999-09-10 | The Trustees Of Columbia University In The City Of New York | A combined wide angle and narrow angle imaging system and method for surveillance and monitoring |
| US20030071891A1 (en) * | 2001-08-09 | 2003-04-17 | Geng Z. Jason | Method and apparatus for an omni-directional video surveillance system |
| GB2431312A (en) * | 2004-07-19 | 2007-04-18 | Grandeye Ltd | Automatically expanding the zoom capability of a wide-angle video camera |
| WO2007014216A2 (en) * | 2005-07-22 | 2007-02-01 | Cernium Corporation | Directed attention digital video recordation |
Cited By (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2153374A4 (en) * | 2007-05-24 | 2014-03-05 | Digitaloptics Corp Europe Ltd | METHOD AND APPARATUS FOR IMAGE PROCESSING |
| EP2352110A1 (en) * | 2009-12-01 | 2011-08-03 | HTC Corporation | Object image cropping method, object image cropping system and digital image device |
| EP2453386A1 (en) * | 2010-11-11 | 2012-05-16 | LG Electronics Inc. | Multimedia device, multiple image sensors having different types and method for controlling the same |
| EP2453596A3 (en) * | 2010-11-11 | 2012-08-08 | LG Electronics Inc. | Multimedia device, multiple image sensors having different types and method for controlling the same |
| US8577092B2 (en) | 2010-11-11 | 2013-11-05 | Lg Electronics Inc. | Multimedia device, multiple image sensors having different types and method for controlling the same |
| US9025023B2 (en) | 2010-11-11 | 2015-05-05 | Lg Electronics Inc. | Method for processing image data in television having multiple image sensors and the television for controlling the same |
| US9473702B2 (en) | 2011-12-23 | 2016-10-18 | Nokia Technologies Oy | Controlling image capture and/or controlling image processing |
| WO2013093829A3 (en) * | 2011-12-23 | 2013-11-21 | Nokia Corporation | Controlling image capture and/or controlling image processing |
| US9886630B2 (en) | 2014-02-21 | 2018-02-06 | Tobii Ab | Apparatus and method for robust eye/gaze tracking |
| US9646207B2 (en) | 2014-02-21 | 2017-05-09 | Tobii Ab | Apparatus and method for robust eye/gaze tracking |
| GB2523356A (en) * | 2014-02-21 | 2015-08-26 | Tobii Technology Ab | Apparatus and method for robust eye/gaze tracking |
| US10282608B2 (en) | 2014-02-21 | 2019-05-07 | Tobii Ab | Apparatus and method for robust eye/gaze tracking |
| US10572008B2 (en) | 2014-02-21 | 2020-02-25 | Tobii Ab | Apparatus and method for robust eye/gaze tracking |
| EP3534344A1 (en) * | 2018-03-02 | 2019-09-04 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method and program |
| CN110225318A (en) * | 2018-03-02 | 2019-09-10 | 佳能株式会社 | Image processing equipment and image processing method |
| US10841458B2 (en) | 2018-03-02 | 2020-11-17 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium |
| CN110225318B (en) * | 2018-03-02 | 2021-11-02 | 佳能株式会社 | Image processing device and image processing method |
| US11657608B1 (en) | 2019-08-05 | 2023-05-23 | Genetec Inc. | Method and system for video content analysis |
| WO2022081142A1 (en) * | 2020-10-13 | 2022-04-21 | Google Llc | Distributed sensor data processing using multiple classifiers on multiple devices |
| US12057126B2 (en) | 2020-10-13 | 2024-08-06 | Google Llc | Distributed sensor data processing using multiple classifiers on multiple devices |
Also Published As
| Publication number | Publication date |
|---|---|
| GB2447246B (en) | 2012-04-18 |
| GB0704325D0 (en) | 2007-04-11 |
| WO2008107713A1 (en) | 2008-09-12 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| GB2447246A (en) | Applying pattern recognition to low resolution, wide FOV images to control further image capture | |
| JP3870124B2 (en) | Image processing apparatus and method, computer program, and computer-readable storage medium | |
| US8264524B1 (en) | System for streaming multiple regions deriving from a wide-angle camera | |
| US10277901B2 (en) | Encoding a video stream having a privacy mask | |
| US7450165B2 (en) | Multiple-view processing in wide-angle video camera | |
| US20180068540A1 (en) | Image processing method | |
| US20190199898A1 (en) | Image capturing apparatus, image processing apparatus, control method, and storage medium | |
| US10110929B2 (en) | Method of pre-processing digital images, and digital image preprocessing system | |
| US20040001149A1 (en) | Dual-mode surveillance system | |
| US9886961B2 (en) | Audio watermark in a digital video | |
| US10235972B2 (en) | Bit rate controller and a method for limiting output bit rate | |
| WO2016114950A1 (en) | Watermarking digital images to increase bit depth |
| JP7050509B2 (en) | Image processing equipment, image processing methods, and programs | |
| CN112887587B (en) | Self-adaptive image data fast transmission method capable of carrying out wireless connection | |
| US20240296679A1 (en) | Transmission of a collage of detected objects in a video | |
| KR102474697B1 (en) | Image Pickup Apparatus and Method for Processing Images | |
| WO2006010910A1 (en) | Apparatus and method for capturing and transmitting images of a scene | |
| CN112261474A (en) | Multimedia video image processing system and processing method | |
| JP2021175085A (en) | Image processing device and program | |
| EP4492773A1 (en) | Video data processing technology for reducing transmission bandwidth | |
| CN113038147A (en) | Intelligent video parameter adjusting system | |
| CN1393111A (en) | Video communication using preferred secondary sensor | |
| WO2012119920A1 (en) | Monitoring device and method for monitoring a location | |
| EP3989587A1 (en) | Image processing device and method of pre-processing images of a video stream before encoding | |
| KR20240085151A (en) | Video failover recording |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 20160307 | PCNP | Patent ceased through non-payment of renewal fee | Effective date: 20160307 |