WO2010115020A2 - Color and pattern detection system - Google Patents
Color and pattern detection system
- Publication number
- WO2010115020A2 (PCT application PCT/US2010/029658)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- intensity values
- color
- loci
- image
- traffic signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/46—Measurement of colour; Colour measuring devices, e.g. colorimeters
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/46—Measurement of colour; Colour measuring devices, e.g. colorimeters
- G01J3/462—Computing operations in or between colour spaces; Colour management systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/46—Measurement of colour; Colour measuring devices, e.g. colorimeters
- G01J3/463—Colour matching
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/143—Sensing or illuminating at different wavelengths
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/48—Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30236—Traffic on road, railway or crossing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Definitions
- the present invention relates to a device and method for acquiring and processing color digital images to identify objects relevant in an automotive driving environment.
- a device and method is shown for recognition of a traffic signal light and the operational state of that light.
- High-speed digital cameras and image analysis systems have become commonly used for remote sensing and identification of information. These cameras are becoming more prevalent as attachments to automobiles, airplanes and other moving objects and vehicles. These cameras can generally acquire images of approaching objects with an image quality sufficient to identify the object(s) of interest.
- the systems aid drivers by warning of approaching or nearby dangers.
- the systems were first implemented as simple alternatives or additions to the rear view mirror, especially for large vehicles where visibility behind the vehicle is limited. More advanced systems warn of nearby vehicles or of driving too near the edge of a road, and aid in parking of a vehicle.
- Recognizing objects in digital images acquired in a driving system requires processing systems that can acquire an image of the relevant field of view surrounding the vehicle, differentiate the object of interest from the background within the image, recognize the shape and color of the object, relate that shape and color to a traffic sign or signal that conveys information relevant to the driver, and then alert the driver. This must all be done on a time scale such that the driver has time to react to the information.
- Typical stopping distances, including cognition, reaction time and vehicle response, are about 100 feet at 30 miles per hour and 300 feet at 60 miles per hour for an automobile. 30 miles per hour corresponds to 44 feet per second. Therefore, for a system to be usable it must acquire an image, process the image to identify an object and alert the driver at a distance of more than 100 feet.
- Example prior art patents and patent applications include: US patent application 20060203102, Yang, published September 2006; US patent application 20100033571, Fujita, published February 2010; and US patent application 20050036660, Otsuka, published February 2005.
DISCLOSURE OF THE INVENTION
- An imaging and processing system is described that meets the demanding needs of high speed imaging for remote recognition of objects and traffic signs and signals viewed under normal driving conditions.
- the system relies upon the use of an imager chip, a processing unit programmed to analyze the image and an output system to alert the driver.
- the system recognizes the red light of an electronic traffic signal.
- a traffic light detection system consists of an image acquisition device, a circle detection component, a stop light verification component and a notification component.
- the notification means is a sound generation device.
- the notification means is a light and in another embodiment the notification means is a video display alerting the driver to the location of the traffic signal displayed upon the acquired camera view of the scene.
- the circle detection component comprises a circular Hough transform that provides a transformed three-dimensional matrix based upon the original image. One of the dimensions provides an accumulated value indicative of points that are the centers of circular objects. Other values within the matrix provide the radius of the circular object.
- the transformed image is then filtered for circular objects that have size parameters within a predetermined range.
- the predetermined range of accepted radii is based upon the camera parameters and the viewing distance parameters at which traffic signals are to be detected and are consistent with the size of a traffic light when viewed with that camera at the targeted distance.
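The radius-filtering step described above can be sketched as follows; the accumulator layout `acc[y, x, r]`, the function name, and the thresholds are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def circles_in_range(acc, r_min, r_max, vote_thresh):
    """From a circular-Hough accumulator acc[y, x, r], keep only candidate
    circles with enough votes and a radius inside [r_min, r_max]."""
    ys, xs, rs = np.nonzero(acc >= vote_thresh)
    keep = (rs >= r_min) & (rs <= r_max)
    return [(int(y), int(x), int(r))
            for y, x, r in zip(ys[keep], xs[keep], rs[keep])]

# A toy accumulator: one candidate inside the radius range, one outside.
acc = np.zeros((4, 4, 10))
acc[1, 2, 5] = 7.0   # radius 5: within [3, 6]
acc[0, 0, 9] = 7.0   # radius 9: too large, rejected
found = circles_in_range(acc, 3, 6, 5)  # [(1, 2, 5)]
```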
- a stop light verification component checks identified circles by estimating the average brightness of red, green, and blue light within the identified circle and comparing these values to threshold values. In another embodiment the circles may be verified as yellow lights or green lights.
- Figure 1 is a diagram of an automobile interior and view from that interior showing an arrangement of components of a color and pattern detection system.
- Figure 2 shows a street scene situation wherein an embodiment of the invention is practiced.
- Figure 3 is a block diagram of components of an embodiment of the invention.
- Figure 4 is a high level flow chart of an embodiment of the invention.
- Figures 5 is a block diagram of an embodiment of the invention further showing communication lines between components.
- Figure 6 is a flow chart of an embodiment of preprocessing an image.
- Figure 7 is a flow chart of the object identification and verification embodiment of the invention.
- Figure 8 is a flow chart for a color measurement embodiment of the invention.
- Figure 1 depicts an embodiment of the invention as installed in an automobile or truck vehicle.
- the inventive system 100 includes a camera 101 installed on the vehicle in such a fashion that it can acquire images in the direction of interest, in this case in the direction of forward travel of the vehicle.
- Cameras may be of the type using charge-coupled devices (CCD) or complementary metal-oxide-semiconductor (CMOS) devices as sensors.
- Sensor sizes may range from less than a megapixel to several megapixels.
- the camera is electrically connected 108 to image acquisition and processing electronics 102 which are further described in Figures 3 and 5 below.
- the processor output is connected to output devices 103 - 106 that are used to signal the driver 112 of an approaching object of interest such as a traffic light 109.
- Exemplary output devices include a light and a video output display.
- the video output display may include output from the processor indicating a warning as well as an image 107 that shows the object detected.
- a traffic light 109 is detected by the system.
- the output display 106 also functions as a touch screen for user input of operating parameters.
- the yellow portion of the light 111 is illuminated.
- the red portion of the light is dimmed.
- the user has selected parameters indicating a warning to be signaled upon detection of a yellow light.
- the system provides that warning through a light 103 illuminated in the driver's view, and a view of the yellow light illuminated on the video output device 107.
- the operator may select parameters to indicate that particular colors of objects shall trigger an alert, as was done in this exemplary case of the yellow light, or to indicate particular colors are to be ignored.
- the user may also select the type of alert to be issued, such as an illuminated warning light, a flashing warning light, or a computer-generated verbal message through a speaker,
- a buzzer alert 105, or a video display indicative of a particular traffic situation, in this exemplary case the illumination of a yellow traffic light.
- the system parameters are defined to detect an illuminated red traffic light and provide an alert on the basis of a red traffic light whose image size falls within a range of selected and adjustable parameters.
- Figure 2 shows an exemplary embodiment as viewed from outside a vehicle.
- the system is installed in a vehicle 201 operated by a driver 202 and includes the image acquisition camera 203 as well as the exemplary electronics discussed in conjunction with Figure 1 above.
- the vehicle is travelling in the indicated direction 204 and is approaching an intersection that is marked by a traffic signal 206; in the example shown, the red signal 207 of the signal light is illuminated.
- the vehicle and sensor are seen to be a distance 205 from the traffic signal.
- the distance 205 is used to define a distance range in which a driver would be alerted, for example when the red light is detected as illuminated as shown. Selection of the distance range includes selection of a minimum and maximum value of the distance 205 between the vehicle and a potential traffic signal for which an alert is issued.
- a distance range for the distance 205 and a particular lens and detector on the camera 203 would then set a parameter as to the range of sizes of the light 207 when viewed in the acquired image.
- the distance at which alerts should be signaled can be set as a range of values of the distance 205 to the traffic light. Such a selection would then define a range of values for the size of the image of the light 207 given approximately by the formula:
- Wi ≈ Wo * f / L (1)
- Wi is the width of the image of the traffic signal feature within the acquired image
- Wo is the width of the actual traffic signal light 207
- f is the focal length of the lens on the camera 203
- L is the distance 205 between the camera 203 and the light.
- minimum and maximum values for Wi, to be used in algorithms looking for circles within the acquired image, are defined by the minimum and maximum values for L.
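Equation 1 and the selected distance range translate into a pixel-size range roughly as follows; this is a minimal sketch, and the lamp width, focal length, and pixel pitch in the example are assumed values, not the patent's:

```python
def image_width_px(w_object_m, focal_mm, distance_m, pixel_pitch_um):
    """Approximate width, in pixels, of an object of width w_object_m imaged
    through a lens of focal length focal_mm at distance_m (Wi = Wo * f / L)."""
    w_image_mm = (w_object_m * 1000.0) * focal_mm / (distance_m * 1000.0)
    return w_image_mm * 1000.0 / pixel_pitch_um

# Assumed example: a 0.3 m signal lamp, 8 mm lens, 6 um pixel pitch.
near = image_width_px(0.3, 8.0, 30.0, 6.0)    # ~13.3 px at 30 m
far = image_width_px(0.3, 8.0, 100.0, 6.0)    # ~4.0 px at 100 m
```

The pair (far, near) then bounds the circle diameters the Hough search needs to consider.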
- Figure 3 shows an exemplary embodiment of the components of the invention.
- the scene 300 illuminates a lens 301 in front of a sensor 302.
- the lens 301 may further include a filter and aperture to control exposure.
- the lens may be of fixed focus or variable focus.
- Light impinging on the sensor 302 produces an electrical response that is fed through an amplifier 303.
- the gain of the amplifier may be adjusted based upon the lighting conditions to produce an output signal that is optimized for the dynamic range of the analog to digital converter (ADC) 304.
- the digitized signal from the ADC is captured by a frame buffer whose output is an array of intensity values for the colors red, green and blue, each measured at every pixel location contained in the sensor 302.
- the output of the sensor is used in the form of individual values of red, green and blue intensities for each pixel location. It is known in the art that the output can be equivalently characterized by a hue, saturation and luminosity (HSL) value for each pixel location.
- HSL hue, saturation and luminosity
- the transformation from RGB to HSL is known in the art. Similarly other known coordinate systems for image descriptions may be used and represent only a transformation of variables.
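As a small illustration of such a transformation of variables, Python's standard colorsys module converts RGB to hue, lightness, and saturation (the patent's HSL, up to component ordering):

```python
import colorsys

def rgb_to_hsl(r, g, b):
    """Convert 8-bit RGB values to (hue, saturation, lightness).

    colorsys returns HLS order, so the tuple is reordered here to match
    the HSL naming used in the text.
    """
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return h, s, l

# Pure red maps to hue 0.0, full saturation, mid lightness.
red_hsl = rgb_to_hsl(255, 0, 0)  # (0.0, 1.0, 0.5)
```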
- the frame buffer acquires an image at pre-selected intervals set as a parameter by the user or designer of the system. In a preferred embodiment the frame buffer acquires images at a rate of at least 2 frames per second.
- the image is stored in memory 306 associated with the processing system. The image storage is typically solid-state memory but may be another type of memory such as magnetic disk storage.
- the image array data is passed to processor memory 307 for processing by electronic processor 308.
- the processor may be of any type such as those manufactured by Intel® Corporation and used in personal computers. Intel® is a registered trademark of Intel Corporation. The processor acts upon the image data as described in the following discussions.
- Exemplary "acting" upon the image includes filtering, selecting pixels meeting selected color parameters, averaging of pixel values for noise reduction, applying transformations to enhance edges, applying Hough transformations to identify circular or other shapes, verifying the identified circular objects as traffic signal lights of a particular color, and any of the myriad other operations known in the art for application to image files.
- Processed image files may be further stored in processor memory 307.
- the system further includes a user interface 309 that allows input of operating parameters.
- Exemplary user interface devices include touch screen video devices as discussed in conjunction with Figure 1 above, a conventional computer keyboard and a computer mouse.
- the user interface for inputting parameters may also be provided by a device that is temporarily connected to the processor only for the time of loading or changing parameters or a remote wireless interface that allows loading of parameters remotely, perhaps by a remote user over the Internet.
- an embodiment of the system further includes an output driver 310 connected to a means to alert and inform the operator of the vehicle.
- the output driver 310 may provide signal to any of the exemplary devices as discussed above in conjunction with Figure 1.
- A block diagram of an embodiment of the invention is shown in Figure 4.
- An image of the scene within the field of view of the camera is acquired 401.
- the image may be preprocessed 402: color corrected and adjusted to remove noise and artifacts associated with the sensor chip, producing a corrected image of the scene.
- Preprocessing the image includes smoothing the image, removing artifacts such as individual pixels that are anomalously bright, and convolving the image with appropriate filters for edge enhancement such as a 2D filter of the form:
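A kernel commonly used for edge enhancement of this kind is the discrete Laplacian; the sketch below uses it as an assumption rather than the patent's specific coefficients:

```python
import numpy as np

# 3x3 discrete Laplacian: a common edge-enhancement kernel, assumed here.
LAPLACIAN = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def convolve2d(img, kernel):
    """Naive same-size 2D convolution with zero padding (the kernel is
    symmetric, so correlation and convolution coincide)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# A single bright pixel on a flat background gives the strongest response.
img = np.zeros((5, 5))
img[2, 2] = 1.0
enhanced = convolve2d(img, LAPLACIAN)
```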
- the particular color of interest is selected by averaging values for red, green and blue across the detected circle and comparing these average values with user-selected ranges of values for red, green and blue. Circles whose color does not fall within the selected ranges are then rejected.
- the ranges can be selected for any particular values of red, green and blue, and the detection can therefore filter for circles of any particular color.
- the parameters are set for a minimum value of red and a maximum value for green and blue intensities to detect a red traffic signal light.
- the test is whether the count of detected and verified features is greater than zero. In a preferred embodiment the test is whether the count of red circles of a given diameter is greater than zero, which would indicate detection of a traffic light. In other embodiments the test might include a minimum number of features or a combination of features, with detected features selected on the basis of shape, size and color. In other embodiments the Hough transform parameters are set to search for objects shaped other than a circle; exemplary shapes are rectangles, squares and ellipses. If detection is positive, the vehicle operator is then notified 408 that preselected shapes have been detected. Notification may be as indicated in the discussion of Figure 1 and include indicator lights, buzzers, messages on video screens and images on video screens.
- the camera 501 acquires an image and passes that image to an embedded processor 502, and in particular to the main image processor 503 component of the embedded processor.
- the image is then manipulated and in one embodiment the red components of the image are passed 504 to a processor section 505.
- the processor 505 performs a circular Hough transform on the image and returns 506 the transform matrix of circle centers, radii and locations to the main processor 503.
- the main processor then passes the transformed matrix to a check circle processor 508 that checks if the detected circle matches a set of selected parameters.
- a light (or more general feature) found signal 509 is sent to the main processor 503 and the main processor sends a light found signal 510 to a user notification device 511.
- exemplary user notification devices include lights, speakers, buzzers and video devices and were discussed in conjunction with Figure 1.
- Non-limiting examples of components 503, 505 and 508 include computer processor chips such as those manufactured by Intel® Corporation (Intel® is a registered trademark of Intel Corporation) for use in personal computers, field programmable gate array processors, and digital signal processors, especially those adapted for processing image data.
- A more detailed view of a process practiced in embodiments of the invention is shown in Figure 6.
- An image is acquired 601 producing an n by m matrix of values for the red, green and blue light intensity at each pixel location across the array of pixels of the imaging device.
- the image is then further processed to enhance the ability to reliably detect targeted features within the acquired image.
- the image is filtered for selected colors 602.
- an n by m image matrix is then defined for subsequent operations as an n by m matrix of values of intensities for the selected color.
- the selected color is red.
- the selected color is green and in another embodiment the value is a scaled factor indicating the intensity for a color that is a combination of red, green and blue intensities.
- a color filtered image is further processed to enhance the ability to detect particular features in the image file.
- One embodiment includes application of a Sobel transform 603 to the image.
- the Sobel transform produces an n by m matrix with values at each of the pixel locations corresponding to the gradient of the image at that point.
- the Sobel transform allows one to determine whether an edge is present and, based upon the gradient, the direction of change in the intensity at that point.
- the gradient in the case of a circular feature that is brighter than the surrounding image would point to the center of the circle that defines the feature.
- An embodiment of the invention further includes application of a second derivative transform 604 to the image matrix.
- the Sobel transform 603 and the second derivative transform 604 may be performed serially on the image file.
- the points detected as an edge in the Sobel transform are then further tested 605 for a zero crossing in the second derivative in the image file.
- a zero confirms the edge detected by the Sobel transform.
- a nonzero second derivative indicates there is not actually an edge, and the Sobel transform may then be corrected 606 by setting that point to zero.
- the embodiment therefore includes a redundant edge detection routine to confirm edges and avoid false positive detection of edges and therefore features.
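This two-step edge confirmation might be sketched as follows, assuming the Sobel magnitude and second-derivative (Laplacian) arrays are already computed; the names and threshold are illustrative:

```python
import numpy as np

def zero_crossings(lap):
    """Mark pixels where the second derivative changes sign against a
    horizontal or vertical neighbour (an approximate zero crossing)."""
    zc = np.zeros(lap.shape, dtype=bool)
    zc[:, 1:] |= np.signbit(lap[:, 1:]) != np.signbit(lap[:, :-1])
    zc[1:, :] |= np.signbit(lap[1:, :]) != np.signbit(lap[:-1, :])
    return zc

def correct_sobel(sobel_mag, laplacian, edge_thresh=1.0):
    """Keep a Sobel edge response only where the second derivative crosses
    zero; unconfirmed responses are treated as false edges and zeroed (606)."""
    confirmed = (sobel_mag > edge_thresh) & zero_crossings(laplacian)
    return np.where(confirmed, sobel_mag, 0.0)

# Toy case: a sign change between columns 0 and 1 confirms only column 1.
sobel = np.full((2, 2), 5.0)
lap = np.array([[-1.0, 1.0], [-1.0, 1.0]])
corrected = correct_sobel(sobel, lap)
```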
- the matrix from the corrected Sobel transform is then subjected to circle detection 607 discussed in more detail in conjunction with Figure 7.
- FIG 7 shows a flow chart for a circle detection process embodiment.
- the corrected Sobel transformation 701 provides a starting point for detection of circles using the Hough transform 702.
- the Hough transform, known in the art, accumulates at each pixel location values that correspond to the accumulated probability that the pixel location of interest is the center of a circle, together with the radius of that circle.
- the Hough transform in this case is modified for increased reliability by only accumulating values from those points within the corrected Sobel output (described in conjunction with Figure 6) whose gradient vectors point in the direction of the center of the circle and whose radius falls within the selected parameter range for the size of the radius of the targeted object in the image, as discussed in conjunction with Figure 2 above and Equation 1.
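A minimal sketch of such gradient-restricted accumulation, assuming a feature brighter than its surround so that the rim gradient points toward the center (all names and thresholds are illustrative):

```python
import numpy as np

def gradient_hough(edge_mag, gx, gy, r_min, r_max, mag_thresh=1.0):
    """Circular Hough accumulator acc[y, x, r - r_min] that votes only from
    edge points, only along each point's gradient direction, and only for
    radii inside the selected [r_min, r_max] range."""
    h, w = edge_mag.shape
    acc = np.zeros((h, w, r_max - r_min + 1))
    ys, xs = np.nonzero(edge_mag > mag_thresh)
    for y, x in zip(ys, xs):
        norm = np.hypot(gx[y, x], gy[y, x])
        if norm == 0.0:
            continue
        dx, dy = gx[y, x] / norm, gy[y, x] / norm
        for r in range(r_min, r_max + 1):
            # For a bright disc the gradient at its rim points inward,
            # so the candidate centre lies r pixels along the gradient.
            cx, cy = int(round(x + dx * r)), int(round(y + dy * r))
            if 0 <= cx < w and 0 <= cy < h:
                acc[cy, cx, r - r_min] += 1
    return acc

# One edge point at (5, 2) with a gradient pointing in +x votes for
# centres at (5, 5) for r=3 and (5, 6) for r=4.
edge = np.zeros((10, 10)); edge[5, 2] = 5.0
gx = np.zeros((10, 10)); gy = np.zeros((10, 10)); gx[5, 2] = 2.0
acc = gradient_hough(edge, gx, gy, 3, 4)
```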
- the Hough transform is then further filtered 703 using a Mexican hat transform to concentrate those spots within the Hough transform output.
- Multiple point circle centers are then removed 704 to produce an array with single point circle centers.
- the color of the targeted object is then checked 705 against the original image. The check is done to see that the color of the circle in the original image matches that of the targeted color of the object.
- the target object is a red traffic signal light.
- the pixels located within the found circle are then checked for red, green and blue intensity values. If the relative intensities of green and blue exceed the parameters used to define the red color of a traffic light, the circle is rejected as a red traffic signal.
- the targeted color of the object is green.
- the circle detection routines operate upon the green intensity values of the red, green, blue array in the original image using the process discussed above.
- the pixel locations within a detected circle are then checked for color: the relative value of the green intensity must exceed a given value and the relative intensities of red and blue must be below given parameters.
- parameters are selected on the basis of the actual color of a green traffic signal light. If the color of the circle is confirmed as being of the targeted color, a flag is set that a circle has been detected and the location and size of the circle are transmitted to output 706. The output may then be used as described above to alert the operator that an object of interest has been detected.
- the color confirmation process is further detailed in Figure 8.
- confirming the detection of the target object comprises: comparing, for each primary color, the average of the intensity values of pixels in the acquired image that lie within the loci of pixel locations identified as being within the boundaries of the target object against a corresponding target range of intensity values; affirming that the target object is found and located at the loci of pixel locations if the average intensity values for all primary colors fall within their corresponding target ranges; and disaffirming the detection if any one of the average intensity values falls outside its corresponding target range.
- pixels located within the loci of pixel locations are only included in the average if the sum of their intensity values falls within a target range for total intensity, and are excluded from the average if it does not.
- the target range for red is that the average intensity is greater than a selected value and the target range for green and blue is that the average intensity values are both less than a selected value.
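That red-light test can be sketched as follows; the threshold values are illustrative assumptions, not the patent's parameters:

```python
import numpy as np

def verify_red_circle(img, cy, cx, r, red_min=150.0, gb_max=100.0):
    """Average the R, G, B intensities of pixels inside the detected circle
    and accept it as a red signal only if mean red exceeds red_min while
    mean green and mean blue both stay below gb_max."""
    h, w, _ = img.shape
    yy, xx = np.ogrid[:h, :w]
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    mean_r, mean_g, mean_b = img[inside].mean(axis=0)
    return bool(mean_r > red_min and mean_g < gb_max and mean_b < gb_max)

# A synthetic red disc passes the check; recoloring it green fails it.
img = np.zeros((20, 20, 3))
yy, xx = np.ogrid[:20, :20]
disc = (yy - 10) ** 2 + (xx - 10) ** 2 <= 16
img[disc] = [200.0, 10.0, 10.0]
is_red = verify_red_circle(img, 10, 10, 4)   # True
img[disc] = [10.0, 200.0, 10.0]
is_still_red = verify_red_circle(img, 10, 10, 4)  # False
```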
- if the method is applied to a green light, the average value for green would be larger than a selected value and the average values for red and blue would be less than selected thresholds.
- the method would similarly apply to any color in the spectrum by selecting appropriate ranges for the average values of the primary colors. Note that the location of the object is detected in a transformed matrix and is confirmed in the original image. Separating this confirmation step and reverting to analysis of the original image reduces errors and false-positive indications of a found object.
- the primary colors are selected as red, green and blue.
- the method could equivalently be applied if different primary colors were selected, such as cyan, magenta and yellow.
- the method could equivalently be applied if the color space describing the image were transformed to a hue, saturation and luminance and the average values for pixels within the location identified as containing the object of interest were then required to fall within a range of values for hue, saturation and luminance.
- Summary: Devices and methods are disclosed for acquiring images, searching for objects within those images, and alerting a user to found objects.
- the preferred embodiment is the detection of red traffic signal lights as viewed from an automobile moving on a roadway.
- a second embodiment would be used to detect the green lights of traffic signals.
- the devices and methods are used to detect circular objects in the preferred embodiment. Transforms could also be used to detect other shaped objects and objects of colors other than red and green.
- the devices provide means to input operating parameters and alert drivers when target objects are found. Embodiments are included to improve the reliability and reduce false positive detection by rechecking color intensities within the coordinates of the found object.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Physics (AREA)
- Traffic Control Systems (AREA)
Abstract
Devices and methods are disclosed for acquiring images, searching for objects within those images, and alerting a user to found objects. The preferred embodiment is the detection of red traffic signal lights as viewed from an automobile moving on a roadway, alerting a driver when a traffic signal is found. A second embodiment would be used to detect the green lights of traffic signals. The devices and methods are used to detect circular objects in the preferred embodiment. Transforms could also be used to detect objects of other shapes and of colors other than red and green. The devices provide means to input operating parameters and alert drivers when target objects are found. Embodiments are included to improve reliability and reduce false-positive detection by rechecking color intensities within the coordinates of the found object.
Description
COLOR AND PATTERN DETECTION SYSTEM
BACKGROUND OF THE INVENTION
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Patent Application 61/211,657, filed 04/01/2009, entitled "Color and Pattern Detection System", currently pending, by the same inventors, and hereby incorporated by reference.
TECHNICAL FIELD
The present invention relates to a device and method for acquiring and processing color digital images to identify objects relevant in an automotive driving environment. In particular a device and method is shown for recognition of a traffic signal light and the operational state of that light.
RELATED BACKGROUND ART
High-speed digital cameras and image analysis systems have become commonly used for remote sensing and identification of information. These cameras are becoming more prevalent as attachments to automobiles, airplanes and other moving objects and vehicles. These cameras can generally acquire images of approaching objects with an image quality sufficient to identify the object(s) of interest. The systems aid drivers by warning of approaching or nearby dangers. The systems were first implemented as simple alternatives or additions to the rear view mirror, especially for large vehicles where visibility behind the vehicle is limited. More advanced systems warn of nearby vehicles or of driving too near the edge of a road, and aid in parking of a vehicle.
Systems have been conceived and partially implemented in experimental settings that go further and recognize objects in the vehicle's surroundings. The sensing and recognition of street signs and traffic signals, and the warning or informing of drivers about such nearby objects, have yet to progress beyond the experimental stage. Sensing and recognition of traffic signs and signals is useful to the fully able driver in increasing awareness of the surroundings, providing help in navigation, and warning of potential dangers during the occasional lapse of attention. The partially disabled driver may also be aided by such systems, especially where the driver suffers from color blindness and has a diminished ability to recognize the red light used in traffic signals and as a warning signal. There are more than 40,000 traffic fatalities per year in the United States alone. Over 90% of highway accidents have been attributed to human error, and perception is a factor in over 80% of all highway accidents. Most of this error is due to perceptual and attention failure or to inadequate highway visibility conditions. Moreover, visibility of warning labels, placement of signs and warnings, recognition of people and objects, etc., are all ultimately issues of visual perception. (Green, Marc, PhD, http://www.visualexpert.com/why.html, visited March 2010). There is a need for devices to aid drivers in the perception of warnings and signals.
Recognizing objects in digital images acquired in a driving environment requires a processing system that can acquire an image of the relevant field of view surrounding the vehicle, differentiate the object of interest from the background within the image, recognize the shape and color of the object, relate that shape and color to a traffic sign or signal conveying information relevant to the driver, and then alert the driver. All of this must be done on a time scale that leaves the driver time to react to the information. Typical stopping distances, including cognition, reaction time and vehicle response, are about 100 feet at 30 miles per hour and 300 feet at 60 miles per hour; 30 miles per hour corresponds to 44 feet per second. Therefore, for a system to be usable it must acquire an image, process the image to identify an object, and alert the driver at a distance of more than 100 feet.
A critical issue with known systems is the accuracy of the object recognition and of the alerts issued. False positive alerts tend to result in drivers ignoring subsequent accurate alerts. Heretofore no systems provide sufficient discrimination of shapes and accuracy of identification on a time scale useful for alerting the driver. Accuracy can be improved by fully utilizing all information contained within the acquired images. Systems are known that recognize shapes within an image using algorithms such as the Hough transform. Other approaches test the acquired image against a database of shapes. However, these systems have not been developed to the point of being commercially viable and available. Systems are available that make use of object shape, but few combine this with object color. No systems are heretofore known that combine object shape, color and luminance to provide an accurate identification and alert system.
Example prior art patents and patent applications include: US patent application 20060203102, Yang, published September 2006; US patent application 20100033571, Fujita, published February 2010; and US patent application 20050036660, Otsuka, published February 2005.
DISCLOSURE OF THE INVENTION
An imaging and processing system is described that meets the demanding needs of high speed imaging for remote recognition of objects and traffic signs and signals viewed under normal
driving conditions. The system relies upon the use of an imager chip, a processing unit programmed to analyze the image and an output system to alert the driver. In a preferred embodiment the system recognizes the red light of an electronic traffic signal.
In one embodiment a traffic light detection system consists of an image acquisition device, a circle detection component, a stop light verification component and a notification component. In one embodiment the notification means is a sound generation device. In another embodiment the notification means is a light, and in another the notification means is a video display alerting the driver to the location of the traffic signal, displayed upon the acquired camera view of the scene. The circle detection component comprises a circular Hough transform that produces a three-dimensional matrix from the original image. One dimension provides an accumulated value indicative of points that are centers of circular objects; other values within the matrix provide the radius of the circular object. The transformed image is then filtered for circular objects whose size parameters fall within a predetermined range. In one embodiment the predetermined range of accepted radii is based upon the camera parameters and the viewing distance at which traffic signals are to be detected, and is consistent with the size of a traffic light when viewed with that camera at the targeted distance. A stop light verification component checks identified circles by estimating the average brightness of red, green, and blue light within the identified circle and comparing these values to threshold values. In another embodiment the circles may be verified as yellow lights or green lights.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a diagram of an automobile interior and view from that interior showing an arrangement of components of a color and pattern detection system.
Figure 2 shows a street scene situation wherein an embodiment of the invention is practiced.
Figure 3 is a block diagram of components of an embodiment of the invention.
Figure 4 is a high level flow chart of an embodiment of the invention.
Figure 5 is a block diagram of an embodiment of the invention further showing communication lines between components.
Figure 6 is a flow chart of an embodiment of preprocessing an image.
Figure 7 is a flow chart of the object identification and verification embodiment of the invention.
Figure 8 is a flow chart for a color measurement embodiment of the invention.
DETAILED DESCRIPTION
Figure 1 depicts an embodiment of the invention as installed in an automobile or truck. The inventive system 100 includes a camera 101 installed on the vehicle in such a fashion that it can acquire images in the direction of interest, in this case the direction of forward travel of the vehicle. Cameras may be of the type using charge coupled devices (CCD) as sensors or
complementary metal-oxide-semiconductor (CMOS) devices. Sensor sizes may range from less than a megapixel to several megapixels. The camera is electrically connected 108 to image acquisition and processing electronics 102, which are further described in Figures 3 and 5 below. The processor output is connected to output devices 103 - 106 that are used to signal the driver 112 of an approaching object of interest such as a traffic light 109. Exemplary output devices include a light
103, a speaker 104, a buzzer 105 and a video output 106. The video output display may include output from the processor indicating a warning as well as an image 107 that shows the object detected. In the instant case shown, a traffic light 109 is detected by the system. In another embodiment the output display 106 also functions as a touch screen for user input of operating parameters. In the example shown, the yellow portion of the light 111 is illuminated and the red portion of the light is dimmed. The user has selected parameters indicating a warning is to be signaled upon detection of a yellow light. The system provides that warning through a light 103 illuminated in the driver's view, and a view of the yellow light illuminated on the video output device 107. The operator may select parameters to indicate that particular colors of objects shall trigger an alert, as was done in this exemplary case of the yellow light, or to indicate that particular colors are to be ignored. The user may also select the type of alert to be issued, such as an illuminated warning light, a flashing warning light, a computer-generated verbal alert through a speaker
104, a buzzer alert 105 or a video display indicative of a particular traffic situation, in this exemplary case the illumination of a yellow traffic light. In a preferred embodiment the system parameters are defined to detect an illuminated red traffic light and to provide an alert when a red traffic light's image size falls within a range of selected and adjustable parameters.
Figure 2 shows an exemplary embodiment as viewed from outside a vehicle. The system is installed in a vehicle 201 operated by a driver 202 and includes the image acquisition camera 203 as well as the exemplary electronics discussed in conjunction with Figure 1 above. The vehicle is travelling in the indicated direction 204 and is approaching an intersection marked by a traffic signal 206; in the example shown the red signal 207 of the signal light is illuminated. The vehicle and sensor are at a distance 205 from the traffic signal. In one embodiment the distance 205 is used to define a distance range in which a driver would be alerted, for example, to the red light being detected as illuminated as shown. Selection of the distance range includes selection of a minimum and maximum value of the distance 205 between the vehicle and a potential traffic signal for which an alert is issued. Selection of a distance range for the distance 205, together with a particular lens and detector on the camera 203, then sets a parameter for the range of sizes of the light 207 when viewed in the acquired image. In one embodiment the distance at which alerts should be signaled can be set as a range of values of the distance 205 to the traffic light. Such a selection defines a range of values for the size of the image of the light 207, given approximately by the formula:
Wi = Wo*f/L    (1)

where Wi is the width of the image of the traffic signal feature within the acquired image, Wo is the width of the actual traffic signal light 207, f is the focal length of the lens on the camera 203, and L is the distance 205 between the camera 203 and the light. Minimum and maximum values for Wi, to be used in algorithms looking for circles within the acquired image, are defined by the maximum and minimum values for L.

Figure 3 shows an exemplary embodiment of the components of the invention. The scene 300 illuminates a lens 301 in front of a sensor 302. The lens 301 may further include a filter and aperture to control exposure. The lens may be of fixed or variable focus. Light impinging on the sensor 302 produces an electrical response that is fed through an amplifier 303. The gain of the amplifier may be adjusted based upon the lighting conditions to produce an output signal that is optimized for the dynamic range of the analog to digital converter (ADC) 304. The digitized signal from the ADC is captured by a frame buffer whose output is an array of intensity values for the colors red, green and blue, each measured at every pixel location of the sensor 302. In the discussion that follows it is assumed that the output of the sensor is used in the form of individual values of red, green and blue intensities for each pixel location. It is known in the art that the output can be equivalently characterized by a hue, saturation and luminosity (HSL) value for each pixel location; the transformation from RGB to HSL is known in the art. Similarly, other known coordinate systems for image description may be used and represent only a transformation of variables. All discussions that follow may use transformed variable systems to the same effect. The frame buffer acquires an image at pre-selected intervals set as a parameter by the user or designer of the system.
In a preferred embodiment the frame buffer acquires images at a rate of at least 2 frames per second. The image is stored in memory 306 associated with the processing system; the image storage is typically solid-state memory but may be another type of memory such as magnetic disk storage. The image array data is passed to processor memory 307 for processing by the electronic
processor 308. The processor may be of any type, such as those manufactured by Intel® Corporation and used in personal computers (Intel® is a registered trademark of Intel Corporation). The processor acts upon the image data as described in the following discussions. Exemplary operations upon the image include filtering, selecting pixels meeting selected color parameters, averaging pixel values for noise reduction, applying transformations to enhance edges, applying Hough transformations to identify circular or other shapes, verifying the identified circular objects as traffic signal lights of a particular color, and any of the other myriad operations known in the art for application to image files. Processed image files may be further stored in processor memory 307. The system further includes a user interface 309 that allows input of operating parameters. Exemplary user interface devices include touch screen video devices as discussed in conjunction with Figure 1 above, a conventional computer keyboard and a computer mouse. The user interface for inputting parameters may also be provided by a device that is temporarily connected to the processor only while loading or changing parameters, or by a remote wireless interface that allows loading of parameters remotely, perhaps by a remote user over the Internet. Finally, an embodiment of the system further includes an output driver 310 connected to a means to alert and inform the operator of the vehicle. The output driver 310 may provide a signal to any of the exemplary devices discussed above in conjunction with Figure 1.
A block diagram of an embodiment of the invention is shown in Figure 4. An image of the scene within the field of view of the camera is acquired 401. The image may be preprocessed 402, being color corrected and adjusted to remove noise and artifacts associated with the sensor chip, to produce an image 402 of the scene. Preprocessing the image includes smoothing the image, removing artifacts such as individual pixels that are anomalously bright, and convolving the image with appropriate filters for edge enhancement, such as a 2D filter of the form:
        { 0,  0, -1,  0,  0}
        { 0, -1, -2, -1,  0}
mx_sm = {-1, -2, 16, -2, -1}                    (2)
        { 0, -1, -2, -1,  0}
        { 0,  0, -1,  0,  0}
where the elements of the 5X5 matrix mx_sm are as defined in equation (2). The convolution of the matrix mx_sm with the image file produces a transformed image file. Similarly, in another embodiment a Sobel transform, known in the art, is applied to enhance the image and facilitate the subsequent search for circles that would be found within the image and then verified as traffic
signal lights. After preprocessing, the image is searched for circles 403 that have a particular radius and a particular color. The circles may be selected as having a radius between Rmin and Rmax, where the values of Rmin and Rmax are selected on the basis of the search parameters and the search and alert distance between the vehicle and the traffic signal, as discussed in conjunction with Figure 2 above. In one embodiment the particular color of interest is selected by averaging values for red, green and blue across the detected circle and comparing these average values with user-selected ranges of values for red, green and blue. Circles whose color does not fall within the selected range are rejected. The range can be set to any particular range of values of red, green and blue, and therefore the detection can filter for circles of any particular color. In a preferred embodiment the parameters are set for a minimum value of red intensity and maximum values for green and blue intensities to detect a red traffic signal light. Once a circle is found and verified as to color and size, the output 404 of the transform is an identification of a circle 405 with center location 406. A test 407 is then made as to whether any features have been detected that meet the pre-selected parameters. In one embodiment the test is whether the count of detected and verified features is greater than zero. In a preferred embodiment the test is whether the count of red circles of a given diameter is greater than zero, which would indicate detection of a traffic light. In other embodiments the test might include a minimum number of features or a combination of features, detected features being selected on the basis of shape, size and color. In other embodiments the Hough transform parameters are set to search for objects shaped other than a circle; exemplary shapes are rectangles, squares and ellipses.
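The edge-enhancement convolution with the kernel of equation (2) above can be sketched as follows. This is a minimal naive implementation for illustration only; a production system would use an optimized convolution routine:

```python
import numpy as np

# The 5x5 edge-enhancement kernel mx_sm from equation (2).
mx_sm = np.array([
    [ 0,  0, -1,  0,  0],
    [ 0, -1, -2, -1,  0],
    [-1, -2, 16, -2, -1],
    [ 0, -1, -2, -1,  0],
    [ 0,  0, -1,  0,  0],
], dtype=float)

def convolve2d_valid(image, kernel):
    """Naive 'valid' 2D convolution (no padding); fine for illustration."""
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * flipped)
    return out

# The kernel's coefficients sum to zero, so flat regions map to zero
# while intensity changes (edges) are amplified.
assert np.allclose(convolve2d_valid(np.full((9, 9), 50.0), mx_sm), 0.0)
step = np.zeros((9, 9)); step[:, 4:] = 10.0
assert np.abs(convolve2d_valid(step, mx_sm)).max() > 0.0
```

Because the coefficients sum to zero, the filter suppresses uniform background while preserving and sharpening boundaries, which is what makes the subsequent circle search more reliable.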
If detection is found positive, the vehicle operator is then notified 408 that preselected shapes have been detected. Notification may be as indicated in the discussion of Figure 1 and may include indicator lights, buzzers, messages on video screens and images on video screens. If no objects are found, the path routes back to the starting point, acquire an image 401.

Communication links between components of an embodiment of the invention are shown in Figure 5. The camera 501 acquires an image and passes that image to an embedded processor 502, and in particular to the main image processor 503 component of the embedded processor. The image is then manipulated, and in one embodiment the red components of the image are passed 504 to a processor section 505. In a preferred embodiment the processor 505 performs a circular Hough transform on the image and returns 506 the transform matrix of circle centers, radii and locations to the main processor 503. The main processor then passes the transformed matrix to a check circle processor 508 that checks whether a detected circle matches a set of selected parameters. If a match is found, a light (or more general feature) found signal 509 is sent to the main processor 503 and the main
processor sends a light found signal 510 to a user notification device 511. Exemplary user notification devices include lights, speakers, buzzers and video devices, as discussed in conjunction with Figure 1. Non-limiting examples of components 503, 505 and 508 include computer processor chips such as those manufactured by Intel® Corporation (Intel® is a registered trademark of Intel Corporation) for use in personal computers, field programmable gate array processors and digital signal processors, especially those adapted for processing image data.
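The check-circle stage, which matches detected circles against the selected size parameters, can be sketched by combining equation (1) with a simple radius filter. All numbers here (lamp width, focal length, alert window, pixel pitch) are illustrative assumptions, not values from the specification:

```python
def radius_range(w_o, f, l_min, l_max, pixel_pitch):
    """Pixel-radius limits derived from equation (1), Wi = Wo*f/L: the far
    limit l_max gives the smallest radius, the near limit l_min the largest."""
    return ((w_o * f / l_max) / (2.0 * pixel_pitch),
            (w_o * f / l_min) / (2.0 * pixel_pitch))

def check_circles(circles, r_min, r_max):
    """Keep candidate circles (cx, cy, r) whose radius lies in [r_min, r_max]."""
    return [c for c in circles if r_min <= c[2] <= r_max]

# Illustrative (assumed) values: a 300 mm lamp, 25 mm focal length, an alert
# window of 30-100 m, and 5 micrometre pixels, all expressed in millimetres.
r_min, r_max = radius_range(300.0, 25.0, 30_000.0, 100_000.0, 0.005)
assert 7.0 < r_min < 8.0 and 24.0 < r_max < 26.0
assert check_circles([(50, 60, 3), (120, 80, 12), (200, 30, 40)],
                     r_min, r_max) == [(120, 80, 12)]
```

Note the inversion: the farthest alert distance sets the minimum acceptable radius and the nearest distance sets the maximum, which is how the distance window discussed with Figure 2 becomes the Rmin/Rmax parameters of the circle search.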
A more detailed view of a process practiced in embodiments of the invention is shown in Figure 6. An image is acquired 601, producing an n by m matrix of values for the red, green and blue light intensity at each pixel location across the array of pixels of the imaging device. The image is then further processed to enhance the ability to reliably detect targeted features within the acquired image. In one embodiment the image is filtered for selected colors 602. In this embodiment an n by m image matrix is then defined for subsequent operations as an n by m matrix of intensity values for the selected color. In a preferred embodiment the selected color is red. In another embodiment the selected color is green, and in another the value is a scaled factor indicating the intensity of a color that is a combination of red, green and blue intensities. A color filtered image is further processed to enhance the ability to detect particular features in the image file. One embodiment includes application of a Sobel transform 603 to the image. The Sobel transform produces an n by m matrix with values at each pixel location corresponding to the gradient of the image at that point. Thus the Sobel transform indicates whether an edge is detected and, based upon the gradient, the direction of change in intensity at that point. The gradient, in the case of a circular feature that is brighter than the surrounding image, points toward the center of the circle that defines the feature. An embodiment of the invention further includes application of a second derivative transform 604 to the image matrix. The Sobel transform 603 and the second derivative transform 604 may be performed serially on the image file. For each point within the n by m matrix, the points detected as an edge in the Sobel transform are then further tested 605 for a zero crossing of the second derivative in the image file. A zero crossing confirms the edge detected by the Sobel transform.
A nonzero second derivative indicates there is not actually an edge, and the Sobel transform may then be corrected 606 by setting that point to zero. The embodiment therefore includes a redundant edge detection routine to confirm edges and avoid false positive detection of edges and therefore of features. The matrix from the corrected Sobel transform is then subjected to circle detection 607, discussed in more detail in conjunction with Figure 7.
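The redundant edge-detection routine of Figure 6 can be sketched as follows: a Sobel gradient magnitude, kept only where the second derivative (here a Laplacian) exhibits a zero crossing nearby. This is a minimal illustration under assumed kernels and thresholds, not the specification's exact implementation:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
LAPLACE = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def filter3(img, k):
    """3x3 correlation over interior pixels (border left at zero)."""
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.sum(img[y-1:y+2, x-1:x+2] * k)
    return out

def confirmed_edges(img, mag_thresh):
    """Sobel edge magnitude, kept only where the second derivative has a
    zero crossing in the 3x3 neighbourhood; otherwise set to zero (step 606)."""
    gx, gy = filter3(img, SOBEL_X), filter3(img, SOBEL_X.T)
    mag = np.hypot(gx, gy)
    lap = filter3(img, LAPLACE)
    out = np.zeros_like(mag)
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if mag[y, x] > mag_thresh:
                nb = lap[y-1:y+2, x-1:x+2]
                if nb.min() < 0.0 < nb.max():  # sign change confirms the edge
                    out[y, x] = mag[y, x]
    return out

# A vertical step edge is confirmed; a flat image yields no edges at all.
step = np.zeros((8, 8)); step[:, 4:] = 100.0
assert confirmed_edges(step, 50.0).max() > 0.0
assert confirmed_edges(np.full((8, 8), 70.0), 50.0).max() == 0.0
```

Requiring both a strong first derivative and a second-derivative zero crossing is what suppresses false edges before the circle search runs.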
Figure 7 shows a flow chart for a circle detection process embodiment. The corrected Sobel transformation 701 provides a starting point for detection of circles using the Hough transform 702.
The Hough transform, known in the art, accumulates values at each pixel location corresponding to the accumulated evidence that the pixel location of interest is the center of a circle, together with the radius of that circle. The Hough transform in this case is modified for increased reliability by only accumulating values from those points within the corrected Sobel transform (described in conjunction with Figure 6) whose gradient vectors point in the direction of the center of the circle and whose radius falls within the selected parameter range for the size of the radius of the targeted object in the image, as discussed in conjunction with Figure 2 above and Equation 1. The Hough transform output is then further filtered 703 using a Mexican hat transform to concentrate the spots within the Hough transform output. Multiple point circle centers are then reduced 704 to produce an array with single point circle centers. The color of the targeted object is then checked 705 against the original image. The check ensures that the color of the circle in the original image matches the targeted color of the object. In a preferred embodiment the target object is a red traffic signal light. The pixels located within the found circle are checked for red, green and blue intensity values. If the relative intensities of green and blue exceed the parameters used to define the red color of a traffic light, the circle is rejected as a red traffic signal. In another embodiment the targeted color of the object is green. The circle detection routines then operate upon the green intensity values of the red, green, blue array in the original image using the process discussed above. The pixel locations within a detected circle are then checked for color in that the relative value of the green intensity must exceed a given value and the relative intensities of red and blue must be below given parameters.
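The gradient-directed accumulation described above can be sketched as a minimal circular Hough accumulator. This illustration omits the Mexican hat filtering and multiple-center reduction steps, and uses `np.gradient` in the demonstration where the specification uses a corrected Sobel transform; all parameter values are assumptions:

```python
import numpy as np

def gradient_hough(edges, gx, gy, r_min, r_max):
    """Circular Hough accumulator that votes only along each edge pixel's
    gradient direction (for a bright circle the gradient points to the centre)."""
    h, w = edges.shape
    acc = np.zeros((h, w, r_max - r_min + 1), dtype=np.int32)
    for y, x in zip(*np.nonzero(edges)):
        mag = np.hypot(gx[y, x], gy[y, x])
        if mag == 0.0:
            continue
        ux, uy = gx[y, x] / mag, gy[y, x] / mag  # unit gradient vector
        for r in range(r_min, r_max + 1):
            # Step distance r toward increasing brightness: a candidate centre.
            cy, cx = int(round(y + uy * r)), int(round(x + ux * r))
            if 0 <= cy < h and 0 <= cx < w:
                acc[cy, cx, r - r_min] += 1
    return acc

# Synthetic bright disc of radius 8 centred at (20, 20).
yy, xx = np.mgrid[0:41, 0:41]
disc = (((yy - 20) ** 2 + (xx - 20) ** 2) <= 64).astype(float) * 100.0
gy, gx = np.gradient(disc)
edges = np.hypot(gx, gy) > 20.0
acc = gradient_hough(edges, gx, gy, 5, 12)
cy, cx, ri = np.unravel_index(acc.argmax(), acc.shape)
assert abs(cy - 20) <= 2 and abs(cx - 20) <= 2
```

Restricting the votes to the gradient direction collapses the accumulation from a full circle of candidate centers per edge pixel down to one candidate per radius, which is both faster and less prone to spurious peaks.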
In a preferred embodiment, parameters are selected on the basis of the actual color of a green traffic signal light. If the color of the circle is confirmed as being the targeted color, a flag is set that a circle has been detected and the location and size of the circle are transmitted to output 706. The output may then be used as described above to alert the operator that an object of interest has been detected. The color confirmation process is further detailed in Figure 8. One issue in detecting the color of a light is that local hot spots within a lamp may skew the ability to detect the color based upon the relative levels of the red, green and blue intensities. If the luminance is high, the intensity values may in fact be saturated and not give an accurate measure of the relative intensities of the red, green and blue components. This is overcome through the routine described in Figure 8. Starting at the center of the circle 801, the values of the red, green and blue intensity are first checked 802 to see if all three color intensities exceed a threshold 803. If the three color intensities exceed the threshold, the intensity values at this radius are ignored 803. If the intensities do not exceed the threshold, the values
are included in the average value for the intensity within the circle. R is then incremented 804 and checked to see whether points remain within the circle; if so, the points are checked again 802.
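The hotspot-excluding average of Figure 8 can be sketched as follows. For brevity this sketch masks the whole circle at once rather than stepping radius by radius, and the saturation threshold of 240 is an assumed value:

```python
import numpy as np

def circle_colour_average(rgb, cy, cx, radius, sat_threshold=240.0):
    """Average R, G, B over pixels inside the circle, skipping saturated
    'hot spots' where all three channels exceed sat_threshold."""
    h, w, _ = rgb.shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    hot = np.all(rgb > sat_threshold, axis=2)
    keep = inside & ~hot
    if not keep.any():
        return None  # every pixel saturated: colour cannot be judged
    return rgb[keep].mean(axis=0)  # (avg_R, avg_G, avg_B)

# A red lamp with a saturated white hot spot at its centre still averages red.
img = np.zeros((21, 21, 3))
yy, xx = np.mgrid[0:21, 0:21]
img[(yy - 10) ** 2 + (xx - 10) ** 2 <= 36] = (220.0, 30.0, 25.0)   # red disc
img[(yy - 10) ** 2 + (xx - 10) ** 2 <= 4] = (255.0, 255.0, 255.0)  # hot spot
avg = circle_colour_average(img, 10, 10, 6)
assert avg[0] > 200 and avg[1] < 50 and avg[2] < 50
```

Excluding saturated pixels is the key point: without it, the white hot spot would pull the green and blue averages up and the red lamp could be wrongly rejected.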
In one embodiment, confirming the detection of the target object comprises comparing an average of the intensity values for each primary color, for pixels in the acquired image located within the loci of pixel locations identified as being within the boundaries of the target object, with a corresponding target range of intensity values for that primary color; affirming that the target object is found and located at the loci of pixel locations if the average intensity values for each primary color are all within the corresponding target ranges of color intensity values; and disaffirming that the target object is found if any one of the average intensity values is outside its corresponding target range. The intensity values for pixels located within the loci of pixel locations are only included in the average if the sum of the intensity values is within a target range for total intensity, and are excluded from the average if the sum is not within that range. As an example, for a red traffic signal light the target range for red is that the average intensity is greater than a selected value, and the target ranges for green and blue are that the average intensity values are both less than a selected value. Similarly, if the method is applied to a green light, then the average value for green must be larger than a selected value and the average values for red and blue must be less than a selected threshold. The method applies similarly to any color in the spectrum by selecting appropriate ranges for the average values of the primary colors. Note that the location of the object is detected in a transformed matrix and is confirmed in the original image. The separation of this confirmation step, reverting to analysis of the original image, reduces the errors and false positive indications of a found object.
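The per-primary range test described above can be sketched in a few lines. The dictionary layout and the specific range values are illustrative assumptions:

```python
def confirm_colour(avg_rgb, target_ranges):
    """Affirm only if every primary's average lies within its target range.
    target_ranges maps 'r', 'g', 'b' to (low, high) tuples."""
    return all(target_ranges[c][0] <= v <= target_ranges[c][1]
               for c, v in zip("rgb", avg_rgb))

# Assumed ranges for a red signal: high red, low green and blue.
red_ranges = {"r": (150, 255), "g": (0, 100), "b": (0, 100)}
assert confirm_colour((220, 30, 25), red_ranges)        # red light: affirmed
assert not confirm_colour((220, 180, 25), red_ranges)   # too much green: rejected
```

Swapping in `{"r": (0, 100), "g": (150, 255), "b": (0, 100)}` would apply the same test to a green signal, and analogous ranges extend the method to any color.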
In the examples the primary colors are selected as red, green and blue. The method could equivalently be applied if different primary colors were selected, such as cyan, magenta and yellow. Similarly the method could equivalently be applied if the color space describing the image were transformed to a hue, saturation and luminance and the average values for pixels within the location identified as containing the object of interest were then required to fall within a range of values for hue, saturation and luminance.
Summary
Devices and methods are disclosed for acquiring images, looking for objects within those images, and alerting a user to found objects. The preferred embodiment is the detection of red traffic signal lights as viewed from an automobile moving on a roadway. A second embodiment would be
used to detect the green lights of traffic signals. The devices and methods in the preferred embodiment are used to detect circular objects. Transforms could also be used to detect objects of other shapes and of colors other than red and green. The devices provide means to input operating parameters and to alert drivers when target objects are found. Embodiments are included to improve reliability and reduce false positive detection by rechecking color intensities within the coordinates of the found object.
Those skilled in the art will appreciate that various adaptations and modifications of the preferred embodiments can be configured without departing from the scope and spirit of the invention. Therefore, it is to be understood that the invention may be practiced other than as specifically described herein, within the scope of the appended claims.
Claims
1. A method of detecting a target object within an image, said target object having a shape, a size, and a target range of color intensity values for each primary color, said method comprising:
a) acquiring an image from a digital camera, said image comprising a matrix of intensity values for each primary color for each pixel, and storing said matrix in digital memory, said camera having a lens with a focal length,
b) defining a target size in the image for the object based upon the focal length and a range of distances between the camera and the object,
c) extracting the intensity values for a selected color, said selected color matching the color of the target object, from the matrix, thereby creating a matrix of monochrome intensity values, and storing said monochrome matrix,
d) detecting the pixel locations that represent edges based upon changes in monochrome intensity values within the matrix of monochrome intensity values, thereby creating a matrix of pixel locations and intensities of edges,
e) detecting loci of pixel locations in the matrix of pixel locations and intensities of edges, said loci having a shape that matches the shape of the target object and a size that falls within the target range of sizes of the target object,
f) confirming the detecting of the loci of pixel locations, said confirming comprising comparing the intensity values for red, green and blue of each selected pixel in the original image, said selected pixels being located at the location of the loci of pixels, with the color of the target object.
2. The method of claim 1 where the confirming the detecting of the loci of pixel locations comprises comparing an average of the intensity value for each primary color for pixels in the acquired image and located within the loci of pixel locations, with the corresponding target range of intensity values for the selected primary color and affirming the target object is found and located at the loci of pixel locations if the average intensity values for each primary color are all within the corresponding target ranges of color intensity values and disaffirming the target object is found if any one of the average intensity values for each primary color is outside of the corresponding target range of color intensity values.
3. The method of claim 2 where the average intensity values for pixels located within the loci of pixel locations are included in the average if the sum of the intensity values is within a target range for total intensity and are excluded from the average if the sum of the intensity values is not within a target range for total intensity.
4. The method of claim 3 where the target object is a red traffic signal light.
5. The method of claim 3 where the target object is a green traffic signal light.
6. A method of alerting a driver of a traffic signal, said traffic signal having a shape, a size, and a target range of color intensity values for each primary color, said method comprising:
a) acquiring an image of a view in front of the driver from a digital camera, said image comprising a matrix of intensity values for each primary color for each pixel, and storing said matrix in digital memory, said camera having a lens with a focal length,
b) defining a range of target sizes in the image for the traffic signal based upon the focal length and a range of distances for alerting the driver,
c) extracting the intensity values for a selected color, said selected color matching the color of the traffic signal, from the matrix, thereby creating a matrix of monochrome intensity values, and storing said monochrome matrix,
d) detecting the pixel locations that represent edges based upon changes in monochrome intensity values within the matrix of monochrome intensity values, thereby creating a matrix of pixel locations and intensities of edges,
e) detecting loci of pixel locations in the matrix of pixel locations and intensities of edges, said loci having a shape that matches the shape of the traffic signal and a size that falls within the range of target sizes,
f) confirming the detecting of the loci of pixel locations, said confirming comprising comparing the average intensity values for red, green and blue of selected pixels in the image, said selected pixels located in the location of the loci of pixels, with the color of the traffic signal,
g) alerting the driver of a traffic signal if a traffic signal is detected and confirmed.
7. The method of claim 6 where the confirming the detecting of the loci of pixel locations comprises comparing an average of the intensity values for each primary color, for pixels in the acquired image located within the loci of pixel locations, with the corresponding target range of intensity values for that primary color, and affirming that the traffic signal is found and located at the loci of pixel locations if the average intensity values for each primary color are all within the corresponding target ranges of color intensity values, and disaffirming that the traffic signal is found if any one of the average intensity values for each primary color is outside of the corresponding target range of color intensity values.
8. The method of claim 7 where the intensity values for pixels located within the loci of pixel locations are included in the average if the sum of the intensity values is within a target range for total intensity and are excluded from the average if the sum of the intensity values is not within the target range for total intensity.
9. The method of claim 6 where the traffic signal is a red traffic signal light.
10. The method of claim 6 where the traffic signal is a green traffic signal light.
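Two computations in claims 6 and 7 lend themselves to a short sketch: step (b) of claim 6, where the expected image size of the signal follows from the pinhole relation image_size = focal_length × object_size / distance, and the claim 7 confirmation, where each primary-color average must fall inside its target range. All numbers below (focal length, pixel pitch, lamp diameter, alert distances, color ranges) are made up for illustration; they are not from the patent:

```python
# Sketch of claim 6 step (b) and the claim 7 color confirmation.
# All parameter values here are hypothetical.

def target_size_range_px(focal_mm, pixel_pitch_mm, signal_diam_m,
                         near_m, far_m):
    """Expected image size (in pixels) of a signal lamp of known
    diameter seen between near_m and far_m, via the pinhole model
    image_size = focal_length * object_size / distance."""
    focal_px = focal_mm / pixel_pitch_mm
    return (focal_px * signal_diam_m / far_m,   # smallest (farthest)
            focal_px * signal_diam_m / near_m)  # largest (nearest)

def confirm_color(avg_rgb, target_ranges):
    """Affirm a candidate locus only if the average of every primary
    color lies within its corresponding target range (claim 7)."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(avg_rgb, target_ranges))

# Example: 0.3 m lamp, 8 mm lens, 0.005 mm pixel pitch,
# alert window between 20 m and 80 m ahead.
lo_px, hi_px = target_size_range_px(8.0, 0.005, 0.3, 20.0, 80.0)
print(round(lo_px), round(hi_px))  # lamp diameter window in pixels

red_ranges = [(180, 255), (0, 90), (0, 90)]      # hypothetical red-lamp ranges
print(confirm_color((230, 40, 35), red_ranges))  # red lamp -> affirmed
print(confirm_color((40, 220, 60), red_ranges))  # green lamp -> disaffirmed
```

Constraining candidate loci to this size window before the color test is what ties the focal length and alert-distance range of claim 6 into the edge-shape search of step (e).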
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US21165709P | 2009-04-01 | 2009-04-01 | |
| US61/211,657 | 2009-04-01 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2010115020A2 true WO2010115020A2 (en) | 2010-10-07 |
| WO2010115020A3 WO2010115020A3 (en) | 2011-02-10 |
Family
ID=42828936
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2010/029658 Ceased WO2010115020A2 (en) | 2009-04-01 | 2010-04-01 | Color and pattern detection system |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2010115020A2 (en) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2995522A3 (en) * | 2014-09-10 | 2016-08-17 | Continental Automotive Systems, Inc. | Detection system for color blind drivers |
| CN105930819A (en) * | 2016-05-06 | 2016-09-07 | 西安交通大学 | System for real-time identifying urban traffic lights based on single eye vision and GPS integrated navigation system |
| US9824581B2 (en) | 2015-10-30 | 2017-11-21 | International Business Machines Corporation | Using automobile driver attention focus area to share traffic intersection status |
| EP3832337A3 (en) * | 2020-05-11 | 2021-08-04 | Beijing Baidu Netcom Science Technology Co., Ltd. | Positioning method and device, on-board equipment, vehicle, electronic device, and positioning system |
| US20210334980A1 (en) * | 2020-12-28 | 2021-10-28 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Method and apparatus for determining location of signal light, storage medium, program and roadside device |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2007263737A (en) * | 2006-03-28 | 2007-10-11 | Clarion Co Ltd | Navigation apparatus, method and program |
| ATE519193T1 (en) * | 2006-12-06 | 2011-08-15 | Mobileye Technologies Ltd | DETECTION AND RECOGNITION OF TRAFFIC SIGNS |
| JP2009015759A (en) * | 2007-07-09 | 2009-01-22 | Honda Motor Co Ltd | Traffic light recognition device |
| US8233670B2 (en) * | 2007-09-13 | 2012-07-31 | Cognex Corporation | System and method for traffic sign recognition |
- 2010-04-01: WO application PCT/US2010/029658 filed as WO2010115020A2 (not active, ceased)
Cited By (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2995522A3 (en) * | 2014-09-10 | 2016-08-17 | Continental Automotive Systems, Inc. | Detection system for color blind drivers |
| US9824581B2 (en) | 2015-10-30 | 2017-11-21 | International Business Machines Corporation | Using automobile driver attention focus area to share traffic intersection status |
| US10282986B2 (en) | 2015-10-30 | 2019-05-07 | International Business Machines Corporation | Using automobile driver attention focus area to share traffic intersection status |
| US10650676B2 (en) | 2015-10-30 | 2020-05-12 | International Business Machines Corporation | Using automobile driver attention focus area to share traffic intersection status |
| CN105930819A (en) * | 2016-05-06 | 2016-09-07 | 西安交通大学 | System for real-time identifying urban traffic lights based on single eye vision and GPS integrated navigation system |
| EP3832337A3 (en) * | 2020-05-11 | 2021-08-04 | Beijing Baidu Netcom Science Technology Co., Ltd. | Positioning method and device, on-board equipment, vehicle, electronic device, and positioning system |
| US11405744B2 (en) | 2020-05-11 | 2022-08-02 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Positioning method and device, on-board equipment, vehicle, and positioning system |
| US20210334980A1 (en) * | 2020-12-28 | 2021-10-28 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Method and apparatus for determining location of signal light, storage medium, program and roadside device |
| EP3872701A3 (en) * | 2020-12-28 | 2022-01-12 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Method and apparatus for determining location of signal light, storage medium, program and roadside device |
| US11810320B2 (en) | 2020-12-28 | 2023-11-07 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Method and apparatus for determining location of signal light, storage medium, program and roadside device |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2010115020A3 (en) | 2011-02-10 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11745755B2 (en) | | Vehicular driving assist system with driver monitoring |
| US20090060273A1 (en) | | System for evaluating an image |
| KR101727054B1 (en) | | Method for detecting and recognizing traffic lights signal based on features |
| EP2477139A2 (en) | | Lane departure warning system and method |
| US20060257024A1 (en) | | Method and device for visualizing the surroundings of a vehicle |
| KR102021152B1 (en) | | Method for detecting pedestrians based on far infrared ray camera at night |
| EP1553516A2 (en) | | Pedestrian extracting apparatus |
| CN104509100B (en) | | Three-dimensional body detection device and three-dimensional body detection method |
| JP2007234019A (en) | | Vehicle image area specifying device and method for it |
| US20140270392A1 (en) | | Apparatus and Method for Measuring Road Flatness |
| JP4807354B2 (en) | | Vehicle detection device, vehicle detection system, and vehicle detection method |
| JP5401257B2 (en) | | Far-infrared pedestrian detection device |
| WO2010115020A2 (en) | | Color and pattern detection system |
| US20240015269A1 (en) | | Camera system, method for controlling the same, storage medium, and information processing apparatus |
| WO2010007718A1 (en) | | Vehicle vicinity monitoring device |
| JP2011227657A (en) | | Device for monitoring periphery of vehicle |
| JP5166933B2 (en) | | Vehicle recognition device and vehicle |
| JP4528283B2 (en) | | Vehicle periphery monitoring device |
| KR101276073B1 (en) | | System and method for detecting distance between forward vehicle using image in navigation for vehicle |
| JP4826355B2 (en) | | Vehicle surrounding display device |
| CN113771753A (en) | | Auxiliary driving method and system for intersection green light reminding |
| KR101982091B1 (en) | | Surround view monitoring system |
| WO2008037473A1 (en) | | Park assist system visually marking up dangerous objects |
| JP4813304B2 (en) | | Vehicle periphery monitoring device |
| JP2009193130A (en) | | Vehicle periphery monitoring device, vehicle, vehicle periphery monitoring program, and vehicle periphery monitoring method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 10759419; Country of ref document: EP; Kind code of ref document: A2 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 10759419; Country of ref document: EP; Kind code of ref document: A2 |