US20100189364A1 - Method and system for image identification and identification result output - Google Patents
Method and system for image identification and identification result output
- Publication number
- US20100189364A1 (application US12/512,575; US51257509A)
- Authority
- US
- United States
- Prior art keywords
- image
- feature
- identification
- sample
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/63—Scene text, e.g. street names
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/625—License plates
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
Description
- The present invention generally relates to an image identification technology and, more particularly, to a method and a system for image identification and identification result output in which a feature image under identification is compared with a plurality of sample images respectively so as to obtain a plurality of similarity indexes associated with the plurality of sample images respectively. The similarity indexes are sorted and then at least one of the comparison results is output.
- Many people are killed in traffic accidents every year, and thefts and burglaries involving cars and motorcycles are repeatedly reported. Offenders often escape because the license plates in surveillance footage cannot be identified: most monitoring systems suffer from low resolution (320×240 pixels) and slanted mounting angles of the image acquiring units, which produce blurred or incomplete images that cannot be recognized, so the criminals remain at large.
- Conventionally, in Taiwan Patent No. 197752, a CCD camera and an image acquiring unit are used to acquire a car image in the car lane, and the car image is then read by an image reading unit. A logarithmic greyscale operation unit then calculates the logarithmic greyscale of each pixel in the car image. The image corresponding to the logarithmic greyscales is decomposed by a wavelet decomposition unit into rough images, horizontally differentiated images, vertically differentiated images and diagonally differentiated images. An image binarization unit converts the logarithmic greyscale of each pixel in the horizontally differentiated images from real numbers into the binary digits 0 and 1. A rough image dividing unit determines the region with the highest sum of binary digits within the whole car image according to a pre-set license plate size, and this region is initially taken as the license plate region. Then, a license plate slantness correction unit corrects the slantness of the image corresponding to the license plate region. Finally, a fine image dividing unit removes the parts of the rough license plate region that do not correspond to the license plate.
- Moreover, Taiwan Patent Pub. No. I286027 discloses an integrated multiple lane free flow vehicle enforcement system, wherein portal-frame equipment is established at the image enforcement point. The car lane is physically divided so that image enforcement can be realized with respect to various cars even while the cars pass the image enforcement point at a normal speed and change lanes freely.
- Moreover, Taiwan Patent Appl. No. 200802137 discloses a serial license plate identification system that uses a license plate character region detection module to receive an image and determine each approximate license plate range in the image. Sequences of serial identical pixels in each approximate license plate range are obtained. The sequences of serial identical pixels are erased, filtered, and connected into blocks so as to obtain the image of the license plate character region in each approximate license plate range and to output a verified image of the license plate character region after verification. Then, the verified image of the license plate character region is transmitted to a license plate character dividing and identification module to acquire all the independent character images and thus, after the independent character images are identified, all the license plate character information.
- The present invention provides a method and a system for image identification and identification result output, wherein pixels in the sample images are given different weights, which are then used in a calculation with the image to be identified to obtain similarity indexes; the similarity indexes are sorted so as to output at least one of the comparison results.
- The present invention provides a method and a system for image identification and identification result output which can be used to identify the identification mark on a carrier. With the use of the identification mark, the character features are enhanced. Character identification technology is used to obtain a plurality of candidate results and to limit the range of the search, which helps the user identify cars that are suspected of causing accidents.
- In one embodiment, the present invention provides a method for image identification and identification result output, comprising the steps of: providing an image; acquiring a feature image from the image; providing a plurality of sample images, each having a standard image region and at least one non-standard image region, wherein the standard image region has pixels corresponding to a first feature value respectively, and the non-standard image region has pixels corresponding to a second feature value respectively; performing a calculation on a third feature value of each pixel in the feature image and the first feature value or the second feature value corresponding to each pixel in the plurality of sample images to obtain a similarity index of the feature image corresponding to each of the plurality of sample images; collecting a plurality of similarity indexes with respect to the feature image compared with the plurality of sample images; and sorting the plurality of similarity indexes and outputting at least one of the comparison results.
- In another embodiment, the present invention provides a method for image identification and identification result output, comprising the steps of: providing an image of a carrier with an identification mark thereon; acquiring from the image a plurality of feature images with respect to the identification mark; providing a plurality of sample images, each having a standard image region and at least one non-standard image region, wherein the standard image region has pixels corresponding to a first feature value respectively, and the non-standard image region has pixels corresponding to a second feature value respectively; performing a calculation on a third feature value of each pixel in the plurality of feature images and the first feature value or the second feature value corresponding to each pixel in the plurality of sample images to obtain a similarity index of each feature image corresponding to each of the plurality of sample images; collecting a plurality of similarity indexes with respect to each feature image compared with the plurality of sample images; and sorting the plurality of similarity indexes corresponding to the identification mark and outputting at least one of the comparison results.
- In one embodiment, the present invention further provides a system for image identification and identification result output, comprising: a database capable of providing a plurality of sample images, each having a standard image region and at least one non-standard image region, wherein the standard image region has pixels corresponding to a first feature value respectively, and the non-standard image region has pixels corresponding to a second feature value respectively; an image acquiring unit capable of acquiring an image of an object; a feature acquiring unit capable of acquiring a feature image from the image; an operation and processing unit capable of performing a calculation on a third feature value of each pixel in the feature image and the first feature value or the second feature value corresponding to each pixel in the plurality of sample images to obtain a similarity index of the feature image corresponding to each of the plurality of sample images, and of sorting the plurality of similarity indexes and outputting at least one of the comparison results; and an identification and output unit connected to the operation and processing unit to output the comparison result identified by the operation and processing unit.
- The objects and spirits of the embodiments of the present invention will be readily understood by the accompanying drawings and detailed descriptions, wherein:
- FIG. 1 is a flowchart of a method for image identification and identification result output according to one embodiment of the present invention;
- FIG. 2 is a flowchart of a step of providing sample images according to the present invention;
- FIG. 3A is a schematic diagram of a sample image;
- FIG. 3B is a schematic diagram showing a standard image region in a sample image;
- FIG. 3C and FIG. 3D are respectively schematic diagrams showing a standard image region and a non-standard image region in a sample image;
- FIG. 4 is a schematic diagram showing a normalized feature image;
- FIG. 5 is a flowchart of a method for image identification and identification result output according to another embodiment of the present invention;
- FIG. 6 is a blurred image of a car;
- FIG. 7A and FIG. 7B depict schematically identification marks with different combinations;
- FIG. 8A is a table for sorting the comparison results according to the present invention;
- FIG. 8B is a blurred image of an identification mark on a car; and
- FIG. 9 is a schematic diagram of a system for image identification and identification result output according to the present invention.
- The present invention can be exemplified but not limited by the various embodiments described hereinafter.
- Please refer to FIG. 1, which is a flowchart of a method for image identification and identification result output according to one embodiment of the present invention. In the present embodiment, the method starts with step 20 to provide an image. The image is acquired by an image acquiring unit such as a CCD or CMOS image acquiring unit. Then, step 21 is performed to acquire a feature image from the image. The feature image denotes an image comprising features such as patterns or words, but is not limited thereto. The feature image can be acquired manually or automatically. Then, step 22 is performed to provide a plurality of sample images, each having a standard image region and at least one non-standard image region. The standard image region has pixels corresponding to a first feature value respectively, and the non-standard image region has pixels corresponding to a second feature value respectively.
- Please refer to FIG. 2, which is a flowchart of the step of providing sample images according to the present invention. Firstly, step 220 determines the size of the sample images, as shown in FIG. 3A. The size of the sample image 5 is determined according to the user's demand, for example, 130×130 pixels, but is not limited thereto. Then, step 221 is performed to provide the standard image region 50 in the sample image 5. The standard image region 50 comprises a plurality of pixels 500 and 501 that form the character, digit, word or pattern represented by the sample image. Referring to FIG. 3B, the present embodiment is exemplified by the digit “1”. In the sample image 5, each pixel 500 and 501 is given a proper greyscale value to form the standard image region 50, which draws the outline of the digit “1”. Then, in the standard image region 50, specific pixels 501 (pixels with oblique lines) are given a specific weight value. The greyscale value and the weight value are determined according to the user's demand; that is, the weight values may be different or identical. In the present embodiment, the weight values are positive. In the standard image region 50, the greyscale value and the weight value of each pixel 500 and 501 are combined as the first feature value.
- Referring to FIG. 2, step 222 is performed to provide the non-standard image region 51 in the sample image, as shown in FIG. 3C. The non-standard image region 51 represents the content that the standard image region 50 is mistaken for. For example, the digit “1” is often mistaken for the letter “I” or “L”, or even the letter “E”. Therefore, the locations of the possibly mis-identified pixels 510 (pixels with dots) are given proper greyscale values and weight values as the second feature values corresponding to the pixels 510. In the present embodiment, the locations of the pixels 510 in the non-standard image region 51 are determined according to the characters, digits or words that the standard image region 50 is easily mis-identified as, although this is not restricted. The greyscale values and weight values are determined according to practical demand. In the present embodiment, the weight values in the non-standard image region 51 are negative.
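The sample images of steps 221 and 222 can be pictured as a greyscale template plus a per-pixel weight map: positive weights on the stroke pixels of the standard image region, negative weights on the pixels where easily confused characters differ, and a weight of 1 everywhere else. The Python sketch below is an editorial illustration under assumed conventions (a square sample image, greyscale 255 for stroke pixels, arbitrary weight magnitudes); it is not code from the patent.

```python
import numpy as np

def make_sample_image(size, stroke_coords, confusable_coords,
                      stroke_weight=2.0, confusable_weight=-1.5):
    """Build a sample image as a (greyscale template, weight map) pair.

    stroke_coords:      (row, col) pixels forming the character outline
                        (the "standard image region", positive weights).
    confusable_coords:  (row, col) pixels where easily confused characters
                        differ (the "non-standard image region", negative
                        weights).
    All remaining pixels keep a weight of 1, as stated in the description.
    """
    template = np.zeros((size, size), dtype=float)   # background greyscale 0
    weights = np.ones((size, size), dtype=float)     # default weight 1

    for r, c in stroke_coords:
        template[r, c] = 255.0                       # bright stroke pixel
        weights[r, c] = stroke_weight                # part of first feature value

    for r, c in confusable_coords:
        weights[r, c] = confusable_weight            # part of second feature value

    return template, weights
```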
- As shown in FIG. 3D, which is a schematic diagram showing another sample image 5a provided for the digit “0”, the sample image 5a also comprises a standard image region and a non-standard image region. The pattern constructed by the pixels in the standard image region draws the outline of the digit “0”. Similarly, the pattern constructed by the pixels in the non-standard image region denotes a character that the digit “0” is mistaken for; for example, the digit “0” is often mistaken for the letter “Q” or the digit “8”. Steps 221 and 222 can be performed using image processing software exemplified by, but not limited to, MS Paint.
- Referring to FIG. 2, step 223 is performed to store the sample images, such as 0 to 9, A to Z and a to z, in the database. Then, in step 224, the identification result is observed after a plurality of training rounds. In this step, different images are compared with the database for identification and calculation to verify whether the result is correct. After the plurality of testing rounds, step 225 is performed to modify the weight values, greyscale values or locations of the pixels in the standard image region and the non-standard image region of the sample images according to the result in step 224.
- Referring to FIG. 1, after the plurality of sample images are provided, step 23 performs a calculation on a third feature value of each pixel in the feature image and the first feature value or the second feature value corresponding to each pixel in the plurality of sample images to obtain a similarity index of the feature image corresponding to each of the plurality of sample images. The third feature value is the greyscale value of each pixel in the feature image. Before step 23, since the distance and angle at which the feature image is acquired influence the subsequent identification, the feature image is normalized after acquisition to adjust its size and angle so that the size of the feature image is identical to the size of the sample images. The normalization processing is conventionally well known, and a description thereof is therefore not presented herein.
- Please refer to FIG. 4, which is a schematic diagram showing a normalized feature image. The normalized feature image can be processed with each of the sample images in a further calculation to obtain a corresponding similarity index C_uv. The calculation is based on normalized correlation matching, as described in equation (1). Normalized correlation matching is aimed at calculating the relation between the feature image and the sample image, wherein the standard deviation of the greyscale values of each image is regarded as a vector and is multiplied by the weight value so as to determine the optimal location. The correlation value is within the range between −1 and 1, with higher similarity as it gets closer to 1. When C_uv reaches its maximum, the optimal location is achieved.
- In equation (1), u_i is the greyscale value of each pixel in the sample image, while v_i is the greyscale value of each pixel in the feature image, i.e., the third feature value. Moreover, ū is the average greyscale value of all the pixels in the sample image, while v̄ is the average greyscale value of all the pixels in the feature image. w_i is the weight value of the pixels in the standard image region and the non-standard image region of the sample image; the weight value of the pixels in the other regions is 1.
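Equation (1) itself appears only as a figure in the original publication and is not reproduced in this text. Based on the variable definitions above, a weighted, zero-mean normalized cross-correlation of the following form is a reasonable reconstruction; it is an editorial assumption rather than a verbatim copy of the patent's formula.

```latex
% Hypothetical reconstruction of equation (1): weighted normalized correlation
C_{uv} \;=\; \frac{\displaystyle\sum_{i} w_i\,(u_i - \bar{u})\,(v_i - \bar{v})}
                  {\sqrt{\displaystyle\sum_{i} (u_i - \bar{u})^{2}\,\sum_{i} (v_i - \bar{v})^{2}}}
```

With all weights w_i equal to 1 this reduces to the standard normalized correlation coefficient, which is consistent with the stated range of −1 to 1.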
- Based on equation (1), a calculation is performed between each pixel in FIG. 4 and the corresponding pixel in each sample image. For example, the feature image in FIG. 4 is calculated against the sample image in FIG. 3C (representing the digit 1) and the sample image in FIG. 3D (representing the digit 0) to obtain the similarity indexes C_uv of the feature image in FIG. 4 corresponding to FIG. 3C and to FIG. 3D, respectively. Referring to FIG. 1, after the similarity indexes are obtained, step 24 is performed to collect the plurality of similarity indexes with respect to the feature image compared with the plurality of sample images. In this step, the similarity indexes are sorted from the identification result with the highest possibility to the identification result with the lowest possibility. Finally, in step 25, the comparison results are output.
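As a concrete illustration of steps 23 to 25, the sketch below scores one feature image against every sample image using the weighted normalized correlation assumed above and then sorts the results. It is a minimal example under stated assumptions (NumPy arrays, the make_sample_image helper from the earlier sketch, and a dictionary of labelled sample images), not the patent's reference implementation.

```python
import numpy as np

def weighted_normalized_correlation(template, weights, feature):
    """Similarity index C_uv between one sample image and one feature image.

    template, weights : arrays from the sample image (greyscale + weight per pixel).
    feature           : normalized feature image of the same shape
                        (third feature value = greyscale of each pixel).
    """
    u = template.astype(float).ravel()
    v = feature.astype(float).ravel()
    w = weights.astype(float).ravel()

    du = u - u.mean()                  # deviation from sample-image average
    dv = v - v.mean()                  # deviation from feature-image average
    denom = np.sqrt((du ** 2).sum() * (dv ** 2).sum())
    if denom == 0:                     # flat image: no meaningful correlation
        return 0.0
    return float((w * du * dv).sum() / denom)

def rank_samples(feature, samples):
    """Sort sample images (label -> (template, weights)) by similarity.

    Returns a list of (label, C_uv) pairs, most similar first, mirroring
    steps 24 and 25 in FIG. 1.
    """
    scores = {label: weighted_normalized_correlation(t, w, feature)
              for label, (t, w) in samples.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)
```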
- Please refer to FIG. 5, which is a flowchart of a method for image identification and identification result output according to another embodiment of the present invention. The flowchart of the method 3 is exemplified by the identification of an identification mark (such as a license plate) on a carrier (such as a car). Firstly, step 30 is performed to provide an image of a carrier with an identification mark thereon. In order to maintain traffic safety or to reconstruct an accident event, an image acquiring unit is often installed on one side of the road or at a crossroad so as to acquire dynamic or static images of the accident event. In step 30, the static images can be acquired from the recorded dynamic images. Then, step 31 is performed to acquire from the image a plurality of feature images with respect to the identification mark, as shown in FIG. 6, which is a blurred image of a car. In the present embodiment, the carrier is a car with wheels and the identification mark is the license plate number. In the present embodiment, there are 7 characters forming the license plate number, wherein the leading four characters are digits and the last two characters are letters. License plate numbers are formed with various character formats in different countries, so the invention is not limited to the present embodiment. In step 31, the content of the license plate is to be identified. Therefore, a plurality of feature images 900 can be acquired in the region of interest (ROI) 90 in FIG. 6 corresponding to the license plate. Each feature image respectively represents one character of the identification mark. The feature images can be acquired manually or automatically with the use of software.
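A minimal sketch of step 31 under simplifying assumptions: the plate region (ROI) is already known as a bounding box, and the character cells are assumed to be evenly spaced, which real plates only approximate. The coordinates, the cell count and the function name are illustrative, not taken from the patent.

```python
import numpy as np

def extract_feature_images(image, roi, n_chars=7):
    """Crop the license-plate ROI and split it into per-character feature images.

    image : 2-D greyscale array of the whole scene.
    roi   : (top, left, height, width) of the plate region, e.g. hand-picked.
    """
    top, left, height, width = roi
    plate = image[top:top + height, left:left + width]

    # Assume equally wide character cells; a real system would segment
    # characters by projection profiles or connected components instead.
    bounds = np.linspace(0, width, n_chars + 1).astype(int)
    return [plate[:, bounds[k]:bounds[k + 1]] for k in range(n_chars)]
```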
- Referring to FIG. 5, since there are 7 characters in the identification mark in the present embodiment, 7 feature images can be acquired. In step 32, the feature image corresponding to each character in the identification mark is compared with a plurality of sample images in the database. In the present embodiment, the comparison process is similar to step 23 in FIG. 1 and is thus not repeated herein. Moreover, step 32 further comprises step 33 to exclude images of impossible characters or digits according to the various combinations that form the identification marks. For example, in one embodiment, the identification mark is formed as a combination of 4 leading digits and 2 following letters with a “-” therebetween (as shown in FIG. 7A). In another identification mark, 2 leading letters and 4 following digits are combined, with a “-” therebetween (as shown in FIG. 7B). In the present embodiment, these two kinds of combinations exemplify the license plates. Therefore, images of impossible characters or digits can be excluded according to the relative locations of the feature images in the identification mark so as to increase identification efficiency. For example, if the license plate is similar to FIG. 7A, the feature images representing the leading 4 digits are compared only with the sample images representing digits in the database, the feature image corresponding to the fifth character is not compared with anything and is determined to be “-”, and the feature images corresponding to the sixth and the seventh characters are compared only with the sample images representing letters in the database.
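The per-position exclusion of step 33 can be expressed as a mapping from character position to the set of allowed sample labels. The format names and label sets below are illustrative assumptions based on the two combinations shown in FIG. 7A and FIG. 7B; they are not part of the patent disclosure.

```python
import string

DIGITS = set(string.digits)
LETTERS = set(string.ascii_uppercase)

# Position -> allowed sample labels for the two plate formats in FIG. 7A/7B.
PLATE_FORMATS = {
    "DDDD-LL": [DIGITS, DIGITS, DIGITS, DIGITS, {"-"}, LETTERS, LETTERS],
    "LL-DDDD": [LETTERS, LETTERS, {"-"}, DIGITS, DIGITS, DIGITS, DIGITS],
}

def allowed_labels(format_name, position):
    """Sample labels a feature image at `position` may be compared against."""
    return PLATE_FORMATS[format_name][position]

# Example: in the FIG. 7A format the fifth character (index 4) is fixed to "-".
assert allowed_labels("DDDD-LL", 4) == {"-"}
```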
- Referring to FIG. 5, after each feature image has been compared with each sample image in the database as in step 23, step 34 is performed to collect a plurality of similarity indexes with respect to each feature image compared with the plurality of sample images. Finally, in step 35, the similarity indexes corresponding to the identification mark are sorted. Therefore, in this step, a plurality of combinations representing the identification mark can be obtained according to the sorting. Then, in step 36, the comparison results are output. Please refer to FIG. 8A, which is a table for sorting the comparison results according to the present invention. After step 35, the feature image corresponding to each character is identified to obtain the character or digit with the highest similarity, and the combination thereof forms the first possible result shown in FIG. 8A; in this case, the suspected license plate number is possibly “1642-FV”. The second, third and fourth possible results can be obtained from the characters with the second, third and fourth highest similarity. The number of results is based on practical demand and is thus not limited to the number shown in FIG. 8A.
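A table such as FIG. 8A can be assembled by taking, for each row n, every position's n-th most similar character. The sketch below assumes per-position rankings of the kind produced by the rank_samples and allowed_labels sketches above; the function name and fallback behaviour are editorial choices, not the patent's.

```python
def plate_candidates(per_position_rankings, top_n=4):
    """Build the rows of a FIG. 8A-style table.

    per_position_rankings : list over character positions; each entry is a
        list of (label, C_uv) pairs sorted by similarity, highest first.
    """
    rows = []
    for n in range(top_n):
        chars = []
        for ranking in per_position_rankings:
            # Fall back to the best candidate if a position has fewer entries
            # (e.g. the fixed "-" position has only one).
            label, _ = ranking[min(n, len(ranking) - 1)]
            chars.append(label)
        rows.append("".join(chars))
    return rows
```

In the example of FIG. 8A the first row would read “1642-FV”, yet the correct characters of the true plate still appear somewhere in the table, which is what allows the user to recover the plate by choosing among the rows.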
- In step 36 of the present invention, the possible format of the identification mark is limited to combinations of digits and/or letters so as to help the user speed up identification. The identification mark in FIG. 8A is actually 6692-RV. According to the present invention, all of its characters appear in the table of FIG. 8A; therefore, the user is able to find the correct identification mark efficiently by choosing from the results. Moreover, the user can identify the character images in the region of interest 90 (as shown in FIG. 8B) corresponding to the license plate acquired in step 36 by visual estimation and choose from FIG. 8A accordingly. For example, according to the image in FIG. 8B, the second character is “6” by visual estimation. As a result, the user can identify the first and the third to the seventh characters from FIG. 8A according to the similarity indexes, thereby obtaining the identification mark from fewer combinations.
- Please refer to FIG. 9, which is a schematic diagram of a system for image identification and identification result output according to the present invention. The system 4 is capable of implementing the flowchart in FIG. 1 or FIG. 5 for image identification and identification result output. The system 4 comprises a database 40, an image processing unit 41, an identification and output unit 42 and a plurality of image acquiring units 43. The database 40 is capable of providing a plurality of sample images. The sample images differ from the images acquired by the image acquiring units 43 in viewing angle and distance. The plurality of image acquiring units 43 are electrically connected to the image processing unit 41. Each image acquiring unit 43 is capable of acquiring an image of an object and transmitting the image to the image processing unit 41 for identification. In the present embodiment, each of the image acquiring units 43 is capable of acquiring dynamic or static images of the object. The image acquiring units may be CCD or CMOS image acquiring units, but are not limited thereto. The object may be a carrier with an identification mark thereon, for example, a car with its license plate number. Moreover, the object may also be a word, a character, a digit or combinations thereof.
- The image processing unit 41 comprises a feature acquiring unit 410 and an operation and processing unit 411. The feature acquiring unit 410 is capable of receiving the image and acquiring a feature image from it. Then, the operation and processing unit 411 performs the calculation. In the present embodiment, the operation and processing unit 411 further comprises an enhancing unit 4110 and an identification and comparison unit 4111. The enhancing unit 4110 is capable of improving and normalizing the feature image (enhancing the contrast and the edges) so that the size and the angle of the feature image are identical to those of the sample images. The identification and comparison unit 4111 performs step 23 in FIG. 1 to compare the feature image with the sample images to obtain the plurality of corresponding similarity indexes, and sorts the plurality of similarity indexes to output at least one of the comparison results. The identification and output unit 42 is electrically connected to the image processing unit 41 to output the comparison result identified by the image processing unit 41. The output of the identification and output unit 42 is as shown in FIG. 8A, allowing the user to view the identification results on a display.
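The patent describes the enhancing unit 4110 only in general terms (contrast and edge enhancement plus normalization of size and angle). A minimal sketch of one plausible realisation, assuming OpenCV is available, is shown below: histogram equalization for contrast, unsharp masking for edges, and a resize to the sample-image size. Slant or angle correction is omitted, and these specific operations are assumptions rather than the disclosed implementation.

```python
import cv2

def enhance_and_normalize(feature_image, sample_size=(130, 130)):
    """Contrast and edge enhancement plus resizing, in the spirit of unit 4110.

    feature_image : greyscale uint8 array of one character.
    sample_size   : (width, height) of the sample images, e.g. 130x130 pixels.
    """
    equalized = cv2.equalizeHist(feature_image)                    # boost contrast
    blurred = cv2.GaussianBlur(equalized, (5, 5), 0)
    sharpened = cv2.addWeighted(equalized, 1.5, blurred, -0.5, 0)  # unsharp mask
    return cv2.resize(sharpened, sample_size, interpolation=cv2.INTER_AREA)
```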
- Accordingly, the present invention discloses a method and system for image identification and identification result output that improve the speed of targeting a suspected carrier and enhance identification efficiency. Therefore, the present invention is useful, novel and non-obvious.
- Although this invention has been disclosed and illustrated with reference to particular embodiments, the principles involved are susceptible for use in numerous other embodiments that will be apparent to persons skilled in the art. This invention is, therefore, to be limited only as indicated by the scope of the appended claims.
Claims (26)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW98103331A | 2009-01-23 | ||
| TW098103331A TWI410879B (en) | 2009-01-23 | 2009-01-23 | Method and system for identifying image and outputting identification result |
| TW098103331 | 2009-01-23 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20100189364A1 true US20100189364A1 (en) | 2010-07-29 |
| US8391559B2 US8391559B2 (en) | 2013-03-05 |
Family
ID=42354212
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US12/512,575 Active 2031-12-04 US8391559B2 (en) | 2009-01-23 | 2009-07-30 | Method and system for image identification and identification result output |
Country Status (2)
| Country | Link |
|---|---|
| US (1) | US8391559B2 (en) |
| TW (1) | TWI410879B (en) |
Cited By (29)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140286619A1 (en) * | 2013-03-22 | 2014-09-25 | Casio Computer Co., Ltd. | Display control apparatus displaying image |
| US9558419B1 (en) | 2014-06-27 | 2017-01-31 | Blinker, Inc. | Method and apparatus for receiving a location of a vehicle service center from an image |
| US9563814B1 (en) | 2014-06-27 | 2017-02-07 | Blinker, Inc. | Method and apparatus for recovering a vehicle identification number from an image |
| US9589202B1 (en) | 2014-06-27 | 2017-03-07 | Blinker, Inc. | Method and apparatus for receiving an insurance quote from an image |
| US9589201B1 (en) | 2014-06-27 | 2017-03-07 | Blinker, Inc. | Method and apparatus for recovering a vehicle value from an image |
| US9594971B1 (en) | 2014-06-27 | 2017-03-14 | Blinker, Inc. | Method and apparatus for receiving listings of similar vehicles from an image |
| US9600733B1 (en) | 2014-06-27 | 2017-03-21 | Blinker, Inc. | Method and apparatus for receiving car parts data from an image |
| US9607236B1 (en) | 2014-06-27 | 2017-03-28 | Blinker, Inc. | Method and apparatus for providing loan verification from an image |
| CN106778735A (en) * | 2016-11-25 | 2017-05-31 | 北京大学深圳研究生院 | A kind of licence plate recognition method and device |
| US9754171B1 (en) | 2014-06-27 | 2017-09-05 | Blinker, Inc. | Method and apparatus for receiving vehicle information from an image and posting the vehicle information to a website |
| US9760776B1 (en) | 2014-06-27 | 2017-09-12 | Blinker, Inc. | Method and apparatus for obtaining a vehicle history report from an image |
| US9773184B1 (en) | 2014-06-27 | 2017-09-26 | Blinker, Inc. | Method and apparatus for receiving a broadcast radio service offer from an image |
| US9779318B1 (en) | 2014-06-27 | 2017-10-03 | Blinker, Inc. | Method and apparatus for verifying vehicle ownership from an image |
| US9818154B1 (en) | 2014-06-27 | 2017-11-14 | Blinker, Inc. | System and method for electronic processing of vehicle transactions based on image detection of vehicle license plate |
| US9892337B1 (en) | 2014-06-27 | 2018-02-13 | Blinker, Inc. | Method and apparatus for receiving a refinancing offer from an image |
| CN108664957A (en) * | 2017-03-31 | 2018-10-16 | 杭州海康威视数字技术股份有限公司 | Number-plate number matching process and device, character information matching process and device |
| CN108681705A (en) * | 2018-05-15 | 2018-10-19 | 国网重庆市电力公司电力科学研究院 | A kind of measuring equipment consistency checking method and system based on figure identification |
| CN109145901A (en) * | 2018-08-14 | 2019-01-04 | 腾讯科技(深圳)有限公司 | Item identification method, device, computer readable storage medium and computer equipment |
| US10242284B2 (en) | 2014-06-27 | 2019-03-26 | Blinker, Inc. | Method and apparatus for providing loan verification from an image |
| US10515285B2 (en) | 2014-06-27 | 2019-12-24 | Blinker, Inc. | Method and apparatus for blocking information from an image |
| US10540564B2 (en) | 2014-06-27 | 2020-01-21 | Blinker, Inc. | Method and apparatus for identifying vehicle information from an image |
| US10572758B1 (en) | 2014-06-27 | 2020-02-25 | Blinker, Inc. | Method and apparatus for receiving a financing offer from an image |
| US10733471B1 (en) | 2014-06-27 | 2020-08-04 | Blinker, Inc. | Method and apparatus for receiving recall information from an image |
| CN111739307A (en) * | 2020-08-13 | 2020-10-02 | 深圳电目科技有限公司 | License plate recognition method and system for increasing license plate recognition rate |
| US10867327B1 (en) | 2014-06-27 | 2020-12-15 | Blinker, Inc. | System and method for electronic processing of vehicle transactions based on image detection of vehicle license plate |
| CN112199998A (en) * | 2020-09-09 | 2021-01-08 | 浙江大华技术股份有限公司 | Face recognition method, device, equipment and medium |
| US20220100999A1 (en) * | 2020-09-30 | 2022-03-31 | Rekor Systems, Inc. | Systems and methods for suspect vehicle identification in traffic monitoring |
| WO2022267273A1 (en) * | 2021-06-24 | 2022-12-29 | 深圳市商汤科技有限公司 | Vehicle image retrieval method and apparatus |
| WO2023273437A1 (en) * | 2021-06-29 | 2023-01-05 | 上海商汤智能科技有限公司 | Image recognition method and apparatus, and device and storage medium |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108363715A (en) * | 2017-12-28 | 2018-08-03 | 中兴智能交通股份有限公司 | A kind of car plate picture management method and device |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5917928A (en) | 1997-07-14 | 1999-06-29 | Bes Systems, Inc. | System and method for automatically verifying identity of a subject |
| TWI286027B (en) * | 2003-09-10 | 2007-08-21 | Chunghwa Telecom Co Ltd | Integrated image-registration multiple lane free flow vehicle law enforcement system |
| TW200802137A (en) * | 2006-06-16 | 2008-01-01 | Univ Nat Chiao Tung | Serial-type license plate recognition system |
- 2009-01-23: TW application TW098103331A filed (patent TWI410879B, active)
- 2009-07-30: US application US12/512,575 filed (patent US8391559B2, active)
Patent Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6038342A (en) * | 1988-08-10 | 2000-03-14 | Caere Corporation | Optical character recognition method and apparatus |
| US5136658A (en) * | 1989-11-08 | 1992-08-04 | Kabushiki Kaisha Toshiba | Number plate image detecting apparatus |
| US5315664A (en) * | 1989-12-02 | 1994-05-24 | Ezel, Inc. | Number plate recognition system |
| US5651075A (en) * | 1993-12-01 | 1997-07-22 | Hughes Missile Systems Company | Automated license plate locator and reader including perspective distortion correction |
| US6185338B1 (en) * | 1996-03-26 | 2001-02-06 | Sharp Kabushiki Kaisha | Character recognition using candidate frames to determine character location |
| US6272244B1 (en) * | 1997-08-06 | 2001-08-07 | Nippon Telegraph And Telephone Corporation | Methods for extraction and recognition of pattern in an image method for image abnormality judging, and memory medium with image processing programs |
| US6553131B1 (en) * | 1999-09-15 | 2003-04-22 | Siemens Corporate Research, Inc. | License plate recognition with an intelligent camera |
| US6754369B1 (en) * | 2000-03-24 | 2004-06-22 | Fujitsu Limited | License plate reading apparatus and method |
| US20100272359A1 (en) * | 2007-11-20 | 2010-10-28 | Knut Tharald Fosseide | Method for resolving contradicting output data from an optical character recognition (ocr) system, wherein the output data comprises more than one recognition alternative for an image of a character |
Cited By (45)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9679383B2 (en) * | 2013-03-22 | 2017-06-13 | Casio Computer Co., Ltd. | Display control apparatus displaying image |
| US20140286619A1 (en) * | 2013-03-22 | 2014-09-25 | Casio Computer Co., Ltd. | Display control apparatus displaying image |
| US10176531B2 (en) | 2014-06-27 | 2019-01-08 | Blinker, Inc. | Method and apparatus for receiving an insurance quote from an image |
| US10192114B2 (en) | 2014-06-27 | 2019-01-29 | Blinker, Inc. | Method and apparatus for obtaining a vehicle history report from an image |
| US9589201B1 (en) | 2014-06-27 | 2017-03-07 | Blinker, Inc. | Method and apparatus for recovering a vehicle value from an image |
| US9594971B1 (en) | 2014-06-27 | 2017-03-14 | Blinker, Inc. | Method and apparatus for receiving listings of similar vehicles from an image |
| US9600733B1 (en) | 2014-06-27 | 2017-03-21 | Blinker, Inc. | Method and apparatus for receiving car parts data from an image |
| US9607236B1 (en) | 2014-06-27 | 2017-03-28 | Blinker, Inc. | Method and apparatus for providing loan verification from an image |
| US11436652B1 (en) | 2014-06-27 | 2022-09-06 | Blinker Inc. | System and method for electronic processing of vehicle transactions based on image detection of vehicle license plate |
| US9563814B1 (en) | 2014-06-27 | 2017-02-07 | Blinker, Inc. | Method and apparatus for recovering a vehicle identification number from an image |
| US9754171B1 (en) | 2014-06-27 | 2017-09-05 | Blinker, Inc. | Method and apparatus for receiving vehicle information from an image and posting the vehicle information to a website |
| US9760776B1 (en) | 2014-06-27 | 2017-09-12 | Blinker, Inc. | Method and apparatus for obtaining a vehicle history report from an image |
| US9773184B1 (en) | 2014-06-27 | 2017-09-26 | Blinker, Inc. | Method and apparatus for receiving a broadcast radio service offer from an image |
| US9779318B1 (en) | 2014-06-27 | 2017-10-03 | Blinker, Inc. | Method and apparatus for verifying vehicle ownership from an image |
| US9818154B1 (en) | 2014-06-27 | 2017-11-14 | Blinker, Inc. | System and method for electronic processing of vehicle transactions based on image detection of vehicle license plate |
| US9892337B1 (en) | 2014-06-27 | 2018-02-13 | Blinker, Inc. | Method and apparatus for receiving a refinancing offer from an image |
| US10885371B2 (en) | 2014-06-27 | 2021-01-05 | Blinker Inc. | Method and apparatus for verifying an object image in a captured optical image |
| US10867327B1 (en) | 2014-06-27 | 2020-12-15 | Blinker, Inc. | System and method for electronic processing of vehicle transactions based on image detection of vehicle license plate |
| US10163025B2 (en) | 2014-06-27 | 2018-12-25 | Blinker, Inc. | Method and apparatus for receiving a location of a vehicle service center from an image |
| US10192130B2 (en) | 2014-06-27 | 2019-01-29 | Blinker, Inc. | Method and apparatus for recovering a vehicle value from an image |
| US10169675B2 (en) | 2014-06-27 | 2019-01-01 | Blinker, Inc. | Method and apparatus for receiving listings of similar vehicles from an image |
| US10733471B1 (en) | 2014-06-27 | 2020-08-04 | Blinker, Inc. | Method and apparatus for receiving recall information from an image |
| US9589202B1 (en) | 2014-06-27 | 2017-03-07 | Blinker, Inc. | Method and apparatus for receiving an insurance quote from an image |
| US9558419B1 (en) | 2014-06-27 | 2017-01-31 | Blinker, Inc. | Method and apparatus for receiving a location of a vehicle service center from an image |
| US10163026B2 (en) | 2014-06-27 | 2018-12-25 | Blinker, Inc. | Method and apparatus for recovering a vehicle identification number from an image |
| US10204282B2 (en) | 2014-06-27 | 2019-02-12 | Blinker, Inc. | Method and apparatus for verifying vehicle ownership from an image |
| US10210396B2 (en) | 2014-06-27 | 2019-02-19 | Blinker Inc. | Method and apparatus for receiving vehicle information from an image and posting the vehicle information to a website |
| US10210416B2 (en) | 2014-06-27 | 2019-02-19 | Blinker, Inc. | Method and apparatus for receiving a broadcast radio service offer from an image |
| US10210417B2 (en) | 2014-06-27 | 2019-02-19 | Blinker, Inc. | Method and apparatus for receiving a refinancing offer from an image |
| US10242284B2 (en) | 2014-06-27 | 2019-03-26 | Blinker, Inc. | Method and apparatus for providing loan verification from an image |
| US10515285B2 (en) | 2014-06-27 | 2019-12-24 | Blinker, Inc. | Method and apparatus for blocking information from an image |
| US10540564B2 (en) | 2014-06-27 | 2020-01-21 | Blinker, Inc. | Method and apparatus for identifying vehicle information from an image |
| US10572758B1 (en) | 2014-06-27 | 2020-02-25 | Blinker, Inc. | Method and apparatus for receiving a financing offer from an image |
| US10579892B1 (en) | 2014-06-27 | 2020-03-03 | Blinker, Inc. | Method and apparatus for recovering license plate information from an image |
| CN106778735A (en) * | 2016-11-25 | 2017-05-31 | 北京大学深圳研究生院 | License plate recognition method and device |
| EP3605392A4 (en) * | 2017-03-31 | 2020-04-08 | Hangzhou Hikvision Digital Technology Co., Ltd. | LICENSE PLATE NUMBER MATCHING METHOD AND APPARATUS, AND CHARACTER INFORMATION MATCHING METHOD AND APPARATUS |
| CN108664957A (en) * | 2017-03-31 | 2018-10-16 | 杭州海康威视数字技术股份有限公司 | License plate number matching method and apparatus, and character information matching method and apparatus |
| US11093782B2 (en) * | 2017-03-31 | 2021-08-17 | Hangzhou Hikvision Digital Technology Co., Ltd. | Method for matching license plate number, and method and electronic device for matching character information |
| CN108681705A (en) * | 2018-05-15 | 2018-10-19 | 国网重庆市电力公司电力科学研究院 | Measuring equipment consistency checking method and system based on image recognition |
| CN109145901A (en) * | 2018-08-14 | 2019-01-04 | 腾讯科技(深圳)有限公司 | Item identification method, device, computer readable storage medium and computer equipment |
| CN111739307A (en) * | 2020-08-13 | 2020-10-02 | 深圳电目科技有限公司 | License plate recognition method and system for increasing license plate recognition rate |
| CN112199998A (en) * | 2020-09-09 | 2021-01-08 | 浙江大华技术股份有限公司 | Face recognition method, device, equipment and medium |
| US20220100999A1 (en) * | 2020-09-30 | 2022-03-31 | Rekor Systems, Inc. | Systems and methods for suspect vehicle identification in traffic monitoring |
| WO2022267273A1 (en) * | 2021-06-24 | 2022-12-29 | 深圳市商汤科技有限公司 | Vehicle image retrieval method and apparatus |
| WO2023273437A1 (en) * | 2021-06-29 | 2023-01-05 | 上海商汤智能科技有限公司 | Image recognition method and apparatus, and device and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| US8391559B2 (en) | 2013-03-05 |
| TW201028935A (en) | 2010-08-01 |
| TWI410879B (en) | 2013-10-01 |
Similar Documents
| Publication | Title |
|---|---|
| US8391559B2 (en) | Method and system for image identification and identification result output |
| US8391560B2 (en) | Method and system for image identification and identification result output |
| US9286533B2 (en) | Method for image recombination of a plurality of images and image identification and system for image acquiring and identification |
| Comelli et al. | Optical recognition of motor vehicle license plates |
| US9405988B2 (en) | License plate recognition |
| CN101706873B (en) | Method and device for recognizing digit class limit mark |
| CN103903005B (en) | License plate image identification system and method |
| CN106874901B (en) | Driving license identification method and device |
| Pinthong et al. | License plate tracking based on template matching technique |
| CN105139011B (en) | Vehicle identification method and device based on a marker image |
| US20160321510A1 (en) | Apparatus and method for detecting bar-type traffic sign in traffic sign recognition system |
| CN104182769A (en) | License plate detection method and system |
| Thaiparnit et al. | Tracking vehicles system based on license plate recognition |
| CN101882219B (en) | Image identification and output method and system thereof |
| CN104268513B (en) | Method and device for acquiring road guidance data |
| CN102043941B (en) | Dynamic real-time relative relationship recognition method and system |
| CN101814127B (en) | Image recognition and output method and system thereof |
| CN107264469A (en) | Vehicle anti-theft system based on face recognition |
| JP2023174008A (en) | Vehicle information acquisition system, vehicle information acquisition method, and computer program |
| CN101882224B (en) | Recombination of multiple images and recognition method and image capture and recognition system |
| CN106023209A (en) | Blind detection method for spliced image based on background noise |
| CN111179452A (en) | ETC channel-based bus fee deduction system and method |
| Mohammad et al. | An Efficient Method for Vehicle theft and Parking rule Violators Detection using Automatic Number Plate Recognition |
| CN111881843B (en) | Face detection-based taxi passenger counting method |
| CN106127190A (en) | License plate recognition algorithm based on image T-node detection |
Legal Events
| Code | Title | Description |
|---|---|---|
| AS | Assignment | Owner name: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE, TAIWAN; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TSAI, YA-HUI;HUANG, KUO-TANG;LIN, YU-TING;AND OTHERS;SIGNING DATES FROM 20090701 TO 20090703;REEL/FRAME:023028/0533 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| FPAY | Fee payment | Year of fee payment: 4 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 8 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 12 |