
CN106056076B - A Method for Determining Illumination Invariants of Complex Illuminated Face Images - Google Patents


Info

Publication number: CN106056076B
Application number: CN201610371321.1A
Authority: CN (China)
Prior art keywords: illumination, image, invariant, face image, model
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN106056076A
Inventors: 程勇, 韩袁琛, 曹雪虹, 焦良葆
Current Assignee: Nanjing Institute of Technology
Original Assignee: Nanjing Institute of Technology
Priority and filing date: 2016-05-30
Publication of application CN106056076A: 2016-10-26
Grant of CN106056076B: 2019-06-14

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for determining the illumination invariant of a face image captured under complex illumination. First, the classical visible-light imaging model, the Lambertian model, is studied and the imaging principle of objects is analyzed, providing the theoretical basis for establishing a novel illumination estimation model. Second, in designing the illumination estimation model, it is considered that under complex illumination the lighting of a face image can be divided into three kinds of regions (unoccluded, occluded, and transitional), which are treated in two classes. Third, owing to the correlation of illumination between adjacent pixels, the two classes of illumination estimates defined earlier are fused to obtain a single final result. Finally, the illumination invariant of the face image is derived from the simple classical Lambertian model. The method of the present invention effectively eliminates the illumination differences of the original image, and the numerical range of the proposed illumination invariant lies between 0 and 1, consistent with the numerical range of the intrinsic face.

Description

A method for determining the illumination invariant of a face image under complex illumination
Technical field
The present invention relates to a method for determining the illumination invariant of a face image under complex illumination, and belongs to the field of face recognition technology.
Background art
In recent years, in order to effectively eliminate the influence of complex illumination on face recognition performance, scholars at home and abroad have proposed many methods. Among them, extracting an illumination invariant from a face image under complex illumination is a classical and effective approach. In the past, in order to separate the illumination invariant and the imaging source from the multiplicative model, it was first assumed that the illumination invariant varies quickly while the imaging source varies slowly; low-pass filtering was then applied to estimate the illumination and extract the illumination invariant indirectly. Such methods fall into two modes: direct and indirect extraction of the illumination invariant. The direct mode extracts high-frequency features from the face image as the illumination invariant; effective high-frequency features mainly include gradient features, texture features, and transform-domain high-frequency features. The indirect mode first estimates the illumination from the face image, then separates the illumination from the intrinsic face to extract the illumination invariant; effective illumination estimation methods mainly include Gaussian filtering, weighted anisotropic smoothing filtering, logarithmic total variation, and transform-domain smoothing filtering.
Although these methods have made certain progress in face recognition under complex illumination, they still have limitations. On the one hand, the assumption that the illumination-invariant features of a face vary quickly is somewhat narrow: over most regions of a face the illumination-invariant features, such as eyebrows, pupils, moles, and skin, vary slowly, and only between regions do they vary quickly. On the other hand, current low-pass filtering, smoothing filtering, and denoising models estimate the illumination from the low-frequency information of the image (a blurred image), which contains too much intrinsic face information. They only satisfy the slowly varying property of illumination, ignore the characteristics of the image formation model, and are not directly tied to the image illumination.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of the existing technologies and to provide a method for determining the illumination invariant of a face image under complex illumination. On the basis of studying the classical Lambertian model, the method no longer makes assumptions about the frequency characteristics of the intrinsic face; instead, starting from the imaging principle of the image, it estimates the illumination from the face image more accurately and extracts a more robust illumination invariant.
In order to solve the above technical problem, the present invention provides a method for determining the illumination invariant of a face image under complex illumination, comprising the following steps:
1) determining a complex-illumination face image model by analyzing the Lambertian model;
2) designing an illumination estimation model and solving for the image illumination of the face image;
3) computing the face illumination invariant from the complex-illumination face image model of step 1) and the image illumination of the face image solved in step 2).
In the aforementioned step 1), the complex-illumination face image model is:
F(x, y) = I(x, y) · R(x, y)   (2)
where F(x, y) is the face image, R(x, y) denotes the face illumination invariant, and I(x, y) denotes the image illumination of the face image.
In the aforementioned step 2), the illumination estimation model is designed and the image illumination of the face image is solved as follows:
2-1) Illumination estimation Model I and illumination estimation Model II are designed for the slowly varying illumination regions and the rapidly varying illumination regions, respectively; Model I is defined by equation (3) and Model II by equation (4), together with
F_a(x, y) = I_m(x, y) - F(x, y)   (5)
where I_m(x, y) is the image illumination under illumination estimation Model I, I_s(x, y) is the image illumination under illumination estimation Model II, and o_{i,j} are the neighbors of point (x, y) within the Ω1 neighborhood; max(·) and min(·) respectively denote taking the maximum and minimum of a set of data;
2-2) I_m(x, y) and I_s(x, y) are computed, and the fused illumination estimate I_ms(x, y) in the face image F(x, y), obtained by illumination fusion, is defined by equation (6), together with
t = mean(F_g(x, y)) + k × (max(F_g(x, y)) - mean(F_g(x, y)))   (7)
F_g(x, y) = F_a(x, y) / I_m(x, y)   (8)
where mean(·) denotes taking the average of a set of data and k is an adjustable factor;
2-3) An adaptive anisotropic Gaussian filter is designed to establish the correlation between the image illumination of adjacent pixels, and the final image illumination I(x, y) is defined by equation (9), where G(x, y, Ω2) is a Gaussian kernel with standard deviation ρ and convolution kernel scale Ω2, P(x, y, Ω2) is the anisotropic template corresponding to I_ms(x, y), and I_ms(i, j) are the pixels of I_ms(x, y) within the Ω2 neighborhood.
The aforementioned adjustable factor k is set to 0.6.
The aforementioned standard deviation ρ is set to 1.
The aforementioned Ω1 and Ω2 neighborhood windows are set to 3 × 3.
The aforementioned face illumination invariant is expressed as:
R(x, y) = F(x, y) / I(x, y)   (11)
where F(x, y) is the face image, R(x, y) denotes the face illumination invariant, and I(x, y) denotes the image illumination of the face image.
Advantageous effects of the invention: the method of the present invention effectively eliminates the illumination differences of the original image. Moreover, the numerical range of the proposed illumination invariant lies between 0 and 1, consistent with the numerical range of the intrinsic face.
Brief description of the drawings
Fig. 1 shows the illumination invariants extracted from the Yale B+ face database in the embodiment of the present invention.
Specific embodiment
The invention is further described below in conjunction with the accompanying drawings. The following embodiments are only used to clearly illustrate the technical solution of the present invention and are not intended to limit its protection scope.
The invention mainly comprises two parts: the establishment of the illumination estimation model and the extraction of the illumination invariant. First, the classical visible-light imaging model, the Lambertian model, is studied and the imaging principle of objects is analyzed, providing the theoretical basis for establishing a novel illumination estimation model. Then, in designing the illumination estimation model, it is considered that under complex illumination the lighting of a face image can be divided into three kinds of regions (unoccluded, occluded, and transitional), which are discussed in two classes. Next, owing to the correlation of illumination between adjacent pixels, the two classes of illumination estimates defined earlier are fused to obtain a final result. Finally, the illumination invariant of the face image is derived from the simple classical Lambertian model. The method specifically comprises the following steps:
1. Analyzing the Lambertian model:
An image is a measurement of the light intensity formed on an image acquisition sensor by reflection from the surface of a target object. As a classical visible-light imaging model, the Lambertian model is widely used in face recognition under complex illumination. Equation (1) gives the Lambertian model, which describes the imaging principle of a target object:
G(x, y) = ρ(x, y) n(x, y)^T s   (1)
where ρ(x, y) and n(x, y)^T respectively denote the reflectance and the normal vector of the target object's surface, s denotes the imaging source, and G(x, y) denotes the image of the target object.
The reflectance and normal vector of an object's surface are independent of the imaging source; they are intrinsic characteristics of the object (its illumination invariant). Therefore, the imaging principle of a target object can be described by the simple Lambertian model, i.e., a face image F(x, y) can be expressed as:
F(x, y) = I(x, y) · R(x, y)   (2)
where R(x, y) denotes the intrinsic face (the illumination invariant), whose numerical range lies in [0, 1], and I(x, y) denotes the imaging source (image illumination) of the face image.
The Lambertian model shows that: a face image is the product of the intrinsic face and the imaging source; the numerical range of the intrinsic face lies in [0, 1]; the intensity of the face image is lower than the intensity of the imaging source; and the maximum value of the face image is closer to the imaging source than any previous illumination estimate.
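These relations are easy to check numerically. The following sketch (Python with NumPy; the synthetic arrays are illustrative and not from the patent) builds a reflectance R in [0, 1] and a smooth illumination field I, forms F = I · R per equation (2), and confirms that the image never exceeds the illumination, which is what makes local maxima of F useful for estimating I:

```python
import numpy as np

rng = np.random.default_rng(0)

# Intrinsic face (illumination invariant): values in [0, 1] per the text.
R = rng.uniform(0.0, 1.0, size=(100, 100))

# Slowly varying imaging source: a smooth horizontal gradient here.
I = np.tile(np.linspace(50.0, 200.0, 100), (100, 1))

# Lambertian model, equation (2): the image is the pixel-wise product.
F = I * R

# Since R <= 1, image intensity never exceeds the illumination, so local
# maxima of F lie close to I -- the motivation behind Model I.
assert np.all(F <= I)
```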
2. Designing the illumination estimation model:
The lighting of a face image can be divided into three parts: unoccluded regions, occluded regions, and transitional regions (between unoccluded and occluded regions). The illumination of these regions exhibits the following characteristics: in unoccluded regions the illumination is brighter and varies slowly; in occluded regions the illumination is darker and varies slowly; in transitional regions the illumination changes from bright to dark and varies quickly. Therefore, illumination estimation Models I and II are designed separately for the slowly varying and the rapidly varying illumination regions:
Illumination estimation Model I is defined by equation (3), and illumination estimation Model II by equation (4), together with
F_a(x, y) = I_m(x, y) - F(x, y)   (5)
where o_{i,j} are the neighbors of point (x, y) within the Ω1 neighborhood, and max(·), min(·), and mean(·) respectively denote taking the maximum, minimum, and average of a set of data.
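Because equations (3) and (4) are not reproduced above, any implementation has to assume their form. The sketch below takes Model I to be a local-maximum filter over the Ω1 neighborhood (suggested by the use of max(·) and by the observation that the maximum of the face image best approximates the source), and takes Model II to be a minimum-based correction built from F_a of equation (5); both are stand-ins, not the patented formulas:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def estimate_illumination(F, window=3):
    """Hedged reconstruction of Models I and II; exact formulas are unknown."""
    # Assumed Model I: local maximum over the 3x3 neighborhood Omega_1,
    # exploiting F <= I and the slow variation of I in (un)occluded regions.
    I_m = maximum_filter(F, size=window)

    # Equation (5) as printed: F_a(x, y) = I_m(x, y) - F(x, y).
    F_a = I_m - F

    # Assumed Model II: a minimum-based estimate for the rapidly varying
    # transition regions -- purely a stand-in for the missing equation (4).
    I_s = F + minimum_filter(F_a, size=window)

    return I_m, I_s, F_a
```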
After I_m(x, y) and I_s(x, y) have been computed from the face image F(x, y), the illumination estimate is improved by illumination fusion. In this process, the occlusion edges of the light are distinguished from the other regions by image segmentation, and the fused illumination estimate I_ms(x, y) in the face image F(x, y) is defined by equation (6), together with
t = mean(F_g(x, y)) + k × (max(F_g(x, y)) - mean(F_g(x, y)))   (7)
F_g(x, y) = F_a(x, y) / I_m(x, y)   (8)
where mean(·) denotes taking the average of a set of data and k ∈ [0, 1] is an adjustable factor.
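Equations (7) and (8) are given in full, but equation (6) itself is missing, so the fusion below is sketched as a simple threshold switch: pixels whose relative correction F_g exceeds t are treated as transition pixels and take the Model II estimate, while the rest keep the Model I estimate. The switching rule is an assumption:

```python
import numpy as np

def fuse_illumination(I_m, I_s, F_a, k=0.6):
    # Equation (8): F_g(x, y) = F_a(x, y) / I_m(x, y); the epsilon guard
    # against division by zero is an implementation detail.
    F_g = F_a / np.maximum(I_m, 1e-6)

    # Equation (7): t = mean(F_g) + k * (max(F_g) - mean(F_g)), k in [0, 1].
    t = F_g.mean() + k * (F_g.max() - F_g.mean())

    # Assumed form of equation (6): Model II where the relative correction is
    # large (illumination transitions), Model I elsewhere.
    return np.where(F_g > t, I_s, I_m)
```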
Since the illumination of neighboring pixels should be strongly correlated, an adaptive anisotropic Gaussian filter is designed to establish the correlation between the illumination of adjacent pixels, and the final image illumination estimate I(x, y) is defined by equation (9), where G(x, y, Ω2) is a Gaussian kernel with standard deviation ρ and convolution kernel scale Ω2, P(x, y, Ω2) is the anisotropic template corresponding to I_ms(x, y), and I_ms(i, j) are the pixels of I_ms(x, y) within the Ω2 neighborhood. In the present invention, the adjustable factor k and the standard deviation ρ are set to 0.6 and 1, respectively, and the Ω1 and Ω2 neighborhood windows are set to 3 × 3.
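The anisotropic template P of equation (9) is likewise not reproduced, so the sketch below substitutes a plain isotropic Gaussian smoothing with the stated parameters (ρ = 1, 3 × 3 window) to model the correlation between the illumination of neighboring pixels; it is a stand-in for, not an implementation of, the adaptive filter:

```python
from scipy.ndimage import gaussian_filter

def smooth_illumination(I_ms, rho=1.0):
    # Isotropic stand-in for equation (9): Gaussian kernel with standard
    # deviation rho; truncate=1.0 limits the footprint to 3x3, matching
    # the stated Omega_2 window.
    return gaussian_filter(I_ms, sigma=rho, truncate=1.0)
```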
3. Deriving the illumination invariant:
After the illumination has been estimated from the face image, the illumination invariant of the face image can be derived according to the Lambertian model described by equation (2). The illumination invariant of the face image F(x, y) can be expressed as:
R(x, y) = F(x, y) / I(x, y)   (11)
Experimental verification shows that the method of the present invention effectively eliminates the illumination differences of the original image, and that the numerical range of the illumination invariant R lies between 0 and 1, consistent with the numerical range of the intrinsic face.
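Equation (11) is a pixel-wise division, which closes the pipeline when combined with the sketches above (again, the epsilon guard is an implementation detail, not from the patent):

```python
import numpy as np

def illumination_invariant(F, I):
    # Equation (11): R(x, y) = F(x, y) / I(x, y); R should land in [0, 1].
    return F / np.maximum(I, 1e-6)

# Chaining the (partly assumed) steps end to end:
# I_m, I_s, F_a = estimate_illumination(F)
# R = illumination_invariant(F, smooth_illumination(fuse_illumination(I_m, I_s, F_a)))
```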
Embodiment:
To verify the effectiveness of the method of the present invention, Yale B and the extended Yale B are combined into the Yale B+ face database for the experiments. The complex illumination patterns of this database remain a challenging problem for illumination-robust face recognition algorithms. In the identification stage, principal component analysis is used for feature extraction, and a nearest-neighbor classifier based on Euclidean distance is used for classification (a minimal sketch of this protocol is given after the experimental setup below). The proposed algorithm is compared against the current advanced algorithms MSR, Gradientfaces, and S&L, and the corresponding recognition results are reported.
The Yale B+ face database contains 38 subjects under 64 illumination patterns, totaling 2432 images. All images are resized to 100 × 100. According to the angle between the light source and the central axis of the face, the database is divided into 5 subsets. Fig. 1 shows, for one subject, 5 images (one from each subset) together with the illumination invariants extracted by the present invention; it can be seen that the invention effectively eliminates the influence of different illuminations on the intrinsic face.
First, each of the 5 subsets is in turn selected as the training set, with the other four subsets used as the test set; Tables 1-5 give the experimental results of the different algorithms. The recognition rate of the proposed algorithm is higher than those of the other algorithms, and is clearly superior when Set 5 is used as the training set. Then, to verify the efficiency of the proposed algorithm, one image per subject is selected at random as the training set (38 face images in total), with the remaining images as the test set (2394 face images in total); the experiment is repeated 60 times, and the average recognition rates and standard deviations of the different algorithms are shown in Table 6. The average recognition rate of the proposed algorithm is markedly higher than those of the other algorithms, and its standard deviation is the smallest.
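The recognition protocol itself (PCA features plus a 1-nearest-neighbor classifier under Euclidean distance) is standard; a minimal scikit-learn sketch follows, assuming the invariant images are flattened into row vectors. The variable names and the component count are illustrative, not from the patent:

```python
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def recognize(X_train, y_train, X_test, n_components=30):
    # PCA for feature extraction; n_components must not exceed the number
    # of training samples (only 38 in the random-selection protocol).
    pca = PCA(n_components=n_components).fit(X_train)

    # Nearest-neighbor classification based on Euclidean distance.
    clf = KNeighborsClassifier(n_neighbors=1, metric="euclidean")
    clf.fit(pca.transform(X_train), y_train)
    return clf.predict(pca.transform(X_test))
```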
Table 1: Recognition rates (%) of different algorithms with Set 1 as the training set.
Method Set 2 Set 3 Set 4 Set 5 Entire set
MSR 99.78 95.49 94.52 94.04 95.71
Gradientfaces 100.00 98.87 87.28 94.74 95.29
S&L 100.00 97.56 95.83 93.21 96.26
The method of the present invention 100.00 99.81 98.90 98.06 99.08
Table 2: Recognition rates (%) of different algorithms with Set 2 as the training set.
Method Set 1 Set 3 Set 4 Set 5 Entire set
MSR 97.74 94.17 93.64 90.31 93.12
Gradientfaces 99.25 95.30 92.54 93.91 94.69
S&L 98.12 96.62 96.05 90.58 94.48
The method of the present invention 100.00 98.12 99.56 98.02 98.74
Table 3: Recognition rates (%) of different algorithms with Set 3 as the training set.
Method Set 1 Set 2 Set 4 Set 5 Entire set
MSR 99.62 98.25 96.27 97.65 97.74
Gradientfaces 100.00 100.00 98.03 99.03 99.16
S&L 99.25 98.90 95.18 97.65 97.58
The method of the present invention 99.25 98.90 99.34 99.31 99.21
Table 4: Recognition rates (%) of different algorithms with Set 4 as the training set.
Method Set 1 Set 2 Set 3 Set 5 Entire set
MSR 95.87 96.71 94.17 99.31 96.86
Gradientfaces 100.00 99.56 97.37 99.72 99..9
S&L 99.25 98.68 94.93 99.45 98.03
The method of the present invention 98.50 100.00 99.25 99.31 99.34
Table 5: Recognition rates (%) of different algorithms with Set 5 as the training set.
Method Set 1 Set 2 Set 3 Set 4 Entire set
MSR 96.62 91.45 92.67 99.34 94.74
Gradientfaces 96.24 91.67 90.23 99.56 94.04
S&L 98.50 88.38 89.29 98.90 93.04
The method of the present invention 100.00 99.78 100.00 100.00 99.94
Table 6: Average recognition rates (%) of different algorithms with one randomly selected image per subject as the training set.
Method Set 1 Set 2 Set 3 Set 4 Set 5 Entire set
MSR 84.19 81.36 74.37 74.21 80.20 78.45
Gradientfaces 87.65 80.16 73.94 82.18 90.19 82.95
S&L 85.63 79.09 70.25 71.76 79.81 76.73
The method of the present invention 94.44 92.35 91.48 93.01 95.06 93.32
The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and variations without departing from the technical principles of the present invention, and these improvements and variations should also be regarded as falling within the protection scope of the present invention.

Claims (6)

1. A method for determining the illumination invariant of a face image under complex illumination, characterized by comprising the following steps:
1) determining a complex-illumination face image model by analyzing the Lambertian model;
2) designing an illumination estimation model and solving for the image illumination of the face image, the specific process being as follows:
2-1) designing illumination estimation Model I and illumination estimation Model II for the slowly varying illumination regions and the rapidly varying illumination regions, respectively, wherein Model I is defined by equation (3) and Model II by equation (4), together with
F_a(x, y) = I_m(x, y) - F(x, y)   (5)
where I_m(x, y) is the image illumination under illumination estimation Model I, I_s(x, y) is the image illumination under illumination estimation Model II, and o_{i,j} are the neighbors of point (x, y) within the Ω1 neighborhood; max(·) and min(·) respectively denote taking the maximum and minimum of a set of data;
2-2) computing I_m(x, y) and I_s(x, y), and defining, by illumination fusion, the fused illumination estimate I_ms(x, y) in the face image F(x, y) by equation (6), together with
t = mean(F_g(x, y)) + k × (max(F_g(x, y)) - mean(F_g(x, y)))   (7)
F_g(x, y) = F_a(x, y) / I_m(x, y)   (8)
where mean(·) denotes taking the average of a set of data and k is an adjustable factor;
2-3) designing an adaptive anisotropic Gaussian filter to establish the correlation between the image illumination of adjacent pixels, and defining the final image illumination I(x, y) by equation (9), where G(x, y, Ω2) is a Gaussian kernel with standard deviation ρ and convolution kernel scale Ω2, P(x, y, Ω2) is the anisotropic template corresponding to I_ms(x, y), and I_ms(i, j) are the pixels of I_ms(x, y) within the Ω2 neighborhood;
3) computing the face illumination invariant from the complex-illumination face image model of step 1) and the image illumination of the face image solved in step 2).
2. The method for determining the illumination invariant of a face image under complex illumination according to claim 1, characterized in that, in said step 1), the complex-illumination face image model is:
F(x, y) = I(x, y) · R(x, y)   (2)
where F(x, y) is the face image, R(x, y) denotes the face illumination invariant, and I(x, y) denotes the image illumination of the face image.
3. The method for determining the illumination invariant of a face image under complex illumination according to claim 1, characterized in that said adjustable factor k is set to 0.6.
4. The method for determining the illumination invariant of a face image under complex illumination according to claim 1, characterized in that said standard deviation ρ is set to 1.
5. The method for determining the illumination invariant of a face image under complex illumination according to claim 1, characterized in that said Ω1 and Ω2 neighborhood windows are set to 3 × 3.
6. The method for determining the illumination invariant of a face image under complex illumination according to claim 1, characterized in that said face illumination invariant is expressed as:
R(x, y) = F(x, y) / I(x, y)   (11)
where F(x, y) is the face image, R(x, y) denotes the face illumination invariant, and I(x, y) denotes the image illumination of the face image.
CN201610371321.1A 2016-05-30 2016-05-30 A Method for Determining Illumination Invariants of Complex Illuminated Face Images Active CN106056076B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610371321.1A CN106056076B (en) 2016-05-30 2016-05-30 A Method for Determining Illumination Invariants of Complex Illuminated Face Images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610371321.1A CN106056076B (en) 2016-05-30 2016-05-30 A Method for Determining Illumination Invariants of Complex Illuminated Face Images

Publications (2)

Publication Number Publication Date
CN106056076A CN106056076A (en) 2016-10-26
CN106056076B (en) 2019-06-14

Family

ID=57171435

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610371321.1A Active CN106056076B (en) 2016-05-30 2016-05-30 A Method for Determining Illumination Invariants of Complex Illuminated Face Images

Country Status (1)

Country Link
CN (1) CN106056076B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107239729B (en) * 2017-04-10 2020-09-01 南京工程学院 Illumination face recognition method based on illumination estimation
CN107451591A (en) * 2017-06-27 2017-12-08 重庆三峡学院 A face illumination-invariant feature extraction method using Wallis operators
CN108335315A (en) * 2017-12-28 2018-07-27 国网北京市电力公司 Method and apparatus for determining illumination variation regions

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2005365A2 (en) * 2006-04-13 2008-12-24 Tandent Vision Science, Inc. Method and system for separating illumination and reflectance using a log color space
EP2580740A2 (en) * 2010-06-10 2013-04-17 Tata Consultancy Services Limited An illumination invariant and robust apparatus and method for detecting and recognizing various traffic signs
CN103530634A (en) * 2013-10-10 2014-01-22 中国科学院深圳先进技术研究院 Face characteristic extraction method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8175390B2 (en) * 2008-03-28 2012-05-08 Tandent Vision Science, Inc. System and method for illumination invariant image segmentation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2005365A2 (en) * 2006-04-13 2008-12-24 Tandent Vision Science, Inc. Method and system for separating illumination and reflectance using a log color space
EP2580740A2 (en) * 2010-06-10 2013-04-17 Tata Consultancy Services Limited An illumination invariant and robust apparatus and method for detecting and recognizing various traffic signs
CN103530634A (en) * 2013-10-10 2014-01-22 中国科学院深圳先进技术研究院 Face characteristic extraction method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on extraction methods of illumination-invariant feature images for face authentication; Kuang Ting; 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology); 2014-02-15 (No. 2); pp. 12-13 and abstract

Also Published As

Publication number Publication date
CN106056076A (en) 2016-10-26

Similar Documents

Publication Publication Date Title
Sazak et al. The multiscale bowler-hat transform for blood vessel enhancement in retinal images
Mittapalli et al. Segmentation of optic disk and optic cup from digital fundus images for the assessment of glaucoma
Lam et al. General retinal vessel segmentation using regularization-based multiconcavity modeling
CN106407917B Retinal vessel extraction method and system based on dynamic multi-scale distribution
CN101317183B (en) Method for locating pixels representing irises in an acquired image of an eye
CN101359365B (en) A Method of Iris Location Based on Maximum Inter-class Variance and Gray Level Information
Zhang et al. Multi-focus image fusion algorithm based on focused region extraction
Liu et al. Detecting wide lines using isotropic nonlinear filtering
CN106651888B (en) Colour eye fundus image optic cup dividing method based on multi-feature fusion
CN101599174A (en) A Level Set Method for Contour Extraction of Medical Ultrasound Image Regions Based on Edge and Statistical Features
JP2007188504A (en) Method for filtering pixel intensity in image
CN106778499B (en) Method for rapidly positioning human iris in iris acquisition process
Zhang et al. Level set evolution driven by optimized area energy term for image segmentation
CN106203375A A pupil positioning method based on face and human-eye detection in facial images
CN106355599A (en) Non-fluorescent eye fundus image based automatic segmentation method for retinal blood vessels
CN103955949A (en) Moving target detection method based on Mean-shift algorithm
CN106056076B (en) A Method for Determining Illumination Invariants of Complex Illuminated Face Images
CN109165551A An expression recognition method adaptively weighting and fusing salient structure tensors and LBP features
Zhong et al. Filterable sample consensus based on angle variance for pupil segmentation
CN106372593B (en) Optic disk area positioning method based on vascular convergence
CN108596928A (en) Based on the noise image edge detection method for improving Gauss-Laplace operator
Zebari et al. Thresholding-based approach for segmentation of melanocytic skin lesion in dermoscopic images
Zhou et al. A novel approach for red lesions detection using superpixel multi-feature classification in color fundus images
Chen et al. A computational efficient iris extraction approach in unconstrained environments
Ahmed et al. Retina based biometric authentication using phase congruency

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant