TWI684920B - Headlight state analysis method, headlight state analysis system, and non-transitory computer readable media - Google Patents
Headlight state analysis method, headlight state analysis system, and non-transitory computer readable media
- Publication number
- TWI684920B (application TW107143736A)
- Authority
- TW
- Taiwan
- Prior art keywords
- interest
- images
- image
- neural network
- regions
- Prior art date: 2018-12-05
Landscapes
- Image Analysis (AREA)
Abstract
Description
The present disclosure relates to a vehicle light state analysis method, a vehicle light state analysis system, and a non-transitory computer-readable medium, and in particular to a vehicle light state analysis method, vehicle light state analysis system, and non-transitory computer-readable medium that use a convolutional neural network model and a long short-term memory (LSTM) temporal feature extraction model.
Existing vehicle light detection and recognition techniques are unstable because they are easily affected by lighting, shadow, and weather conditions, and their recognition success rate is low. In addition, existing techniques cannot handle daytime and nighttime light recognition with a single processing pipeline; multiple parameter thresholds must be set separately for day and night environments, which makes the design highly complex.
One aspect of the present disclosure provides a vehicle light state analysis method. The method includes the following steps: obtaining a plurality of images that are consecutive in time; obtaining a plurality of feature vectors of the images according to a first convolutional neural network model; and determining the vehicle light combination corresponding to the images according to the feature vectors and a long short-term memory temporal feature extraction model.
Another aspect of the present disclosure provides a vehicle light state analysis system. The system includes a storage device and a processor. The storage device stores a first convolutional neural network model, a second convolutional neural network model, and a long short-term memory temporal feature extraction model. The processor is electrically connected to the storage device. The processor obtains a plurality of regions of interest from a plurality of images, obtains a plurality of feature vectors of the images according to the first convolutional neural network model, and determines the vehicle light combination corresponding to the images according to the feature vectors and the long short-term memory temporal feature extraction model.
Another aspect of the present disclosure provides a non-transitory computer-readable medium containing at least one instruction program, which is executed by a processor to perform a vehicle light state analysis method. The method includes: obtaining a plurality of regions of interest from a plurality of images; obtaining a plurality of feature vectors of the images according to a first convolutional neural network model; and determining the vehicle light combination corresponding to the images according to the feature vectors and a long short-term memory temporal feature extraction model.
Therefore, according to the technical aspects of the present disclosure, the embodiments provide a vehicle light state analysis method, a vehicle light state analysis system, and a non-transitory computer-readable medium that use a convolutional neural network model and a long short-term memory temporal feature extraction model. Compared with the prior art, no multiple parameter thresholds need to be set, and the approach adapts to various weather conditions.
100‧‧‧vehicle light state analysis system
110‧‧‧storage device
130‧‧‧processor
DB1 to DB4‧‧‧models
V‧‧‧image
C1 to C5‧‧‧convolutional layers
P1 to P4‧‧‧pooling layers
FC1, FC2‧‧‧fully connected layers
Output‧‧‧output feature vector
x_{t-1}, x_t, x_{t+1}‧‧‧feature vectors
c_{t-1}, c_t, c_{t+1}‧‧‧memory features
f_{t-1}, f_t, f_{t+1}‧‧‧spatial state features
h_{t-1}, h_t, h_{t+1}‧‧‧time feature vectors
200‧‧‧vehicle light state analysis method
S210 to S250‧‧‧steps
To make the above and other objects, features, advantages, and embodiments of the present invention more comprehensible, the accompanying drawings are described as follows: FIG. 1 is a schematic diagram of a vehicle light state analysis system according to some embodiments of the present disclosure; FIG. 2 is a table of vehicle light state combinations according to some embodiments of the present disclosure; FIG. 3 is a flowchart of a vehicle light state analysis method according to some embodiments of the present disclosure; FIG. 4 is a schematic diagram of the first convolutional neural network model according to some embodiments of the present disclosure; and FIG. 5 is a schematic diagram of the long short-term memory temporal feature extraction model according to some embodiments of the present disclosure.
The following disclosure provides many different embodiments or examples for implementing different features of the present invention. The elements and arrangements in the specific examples are used to simplify the present disclosure in the following discussion. Any example discussed is for illustrative purposes only and does not limit the scope or meaning of the invention or its examples in any way. In addition, the present disclosure may repeat reference numerals and/or letters in different examples; such repetition is for simplicity and clarity and does not by itself specify a relationship between the different embodiments and/or configurations discussed below.
Please refer to FIG. 1. FIG. 1 is a schematic diagram of a vehicle light state analysis system 100 according to some embodiments of the present disclosure. As shown in FIG. 1, the vehicle light state analysis system 100 includes a storage device 110 and a processor 130. The processor 130 is electrically connected to the storage device 110. The storage device 110 stores a first convolutional neural network model DB1, a second convolutional neural network model DB2, a long short-term memory temporal feature extraction model DB3, and an object tracking model DB4.
In the embodiments of the present invention, the processor 130 may be implemented as an integrated circuit such as a microcontroller, a microprocessor, a digital signal processor, an application-specific integrated circuit (ASIC), a logic circuit, other similar elements, or a combination of the above. The storage device 110 may be implemented as a memory, a hard disk, a flash drive, a memory card, and so on. The detailed operation of the vehicle light state analysis system 100 is described below with reference to FIG. 2 through FIG. 5.
Please refer to FIG. 2. FIG. 2 is a table of vehicle light state combinations according to some embodiments of the present disclosure. In some embodiments, the second convolutional neural network model DB2 is a model built from multiple images that have been classified into multiple combinations. For example, as shown in FIG. 2, the vehicle light states include sixteen combinations G1 to G16. For each of the combinations G1 to G16, deep learning is performed on the images classified into that group to optimize the second convolutional neural network model DB2. The vehicle light state combinations G1 to G16 shown in FIG. 2 are for illustration only, and the embodiments of the present disclosure are not limited thereto.
Please refer to FIG. 3. FIG. 3 is a flowchart of a vehicle light state analysis method 200 according to some embodiments of the present disclosure. In some embodiments, the vehicle light state analysis method 200 includes steps S210 to S250.
In step S210, a plurality of images that are consecutive in time are obtained. In some embodiments, step S210 may be performed by an input/output device (not shown) such as an image capture device or a camera. At this point, n = 0, where n is a variable that records the number of images for which steps S215 to S245 have been performed. In some embodiments, the vehicle light state analysis method 200 uses M images to determine which of the combinations G1 to G16 the vehicle light state belongs to. For example, M may be 150; that is, the method 200 determines which of the combinations G1 to G16 the vehicle light state belongs to based on 150 images, although the present disclosure is not limited thereto.
In step S215, an object is obtained from the image. In some embodiments, step S215 may be performed by the processor 130 in FIG. 1. For example, the processor 130 executes an object recognition module (not shown) stored in the storage device 110 to obtain the object in the image. In some embodiments, once the processor 130 obtains the object from the image, the processor 130 performs step S220. If the processor cannot obtain an object from the image, the processor repeats step S215 until an object can be obtained. In detail, if the processor 130 cannot obtain an object from the first image, the processor performs step S215 again and determines whether an object can be obtained from the second image; if the processor 130 can obtain an object from the first image, the processor then performs step S220.
In step S220, it is determined whether the object is an already-detected object. In some embodiments, step S220 may be performed by the processor 130 in FIG. 1. If the object is determined not to be an already-detected object, step S222 is performed. If the object is determined to be an already-detected object, step S224 is performed. In some embodiments, the storage device 110 stores a variable (not shown) that records whether the object has already been detected, so that the processor 130 can make this determination. For example, when the variable is True, the object is determined to be an already-detected object; when the variable is False, it is determined not to be an already-detected object.
In step S222, a region of interest is obtained according to the image and the second convolutional neural network model. In some embodiments, step S222 may be performed by the processor 130 in FIG. 1. The second convolutional neural network model DB2 may be YOLOv3, SSD, MobileNetV2, or the like, although the present disclosure is not limited thereto.
In step S224, the region of interest is obtained according to the image and the object tracking model. In some embodiments, step S224 may be performed by the processor 130 in FIG. 1. The object tracking model DB4 may be a model such as MDNet or TLD, although the present disclosure is not limited thereto.
Because the computational cost of the object tracking model DB4 is lower than that of the second convolutional neural network model DB2, determining whether the object has already been detected and then choosing, based on that determination, either the object tracking model DB4 or the second convolutional neural network model DB2 to obtain the region of interest of the image reduces the overall computational cost.
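As an illustration of this detect-or-track decision in steps S220 to S224, the following Python sketch shows one way the branch could be arranged. All names (get_roi, detector, tracker, grab of the frame) are hypothetical and not taken from the patent; the detector stands in for the second convolutional neural network model DB2 and the tracker for the object tracking model DB4.

```python
# Hypothetical sketch of steps S220-S224: run the (more expensive) detection
# CNN only until the vehicle has been found once, then switch to the cheaper
# object tracker. `detector` and `tracker` are assumed wrapper objects.
def get_roi(frame, detector, tracker, detected):
    """Return (roi, detected) for one frame.

    detected -- the stored flag recording whether the object was already
                detected in an earlier frame (True/False, as in step S220).
    """
    if not detected:
        roi = detector.detect(frame)   # step S222: second CNN model (e.g. a YOLOv3-style detector)
        if roi is not None:
            detected = True
            tracker.init(frame, roi)   # start tracking from this bounding box
    else:
        roi = tracker.update(frame)    # step S224: object tracking model
    return roi, detected
```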
In step S230, a feature vector of the image is obtained according to the first convolutional neural network model and the region of interest. In some embodiments, step S230 may be performed by the processor 130 in FIG. 1.
Please also refer to FIG. 4. FIG. 4 is a schematic diagram of the first convolutional neural network model DB1 according to some embodiments of the present disclosure. As shown in FIG. 4, the first convolutional neural network model DB1 includes ten sub-model layers and is a lightweight convolutional neural network model. In detail, the ten layers of the model DB1 are, in order, convolutional layers C1 and C2, pooling layer P1, convolutional layer C3, pooling layer P2, convolutional layer C4, pooling layer P3, convolutional layer C5, pooling layer P4, and fully connected layer FC1. The numbers annotated on each sub-model represent the dimensions of that sub-model.
In some embodiments, after the image V passes through the ten layers of the convolutional neural network model DB1, the fully connected layer FC1 produces a feature vector of length 4096 that represents the state of the spatial energy of the image V.
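A minimal PyTorch sketch of a feature extractor with the same layer ordering as FIG. 4 (C1, C2, P1, C3, P2, C4, P3, C5, P4, FC1) and a 4096-dimensional output is given below. The channel counts, the 64x64 input size, and the ReLU activations are assumptions made for illustration; the actual dimensions are those annotated in FIG. 4.

```python
import torch
import torch.nn as nn

class LampFeatureCNN(nn.Module):
    """Ten-layer extractor in the order C1, C2, P1, C3, P2, C4, P3, C5, P4, FC1."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),    # C1
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),   # C2
            nn.MaxPool2d(2),                              # P1
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),  # C3
            nn.MaxPool2d(2),                              # P2
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), # C4
            nn.MaxPool2d(2),                              # P3
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(), # C5
            nn.MaxPool2d(2),                              # P4
        )
        self.fc1 = nn.Linear(256 * 4 * 4, 4096)           # FC1: 4096-d spatial feature

    def forward(self, roi):               # roi: (B, 3, 64, 64) cropped region of interest
        z = self.features(roi)            # (B, 256, 4, 4) after four 2x2 poolings
        return self.fc1(z.flatten(1))     # (B, 4096) feature vector x_t
```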
Please refer back to FIG. 3. In step S240, a time feature vector is generated according to the feature vector and the long short-term memory temporal feature extraction model, and n is incremented by 1. In some embodiments, step S240 may be performed by the processor 130 in FIG. 1.
Please also refer to FIG. 5. FIG. 5 is a schematic diagram of the long short-term memory temporal feature extraction model DB3 according to some embodiments of the present disclosure. x_{t-1}, x_t, and x_{t+1} are the feature vectors produced by the convolutional neural network model DB1. c_{t-1}, c_t, and c_{t+1} are the memory features produced by the long short-term memory temporal feature extraction model DB3 for the respective images. f_{t-1}, f_t, and f_{t+1} are the spatial state features produced by the model DB3 for the respective images. h_{t-1}, h_t, and h_{t+1} are the time feature vectors produced by the model DB3 from the feature vector of each image, the spatial state feature of the previous image, and the memory feature of the previous image.
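Treating DB3 as a standard LSTM cell, the per-frame update of FIG. 5 can be sketched as follows. This is an assumption for illustration, since the patent names the intermediate quantities (x_t, c_t, f_t, h_t) but not the exact cell equations; the 512-dimensional hidden size matches the output feature vector described below.

```python
import torch
import torch.nn as nn

lstm = nn.LSTMCell(input_size=4096, hidden_size=512)   # hypothetical stand-in for DB3

x_t = torch.randn(1, 4096)   # spatial feature vector of frame t (dummy value)
h_t = torch.zeros(1, 512)    # previous time feature vector h_{t-1}
c_t = torch.zeros(1, 512)    # previous memory feature c_{t-1}

h_t, c_t = lstm(x_t, (h_t, c_t))   # new time feature h_t and memory feature c_t for frame t
```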
Please refer back to FIG. 3. In step S245, it is determined whether n = M. In some embodiments, step S245 may be performed by the processor 130 in FIG. 1. When n equals M, M images have gone through steps S215 to S245, and step S250 is performed. When n does not equal M, fewer than M images have gone through steps S215 to S245, and step S215 is performed to obtain the object in the next image.
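Putting steps S215 to S245 together, the per-frame loop might look like the following sketch. It reuses the hypothetical helpers from the earlier snippets (get_roi, LampFeatureCNN, lstm) and assumes that grab_next_frame returns the next image and that the region of interest is already a cropped tensor; none of these names come from the patent.

```python
M = 150                  # number of frames used for one decision
n = 0                    # frames processed so far (steps S215-S245)
detected = False         # the stored "already detected" flag of step S220
cnn = LampFeatureCNN()
h_t = torch.zeros(1, 512)
c_t = torch.zeros(1, 512)

while n < M:
    frame = grab_next_frame()                               # step S215: next image in time
    roi, detected = get_roi(frame, detector, tracker, detected)
    if roi is None:
        continue                                            # no object yet: try the next frame
    x_t = cnn(roi)                                          # step S230: 4096-d feature vector
    h_t, c_t = lstm(x_t, (h_t, c_t))                        # step S240: update time feature
    n += 1                                                  # step S245: stop after M frames
```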
In step S250, the vehicle light combination corresponding to the plurality of images is determined. In some embodiments, step S250 may be performed by the processor 130 in FIG. 1.
Please also refer to FIG. 5. After all M images have gone through steps S215 to S245, the long short-term memory temporal feature extraction model DB3 produces an output feature vector Output of length 512, and the fully connected layer FC2 of the model DB3 produces a probability vector of length 16.
In some embodiments, within the long short-term memory temporal feature extraction model DB3, the probability vector is normalized to produce the probabilities corresponding to the combinations G1 to G16 shown in FIG. 2. Then, within the model DB3, the combination among G1 to G16 with the highest probability is determined to be the combination corresponding to the plurality of images.
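A minimal sketch of this final decision in step S250: the 512-dimensional output feature vector is mapped by a fully connected layer (FC2 in FIG. 5) to 16 scores, normalized into probabilities, and the most probable combination among G1 to G16 is selected. Using softmax for the normalization is an assumption; the patent only states that the probability vector is normalized.

```python
import torch
import torch.nn as nn

fc2 = nn.Linear(512, 16)                 # FC2: output feature vector -> 16 combinations

def classify(h_final):
    """h_final: (1, 512) output feature vector after all M frames."""
    logits = fc2(h_final)                # length-16 probability vector (unnormalized)
    probs = torch.softmax(logits, dim=1) # normalized probabilities for G1..G16
    return int(probs.argmax(dim=1)) + 1  # 1-based index of the most probable combination
```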
Based on the foregoing embodiments, some other embodiments of the present disclosure provide a non-transitory computer-readable medium. The non-transitory computer-readable medium stores computer software for performing the vehicle light state analysis method 200 shown in FIG. 3.
As described above, because the state of a vehicle light is dynamic, for example a blinking state, the state cannot be determined from a single static image. In the embodiments of the present disclosure, a convolutional neural network model and a long short-term memory temporal feature extraction model are used to determine the state of the vehicle lights from the spatial energy features of multiple temporally consecutive images. Through neural network learning and the decision made by the long short-term memory temporal feature extraction model, the embodiments of the present disclosure, compared with the prior art, require no multiple parameter thresholds and adapt to various weather conditions.
In addition, the above examples include exemplary steps in a sequence, but these steps need not be performed in the order shown. Performing the steps in different orders is within the scope of the present disclosure. Within the spirit and scope of the embodiments of the present disclosure, steps may be added, replaced, reordered, and/or omitted as appropriate.
Although the present disclosure has been described above by way of embodiments, the embodiments are not intended to limit the present disclosure. Anyone skilled in the art may make various changes and modifications without departing from the spirit and scope of the present disclosure; therefore, the scope of protection of the present disclosure shall be defined by the appended claims.
Claims (7)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW107143736A TWI684920B (en) | 2018-12-05 | 2018-12-05 | Headlight state analysis method, headlight state analysis system, and non-transitory computer readable media |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI684920B true TWI684920B (en) | 2020-02-11 |
| TW202022689A TW202022689A (en) | 2020-06-16 |
Family
ID=70413568
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW107143736A TWI684920B (en) | 2018-12-05 | 2018-12-05 | Headlight state analysis method, headlight state analysis system, and non-transitory computer readable media |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI684920B (en) |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7317406B2 (en) * | 2005-02-03 | 2008-01-08 | Toyota Technical Center Usa, Inc. | Infrastructure-based collision warning using artificial intelligence |
| US20080294315A1 (en) * | 1995-06-07 | 2008-11-27 | Intelligent Technologies International, Inc. | System and Method for Controlling Vehicle Headlights |
| TW201241796A (en) * | 2011-04-15 | 2012-10-16 | Hon Hai Prec Ind Co Ltd | System and method for inspection of cars that violate traffic regulations using images |
| CN107563265A (en) * | 2016-06-30 | 2018-01-09 | 杭州海康威视数字技术股份有限公司 | A kind of high beam detection method and device |
| CN108052861A (en) * | 2017-11-08 | 2018-05-18 | 北京卓视智通科技有限责任公司 | A kind of nerve network system and the model recognizing method based on the nerve network system |
| CN108197538A (en) * | 2017-12-21 | 2018-06-22 | 浙江银江研究院有限公司 | A kind of bayonet vehicle searching system and method based on local feature and deep learning |
| CN108647700A (en) * | 2018-04-14 | 2018-10-12 | 华中科技大学 | Multitask vehicle part identification model based on deep learning, method and system |
| CN108921060A (en) * | 2018-06-20 | 2018-11-30 | 安徽金赛弗信息技术有限公司 | Motor vehicle based on deep learning does not use according to regulations clearance lamps intelligent identification Method |
2018-12-05: TW application TW107143736A filed; patent TWI684920B active
Also Published As
| Publication number | Publication date |
|---|---|
| TW202022689A (en) | 2020-06-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110991311B (en) | A target detection method based on densely connected deep network | |
| CN109086811B (en) | Multi-label image classification method and device and electronic equipment | |
| CN108197326B (en) | Vehicle retrieval method and device, electronic equipment and storage medium | |
| CN110210474B (en) | Target detection method and device, equipment and storage medium | |
| CN114926722B (en) | YOLOv 5-based scale self-adaptive target detection method and storage medium | |
| CA3148760C (en) | Automated image retrieval with graph neural network | |
| CN108734210B (en) | An object detection method based on cross-modal multi-scale feature fusion | |
| CN115187772A (en) | Target detection network training and target detection method, device and equipment | |
| CN107944450B (en) | License plate recognition method and device | |
| WO2018188270A1 (en) | Image semantic segmentation method and device | |
| CN109902619B (en) | Image closed-loop detection method and system | |
| CN111861925A (en) | An Image Rain Removal Method Based on Attention Mechanism and Gated Recurrent Unit | |
| US10706558B2 (en) | Foreground and background detection method | |
| CN111831852B (en) | A video retrieval method, device, equipment and storage medium | |
| CN112016472A (en) | Driver attention area prediction method and system based on target dynamic information | |
| CN112906808A (en) | Image classification method, system, device and medium based on convolutional neural network | |
| CN113591543A (en) | Traffic sign recognition method and device, electronic equipment and computer storage medium | |
| CN113487610A (en) | Herpes image recognition method and device, computer equipment and storage medium | |
| US20230107917A1 (en) | System and method for a hybrid unsupervised semantic segmentation | |
| CN117809118A (en) | A visual perception recognition method, equipment and medium based on deep learning | |
| CN114202656A (en) | Method and apparatus for processing image | |
| TWI684920B (en) | Headlight state analysis method, headlight state analysis system, and non-transitory computer readable media | |
| CN113763412B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
| CN119251471A (en) | Target detection method and device based on YOLO architecture | |
| US20250259274A1 (en) | System and method for deep equilibirum approach to adversarial attack of diffusion models |