TWI512641B - Identification system and identification method - Google Patents
Identification system and identification method
- Publication number: TWI512641B
- Application number: TW103112320A
- Authority: TW (Taiwan)
- Prior art keywords: image data, facial image, recognized, facial, module
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Processing Or Creating Images (AREA)
- Image Analysis (AREA)
Description
The present invention relates to an identification system, and more particularly to a system and method for identifying and searching for information about a person or character appearing in a video.
With advances in multimedia technology, video playback has become increasingly widespread. Using electronic devices such as tablet computers and smartphones, users can watch videos easily and conveniently, without being restricted to a particular location. However, when a user wants to look up information about a person or character appearing in a video, existing techniques still leave considerable room for improvement.
For example, the user may have to pause the video at a particular frame, capture an image of the person or character of interest from that frame, and then feed the captured image into recognition software or an image search server to find related information. Moreover, when existing techniques search for information about a person or character, the likelihood of a successful search also needs to be improved.
Accordingly, one aspect of the present invention provides an identification system comprising a storage server, at least one search server, and an identification device. The identification device comprises a detection module, an image processing module, and an execution module. The detection module is configured to perform face detection on a first image in a video according to a user instruction, so as to generate at least one piece of facial image data. The image processing module is coupled to the detection module and communicatively connected to the storage server. The image processing module is configured to obtain a weight value for the facial image data according to a weight calculation formula, select, according to the weight values, the facial image data to be recognized from the facial image data, and transmit it to the storage server; the storage server returns a storage URL corresponding to the storage location of the facial image data to be recognized. The execution module is communicatively connected to the storage server and the search server. The execution module is configured to receive the storage URL and transmit it to the search server, so that the search server reads the facial image data to be recognized according to the storage URL, generates an object information search result based on the facial image data to be recognized, and returns the result. The execution module then outputs object-related information according to the object information search result.
According to an embodiment of the invention, the execution module is configured to transmit the storage URL to a plurality of search servers and determine whether the plurality of object information search results returned by the search servers include object-related information corresponding to the facial image data to be recognized.
According to another embodiment of the invention, the execution module outputs the object-related information through a browser.
According to yet another embodiment of the invention, the detection module is further configured to calculate a corresponding face detection confidence value for the facial image data.
According to still another embodiment of the invention, the identification device further comprises an image optimization module configured to determine whether the face detection confidence value is less than a threshold value. If so, the detection module is further configured to perform face detection on at least one second image in the video, so as to generate facial image data that is similar to the original facial image data but has a higher face detection confidence value, and to replace the original facial image data with it.
According to still another embodiment of the invention, the weight calculation formula is given by W = D × C × B × dmin, where W is the weight value; D is the distance between the two eyes in the facial image data; C is the face detection confidence value; B is the brightness value of the facial image data; and dmin is the smallest of the distances between the midpoint of the two eyes and five fixed points on the facial image data, the five fixed points being the center point of the facial image data and the four intersections of the tic-tac-toe grid lines that divide the facial image data into equal thirds.
According to still another embodiment of the invention, the image processing module is configured to select, from the facial image data, the facial image data having the highest weight value as the facial image data to be recognized.
According to still another embodiment of the invention, the image processing module is configured to output the facial image data to a display element in order of weight value, and to select the facial image data to be recognized from the facial image data according to a user selection instruction.
Another aspect of the present invention provides an identification method comprising the following steps: (a) performing face detection on a first image in a video according to a user instruction to generate at least one piece of facial image data; (b) obtaining a weight value of the facial image data according to a weight calculation formula; (c) selecting, according to the weight value, the facial image data to be recognized from the facial image data and storing it; (d) reading the facial image data to be recognized according to a storage URL corresponding to the storage location of the facial image data to be recognized; (e) generating an object information search result according to the facial image data to be recognized; and (f) outputting object-related information according to the object information search result.
According to an embodiment of the invention, the identification method further comprises: determining whether the object information search result includes object-related information corresponding to the facial image data to be recognized, and if not, repeating steps (d) and (e).
According to another embodiment of the invention, the identification method further comprises: calculating a corresponding face detection confidence value for the facial image data.
According to yet another embodiment of the invention, the identification method further comprises: determining whether the face detection confidence value is less than a threshold value; and if so, performing face detection on at least one second image in the video to generate facial image data that is similar to the at least one piece of facial image data but has a higher face detection confidence value, and replacing the facial image data with it.
According to still another embodiment of the invention, the weight calculation formula is given by W = D × C × B × dmin, where W is the weight value; D is the distance between the two eyes in the facial image data; C is the face detection confidence value; B is the brightness value of the facial image data; and dmin is the smallest of the distances between the midpoint of the two eyes and five fixed points on the facial image data, the five fixed points being the center point of the facial image data and the four intersections of the tic-tac-toe grid lines that divide the facial image data into equal thirds.
According to still another embodiment of the invention, step (c) further comprises: selecting, from the facial image data, the facial image data having the highest weight value as the facial image data to be recognized.
According to still another embodiment of the invention, step (c) further comprises: outputting the facial image data to a display element in order of weight value, and selecting the facial image data to be recognized from the facial image data according to a user selection instruction.
An advantage of the present invention is that the user can query information about a person or character in a video by issuing a single instruction. In addition, the system can automatically identify and determine the persons or characters the user is likely to be interested in; the user does not need to pause the video or manually select the person or character to query. Furthermore, searching through multiple search servers increases the probability that the object-related information will be found successfully.
100, 100a‧‧‧identification system
110, 110a‧‧‧identification device
120, 120a‧‧‧detection module
122, 122a, 222‧‧‧facial image data
130‧‧‧image processing module
132‧‧‧facial image data to be recognized
140‧‧‧execution module
150‧‧‧storage server
152‧‧‧storage URL
160‧‧‧search server
162‧‧‧object information search result
170, 174, 180, 182, 184, 186, 188‧‧‧end points
190, 192, 194, 196, 198‧‧‧line segments
210‧‧‧image optimization module
302, 304, 306, 308, 310, 312, 402, 502, 602, 604‧‧‧steps
FIG. 1 is a block diagram of an identification system according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of facial image data according to an embodiment of the present invention.
FIG. 3 is a block diagram of an identification system according to an embodiment of the present invention.
FIG. 4 is a flow chart of an identification method according to an embodiment of the present invention.
FIG. 5 is a flow chart of an identification method according to an embodiment of the present invention.
FIG. 6 is a flow chart of an identification method according to an embodiment of the present invention.
FIG. 7 is a flow chart of an identification method according to an embodiment of the present invention.
Embodiments are described in detail below with reference to the accompanying drawings; however, the embodiments provided are not intended to limit the scope of the invention, and the description of structural operations is not intended to limit the order of their execution. Any structure obtained by recombining elements to produce a device with equivalent functionality falls within the scope of the invention. In addition, the drawings are for illustrative purposes only and are not drawn to scale. For ease of understanding, the same elements in the following description are denoted by the same reference numerals.
As used herein, unless the context specifically limits the articles, "a" and "the" may refer to one or more. It will be further understood that the terms "comprising," "including," "having," and similar words used herein specify the presence of the stated features, regions, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, regions, integers, steps, operations, elements, components, and/or groups thereof.
In addition, terms such as "first," "second," and "third" are used herein to describe various elements, components, regions, layers, and/or blocks, but these elements, components, regions, layers, and/or blocks should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer, and/or block from another. Thus, a first element, component, region, layer, and/or block discussed below could also be termed a second element, component, region, layer, and/or block without departing from the spirit of the invention.
Please refer to FIG. 1. FIG. 1 is a block diagram of an identification system 100 according to an embodiment of the present invention. The identification system 100 comprises a storage server 150, at least one search server 160, and an identification device 110.
The identification system 100 can be used to identify and search for information about a person or character in a video and provide that information to the user. In one embodiment, the storage server 150 and the search server 160 are connected to the identification device 110 through a network, and the identification device 110 may be an electronic device such as a personal computer, a notebook computer, a tablet computer, a smartphone, a multimedia player, or smart glasses.
The identification device 110 comprises a detection module 120, an image processing module 130, and an execution module 140. The image processing module 130 is coupled to the detection module 120 and communicatively connected to the storage server 150. The execution module 140 is communicatively connected to the storage server 150 and the search server 160. In one embodiment, the detection module 120 is a face detection chip, the image processing module 130 is an image processing chip, and the execution module 140 is a system-on-chip. In another embodiment, the identification device 110 comprises at least one processor and a memory, and the detection module 120, the image processing module 130, and the execution module 140 are stored in the memory and their functions are executed by the processor.
The detection module 120 is configured to perform face detection on a first image in a video according to a user instruction, so as to generate at least one piece of facial image data 122. The user instruction may be an instruction issued by the user through a keyboard, a mouse, or a remote controller, or a voice command. The facial image data 122 may be facial image data of a human face. In one embodiment, the detection module 120 is further configured to calculate a corresponding face detection confidence value for the facial image data 122. The face detection confidence value serves as a reliability indicator of whether the detected facial image data 122 actually contains a face.
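For illustration only (not part of the patented disclosure), the following Python sketch shows one way such a detection step could be realized with OpenCV's Haar-cascade detector; its detectMultiScale3 call can also report per-detection level weights, which are used here as a rough stand-in for the face detection confidence value. The cascade file and scaling parameters are assumed defaults.

```python
# Hypothetical sketch of the detection module (120): detect faces in one
# video frame and attach a rough confidence value to each detection.
import cv2

def detect_faces(frame_bgr):
    """Return a list of (x, y, w, h, confidence) tuples for one frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # detectMultiScale3 can report level weights; treated here as a confidence proxy.
    boxes, _levels, weights = cascade.detectMultiScale3(
        gray, scaleFactor=1.1, minNeighbors=5, outputRejectLevels=True)
    return [(int(x), int(y), int(w), int(h), float(c))
            for (x, y, w, h), c in zip(boxes, weights)]
```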
The image processing module 130 is configured to obtain a weight value for the facial image data 122 according to a weight calculation formula. Please also refer to FIG. 2, which is a schematic diagram of facial image data 122a according to an embodiment of the present invention. In this embodiment, the weight calculation formula for the facial image data 122a is given by W = D × C × B × dmin, where W is the weight value; D is the distance between the two eyes in the facial image data 122a (i.e., the length of line segment 170 in FIG. 2); C is the face detection confidence value of the facial image data 122a; B is the brightness value of the facial image data 122a; and dmin is the smallest of the distances between the midpoint of the two eyes (i.e., end point 174 in FIG. 2) and five fixed points on the facial image data 122a. The five fixed points are the center point of the facial image data 122a (i.e., end point 180 in FIG. 2) and the four intersections of the tic-tac-toe grid lines that divide the facial image data 122a into equal thirds (i.e., end points 182, 184, 186, and 188 in FIG. 2). In this embodiment, the distances between the midpoint 174 of the two eyes and the five fixed points 180, 182, 184, 186, and 188 are the lengths of line segments 190, 192, 194, 196, and 198 in FIG. 2, respectively. As can be seen from FIG. 2, in this embodiment dmin is the length of line segment 194.
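As a minimal sketch of this weight computation (again an illustration, not the patent's implementation), the code below assumes that eye positions within the face crop are available, for example from an eye detector or facial landmarks, and uses the mean grayscale intensity of the crop as the brightness value B; both choices are assumptions not fixed by the text above.

```python
# Hypothetical sketch of W = D * C * B * d_min for one face crop, using the
# five fixed points of FIG. 2 (the center point plus the four intersections
# of the thirds grid over the crop).
import math
import numpy as np

def face_weight(face_crop_gray, left_eye, right_eye, confidence):
    h, w = face_crop_gray.shape[:2]
    D = math.dist(left_eye, right_eye)            # eye distance (segment 170)
    B = float(np.mean(face_crop_gray))            # brightness value (assumed: mean intensity)
    mid = ((left_eye[0] + right_eye[0]) / 2,      # midpoint of the eyes (point 174)
           (left_eye[1] + right_eye[1]) / 2)
    fixed_points = [(w / 2, h / 2),               # center point (180)
                    (w / 3, h / 3), (2 * w / 3, h / 3),
                    (w / 3, 2 * h / 3), (2 * w / 3, 2 * h / 3)]  # points 182-188
    d_min = min(math.dist(mid, p) for p in fixed_points)
    return D * confidence * B * d_min
```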
The image processing module 130 is further configured to select, according to the weight values, the facial image data to be recognized 132 from the facial image data 122 and transmit it to the storage server 150, so that the storage server 150 returns a storage URL 152 corresponding to the storage location of the facial image data to be recognized 132.
In one embodiment, the image processing module 130 is configured to select, from the facial image data 122, the facial image data having the highest weight value as the facial image data to be recognized 132. Here, the facial image data having the highest weight value may refer to the image data whose face occupies the largest proportion of the frame, or to the facial image data that has the highest weight value as computed by the weight calculation formula described above.
In another embodiment, the image processing module 130 is configured to output the facial image data 122 to a display element in order of weight value, and to select the facial image data to be recognized 132 from the facial image data 122 according to a user selection instruction.
The execution module 140 is configured to receive the storage URL 152 and transmit it to the search server 160, so that the search server 160 reads the facial image data to be recognized 132 from its storage location according to the storage URL 152, and generates and returns an object information search result 162 based on the facial image data to be recognized 132. The execution module 140 is further configured to output object-related information according to the object information search result 162.
In one embodiment, the search server 160 searches a database or the Internet, based on the facial image data to be recognized 132, for information related to it, such as the name, age, gender, photos, marital status, nationality, place of residence, and occupation of the person whose face matches the facial image data to be recognized 132, and uses the retrieved information as the object information search result 162.
In another embodiment, the execution module 140 outputs the object-related information on the display element through a browser (for example, Internet Explorer, Firefox, Chrome, or Safari).
In yet another embodiment, the execution module 140 is configured to transmit the storage URL 152 to a plurality of search servers 160 and determine whether the plurality of object information search results 162 returned by the search servers 160 include object-related information corresponding to the facial image data to be recognized 132.
In this embodiment, the execution module 140 first transmits the storage URL 152 to a first search server and determines whether a first object information search result returned by the first search server includes object-related information corresponding to the facial image data to be recognized 132. If so, the execution module 140 outputs the object-related information corresponding to the facial image data to be recognized 132 contained in the first object information search result. If not, the execution module 140 transmits the storage URL 152 to a second search server and determines whether a second object information search result returned by the second search server includes object-related information corresponding to the facial image data to be recognized 132. If so, the execution module 140 outputs the object-related information corresponding to the facial image data to be recognized 132 contained in the second object information search result. If not, the execution module 140 transmits the storage URL 152 to a third search server and repeats the above operations. In this embodiment, by using multiple search servers to search sequentially for object-related information about the facial image data to be recognized 132, the probability that the object-related information is found successfully is increased.
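The sequential fallback over multiple search servers can be pictured with the sketch below. The endpoint URLs, the request format, and the convention by which a server signals "no match" are all illustrative assumptions; the patent does not prescribe a particular search API.

```python
# Hypothetical sketch of the execution module's fallback (140): submit the
# storage URL to each search server in turn and stop at the first result
# that actually contains object-related information.
import requests  # assumed HTTP client

SEARCH_SERVERS = [  # illustrative endpoints only
    "https://search-a.example.com/api/face-search",
    "https://search-b.example.com/api/face-search",
    "https://search-c.example.com/api/face-search",
]

def search_object_info(storage_url):
    for server in SEARCH_SERVERS:
        resp = requests.post(server, json={"image_url": storage_url}, timeout=10)
        if resp.status_code != 200:
            continue
        result = resp.json()
        # Assumed convention: a non-empty "matches" list means the result
        # includes object-related information for the face.
        if result.get("matches"):
            return result["matches"]
    return None  # no server returned object-related information
```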
Please refer to FIG. 3. FIG. 3 is a block diagram of an identification system 100a according to an embodiment of the present invention. Compared with the identification system 100 shown in FIG. 1, in this embodiment the identification device 110a further comprises an image optimization module 210. In one embodiment, the image optimization module 210 is an image processing chip. In another embodiment, the identification device 110a comprises at least one processor and a memory, and the image optimization module 210 is stored in the memory and its functions are executed by the processor. The detection module 120a may be the detection module 120 shown in FIG. 1; its functions and operation are similar and are therefore not described again here.
The image optimization module 210 is configured to determine whether the face detection confidence value corresponding to the facial image data 122 is less than a threshold value. If so, the detection module 120a is further configured to perform face detection on at least one second image in the video, so as to generate facial image data 222 that is similar to the facial image data 122 but has a higher face detection confidence value, and to replace the facial image data 122 with it. In this embodiment, by replacing facial image data with a low face detection confidence value with similar facial image data having a higher face detection confidence value, the probability that the facial image data actually contains a face is increased, which in turn improves the reliability of the object-related information found by the identification system.
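A sketch of this optimization step follows. The similarity test between two face crops is shown as a simple grayscale-histogram comparison, and the confidence and similarity thresholds are arbitrary; the patent only requires that the replacement face be similar and more confidently detected, without fixing a particular measure.

```python
# Hypothetical sketch of the image optimization module (210): if the face
# crop from the first frame has low confidence, scan later frames for a
# similar face with higher confidence and use that crop instead.
import cv2

def is_similar(face_a, face_b, sim_threshold=0.8):
    """Crude similarity check via grayscale-histogram correlation."""
    def hist(img):
        g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        h = cv2.calcHist([g], [0], None, [64], [0, 256])
        return cv2.normalize(h, h).flatten()
    return cv2.compareHist(hist(face_a), hist(face_b), cv2.HISTCMP_CORREL) >= sim_threshold

def optimize_face(face_crop, confidence, later_frames, detect, conf_threshold=5.0):
    """Replace a low-confidence face crop with a similar, higher-confidence one."""
    if confidence >= conf_threshold:
        return face_crop, confidence
    best_crop, best_conf = face_crop, confidence
    for frame in later_frames:                 # the "second images" in the video
        for (x, y, w, h, c) in detect(frame):  # e.g. the detect_faces sketch above
            candidate = frame[y:y + h, x:x + w]
            if c > best_conf and is_similar(face_crop, candidate):
                best_crop, best_conf = candidate, c
    return best_crop, best_conf
```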
Please refer to FIG. 4. FIG. 4 is a flow chart of an identification method according to an embodiment of the present invention. The identification method can be implemented as a computer program product (such as an application) and stored in a computer-readable recording medium, so that a computer performs the identification method after reading the recording medium. The computer-readable recording medium may be a read-only memory, a flash memory, a hard disk, an optical disc, a flash drive, a magnetic tape, a database accessible over a network, or any computer-readable recording medium with the same functionality that can readily be conceived by those skilled in the art.
This identification method can be applied to the identification system 100 shown in FIG. 1, but is not limited thereto. For convenience and clarity, the following description of the identification method is given with reference to the identification system 100 shown in FIG. 1.
In step 302, the detection module 120 performs face detection on the first image in the video according to a user instruction, so as to generate at least one piece of facial image data 122.
Then, in step 304, the image processing module 130 obtains the weight value of the facial image data 122 according to the weight calculation formula.
Next, in step 306, the image processing module 130 selects the facial image data to be recognized 132 from the facial image data 122 according to the weight values, and transmits it to the storage server 150 for storage.
In step 308, the search server 160 reads the facial image data to be recognized 132 according to the storage URL 152, transmitted by the execution module 140, that corresponds to the storage location of the facial image data to be recognized 132.
Then, in step 310, the search server 160 generates the object information search result 162 according to the facial image data to be recognized 132.
Next, in step 312, the execution module 140 outputs the object-related information according to the object information search result 162 returned by the search server 160.
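Tying steps 302 through 312 together, the device-side flow could look like the sketch below. It reuses the hypothetical helpers from the earlier sketches (detect_faces, face_weight, search_object_info) and assumes the storage server exposes a simple upload endpoint that returns the storage URL; the endpoint, the response shape, and the placeholder eye positions are assumptions beyond what the patent specifies.

```python
# Hypothetical end-to-end sketch of steps 302-312: detect faces, weight them,
# upload the selected crop to the storage server, then search by storage URL.
import cv2
import requests  # assumed HTTP client

STORAGE_UPLOAD_ENDPOINT = "https://storage.example.com/upload"  # illustrative

def identify_person(frame_bgr):
    detections = detect_faces(frame_bgr)                     # step 302
    if not detections:
        return None
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    def weight_of(det):                                      # step 304
        x, y, w, h, conf = det
        crop = gray[y:y + h, x:x + w]
        # Placeholder eye positions; a real system would locate the eyes.
        left_eye, right_eye = (0.3 * w, 0.35 * h), (0.7 * w, 0.35 * h)
        return face_weight(crop, left_eye, right_eye, conf)

    x, y, w, h, _ = max(detections, key=weight_of)           # step 306: highest weight
    ok, buf = cv2.imencode(".jpg", frame_bgr[y:y + h, x:x + w])
    resp = requests.post(STORAGE_UPLOAD_ENDPOINT, files={"file": buf.tobytes()})
    storage_url = resp.json()["url"]                         # assumed response shape
    return search_object_info(storage_url)                   # steps 308-312
```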
Please refer to FIG. 5. FIG. 5 is a flow chart of an identification method according to an embodiment of the present invention. Compared with the identification method shown in FIG. 4, in this embodiment the identification method further comprises step 402. This identification method can be applied to the identification system 100 shown in FIG. 1, but is not limited thereto. For convenience and clarity, the following description of the identification method is given with reference to the identification system 100 shown in FIG. 1.
In step 402, the execution module 140 determines whether the object information search result 162 includes object-related information corresponding to the facial image data to be recognized 132. If not, steps 308 and 310 are repeated.
In this embodiment, by repeatedly searching for object-related information about the facial image data to be recognized 132, the probability that the object-related information is found successfully is increased.
Please refer to FIG. 6. FIG. 6 is a flow chart of an identification method according to an embodiment of the present invention. Compared with the identification method shown in FIG. 4, in this embodiment the identification method further comprises step 502. This identification method can be applied to the identification system 100 shown in FIG. 1, but is not limited thereto. For convenience and clarity, the following description of the identification method is given with reference to the identification system 100 shown in FIG. 1.
In step 502, the detection module 120 calculates a corresponding face detection confidence value for the facial image data 122. The face detection confidence value serves as a reliability indicator of whether the detected facial image data 122 actually contains a face.
Please refer to FIG. 7. FIG. 7 is a flow chart of an identification method according to an embodiment of the present invention. Compared with the identification method shown in FIG. 6, in this embodiment the identification method further comprises steps 602 and 604. This identification method can be applied to the identification system 100a shown in FIG. 3, but is not limited thereto. For convenience and clarity, the following description of the identification method is given with reference to the identification system 100a shown in FIG. 3.
In step 602, the image optimization module 210 determines whether the face detection confidence value corresponding to the facial image data 122a is less than a threshold value.
If the result of this determination is yes, then in step 604 the detection module 120a performs face detection on at least one second image in the video, so as to generate facial image data that is similar to the facial image data 122a but has a higher face detection confidence value, and replaces the facial image data 122a with it.
In this embodiment, by replacing facial image data with a low face detection confidence value with similar facial image data having a higher face detection confidence value, the probability that the facial image data actually contains a face is increased, which in turn improves the reliability of the retrieved object-related information.
It should be understood that the steps mentioned in this embodiment, unless their order is specifically stated, may be reordered according to actual needs, and may even be performed simultaneously or partially simultaneously.
In summary, with the technical means of the present invention, the user can query information about a person or character in a video by issuing a single instruction. In addition, the system can automatically identify and determine the persons or characters the user is likely to be interested in; the user does not need to pause the video or manually select the person or character to query. Moreover, when the identification device needs to use a search server to search for object information, it does not need to upload the image data to be recognized to the search server; it only needs to provide the storage URL generated by the storage server, and the search server can then perform the object information search. Since some search servers may not offer a service for uploading data directly, or the user may find it inconvenient to upload files directly, the search server can in such cases read the facial image data to be recognized via the storage URL and proceed with the search. Furthermore, searching through multiple search servers increases the probability that the object-related information will be found successfully.
Although the present disclosure has been disclosed above by way of embodiments, it is not intended to limit the present disclosure. Anyone skilled in the art may make various changes and modifications without departing from the spirit and scope of the present disclosure; therefore, the scope of protection of the present disclosure shall be defined by the appended claims.
100‧‧‧identification system
110‧‧‧identification device
120‧‧‧detection module
122‧‧‧facial image data
130‧‧‧image processing module
132‧‧‧facial image data to be recognized
140‧‧‧execution module
150‧‧‧storage server
152‧‧‧storage URL
160‧‧‧search server
162‧‧‧object information search result
Claims (15)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW103112320A TWI512641B (en) | 2014-04-02 | 2014-04-02 | Identification system and identification method |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW103112320A TWI512641B (en) | 2014-04-02 | 2014-04-02 | Identification system and identification method |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| TW201539332A | 2015-10-16 |
| TWI512641B | 2015-12-11 |
Family
ID=54851366
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW103112320A TWI512641B (en) | 2014-04-02 | 2014-04-02 | Identification system and identification method |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI512641B (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20020191817A1 (en) * | 2001-03-15 | 2002-12-19 | Toshio Sato | Entrance management apparatus and entrance management method |
| TW201120766A (en) * | 2009-12-03 | 2011-06-16 | Chunghwa Telecom Co Ltd | Human face recognition method based on individual face regions. |
| TW201232425A (en) * | 2011-01-24 | 2012-08-01 | Taiwan Colour And Imaging Technology Corp | Face recognition intelligent self-service system |
| TWM469556U (en) * | 2013-08-22 | 2014-01-01 | Univ Kun Shan | Intelligent monitoring device for perform face recognition in cloud |
- 2014-04-02: Application TW103112320A filed in Taiwan; granted as TWI512641B (status: not active — IP right cessation)
Also Published As
| Publication number | Publication date |
|---|---|
| TW201539332A (en) | 2015-10-16 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | MM4A | Annulment or lapse of patent due to non-payment of fees | |