
TW201006527A - Measuring object contour method and measuring object contour apparatus - Google Patents

Measuring object contour method and measuring object contour apparatus

Info

Publication number
TW201006527A
TW201006527A · TW097129697A · TW97129697A
Authority
TW
Taiwan
Prior art keywords
momentum
image
map
edge map
analyzing
Prior art date
Application number
TW097129697A
Other languages
Chinese (zh)
Other versions
TWI361093B (en)
Inventor
Chien-Chun Kuo
Po-Lung Chen
Chia-Chang Li
Ko-Shyang Wang
Original Assignee
Ind Tech Res Inst
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ind Tech Res Inst
Priority to TW097129697A
Publication of TW201006527A
Application granted
Publication of TWI361093B


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A measuring object contour method is provided. The method includes the following steps: capturing multiple sequence images including a current image and at least one previous image; deriving a motion image by calculating the motion difference of the sequence images; deriving an edge map by operating on the current image; deriving a motion edge map by operating on the motion image; and superposing the edge map with the motion edge map to derive a foreground edge map by extending the contours in the edge map that correspond to the motion edge map. In addition, a measuring object contour apparatus using the above method is provided.

Description

IX. Description of the Invention

[Technical Field of the Invention]

The present invention relates to a method for analyzing an object contour, and more particularly to a method that removes the background and extracts the contour of a moving object, so as to realize a human-machine interaction interface.

[Prior Art]

Human-machine interaction technology has grown substantially in recent years, particularly in the game industry, where players expect a game to respond immediately and directly to their movements. In most current human-machine interfaces, however, the user must wear special clothing or attach special markers before a receiver can sense the user's motion, which is quite inconvenient. A vision-based interface is therefore a much more intuitive form of human-machine interaction. Current image-based interaction, however, can only detect the regions of an image that are changing or active; owing to factors such as the background environment and recognition efficiency, a complete and precise contour is difficult to obtain, so the user's body movements cannot be judged in detail.

Sensing user motion with markers is disclosed in U.S. Patent No. 5,524,637 and U.S. Patent No. 6,308,565. In U.S. Patent No. 5,524,637, active or passive markers are attached to the user, and the user's applied force and speed are computed from the detected displacement of the markers to drive a human-machine interface. In U.S. Patent No. 6,308,565, active or passive markers are likewise attached to specific parts of the user, and the detected marker displacement, combined with coordinate mapping and positioning, yields the 2D/3D spatial coordinates of the user's motion for interaction with a virtual environment. As noted above, because these recognition methods all require markers to be worn on the body, they are very inconvenient for the user.

In addition, U.S. Patent No. 6,307,951 discloses determining the user's motion purely by image processing. That method is block-based: it analyzes the luminance change of each block as a reference for computing the momentum change of a moving object, and finally computes the moving force and direction of the moving-object region with a Fourier transform. However, if the user merely waves an arm while the rest of the body stays still, such a method extracts only the contour of the arm (the moving object) and treats the unmoved parts of the user as background, so the user's overall contour cannot be obtained. The method can therefore misinterpret the user's motion, and image processing of this kind alone cannot achieve a human-machine interaction interface or virtual-reality interaction.

[Summary of the Invention]

In view of the above, an object of the present invention is to provide a method and an apparatus for analyzing an object contour that first analyze the moving parts of a user and then extend outward from those parts to obtain the user's overall contour, thereby realizing a human-machine interaction interface.

To achieve the above or other objects, the present invention proposes a method for analyzing an object contour that includes the following steps: capturing a plurality of sequence images, the sequence images including a current image and at least one previous image; calculating the momentum change of the sequence images to obtain a momentum map (motion image); operating on the current image to obtain an edge map; operating on the momentum map to obtain a momentum edge map (motion edge map); and superposing the edge map and the momentum edge map, and extending the contours of the edge map that correspond to the momentum edge map to obtain a foreground edge map.

In an embodiment of the invention, calculating the momentum change of the sequence images subtracts the pixel values of the corresponding pixels of the previous image from the pixel values of the pixels of the current image to obtain the momentum map.

In an embodiment of the invention, the number of previous images is two, and calculating the momentum change of the sequence images includes: subtracting the pixel values of the corresponding pixels of each previous image from the pixel values of the pixels of the current image to obtain a plurality of difference maps; and adding the difference maps together to obtain the momentum map.

In an embodiment of the invention, operating on the current image to obtain the edge map may use a spatial edge filtering method or a spectral edge filtering method.

In an embodiment of the invention, operating on the momentum map to obtain the momentum edge map may use an image binarization method, which includes the following steps: setting a threshold; setting the pixel values of the pixels of the momentum map that are greater than the threshold to 1; and setting the pixel values of the pixels of the momentum map that are less than the threshold to 0.

In an embodiment of the invention, the foreground edge map has object image information, and the object image information includes coordinates, direction, or speed. After the foreground edge map is obtained, the method may further include judging the object image information of the foreground edge map with an action rule database, where the action rule database has a semantic lookup table, and the semantic lookup table includes moving forward, moving backward, moving right, and moving left.

In an embodiment of the invention, the step of capturing the sequence images is performed with an image capture unit, which may be a digital camera or a webcam.

In an embodiment of the invention, after the object image information of the foreground edge map is judged, the method may further include interacting with the user, or transmitting the object image information to a display unit, which may be a plasma television, a liquid crystal television, or a projection television.

In summary, the method for analyzing an object contour of the present invention derives an edge map and a momentum edge map: the edge map includes all contours, background included, while the momentum edge map includes only the contours of the moving parts of an object. By superposing the edge map and the momentum edge map and extending outward from the contours of the moving parts, the purely static background contours are automatically excluded, and the overall contour of the object is obtained. The object contour can then be analyzed semantically; in particular, when the object is a user, a human-machine interaction interface can be realized.

To make the above and other objects, features, and advantages of the present invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.

[Detailed Description of the Embodiments]

FIG. 1 is a flowchart of a method for analyzing an object contour according to an embodiment of the present invention, and FIGS. 2A-2D are images obtained by the method for analyzing an object contour of the present invention. Referring first to FIG. 1, the method proceeds as follows.
In step S11, a plurality of sequence images are captured, the sequence images including a current image and at least one previous image. In this embodiment, an image capture unit continuously captures sequence images of the objects in front of it, and these sequence images contain both background objects and foreground objects: a foreground object is an object that moves, such as a user, while a background object is an object that does not move, such as a table or a chair. The image capture unit may be a digital camera, a webcam, or the like; the invention does not limit the type of image capture unit.

The captured sequence images are array images; that is, each sequence image has a plurality of pixels arranged in an array, and each pixel has a specific pixel value representing information such as color and brightness. The sequence images are ordered as a time series: the most recently captured image is called the current image, and the images captured before it are called previous images.

Next, in step S12, the momentum change of the sequence images is calculated to obtain a momentum map, whose purpose is to express the degree of change of the sequence images. In this embodiment, the momentum map is obtained, for example, by subtracting the pixel values of the corresponding pixels of the previous image from the pixel values of the pixels of the current image. In general, the previous image chosen is the sequence image closest in time to the current image; that is, if the current image has time index n, the previous image is the sequence image with time index n-1. In the resulting momentum map, a pixel whose value equals zero indicates a static region, while a pixel whose value is nonzero indicates a region whose state has changed, so the degree of change of the sequence images can be distinguished. It should be noted that this computation merely illustrates how a momentum map may be formed from the concept of degree of change; the invention does not limit how the momentum map is calculated.

For example, for a current image with time index n, the previous images with time indices n-1 and n-2 may first be selected. The difference between the current image n and the previous image n-1 gives one difference map, and the difference between the current image n and the previous image n-2 gives another; adding the two difference maps together yields the momentum map. Those skilled in the art may slightly modify the computation of the momentum map in light of the above description while remaining within the scope of the invention.
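As an illustration of steps S11 and S12, the following is a minimal sketch, assuming grayscale frames held as NumPy arrays; the function name, the use of absolute differences, and the int32 casts are assumptions added for this example and are not prescribed by the patent.

```python
import numpy as np

def momentum_map(current: np.ndarray, previous: list) -> np.ndarray:
    """Step S12: sum the per-pixel difference maps between the current
    image and each previous image (absolute grayscale differences)."""
    diffs = [np.abs(current.astype(np.int32) - p.astype(np.int32))
             for p in previous]
    return np.sum(diffs, axis=0)

# frames[-1] is the current image n; frames[-2] and frames[-3] are n-1 and n-2.
# m = momentum_map(frames[-1], [frames[-2], frames[-3]])
```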
Referring again to FIG. 1, even a very precise image capture unit suffers interference from jitter, noise, or stray light; that is, even when no external object moves, the pixel values of the momentum map are not all exactly zero. Therefore, as shown in step S13, a momentum judgment is performed on the momentum map to decide whether it indicates object movement or is merely noise. If the momentum map is judged to indicate object movement, the method proceeds to the subsequent steps; if it is judged to be only noise, the method returns to step S11 and continues capturing sequence images.

Incidentally, the judgment of step S13 may also be skipped and the subsequent steps performed directly; performing the judgment of step S13, however, gives a better analysis result. In this embodiment, the momentum judgment is made statistically: a threshold rate is set, and if the proportion of pixels in the momentum map whose values are nonzero exceeds the threshold rate, the momentum map is judged to indicate object movement; otherwise, it is judged to be noise. With a threshold rate of 1/16 and a momentum-map resolution of 1024x728 pixels, for example, the momentum map is considered not to be noise, and the subsequent steps are performed, when more than 46592 (1024x728/16) pixels have nonzero pixel values.
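The statistical judgment of step S13 can be sketched as below, using the 1/16 threshold rate quoted above; the function name and its boolean interface are illustrative assumptions.

```python
import numpy as np

def indicates_movement(momentum: np.ndarray,
                       threshold_rate: float = 1 / 16) -> bool:
    """Step S13: judge the momentum map as real movement only when the
    fraction of nonzero pixels exceeds the threshold rate; at 1024x728
    this means more than 1024 * 728 / 16 = 46592 nonzero pixels."""
    return np.count_nonzero(momentum) > threshold_rate * momentum.size
```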

❹ 在本實施例之步驟S14中,乃是採用空間邊緣濾 法或頻譜邊緣濾波方法而對目前影像進行運算以彳β 圖(圖2Α),其中索貝爾運算子方法乃是空間邊緣二波^ 之-種,並可具有較佳的效果。簡而言之,邊緣圖即為目 前影像之『素描畫』。擷取目前影像中影像不連續的 並將影像連續與不連續的部份以二值化(1、〇)表厂,° 得出以黑白表示之邊緣圖(圖2Α)。以本實施例而= 中包括前景之使用者以及背景之靜止椅子,因此白色= 即表示使用者與椅子之輪廓’而不屬於輪叙區域將會以 黑點表不。 類似地’在本實施例之步驟Sl5中,乃是直接以 二值化法對動量圖進行運算以得出動量邊緣圖(圖2b) 即’動量邊緣圖即為動量圖之『素描畫』。在本實施例中, 影像二值化法乃是先設定閥值,而此閥值主要是用來過濟 訊等干擾,且閥值的設定可依據實際畫素值而㈣不同。 承接上述,將動量圖之所有像素分為兩類,亦即將像 素值大於閥值之像素歸成-類,並將像素值小於閥值之 素歸成另-類’而分別以二值(1、G)表示,如此即可得出 以黑白表示之動量邊緣圖(圖2B)。以本實施例而言,由於 11 201006527 者僅有雙手在移動,因此動量圖之白點乃對應雙手瞬 示。的輪靡,而其他未移動的部位以及背景即以黑點表 作中:參考圖2A ’邊緣圖中的前景部份(白點區域)乃為動 子的幹用者的輪靡’而背景部份(白點區域)乃為靜止椅❹ In step S14 of the embodiment, the current image is calculated by using the spatial edge filtering method or the spectral edge filtering method to 彳β map (Fig. 2Α), wherein the Sobel operator method is a spatial edge two-wave ^ Kind, and can have better results. In short, the edge map is the “sketch” of the current image. The image in the current image is discontinuous and the continuous and discontinuous portions of the image are binarized (1, 〇), and the edge image in black and white is obtained (Fig. 2Α). In this embodiment, = the user of the foreground and the stationary chair of the background are included, so white = that is, the outline of the user and the chair, and the area that does not belong to the wheel will be indicated by the black dot. Similarly, in step S15 of the present embodiment, the momentum map is directly calculated by the binarization method to obtain a momentum edge map (Fig. 2b), i.e., the momentum edge map is the "sketch" of the momentum map. In the present embodiment, the image binarization method first sets a threshold value, and the threshold value is mainly used for interference such as interference, and the threshold value can be set according to the actual pixel value (4). In order to take the above, all the pixels of the momentum map are divided into two categories, that is, the pixels whose pixel values are larger than the threshold are classified into -classes, and the pixels whose pixel values are smaller than the threshold are classified into another class - and respectively have two values (1 G) means that the momentum edge map in black and white can be obtained (Fig. 2B). In the present embodiment, since only 11 hands are moving, the white point of the momentum map corresponds to the two-handed instant. The rim, while the other unmoved parts and the background are in the black point table: refer to Figure 2A. The foreground part (white point area) in the edge map is the rim of the mover's rim'. Part (white point area) is a static chair

之動,其中使用者的手臂部份正在移動,而使圖2B f邊緣圖(白點區域)反應出移動之手臂部份的輪廓。 邊緣^再參考圖1而如步驟S16所示,疊合邊緣圖與動量 景邊緣,教自對應動量邊緣圖之邊緣圖的輪廓延伸而得前 二示、、,圖二其中將邊緣圖與動量邊緣圖疊合後便如圖2c 而%景邊緣圖便如圖2D所示。詳細而言,動 (像素值為!之像素,圖中白色部八 像種子』’以任一個動量邊緣圖之種子(輪廓^份 而言’在邊緣圖相同位置處會有對應此種子的你1 而如圖2Γ 叩像素, ^之像素(以黑色小圓點表示)。以邊緣圖所 2B之德表 M V對應圖 ❹The movement, in which the user's arm portion is moving, causes the edge map (white point region) of Figure 2B to reflect the contour of the moving arm portion. The edge ^ is further referred to FIG. 1 and as shown in step S16, the overlapping edge image and the momentum scene edge are taught to extend from the contour of the edge image of the corresponding momentum edge image, and the front view and the momentum are shown in FIG. The edge map is superimposed as shown in Figure 2c and the % edge map is shown in Figure 2D. In detail, move (the pixel value is the pixel of !, the white part of the picture is like the seed"'s seed of any momentum edge map (in terms of outlines, there will be a seed corresponding to this seed at the same position on the edge map). 1 and as shown in Figure 2Γ pixels, ^ pixels (indicated by black dots).

1象素(黑色小圓點)往鄰接處找尋同樣為輪廓 M 戶圖中以箭碩表示),進而不斷向外延伸。將動量邊緣圖素 ^有種子所對應之邊緣圖之像素全部進行前述之延伸擴之 後(圖2C僅以兩個像素延伸示意),便可自邊緣圖擴焱j 景邊緣固(圖2D)。在本實施例中,動量邊緣圖之種子;= 應使用者的手部,故可從使用者的手部輪廓向外延伸子 展成使用者整體的輪廓。 、 值得注意的是,邊緣圖與前景邊緣圖的差異在於矿旦 邊緣圖沒有背景部份的椅子輪廓,而僅留下前景之= 者。這是由於動量邊緣圖的『種子』是在使用者的手部用 12 201006527 而可從手部的輪廓延伸擴展至使用者整體的輪廓。相 地,由於椅子區域並未有任何對應的『種子』存在,所以 椅子的輪廓便不會出現在前景邊緣圖(圖2D)中。 以 Ο Ο 如此一來’本發明便成功地自使用者移動之手部 廓,進而擴展出使用者的整體輪廓,並可自動排除靜止 2部份以得到前景邊緣圖。在本實㈣中,前景邊緣圖 二1 體,像資訊便可對應使用者之動作,而不會有背景物 (椅子)的雜訊干擾。因此,利用多個相之前景邊緣 L便可計算出如使用者座標、方向或速度之物體影像資 藉由這些物體影像資訊,便可進一步分析 ::意’進而達成人機雙向互動之效果。請再=用丨者 則資之後,本實施例更可如步驟817戶斤述之以動作規 用判斷前景邊緣圖之物體影像資訊,藉此以分析使 昭:動:之語意。具體而言’動作規則資料庫具有語意對 出使用ί此便可對照前景邊緣圖中之物體輪廓之變化而得 使用者動作之語意。1 pixel (black dot) is found in the vicinity of the same contour M. In the picture, it is represented by the arrow, and then continuously extends outward. After all the pixels of the edge map corresponding to the seed are extended and expanded (the figure is only extended by two pixels), the edge of the scene can be expanded from the edge map (Fig. 2D). In this embodiment, the seed of the momentum edge map; = should be in the user's hand, so that the user's hand contour can be extended outward to form the user's overall contour. It is worth noting that the difference between the edge map and the foreground edge map is that the mineral edge map has no background outline of the chair, leaving only the foreground =. This is because the "seed" of the momentum edge map is extended from the contour of the hand to the contour of the user as a user's hand 12 201006527. In contrast, since there is no corresponding "seed" in the chair area, the outline of the chair does not appear in the foreground edge map (Fig. 2D). In this way, the present invention succeeds in moving the hand profile from the user, thereby expanding the overall contour of the user, and automatically removing the stationary portion 2 to obtain a foreground edge map. In this (4), the foreground edge map is two-dimensional, and the information can correspond to the user's action without the noise interference of the background object (chair). Therefore, by using the plurality of phase front edge L to calculate the object image of the user's coordinates, direction or speed, the image information of these objects can be used to further analyze the effect of the two-way interaction. Please use the 则 则 则 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后 之后Specifically, the 'action rule database' has semantic meanings to use the user's actions in response to changes in the outline of the object in the foreground edge map.

Table 1. Partial semantic lookup table

Current state | Motion semantic | Object contour movement
------------- | --------------- | --------------------------------------------------
              | Move right      | moves right by more than 80 pixels
Move right    | No action       | movement under 10 pixels for 5 consecutive images
              | Move right      | moves right by more than 10 pixels
              | Move left       | moves left by more than 80 pixels
Move forward  | No action       | movement under 10 pixels for 5 consecutive images
              | Move forward    | moves down by more than 10 pixels
              | Move backward   | moves up by more than 25 pixels
Move backward | No action       | movement under 10 pixels for 5 consecutive images
              | Move backward   | moves up by more than 10 pixels
              | Move forward    | moves down by more than 25 pixels

Table 1 lists part of the semantic lookup table. Referring to Table 1, once the contour of the object in the image has been analyzed (as shown in FIG. 2D), the semantics of the user's motion can be determined through Table 1. Of course, the semantic lookup table of this embodiment only exemplifies the four motions of moving forward, moving backward, moving left, and moving right; those skilled in the art can easily extend the contents of the semantic lookup table and enumerate more correspondences between motion semantics and image contours.

In general, after the motion semantics of the object image information of the foreground edge map have been analyzed, this embodiment can transmit the object image information of the foreground edge map to a display unit for output. In this way, a user in front of the display unit can immediately see his or her own motion displayed by the display unit, thereby achieving the effect of human-machine interaction or virtual reality. The display unit is, for example, a plasma television, a liquid crystal television, or a projection television, but the invention does not limit the type of display unit.
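The lookup of step S17 could be realized as a small state machine such as the sketch below. The row grouping of Table 1 above is itself reconstructed from flattened text, so the hysteresis structure here (enter a state on the large displacement, stay in it on the small one) is an assumption; only the pixel counts come from the table.

```python
def classify_motion(state: str, dx: float, dy: float,
                    small_for_5_frames: bool) -> str:
    """Step S17 (illustrative): map the foreground contour's frame-to-frame
    displacement (dx right-positive, dy down-positive, in pixels) to a
    motion semantic with the hysteresis suggested by Table 1."""
    if small_for_5_frames:                     # under 10 px for 5 straight images
        return "no action"
    if dx > 80 or (state == "right" and dx > 10):
        return "right"
    if dx < -80:
        return "left"
    if dy > 25 or (state == "forward" and dy > 10):
        return "forward"                       # contour grows downward: approaching
    if dy < -25 or (state == "backward" and dy < -10):
        return "backward"
    return state                               # otherwise keep the current semantic
```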
Based on the concept of the above method for analyzing an object contour, the invention further discloses an apparatus for analyzing an object contour, which mainly applies the method described above. To describe the apparatus more clearly, it is explained below with reference to the drawings. FIG. 3 is a block diagram of an apparatus for analyzing an object contour according to an embodiment of the present invention. Referring to FIG. 3, the object contour analysis apparatus 100 of this embodiment includes an image capture unit 110 and a computer module 120. The image capture unit 110 captures sequence images 112, which contain image information of, for example, a user 50 who continuously changes position while performing a particular activity (for example, simulated tennis). The computer module 120 includes an input/output unit 122, a sequence image memory unit 124, and an object contour analysis unit 126, where the input/output unit 122 is coupled to the image capture unit 110, the sequence image memory unit 124 is coupled to the input/output unit 122, and the object contour analysis unit 126 is coupled to the sequence image memory unit 124.

Following the above, the input/output unit 122 receives the sequence images 112 from the image capture unit 110, the sequence image memory unit 124 stores the sequence images 112, and the object contour analysis unit 126 analyzes the sequence images 112 to obtain the foreground edge map. In detail, the object contour analysis unit 126 performs steps S11-S16 of FIG. 1 described above on the sequence images 112: it calculates the momentum change of the sequence images to obtain a momentum map, operates on the current image to obtain an edge map (FIG. 2A), operates on the momentum map to obtain a momentum edge map (FIG. 2B), and superposes the edge map and the momentum edge map (FIG. 2C) to extend automatically and obtain the foreground edge map (FIG. 2D). These steps are described in detail above and are not repeated here.

Furthermore, to realize the human-machine interaction interface, the computer module 120 may further include a semantic translation unit 128, and the object contour analysis apparatus 100 may further include a display unit 130. The semantic translation unit 128 is coupled to the object contour analysis unit 126 and uses a built-in action rule database (not shown) to judge the motion semantics of the object image information. In addition, the display unit 130 is coupled to the semantic translation unit 128 of the computer module 120 to display the object image information of the foreground edge map and to interact with the user 50.

In summary, the method and apparatus for analyzing an object contour of the present invention have at least the following advantages. An edge map and a momentum edge map are derived, where the edge map includes all contours, background included, while the momentum edge map includes only the contours of the moving parts of the object. By superposing the edge map and the momentum edge map, taking the contours of the edge map that correspond to the momentum edge map as the starting reference and extending outward from the contours of the moving parts, the purely static background contours are automatically excluded and the overall contour of the object is obtained precisely. Moreover, by analyzing the object contour semantically, a bidirectional human-machine communication interface can be achieved.
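Tying the sketches together, one pass of the object contour analysis unit 126 might be orchestrated as follows; this composition reuses the hypothetical helpers sketched above and is an assumption for illustration, not the patent's own code.

```python
def analyze_frame(frames):
    """One pass of steps S12-S16 over the stored sequence images,
    where frames[-1] is the current image."""
    m = momentum_map(frames[-1], [frames[-2], frames[-3]])   # step S12
    if not indicates_movement(m):                            # step S13
        return None                                          # keep capturing (S11)
    edges = edge_map(frames[-1])                             # step S14
    motion_edges = motion_edge_map(m)                        # step S15
    return foreground_edge_map(edges, motion_edges)          # step S16
```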
Although the invention has been disclosed above by way of preferred embodiments, these are not intended to limit the invention. Anyone skilled in the art may make some changes and refinements without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.

[Brief Description of the Drawings]

FIG. 1 is a flowchart of a method for analyzing an object contour according to an embodiment of the present invention.

FIGS. 2A-2D are images obtained by the method for analyzing an object contour according to the present invention.

FIG. 3 is a block diagram of an apparatus for analyzing an object contour according to an embodiment of the present invention.

[Description of Main Element Symbols]

50: user

100: object contour analysis apparatus
110: image capture unit
112: sequence images
120: computer module
122: input/output unit
124: sequence image memory unit
126: object contour analysis unit
128: semantic translation unit
130: display unit
S11-S17: steps

Claims (1)

X. Scope of Patent Application:

1. A method for analyzing an object contour, comprising: capturing a plurality of sequence images, the sequence images comprising a current image and at least one previous image; calculating a momentum change of the sequence images to obtain a momentum map; operating on the current image to obtain an edge map; operating on the momentum map to obtain a momentum edge map; and superposing the edge map and the momentum edge map, and extending the contour of the edge map corresponding to the momentum edge map to obtain a foreground edge map.

2. The method for analyzing an object contour as recited in claim 1, wherein calculating the momentum change of the sequence images subtracts the pixel values of the corresponding pixels of the previous image from the pixel values of the pixels of the current image to obtain the momentum map.

3. The method for analyzing an object contour as recited in claim 1, wherein the number of previous images is two, and calculating the momentum change of the sequence images comprises: subtracting the pixel values of the corresponding pixels of each of the previous images from the pixel values of the pixels of the current image, respectively, to obtain a plurality of difference maps; and adding the difference maps to obtain the momentum map.

4. The method for analyzing an object contour as recited in claim 1, wherein the step of operating on the current image to obtain the edge map uses a spatial edge filtering method or a spectral edge filtering method.

5. The method for analyzing an object contour as recited in claim 1, wherein the step of operating on the momentum map to obtain the momentum edge map uses an image binarization method.

6. The method for analyzing an object contour as recited in claim 5, wherein the step of operating on the momentum map to obtain the momentum edge map comprises: setting a threshold; setting the pixel values of the pixels of the momentum map greater than the threshold to 1; and setting the pixel values of the pixels of the momentum map less than the threshold to 0.

7. The method for analyzing an object contour as recited in claim 1, wherein the foreground edge map has object image information.

8. The method for analyzing an object contour as recited in claim 7, wherein the object image information comprises coordinates, a direction, or a speed.

9. The method for analyzing an object contour as recited in claim 7, further comprising, after obtaining the foreground edge map, judging the object image information of the foreground edge map with an action rule database.

10. The method for analyzing an object contour as recited in claim 9, wherein the action rule database has a semantic lookup table.

11. The method for analyzing an object contour as recited in claim 10, wherein the semantic lookup table comprises moving forward, moving backward, moving right, and moving left.

12. The method for analyzing an object contour as recited in claim 1, wherein the step of capturing the sequence images is performed with an image capture unit.

13. The method for analyzing an object contour as recited in claim 12, wherein the image capture unit is a digital camera or a webcam.

14. The method for analyzing an object contour as recited in claim 9, further comprising, after judging the object image information of the foreground edge map, interacting with a user.

15. The method for analyzing an object contour as recited in claim 9, further comprising, after judging the object image information of the foreground edge map, transmitting the object image information to a display unit.

16. The method for analyzing an object contour as recited in claim 15, wherein the display unit is a plasma television, a liquid crystal television, or a projection television.

17. An apparatus for analyzing an object contour, comprising: an image capture unit adapted to capture a plurality of sequence images; and a computer module comprising: an input/output unit coupled to the image capture unit to receive the sequence images; a sequence image memory unit coupled to the input/output unit to store the sequence images; and an object contour analysis unit coupled to the sequence image memory unit to analyze the sequence images and obtain a foreground edge map.

18. The apparatus for analyzing an object contour as recited in claim 17, wherein the sequence images comprise a current image and at least one previous image, and the object contour analysis unit calculates a momentum change of the sequence images to obtain a momentum map, operates on the current image to obtain an edge map, operates on the momentum map to obtain a momentum edge map, and superposes the edge map and the momentum edge map to extend from the momentum edge map and obtain the foreground edge map.

19. The apparatus for analyzing an object contour as recited in claim 17, wherein the computer module further comprises a semantic translation unit coupled to the object contour analysis unit, the semantic translation unit being adapted to judge object image information of the foreground edge map with an action rule database.

20. The apparatus for analyzing an object contour as recited in claim 19, further comprising a display unit coupled to the semantic translation unit of the computer module to display the object image information of the foreground edge map.
TW097129697A 2008-08-05 2008-08-05 Measuring object contour method and measuring object contour apparatus TWI361093B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW097129697A TWI361093B (en) 2008-08-05 2008-08-05 Measuring object contour method and measuring object contour apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW097129697A TWI361093B (en) 2008-08-05 2008-08-05 Measuring object contour method and measuring object contour apparatus

Publications (2)

Publication Number Publication Date
TW201006527A 2010-02-16
TWI361093B TWI361093B (en) 2012-04-01

Family

ID=44826720

Family Applications (1)

Application Number Title Priority Date Filing Date
TW097129697A TWI361093B (en) 2008-08-05 2008-08-05 Measuring object contour method and measuring object contour apparatus

Country Status (1)

Country Link
TW (1) TWI361093B (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI413023B (en) * 2010-03-30 2013-10-21 Novatek Microelectronics Corp Method and apparatus for motion detection
US9002117B2 (en) 2010-07-28 2015-04-07 International Business Machines Corporation Semantic parsing of objects in video
US9134399B2 (en) 2010-07-28 2015-09-15 International Business Machines Corporation Attribute-based person tracking across multiple cameras
TWI505200B (en) * 2010-07-28 2015-10-21 Ibm Method, system, computer program product and program for determining the location and attributes of an object in a video
US9245186B2 (en) 2010-07-28 2016-01-26 International Business Machines Corporation Semantic parsing of objects in video
US9330312B2 (en) 2010-07-28 2016-05-03 International Business Machines Corporation Multispectral detection of personal attributes for video surveillance
US9679201B2 (en) 2010-07-28 2017-06-13 International Business Machines Corporation Semantic parsing of objects in video
US10424342B2 (en) 2010-07-28 2019-09-24 International Business Machines Corporation Facilitating people search in video surveillance

Also Published As

Publication number Publication date
TWI361093B (en) 2012-04-01

Similar Documents

Publication Publication Date Title
EP3101624B1 (en) Image processing method and image processing device
CN106355153B (en) A kind of virtual objects display methods, device and system based on augmented reality
JP6129309B2 (en) Gesture based user interface
US9007422B1 (en) Method and system for mutual interaction using space based augmentation
US10225473B2 (en) Threshold determination in a RANSAC algorithm
CN106797458B (en) Virtual alteration of real objects
CN104035557B (en) Kinect action identification method based on joint activeness
JP7758104B2 Information processing device, information processing method, and information processing program
CN106031154A (en) Image processing method and electronic device therefor
CN107729367A (en) A kind of moving line recommends method, apparatus and storage medium
KR102725398B1 (en) Image processing method and apparatus, device and medium
US10229508B2 (en) Dynamic particle filter parameterization
CN107146197A (en) A kind of reduced graph generating method and device
JP2019067388A (en) User interface for manipulating light-field images
TW201006527A (en) Measuring object contour method and measuring object contour apparatus
CN105809664A (en) Method and device for generating three-dimensional image
CN111161398A (en) Image generation method, device, equipment and storage medium
Won et al. Active 3D shape acquisition using smartphones
Khan et al. A review of benchmark datasets and training loss functions in neural depth estimation
Seychell et al. Cots: A multipurpose rgb-d dataset for saliency and image manipulation applications
CN116112716B (en) Virtual person live broadcast method, device and system based on single instruction stream and multiple data streams
Lyubanenko et al. Multi-camera finger tracking and 3d trajectory reconstruction for hci studies
JP2015184986A (en) Mixed reality sharing device
Kikuchi et al. Automatic diminished reality-based virtual demolition method using semantic segmentation and generative adversarial network for landscape assessment
JP2020101922A (en) Image processing apparatus, image processing method and program