
TW201225001A - Construction method for three-dimensional image - Google Patents

Construction method for three-dimensional image

Info

Publication number
TW201225001A
TW201225001A
Authority
TW
Taiwan
Prior art keywords
image
dimensional image
constructing
images
dimensional
Prior art date
Application number
TW99137423A
Other languages
Chinese (zh)
Other versions
TWI428855B (en)
Inventor
Xian-Ming Wu
Ke-Zhi Huang
Meng Ouyang
Wei-De Zheng
Original Assignee
Chung Shan Inst Of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chung Shan Inst Of Science filed Critical Chung Shan Inst Of Science
Priority to TW99137423A priority Critical patent/TWI428855B/en
Publication of TW201225001A publication Critical patent/TW201225001A/en
Application granted granted Critical
Publication of TWI428855B publication Critical patent/TWI428855B/en

Landscapes

  • Image Processing (AREA)
  • Endoscopes (AREA)

Abstract

A construction method for a three-dimensional image is disclosed. The method includes the following steps: acquiring a plurality of annular images; converting the annular images into a plurality of corresponding strip images; performing a comparison and joining procedure on the strip images to obtain at least one two-dimensional image; and performing a spatial matrix transformation according to the two-dimensional image so as to output a three-dimensional image. When this construction method is used to process the image information acquired by a ring-field capsule, it not only converts two-dimensional images into three-dimensional images to increase the accuracy of diagnosis, but also enables medical personnel to judge symptoms more conveniently and quickly.

Description

VI. DESCRIPTION OF THE INVENTION

[Technical Field of the Invention]

The present invention relates to a construction method for a three-dimensional image, and in particular to a construction method that captures medical images of the human body through a ring-field capsule and converts them into a three-dimensional image for output.

[Prior Art]

With the continuing advance of modern medical technology, the endoscope has become one of the important medical devices for examining human organs. A conventional endoscope is a long tube with a camera lens at its end that is inserted into a human organ; after the lens captures an image signal, the signal is sent back to a host computer for analysis. However, the human digestive tract is long and deep, and threading a conventional endoscope into the intestine is not only difficult to control but also subjects the patient to considerable discomfort.

In view of the above, capsule endoscope devices have been developed. They replace conventional wired transmission with wireless telemetry and package components such as a light source, an image sensor, a control chip, a wireless signal transmitter and a battery in a capsule-shaped housing suitable for examination of the digestive tract.
For example, after the patient swallows the capsule, the capsule endoscope device is carried forward by the peristalsis of the digestive tract and captures images of the wall of the digestive tract; the image signals are transmitted through the wireless signal transmitter to a receiving device outside the body (for example, a host computer), and the capsule is finally excreted through the lower digestive tract. After the image data captured by the capsule endoscope device have been transmitted to the external device, such as a host computer, conventional image processing methods align and join the images captured by the endoscope at different positions, so that medical personnel can analyze the condition of the intestinal tract. Such conventional methods, however, can only perform two-dimensional (2D) image analysis, in which the individual images are aligned and connected on a plane. It must be noted that real human organs occupy three-dimensional space; a processing method that merely joins images on a two-dimensional plane therefore cannot effectively restore and present the image of an original symptom point. As a result, lesions of the organ are not easily recognized, and the diagnostic accuracy of medical personnel is reduced.

Secondly, the image sensor is usually disposed at an end of the capsule endoscope device; in other words, the capsule endoscope device is a directional device. Because the capsule endoscope device moves forward or backward in the digestive tract along with peristalsis, the connection direction must also be taken into account when the output images are aligned and joined.

As described above, conventional image processing methods have problems that deserve to be solved.

[Summary of the Invention]

In view of the above, the present invention proposes a construction method for a three-dimensional image, which uses medical images of the human body captured by a ring-field capsule, converts the annular raw images into strip images and, through the construction of a spatial model, outputs a three-dimensional stereoscopic image. With the construction method proposed by the present invention, medical personnel can further adjust the position, angle, depth and the like of the three-dimensional stereoscopic image so as to improve the recognizability of the output image.

The present invention provides a construction method for a three-dimensional image whose steps include: acquiring a plurality of annular images; converting the annular images into a plurality of corresponding strip images; performing an image comparison and joining procedure on the strip images to obtain at least one two-dimensional image; and performing a spatial matrix transformation according to the two-dimensional image to output a three-dimensional image.

The construction method proposed by the present invention may further include: establishing a plurality of parameter functions according to a graphical user interface (GUI) to adjust the output effect of the three-dimensional image.

Accordingly, the construction method for a three-dimensional image proposed by the present invention can be applied not only at the level of medical experiments but also in routine medical examinations. Wherever medical images of the human body are captured with a ring-field capsule, the construction method proposed by the present invention not only makes it more convenient to observe the inner wall of an organ through the output three-dimensional stereoscopic image, but also offers the advantage of facilitating the judgment of symptoms.
The foregoing summary and the following detailed description of the embodiments are intended to demonstrate and explain the spirit and principles of the present invention and to provide further explanation of the scope of the appended claims. The features, implementation and effects of the present invention are described in detail below in preferred embodiments with reference to the drawings.

[Embodiments]

The detailed features and advantages of the present invention are described in the following embodiments in sufficient detail to enable any person skilled in the art to understand and implement the technical content of the present invention; based on the contents disclosed in this specification, the claims and the drawings, any person skilled in the art can readily understand the related objects and advantages of the present invention.

The construction method for a three-dimensional image according to an embodiment of the present invention may be, but is not limited to, image processing applied to images captured by a ring-field capsule. FIG. 1 is a schematic structural diagram of a ring-field capsule according to an embodiment of the present invention. The ring-field capsule 100 has a housing 102, and a camera window 104 is disposed on each of two opposite sides of the housing 102. The camera window 104 may be a transparent, substantially annular imaging region, so that light emitted by a light-emitting element 122 can pass through the camera window 104 and be projected onto an object under test 130 outside the housing 102. The light-emitting element 122 may be, but is not limited to, a light emitting diode (LED), and the object under test 130 may be the inner wall of an organ (for example, the digestive tract, the oral cavity, the nasal cavity, the anus or the vagina).

In addition to the light-emitting element 122, the housing 102 of the ring-field capsule 100 further contains a cone mirror 106, a lens module 108, an image sensor 120, a power supply module 126 and a wireless communication unit 124. When the ring-field capsule 100 is swallowed into the human body to capture images of the inner wall of an organ, the light-emitting element 122 first projects light toward the object under test 130 through the camera window 104. When the light reaches the wall of the object under test 130 and is reflected back toward the housing 102, it passes in sequence through the camera window 104 and the cone mirror 106 and is projected onto the lens module 108. The lens module 108 then focuses the light and transmits it to the image sensor 120. The image sensor 120 may be, but is not limited to, a complementary metal oxide semiconductor (CMOS) sensor or a charge coupled device (CCD), which converts the sensed light into image information through its sensing pixels. In this way, the image sensor 120 can sense image information of the object under test 130 captured at different times and different positions.

The wireless communication unit 124 is electrically connected to the image sensor 120 and transmits the image information captured by the image sensor 120 out of the ring-field capsule 100 in a wireless manner, so that an external computer (not shown) can perform the subsequent image processing procedure on the image information (that is, the construction method for a three-dimensional image of the embodiments of the present invention).
The wireless communication unit 124 may be a radio frequency (RF) transceiver or an RF antenna, and the power supply module 126 (for example, a battery) supplies the electric power required by the above components so that the ring-field capsule 100 can operate inside the human body for a long time. It should be noted that the electrical connections among the ring-field capsule and its internal components and the manner in which the image information is transmitted are illustrative only and are not intended to limit the present invention; any image processing method that converts annular images into a three-dimensional image falls within the scope of the present invention, and the ring-field capsule 100 described above is used merely as an example.

FIG. 2A is a flowchart of the construction method for a three-dimensional image according to an embodiment of the present invention. As shown in steps S202 to S208, first, a plurality of annular images are acquired from the ring-field capsule (step S202); the acquired annular images are then converted into a plurality of corresponding strip images (step S204); an image comparison and joining procedure is performed on the strip images to obtain at least one two-dimensional image (step S206); and finally a spatial matrix transformation is performed according to the two-dimensional image to output a three-dimensional image (step S208).

Regarding step S204, in which the annular images are converted into the corresponding strip images, reference is made to FIG. 2B. Let the original coordinates of the annular image f(x, y) be (x, y) and the center coordinates of the annular image be (x0, y0). Then, according to the formulas

x = x0 + z · cos θ
y = y0 + z · sin θ

the coordinates (θ, z) of the corresponding strip image can be computed.

Regarding step S206, reference is made to FIG. 3A, which is a flowchart of performing the image comparison and joining procedure on the strip images to obtain at least one two-dimensional image according to an embodiment of the present invention. As shown in steps S302 to S304, after the coordinates of the strip images have been computed, an image comparison procedure is first performed on the strip images (step S302), and an image joining procedure is then performed according to the result of the image comparison procedure to obtain the final joined two-dimensional image (step S304).

Reference is also made to FIG. 3B, which is a schematic diagram of performing the image comparison procedure on the strip images according to an embodiment of the present invention. For example, the strip images converted from the annular images include strip images 30, 30a and 30b. The strip image 30 may be used as a reference image, and a scanning range 32', equal in size to a reference range 32 of the strip image 30, is scanned over the strip image 30a. During the comparison, a relationship value between the reference range 32 of the strip image 30 and the scanning range 32' of the strip image 30a is computed. The relationship values among the strip images 30, 30a and 30b may be obtained by a mean absolute error (MAE) method, a mean squared error (MSE) method, or a Pearson correlation coefficient method. Comparison with the mean absolute error (MAE) method yields a result of low accuracy, comparison with the mean squared error (MSE) method yields a result of medium accuracy, and comparison with the Pearson correlation coefficient method yields a result of high accuracy, so the user may choose one of the three algorithms according to the resolution requirements of the actual output image.
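As a non-limiting editorial illustration of the conversion of step S204 and of the three comparison measures of step S206, a minimal sketch might look as follows. The function names, the single-channel (grayscale) assumption, the nearest-neighbour sampling and the angular resolution are assumptions introduced for this example and are not details taken from the patent.

```python
import numpy as np

def annular_to_strip(annular, center, r_min, r_max, n_theta=720):
    """Unwarp an annular image f(x, y) into a strip image g(z, theta)
    using x = x0 + z*cos(theta), y = y0 + z*sin(theta) (step S204)."""
    x0, y0 = center
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radii = np.arange(r_min, r_max)              # z runs over the usable ring
    strip = np.zeros((len(radii), n_theta), dtype=annular.dtype)
    for j, theta in enumerate(thetas):
        xs = np.round(x0 + radii * np.cos(theta)).astype(int)
        ys = np.round(y0 + radii * np.sin(theta)).astype(int)
        strip[:, j] = annular[ys, xs]            # nearest-neighbour sampling
    return strip

def mae(a, b):
    """Mean absolute error between two equal-sized windows (low accuracy)."""
    return float(np.mean(np.abs(a.astype(float) - b.astype(float))))

def mse(a, b):
    """Mean squared error between two equal-sized windows (medium accuracy)."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def pearson(a, b):
    """Pearson correlation coefficient between two windows (high accuracy)."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

In use, the reference range of one strip image and a candidate scanning range of the next strip image would be passed to whichever of the three measures matches the accuracy requirement discussed above.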
After the relationship values among the strip images 30, 30a and 30b have been computed, the region of highest correlation (that is, the scanning range 32' of the strip image 30a) can be moved to the reference range 32 of the strip image 30 for the subsequent image joining procedure. In other words, the image joining procedure may use methods such as image-averaging fusion or image-quality fusion to join the strip image 30a to the strip image 30 and the strip image 30b to the strip image 30a, so as to obtain a joined two-dimensional image. FIG. 3C is a schematic diagram of the joined two-dimensional image obtained from FIG. 3B: the RGB sensing-pixel values of the scanning range 32' are averaged with those of the reference range 32 and substituted for the original values of the reference range 32, while the entire column at the joint (that is, at the scanning range 32') is taken from the corresponding strip image 30a or 30b. The image joining procedure is thereby completed and the joined two-dimensional image 40 is obtained.

Because the ring-field capsule moves forward or backward in the digestive tract along with intestinal peristalsis, the connection direction of the output images must be considered when they are aligned and joined. FIG. 4 is therefore a flowchart of performing the image joining procedure according to the result of the image comparison procedure to obtain the two-dimensional image, according to an embodiment of the present invention. As shown in steps S402 to S406, it is first checked whether the number of pixel columns of the joined two-dimensional image is not smaller than the number of pixel columns of a strip image (step S402). If it is not smaller, the joining direction is correct and the two-dimensional image is output (step S404). If the number of pixel columns of the joined two-dimensional image is smaller than that of a strip image, the joining direction is wrong, and the images must be re-aligned in the other direction and the image joining procedure performed again (step S406) to obtain a correctly joined two-dimensional image.

Regarding step S208, in which the spatial matrix transformation is performed according to the two-dimensional image to output the three-dimensional image, reference is made to FIG. 5, which is a schematic diagram of the conversion of the cylindrical model according to an embodiment of the present invention. The cylindrical model 50 has a radius R, and the joined two-dimensional image 40 has a lateral length corresponding to the circumference of the cylinder. Assuming that the cylindrical model 50 has a total length Z (corresponding to the length of the organ inner wall to be captured, that is, the total length of the joined two-dimensional image 40), the coordinate system on the cylindrical model 50 is (R·cos θ, R·sin θ, h), where θ ranges from 0 to 2π and h ranges from 0 to Z. In other words, when any point of the joined two-dimensional image 40 is projected onto the cylindrical model 50, its lateral and longitudinal coordinates correspond to Rθ and h, respectively. Accordingly, the step of performing the spatial matrix transformation according to the two-dimensional image to output the three-dimensional image is to multiply the two-dimensional image (that is, the joined two-dimensional image 40) by the transformation matrix of the cylindrical model 50, which maps a point (θ, h) of the image to the point (R·cos θ, R·sin θ, h) on the cylinder surface, so as to obtain the final output three-dimensional stereoscopic image.

When a direction vector is constructed from the center of the cylindrical model 50 and the direction vector points to a coordinate point, the three-dimensional image produced is a three-dimensional stereoscopic image of that coordinate point viewed along the direction vector.
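To make the joining of step S206 and the spatial matrix transformation of step S208 concrete, the following sketch shows one possible reading of the procedure: the best offset between two strip images is chosen with one of the similarity measures, the overlap is blended by averaging, and every pixel (θ, h) of the joined image 40 is mapped to the cylinder-surface point (R·cos θ, R·sin θ, h). The helper names, the stitching-axis orientation and the grayscale assumption are editorial choices, not details of the original disclosure.

```python
import numpy as np

def join_strips(reference, candidate, window, metric):
    """Join two strip images (steps S302-S304, FIG. 3C): find the offset of
    `candidate` whose window best matches the tail of `reference` under the
    supplied similarity `metric`, average the overlap, and append the rest."""
    ref_tail = reference[-window:].astype(float)
    scores = [metric(ref_tail, candidate[off:off + window].astype(float))
              for off in range(candidate.shape[0] - window + 1)]
    off = int(np.argmin(scores))        # argmin for MAE/MSE; use argmax for Pearson
    overlap = (ref_tail + candidate[off:off + window].astype(float)) / 2.0
    return np.vstack([reference[:-window].astype(float),       # rows only in the reference
                      overlap,                                  # averaged seam region
                      candidate[off + window:].astype(float)])  # newly added rows

def direction_is_valid(joined, strip):
    """FIG. 4 style check: the joined image must not be shorter along the
    stitching axis than a single strip, otherwise the direction was wrong."""
    return joined.shape[0] >= strip.shape[0]

def image_to_cylinder(image, radius):
    """Step S208: map each pixel (h, theta) of the joined 2-D image 40 onto
    the cylindrical model 50 as (R*cos(theta), R*sin(theta), h)."""
    n_h, n_theta = image.shape
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    hh, tt = np.meshgrid(np.arange(n_h, dtype=float), thetas, indexing="ij")
    points = np.stack([radius * np.cos(tt),
                       radius * np.sin(tt),
                       hh], axis=-1)               # shape (n_h, n_theta, 3)
    return points, image                           # cylinder geometry plus texture
```

Rendering the result for a particular view then amounts to projecting these cylinder points along a direction vector anchored at the cylinder centre, which is exactly what is varied between FIG. 6A and FIGS. 6B to 6E.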
By changing the direction vector, the viewing angle of the output three-dimensional stereoscopic image can therefore be adjusted. For example, FIG. 6A shows the stereoscopic image obtained when the direction vector is parallel to the length direction of the cylindrical model: the view looks down the axis of the tube and appears progressively deeper toward the center, giving a stereoscopic image with different depths of field. FIG. 6B to FIG. 6E show the individual stereoscopic images obtained when the direction vector is not parallel to the length direction of the cylindrical model, giving three-dimensional stereoscopic images of differently inclined fields of view.

Next, FIG. 7 is a flowchart of a construction method for a three-dimensional image according to another embodiment of the present invention. In addition to steps S202 to S208, the construction method may further include step S210: establishing a plurality of parameter functions according to a graphical user interface (GUI) to adjust the output effect of the three-dimensional image. That is, according to an embodiment of the present invention, the graphical user interface (GUI) allows the user to set parameters such as image zoom-in, zoom-out and displacement, so as to adjust the output effect of the three-dimensional image. In addition, when the image magnification is fixed, the examiner can easily infer the distance travelled by the ring-field capsule inside the organ by interpreting the object-image relationship and the magnification of the output image, thereby effectively achieving image localization.

Accordingly, the construction method for a three-dimensional image according to the embodiments of the present invention can not only convert the captured two-dimensional planar images into a three-dimensional stereoscopic image, but also adjust the position, angle, depth and other parameters of the three-dimensional stereoscopic image to change its output effect. The problems of the prior art are thereby solved, and medical personnel are provided with the advantage of judging symptoms conveniently, correctly and quickly.

Although the present invention has been disclosed above with the foregoing preferred embodiments, they are not intended to limit the present invention. Any person skilled in the art may make slight changes and modifications without departing from the spirit and scope of the present invention, and the scope of patent protection of the present invention shall therefore be defined by the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic structural diagram of a ring-field capsule according to an embodiment of the present invention;
FIG. 2A is a flowchart of a construction method for a three-dimensional image according to an embodiment of the present invention;
FIG. 2B is a schematic diagram of the coordinate conversion of step S204 of FIG. 2A;
FIG. 3A is a flowchart of performing the image comparison and joining procedure on the strip images to obtain at least one two-dimensional image according to an embodiment of the present invention;
FIG. 3B is a schematic diagram of performing the image comparison procedure on the strip images according to an embodiment of the present invention;
FIG. 3C is a schematic diagram of the joined two-dimensional image according to FIG. 3B;
FIG. 4 is a flowchart of performing the image joining procedure according to the result of the image comparison procedure to obtain the two-dimensional image according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the conversion of the cylindrical model according to an embodiment of the present invention;
FIG. 6A is a schematic diagram of the stereoscopic image obtained when the direction vector is parallel to the length direction of the cylindrical model according to an embodiment of the present invention;
FIG. 6B to FIG. 6E are schematic diagrams of the individual stereoscopic images obtained when the direction vector is not parallel to the length direction of the cylindrical model according to an embodiment of the present invention; and
FIG. 7 is a flowchart of a construction method for a three-dimensional image according to another embodiment of the present invention.

[Description of Reference Numerals]

Strip images 30, 30a, 30b
Reference range 32
Scanning range 32'
Joined two-dimensional image 40
Cylindrical model 50
Ring-field capsule 100
Housing 102
Camera window 104
Cone mirror 106
Lens module 108
Image sensor 120
Light-emitting element 122
Wireless communication unit 124
Power supply module 126
Object under test 130
Radius R
Total length Z
Lateral length 2πR

Claims (1)

VII. Patent application scope:

1. A construction method for a three-dimensional image, comprising: acquiring a plurality of annular images; converting the annular images into a plurality of corresponding strip images; performing an image comparison and joining procedure on the strip images to obtain at least one two-dimensional image; and performing a spatial matrix transformation according to the two-dimensional image to output a three-dimensional image.

2. The construction method for a three-dimensional image according to claim 1, wherein the step of converting the annular images into the corresponding strip images uses the relations x = x0 + z·cos θ and y = y0 + z·sin θ to obtain the strip images from the annular images, where (x, y) are the original coordinates of the annular images, (x0, y0) are the center coordinates of the annular images, and (θ, z) are the coordinates of the strip images.

3. The construction method for a three-dimensional image according to claim 1, wherein the step of performing the image comparison and joining procedure on the strip images to obtain the at least one two-dimensional image comprises: performing an image comparison procedure on the strip images; and performing an image joining procedure according to a result of the image comparison procedure to obtain the two-dimensional image.

4. The construction method for a three-dimensional image according to claim 3, wherein the image comparison procedure performed on the strip images is a mean absolute error (MAE) method, a mean squared error (MSE) method, or a Pearson correlation coefficient method.

5. The construction method for a three-dimensional image according to claim 4, wherein the mean absolute error (MAE) method, the mean squared error (MSE) method and the Pearson correlation coefficient method respectively yield a comparison result of low accuracy, medium accuracy and high accuracy.

6. The construction method for a three-dimensional image according to claim 3, wherein the step of performing the image joining procedure according to the result of the image comparison procedure to obtain the two-dimensional image comprises: comparing a number of pixel columns of the two-dimensional image with a number of pixel columns of the strip images; and outputting the two-dimensional image, or re-aligning in another direction and performing the image joining procedure again.

7. The construction method for a three-dimensional image according to claim 1, wherein the step of performing the spatial matrix transformation according to the two-dimensional image to output the three-dimensional image is multiplying the two-dimensional image by a transformation matrix of a cylindrical model to obtain the three-dimensional image.
8. The construction method for a three-dimensional image according to claim 7, wherein the coordinates of the cylindrical model are (R·cos θ, R·sin θ, h), the total length of the cylindrical model is Z, and the transformation matrix maps an image point (θ, h) to (R·cos θ, R·sin θ, h), where R is the radius of the cylindrical model, θ ranges from 0 to 2π, and h ranges from 0 to Z.

9. The construction method for a three-dimensional image according to claim 7, wherein, when a direction vector is constructed from the center of the cylindrical model and the direction vector points to a coordinate point, the three-dimensional image produced is a three-dimensional image of the coordinate point viewed along the direction vector.

10. The construction method for a three-dimensional image according to claim 1, further comprising: establishing a plurality of parameter functions according to a graphical user interface (GUI) to adjust an output effect of the three-dimensional image.
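Read together with FIG. 5 of the description, the transformation recited in claims 7 to 9 can be restated as the following mapping, on the assumption that the coordinates of the joined two-dimensional image are taken to be the angle θ and the height h:

```latex
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
=
\begin{pmatrix} R\cos\theta \\ R\sin\theta \\ h \end{pmatrix},
\qquad 0 \le \theta \le 2\pi,\quad 0 \le h \le Z,
```

where R is the radius of the cylindrical model and Z its total length; a direction vector constructed from the center of the cylinder toward a coordinate point then fixes the viewing angle of the rendered three-dimensional image, as recited in claim 9.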
TW99137423A 2010-12-02 2010-12-02 The Method of Restoring Three Dimensional Image of Capsule Endoscopy TWI428855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW99137423A TWI428855B (en) 2010-12-02 2010-12-02 The Method of Restoring Three Dimensional Image of Capsule Endoscopy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW99137423A TWI428855B (en) 2010-12-02 2010-12-02 The Method of Restoring Three Dimensional Image of Capsule Endoscopy

Publications (2)

Publication Number Publication Date
TW201225001A true TW201225001A (en) 2012-06-16
TWI428855B TWI428855B (en) 2014-03-01

Family

ID=46726042

Family Applications (1)

Application Number Title Priority Date Filing Date
TW99137423A TWI428855B (en) 2010-12-02 2010-12-02 The Method of Restoring Three Dimensional Image of Capsule Endoscopy

Country Status (1)

Country Link
TW (1) TWI428855B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI474040B (en) * 2012-12-25 2015-02-21 國立交通大學 Double-view capsule endoscope lens system
WO2021061335A1 (en) * 2019-09-23 2021-04-01 Boston Scientific Scimed, Inc. System and method for endoscopic video enhancement, quantitation and surgical guidance
CN114423355A (en) * 2019-09-23 2022-04-29 波士顿科学医学有限公司 Systems and methods for endoscopic video enhancement, quantification, and surgical guidance
US11430097B2 (en) 2019-09-23 2022-08-30 Boston Scientific Scimed, Inc. System and method for endoscopic video enhancement, quantitation and surgical guidance
JP2022546610A (en) * 2019-09-23 2022-11-04 ボストン サイエンティフィック サイムド,インコーポレイテッド Systems and methods for endoscopic video enhancement, quantification and surgical guidance
JP7206438B2 (en) 2019-09-23 2023-01-17 ボストン サイエンティフィック サイムド,インコーポレイテッド Systems and methods for endoscopic video enhancement, quantification and surgical guidance
AU2020352836B2 (en) * 2019-09-23 2023-05-25 Boston Scientific Scimed, Inc. System and method for endoscopic video enhancement, quantitation and surgical guidance
US11954834B2 (en) 2019-09-23 2024-04-09 Boston Scientific Scimed, Inc. System and method for endoscopic video enhancement, quantitation and surgical guidance
EP4345733A3 (en) * 2019-09-23 2024-07-10 Boston Scientific Scimed, Inc. System for endoscopic video enhancement
CN114423355B (en) * 2019-09-23 2024-10-01 波士顿科学医学有限公司 Systems and methods for endoscopic video enhancement, quantification, and surgical guidance
JP2024133074A (en) * 2019-09-23 2024-10-01 ボストン サイエンティフィック サイムド,インコーポレイテッド Systems and methods for endoscopic video enhancement, quantification and surgical guidance - Patents.com
AU2023214320B2 (en) * 2019-09-23 2024-11-07 Boston Scientific Scimed, Inc. System and method for endoscopic video enhancement, quantitation and surgical guidance
US12354240B2 (en) 2019-09-23 2025-07-08 Boston Scientific Scimed, Inc. System and method for endoscopic video enhancement, quantitation and surgical guidance
JP7756202B2 (en) 2019-09-23 2025-10-17 ボストン サイエンティフィック サイムド,インコーポレイテッド Systems and methods for endoscopic video enhancement, quantification and surgical guidance - Patents.com

Also Published As

Publication number Publication date
TWI428855B (en) 2014-03-01

Similar Documents

Publication Publication Date Title
JP7096445B2 (en) Endoscope processor, program, information processing method and information processing device
EP1997074B1 (en) Device, system and method for automatic detection of contractile activity in an image frame
JP5858636B2 (en) Image processing apparatus, processing method thereof, and program
CN102247114B (en) Image processing apparatus and image processing method
KR101921268B1 (en) Capsule endoscopy apparatus for rendering 3d image, operation method of said capsule endoscopy, receiver rendering 3d image interworking with said capsule endoscopy, and capsule endoscopy system
CN109381152B (en) Method and apparatus for area or volume of an object of interest in a gastrointestinal image
US10492668B2 (en) Endoscope system and control method thereof
JP5388657B2 (en) Image processing apparatus, method of operating image processing apparatus, and system
WO2014136579A1 (en) Endoscope system and endoscope system operation method
CN111035351B (en) Method and apparatus for travel distance measurement of capsule camera in gastrointestinal tract
CN112508840B (en) Information processing apparatus, inspection system, information processing method, and storage medium
US10883828B2 (en) Capsule endoscope
CN110288653A (en) A multi-angle ultrasonic image fusion method, system and electronic equipment
KR20110068153A (en) Image registration system and method for performing image registration between different images
CN115018767A (en) Cross-modal endoscopic image conversion and lesion segmentation method based on intrinsic representation learning
CN117064444A (en) A three-dimensional positioning method for ultrasonic probes
TW201225001A (en) Construction method for three-dimensional image
CN116392081A (en) Digestive tract power detection capsule
CN204520609U (en) A kind of portable capsule endoscope image recording system
Fan et al. 3D reconstruction of the WCE images by affine SIFT method
TWI428109B (en) Image Construction Method of Panoramic Capsule Endoscopic Device and Its Panoramic Image
CN108392165A (en) Method and utensil for the introscope with the range measurement scaled for object
KR20180128216A (en) Intelligent capsule endoscope and method for shooting thereof
Rahul et al. Design and Characterisation of Stereo Endoscope for Polyp Size Measurement
Candela Depth-enhanced multi-view triangulation for automated 3D polyp sizing in magnetic-guided robotic colonoscopy