TWI860908B - Method for constructing panoramic view model, vehicle-mounted device, and storage medium
- Publication number: TWI860908B
- Application number: TW112147332A
- Authority: TW (Taiwan)
Description
The present invention belongs to the field of intelligent driving and relates to image processing technology, and specifically to a method for constructing a panoramic surround view model, a vehicle-mounted device, and a storage medium.
A panoramic surround view system installed in a vehicle can acquire images of the vehicle's surroundings through cameras mounted in the front, rear, left, and right directions of the vehicle, and build a three-dimensional panoramic surround view model from these images to simulate the environment around the vehicle, thereby helping the user better judge the distance between the vehicle and surrounding objects. However, objects in the three-dimensional models built by the related art are prone to warping or distortion, making it difficult for the user to accurately identify objects around the vehicle and to judge the distance between the vehicle and those objects from the model, which increases the user's driving risk.
In view of the above, it is necessary to provide a panoramic surround view model construction method, a vehicle-mounted device, and a storage medium that can solve the problem of increased driving risk caused by warping or distortion of objects in the constructed vehicle panoramic surround view model.
An embodiment of the present application provides a method for constructing a panoramic surround view model. The method includes: identifying a target object in an environment image acquired from a camera of a vehicle, and determining a first coordinate of the target object in a first coordinate system corresponding to the environment image; converting the environment image into a bird's-eye view image, and determining a second coordinate of the target object in a second coordinate system corresponding to the bird's-eye view image according to the first coordinate; determining a third coordinate of the target object in a third coordinate system corresponding to the camera according to the second coordinate; determining a fourth coordinate of the target object in a fourth coordinate system corresponding to the vehicle according to the third coordinate, and determining an initial distance between the target object and the vehicle according to the fourth coordinate; and determining a target distance based on a preset correction parameter and the initial distance, and constructing a target panoramic surround view model of the vehicle according to the target distance and the environment image.
In one embodiment, identifying the target object in the environment image captured by the camera of the vehicle and determining the first coordinate of the target object in the first coordinate system corresponding to the environment image includes: identifying the target object in the environment image using a preset feature recognition algorithm, and generating a rectangular bounding box of the target object in the environment image; and determining, in the first coordinate system, the coordinates of target corner points of the rectangular bounding box, and using the coordinates of the target corner points as the first coordinate.
In one embodiment, the preset feature recognition algorithm includes one or more of a feature recognition algorithm based on a machine learning model and a feature recognition algorithm based on a deep learning model.
In one embodiment, the target corner points include the bottom-left corner point and the bottom-right corner point of the rectangular bounding box; the second coordinate includes: the coordinates, in the second coordinate system, of a first projection point of the bottom-left corner point in the bird's-eye view image, and the coordinates, in the second coordinate system, of a second projection point of the bottom-right corner point in the bird's-eye view image.
In one embodiment, determining the third coordinate of the target object in the third coordinate system corresponding to the camera according to the second coordinate includes: determining the midpoint coordinate between the coordinates of the first projection point in the second coordinate system and the coordinates of the second projection point in the second coordinate system; and determining, according to a first conversion relationship between the second coordinate system and the third coordinate system, the coordinates of the midpoint in the third coordinate system as the third coordinate.
In one embodiment, determining the fourth coordinate of the target object in the fourth coordinate system corresponding to the vehicle according to the third coordinate includes: determining the installation distance between the camera and the center of the vehicle in the bird's-eye view, the installation distance including a horizontal distance and a vertical distance; and determining a second conversion relationship between the third coordinate system and the fourth coordinate system according to the installation distance, and converting the third coordinate into the fourth coordinate according to the second conversion relationship, wherein the origin of the fourth coordinate system is located at the center of the vehicle.
In one embodiment, determining the initial distance between the target object and the vehicle according to the fourth coordinate includes: determining the Euclidean distance between the fourth coordinate and the center of the vehicle as the initial distance.
In one embodiment, determining the target distance based on the preset correction parameter and the initial distance, and constructing the target panoramic surround view model of the vehicle according to the target distance and the environment image, includes: determining the target distance according to the difference between the initial distance and the correction parameter; and constructing a three-dimensional bowl-shaped mesh model with the target distance as the length of its bottom radius, and projecting the environment images of the four directions of the vehicle onto the three-dimensional bowl-shaped mesh model to obtain the target panoramic surround view model.
An embodiment of the present application provides a panoramic surround view model construction apparatus. The apparatus includes: an identification module, configured to identify a target object in an environment image acquired from a camera of a vehicle and determine a first coordinate of the target object in a first coordinate system corresponding to the environment image; a determination module, configured to convert the environment image into a bird's-eye view image, determine a second coordinate of the target object in a second coordinate system corresponding to the bird's-eye view image according to the first coordinate, determine a third coordinate of the target object in a third coordinate system corresponding to the camera according to the second coordinate, determine a fourth coordinate of the target object in a fourth coordinate system corresponding to the vehicle according to the third coordinate, and determine an initial distance between the target object and the vehicle according to the fourth coordinate; and a construction module, configured to determine a target distance based on a preset correction parameter and the initial distance, and construct a target panoramic surround view model of the vehicle according to the target distance and the environment image.
An embodiment of the present application provides a vehicle-mounted device. The vehicle-mounted device includes a memory and at least one processor, and the processor is configured to implement the panoramic surround view model construction method when executing a computer program stored in the memory.
An embodiment of the present application provides a vehicle, and the vehicle includes at least one camera and the vehicle-mounted device.
An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and the panoramic surround view model construction method is implemented when the computer program is executed by a processor.
In summary, the panoramic surround view model construction method described in the present application identifies a target object in an environment image acquired from a camera of a vehicle to determine a first coordinate of the target object in a first coordinate system corresponding to the environment image; converts the environment image into a bird's-eye view image based on a preset perspective projection matrix, and determines a second coordinate of the target object in a second coordinate system corresponding to the bird's-eye view image according to the first coordinate; determines a third coordinate of the target object in a third coordinate system corresponding to the camera according to the second coordinate; determines a fourth coordinate of the target object in a fourth coordinate system corresponding to the vehicle according to the third coordinate, and determines an initial distance between the target object and the vehicle according to the fourth coordinate; and determines a target distance based on a preset correction parameter and the initial distance, and constructs a target panoramic surround view model of the vehicle according to the target distance and the environment image. This avoids warping or distortion of objects in the constructed vehicle panoramic surround view model, yields a vehicle panoramic surround view model that is closer to the real environment, and ensures the user's driving safety when driving according to the vehicle panoramic surround view model.
In order to more clearly understand the above objectives, features, and advantages of the present application, the present application is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, provided there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field to which the present application belongs. The terms used in the specification of the present application are only for the purpose of describing specific embodiments and are not intended to limit the present application.
It should be noted that in the present application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone, where A and B may be singular or plural. The terms "first", "second", "third", "fourth", and so on (if any) in the specification, claims, and drawings of the present application are used to distinguish similar objects and are not used to describe a specific order or sequence.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present application should not be interpreted as more preferred or advantageous than other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present the relevant concepts in a concrete manner. The following embodiments and the features in the embodiments may be combined with each other provided there is no conflict.
In one embodiment, a panoramic surround view system installed in a vehicle can acquire images of the vehicle's surroundings through cameras mounted in the front, rear, left, and right directions of the vehicle, and build a three-dimensional panoramic surround view model from these images to simulate the environment around the vehicle, thereby helping the user better judge the distance between the vehicle and surrounding objects. However, objects in the three-dimensional models built by the related art are prone to warping or distortion, making it difficult for the user to accurately identify objects around the vehicle and to judge the distance between the vehicle and those objects from the model, which increases the user's driving risk.
To solve the above problem, an embodiment of the present application provides a panoramic surround view model construction method that identifies a target object in an environment image acquired from a camera of a vehicle to determine a first coordinate of the target object in a first coordinate system corresponding to the environment image; converts the environment image into a bird's-eye view image based on a preset perspective projection matrix, and determines a second coordinate of the target object in a second coordinate system corresponding to the bird's-eye view image according to the first coordinate; determines a third coordinate of the target object in a third coordinate system corresponding to the camera according to the second coordinate; determines a fourth coordinate of the target object in a fourth coordinate system corresponding to the vehicle according to the third coordinate, and determines an initial distance between the target object and the vehicle according to the fourth coordinate; and determines a target distance based on a preset correction parameter and the initial distance, and constructs a target panoramic surround view model of the vehicle according to the target distance and the environment image. This avoids warping or distortion of objects in the constructed vehicle panoramic surround view model, yields a vehicle panoramic surround view model that is closer to the real environment, and ensures the user's driving safety when driving according to the vehicle panoramic surround view model.
FIG. 1 is a schematic structural diagram of a vehicle-mounted device provided in an embodiment of the present application. The embodiment of the present application does not impose any restriction on the specific type of the vehicle-mounted device.
As shown in FIG. 1, the vehicle-mounted device 10 may be installed in a vehicle 1, and the vehicle-mounted device 10 may include a communication module 101, a memory 102, a processor 103, an input/output (I/O) interface 104, and a bus 105. The processor 103 is coupled to the communication module 101, the memory 102, and the I/O interface 104 through the bus 105.
The communication module 101 may include a wired communication module and/or a wireless communication module. The wired communication module may provide one or more wired communication solutions such as universal serial bus (USB), Controller Area Network (CAN), Local Interconnect Network (LIN), and FlexRay. The wireless communication module may provide one or more wireless communication solutions such as wireless fidelity (Wi-Fi), Bluetooth (BT), mobile communication networks, frequency modulation (FM), near field communication (NFC), and infrared (IR).
The memory 102 may include one or more random access memories (RAM) and one or more non-volatile memories (NVM). The random access memory can be read and written directly by the processor 103, can be used to store executable programs (for example, machine instructions) of the operating system or other running programs, and can also be used to store user and application data. The random access memory may include static random-access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), and the like.
The non-volatile memory can also store executable programs as well as user and application data, which can be loaded into the random access memory in advance for direct reading and writing by the processor 103. The non-volatile memory may include a disk storage element and a flash memory.
The memory 102 is configured to store one or more computer programs. The one or more computer programs are configured to be executed by the processor 103. The one or more computer programs include a plurality of instructions, and when the plurality of instructions are executed by the processor 103, the panoramic surround view model construction method executed on the vehicle-mounted device 10 can be implemented.
In other embodiments, the vehicle-mounted device 10 further includes an external memory interface for connecting an external memory to expand the storage capacity of the vehicle-mounted device 10.
The processor 103 may include one or more processing units. For example, the processor 103 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be independent components or may be integrated into one or more processors.
The processor 103 provides computing and control capabilities; for example, the processor 103 is configured to execute the computer program stored in the memory 102 to implement the above panoramic surround view model construction method.
The I/O interface 104 is used to provide a channel for user input or output. For example, the I/O interface 104 can be used to connect various input and output devices, such as a mouse, a keyboard, a touch device, and a display screen, so that the user can enter information or have information visualized.
The I/O interface 104 may also be used to provide a channel for data transmission with the camera 106. For example, the I/O interface 104 may be used to acquire environment images of the vehicle from the camera 106.
The camera 106 includes at least one camera installed in the vehicle 1 and is used to capture environment images of the environment in which the vehicle 1 is located. The camera 106 may be a fisheye camera, an infrared camera, or the like. FIG. 2 is an example diagram of the installation positions of the cameras in the vehicle provided in an embodiment of the present application. The cameras include four cameras installed in the front, rear, left, and right directions of the vehicle, and the union of the four fields of view of the four cameras can cover a 360-degree range around the vehicle.
The bus 105 is at least used to provide a channel for mutual communication among the communication module 101, the memory 102, the processor 103, and the I/O interface 104 in the vehicle-mounted device 10.
It can be understood that the structure illustrated in this embodiment of the present application does not constitute a specific limitation on the vehicle-mounted device 10. In other embodiments of the present application, the vehicle-mounted device 10 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
FIG. 3 is a flowchart of a panoramic surround view model construction method provided in an embodiment of the present application. The panoramic surround view model construction method is applied to a vehicle-mounted device, such as the vehicle-mounted device 10 in FIG. 1, and specifically includes the following steps. Depending on different requirements, the order of the steps in the flowchart may be changed, and some steps may be omitted.
Step S31: identify a target object in an environment image acquired from a camera of the vehicle, and determine a first coordinate of the target object in a first coordinate system corresponding to the environment image.
In one embodiment, at least one camera is installed in the vehicle, and the vehicle-mounted device can receive, as input from the user, a unique identifier of each camera and the installation position of each camera relative to the vehicle, so as to distinguish cameras installed at different positions. The installation position of each camera relative to the vehicle may include the installation distance between each camera and the center of the vehicle, and the installation distance includes a horizontal distance and a vertical distance, where the center of the vehicle refers to the center of the top view of the vehicle.
For example, as shown in FIG. 4, a rectangular coordinate system O1X1Y1 corresponding to the vehicle is established with the center of the top view of the vehicle (that is, the center of the dashed bounding box) as the origin, and the direction of the vehicle head in the top view is defined as the front of the vehicle. One camera is installed in each of the front, rear, left, and right directions of the vehicle. The camera installed at the front of the vehicle is identified as cameraF, the camera installed at the rear is identified as cameraB, the camera installed on the left is identified as cameraL, and the camera installed on the right is identified as cameraR. The horizontal distance of any camera from the center of the vehicle is the distance between the camera and the Y1 axis, and the vertical distance of any camera from the center of the vehicle is the distance between the camera and the X1 axis. In the subsequent embodiments, cameraF and cameraB are assumed to be installed on the Y1 axis, as shown in FIG. 4. In other embodiments, if cameraF or cameraB is not installed on the Y1 axis, a certain translation compensation can be applied during the coordinate transformation to implement the subsequent process.
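Purely as an illustration of how this installation information could be organized in software, the following Python sketch maps the camera identifiers named above to their offsets from the vehicle center; the numeric values are hypothetical placeholders and not values from this application.

```python
# Hypothetical installation parameters: horizontal/vertical offsets (in meters)
# of each camera from the vehicle center, expressed in the O1X1Y1 vehicle frame.
# cameraF and cameraB are assumed to lie on the Y1 axis, so their horizontal offset is 0.
CAMERA_INSTALLATION = {
    "cameraF": {"horizontal": 0.0, "vertical": 1.9},   # front of the vehicle
    "cameraB": {"horizontal": 0.0, "vertical": -2.0},  # rear of the vehicle
    "cameraL": {"horizontal": -0.9, "vertical": 0.0},  # left side
    "cameraR": {"horizontal": 0.9, "vertical": 0.0},   # right side
}
```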
In one embodiment, before the target object in the environment image acquired from the camera of the vehicle is identified, the method further includes preprocessing the environment image. The preprocessing includes, but is not limited to: resizing, for example, resizing the environment image to the input size required by the feature recognition algorithm; image enhancement, for example, improving the texture clarity of the environment image based on an interpolation algorithm (for example, a bilinear interpolation algorithm); grayscale conversion, for example, converting the environment image into a grayscale image using a weighted average method; filtering, for example, smoothing the environment image with a preset filter (for example, a mean filter or a median filter) to remove noise; and distortion correction, for example, when the camera is a fisheye camera, objects in the environment image may be distorted, and the environment image can be corrected for distortion based on the camera parameters of the fisheye camera (for example, the focal length, principal point coordinates, and distortion coefficients) and a pre-selected correction model (for example, a pinhole model or a fisheye model). Preprocessing the environment image can improve the accuracy of identifying the target object in the environment image.
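As a minimal sketch of such a preprocessing chain, assuming OpenCV is used and that the camera intrinsics K and fisheye distortion coefficients D come from calibration, the steps above might look roughly as follows; the function name, target size, and the choice of a median filter are illustrative assumptions.

```python
import cv2

def preprocess(frame, K, D, target_size=(640, 480)):
    """Illustrative preprocessing: fisheye distortion correction, resizing,
    grayscale conversion, and median filtering to remove noise."""
    # Distortion correction with OpenCV's fisheye model (one possible correction model).
    undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
    # Resize to the input size required by the feature recognition algorithm.
    resized = cv2.resize(undistorted, target_size, interpolation=cv2.INTER_LINEAR)
    # Grayscale conversion (a weighted average of the color channels).
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
    # Median filtering to suppress noise.
    denoised = cv2.medianBlur(gray, 3)
    return resized, denoised
```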
In one embodiment, after the distortion-corrected environment image corresponding to each camera of the vehicle is obtained, in order to facilitate obtaining a bird's-eye panoramic surround view of the vehicle in subsequent steps for constructing the panoramic surround view model of the vehicle, all cameras of the vehicle can be jointly calibrated based on a checkerboard calibration method, so that all environment images captured by all cameras of the vehicle are converted into the same coordinate system.
Specifically, the process of jointly calibrating all cameras of the vehicle based on the checkerboard calibration method includes: acquiring a checkerboard image of the ground in the corresponding direction captured by each camera; performing intrinsic calibration and extrinsic calibration on each camera to obtain initial camera parameters such as the intrinsic and extrinsic parameters of each camera; extracting target feature points from the checkerboard image in each direction, and matching feature points between cameras based on the target feature points; optimizing and updating the initial camera parameters of each camera based on the feature point matching information using methods such as minimizing the ghosting error, so that the updated camera parameters allow the checkerboard images captured by the corresponding cameras to be aligned more accurately; and evaluating the accuracy of the updated camera parameters, and if the evaluation result indicates that the accuracy of the updated camera parameters is lower than a preset accuracy threshold, repeating the above steps until camera parameters with an accuracy greater than or equal to the accuracy threshold are obtained, thereby completing the joint calibration of all cameras.
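To make the per-camera part of this procedure concrete, a minimal sketch of intrinsic/extrinsic calibration from checkerboard images with OpenCV is shown below; the pattern size and square size are assumed values, and the cross-camera feature matching and ghosting-error minimization described above are only summarized in the text.

```python
import cv2
import numpy as np

def calibrate_single_camera(checkerboard_images, pattern_size=(9, 6), square_size_cm=2.5):
    """Estimate one camera's intrinsics and per-view extrinsics from ground
    checkerboard images; these serve as the initial camera parameters that the
    joint optimization would then refine."""
    # 3D coordinates of the checkerboard corners on the ground plane (Z = 0).
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size_cm

    obj_points, img_points = [], []
    for img in checkerboard_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    h, w = checkerboard_images[0].shape[:2]
    # Returns the RMS reprojection error, intrinsics K, distortion D, and per-view extrinsics.
    rms, K, D, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, (w, h), None, None)
    return rms, K, D, rvecs, tvecs
```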
In one embodiment, the above joint calibration process can also determine the world distance corresponding to the length of each pixel in the environment image; for example, it can be determined that the world distance corresponding to the length of each pixel in the environment image is 1 centimeter.
In one embodiment, for any camera of the vehicle (for example, cameraF), identifying the target object in the environment image captured by the camera of the vehicle and determining the first coordinate of the target object in the first coordinate system corresponding to the environment image includes: identifying the target object in the environment image using a preset feature recognition algorithm, and generating a rectangular bounding box of the target object in the environment image; and determining, in the first coordinate system, the coordinates of target corner points of the rectangular bounding box, and using the coordinates of the target corner points as the first coordinate. The target object includes objects that have height, such as vehicles, walls, pillars, steps, and human bodies.
In one embodiment, the preset feature recognition algorithm includes one or more of a feature recognition algorithm based on a machine learning model and a feature recognition algorithm based on a deep learning model.
In one embodiment, the feature recognition algorithm based on a machine learning model may use algorithms such as a linear regression algorithm, a support vector regression algorithm, a ridge regression algorithm, or a decision tree algorithm. The feature recognition algorithm based on a machine learning model can learn how to classify the pixels in the environment image through supervised learning, and can perform feature recognition on the environment image without relying on explicit programming, which can improve the efficiency of feature recognition.
In one embodiment, the feature recognition algorithm based on a deep learning model may use various neural network structures, such as a convolutional neural network, a recurrent neural network, or a long short-term memory network. The feature recognition algorithm based on a deep learning model can extract low-level features of the environment image, obtain a more abstract high-level representation of the environment image from the low-level features, and determine distributed features in the environment image from the high-level representation, thereby achieving feature recognition of the environment image. It is applicable to feature recognition of environment images containing complex objects and improves the accuracy and efficiency of feature recognition.
In one example, using a fully convolutional neural network model to identify the target object in the environment image includes: using a plurality of convolutional layers to filter the environment image through convolution operations to obtain feature maps of the environment image at multiple scales; using activation layers to apply a nonlinear transformation to the outputs of the convolutional layers with an activation function (for example, ReLU, sigmoid, or tanh) to increase the expressive power of the network; using pooling layers to reduce the size of the feature maps so as to reduce the number of parameters and the computational complexity of the network; using upsampling layers to perform interpolation or deconvolution on the outputs of the pooling layers to restore the size of the feature maps and obtain a high-resolution feature representation of the environment image; using a skip-connection mechanism to connect different layers of the network so as to better capture feature information at different scales in the environment image; and using an output layer to perform feature classification based on a semantic segmentation algorithm to obtain the object category to which each pixel in the environment image belongs, thereby determining the target object in the environment image.
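For concreteness only, a toy fully convolutional segmentation network containing the building blocks listed above (convolution, activation, pooling, upsampling, a skip connection, and a per-pixel classification layer) could be sketched in Python with PyTorch as follows; the layer widths, depth, and number of classes are arbitrary assumptions and not the model used in this application.

```python
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal fully convolutional network for per-pixel classification."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.pool1 = nn.MaxPool2d(2)                        # 1/2 resolution
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.pool2 = nn.MaxPool2d(2)                        # 1/4 resolution
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)  # upsample to 1/2
        self.up2 = nn.ConvTranspose2d(16, 16, 2, stride=2)  # upsample to full size
        self.classifier = nn.Conv2d(16, num_classes, 1)     # per-pixel class scores

    def forward(self, x):
        f1 = self.enc1(x)                  # full-resolution low-level features
        f2 = self.enc2(self.pool1(f1))     # deeper features at 1/2 resolution
        d1 = self.up1(self.pool2(f2))      # decoder path back to 1/2 resolution
        d1 = d1 + self.pool1(f1)           # skip connection from the encoder
        out = self.up2(d1)                 # restore the input resolution
        return self.classifier(out)        # (N, num_classes, H, W) semantic map
```

The per-pixel class map produced by such a network would then be grouped into object regions from which the rectangular bounding boxes described below can be derived.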
In one embodiment, in order to determine the specific position of the target object in the environment image, a rectangular bounding box of the target object can first be generated in the environment image, the coordinates of the target corner points of the rectangular bounding box are then determined in the first coordinate system corresponding to the environment image, and finally the coordinates of the target corner points are used as the first coordinate of the target object in the first coordinate system corresponding to the environment image. The target corner points include the bottom-left corner point and the bottom-right corner point of the rectangular bounding box.
For example, FIG. 5 is an example diagram of the first coordinate of the target object provided in an embodiment of the present application. The first coordinate system O2X2Y2 is established with the upper-left corner of the environment image captured by cameraF as the origin, the rectangular bounding box of the target object (for example, the rear view of a vehicle shown in FIG. 5) is generated in the environment image as shown by the dashed line, the coordinates (x1, y1) of the bottom-left corner point of the dashed rectangular bounding box in the first coordinate system and the coordinates (x2, y2) of its bottom-right corner point in the first coordinate system are determined, and the coordinates (x1, y1) and (x2, y2) are used as the first coordinate.
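Expressed as a small sketch, and assuming the detector returns an axis-aligned box as (x_min, y_min, x_max, y_max) in the O2X2Y2 image frame (origin at the top-left corner, y increasing downward), the first coordinate is simply the two bottom corners of that box:

```python
def first_coordinates_from_box(box):
    """Return the bottom-left (x1, y1) and bottom-right (x2, y2) corner points
    of a detection box given as (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    bottom_left = (x_min, y_max)   # (x1, y1)
    bottom_right = (x_max, y_max)  # (x2, y2)
    return bottom_left, bottom_right
```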
Step S32: convert the environment image into a bird's-eye view image, and determine a second coordinate of the target object in a second coordinate system corresponding to the bird's-eye view image according to the first coordinate.
In one embodiment, when the environment image is converted into a bird's-eye view image, the projection transformation from the environment image to the bird's-eye view image can be implemented using a preset perspective projection matrix, based on the principle of constructing a two-dimensional panoramic surround view system (Around View Monitor, AVM).
Specifically, taking cameraF as an example, the method for obtaining the perspective projection matrix may include: based on the checkerboard calibration method, using cameraF to capture a first image of a checkerboard on the ground within the field of view of cameraF, and using an auxiliary camera to capture a second image of the checkerboard, where the auxiliary camera is located above the center of the vehicle and its shooting view is parallel to the ground plane on which the vehicle is located; determining the environment coordinates of a plurality of calibration feature points in the first image in the coordinate system of the first image, and determining, in the second image, the bird's-eye coordinates of the points corresponding to the plurality of calibration feature points in the coordinate system of the second image; and computing a homography matrix that transforms the environment coordinates of the plurality of calibration feature points into the corresponding bird's-eye coordinates, and using the homography matrix as the perspective projection matrix.
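The description does not name a specific estimation routine; one common way to compute such a homography from the matched calibration feature points, shown here only as an assumed sketch, uses OpenCV:

```python
import cv2
import numpy as np

def estimate_perspective_projection(env_points, birdseye_points):
    """Estimate the homography that maps calibration feature points from the
    environment image (first image) to the auxiliary camera's bird's-eye image
    (second image).  Both inputs are Nx2 arrays of matched pixel coordinates."""
    H, _mask = cv2.findHomography(
        np.asarray(env_points, dtype=np.float32),
        np.asarray(birdseye_points, dtype=np.float32),
        cv2.RANSAC,  # robust estimation; the choice of RANSAC is an assumption
    )
    return H  # 3x3 matrix used as the perspective projection matrix
```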
In one embodiment, the perspective projection matrix is used to apply the projection transformation to each pixel in the environment image to obtain the bird's-eye view image corresponding to the environment image (for example, as shown in FIG. 6), as well as the coordinates, in the second coordinate system in which the bird's-eye view image is located, of the projection point in the bird's-eye view image corresponding to each pixel in the environment image, where the second coordinate system may be O3X3Y3 shown in FIG. 7. In addition, since the world distance corresponding to the length of each pixel in the environment image is a known parameter, applying the corresponding transformation to this known parameter using the perspective projection matrix yields the world distance corresponding to the length of each pixel in the bird's-eye view image. As shown in FIG. 6, when the environment image is converted into the bird's-eye view image, because of the change of viewing angle, the closer a region is to the top of the environment image, the more it is stretched (both lengthened and widened), and the closer a region is to the bottom of the environment image, the less it is stretched.
In one embodiment, when the perspective projection matrix is used to apply the projection transformation to each pixel in the environment image, the rectangular bounding box of the target object can be projected to obtain the bounding box of the target object from the bird's-eye view in the bird's-eye view image (for example, the dashed box shown in FIG. 6); the projection point corresponding to the bottom-left corner point of the rectangular bounding box of the target object and the projection point corresponding to its bottom-right corner point can also be obtained. The second coordinate of the target object in the second coordinate system corresponding to the bird's-eye view image includes: the coordinates, in the second coordinate system, of the first projection point of the bottom-left corner point in the bird's-eye view image (for example, as shown in FIG. 6), and the coordinates, in the second coordinate system, of the second projection point of the bottom-right corner point in the bird's-eye view image (for example, as shown in FIG. 6).
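A hedged sketch of this projection step, reusing the homography H from the sketch above, could look like the following:

```python
import cv2
import numpy as np

def project_corners_to_birdseye(H, bottom_left, bottom_right):
    """Project the bottom-left and bottom-right corner points of the bounding
    box from the environment image into the bird's-eye view image, yielding
    the first and second projection points (the second coordinate)."""
    pts = np.array([[bottom_left], [bottom_right]], dtype=np.float32)  # shape (2, 1, 2)
    projected = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    first_projection, second_projection = projected[0], projected[1]
    return tuple(first_projection), tuple(second_projection)
```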
In one embodiment, since all cameras of the vehicle have already been jointly calibrated in step S31, a bird's-eye panoramic surround view of the vehicle can be obtained on the basis of the joint calibration according to the perspective projection matrices between the environment images and the bird's-eye view images. For example, FIG. 8 is an example diagram of the bird's-eye panoramic surround view of the vehicle provided in an embodiment of the present application.
Step S33: determine a third coordinate of the target object in a third coordinate system corresponding to the camera according to the second coordinate.
In one embodiment, determining the third coordinate of the target object in the third coordinate system corresponding to the camera according to the second coordinate includes: determining the midpoint coordinate (for example, (x3, y3)) between the coordinates of the first projection point in the second coordinate system and the coordinates of the second projection point in the second coordinate system; and determining, according to a first conversion relationship between the second coordinate system and the third coordinate system, the coordinates of the midpoint in the third coordinate system as the third coordinate.
In one embodiment, to simplify the computation when the midpoint coordinate is used to calculate the distance between the target object and the corresponding camera, the second coordinate of the target object in the bird's-eye view image can be converted into the third coordinate in the third coordinate system corresponding to the camera.
Specifically, taking cameraF as an example, as shown in FIG. 7, the center of the camera's field of view lies on the central vertical line of the environment image, and the lower boundary of the bird's-eye view image is the boundary of the vehicle head, which is also the installation position of cameraF. Therefore, the middle of the lower boundary of each bird's-eye view image is the location of the corresponding camera, and the third coordinate system O4X4Y4 in which the camera is located can be established with the middle of the lower boundary of the bird's-eye view image as the origin.
In one embodiment, since both the second coordinate system and the third coordinate system are rectangular coordinate systems, the first conversion relationship between them can be determined by determining the positional relationship between the origins of the two coordinate systems. Specifically, the length W and width H of the bird's-eye view image can be determined from the length and width of the environment image using the projection transformation matrix, where the length of the bird's-eye view image may be the length of the side on which the origin of the second coordinate system lies (for example, the upper boundary of the bird's-eye view image in FIG. 7), and the width of the bird's-eye view image may be the distance between two mutually parallel boundaries of the bird's-eye view image (for example, the upper boundary and the lower boundary of the bird's-eye view image in FIG. 7). The positional relationship between the origins of the second and third coordinate systems is determined from the length and width of the bird's-eye view image, and includes: the coordinates of the origin of the third coordinate system in the second coordinate system are (W/2, -H). The first conversion relationship is determined from this positional relationship, and includes: if the coordinates of any point in the second coordinate system are (x4, y4), then the coordinates of that point in the third coordinate system are (x5, y5) = (-x4 + W/2, H - |y4|).
In one embodiment, according to the first conversion relationship between the second coordinate system and the third coordinate system, the coordinates (-x3 + W/2, H - |y3|) of the midpoint coordinate (x3, y3) in the third coordinate system can be determined as the third coordinate.
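Written out in code form, this is a direct transcription of the formulas above, with no assumptions beyond the variable names:

```python
def midpoint(p1, p2):
    """Midpoint of the first and second projection points in the second coordinate system."""
    return ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)

def second_to_third(point, W, H):
    """First conversion relationship: a point (x4, y4) in the bird's-eye frame
    O3X3Y3 maps to (x5, y5) = (-x4 + W/2, H - |y4|) in the camera frame O4X4Y4,
    where W and H are the length and width of the bird's-eye view image."""
    x4, y4 = point
    return (-x4 + W / 2.0, H - abs(y4))

# Example: the third coordinate of the target object.
# x3, y3 = midpoint(first_projection, second_projection)
# third_coordinate = second_to_third((x3, y3), W, H)
```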
In one embodiment, since the world distance corresponding to the length of each pixel in the environment image and in the bird's-eye view image is a known parameter, the world distance between the target object and the camera can be determined from the third coordinate; specifically, the world distance between the target object and the camera may be the Euclidean distance between the third coordinate and the origin of the third coordinate system.
In one embodiment, for any camera, since the environment image may contain multiple target objects, in order to prevent the target panoramic surround view model constructed in the subsequent steps from being warped or distorted, the method further includes: determining the distance between each target object in the environment image and the corresponding camera, and constructing the target panoramic surround view model based on the target object corresponding to the minimum distance.
Step S34: determine a fourth coordinate of the target object in a fourth coordinate system corresponding to the vehicle according to the third coordinate, and determine an initial distance between the target object and the vehicle according to the fourth coordinate.
In one embodiment, determining the fourth coordinate of the target object in the fourth coordinate system corresponding to the vehicle according to the third coordinate includes: determining the installation distance between the camera and the center of the vehicle in the bird's-eye view, the installation distance including a horizontal distance and a vertical distance; and determining a second conversion relationship between the third coordinate system and the fourth coordinate system according to the installation distance, and converting the third coordinate into the fourth coordinate according to the second conversion relationship, where the origin of the fourth coordinate system is located at the center of the vehicle.
In one embodiment, taking cameraL as an example, FIG. 9 is an example diagram of the third coordinate system and the fourth coordinate system provided in an embodiment of the present application, where O1X1Y1 denotes the fourth coordinate system with the center of the vehicle as the origin, and O4X4Y4 denotes the third coordinate system corresponding to cameraL.
In one embodiment, the method for determining the second conversion relationship is similar to the method for determining the first conversion relationship. Since the horizontal distance and the vertical distance between the camera (for example, cameraL) and the center of the vehicle in the bird's-eye view are known parameters (see step S31), the second conversion relationship may include a translation transformation applied to the third coordinate based on the horizontal distance and the vertical distance; for details, reference may be made to the method for determining the first conversion relationship.
In one embodiment, determining the initial distance between the target object and the vehicle according to the fourth coordinate includes: determining the Euclidean distance between the fourth coordinate and the center of the vehicle as the initial distance.
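As a sketch under stated assumptions, the second conversion can be written as a plain translation by the installation offsets, followed by the Euclidean distance to the vehicle center; the sign conventions of the translation depend on how the camera frame in FIG. 9 is oriented, so they are assumptions rather than the exact relationship used here.

```python
import math

def third_to_fourth(point, horizontal, vertical):
    """Second conversion relationship, sketched as a translation of the third
    coordinate by the camera's installation offsets from the vehicle center."""
    x5, y5 = point
    return (x5 + horizontal, y5 + vertical)

def initial_distance(fourth_coordinate):
    """Euclidean distance from the fourth coordinate to the vehicle center,
    i.e. to the origin of the fourth coordinate system O1X1Y1."""
    x, y = fourth_coordinate
    return math.hypot(x, y)
```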
Step S35: determine a target distance based on a preset correction parameter and the initial distance, and construct a target panoramic surround view model of the vehicle according to the target distance and the environment image.
In one embodiment, determining the target distance based on the preset correction parameter and the initial distance, and constructing the target panoramic surround view model of the vehicle according to the target distance and the environment image, includes: determining the target distance according to the difference between the initial distance and the correction parameter; and constructing a three-dimensional bowl-shaped mesh model with the target distance as the length of its bottom radius, and projecting the environment images of the four directions of the vehicle onto the three-dimensional bowl-shaped mesh model to obtain the target panoramic surround view model. The bottom of the three-dimensional bowl-shaped mesh model is centered at the center of the vehicle.
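The mesh construction itself is not detailed in the description; purely as an illustration of a bowl whose flat bottom has the target distance as its radius, a vertex-generation sketch might look like the following, where the wall profile, wall height, and wall extent are assumptions.

```python
import numpy as np

def build_bowl_mesh(bottom_radius, wall_height=2.0, wall_extent=3.0,
                    radial_steps=16, angular_steps=64):
    """Generate vertices of a simple bowl-shaped mesh centered on the vehicle:
    a flat circular bottom of radius `bottom_radius` (the target distance) and
    a wall that rises outward from the rim with a quadratic profile."""
    vertices = []
    for i in range(radial_steps + 1):
        r = (i / radial_steps) * (bottom_radius + wall_extent)
        # Height is 0 on the flat bottom, then rises smoothly along the wall.
        z = 0.0 if r <= bottom_radius else wall_height * ((r - bottom_radius) / wall_extent) ** 2
        for j in range(angular_steps):
            theta = 2.0 * np.pi * j / angular_steps
            vertices.append((r * np.cos(theta), r * np.sin(theta), z))
    return np.array(vertices)  # later texture-mapped with the four environment images
```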
例如圖10所示,為本申請實施例提供的全景環視模型的示例圖。其中,全景環視模型使用的是三維碗形網格模型,將車輛的四個方向的環境圖像投影至所述三維碗形網格模型就可以得到車輛的全景環視模型。圖10中左側模型為直接使用初始距離作為三維碗型網格模型的底部半徑的長度構建的模型,比對右側模型,可以看出左側模型中的柱體的目標物件在碗底出現了扭曲。為了避免上述問題,可以對初始距離進行校正,以使目標物件在投影時可以整體投影至碗壁內側,例如圖10右側模型所示。For example, as shown in FIG10, it is an example diagram of a panoramic view model provided in an embodiment of the present application. Among them, the panoramic view model uses a three-dimensional bowl-shaped grid model. The panoramic view model of the vehicle can be obtained by projecting the environmental images in four directions of the vehicle onto the three-dimensional bowl-shaped grid model. The left model in FIG10 is a model constructed by directly using the initial distance as the length of the bottom radius of the three-dimensional bowl-shaped grid model. Compared with the right model, it can be seen that the target object of the column in the left model is distorted at the bottom of the bowl. In order to avoid the above problems, the initial distance can be corrected so that the target object can be projected as a whole to the inner side of the bowl wall during projection, such as shown in the right model of FIG10.
在一個實施例中,所述校正參數的確定方法包括:確定所述目標物件的類別;基於所述類別確定所述目標物件的寬度,將所述目標物件的寬度作為所述校正參數。例如,當目標物件的類別為人體時,可以將人體的平均寬度作為所述校正參數。在另一個實施例中,還可以使用用戶輸入的參數作為校正參數。In one embodiment, the method for determining the correction parameter includes: determining the category of the target object; determining the width of the target object based on the category, and using the width of the target object as the correction parameter. For example, when the category of the target object is a human body, the average width of the human body can be used as the correction parameter. In another embodiment, a parameter input by a user can also be used as the correction parameter.
在一個實施例中，使用初始距離與所述校正參數的差值作為所述目標距離，可以使目標物件投影至以目標距離為底邊半徑的三維碗型網格模型的外側，從而將目標物件投影至三維碗型網格模型的碗壁上，避免投影時目標物件出現扭曲變形與失真。在車輛行駛過程中，透過不斷確定目標物件以及目標物件對應的校正參數，可以確保車輛在每個時刻的全景環視模型都不存在失真，從而保障用戶的行車安全。In one embodiment, using the difference between the initial distance and the correction parameter as the target distance causes the target object to fall outside the bottom circle of the three-dimensional bowl-shaped grid model whose bottom radius is the target distance, so that the target object is projected onto the bowl wall of the grid model, avoiding warping, deformation and distortion of the target object during projection. While the vehicle is driving, by continuously determining the target object and the correction parameter corresponding to the target object, it can be ensured that the panoramic view model of the vehicle is free of distortion at every moment, thereby ensuring the driving safety of the user.
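A minimal sketch of how the correction parameter and the target distance described above might be chosen, assuming a small class-to-width lookup table; the classes, widths, and the 0.1 m lower bound are illustrative assumptions, not values from the patent.

```python
# Assumed average object widths in meters; purely illustrative values.
AVERAGE_WIDTHS = {"person": 0.5, "vehicle": 1.8, "pillar": 0.4}

def correction_parameter(category, user_value=None):
    """Use a user-supplied value if given, otherwise the average width
    associated with the detected object's category."""
    if user_value is not None:
        return user_value
    return AVERAGE_WIDTHS.get(category, 0.5)

def target_distance(initial_dist, category, user_value=None):
    """Target distance = initial distance minus the correction parameter, so
    the whole object lands on the bowl wall instead of the bowl bottom."""
    return max(initial_dist - correction_parameter(category, user_value), 0.1)

radius = target_distance(2.6, "person")   # 2.6 m initial distance -> 2.1 m radius
```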
在一個實施例中，所述方法還可以包括：若所述目標全景環視模型存在失真現象，對所述目標全景環視模型進行更新，包括：使用更新的校正參數縮小所述三維碗型網格模型的底部半徑。其中，可以接收駕駛車輛的用戶對目標全景環視模型是否失真的判定結果。In one embodiment, the method may further include: if the target panoramic view model exhibits distortion, updating the target panoramic view model, which includes: using an updated correction parameter to reduce the bottom radius of the three-dimensional bowl-shaped grid model. Here, a judgment from the user driving the vehicle as to whether the target panoramic view model is distorted may be received.
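One hedged way this feedback loop might look in code is sketched below; the received judgment is represented by a placeholder flag, and the 0.3 m update and 0.1 m floor are assumptions.

```python
def shrink_bottom_radius(current_radius, updated_correction):
    """Reduce the bowl's bottom radius with an updated (larger) correction
    parameter when the driver judges the model to be distorted."""
    return max(current_radius - updated_correction, 0.1)

driver_reports_distortion = True           # placeholder for the received judgment
radius = 2.1
if driver_reports_distortion:
    radius = shrink_bottom_radius(radius, updated_correction=0.3)
```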
在一個實施例中，本申請實施例提供的全景環視模型構建方法，透過識別從車輛的攝像裝置獲取的環境圖像中的目標物件確定目標物件在環境圖像對應的第一座標系中的第一座標；基於預設的透視投影矩陣將所述環境圖像轉換至俯瞰視角的俯瞰圖像，根據所述第一座標確定所述目標物件在所述俯瞰圖像對應的第二座標系中的第二座標；根據所述第二座標確定所述目標物件在所述攝像裝置對應的第三座標系中的第三座標；根據所述第三座標確定所述目標物件在所述車輛對應的第四座標系中的第四座標，並根據所述第四座標確定所述目標物件與所述車輛的初始距離；基於預設的校正參數與所述初始距離確定目標距離，根據所述目標距離與所述環境圖像構建所述車輛的目標全景環視模型。能夠避免構建的車輛全景環視模型中物體出現扭曲或失真的情況，得到與真實環境更為接近的車輛全景環視模型，保障用戶根據車輛全景環視模型駕駛時的行車安全，降低車輛碰撞導致的用戶的財產損失。In one embodiment, the panoramic view model construction method provided by the embodiments of the present application identifies the target object in the environmental image obtained from the camera device of the vehicle and determines the first coordinate of the target object in the first coordinate system corresponding to the environmental image; converts the environmental image into a bird's-eye view image based on a preset perspective projection matrix, and determines, according to the first coordinate, the second coordinate of the target object in the second coordinate system corresponding to the bird's-eye view image; determines, according to the second coordinate, the third coordinate of the target object in the third coordinate system corresponding to the camera device; determines, according to the third coordinate, the fourth coordinate of the target object in the fourth coordinate system corresponding to the vehicle, and determines the initial distance between the target object and the vehicle according to the fourth coordinate; and determines the target distance based on the preset correction parameter and the initial distance, and constructs the target panoramic surround model of the vehicle according to the target distance and the environmental image. This can avoid warping or distortion of objects in the constructed vehicle panoramic view model, yield a vehicle panoramic view model that is closer to the real environment, ensure the driving safety of the user when driving according to the model, and reduce the property loss caused to the user by vehicle collisions.
圖11是本申請一實施例提供的全景環視模型構建裝置的結構圖。FIG. 11 is a structural diagram of a panoramic view model construction device provided in an embodiment of the present application.
在一些實施例中，所述全景環視模型構建裝置40可以包括多個由電腦程式段所組成的功能模組。所述全景環視模型構建裝置40中的各個程式段的電腦程式可以儲存於車載裝置的儲存器中，並由至少一個處理器所執行，以執行(詳見圖3描述)全景環視模型構建的功能。In some embodiments, the panoramic view model construction device 40 may include a plurality of functional modules composed of computer program segments. The computer programs of the program segments in the panoramic view model construction device 40 may be stored in the memory of the vehicle-mounted device and executed by at least one processor to perform the panoramic view model construction function (see the description of FIG. 3 for details).
本實施例中，所述全景環視模型構建裝置40根據其所執行的功能，可以被劃分為多個功能模組。所述功能模組可以包括：識別模組401、確定模組402、構建模組403。本申請所稱的模組是指一種能夠被至少一個處理器所執行並且能夠完成固定功能的一系列電腦程式段，其儲存在儲存器中。在本實施例中，關於所述全景環視模型構建裝置40中的各個模組的功能實現方式可以參見上文對全景環視模型構建方法的限定，在此不再重複描述。所述識別模組401，用於識別從車輛的攝像裝置獲取的環境圖像中的目標物件，並確定所述目標物件在所述環境圖像對應的第一座標系中的第一座標。所述確定模組402，用於將所述環境圖像轉換至俯瞰視角的俯瞰圖像，根據所述第一座標確定所述目標物件在所述俯瞰圖像對應的第二座標系中的第二座標；根據所述第二座標確定所述目標物件在所述攝像裝置對應的第三座標系中的第三座標；根據所述第三座標確定所述目標物件在所述車輛對應的第四座標系中的第四座標，並根據所述第四座標確定所述目標物件與所述車輛的初始距離。所述構建模組403，基於預設的校正參數與所述初始距離確定目標距離，根據所述目標距離與所述環境圖像構建所述車輛的目標全景環視模型。In this embodiment, the panoramic view model construction device 40 can be divided into a plurality of functional modules according to the functions it performs. The functional modules may include: an identification module 401, a determination module 402, and a construction module 403. The module referred to in this application is a series of computer program segments that can be executed by at least one processor to complete fixed functions and that are stored in a memory. In this embodiment, for the functional implementation of each module in the panoramic view model construction device 40, reference may be made to the above description of the panoramic view model construction method, which will not be repeated here. The identification module 401 is used to identify a target object in an environment image acquired from a camera device of a vehicle, and determine a first coordinate of the target object in a first coordinate system corresponding to the environment image. The determination module 402 is used to convert the environment image into a bird's-eye view image, determine a second coordinate of the target object in a second coordinate system corresponding to the bird's-eye view image according to the first coordinate, determine a third coordinate of the target object in a third coordinate system corresponding to the camera device according to the second coordinate, determine a fourth coordinate of the target object in a fourth coordinate system corresponding to the vehicle according to the third coordinate, and determine an initial distance between the target object and the vehicle according to the fourth coordinate. The construction module 403 is used to determine the target distance based on the preset correction parameter and the initial distance, and construct the target panoramic surround model of the vehicle according to the target distance and the environmental image.
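Read as software structure, the three modules can be wired together roughly as below. This is an illustrative Python arrangement, not the patent's actual program segments; the identify, determine, and construct callables it assumes are placeholders for modules 401-403.

```python
class PanoramaModelBuilder:
    """Illustrative arrangement of the identification, determination and
    construction modules (401-403) as three collaborating callables."""

    def __init__(self, identify, determine, construct):
        self.identify = identify      # module 401: find target, first coordinate
        self.determine = determine    # module 402: coordinates 2-4, initial distance
        self.construct = construct    # module 403: target distance, bowl model

    def build(self, environment_images):
        results = []
        for image in environment_images:          # front / rear / left / right
            target, first_coord = self.identify(image)
            initial_dist = self.determine(image, first_coord)
            results.append((target, initial_dist))
        return self.construct(environment_images, results)
```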
本申請實施例還提供一種電腦可讀儲存介質,所述電腦可讀儲存介質上儲存有電腦程式,所述電腦程式中包括程式指令,所述程式指令被執行時所實現的方法可參照本申請上述各個實施例中的方法。其中,所述電腦可讀儲存介質可以是上述實施例所述的車載裝置的內部儲存器,例如所述車載裝置的硬碟或儲存器。所述電腦可讀儲存介質也可以是所述車載裝置的外接存放裝置,例如所述車載裝置上配備的插接式硬碟,智慧儲存卡(Smart Media Card,SMC),安全數位(Secure Digital,SD)卡,快閃儲存器卡(Flash Card)等。The embodiment of the present application also provides a computer-readable storage medium, on which a computer program is stored, wherein the computer program includes program instructions, and the method implemented when the program instructions are executed can refer to the methods in the above-mentioned embodiments of the present application. Among them, the computer-readable storage medium can be the internal storage of the vehicle-mounted device described in the above-mentioned embodiment, such as the hard disk or storage of the vehicle-mounted device. The computer-readable storage medium can also be an external storage device of the vehicle-mounted device, such as a plug-in hard disk equipped on the vehicle-mounted device, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash memory card (Flash Card), etc.
在一些實施例中,所述電腦可讀儲存介質可以包括儲存程式區和儲存資料區,其中,儲存程式區可儲存作業系統、至少一個功能所需的應用程式等;儲存資料區可儲存根據車載裝置的使用所創建的資料等。In some embodiments, the computer-readable storage medium may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application required for at least one function, etc.; the data storage area may store data created according to the use of the vehicle-mounted device, etc.
在上述實施例中,對各個實施例的描述都各有側重,某個實施例中沒有詳述或記載的部分,可以參見其它實施例的相關描述。本領域普通技術人員可以意識到,結合本文中所公開的實施例描述的各示例的單元及演算法步驟,能夠以電子硬體、或者電腦軟體和電子硬體的結合來實現。這些功能究竟以硬體還是軟體方式來執行,取決於技術方案的特定應用和設計約束條件。專業技術人員可以對每個特定的應用來使用不同方法來實現所描述的功能,但是這種實現不應認為超出本申請的範圍。In the above embodiments, the description of each embodiment has its own emphasis. For parts that are not described or recorded in a certain embodiment, please refer to the relevant description of other embodiments. A person of ordinary skill in the art can realize that the units and algorithm steps of each example described in combination with the embodiments disclosed in this article can be implemented with electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Professional and technical personnel can use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of this application.
在本申請所提供的實施例中,應該理解到,所揭露的裝置/終端設備和方法,可以透過其它的方式實現。例如,以上所描述的裝置/終端設備實施例僅僅是示意性的,例如,所述模組或單元的劃分,僅僅為一種邏輯功能劃分,實際實現時可以有另外的劃分方式,例如多個單元或元件可以結合或者可以集成到另一個系統,或一些特徵可以忽略,或不執行。另一點,所顯示或討論的相互之間的耦合或直接耦合或通訊連接可以是透過一些介面,裝置或單元的間接耦合或通訊連接,可以是電性,機械或其它的形式。In the embodiments provided in the present application, it should be understood that the disclosed devices/terminal equipment and methods can be implemented in other ways. For example, the device/terminal equipment embodiments described above are only schematic. For example, the division of the modules or units is only a logical functional division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, indirect coupling or communication connection of devices or units, which can be electrical, mechanical or other forms.
所述作為分離部件說明的單元可以是或者也可以不是物理上分開的,作為單元顯示的部件可以是或者也可以不是物理單元,即可以位於一個地方,或者也可以分佈到多個網路單元上。可以根據實際的需要選擇其中的部分或者全部單元來實現本實施例方案的目的。The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the present embodiment.
以上所述實施例僅用以說明本申請的技術方案,而非對其限制;儘管參照前述實施例對本申請進行了詳細的說明,本領域的普通技術人員應當理解:其依然可以對前述各實施例所記載的技術方案進行修改,或者對其中部分技術特徵進行等同替換;而這些修改或者替換,並不使相應技術方案的本質脫離本申請各實施例技術方案的精神和範圍,均應包含在本申請的保護範圍之內。The above-mentioned embodiments are only used to illustrate the technical solutions of the present application, rather than to limit them. Although the present application is described in detail with reference to the above-mentioned embodiments, ordinary technical personnel in this field should understand that they can still modify the technical solutions described in the above-mentioned embodiments, or replace some of the technical features therein with equivalents. These modifications or replacements do not deviate the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present application, and should all be included in the protection scope of the present application.
1:車輛 10:車載裝置 101:通信模組 102:儲存器 103:處理器 104:I/O介面 105:匯流排 106:攝像裝置 O1X1Y1:第四座標系 O2X2Y2:第一座標系 O3X3Y3:第二座標系 O4X4Y4:第三座標系 40:全景環視模型構建裝置 401:識別模組 402:確定模組 403:構建模組 S31~S35:步驟
1: Vehicle 10: Vehicle-mounted device 101: Communication module 102: Memory 103: Processor 104: I/O interface 105: Bus 106: Camera device O1X1Y1: Fourth coordinate system O2X2Y2: First coordinate system O3X3Y3: Second coordinate system O4X4Y4: Third coordinate system 40: Panoramic view model construction device 401: Identification module 402: Determination module 403: Construction module S31~S35: Steps
圖1是本申請一實施例提供的車載裝置的結構圖。FIG1 is a structural diagram of a vehicle-mounted device provided in an embodiment of the present application.
圖2是本申請一實施例提供的攝像裝置在車輛中的安裝位置的示例圖。FIG. 2 is an example diagram of the installation position of a camera device in a vehicle provided in an embodiment of the present application.
圖3是本申請一實施例提供的全景環視模型構建方法的流程圖。FIG3 is a flow chart of a method for constructing a panoramic view model provided in an embodiment of the present application.
圖4是本申請一實施例提供的車輛的俯視圖的示例圖。FIG. 4 is an example diagram of a top view of a vehicle provided in an embodiment of the present application.
圖5是本申請一實施例提供的目標物件的第一座標的示例圖。FIG. 5 is an example diagram of the first coordinates of a target object provided in an embodiment of the present application.
圖6是本申請一實施例提供的環境圖像與對應的俯瞰圖像的示例圖。FIG. 6 is an example diagram of an environmental image and a corresponding bird's-eye view image provided by an embodiment of the present application.
圖7是本申請一實施例提供的第二座標系與第三座標系的示例圖。FIG. 7 is an example diagram of the second coordinate system and the third coordinate system provided in an embodiment of the present application.
圖8是本申請一實施例提供的車輛的俯瞰視角的全景環視圖的示例圖。FIG8 is an example diagram of a panoramic view from a bird's-eye view of a vehicle provided in an embodiment of the present application.
圖9是本申請一實施例提供的第三座標系與第四座標系的示例圖。FIG. 9 is an example diagram of a third coordinate system and a fourth coordinate system provided in an embodiment of the present application.
圖10是本申請一實施例提供的全景環視模型的示例圖。FIG. 10 is an example diagram of a panoramic view model provided in an embodiment of the present application.
圖11是本申請一實施例提供的全景環視模型構建裝置的結構圖。FIG. 11 is a structural diagram of a panoramic view model construction device provided in an embodiment of the present application.
S31~S35:步驟 S31~S35: Steps
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW112147332A TWI860908B (en) | 2023-12-05 | 2023-12-05 | Method for constructing panoramic view model, vehicle-mounted device, and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI860908B true TWI860908B (en) | 2024-11-01 |
| TW202524426A TW202524426A (en) | 2025-06-16 |
Family
ID=94379697
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW112147332A TWI860908B (en) | 2023-12-05 | 2023-12-05 | Method for constructing panoramic view model, vehicle-mounted device, and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI860908B (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TW202240199A (en) * | 2020-12-21 | 2022-10-16 | 美商英特爾股份有限公司 | High end imaging radar |
| TW202247650A (en) * | 2021-05-21 | 2022-12-01 | 美商高通公司 | Implicit image and video compression using machine learning systems |
| TW202312031A (en) * | 2021-08-25 | 2023-03-16 | 美商高通公司 | Instance-adaptive image and video compression in a network parameter subspace using machine learning systems |
| WO2023194801A1 (en) * | 2022-04-06 | 2023-10-12 | Mobileye Vision Technologies Ltd. | Steering limiters for vehicle navigation |
Also Published As
| Publication number | Publication date |
|---|---|
| TW202524426A (en) | 2025-06-16 |