WO2025098882A1 - Method for fusing camera images, and vehicle - Google Patents
- Publication number
- WO2025098882A1 (PCT/EP2024/080889)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- camera
- vehicle
- cameras
- images
- chassis
- Prior art date
- Legal status
- Pending
Classifications
- G06T7/70—Image analysis; Determining position or orientation of objects or cameras
- B60R1/23—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, for viewing an area outside the vehicle with a predetermined field of view
- G01C21/1656—Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
- B60R2300/30—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing
- B60R2300/8046—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, for replacing a rear-view mirror system
- G06T2207/30244—Camera pose
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Abstract
Description
Method for fusing camera images, and vehicle

The invention relates to a method for fusing camera images according to the preamble of claim 1 and a vehicle according to the preamble of claim 3.

The fusion (stitching) of multiple camera images from vehicle cameras is complex when the camera positions can shift relative to each other over time.

US 2021/0179172 A1 describes methods and apparatus for determining a trailer hitch articulation angle, including a camera for capturing a first image and a second image, a steering sensor configured to detect a steering angle, a speed sensor configured to detect a vehicle speed, and a processor configured to generate a bird's eye view of the first image and the second image through a perspective transformation, to generate a trailer hitch model from the first image in response to the steering angle and the vehicle speed indicating that the vehicle is traveling straight ahead, to generate a current hitch model from the second image, and to determine a trailer hitch articulation angle from the angular difference between the trailer hitch model and the current hitch model, as well as a vehicle controller that controls the vehicle in response to the trailer hitch articulation angle.

DE 10 2019 202 269 A1 describes a method for calibrating a mobile camera unit of a camera system, wherein static position parameters (pus, pos, prs, pls) are fused with dynamic position parameters (pud, pod, prd, pld) of at least two edges and local coordinates (x, y, z) of the camera unit are calculated therefrom. DE 10 2013 209 156 A1 describes a camera with a housing for recording the surroundings of a motor vehicle for use in a motor-vehicle surround-view system. Image data of the surroundings of the motor vehicle can be generated by means of the camera, and a first interface is provided via which the image data from the camera can be forwarded outside the housing for further processing. The camera has an inertial sensor arranged within the housing, and the data from the inertial sensor can be forwarded via the first interface or a second interface for further processing outside the housing.

US 2017/0341583 A1 describes systems and methods for a towing vehicle towing a trailer having at least one imaging device. A method includes: receiving a first image stream including a plurality of first images from a first imaging device connected to the trailer; receiving a second image stream including a plurality of second images from a second imaging device connected to the vehicle; determining at least one common feature between a first image of the first images and a second image of the second images; determining a first distance from the first imaging device to the at least one common feature and a second distance from the second imaging device to the at least one common feature; and determining a position of the first imaging device relative to the vehicle based on the first distance, the second distance, and a known position and pose of the second imaging device.

The invention is based on the object of providing a novel method for fusing camera images and a novel vehicle.

The object is achieved according to the invention by a method for fusing camera images having the features of claim 1 and by a vehicle having the features of claim 3.
Advantageous embodiments of the invention are the subject of the subclaims.

A method is proposed for fusing camera images from several cameras that are at least partially movable relative to one another. This can mean two or more cameras that are each movable relative to one another. Likewise, two or more cameras can be combined into a group, the cameras within a group being immovable relative to one another but movable relative to a further camera or a further group of cameras.

According to the invention, at least one inertial measurement unit is assigned to each independently movable camera or to each group of jointly movable cameras and is movable together with it. In a learning phase, acceleration data of the inertial measurement units are recorded together with camera images of the assigned cameras, and a homography matrix is calculated and stored; a look-up table is created from the homography matrices. In an online phase, the look-up table is queried for specific acceleration values determined by the inertial measurement units and for camera images captured by the assigned cameras, and returns homography matrices with which fused images are calculated.

In one embodiment, at least one of the cameras is designed as a mirror replacement camera on a cabin of a vehicle, and at least one further camera is designed as a reversing camera on a chassis of the vehicle, the cabin being movable relative to the chassis.

According to one aspect of the present invention, a vehicle is proposed, comprising a chassis and a cabin movable relative to the chassis, wherein at least one mirror replacement camera and at least one inertial measurement unit are arranged on the cabin, and at least one reversing camera and at least one inertial measurement unit are arranged on the chassis. The vehicle has a control unit configured to carry out the method described above.

In one embodiment, the vehicle is designed as a commercial vehicle or bus.

Furthermore, the vehicle can be designed as an autonomous or semi-autonomous vehicle.

According to the invention, the calculation for fusing the camera images can be accelerated by taking into account acceleration data determined by at least one inertial measurement unit and by using knowledge of the influence of specific accelerations on the overlap of the camera images. The method according to the invention eliminates the need for online image processing and computation to fuse images dynamically; instead, historical data covering the value range are used.
Embodiments of the invention are explained in more detail below with reference to the drawings, in which:

Fig. 1 is a schematic view of a commercial vehicle with a tractor unit and a semitrailer or trailer, and

Fig. 2 is a schematic view of a method for fusing camera images from multiple cameras.

Corresponding parts are provided with the same reference numerals in all figures.

Figure 1 is a schematic view of a vehicle 1, in particular a commercial vehicle 1 with a tractor unit 2 and a semitrailer 3 or trailer. The tractor unit 2 has at least one mirror replacement camera 4 instead of an exterior mirror. In particular, the tractor unit 2 has a mirror replacement camera 4 on each side instead of the respective exterior mirror. Furthermore, the tractor unit 2 has at least one reversing camera 5, which, like the at least one mirror replacement camera 4, is directed rearward, opposite the direction of travel. The tractor unit 2 also has at least one inertial measurement unit 6. For example, one inertial measurement unit 6 is arranged in or on a cabin 7 and a further inertial measurement unit 6 is arranged on a chassis 8 of the tractor unit 2. The reversing camera 5 is likewise arranged on the chassis 8 and is therefore movable relative to the mirror replacement camera 4 arranged on the air-sprung cabin 7.
Figure 2 is a schematic view of a method for fusing camera images B from a plurality of cameras 4, 5, for example the at least one mirror replacement camera 4 and the at least one reversing camera 5. For fusing several camera images B with overlapping image sections, a one-time calculation of the overlay and of the homography matrix suffices if the relative positions of the cameras 4, 5 are static.

In a movable system with dynamic camera positions (for example, the position of a mirror replacement camera 4 on an air-sprung cabin 7 relative to the position of a reversing camera 5 on the chassis 8), no such one-off calculation can be used. As the vehicle 1 moves, the cabin 7 moves differently relative to the chassis 8 depending on the accelerations. The overlapping image sections between the camera images B of the cameras 4, 5 are therefore also dynamic, and increased computational effort is required for fusing.

According to the present invention, it is proposed to accelerate the calculation for fusing the camera images B by taking into account acceleration data A determined by at least one inertial measurement unit 6 and by using knowledge of the influence of specific accelerations on the overlap of the camera images B.

Fusing (stitching) images works by identifying interest points in multiple images. These interest points are compared and superimposed, and a transformation is then calculated (see, for example, https://courses.cs.washington.edu/courses/cse576/16sp/Slides/10_ImageStitching.pdf). If a learning phase determines how the images will roughly overlap, individual sections of the images can later be examined to perform a faster online process. If the transformations (homography matrices) are already known for different camera positions, they can be reused when exactly the same camera positions occur again. If the positions are known only approximately, a faster and more accurate search algorithm can still be achieved, since it only needs to search for and compare interest points in a relatively small area.
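As an illustration of this interest-point approach (not part of the patent text), a minimal Python sketch using OpenCV might look as follows; the library choice, ORB features, and the RANSAC threshold are assumptions chosen for brevity:

```python
import cv2
import numpy as np

def estimate_homography(img_a, img_b, max_features=500):
    """Estimate the 3x3 homography mapping img_b onto img_a from matched interest points."""
    orb = cv2.ORB_create(max_features)            # interest point detector/descriptor
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Match descriptors and keep the best correspondences
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:100]

    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robustly fit the homography with RANSAC (5.0 px reprojection threshold)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```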
In a learning phase, in a step S1, historical acceleration data A of the inertial measurement units 6 can be acquired together with the camera images B or image sections, in particular across the entire value range. From these, a homography matrix can be calculated once in a step S2 and stored in a step S3. This produces, in a step S4, a look-up table which, in an online phase, can be queried in a step S5 for specific acceleration values A determined by the inertial measurement units 6 and for camera images B, and which returns homography matrices with which the fused images are calculated in a step S6. In doing so, two images are merged into a single image after a transformation. The computationally intensive part of the image processing thus occurs once, during the learning phase, and is transferable to many vehicles. For example, steps S1 to S4 can be performed during the development of the vehicle 1. This also has the advantage that blurring can be reduced by taking the acceleration data A into account at an early stage.
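A minimal sketch of this learning/online split (steps S1 to S6), reusing estimate_homography from the sketch above, could look as follows. The dictionary-based table, the quantization of the IMU readings into a key, and the simple overwrite blend are illustrative assumptions; the patent leaves these details open:

```python
import cv2
import numpy as np

BIN = 0.1  # quantization step for acceleration values (illustrative assumption)

def key_from_accel(accel_cabin, accel_chassis):
    """Quantize the difference of the two IMU readings into a discrete look-up key."""
    delta = np.asarray(accel_cabin) - np.asarray(accel_chassis)
    return tuple(np.round(delta / BIN).astype(int))

lut = {}  # look-up table: quantized IMU state -> homography matrix

def learn(accel_cabin, accel_chassis, img_mirror, img_rear):
    """Learning phase (S1-S4): record IMU state and store the matching homography."""
    lut[key_from_accel(accel_cabin, accel_chassis)] = estimate_homography(img_mirror, img_rear)

def fuse(accel_cabin, accel_chassis, img_mirror, img_rear):
    """Online phase (S5-S6): retrieve the stored homography and fuse the two images.
    A real system would fall back to the nearest stored key rather than failing."""
    H = lut[key_from_accel(accel_cabin, accel_chassis)]
    w = img_mirror.shape[1] + img_rear.shape[1]
    fused = cv2.warpPerspective(img_rear, H, (w, img_mirror.shape[0]))
    fused[:, :img_mirror.shape[1]] = img_mirror   # simple overwrite blend
    return fused
```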
The above-mentioned value range results from the possible extreme camera positions, e.g., with the cabin suspension fully compressed and fully extended. Within this value range, an optimal homography matrix is calculated by means of optimization. If the relative positions of the cameras are known, the optimization can be performed within a restricted value range. If the exact position is known, a homography matrix already known for this position can be used directly, without optimization.
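A sketch of such a restricted search, under the same illustrative assumptions as above: a stored approximate homography predicts where the overlap lies, and interest points are only sought in that strip of the first image, with the result shifted back to full-image coordinates:

```python
def refine_in_window(img_a, img_b, H_approx, margin=40):
    """Re-estimate the homography, searching interest points only in the region
    of img_a where H_approx predicts the overlap (plus a safety margin)."""
    h_b, w_b = img_b.shape[:2]
    corners_b = np.float32([[0, 0], [w_b, 0], [w_b, h_b], [0, h_b]]).reshape(-1, 1, 2)
    projected = cv2.perspectiveTransform(corners_b, H_approx)  # img_b outline in img_a coords

    x0 = max(int(projected[:, 0, 0].min()) - margin, 0)
    x1 = min(int(projected[:, 0, 0].max()) + margin, img_a.shape[1])

    # Estimate on the cropped strip, then translate back to full-image coordinates
    H_crop = estimate_homography(img_a[:, x0:x1], img_b)
    T = np.array([[1.0, 0.0, x0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    return T @ H_crop
```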
The look-up table has an input and an output, the input being formed by the position (or the change in the relative position) of the two cameras, derived from integrated acceleration data. The output is the corresponding homography matrix.
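How the relative position is obtained from the acceleration data is not detailed here; a naive double integration, as sketched below, is one possibility, bearing in mind that pure inertial integration drifts over time (compare the IONet paper listed under the non-patent citations):

```python
import numpy as np

def relative_displacement(accel_cabin, accel_chassis, dt):
    """Approximate the change in relative position of the two IMUs by double
    integration of the difference of their acceleration samples (shape: N x 3).
    Drift correction is deliberately omitted in this sketch."""
    rel_accel = np.asarray(accel_cabin) - np.asarray(accel_chassis)
    velocity = np.cumsum(rel_accel, axis=0) * dt      # first integration: m/s
    position = np.cumsum(velocity, axis=0) * dt       # second integration: m
    return position[-1]
```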
Daimler Truck AG, Klemmer
October 25, 2024
List of reference symbols

1 vehicle, commercial vehicle
2 tractor unit
3 semitrailer
4 camera, mirror replacement camera
5 camera, reversing camera
6 inertial measurement unit
7 cabin
8 chassis
A acceleration data
B camera images
S1 to S6 steps
Claims
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102023130929.3A DE102023130929B3 (en) | 2023-11-08 | 2023-11-08 | Method for fusing camera images and vehicle |
| DE102023130929.3 | 2023-11-08 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025098882A1 true WO2025098882A1 (en) | 2025-05-15 |
Family
ID=93431891
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2024/080889 Pending WO2025098882A1 (en) | 2023-11-08 | 2024-10-31 | Method for fusing camera images, and vehicle |
Country Status (2)
| Country | Link |
|---|---|
| DE (1) | DE102023130929B3 (en) |
| WO (1) | WO2025098882A1 (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE102013209156A1 (en) | 2013-05-16 | 2014-12-04 | Continental Automotive Gmbh | A camera with a housing for use in a motor vehicle surround view system and method for calibrating and monitoring the calibration of such a camera |
| US20170341583A1 (en) | 2016-05-27 | 2017-11-30 | GM Global Technology Operations LLC | Systems and methods for towing vehicle and trailer with surround view imaging devices |
| DE102019202269A1 (en) | 2019-02-20 | 2020-08-20 | Continental Automotive Gmbh | Method for calibrating a mobile camera unit of a camera system for a motor vehicle |
| US20210179172A1 (en) | 2019-12-16 | 2021-06-17 | GM Global Technology Operations LLC | Method and apparatus for determining a trailer hitch articulation angle in a motor vehicle |
| DE102021106988A1 (en) * | 2021-03-22 | 2022-09-22 | Conti Temic Microelectronic Gmbh | Method and system for determining a vehicle's own motion information |
| US20220309776A1 (en) * | 2021-03-29 | 2022-09-29 | Conti Temic Microelectronic Gmbh | Method and system for determining ground level using an artificial neural network |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE102022213225B4 (en) | 2022-07-20 | 2024-09-05 | Continental Automotive Technologies GmbH | Camera system, vehicle and procedure |
- 2023-11-08: DE application DE102023130929.3A granted as patent DE102023130929B3 (active)
- 2024-10-31: WO application PCT/EP2024/080889 published as WO2025098882A1 (pending)
Non-Patent Citations (1)
| Title |
|---|
| CHEN CHANGHAO ET AL: "IONet: Learning to Cure the Curse of Drift in Inertial Odometry", PROCEEDINGS OF THE AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, vol. 32, no. 1, 1 January 2018 (2018-01-01), XP093017568, ISSN: 2159-5399, DOI: 10.1609/aaai.v32i1.12102 * |
Also Published As
| Publication number | Publication date |
|---|---|
| DE102023130929B3 (en) | 2025-02-06 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP3560796B1 (en) | Trailer driving assistance system for a motor vehicle and method for operating a trailer driving assistance system of a motor vehicle | |
| EP3328686B1 (en) | Method and device for displaying the surrounding field of a tractor and trailer | |
| DE102008045436A1 (en) | Method for determining inclination angle between tractor and trailer for e.g. lorry, involves automatically determining inclination angle based on comparison of detected image with reference images and reference image inclination angle | |
| DE102004029130A1 (en) | Method for coupling a trailer to a motor vehicle | |
| DE102016123884A1 (en) | CLUTCH ANGLE DETECTION FOR TRAILER REVERSE ASSISTANCE SYSTEM | |
| DE102015107675A9 (en) | System and method for estimating vehicle dynamics using feature points in images from multiple cameras | |
| DE102021207558B4 (en) | Camera system and method for generating a 3D view with a camera system | |
| DE102004028763A1 (en) | Andockassistent | |
| EP3284649B1 (en) | Driver assistance for parking a motor vehicle and a trailer by means of virtual sensors | |
| DE102021201525A1 (en) | Method for determining a spatial orientation of a trailer | |
| EP4308418B1 (en) | Method and environment-capture system for producing an environmental image of an entire multi-part vehicle | |
| DE102019122086A1 (en) | Driver assistance for a combination | |
| DE102020112549A1 (en) | Method for measuring an add-on part arranged on a vehicle on the basis of images from a mobile terminal, computing device and driver assistance system | |
| EP3947246A1 (en) | Method for warehousing assistance for an industrial truck, and industrial truck | |
| DE102023130929B3 (en) | Method for fusing camera images and vehicle | |
| DE102015226695A1 (en) | Method and device for determining the overhang length of a trailer | |
| EP2603403B1 (en) | Method of displaying pictures on a display in a motor vehicle, corresponding driver assistance system and vehicle | |
| EP3743311B1 (en) | Method and device for operating a video monitoring system for a motor vehicle | |
| WO2018188700A1 (en) | Surround view system for a vehicle | |
| EP3032517A1 (en) | Device and method for supporting a driver of a vehicle, in particular a commercial vehicle | |
| EP3301646B1 (en) | Camera system and vehicle comprising such a camera system | |
| DE102021208819A1 (en) | Method for determining a kink angle | |
| DE102022112797A1 (en) | Method for providing at least one characteristic trailer parameter, computing device, computer-readable (storage) medium and system | |
| WO2022037749A1 (en) | Method for calibrating a yaw rate sensor of a vehicle | |
| DE102016124320A1 (en) | A method for supporting a driver of a motor vehicle when maneuvering the motor vehicle with a trailer coupled to the motor vehicle, computing device, driver assistance system and motor vehicle |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24801839; Country of ref document: EP; Kind code of ref document: A1 |