
WO2015014883A1 - Method for generating a look-up table in the operation of a camera system, camera system and motor vehicle - Google Patents


Info

Publication number
WO2015014883A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, motor vehicle, transformation data, camera, LUT
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/EP2014/066354
Other languages
French (fr)
Inventor
Patrick Eoghan Denny
Mark Patrick GRIFFIN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Connaught Electronics Ltd
Original Assignee
Connaught Electronics Ltd
Application filed by Connaught Electronics Ltd
Publication of WO2015014883A1

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60R — VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 — Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 — Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 — Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/28 — Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with an adjustable field of view
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 — Control of cameras or camera modules
    • H04N23/698 — Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60R — VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 — Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 — Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 — Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 — Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/27 — Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 — Geometric image transformations in the plane of the image
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 — Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 — Television systems
    • H04N7/18 — Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60R — VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 — Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30 — Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B60R2300/302 — Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing combining image information with GPS information or vehicle data, e.g. vehicle speed, gyro, steering angle data

Definitions

  • the invention relates to a method for operating a camera system of a motor vehicle, in which an image of an environmental region of the motor vehicle is provided by means of a camera of the camera system. The image is then transformed to an image presentation using transformation data by means of an image processing device, wherein camera parameters of the camera are taken into account in transforming the image. The image presentation is then displayed on a display of the camera system.
  • the invention relates to a camera system formed for performing such a method as well as to a motor vehicle with such a camera system.
  • Camera systems for motor vehicles are already known from the prior art.
  • several cameras can be employed in a motor vehicle, wherein it becomes increasingly common nowadays to use a camera assembly with at least two cameras for a camera system of a vehicle, which each capture an environmental region of the motor vehicle.
  • four cameras can be employed, which capture the entire environment around the motor vehicle.
  • an overall image presentation can be provided from the images of all of the cameras, such as for example the so-called "bird eye view”.
  • This image presentation represents a plan view of the motor vehicle as well as its environment from a bird's eye view and thus for example from a reference point of view directly above the motor vehicle.
  • The provision of such an environmental representation from the images of several cameras is for example known from the document US 2011/0156887.
  • the display can usually be switched between different operating modes, which differ from each other with respect to the displayed image presentation and thus with respect to the view.
  • the driver of the motor vehicle can select between different views, which are conceived and optimized for different road situations.
  • the display can for example also be switched into a cross traffic operating mode, in which the so-called junction view is displayed, i.e. a view, which shows the cross traffic.
  • Such a junction view can for example be provided based on images of a camera, which is disposed on the front - for example on the front bumper - or else in the rear region - for example on the rear bumper or on a tailgate - and has a relatively wide opening angle of 160° to 200°.
  • an image presentation can be displayed on the display, which is generated from the images of a rear view camera and presents the environmental region behind the motor vehicle.
  • the raw images of the camera have to be transformed or mapped into the coordinate system of the display.
  • Usually, the raw images are communicated from the camera to a central electronic image processing device, which digitally processes the images. If multiple cameras are employed, this central image processing device receives the digital image data of all of the cameras.
  • the images are transformed from the coordinate system of the respective camera into the coordinate system of the display.
  • transformation data is used, which defines the transformation of the raw image.
  • This transformation data is for example in the form of a so-called look-up table and also considers the camera parameters, which in particular include the position of the attachment of the camera to the vehicle, the orientation of the camera on the vehicle, and the characteristics of the used lens.
  • the position on the vehicle is defined by three coordinate values (x, y, z), which specify the unique position of the camera with respect to the vehicle body.
  • the orientation of the camera on the vehicle in turn is preset by three angular values, which specify the angles of orientation of the camera around the three vehicle axes x, y, z.
  • the characteristics of the lens for example define a distortion of the images caused by the lens and therefore should be taken into account because so-called fish-eye lenses are usually employed, which cause a relatively great distortion of the images. This distortion is corrected within the scope of the mentioned transformation.
  • In generating the transformation data (look-up table), the present and known camera parameters are thus taken into account on the one hand.
  • On the other hand, the so-called "viewport" is also defined by the transformation data, i.e. a partial region of the image, which is used for generating the image presentation for the display.
  • This viewport depends on the currently activated operating mode of the display and thus on the current view displayed on the display.
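To make the role of the look-up table concrete, the following sketch shows how such a table can drive the whole raw-image-to-display mapping: each display pixel looks up the raw-image pixel that feeds it, so viewport selection and distortion correction are both baked into the table. This is a minimal illustration with assumed names and shapes, not the implementation described in the patent.

```python
import numpy as np

def apply_lut(raw_image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Remap a raw camera image into display coordinates.

    lut has shape (H_disp, W_disp, 2): lut[v, u] = (y, x) names the
    raw-image pixel that supplies display pixel (u, v).  Viewport
    selection and fish-eye correction are both encoded in the table.
    """
    ys, xs = lut[..., 0], lut[..., 1]
    return raw_image[ys, xs]

# Toy example: a 4x4 display view sampling the centre of an 8x8 raw image.
raw = np.arange(64, dtype=np.uint8).reshape(8, 8)
vv, uu = np.mgrid[0:4, 0:4]
lut = np.stack([vv + 2, uu + 2], axis=-1)   # viewport offset of (2, 2)
print(apply_lut(raw, lut))
```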
  • One object of the invention is to provide a method, a camera system as well as a motor vehicle improved with respect to the prior art. According to the invention, this object is solved by a method, by a camera system as well as by a motor vehicle having the features of the respective independent claims.
  • a method according to the invention serves for operating a camera system of a motor vehicle.
  • At least one camera of the camera system provides an image of an environmental region of the motor vehicle.
  • the image is transformed to an image presentation displayed on a display of the motor vehicle by means of a digital image processing device.
  • the transformation of the image is effected using transformation data, wherein preset camera parameters of the camera are taken into account in transforming the image.
  • According to the invention, a current vehicle level and thus the current chassis height is acquired by means of at least one sensor of the motor vehicle, and the transformation data is generated depending on the measured vehicle level in the operation of the camera system.
  • the invention is based on the realization that the position and the orientation of the camera relative to the vehicle body are fixedly preset and thus known, but the position and the orientation of the camera relative to the ground or to the road, on which the motor vehicle is located, can vary over time.
  • In particular, the level of the camera above the ground as well as the orientation of the camera can be affected by a plurality of factors, such as for example by the loading of the trunk of the vehicle, by the number of vehicle occupants, by uneven distribution of weight in the vehicle, by coupling a trailer, or else during cross-country driving.
  • the invention is based on the realization that this variation of the vehicle level also causes variation of the current view on the display.
  • the variation of the vehicle level results in errors in the composition of the images of different cameras.
  • Furthermore, the invention is based on the realization that the disadvantages of the prior art can be avoided in that the current vehicle level is measured by means of at least one sensor and taken into account in generating the transformation data (in particular the so-called look-up table). In this manner, all of the variations of the vehicle level can be compensated for, and the desired image presentation can always be displayed on the display, i.e. always the same view with respect to the ground.
  • Presently, the term "vehicle level" is preferably understood to mean the "ride height" or "ground clearance". Then, at least the current level of the camera above the ground can be inferred from the current value of the vehicle level and taken into account in generating the transformation data.
  • the vehicle level can for example be measured in a damper of the motor vehicle, i.e. a component, which causes the oscillations of the sprung masses to decay.
  • Therein, the relative position of the piston with respect to the cylinder can for example be measured by the sensor, which then allows conclusions to be drawn about the actual vehicle level and the level of the camera above the ground.
  • the invention is not restricted to the arrangement of the sensor in the damper; basically, the at least one sensor can be disposed in any position, which allows the acquisition of the vehicle level.
  • At least two, in particular at least three, preferably four such sensors are used, which each acquire the vehicle level in the respective corner regions of the motor vehicle.
  • uneven distributions of the loading in the motor vehicle can be detected such that the current orientation of the camera around all of the vehicle axes can also be determined and taken into account in the transformation data.
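A short sketch of how four corner measurements can be turned into a camera pose over the ground: fitting a plane through the four ride-height values yields the mean level plus pitch and roll, from which the height of a camera at a known mounting point follows. All positions and numeric values below are illustrative assumptions, not figures from the patent.

```python
import numpy as np

# Corner positions (x: longitudinal, y: lateral) of the level sensors and
# the camera mounting point, in vehicle coordinates (metres, assumed).
SENSOR_XY = np.array([[ 1.4,  0.8],   # front left
                      [ 1.4, -0.8],   # front right
                      [-1.4,  0.8],   # rear left
                      [-1.4, -0.8]])  # rear right
CAMERA_XY = np.array([-2.0, 0.0])     # rear camera above the bumper
CAMERA_Z_BODY = 0.9                   # camera height above the body reference (m)

def camera_pose_over_ground(levels: np.ndarray):
    """Fit the body plane z = a*x + b*y + c through the four corner
    ride-height values; a gives pitch, b gives roll, c the mean level."""
    A = np.column_stack([SENSOR_XY, np.ones(4)])
    (a, b, c), *_ = np.linalg.lstsq(A, levels, rcond=None)
    height = CAMERA_Z_BODY + a * CAMERA_XY[0] + b * CAMERA_XY[1] + c
    return height, np.degrees(np.arctan(a)), np.degrees(np.arctan(b))

# Loaded trunk: the rear sensors report a lower ride height.
print(camera_pose_over_ground(np.array([0.20, 0.20, 0.14, 0.14])))
```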
  • the transformation data is preferably in the form of a look-up table representing a transformation map, which is applied to the raw image of the camera in order to alter the pixels of the image and map them into the coordinate system of the display.
  • the transformation data thus represents a projection function or mapping function, by means of which the image is transformed into the coordinate system of the display.
  • Therein, this transformation data considers the camera parameters, which are known and for example can be stored in a memory of the image processing device.
  • In particular, the camera parameters include the characteristics of the lens of the camera, the position of the camera relative to the vehicle body - this position can be determined by three coordinate values x, y, z, i.e. in the vehicle longitudinal, transverse and vertical directions - as well as the orientation of the camera relative to the vehicle body.
  • The orientation is then defined by three angular values, namely an angle around the vehicle longitudinal axis, an angle around the vehicle transverse axis as well as an angle around the vehicle vertical axis.
  • the camera parameters thus can describe the characteristics of the lens and therefore the optical characteristics of the camera; on the other hand, the camera parameters also include the fixed installation position of the camera on the vehicle.
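The fixed camera parameters described above can be pictured as a simple record; the field names and example values below are assumptions chosen for illustration, not the patent's notation.

```python
from dataclasses import dataclass

@dataclass
class CameraParameters:
    """Fixed installation parameters of one vehicle camera."""
    x: float            # position along the vehicle longitudinal axis (m)
    y: float            # position along the vehicle transverse axis (m)
    z: float            # position along the vehicle vertical axis (m)
    rx: float           # orientation angle around the longitudinal axis (deg)
    ry: float           # orientation angle around the transverse axis (deg)
    rz: float           # orientation angle around the vertical axis (deg)
    fisheye_k: tuple    # distortion coefficients of the fish-eye lens model

rear_camera = CameraParameters(x=-2.0, y=0.0, z=0.9,
                               rx=0.0, ry=25.0, rz=180.0,
                               fisheye_k=(0.08, -0.02, 0.003, 0.0))
print(rear_camera)
```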
  • the image processing device is preferably a component separate from the camera.
  • the image processing device can be constituted by a controller, which may include a digital signal processor.
  • the signal processor then serves for performing the transformation of the image and for generating the image presentation.
  • the "generation" of the transformation data presently in particular implies that template data stored in the image processing device is used and adapted or completed depending on the measured vehicle level in the operation of the camera system.
  • For example, a look-up table can be stored, which is then updated or completed depending on the measured vehicle level.
  • Thus, preferably, not the entire look-up table has to be generated, but only that portion which depends on the level of the camera above the ground and on the orientation of the camera.
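The "template plus completion" idea might look like the following sketch. For brevity, the level-dependent part of the table is modelled as a pure vertical shift of the sampled rows; a production system would reproject through the full camera model instead, so treat this only as an illustration of updating a stored template rather than regenerating the whole table. All names and the px_per_metre value are assumptions.

```python
import numpy as np

def complete_lut(template_lut: np.ndarray,
                 nominal_level: float,
                 measured_level: float,
                 px_per_metre: float = 200.0) -> np.ndarray:
    """Adapt a factory-stored template LUT to the measured vehicle level.

    Only the level-dependent portion changes: here, the rows sampled
    from the raw image are shifted according to the level deviation.
    """
    dz = measured_level - nominal_level
    shift = int(round(dz * px_per_metre))
    adapted = template_lut.copy()
    adapted[..., 0] += shift          # move the sampled rows only
    return adapted

template = np.stack(np.mgrid[100:340, 200:520], axis=-1)  # nominal viewport
lut = complete_lut(template, nominal_level=0.18, measured_level=0.14)
print(template[0, 0], "->", lut[0, 0])
```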
  • a partial region (in particular exclusively a partial region) of the image is used for generating the image presentation and the partial region is defined by the transformation data.
  • the generation of the transformation data can then include that the partial region of the image is determined depending on the measured vehicle level.
  • Therein, the "partial region" is in particular understood to mean the viewport, i.e. an image section used for generating the image presentation for the display. This viewport is determined depending on the measured vehicle level in this embodiment.
  • This embodiment has the advantage that always the same environmental region of the motor vehicle can be displayed on the display, independently of the current vehicle level.
  • this embodiment allows correct and leap-free composition of the images of different cameras, which proves particularly advantageous in particular in the above mentioned "bird eye view”.
  • the camera system can also include multiple cameras: In an embodiment, at least two cameras can be employed, which each provide an image of an environmental region of the motor vehicle. For generating the image presentation (for example the "bird eye view"), then, respective partial regions (viewports) of the images can be combined with each other such that the partial regions mutually overlap in an overlapping region.
  • the overlapping region of the respective partial regions can be defined by the transformation data, and the generation of the transformation data can include that the overlapping region of the respective partial regions is determined depending on the vehicle level.
  • the transition regions between the images of different cameras can thus be compensated for depending on the measured vehicle level, and an image presentation can be provided on the display, which is based on the images of different cameras and does not have any leaps and double image structures in the transition regions.
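A minimal sketch of composing two already-transformed partial regions whose overlap width comes from the transformation data (and would therefore be recomputed whenever the measured vehicle level changes). Overlapping pixels are simply averaged here; the function and parameter names are assumptions.

```python
import numpy as np

def compose_pair(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Join two transformed partial regions sharing `overlap` columns."""
    h = left.shape[0]
    w = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, w))
    weight = np.zeros((h, w))
    out[:, :left.shape[1]] += left          # left viewport
    weight[:, :left.shape[1]] += 1
    out[:, w - right.shape[1]:] += right    # right viewport
    weight[:, w - right.shape[1]:] += 1
    return out / weight                     # shared columns are averaged

a = np.full((2, 5), 100.0)
b = np.full((2, 5), 200.0)
print(compose_pair(a, b, overlap=2))   # the 2 shared columns average to 150
```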
  • a suspension system of the motor vehicle is switched between at least two predetermined suspension modes, which can for example be selected by the driver himself.
  • the following suspension modes can be provided, in which the motor vehicle has different levels, which are factory-preset: a standard mode with an intermediate vehicle level; a sports mode with a low vehicle level as well as an off-road mode with a greater vehicle level.
  • the transformation data for the transformation of the image can be generated separately for each suspension mode.
  • the transformation of the image to the image presentation can therefore be particularly precisely performed in each suspension mode of the motor vehicle.
  • the basic vehicle level is fixedly preset in each suspension mode, but the vehicle level is also influenced by a plurality of factors (if level regulation is not present), such as in particular by the loading of the motor vehicle and the like.
  • the currently activated suspension mode can be acquired by the image processing device.
  • For this currently activated suspension mode, the transformation data can then be generated depending on the measured vehicle level. This transformation data, once generated, can then be reused for the same suspension mode if the suspension system is switched into another mode and then back into the original mode.
  • It proves advantageous if, in addition to the transformation data for the currently activated suspension mode, transformation data for at least one other (non-activated) suspension mode of the suspension system is also generated separately depending on the transformation data of the current suspension mode and/or depending on the measured vehicle level.
  • the separate transformation data can be virtually simultaneously generated for all of the suspension modes. For example, this can be performed upon activating the ignition of the motor vehicle such that the transformation data for all of the suspension modes is provided already at this time. If the suspension system is then switched into another mode, thus, the already generated transformation data can be directly accessed such that a correct image presentation can be displayed directly after switching the suspension system.
  • The generation of the transformation data for the other, currently not activated suspension modes of the suspension system is possible because the vehicle level is basically factory-predefined for all of the suspension modes and thus the difference of the vehicle level between the respective suspension modes is known. If the current vehicle level for the currently activated suspension mode is measured, the vehicle level in the other suspension modes can also be inferred from these measured values.
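This inference is a one-line computation once the factory-preset mode levels are known. The sketch below uses invented nominal values; the point is that the loading-dependent offset measured in the active mode carries over to every other mode.

```python
# Factory-preset nominal levels per suspension mode (illustrative values).
NOMINAL_LEVEL = {"sport": 0.12, "standard": 0.16, "offroad": 0.22}

def levels_for_all_modes(active_mode: str, measured_level: float) -> dict:
    """Infer the vehicle level in the non-activated suspension modes from
    the measurement taken in the active one."""
    offset = measured_level - NOMINAL_LEVEL[active_mode]
    return {mode: nominal + offset for mode, nominal in NOMINAL_LEVEL.items()}

# Vehicle sits 3 cm low in standard mode due to loading:
print(levels_for_all_modes("standard", 0.13))
# -> sport 0.09, standard 0.13, offroad 0.19 (same offset everywhere)
```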
  • In an embodiment, the transformation data is generated upon startup of a prime mover - i.e. a source of momentum of the motor vehicle - or upon activation of the ignition, in particular upon each startup of the prime mover or each time the ignition is activated.
  • In particular, transformation data for all of the suspension modes of the motor vehicle is generated at this time.
  • the generation of the transformation data at the time of the startup of the prime mover or the ignition has the advantage that the transformation data is therefore available for the entire period of time of the current operation of the motor vehicle, in particular for all of the suspension modes of the suspension system.
  • During the subsequent operation, transformation data for example does not have to be generated anymore, such that even upon switching between different suspension modes, the transformation data for the respective suspension mode is already available.
  • the current vehicle level is - in particular continuously - acquired already before the startup of the prime mover or before the activation of the ignition and the acquired measured values of the vehicle level are stored in the image processing device for the subsequent generation of the transformation data.
  • In particular, the vehicle level is continuously acquired during travel, for example at predetermined time intervals, in a predetermined suspension mode - in particular in the off-road mode - and the transformation data is also respectively newly generated continuously during travel, for example likewise at predetermined time intervals, based on the current measured values of the vehicle level.
  • The frequent variations of the vehicle level can thus be quickly compensated for by acquiring the current vehicle level - and thus also the current level and/or orientation of the camera - during travel and generating the transformation data accordingly.
  • the display or the camera system can be switched between at least two operating modes, which differ from each other with respect to the image presentation and thus with respect to the view displayed on the display.
  • the operating mode of the display and thus the view on the display is altered by the driver himself.
  • the transformation data for the transformation of the image can be generated separately for each operating mode of the display.
  • the transformation can therefore be provided individually and specifically for each one of the operating modes of the display. If the camera system includes multiple cameras, thus, it can also be provided that the transformation data is generated separately for each camera or for each operating mode of the display.
  • switching of the display from a previous operating mode into another, current operating mode is acquired by the image processing device.
  • The transformation data for this current operating mode can then be generated directly upon switching and thus due to the switching of the display into the current operating mode. This for example means that upon the startup of the prime mover or upon the activation of the ignition, the transformation data is generated exclusively for the currently activated operating mode of the display, and in particular for all of the suspension modes of the suspension system. If the display is then switched into another operating mode in the operation of the motor vehicle, the transformation data is also generated for this new operating mode, in particular also for all of the suspension modes.
  • the measured values of the vehicle level can be used, which have been acquired either previously before the startup of the prime mover or before the activation of the ignition, or else are currently acquired by means of the at least one sensor.
  • the generation of the transformation data upon switching the display has the advantage that the computational power of the image processing device can be optimally utilized because the transformation data does not have to be generated at the same time for all of the operating modes of the display and all of the suspension modes of the vehicle and the generation of the transformation data thus can be distributed over the time.
  • In addition, unnecessary generation of transformation data for an operating mode of the display that is not activated at all in the current operation of the motor vehicle is prevented.
  • Thus, computational power can be saved.
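The lazy generation strategy can be sketched as a small cache keyed by display view and suspension mode: at a view's first activation its transformation data is built for all suspension modes, and later accesses are served from the cache. The class and the builder callback are stand-ins for the real LUT generation, not the patent's API.

```python
class LutCache:
    """Generate transformation data lazily, per display operating mode."""

    MODES = ("sport", "standard", "offroad")

    def __init__(self, build_lut):
        self._build = build_lut      # (view, suspension_mode) -> LUT
        self._cache = {}

    def get(self, view: str, suspension_mode: str):
        if (view, suspension_mode) not in self._cache:
            # First use of this view: generate it for every suspension
            # mode so later mode switches are served instantaneously.
            for m in self.MODES:
                self._cache[(view, m)] = self._build(view, m)
        return self._cache[(view, suspension_mode)]

cache = LutCache(lambda view, mode: f"LUT[{view}/{mode}]")
print(cache.get("bird_eye", "standard"))   # built on first access
print(cache.get("bird_eye", "offroad"))    # served from the cache
```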
  • multiple sensors can also be employed, which are disposed distributed on the motor vehicle (for example in the respective dampers) and each acquire the vehicle level at the respective installation location. If multiple sensors are present, thus, the exact current position and/or the current orientation of the camera relative to the ground or floor, on which the motor vehicle is located, can also be determined based on measured values of the sensors. The transformation data can then be generated based on the thus determined position and/or orientation of the camera. If the exact position and/or the orientation of the camera are known, thus, the viewport of the image can for example be optimally defined such that the desired view can be generated on the display.
  • Measured values of a tilt sensor or of at least one wheel of the motor vehicle can also be taken into account, which further improves the accuracy.
  • Furthermore, the behavior of the suspension system with respect to the vehicle level can be predicted by the image processing device. For example, if the suspension reaches the minimum vehicle height level (due to damper behavior), the image processing device can predict that the suspension system will start moving back up, increasing the vehicle height level. This prediction can be based on a suspension behavior model. Depending on this prediction, the transformation data can be adapted.
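Such a suspension behavior model could be as simple as a damped spring acting on the sprung mass. The sketch below (all constants invented for illustration) shows the idea: when the body sits below its rest level with no remaining downward velocity, the model predicts the rebound, so the transformation data can be adapted slightly ahead of the actual movement.

```python
def predict_level(level: float, velocity: float, dt: float,
                  rest_level: float = 0.16,
                  stiffness: float = 40.0, damping: float = 9.0):
    """One integration step of a damped spring model of the sprung mass."""
    accel = -stiffness * (level - rest_level) - damping * velocity
    velocity += accel * dt
    level += velocity * dt
    return level, velocity

# Body compressed 4 cm and momentarily at rest: the model predicts that it
# moves back up toward the rest level over the next fraction of a second.
level, vel = 0.12, 0.0
for _ in range(5):
    level, vel = predict_level(level, vel, dt=0.05)
    print(round(level, 4))
```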
  • the invention relates to a camera system for a motor vehicle, including at least one camera for providing an image of an environmental region of the motor vehicle, as well as including an image processing device for transforming the image to an image presentation using transformation data as well as considering camera parameters of the camera, wherein the image presentation is provided for displaying on a display.
  • the image processing device is adapted to generate the transformation data depending on a measured vehicle level in the operation of the camera system.
  • A motor vehicle according to the invention, in particular a passenger car, includes a camera system according to the invention.
  • Fig. 1 a schematic illustration of a motor vehicle with a camera system according to an embodiment of the invention;
  • Fig. 2 a block diagram for explaining an image transformation;
  • Fig. 3 a further block diagram;
  • Fig. 4 and 5 an exemplary raw image as well as an exemplary image presentation provided by means of image transformation of the raw image;
  • Fig. 6 a flow diagram of a method according to an embodiment of the invention; and
  • Fig. 7 and 8 schematic illustrations for explaining the problem in composing images of different cameras.
  • a motor vehicle 1 illustrated in Fig. 1 is for example a passenger car.
  • The motor vehicle 1 includes a camera system 2, which in the embodiment has a plurality of cameras 3, 4, 5, 6.
  • A first camera 3 is for example disposed on the front bumper of the motor vehicle 1.
  • a second camera 4 is for example disposed in the rear area, for instance on the rear bumper or on a tailgate.
  • the two lateral cameras 5, 6 can for example be integrated in the respective exterior mirrors.
  • the cameras 3, 4, 5, 6 are electrically coupled to a central image processing device 7, which in turn is coupled to a display 8.
  • the display 8 is any display device, for example an LCD display.
  • The cameras 3, 4, 5, 6 are video cameras, which are each able to capture a sequence of images per time unit and communicate it to the image processing device 7.
  • the cameras 3, 4, 5, 6 can for example be CCD cameras or CMOS cameras.
  • The camera 3 captures an environmental region 9 in front of the motor vehicle 1.
  • The camera 4 captures an environmental region 10 behind the motor vehicle 1.
  • The camera 5 captures a lateral environmental region 11 to the left besides the motor vehicle 1, while the camera 6 captures an environmental region 12 on the right side of the motor vehicle 1.
  • The cameras 3, 4, 5, 6 provide images of the respective environmental regions 9, 10, 11, 12 and communicate these images to the image processing device 7. As is apparent from Fig. 1, the imaged environmental regions 9, 10, 11, 12 can also mutually overlap in pairs.
  • A sensor 13 can be provided for each wheel of the motor vehicle 1, by means of which the vehicle level and thus the ground clearance of the motor vehicle 1 at the respective installation location of the sensor 13 is acquired.
  • the sensors 13 can be disposed in the respective corner regions of the motor vehicle 1 .
  • the sensors 13 are integrated in the respective dampers and thus measure a relative position of the piston relative to the cylinder.
  • The image processing device 7 can pick up the respective measured values of the sensors 13 on a communication bus of the motor vehicle 1.
  • the motor vehicle 1 has a suspension system not illustrated in more detail, which is operated in at least two suspension modes, which differ from each other with respect to the vehicle level.
  • three suspension modes are provided, such as for example: a standard mode with an intermediate vehicle level, a sporting suspension mode with a low vehicle level as well as an off-road suspension mode with a relatively high vehicle level.
  • The current suspension mode can be selected by the driver of the motor vehicle 1.
  • the vehicle levels for the respective suspension modes are factory-preset such that the differences between the vehicle levels of the different suspension modes are also known. These differences are stored in the image processing device 7.
  • the display 8 and the camera system 2, respectively, can be switched between different operating modes, wherein the switching between the different operating modes is for example effected by the driver himself using a corresponding operating device.
  • This operating device can for example be integrated in the display 8, which can be configured as a touch display.
  • different image presentations are generated, which are displayed on the display 8.
  • the operating modes differ in the view, which is presented on the display 8.
  • an image presentation 14 can for example be generated, which is based on the images I3, I4, I5, I6 of all of the cameras 3, 4, 5, 6.
  • the image processing device 7 receives the images I3, I4, I5, I6 of all of the cameras 3, 4, 5, 6 and generates the image presentation 14 from the images I3, I4, I5, I6.
  • This image presentation 14 shows the motor vehicle 1 and the environment 9, 10, 11, 12 for example from a bird's eye view and thus from a point of view, which is above the motor vehicle 1.
  • the images I3, I4, I5, I6 are each subjected to a transformation and then composed. For the respective transformation, transformation data is used, which is provided in the form of a look-up table LUT. Therein, a separate look-up table LUT is provided for each camera 3, 4, 5, 6.
  • a partial region I3', I4', I5', I6' of the respective image I3, I4, I5, I6 is determined.
  • The partial regions I3', I4', I5', I6' are so-called viewports, which are defined in the respective look-up table LUT3, LUT4, LUT5, LUT6.
  • For example, partial regions I3', I4', I5', I6' can be used, which show the respective environmental region 9, 10, 11, 12 of the motor vehicle 1 up to a predetermined distance from the motor vehicle 1.
  • the respective partial region I3', I4', I5', I6' is transformed into the coordinate system of the display 8.
  • the look-up table LUT3, LUT4, LUT5, LUT6 represents a transformation map, by means of which the pixels of the respective image I3, I4, I5, I6 are correspondingly altered and mapped to the display 8.
  • correction of the distortion of the respective image I3, I4, I5, I6 can also be performed, which is caused by the above mentioned fish-eye lens.
  • the partial regions I3', I4', I5', I6' mutually overlap in the image presentation 14 (bird eye view) in pairs in overlapping regions 15. These overlapping regions 15 too and thus the composition of the partial regions I3', I4', I5', I6' are preset by the transformation data LUT3, LUT4, LUT5, LUT6.
  • A further operating mode of the display 8 is explained in more detail with reference to Fig. 3.
  • an image presentation 14 is displayed on the display 8, which is based exclusively on the images of a single camera, namely for example the rear view camera 4.
  • This situation also corresponds to a camera system 2, in which only a single camera is employed.
  • the camera 4 provides the images I4 to the image processing device 7, which performs the image transformation of the images I4 to the image presentation 14.
  • For this image transformation, transformation data is used, which is provided in the form of a look-up table LUT4'.
  • This image transformation involves that a partial region I4' (viewport) is selected from the image I4, and this partial region I4' is then transformed into the coordinate system of the display 8.
  • the above mentioned distortion correction is also performed.
  • An exemplary image transformation of the image I4 of the camera 4 is illustrated in Fig. 4 and 5.
  • an exemplary raw image I4 of the camera 4 is shown in Fig. 4.
  • a fish-eye lens is used, which causes a relatively great distortion of the image I4, in particular in the edge regions.
  • By the transformation, an image presentation 14 arises, as it is exemplarily shown in Fig. 5.
  • only a partial region of the image I4 is used for the image presentation 14, which is then also corrected with respect to the distortion and adapted to the display 8.
  • camera parameters of the respective cameras 3, 4, 5, 6 also have to be taken into account.
  • These camera parameters in particular include the respective installation location of the cameras 3, 4, 5, 6 - i.e. the position of the cameras 3, 4, 5, 6 in a coordinate system x, y, z defined with respect to the vehicle body (see Fig. 1) - as well as the orientation of the cameras 3, 4, 5, 6, which is defined by three angular values: Rx - orientation angle around the x axis, Ry - orientation angle around the y axis, and Rz - orientation angle around the z axis of the motor vehicle 1.
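From the three angular values Rx, Ry, Rz, the camera's orientation matrix can be assembled in the usual way; the Rz·Ry·Rx composition order below is an assumption, as the patent does not fix a convention.

```python
import numpy as np

def rotation_from_angles(rx: float, ry: float, rz: float) -> np.ndarray:
    """Camera orientation as a rotation matrix built from the three
    angles around the vehicle axes (degrees)."""
    rx, ry, rz = np.radians([rx, ry, rz])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx   # composition order is an assumed convention

# Rear camera: pitched 25 degrees down and turned to face backwards.
print(np.round(rotation_from_angles(0.0, 25.0, 180.0), 3))
```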
  • the camera parameters can also include the characteristics of the used lens, such as for example information about the caused distortion.
  • The vehicle level in the respective suspension modes is basically factory-preset, but it has turned out that the vehicle level is also influenced by other factors in all of the suspension modes, such as for example by the loading of the motor vehicle 1 or by a trailer being coupled. Thereby, the position and the orientation of all of the cameras 3, 4, 5, 6 also vary relative to the ground or to the road. If the transformation data LUT (in particular the respective viewports) remained constant, the respective image presentation 14 would also vary depending on the current vehicle level. This problem can be exemplified by the "bird eye view" shown in Fig. 7 and 8:
  • the partial regions I4', I6' of the images I4, I6 of the cameras 4, 6 are composed and mutually overlap in the overlapping region 15.
  • a correct image presentation 14 is generated, in which the partial regions I4', I6' are correctly composed with each other without leaps or double image structures arising in the image presentation 14.
  • In Fig. 7, road markings 16 depicted both in the partial region I4' and in the partial region I6' cover each other and thus are correctly depicted overall in the image presentation 14.
  • If the vehicle level changes, the level of the cameras 3, 4, 5, 6 above the ground also changes.
  • Then, an image presentation 14 according to Fig. 8 is generated, in which the road marking 16' depicted in the partial region I4' is no longer in the same position as the road marking 16 of the partial region I6'.
  • Thus, double image structures arise in the image presentation 14, which may confuse the driver.
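The size of this effect is easy to quantify under a pinhole-camera, flat-ground simplification (an illustration, not taken from the patent): a ground point at distance d is seen at the angle arctan(h/d) below the horizon, and a LUT built for the wrong height inverts that ray with its assumed height, so ground distances scale by assumed_height/true_height. Since the two overlapping cameras have different geometry, their viewports disagree and double images appear.

```python
import math

def birdseye_distance(true_distance: float, true_height: float,
                      assumed_height: float) -> float:
    """Distance at which a flat-ground feature is drawn in the bird's eye
    view when the LUT assumes the wrong camera height (pinhole model)."""
    theta = math.atan2(true_height, true_distance)  # ray angle below horizon
    return assumed_height / math.tan(theta)

# Trunk loading lowers the camera from 0.90 m to 0.84 m; if the LUT is not
# updated, a marking 3 m behind the car is drawn about 21 cm too far out.
print(round(birdseye_distance(3.0, 0.84, 0.90), 3))   # -> 3.214
```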
  • To prevent this, the transformation data LUT is generated depending on the measured values of the sensors 13 and depending on the stored camera parameters by means of the image processing device 7 in the operation of the camera system 2.
  • The method starts in a step S1, in which the ignition of the motor vehicle 1 and the prime mover (internal combustion engine or electric motor) are turned off. With the ignition turned off, the measured values of the sensors 13 are continuously received and stored by the image processing device 7 according to step S2. In step S2, thus, the measurement of the vehicle level and thus of the current chassis level of the motor vehicle 1 in the z direction is effected. This is the optimum point of time for the acquisition of the measured values, since the motor vehicle 1 is usually loaded before turning on the ignition and thus the final static vehicle level appears.
  • In a further step S3, the driver activates the ignition of the motor vehicle 1 or the prime mover such that the motor vehicle 1 is "started".
  • In step S3, at least the activation of the on-board network of the motor vehicle 1 is effected.
  • This is acquired by the image processing device 7.
  • It is then checked by the image processing device 7 which one of the suspension modes of the suspension system is currently activated. This information can for example be picked up on the mentioned communication bus.
  • the image processing device 7 generates the transformation data LUT at least for the currently activated operating mode of the display 8. It is first generated for the current suspension mode of the suspension system according to step S4. Therein, the transformation data LUT is generated separately for each camera 3, 4, 5, 6. For generating the transformation data LUT, the above mentioned camera parameters as well as the previously stored measured values of the sensors 13 are used. If multiple sensors 13 are present, thus, the current position and orientation of the cameras 3, 4, 5, 6 relative to the ground can be calculated based on these measured values and considered in generating the transformation data LUT. Optionally, this can also be configured such that a preset look-up table is used for generating the transformation data LUT, which represents a template and is factory-stored in the image processing device 7.
  • This preset look-up table can already include the position and orientation of the respective camera 3, 4, 5, 6. If it is then determined by the image processing device 7 that the current position and/or orientation deviate from the stored position and orientation, respectively, thus, the look-up table can be correspondingly corrected depending on the measured values.
  • The generation of the transformation data LUT can therefore include that a look-up table already stored in the image processing device 7 is corrected and/or completed.
  • In addition, the transformation data LUT can also be generated for the other, currently not activated suspension modes of the motor vehicle 1. This is possible since the difference in the vehicle level between the different suspension modes is known.
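The start-up sequence of steps S2 to S5 can be condensed into a few lines. Everything here is a stand-in (the template dictionary and the correction function are hypothetical); the point is the ordering: level values stored while the vehicle was parked feed the correction of the factory templates for every suspension mode before the first frame is displayed.

```python
FACTORY_TEMPLATES = {"sport": "T_sport", "standard": "T_std", "offroad": "T_off"}

def correct_lut(template, levels, mode):
    """Stand-in for correcting the level-dependent part of a template."""
    return f"{template} corrected for {levels} in {mode}"

def on_ignition(stored_levels, active_mode):
    """Use the level values logged while the ignition was off (step S2),
    then prepare the transformation data for every suspension mode
    (steps S4/S5) before the first frame is shown."""
    luts = {mode: correct_lut(t, stored_levels, mode)
            for mode, t in FACTORY_TEMPLATES.items()}
    return luts[active_mode], luts      # active LUT plus the full set

active, all_luts = on_ignition(stored_levels=(0.15, 0.15, 0.13, 0.13),
                               active_mode="standard")
print(active)
```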
  • In a further step, it is checked by the image processing device 7 whether or not the driver alters the suspension mode. If this is the case, new transformation data LUT, which has already been generated previously for the new suspension mode, is applied to the images I3, I4, I5, I6.
  • In a step S6, the image processing device 7 can also check whether the driver or another user changes the operating mode of the display 8 and thus the image presentation. If this is detected, the image processing device 7 generates new transformation data for the current view (for the current operating mode of the display 8) based on the stored camera parameters and the current measured values of the sensors 13.
  • This new transformation data LUT can be generated for the new view for all of the three suspension modes.
  • the measured values of the sensors 13 can also be continuously acquired during travel.
  • the measured values are acquired and evaluated in predetermined time intervals by the image processing device 7.
  • In particular, the transformation data LUT can also be continuously updated and thus dynamically adapted in the off-road suspension mode during travel, based on the respectively current measured values of the vehicle level.
  • This is in particular advantageous in uneven terrain because the frequent variations of the vehicle level can be quickly compensated for and thus an optimum view can always be displayed on the display 8. This is made possible in that the current vehicle level and the current level and/or the orientation of the cameras 3, 4, 5, 6 are acquired during travel and the transformation data LUT is dynamically adapted.
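The continuous off-road adaptation reduces to a periodic loop: re-read the level sensors at a fixed interval and regenerate the transformation data from the current measurements. The half-second period and the callback signatures below are assumptions for illustration.

```python
import itertools
import time

UPDATE_PERIOD_S = 0.5   # assumed sampling interval, not from the patent

def offroad_update_loop(read_levels, regenerate_lut, cycles=3):
    """Continuous adaptation in the off-road mode: re-read the level
    sensors at fixed intervals and regenerate the transformation data
    from the respectively current measurements."""
    for _ in range(cycles):          # a real system would loop while driving
        levels = read_levels()
        regenerate_lut(levels)
        time.sleep(UPDATE_PERIOD_S)

readings = itertools.cycle([(0.22, 0.22, 0.20, 0.21),
                            (0.24, 0.21, 0.22, 0.20)])
offroad_update_loop(lambda: next(readings),
                    lambda lv: print("new LUT for levels", lv))
```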

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to a method for operating a camera system of a motor vehicle by providing an image (I3, I4, I5, I6) of an environmental region of the motor vehicle by means of a camera (3, 4, 5, 6) of the camera system, by transforming the image (I3, I4, I5, I6) to an image presentation (14) using transformation data (LUT) by means of an image processing device (7), wherein camera parameters of the camera (3, 4, 5, 6) are taken into account in transforming the image (I3, I4, I5, I6), and by displaying the image presentation (14) on a display (8) of the camera system, wherein a current vehicle level of the motor vehicle is acquired by means of at least one sensor of the motor vehicle and the transformation data (LUT) is generated depending on the vehicle level in the operation of the camera system.

Description

Method for generating a look-up table in the operation of a camera system, camera system and motor vehicle
The invention relates to a method for operating a camera system of a motor vehicle, in which an image of an environmental region of the motor vehicle is provided by means of a camera of the camera system. The image is then transformed to an image presentation using transformation data by means of an image processing device, wherein camera parameters of the camera are taken into account in transforming the image. The image presentation is then displayed on a display of the camera system. In addition, the invention relates to a camera system formed for performing such a method as well as to a motor vehicle with such a camera system.
Camera systems for motor vehicles are already known from the prior art. As is known, several cameras can be employed in a motor vehicle, wherein it becomes increasingly common nowadays to use a camera assembly with at least two cameras for a camera system of a vehicle, which each capture an environmental region of the motor vehicle. For example, four cameras can be employed, which capture the entire environment around the motor vehicle. For example, an overall image presentation can be provided from the images of all of the cameras, such as for example the so-called "bird eye view". This image presentation represents a plan view of the motor vehicle as well as its environment from a bird's eye view and thus for example from a reference point of view directly above the motor vehicle. The provision of such an environmental representation from the images of several cameras is for example known from the document US 2011/0156887.
In a camera system of a motor vehicle, the display can usually be switched between different operating modes, which differ from each other with respect to the displayed image presentation and thus with respect to the view. Here, the driver of the motor vehicle can select between different views, which are conceived and optimized for different road situations. Besides an operating mode, in which the above mentioned "bird eye view" is displayed as the image presentation, the display can for example also be switched into a cross traffic operating mode, in which the so-called junction view is displayed, i.e. a view, which shows the cross traffic. Such a junction view can for example be provided based on images of a camera, which is disposed on the front - for example on the front bumper - or else in the rear region - for example on the rear bumper or on a tailgate - and has a relatively wide opening angle of 160° to 200°. In a still further operating mode, an image presentation can be displayed on the display, which is generated from the images of a rear view camera and presents the environmental region behind the motor vehicle.
Independently of which operating mode of the display is currently used, as well as independently of the used number of the cameras, the raw images of the camera have to be transformed or mapped into the coordinate system of the display. Usually, the raw images are communicated from the camera to a central electronic image processing device digitally processing the images. If multiple cameras are employed, thus, this central image processing device receives the digital image data of all of the cameras. In order to provide the final image presentation for displaying on the display, the images are transformed from the coordinate system of the respective camera into the coordinate system of the display. For this purpose, in the prior art, transformation data is used, which defines the transformation of the raw image. This transformation data is for example in the form of a so-called look-up table and also considers the camera parameters, which in particular include the position of the attachment of the camera to the vehicle as well as the orientation on the vehicle as well as the characteristics of the used lens. Therein, the position on the vehicle is defined by three coordinate values (x, y, z), which specify the unique position of the camera with respect to the vehicle body. The orientation of the camera on the vehicle in turn is preset by three angular values, which specify the angles of orientation of the camera around the three vehicle axes x, y, z. The characteristics of the lens for example define a distortion of the images caused by the lens and therefore should be taken into account because so-called fish-eye lenses are usually employed, which cause a relatively great distortion of the images. This distortion is corrected within the scope of the mentioned transformation.
Thus, in generating the transformation data (look-up table), the present and known camera parameters are taken into account on the one hand. On the other hand, the so-called "viewport" is also defined by the transformation data, i.e. a partial region of the image, which is used for generating the image presentation for the display. In other words, only a section of the raw image is used for the image presentation, which is to be displayed on the display. This viewport depends on the currently activated operating mode of the display and thus on the current view displayed on the display.
One object of the invention is to provide a method, a camera system as well as a motor vehicle improved with respect to the prior art. According to the invention, this object is solved by a method, by a camera system as well as by a motor vehicle having the features of the respective independent claims.
Advantageous implementations of the invention are the subject matter of the dependent claims, of the description and of the figures.
A method according to the invention serves for operating a camera system of a motor vehicle. At least one camera of the camera system provides an image of an environmental region of the motor vehicle. The image is transformed to an image presentation displayed on a display of the motor vehicle by means of a digital image processing device. The transformation of the image is effected using transformation data, wherein preset camera parameters of the camera are taken into account in transforming the image. According to the invention, it is provided that a current vehicle level and thus the current chassis height is acquired by means of at least one sensor of the motor vehicle and the transformation data is generated depending on the measured vehicle level in the operation of the camera system.
The invention is based on the realization that the position and the orientation of the camera relative to the vehicle body are fixedly preset and thus known, but the position and the orientation of the camera relative to the ground or to the road, on which the motor vehicle is located, can vary over time. In particular, the level of the camera above the ground as well as the orientation of the camera can be affected by a plurality of factors, such as for example by the loading of the trunk of the vehicle, by the number of vehicle occupants, by uneven distribution of weight in the vehicle, by coupling a trailer, or else during cross-country driving. Further, the invention is based on the realization that this variation of the vehicle level also causes variation of the current view on the display. This in turn results in the vehicle environment being incorrectly displayed on the display - for example from a different perspective - and thus for example obstacles located in the environment of the vehicle are no longer or incorrectly represented. In a multi-camera system, the variation of the vehicle level results in errors in the composition of the images of different cameras. In this context, in particular in the overlapping region between images of different cameras, leaps or double image structures can arise, if the images are not correctly matched to each other in the overlapping region. Furthermore, the invention is based on the realization that the disadvantages of the prior art can be avoided in that the current vehicle level is measured by means of at least one sensor and taken into account in generating the transformation data (in particular the so-called look-up table). In this manner, all of the variations of the vehicle level can be compensated for, and the desired image presentation can always be displayed on the display, i.e. always the same view with respect to the ground.
Presently, the term "vehicle level" is preferably understood to mean the "ride height" or "ground clearance". Then, at least the current level of the camera above the ground can be inferred from the current value of the vehicle level, and the current level of the camera above the ground can be taken into account in generating the transformation data.
Therein, the vehicle level can for example be measured in a damper of the motor vehicle, i.e. a component, which causes the oscillations of the sprung masses to decay. Therein, the relative position of the piston with respect to the cylinder can for example be measured by the sensor, which then allows conclusions to be drawn about the actual vehicle level and the level of the camera above the ground. However, the invention is not restricted to the arrangement of the sensor in the damper; basically, the at least one sensor can be disposed in any position, which allows the acquisition of the vehicle level.
Preferably, at least two, in particular at least three, preferably four such sensors are used, which each acquire the vehicle level in the respective corner regions of the motor vehicle. Thus, uneven distributions of the loading in the motor vehicle can be detected such that the current orientation of the camera around all of the vehicle axes can also be determined and taken into account in the transformation data.
The transformation data is preferably in the form of a look-up table representing a transformation map, which is applied to the raw image of the camera in order to alter the pixels of the image and map them into the coordinate system of the display. In other words, the transformation data thus represents a projection function or mapping function, by means of which the image is transformed into the coordinate system of the display. Therein, this transformation data considers the camera parameters, which are known and for example can be stored in a memory of the image processing device. In particular, the camera parameters include the characteristics of the lens of the camera, the position of the camera relative to the vehicle body - this position can be determined by three coordinate values x, y, z, i.e. in vehicle longitudinal direction, in vehicle transverse direction and in vehicle vertical direction - as well as the orientation of the camera relative to the vehicle body - the orientation is then defined by three angular values, namely an angle around the vehicle longitudinal axis, an angle around the vehicle transverse axis as well as an angle around the vehicle vertical axis. On the one hand, the camera parameters thus can describe the characteristics of the lens and therefore the optical characteristics of the camera; on the other hand, the camera parameters also include the fixed installation position of the camera on the vehicle.
The image processing device is preferably a component separate from the camera. For example, the image processing device can be constituted by a controller, which may include a digital signal processor. The signal processor then serves for performing the transformation of the image and for generating the image presentation.
The "generation" of the transformation data presently in particular implies that template data stored in the image processing device is used and adapted or completed depending on the measured vehicle level in the operation of the camera system. For example, a lookup table can be stored, which is then updated or completed depending on the measured vehicle level. Thus, preferably, not the entire look-up table has to be generated, but only that portion, which depends on the level of the camera above the ground and depends on the orientation of the camera.
In an embodiment, it is provided that a partial region (in particular exclusively a partial region) of the image is used for generating the image presentation and the partial region is defined by the transformation data. The generation of the transformation data can then include that the partial region of the image is determined depending on the measured vehicle level. Therein, the "partial region" is in particular understood to mean the viewport, i.e. an image section used for generating the image presentation for the display. This viewport is determined depending on the measured vehicle level in this embodiment. This embodiment has the advantage that always the same environmental region of the motor vehicle can be displayed on the display, independently of the current vehicle level. In addition, this embodiment allows correct and leap-free composition of the images of different cameras, which proves particularly advantageous in particular in the above mentioned "bird eye view".
The camera system can also include multiple cameras: in an embodiment, at least two cameras can be employed, which each provide an image of an environmental region of the motor vehicle. For generating the image presentation (for example the "bird's eye view"), respective partial regions (viewports) of the images can then be combined with each other such that the partial regions mutually overlap in an overlapping region. The overlapping region of the respective partial regions can be defined by the transformation data, and the generation of the transformation data can include determining the overlapping region of the respective partial regions depending on the vehicle level. The transition regions between the images of different cameras can thus be compensated for depending on the measured vehicle level, and an image presentation can be provided on the display which is based on the images of different cameras and does not have any leaps or double image structures in the transition regions.
In an embodiment, it is provided that a suspension system of the motor vehicle is switched between at least two predetermined suspension modes, which can for example be selected by the driver himself. For example, the following suspension modes can be provided, in which the motor vehicle has different, factory-preset levels: a standard mode with an intermediate vehicle level, a sports mode with a low vehicle level, and an off-road mode with a higher vehicle level. In this embodiment, the transformation data for the transformation of the image can be generated separately for each suspension mode. The transformation of the image to the image presentation can therefore be performed particularly precisely in each suspension mode of the motor vehicle. The basic vehicle level is fixedly preset in each suspension mode, but the vehicle level is also influenced by a plurality of factors (if level regulation is not present), such as in particular the loading of the motor vehicle and the like.
The currently activated suspension mode can be acquired by the image processing device. For this currently activated suspension mode, the transformation data can then be generated depending on the measured vehicle level. Transformation data generated once can then be reused for the same suspension mode if the suspension system is switched into another mode and then back into the original mode.
It proves advantageous if, in addition to the transformation data for the currently activated suspension mode, transformation data for at least one other (non-activated) suspension mode of the suspension system is also generated separately, depending on the transformation data of the current suspension mode and/or on the measured vehicle level. Thus, the separate transformation data can be generated virtually simultaneously for all of the suspension modes. For example, this can be performed upon activating the ignition of the motor vehicle, such that the transformation data for all of the suspension modes is available already at this time. If the suspension system is then switched into another mode, the already generated transformation data can be directly accessed, so that a correct image presentation can be displayed directly after switching the suspension system. The generation of the transformation data for the other, currently not activated suspension modes of the suspension system is possible because the vehicle level is basically factory-predefined for all of the suspension modes and thus the difference in the vehicle level between the respective suspension modes is known. If the current vehicle level for the currently activated suspension mode is measured, the vehicle level in the other suspension modes can therefore also be inferred from these measured values.
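An illustrative sketch of this inference, assuming hypothetical factory offsets between the modes:

```python
# Factory-preset level offsets relative to the standard mode
# (hypothetical values, in metres)
MODE_OFFSETS = {"standard": 0.0, "sport": -0.03, "offroad": +0.04}

def levels_for_all_modes(measured_level, active_mode):
    """Infer the vehicle level in the non-activated suspension modes from
    the level measured in the active mode and the known factory offsets."""
    base = measured_level - MODE_OFFSETS[active_mode]
    return {mode: base + offset for mode, offset in MODE_OFFSETS.items()}

levels = levels_for_all_modes(measured_level=0.19, active_mode="sport")
# e.g. {'standard': 0.22, 'sport': 0.19, 'offroad': 0.26}
```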
As already mentioned, startup of a prime mover, i.e. a source of momentum, of the motor vehicle or activation of the ignition of the motor vehicle can be acquired by the image processing device, and the transformation data can be generated upon the startup of the prime mover or upon the activation of the ignition, in particular with each startup of the prime mover or each time the ignition is activated. At this time, preferably, the transformation data for all of the suspension modes of the motor vehicle is generated. The generation of the transformation data at the time of the startup of the prime mover or of the ignition has the advantage that the transformation data is available for the entire period of the current operation of the motor vehicle, in particular for all of the suspension modes of the suspension system. During the operation of the motor vehicle, transformation data then no longer has to be generated, so that even upon switching between different suspension modes, the transformation data for the respective suspension mode is already available.
In order to be able to provide the transformation data upon startup of the prime mover or already upon activation of the ignition, it can be provided that the current vehicle level is acquired - in particular continuously - already before the startup of the prime mover or before the activation of the ignition, and the acquired measured values of the vehicle level are stored in the image processing device for the subsequent generation of the transformation data. By acquiring the vehicle level in this specific operating phase of the motor vehicle, any variations in the vehicle level can be acquired, which are for example caused by additional trunk loading, and it can also be ensured that the transformation data can be generated particularly fast already upon the activation of the ignition or upon the startup of the prime mover.
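A minimal sketch of such pre-ignition buffering, assuming periodic sensor samples; the class and method names are hypothetical:

```python
from collections import deque

class LevelRecorder:
    """Keeps the most recent level samples while the ignition is off, so
    that the final static level (after loading) is available at startup."""
    def __init__(self, maxlen=100):
        self.samples = deque(maxlen=maxlen)

    def on_sample(self, corner_levels):
        """Called on each periodic read of the corner-level sensors."""
        self.samples.append(corner_levels)

    def static_level(self):
        """Average the most recent samples per corner to suppress noise."""
        recent = list(self.samples)[-10:]
        if not recent:
            return None
        return [sum(corner) / len(recent) for corner in zip(*recent)]
```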
Optionally, it can be provided that the vehicle level is continuously acquired during travel, for example in predetermined time intervals, in a predetermined suspension mode - in particular in the off-road mode - and that the transformation data is likewise newly generated continuously during travel, for example also in predetermined time intervals, based on the current measured values of the vehicle level. In particular in uneven terrain, the frequent variations of the vehicle level can thus be compensated for quickly by acquiring the current vehicle level, and thus also the current level and/or orientation of the camera, during travel and generating the transformation data accordingly.
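A sketch of such a periodic update during off-road travel; the half-second interval and the callback names are assumptions:

```python
import time

def offroad_update_loop(read_levels, regenerate_luts,
                        period_s=0.5, keep_running=lambda: True):
    """Re-acquire the vehicle level and regenerate the transformation data
    in fixed time intervals while the off-road mode is active.
    read_levels and regenerate_luts are hypothetical callbacks."""
    while keep_running():
        regenerate_luts(read_levels())
        time.sleep(period_s)
```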
The display or the camera system can be switched between at least two operating modes, which differ from each other with respect to the image presentation and thus with respect to the view displayed on the display. It can for example occur that the operating mode of the display, and thus the view on the display, is altered by the driver himself. In order to be able to provide an optimum image presentation on the display in each operating mode of the display, the transformation data for the transformation of the image can be generated separately for each operating mode of the display. The transformation can therefore be provided individually and specifically for each one of the operating modes of the display. If the camera system includes multiple cameras, the transformation data can also be generated separately for each camera and for each operating mode of the display. Thus, for example, in an operating mode in which the "bird's eye view" is generated on the display, other transformation data can be generated for the rear view camera than in an operating mode in which the generated image presentation on the display is exclusively based on the images of the rear view camera. Depending on the operating mode of the display, different transformation data can thus be generated for one and the same camera.
Preferably, switching of the display from a previous operating mode into another, current operating mode is acquired by the image processing device. The transformation data for this current operating mode can then be generated directly upon, and thus due to, the switching of the display into the current operating mode. This for example means that upon the startup of the prime mover or upon the activation of the ignition, the transformation data is generated exclusively for the currently activated operating mode of the display and in particular for all of the suspension modes of the suspension system. If the display is then switched into another operating mode during the operation of the motor vehicle, the transformation data is also generated for this new operating mode, in particular again for all of the suspension modes. For generating the transformation data for the new operating mode of the display, the measured values of the vehicle level can be used, which have either been acquired previously, before the startup of the prime mover or before the activation of the ignition, or are currently acquired by means of the at least one sensor. The generation of the transformation data upon switching the display has the advantage that the computational power of the image processing device can be optimally utilized, because the transformation data does not have to be generated at the same time for all of the operating modes of the display and all of the suspension modes of the vehicle; the generation of the transformation data can thus be distributed over time. In addition, unnecessary generation of transformation data for an operating mode of the display which is not activated at all in the current operation of the motor vehicle is prevented. Computational power can thus be saved.
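A minimal caching sketch of this on-demand generation; generate_lut is a hypothetical stand-in for the full LUT generation described above:

```python
class LutCache:
    """Lazily generates and caches look-up tables keyed by display
    operating mode, suspension mode and camera, so that switching the
    display view only triggers generation for views actually used."""
    def __init__(self, generate_lut):
        self._generate = generate_lut  # hypothetical callback
        self._cache = {}

    def get(self, display_mode, suspension_mode, camera_id, vehicle_level):
        key = (display_mode, suspension_mode, camera_id)
        if key not in self._cache:
            self._cache[key] = self._generate(
                display_mode, suspension_mode, camera_id, vehicle_level)
        return self._cache[key]
```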
As already explained, multiple sensors can also be employed, which are disposed distributed on the motor vehicle (for example in the respective dampers) and each acquire the vehicle level at their respective installation location. If multiple sensors are present, the exact current position and/or the current orientation of the camera relative to the ground on which the motor vehicle is located can also be determined from the measured values of the sensors. The transformation data can then be generated based on the thus determined position and/or orientation of the camera. If the exact position and/or orientation of the camera is known, the viewport of the image can for example be optimally defined such that the desired view can be generated on the display.
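Continuing the plane-fit sketch further above, the camera's current height over the ground could be estimated from the fitted body plane and the fixed mounting coordinates; a small-angle approximation with assumed sign conventions, not a full rigid-body transform:

```python
import numpy as np

def camera_height_over_ground(cam_xyz, roll, pitch, body_height):
    """Estimate the camera height from the fitted body plane
    z = tan(pitch)*x + tan(roll)*y + body_height, evaluated at the
    camera's fixed mounting coordinates (x, y, z) on the body.
    Sign conventions follow the earlier plane fit and are assumptions."""
    x, y, z = cam_xyz
    return body_height + x * np.tan(pitch) + y * np.tan(roll) + z
```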
Optionally, in determining the current position and/or orientation of the camera, measured values of a tilt sensor or of at least one wheel of the motor vehicle can also be taken into account, which further improves the accuracy.
It can also be provided that, based on the monitoring of the vehicle suspension heights (vehicle level), the behavior of the suspension system with respect to the vehicle level is predicted by the image processing device. For example, if the suspension reaches the minimum vehicle height level (due to damper behavior), the image processing device can predict that the suspension system will start moving back up, increasing the vehicle height level. This prediction can be based on a suspension behavior model. Depending on this prediction, the transformation data can be adapted.
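A suspension behavior model of this kind could, purely for illustration, be a damped spring; one integration step is sketched below, with the natural frequency omega and damping ratio zeta as hypothetical model constants:

```python
def predict_level(current, velocity, dt, rest_level, omega=6.0, zeta=0.7):
    """One semi-implicit Euler step of a damped spring model of the
    suspension: a body pushed to its minimum height moves back up towards
    the rest level. All constants are illustrative assumptions."""
    accel = -2.0 * zeta * omega * velocity - omega ** 2 * (current - rest_level)
    velocity += accel * dt
    current += velocity * dt
    return current, velocity
```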
In addition, the invention relates to a camera system for a motor vehicle, including at least one camera for providing an image of an environmental region of the motor vehicle, and an image processing device for transforming the image to an image presentation using transformation data and considering camera parameters of the camera, wherein the image presentation is provided for displaying on a display. The image processing device is adapted to generate the transformation data depending on a vehicle level measured in the operation of the camera system.
A motor vehicle according to the invention, in particular a passenger car, includes a camera system according to the invention.
The preferred embodiments presented with respect to the method according to the invention and the advantages thereof correspondingly apply to the camera system according to the invention as well as to the motor vehicle according to the invention.
Further features of the invention are apparent from the claims, the figures and the description of figures. All of the features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or else alone.
Now, the invention is explained in more detail based on a preferred embodiment as well as with reference to the attached drawings.
The figures show:
Fig. 1 a schematic illustration of a motor vehicle with a camera system according to an embodiment of the invention;
Fig. 2 a block diagram for explaining an image transformation;
Fig. 3 a further block diagram;
Fig. 4 and 5 an exemplary raw image as well as an exemplary image presentation provided by means of image transformation of the raw image;
Fig. 6 a flow diagram of a method according to an embodiment of the invention; and
Fig. 7 and 8 schematic illustrations for explaining the problem in composing images of different cameras.

A motor vehicle 1 illustrated in Fig. 1 is for example a passenger car. The motor vehicle 1 includes a camera system 2, which in the embodiment has a plurality of cameras 3, 4, 5, 6 disposed distributed on the motor vehicle 1. In the embodiment, four cameras 3, 4, 5, 6 are provided, wherein the invention is not restricted to such a number and arrangement of the cameras 3, 4, 5, 6. Basically, any number of cameras can be used, which can be disposed at different locations of the motor vehicle 1. Alternatively to such a multi-camera system 2, a camera system 2 with a single camera can also be used.
A first camera 3 is for example disposed on the front bumper of the motor vehicle 1. A second camera 4 is for example disposed in the rear area, for instance on the rear bumper or on a tailgate. The two lateral cameras 5, 6 can for example be integrated in the respective exterior mirrors. The cameras 3, 4, 5, 6 are electrically coupled to a central image processing device 7, which in turn is coupled to a display 8. The display 8 is any display device, for example an LCD display.
The cameras 3, 4, 5, 6 are video cameras, which are each able to capture a sequence of images per time unit and communicate it to the image processing device 7. The cameras 3, 4, 5, 6 each have a relatively large opening angle, for instance in a range of values from 150° to 200°. For example, so-called fish-eye lenses can be employed for the cameras 3, 4, 5, 6. The cameras 3, 4, 5, 6 can for example be CCD cameras or CMOS cameras.
The camera 3 captures an environmental region 9 in front of the motor vehicle 1. The camera 4 captures an environmental region 10 behind the motor vehicle 1. The camera 5 captures a lateral environmental region 11 to the left besides the motor vehicle 1, while the camera 6 captures an environmental region 12 on the right side of the motor vehicle 1. The cameras 3, 4, 5, 6 provide images of the respective environmental regions 9, 10, 11, 12 and communicate these images to the image processing device 7. As is apparent from Fig. 1, the imaged environmental regions 9, 10, 11, 12 can also mutually overlap in pairs.
A sensor 13 can be provided for each wheel of the motor vehicle 1, by means of which the vehicle level, and thus the ground clearance of the motor vehicle 1, at the respective installation location of the sensor 13 is acquired. Generally speaking, the sensors 13 can be disposed in the respective corner regions of the motor vehicle 1. For example, the sensors 13 are integrated in the respective dampers and thus measure a relative position of the piston with respect to the cylinder. The image processing device 7 can for example pick up the respective measured values of the sensors 13 on a communication bus of the motor vehicle 1 - for example the CAN bus.
In the embodiment, the motor vehicle 1 has a suspension system, not illustrated in more detail, which is operated in at least two suspension modes that differ from each other with respect to the vehicle level. For example, three suspension modes are provided, such as: a standard mode with an intermediate vehicle level, a sporting suspension mode with a low vehicle level, and an off-road suspension mode with a relatively high vehicle level. In particular, it is provided that the current suspension mode can be selected by the driver of the motor vehicle 1. Basically, the vehicle levels for the respective suspension modes are factory-preset, such that the differences between the vehicle levels of the different suspension modes are also known. These differences are stored in the image processing device 7.
The display 8 and the camera system 2, respectively, can be switched between different operating modes, wherein the switching between the different operating modes is for example effected by the driver himself using a corresponding operating device. This operating device can for example be integrated in the display 8, which can be configured as a touch display. In these different operating modes, different image presentations are generated, which are displayed on the display 8. In other words, the operating modes differ in the view, which is presented on the display 8.
With reference to Fig. 2, in a first operating mode, an image presentation 14 can for example be generated, which is based on the images I3, I4, I5, I6 of all of the cameras 3, 4, 5, 6. As is apparent from Fig. 2, the image processing device 7 receives the images I3, I4, I5, I6 of all of the cameras 3, 4, 5, 6 and generates the image presentation 14 from them. This image presentation 14 shows the motor vehicle 1 and the environment 9, 10, 11, 12 for example from a bird's eye view and thus from a point of view above the motor vehicle 1. In order to generate this image presentation 14, the images I3, I4, I5, I6 are each subjected to a transformation and then composed. For the respective transformation, transformation data is used, which is provided in the form of a look-up table LUT. A separate look-up table LUT is provided for each camera 3, 4, 5, 6.
Within the scope of the respective transformation, first, a partial region I3', I4', I5', I6' of the respective image I3, I4, I5, I6 is determined. For generating the image presentation 14, then, exclusively the partial regions I3', I4', I5', I6' are used. The partial regions I3', I4', I5', I6' are so-called viewports, which are defined in the respective look-up table LUT3, LUT4, LUT5, LUT6. In providing the "bird's eye view", for example, partial regions I3', I4', I5', I6' can be used, which show the respective environmental region 9, 10, 11, 12 of the motor vehicle 1 up to a predetermined distance from the motor vehicle 1.
Within the scope of the respective image transformation, the respective partial region I3', I4', I5', I6' is then transformed into the coordinate system of the display 8. In this context, the look-up table LUT3, LUT4, LUT5, LUT6 represents a transformation map, by means of which the pixels of the respective image I3, I4, I5, I6 are correspondingly altered and mapped to the display 8. Within the scope of the respective image transformation, a correction of the distortion of the respective image I3, I4, I5, I6 caused by the above-mentioned fish-eye lens can also be performed.
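For illustration only, such a look-up-table transformation can be expressed as a single gather operation over precomputed index maps; the array names and sizes below are assumptions:

```python
import numpy as np

def apply_lut(raw, lut_rows, lut_cols):
    """Map each display pixel to its source pixel in the raw image via two
    integer index maps (the look-up table). Viewport selection, distortion
    correction and perspective mapping are all encoded in the maps, which
    are assumed to be precomputed at display resolution."""
    return raw[lut_rows, lut_cols]

# Minimal usage: an identity LUT of the display size simply crops the
# top-left corner of the raw image (values assumed for illustration)
raw = np.zeros((800, 1280, 3), dtype=np.uint8)
rows, cols = np.meshgrid(np.arange(480), np.arange(640), indexing="ij")
display_img = apply_lut(raw, rows, cols)  # shape (480, 640, 3)
```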
As is further apparent from Fig. 2, the partial regions I3', I4', I5', I6' mutually overlap in pairs in overlapping regions 15 in the image presentation 14 (bird's eye view). These overlapping regions 15, and thus the composition of the partial regions I3', I4', I5', I6', are also preset by the transformation data LUT3, LUT4, LUT5, LUT6.
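A sketch of how such preset overlapping regions could be composed onto the bird's-eye canvas, with hypothetical placements and a simple 50/50 blend in the overlap; the disclosure does not fix a blending rule, so this is an assumption:

```python
import numpy as np

def compose_bird_eye(parts, canvas_shape, placements, alpha=0.5):
    """Paste the transformed viewports onto the bird's-eye canvas at the
    given (row, col) top-left offsets; where a pixel was already written,
    the overlap is blended. Parts are assumed to fit inside the canvas."""
    canvas = np.zeros(canvas_shape, dtype=np.float32)
    written = np.zeros(canvas_shape[:2], dtype=bool)
    for part, (r, c) in zip(parts, placements):
        h, w = part.shape[:2]
        region = canvas[r:r + h, c:c + w]
        mask = written[r:r + h, c:c + w]
        region[~mask] = part[~mask]                                  # first writer wins
        region[mask] = alpha * region[mask] + (1 - alpha) * part[mask]  # blend overlap
        written[r:r + h, c:c + w] = True
    return canvas.astype(np.uint8)
```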
A further operating mode of the display 8 is explained in more detail with reference to Fig. 3. Here, an image presentation 14 is displayed on the display 8, which is based exclusively on the images of a single camera, namely for example the rear view camera 4. This situation also corresponds to a camera system 2 in which only a single camera is employed. The camera 4 provides the images I4 to the image processing device 7, which performs the image transformation of the images I4 to the image presentation 14. Here too, transformation data is used, which is provided in the form of a look-up table LUT4'. This image transformation involves selecting a partial region I4' (viewport) from the image I4; this partial region I4' is then transformed into the coordinate system of the display 8. Herein, the above-mentioned distortion correction is also performed.
An exemplary image transformation of the image I4 of the camera 4 is illustrated in Fig. 4 and 5. An exemplary raw image I4 of the camera 4 is shown in Fig. 4. As is apparent from Fig. 4, a fish-eye lens is used, which causes a relatively great distortion of the image I4, in particular in the edge regions. If this image I4 is transformed using the look-up table LUT4', an image presentation 14 arises, as exemplarily shown in Fig. 5. As is apparent from Fig. 4 and 5, only a partial region of the image I4 is used for the image presentation 14, which is then also corrected with respect to the distortion and adapted to the display 8.

In generating the transformation data LUT, camera parameters of the respective cameras 3, 4, 5, 6 also have to be taken into account. These camera parameters in particular include the respective installation location of the cameras 3, 4, 5, 6 - i.e. the position of the cameras 3, 4, 5, 6 in a coordinate system x, y, z defined relative to the vehicle body (see Fig. 1) - as well as the orientation of the cameras 3, 4, 5, 6, which is defined by three angular values: Rx - orientation angle around the x axis, Ry - orientation angle around the y axis, and Rz - orientation angle around the z axis of the motor vehicle 1. The camera parameters can also include the characteristics of the used lens, such as for example information about the caused distortion.
The vehicle level in the respective suspension modes is basically factory-preset, but it has turned out that the vehicle level is also influenced by other factors in all of the suspension modes, such as for example the loading of the motor vehicle 1 or a trailer being coupled. Thereby, the position and the orientation of all of the cameras 3, 4, 5, 6 also vary relative to the ground or to the road. If the transformation data LUT (in particular the respective viewports) were to remain constant, the respective image presentation 14 would also vary depending on the current vehicle level. This problem can be exemplified based on Fig. 7 and 8 using the example of the "bird's eye view":
Here, the partial regions I4', I6' of the images I4, I6 of the cameras 4, 6 are composed and mutually overlap in the overlapping region 15. With an unloaded motor vehicle 1, a correct image presentation 14 is generated, in which the partial regions I4', I6' are correctly composed with each other without leaps or double image structures arising in the image presentation 14. This is recognizable in Fig. 7 in that road markings 16 depicted both in the partial region I4' and in the partial region I6' cover each other and thus are correctly depicted in the image presentation 14 overall. If the loading of the motor vehicle 1 now changes, the level of the cameras 3, 4, 5, 6 above the ground also changes. If the transformation data LUT is not corrected, an image presentation 14 according to Fig. 8 is generated, in which the road marking 16' depicted in the partial region I4' is no longer in the same position as the road marking 16 of the partial region I6'. Thus, double image structures arise in the image presentation 14, which may confuse the driver.
In order to prevent the generation of a faulty image presentation 14, the transformation data LUT is generated by means of the image processing device 7 in the operation of the camera system 2, depending on the measured values of the sensors 13 and on the stored camera parameters. A method according to an embodiment of the invention is now explained in more detail with reference to Fig. 6:
The method starts in a step S1, in which the ignition of the motor vehicle 1 and the prime mover (internal combustion engine or electric motor) are turned off. With the ignition turned off, the measured values of the sensors 13 are continuously received and stored by the image processing device 7 according to step S2. In step S2, thus, the vehicle level, and thus the current chassis level of the motor vehicle 1 in the z direction, is measured. This is the optimum point in time for the acquisition of the measured values, since the motor vehicle 1 is usually loaded before the ignition is turned on, such that the final static vehicle level is present.
Now, in a further step S3, the driver activates the ignition of the motor vehicle 1 or the prime mover, such that the motor vehicle 1 is "started". In other words, according to step S3, at least the on-board network of the motor vehicle 1 is activated. This is acquired by the image processing device 7. In addition, the image processing device 7 checks which one of the suspension modes of the suspension system is currently activated. This information can for example be picked up on the mentioned communication bus.
In a further step S4, the image processing device 7 generates the transformation data LUT at least for the currently activated operating mode of the display 8. According to step S4, it is first generated for the current suspension mode of the suspension system. Therein, the transformation data LUT is generated separately for each camera 3, 4, 5, 6. For generating the transformation data LUT, the above-mentioned camera parameters as well as the previously stored measured values of the sensors 13 are used. If multiple sensors 13 are present, the current position and orientation of the cameras 3, 4, 5, 6 relative to the ground can be calculated based on these measured values and considered in generating the transformation data LUT. Optionally, this can also be configured such that a preset look-up table, which represents a template and is factory-stored in the image processing device 7, is used for generating the transformation data LUT. This preset look-up table can already include the position and orientation of the respective camera 3, 4, 5, 6. If it is then determined by the image processing device 7 that the current position and/or orientation deviate from the stored position and orientation, respectively, the look-up table can be correspondingly corrected depending on the measured values. The generation of the transformation data LUT can therefore include correcting and/or completing a look-up table already stored in the image processing device 7.
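For illustration, a correction of such a factory-stored template look-up table could, in the simplest case, be a pixel shift derived from the pose deviation; a real correction would re-project through the camera model, and all names below are hypothetical:

```python
import numpy as np

def correct_template_lut(template_rows, template_cols,
                         row_shift, col_shift, raw_shape):
    """Shift a factory-stored template LUT when the measured camera pose
    deviates from the stored one. The deviation is reduced to a pure
    pixel shift here purely for illustration; indices are clamped to the
    raw image bounds."""
    rows = np.clip(template_rows + row_shift, 0, raw_shape[0] - 1)
    cols = np.clip(template_cols + col_shift, 0, raw_shape[1] - 1)
    return rows, cols
```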
In a further step S5, the transformation data LUT can also be generated for the other, currently not activated suspension modes of the motor vehicle 1. This is possible since the difference in the vehicle level between the different suspension modes is known. In a further step S6, it is then checked by the image processing device 7 whether or not the driver alters the suspension mode. If this is the case, the transformation data LUT already generated for the new suspension mode is applied to the images I3, I4, I5, I6.
According to step S6, the image processing device 7 can also check whether the driver or another user changes the operating mode of the display 8 and thus the image presentation 14. If this is detected, the image processing device 7 generates new transformation data for the current view (for the current operating mode of the display 8) based on the stored camera parameters and the current measured values of the sensors 13. This new transformation data LUT can be generated for the new view for all three suspension modes.
If the off-road suspension mode is activated when the motor vehicle 1 is in the terrain, the measured values of the sensors 13 can also be continuously acquired during travel. For example, the measured values are acquired and evaluated in predetermined time intervals by the image processing device 7. The transformation data LUT can then also be continuously updated, and thus dynamically adapted, in the off-road suspension mode during travel based on the respectively current measured values of the vehicle level. This is particularly advantageous in uneven terrain because the frequent variations of the vehicle level can be compensated for quickly, and thus an optimum view can always be displayed on the display 8. This is made possible by acquiring the current vehicle level and the current level and/or orientation of the cameras 3, 4, 5, 6 during travel and dynamically adapting the transformation data LUT.

Claims
1. Method for operating a camera system (2) of a motor vehicle (1), comprising the following steps of:
- providing an image (I3, I4, I5, I6) of an environmental region (9, 10, 11, 12) of the motor vehicle (1) by means of a camera (3, 4, 5, 6) of the camera system (2),
- transforming the image (I3, I4, I5, I6) to an image presentation (14) using transformation data (LUT) by means of an image processing device (7), wherein camera parameters of the camera (3, 4, 5, 6) are taken into account in transforming the image (I3, I4, I5, I6), and
- displaying the image presentation (14) on a display (8) of the camera system (2), characterized in that
a current vehicle level of the motor vehicle (1) is acquired by means of at least one sensor (13) of the motor vehicle (1) and the transformation data (LUT) is generated depending on the vehicle level in the operation of the camera system (2).
2. Method according to claim 1 ,
characterized in that
a partial region (I3', I4', I5', I6') of the image (I3, I4, I5, I6) is used for generating the image presentation (14) and the partial region (I3', I4', I5', I6') is defined by the transformation data (LUT), wherein the generation of the transformation data (LUT) includes that the partial region (I3', I4', I5', I6') of the image (I3, I4, I5, I6) is determined depending on the measured vehicle level.
3. Method according to claim 1 or 2,
characterized in that
at least two cameras (3, 4, 5, 6) of the camera system (2) each capture an image (I3, I4, I5, I6) of an environmental region (9, 10, 11, 12) of the motor vehicle (1) and respective partial regions (I3', I4', I5', I6') of the images (I3, I4, I5, I6) are combined with each other for generating the image presentation (14) such that the partial regions (I3', I4', I5', I6') mutually overlap in an overlapping region (15), wherein the overlapping region (15) of the respective partial regions (I3', I4', I5', I6') is defined by the transformation data (LUT), and wherein the generation of the transformation data (LUT) includes that the overlapping region (15) of the respective partial regions (I3', I4', I5', I6') is determined depending on the vehicle level.
4. Method according to any one of the preceding claims,
characterized in that
a suspension system of the motor vehicle (1) is switched between at least two predetermined suspension modes, which differ from each other with respect to the vehicle level, wherein the transformation data (LUT) for the transformation of the image (I3, I4, I5, I6) is generated separately for each suspension mode.
5. Method according to claim 4,
characterized in that
the currently activated suspension mode is acquired by the image processing device (7) and the transformation data (LUT) is generated depending on the measured vehicle level for the currently activated suspension mode.
6. Method according to claim 5,
characterized in that
in addition to the transformation data (LUT) for the currently activated suspension mode, transformation data (LUT) for at least one different suspension mode of the suspension system is also generated separately depending on the transformation data (LUT) of the current suspension mode and/or depending on the measured vehicle level.
7. Method according to any one of the preceding claims,
characterized in that
a startup of a prime mover of the motor vehicle (1) or activation of an ignition of the motor vehicle (1) is acquired by the image processing device (7) and the transformation data (LUT) is generated with the startup of the prime mover or with the activation of the ignition.
8. Method according to claim 7,
characterized in that
before the startup of the prime mover or before the activation of the ignition, the current vehicle level is, in particular continuously, acquired and the acquired measured values of the vehicle level are stored in the image processing device (7) for the subsequent generation of the transformation data (LUT).
9. Method according to any one of the preceding claims,
characterized in that
the display (8) is switched between at least two operating modes, which differ from each other with respect to the image presentation (14), wherein the transformation data (LUT) for the transformation of the image (I3, I4, I5, I6) is generated separately for each operating mode of the display (8).
10. Method according to claim 9,
characterized in that
switching of the display (8) from a previous operating mode into a different current operating mode is acquired by the image processing device (7), and the transformation data (LUT) for the current operating mode is generated with the switching of the display (8) into the current operating mode.
11. Method according to any one of the preceding claims,
characterized in that
the vehicle level is acquired by means of at least two sensors (13) disposed distributed on the motor vehicle (1), wherein a current position and/or a current orientation of the camera (3, 4, 5, 6) relative to a ground, on which the motor vehicle (1) is located, is determined based on measured values of the at least two sensors (13), and wherein the transformation data (LUT) is generated depending on the position and/or orientation of the camera (3, 4, 5, 6).
12. Camera system (2) for a motor vehicle (1) including:
- at least one camera (3, 4, 5, 6) for providing an image (I3, I4, I5, I6) of an environmental region (9, 10, 11, 12) of the motor vehicle (1), and
- an image processing device (7) for transforming the image (I3, I4, I5, I6) to an image presentation (14) using transformation data (LUT) as well as considering camera parameters of the camera (3, 4, 5, 6), wherein the image presentation (14) is provided for displaying on a display (8),
characterized in that the image processing device (7) is adapted to generate the transformation data (LUT) depending on a measured vehicle level in the operation of the camera system (2).
13. Motor vehicle (1) with a camera system (2) according to claim 12.
PCT/EP2014/066354 2013-08-01 2014-07-30 Method for generating a look-up table in the operation of a camera system, camera system and motor vehicle Ceased WO2015014883A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102013012808.0 2013-08-01
DE102013012808.0A DE102013012808B4 (en) 2013-08-01 2013-08-01 Method for generating a look-up table during operation of a camera system, camera system and motor vehicle

Publications (1)

Publication Number Publication Date
WO2015014883A1 true WO2015014883A1 (en) 2015-02-05

Family

ID=51300712

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2014/066354 Ceased WO2015014883A1 (en) 2013-08-01 2014-07-30 Method for generating a look-up table in the operation of a camera system, camera system and motor vehicle

Country Status (2)

Country Link
DE (1) DE102013012808B4 (en)
WO (1) WO2015014883A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102019000701A1 (en) 2019-01-31 2019-06-13 Daimler Ag Method for controlling a motor vehicle and a motor vehicle

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102009035422B4 (en) * 2009-07-31 2021-06-17 Bayerische Motoren Werke Aktiengesellschaft Method for geometrical image transformation
TWI417639B (en) 2009-12-30 2013-12-01 Ind Tech Res Inst Method and system for forming surrounding seamless bird-view image
DE102010048143A1 (en) * 2010-10-11 2011-07-28 Daimler AG, 70327 Method for calibrating camera arranged in vehicle, involves determining current camera parameters in continuous manner and considering current pitch angle of vehicle during calibration
DE102010062589A1 (en) * 2010-12-08 2012-06-14 Robert Bosch Gmbh Camera-based method for distance determination in a stationary vehicle

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1157890A1 (en) * 2000-05-26 2001-11-28 Matsushita Electric Industrial Co., Ltd. Image processor and monitoring system
JP2002293196A (en) * 2001-03-29 2002-10-09 Matsushita Electric Ind Co Ltd Image display method of vehicle-mounted camera and its device
JP2006182108A (en) * 2004-12-27 2006-07-13 Nissan Motor Co Ltd Vehicle periphery monitoring device
US20090160940A1 (en) * 2007-12-20 2009-06-25 Alpine Electronics, Inc. Image display method and image display apparatus
JP2009253571A (en) * 2008-04-04 2009-10-29 Clarion Co Ltd Monitor video image generation device for vehicle
EP2348279A1 (en) * 2008-10-28 2011-07-27 PASCO Corporation Road measurement device and method for measuring road

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022101979A (en) * 2020-12-25 2022-07-07 株式会社デンソー Image generator, image generation method
JP7593101B2 (en) 2020-12-25 2024-12-03 株式会社デンソー Image generating device and image generating method

Also Published As

Publication number Publication date
DE102013012808A1 (en) 2015-02-05
DE102013012808B4 (en) 2023-11-23

Similar Documents

Publication Publication Date Title
US8421865B2 (en) Method for calibrating a vehicular camera system
US9516277B2 (en) Full speed lane sensing with a surrounding view system
US10609339B2 (en) System for and method of dynamically displaying images on a vehicle electronic display
JP4903194B2 (en) In-vehicle camera unit, vehicle external display method, and driving corridor marker generation system
US20240196092A1 (en) Techniques to compensate for movement of sensors in a vehicle
US11295704B2 (en) Display control device, display control method, and storage medium capable of performing appropriate luminance adjustment in case where abnormality of illuminance sensor is detected
KR20150141804A (en) Apparatus for providing around view and Vehicle including the same
EP3761262B1 (en) Image processing device and image processing method
WO2011090163A1 (en) Parameter determination device, parameter determination system, parameter determination method, and recording medium
CN110378836B (en) Method, system and equipment for acquiring 3D information of object
CN107249934B (en) Method and device for displaying vehicle surrounding environment without distortion
CN106105188B (en) shooting system
US20170297491A1 (en) Image generation device and image generation method
JPWO2015045568A1 (en) Predicted course presentation device and predicted course presentation method
US20190375332A1 (en) Automatic mirror and adjustment
JP5729110B2 (en) Image processing apparatus and image processing method
US20220284221A1 (en) Deep learning based parametrizable surround vision
WO2015014883A1 (en) Method for generating a look-up table in the operation of a camera system, camera system and motor vehicle
JP5195776B2 (en) Vehicle periphery monitoring device
CN113646769B (en) System and method for image normalization
US11770495B2 (en) Generating virtual images based on captured image data
KR101729473B1 (en) Apparatus and method for compensating camera
JP6855254B2 (en) Image processing device, image processing system, and image processing method
JP2021002790A (en) Camera parameter setting device, camera parameter setting method, and camera parameter setting program
JP6772716B2 (en) Peripheral monitoring device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14749747

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14749747

Country of ref document: EP

Kind code of ref document: A1