
CN117274384A - A camera pose correction method, device, computer equipment and storage medium - Google Patents

A camera pose correction method, device, computer equipment and storage medium

Info

Publication number
CN117274384A
CN117274384A
Authority
CN
China
Prior art keywords
camera
pitch angle
coordinates
target image
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311268699.5A
Other languages
Chinese (zh)
Inventor
陶明明
陈胤子
张振林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Automotive Innovation Corp
Original Assignee
China Automotive Innovation Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Automotive Innovation Corp filed Critical China Automotive Innovation Corp
Priority to CN202311268699.5A priority Critical patent/CN117274384A/en
Publication of CN117274384A publication Critical patent/CN117274384A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract


The present application relates to the field of autonomous driving, and in particular to a camera pose correction method, device, computer equipment, storage medium and computer program product. The method includes: acquiring target image information, environment monitoring information and the intrinsic parameters of a target camera; obtaining the imaging coordinates of lane line sampling points on the target image from the target image information, and determining the conversion relationship between the monitored coordinates and the imaging coordinates of the lane line sampling points from the camera intrinsics, the imaging coordinates and the monitored coordinates; determining the vanishing point information corresponding to the lane lines in the target image from the conversion relationship, the environment monitoring information and the target image information; and determining a pitch angle parameter of the target camera from the vanishing point information and correcting the pose of the target camera according to the pitch angle parameter. This method takes the angular relationship between the vehicle body and the road into account when calculating the pitch angle, thereby improving the accuracy of the pitch angle calculation result.

Description

Camera pose correction method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of autonomous driving technology, and in particular, to a camera pose correction method, apparatus, computer device, storage medium, and computer program product.
Background
Autonomous driving refers to the safe, computer-controlled operation of a motor vehicle without active human input, achieved through the combination of technologies such as artificial intelligence, visual computing, radar, monitoring devices and global positioning systems. Lane line detection is one of the important components of autonomous driving technology: a self-driving car must perceive lane lines of different colors and under different illumination conditions. Lane line detection guides the vehicle to travel within the correct area, provides a basis for actions such as automatic cruising, lane keeping and overtaking, and can warn the driver when the vehicle deviates from its lane, all of which contribute to safe driving. When the pose of the vehicle changes, the pitch angle of the camera changes with it; a change in camera pitch angle changes the position of a target relative to the vehicle, causing problems such as inaccurate ranging and speed measurement of targets and inaccurate lane line positioning, which increase risk during automated driving. How to correct the camera pitch angle effectively is therefore a problem to be solved in the art.
In the related art, the camera pitch angle is corrected by setting a number of sampling levels, each corresponding to different coordinate values of the lane lines in the X direction. At each sampling level, the vanishing point between each pair of adjacent lane lines is calculated, forming a set of vanishing points; a final vanishing point is then selected from this set through a process such as median filtering, and the corresponding pitch angle is calculated from it.
However, this conventional camera pitch angle correction method has the following technical problem:
it can achieve pitch angle correction on a flat road, but under special driving conditions such as vehicle jolting, rapid acceleration or a change in road gradient, the vanishing point of the lane lines is difficult to calculate accurately, so the pitch angle correction result is inaccurate.
Disclosure of Invention
Based on the foregoing, it is desirable to provide a camera pose correction method, apparatus, computer device, computer-readable storage medium and computer program product that can take the angular relationship between the vehicle body and the road into account when calculating the pitch angle, improving the accuracy of the pitch angle calculation result.
In a first aspect, the present application provides a camera pose correction method. The method comprises the following steps:
acquiring target image information, environment monitoring information and camera internal parameters of a target camera, wherein the environment monitoring information comprises monitoring coordinates of lane line sampling points;
acquiring imaging coordinates of the lane line sampling points on a target image based on the target image information, and determining a conversion relation between the monitoring coordinates and the imaging coordinates of the lane line sampling points according to the camera internal parameters, the imaging coordinates and the monitoring coordinates;
determining vanishing point information corresponding to the lane lines in the target image according to the conversion relation, the environment monitoring information and the target image information;
and determining a pitch angle parameter of the target camera based on the vanishing point information, and correcting the pose of the target camera according to the pitch angle parameter.
In one embodiment, the determining the pitch angle parameter of the target camera based on the vanishing point information, and correcting the pose of the target camera according to the pitch angle parameter includes:
acquiring a plurality of first pitch angle parameters corresponding to different lane lines;
and determining the pitch angle parameters for realizing pose correction according to the plurality of first pitch angle parameters.
In one embodiment, the determining the pitch angle parameters for achieving pose correction according to the plurality of first pitch angle parameters includes:
and based on a preset median filtering algorithm, carrying out fusion processing on a plurality of first pitch angle parameters to obtain the pitch angle parameters.
In one embodiment, the acquiring the imaging coordinates of the lane line sampling point on the target image based on the target image information, and determining the conversion relationship between the monitoring coordinates and the imaging coordinates of the lane line sampling point according to the camera internal parameter, the imaging coordinates and the monitoring coordinates includes:
acquiring a first vector from the target camera to a monitoring coordinate point according to the environment monitoring information;
determining a focal length parameter of the target camera according to the camera internal parameters, and determining a second vector from the target camera to an imaging coordinate point according to the focal length parameter;
and acquiring a first proportional coefficient between the first vector and the second vector, and determining a conversion relation between the monitoring coordinate and the imaging coordinate according to the first proportional coefficient.
In one embodiment, the determining vanishing point information corresponding to the lane line in the target image according to the conversion relation, the environment monitoring information and the target image information includes:
constructing a third vector which takes the camera coordinate system as an origin and is parallel to the lane line, and acquiring projection point parameters of the third vector in the target image;
determining imaging lane line widths in the target image according to the two imaging coordinates, and acquiring real lane line widths according to the conversion relation;
and determining the vanishing point information according to a second proportionality coefficient between the imaging lane line width and the real lane line width.
In one embodiment, the determining the imaging lane line width in the target image according to the two imaging coordinates, and the obtaining the real lane line width according to the conversion relation includes:
acquiring yaw angle parameters of the target camera according to the environment monitoring information;
the real lane line width is determined based on a trigonometric function relationship between a fourth vector between two of the monitored coordinate points and the camera coordinate system.
In a second aspect, the present application further provides a camera pose correction apparatus. The device comprises:
the data acquisition module is used for acquiring target image information, environment monitoring information and camera internal parameters of a target camera, wherein the environment monitoring information comprises monitoring coordinates of lane line sampling points;
the conversion relation module is used for acquiring imaging coordinates of the lane line sampling points on a target image based on the target image information, and determining a conversion relation between the monitoring coordinates and the imaging coordinates of the lane line sampling points according to the camera internal parameters, the imaging coordinates and the monitoring coordinates;
the vanishing point calculation module is used for determining vanishing point information corresponding to the lane lines in the target image according to the conversion relation, the environment monitoring information and the target image information;
and the pose correction module is used for determining a pitch angle parameter of the target camera based on the vanishing point information and correcting the pose of the target camera according to the pitch angle parameter.
In one embodiment, the pose correction module includes:
the first pitch angle parameter module is used for acquiring a plurality of first pitch angle parameters corresponding to different lane lines;
and the multi-parameter fusion module is used for determining the pitch angle parameters for realizing pose correction according to a plurality of the first pitch angle parameters.
In one embodiment, the multi-parameter fusion module includes:
and the parameter fusion module is used for carrying out fusion processing on a plurality of first pitch angle parameters based on a preset median filtering algorithm to obtain the pitch angle parameters.
In one embodiment, the conversion relation module includes:
the first vector module is used for acquiring a first vector from the target camera to a monitoring coordinate point according to the environment monitoring information;
a second vector module for determining a focal length parameter of the target camera according to the camera internal parameters, and determining a second vector from the target camera to an imaging coordinate point according to the focal length parameter;
and the scaling factor module is used for acquiring a first scaling factor between the first vector and the second vector, and determining the conversion relation between the monitoring coordinate and the imaging coordinate according to the first scaling factor.
In one embodiment, the vanishing point calculating module includes:
the third vector module is used for constructing a third vector which takes the camera coordinate system as an origin and is parallel to the lane line, and acquiring projection point parameters of the third vector in the target image;
the real lane line width module is used for determining imaging lane line widths in the target image according to the two imaging coordinates and acquiring the real lane line widths according to the conversion relation;
and the second scaling factor module is used for determining the vanishing point information according to a second scaling factor between the imaging lane line width and the real lane line width.
In one embodiment, the real lane line width module comprises:
the yaw angle parameter module is used for acquiring yaw angle parameters of the target camera according to the environment monitoring information;
and the trigonometric function relation module is used for determining the width of the real lane line based on a trigonometric function relation between a fourth vector between the two monitoring coordinate points and the camera coordinate system.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of a camera pose correction method according to any one of the embodiments of the first aspect when the processor executes the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of a camera pose correction method according to any one of the embodiments of the first aspect.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of a camera pose correction method according to any embodiment of the first aspect.
The camera pose correction method, camera pose correction apparatus, computer device, storage medium and computer program product described above achieve, through the technical features set out below, beneficial effects addressing the technical problems identified in the background:
in the process of detecting the pitch angle parameters of the camera, firstly, data such as target image information, environment monitoring information and the like which can be acquired by the vehicle-mounted sensor are acquired, and camera internal parameters of the camera configured by the vehicle can also be acquired. And then, acquiring imaging coordinates of the lane line sampling points according to the target image information, so as to acquire the conversion relation between the monitoring coordinates and the imaging coordinates according to the camera internal parameters, the imaging coordinates and the monitoring coordinates. After the conversion relation between the coordinates in the two coordinate systems is obtained, vanishing point information can be obtained through calculation, so that pitch angle parameters are determined according to the vanishing point information, and finally, the pose of the target camera can be corrected according to the pitch angle parameters. In the implementation, the conversion relation between the coordinates in the camera coordinate system and the image coordinate system is obtained in advance, so that the calculation is performed on the basis of the conversion relation when the vanishing point information is calculated, the conditions of jolt, gradient and the like of a reason are reflected in parameters, the accuracy of pitch angle calculation is improved, and the effect of camera pose correction is improved.
Drawings
FIG. 1 is a schematic diagram of a first process of a camera pose correction method according to an embodiment;
FIG. 2 is a schematic diagram of a second process of a camera pose correction method according to another embodiment;
FIG. 3 is a third flow chart of a camera pose correction method according to another embodiment;
FIG. 4 is a fourth flowchart of a camera pose correction method according to another embodiment;
FIG. 5 is a schematic diagram of the geometric relationship between the monitored coordinates and the imaged coordinates in one embodiment;
FIG. 6 is a fifth flowchart of a camera pose correction method according to another embodiment;
FIG. 7 is a schematic diagram of the geometric relationship of lane widths in one embodiment;
FIG. 8 is a schematic diagram of the geometry of the pitch angle in one embodiment;
FIG. 9 is a sixth flowchart of a camera pose correction method according to another embodiment;
FIG. 10 is a block diagram showing a configuration of a camera pose correction apparatus according to an embodiment;
FIG. 11 is an internal block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, a camera pose correction method is provided. This embodiment is illustrated with the method applied to a terminal; it can be understood that the method can also be applied to a server, or to a system comprising a terminal and a server and implemented through interaction between the two.
In this embodiment, the method includes the steps of:
step 102: and acquiring target image information, environment monitoring information and camera internal parameters of a target camera, wherein the environment monitoring information comprises monitoring coordinates of lane line sampling points.
The target image information may refer to image information that the vehicle acquires through a vision sensor for the environment in a specific direction, presented in image form; the positions of sampling points in the target image are determined in an image coordinate system. The environment monitoring information may refer to environmental information within the sensing range of the vehicle, acquired through other types of sensors; it may include the sensed position of a sampling point in the environment, determined in the camera coordinate system. The camera intrinsics may refer to fixed parameters of the on-board camera, such as its imaging focal length.
For example, the terminal may acquire the target image information, environment monitoring information, and camera intrinsics required for the analysis.
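For a concrete sense of how the camera intrinsics relate the camera coordinate system to the image coordinate system used in the steps below, a minimal pinhole-projection sketch follows. The patent itself contains no code, so this is illustrative only; the parameter names fx, fy, cx, cy and the sample values are assumptions.

```python
def project_point(p_cam, fx, fy, cx, cy):
    """Project a 3-D point in the camera coordinate system onto the image
    plane with a pinhole model: u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    X, Y, Z = p_cam
    return fx * X / Z + cx, fy * Y / Z + cy

# A point 10 m ahead, 1 m to the right and 0.5 m below the optical axis,
# with assumed intrinsics (focal lengths 1000 px, principal point 640, 360):
u, v = project_point((1.0, 0.5, 10.0), fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
```

With these illustrative values the point lands at pixel (740, 410), to the lower right of the principal point, as expected for a point right of and below the optical axis.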
Step 104: and acquiring imaging coordinates of the lane line sampling points on a target image based on the target image information, and determining a conversion relation between the monitoring coordinates and the imaging coordinates of the lane line sampling points according to the camera internal parameters, the imaging coordinates and the monitoring coordinates.
The lane line can refer to a marking line used for guiding and limiting the action range of the vehicle in the road, the lane line needs to be detected in order to realize dynamic control of the vehicle body in automatic driving, a lane line sampling point can be selected on the perceived lane line according to a preset sampling rule in detection, and the lane line is detected according to the lane line sampling point.
For example, the terminal may analyze the target image information to obtain the imaging coordinates of the lane line sampling points on the target image; these imaging coordinates are expressed in the image coordinate system. The terminal can also obtain the monitored coordinates of the lane line sampling points from the environment monitoring information; these are expressed in the camera coordinate system. Because of the camera's imaging focal length, there is a focal-length-dependent offset between the camera coordinate system and the image coordinate system; moreover, because the road surface is rarely ideally flat and the vehicle body rarely stays exactly parallel to the lane lines, the imaging coordinates and the monitored coordinates of a lane line sampling point differ. By acquiring both the imaging coordinates and the monitored coordinates, the conversion relationship between the coordinates of the same lane line sampling point in the two coordinate systems can be obtained.
Step 106: and determining vanishing point information corresponding to the lane lines in the target image according to the conversion relation, the environment monitoring information and the target image information.
The vanishing point may refer to the image point at which lines that are parallel in three-dimensional space appear to intersect; in this embodiment it refers to the intersection point of the lane lines in the image.
For example, after determining the transformation relation of the coordinates of the lane line sampling points in the two coordinate systems, the obtained transformation relation may be used to determine vanishing point information corresponding to the lane line in the target image according to the environment monitoring information and the mathematical relation of the data obtained from the target image information.
Step 108: and determining a pitch angle parameter of the target camera based on the vanishing point information, and correcting the pose of the target camera according to the pitch angle parameter.
The pitch angle may refer to the angle between the ground and a vector that is parallel to the vehicle body axis and points forward; the pitch angle parameter is the parameter describing this pitch angle.
The terminal may obtain the pitch angle parameter of the camera according to the mathematical relationship between the vanishing point and the pitch angle after obtaining the vanishing point information. Thus, the terminal can correct the pose of the target camera according to the pitch angle parameters.
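The mathematical relationship between the vanishing point and the pitch angle mentioned above can be sketched as follows. This is the standard pinhole-model relation, not necessarily the patent's exact formulation; cy and fy denote an assumed principal-point ordinate and vertical focal length in pixels.

```python
import math

def pitch_from_vanishing_point(vp_y, cy, fy):
    # When the lane lines lie in the ground plane, the vertical offset of
    # their vanishing point from the principal point encodes the camera
    # pitch: tan(pitch) = (vp_y - cy) / fy.
    return math.atan2(vp_y - cy, fy)
```

A vanishing point exactly at the principal point gives zero pitch; one 100 px below it with fy = 1000 px gives atan(0.1), roughly 5.7 degrees.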
This camera pose correction method reasonably combines the technical features of the above embodiment and achieves the beneficial effect of solving the technical problems described in the background:
In the process of determining the pitch angle parameter of the camera, the data that the on-board sensors can acquire, such as the target image information and the environment monitoring information, are obtained first, together with the intrinsic parameters of the camera mounted on the vehicle. The imaging coordinates of the lane line sampling points are then obtained from the target image information, so that the conversion relationship between the monitored coordinates and the imaging coordinates can be derived from the camera intrinsics, the imaging coordinates and the monitored coordinates. Once the conversion relationship between the coordinates in the two coordinate systems is known, the vanishing point information can be calculated, the pitch angle parameter can be determined from it, and finally the pose of the target camera can be corrected according to the pitch angle parameter. Because the conversion relationship between coordinates in the camera coordinate system and the image coordinate system is obtained in advance, and the vanishing point information is calculated on that basis, conditions such as road jolt and gradient are reflected in the parameters, which improves the accuracy of the pitch angle calculation and thus the effect of the camera pose correction.
In one embodiment, as shown in FIG. 2, step 108 includes:
step 202: acquiring a plurality of first pitch angle parameters corresponding to different lane lines;
for example, the terminal may take a plurality of different lane lines as references to obtain a plurality of first pitch angle parameters.
Step 204: and determining the pitch angle parameters for realizing pose correction according to the plurality of first pitch angle parameters.
For example, the terminal may determine, from the plurality of different first pitch angle parameters, the pitch angle parameter that is ultimately used to achieve pose correction.
In the embodiment, in a multi-lane scene, information of a plurality of lanes is simultaneously referred to, a plurality of different first pitch angle parameters are obtained, and a final pitch angle parameter is determined based on the plurality of first pitch angle parameters, so that the effectiveness of the pitch angle parameters is improved.
In one embodiment, as shown in FIG. 3, step 204 includes:
step 302: and based on a preset median filtering algorithm, carrying out fusion processing on a plurality of first pitch angle parameters to obtain the pitch angle parameters.
Median filtering may refer to an algorithm that selects a representative value from multiple data values by taking their median; it can therefore serve as a data fusion algorithm that fuses multiple inputs into a single output.
In this embodiment, the pitch angle calculation results of the multiple lanes are fused through a median filtering algorithm, which is helpful for improving the accuracy of pitch angle parameters.
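The fusion step described above can be sketched with Python's standard library. The sample angles are invented for illustration; the patent does not specify the filter's window or input size.

```python
import statistics

def fuse_pitch_angles(first_pitch_angles):
    """Fuse the per-lane-line first pitch angle parameters into a single
    pitch angle parameter via median filtering, so that one outlier
    estimate (e.g. from a badly detected lane line) is rejected."""
    return statistics.median(first_pitch_angles)

# Five per-lane estimates in radians; the 0.35 outlier does not survive:
fused = fuse_pitch_angles([0.020, 0.018, 0.35, 0.021, 0.019])
```

Here `fused` is 0.02, the middle of the five sorted values, unaffected by the outlier that a simple mean would have absorbed.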
In one embodiment, as shown in FIG. 4, step 104 includes:
step 402: and acquiring a first vector from the target camera to a monitoring coordinate point according to the environment monitoring information.
For example, as shown in fig. 5, the terminal may acquire the first vectors from the target camera to the monitored coordinate points according to the environment monitoring information. For example, taking the camera origin as A and the monitored coordinate points of the paired lane line sampling points as B and C, the first vectors AB and AC may be obtained.
Step 404: and determining a focal length parameter of the target camera according to the camera internal parameters, and determining a second vector from the target camera to an imaging coordinate point according to the focal length parameter.
For example, the terminal may determine the focal length parameter f of the target camera from the camera intrinsics. In this way, the terminal may determine the second vectors from the camera origin of the target camera to the imaging coordinate points based on the focal length parameter. For example, with the camera origin A and the imaging coordinate points of the paired lane line sampling points B' and C', the second vectors AB' and AC' may be obtained.
Step 406: and acquiring a first proportional coefficient between the first vector and the second vector, and determining a conversion relation between the monitoring coordinate and the imaging coordinate according to the first proportional coefficient.
Illustratively, the terminal may set the scaling factors between the first vectors and the second vectors to m and n, so that AB = m·AB' and AC = n·AC', and determine the conversion relationship between the monitored coordinates and the imaging coordinates from this mathematical relationship, where f may represent the focal length of the camera, the coordinates of B' may be (x2, y2, f), and the coordinates of C' may be (x1, y1, f).
In this embodiment, the scaling factor between the monitoring coordinates and the imaging coordinates can be obtained by solving the mathematical relationship between the first vector and the second vector.
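Steps 402 to 406 can be sketched under a standard pinhole-camera assumption; the function name and coordinate conventions below are hypothetical, not taken from the original. A monitored point B in the camera frame lies on the ray through its image B' = (x, y, f), so B = m·B' for a scalar m recoverable from the depth:

```python
import numpy as np

def first_scale_factor(world_point, image_point, focal_length):
    # world_point: camera-frame coordinates of a monitored point B (metres).
    # image_point: pixel coordinates (x, y) of its image B'.
    # Under the pinhole model, B lies on the ray through B' = (x, y, f),
    # so B = m * B' for a scalar m recovered from the depth.
    B = np.asarray(world_point, dtype=float)
    B_img = np.array([image_point[0], image_point[1], focal_length], dtype=float)
    m = B[2] / focal_length          # depth over focal length
    reconstructed = m * B_img        # scaling B' by m reproduces B
    return m, reconstructed
```

For a point 20 m ahead imaged at (100 px, 50 px) with f = 1000 px, m works out to 0.02, and scaling B' by m reproduces the monitored point, which is the conversion relationship of step 406 for that point.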
In one embodiment, as shown in FIG. 6, step 106 may include:
step 602: and constructing a third vector which takes the camera coordinate system as an origin and is parallel to the lane line, and acquiring a projection point parameter of the third vector in the target image.
For example, as shown in fig. 7, the terminal may construct a third vector that takes the origin of the camera coordinate system as its starting point and is parallel to the lane line, thereby acquiring the projection point parameter of the third vector in the target image. For example, the origin of the camera may be O, and the projection point of the third vector in the target image may be the vanishing point VP.
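Step 602 can be illustrated under pinhole assumptions: a direction vector parallel to the lane line, anchored at the camera origin, projects to the vanishing point as the image of a point at infinity. The intrinsic-matrix formulation below is an assumption for illustration, not taken from the original:

```python
import numpy as np

def vanishing_point(direction, K):
    # direction: camera-frame vector parallel to the lane line.
    # K: 3x3 camera intrinsic matrix.
    # A point at infinity along `direction` projects to K @ d;
    # dividing by the homogeneous coordinate gives the pixel
    # coordinates of the vanishing point VP.
    p = K @ np.asarray(direction, dtype=float)
    return p[:2] / p[2]
```

For instance, with f = 1000 px and principal point (640, 360), a direction straight down the optical axis, (0, 0, 1), projects to the principal point itself, as expected.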
Step 604: and determining the imaging lane line width in the target image according to the two imaging coordinates, and acquiring the real lane line width according to the conversion relation.
For example, the terminal may determine the imaging lane line width in the target image according to the two imaging coordinates, and acquire the real lane line width according to the conversion relationship. For example, the real lane line width may be W, namely the modulus of the vector BC, and the angle α between the vector BC and the imaging plane xoy may be the yaw angle of the vehicle, which may be expressed as follows:
in addition, the imaging lane line width may be w, and the projection point parameters may be as follows:
where dy may be the distance from the vanishing point VP to the midpoint of the vector B'C', and CamH may be the distance from the point P to the midpoint of the vector BC, that is, the mounting height of the camera.
Step 606: and determining the vanishing point information according to a second proportionality coefficient between the imaging lane line width and the real lane line width.
For example, the terminal may determine the vanishing point information according to a second scale factor between the imaging lane line width and the real lane line width. The second scale factor may be a trigonometric function. The coordinates of the vanishing point VP may be (pt_x, pt_y - dy), where the coordinate point (pt_x, pt_y) is the midpoint of B'C'.
Accordingly, as shown in fig. 8, the pitch angle parameter may be as follows:
where c(c_x, c_y) may represent the coordinates of the image center point, and pitch and yaw may represent the angles illustrated in the figure, respectively.
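Assuming zero roll and the usual pinhole geometry, a common form of this relation recovers the pitch angle from the vertical offset of the vanishing point relative to the image center row; the function name below is illustrative:

```python
import math

def pitch_from_vp(vp_y, c_y, focal_length):
    # With zero roll, the lane vanishing point sits (vp_y - c_y)
    # pixels away from the image centre row c_y; the pinhole model
    # then gives tan(pitch) = (vp_y - c_y) / f.
    return math.atan2(vp_y - c_y, focal_length)
```

A vanishing point 100 px below a centre row of 540 px with f = 1000 px yields a pitch of atan(0.1), roughly 5.7 degrees; if the vanishing point coincides with the centre row, the pitch is zero.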
In one embodiment, as shown in fig. 9, step 604 may include:
step 902: and acquiring yaw angle parameters of the target camera according to the environment monitoring information.
Step 904: the real lane line width is determined based on a trigonometric function relationship between a fourth vector between two of the monitored coordinate points and the camera coordinate system.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their execution order is likewise not necessarily sequential, and they may be performed in turns or alternately with at least some of the other steps, sub-steps, or stages.
Based on the same inventive concept, the embodiment of the application also provides a camera pose correction device for realizing the camera pose correction method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation of one or more embodiments of the camera pose correction device provided below may refer to the limitation of one camera pose correction method hereinabove, and will not be repeated herein.
In one embodiment, as shown in fig. 10, there is provided a camera pose correction apparatus including: the system comprises a data acquisition module, a conversion relation module, a vanishing point calculation module and a pose correction module, wherein:
the data acquisition module is used for acquiring target image information, environment monitoring information and camera internal parameters of a target camera, wherein the environment monitoring information comprises monitoring coordinates of lane line sampling points;
the conversion relation module is used for acquiring imaging coordinates of the lane line sampling points on a target image based on the target image information, and determining a conversion relation between the monitoring coordinates and the imaging coordinates of the lane line sampling points according to the camera internal parameters, the imaging coordinates and the monitoring coordinates;
the vanishing point calculation module is used for determining vanishing point information corresponding to the lane lines in the target image according to the conversion relation, the environment monitoring information and the target image information;
and the pose correction module is used for determining a pitch angle parameter of the target camera based on the vanishing point information and correcting the pose of the target camera according to the pitch angle parameter.
In one embodiment, the pose correction module includes:
the first pitch angle parameter module is used for acquiring a plurality of first pitch angle parameters corresponding to different lane lines;
and the multi-parameter fusion module is used for determining the pitch angle parameters for realizing pose correction according to a plurality of the first pitch angle parameters.
In one embodiment, the multi-parameter fusion module includes:
and the parameter fusion module is used for carrying out fusion processing on a plurality of first pitch angle parameters based on a preset median filtering algorithm to obtain the pitch angle parameters.
In one embodiment, the conversion relation module includes:
the first vector module is used for acquiring a first vector from the target camera to a monitoring coordinate point according to the environment monitoring information;
a second vector module for determining a focal length parameter of the target camera according to the camera internal parameters, and determining a second vector from the target camera to an imaging coordinate point according to the focal length parameter;
and the scaling factor module is used for acquiring a first scaling factor between the first vector and the second vector, and determining the conversion relation between the monitoring coordinate and the imaging coordinate according to the first scaling factor.
In one embodiment, the vanishing point calculating module includes:
the third vector module is used for constructing a third vector which takes the camera coordinate system as an origin and is parallel to the lane line, and acquiring projection point parameters of the third vector in the target image;
the real lane line width module is used for determining imaging lane line widths in the target image according to the two imaging coordinates and acquiring the real lane line widths according to the conversion relation;
and the second scaling factor module is used for determining the vanishing point information according to a second scaling factor between the imaging lane line width and the real lane line width.
In one embodiment, the real lane line width module comprises:
the yaw angle parameter module is used for acquiring yaw angle parameters of the target camera according to the environment monitoring information;
and the trigonometric function relation module is used for determining the width of the real lane line based on a trigonometric function relationship between the camera coordinate system and a fourth vector connecting the two monitoring coordinate points.
Each module in the above camera pose correction device may be implemented in whole or in part by software, by hardware, or by a combination thereof. The above modules may be embedded in hardware in, or independent of, the processor in the computer device, or may be stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure thereof may be as shown in fig. 11. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input means. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program when executed by a processor implements a camera pose correction method. The display unit of the computer device is used for forming a visual picture, and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be a key, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 11 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the steps of the method embodiments described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or quantum-computing-based data processing logic units.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above embodiments merely represent several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the patent application. It should be noted that those skilled in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A camera pose correction method, the method comprising:
acquiring target image information, environment monitoring information and camera internal parameters of a target camera, wherein the environment monitoring information comprises monitoring coordinates of lane line sampling points;
acquiring imaging coordinates of the lane line sampling points on a target image based on the target image information, and determining a conversion relation between the monitoring coordinates and the imaging coordinates of the lane line sampling points according to the camera internal parameters, the imaging coordinates and the monitoring coordinates;
determining vanishing point information corresponding to the lane lines in the target image according to the conversion relation, the environment monitoring information and the target image information;
and determining a pitch angle parameter of the target camera based on the vanishing point information, and correcting the pose of the target camera according to the pitch angle parameter.
2. The method of claim 1, wherein the determining a pitch angle parameter of the target camera based on the vanishing point information, and correcting the pose of the target camera according to the pitch angle parameter comprises:
acquiring a plurality of first pitch angle parameters corresponding to different lane lines;
and determining the pitch angle parameters for realizing pose correction according to the plurality of first pitch angle parameters.
3. The method according to claim 2, wherein said determining said pitch angle parameters for achieving pose correction from a number of said first pitch angle parameters comprises:
and based on a preset median filtering algorithm, carrying out fusion processing on a plurality of first pitch angle parameters to obtain the pitch angle parameters.
4. The method of claim 1, wherein the acquiring imaging coordinates of the lane-line sampling point on the target image based on the target image information, determining a conversion relationship between the monitoring coordinates and the imaging coordinates of the lane-line sampling point according to the camera internal reference, the imaging coordinates, and the monitoring coordinates comprises:
acquiring a first vector from the target camera to a monitoring coordinate point according to the environment monitoring information;
determining a focal length parameter of the target camera according to the camera internal parameters, and determining a second vector from the target camera to an imaging coordinate point according to the focal length parameter;
and acquiring a first proportional coefficient between the first vector and the second vector, and determining a conversion relation between the monitoring coordinate and the imaging coordinate according to the first proportional coefficient.
5. The method of claim 4, wherein the determining vanishing point information corresponding to a lane line in the target image according to the conversion relation, the environmental monitoring information, and the target image information comprises:
constructing a third vector which takes the camera coordinate system as an origin and is parallel to the lane line, and acquiring projection point parameters of the third vector in the target image;
determining imaging lane line widths in the target image according to the two imaging coordinates, and acquiring real lane line widths according to the conversion relation;
and determining the vanishing point information according to a second proportionality coefficient between the imaging lane line width and the real lane line width.
6. The method of claim 5, wherein determining an imaged lane line width in the target image from the two imaged coordinates and obtaining a real lane line width from the conversion relationship comprises:
acquiring yaw angle parameters of the target camera according to the environment monitoring information;
the real lane line width is determined based on a trigonometric function relationship between a fourth vector between two of the monitored coordinate points and the camera coordinate system.
7. A camera pose correction device, the device comprising:
the data acquisition module is used for acquiring target image information, environment monitoring information and camera internal parameters of a target camera, wherein the environment monitoring information comprises monitoring coordinates of lane line sampling points;
the conversion relation module is used for acquiring imaging coordinates of the lane line sampling points on a target image based on the target image information, and determining a conversion relation between the monitoring coordinates and the imaging coordinates of the lane line sampling points according to the camera internal parameters, the imaging coordinates and the monitoring coordinates;
the vanishing point calculation module is used for determining vanishing point information corresponding to the lane lines in the target image according to the conversion relation, the environment monitoring information and the target image information;
and the pose correction module is used for determining a pitch angle parameter of the target camera based on the vanishing point information and correcting the pose of the target camera according to the pitch angle parameter.
8. The camera pose correction apparatus according to claim 7, wherein said pose correction module comprises:
the first pitch angle parameter module is used for acquiring a plurality of first pitch angle parameters corresponding to different lane lines;
and the multi-parameter fusion module is used for determining the pitch angle parameters for realizing pose correction according to a plurality of the first pitch angle parameters.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202311268699.5A 2023-09-27 2023-09-27 A camera pose correction method, device, computer equipment and storage medium Pending CN117274384A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311268699.5A CN117274384A (en) 2023-09-27 2023-09-27 A camera pose correction method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117274384A true CN117274384A (en) 2023-12-22

Family

ID=89215725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311268699.5A Pending CN117274384A (en) 2023-09-27 2023-09-27 A camera pose correction method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117274384A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898638A (en) * 2018-06-27 2018-11-27 江苏大学 A kind of on-line automatic scaling method of vehicle-mounted camera
CN113744330A (en) * 2021-08-16 2021-12-03 苏州挚途科技有限公司 Vehicle-mounted camera pitch angle determining method and device and electronic equipment
WO2023028880A1 (en) * 2021-08-31 2023-03-09 华为技术有限公司 External parameter calibration method for vehicle-mounted camera and related apparatus
CN116309814A (en) * 2022-11-29 2023-06-23 北京斯年智驾科技有限公司 Vehicle pose determination method, device, computing device and medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119975368A (en) * 2025-01-20 2025-05-13 重庆长安汽车股份有限公司 Information acquisition method, device, computer equipment and storage medium
CN119975368B (en) * 2025-01-20 2025-09-19 重庆长安汽车股份有限公司 Information acquisition method, device, computer equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination