Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The terms "first," "second," and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of one type and are not limited in number; for example, the first object may be one or more objects. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
Furthermore, it should be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element.
A vehicle control method and apparatus according to an embodiment of the present application will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a vehicle control method according to an embodiment of the present application. As shown in Fig. 1, the vehicle control method includes:
S101, in a driver assistance system, collecting a panoramic view, point cloud data and driving information;
S102, judging the current driving environment based on the panoramic view and the driving information, and determining a driving mode based on the driving environment, wherein the driving mode represents a longitudinal speed control interval and a vehicle head steering angle interval in the corresponding driving environment;
S103, processing the panoramic view and the point cloud data by using the trained road perception model to obtain a road map containing concave-convex objects;
S104, in the road map, calculating the height mean square error of the concave-convex objects in each preset area, and if the height mean square error is smaller than a first threshold value, obtaining a driving path sequence based on the height mean square error;
S105, determining a target driving path from the driving path sequence based on a vehicle control factor and a height mean square error, and controlling the vehicle to run based on the target driving path, wherein the vehicle control factor at least comprises a vehicle speed, a transverse acceleration, a longitudinal acceleration and a vehicle head steering angle section;
S106, if the height mean square error exceeds the first threshold value and the concave-convex object exceeds a transverse vehicle control range, controlling the vehicle to decelerate based on a longitudinal vehicle control speed interval corresponding to the driving mode.
Specifically, the panoramic view, point cloud data and driving information are collected, and the current driving environment is judged according to the panoramic view and the driving information. The driver assistance system then automatically determines a suitable driving mode based on the driving environment; the driving mode represents the current driving environment as well as the longitudinal speed control interval and the vehicle head steering angle interval at the current vehicle speed. Afterwards, the panoramic view and the point cloud data are automatically processed by the trained road perception model to obtain a road map containing the concave-convex objects on the road.
Based on the road map, the height mean square error of the concave-convex objects in each preset area of the current road can be analyzed, and a driving path sequence, that is, a sequence of paths that can be passed smoothly, is obtained from the calculated height mean square errors. The most appropriate target driving path is then selected by comprehensively considering the height mean square error of each area and the vehicle control factors. If the calculated height mean square error exceeds the first threshold value and the coverage of the concave-convex object exceeds the transverse vehicle control range, the system performs a gentle deceleration according to the longitudinal vehicle control interval of the corresponding driving mode, so that the vehicle can still pass the area where the concave-convex object is located relatively smoothly even when no target driving path is available; at the same time, the vehicle is controlled to decelerate and prompt information is sent to the driver, so that the driver can keep the vehicle running stably.
Further, the trained road perception model can accurately identify pits and raised parts in the road, and render the pits and raised parts as concave-convex objects to obtain a road map for subsequent analysis and processing.
Further, the height mean square error of the concave-convex objects in each preset area is calculated to analyze the concave-convex situation of the road. The mean square error represents the height differences of the concave-convex objects in each preset area: the larger the mean square error, the larger the height differences in that area and the more severe the jitter when the vehicle drives over it. Bypassing is considered when the mean square error exceeds the first threshold value; however, if the range in which the mean square error exceeds the first threshold value is wider than the transverse vehicle control range, driving over the area slowly can be considered instead. The transverse vehicle control range can be determined according to factors such as the current speed and the driving mode, which is not specifically limited here, and different transverse vehicle control ranges of different vehicles in different situations can be determined through debugging in advance.
The height mean square error can be obtained by calculating the average height of the concave-convex objects, calculating the difference between the height of each concave-convex object and the average, squaring each difference, adding all squared differences, and finally dividing the sum by the number of concave-convex objects.
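As a minimal illustrative sketch of this calculation (the function name, units and sample values below are assumptions for illustration only, not part of the claimed method):

import numpy as np

def height_mean_square_error(heights_mm):
    """Height mean square error of the concave-convex objects in one preset area.

    heights_mm: heights of the detected concave-convex objects in the area
    (pits may be given as negative values), e.g. in millimetres.
    """
    heights = np.asarray(heights_mm, dtype=float)
    mean_height = heights.mean()                  # average height of the objects
    squared_diff = (heights - mean_height) ** 2   # squared difference to the mean
    return squared_diff.sum() / len(heights)      # divided by the number of objects

print(height_mean_square_error([12, 15, 10, 14]))   # small, uniform bumps: low value
print(height_mean_square_error([12, 15, 10, -80]))  # one deep pit: much larger value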
The range of each preset area can be determined according to the vehicle tire width and the vehicle body width, that is, a range that the vehicle tire cannot smoothly drive over or cannot bypass. For example, when the vehicle tire is 200 mm wide, areas with a width of 220 mm can be set for calculating the mean square error of the concave-convex objects in the corresponding area, and all 220 mm wide areas in the road map can then be traversed to obtain the driving path sequence. The specific area range is determined by combining influencing factors such as the actual road condition, the vehicle width and the wheel width, which is not limited here, so the adjustment for different vehicle types on different roads can be determined before the scheme is implemented. It should be understood that the width of the preset area represents a 2-dimensional area in the road map.
It should be further appreciated that, when determining the driving path sequence according to the height mean square error, the mean square errors of all preset areas in the road map may be analyzed comprehensively. As shown in Fig. 2, if the height mean square errors of areas A and E are smaller than the first threshold value but the height mean square errors of areas B, C, D and F are all larger than the first threshold value, the driving path sequence at that time includes the path A-E, which is not limited here.
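A simplified sketch of how such a driving path sequence might be assembled from the per-area values (the dictionary layout, area labels and numbers are illustrative assumptions; the actual traversal of the road map is not limited to this form):

def driving_path_sequence(area_mse, first_threshold):
    """Labels of the preset areas whose height mean square error is below the
    first threshold, i.e. areas the vehicle can drive over smoothly."""
    return [label for label, mse in area_mse.items() if mse < first_threshold]

# Example corresponding to Fig. 2: only areas A and E are below the threshold.
area_mse = {"A": 0.4, "B": 1.8, "C": 2.3, "D": 1.5, "E": 0.7, "F": 3.0}
print(driving_path_sequence(area_mse, first_threshold=1.0))  # ['A', 'E']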
The first threshold value may be determined comprehensively through parameters of the vehicle chassis, for example according to the chassis height, the chassis spring coefficient and the like; for instance, the height mean square error that the vehicle can accept without decelerating may be 1.0, but this is not limited here.
Of course, before the height mean square error is calculated, it can first be determined whether the height of a protrusion allows the vehicle to drive over it. For example, if a large obstacle exists ahead, it can be bypassed first, and if it cannot be bypassed, decelerating and stopping can be considered.
The longitudinal speed control interval represents the speed change interval within which the vehicle can gently accelerate and decelerate in the current driving environment. For example, when driving on an unpaved road at less than 30 km/h, the longitudinal speed control interval can be ±10 km/h, and the specific speed change can be automatically determined by the system according to the current driving state and driving environment. The vehicle head steering angle interval represents the controllable head deflection angle within which the vehicle can stay in its lane in the current driving environment. For example, the driving mode may be a wet-skid mode in rainy or snowy weather; the head steering angle interval may be set to 10° if the speed is below 60 km/h and to 5° if the speed is above 60 km/h. Of course, the specific values can be adjusted according to the corresponding chassis parameters without limitation, and also need to be determined comprehensively according to the specific road type (such as road width, road surface condition, expressway, urban road and the like), weather conditions, driving experience, cloud big data and so on.
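One possible way to organize such mode-dependent intervals is a simple lookup keyed by driving mode and speed, as sketched below; the numbers only restate the illustrative values of this paragraph and would in practice be calibrated per vehicle, road type, weather and cloud data:

def control_intervals(mode, speed_kmh):
    """Return (longitudinal speed control interval in +/- km/h, head steering
    angle interval in degrees); None means "not specified in this sketch"."""
    if mode == "unpaved" and speed_kmh < 30:
        return 10.0, None                               # +/-10 km/h below 30 km/h on unpaved road
    if mode == "wet":
        return None, (10.0 if speed_kmh < 60 else 5.0)  # 10 deg below, 5 deg above 60 km/h
    return None, None                                   # other modes: determined by debugging

print(control_intervals("wet", 45))                     # (None, 10.0)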
According to the method of this embodiment, the panoramic view, point cloud data and driving information can be acquired through multiple sensors, the driving environment and driving mode are determined, and the road condition is analyzed to obtain a road map containing concave-convex objects. A target driving path of the vehicle is obtained in combination with the road map, or, when no directly passable path exists, the vehicle is controlled to decelerate or stop. Smooth vehicle control when facing complex road conditions is thereby realized, vehicle shake is reduced, a comfortable driving experience is provided for passengers, and accidents caused by excessive speed or the driver reacting too late or insufficiently to a roadblock are avoided.
In some embodiments, before collecting the panoramic view, the point cloud data, and the driving information, the method further includes: responding to a vehicle self-checking result, and judging whether the vehicle starts a constant-speed cruising or lane keeping system if the vehicle sensor works normally and the vehicle is in a normal working condition; and if the vehicle starts the constant-speed cruising or lane keeping system, starting the vehicle control service.
Specifically, before entering the automatic vehicle control service, the vehicle still needs to perform a self-check. If the current vehicle state meets the preconditions for starting the vehicle control service, the vehicle control service can be started, and then each item of data entering the vehicle control service is collected.
Further, the vehicle self-check process at least includes checking the vehicle sensors and the vehicle working condition. For example, the camera is considered to work normally if it is not blocked, fogged or faulty; if the sensors can receive data normally, the vehicle working condition is checked next. If the vehicle working condition is normal, it is further judged whether the vehicle has started the constant-speed cruising or lane keeping system, and if so, the vehicle control service can be started.
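A minimal sketch of this start condition (the flag names are illustrative assumptions):

def may_start_vehicle_control(sensors_ok, working_condition_ok,
                              cruise_on, lane_keeping_on):
    """Start condition for the vehicle control service as described above."""
    if not (sensors_ok and working_condition_ok):
        return False                      # self-check failed: do not enter the service
    return cruise_on or lane_keeping_on   # require constant-speed cruising or lane keeping

print(may_start_vehicle_control(True, True, cruise_on=False, lane_keeping_on=True))  # True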
According to the method of this embodiment, the vehicle self-check verifies whether the vehicle sensors and the vehicle are in a normal state. If the sensors and the vehicle working condition are normal, the system further detects whether the vehicle has started the constant-speed cruising or lane keeping system, and the vehicle control service is started while the constant-speed cruising or lane keeping system is active. After the vehicle control service is started, the vehicle can automatically acquire the panoramic view, point cloud data, driving information and other data through the sensors.
In some embodiments, the driving information includes temperature, humidity and wheel speed, and collecting the panoramic view, the point cloud data and the driving information includes: acquiring a panoramic view through the capture of a camera, acquiring point cloud data according to a laser radar, and performing space alignment on the panoramic view and the point cloud data; the temperature, humidity and wheel speed are obtained through a temperature sensor, a humidity sensor and a wheel speed controller.
Specifically, the required data can be collected by the respective sensors and processed for later use. That is, image data of the surrounding environment can be captured by the cameras, and the panoramic view can be captured by a plurality of cameras (such as front, rear, front-left, front-right, rear-left and rear-right cameras) or by a panoramic camera. The laser radar can be arranged at the top of the vehicle or at another suitable position to acquire point cloud data, comprehensively scanning the surrounding environment to capture the most complete and accurate point cloud data and help the system generate a three-dimensional road map. The temperature and humidity information outside the vehicle can be acquired through the temperature sensor and the humidity sensor to assist the system in environmental analysis, and the wheel speed of each wheel can be acquired through the wheel speed controller, so that the real-time running state of the vehicle, such as speed and acceleration information, can be obtained.
After the data are acquired through the sensors, the point cloud data can be spatially synchronized with the panoramic view. Because the installation positions and angles of the cameras and the laser radar on the vehicle may differ, the point cloud data and the panoramic view received by the system are at different angles; therefore, the point cloud data and the panoramic view are spatially aligned and overlapped, which facilitates the accurate rendering of the three-dimensional concave-convex objects.
It should be appreciated that after the corresponding data is obtained by each sensor, the corresponding data may be temporarily stored in a database, and if the system determines that the vehicle control service is started, the corresponding data is collected from the database, so that a temporary database may be constructed for storing the data obtained by each sensor, which is not limited herein.
According to the method of the embodiment, the corresponding data can be acquired through the sensors and integrated into the driver assistance system, so that the system can be helped to more accurately sense and understand the surrounding environment, and further more accurate vehicle control decisions and control can be made.
In some embodiments, spatially aligning the panoramic view and the point cloud data includes: taking the center of the rear axle of the vehicle as the origin, establishing a right-handed front-left-up coordinate system with the driving direction as the x axis, and establishing a virtual coordinate point in the right-handed coordinate system; and performing spatial rotation and translation on the time-synchronized point cloud data and panoramic view based on the virtual coordinate point, respectively, so as to realize the spatial alignment of the point cloud data and the panoramic view.
Specifically, the point cloud data and the panoramic view can first be time-aligned to obtain time-synchronized point cloud data and a time-synchronized panoramic view. The main difference lies in the different sampling rates of the camera and the laser radar, so time synchronization can be performed by taking the lower sampling rate as the sampling standard.
When spatially aligning the point cloud data and the panoramic view, the front view and the front-view point cloud data can be aligned. First, a right-handed front-left-up coordinate system with the driving direction as the x axis can be established with the center of the rear axle of the vehicle as the origin, serving as the driving coordinate system, and a virtual coordinate point (Xq, 0, Zq) is conceived in the driving coordinate system as a reference point. Reference frames are extracted for the panoramic view and the point cloud data; during processing, the intrinsic and extrinsic parameters of the camera can be acquired through a checkerboard calibration method, and the panoramic view and the point cloud data are mapped onto the virtual coordinate point through spatial rotation and translation, so that they share the same angle and direction and spatial alignment is realized.
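A minimal sketch of the rotation-and-translation step, assuming the rotation matrix and translation vector are already known from calibration (the numerical values below are placeholders, not real extrinsics):

import numpy as np

def to_vehicle_frame(points, rotation, translation):
    """Map sensor points into the right-handed front-left-up vehicle frame
    (origin at the rear-axle center, x axis along the driving direction).

    points:      (N, 3) array in the sensor's own coordinate system.
    rotation:    (3, 3) rotation matrix from the sensor frame to the vehicle frame.
    translation: (3,)   position of the sensor in the vehicle frame.
    Both would come from calibration, e.g. the checkerboard method mentioned above."""
    return points @ rotation.T + translation

lidar_points = np.array([[5.0, 0.2, -0.1]])   # one lidar return, 5 m ahead of the sensor
R = np.eye(3)                                 # placeholder: sensor axes already aligned
t = np.array([1.5, 0.0, 1.8])                 # placeholder roof-mount offset
print(to_vehicle_frame(lidar_points, R, t))   # the same point in the vehicle frame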
Afterwards, a dense point cloud can be obtained by applying a super-resolution algorithm to the laser radar point cloud data, so as to improve the density and accuracy of the point cloud data. By outputting the target contours in the panoramic view, the 2D image information is matched with the laser radar point cloud data and abnormal points are removed, which ensures the accuracy and integrity of the data and further realizes the spatial synchronization of the laser radar and the visual information, providing reliable data support for subsequent perception and decision making. This further step of spatial synchronization can be performed when the panoramic view and the point cloud data are acquired, or within the trained road perception model; it is not limited here and can be determined according to the specific system implementation effect and data transmission effect.
According to the method, the point cloud data and the panoramic view can be mapped through establishing a coordinate system, so that data consistency and accuracy between the laser radar and the camera are achieved, and more reliable environment sensing and information processing capability is provided for the vehicle control service.
In some embodiments, determining the current driving environment based on the panoramic view and the driving information, and determining the driving mode based on the driving environment, includes: fusing the temperature, the humidity, the wheel speed and the panoramic view to obtain fused data; the fusion data are input into a multi-classification discriminator to judge, a driving environment is determined, and a driving mode is determined based on the driving environment and the vehicle speed, wherein the driving mode at least comprises a non-paved road surface mode, a wet-skid mode, a sunny mode and a fog barrier mode.
Specifically, the current driving environment can be judged through temperature, humidity, wheel speed and panoramic view, the data can be fused to obtain fusion data, the fusion data is input into a multi-classification discriminator to be judged, the current driving environment is obtained, and the driving mode is determined based on the driving environment and the vehicle speed.
Further, the multi-classification discriminator may be a pre-trained module that can select the driving mode through the driving environment. The multi-classification discriminator classifies the input data, namely the fused data of the temperature, humidity, wheel speed and panoramic view, to obtain the current driving environment; the driving environment at least includes weather conditions and road conditions. The most suitable driving mode, such as a non-paved road surface mode, a wet-skid mode, a sunny mode or a fog barrier mode, is then obtained according to the driving environment and the current speed, and the longitudinal vehicle control interval and the vehicle head steering angle interval of each driving mode can be determined according to specific conditions and prior debugging, which is not limited here.
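As a stand-in sketch for the discriminator and mode selection (a real system would use the trained multi-classification discriminator on the fused data rather than hand-written rules; the feature names and thresholds are illustrative assumptions):

def classify_environment(temperature_c, humidity_pct, wheel_slip, view_features):
    """Toy substitute for the trained multi-classification discriminator."""
    if view_features.get("visibility_m", 1000) < 150:
        return "fog"
    if (temperature_c < 4 and humidity_pct > 85) or wheel_slip > 0.2:
        return "wet"
    if not view_features.get("paved", True):
        return "unpaved"
    return "sunny"

def select_driving_mode(environment):
    """Driving mode corresponding to the classified driving environment."""
    return {"fog": "fog barrier mode", "wet": "wet-skid mode",
            "unpaved": "non-paved road surface mode", "sunny": "sunny mode"}[environment]

env = classify_environment(2, 90, 0.05, {"paved": True, "visibility_m": 800})
print(select_driving_mode(env))   # wet-skid mode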
As an example, suppose the current driving environment is an unpaved road in rainy and snowy weather, so the driving modes are the non-paved road surface mode and the wet-skid mode. When the current vehicle speed is less than 30 km/h, the longitudinal speed control interval of the non-paved road surface mode is ±10 km/h and the vehicle head deflection angle is 30°, while, through pre-debugged control variable adjustment, the longitudinal speed control interval of the wet-skid mode at speeds below 30 km/h is ±15 km/h and the vehicle head steering angle interval is 25°. It can therefore be comprehensively determined that, in the current driving environment, the longitudinal speed control interval is ±15 km/h and the vehicle head steering angle interval is 25°, so that the stability of the vehicle body can be ensured to the greatest extent while keeping the vehicle safe and not deviating from the lane.
It should be appreciated that the driving mode covers the full range of tasks while the vehicle control service is on; that is, the corresponding driving mode is not executed only when concave-convex objects appear on the road surface. When the road surface is flat, the driver assistance system can also control the vehicle according to the driving mode to achieve the most suitable current driving speed and direction. Likewise, when an obstacle that cannot be driven over is encountered on a road section, the vehicle can be controlled to bypass the obstacle according to the head steering angle interval of the driving mode, and of course the vehicle can be controlled to stop if the obstacle cannot be bypassed.
It should also be appreciated that the driving mode may set a head steering angle interval, or may simply control the vehicle to stay in the lane, i.e., without setting a specific head steering angle interval but automatically adjusting according to the width of the current road to keep the vehicle running in the lane. Of course, the head steering angle interval may also represent a steering angle interval that comprehensively considers lane keeping, which is not elaborated further here.
According to the method of the embodiment, the current driving environment can be judged according to the comprehensive data in the driver assistance system, and the proper driving mode is automatically determined, so that the driving safety and the comfort are improved, better driving experience and guarantee are provided for a driver, and reliable technical support is provided for realizing automatic driving.
In some embodiments, processing the panoramic view and the point cloud data using the trained road perception model to obtain a road map including concave-convex objects includes: carrying out three-dimensional modeling on the panoramic view using the trained road perception model to obtain a modeling result comprising pits and protrusions, constraining the modeling result with a dense point cloud, and rendering the constrained modeling result to obtain an initial road map comprising all concave-convex objects, wherein the dense point cloud is calculated by a super-resolution algorithm based on the point cloud data; and smoothing the initial road map through a window of a preset size to obtain the road map.
Specifically, the spatially synchronized point cloud data and panoramic view can be input into the trained road perception model for three-dimensional modeling to obtain an initial road map containing all road surface concave-convex objects; the initial road map is then smoothed, and negligible concave-convex objects that have no influence on vehicle driving are removed, so as to obtain the road map. It should be understood that the modeled scene of course also includes guideboards, plants, buildings and the like, but the present application focuses on the vehicle control service flow for road surface unevenness, so the concave-convex objects mainly represent pits and protrusions on the road surface, such as stones, puddles, and unevenness of a road surface crushed by vehicles.
Further, the processing procedure of the trained road perception model can be expressed as mapping road objects in a three-dimensional space (such as the Bird's Eye View, BEV, space). Six RGB images of multiple views, namely the panoramic view, can be used as input; an image encoder module completes the feature extraction in three-dimensional space to obtain three-dimensional features of the road elements; IPM (inverse perspective mapping) is adopted in a Transformer to complete the 2D-to-3D conversion; and BEV+Transformer and occupancy modules then perform feature enhancement in the three-dimensional space, so that three-dimensional modeling is carried out, i.e., elements such as the concave-convex objects of the road surface are accurately modeled. The concave-convex objects obtained by three-dimensional modeling are then rendered to obtain an initial road map containing all concave-convex objects, so that when the road elements are analyzed, not only protruding objects but also pit objects can be accurately identified; when pit objects are rendered, they extend in the negative direction of the z axis of the right-handed coordinate system, and the features of the three-dimensional modeling can be constrained by the previously obtained dense point cloud, so that a more accurate rendering result can be obtained.
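A minimal sketch of the point-cloud constraint on the modeled heights (the blending weight and grid layout are assumptions for illustration; the actual constraint inside the model is not limited to this form):

import numpy as np

def constrain_with_point_cloud(modeled_height, lidar_height, lidar_valid, weight=0.7):
    """Blend the vision-based modeled height map with the dense lidar height map
    wherever a lidar measurement exists; pits are negative values along the z axis
    of the right-handed coordinate system."""
    constrained = modeled_height.copy()
    constrained[lidar_valid] = (weight * lidar_height[lidar_valid]
                                + (1.0 - weight) * modeled_height[lidar_valid])
    return constrained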
After that, the initial road map can be smoothed: a window of a preset size (for example, a 3×3 sliding window) slides over the initial road map and the data within the window are smoothed to remove redundant data that do not affect vehicle driving, so that the road map becomes clearer and more continuous and the later calculation of the height mean square error is not affected by redundant data. In this way, a complete, accurate and precise road map can be obtained through the pre-trained road perception model.
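A minimal sketch of the sliding-window smoothing on a height map (the window size and mean filter are assumptions; other smoothing operators could equally be used):

import numpy as np

def smooth_road_map(height_map, window=3):
    """Smooth the initial road map with a sliding mean window (e.g. 3x3) so that
    negligible unevenness that does not affect driving is averaged out."""
    pad = window // 2
    padded = np.pad(height_map, pad, mode="edge")
    smoothed = np.empty_like(height_map, dtype=float)
    for i in range(height_map.shape[0]):
        for j in range(height_map.shape[1]):
            smoothed[i, j] = padded[i:i + window, j:j + window].mean()
    return smoothed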
It should be appreciated that the road perception model may be tested and debugged continuously to obtain a trained road perception model that accurately identifies road elements and concave-convex objects. During model training, data on pits can be acquired in a time-sequence manner, i.e., the visual changes of the same target at different distances and angles are recorded, the depth and width at which the depth change of the same target is most pronounced are recorded by the laser radar, and the pits are processed in sequence, but this is not specifically limited here.
According to the method of this embodiment, the protruding objects and pit objects can be accurately identified by rendering the concave-convex objects on the road with the trained road perception model, a complete and accurate road map is obtained, and a more reliable data basis is provided for the data analysis of the subsequent vehicle control service, so that the vehicle can run stably when facing concave-convex objects such as unpaved roads, puddles and stone obstacles.
In some embodiments, after obtaining the road map including the concave-convex object, further comprising: if the concave-convex object has a target concave-convex object higher than a second threshold value, acquiring the distance between the target concave-convex object and the vehicle; if the distance is smaller than the preset distance and the vehicle speed is greater than the preset vehicle speed, the vehicle control service is closed, and warning information is sent to the driver.
Specifically, if, during the vehicle control service, the trained road perception model detects a target concave-convex object higher than the second threshold value ahead of the vehicle, and the distance is too short while the vehicle speed is too fast, the vehicle control service can be closed and warning information can be sent to the driver. The warning information may remind the driver that there is an obstacle ahead that cannot be driven over and prompt the driver to decelerate and stop; of course, in a special emergency, the system can also control the vehicle to brake automatically and urgently, which is not elaborated here. It should be appreciated that an obstacle ahead that cannot be driven over may be one for which, in the current driving state and driving environment, steering around it would cause the vehicle to leave the current lane; therefore the vehicle may be controlled to decelerate and stop for driving safety so as to avoid traffic accidents, which is not limited here.
It should be further appreciated that the second threshold value may be set according to the height of the vehicle chassis, or may be determined comprehensively based on other factors, for example 300 mm, which is not limited here. The preset vehicle speed may be determined jointly according to the current driving state, the driving mode and the distance between the obstacle and the vehicle, that is, the preset vehicle speed needs to be determined according to the specific situation; the system may be trained with a large amount of data so that it can accurately calculate the preset vehicle speed for the current situation, and therefore the specific preset vehicle speed is not limited here.
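A minimal sketch of this check (the default threshold, distance and speed values are purely illustrative; in practice they depend on the chassis, driving mode and obstacle distance as described above):

def handle_tall_obstacle(object_height_mm, distance_m, speed_kmh,
                         second_threshold_mm=300, preset_distance_m=50,
                         preset_speed_kmh=40):
    """Close the vehicle control service and warn the driver when a target
    concave-convex object above the second threshold is too close and the
    vehicle is too fast."""
    if object_height_mm <= second_threshold_mm:
        return "continue vehicle control service"
    if distance_m < preset_distance_m and speed_kmh > preset_speed_kmh:
        return "close vehicle control service and warn driver"
    return "continue and monitor the obstacle"

print(handle_tall_obstacle(350, 30, 60))   # close vehicle control service and warn driver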
As an example, in foggy weather with a visible distance of 100 meters, the vehicle recognizes through the laser radar that an obstacle exists ahead, but the obstacle is invisible or very blurred in the camera view. If, within the current longitudinal vehicle control interval, the vehicle speed is too fast to decelerate to a stop before reaching the obstacle, and the obstacle cannot be bypassed within the head steering angle interval, the driver assistance system can prompt the driver to decelerate or brake urgently. The target concave-convex object here is mainly a protruding obstacle, but this is not limiting; the preset vehicle speed can be determined comprehensively according to different driving environments, driving modes and the like and realized in practical applications, which is not elaborated here.
It should be noted that, according to this embodiment, if the visible range is small when generating the road map, the road map may also be generated with infrared or other sensor-assisted point cloud data to expand the visible range of the road map, but this is not limiting.
Of course, besides the vehicle control service being closed when facing such an obstacle, the user can also manually exit the vehicle control service, or, if the user exits the constant-speed cruising or lane keeping system, the vehicle control service exits automatically. To turn the vehicle control service on again, the constant-speed cruising or lane keeping system can be turned on first and the vehicle control service then turned on manually.
According to the method of this embodiment, the prompts of the driver assistance system can help the driver anticipate the situation of the road ahead, and preventive measures can be taken when the vehicle speed is too high and the driver's field of view is limited, so as to avoid potential accidents or damage. The scheme therefore not only ensures the smooth running of the vehicle on an uneven road surface, but also allows targeted precautions according to the road situation, further improving the automatic driving system.
Fig. 3 is a flow chart of another vehicle control method according to an embodiment of the present application, and the present application is further described below with reference to fig. 3.
After the vehicle is powered on, a self-check can be performed while the vehicle is running. If any equipment is abnormal, a maintenance reminder is issued and the vehicle control service is not entered. If the state of each sensor and the vehicle state are normal, data acquisition is performed to obtain the data required by the vehicle control service; the acquired data are subjected to preliminary definition, labeling and training, and are then input into the trained road perception model for processing to obtain a road map containing concave-convex objects.
It is then judged whether the vehicle has started constant-speed cruising or lane keeping. If not, the vehicle control service is not started; if so, it is checked whether the vehicle control service is to be turned on. If the vehicle control service is not turned on, it is not executed; if it is turned on, the flow further enters the vehicle control service and the obtained road map is analyzed.
The obtained road map is analyzed to obtain the height mean square error of the concave-convex objects in each preset area, and the height mean square error is compared with the first threshold value. If the height mean square error does not exceed the first threshold value, an optimal target driving path is selected according to the vehicle control factors and the vehicle drives over it smoothly.
If the height mean square error exceeds the first threshold value, it is judged whether the vehicle can bypass, that is, whether the concave-convex object is within the transverse vehicle control range. If the vehicle can bypass, it is controlled to do so; if not, the vehicle decelerates based on the driving mode and the driver is notified, which can be regarded as exiting the vehicle control service.
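The overall decision of Fig. 3 can be sketched as follows (a simplified illustration; the bypass test and the returned actions are placeholders for the transverse/longitudinal vehicle control logic described above):

def vehicle_control_decision(area_mse, first_threshold, within_lateral_range):
    """Pick a smooth target path if possible, otherwise bypass within the
    transverse vehicle control range, otherwise decelerate in the longitudinal
    interval of the current driving mode and notify the driver."""
    smooth_areas = [label for label, mse in area_mse.items() if mse < first_threshold]
    if smooth_areas:
        return "select target driving path from " + ", ".join(smooth_areas)
    if within_lateral_range:
        return "bypass the concave-convex object"
    return "decelerate per driving mode and notify the driver"

print(vehicle_control_decision({"A": 2.1, "B": 3.0}, 1.0, within_lateral_range=False))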
After bypassing the obstacle or the large pit, if other concave-convex objects remain, the target driving path can continue to be determined for vehicle control.
Then, if the driver exits the vehicle control service, or the constant-speed cruising and lane keeping are turned off, the vehicle control service can be ended. A complete vehicle control service is thereby realized, which can handle complex road surface environments flexibly while ensuring the comfort and safety of the driver.
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present application, which is not described herein.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Fig. 4 is a schematic diagram of a vehicle control apparatus according to an embodiment of the present application. As shown in fig. 4, the vehicle control apparatus includes:
the acquisition module 401 is configured to acquire panoramic view, point cloud data and driving information in the driver assistance system;
A judging module 402 configured to judge a current driving environment based on the panoramic view and the driving information, and determine a driving mode based on the driving environment, the driving mode representing a longitudinal speed control section and a head steering angle section in the corresponding driving environment;
the processing module 403 is configured to process the panoramic view and the point cloud data by using the trained road perception model to obtain a road map containing concave-convex objects;
The calculating module 404 is configured to calculate a mean square error of the height of the concave-convex objects in each preset area in the road map, and if the mean square error of the height is smaller than a first threshold value, obtain a driving path sequence based on the mean square error of the height;
A first vehicle control module 405 configured to determine a target travel path from the travel path sequence based on vehicle control factors and a height mean square error, the vehicle travel being controlled based on the target travel path, the vehicle control factors including at least a vehicle speed, a lateral acceleration, a longitudinal acceleration, and a vehicle head steering angle interval;
The second vehicle control module 406 is configured to control the vehicle to decelerate based on the longitudinal vehicle control speed interval corresponding to the driving mode if the mean square error of the height exceeds the first threshold and the concave-convex object exceeds the lateral vehicle control range.
In some embodiments, the acquisition module 401 is specifically configured to respond to a vehicle self-checking result, and if the vehicle sensor is working normally and the vehicle is in a normal working condition, determine whether the vehicle starts a constant-speed cruising or lane keeping system; and if the vehicle starts the constant-speed cruising or lane keeping system, starting the vehicle control service.
In some embodiments, the acquisition module 401 is specifically configured to acquire a panoramic view through capturing by a camera, acquire point cloud data according to a laser radar, and spatially align the panoramic view and the point cloud data; and obtaining the temperature, the humidity and the wheel speed through a temperature sensor, a humidity sensor and a wheel speed controller.
In some embodiments, the acquisition module 401 is specifically configured to establish, with the center of the rear axle of the vehicle as the origin, a right-handed front-left-up coordinate system with the driving direction as the x axis, and to establish a virtual coordinate point in the right-handed coordinate system; and to perform spatial rotation and translation on the time-synchronized point cloud data and panoramic view based on the virtual coordinate point, respectively, to realize the spatial alignment of the point cloud data and the panoramic view.
In some embodiments, the determining module 402 is specifically configured to fuse the temperature, the humidity, the wheel speed, and the panoramic view to obtain fused data; the fusion data are input into a multi-classification discriminator to judge, a driving environment is determined, and a driving mode is determined based on the driving environment and the vehicle speed, wherein the driving mode at least comprises a non-paved road surface mode, a wet-skid mode, a sunny mode and a fog barrier mode.
In some embodiments, the processing module 403 is specifically configured to perform three-dimensional modeling on the panoramic view by using a trained road perception model, obtain a modeling result including pits and protrusions, constrain the modeling result by using a dense point cloud, and render the constrained modeling result, so as to obtain an initial road map including all concave-convex objects, where the dense point cloud is obtained by performing a super-resolution algorithm calculation based on point cloud data; and carrying out smoothing treatment on the initial road map through a window with a preset size to obtain the road map.
In some embodiments, the processing module 403 is specifically configured to obtain, if there is a target concave-convex object higher than the second threshold, a distance between the target concave-convex object and the vehicle; if the distance is smaller than the preset distance and the vehicle speed is greater than the preset vehicle speed, the vehicle control service is closed, and warning information is sent to the driver.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 5 is a schematic diagram of an electronic device 5 according to an embodiment of the present application. As shown in Fig. 5, the electronic device 5 of this embodiment includes: a processor 501, a memory 502, and a computer program 503 stored in the memory 502 and executable on the processor 501. The steps of the various method embodiments described above are implemented by the processor 501 when executing the computer program 503. Alternatively, the processor 501, when executing the computer program 503, implements the functions of the modules/units in the above-described apparatus embodiments.
The electronic device 5 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or the like. The electronic device 5 may include, but is not limited to, a processor 501 and a memory 502. It will be appreciated by those skilled in the art that fig. 5 is merely an example of the electronic device 5 and is not limiting of the electronic device 5 and may include more or fewer components than shown, or different components.
The processor 501 may be a central processing unit (Central Processing Unit, CPU), or other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
The memory 502 may be an internal storage unit of the electronic device 5, for example, a hard disk or a memory of the electronic device 5. The memory 502 may also be an external storage device of the electronic device 5, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a Secure Digital (SD) card, or a flash card (Flash Card) provided on the electronic device 5. The memory 502 may also include both an internal storage unit and an external storage device of the electronic device 5. The memory 502 is used to store the computer program and other programs and data required by the electronic device.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit.
The integrated modules/units may be stored in a storage medium if implemented in the form of software functional units and sold or used as stand-alone products. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, and the computer program may be stored in a storage medium, where the computer program, when executed by a processor, may implement the steps of the method embodiments described above. The computer program may comprise computer program code, which may be in source code form, object code form, executable file or in some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.