
WO2022033810A1 - Computer-implemented method and computer program product for obtaining an environment scene representation for an automated driving system, computer-implemented method for learning a prediction of environment scenes for an automated driving system, and control device for an automated driving system - Google Patents

Computer-implemented method and computer program product for obtaining an environment scene representation for an automated driving system, computer-implemented method for learning a prediction of environment scenes for an automated driving system, and control device for an automated driving system

Info

Publication number
WO2022033810A1
WO2022033810A1 (PCT/EP2021/070099)
Authority
WO
WIPO (PCT)
Prior art keywords
driving system
information
environment
layer
computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/EP2021/070099
Other languages
German (de)
English (en)
Inventor
Georg Schneider
Nils MURZYN
Vijay PARSI
Firas MUALLA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZF Friedrichshafen AG
Original Assignee
ZF Friedrichshafen AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZF Friedrichshafen AG filed Critical ZF Friedrichshafen AG
Priority to EP21745818.1A (EP4196379A1)
Publication of WO2022033810A1
Anticipated expiration
Legal status: Ceased (current)

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268: Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274: Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38: Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804: Creation or updating of map data
    • G01C21/3807: Creation or updating of map data characterised by the type of data
    • G01C21/3815: Road data
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0088: Control of position, course, altitude or attitude of land, water, air or space vehicles characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours

Definitions

  • Computer-implemented method and computer program product for obtaining an environment scene representation for an automated driving system, computer-implemented method for learning a prediction of environment scenes for an automated driving system, and control device for an automated driving system
  • The invention relates to a computer-implemented method and a computer program product for obtaining an environment scene representation for an automated driving system, a computer-implemented method for learning a prediction of environment scenes for an automated driving system, and a control unit for an automated driving system.
  • The environment is characterized by a large number of explicit, visible signs and markings, such as traffic signs, lane markings, curbs and roadsides, which carry regionally different meanings, rules and actual behavior. It is also characterized by a large number of underlying rules and standards that determine the behavior of the actors in the environment without being visible, for example the requirement to form an emergency corridor when an emergency vehicle approaches from behind.
  • On the one hand, these rules are applied very differently from region to region; on the other hand, they depend on accompanying events, such as the approach of an emergency vehicle during an acute traffic jam in the previous example.
  • All of these explicit, implicit, regional and event-driven rules and items of information must be considered and used for temporal prediction.
  • Occupancy grids, a map-like representation of the static environment and of the road users located in it, are known in the prior art; see, for example, EP 2 771 873 B1. Spatial dependencies can be captured by means of such grid representations.
  • The disadvantage is that additional semantic information is usually not captured or has to be managed separately.
  • The invention is based on the object of enabling improved motion planning for intelligent agents, including automated driving systems.
  • The methods according to claims 1 and 8, the computer program product according to claim 7 and the control unit according to claim 12 each achieve this object.
  • The environment scene representation according to the invention is a hybrid representation.
  • Further processing based on this representation, for example to enable a temporal prediction of all road users over several time steps into the future, becomes faster, more efficient, more powerful, more precise, less error-prone, more robust and more reliable.
  • The advantages of the spatial and the semantic representation are intelligently combined.
  • One aspect of the invention relates to a computer-implemented method for obtaining an environment scene representation for an automated driving system. The method comprises obtaining environment features from real and/or virtual environment sensor data of the driving system and arranging the environment features in the scene representation, which comprises several layers arranged in spatial correspondence, each containing static or dynamic environment features.
  • The static environment features include regional information, position data of the driving system and/or of the environment features, traffic regulation information, traffic indicators and anchor trajectories.
  • The dynamic environment features include semantic information and movement information of road users.
  • The driving system is regulated and/or controlled based on the scene representation.
  • A further aspect of the invention relates to a computer program for obtaining an environment scene representation for an automated driving system.
  • The computer program comprises instructions that cause a computer to carry out a method according to the invention when the program is run on the computer.
  • A further aspect of the invention relates to a computer-implemented method for learning a prediction of environment scenes for an automated driving system.
  • A machine learning algorithm receives the environment scene representations obtained according to a method according to the invention, together with the respective reference predictions, as input data pairs. Based on these input data pairs, the prediction is learned from the environment scene representations in a gradient-based manner. A minimal training-loop sketch is given below.
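The following is a hedged sketch of such gradient-based learning from (scene representation, reference prediction) pairs. The tensor shapes, the toy model and the loss function are illustrative assumptions, not the patent's implementation:

```python
import torch
import torch.nn as nn

LAYERS, H, W = 8, 64, 64   # assumed HSRV size: layers A-H on a 64x64 pixel grid
T_FUTURE = 10              # assumed prediction horizon in time steps

model = nn.Sequential(     # stand-in for the encoder-decoder discussed later
    nn.Conv2d(LAYERS, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * H * W, T_FUTURE * 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One input data pair: environment scene representations and reference predictions.
hsrv = torch.rand(4, LAYERS, H, W)        # batch of 4 scene representations
reference = torch.rand(4, T_FUTURE * 2)   # reference trajectories, (x, y) per step

prediction = model(hsrv)                  # predicted future trajectories
loss = loss_fn(prediction, reference)     # deviation from the reference prediction
optimizer.zero_grad()
loss.backward()                           # gradients drive the weight adjustment
optimizer.step()
```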
  • A further aspect of the invention relates to a control unit for an automated driving system.
  • The control unit includes first interfaces via which it receives environment sensor data from the driving system.
  • The control unit includes a processing unit that determines environment features from the environment sensor data, executes a machine learning algorithm learned according to a method according to the invention, receives predicted environment scenes and, based on the predicted environment scenes, determines regulation and/or control signals for automated operation of the driving system.
  • The control unit includes second interfaces, via which it provides the regulation and/or control signals to actuators for longitudinal and/or lateral guidance of the driving system.
  • Computer-implemented means that the steps of the method are executed by a data processing device, for example a computer, a computing system, a computer network, for example a cloud system, or parts thereof.
  • Automated driving systems include automated vehicles, road vehicles, people movers, robots and drones.
  • Environment features include houses, streets, in particular street geometry and/or condition, signs, lane markings, vegetation, and moving road users such as vehicles, pedestrians and cyclists.
  • Environment sensor data include raw data and/or preprocessed data, for example preprocessed with filters, amplifiers, serializers, compression and/or conversion units, from cameras, radar sensors, lidar sensors, ultrasonic sensors, acoustic sensors, Car2X units and/or real-time/offline maps arranged on the driving system.
  • According to one aspect of the invention, the environment sensor data are real data actually recorded while driving with the driving system.
  • According to a further aspect, the environment sensor data include virtually generated data, for example generated using software-, hardware-, model- and/or vehicle-in-the-loop methods.
  • According to a further aspect, the environment sensor data are real data that have been virtually augmented and/or varied.
  • The environment features are obtained from the environment sensor data using object classifiers, for example artificial neural networks for semantic image segmentation.
  • The environment scene representation layers a scenario into several layers.
  • A real scenario is represented as a hybrid of static and dynamic, and thus semantic, information.
  • The environment scene representation according to the invention is also called Hybrid Scene Representation for Prediction, abbreviated HSRV.
  • The scenario is an image with i pixels in the x-direction and j pixels in the y-direction.
  • The individual layers can likewise be displayed as images and are arranged congruently with one another; for example, the layers lie spatially congruently one on top of the other.
  • The environment scene representation according to the invention can be imagined as a stack of digital photos lying one on top of the other, for example taken from a bird's eye view of an intersection.
  • This stack of images is combined with further layers of partly purely semantic information that is represented, for example, as pure feature vectors. A sketch of this layer stack is given below.
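As an illustration, the congruent image layers can be held in one array while purely semantic information is kept as feature vectors with spatial anchor points. Layer names, sizes and values below are assumptions chosen for the sketch, not the patent's exact encoding:

```python
import numpy as np

H, W = 64, 64                        # j pixels in the y-direction, i pixels in the x-direction
layer_names = ["A_region", "B_map", "C_rules", "D_signs", "E_anchors"]
hsrv_spatial = np.zeros((len(layer_names), H, W), dtype=np.float32)

hsrv_spatial[0, :, :] = 49.0         # e.g. one regional code assigned to every pixel
hsrv_spatial[1, 28:36, :] = 1.0      # e.g. pixels labelled "street" in the map layer

# Partly purely semantic information: feature vectors anchored at a pixel
# position instead of being rasterized into a dense image layer.
semantic_layers = {
    "F_semantic": {(32, 16): np.array([1.0, 0.0, 2.1, 1.8])},  # anchor (row, col) -> vector
}

print(hsrv_spatial.shape)            # (5, 64, 64): the layers are spatially congruent
```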
  • Static environment features are divided into two further categories. Elements that do not change at all, or only after a long period of time, do not change their state in the short term and are referred to as rigid.
  • HSRV also provides for an adaptation of these elements if, for example, the traffic routing changes; however, this adaptation takes place on a different time scale. Road markings are an example of this. In contrast, there are elements that can change their state frequently and are therefore referred to as state-changing. Traffic lights and variable message signs, for example, fall into the latter category.
  • Position data of the driving system and/or of the environment features are captured via map information.
  • A map section is formed by assigning a value to each pixel of the layer of the environment scene representation corresponding to the map information. The values are based on discrete labels of the map, e.g. numeric codes for street, walkway, broken line, double line, etc., as in the rasterization sketch below.
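A hedged sketch of this rasterization step; the label codes and the toy map section are hypothetical, since the text does not fix concrete values here:

```python
import numpy as np

LABELS = {"street": 1, "walkway": 2, "broken_line": 3, "double_line": 4}  # hypothetical codes
H, W = 64, 64
map_layer = np.zeros((H, W), dtype=np.uint8)  # one value per pixel of the map layer

# Toy map section: a horizontal street with a broken centre line and walkways.
map_layer[28:36, :] = LABELS["street"]
map_layer[31:33, ::4] = LABELS["broken_line"]
map_layer[26:28, :] = LABELS["walkway"]
map_layer[36:38, :] = LABELS["walkway"]
```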
  • The right-of-way rules are represented via the traffic regulation information.
  • A line is drawn in the middle of each lane. At intersections, additional lines are drawn that represent all permissible maneuvers.
  • Implicitly regulated information, such as "right before left", is overlaid with the signage. Conflicting rule information is aggregated in this layer to form a consistent rule, so that the rule actually in effect is treated as having priority.
  • Traffic indicators include state-changing and stateful traffic indicators.
  • The category of state-changing traffic indicators summarizes signals that are conveyed to the driver visually and that can change their state several times in the course of a day. Examples of this category are traffic lights, variable message signs on motorways and entry signals at toll booths.
  • These traffic indicators are represented in the spatial context of the local scene representation as a pixel value representing the current state. For reasons of redundancy, such pixel regions are generally not limited to a single pixel but are mapped to a larger number of pixels. The exact size of this extent is mostly learned from data to an optimum.
  • The anchor trajectories combine information from the right-of-way rules and from the state-changing traffic indicators. According to one aspect of the invention, the anchor trajectories determined in this way are reconciled with the rules of the state-changing traffic indicators and prioritized accordingly. According to one aspect of the invention, the layer of anchor trajectories can supplement or replace the layers of traffic indicators and/or traffic regulation information, depending on the timing requirements of the driving system.
  • The computer program instructions include software and/or hardware instructions.
  • The computer program is, for example, loaded into a memory of the control unit according to the invention, or is already loaded in this memory. According to a further aspect of the invention, the computer program according to the invention is executed on hardware and/or software of a cloud facility.
  • The computer program is loaded into the memory, for example, from a computer-readable data carrier or via a data carrier signal.
  • The invention can thus also be implemented as an aftermarket solution.
  • The control unit prepares input signals, processes them by means of an electronic circuit and provides logic and/or power levels as regulation and/or control signals.
  • The control unit according to the invention is scalable from assisted driving through to fully automated/autonomous/driverless driving.
  • According to one aspect of the invention, the control unit receives raw data from sensors and includes an evaluation unit that processes the raw data for the HSRV. According to a further aspect of the invention, the control unit receives preprocessed raw data. According to a further aspect of the invention, the control unit includes an interface to an evaluation unit that processes the raw data for the HSRV.
  • The control unit includes a software and/or hardware level for trajectory planning or high-level control. After this level, the signals are sent to the actuators.
  • The processing unit includes, for example, a programmable electronic circuit.
  • According to one aspect of the invention, the processing unit or the control unit is designed as a system-on-chip.
  • The scene representation includes layers in which:
  • the regional information and/or the weather information is provided in the form of codes, or a machine learning algorithm learns a connection between the region and the driving behavior from global coordinates and driving data of the driving system entered as input,
  • the position of the driving system is determined from a map section at a specific point in time, and the map section is generated for each new time step or updated after a specified number of time steps, with each pixel of the second layer being assigned a value on the map,
  • the traffic regulation information is determined by means of traffic signs detected from the environment sensor data and/or traffic regulations derived from the regional information,
  • the anchor trajectories, which according to one aspect of the invention include lane lines that can be reached by a road user, are prioritized depending on the traffic signs, and
  • the movement information is learned and determined over time steps using a machine learning algorithm and is displayed spatially.
  • Adding the regional information, for example in the form of a country code from a table, improves the prediction quality.
  • Each region is represented by a specific country or region code.
  • The current weather situation is processed via a weather code.
  • This code can also be provided to the machine learning algorithm globally, i.e. not via a layer.
  • The machine learning algorithm thus has the opportunity to learn the real connections between region and/or weather and actual driving behavior.
  • The same regional value is assigned to each pixel in a layer.
  • One option is to learn the connection between region and driving behavior directly via the global coordinates instead of a country code, thus avoiding an expert-based delimitation of regions.
  • Country codes are obtained from the following look-up table:
  • Pixel values for traffic lights are taken from the following look-up table:
  • Street line types are taken, for example, from the following look-up table:
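The look-up tables themselves are not reproduced in this text. The following dictionaries are purely hypothetical placeholders that only illustrate the mechanism of mapping such categories to layer values:

```python
# Hypothetical look-up tables; the concrete codes are assumptions, not the
# values from the (omitted) tables of the source document.
COUNTRY_CODES = {"DE": 49, "FR": 33, "IT": 39}
TRAFFIC_LIGHT_VALUES = {"off": 0, "red": 1, "yellow": 2, "green": 3}
LINE_TYPES = {"broken_line": 3, "double_line": 4, "solid_line": 5}

region_value = COUNTRY_CODES["DE"]   # e.g. the value written into every pixel of layer A
```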
  • Semantic information is bundled into a feature vector.
  • The bundled properties include the vehicle class, for example truck, car, motorcycle, bicycle or pedestrian,
  • the height and width of the objects, and the states of the turn indicators, for example right, left, hazard warning or off.
  • Descriptors describe these properties, i.e. they generate the feature vectors for input into a machine learning algorithm. These descriptors are arranged in the same way as the descriptors of the dynamic information and form the layer of semantically explicit information. A sketch of such a descriptor is given below.
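A minimal sketch of a semantically explicit descriptor, assuming a simple one-hot encoding of the discrete properties; the encoding choices are illustrative, not prescribed by the source:

```python
import numpy as np

CLASSES = ["truck", "car", "motorcycle", "bicycle", "pedestrian"]
BLINKER = ["off", "left", "right", "warning"]

def explicit_descriptor(vehicle_class: str, height_m: float, width_m: float,
                        blinker_state: str) -> np.ndarray:
    """Bundle the explicit semantic properties of one road user into a feature vector."""
    class_onehot = np.eye(len(CLASSES))[CLASSES.index(vehicle_class)]
    blinker_onehot = np.eye(len(BLINKER))[BLINKER.index(blinker_state)]
    return np.concatenate([class_onehot, [height_m, width_m], blinker_onehot])

vec = explicit_descriptor("car", 1.5, 1.8, "left")  # later anchored at the object's pixel
```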
  • Latent feature vectors are calculated using deep artificial neural networks.
  • Object classifiers, which are upstream of the environment scene representation according to the invention, are implemented as deep artificial neural networks.
  • A latent feature vector is generated as an intermediate product during classification.
  • The latent intermediate vectors of all road users are spatially arranged in the manner described above and form the layer of semantically latent information.
  • The semantically explicit layer is supplemented with the semantically latent layer.
  • An advantage of the semantically latent information is its robustness against noise in the discrete class signals.
  • If the discrete classification fluctuates between two classes, such as truck and passenger car, it is difficult to interpret the class information correctly.
  • Since the latent feature vector is a vector of continuous numbers, such fluctuations have little to no effect and allow a more robust interpretation of the object's semantic information. A sketch of extracting such a latent vector is given below.
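A hedged sketch of taking the latent vector as the classifier's intermediate product; the small CNN is a stand-in for the actual upstream object classifier, whose architecture the source does not specify:

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(                   # produces the latent representation
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 8-dimensional latent feature vector
)
head = nn.Linear(8, 5)                      # discrete classes, e.g. 5 road-user types

crop = torch.rand(1, 3, 32, 32)             # assumed input: image crop of one road user
latent = backbone(crop)                     # continuous vector, robust to class flicker
logits = head(latent)                       # the discrete classification itself
```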
  • The dynamic part describes the moving road users in the scene.
  • The coordinates of the road users over a certain period of time are used to generate a descriptor for this dynamic movement behavior.
  • Driving behavior can also be contained latently.
  • On the one hand, the calculation of this descriptor is learned by means of a deep artificial neural network, for example a network comprising long short-term memory layers, abbreviated LSTM.
  • With LSTMs, after a settling phase, an iterative adjustment of the descriptor is possible simply by entering the coordinates of the next time step.
  • On the other hand, parameters of a vehicle dynamics or movement dynamics model are used, for example by means of a Kalman filter.
  • The descriptors of all road users are spatially arranged based on their last coordinate and form the layer of movement information. A sketch of the LSTM variant is given below.
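A minimal sketch of the LSTM variant, with assumed dimensions: the coordinate history of one road user is encoded into a descriptor, which can then be updated iteratively with only the next time step's coordinates:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=2, hidden_size=16, batch_first=True)

coords = torch.rand(1, 8, 2)        # 8 past time steps of (x, y) for one road user
_, (h_n, c_n) = lstm(coords)        # settling phase over the observed history
descriptor = h_n[-1, 0]             # 16-dim descriptor of the dynamic movement behavior

# Iterative adjustment: feed only the coordinates of the next time step.
next_step = torch.rand(1, 1, 2)
_, (h_n, c_n) = lstm(next_step, (h_n, c_n))
descriptor = h_n[-1, 0]             # updated descriptor, anchored at the last coordinate
```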
  • The environment features are represented in pixels of the layers and/or via feature vectors with spatial anchor points.
  • The feature vectors have a predetermined spatial anchor point.
  • The environment features can be interpreted as color values of the pixels.
  • A spatial position of the environment features is recorded in each layer via the corresponding position on a map. This is advantageous for a spatially corresponding arrangement of the environment features.
  • Spatial coordinates of the driving system and/or of the environment features are represented in pixels, with one pixel corresponding to the same route length in each of the layers; see the mapping sketch below.
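A small sketch of this coordinate convention, under the assumption of a fixed metric resolution; the chosen resolution and origin are illustrative:

```python
METERS_PER_PIXEL = 0.5    # assumed: each pixel spans the same route length of 0.5 m
ORIGIN = (-16.0, -16.0)   # assumed world coordinates of pixel (row 0, col 0)

def world_to_pixel(x_m: float, y_m: float) -> tuple[int, int]:
    """Map metric world coordinates to pixel indices, identically for all layers."""
    col = int((x_m - ORIGIN[0]) / METERS_PER_PIXEL)
    row = int((y_m - ORIGIN[1]) / METERS_PER_PIXEL)
    return row, col

print(world_to_pixel(0.0, 0.0))  # -> (32, 32), the centre of a 64x64 grid
```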
  • A plurality of environment scene representations is provided, which depict the static and dynamic environment features, including the road users, over a variable number of x time steps.
  • The machine learning algorithm is trained, validated and tested using these environment scene representations. During validation, the meta-parameters involved in the learning process are adjusted appropriately. During the test phase, the prediction of the learned machine learning algorithm is evaluated.
  • The environment scene representation is coupled to the neural structures.
  • The advantage of the environment scene representation according to the invention is that a very large and very flexible amount of information is provided, which the machine learning algorithm can access. During the learning phase, in which the variable parameters/weights of the machine learning algorithm are adjusted, the algorithm converges on the specific information that is best suited to performing the prediction task.
  • The machine learning algorithm comprises an encoder-decoder structure in which:
  • a convolutional network learns interactions between the layers of the environment scene representation, interactions between road users and/or interactions between road users and environment features, and outputs them in the form of an output volume whose height and width equal the size of the environment scene representation; for each road user, a column is determined from the output volume based on the pixel-discrete position of that road user and is concatenated with a vector that describes the dynamic behavior, and
  • the composite feature vectors obtained from the concatenation are decoded into predicted trajectories of the driving system and/or of the road users; see the fusion sketch after this list.
  • According to one aspect of the invention, the encoders and/or decoders are based on long short-term memory technology.
  • Noise vectors are concatenated during generative adversarial learning, and different noise vectors generate different future trajectories for identical past trajectories. This captures the multimodal uncertainty of predictions.
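A hedged sketch of the described fusion step, with assumed dimensions: the output-volume column at a road user's pixel position is concatenated with the dynamic-behavior vector and a noise vector, and an LSTM decodes the result into a future trajectory:

```python
import torch
import torch.nn as nn

C, H, W = 32, 64, 64
output_volume = torch.rand(C, H, W)          # from the convolutional encoder (assumed size)

dyn_vec = torch.rand(16)                     # dynamic-behavior descriptor of one road user
noise = torch.randn(8)                       # noise vector for multimodal futures
row, col = 20, 41                            # pixel-discrete position of the road user

column = output_volume[:, row, col]          # the column belonging to this road user
fused = torch.cat([column, dyn_vec, noise])  # composite feature vector

decoder = nn.LSTM(input_size=fused.numel(), hidden_size=2, batch_first=True)
steps = fused.view(1, 1, -1).repeat(1, 10, 1)   # decode 10 future time steps
trajectory, _ = decoder(steps)               # (1, 10, 2): predicted (x, y) per step
```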
  • The machine learning algorithm is a multi-agent tensor fusion encoder-decoder.
  • A multi-agent tensor fusion encoder-decoder for static environment scenes is disclosed in arXiv:1904.04776v2 [cs.CV].
  • The invention provides a multi-agent tensor fusion algorithm for the environment scene representation according to the invention, which includes dynamic environment features in addition to static environment features.
  • The multi-agent tensor fusion algorithm according to the invention does not receive static environment scenes as input, but rather the HSRV containing dynamic environment features.
  • An encoder-decoder LSTM network is particularly well suited to solving sequence-based problems.
  • The noise vectors are generated by a generative adversarial network, abbreviated GAN, for example by the GAN disclosed in arXiv:1904.04776v2 [cs.CV] under point 3.3.
  • FIG. 1 shows a representation of an environment scene representation according to the invention.
  • FIG. 4 shows a representation of the method according to the invention for obtaining the environment scene representation from FIG. 1.
  • FIG. 1 shows an example of an environment scene representation HSRV according to the invention.
  • It shows a car as an example of a driving system R at a junction.
  • At the junction there is a pedestrian W.
  • The right of way is controlled by a traffic light L.
  • The traffic light L shows the car R a green phase and the pedestrian W a red one.
  • Above the bird's-eye-view representation of this situation, the various layers that are essential for predicting the trajectories of the road users are shown.
  • Layer A shows the regional information.
  • Layer B uses the map information, layer C the traffic regulation information.
  • The state-changing traffic indicators and the anchor trajectories are contained in layer D and layer E.
  • Layer F describes the semantic characteristics of the individual road users.
  • Layer G and layer H contain latent information, where the information in layer G is based on properties that describe the road user and in layer H on the dynamic movement behavior.
  • Layers A to E are static layers and describe static environment features stat of the environment scene E.
  • Layers A to C describe rigid static environment features stat_1, and layers D and E state-changing static environment features stat_2.
  • Layers F to H are dynamic layers and describe dynamic environment features dyn of the environment scene E.
  • FIG. 2 shows an exemplary architecture of a deep artificial neural network DNN, which receives the environment scene representation HSRV as input.
  • The environment scene representation HSRV is input into the network DNN as a feature volume.
  • The network DNN includes a convolutional encoder-decoder structure, which uses multi-agent tensor fusion to model the interactions between the various layers A-H and, due to its filter-mask-based architecture, the interactions between elements of the environment scene contained in the environment scene representation HSRV.
  • The network DNN outputs a feature volume whose height and width correspond to those of the input volume.
  • The input volume is the environment scene representation HSRV.
  • For each road user, a column is now selected from the output volume V and concatenated with the vector that describes the dynamic behavior and with a noise vector.
  • The column is determined based on the quantized position of the road user.
  • The assembled feature vectors are each fed into an LSTM decoder, which then generates the future trajectory for each road user. Since different noise vectors are concatenated during training according to the GAN setup, different noise vectors for identical past trajectories can be used at inference to generate different future trajectories; see the sampling sketch below.
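A small sketch of this multimodal inference step, with assumed dimensions: the encoding of the identical past is held fixed while a fresh noise vector is drawn per sample, yielding several future hypotheses:

```python
import torch
import torch.nn as nn

decoder = nn.LSTM(input_size=24, hidden_size=2, batch_first=True)
past_features = torch.rand(16)           # fixed: encodes the identical past trajectory

futures = []
for _ in range(3):                       # three hypotheses for the same past
    noise = torch.randn(8)               # a different noise vector per sample
    fused = torch.cat([past_features, noise]).view(1, 1, -1).repeat(1, 10, 1)
    trajectory, _ = decoder(fused)       # one possible future trajectory, (x, y) per step
    futures.append(trajectory)
```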
  • The control unit ECU shown in FIG. 3 receives environment sensor data U via first interfaces INT1, for example from one or more cameras of the driving system R.
  • A processing unit P, for example a CPU, GPU or FPGA, executes object classifiers and determines the static and/or dynamic environment features stat and dyn from the environment sensor data U.
  • The processing unit P processes the environment features using a machine learning algorithm learned according to the invention and obtains predicted environment scenes. Based on the predicted environment scenes, the processing unit P determines regulation and/or control signals for automated operation of the driving system R.
  • Via second interfaces INT2, the control unit ECU provides the regulation and/or control signals to actuators for longitudinal and/or lateral guidance of the driving system R.
  • In step V1, the environment features stat and dyn are obtained.
  • In step V2, the layers A-H are generated with the respective environment features stat and dyn.
  • In step V3, the driving system R is regulated and/or controlled based on the scene representation HSRV.


Abstract

The invention relates to a computer-implemented method for obtaining an environment scene representation (HSRV) for an automated driving system (R), comprising the following steps: obtaining environment features from real and/or virtual environment sensor data of the driving system (R) (V1); and arranging the environment features in the scene representation (HSRV) (V2), which comprises a plurality of layers (A-H) that are arranged in spatial correspondence and each contain static (stat) or dynamic (dyn) environment features, wherein the static environment features (stat) include regional information, position data of the driving system and/or of the environment features, traffic regulation information, traffic indicators and anchor trajectories, the dynamic environment features (dyn) include semantic information and movement information of the road users, and the driving system (R) is regulated and/or controlled (V3) based on the scene representation (HSRV).
PCT/EP2021/070099 2020-08-14 2021-07-19 Computer-implemented method and computer program product for obtaining an environment scene representation for an automated driving system, computer-implemented method for learning a prediction of environment scenes for an automated driving system, and control device for an automated driving system Ceased WO2022033810A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21745818.1A EP4196379A1 (fr) 2020-08-14 2021-07-19 Computer-implemented method and computer program product for obtaining an environment scene representation for an automated driving system, computer-implemented method for learning a prediction of environment scenes for an automated driving system, and control device for an automated driving system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102020210379.8 2020-08-14
DE102020210379.8A DE102020210379A1 (de) 2020-08-14 Computer-implemented method and computer program product for obtaining an environment scene representation for an automated driving system, computer-implemented method for learning a prediction of environment scenes for an automated driving system, and control unit for an automated driving system

Publications (1)

Publication Number Publication Date
WO2022033810A1 2022-02-17

Family

ID=77042979

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2021/070099 Ceased WO2022033810A1 (fr) 2020-08-14 2021-07-19 Computer-implemented method and computer program product for obtaining an environment scene representation for an automated driving system, computer-implemented method for learning a prediction of environment scenes for an automated driving system, and control device for an automated driving system

Country Status (3)

Country Link
EP (1) EP4196379A1 (fr)
DE (1) DE102020210379A1 (fr)
WO (1) WO2022033810A1 (fr)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102021203440A1 (de) 2021-04-07 2022-10-13 Zf Friedrichshafen Ag Computer-implemented method, computer program and arrangement for predicting and planning trajectories
DE102022201127B4 (de) 2022-02-03 2024-12-05 Zf Friedrichshafen Ag Method and computer program for characterizing future trajectories of road users
DE102022131178B3 (de) 2022-11-24 2024-02-08 Cariad Se Method for automated driving of a vehicle, method for generating a machine learning model capable thereof, and processor circuit and vehicle


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2771873B1 (fr) 2011-10-28 2018-04-11 Conti Temic microelectronic GmbH Grid-based environment model for a vehicle
US20200110416A1 (en) * 2018-10-04 2020-04-09 Zoox, Inc. Trajectory prediction on top-down scenes

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114926788A (zh) * 2022-03-11 2022-08-19 Wuhan University of Technology Method, system and device for multi-modal automatic extraction of traffic scene information
CN115081187A (zh) * 2022-05-20 2022-09-20 South China University of Technology Hierarchical feature modeling method and system for driverless environments, and storage medium
CN114906153A (zh) * 2022-05-30 2022-08-16 China FAW Co., Ltd. Obstacle trajectory prediction method and device, storage medium and processor
CN115468778A (zh) * 2022-09-14 2022-12-13 Beijing Baidu Netcom Science and Technology Co., Ltd. Vehicle testing method and device, electronic device and storage medium
CN115468778B (zh) * 2022-09-14 2023-08-15 Beijing Baidu Netcom Science and Technology Co., Ltd. Vehicle testing method and device, electronic device and storage medium
CN115662167A (zh) * 2022-10-14 2023-01-31 Beijing Baidu Netcom Science and Technology Co., Ltd. Automated driving map construction method, automated driving method and related device
CN115662167B (zh) * 2022-10-14 2023-11-24 Beijing Baidu Netcom Science and Technology Co., Ltd. Automated driving map construction method, automated driving method and related device
EP4527708A1 (fr) * 2023-09-20 2025-03-26 Anhui NIO Autonomous Driving Technology Co., Ltd. Method for obtaining a vehicle trajectory, control apparatus, readable storage medium and vehicle
CN118514712A (zh) * 2024-05-13 2024-08-20 Jiangsu University Scene-level joint trajectory prediction method, system and storage medium based on multi-level interaction graphs

Also Published As

Publication number Publication date
DE102020210379A1 (de) 2022-02-17
EP4196379A1 (fr) 2023-06-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21745818

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021745818

Country of ref document: EP

Effective date: 20230314