
WO2021060599A1 - Vehicle and method for detecting lanes - Google Patents

Vehicle and method for detecting lanes

Info

Publication number
WO2021060599A1
Authority
WO
WIPO (PCT)
Prior art keywords
lane
lane line
information
models
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2019/013501
Other languages
English (en)
Korean (ko)
Inventor
정동하
민 트루엉홍
발드빈 존슨토르스테인
박재일
김영성
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seoul Robotics Co Ltd
Original Assignee
Seoul Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seoul Robotics Co Ltd filed Critical Seoul Robotics Co Ltd
Publication of WO2021060599A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256 Lane; Road marking

Definitions

  • The present disclosure relates to a vehicle and a method for detecting lanes.
  • A method of detecting lanes using camera images may introduce errors when the detected information is converted into three-dimensional form for use in actual driving of a vehicle.
  • A method of detecting lanes using a pre-built high-definition map has the problems that storing the large amount of map data in the vehicle consumes resources and that the map must be continuously updated to reflect conditions that change in real time.
  • Accordingly, provided are a vehicle and a method for detecting lanes in real time based on lane line information continuously acquired in the driving direction of the vehicle, and a computer program stored in a computer-readable storage medium.
  • According to a first aspect, a vehicle includes: a sensor unit configured to sense a three-dimensional space using at least one sensor and output spatial information; a memory storing computer-executable instructions; and a processor configured to, by executing the computer-executable instructions, obtain road area information from the obtained spatial information, extract road marking information based on an intensity value from the obtained road area information, obtain lane line information by clustering the extracted road marking information and then performing filtering based on lane line width information, generate lane line models based on the lane line information continuously obtained in the driving direction of the vehicle, and generate a lane comprising two adjacent lane lines among the lane lines based on the generated lane line models.
  • A method of detecting a lane according to a second aspect includes: obtaining road area information from spatial information obtained by sensing a three-dimensional space; extracting road marking information based on an intensity value from the obtained road area information; obtaining lane line information by clustering the extracted road marking information and then performing filtering based on lane line width information; generating lane line models based on the lane line information continuously acquired in the driving direction of the vehicle; and generating a lane including two adjacent lane lines among the lane lines based on the generated lane line models.
  • According to a third aspect, a computer program stored in a computer-readable storage medium causes a vehicle to perform: obtaining road area information from spatial information obtained by sensing a three-dimensional space; extracting road marking information based on an intensity value from the obtained road area information; obtaining lane line information by clustering the extracted road marking information and then performing filtering based on lane line width information; generating lane line models based on the lane line information continuously acquired in the driving direction of the vehicle; and generating a lane including two adjacent lane lines among the lane lines based on the generated lane line models.
  • FIG. 1 is a block diagram showing a configuration of a vehicle according to an exemplary embodiment.
  • FIG. 2 is a flowchart illustrating a method of detecting a lane according to an exemplary embodiment.
  • FIG. 3 is a diagram illustrating an example of obtaining road area information from spatial information according to an exemplary embodiment.
  • FIG. 4 is a diagram illustrating an example of extracting road marking information based on an intensity value from road area information according to an embodiment.
  • FIG. 5 is a diagram illustrating a process of obtaining lane line information by performing filtering based on lane line width information after clustering road marking information according to an exemplary embodiment.
  • FIG. 6 is a detailed flowchart illustrating a process of generating lane line models based on lane line information continuously acquired in a driving direction of a vehicle according to an exemplary embodiment.
  • FIGS. 7A, 7B, and 7C are diagrams for explaining the possibility of estimating a direction according to a distribution distance of lane line information and the possibility of estimating a direction according to the number of distributed points of lane line information.
  • FIG. 8 is a diagram illustrating a process of updating an existing lane line model or adding a new lane line model based on lane line information continuously acquired in a driving direction of a vehicle according to an exemplary embodiment.
  • FIG. 9 is a diagram for explaining a reason why lane line models are fitted to predetermined curve models using direction information of an entire road and lane line information of each lane line model.
  • FIG. 10 is a detailed flowchart illustrating a process of generating a lane including two adjacent lane lines among lane lines based on lane line models generated according to an exemplary embodiment.
  • FIG. 11 is a diagram illustrating a process of generating lane models including two adjacent lane lines and merging lane models constituting the same lane among the generated lane models into one lane model, according to an exemplary embodiment.
  • The present embodiments relate to a vehicle and a method for detecting lanes, and to a computer program stored in a computer-readable storage medium; detailed explanations of matters widely known to those of ordinary skill in the art to which the following embodiments belong are omitted.
  • FIG. 1 is a block diagram showing the configuration of a vehicle 100 according to an embodiment.
  • the vehicle 100 may be an autonomous vehicle or a vehicle equipped with a driving assistance system.
  • the vehicle 100 may acquire spatial information on a surrounding 3D space using a sensor such as a lidar device to detect a lane on a road where the vehicle 100 is driven.
  • The vehicle 100 may include a memory 110, a processor 120, a communication interface device 130, a sensor unit 140, and a user interface device 150.
  • a processor 120 may perform arithmetic and logic operations.
  • The communication interface device 130 may support a wireless connection such as Wi-Fi.
  • Those of ordinary skill in the art related to the present embodiment will appreciate that other general-purpose components may be further included in addition to the components illustrated in FIG. 1.
  • the memory 110 may store software and/or programs.
  • the memory 110 may store an application, a program such as an application programming interface (API), and various types of data.
  • the memory 110 may store instructions executable by the processor 120.
  • the processor 120 may access and use data stored in the memory 110 or may store new data in the memory 110.
  • the processor 120 may execute instructions stored in the memory 110.
  • the processor 120 may execute a computer program installed in the vehicle 100.
  • the processor 120 may install a computer program or application received from the outside in the memory 110.
  • the processor 120 may include at least one processing module.
  • the processing module may be a dedicated processing module for executing a predetermined program.
  • The processor 120 may include various types of processing modules, such as processing modules that execute vehicle control programs for autonomous driving, for example an Advanced Driver Assistance System (ADAS), or processing modules that execute a three-dimensional space tracking program, each in the form of a separate dedicated chip.
  • the processor 120 may control other components included in the vehicle 100 to perform an operation corresponding to an execution result such as a command or a computer program.
  • the communication interface device 130 may perform wireless communication with another device or network.
  • the communication interface device 130 may include a communication module supporting at least one of various wireless communication methods.
  • For example, a communication module that performs short-range communication such as Wi-Fi (Wireless Fidelity), various types of mobile communication such as 3G, 4G, and 5G, or ultra-wideband communication may be included.
  • the communication interface device 130 may be connected to a device located outside the vehicle 100 to transmit and receive signals or data.
  • the vehicle 100 may communicate with other vehicles through the communication interface device 130 or may be connected to a server that manages an area in which the vehicle 100 is located.
  • the sensor unit 140 may include at least one sensor for sensing a 3D space.
  • the sensor unit 140 may detect an object located within a detection range, and obtain data capable of generating coordinates of the detected object in a three-dimensional space.
  • the sensor unit 140 may acquire shape data or distance data for an object located within the sensing range.
  • the sensor unit 140 may include at least one of various types of sensors such as a light detection and ranging sensor, a radar sensor, a camera sensor, an infrared image sensor, and an ultrasonic sensor.
  • For example, the sensor unit 140 may include at least one 3D lidar sensor to acquire data on the space in a 360-degree range, and may further include at least one of a radar sensor and an ultrasonic sensor to acquire data on blind areas that the 3D lidar sensor cannot detect or on the proximity space within a predetermined distance from the vehicle 100.
  • the user interface device 150 may receive a user input or the like from a user.
  • the user interface device 150 may display information such as an execution result of a computer program in the vehicle 100, a processing result corresponding to a user input, and a state of the vehicle 100.
  • the user interface device 150 may include hardware units for receiving inputs or providing outputs, and may include a dedicated software module for driving them.
  • the user interface device 150 may be a touch screen, but is not limited thereto.
  • The vehicle 100 may further include components required for autonomous driving, such as a Global Positioning System (GPS) receiver and an Inertial Measurement Unit (IMU).
  • GPS is a satellite navigation system that calculates the current position of the vehicle 100 by receiving signals from GPS satellites.
  • the IMU is a device that measures the speed, direction, gravity, and acceleration of the vehicle 100.
  • the processor 120 may acquire information related to the movement and posture of the vehicle 100 using GPS and IMU.
  • the processor 120 may acquire other information related to the control of the vehicle 100 from another sensor or memory provided in the vehicle 100.
  • The names of the components of the vehicle 100 described above may vary, and the vehicle 100 may be configured to include at least one of the aforementioned components; some components may be omitted and other components may be added.
  • An autonomous vehicle, or a vehicle 100 equipped with a driving assistance system, may use a lidar sensor to obtain spatial information on the surrounding three-dimensional space and thereby detect lanes on the road on which the vehicle 100 is driving.
  • a method of detecting a lane in real time based on lane line information continuously acquired in the driving direction of the vehicle will be described in detail.
  • By executing the computer-executable instructions, the processor 120 acquires road area information from spatial information on a three-dimensional space sensed using at least one sensor, extracts road marking information based on an intensity value from the obtained road area information, and may obtain lane line information by clustering the extracted road marking information and then performing filtering based on lane line width information.
  • The road marking information may include traffic-related markings displayed on a road, such as lane lines (the boundaries of lanes), speed limit markings, crosswalks, and driving direction indicator lines.
  • By executing the computer-executable instructions, the processor 120 may generate lane line models based on the lane line information continuously acquired in the driving direction of the vehicle 100.
  • By executing the computer-executable instructions, the processor 120 may update an existing lane line model or add a new lane line model based on the continuity of the obtained lane line information, determine whether direction estimation is possible based on the distribution distance and distribution count of the obtained lane line information, and determine the direction of the existing lane line model or the new lane line model.
  • By executing the computer-executable instructions, the processor 120 may perform lane line model filtering based on lane width information on the existing lane line models and the new lane line models, and may determine the direction of the lane line models on which the lane line model filtering has been performed.
  • By executing the computer-executable instructions, the processor 120 may generate the lane line models by fitting them to predetermined curve models using the direction information of the entire road and the lane line information of each lane line model.
  • By executing the computer-executable instructions, the processor 120 may generate a lane including two adjacent lane lines among the lane lines based on the generated lane line models.
  • By executing the computer-executable instructions, the processor 120 may generate lane models consisting of two adjacent lane lines by comparing the distance between two lane lines with a predetermined width information value based on the generated lane line models, and may merge lane models constituting the same lane among the generated lane models into one lane model.
  • By executing the computer-executable instructions, the processor 120 may perform extrapolation on the lane line models corresponding to both lane lines of the generated lane models, and may determine whether the lane models extended by the extrapolation constitute the same lane.
  • By executing the computer-executable instructions, the processor 120 may fit the lane line models corresponding to both lane lines of the merged lane model to a curve model.
  • FIG. 2 is a flowchart illustrating a method of detecting a lane according to an exemplary embodiment.
  • the vehicle 100 may acquire road area information from spatial information obtained by sensing a 3D space.
  • For example, the vehicle 100 may sense a three-dimensional space using a lidar sensor and obtain spatial information in the form of a point cloud. Since the acquired spatial information includes not only information on the road area on which the vehicle 100 moves but also information on other vehicles, buildings, and pedestrians, the vehicle 100 can extract the road area information from the acquired spatial information.
  • The vehicle 100 may sample the point-cloud spatial information acquired using the lidar sensor into uniform units of a predetermined size through a voxelization process, and may extract road area information based on the height values of the sampled voxels.
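  • As an illustrative sketch of this step, the voxel downsampling and height-based road extraction might look as follows in Python with NumPy; the voxel size, ground height, and tolerance are assumed values for illustration, not parameters from this disclosure.

```python
import numpy as np

def extract_road_points(points, voxel_size=0.1, ground_z=0.0, z_tol=0.3):
    """Voxel-downsample a point cloud, then keep near-ground points.

    points: (N, 4) array of x, y, z, intensity. voxel_size, ground_z,
    and z_tol are illustrative assumptions, not values from the patent.
    """
    # Voxelization: map each point to a voxel index and keep one point per voxel.
    voxel_ids = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    _, keep = np.unique(voxel_ids, axis=0, return_index=True)
    sampled = points[keep]

    # Road area: points whose height lies near the assumed ground level.
    near_ground = np.abs(sampled[:, 2] - ground_z) < z_tol
    return sampled[near_ground]
```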
  • FIG. 3 is a diagram illustrating an example of obtaining road area information from spatial information according to an exemplary embodiment.
  • Referring to FIG. 3, a point cloud corresponding to road area information at the ground height where lanes exist is extracted from the point-cloud spatial information.
  • the vehicle 100 may extract road marking information based on an intensity value from the obtained road area information.
  • For example, the vehicle 100 may extract road marking information, including lane lines, from the point cloud corresponding to the road area information based on the intensity value of each point. Since traffic-related markings displayed on the road, such as lane lines, speed limit markings, crosswalks, and driving direction indicator lines, are made of highly reflective paints and materials, road marking information can be extracted by selecting points with high intensity values.
  • FIG. 4 is a diagram illustrating an example of extracting road marking information based on an intensity value from road area information according to an embodiment.
  • In FIG. 4, the left side shows the distribution of the point cloud corresponding to the road area information, and the right side highlights the points having high intensity values within that point cloud.
  • points having a high intensity value may correspond to lane lines.
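  • A minimal sketch of this intensity-based extraction, assuming intensity is stored as a fourth column and using a fixed illustrative threshold (a real system might instead choose the threshold adaptively, e.g. from an intensity histogram):

```python
import numpy as np

def extract_road_markings(road_points, intensity_threshold=0.45):
    """Keep highly reflective points (lane paint, crosswalks, arrows).

    The fixed threshold is an assumption; sensor-specific calibration
    would normally determine it.
    """
    return road_points[road_points[:, 3] > intensity_threshold]

# Usage: markings = extract_road_markings(extract_road_points(cloud))
```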
  • the vehicle 100 may obtain lane line information by clustering the extracted road marking information and then performing filtering based on the width information of the lane line.
  • For example, the vehicle 100 clusters nearby points of the extracted road marking information and performs filtering based on lane line width information, thereby removing clustered point groups whose width cannot be regarded as the width of a typical lane line.
  • The vehicle 100 may cluster the points corresponding to the road marking information by adjacency, and obtain the width information of each cluster as the difference between the maximum and minimum coordinates of the points in the cluster.
  • The vehicle 100 may move a window that checks the width of the points in each cluster along the vehicle driving direction over the area corresponding to the cluster, and remove point groups whose width exceeds a predetermined value, thereby acquiring the point cloud corresponding to lane lines.
  • FIG. 5 is a diagram illustrating a process of obtaining lane line information by performing filtering based on lane line width information after clustering road marking information according to an exemplary embodiment.
  • Referring to FIG. 5, road marking information having high intensity values is extracted from the point cloud corresponding to the road area information, the points corresponding to the road marking information are clustered, and the width of each cluster is checked through a window search.
  • A cluster having a width of W1 falls within the range of typical lane line widths and can be regarded as lane line information.
  • A cluster having a width of W2 corresponds to a crosswalk; because it is outside the range of typical lane line widths, it can be filtered out.
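  • The clustering and sliding-window width check might be sketched as follows; DBSCAN is used here as one plausible clustering choice, and every numeric threshold is an assumption for illustration:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def filter_by_lane_line_width(markings, eps=0.5, min_samples=5,
                              window_len=1.0, max_width=0.35):
    """Cluster marking points, then drop clusters wider than a lane line.

    A window of length window_len slides along the driving direction
    (taken as the x axis here); within each window the lateral (y)
    extent is compared to max_width.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(markings[:, :2])
    kept = []
    for label in set(labels) - {-1}:              # -1 marks DBSCAN noise
        cluster = markings[labels == label]
        x_lo, x_hi = cluster[:, 0].min(), cluster[:, 0].max()
        too_wide = False
        for x0 in np.arange(x_lo, x_hi + window_len, window_len):
            in_win = cluster[(cluster[:, 0] >= x0) & (cluster[:, 0] < x0 + window_len)]
            # A crosswalk-like cluster exceeds any plausible lane line width.
            if len(in_win) and np.ptp(in_win[:, 1]) > max_width:
                too_wide = True
                break
        if not too_wide:
            kept.append(cluster)
    return kept
```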
  • The lane line information obtained in this way may still include markings that are not lane lines, such as driving direction indications for a left turn or going straight; for this reason, the filtering process is performed once more during the generation of lane line models based on the lane line information.
  • the vehicle 100 may generate lane line models based on lane line information continuously acquired in the driving direction of the vehicle 100.
  • a process of generating lane line models will be described in detail with reference to FIGS. 6 to 9.
  • FIG. 6 is a detailed flowchart illustrating a process of generating lane line models based on lane line information continuously acquired in a driving direction of a vehicle according to an exemplary embodiment.
  • the vehicle 100 may update an existing lane line model or add a new lane line model based on the continuity of the acquired lane line information.
  • For example, the vehicle 100 may check whether an existing lane line model lies within a predetermined proximity of the obtained lane line information, and may either extend an existing lane line model or add a new lane line model.
  • Points close to an existing lane line may be absorbed to extend that lane line, while points far from any existing lane line may be used as the starting point of a new lane line.
  • Each lane line model may hold the coordinates of its absorbed points, the coordinates of points stored for updating its directionality, and the current direction value of the model.
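  • A simplified sketch of this absorb-or-spawn update, keeping each model's state in a dictionary that mirrors the fields just described (the attachment distance is an assumed threshold):

```python
import numpy as np

def update_lane_line_models(models, new_points, attach_dist=1.0):
    """Absorb points near an existing model; otherwise start a new model.

    Each model is a dict with 'points' (absorbed), 'pending' (stored for
    the next direction update), and 'direction'.
    """
    for p in new_points:
        best, best_d = None, attach_dist
        for m in models:
            d = np.linalg.norm(np.asarray(m["points"][-1][:2]) - p[:2])
            if d < best_d:                        # closest model tip within range
                best, best_d = m, d
        if best is not None:
            best["points"].append(p)              # extend the existing lane line
            best["pending"].append(p)
        else:                                     # far from every model: new line
            models.append({"points": [p], "pending": [p], "direction": None})
    return models
```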
  • The vehicle 100 may perform lane line model filtering based on lane width information on the existing lane line models and the new lane line models. For example, part or all of a driving direction indication, such as a left-turn or straight-ahead arrow, may have a width similar to that of a lane line, so it may survive even the filtering based on lane line width information described in step 230 of FIG. 2. Accordingly, in order to remove road marking information other than lane lines, lane line model filtering based on lane width information may be performed.
  • a lane line model with low reliability may be removed.
  • the reliability of the lane line model may be calculated based on the number of points included in the lane line model and the consistency of the lane line model.
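  • The disclosure does not specify how the two factors are combined; one hypothetical scoring function, measuring consistency as the lateral spread of points around the model direction, could be:

```python
import numpy as np

def model_reliability(model, w_count=0.5, w_consistency=0.5):
    """Hypothetical reliability score from point count and consistency.

    The weights, the saturation point, and the lateral-spread
    consistency measure are all assumptions.
    """
    pts = np.asarray(model["points"])[:, :2]
    if model["direction"] is None or len(pts) < 2:
        return 0.0
    d = np.asarray(model["direction"], dtype=float)
    normal = np.array([-d[1], d[0]])              # perpendicular to the line
    lateral = (pts - pts.mean(axis=0)) @ normal   # offsets from the model axis
    consistency = 1.0 / (1.0 + np.std(lateral))   # tight clusters score high
    count_score = min(len(pts) / 50.0, 1.0)       # saturates at 50 points
    return w_count * count_score + w_consistency * consistency
```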
  • the vehicle 100 may determine whether reliable direction estimation of each lane line model is possible based on the distribution distance and the number of distributions of the obtained lane line information.
  • For example, the direction of a lane line model can be calculated by fitting a straight line to the coordinates of the points stored for updating directionality; if the calculated uncertainty of the fitted line is less than a predetermined uncertainty value, the direction of the lane line model can be updated. This prevents a wrong direction of the lane line model from being calculated due to the distribution and noise of the discrete points.
  • When the direction is updated, the coordinates of the points stored for the direction update are deleted, and the coordinates of newly included points may be stored instead.
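  • One way to realize this confidence-gated update is to take the principal axis of the pending points and treat the ratio of the minor to the major singular value as the uncertainty of the fitted line; this reading, and both thresholds, are assumptions:

```python
import numpy as np

def try_update_direction(model, max_uncertainty=0.1, min_points=3):
    """Update the model direction only when the line fit is confident.

    The singular-value ratio is near zero for collinear points and
    grows as the points scatter, so it acts as a fit uncertainty.
    """
    pts = np.asarray(model["pending"], dtype=float)[:, :2]
    if len(pts) < min_points:
        return False                              # too few points to estimate
    centered = pts - pts.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    uncertainty = s[1] / (s[0] + 1e-9)
    if uncertainty < max_uncertainty:
        model["direction"] = vt[0]                # unit vector of the fitted line
        model["pending"].clear()                  # reset stored points, as described above
        return True
    return False
```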
  • FIGS. 7A, 7B, and 7C are diagrams for explaining the possibility of estimating a direction according to a distribution distance of lane line information and the possibility of estimating a direction according to the number of distributed points of lane line information.
  • the vehicle 100 may determine the direction of the lane line model based on the obtained lane line information.
  • In step 650, if reliable direction estimation is not possible, the vehicle 100 may determine the direction of the lane line model based on the direction information of the entire road.
  • In step 660, the vehicle 100 determines whether the window scan over the entire lane line information of the region of interest (ROI) is complete; if not, the vehicle 100 may continue the lane line model generation process based on lane line information obtained by performing a further window scan.
  • FIG. 8 is a diagram illustrating a process of updating an existing lane line model or adding a new lane line model based on lane line information continuously acquired in a driving direction of a vehicle according to an exemplary embodiment.
  • Referring to FIG. 8, a process of generating lane line models based on lane line information continuously acquired in the driving direction of the vehicle 100 is illustrated.
  • Based on the continuity of the acquired lane line information, an existing lane line model is updated or a new lane line model is added, and each lane line model can be extended. If an occluded area, or an area that the lidar sensor could not detect, persists, the extension of the lane line model can be terminated.
  • Using the direction information of the entire road and the lane line information of each lane line model, the vehicle 100 can fit the lane line models with various curve models such as a B-spline or a polynomial spline. The vehicle 100 may obtain lane line models that avoid overfitting by using the locations of the points stored in each lane line model together with the overall direction information.
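  • For instance, a smoothed B-spline fit through a model's points, where the smoothing factor plays the anti-overfitting role; SciPy's splprep/splev are used here as one possible toolchain, and a polynomial fit would be an equally valid choice. All parameters are assumptions:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_lane_line_curve(model, smoothing=1.0, degree=3, samples=50):
    """Fit a smoothed B-spline through one lane line model's points.

    Points are first ordered along the stored model direction so the
    spline parameterization follows the road.
    """
    pts = np.asarray(model["points"], dtype=float)[:, :2]
    order = np.argsort(pts @ np.asarray(model["direction"], dtype=float))
    x, y = pts[order, 0], pts[order, 1]
    tck, _ = splprep([x, y], s=smoothing * len(pts), k=degree)
    u = np.linspace(0.0, 1.0, samples)
    return np.stack(splev(u, tck), axis=1)        # (samples, 2) curve polyline
```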
  • FIG. 9 is a diagram for explaining a reason why lane line models are fitted to predetermined curve models using direction information of an entire road and lane line information of each lane line model.
  • the vehicle 100 may generate a lane including two adjacent lane lines among lane lines based on the generated lane line models.
  • a process of creating a lane will be described in detail with reference to FIGS. 10 to 11.
  • FIG. 10 is a detailed flowchart illustrating a process of generating a lane including two adjacent lane lines among lane lines based on lane line models generated according to an exemplary embodiment.
  • the vehicle 100 may generate lane models including two adjacent lane lines among lane lines based on the generated lane line models.
  • For example, the vehicle 100 may generate lane models consisting of two adjacent lane lines by comparing the distance between two lane lines with a predetermined width information value, based on the generated lane line models. If the distance between two lane lines is within the range of a predetermined width information value corresponding to the width of one lane, a lane model for the single lane composed of those two lane lines may be generated.
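  • A sketch of this width-based pairing, assuming lane lines are sorted by lateral offset and a nominal lane width of about 3.5 m (both assumptions for illustration):

```python
def pair_lane_lines(lane_lines, lane_width=3.5, tolerance=0.7):
    """Pair adjacent lane lines whose spacing matches one lane width.

    lane_lines is assumed sorted by lateral offset, each entry carrying
    a representative 'offset' in meters; the width values are assumed
    typical numbers, not values from this disclosure.
    """
    lanes = []
    for left, right in zip(lane_lines, lane_lines[1:]):
        gap = abs(right["offset"] - left["offset"])
        if abs(gap - lane_width) < tolerance:     # spacing looks like one lane
            lanes.append({"left": left, "right": right})
    return lanes
```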
  • the vehicle 100 may perform extrapolation on lane line models corresponding to both lane lines of the generated lane models.
  • the vehicle 100 may determine whether lanes of each of the expanded lane models overlap each other by more than a predetermined criterion as the extrapolation is performed.
  • In step 1040, if there are lanes that overlap each other by more than the predetermined criterion, the vehicle 100 may determine that the corresponding lane models constitute the same lane.
  • the vehicle 100 may merge the lane models determined to form the same lane into one lane model.
  • a continuous lane line model for the entire merged lane may be generated.
  • the vehicle 100 may determine whether merging between lane models determined to constitute the same lane is completed by performing extrapolation for each lane line model for all lanes.
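  • The extrapolate-and-merge test might be sketched as follows, with each lane fragment represented by a centerline polyline and a unit direction vector; the extension length and lateral tolerance are assumed stand-ins for the overlap criterion of steps 1020 to 1050:

```python
import numpy as np

def merge_if_same_lane(lane_a, lane_b, extend=5.0, lateral_tol=0.5):
    """Extrapolate two lane fragments and merge them if they overlap.

    Each lane carries a 'center' polyline ((N, 2) array ordered along
    the road) and a unit 'direction' vector.
    """
    end_a = lane_a["center"][-1] + extend * lane_a["direction"]
    start_b = lane_b["center"][0] - extend * lane_b["direction"]
    overlaps = end_a[0] >= start_b[0]             # extended spans meet longitudinally
    aligned = abs(end_a[1] - start_b[1]) < lateral_tol
    if overlaps and aligned:                      # treat the fragments as one lane
        return {"center": np.vstack([lane_a["center"], lane_b["center"]]),
                "direction": lane_b["direction"]}
    return None
```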
  • FIG. 11 is a diagram illustrating a process of generating lane models including two adjacent lane lines and merging lane models constituting the same lane among the generated lane models into one lane model, according to an exemplary embodiment.
  • Since lane lines can be detected as broken segments, a lane generated based on those lane lines can also be created broken, as shown in FIG. 11. For the broken parts in FIG. 11, if performing extrapolation on the lane line models corresponding to both lane lines of each lane shows that the extended lanes constitute the same lane, the lane models of the individual lanes are merged into one lane model, so that a lane model for one continuous lane can be generated. By applying this method to all lanes, all lanes can be detected.
  • Thereafter, curve fitting may be performed again on the lane line model of each continuous, full-length lane line to generate a continuous lane line model.
  • Each of the above-described embodiments may be provided in the form of a computer program or application, stored in a computer-readable storage medium, that causes the vehicle 100 to perform the method of detecting lanes including the predetermined steps described above.
  • The above-described embodiments may be implemented as a computer-readable storage medium that stores instructions and data executable by a computer or a processor. At least one of the instructions and data may be stored in the form of program code and, when executed by a processor, may generate a predetermined program module to perform a predetermined operation.
  • Such computer-readable storage media include read-only memory (ROM), random-access memory (RAM), flash memory, CD-ROMs, CD-Rs, CD+Rs, CD-RWs, CD+RWs, DVD-ROMs, DVD-Rs, DVD+Rs, DVD-RWs, DVD+RWs, DVD-RAMs, BD-ROMs, BD-Rs, BD-R LTHs, BD-REs, magnetic tapes, floppy disks, magneto-optical data storage devices, optical data storage devices, hard disks, and solid-state drives (SSDs), and may be any device capable of storing instructions or software, associated data, data files, and data structures, and of providing the instructions or software, associated data, data files, and data structures to a processor or computer so that the processor or computer can execute the instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

Disclosed is a vehicle comprising: a sensor unit that senses a three-dimensional space using at least one sensor and outputs spatial information; a memory that stores computer-executable instructions; and a processor that executes the computer-executable instructions so as to: acquire road area information from the acquired spatial information; extract road marking information from the acquired road area information based on an intensity value; acquire lane line information by clustering the extracted road marking information and then performing filtering based on lane line width information; generate lane line models based on the lane line information continuously acquired in the driving direction of the vehicle; and generate a lane comprising two adjacent lane lines among lane lines based on the generated lane line models.
PCT/KR2019/013501 2019-09-27 2019-10-15 Vehicle and method for detecting lanes Ceased WO2021060599A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0120131 2019-09-27
KR1020190120131A KR102255924B1 (ko) 2019-09-27 2019-09-27 Vehicle and method for detecting lanes

Publications (1)

Publication Number Publication Date
WO2021060599A1 (fr) 2021-04-01

Family

ID=75165034

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/013501 Ceased WO2021060599A1 (fr) 2019-09-27 2019-10-15 Vehicle and method for detecting lanes

Country Status (2)

Country Link
KR (1) KR102255924B1 (fr)
WO (1) WO2021060599A1 (fr)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102751623B1 * 2023-05-19 2025-01-10 iNavi Systems Co., Ltd. Point cloud road lane extraction system and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150165972A1 (en) * 2012-06-19 2015-06-18 Toyota Jidosha Kabushiki Kaisha Roadside object detection apparatus
KR20170104287A * 2016-03-07 2017-09-15 Electronics and Telecommunications Research Institute Apparatus and method for recognizing a drivable area
WO2018126228A1 * 2016-12-30 2018-07-05 DeepMap Inc. Sign and lane creation for high definition maps used for autonomous vehicles
KR20180137905A * 2017-06-20 2018-12-28 Hyundai Mobis Co., Ltd. Apparatus and method for recognizing lanes
KR20190055634A * 2017-11-15 2019-05-23 Korea Electronics Technology Institute Lane detection apparatus and lane detection method


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591618A * 2021-07-14 2021-11-02 Chongqing Changan Automobile Co., Ltd. Method, system, vehicle, and storage medium for estimating the shape of the road ahead
CN113591618B * 2021-07-14 2024-02-20 Chongqing Changan Automobile Co., Ltd. Method, system, vehicle, and storage medium for estimating the shape of the road ahead
CN116153057A * 2022-09-12 2023-05-23 Northeast Forestry University Method for estimating lane width based on lidar point clouds
CN116228561A * 2022-12-26 2023-06-06 Wuhan Zhonghaiting Data Technology Co., Ltd. Lane line restoration method and system based on key point sequences

Also Published As

Publication number Publication date
KR20210037468A (ko) 2021-04-06
KR102255924B1 (ko) 2021-05-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19946871

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19946871

Country of ref document: EP

Kind code of ref document: A1
