
WO2025022609A1 - Object detection method and object detection device - Google Patents

Object detection method and object detection device

Info

Publication number
WO2025022609A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
velocity
vehicle
relative distance
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2023/027390
Other languages
French (fr)
Japanese (ja)
Inventor
旭伸 池田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nissan Motor Co Ltd
Original Assignee
Nissan Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nissan Motor Co Ltd filed Critical Nissan Motor Co Ltd
Priority to PCT/JP2023/027390 priority Critical patent/WO2025022609A1/en
Publication of WO2025022609A1 publication Critical patent/WO2025022609A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation

Definitions

  • the present invention relates to an object detection method and an object detection device.
  • the present invention aims to improve the detection accuracy when detecting objects from captured images.
  • the vehicle control device 10 includes an external sensor 11 , a vehicle sensor 12 , a positioning device 13 , a map database (map DB) 14 , an actuator 17 , and a controller 18 .
  • the external sensor 11 includes a camera 11a mounted on the vehicle 1 and capturing an image of the surroundings of the vehicle 1.
  • the camera 11a may be, for example, a stereo camera.
  • the external sensor 11 may also include a plurality of different types of object detection sensors that detect objects around the vehicle 1, such as a laser radar, a millimeter wave radar, or a LIDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging).
  • the vehicle sensor 12 is mounted on the host vehicle 1 and detects various information (vehicle signals) obtained from the host vehicle 1.
  • the vehicle sensor 12 includes, for example, a vehicle speed sensor that detects the vehicle speed of the host vehicle 1, a wheel speed sensor that detects the rotational speed of the tires of the host vehicle 1, an acceleration sensor that detects the acceleration and deceleration of the host vehicle 1, a steering angle sensor that detects the steering angle of the steering wheel, a turning angle sensor that detects the turning angle of the steered wheels, a gyro sensor that detects the angular velocity of the host vehicle 1, a yaw rate sensor that detects the yaw rate, an accelerator sensor that detects the accelerator opening of the host vehicle, and a brake sensor that detects the amount of brake operation.
  • the positioning device 13 includes a Global Navigation Satellite System (GNSS) receiver and receives radio waves from a plurality of navigation satellites to measure the current position of the vehicle 1.
  • the GNSS receiver may be, for example, a Global Positioning System (GPS) receiver.
  • the positioning device 13 may be, for example, an inertial navigation system.
  • the map database 14 stores road map data.
  • the map database 14 may store high-precision map data (hereinafter simply referred to as a "high-precision map”) suitable as map information for automated driving.
  • Actuator 17 operates the steering wheel, accelerator opening, and brake device of the host vehicle in response to control signals from controller 18 to generate vehicle behavior of the host vehicle.
  • Actuator 17 includes a steering actuator, an accelerator opening actuator, and a brake control actuator.
  • the steering actuator controls the steering direction and steering amount of the host vehicle.
  • the accelerator opening actuator controls the accelerator opening of the host vehicle.
  • the brake control actuator controls the braking operation of the brake device of the host vehicle.
  • the controller 18 is an electronic control unit that controls the running of the vehicle 1.
  • the controller 18 includes a processor 18a and peripheral components such as a storage device 18b.
  • the processor 18a may be, for example, a CPU or an MPU.
  • the storage device 18b may include a semiconductor storage device, a magnetic storage device, an optical storage device, or the like.
  • the storage device 18b may include memories such as a register, a cache memory, and a ROM and a RAM used as a main storage device.
  • the functions of the controller 18 described below are realized, for example, by the processor 18a executing a computer program stored in the storage device 18b.
  • the controller 18 may be formed of dedicated hardware for executing each information processing described below.
  • the controller 18 detects the optical flow on the image captured by the camera 11a, and detects moving objects around the host vehicle 1 based on the detection result of the optical flow. Specifically, the controller 18 detects feature points of an object from the image captured by the camera 11a, calculates the optical flow of the feature points, and calculates the lateral velocity Vx of the feature points on the image coordinate system of the image captured by the camera 11a based on the optical flow.
  • Fig. 2A is a schematic diagram of an image coordinate system SI and a spatial coordinate system SS.
  • the image coordinate system SI is a coordinate system that has a reference point (e.g., the center of the image) on the captured image IM of the camera 11a as its origin and represents the two-dimensional coordinates of the pixels of the captured image.
  • the horizontal direction and the vertical direction on the image coordinate system SI are represented by the symbols "x" and "y", respectively.
  • the spatial coordinate system SS is a coordinate system that represents three-dimensional coordinates in a three-dimensional space.
  • a camera coordinate system having the viewpoint of the camera 11a (the center point of the image sensor) as the origin O is exemplified as the spatial coordinate system SS, but a stationary coordinate system having a fixed point as the origin, such as a map coordinate system, may be used as the spatial coordinate system SS.
  • the longitudinal direction of the spatial coordinate system SS is represented by the symbol "Z”.
  • the longitudinal direction is the optical axis direction AO of the camera 11a.
  • the vertical direction is represented by the symbol "Y", and the lateral direction perpendicular to the longitudinal direction and the vertical direction is represented by the symbol "X”.
  • the longitudinal direction and lateral direction of the spatial coordinate system SS may respectively coincide with the longitudinal direction and the vehicle width direction of the vehicle 1.
  • the optical axis direction AO may be inclined in the pitching direction with respect to the longitudinal direction of the vehicle 1.
  • the controller 18 measures the relative distance and azimuth angle from the vehicle 1 to the feature point on the spatial coordinate system SS.
  • the relative distance from the vehicle 1 to the feature point will be simply referred to as "relative distance”.
  • the controller 18 may measure the relative distance based on the parallax of the feature point on a pair of captured images taken by the camera 11a, which is a stereo camera, and measure the azimuth angle from the coordinate position of the feature point.
  • the controller 18 may measure the relative distance and azimuth angle based on the laser radar, millimeter wave radar, LIDAR, or the like of the external sensor 11.
  • the controller 18 estimates a first feature point velocity, which is the velocity of the feature point of the object on the spatial coordinate system SS, based on the change over time of the relative distance and the azimuth angle.
  • the controller 18 calculates the lateral velocity VX of the feature point on the spatial coordinate system SS using the movement velocity of the feature point on the image coordinate system SI.
  • the accuracy of the vertical movement speed on the spatial coordinate system SS affects the calculation accuracy of the horizontal speed.
  • the controller 18 measures the vehicle behavior of the host vehicle 1 based on the vehicle signals detected by the vehicle sensors 12, and approximates the longitudinal speed based on the vehicle behavior. For example, the controller 18 measures, as the vehicle behavior, the amount of movement and the amount of change in attitude (e.g., the amount of change in yaw angle) of the host vehicle 1 during a processing cycle of the controller 18. The controller 18 estimates the longitudinal speed of the feature point by calculating a homogeneous transformation matrix T relating to the time change in the position and attitude of the camera 11a based on the measured vehicle behavior.
  • the time width (t-(t-1)) of the processing cycle of the controller 18 is represented as Δt.
  • the estimated value V eZ of the vertical velocity of the characteristic point on the spatial coordinate system SS can be obtained by the following equation (3).
  • VeZ = (Zt - Z't-1)/Δt …(3)
  • the longitudinal velocity V eZ estimated in this way means the relative velocity resulting from the vehicle behavior of the host vehicle 1 when the feature point of the object is stationary in the longitudinal direction.
  • the controller 18 calculates the lateral velocity V2X of the feature point on the spatial coordinate system SS, which is the velocity component of the feature point in the lateral direction perpendicular to the optical axis direction of the camera 11a, based on the estimated value VeZ of the vertical velocity, using the following equation (4).
  • V2X = (ZVx + xVeZ)/f …(4)
  • this makes it possible to calculate the lateral velocity V2X that does not depend on the measurement accuracy of the vertical velocity of the stereo camera.
  • the lateral velocity V2X is an example of a "second feature point velocity" described in the claims.
  • the controller 18 clusters the feature points extracted from the image captured by the camera 11a into feature point groups based on the second feature point velocities, and detects individual objects from the feature point groups.
  • in step S1, the camera 11a captures an image of the surroundings of the host vehicle 1.
  • in step S2, the controller 18 detects the optical flow of feature points from the captured image of the surroundings of the host vehicle 1.
  • in step S3, the controller 18 measures the relative distance from the host vehicle to the feature points.
  • in step S4, the controller 18 measures the vehicle behavior of the host vehicle 1 based on the vehicle signal detected by the vehicle sensor 12.
  • in step S5, the controller 18 estimates a first feature point velocity based on a time change in the relative distance.
  • in step S6, the controller 18 calculates a second feature point velocity (e.g., a lateral velocity V2X) based on the optical flow, the relative distance, and the vehicle behavior.
  • in step S7, the controller 18 clusters the feature points into feature point groups based on the second feature point velocity, and detects an object from the feature point groups.
  • Fig. 4 is a block diagram showing an example of a functional configuration of the controller 18 according to the second embodiment.
  • the controller 18 includes a spatial position calculation unit 20, an optical flow calculation unit 21, a first feature point velocity estimation unit 22, a second feature point velocity calculation unit 23, a moving object detection unit 24, and a vehicle control unit 25.
  • the spatial position calculation unit 20 measures the relative distance and azimuth angle from the vehicle 1 to the feature point.
  • the spatial position calculation unit 20 may measure the relative distance and azimuth angle based on the parallax between a pair of captured images taken by the camera 11a, which is a stereo camera, or may measure the relative distance and azimuth angle based on a laser radar, a millimeter wave radar, a LIDAR, or the like.
  • the spatial position calculation unit 20 calculates the position (X, Y, Z) of the feature point on the spatial coordinate system SS based on the relative distance and the azimuth angle.
  • the optical flow calculation unit 21 detects the optical flow of characteristic points from a captured image of the surroundings of the vehicle 1 .
  • the first feature point velocity estimation unit 22 estimates the horizontal velocity VX and the vertical velocity VZ of the feature point on the spatial coordinate system SS as the first feature point velocity based on the time change of the position calculated by the spatial position calculation unit 20.
  • the first feature point velocity estimation unit 22 may estimate the vertical velocity VZ using a Kalman filter.
  • in particular, when the relative distance to the feature point is greater than the predetermined threshold Th, the vertical velocity VZ may be estimated based on the relative distance and the azimuth angle measured from multiple frames of the captured image of the stereo camera used as the camera 11a.
  • the estimation can be stabilized by using the estimated value V eZ of the vertical velocity given by the above formula (3) as the initial state of the vertical velocity VZ in the Kalman filter.
  • the second feature point velocity calculation unit 23 calculates the lateral velocity Vx of the feature point on the image coordinate system SI based on the optical flow calculated by the optical flow calculation unit 21.
  • the second feature point velocity calculation unit 23 calculates the lateral velocity V2X given by the above equations (3) and (4) as the second feature point velocity based on the lateral velocity Vx , the longitudinal position Z of the feature point, and the vehicle behavior of the host vehicle 1.
  • the moving object detection unit 24 classifies the feature points extracted from the captured image into groups based on the lateral velocity VX and vertical velocity VZ estimated by the first feature point velocity estimation unit 22 and the lateral velocity V2X calculated by the second feature point velocity calculation unit 23, and detects the classified groups as individual moving objects.
  • the moving object detection unit 24 includes a clustering unit 24a and a separation unit 24b.
  • the clustering unit 24a judges whether the relative distance from the vehicle 1 to the feature point is equal to or smaller than a predetermined threshold Th.
  • the predetermined threshold Th may be set to a threshold distance at which the detection speed by the stereo camera used as the camera 11a has a predetermined allowable error. For example, the distance at which the detection speed has an allowable error can be calculated by determining the speed at which the parallax changes by one tone from the parallax characteristics and observation period of the stereo camera.
  • the weighting coefficient a is set to be larger when the relative distance is large than when the relative distance is small.
  • the weighting coefficient a may be set to be larger as the relative distance increases.
  • the clustering unit 24a calculates a weighted velocity VwX by the following equation (5) based on the longitudinal velocity VZ estimated by the first feature point velocity estimating unit 22, VeZ calculated by the above equation (3), and the weighting coefficient a.
  • Fig. 6 is a flowchart of an example of the object detection method according to the modified example.
  • the processes in steps S30 to S35 are similar to those in steps S1 to S6 in Fig. 3.
  • the clustering unit 24a calculates a weighted velocity VwX .
  • the clustering unit 24a labels the feature points according to whether they are moving to the right or left, or are stationary in the horizontal direction, based on the weighted speed Vw X.
  • the clustering unit 24a forms a feature point group by clustering feature points labeled as moving to the right, based on the weighted speed Vw X.
  • the clustering unit 24a clusters feature points labeled as moving to the left, and feature points labeled as stationary.
  • the process of step S39 is the same as step S8 in FIG. 3.
  • the controller 18 of the third embodiment estimates a crossing time T cross from the current time until the moving object crosses the path of the host vehicle 1.
  • when a stereo camera is used to calculate the speed of a distant moving object in the spatial coordinate system SS, the measurement accuracy becomes unstable.
  • when a Kalman filter is used to observe coordinates in the spatial coordinate system SS and calculate the speed, the delay of the Kalman filter becomes a problem when detecting a road user (vehicle or pedestrian) jumping out.
  • the controller 18 of the third embodiment obtains the speed at which the moving object moves in a direction that intersects with the path of the vehicle 1 (hereinafter referred to as "intersection speed V cross ”) from the lateral speed V 2X calculated as described above, and estimates the intersection time T cross .
  • Fig. 7 is a block diagram showing an example of the functional configuration of the controller 18 according to the third embodiment.
  • the controller 18 according to the third embodiment includes an intersection region extraction unit 26, an intersection determination unit 27, and an intersection time estimation unit 28 in addition to the configuration shown in Fig. 4.
  • the intersection area extraction unit 26 extracts an intersection area R cross , which is a candidate area where an object expected to intersect with the path of the vehicle 1 may exist, from the area around the vehicle 1.
  • FIG. 8(a) is a schematic diagram of an example of the intersection area R cross . For example, when the path T r of the vehicle 1 intersects with the crosswalk CW, the area occupied by the crosswalk CW may be extracted as the intersection area R cross .
  • the crossing time estimation unit 28 converts the lateral direction speed V2X calculated as described above into a crossing speed Vcross , which is the speed in the crossing direction Dcross .
  • Fig. 8B is a schematic diagram for explaining a method of calculating the crossing speed Vcross. If the angle between the vertical direction (Z direction) on the spatial coordinate system SS and the crossing direction Dcross is θ, the horizontal (X direction) component VX and the vertical (Z direction) component VZ of the crossing speed Vcross are expressed by the following equations (6) and (7).
  • VX = sinθ·Vcross …(6)
  • VZ = cosθ·Vcross …(7)
  • the conversion equation from the lateral velocity V2X to the crossing velocity Vcross is derived as the following equation (11).
  • Vcross = V2X/(sinθ - (X/Z)cosθ) …(11)
  • the intersection time estimation unit 28 calculates the intersection velocity Vcross based on the horizontal velocity V2X calculated by the second feature point velocity calculation unit 23, the vertical position Z and horizontal position X of the object calculated by the spatial position calculation unit 20, and the above equation (11).
  • the intersection time estimation unit 28 estimates the intersection time Tcross by dividing the distance between the current position of the moving object MO and the intersection point Pcross by the intersection speed Vcross (a computational sketch follows this list).
  • in step S40, the intersection area extraction unit 26 extracts the intersection area Rcross.
  • in step S41, the intersection determination unit 27 detects an intersection object.
  • in step S42, the intersection determination unit 27 calculates the intersection point Pcross.
  • in step S43, the intersection time estimation unit 28 calculates the intersection speed Vcross, and then calculates the intersection time Tcross.
  • in step S44, the vehicle control unit 25 controls at least one of the steering angle, driving force, or braking force of the host vehicle 1 based on the intersection time Tcross.
  • the controller 18 detects the optical flow of feature points from captured images obtained by photographing the surroundings of the host vehicle 1 with the camera 11a, measures the relative distance from the host vehicle 1 to the feature points, measures the vehicle behavior of the host vehicle 1, estimates a first feature point velocity, which is the speed of the feature point in the spatial coordinate system, based on the relative distance, calculates a second feature point velocity, which is the lateral velocity component of the feature point that is perpendicular to the optical axis direction of the camera 11a, based on the optical flow, the relative distance, and the vehicle behavior, clusters the feature points into feature point groups based on the second feature point velocity, and detects objects from the feature point groups.
  • this allows the second feature point velocity, which is not dependent on the measurement accuracy of the vertical velocity of the stereo camera, to be calculated from the captured images, and the feature points can be clustered based on the second feature point velocity, thereby improving the object detection accuracy.
  • by separating, based on the first feature point velocity calculated from the relative distance measurement results, the feature point groups generated by clustering, it is possible to separate objects that have different velocities in the optical axis direction of the camera. As a result, the object detection accuracy can be further improved.
  • if the relative distance is greater than a predetermined threshold, the controller 18 may cluster the feature points based on the second feature point velocity, and if the relative distance is equal to or less than the predetermined threshold, the controller 18 may cluster the feature points based on the first feature point velocity.
  • the controller 18 may cluster the feature points based on a weighted sum of the first feature point velocity and the second feature point velocity, weighted according to the relative distance.
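The conversion from the lateral velocity V2X to the crossing speed and crossing time (equations (6), (7) and (11)) can be expressed compactly. The sketch below is only an illustration: it assumes the sign convention of equation (11) as reconstructed above, and the input names (angle θ, object position (X, Z), distance to the crossing point) are placeholders for quantities the text obtains from the intersection region and the planned path.

```python
import math

def crossing_time(v_2x, theta, X, Z, dist_to_crossing):
    """Sketch of the third embodiment's crossing-time estimate.

    v_2x             -- lateral velocity V2X of the object [m/s]
    theta            -- angle between the Z axis and the crossing direction Dcross [rad]
    X, Z             -- lateral / longitudinal position of the object [m]
    dist_to_crossing -- distance from the object to the crossing point Pcross [m]
    Returns (Vcross, Tcross). A denominator near zero means the object is not
    actually moving toward the crossing direction under this model.
    """
    denom = math.sin(theta) - (X / Z) * math.cos(theta)
    v_cross = v_2x / denom                     # equation (11) as reconstructed above
    t_cross = dist_to_crossing / abs(v_cross)  # time until the path is crossed
    return v_cross, t_cross
```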

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

Provided is an object detection method, wherein: the optical flow of a feature point is detected from a captured image obtained by photographing the surroundings of a host vehicle with a camera (S2); a relative distance from the host vehicle to the feature point is measured (S3); a vehicle behavior of the host vehicle is measured (S4); a first feature point speed, which is the speed of the feature point on a spatial coordinate system, is estimated on the basis of the relative distance (S5); a second feature point speed, which is a speed component of the feature point in a lateral direction orthogonal to an optical axis direction of the camera, is calculated on the basis of the optical flow, the relative distance, and the vehicle behavior (S6); the feature point is clustered into a feature point group on the basis of the second feature point speed (S7); and the object is detected from the feature point group.

Description

Object detection method and object detection device

The present invention relates to an object detection method and an object detection device.

The following Patent Document 1 describes a moving speed detection device that detects an object in a monitored area by detecting an optical flow from a captured image of the monitored area, calculating a moving speed based on the optical flow, and grouping pixel blocks based on the moving speed.

JP 2005-214914 A

When objects are detected by clustering feature points on an image based on the movement speed detected from the captured image, there is a risk that objects moving at different speeds in the direction of the camera's optical axis may be mistakenly recognized as the same object. The present invention aims to improve the detection accuracy when detecting objects from captured images.

In an object detection method according to one aspect of the present invention, the optical flow of feature points is detected from an image captured by a camera around the host vehicle, the relative distance from the host vehicle to the feature points is measured, the vehicle behavior of the host vehicle is measured, a first feature point velocity, which is the velocity of the feature point in a spatial coordinate system, is estimated based on the relative distance, a second feature point velocity, which is the lateral velocity component of the feature point perpendicular to the optical axis direction of the camera, is calculated based on the optical flow, the relative distance, and the vehicle behavior, the feature points are clustered into feature point groups based on the second feature point velocity, and objects are detected from the feature point groups.

According to the present invention, it is possible to improve the detection accuracy when detecting an object from a captured image.
The objects and advantages of the invention will be realized and attained by means of the elements and combinations recited in the appended claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.

Fig. 1 is a schematic configuration diagram of an example of a vehicle control device according to an embodiment.
Fig. 2(a) is a schematic diagram of an image coordinate system and a spatial coordinate system, and Fig. 2(b) is a schematic diagram of the effect of the measurement accuracy of the longitudinal velocity on the lateral velocity.
Fig. 3 is a flowchart illustrating an example of an object detection method according to the first embodiment.
Fig. 4 is a block diagram illustrating an example of a functional configuration of a controller according to the second embodiment.
Fig. 5 is a flowchart illustrating an example of an object detection method according to the second embodiment.
Fig. 6 is a flowchart illustrating an example of an object detection method according to a modified example.
Fig. 7 is a block diagram illustrating an example of a functional configuration of a controller according to the third embodiment.
Fig. 8(a) is a schematic diagram of an example of an intersection region, and Fig. 8(b) is a schematic diagram for explaining a method of calculating an intersection speed.
Fig. 9 is a flowchart of an example of a method for calculating an intersection time.

(First Embodiment)
(Configuration)
Fig. 1 is a schematic configuration diagram of an example of the vehicle control device according to the embodiment. The host vehicle 1 includes a vehicle control device 10 that controls the traveling of the host vehicle 1. For example, the vehicle control device 10 may execute autonomous driving control, which automatically drives the host vehicle 1 without the driver's involvement based on the traveling environment around the host vehicle 1, or driving assistance control, which assists the driver in driving the host vehicle 1 by controlling at least one of the steering mechanism, driving force, or braking force of the host vehicle 1. The driving assistance control may be, for example, automatic braking, preceding vehicle following control, constant speed traveling control, or merging assistance control.

The vehicle control device 10 includes an external sensor 11, a vehicle sensor 12, a positioning device 13, a map database (map DB) 14, an actuator 17, and a controller 18.
The external sensor 11 includes a camera 11a that is mounted on the host vehicle 1 and captures an image of the surroundings of the host vehicle 1. The camera 11a may be, for example, a stereo camera. The external sensor 11 may also include a plurality of different types of object detection sensors that detect objects around the host vehicle 1, such as a laser radar, a millimeter wave radar, or a LIDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging).

The vehicle sensor 12 is mounted on the host vehicle 1 and detects various information (vehicle signals) obtained from the host vehicle 1. The vehicle sensor 12 includes, for example, a vehicle speed sensor that detects the vehicle speed of the host vehicle 1, a wheel speed sensor that detects the rotational speed of the tires of the host vehicle 1, an acceleration sensor that detects the acceleration and deceleration of the host vehicle 1, a steering angle sensor that detects the steering angle of the steering wheel, a turning angle sensor that detects the turning angle of the steered wheels, a gyro sensor that detects the angular velocity of the host vehicle 1, a yaw rate sensor that detects the yaw rate, an accelerator sensor that detects the accelerator opening of the host vehicle, and a brake sensor that detects the amount of brake operation.

The positioning device 13 includes a Global Navigation Satellite System (GNSS) receiver and receives radio waves from a plurality of navigation satellites to measure the current position of the host vehicle 1. The GNSS receiver may be, for example, a Global Positioning System (GPS) receiver. The positioning device 13 may also be, for example, an inertial navigation system.
The map database 14 stores road map data. For example, the map database 14 may store high-precision map data (hereinafter simply referred to as a "high-precision map") suitable as map information for automated driving.

The actuator 17 operates the steering wheel, accelerator opening, and brake device of the host vehicle in response to control signals from the controller 18 to generate vehicle behavior of the host vehicle. The actuator 17 includes a steering actuator, an accelerator opening actuator, and a brake control actuator. The steering actuator controls the steering direction and steering amount of the host vehicle. The accelerator opening actuator controls the accelerator opening of the host vehicle. The brake control actuator controls the braking operation of the brake device of the host vehicle.

The controller 18 is an electronic control unit that controls the running of the host vehicle 1. The controller 18 includes a processor 18a and peripheral components such as a storage device 18b. The processor 18a may be, for example, a CPU or an MPU. The storage device 18b may include a semiconductor storage device, a magnetic storage device, an optical storage device, or the like. The storage device 18b may include memories such as a register, a cache memory, and a ROM and a RAM used as a main storage device.
The functions of the controller 18 described below are realized, for example, by the processor 18a executing a computer program stored in the storage device 18b. Note that the controller 18 may be formed of dedicated hardware for executing each information processing described below.

When controlling the driving of the host vehicle 1, the controller 18 detects the optical flow on the image captured by the camera 11a, and detects moving objects around the host vehicle 1 based on the detection result of the optical flow.
Specifically, the controller 18 detects feature points of an object from the image captured by the camera 11a, calculates the optical flow of the feature points, and calculates the lateral velocity Vx of the feature points on the image coordinate system of the captured image of the camera 11a based on the optical flow.
Fig. 2(a) is a schematic diagram of an image coordinate system SI and a spatial coordinate system SS. The image coordinate system SI is a coordinate system that has a reference point (e.g., the center of the image) on the captured image IM of the camera 11a as its origin and represents the two-dimensional coordinates of the pixels of the captured image. In Fig. 2(a), the horizontal direction and the vertical direction on the image coordinate system SI are represented by the symbols "x" and "y", respectively.
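As a rough illustration of the optical-flow step above, the sketch below tracks feature points between two frames with OpenCV and converts their horizontal pixel displacement into the image-plane lateral velocity Vx. The OpenCV calls are standard library functions, but the corner-detection settings, the frame interval dt, and the use of the image center as the origin of the x axis are assumptions made for the example, not values from the text.

```python
import cv2
import numpy as np

def image_lateral_velocities(prev_gray, curr_gray, dt, max_corners=200):
    """Minimal sketch: detect feature points in the previous frame, track them with
    pyramidal Lucas-Kanade optical flow, and return their lateral image position x
    and lateral velocity Vx (origin of x taken at the image center)."""
    h, w = prev_gray.shape
    cx = w / 2.0  # image center assumed as the origin of the image x axis

    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                   qualityLevel=0.01, minDistance=7)
    if pts0 is None:
        return np.array([]), np.array([])

    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts0, None)

    ok = status.ravel() == 1
    x0 = pts0.reshape(-1, 2)[ok, 0] - cx   # x at time t-1 [px]
    x1 = pts1.reshape(-1, 2)[ok, 0] - cx   # x at time t   [px]
    vx = (x1 - x0) / dt                    # lateral image velocity Vx [px/s]
    return x1, vx
```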

The spatial coordinate system SS is a coordinate system that represents three-dimensional coordinates in a three-dimensional space. In this specification, a camera coordinate system having the viewpoint of the camera 11a (the center point of the image sensor) as the origin O is exemplified as the spatial coordinate system SS, but a stationary coordinate system having a fixed point as the origin, such as a map coordinate system, may be used as the spatial coordinate system SS.
In Fig. 2(a), the longitudinal direction of the spatial coordinate system SS is represented by the symbol "Z". The longitudinal direction is the optical axis direction AO of the camera 11a. The vertical direction is represented by the symbol "Y", and the lateral direction perpendicular to the longitudinal direction and the vertical direction is represented by the symbol "X". For example, the longitudinal direction and lateral direction of the spatial coordinate system SS may respectively coincide with the longitudinal direction and the vehicle width direction of the host vehicle 1. The optical axis direction AO may be inclined in the pitching direction with respect to the longitudinal direction of the host vehicle 1.

The controller 18 measures the relative distance and azimuth angle from the host vehicle 1 to the feature point on the spatial coordinate system SS. Hereinafter, the relative distance from the host vehicle 1 to the feature point will be simply referred to as the "relative distance". For example, the controller 18 may measure the relative distance based on the parallax of the feature point on a pair of captured images taken by the camera 11a, which is a stereo camera, and measure the azimuth angle from the coordinate position of the feature point. Alternatively, the controller 18 may measure the relative distance and azimuth angle based on the laser radar, millimeter wave radar, LIDAR, or the like of the external sensor 11.
The controller 18 estimates a first feature point velocity, which is the velocity of the feature point of the object on the spatial coordinate system SS, based on the change over time of the relative distance and the azimuth angle.
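A minimal sketch of how the relative distance and azimuth angle could be obtained from stereo parallax under a standard pinhole model is shown below. The baseline, focal length, and the particular azimuth convention (angle from the optical axis in the X-Z plane) are assumptions of the example, not parameters given in the text.

```python
import math

def distance_and_azimuth(disparity_px, x_px, focal_px, baseline_m):
    """Pinhole-stereo sketch: relative distance and azimuth of one feature point.

    disparity_px -- horizontal disparity of the feature point [px]
    x_px         -- lateral image coordinate of the feature point, origin at image center [px]
    focal_px     -- focal length expressed in pixels (assumed known)
    baseline_m   -- stereo baseline [m] (assumed known)
    """
    Z = focal_px * baseline_m / disparity_px   # longitudinal distance [m]
    X = x_px * Z / focal_px                    # lateral offset [m]
    azimuth = math.atan2(X, Z)                 # angle from the optical axis [rad]
    relative_distance = math.hypot(X, Z)
    return relative_distance, azimuth, (X, Z)
```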

The measurement accuracy of a stereo camera deteriorates in inverse proportion to the square of the distance to the measurement target. It is therefore difficult to accurately calculate the velocity on the spatial coordinate system SS for feature points of a distant object.
For this reason, the controller 18 calculates the lateral velocity VX of the feature point on the spatial coordinate system SS using the movement velocity of the feature point on the image coordinate system SI.
However, as shown by the arrow AR in Fig. 2(b), even if the feature point Pf moves in the depth direction Dd in which the feature point Pf is viewed from the origin O (the camera viewpoint), the lateral position x of the feature point Pf on the image coordinate system SI does not change. Therefore, the accuracy of the longitudinal movement velocity on the spatial coordinate system SS affects the calculation accuracy of the lateral velocity.

Specifically, if the lateral position and lateral velocity of the feature point Pf on the image coordinate system SI are denoted by "x" and "Vx", respectively, the longitudinal position and longitudinal velocity on the spatial coordinate system SS are denoted by "Z" and "VZ", respectively, and the focal length of the camera 11a is denoted by "f", then the lateral velocity VX on the spatial coordinate system SS is expressed by the following equation (1).
VX = (ZVx + xVZ)/f …(1)
From equation (1), it can be seen that the accuracy of the lateral velocity VX is affected by the accuracy of the longitudinal velocity VZ. Note that in this specification the front-rear direction Z is used as the "longitudinal direction" on the spatial coordinate system SS, but the depth direction Dd shown in Fig. 2(b) may be used instead.

For this reason, it is difficult to accurately determine the movement velocity on the spatial coordinate system SS for feature points of distant objects, for which the measurement accuracy of the stereo camera decreases.
When the host vehicle 1 is stopped, the longitudinal velocity of the object can be approximated as zero, so that the lateral velocity VX of the object on the spatial coordinate system SS can be calculated by the following equation (2).
VX = ZVx/f …(2)
However, when the host vehicle 1 is traveling, the background and objects approach due to the vehicle behavior of the host vehicle, so this approximation becomes inappropriate for the actual observed values.

Therefore, the controller 18 measures the vehicle behavior of the host vehicle 1 based on the vehicle signals detected by the vehicle sensor 12, and approximates the longitudinal velocity based on the vehicle behavior.
For example, the controller 18 measures, as the vehicle behavior, the amount of movement and the amount of change in attitude (e.g., the amount of change in yaw angle) of the host vehicle 1 during one processing cycle of the controller 18. The controller 18 estimates the longitudinal velocity of the feature point by calculating a homogeneous transformation matrix T relating to the time change in the position and attitude of the camera 11a based on the measured vehicle behavior.

For example, if the coordinate vector xt = (Xt, Yt, Zt, 1) of a feature point at a certain time t is given, the estimated value x't-1 = (X't-1, Y't-1, Z't-1, 1) of the coordinate vector of the feature point at the time (t-1), one cycle before time t, is obtained as x't-1 = T·xt.
If the time width (t - (t-1)) of the processing cycle of the controller 18 is denoted by Δt, the estimated value VeZ of the longitudinal velocity of the feature point on the spatial coordinate system SS can be obtained by the following equation (3).
VeZ = (Zt - Z't-1)/Δt …(3)
Alternatively, the estimated value VeZ of the longitudinal velocity may be calculated by estimating the coordinate vector of the feature point at time t based on the observed value xt-1 = (Xt-1, Yt-1, Zt-1, 1) of the coordinate vector of the feature point at time (t-1).
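The sketch below shows one way the ego-motion-based estimate of equation (3) could be computed: the camera's forward displacement and yaw change over one cycle (derived from the vehicle signals) are packed into a homogeneous transform T, the point is projected back one cycle as if it were stationary, and VeZ is the resulting change in Z divided by Δt. Planar motion and the exact sign conventions for the displacement and yaw inputs are assumptions of this example.

```python
import numpy as np

def ego_longitudinal_velocity(point_xyz, forward_disp, yaw_change, dt):
    """Sketch of equation (3): the apparent longitudinal velocity VeZ that a
    longitudinally stationary feature point would show due only to the host
    vehicle's own motion during one processing cycle.

    point_xyz    -- (X, Y, Z) of the feature point in the camera frame at time t [m]
    forward_disp -- forward displacement of the camera during dt [m] (from odometry)
    yaw_change   -- yaw-angle change of the camera during dt [rad]
    dt           -- processing-cycle length [s]
    """
    x_t = np.array([*point_xyz, 1.0])

    # Homogeneous transform T mapping camera coordinates at time t to the camera
    # pose at time t-1 (planar motion: yaw about the Y axis, translation along Z).
    c, s = np.cos(yaw_change), np.sin(yaw_change)
    T = np.array([[c,   0.0, s,   0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [-s,  0.0, c,   forward_disp],
                  [0.0, 0.0, 0.0, 1.0]])

    x_prev = T @ x_t                  # x'_{t-1} = T x_t
    return (x_t[2] - x_prev[2]) / dt  # equation (3)
```

For a point straight ahead and purely forward motion, x_prev[2] is roughly Z + forward_disp, so VeZ comes out negative: stationary scenery appears to approach the camera, which is the relative velocity equation (3) is meant to capture.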

The longitudinal velocity VeZ estimated in this way represents the relative velocity resulting from the vehicle behavior of the host vehicle 1 when the feature point of the object is stationary in the longitudinal direction.
Based on the estimated longitudinal velocity VeZ, the controller 18 calculates the lateral velocity V2X of the feature point on the spatial coordinate system SS, which is the velocity component of the feature point in the lateral direction perpendicular to the optical axis direction of the camera 11a, using the following equation (4).
V2X = (ZVx + xVeZ)/f …(4)
This makes it possible to calculate a lateral velocity V2X that does not depend on the measurement accuracy of the longitudinal velocity by the stereo camera. The lateral velocity V2X is an example of the "second feature point velocity" described in the claims.
The controller 18 clusters the feature points extracted from the image captured by the camera 11a into feature point groups based on the second feature point velocity, and detects individual objects from the feature point groups.
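Given the image-plane lateral velocity Vx, the measured longitudinal position Z, and the ego-motion estimate VeZ, equation (4) is a one-line computation; a small helper is sketched below. The symbol names follow the text, and the focal length being expressed in pixels is an assumption of the example.

```python
def second_feature_point_velocity(Z, x, v_x, v_eZ, focal_px):
    """Equation (4): lateral velocity V2X on the spatial coordinate system,
    independent of the stereo longitudinal-velocity measurement.

    Z        -- longitudinal position of the feature point [m]
    x        -- lateral image coordinate of the feature point [px]
    v_x      -- lateral velocity on the image coordinate system [px/s]
    v_eZ     -- ego-motion-based longitudinal velocity estimate of eq. (3) [m/s]
    focal_px -- camera focal length [px]
    """
    return (Z * v_x + x * v_eZ) / focal_px
```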

Fig. 3 is a flowchart of an example of the object detection method of the first embodiment. In step S1, the camera 11a captures an image of the surroundings of the host vehicle 1. In step S2, the controller 18 detects the optical flow of feature points from the captured image of the surroundings of the host vehicle 1. In step S3, the controller 18 measures the relative distance from the host vehicle to the feature points. In step S4, the controller 18 measures the vehicle behavior of the host vehicle 1 based on the vehicle signal detected by the vehicle sensor 12.
In step S5, the controller 18 estimates a first feature point velocity based on a time change in the relative distance. In step S6, the controller 18 calculates a second feature point velocity (e.g., the lateral velocity V2X) based on the optical flow, the relative distance, and the vehicle behavior. In step S7, the controller 18 clusters the feature points into feature point groups based on the second feature point velocity, and detects an object from the feature point groups.

(Second Embodiment)
Fig. 4 is a block diagram showing an example of the functional configuration of the controller 18 according to the second embodiment. The controller 18 includes a spatial position calculation unit 20, an optical flow calculation unit 21, a first feature point velocity estimation unit 22, a second feature point velocity calculation unit 23, a moving object detection unit 24, and a vehicle control unit 25.
The spatial position calculation unit 20 measures the relative distance and azimuth angle from the host vehicle 1 to the feature point. For example, the spatial position calculation unit 20 may measure the relative distance and azimuth angle based on the parallax between a pair of captured images taken by the camera 11a, which is a stereo camera, or may measure the relative distance and azimuth angle based on a laser radar, a millimeter wave radar, a LIDAR, or the like. The spatial position calculation unit 20 calculates the position (X, Y, Z) of the feature point on the spatial coordinate system SS based on the relative distance and the azimuth angle.

The optical flow calculation unit 21 detects the optical flow of feature points from a captured image of the surroundings of the host vehicle 1.
The first feature point velocity estimation unit 22 estimates the lateral velocity VX and the longitudinal velocity VZ of the feature point on the spatial coordinate system SS as the first feature point velocity based on the time change of the position calculated by the spatial position calculation unit 20. For example, the first feature point velocity estimation unit 22 may estimate the longitudinal velocity VZ using a Kalman filter. In particular, when the relative distance to the feature point is greater than the predetermined threshold Th described later, the longitudinal velocity VZ may be estimated using the Kalman filter. That is, the longitudinal velocity VZ may be estimated based on the relative distance and the azimuth angle measured from multiple frames of the captured images of the stereo camera used as the camera 11a. In addition, the estimation can be stabilized by using the estimated value VeZ of the longitudinal velocity given by the above equation (3) as the initial state of the longitudinal velocity VZ in the Kalman filter.
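The filter itself is not specified in the text, so the following is only a generic sketch: a 1-D constant-velocity Kalman filter over the measured longitudinal position Z of one feature point, with the velocity state initialised from VeZ as suggested above. All noise values are placeholder assumptions.

```python
import numpy as np

class LongitudinalKalman:
    """Minimal constant-velocity Kalman filter; state = [Z, VZ] for one feature point."""

    def __init__(self, z0, v_eZ, dt, pos_var=0.5, vel_var=4.0, meas_var=1.0):
        self.x = np.array([z0, v_eZ])               # initial VZ taken from eq. (3)
        self.P = np.diag([pos_var, vel_var])        # initial covariance (assumed)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
        self.Q = np.diag([0.01, 0.1])               # process noise (assumed)
        self.H = np.array([[1.0, 0.0]])             # only Z is observed
        self.R = np.array([[meas_var]])             # measurement noise (assumed)

    def step(self, z_measured):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the new range measurement
        y = z_measured - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        # self.P is the error covariance the separation unit could use as an
        # accuracy measure; self.x[1] is the current estimate of VZ.
        return self.x[1]
```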

The second feature point velocity calculation unit 23 calculates the lateral velocity Vx of the feature point on the image coordinate system SI based on the optical flow calculated by the optical flow calculation unit 21. The second feature point velocity calculation unit 23 calculates the lateral velocity V2X given by the above equations (3) and (4) as the second feature point velocity based on the lateral velocity Vx, the longitudinal position Z of the feature point, and the vehicle behavior of the host vehicle 1.

The moving object detection unit 24 classifies the feature points extracted from the captured image into groups based on the lateral velocity VX and longitudinal velocity VZ estimated by the first feature point velocity estimation unit 22 and the lateral velocity V2X calculated by the second feature point velocity calculation unit 23, and detects the classified groups as individual moving objects. The moving object detection unit 24 includes a clustering unit 24a and a separation unit 24b.
The clustering unit 24a judges whether the relative distance from the host vehicle 1 to the feature point is equal to or smaller than a predetermined threshold Th. The predetermined threshold Th may be set, for example, to the threshold distance at which the speed detected by the stereo camera used as the camera 11a has a predetermined allowable error. For example, the distance at which the detected speed reaches the allowable error can be calculated by determining, from the parallax characteristics and observation period of the stereo camera, the speed at which the parallax changes by one gradation.
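One way to turn the "speed at which the parallax changes by one gradation" into a threshold distance is sketched below. With a pinhole stereo model Z = f·B/d, a one-step disparity change Δd over the observation period corresponds to a range change of roughly Z²·Δd/(f·B), so the distance at which this velocity quantum equals the allowable error can be solved for directly. The parameter values in the usage line are purely illustrative assumptions.

```python
import math

def threshold_distance(focal_px, baseline_m, disparity_step_px, obs_period_s,
                       allowed_speed_err):
    """Distance at which one disparity quantisation step over the observation
    period corresponds to the allowed longitudinal-speed error (pinhole model).

    For Z = f*B/d, a disparity change of disparity_step_px changes Z by about
    Z**2 * disparity_step_px / (f*B); dividing by obs_period_s gives the smallest
    resolvable speed at range Z. Solving that for Z at the allowed error:
    """
    return math.sqrt(allowed_speed_err * focal_px * baseline_m * obs_period_s
                     / disparity_step_px)

# Illustrative use (all numbers assumed): f = 2000 px, B = 0.35 m, 0.25 px
# disparity step, 0.5 s observation period, 0.5 m/s allowed speed error.
Th = threshold_distance(2000.0, 0.35, 0.25, 0.5, 0.5)   # about 26 m
```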

When the relative distance is equal to or smaller than the threshold Th, the clustering unit 24a performs labeling and clustering of the feature points extracted from the captured image based on the lateral velocity VX estimated by the first feature point velocity estimation unit 22.
First, the clustering unit 24a determines, based on the lateral velocity VX, whether a feature point is moving to the right or to the left, or is stationary in the lateral direction, and performs labeling that classifies the feature points into feature points moving to the right, feature points moving to the left, and stationary feature points. For example, the sign of the lateral velocity VX may be defined as positive when the x coordinate of the feature point is moving to the right in the image coordinate system SI, and as negative when the x coordinate of the feature point is moving to the left in the image coordinate system SI.
The clustering unit 24a then forms a feature point group by clustering the feature points labeled as moving to the right based on the lateral velocity VX. The feature points labeled as moving to the left and the feature points labeled as stationary are clustered in the same way.
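A minimal sketch of this labeling-and-clustering step is shown below. The text only specifies labeling by direction and clustering within each label based on the lateral velocity, so the dead-band and gap thresholds (and the purely velocity-based grouping, which ignores point positions) are assumptions of the example rather than the patented procedure.

```python
import numpy as np

def cluster_by_lateral_velocity(v_lat, still_thresh=0.1, vel_gap=0.5):
    """Label feature points as moving right (+1), left (-1) or laterally
    stationary (0), then group same-label points with similar lateral velocity.

    v_lat -- (N,) lateral velocities (VX, V2X or VwX depending on the branch) [m/s]
    Returns a list of index arrays, one per feature point group.
    """
    v_lat = np.asarray(v_lat, dtype=float)
    labels = np.where(v_lat > still_thresh, 1,
                      np.where(v_lat < -still_thresh, -1, 0))

    groups = []
    for lab in (-1, 0, 1):
        idx = np.flatnonzero(labels == lab)
        if idx.size == 0:
            continue
        # Within one label, split wherever consecutive (sorted) velocities differ
        # by more than vel_gap; a real implementation would also use the spatial
        # positions of the points.
        order = idx[np.argsort(v_lat[idx])]
        breaks = np.flatnonzero(np.diff(v_lat[order]) > vel_gap) + 1
        groups.extend(np.split(order, breaks))
    return groups
```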

On the other hand, when the relative distance is not equal to or smaller than the threshold Th, the clustering unit 24a performs labeling and clustering of the feature points extracted from the captured image based on the lateral velocity V2X calculated by the second feature point velocity calculation unit 23. The labeling and clustering processes are the same as those described above, except that the lateral velocity V2X is used instead of the lateral velocity VX.
The separation unit 24b separates the feature point groups generated by the clustering unit 24a based on the longitudinal velocity VZ estimated by the first feature point velocity estimation unit 22, and detects the separated feature point groups as individual moving objects. Note that the separation unit 24b may limit the separation processing to feature point groups for which the estimation accuracy of the longitudinal velocity VZ is high. For example, when the first feature point velocity estimation unit 22 estimates the longitudinal velocity VZ using a Kalman filter, the estimation accuracy of the longitudinal velocity VZ may be calculated based on the error covariance matrix.
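The separation step can be sketched in the same style: within one feature point group, split wherever the longitudinal velocities VZ of neighbouring points (after sorting) differ by more than a gap. The gap value is an assumed tuning parameter, not one given in the text.

```python
import numpy as np

def separate_group_by_longitudinal_velocity(group_idx, v_z, vz_gap=1.0):
    """Split one feature point group into sub-groups whose longitudinal
    velocities VZ differ; v_z is the full (N,) array of VZ estimates."""
    group_idx = np.asarray(group_idx)
    order = group_idx[np.argsort(v_z[group_idx])]
    breaks = np.flatnonzero(np.diff(v_z[order]) > vz_gap) + 1
    return np.split(order, breaks)
```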

The vehicle control unit 25 executes vehicle control to control at least one of the steering mechanism, drive device, or braking device of the host vehicle 1 based on the moving object detected by the moving object detection unit 24. For example, the vehicle control unit 25 may control at least one of the steering mechanism, drive device, or braking device of the host vehicle 1 so as to avoid the detected moving object.

Fig. 5 is a flowchart of an example of the object detection method of the second embodiment. The processing of steps S10 to S14 is similar to that of steps S1 to S5 in Fig. 3. In step S15, the moving object detection unit 24 determines whether the relative distance from the host vehicle 1 to the feature point is equal to or less than the predetermined threshold Th. If the relative distance is equal to or less than the predetermined threshold Th (step S15: Y), the processing proceeds to step S19. If the relative distance is not equal to or less than the predetermined threshold Th (step S15: N), the processing proceeds to step S16. The processing of step S16 is similar to that of step S6 in Fig. 3.

In step S17, the clustering unit 24a labels the feature points according to whether they are moving to the right or left, or are stationary in the lateral direction, based on the lateral speed V2X .
In step S18, the clustering unit 24a forms a feature point group by clustering feature points labeled as moving to the right based on the lateral velocity V2X . Feature points labeled as moving to the left and feature points labeled as stationary are similarly clustered. The process then proceeds to step S21.
The processes in steps S19 and S20 are similar to those in steps S17 and S18, except that the lateral velocity V2X is replaced with the lateral velocity VX .
The process of step S21 is similar to step S8 in FIG. 3.
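Steps S15 to S20 can be summarised in Python as below. This is only a sketch, not the claimed method itself: DBSCAN is used as a stand-in because the description does not name a specific clustering rule, and the stationary threshold, the velocity scaling, and the DBSCAN parameters are assumptions introduced here.

    # Sketch of steps S15-S20: choose the lateral velocity by the distance threshold Th,
    # label each point by its lateral motion, then cluster points that share a label.
    import numpy as np
    from sklearn.cluster import DBSCAN  # stand-in for the unspecified clustering rule

    def cluster_feature_points(points, th, stationary_eps=0.2):
        # points: dicts with "x", "z" (spatial position), "v_x" (first feature point
        # lateral velocity) and "v_2x" (second feature point lateral velocity).
        labelled = {"right": [], "left": [], "stationary": []}
        for p in points:
            v_lat = p["v_x"] if p["z"] <= th else p["v_2x"]  # step S15 branch
            if v_lat > stationary_eps:
                labelled["right"].append((p, v_lat))
            elif v_lat < -stationary_eps:
                labelled["left"].append((p, v_lat))
            else:
                labelled["stationary"].append((p, v_lat))

        groups = []
        for members in labelled.values():
            if len(members) < 3:
                continue
            # Steps S18/S20: cluster same-label points on position and lateral velocity.
            features = np.array([[p["x"], p["z"], 5.0 * v] for p, v in members])
            ids = DBSCAN(eps=2.0, min_samples=3).fit_predict(features)
            for cid in set(ids) - {-1}:
                groups.append([members[i][0] for i, c in enumerate(ids) if c == cid])
        return groups  # passed on to the separation step (step S21)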

(Modification)
The clustering unit 24a in the modified example of the second embodiment clusters feature points based on a weighted velocity VwX, which is a weighted sum of the lateral velocity VX estimated as the first feature point velocity and the lateral velocity V2X calculated as the second feature point velocity, weighted by a weighting coefficient a according to the relative distance to the feature point.
For example, the lateral velocity VX is expressed by the above formula (1), and the lateral velocity V2X is expressed by the above formula (4). Therefore, for example, the weighted velocity VwX can be obtained by weighting the above formulas (1) and (4) with a weighting coefficient a, as in the following formula (5).
VwX = (ZVx + x(aVeZ + (1-a)VZ))/f …(5)
The weighting coefficient a is set to be larger when the relative distance is large than when the relative distance is small. For example, the weighting coefficient a may be set to be larger as the relative distance increases.
The clustering unit 24a calculates a weighted velocity VwX by the following equation (5) based on the longitudinal velocity VZ estimated by the first feature point velocity estimating unit 22, VeZ calculated by the above equation (3), and the weighting coefficient a.
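A numerical sketch of the weighted velocity of equation (5) follows. The linear ramp that maps the relative distance to the weight a between an assumed near distance and far distance is only one possible choice, since the description above merely requires a to become larger as the relative distance grows; the reading of Vx as the image-plane lateral velocity and x as the lateral image coordinate is inferred from equations (8) and (9) in the third embodiment below.

    # Sketch of equation (5); d_near and d_far are assumed values.
    def weight_a(z, d_near=10.0, d_far=50.0):
        # Larger relative distance -> larger weight a (more trust placed in V_eZ).
        if z <= d_near:
            return 0.0
        if z >= d_far:
            return 1.0
        return (z - d_near) / (d_far - d_near)

    def weighted_lateral_velocity(v_img_x, x_img, z, v_ez, v_z, f):
        # VwX = (Z*Vx + x*(a*VeZ + (1 - a)*VZ)) / f   ... equation (5)
        # v_img_x: image-plane lateral velocity of the feature point (optical flow)
        # x_img:   lateral image coordinate of the feature point
        # z:       relative distance (longitudinal position Z)
        # v_ez:    longitudinal velocity obtained from equation (3)
        # v_z:     longitudinal velocity estimated by the first feature point velocity estimation unit 22
        # f:       focal length of the camera
        a = weight_a(z)
        return (z * v_img_x + x_img * (a * v_ez + (1.0 - a) * v_z)) / f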

Fig. 6 is a flowchart of an example of the object detection method according to the modified example. The processes in steps S30 to S35 are similar to those in steps S1 to S6 in Fig. 3. In step S36, the clustering unit 24a calculates a weighted velocity VwX .
In step S37, the clustering unit 24a labels the feature points according to whether they are moving to the right or left, or are stationary in the horizontal direction, based on the weighted speed Vw X. In step S38, the clustering unit 24a forms a feature point group by clustering feature points labeled as moving to the right, based on the weighted speed Vw X. Similarly, the clustering unit 24a clusters feature points labeled as moving to the left, and feature points labeled as stationary. The process of step S39 is the same as step S8 in FIG. 3.

Third Embodiment
When a moving object detected by the moving object detection unit 24 crosses the path of the host vehicle 1, the controller 18 of the third embodiment estimates a crossing time T cross from the current time until the moving object crosses the path of the host vehicle 1.
As described above, when a stereo camera is used to calculate the speed of a distant moving object in the spatial coordinate system SS, the measurement accuracy becomes unstable. On the other hand, when a Kalman filter is used to observe coordinates in the spatial coordinate system SS and calculate the speed, the delay of the Kalman filter becomes a problem when detecting a road user (vehicle or pedestrian) jumping out.
Therefore, the controller 18 of the third embodiment obtains the speed at which the moving object moves in a direction that intersects with the path of the vehicle 1 (hereinafter referred to as "intersection speed V cross ") from the lateral speed V 2X calculated as described above, and estimates the intersection time T cross .

Fig. 7 is a block diagram showing an example of the functional configuration of the controller 18 according to the third embodiment. The controller 18 according to the third embodiment includes an intersection region extraction unit 26, an intersection determination unit 27, and an intersection time estimation unit 28 in addition to the configuration shown in Fig. 4.
The intersection area extraction unit 26 extracts an intersection area R cross , which is a candidate area where an object expected to intersect with the path of the vehicle 1 may exist, from the area around the vehicle 1. FIG. 8(a) is a schematic diagram of an example of the intersection area R cross . For example, when the path T r of the vehicle 1 intersects with the crosswalk CW, the area occupied by the crosswalk CW may be extracted as the intersection area R cross . For example, when the path T r of the vehicle 1 intersects with an oncoming lane, the area occupied by the oncoming lane may be extracted as the intersection area R cross . For example, the area occupied by the intersecting lane that intersects with the lane in which the vehicle 1 is traveling may be extracted as the intersection area R cross . The vehicle 1 may extract the intersection area R cross from the map database 14, for example, or may extract the intersection area R cross from the image captured by the camera 11a.
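As a rough illustration of extracting the intersection area Rcross from map geometry, the following sketch keeps the map regions whose polygon intersects the planned path; the record layout of a map region and the use of the shapely library are assumptions made only for this example.

    # Sketch: keep crosswalks / oncoming lanes / crossing lanes whose polygon
    # intersects the planned path T_r. The map record layout is assumed.
    from shapely.geometry import LineString, Polygon

    def extract_intersection_areas(path_points, map_regions):
        # path_points: [(x, z), ...] polyline of the planned path T_r
        # map_regions: [{"kind": "crosswalk", "polygon": [(x, z), ...],
        #                "crossing_direction": theta}, ...]
        path = LineString(path_points)
        return [r for r in map_regions if Polygon(r["polygon"]).intersects(path)]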

The intersection area extraction unit 26 obtains the expected direction in which road users present in the intersection area R cross will move as the intersection direction D cross . For example, the intersection area extraction unit 26 may obtain information on the intersection direction D cross from high-precision map data suitable as map information for automated driving.
The intersection determination unit 27 detects, among the moving objects detected by the moving object detection unit 24, an intersection object that may intersect with the path T r of the host vehicle 1. For example, the intersection determination unit 27 determines that the moving object MO detected by the moving object detection unit 24 is an intersection object when the moving object MO is located within the intersection area R cross , or the distance between the moving object MO and the intersection area R cross is equal to or smaller than a threshold, and the difference between the moving direction Dm of the moving object MO and the intersection direction D cross is equal to or smaller than a threshold.
The intersection determination unit 27 calculates an intersection point P cross between a line segment extending from the current position of the moving object MO in the intersection direction D cross and the course T r of the host vehicle 1 .
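The determination made by the intersection determination unit 27 and the computation of the intersection point Pcross can be sketched as follows; the distance and angle thresholds are assumptions, and the geometry helpers again rely on shapely purely for illustration.

    # Sketch of the intersection determination and of P_cross.
    import math
    from shapely.geometry import LineString, Point, Polygon

    def is_intersecting_object(mo_pos, mo_dir, region_polygon, d_cross,
                               dist_th=2.0, angle_th=math.radians(30)):
        # Inside R_cross (distance 0) or within dist_th of it, moving roughly along D_cross.
        near_region = Polygon(region_polygon).distance(Point(mo_pos)) <= dist_th
        angle_diff = abs((mo_dir - d_cross + math.pi) % (2.0 * math.pi) - math.pi)
        return near_region and angle_diff <= angle_th

    def intersection_point(mo_pos, d_cross, path_points, max_range=100.0):
        # P_cross: where the segment from the object along D_cross meets the path T_r.
        # Angle convention as in equations (6) and (7): X component = sin, Z component = cos.
        end = (mo_pos[0] + max_range * math.sin(d_cross),
               mo_pos[1] + max_range * math.cos(d_cross))
        hit = LineString([mo_pos, end]).intersection(LineString(path_points))
        if hit.is_empty or hit.geom_type != "Point":
            return None  # no crossing, or several crossings (ignored in this sketch)
        return (hit.x, hit.y)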

The crossing time estimation unit 28 converts the lateral velocity V2X calculated as described above into a crossing speed Vcross, which is the speed in the crossing direction Dcross. FIG. 8(b) is a schematic diagram for explaining a method of calculating the crossing speed Vcross.
If the angle between the longitudinal direction (Z direction) of the spatial coordinate system SS and the crossing direction Dcross is θ, the lateral (X direction) component VX and the longitudinal (Z direction) component VZ of the crossing speed Vcross are expressed by the following equations (6) and (7).
VX = sinθ × Vcross …(6)
VZ = cosθ × Vcross …(7)

On the other hand, the lateral velocity Vx on the image coordinate system SI is expressed by the following equation (8).
Vx = (f/Z) × VX - (fX/Z²) × VZ …(8)
Furthermore, if the vehicle speed of the host vehicle 1 is approximated to be 0, the lateral speed V2X is given by the following equation (9).
V2X = ZVx/f …(9)
The following equation (10) is derived from the above equations (8) and (9).
V2X = VX - (X/Z) × VZ …(10)

By substituting the above equations (6) and (7) into the above equation (10), the conversion equation from the lateral velocity V 2X to the crossing velocity V cross is derived by the following equation (11).
Vcross = V2X/(sinθ - (X/Z)cosθ) …(11)
The intersection time estimation unit 28 calculates the intersection velocity V cross based on the horizontal velocity V 2X calculated by the second feature point velocity calculation unit 23, the vertical position Z and horizontal position X of the object calculated by the spatial position calculation unit 20, and the above equation (11).
The intersection time estimation unit 28 estimates the intersection time T cross by dividing the distance between the current position of the moving object MO and the intersection point P cross by the intersection speed V cross .
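Equation (11) and the crossing-time estimate translate directly into the following sketch; the guard against a near-zero denominator is an assumption added for numerical safety and is not part of the description above.

    # Sketch of equation (11) and of T_cross.
    import math

    def crossing_speed(v_2x, x, z, theta):
        # V_cross = V_2X / (sin(theta) - (X / Z) * cos(theta))   ... equation (11)
        # theta: angle between the longitudinal (Z) axis and the crossing direction D_cross
        denom = math.sin(theta) - (x / z) * math.cos(theta)
        if abs(denom) < 1e-6:
            return None  # crossing direction nearly collinear with the line of sight
        return v_2x / denom

    def crossing_time(mo_pos, p_cross, v_cross):
        # T_cross = distance from the object to P_cross divided by V_cross.
        if v_cross is None or v_cross <= 0.0:
            return None
        return math.hypot(p_cross[0] - mo_pos[0], p_cross[1] - mo_pos[1]) / v_cross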
The vehicle control unit 25 may control at least one of the steering angle, driving force, or braking force of the host vehicle 1 based on the crossing time Tcross . For example, when the host vehicle 1 is stopped and the difference between the time when the host vehicle 1 reaches the crossing point Pcross after starting and the crossing time Tcross is equal to or greater than a threshold, the host vehicle 1 may be permitted to start, and when the difference is less than the threshold, the host vehicle 1 may be prohibited from starting. Also, for example, when the host vehicle 1 is traveling and the difference between the time when the host vehicle 1 reaches the crossing point Pcross and the crossing time Tcross is less than a threshold, the host vehicle 1 may generate a braking force or perform automatic steering control to avoid the moving object MO.
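One possible shape of the resulting control decision is sketched below; the margin value and the host vehicle's own arrival time at Pcross are assumptions that would come from the motion planner, not from the text above.

    # Sketch of the start-permission / avoidance decision based on T_cross.
    def decide_action(host_is_stopped, t_host_to_p_cross, t_cross, margin=2.0):
        if t_cross is None:
            return "proceed"  # no crossing predicted
        gap = abs(t_host_to_p_cross - t_cross)
        if host_is_stopped:
            # Allow the start only when the two arrival times are far enough apart.
            return "allow_start" if gap >= margin else "hold"
        # While driving: brake or steer away when the arrival times are too close.
        return "continue" if gap >= margin else "brake_or_avoid"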

FIG. 9 is a flowchart of an example of a method for calculating the intersection time Tcross. In step S40, the intersection area extraction unit 26 extracts the intersection area Rcross. In step S41, the intersection determination unit 27 detects an intersection object. In step S42, the intersection determination unit 27 calculates the intersection point Pcross. In step S43, the intersection time estimation unit 28 calculates the intersection speed Vcross, and then calculates the intersection time Tcross. In step S44, the vehicle control unit 25 controls at least one of the steering angle, driving force, or braking force of the host vehicle 1 based on the intersection time Tcross.

(Effects of the embodiment)
(1) The controller 18 detects the optical flow of feature points from captured images obtained by photographing the surroundings of the host vehicle 1 with the camera 11a, measures the relative distance from the host vehicle 1 to the feature points, measures the vehicle behavior of the host vehicle 1, estimates a first feature point velocity, which is the speed of the feature point in the spatial coordinate system, based on the relative distance, calculates a second feature point velocity, which is the lateral velocity component of the feature point that is perpendicular to the optical axis direction of the camera 11a, based on the optical flow, the relative distance, and the vehicle behavior, clusters the feature points into feature point groups based on the second feature point velocity, and detects objects from the feature point groups.
This allows the second feature point velocity, which does not depend on the measurement accuracy of the longitudinal velocity obtained by the stereo camera, to be calculated from the captured images, and the feature points can be clustered based on the second feature point velocity, thereby improving the object detection accuracy. By separating the feature point groups generated by clustering based on the first feature point velocity calculated from the relative distance measurement results, objects that have different velocities in the optical axis direction of the camera can be separated. As a result, the object detection accuracy can be further improved.

(2) If the relative distance is greater than a predetermined threshold, the controller 18 may cluster the feature points based on the second feature point velocity, and if the relative distance is equal to or less than the predetermined threshold, the controller 18 may cluster the feature points based on the first feature point velocity. The controller 18 may also cluster the feature points based on a weighted sum of the first feature point velocity and the second feature point velocity, weighted according to the relative distance. In this way, feature points at close range, where the distance measurement accuracy of the stereo camera is high, can be clustered preferentially using the first feature point velocity estimated from the stereo measurement results, while feature points at long range, where the distance measurement accuracy of the stereo camera is low, can be clustered preferentially using the second feature point velocity, which does not depend on the measurement accuracy of the longitudinal velocity obtained by the stereo camera.

(3) The controller 18 may measure the relative distance from images captured by the stereo camera 11a, and, when the relative distance is greater than a predetermined threshold, estimate the speed of the feature point in the forward/backward direction based on the relative distance measured from multiple frames of captured images. This improves the accuracy of estimating the speed of a distant feature point, for which the distance measurement accuracy of the stereo camera is low.
(4) The controller 18 may label the feature points according to whether the feature points are moving to the right or left or stationary in the horizontal direction based on the second feature point velocity, and cluster feature points with the same label. This makes it possible to prevent feature points moving to the right from being clustered in the same group as feature points moving to the left or stationary feature points. Similarly, it is possible to prevent feature points moving to the left from being clustered in the same group as feature points moving to the right or stationary feature points, and to prevent stationary feature points from being clustered in the same group as moving feature points.

(5) The controller 18 may detect an object from feature points located within a candidate area in which an object is expected to intersect with the path of the host vehicle 1. This makes it possible to efficiently detect an object that is likely to approach the host vehicle 1.
(6) The controller 18 may estimate an intersection direction, which is a moving direction in which the object detected by separating the feature point group intersects with the path of the host vehicle, convert the second feature point speed of the object into a speed in the intersection direction, and estimate an intersection time until the object intersects with the path of the host vehicle based on the speed of the object in the intersection direction. For example, the controller 18 may estimate the intersection time for a moving object that is located in a candidate area in which an object that intersects with the path of the host vehicle exists and has a moving direction close to the intersection direction. This makes it possible to quickly estimate the intersection time until a moving object that may intersect with the path of the host vehicle 1 intersects with the path of the host vehicle 1.

All examples and conditional terms described herein are intended for educational purposes to assist the reader in understanding the present invention and the concepts provided by the inventor for the advancement of technology, and should be construed without limitation to the above specifically described examples and conditions, and the configuration of the examples in this specification with respect to showing the advantages and disadvantages of the present invention. Although the embodiments of the present invention have been described in detail, it should be understood that various changes, substitutions and modifications can be made thereto without departing from the spirit and scope of the present invention.

1... host vehicle, 10... vehicle control device, 11... external sensor, 11a... camera, 12... vehicle sensor, 13... positioning device, 14... map database, 17... actuator, 18... controller, 18a... processor, 18b... storage device

Claims (9)

1. An object detection method comprising:
 detecting an optical flow of feature points from captured images obtained by photographing the surroundings of a host vehicle with a camera;
 measuring a relative distance from the host vehicle to the feature point;
 measuring a vehicle behavior of the host vehicle;
 estimating a first feature point velocity, which is a velocity of the feature point in a spatial coordinate system, based on the relative distance;
 calculating a second feature point velocity, which is a lateral velocity component of the feature point perpendicular to an optical axis direction of the camera, based on the optical flow, the relative distance, and the vehicle behavior;
 clustering the feature points into feature point groups based on the second feature point velocity; and
 detecting an object from the feature point groups.

2. The object detection method according to claim 1, wherein the feature points are clustered into the feature point groups based on the second feature point velocity when the relative distance is greater than a predetermined threshold, the feature points are clustered into the feature point groups based on the first feature point velocity when the relative distance is equal to or less than the predetermined threshold, and the feature point groups are separated based on the first feature point velocity and the separated feature point groups are detected as objects.

3. The object detection method according to claim 1, wherein the feature points are clustered based on a weighted sum of the first feature point velocity and the second feature point velocity weighted according to the relative distance.

4. The object detection method according to any one of claims 1 to 3, wherein the relative distance is measured from captured images captured by a stereo camera, and, when the relative distance is greater than a predetermined threshold, a velocity of the feature point in a forward/backward direction, which is a direction of an optical axis of the camera, is estimated based on the relative distance measured from multiple frames of captured images.

5. The object detection method according to any one of claims 1 to 4, wherein the feature points are labeled according to whether they are moving to the right or left or are stationary in the lateral direction based on the second feature point velocity, and feature points with the same label are clustered together.

6. The object detection method according to any one of claims 1 to 5, wherein an object is detected from the feature points located within a candidate area in which an object that is expected to intersect with the path of the host vehicle is present.

7. The object detection method according to any one of claims 1 to 6, comprising:
 estimating an intersection direction, which is a moving direction in which the object detected by separating the feature point group intersects with the path of the host vehicle;
 converting the second feature point velocity of the object into a velocity in the intersection direction; and
 estimating an intersection time until the object intersects with the path of the host vehicle based on the velocity of the object in the intersection direction.

8. The object detection method according to claim 7, wherein the intersection time is estimated for a moving object that is located within or near a candidate area in which an object intersecting with the path of the host vehicle exists, and that has a moving direction close to the intersection direction.

9. An object detection device comprising:
 a camera that captures images of the surroundings of a host vehicle;
 a sensor that measures a vehicle behavior of the host vehicle; and
 a controller that detects an optical flow of feature points from an image captured by the camera, measures a relative distance from the host vehicle to the feature points, estimates a first feature point velocity, which is a velocity of the feature point in a spatial coordinate system, based on the relative distance, calculates a second feature point velocity, which is a lateral velocity component of the feature point perpendicular to an optical axis direction of the camera, based on the optical flow, the relative distance, and the vehicle behavior, clusters the feature points into feature point groups based on the second feature point velocity, and detects an object from the feature point groups.
PCT/JP2023/027390 2023-07-26 2023-07-26 Object detection method and object detection device Pending WO2025022609A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2023/027390 WO2025022609A1 (en) 2023-07-26 2023-07-26 Object detection method and object detection device

Publications (1)

Publication Number Publication Date
WO2025022609A1 true WO2025022609A1 (en) 2025-01-30

Family

ID=94374540

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/027390 Pending WO2025022609A1 (en) 2023-07-26 2023-07-26 Object detection method and object detection device

Country Status (1)

Country Link
WO (1) WO2025022609A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014029604A (en) * 2012-07-31 2014-02-13 Denso It Laboratory Inc Moving object recognition system, moving object recognition program, and moving object recognition method
WO2018002985A1 (en) * 2016-06-27 2018-01-04 日産自動車株式会社 Object tracking method and object tracking device
JP2019003235A (en) * 2017-06-09 2019-01-10 トヨタ自動車株式会社 Target information acquisition device
JP2019045300A (en) * 2017-09-01 2019-03-22 株式会社デンソーテン Radar apparatus and signal processing method

Similar Documents

Publication Publication Date Title
US11312353B2 (en) Vehicular control system with vehicle trajectory tracking
US8355539B2 (en) Radar guided vision system for vehicle validation and vehicle motion characterization
JP4343536B2 (en) Car sensing device
JP3915746B2 (en) Vehicle external recognition device
Barth et al. Estimating the driving state of oncoming vehicles from a moving platform using stereo vision
JP2023507671A (en) Methods of Object Avoidance During Autonomous Navigation
US11326889B2 (en) Driver assistance system and control method for the same
US11436815B2 (en) Method for limiting object detection area in a mobile system equipped with a rotation sensor or a position sensor with an image sensor, and apparatus for performing the same
US12025752B2 (en) Systems and methods for detecting erroneous LIDAR data
JP2015005132A (en) Virtual lane generation apparatus and program
WO2008009966A2 (en) Determining the location of a vehicle on a map
JP2018048949A (en) Object identification device
US10949681B2 (en) Method and device for ascertaining an optical flow based on an image sequence recorded by a camera of a vehicle
JP7234840B2 (en) position estimator
US12394093B2 (en) Method and device for calibrating a camera mounted on a vehicle
WO2025022609A1 (en) Object detection method and object detection device
JP2023116424A (en) Method and device for determining position of pedestrian
WO2023139978A1 (en) Vehicle-mounted camera device, vehicle-mounted camera system, and image storage method
WO2022270183A1 (en) Computation device and speed calculation method
Michalke et al. Towards a closer fusion of active and passive safety: Optical flow-based detection of vehicle side collisions
JP2000315255A (en) Rear side monitoring device for vehicle and rear side monitoring alarm device for vehicle
CN114817765A (en) Map-based target course disambiguation
WO2025017924A1 (en) Object detection method and object detection device
US20230260294A1 (en) Apparatus, method, and computer program for estimating road edge
CN119749563B (en) Vehicle target detection method and device and vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23945328

Country of ref document: EP

Kind code of ref document: A1