
WO2025022706A1 - Object detection system, object detection device, and object detection method - Google Patents

Object detection system, object detection device, and object detection method Download PDF

Info

Publication number
WO2025022706A1
Authority
WO
WIPO (PCT)
Prior art keywords
object detection
area
point cloud
cloud data
accumulated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2024/009242
Other languages
French (fr)
Japanese (ja)
Inventor
拓磨 柄澤
敦 佐々
正紀 住吉
雅史 保谷
嘉孝 木谷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kokusai Denki Electric Inc
Original Assignee
Hitachi Kokusai Electric Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Kokusai Electric Inc filed Critical Hitachi Kokusai Electric Inc
Priority to JP2025535560A priority Critical patent/JPWO2025022706A1/ja
Publication of WO2025022706A1 publication Critical patent/WO2025022706A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/46Indirect determination of position data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging

Definitions

  • the present invention relates to an object detection system based on point cloud data acquired by scanning a monitoring area.
  • FIG. 1 shows an example of the configuration of an object detection system.
  • Figure 2 shows an image of how the object detection system works.
  • the object detection system in FIG. 1 includes a LiDAR 10 and an object detection server 20.
  • the LiDAR 10 acquires point cloud data representing the three-dimensional position and shape of an object 30 present in the monitoring range by scanning and irradiating a laser light onto the monitoring range and measuring the arrival time of the reflected light.
  • the point cloud data acquired by the LiDAR 10 is transmitted to the object detection server 20 via a network NW.
  • the object detection server 20 executes an object detection process to detect an object 30 present in the monitoring range based on the point cloud data received from the LiDAR 10.
  • the resolution of the laser light emitted from a LiDAR is determined by the LiDAR's irradiation pattern and by the individual unit.
  • the higher the resolution of the laser light, the higher the density of the point cloud data that can be acquired, making it possible to detect smaller or more distant objects.
  • if the resolution of the laser light is low, the density of the point cloud data that can be acquired will be low, making it difficult to detect small or distant objects. This is because conventional object detection systems detect objects using one frame of point cloud data acquired by LiDAR at a time.
  • Figure 3 shows an example of one frame of point cloud data obtained by LiDAR for an object.
  • the density of the point cloud data is low, so the object cannot be accurately identified.
  • it has been considered to accumulate and overlay multiple frames of point cloud data before processing (see, for example, Non-Patent Document 1).
  • Figure 4 shows an example of point cloud data from the first to fourth frames obtained for an object by LiDAR.
  • the point cloud data obtained by LiDAR is acquired with a slight shift in the coordinates of the point cloud data in successive frames, as shown in Figure 4, even when the object is stationary. Therefore, by accumulating and overlaying point cloud data from multiple frames, as shown in Figure 5, the density of the point cloud data can be artificially increased, allowing for more accurate object detection and discrimination.
  • the accumulation (overlay) of point cloud data described above is effective for stationary objects (hereafter referred to as "stationary objects"), but is less effective for moving objects, especially objects that move a lot (hereafter referred to as "moving objects").
  • This problem can be addressed by using high-performance (high-resolution) LiDAR or by increasing the number of LiDAR units installed, but this is hardware-dependent and expensive.
  • the present invention was made in consideration of the above-mentioned conventional circumstances, and aims to provide a mechanism for object detection based on point cloud data acquired by scanning a monitoring range, capable of effectively detecting both stationary and moving objects.
  • an object detection system is configured as follows. That is, in an object detection system including a point cloud data acquisition device that scans a monitoring range to acquire point cloud data, and an object detection device that executes object detection processing based on the point cloud data acquired by the point cloud data acquisition device, the object detection device accumulates point cloud data for each area within the monitoring range by the number of accumulation frames set for that area, in accordance with the setting of the number of accumulation frames for each area into which the monitoring range is divided, and executes the object detection processing based on the accumulated point cloud data.
  • the number of accumulated frames for each area can be set to a value according to the expected speed of the object to be detected in each area.
  • the number of accumulated frames for each area can be set to a different value between the time period when the presence of an object to be detected in each area is expected and other time periods.
  • the object detection device can be configured to re-execute the object detection process for an area in which an object has been detected by the object detection process, based on point cloud data that has accumulated a number of accumulated frames that is greater than the number set for that area.
  • An object detection device is configured as follows. That is, in an object detection device that performs object detection processing based on point cloud data acquired by scanning a monitoring range, the device accumulates point cloud data for each area within the monitoring range by the number of accumulated frames set for that area according to the setting of the number of accumulated frames for each area into which the monitoring range is divided, and performs the object detection processing based on the accumulated point cloud data.
  • An object detection method is configured as follows. That is, in the object detection method based on point cloud data acquired by scanning a monitoring range, an object detection device accumulates point cloud data for each area within the monitoring range by the number of accumulated frames set for that area in accordance with the setting of the number of accumulated frames for each area into which the monitoring range is divided, and performs object detection processing based on the accumulated point cloud data.
  • the present invention provides a mechanism for object detection based on point cloud data acquired by scanning a monitoring range, which can effectively detect both stationary and moving objects.
  • FIG. 1 is a diagram illustrating an example of the configuration of an object detection system.
  • FIG. 2 is a diagram illustrating an operation image of the object detection system illustrated in FIG. 1 .
  • FIG. 3 is a diagram showing an example of one frame of point cloud data obtained regarding an object.
  • FIG. 4 is a diagram showing an example of point cloud data of the first to fourth frames obtained regarding an object.
  • FIG. 5 is a diagram showing an example of point cloud data obtained by overlaying four frames.
  • FIG. 6 is a diagram showing an example of division of the monitoring range in the proposed method.
  • FIG. 7 is a diagram showing an example of data on the number of accumulated frames by area in the proposed method.
  • FIG. 8 is a diagram illustrating an example of a processing flow in the proposed method.
  • the general configuration of an object detection system according to one embodiment of the present invention is the same as the object detection system shown in FIG. 1. That is, the object detection system of this example has a LiDAR 10, which is an example of a point cloud data acquisition device according to the present invention, and an object detection server 20, which is an example of an object detection device according to the present invention.
  • LiDAR 10 acquires point cloud data representing the three-dimensional position and shape of an object 30 present within the monitoring range by scanning and irradiating the monitoring range with laser light and measuring the arrival time of the reflected light.
  • the point cloud data acquired by LiDAR 10 is transmitted to object detection server 20 via a network NW.
  • object detection server 20 executes object detection processing to detect an object 30 present within the monitoring range based on the point cloud data received from LiDAR 10.
  • the point cloud data is processed using background subtraction to determine the size, shape, etc. of the detected object, and the detected object is identified based on the results.
  • the detected object may be identified by matching with previously registered object shapes or by an AI model that has learned the object shape.
  • instead of the LiDAR 10, other point cloud data acquisition devices, such as 3D sensors capable of acquiring point cloud data representing the three-dimensional position and shape of objects present in the monitoring range, may be used.
  • the object detection server 20 may execute object detection processing based on the point cloud data acquired by each LiDAR 10.
  • the object detection server 20 may be realized, for example, by a computer equipped with hardware resources such as a processor and memory, and may be configured to read programs related to each function according to the present invention from the memory and execute them by the processor.
  • the object detection server 20 may be realized by one computer, or by multiple computers connected to each other so that they can communicate with each other.
  • the object detection server 20 divides the monitoring range into multiple areas and executes object detection processing with a different number of accumulated frames for each area. For example, as shown in FIG. 6, the monitoring range is divided into a sky area whose main purpose is the detection of stationary objects (flying objects caught on utility poles or electric wires) and a ground area whose main purpose is the detection of moving objects (passing people and vehicles).
  • FIG. 7 shows an example of area-specific accumulated frame number data used for the above control.
  • Such area-specific accumulated frame number data is stored in the internal memory of the object detection server 20 or in an external device accessible to the object detection server 20.
  • the area-specific accumulated frame number data may be set manually by a user of the object detection system, or may be set automatically by a device such as the object detection server 20 according to the distribution of stationary and moving objects analyzed from past point cloud data.
  • the object detection server 20 accumulates point cloud data for each area within the monitoring range by the number of accumulated frames set for that area according to the area-specific accumulated frame count data described above, and performs object detection processing based on the accumulated point cloud data.
  • the object detection server 20 will perform object detection processing for the sky area based on point cloud data accumulated and superimposed over four frames, and for the ground area based on one frame of point cloud data.
  • FIG. 8 shows an example of a processing flow in the proposed method.
  • the object detection server 20 initializes (sets to 0) the frame counter for each area and initializes (clears) the accumulated point cloud data for each area (step S11).
  • when the object detection server 20 receives one frame of point cloud data from the LiDAR 10 (step S12), it performs the following processing (steps S13 to S17) for each area.
  • first, the frame counter of area n (where 1 ≤ n ≤ number of areas) is incremented by 1 (step S13), and the corresponding portion of the received point cloud data is added to the accumulated point cloud data of area n (step S14).
  • next, it is determined whether the frame counter of area n has reached the set number of accumulated frames (step S15). If the frame counter of area n has reached the accumulated frame count (step S15; Yes), object detection processing is performed based on the accumulated point cloud data of area n (step S16), the frame counter and accumulated point cloud data of area n are reset (step S17), and processing of the next area is started.
  • if the frame counter of area n has not reached the accumulated frame count (step S15; No), the processing of steps S16 to S17 is skipped and processing of the next area is started. When processing of all areas is completed, the system waits for the next frame of point cloud data (step S12).
  • object detection processing in the sky area is performed every four frames, but by storing point cloud data for the number of accumulated frames as history, it is also possible to perform object detection processing for each frame (each time point cloud data is received from LiDAR) by overlaying the point cloud data from the past four frames.
  • the object detection system of this example includes LiDAR 10 that scans the monitoring range to acquire point cloud data, and object detection server 20 that executes object detection processing based on the point cloud data acquired by LiDAR 10.
  • Object detection server 20 accumulates point cloud data for each area within the monitoring range by the number of accumulation frames set for that area according to the setting of the number of accumulation frames for each area into which the monitoring range is divided, and executes object detection processing based on the accumulated point cloud data.
  • object detection processing can be performed based on point cloud data accumulated and overlaid over four frames
  • object detection processing can be performed based on point cloud data for one frame.
  • flying objects caught on utility poles or power lines can be detected with high accuracy
  • in the ground area, people and vehicles passing by can be detected with high accuracy.
  • because this can be achieved through software rather than hardware, more advanced detection is possible while keeping down the cost of the entire system.
  • the number of accumulated frames in the sky area, where the main purpose is to detect stationary objects, is set to 4, and the number of accumulated frames in the ground area, where the main purpose is to detect moving objects, is set to 1, but this is merely an example.
  • the number of accumulated frames for each area may be set to a value according to the expected speed of the object to be detected in each area.
  • the number of accumulated frames in an area where the main purpose is to detect slow-moving objects may be set to 2
  • the number of accumulated frames in an area where the main purpose is to detect fast-moving objects may be set to 1.
  • the number of accumulated frames for each area may be set to different values for time periods when an object to be detected is expected to be present in each area and for other time periods.
  • the number of accumulated frames may be set to a low value during time periods when pedestrians are expected to pass by, and to a high value during time periods when pedestrians are not expected to pass by.
  • the number of accumulated frames for each area may be set to a value according to the LiDAR sampling period.
  • the number of accumulated frames may be increased when the LiDAR sampling period is short, and decreased when the sampling period is long.
  • the number of accumulated frames for each area may be set to a value according to the urgency of detection in each area (degree of need for immediate detection).
  • the number of accumulated frames may be increased in areas where the urgency of detection is low, and decreased in areas where the urgency of detection is high.
  • the monitoring range is divided into two areas, but it may be divided into three or more areas.
  • it may be divided into an overhead area, an area where passengers walk (platform area), and an area where trains run (railroad track area), and a different number of accumulated frames may be set for each area.
  • the object detection server 20 may re-execute the object detection process for an area in which an object has been detected by the object detection process, based on point cloud data that has accumulated a number of frames greater than the number set for that area. For example, if an object is detected by executing the object detection process for a ground area based on one frame of point cloud data, the object detection process is re-executed for the ground area based on four frames of point cloud data. This makes it possible to further improve the accuracy of object detection.
  • the present invention can be provided not only as the devices described above or as systems composed of these devices, but also as methods executed by these devices, programs for implementing the functions of these devices using a processor, and storage media for storing such programs in a computer-readable format.
  • the present invention can be used in an object detection system based on point cloud data acquired by scanning a monitoring area.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The present invention pertains to object detection based on point cloud data acquired by scanning a monitoring range, and provides a mechanism whereby both a stationary object and a moving object can be suitably detected. An object detection system according to one embodiment of the present invention comprises: a LiDAR 10 that acquires point cloud data by scanning a monitoring range; and an object detection server 20 that executes an object detection process on the basis of the point cloud data acquired by the LiDAR 10. In accordance with the setting of a number of accumulated frames for each area into which the monitoring range is divided, the object detection server 20 accumulates the point cloud data of each area in the monitoring range by the number of accumulated frames set for that area and executes the object detection process on the basis of the accumulated point cloud data.

Description

OBJECT DETECTION SYSTEM, OBJECT DETECTION DEVICE, AND OBJECT DETECTION METHOD

The present invention relates to an object detection system based on point cloud data acquired by scanning a monitoring range.

Conventionally, object detection systems have been developed that use LiDAR (Light Detection and Ranging) to detect objects present within a monitoring range. Figure 1 shows an example of the configuration of an object detection system, and Figure 2 shows an image of how the object detection system operates.

The object detection system in FIG. 1 includes a LiDAR 10 and an object detection server 20. As shown in FIG. 2, the LiDAR 10 acquires point cloud data representing the three-dimensional position and shape of an object 30 present in the monitoring range by scanning and irradiating a laser light onto the monitoring range and measuring the arrival time of the reflected light. The point cloud data acquired by the LiDAR 10 is transmitted to the object detection server 20 via a network NW. The object detection server 20 executes an object detection process to detect an object 30 present in the monitoring range based on the point cloud data received from the LiDAR 10.

Non-Patent Document 1: Ken Nishida et al., "Verification of LIDAR Specifications for Autonomous Driving," Proceedings of the 32nd Fuzzy System Symposium, FSS 2016 (Saga University), pp. 147-150.

The resolution of the laser light emitted from a LiDAR is determined by the LiDAR's irradiation pattern and by the individual unit. The higher the resolution of the laser light, the higher the density of the point cloud data that can be acquired, making it possible to detect smaller or more distant objects. On the other hand, if the resolution of the laser light is low, the density of the point cloud data that can be acquired will be low, making it difficult to detect small or distant objects. This is because conventional object detection systems detect objects using one frame of point cloud data acquired by LiDAR at a time.

Figure 3 shows an example of one frame of point cloud data obtained by LiDAR for an object. In the example of Figure 3, the density of the point cloud data is low, so the object cannot be accurately identified. As a countermeasure to this problem, rather than processing the point cloud data obtained by LiDAR on a frame-by-frame basis, it has been considered to accumulate and overlay multiple frames of point cloud data before processing (see, for example, Non-Patent Document 1).

Object detection through accumulation (overlaying) of point cloud data will be explained with reference to Figures 4 and 5. Figure 4 shows an example of the point cloud data of the first to fourth frames obtained for an object by LiDAR. Even when the object is stationary, the point cloud data obtained by LiDAR is acquired with a slight shift in coordinates between successive frames, as shown in Figure 4. Therefore, by accumulating and overlaying point cloud data from multiple frames, the density of the point cloud data can be artificially increased, as shown in Figure 5, allowing objects to be detected and discriminated more accurately.
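As a concrete illustration of this overlaying step, the following is a minimal Python sketch that concatenates several frames of XYZ points into one denser pseudo-frame. The array shapes, the use of NumPy, and the function name are assumptions for illustration only; the patent does not prescribe an implementation.

```python
import numpy as np

def overlay_frames(frames: list[np.ndarray]) -> np.ndarray:
    """Accumulate several (N, 3) XYZ frames into one denser pseudo-frame."""
    # Because successive frames of a stationary object land on slightly
    # shifted coordinates, simple concatenation raises the point density.
    return np.concatenate(frames, axis=0)

# Stand-ins for four LiDAR frames of 100 points each:
frame_history = [np.random.rand(100, 3) for _ in range(4)]
dense_cloud = overlay_frames(frame_history)  # 400 points instead of 100
```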

The accumulation (overlaying) of point cloud data described above is effective for stationary objects (hereafter "stationary objects"), but is less effective for moving objects, especially objects that move a lot (hereafter "moving objects"). This is because, when a moving object is observed with LiDAR, the distribution area of the point cloud data varies from frame to frame, so even if multiple frames are accumulated and overlaid, the result is only point cloud data scattered over a wide area. This problem can be addressed by using a high-performance (high-resolution) LiDAR or by increasing the number of LiDAR units installed, but such measures are hardware-dependent and expensive.

The present invention was made in consideration of the above-mentioned conventional circumstances, and aims to provide a mechanism for object detection based on point cloud data acquired by scanning a monitoring range that can suitably detect both stationary and moving objects.

To achieve the above object, an object detection system according to one aspect of the present invention is configured as follows. That is, in an object detection system including a point cloud data acquisition device that scans a monitoring range to acquire point cloud data, and an object detection device that executes object detection processing based on the point cloud data acquired by the point cloud data acquisition device, the object detection device accumulates the point cloud data of each area within the monitoring range by the number of accumulated frames set for that area, in accordance with the setting of the number of accumulated frames for each area into which the monitoring range is divided, and executes the object detection processing based on the accumulated point cloud data.

Here, in the above object detection system, the number of accumulated frames for each area can be set to a value corresponding to the expected speed of the object to be detected in each area.

In addition, in the above object detection system, the number of accumulated frames for each area can be set to different values for the time period during which an object to be detected is expected to be present in each area and for other time periods.

Furthermore, in the above object detection system, the object detection device can be configured to re-execute the object detection process for an area in which an object has been detected by the object detection process, based on point cloud data accumulated over a number of frames greater than the number set for that area.

An object detection device according to another aspect of the present invention is configured as follows. That is, in an object detection device that performs object detection processing based on point cloud data acquired by scanning a monitoring range, the device accumulates the point cloud data of each area within the monitoring range by the number of accumulated frames set for that area in accordance with the setting of the number of accumulated frames for each area into which the monitoring range is divided, and performs the object detection processing based on the accumulated point cloud data.

An object detection method according to yet another aspect of the present invention is configured as follows. That is, in an object detection method based on point cloud data acquired by scanning a monitoring range, an object detection device accumulates the point cloud data of each area within the monitoring range by the number of accumulated frames set for that area in accordance with the setting of the number of accumulated frames for each area into which the monitoring range is divided, and performs object detection processing based on the accumulated point cloud data.

According to the present invention, regarding object detection based on point cloud data acquired by scanning a monitoring range, it is possible to provide a mechanism that can suitably detect both stationary and moving objects.

FIG. 1 is a diagram illustrating an example of the configuration of an object detection system.
FIG. 2 is a diagram illustrating an operation image of the object detection system shown in FIG. 1.
FIG. 3 is a diagram showing an example of one frame of point cloud data obtained for an object.
FIG. 4 is a diagram showing an example of the point cloud data of the first to fourth frames obtained for an object.
FIG. 5 is a diagram showing an example of point cloud data obtained by overlaying four frames.
FIG. 6 is a diagram showing an example of division of the monitoring range in the proposed method.
FIG. 7 is a diagram showing an example of area-specific accumulated frame count data in the proposed method.
FIG. 8 is a diagram showing an example of a processing flow in the proposed method.

One embodiment of the present invention will be described with reference to the drawings. The general configuration of an object detection system according to one embodiment of the present invention is the same as that of the object detection system shown in FIG. 1. That is, the object detection system of this example has a LiDAR 10, which is an example of a point cloud data acquisition device according to the present invention, and an object detection server 20, which is an example of an object detection device according to the present invention.

The LiDAR 10 acquires point cloud data representing the three-dimensional position and shape of an object 30 present within the monitoring range by scanning and irradiating the monitoring range with laser light and measuring the arrival time of the reflected light. The point cloud data acquired by the LiDAR 10 is transmitted to the object detection server 20 via the network NW. Based on the point cloud data received from the LiDAR 10, the object detection server 20 executes object detection processing to detect an object 30 present within the monitoring range. In the object detection processing, for example, the point cloud data is processed using background subtraction to determine the size, shape, and other attributes of the detected object, and the detected object is identified based on the results. Alternatively, the detected object may be identified by matching against previously registered object shapes or by an AI model trained on object shapes.
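The patent does not fix a particular background-subtraction algorithm. The following is a hypothetical Python sketch of one common approach, in which the cloud is voxelized and points falling into voxels occupied by a background model are discarded; the voxel size and all names are illustrative assumptions, not part of the patent.

```python
import numpy as np

VOXEL = 0.2  # meters; an assumed grid size, not specified by the patent

def voxel_keys(points: np.ndarray) -> set[tuple]:
    """Map (N, 3) XYZ points to the set of occupied voxel indices."""
    return set(map(tuple, np.floor(points / VOXEL).astype(int)))

def subtract_background(cloud: np.ndarray, background: set) -> np.ndarray:
    """Keep only points whose voxel is not occupied in the background model."""
    keys = np.floor(cloud / VOXEL).astype(int)
    mask = np.array([tuple(k) not in background for k in keys])
    return cloud[mask]

# Background model built from frames of the empty scene (stand-in data):
background_model = voxel_keys(np.random.rand(1000, 3) * 10)
foreground = subtract_background(np.random.rand(50, 3) * 10, background_model)
```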

Instead of the LiDAR 10, another point cloud data acquisition device, such as a 3D sensor capable of acquiring point cloud data representing the three-dimensional position and shape of objects present in the monitoring range, may be used. Also, although only one LiDAR 10 is shown in FIG. 1, multiple LiDAR 10 units may be provided, with the object detection server 20 executing object detection processing based on the point cloud data acquired by each LiDAR 10. The object detection server 20 may be realized, for example, by a computer equipped with hardware resources such as a processor and memory, and may be configured to read programs relating to each function according to the present invention from the memory and execute them on the processor. The object detection server 20 may be realized by a single computer or by multiple computers communicably connected to one another.

In the proposed method, the object detection server 20 divides the monitoring range into multiple areas and executes object detection processing with a different number of accumulated frames for each area. For example, as shown in FIG. 6, the monitoring range is divided into a sky area whose main purpose is the detection of stationary objects (flying objects caught on utility poles or electric wires) and a ground area whose main purpose is the detection of moving objects (passing people and vehicles).

FIG. 7 shows an example of the area-specific accumulated frame count data used for the above control. In the example of FIG. 7, the number of accumulated frames for the "sky area" identified by area ID = "1" is set to "4", and the number of accumulated frames for the "ground area" identified by area ID = "2" is set to "1". Such area-specific accumulated frame count data is stored in the internal memory of the object detection server 20 or in an external device accessible to the object detection server 20. The area-specific accumulated frame count data may be set manually by a user of the object detection system, or may be set automatically by a device such as the object detection server 20 according to the distribution of stationary and moving objects analyzed from past point cloud data.
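As a minimal sketch, the FIG. 7 table could be held in memory as follows; the field names are assumptions, since the patent only specifies the logical content (area ID, area name, number of accumulated frames).

```python
# Illustrative in-memory form of the area-specific accumulated frame count
# data of FIG. 7. Keys are area IDs; field names are assumed for this sketch.
AREA_FRAME_COUNTS = {
    1: {"name": "sky area",    "accumulated_frames": 4},
    2: {"name": "ground area", "accumulated_frames": 1},
}
```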

The object detection server 20 accumulates the point cloud data of each area within the monitoring range by the number of accumulated frames set for that area according to the area-specific accumulated frame count data described above, and performs object detection processing based on the accumulated point cloud data. When the area-specific accumulated frame count data of FIG. 7 is used, the object detection server 20 performs object detection processing for the sky area based on point cloud data accumulated and overlaid over four frames, and for the ground area based on one frame of point cloud data.

FIG. 8 shows an example of the processing flow in the proposed method.
First, the object detection server 20 initializes the frame counter for each area (sets it to 0) and initializes (clears) the accumulated point cloud data for each area (step S11). Thereafter, each time the object detection server 20 receives one frame of point cloud data from the LiDAR 10 (step S12), it performs the following processing (steps S13 to S17) for each area.

First, the frame counter of area n (where 1 ≤ n ≤ number of areas) is incremented by 1 (step S13), and the corresponding portion of the received point cloud data is added to the accumulated point cloud data of area n (step S14). Next, it is determined whether the frame counter of area n has reached the set number of accumulated frames (step S15). If the frame counter of area n has reached the accumulated frame count (step S15; Yes), object detection processing is performed based on the accumulated point cloud data of area n (step S16), the frame counter and accumulated point cloud data of area n are reset (step S17), and processing moves on to the next area. On the other hand, if the frame counter of area n has not reached the accumulated frame count (step S15; No), the processing of steps S16 to S17 is skipped and processing moves on to the next area. When processing of all areas is complete, the system waits for the next frame of point cloud data (step S12).
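The following Python sketch traces the flow of FIG. 8 (steps S11 to S17) under stated assumptions: the frame is delivered pre-split by area, the per-area settings follow FIG. 7, and process_frame and detect_objects are illustrative placeholder names rather than interfaces defined by the patent.

```python
from collections import defaultdict
import numpy as np

frame_counter = defaultdict(int)   # step S11: per-area counters start at 0
accumulated = defaultdict(list)    # step S11: per-area accumulation cleared

def process_frame(frame_by_area: dict[int, np.ndarray]) -> None:
    """Handle one received frame (step S12), split per area."""
    for area_id, frame_count in ((1, 4), (2, 1)):         # settings as in FIG. 7
        frame_counter[area_id] += 1                        # step S13: increment
        accumulated[area_id].append(frame_by_area[area_id])  # step S14: add points
        if frame_counter[area_id] >= frame_count:          # step S15: reached count?
            cloud = np.concatenate(accumulated[area_id], axis=0)
            detect_objects(area_id, cloud)                 # step S16: run detection
            frame_counter[area_id] = 0                     # step S17: reset counter
            accumulated[area_id].clear()                   # step S17: reset points

def detect_objects(area_id: int, cloud: np.ndarray) -> None:
    """Placeholder for the detection routine (background subtraction etc.)."""
    print(f"area {area_id}: detecting on {len(cloud)} points")

# One received frame with stand-in data per area:
process_frame({1: np.random.rand(60, 3), 2: np.random.rand(200, 3)})
```

With these settings, detection runs on every frame for area 2 (ground) and on every fourth frame for area 1 (sky), matching the described behavior.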

In the processing flow of FIG. 8, object detection processing for the sky area is executed every four frames; however, by storing the point cloud data of the most recent frames (up to the set accumulated frame count) as a history, it is also possible to execute object detection processing every frame (each time point cloud data is received from the LiDAR) by overlaying the point cloud data of the past four frames.
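A minimal sketch of this per-frame variant for the sky area, using a fixed-length history so that the past four frames are overlaid each time a new frame arrives; the deque-based buffering is an implementation assumption.

```python
from collections import deque
import numpy as np

history = deque(maxlen=4)  # keeps only the last 4 sky-area frames

def on_sky_frame(sky_points: np.ndarray) -> np.ndarray:
    """Return the overlay of the past frames, updated with the new one."""
    history.append(sky_points)              # oldest frame drops out automatically
    return np.concatenate(history, axis=0)  # detection can run on this every frame
```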

As described above, the object detection system of this example includes the LiDAR 10, which scans the monitoring range to acquire point cloud data, and the object detection server 20, which executes object detection processing based on the point cloud data acquired by the LiDAR 10. The object detection server 20 accumulates the point cloud data of each area within the monitoring range by the number of accumulated frames set for that area according to the setting of the number of accumulated frames for each area into which the monitoring range is divided, and executes object detection processing based on the accumulated point cloud data.

As a result, for example, object detection processing can be executed for the sky area based on point cloud data accumulated and overlaid over four frames, and for the ground area based on one frame of point cloud data. Consequently, flying objects caught on utility poles or power lines can be detected with high accuracy in the sky area, while passing people and vehicles can be detected with high accuracy in the ground area. In this way, both stationary and moving objects can be suitably detected. Moreover, because this is achieved through software rather than hardware, more advanced detection becomes possible while keeping down the cost of the entire system.

In the above description, the number of accumulated frames for the sky area, whose main purpose is the detection of stationary objects, was set to 4, and the number of accumulated frames for the ground area, whose main purpose is the detection of moving objects, was set to 1, but this is merely an example. For example, the number of accumulated frames for each area may be set to a value corresponding to the expected speed of the object to be detected in each area. As one example, the number of accumulated frames for an area whose main purpose is the detection of slow-moving objects may be set to 2, and the number of accumulated frames for an area whose main purpose is the detection of fast-moving objects may be set to 1.

Alternatively, the number of accumulated frames for each area may be set to different values for the time period during which an object to be detected is expected to be present in each area and for other time periods. As one example, in each area, the number of accumulated frames may be reduced during time periods when pedestrians are expected to pass by and increased during time periods when they are not.
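One hypothetical way to realize such a time-of-day setting is sketched below; the patent prescribes no concrete hours or values, so both are assumptions.

```python
def frames_for_hour(hour: int) -> int:
    """Pick an accumulated frame count for the current hour (illustrative)."""
    pedestrians_expected = 7 <= hour < 22  # assumed busy period, not from the patent
    return 1 if pedestrians_expected else 4  # fewer frames while movement is expected
```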

Alternatively, the number of accumulated frames for each area may be set to a value corresponding to the sampling period of the LiDAR. As one example, the number of accumulated frames may be increased when the LiDAR sampling period is short and decreased when it is long.

Alternatively, the number of accumulated frames for each area may be set to a value corresponding to the urgency of detection in each area (the degree to which immediate detection is required). As one example, the number of accumulated frames may be increased for areas where the urgency of detection is low and decreased for areas where it is high.

In the above description, the monitoring range was divided into two areas, but it may be divided into three or more areas. For example, when monitoring station premises, the range may be divided into an overhead area, a passenger walking area (the platform), and a train running area (the tracks), with a different number of accumulated frames set for each area.

As a modification of the object detection system described above, the object detection server 20 may re-execute the object detection process for an area in which an object has been detected by the object detection process, based on point cloud data accumulated over a number of frames greater than the number set for that area. For example, if an object is detected by executing the object detection process for the ground area based on one frame of point cloud data, the object detection process is re-executed for the ground area based on four frames of point cloud data. This makes it possible to further improve the accuracy of object detection.
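A minimal sketch of this two-pass variant, assuming a detect() callback that returns True when an object is found; the function and parameter names are illustrative, not defined by the patent.

```python
import numpy as np

def detect_with_reexecution(area_id, latest_frame, recent_frames, detect) -> bool:
    """First pass on one frame; confirm on accumulated frames if it fires."""
    if detect(area_id, latest_frame):                  # first pass: 1 frame
        dense = np.concatenate(recent_frames, axis=0)  # e.g. the last 4 frames
        return detect(area_id, dense)                  # second pass: denser data
    return False
```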

The embodiments of the present invention have been described above, but these embodiments are merely illustrative and do not limit the technical scope of the present invention. The present invention can take various other forms, and various modifications such as omissions and substitutions can be made without departing from the gist of the present invention. These embodiments and their modifications are included in the scope and gist of the invention described in this specification and the like, and are included in the scope of the invention described in the claims and its equivalents.

Furthermore, the present invention can be provided not only as the devices described above or as systems composed of these devices, but also as methods executed by these devices, programs for realizing the functions of these devices on a processor, and storage media storing such programs in a computer-readable form.

The present invention can be used in object detection systems based on point cloud data acquired by scanning a monitoring range.

10: LiDAR, 20: Object detection server, 30: Object

Claims (6)

1. An object detection system comprising:
a point cloud data acquisition device that scans a monitoring range and acquires point cloud data; and
an object detection device that performs object detection processing based on the point cloud data acquired by the point cloud data acquisition device,
wherein the object detection device accumulates the point cloud data of each area within the monitoring range by the number of accumulated frames set for that area in accordance with a setting of the number of accumulated frames for each area into which the monitoring range is divided, and performs the object detection processing based on the accumulated point cloud data.
2. The object detection system according to claim 1, wherein the number of accumulated frames for each area is set to a value corresponding to the expected speed of an object to be detected in each area.
3. The object detection system according to claim 1, wherein the number of accumulated frames for each area is set to a different value between a time period during which an object to be detected in each area is expected to be present and other time periods.
4. The object detection system according to claim 1, wherein the object detection device re-executes the object detection process for an area in which an object has been detected by the object detection process, based on point cloud data accumulated over a number of frames greater than the number set for that area.
5. An object detection device that performs object detection processing based on point cloud data acquired by scanning a monitoring range, wherein the device accumulates the point cloud data of each area within the monitoring range by the number of accumulated frames set for that area in accordance with a setting of the number of accumulated frames for each area into which the monitoring range is divided, and performs the object detection processing based on the accumulated point cloud data.
6. An object detection method based on point cloud data acquired by scanning a monitoring range, wherein an object detection device accumulates the point cloud data of each area within the monitoring range by the number of accumulated frames set for that area in accordance with a setting of the number of accumulated frames for each area into which the monitoring range is divided, and performs object detection processing based on the accumulated point cloud data.

PCT/JP2024/009242 2023-07-26 2024-03-11 Object detection system, object detection device, and object detection method Pending WO2025022706A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2025535560A JPWO2025022706A1 (en) 2023-07-26 2024-03-11

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2023-121949 2023-07-26
JP2023121949 2023-07-26

Publications (1)

Publication Number Publication Date
WO2025022706A1 true WO2025022706A1 (en) 2025-01-30

Family

ID=94374640

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/009242 Pending WO2025022706A1 (en) 2023-07-26 2024-03-11 Object detection system, object detection device, and object detection method

Country Status (2)

Country Link
JP (1) JPWO2025022706A1 (en)
WO (1) WO2025022706A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111045017A (en) * 2019-12-20 2020-04-21 Chengdu University of Technology Method for constructing transformer substation map of inspection robot by fusing laser and vision
JP2021196195A (en) * 2020-06-10 2021-12-27 Panasonic IP Management Co., Ltd. Processing device, processing method, program, and radar system
JP2022112828A (en) * 2021-01-22 2022-08-03 Toppan Printing Co., Ltd. Distance imaging device and distance imaging method
US20230230368A1 (en) * 2020-05-25 2023-07-20 Sony Semiconductor Solutions Corporation Information processing apparatus, information processing method, and program


Also Published As

Publication number Publication date
JPWO2025022706A1 (en) 2025-01-30

Similar Documents

Publication Publication Date Title
KR102198724B1 (en) Method and apparatus for processing point cloud data
JP7345504B2 (en) Association of LIDAR data and image data
CN110163904B (en) Object labeling method, movement control method, device, equipment and storage medium
Plastiras et al. Efficient convnet-based object detection for unmanned aerial vehicles by selective tile processing
US20200388150A1 (en) Monitoring a scene to analyze an event using a plurality of image streams
CN110765894A (en) Target detection method, device, equipment and computer readable storage medium
US20190087666A1 (en) Method and apparatus for identifying static obstacle
CN110216661B (en) Falling area identification method and device
EP4303834A1 (en) Road change detection method, computing device, and storage medium
CN114119465B (en) Point cloud data processing method and device
CN112487894A (en) Automatic inspection method and device for rail transit protection area based on artificial intelligence
JP2025508060A (en) Road obstacle detection method, device, equipment, and storage medium
US20140141823A1 (en) Communication device, comunication method and computer program product
Ng et al. Low latency deep learning based parking occupancy detection by exploiting structural similarity
KR102221158B1 (en) Apparatus and method for detecting vehicle type, speed and traffic using radar device and image processing
CN119156548A (en) Point cloud evaluation method and device
WO2025022706A1 (en) Object detection system, object detection device, and object detection method
CN115661394A (en) Method for constructing lane line map, computer device and storage medium
CN119380309B (en) Road network generation method, intelligent device and computer readable storage medium
JP2019106166A (en) Information processing method, information processing apparatus and program
EP4414746A1 (en) Multi-object detection and tracking
KR101392222B1 (en) Laser radar for calculating the outline of the target, method for calculating the outline of the target
CN114169355A (en) Information acquisition method and device, millimeter wave radar, equipment and storage medium
CN115690028B (en) Road pit detection method and device, electronic equipment and storage medium
US11679771B2 (en) Vehicle event identification

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24845091

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2025535560

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2025535560

Country of ref document: JP