WO2022044453A1 - Image processing device, image processing method, and vehicle onboard electronic control device - Google Patents
- Publication number
- WO2022044453A1 (PCT/JP2021/019374)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- image
- camera
- image processing
- color
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Definitions
- the present invention relates to an image processing device, and more particularly to a method for easily detecting an abnormality in an in-vehicle camera.
- the difference image generation unit acquires, from the image storage unit, a plurality of image data captured by a camera at different times, and generates difference image data between these image data.
- the difference image update unit compares the previous difference image and the current difference image for each pixel, and updates the current difference image with the pixel having the larger difference. When the number of updates of the difference image reaches a predetermined number, the latest difference image is output to the dirt determination unit.
- Patent Document 1 thus describes an image pickup device in which the dirt determination unit determines the presence or absence of lens dirt on the camera based on the latest difference image.
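The per-pixel update rule of this prior-art scheme can be sketched as follows. This is an illustrative simplification that treats grayscale frames as nested lists of integers; it is not the actual implementation of Patent Document 1.

```python
def abs_difference(img_a, img_b):
    """Per-pixel absolute difference between two grayscale frames."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

def update_difference_image(prev_diff, curr_diff):
    """Keep, per pixel, the larger of the previous and current difference,
    mirroring the 'update with the pixel having the larger difference' rule."""
    return [[max(p, c) for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev_diff, curr_diff)]
```

A pixel of the accumulated difference image stays small only if it never changes across frames, which is why a persistently unchanging region suggests lens dirt.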
- Patent Document 2 Japanese Unexamined Patent Publication No. 2012-166705
- Patent Document 2 describes a foreign matter adhesion determination device for an in-vehicle camera lens that includes a determination means for determining, based on the detection result, the presence or absence of foreign matter adhering to the lens; the determination means makes the determination based on whether the length or formation interval of the detected objects has a predetermined constancy.
- Patent Document 3 (Japanese Unexamined Patent Publication No. 2019-29940) describes a deposit detection device that detects deposits appearing in an image captured by an image pickup device installed on a moving object.
- the device includes a contour extraction unit that extracts the contour area of a deposit from the captured image as a contour region, and an inner extraction unit that extracts the region inside the contour of the deposit as an inner region.
- each contour region is compared with the contour of a deposit, and one matching in either shape or brightness is detected as a deposit contour region; each inner region is compared with the inside of the contour of a deposit, and one matching in either shape or brightness is detected as a deposit inner region.
- a region determination unit then detects, as the deposit detection region, a lens deposit region consisting of the deposit contour and/or the inside of the contour, based on the detected deposit contour region and/or deposit inner region.
- the camera has a function to detect a hardware failure of the image sensor. Although many methods for detecting obstruction due to deposits on the lens provided on the front surface of the camera have been proposed, their processing load is high, and a reduction in processing load is required.
- an object of the present invention is to detect an abnormality in the camera with a low processing load.
- a typical example of the invention disclosed in the present application is as follows: an image processing device that processes an image taken by an in-vehicle camera, comprising an image acquisition unit that acquires an image from the in-vehicle camera;
- a structure color specifying unit that specifies the color information of a structure based on the type of structure presumed to exist around the own vehicle;
- an image area specifying unit that specifies a region in the image where the structure is presumed to exist, based on the type of the structure; and
- a camera abnormality detecting unit that detects an abnormality of the in-vehicle camera. The image processing device is characterized in that, when the structure is present around the own vehicle, the camera abnormality detecting unit detects an abnormality of the in-vehicle camera by comparing the color information obtained from the structure color specifying unit with the color components of the region in the image where the structure is presumed to exist.
- according to the present invention, an abnormality of the camera can be detected with a low processing load. Problems, configurations, and effects other than those described above will become apparent from the following description of the embodiments.
- FIG. 1 is a block diagram showing a configuration of an image processing device according to an embodiment of the present invention.
- the image processing device 1 is mounted on an in-vehicle electronic control unit (ECU) or an in-vehicle camera device, and has a memory 10, an arithmetic unit (CPU) 30, a network interface (CAN) 40, an interface (I/F) 50, and a network interface (Eth) 60.
- the arithmetic unit 30 executes the program stored in the memory 10. A part of the processing performed by the arithmetic unit 30 executing the program may instead be executed by other hardware (for example, an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit)).
- the memory 10 includes a ROM, which is a non-volatile storage element, and a RAM, which is a volatile storage element.
- the ROM stores an invariant program (for example, BIOS) and the like.
- the RAM is a high-speed, volatile storage element such as a DRAM (Dynamic Random Access Memory) or an SRAM (Static Random Access Memory), and stores the program executed by the arithmetic unit 30 and the data used when the program is executed.
- the memory 10 stores programs for realizing the functional blocks, such as an image acquisition unit 11, a vehicle information acquisition unit 12, a map information acquisition unit 13, an image area identification unit 14, a structure color identification unit 16, a camera abnormality detection unit 18, an abnormal area notified determination unit 19, and an abnormality notification unit 21.
- the memory 10 stores data referred to when the program is executed, such as the structure position table 15, the structure color component table 17, and the abnormal area notified table 20.
- the network interface (CAN) 40 controls communication with other devices mounted on the vehicle via a CAN (Controller Area Network).
- the interface (I / F) 50 is a serial or parallel input / output interface, and an in-vehicle camera 51 is connected to the interface (I / F) 50.
- the network interface (Eth) 60 controls communication with other devices mounted on the vehicle via Ethernet (registered trademark, the same applies hereinafter).
- the image acquisition unit 11 acquires an image taken by the vehicle-mounted camera 51 via the interface 50.
- the vehicle information acquisition unit 12 acquires vehicle information 41, including the behavior of the vehicle, via the network interface (CAN) 40.
- the map information acquisition unit 13 acquires map information 61 via the network interface (Eth) 60.
- the map information to be acquired may be detailed map information for automatic driving or map information for navigation.
- the image area specifying unit 14 refers to the structure position table 15 and specifies a structure detection area in which the structure is presumed to exist in the image.
- the structure position table 15 is a table recording, for each structure estimated from the map information, the region of the image in which the structure may exist; its details will be described with reference to FIG. 2.
- the structure color specifying unit 16 specifies the color of the structure to be determined with reference to the structure color component table 17.
- the structure color component table 17 is a table in which color information for each type of structure is recorded, and the details thereof will be described with reference to FIGS. 3, 4, and 5.
- the camera abnormality detection unit 18 detects deposits on the front surface of the camera (for example, a lens) based on the color information of the structure existing in the image taken by the in-vehicle camera 51.
- the abnormal area notified determination unit 19 refers to the abnormal area already notified table 20 and determines whether the abnormality detected by the camera abnormality detecting unit 18 has been notified.
- the abnormality area already notified table 20 is a table in which the abnormality notified by the abnormality notification unit 21 is recorded.
- the abnormality notification unit 21 notifies the occupant and other ECUs of the abnormality determined by the abnormality area notification completion determination unit 19 to be unnotified.
- FIG. 2 is a diagram showing a configuration example of the structure position table 15 according to the embodiment of the present invention.
- the structure position table 15 is referred to by the image area specifying unit 14 when specifying a structure detection area (that is, a region in which the color of the structure is determined) where the structure is presumed to exist, and holds, for each type of structure, coordinate data in the image.
- the positions of structure detection areas where a traffic light, a road sign, and a pedestrian crossing are presumed to exist in the image are recorded as structures for color determination.
- the structure detection area recorded in the structure position table 15 shown in FIG. 2 is defined by the three coordinates of the vertices of the triangle.
- the shape of the structure detection area recorded in the structure position table 15 may be a quadrangle (trapezoid, square, rectangle) instead of a triangle. Since distant objects appear small in the image captured by the in-vehicle camera 51, defining the structure detection area as a triangle or trapezoid allows the area where the structure exists to be represented accurately while keeping the detection area small.
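Membership of a pixel in a triangular detection area defined by three vertex coordinates can be checked with a standard same-side (cross-product sign) test. The sketch below is illustrative and not taken from the patent:

```python
def point_in_triangle(p, a, b, c):
    """True if pixel p = (x, y) lies inside or on triangle (a, b, c)."""
    def cross(o, u, v):
        # z-component of (u - o) x (v - o)
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # all signs agree (or zero) -> inside
```

The test works for either vertex winding order, so the three coordinates stored in the table can be listed in any order.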
- a plurality of structure detection areas may be recorded for the same type of structure. For example, since traffic lights are installed on the left side of the road (above the own lane in left-hand traffic) and on the right side (above the oncoming lane in left-hand traffic), it is advisable to define a plurality of structure detection areas according to the installation patterns of traffic lights.
- the structure detection area acquired from the structure position table 15 may be changed according to the position of the own vehicle. Further, the structure detection area may be calculated based on the distance from the intersection, using the data of one structure detection area as a variable, so that the area changes according to that distance. By switching the structure detection area according to position and distance in this way, the scanning range for color detection can be reduced, and the processing load can be lowered.
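One hypothetical way to derive a distance-dependent area from a single stored area is to scale the stored triangle about its centroid; the linear scale factor and the function interface below are assumptions for illustration, not the patent's method:

```python
def scale_detection_area(base_triangle, base_distance_m, current_distance_m):
    """Shrink or grow a base detection triangle about its centroid so the
    area tracks apparent size: a farther intersection yields a smaller area."""
    scale = base_distance_m / current_distance_m
    cx = sum(x for x, _ in base_triangle) / 3.0
    cy = sum(y for _, y in base_triangle) / 3.0
    return [(cx + (x - cx) * scale, cy + (y - cy) * scale)
            for x, y in base_triangle]
```

Doubling the distance halves the triangle's linear extent, so the pixels to scan shrink roughly quadratically with distance.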
- the structure detection areas of traffic lights, road signs, and pedestrian crossings are recorded because structures of the same type have similar colors and constant installation patterns; structure detection areas for other structures (for example, road markings (road paint) and railroad crossings) may also be defined.
- FIGS. 3, 4, and 5 are diagrams showing a configuration example of the structure color component table 17 according to the embodiment of the present invention, and a plurality of colors are defined according to the combination of the environment and the structure.
- the structure color component table 17 is divided into a plurality of tables according to the environment, that is, according to the weather and whether it is day or night.
- FIG. 3 shows the color of the structure in the daytime in fine weather
- FIG. 4 shows the color of the structure in the daytime in rainy weather
- the color of the structure is represented by the chromaticity of red, green, and blue in the RGB color space, but other color spaces (for example, CMYK) may be used.
- for road signs, the typical colors contained in the sign are defined. For example, many stop signs are placed at intersections and are colored white on a red ground, so the red of the ground color is defined. Further, the blue ground color of the sign prohibiting travel outside the designated direction may be defined, or the yellow ground color of the sign indicating the shape of the intersection may be defined.
- for traffic lights, it is advisable to define one or both of the color when lit in blue and the color when lit in red.
- yellow lighting may be defined.
- the color (blue, yellow, red) of the signal being turned off may be defined.
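A minimal sketch of how such a color component table and a per-pixel color match might be represented. All table entries, key names, and the tolerance value here are invented for illustration; they are not the contents of the actual structure color component table 17:

```python
# Hypothetical table: reference RGB colors per (environment, structure type).
COLOR_TABLE = {
    ("sunny_day", "traffic_light"): [(255, 0, 0), (0, 128, 255)],  # red lit, blue lit
    ("sunny_day", "pedestrian_crossing"): [(240, 240, 240)],       # white paint
}

def matches_structure_color(pixel, environment, structure, tolerance=40):
    """True if an RGB pixel is within tolerance of any reference color."""
    for ref in COLOR_TABLE.get((environment, structure), []):
        if all(abs(p - r) <= tolerance for p, r in zip(pixel, ref)):
            return True
    return False
```

Separate tables per environment (fine weather, rain, night) would simply swap in different reference values under the same lookup scheme.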
- FIGS. 6, 7, and 8 are flowcharts of camera abnormality detection processing.
- the arithmetic unit 30 executes a camera abnormality detection method selection process. Specifically, the image processing device 1 determines whether or not the map information can be acquired (S601). For example, if the vehicle is equipped with an automatic driving function, detailed map information can be acquired. Further, if the vehicle is equipped with a navigation system, map information for navigation can be acquired. Further, if the map information is provided from the map ECU via CAN, the image processing device 1 can acquire the map information via CAN.
- if the map information cannot be acquired, the process proceeds to the camera abnormality detection process based on the own vehicle information (FIG. 8).
- the image acquisition unit 11 acquires the image data taken by the in-vehicle camera 51 (S701).
- the image acquisition unit 11 may acquire an image every frame (for example, at 30 fps), or may acquire one image every few frames (for example, every 0.5 seconds).
- the number of images acquired per unit time may be changed depending on the traveling speed. For example, when the traveling speed is slow, the time interval for acquiring an image is lengthened, and when the traveling speed is high, the time interval for acquiring an image is shortened. By changing the image acquisition interval according to the speed, the structure can be captured by a plurality of still images regardless of the speed, and the color of the structure can be reliably determined.
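The speed-dependent acquisition interval could be sketched as follows; the tuning constant and the clamp bounds are hypothetical values, not figures from the patent:

```python
def acquisition_interval_s(speed_kmh, min_interval=0.1, max_interval=1.0):
    """Longer interval at low speed, shorter at high speed, so that a
    structure is still captured in several successive images."""
    if speed_kmh <= 0:
        return max_interval          # stationary: sample slowly
    interval = 30.0 / speed_kmh      # hypothetical tuning constant
    return max(min_interval, min(max_interval, interval))
```

Because the interval is inversely proportional to speed, the distance traveled between two acquired images stays roughly constant, which is the property the text relies on.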
- the image acquisition unit 11 executes image correction processing (S702); specifically, the shape of the image is corrected.
- since the image acquired by the in-vehicle camera 51 may be distorted, the image is corrected by linear transformation or projective transformation.
- in addition, the brightness of the image is adjusted, for example by exposure adjustment, to facilitate color detection.
- the map information acquisition unit 13 acquires the map information.
- the map information acquired by the map information acquisition unit 13 includes at least the position of the intersection.
- the map information acquisition unit 13 extracts information on traffic lights, road signs, pedestrian crossings, and road markings (paints) included in the acquired map information (S703).
- the positions of structures such as traffic lights, road signs, pedestrian crossings, and road markings (paint) can be known.
- even if the navigation map information does not include the positions of structures, the areas near intersections where structures such as traffic lights, road signs, pedestrian crossings, and road markings exist can be known.
- the map information acquisition unit 13 determines from the acquired map information whether or not there is a sign or the like in the vicinity of the own vehicle (S704).
- the presence or absence of the structure can be determined depending on whether the structure information can be extracted from the detailed map information.
- when map information for navigation is acquired, if the distance to the intersection is smaller than a predetermined threshold, it can be determined that the vehicle is approaching the intersection and therefore that structures generally found near intersections, such as traffic lights, road signs, pedestrian crossings, and road markings, are near the own vehicle.
- the processing is executed in the order of the processing by the image acquisition unit 11 and the processing by the map information acquisition unit 13, but either of them may be executed first or may be executed in parallel.
- when it is determined in step S704 that there is no structure in the vicinity of the own vehicle (No in S705), the process returns to step S701 and the next image data is acquired.
- when it is determined that a structure exists in the vicinity of the own vehicle (Yes in S705), the structure color specifying unit 16 refers to the data of the structure color component table 17 corresponding to the current environment and specifies the color of that structure (S706). Since the lit color of a traffic light is not known in advance, all lit colors (and the unlit colors, if registered) are specified from the structure color component table 17. Further, when the type of the installed sign cannot be specified and the color of the sign to be recognized cannot be narrowed down, all possible colors may be specified from the structure color component table 17.
- the image area specifying unit 14 refers to the structure position table 15 and specifies a structure detection area in which it is presumed that a structure determined to be in the vicinity of the own vehicle exists in the image (S707).
- the structure color specifying process by the structure color specifying unit 16 and the image area specifying process by the image area specifying unit 14 are executed in this order, but either may be executed first, or they may be executed in parallel.
- the camera abnormality detection unit 18 determines whether or not the pixel of the color component of the structure determined to be in the vicinity of the own vehicle exists in the structure detection region (S708). Specifically, the pixels in the specified structure detection region are scanned, and it is determined whether or not a predetermined number of pixels satisfying the color condition of the structure specified in step S706 are continuously present.
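The scan in S708 amounts to checking for a run of consecutive color-matching pixels. A sketch for a single pixel row, where `is_structure_color` is a caller-supplied predicate (an assumed interface, not the patent's):

```python
def has_color_run(row, is_structure_color, min_run):
    """True if at least min_run consecutive pixels in the row match."""
    run = 0
    for pixel in row:
        run = run + 1 if is_structure_color(pixel) else 0
        if run >= min_run:
            return True
    return False
```

Requiring a minimum run length filters out isolated pixels whose color matches only by noise.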
- when determining whether the color components of a plurality of structures are present, it is preferable to make the determinations in order of ease of color recognition. For example, the color of a light-emitting traffic light is easy to recognize, so it should be recognized first, followed by the white paint on the road (for example, a pedestrian crossing), and then the signs.
- if pixels of the color component are present (Yes in S708), the continuous counter is initialized to 0, the abnormal area notified table 20 is cleared (S709), the process returns to step S701, and the next image data is acquired.
- if pixels of the color component are not present (No in S708), the continuous counter is incremented by 1 (S710), and it is determined whether the continuous counter has reached a predetermined number (S711). If the continuous counter does not exceed the predetermined threshold, the process returns to step S701 without determining an abnormality, and the next image data is acquired.
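Steps S709 to S711 form a simple debounce counter: any successful detection resets it, and an abnormality is reported only after enough consecutive misses. A sketch of that flow (the threshold value is application-specific):

```python
class ContinuousMissCounter:
    """Counts consecutive frames in which the expected color was absent."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0

    def update(self, color_found):
        """Returns True when the miss count reaches the threshold."""
        if color_found:
            self.count = 0                    # S709: reset on any detection
            return False
        self.count += 1                       # S710: one more consecutive miss
        return self.count >= self.threshold   # S711: threshold check
```

This is what lets the device distinguish persistent lens dirt from a single frame in which the structure happened not to be visible.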
- if the continuous counter reaches the predetermined threshold, the camera abnormality detection unit 18 determines that the camera is abnormal because a structure that should exist does not appear in the image. Then, the abnormal area notified determination unit 19 refers to the abnormal area notified table 20 and determines whether the determined abnormality has already been notified (S712).
- if the abnormality has not yet been notified, the abnormality notification unit 21 notifies the occupant and other ECUs of the camera dirt abnormality (S714).
- the occupant sees the camera dirt abnormality warning displayed on the instrument panel and removes the dirt on the front surface of the camera (for example, the lens).
- when an ECU receives the camera dirt abnormality notification, it determines that the reliability of the image of the vehicle-mounted camera 51 is low, and executes the vehicle control process without using the image of the vehicle-mounted camera 51 until the dirt is removed.
- the ECU may stop the automatic driving (AD) control using the image of the vehicle-mounted camera 51 and notify the occupant that automatic driving has stopped. Further, when the camera dirt abnormality is detected, the ECU may stop the system including the vehicle-mounted camera 51 in which the dirt was detected, stop the automatic driving control because of the decrease in redundancy, and switch to advanced driver assistance (ADAS).
- the image may be divided into a plurality of areas (for example, 16 areas of 4 vertical × 4 horizontal), and it may be determined in each area whether the color component of the structure is present.
- in that case, a continuous counter and an abnormal area notified table 20 are provided for each area, and in step S708, it is determined for each divided area whether pixels of the color component of the structure exist in the structure detection area.
- the continuous counter and the abnormal area notified table 20 are updated for each area.
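For the 4 × 4 division example, mapping a pixel to the index of the region whose counter and table entry should be updated is simple integer arithmetic; a sketch:

```python
def region_index(x, y, width, height, cols=4, rows=4):
    """Map pixel (x, y) to one of cols * rows regions (0..15 for 4 x 4)."""
    col = min(x * cols // width, cols - 1)
    row = min(y * rows // height, rows - 1)
    return row * cols + col
```

Keeping one counter per region localizes the detected dirt, so the notification can indicate which part of the lens is affected.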
- the image acquisition unit 11 acquires the image data taken by the in-vehicle camera 51 (S701) and executes the image correction process (S702).
- the own vehicle information acquisition unit 12 acquires the own vehicle information 41 (S803).
- the vehicle information 41 acquired by the vehicle information acquisition unit 12 includes speed, acceleration, steering angle, occupant operation amount (brake operation amount, accelerator operation amount), and the like as vehicle behavior information.
- the own vehicle information acquisition unit 12 estimates, based on the acquired own vehicle information 41, whether there is a preceding vehicle traveling in front of the own vehicle or a traffic light in the vicinity of the own vehicle (S804). Specifically, when low-speed driving and stopping are repeated, it can be estimated that there is a traffic jam and that a preceding vehicle exists. When the vehicle decelerates from a constant speed and stops, it can be estimated that the vehicle is approaching an intersection with a traffic light.
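The estimation in S804 can be illustrated with rough heuristics over a recent speed history. All thresholds and the classification labels below are hypothetical:

```python
def estimate_scene(speed_history_kmh):
    """Classify recent driving: stop-and-go suggests a preceding vehicle
    (congestion); a smooth stop from cruising suggests an intersection."""
    stopped = [v < 5 for v in speed_history_kmh]
    decelerating = all(a >= b for a, b in
                       zip(speed_history_kmh, speed_history_kmh[1:]))
    if speed_history_kmh[0] > 40 and stopped[-1] and decelerating:
        return "approaching_intersection"
    # count stop <-> go transitions; two or more reads as stop-and-go traffic
    transitions = sum(1 for a, b in zip(stopped, stopped[1:]) if a != b)
    if transitions >= 2:
        return "preceding_vehicle"
    return "unknown"
```

A production system would use acceleration and pedal signals from the CAN bus as well, but speed alone already separates the two cases described in the text.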
- the structure color specifying unit 16 specifies the color of the structure determined to be in the vicinity of the own vehicle by referring to the data corresponding to the current environment of the structure color component table 17 (S706).
- the image region specifying unit 14 refers to the structure position table 15 and identifies a structure detection region in which it is presumed that a structure determined to be in the vicinity of the own vehicle exists in the image (S707). For example, when it is estimated that there is a vehicle in front, the area of the vehicle in front is set as a structure detection area, and the red color of the tail lamp in the structure detection area is detected. Further, when it is estimated that the vehicle is approaching an intersection, the color of each structure is detected in the structure detection area of the traffic light, the road sign, and the pedestrian crossing in the same manner as described above.
- the camera abnormality detection unit 18 determines whether or not the pixel of the color component of the structure determined to be in the vicinity of the own vehicle exists in the structure detection region (S708).
- if pixels of the color component are present (Yes in S708), the continuous counter is initialized to 0, the abnormal area notified table 20 is cleared (S709), the process returns to step S701, and the next image data is acquired.
- if pixels of the color component are not present (No in S708), the continuous counter is incremented by 1 (S710), and it is determined whether the continuous counter has reached a predetermined number (S711). If the continuous counter does not exceed the predetermined threshold, the process returns to step S701 without determining an abnormality, and the next image data is acquired.
- the abnormal area notified determination unit 19 refers to the abnormal area notified table 20 and determines whether the determined abnormality has already been notified (S712).
- if the camera dirt abnormality has not yet been notified (No in S713), the abnormal area notified determination unit 19 determines that new camera dirt has been detected.
- after it is determined that a new abnormality has been detected, the abnormality notification unit 21 notifies the occupant and other ECUs of the camera dirt abnormality (S714).
- also in this process, the image may be divided into a plurality of areas (for example, 16 areas of 4 vertical × 4 horizontal), and it may be determined in each area whether the color component of the structure is present.
- as described above, the image processing device 1 of the embodiment of the present invention includes an image acquisition unit 11 that acquires an image from the in-vehicle camera 51,
- a structure color specifying unit 16 that specifies the color information of a structure based on the type of structure presumed to exist around the own vehicle,
- an image area specifying unit 14 that specifies the area in the image where the structure is presumed to exist, based on the type of the structure, and
- a camera abnormality detecting unit 18 that detects an abnormality of the in-vehicle camera 51. When a structure exists around the own vehicle, the camera abnormality detecting unit 18 compares the color information obtained from the structure color specifying unit 16
- with the color components of the region where the structure is presumed to exist to detect an abnormality of the vehicle-mounted camera 51, so the abnormality can be detected with a low processing load. That is, since only the presence or absence of the color components of the structure is determined, the amount of information used for the determination is small, which reduces the processing load and facilitates detection. Detection of deposits on the front surface (lens) of the camera, which cause performance degradation, can thus be implemented easily even in a system without high processing performance. Further, by using structures with common colors, such as traffic lights and signs, the color condition of a structure can easily be set based on its type.
- since the image area specifying unit 14 acquires the information on the structures presumed to exist around the own vehicle from the map information, the area in the image where a structure is presumed to exist can be accurately identified based on the traffic light, road sign, and pedestrian crossing information included in the detailed map.
- since the own vehicle information acquisition unit 12 estimates the structures presumed to exist around the own vehicle from the state of the own vehicle, an abnormality of the in-vehicle camera 51 can be detected with a low processing load using the preceding vehicle as the structure, and the approach to an intersection can also be detected.
- the own vehicle information acquisition unit 12 estimates the state of the vehicle from the vehicle speed, it can detect the presence or absence of a vehicle in front and the approach to an intersection even when the map information cannot be acquired.
- since the structure position table 15 holds, for each type of structure, the region in which the structure is estimated to exist, the image area specifying unit 14 can easily obtain the region corresponding to the type of structure.
- the camera abnormality detection unit 18 detects an abnormality of the in-vehicle camera 51 when the color information obtained from the structure color specifying unit 16 cannot be detected, a predetermined number of times in succession, in the region of the image where the structure is presumed to exist. That is, since it is determined that stains hindering the capture of the outside world are present on the lens in a region where the predetermined color cannot be detected continuously, temporary non-detection is excluded and the abnormality of the in-vehicle camera 51 can be detected reliably.
- since the abnormality notification unit 21 notifies the occupant and/or another ECU when the camera abnormality detected by the camera abnormality detection unit 18 has not yet been notified, the occupant can be prompted to remove the dirt from the front surface (lens) of the camera.
- the ECU can execute the vehicle control process without using the image of the camera to ensure the safety.
- the present invention is not limited to the above-mentioned embodiment, but includes various modifications and equivalent configurations within the scope of the attached claims.
- the above-described examples have been described in detail in order to explain the present invention in an easy-to-understand manner, and the present invention is not necessarily limited to those having all the described configurations.
- a part of the configuration of one embodiment may be replaced with the configuration of another embodiment.
- the configuration of another embodiment may be added to the configuration of one embodiment.
- other configurations may be added / deleted / replaced with respect to a part of the configurations of each embodiment.
- each configuration, function, processing unit, processing means, etc. described above may be realized in hardware by designing a part or all of them as, for example, an integrated circuit, or may be realized in software by a processor interpreting and executing a program that realizes each function.
- Information such as programs, tables, and files that realize each function can be stored in a storage device such as a memory, a hard disk, SSD (Solid State Drive), or a recording medium such as an IC card, SD card, or DVD.
- the control lines and information lines shown are those considered necessary for explanation, and not all the control lines and information lines necessary for implementation are necessarily shown. In practice, almost all configurations can be considered to be interconnected.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Traffic Control Systems (AREA)
- Closed-Circuit Television Systems (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Description
This application claims priority from Japanese Patent Application No. 2020-141436, filed on August 25, 2020, the contents of which are incorporated herein by reference.
The present invention relates to an image processing device, and more particularly to a method for easily detecting an abnormality in an in-vehicle camera.
With the progress of electronic vehicle control technologies such as advanced driver assistance and automated driving, image sensors are increasingly installed outside the vehicle cabin, and the front member of the image sensor (for example, a lens) has more opportunities to become dirty. If foreign matter adheres to the front lens of the image sensor, the image in that area cannot be captured, and the recognition accuracy of objects outside the vehicle may decrease.
For example, Patent Document 1 (Japanese Unexamined Patent Application Publication No. 2007-318355) describes an imaging device in which a difference image generation unit acquires, from an image storage unit, a plurality of image data captured by a camera at different times and generates difference image data between them; a difference image update unit compares the previous difference image with the current difference image pixel by pixel and updates the current difference image with the pixel having the larger difference; when the number of updates of the difference image reaches a predetermined number, the latest difference image is output to a dirt determination unit, which determines the presence or absence of lens dirt on the camera based on this latest difference image.
Patent Document 2 (Japanese Unexamined Patent Application Publication No. 2012-166705) describes a foreign-matter adhesion determination device for an in-vehicle camera lens, comprising detection means for detecting a predetermined elongated detection target formed in a predetermined area around the vehicle based on an image captured by the in-vehicle camera, and determination means 8 for determining, based on the detection result, whether foreign matter adheres to the lens; the determination means makes the determination using, as a criterion, whether the length or formation interval of the detected targets exhibits a predetermined constancy.
Patent Document 3 (Japanese Unexamined Patent Application Publication No. 2019-29940) describes a deposit detection device that detects deposits appearing in an image captured by an imaging device installed on a moving body, comprising: a contour extraction unit that extracts the contour area of a deposit from the captured image as a contour region; an inner extraction unit that extracts the area inside the contour of the deposit as an inner region; and a deposit detection region determination unit that compares the contour region with the contour of a deposit and detects a match in either shape or brightness as a deposit contour region, compares the inner region with the inside of the contour of a deposit and detects a match in either shape or brightness as a deposit inner region, and detects, as a deposit detection region, a lens deposit region consisting of either the contour of the deposit or the inside of the contour, from either the deposit contour region or the deposit inner region.
A camera has a function of detecting hardware failures of its image sensor. Many methods have also been proposed for detecting obstruction caused by deposits on the lens provided on the front of the camera, but their processing load is high, and a reduction in processing load is required.
Therefore, an object of the present invention is to detect a camera abnormality with a low processing load.
A representative example of the invention disclosed in the present application is as follows: an image processing device that processes an image captured by an in-vehicle camera, comprising an image acquisition unit that acquires an image from the in-vehicle camera; a structure color specification unit that specifies color information of a structure based on the type of structure presumed to exist around the own vehicle; an image area specification unit that specifies, based on the type of the structure, an area in the image where the structure is presumed to exist; and a camera abnormality detection unit that detects an abnormality of the in-vehicle camera, wherein, when the structure exists around the own vehicle, the camera abnormality detection unit detects an abnormality of the in-vehicle camera by comparing the color information obtained from the structure color specification unit with the color components of the area in the image where the structure is presumed to exist.
According to one aspect of the present invention, a camera abnormality can be detected with a low processing load. Problems, configurations, and effects other than those described above will become apparent from the following description of the embodiments.
<Example 1>
FIG. 1 is a block diagram showing the configuration of an image processing device according to an embodiment of the present invention.
The image processing device 1 is implemented in an in-vehicle electronic control unit (ECU) or an in-vehicle camera device, and has a memory 10, an arithmetic unit (CPU) 30, a network interface (CAN) 40, an interface (I/F) 50, and a network interface (Eth) 60.
The arithmetic unit 30 executes programs stored in the memory 10. Part of the processing performed by the arithmetic unit 30 executing a program may be executed by other computing hardware (for example, an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit)).
The memory 10 includes a ROM, which is a nonvolatile storage element, and a RAM. The ROM stores invariable programs (for example, a BIOS). The RAM includes a high-speed, volatile storage element such as a DRAM (Dynamic Random Access Memory) and a storage element (non-transitory storage medium) such as an SRAM (Static Random Access Memory), and stores programs executed by the arithmetic unit 30 and data used when those programs are executed.
Specifically, the memory 10 stores programs for realizing functional blocks such as an image acquisition unit 11, an own-vehicle information acquisition unit 12, a map information acquisition unit 13, an image area specification unit 14, a structure color specification unit 16, a camera abnormality detection unit 18, an abnormal area notification determination unit 19, and an abnormality notification unit 21. The memory 10 also stores data referenced during program execution, such as a structure position table 15, a structure color component table 17, and an abnormal area notified table 20.
The network interface (CAN) 40 controls communication with other devices mounted on the vehicle via a CAN (Controller Area Network). The interface (I/F) 50 is a serial or parallel input/output interface to which the in-vehicle camera 51 is connected. The network interface (Eth) 60 controls communication with other devices mounted on the vehicle via Ethernet (registered trademark; the same applies hereinafter).
The image acquisition unit 11 acquires images captured by the in-vehicle camera 51 via the interface 50. The own-vehicle information acquisition unit 12 acquires, via the network interface (CAN) 40, own-vehicle information 41 including the behavior of the vehicle (for example, speed, acceleration, steering angle, and the occupant's brake and accelerator operation amounts). The map information acquisition unit 13 acquires map information 61 via the network interface (Eth) 60. The acquired map information may be detailed map information for automated driving or map information for navigation.
The image area specification unit 14 refers to the structure position table 15 to specify a structure detection area where a structure is presumed to exist in the image. The structure position table 15 records, for structures estimated from the map information, the areas of the image in which those structures may appear; its details are described with reference to FIG. 2.
The structure color specification unit 16 refers to the structure color component table 17 to specify the color of the structure to be evaluated. The structure color component table 17 records color information for each type of structure; its details are described with reference to FIGS. 3, 4, and 5.
The camera abnormality detection unit 18 detects deposits on the front of the camera (for example, the lens) based on the color information of structures present in the image captured by the in-vehicle camera 51.
The abnormal area notification determination unit 19 refers to the abnormal area notified table 20 to determine whether the abnormality detected by the camera abnormality detection unit 18 has already been notified. The abnormal area notified table 20 records the abnormalities that the abnormality notification unit 21 has notified. The abnormality notification unit 21 notifies the occupant and other ECUs of abnormalities that the abnormal area notification determination unit 19 has determined to be unnotified.
FIG. 2 is a diagram showing a configuration example of the structure position table 15 according to the embodiment of the present invention.
The structure position table 15 is referenced by the image area specification unit 14 to specify the structure detection area where a structure is presumed to exist (that is, the area in which the color of the structure is evaluated), and holds coordinate data in the image for each type of structure.
For example, the structure position table 15 shown in FIG. 2 records, for traffic lights, road signs, and pedestrian crossings as the structures whose colors are to be evaluated, the positions of the structure detection areas where those structures are presumed to appear in the image. Each structure detection area recorded in the structure position table 15 shown in FIG. 2 is defined by the three coordinates of the vertices of a triangle. The shape of the structure detection area may instead be a quadrangle (trapezoid, square, or rectangle). Since distant objects appear small in the image captured by the in-vehicle camera 51, defining the structure detection area as a triangle or trapezoid accurately represents the area where the structure exists and keeps the detection area small.
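As an illustrative sketch (not part of the patent), membership of a pixel in such a triangular detection area can be tested with cross-product sign checks; the vertex coordinates below are invented examples.

```python
def point_in_triangle(p, a, b, c):
    """Return True if pixel p lies inside triangle (a, b, c).

    Uses the sign of the cross product of each edge with the vector
    to p: p is inside when all three signs agree (or are zero).
    """
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])

    d1 = cross(a, b, p)
    d2 = cross(b, c, p)
    d3 = cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

# Hypothetical detection area for a traffic light in a 1280x960 image.
area = ((400, 100), (880, 100), (640, 400))
print(point_in_triangle((640, 200), *area))  # True: inside
print(point_in_triangle((10, 10), *area))    # False: outside
```

Because the triangle hugs the region where the structure actually appears, only pixels passing this test need to be scanned for color, which is the processing-load saving the paragraph above describes.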
A plurality of structure detection areas may be recorded for the same type of structure. For example, since traffic lights are installed on the left side of the road (above the own lane in left-hand traffic) and on the right side (above the oncoming lane in left-hand traffic), it is advisable to define a plurality of structure detection areas according to the installation patterns of traffic lights.
Since the position at which a structure appears in the image changes with the distance from the intersection, the structure detection area acquired from the structure position table 15 may be switched accordingly. Alternatively, the structure detection area may be computed from the distance to the intersection, using the data of one structure detection area as a variable. Switching the structure detection area according to position and distance in this way reduces the range scanned for color detection and lowers the processing load.
In this embodiment, structure detection areas are recorded for traffic lights, road signs, and pedestrian crossings because structures of the same type have similar colors and their installation locations follow fixed patterns, but structure detection areas for other structures (for example, road markings (road surface paint) or railroad crossings) may also be defined.
FIGS. 3, 4, and 5 are diagrams showing configuration examples of the structure color component table 17 according to the embodiment of the present invention, in which a plurality of colors are defined according to combinations of environment and structure. The illustrated structure color component table 17 consists of a plurality of tables divided by environment, namely weather and time of day: FIG. 3 represents the colors of structures in the daytime in clear weather, FIG. 4 at night in clear weather, and FIG. 5 in the daytime in rainy weather. Colors may be defined in other formats as long as they are determined according to the combination of environment and structure.
The colors of structures are represented by the chromaticities of red, green, and blue in the RGB color space, but other color spaces (for example, CMYK) may be used.
For each environment, representative colors contained in a structure are defined. For example, most stop signs are installed at intersections, and their color scheme is white characters on a red background; the red background color is therefore defined. The blue background color of a sign prohibiting travel outside designated directions, or the yellow background color of a sign indicating the shape of an intersection, may also be defined.
For a traffic light, one or both of the color when the green light is on and the color when the red light is on may be defined. Yellow, although lit only briefly, may also be defined, as may the colors of the unlit signals (green, yellow, red).
For a pedestrian crossing, the white of the paint may be defined. In RGB space, if the chromaticities of the R, G, and B components are equal, the color is white, gray, or black depending on the brightness.
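A minimal sketch of such an environment- and structure-keyed color table, with a tolerance-based match, might look as follows. The RGB values, key names, and tolerance are invented examples, not values from the patent.

```python
# Hypothetical structure color component table keyed by
# (environment, structure); all numbers are illustrative.
COLOR_TABLE = {
    ("clear_day", "traffic_light_red"): (255, 60, 40),
    ("clear_day", "stop_sign"): (200, 30, 30),
    ("clear_day", "crosswalk"): (230, 230, 230),
    ("clear_night", "traffic_light_red"): (255, 40, 20),
}

def matches(pixel, environment, structure, tol=40):
    """True if each channel of the pixel is within `tol` of the
    reference color defined for this environment/structure pair."""
    ref = COLOR_TABLE[(environment, structure)]
    return all(abs(p - r) <= tol for p, r in zip(pixel, ref))

print(matches((240, 70, 50), "clear_day", "traffic_light_red"))  # True
print(matches((40, 40, 40), "clear_day", "crosswalk"))           # False
```

Splitting the table by environment, as FIGS. 3 to 5 do, keeps each lookup a single dictionary access, so the per-pixel cost stays low.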
FIGS. 6, 7, and 8 are flowcharts of the camera abnormality detection processing.
First, the arithmetic unit 30 executes a camera abnormality detection method selection process. Specifically, it determines whether the image processing device 1 can acquire map information (S601). For example, if the vehicle is equipped with an automated driving function, detailed map information can be acquired; if the vehicle is equipped with a navigation system, navigation map information can be acquired; and if a map ECU provides map information via CAN, the image processing device 1 can acquire the map information via CAN.
If map information can be acquired, the process proceeds to camera abnormality detection based on map information (FIG. 7); otherwise, it proceeds to camera abnormality detection based on own-vehicle information (FIG. 8).
In the camera abnormality detection process based on map information, the image acquisition unit 11 first acquires image data captured by the in-vehicle camera 51 (S701). The image acquisition unit 11 may acquire an image every frame (for example, at 30 fps), or one image every several frames (for example, every 0.5 seconds). The number of images acquired per unit time may be varied with the traveling speed: for example, the acquisition interval is lengthened at low speed and shortened at high speed. By varying the acquisition interval with speed, a structure is captured in a plurality of still images regardless of speed, and its color can be determined reliably.
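One possible way to realize the speed-dependent acquisition interval is to scale a base interval inversely with speed and clamp it; the base interval, reference speed, and clamp bounds below are invented examples.

```python
def capture_interval(speed_kmh, base_interval=0.5, ref_speed=60.0,
                     min_interval=0.1, max_interval=1.0):
    """Scale the image acquisition interval inversely with speed:
    slower travel -> longer interval, faster travel -> shorter one.
    All constants are hypothetical tuning values."""
    if speed_kmh <= 0:
        return max_interval          # stationary: no need to sample fast
    interval = base_interval * ref_speed / speed_kmh
    return max(min_interval, min(max_interval, interval))

print(capture_interval(30))   # 1.0  (slow: clamped to the longest interval)
print(capture_interval(60))   # 0.5  (reference speed: base interval)
print(capture_interval(120))  # 0.25 (fast: shorter interval)
```

The effect is that the vehicle travels roughly the same distance between captured frames, so a roadside structure stays visible across a similar number of samples at any speed.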
Then, the image acquisition unit 11 executes image correction processing (S702). Specifically, the shape of the image is corrected: images acquired by the in-vehicle camera 51 may be distorted, so they are rectified by linear transformation or projective transformation. In addition, the brightness of the image is adjusted by exposure adjustment to facilitate color detection.
After that, the map information acquisition unit 13 acquires map information, which includes at least the positions of intersections, and extracts the information on traffic lights, road signs, pedestrian crossings, and road markings (paint) contained in the acquired map information (S703). When detailed map information is acquired, the positions of structures such as traffic lights, road signs, pedestrian crossings, and road markings (paint) are known. Navigation map information, on the other hand, does not include structure positions, so only the areas near intersections where structures such as traffic lights, road signs, pedestrian crossings, and road markings exist are known.
The map information acquisition unit 13 then determines from the acquired map information whether signs or similar structures exist near the own vehicle (S704). When detailed map information is acquired, the presence or absence of a structure can be determined by whether structure information could be extracted from it. When navigation map information is acquired, if the distance to an intersection is smaller than a predetermined threshold, the vehicle is approaching the intersection, and it can be determined that structures generally found near intersections, such as traffic lights, road signs, pedestrian crossings, and road markings, exist near the own vehicle.
In this embodiment, the processing by the image acquisition unit 11 and the processing by the map information acquisition unit 13 are executed in that order, but either may be executed first, or both may be executed in parallel.
If it is determined in step S704 that there is no structure near the own vehicle (No in S705), the process returns to step S701 and acquires the next image data.
On the other hand, if it is determined in step S704 that a structure exists near the own vehicle (Yes in S705), the structure color specification unit 16 refers to the data in the structure color component table 17 corresponding to the current environment and specifies the colors of the structures determined to be near the own vehicle (S706). For a traffic light, since the lit color is unknown, all lit colors (and the unlit colors, if registered) are specified from the structure color component table 17. Likewise, when the type of installed sign cannot be identified and the color of the sign to be recognized cannot be specified, all possible colors may be specified from the structure color component table 17.
After that, the image area specification unit 14 refers to the structure position table 15 and specifies the structure detection areas where the structures determined to be near the own vehicle are presumed to appear in the image (S707).
In this embodiment, the structure color specification processing by the structure color specification unit 16 and the image area specification processing by the image area specification unit 14 are executed in that order, but either may be executed first, or both may be executed in parallel.
The camera abnormality detection unit 18 then determines whether pixels with the color components of the structures determined to be near the own vehicle exist in the structure detection areas (S708). Specifically, it scans the pixels in each specified structure detection area and determines whether a predetermined number of consecutive pixels satisfy the color conditions specified in step S706.
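The consecutive-pixel condition of S708 can be sketched as a simple run-length scan. The color predicate below is a stand-in for the table-based test (a crude red-dominance check), and the run length of 3 is an invented example.

```python
def matches(pixel):
    """Hypothetical stand-in for the table-based color condition:
    a crude 'dominantly red' test for a lit red signal."""
    r, g, b = pixel
    return r > 180 and g < 90 and b < 90

def color_present(pixels, run_length=3):
    """S708 sketch: True once `run_length` consecutive pixels in the
    scanned detection area satisfy the color condition."""
    run = 0
    for p in pixels:
        run = run + 1 if matches(p) else 0
        if run >= run_length:
            return True
    return False

row = [(30, 30, 30), (200, 50, 40), (210, 60, 50), (220, 40, 30), (25, 25, 25)]
print(color_present(row))                # True: three consecutive matches
print(color_present(row, run_length=4))  # False: the run is only three long
```

Requiring a run of matching pixels, rather than a single one, filters out isolated noise pixels at negligible extra cost.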
When determining whether the color components of a plurality of structures exist, it is advisable to evaluate them in order of ease of color recognition. For example, a lit traffic light is easy to recognize by color, so its color should be recognized first, followed by white paint on the road (for example, a pedestrian crossing) and then signs.
When the color of the structure is detected in the structure detection area, the consecutive counter is initialized to 0 and the abnormal area notified table 20 is cleared (S709), then the process returns to step S701 and acquires the next image data. On the other hand, if the color of the structure is not detected in the structure detection area, the consecutive counter is incremented by 1 (S710), and it is determined whether the consecutive counter is equal to or greater than a predetermined number (S711). If the consecutive counter does not exceed the predetermined threshold, the process returns to step S701 without determining that an abnormality exists, and acquires the next image data.
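The S709 to S711 logic amounts to a consecutive-miss counter over successive images; a minimal sketch follows, with the threshold of 3 as an invented example.

```python
class MissCounter:
    """Sketch of the S709-S711 counter: the camera is flagged abnormal
    only after the expected color has been missing for `threshold`
    consecutive images, so one missed frame causes no false alarm."""

    def __init__(self, threshold=5):
        self.threshold = threshold
        self.count = 0

    def update(self, color_detected):
        """Feed one frame's result; return True when abnormal."""
        if color_detected:
            self.count = 0                    # S709: reset on detection
            return False
        self.count += 1                       # S710: one more consecutive miss
        return self.count >= self.threshold   # S711: threshold check

mc = MissCounter(threshold=3)
results = [mc.update(d) for d in [False, False, True, False, False, False]]
print(results)  # [False, False, False, False, False, True]
```

Note how the single successful detection in the middle resets the count, so only an uninterrupted run of misses, consistent with a persistent deposit on the lens, triggers the abnormality.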
On the other hand, if the consecutive counter exceeds the predetermined threshold, the camera abnormality detection unit 18 determines that the camera is abnormal, because a structure that should be present does not appear in the image. The abnormal area notification determination unit 19 then refers to the abnormal area notified table 20 and determines whether the detected abnormality has already been notified (S712).
If the detected abnormality is registered in the abnormal area notified table 20, the dirt abnormality of that camera has already been notified (Yes in S713), so the process returns to step S701 without issuing another notification and acquires the next image data.
On the other hand, if the detected abnormality is not registered in the abnormal area notified table 20, the dirt abnormality of the in-vehicle camera 51 has not been notified (No in S713), so it is determined that a new camera dirt abnormality has been detected, and the abnormality notification unit 21 notifies the occupant and other ECUs of the camera dirt abnormality (S714). The occupant, seeing the camera dirt warning displayed on the instrument panel, removes the dirt from the front of the camera (for example, the lens). On receiving the camera dirt notification, an ECU determines that the video from the in-vehicle camera 51 is unreliable and executes vehicle control processing without using that video until the dirt is removed. For example, the ECU may stop automated driving (AD) control that uses the video from the in-vehicle camera 51 and notify the occupant that automated driving has stopped. Upon detection of the camera dirt abnormality, the ECU may also shut down the system that includes the dirty in-vehicle camera 51, stop automated driving control due to the reduced redundancy, and switch to advanced driver assistance (ADAS).
In the description above, the presence of the color components of the structure is determined with the image treated as a single area, but the image may be divided into a plurality of areas (for example, 16 areas in a 4 × 4 grid) and the presence of the color components determined for each area. In this case, a consecutive counter and an abnormal area notified table 20 are provided for each area; in step S708 it is determined, for each divided area, whether pixels with the color components of the structure exist in the structure detection area, and in steps S709 and S710 the consecutive counter and the abnormal area notified table 20 are updated for each area. By determining for each divided area whether the color components of the structure exist, it is possible not only to detect that foreign matter adheres to the front of the in-vehicle camera 51, but also to identify the area to which it adheres.
Next, the camera abnormality detection process based on own-vehicle information, executed when map information cannot be acquired, is described. In the camera abnormality detection process based on own-vehicle information (FIG. 8), processing steps that are the same as in the camera abnormality detection process based on map information (FIG. 7) are given the same reference numerals, and their detailed description is omitted.
In the camera abnormality detection process based on own-vehicle information, the image acquisition unit 11 first acquires image data captured by the in-vehicle camera 51 (S701) and executes image correction processing (S702).
After that, the own-vehicle information acquisition unit 12 acquires own-vehicle information 41 (S803). The own-vehicle information 41 acquired includes, as information on the behavior of the vehicle, the speed, acceleration, steering angle, and the occupant's operation amounts (brake operation amount, accelerator operation amount).
Based on the acquired own-vehicle information 41, the own-vehicle information acquisition unit 12 then estimates whether there is a preceding vehicle traveling ahead of the own vehicle and whether there is a traffic light near the own vehicle (S804). Specifically, when low-speed travel and stops are repeated, the road is congested and a preceding vehicle can be presumed; when the vehicle decelerates from constant-speed travel to a stop, it can be presumed that the vehicle is approaching an intersection with a traffic light.
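The two S804 heuristics can be sketched as a classifier over recent speed samples. The thresholds, window shape, and label names below are all invented examples of one way such a heuristic could be tuned.

```python
def estimate_situation(speeds):
    """Classify recent speed samples (km/h, oldest first):
    - "congested": repeated stops at low speed -> a preceding vehicle
      (and its red tail lamps) is presumed ahead;
    - "approaching_signal": deceleration from cruising to a stop ->
      an intersection with a traffic light is presumed nearby.
    All thresholds are hypothetical tuning values."""
    stops = sum(1 for v in speeds if v < 1.0)
    if stops >= 2 and max(speeds) < 20.0:
        return "congested"
    if speeds[0] > 40.0 and speeds[-1] < 1.0:
        return "approaching_signal"
    return "unknown"

print(estimate_situation([8, 0, 5, 0, 7, 0]))        # congested
print(estimate_situation([55, 40, 25, 10, 3, 0]))    # approaching_signal
print(estimate_situation([60, 62, 61, 63, 60, 62]))  # unknown
```

Either presumption selects which structure (preceding vehicle's tail lamps, or traffic light / sign / crosswalk) is expected in the image, replacing the map lookup of FIG. 7.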
After that, the structure color specification unit 16 refers to the data in the structure color component table 17 corresponding to the current environment and specifies the colors of the structures determined to be near the own vehicle (S706). The image area specification unit 14 then refers to the structure position table 15 and specifies the structure detection areas where those structures are presumed to appear in the image (S707). For example, when a preceding vehicle is presumed, the area of the preceding vehicle is used as the structure detection area, and the red of the tail lamps is detected within it. When the vehicle is presumed to be approaching an intersection, the colors of the respective structures are detected within the structure detection areas for traffic lights, road signs, and pedestrian crossings, as described above.
The camera abnormality detection unit 18 then determines whether pixels with the color components of the structures determined to be near the own vehicle exist in the structure detection areas (S708). When the color of the structure is detected in the structure detection area, the consecutive counter is initialized to 0 and the abnormal area notified table 20 is cleared (S709), then the process returns to step S701 and acquires the next image data. If the color of the structure is not detected, the consecutive counter is incremented by 1 (S710), and it is determined whether the consecutive counter is equal to or greater than a predetermined number (S711). If the consecutive counter does not exceed the predetermined threshold, the process returns to step S701 without determining that an abnormality exists, and acquires the next image data.
On the other hand, if the consecutive counter exceeds the predetermined threshold, the predetermined color component has repeatedly failed to be detected, so the abnormal area notification determination unit 19 refers to the abnormal area notified table 20 and determines whether the detected abnormality has already been notified (S712).
If the abnormality is registered in the abnormal area notified table 20, the dirt abnormality of that camera has already been notified (Yes in S713), so the process returns to step S701 without issuing another notification and acquires the next image data.
On the other hand, if the abnormality is not registered in the abnormal area notified table 20, the dirt abnormality of that camera has not been notified (No in S713), so the abnormal area notification determination unit 19 determines that a new camera dirt abnormality has been detected, and the abnormality notification unit 21 notifies the occupant and other ECUs of the camera dirt abnormality (S714).
As described above, the image may be divided into a plurality of areas (for example, 16 areas in a 4 × 4 grid) and the presence of the color components of the structure determined for each area.
As described above, the image processing device 1 of the embodiment of the present invention comprises the image acquisition unit 11 that acquires an image from the in-vehicle camera 51; the structure color specification unit 16 that specifies color information of a structure based on the type of structure presumed to exist around the own vehicle; the image area specification unit 14 that specifies, based on the type of the structure, an area in the image where the structure is presumed to exist; and the camera abnormality detection unit 18 that detects an abnormality of the in-vehicle camera 51. When a structure exists around the own vehicle, the camera abnormality detection unit 18 detects an abnormality of the in-vehicle camera 51 by comparing the color information obtained from the structure color specification unit 16 with the color components of the area in the image where the structure is presumed to exist, so that an abnormality of the in-vehicle camera 51 can be detected with a low processing load. That is, since only the presence or absence of the color components of the structure is determined, the amount of information used for the determination is small, which reduces the processing load and facilitates detection. Detection of deposits on the front of the camera (lens), which degrade performance, can thus be easily implemented even in systems without high processing performance. Furthermore, by using objects with common colors, such as traffic lights and signs, as the structures, the color conditions of a structure can easily be set based on its type.
Since the image area specification unit 14 acquires information on structures presumed to exist around the own vehicle from map information, it can accurately specify the areas in the image where structures are presumed to exist, based on the traffic light, road sign, and pedestrian crossing information contained in a detailed map, or based on the intersection positions contained in a navigation map. This makes it easy to determine whether the colors of structures that should appear in the image are present.
Since the own-vehicle information acquisition unit 12 estimates information on structures presumed to exist around the own vehicle from the state of the own vehicle, a preceding vehicle can be used as the structure, and an abnormality of the in-vehicle camera 51 can be detected with a low processing load. An approach to an intersection can also be detected even when map information cannot be acquired.
Since the own-vehicle information acquisition unit 12 estimates the state of the vehicle from the vehicle speed, the presence of a preceding vehicle and an approach to an intersection can be detected even when map information cannot be acquired.
Since the structure position table 15 holds, for each type of structure, the area where the structure is presumed to exist, the image area specification unit 14 can easily obtain the area corresponding to each type of structure.
The camera abnormality detection unit 18 detects an abnormality of the in-vehicle camera 51 when, for a predetermined number of consecutive times, the color information obtained from the structure color specification unit 16 cannot be detected in the area of the image where the structure is presumed to exist. That is, since it determines that the lens has dirt obstructing the view of the outside world only in an area where the predetermined color fails to be detected consecutively, momentary non-detections are excluded and an abnormality of the in-vehicle camera 51 can be detected reliably.
Since the abnormal area notification determination unit 19 notifies the occupant and/or other ECUs when the camera abnormality detected by the camera abnormality detection unit 18 has not yet been notified, the occupant can be prompted to remove dirt from the front of the camera (lens). An ECU can execute vehicle control processing without using the video from that camera, thereby ensuring safety.
When a plurality of structures exist, holding a plurality of consecutive counters, each associated with a structure and its structure detection area, makes it possible to evaluate abnormality detection in more areas at the same time.
The present invention is not limited to the embodiment described above, and includes various modifications and equivalent configurations within the spirit of the appended claims. For example, the embodiment described above has been described in detail in order to explain the present invention in an easy-to-understand manner, and the present invention is not necessarily limited to configurations having all of the described elements. A part of the configuration of one embodiment may be replaced with the configuration of another embodiment, and the configuration of another embodiment may be added to the configuration of one embodiment. For a part of the configuration of each embodiment, other configurations may be added, deleted, or replaced.
Each of the configurations, functions, processing units, processing means, etc. described above may be realized in hardware by designing a part or all of them as, for example, an integrated circuit, or may be realized in software by a processor interpreting and executing a program that implements each function.
Information such as programs, tables, and files that implement each function can be stored in a storage device such as a memory, a hard disk, or an SSD (Solid State Drive), or on a recording medium such as an IC card, SD card, or DVD.
The control lines and information lines shown are those considered necessary for explanation, and do not necessarily represent all the control lines and information lines required for implementation. In practice, almost all configurations may be considered to be interconnected.
Claims (11)
前記車載カメラより画像を取得する画像取得部と、
自車両の周囲に存在すると推定される構造物の種類に基づいて、当該構造物の色情報を特定する構造物色特定部と、
前記構造物の種類に基づいて、前記画像の中で前記構造物が存在すると推定される領域を特定する画像領域特定部と、
前記車載カメラの異常を検知するカメラ異常検知部とを備え、
前記カメラ異常検知部は、前記自車両の周囲に前記構造物が存在する場合、前記構造物色特定部から得られた色情報と、前記画像の中に前記構造物が存在すると推定される領域の色成分とを比較して、前記車載カメラの異常を検知することを特徴とする画像処理装置。 An image processing device that processes images taken by an in-vehicle camera.
An image acquisition unit that acquires an image from the in-vehicle camera,
A structure color specifying part that specifies color information of the structure based on the type of structure that is presumed to exist around the own vehicle.
An image area specifying portion that specifies a region in which the structure is presumed to exist in the image based on the type of the structure.
It is equipped with a camera abnormality detection unit that detects abnormalities in the in-vehicle camera.
When the structure is present around the own vehicle, the camera abnormality detecting unit includes color information obtained from the structure color specifying unit and a region in which the structure is presumed to exist in the image. An image processing device characterized by detecting an abnormality in the in-vehicle camera by comparing it with a color component.
前記画像領域特定部は、自車両の周囲に存在すると推定される構造物の情報を地図情報から取得することを特徴とする画像処理装置。 The image processing apparatus according to claim 1.
The image area specifying unit is an image processing device characterized in that information on a structure presumed to exist around the own vehicle is acquired from map information.
自車両の周囲に存在すると推定される構造物の情報を自車両の状態から推定する自車情報取得部を有することを特徴とする画像処理装置。 The image processing apparatus according to claim 1.
An image processing device having an own vehicle information acquisition unit that estimates information on a structure presumed to exist around the own vehicle from the state of the own vehicle.
前記自車情報取得部は、車両の状態を車速から推定することを特徴とする画像処理装置。 The image processing apparatus according to claim 3.
The own vehicle information acquisition unit is an image processing device characterized in that the state of the vehicle is estimated from the vehicle speed.
前記構造物の種類に対応して、構造物が存在すると推定される領域を示す構造物位置情報を保持し、
前記画像領域特定部は、前記構造物位置情報を参照して、前記画像の中で前記構造物が存在すると推定される領域を特定することを特徴とする画像処理装置。 The image processing apparatus according to claim 1.
Corresponding to the type of the structure, the structure position information indicating the region where the structure is presumed to exist is retained, and the structure position information is retained.
The image region specifying unit is an image processing apparatus that refers to the structure position information and identifies a region in which the structure is presumed to exist in the image.
An image processing method in which an electronic control device processes an image captured by an in-vehicle camera, wherein the electronic control device includes an arithmetic unit that executes predetermined arithmetic processing and a storage device that stores data necessary for the arithmetic processing, the image processing method comprising:
an image acquisition procedure in which the arithmetic unit acquires an image from the in-vehicle camera;
a structure color specifying procedure in which the arithmetic unit specifies color information of a structure based on the type of the structure presumed to exist around the own vehicle;
an image area specifying procedure in which the arithmetic unit identifies a region in the image where the structure is presumed to exist, based on the type of the structure; and
a camera abnormality detection procedure in which the arithmetic unit detects an abnormality in the in-vehicle camera,
wherein, in the camera abnormality detection procedure, when the structure is present around the own vehicle, the arithmetic unit detects an abnormality in the in-vehicle camera by comparing the color information obtained in the structure color specifying procedure with the color components of the region in the image where the structure is presumed to exist.
The image processing method according to claim 6, wherein, in the image area specifying procedure, the arithmetic unit acquires information on structures presumed to exist around the own vehicle from map information.
The image processing method according to claim 6, further comprising an own vehicle information acquisition procedure in which the arithmetic unit estimates information on structures presumed to exist around the own vehicle from the state of the own vehicle.
The image processing method according to claim 8, wherein, in the own vehicle information acquisition procedure, the arithmetic unit estimates the state of the vehicle from the vehicle speed.
The image processing method according to claim 6, wherein the storage device holds structure position information indicating the region where a structure is presumed to exist for each type of structure, and, in the image area specifying procedure, the arithmetic unit refers to the structure position information to identify the region in the image where the structure is presumed to exist.
An in-vehicle electronic control device comprising:
an arithmetic unit that executes predetermined arithmetic processing; and
a storage device that stores data necessary for the arithmetic processing,
wherein the arithmetic unit includes:
an image acquisition unit that acquires an image from the in-vehicle camera;
a structure color specifying unit that specifies color information of a structure based on the type of the structure presumed to exist around the own vehicle;
an image area specifying unit that identifies a region in the image where the structure is presumed to exist, based on the type of the structure; and
a camera abnormality detection unit that detects an abnormality in the in-vehicle camera,
wherein, when the structure is present around the own vehicle, the camera abnormality detection unit detects an abnormality in the in-vehicle camera by comparing the color information obtained from the structure color specifying unit with the color components of the region in the image where the structure is presumed to exist.
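The core check recited in the claims — comparing the expected color of a structure against the color components actually observed in the image region where that structure is presumed to exist — can be sketched as follows. This is a minimal illustration, not the patented implementation: the `STRUCTURE_COLORS` table, the region format, and the distance threshold are all hypothetical stand-ins for the structure color specifying unit, the image area specifying unit, and whatever comparison criterion a real system would use.

```python
import numpy as np

# Hypothetical reference colors per structure type (RGB). In the claimed
# system these would come from the structure color specifying unit, using
# structure types obtained from map information or the vehicle state.
STRUCTURE_COLORS = {
    "guardrail": np.array([200, 200, 200]),    # whitish metal
    "road_surface": np.array([90, 90, 95]),    # asphalt gray
}

def detect_camera_abnormality(image, region, structure_type, threshold=60.0):
    """Compare the mean color of the region where a structure is presumed
    to exist with the expected color for that structure type; a large
    deviation suggests a camera abnormality (e.g. color failure or
    lens contamination).

    image: HxWx3 uint8 array; region: (top, left, bottom, right) pixels.
    The Euclidean distance threshold is an illustrative assumption.
    """
    top, left, bottom, right = region
    patch = image[top:bottom, left:right].reshape(-1, 3).astype(float)
    mean_color = patch.mean(axis=0)
    expected = STRUCTURE_COLORS[structure_type].astype(float)
    distance = np.linalg.norm(mean_color - expected)
    return bool(distance > threshold)  # True = abnormality suspected

# Example: a strongly red-tinted frame where gray asphalt is expected
# in the lower half of the image.
frame = np.full((120, 160, 3), (180, 40, 40), dtype=np.uint8)
print(detect_camera_abnormality(frame, (60, 0, 120, 160), "road_surface"))  # True
```

Because the comparison uses a known structure rather than frame-to-frame differences, a single image suffices, which is consistent with the claims' emphasis on detecting abnormalities easily whenever a suitable structure is in view.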
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2022545316A JP7446445B2 (en) | 2020-08-25 | 2021-05-21 | Image processing device, image processing method, and in-vehicle electronic control device |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2020141436 | 2020-08-25 | ||
| JP2020-141436 | 2020-08-25 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2022044453A1 true WO2022044453A1 (en) | 2022-03-03 |
Family
ID=80353065
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2021/019374 Ceased WO2022044453A1 (en) | 2020-08-25 | 2021-05-21 | Image processing device, image processing method, and vehicle onboard electronic control device |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JP7446445B2 (en) |
| WO (1) | WO2022044453A1 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2001103496A (en) * | 1999-09-30 | 2001-04-13 | Mitsubishi Electric Corp | Image processing device |
| JP2014096712A (en) * | 2012-11-09 | 2014-05-22 | Fuji Heavy Ind Ltd | Vehicle exterior environment recognition device |
2021
- 2021-05-21: WO application PCT/JP2021/019374 published as WO2022044453A1 (not active, Ceased)
- 2021-05-21: JP application JP2022545316A granted as JP7446445B2 (Active)
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2022044453A1 (en) | 2022-03-03 |
| JP7446445B2 (en) | 2024-03-08 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN113160594B (en) | Change point detection device and map information publishing system | |
| US9773177B2 (en) | Surrounding environment recognition device | |
| CN109117709B (en) | Collision avoidance system for autonomous vehicles | |
| US7983447B2 (en) | Imaging environment recognition device | |
| JP7040374B2 (en) | Object detection device, vehicle control system, object detection method and computer program for object detection | |
| US10730503B2 (en) | Drive control system | |
| US9360332B2 (en) | Method for determining a course of a traffic lane for a vehicle | |
| CN104185588B (en) | Vehicle-mounted imaging system and method for determining road width | |
| JP5399027B2 (en) | A device having a system capable of capturing a stereoscopic image to assist driving of an automobile | |
| US10552706B2 (en) | Attachable matter detection apparatus and attachable matter detection method | |
| US20080007429A1 (en) | Visibility condition determining device for vehicle | |
| JP7251582B2 (en) | Display controller and display control program | |
| US11679769B2 (en) | Traffic signal recognition method and traffic signal recognition device | |
| US9262817B2 (en) | Environment estimation apparatus and vehicle control system | |
| CN105393293A (en) | Vehicle-mounted device | |
| US11697346B1 (en) | Lane position in augmented reality head-up display system | |
| US11663834B2 (en) | Traffic signal recognition method and traffic signal recognition device | |
| JP4951481B2 (en) | Road marking recognition device | |
| JP7446445B2 (en) | Image processing device, image processing method, and in-vehicle electronic control device | |
| JP2022161700A (en) | Traffic light recognition device | |
| CN103748600A (en) | Method and device for detecting disturbing objects in the surrounding air of vehicle | |
| CN115565363A (en) | Signal recognition device | |
| CN115179864A (en) | Control device and control method for moving body, storage medium, and vehicle | |
| US20250225875A1 (en) | Notification system | |
| EP3865815A1 (en) | Vehicle-mounted system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21860884 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2022545316 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 21860884 Country of ref document: EP Kind code of ref document: A1 |