WO2018150661A1 - On-board image capture device (Dispositif de capture d'images embarqué)
- Publication number
- WO2018150661A1 (PCT/JP2017/040721)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- dirt
- area
- wiping
- vehicle
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/22—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Definitions
- The present invention relates to an in-vehicle imaging apparatus that is mounted on the body of an automobile and can image the outside of the vehicle.
- In-vehicle imaging devices are becoming widespread in which an in-vehicle camera mounted on the body of an automobile images at least one of the front, rear, and side directions of the host vehicle, or the entire periphery, and the captured image is displayed on a monitor screen inside the vehicle.
- Since the in-vehicle camera is attached to the outside of the vehicle body, dirt such as raindrops and mud tends to adhere to the surface of its lens (including any lens protection glass). If dirt adheres to the lens surface, it is reflected in the captured image of the in-vehicle camera and blocks the scenery.
- A lens dirt removing device is known that blows water or air onto the lens surface of an in-vehicle camera to remove dirt adhering to the lens surface.
- Japanese Patent Laid-Open No. 2014-11785 discloses a lens dirt removal diagnostic technique for diagnosing whether such a lens dirt removal apparatus has functioned normally. In this technique, captured images of the in-vehicle camera before and after the operation of the lens dirt removing device are compared, and removal of the dirt is diagnosed from changes in contrast and edge strength.
- Japanese Patent Laid-Open No. 2016-15583 discloses a lens dirt removal sequential diagnosis technique as a method capable of diagnosing dirt removal even when the timing of the dirt removal operation is unknown.
- In this sequential diagnosis technique, a captured image taken by the in-vehicle camera immediately after the host vehicle stops is used as a reference image, and difference images between the reference image and the images subsequently captured by the in-vehicle camera are generated sequentially. Each difference image is compared with a previously detected dirt adhesion region, and if a difference is detected in the majority of that region, it is diagnosed that the dirt has been wiped away.
- The object of the present invention is to reduce the possibility of erroneous determination when diagnosing that dirt on the lens surface has been wiped off, even when the timing of the operation that removes the dirt attached to the lens surface of the in-vehicle camera is unknown.
- Another object of the present invention is to provide an in-vehicle image pickup apparatus having a lens dirt wiping diagnostic apparatus.
- The present invention includes an in-vehicle camera that captures images through a lens, and a dirt detection unit that detects the presence or absence of dirt on the lens, or a dirt adhesion area, based on the captured images sequentially output by the in-vehicle camera.
- The dirt wiping diagnosis unit includes a stable state determination unit that determines, based on the time change of the captured images sequentially output by the in-vehicle camera, whether the captured image is in a stable state, and a wiping area extraction unit that extracts a dirt wiping area based on the time change of the captured images at times when the stable state determination unit has determined a stable state.
- The dirt state management unit stores the presence or absence of dirt, or the dirt adhesion area, detected by the dirt detection unit, reflects the wiping area extracted by the wiping area extraction unit in the stored dirt presence or dirt adhesion area, and thereby manages the dirt on the lens.
- According to the present invention, it is possible to provide an in-vehicle imaging device having a lens dirt wiping diagnostic device that can reduce the possibility of erroneous determination when diagnosing that dirt on the lens surface has been wiped off, even when the timing of the operation that removes the dirt is unknown.
- FIG. 1 is a block diagram showing a schematic configuration of an in-vehicle imaging device according to Embodiment 1 of the present invention.
- FIG. 6 is a flowchart showing an example of the processing of the stable state determination unit.
- FIG. 7 is a flowchart showing an example of the processing of the wiping area extraction unit.
- FIG. 8 is a block diagram illustrating a schematic configuration of an in-vehicle imaging device according to Embodiment 2 of the present invention.
- FIG. 10 is a block diagram illustrating a schematic configuration of an in-vehicle imaging device according to Embodiment 3 of the present invention.
- A block diagram illustrating a schematic configuration of an in-vehicle imaging device according to Embodiment 4 of the present invention.
- FIG. 1 is a block diagram showing a schematic configuration of an in-vehicle imaging device according to Embodiment 1 of the present invention.
- the in-vehicle imaging device 100 of the present embodiment includes an in-vehicle camera 1, a dirt wiping diagnosis unit 2, a dirt detection unit 3, and a dirt state management unit 4.
- the dirt wiping diagnosis unit 2 includes a stable state determination unit 21 and a wiping region extraction unit 22.
- the dirt wiping diagnosis unit 2, the dirt detection unit 3, and the dirt state management unit 4 are also referred to as a lens dirt wiping diagnosis device (the same applies to the other embodiments below).
- the in-vehicle camera 1 is installed in a vehicle (not shown), and takes an image in front of the in-vehicle camera 1 via a lens (including a lens protection glass).
- the in-vehicle camera 1 sequentially outputs an image (captured image) obtained by imaging the front of the in-vehicle camera 1. For example, when the frame rate is 20 fps, the in-vehicle camera 1 captures an image and outputs the captured image every 50 milliseconds.
- The stable state determination unit 21 sequentially acquires the captured images output from the in-vehicle camera 1 and, each time, extracts the change from the captured image acquired immediately before (the time change of the captured image). If the magnitude of the change is below a predetermined value, it determines that the captured image is temporally stable (stable state); conversely, if it exceeds the predetermined value, it determines that the captured image is not temporally stable (unstable state).
- Here, the "time change of the captured image" refers to the difference obtained by comparing the image data of a captured image at a certain time (image data of any type) with the image data of a captured image after a certain period has elapsed, for example the change over time of each pixel of the captured image. The "magnitude of the time change of the captured image" is, for example, the size of the area of the changed region, or the amount of change in each pixel of the image data. The "time change of a feature amount of the captured image" may also be used as the "time change of the captured image".
- Extraction of the time change of a captured image may mean extracting, from the entire captured image, the region in which a difference arises as a result of the comparison (that is, the region that has changed over time); in some cases it means extracting, from the many sequentially captured images, those captured images in which a difference arises.
- Hereinafter, the image data of a captured image may be referred to simply as a "captured image", and the image data of a difference image simply as a "difference image".
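The "difference image" just defined can be sketched as follows. This is illustrative code only, not part of the patent disclosure; a captured image is modelled as a 2D list of grey values, and the function name is hypothetical.

```python
def difference_image(img_a, img_b):
    """Per-pixel absolute difference between two captured images;
    nonzero pixels mark where the image has changed over time."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]
```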
- The stable state determination unit 21 finally outputs the determination result as a stable state signal. The stable state occurs when no moving body (for example, a person cleaning the lens surface of the in-vehicle camera 1, or another vehicle) appears in the captured image, or when the amount of movement of a moving body in the captured image is small.
- The wiping area extraction unit 22 refers to the stable state signal output from the stable state determination unit 21, acquires the captured images output from the in-vehicle camera 1 only in the stable state, and each time extracts the change from the captured image acquired in the previous stable state. As a result, the change of the captured image is extracted only in states where no moving body appears in the captured image, or where the movement of a moving body is small (stable state). A region with a large change is then output to the subsequent dirt state management unit 4 as the region from which dirt attached to the lens surface of the in-vehicle camera 1 has been wiped off (wiping area).
- the dirt detection unit 3 detects and outputs an adhesion area (dirt detection area) of dirt attached to the lens surface of the in-vehicle camera 1 from captured images sequentially output from the in-vehicle camera 1.
- For the dirt detection unit 3, a known method such as that described in JP 2012-38048 A can be used; for example, a region whose luminance changes little over a long period is detected from the captured images taken while the host vehicle is traveling, and this region is set as the dirt detection region. Some methods detect only the presence or absence of dirt, rather than the dirt adhesion area. In that case, the dirt detection unit 3 outputs the strength of the detection response (dirt detection intensity), which corresponds to the dirt area and density, instead of a dirt detection region.
- While the dirt detection unit 3 is functioning, the dirt state management unit 4 refers to the dirt detection region or the dirt detection intensity it outputs, stores the region of dirt adhering to the lens surface of the in-vehicle camera 1 as a dirt storage region, and manages the presence or absence of dirt as the dirt state (the dirt storage region and the dirt state are held in a nonvolatile storage unit, not shown).
- The dirt state management unit 4 also refers to the wiping area output from the wiping area extraction unit 22 and reflects it in the managed dirt state and dirt storage region. For example, when the wiping area exceeds a predetermined area, the dirt state management unit 4 changes the stored dirt state from "dirty" to "no dirt". Likewise, it erases the wiping area from the stored dirt storage region.
- the dirt state management unit 4 may store the presence or absence of dirt or the size of the area of the dirt storage area as a dirt state.
- The dirt state management unit 4 also functions to notify the user of the managed dirt state and dirt storage region (the lens dirt management status) via a warning light or a speaker (not shown), or to notify a vehicle control system (not shown) such as an automatic driving system that automatically controls the vehicle.
- FIG. 2A is an example of a captured image captured by the in-vehicle camera 1 when water droplets adhere to the lens surface of the in-vehicle camera 1. Water drops are reflected in the areas 31 and 32 of the captured image, respectively, and the scenery beyond that is blocked. Further, FIG. 2B shows a captured image in the case where water drops reflected in the region 31 are removed.
- FIG. 3 is a difference image obtained by taking the difference between the two captured images of FIGS. 2A and 2B. In the difference image of FIG. 3, only the region 31 from which water droplets have been removed is extracted.
- FIG. 4 is an example of a captured image when a moving pedestrian appears in front of the in-vehicle camera 1.
- a moving pedestrian is shown in the region 33 of the captured image.
- FIG. 5 shows a difference image between the captured image of FIG. 2A and the captured image of FIG. 4.
- In the difference image of FIG. 5, not only the pedestrian in region 33 but also the water drop in region 31 is extracted. This is because the pedestrian in region 33 overlaps the water drop in region 31, and the reflected light from the pedestrian changes the brightness of the water drop.
- Therefore, the stable state determination unit 21 detects a state in which no moving body appears in front of the in-vehicle camera 1.
- FIG. 6 shows an example of a flowchart of the processing of the stable state determination unit 21.
- First, the captured images sequentially output from the in-vehicle camera 1 are acquired (step 2101), and a difference image (sequential difference image) between the acquired captured image and the captured image acquired immediately before it is generated (step 2102). That is, the image data of the acquired captured image is compared with the image data of the previously acquired captured image, and image data retaining only the differing portions of the comparison is generated as the image data of the difference image.
- Next, a region whose difference in the sequential difference image is equal to or larger than a predetermined value is extracted as a large-difference region (sequential difference region) (step 2103), and the area of the sequential difference region (sequential difference area) is derived (step 2104).
- "Large" in the large-difference region extracted in step 2103 means large enough to exclude differences other than those caused by the movement of a moving body, such as error differences due to noise in the image data of the captured image; it is therefore desirable to tune the threshold to the actual machine environment. "Extracting a region having a large difference" means extracting, from the entire captured image, candidate regions whose difference is caused by the movement of a moving body.
- In step 2105, it is determined whether the sequential difference area is equal to or smaller than a predetermined value. If it is, it is determined that no moving body appears, or that the movement of the moving body is small (stable state) (step 2106); if it exceeds the predetermined value, it is determined that a moving body appears (unstable state) (step 2107). "Small" here means movement small enough that the state can still be regarded as stable even though a moving body appears, that is, small enough not to be mistaken for dirt wiping; again, the threshold should be tuned to the actual machine environment. Finally, the determination result is output as the stable state signal (steps 2106 and 2107). This series of processing is repeated each time a captured image from the in-vehicle camera 1 is input.
- As a modification of step 2104 of the flowchart of FIG. 6, the sequential difference area may be accumulated over a certain period and the accumulated value used as the sequential difference area.
- As a modification of step 2105, the average or maximum of the sequential difference areas over a certain period may be calculated; if the calculated value is smaller than a predetermined value, the stable state signal is set to the stable state, and if it exceeds the predetermined value, to the unstable state.
- As a modification of step 2101, the captured images sequentially output from the in-vehicle camera 1 may be thinned out before acquisition. By thinning out the captured images, the movement of a moving body becomes easier to capture, and the stable state signal can be stabilized.
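The stable state determination of FIG. 6 (steps 2101 to 2107) can be sketched as follows. This is an illustrative sketch only; images are 2D lists of grey values, and the threshold names DIFF_THRESH and AREA_THRESH are hypothetical values that, as the text notes, would be tuned to the actual machine environment.

```python
DIFF_THRESH = 20   # per-pixel change below this is treated as noise (step 2103)
AREA_THRESH = 50   # changed-pixel count above this implies a moving body

def sequential_difference_area(prev_img, cur_img):
    """Steps 2102-2104: difference image -> large-difference region -> area,
    here simply the count of pixels whose change exceeds DIFF_THRESH."""
    return sum(1 for prev_row, cur_row in zip(prev_img, cur_img)
               for p, c in zip(prev_row, cur_row)
               if abs(c - p) >= DIFF_THRESH)

def is_stable(prev_img, cur_img):
    """Step 2105: the state is stable if the sequential difference area
    is at or below the predetermined value."""
    return sequential_difference_area(prev_img, cur_img) <= AREA_THRESH
```

A large changed region (e.g. a passing pedestrian) drives the area above AREA_THRESH and yields the unstable state; an unchanged scene yields the stable state.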
- The stable state determination unit 21 may also, instead of using the captured image of the in-vehicle camera 1 as it is, extract a feature amount such as an edge image or a HOG (Histogram of Oriented Gradients) feature amount from the captured image (the extraction result is here called a feature amount image) and use this feature amount image in place of the captured image.
- This can improve detection performance and reduce the amount of data to be processed.
- With the HOG feature amount, unnecessary image changes due to slight fluctuations of the moving body or of ambient light become harder to extract.
- If an average value such as luminance or edge strength is extracted from the captured image and used, the amount of data to be processed can be greatly reduced.
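As one illustration of such a feature amount image, a crude gradient-magnitude "edge image" can stand in for the edge or HOG feature images mentioned above. This is only a sketch to show the idea of differencing feature images rather than raw pixels; a real system might use a proper HOG implementation.

```python
def edge_image(img):
    """Sum of horizontal and vertical gradient magnitudes per pixel; the
    last row and column, having no forward neighbour, are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h - 1):
        for c in range(w - 1):
            out[r][c] = (abs(img[r][c + 1] - img[r][c])
                         + abs(img[r + 1][c] - img[r][c]))
    return out
```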
- As described above, the wiping area extraction unit 22 extracts the time change of the captured image only from captured images in the stable state, in which no moving body appears in front of the in-vehicle camera 1 or the movement of any moving body is small, and thereby detects the region from which dirt has been wiped.
- FIG. 7 shows an example of a flowchart of processing of the wiping area extraction unit 22.
- First, the stable state signal output from the stable state determination unit 21 is referenced to determine whether it indicates the stable state (step 2201). If so, a captured image from the in-vehicle camera 1 is acquired (step 2202). A difference image between this stable-state captured image and the stable-state captured image acquired in the previous round is then generated (step 2203). A region with a large difference (difference region) is extracted from the difference image (step 2204), and the difference region is output as the dirt wiping area (step 2205).
- The large-difference region extracted in step 2204 is a region whose difference is equal to or greater than a predetermined value. This predetermined value may be the same as or different from the one used in step 2103; in either case it is desirable to tune it to the actual machine environment. This series of processing is repeated every time a captured image is output from the in-vehicle camera 1.
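Steps 2203 to 2205 can be sketched as below. This is illustrative only; images are 2D lists of grey values, and the default threshold is a hypothetical value to be tuned to the actual environment.

```python
def extract_wiping_mask(prev_stable_img, cur_stable_img, diff_thresh=20):
    """Returns a boolean mask that is True where the change between the
    two stable-state captured images is at or above the threshold; such
    pixels form the candidate dirt wiping area."""
    return [[abs(c - p) >= diff_thresh for p, c in zip(prev_row, cur_row)]
            for prev_row, cur_row in zip(prev_stable_img, cur_stable_img)]
```

For example, a bright water-drop pixel that disappears between two stable frames is marked True in the mask.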
- The wiping area extraction unit 22 may add a process, between step 2203 and step 2204, that adds up and accumulates the difference images generated in step 2203 over a certain period. If the difference images are added over a period long enough to include both the start and the end of headlight illumination, the change in the brightness of the dirt caused by the temporary illumination cancels out and can be removed.
- The period over which the difference images are accumulated can be set, for example, as a fixed period after a difference is first extracted, as the period during which the illuminance of the in-vehicle camera 1 is at or above a predetermined value, or by a headlight detection process (not shown). As an example of headlight detection processing, simple headlight detection can be realized by detecting a high-luminance circular region in the captured image.
- Alternatively, instead of accumulating the difference images, the difference regions extracted in step 2204 may be accumulated for a certain period and the accumulated region used as the wiping area. This makes it possible to avoid missing part of the dirt wiping area.
- The certain period over which the difference images or difference regions are accumulated may be a predetermined time, but it is preferable to continue until the dirt state managed by the dirt state management unit 4 becomes "no dirt", or until the dirt detection unit 3 starts functioning. This avoids missing part of the dirt wiping area. It is also preferable to output the wiping area sequentially, so that the dirt state management unit 4 can manage the dirt wiping state even during the accumulation period.
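One way to realise the cancellation described above is to accumulate *signed* frame-to-frame differences over the window: a temporary brightening (headlight on, then off) sums to zero, while a persistent change (dirt wiped off) survives. This sketch is only one possible reading of the accumulation step, with images as 2D lists.

```python
def accumulate_signed_diffs(frames):
    """Sum signed frame-to-frame differences over a window of frames.
    Transient changes that revert within the window cancel to zero;
    persistent changes remain in the accumulated image."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for r in range(h):
            for c in range(w):
                acc[r][c] += cur[r][c] - prev[r][c]
    return acc
```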
- Like the stable state determination unit 21, the wiping area extraction unit 22 may extract a feature amount such as an edge image or a HOG feature amount from the captured image (the extraction result is here called a feature amount image) and use this feature amount image in place of the captured image itself. As before, this can improve detection performance and reduce the amount of data to be processed, and with the HOG feature amount, unnecessary image changes due to slight fluctuations of the moving body or of ambient light become harder to extract.
- As described above, the dirt state management unit 4 manages the presence or absence of dirt attached to the lens surface of the in-vehicle camera 1 as the dirt state, and the adhesion region as the dirt storage region.
- While the dirt detection unit 3 is functioning, the dirt state management unit 4 refers to the dirt detection region or the dirt detection intensity it outputs and updates the stored dirt state and dirt storage region.
- However, the dirt detection unit 3 may not function in some situations, for example while the host vehicle is stopped. If the dirt on the lens surface of the in-vehicle camera 1 is wiped off in such a state, the dirt detection unit 3 alone cannot reflect the wiping in the dirt state or dirt storage region stored in the dirt state management unit 4.
- Therefore, the dirt state management unit 4 refers to the wiping area output from the wiping area extraction unit 22 and erases it from the stored dirt storage region. Further, when the wiping area exceeds a predetermined area, it changes the stored dirt state from "dirty" to "no dirt".
- The dirt state management unit 4 may manage only the dirt state, only the dirt storage region, or both. When it manages both, the stored dirt state may be changed from "dirty" to "no dirt" when the dirt storage region falls below a predetermined area.
- The wiping area output from the wiping area extraction unit 22 is obtained by extracting regions that have changed over time in the captured image of the in-vehicle camera 1, and may therefore include regions where dirt has newly appeared. In that case, the wiping area may be newly added to the dirt storage region as a dirt adhesion region. Also, when the dirt state stored in the dirt state management unit 4 is "no dirt" and the wiping area exceeds a predetermined area, the dirt state may be changed from "no dirt" to "dirty".
- Dirt adhering to the lens surface of the in-vehicle camera 1 may also be removed gradually, in stages: for example, when manual lens cleaning leaves part of the dirt behind and is repeated many times, or when part of a water drop runs off under its own weight. In such cases, the wiping area output from the wiping area extraction unit 22 each time covers only part of the dirt.
- Therefore, it is preferable that the dirt state management unit 4 accumulate the wiping areas output by the wiping area extraction unit 22 and, when the accumulated wiping area exceeds a predetermined area, change the stored dirt state from "dirty" to "no dirt" (or from "no dirt" to "dirty").
- the wiping area output from the wiping area extraction unit 22 may be sequentially reflected in the dirt storage area.
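The dirt-state bookkeeping described above can be sketched as follows. The class and method names are hypothetical, not from the patent: wiped areas reported in stages accumulate, and once the accumulated area reaches a predetermined value the stored state flips from "dirty" to "no dirt".

```python
class DirtStateManager:
    """Illustrative sketch of the dirt state management described above."""

    def __init__(self, clean_area_thresh):
        self.clean_area_thresh = clean_area_thresh  # predetermined area
        self.state = "dirty"
        self.accumulated_wiped_area = 0

    def report_wiping(self, wiped_area):
        """Accumulate wiping areas (dirt may be removed little by little)
        and update the stored dirt state once the total is large enough."""
        self.accumulated_wiped_area += wiped_area
        if self.accumulated_wiped_area >= self.clean_area_thresh:
            self.state = "no dirt"
```

For example, two partial wipings of 40 and 70 pixels against a 100-pixel threshold leave the state "dirty" after the first report and flip it to "no dirt" after the second.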
- As described above, the in-vehicle imaging device of this embodiment can provide a lens dirt wiping diagnostic device (dirt wiping diagnosis unit 2, dirt detection unit 3, and dirt state management unit 4) that first excludes scenes in which a person cleaning the lens surface of the in-vehicle camera 1 or another moving body appears in the captured image, then extracts the temporal change of the captured image and detects the wiping of lens dirt.
- Scenes with a moving body make the stable state signal from the stable state determination unit 21 "unstable" and are therefore excluded from the processing in the wiping area extraction unit 22, while "stable" scenes are its processing targets.
- Unlike the prior art, the captured image at the time of stopping need not be used as a reference image, so the diagnosis can be performed at any time; this avoids a large separation between the capture times of the two images being differenced and reduces the possibility of erroneous determination.
- Even in situations where the dirt detection unit 3, which detects dirt attached to the lens surface of the in-vehicle camera 1, does not function (for example, while the host vehicle is stopped), the wiping of dirt from the lens of the in-vehicle camera 1 is detected. Accordingly, an autonomous control system of the vehicle (for example, an automatic driving system) that had been halted because of dirt on the lens of the in-vehicle camera 1 can be resumed.
- FIG. 8 is a block diagram showing a schematic configuration of the in-vehicle imaging device according to the second embodiment of the present invention.
- the same reference numerals are given to components having the same functions as those of the in-vehicle image pickup apparatus 100 according to the first embodiment shown in FIG.
- the in-vehicle imaging device 200 of the present embodiment includes an in-vehicle camera 1, a dirt wiping diagnosis unit 2, a dirt detection unit 3, and a dirt state management unit 4.
- the dirt wiping diagnosis unit 2 includes a stable state determination unit 21 and a wiping region extraction unit 22.
- the dirt wiping diagnostic unit 2, the dirt detection unit 3, and the dirt state management unit 4 are also referred to as a lens dirt wiping diagnostic device.
- In this embodiment, the wiping area extraction unit 22 refers not only to the captured image output from the in-vehicle camera 1 and the stable state signal output from the stable state determination unit 21, but also to the dirt adhesion region (dirt storage region) of the lens surface of the in-vehicle camera 1 stored in the dirt state management unit 4.
- FIG. 9 is an example of a flowchart of processing of the wiping area extraction unit 22 in the second embodiment.
- First, the dirt storage region stored in the dirt state management unit 4 is acquired (step 2210). Thereafter, if the stable state signal output from the stable state determination unit 21 indicates the stable state, a captured image of the in-vehicle camera 1 is acquired, and a difference image between it and the stable-state captured image acquired in the previous round is generated (steps 2201 to 2203). The difference images are then accumulated (step 2211), and a region with a large difference (difference region) is extracted from the accumulated difference image (step 2212). Next, the dirt storage region acquired in step 2210 is compared with the accumulated difference region (step 2213); if most of the dirt storage region is covered by the accumulated difference region, the accumulated difference region is output as the region from which dirt has been wiped off (wiping area) (step 2205).
- If the stable state signal indicates the unstable state in step 2201, or if most of the dirt storage region is not yet covered by the accumulated difference region in step 2213, the process pauses for a certain time (step 2214) in order to acquire a new stable-state captured image, and then restarts from step 2201.
- In this way, the difference images can continue to be accumulated in step 2211 until most of the dirt has been wiped off.
- The wiping area extraction unit 22 of this embodiment is not limited to the flowchart shown in FIG. 9. For example, the same function can be obtained by omitting step 2211, extracting a region with a large difference from each difference image in step 2212, accumulating the extracted regions, and using the accumulated region as the difference region.
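The comparison in step 2213 can be sketched as follows. This is illustrative only; masks are 2D lists of booleans, and the 0.5 ratio is one possible reading of "most of" the dirt storage region.

```python
def majority_wiped(dirt_mask, accumulated_diff_mask, ratio=0.5):
    """Step 2213 (sketch): report wiping only once the accumulated
    difference region covers the majority of the stored dirt region."""
    dirt_pixels = sum(1 for row in dirt_mask for d in row if d)
    covered = sum(1 for d_row, a_row in zip(dirt_mask, accumulated_diff_mask)
                  for d, a in zip(d_row, a_row) if d and a)
    return dirt_pixels > 0 and covered / dirt_pixels > ratio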
- FIG. 10 is a block diagram illustrating a schematic configuration of the in-vehicle imaging device according to the third embodiment of the present invention.
- the same reference numerals are given to components having the same functions as those of the in-vehicle image pickup apparatus 100 according to the first embodiment shown in FIG.
- the in-vehicle imaging device 300 of the present embodiment includes an in-vehicle camera 1, a dirt wiping diagnosis unit 2, a dirt detection unit 3, and a dirt state management unit 4. Further, the dirt wiping diagnosis unit 2 includes a stable state determination unit 21, a wiping region extraction unit 22, and a cleaning state determination unit 23.
- the dirt wiping diagnostic unit 2, the dirt detection unit 3, and the dirt state management unit 4 are also referred to as a lens dirt wiping diagnostic device.
- FIG. 11 is an example of a captured image captured by the in-vehicle camera 1 when a cloth is pressed against the lens of the in-vehicle camera 1.
- region 34, which is the region against which the cloth is pressed, appears dark, nearly black.
- the cleaning state determination unit 23 refers to the illuminance of the in-vehicle camera 1, determines that the lens of the in-vehicle camera 1 has been cleaned when the illuminance has dropped sharply by a predetermined value or more, and outputs the determination result as a cleaning signal.
- alternatively, the cleaning state determination unit 23 may refer to the focal length of the in-vehicle camera 1 (not shown), determine that the lens of the in-vehicle camera 1 has been cleaned when the amount of change in the focal length is equal to or greater than a predetermined value, and output the determination result as a cleaning signal.
- alternatively, the cleaning state determination unit 23 may calculate the average luminance of the captured image of the in-vehicle camera 1, determine that the lens of the in-vehicle camera 1 has been cleaned when the average luminance has dropped sharply by a predetermined value or more,
- and output the determination result as a cleaning signal.
- alternatively, the cleaning state determination unit 23 may extract a region whose luminance is equal to or lower than a predetermined value (dark region) from the captured image of the in-vehicle camera 1, and when the area of the dark region is equal to or greater than a predetermined value, the
- lens may be determined to have been cleaned, and the result of the determination may be output as a cleaning signal.
- when the extracted dark region does not cover the entire captured image, the cleaning state determination unit 23 determines whether there is a large luminance change at the edge of the dark region. If there is a luminance change equal to or greater than a predetermined value at the edge of the dark region, the dark region is determined to be caused by cleaning of the lens; if there is no such luminance change, the dark region is determined not to be caused by cleaning. This suppresses erroneous cleaning determinations.
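The edge-of-dark-region check can be illustrated with a small heuristic. This is a hypothetical sketch: the 4-neighbourhood definition, the thresholds `DARK_THRESH` and `EDGE_STEP`, and the majority vote over border pixels are all assumptions, not taken from the patent:

```python
import numpy as np

DARK_THRESH = 40   # luminance below this counts as "dark" (assumed value)
EDGE_STEP = 60     # required luminance jump at the dark-region border (assumed)

def dark_region_from_cleaning(img, dark_thresh=DARK_THRESH, edge_step=EDGE_STEP):
    """Heuristic: a dark region caused by a cloth pressed on the lens has a
    sharp luminance step at its border; a merely dark scene does not."""
    dark = img < dark_thresh
    if not dark.any() or dark.all():
        return False  # no dark region, or the whole image is dark (no border)
    # border pixels: dark pixels with at least one non-dark 4-neighbour
    pad = np.pad(dark, 1, constant_values=False)
    neighbour_bright = (~pad[:-2, 1:-1] | ~pad[2:, 1:-1] |
                        ~pad[1:-1, :-2] | ~pad[1:-1, 2:])
    border = dark & neighbour_bright
    if not border.any():
        return False
    # luminance jump: brightest 4-neighbour minus the border pixel itself
    padded = np.pad(img.astype(np.int16), 1, mode='edge')
    nmax = np.maximum.reduce([padded[:-2, 1:-1], padded[2:, 1:-1],
                              padded[1:-1, :-2], padded[1:-1, 2:]])
    jump = nmax - img.astype(np.int16)
    return bool((jump[border] >= edge_step).mean() > 0.5)
```

A crisp dark patch on a bright background passes the test, while a uniformly dark scene or a smooth luminance gradient does not, which is the distinction the text draws.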
- alternatively, the cleaning state determination unit 23 may extract a region whose edge strength is equal to or lower than a predetermined value (non-edge region) from the captured image of the in-vehicle camera 1, determine that the lens of the in-vehicle camera 1 has been cleaned when the area of the non-edge region is equal to or greater than a predetermined value, and output the determination result as a cleaning signal.
- furthermore, the cleaning state determination unit 23 may determine that the lens of the in-vehicle camera 1 has been cleaned only when the decrease in illuminance, the change in focal length, the decrease in the average luminance of the captured image, or the like continues for a certain period of time.
- in addition, the dark region or the non-edge region may be detected from the captured image of the in-vehicle camera 1 as a region where dirt is being cleaned. The dark or non-edge regions are then accumulated sequentially, and when the accumulated area of these regions reaches a predetermined value or more, it is determined that the lens of the in-vehicle camera 1 has been cleaned. The determination result is preferably output as a cleaning signal.
- FIG. 12 is an example of a flowchart of processing of the wiping area extraction unit 22 in the present embodiment.
- first, it is determined whether the stable-state signal output from the stable state determination unit 21 indicates a stable state (step 2201) and whether the cleaning signal output from the cleaning state determination unit 23 indicates that cleaning is in progress (step 2221). When the state is stable and cleaning is not in progress, a captured image of the in-vehicle camera 1 is acquired (step 2202). It is then determined whether the cleaning signal indicated cleaning in the immediately preceding cycle (step 2222). If it did, the captured image acquired in step 2202 is a stable-state image taken immediately after cleaning, so a difference image is generated between it and the non-cleaning captured image acquired in the previous cycle (step 2223).
- that is, the difference image is the difference between the captured images taken immediately before and immediately after the cleaning period. An area with a large difference (difference area) is then extracted from the difference image (step 2204), and the difference area is output as the area from which dirt has been wiped off (wiping area) (step 2205). If the stable-state signal indicates an unstable state in step 2201, if the cleaning signal indicates that cleaning is in progress in step 2221, or if the cleaning signal did not indicate cleaning in the immediately preceding cycle in step 2222, the processing pauses for a certain time (step 2224) and then restarts from step 2201.
- the same function can also be obtained by reversing the order of steps 2221 and 2202 so that all stable-state captured images are acquired, and executing steps 2223, 2204, and 2205 only when the cleaning signal switches from cleaning to non-cleaning.
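The control flow of steps 2201, 2221 to 2223, 2204 and 2205 can be condensed into a small loop. The generator form, the frame-tuple interface and the `diff_area` callback are illustrative assumptions rather than the patent's implementation:

```python
def extract_wiping_on_cleaning(frames, diff_area):
    """Sketch of the Fig. 12 loop. `frames` yields (image, stable, cleaning)
    tuples; `diff_area(before, after)` returns the large-difference region
    between two images. A wiping area is emitted from the pair of stable
    images that bracket each cleaning period (steps 2201, 2221-2223, 2205)."""
    last_clean_img = None    # last stable image taken while not cleaning
    was_cleaning = False     # cleaning signal seen since that image
    for img, stable, cleaning in frames:
        if not stable or cleaning:
            was_cleaning = was_cleaning or cleaning
            continue                                  # wait (step 2224)
        if was_cleaning and last_clean_img is not None:
            yield diff_area(last_clean_img, img)      # steps 2223/2204/2205
        last_clean_img = img
        was_cleaning = False
```

With simple numeric stand-ins for images and subtraction as the difference, the loop emits one wiping result per cleaning period and stays silent while nothing is cleaned.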
- the wiping area extraction unit 22 may accumulate the difference image for a certain period, and may extract a large difference area (difference area) in the accumulated difference image.
- the accumulated difference area may be output as the wiping area.
- the period during which the difference image or the difference area is accumulated may be a predetermined period; however, it is preferable to continue until the dirt state managed by the dirt state management unit 4 becomes "no dirt", or until the dirt detection unit 3 is made to operate. As a result, missed detection of the dirt wiping area can be avoided. In addition, it is preferable that the wiping area
- the wiping area extraction unit 22 may also extract a feature amount such as an edge image or a HOG feature from the captured image, rather than using the captured image of the in-vehicle camera 1 as it is (the extraction result is referred to here as a feature amount image), and use this feature amount image in place of the captured image.
- this can improve detection performance and reduce the amount of data to be processed.
- in particular, using the HOG feature amount makes it harder to extract unwanted image changes caused by slight movements of moving bodies or by ambient light.
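As one hypothetical example of a feature amount image, a plain gradient-magnitude (edge) image already has the property mentioned above: a uniform shift in ambient brightness leaves it unchanged. A HOG descriptor would be the stronger variant suggested by the text; the minimal version below uses central differences only:

```python
import numpy as np

def edge_feature_image(img):
    """A minimal edge-strength image (gradient magnitude) usable in place
    of the raw capture as a feature amount image. Slow ambient-light
    changes shift luminance uniformly but not the edge structure."""
    g = img.astype(np.float32)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]   # horizontal central difference
    gy[1:-1, :] = g[2:, :] - g[:-2, :]   # vertical central difference
    return np.hypot(gx, gy)              # gradient magnitude
```

Taking differences between such feature images, instead of raw frames, suppresses the unwanted changes from ambient light that the passage describes.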
- in that case, the wiping area for the dirt cannot be detected, so the wiping is not reflected; instead of the wiping area, wiping-accuracy information is output.
- the period for performing the dirt wiping detection by the dirt wiping diagnosis unit 2 can be limited.
- for this purpose, the vehicle speed of the host vehicle and information correlated with passengers getting on and off the host vehicle are acquired and used.
- information correlated with passengers getting on and off the host vehicle is, for example, information on the opening and closing of the doors and/or windows of the host vehicle, or a change in the vehicle weight.
- sensors for detecting and acquiring such information are provided. Since the vehicle height also changes with the vehicle weight, a change in vehicle height may be detected instead of a change in vehicle weight.
- in the lens dirt wiping diagnostic device of this embodiment, when a door is opened and closed while the vehicle speed is sufficiently low, or when the vehicle weight decreases, it is determined that an occupant who could clean the lens of the in-vehicle camera 1 has got out of the host vehicle, and this is used as a trigger for starting the processing of the dirt wiping diagnosis unit 2.
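The getting-off trigger described above might be sketched as a simple predicate; the threshold values (5 km/h, 30 kg) and the parameter names are invented for illustration only:

```python
def should_start_diagnosis(vehicle_speed, door_opened, weight_drop,
                           speed_thresh=5.0, weight_thresh=30.0):
    """Trigger sketch: an occupant can only have left the vehicle (and
    possibly wiped the lens) if the vehicle is nearly stopped AND a door
    was opened or the vehicle weight dropped. Thresholds are assumptions."""
    stopped = vehicle_speed <= speed_thresh              # km/h
    occupant_left = door_opened or weight_drop >= weight_thresh  # kg
    return bool(stopped and occupant_left)
```

A change in vehicle height could be substituted for `weight_drop`, as the text notes, since the two are correlated.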
- the dirt wiping diagnosis unit 2 may then perform its processing on the captured image of the in-vehicle camera 1.
- otherwise, the processing of the dirt wiping diagnosis unit 2 may be stopped.
- likewise, when the in-vehicle camera 1 is a side camera,
- the processing of the dirt wiping diagnosis unit 2 for that in-vehicle camera 1 may be stopped.
- information correlated with passengers getting on and off the host vehicle, as described in the fourth embodiment, is used.
- wiping of the lens of the in-vehicle camera 1 may be determined from this information.
- the dirt state management unit 4 receives this determination and initializes the stored dirt state and/or dirt storage area.
- the lens dirt wiping diagnostic device can be operated by a vehicle-mounted battery (not shown); however, out of concern for draining the battery, the lens dirt wiping diagnostic device is also stopped immediately when the engine is stopped. In this case, the lens dirt wiping diagnostic devices of the first to fifth embodiments cannot detect the wiping of dirt attached to the lens of the in-vehicle camera 1.
- therefore, in the period from before the engine stops, or immediately after the engine stops, until the lens dirt wiping diagnostic device is stopped, the wiping area extraction unit 22 of the lens dirt wiping diagnostic device stores a stable-state captured image in a nonvolatile storage unit.
- the storage unit is, for example, a nonvolatile memory.
- after the engine is restarted, the wiping area extraction unit 22 generates a difference image between the captured image stored in the storage unit and the latest stable-state captured image acquired after the restart, and detects the dirt wiping area from it. In this case, the cleaning state determination unit 23 is not necessary.
- furthermore, the engine stop time may be stored in a nonvolatile storage unit (not shown). When the engine is restarted, the stopped duration is calculated as the difference between the stored engine stop time and the current time; if the stopped duration is equal to or greater than a predetermined value, the captured image of the in-vehicle camera 1 is displayed on an in-vehicle monitor or the like, the user is asked to confirm the presence or absence of dirt, and the confirmation result is input via the monitor's input means (for example, a touch panel). The dirt state management unit 4 receives the input and initializes the stored dirt state and/or dirt storage area.
- according to this embodiment, even when the stop time becomes long, so that the capture times of the two images used for the difference are far apart and the surrounding brightness and the arrangement of objects have changed, the possibility of erroneous determination can be reduced.
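The nonvolatile bookkeeping of this embodiment can be illustrated with a JSON file standing in for the nonvolatile memory. The file format, the 8-hour limit and the function names are assumptions made for this sketch:

```python
import json
import os
import tempfile

MAX_STOP_SECONDS = 8 * 3600   # assumed limit before asking the user to confirm

def save_on_engine_stop(path, stop_time):
    """Persist the engine stop time (a JSON file stands in for the
    nonvolatile storage unit of the text)."""
    with open(path, "w") as f:
        json.dump({"stop_time": stop_time}, f)

def needs_user_confirmation(path, now):
    """On restart: if the engine was stopped longer than the limit, the
    scene may have changed too much for a reliable difference image, so
    fall back to showing the image and asking the user to confirm dirt."""
    with open(path) as f:
        stop_time = json.load(f)["stop_time"]
    return (now - stop_time) >= MAX_STOP_SECONDS
```

When the elapsed time stays under the limit, the stored stable-state image can still be used for the difference; otherwise the user-confirmation path described above takes over.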
- for example, a passenger in the passenger seat may open the door window and wipe the lens of the side camera on the passenger side while the vehicle is traveling.
- in such a case, the lens of the in-vehicle camera 1 is cleaned manually while the vehicle is moving.
- while the vehicle is traveling, however, the stable state determination unit 21 described in the third embodiment keeps outputting a stable-state signal indicating an unstable state, so the dirt wiping diagnosis unit 2 does not function and the wiping of dirt cannot be detected.
- FIG. 13 is a block diagram showing a schematic configuration of an in-vehicle imaging device according to Embodiment 7 of the present invention.
- the same reference numerals are given to components having the same functions as those in the in-vehicle imaging device 300 of the third embodiment shown in FIG. 10.
- the in-vehicle imaging device 400 of the present embodiment includes an in-vehicle camera 1, a dirt wiping diagnosis unit 2, a dirt detection unit 3, and a dirt state management unit 4, and further includes a vehicle speed sensor 5.
- the vehicle speed sensor 5 detects and outputs the vehicle speed of the host vehicle.
- the dirt wiping diagnosis unit 2 includes a cleaning state determination unit 23 in addition to the stable state determination unit 21 and the wiping region extraction unit 22 as in the third embodiment.
- the stable state determination unit 21 and the wiping area extraction unit 22 can be omitted.
- the dirt wiping diagnostic unit 2, the dirt detection unit 3, and the dirt state management unit 4 may be a lens dirt wiping diagnostic device, or may include a vehicle speed sensor 5 as a lens dirt wiping diagnostic device.
- the dirt wiping diagnosis unit 2 acquires the vehicle speed of the host vehicle output from the vehicle speed sensor 5, and when the vehicle speed exceeds a predetermined value, stops the processing of the stable state determination unit 21 and the wiping area extraction unit 22 and outputs the cleaning signal from the cleaning state determination unit 23 to the dirt state management unit 4.
- however, the cleaning signal output from the cleaning state determination unit 23 merely indicates whether cleaning occurred; whether dirt has actually been wiped off cannot be determined from the cleaning signal alone. It is therefore preferable not to apply this embodiment unnecessarily.
- for this purpose, window opening/closing information is acquired from the vehicle, and wiping of dirt is determined from the cleaning signal output by the cleaning state determination unit 23 only when the window near which the in-vehicle camera 1 is mounted is sufficiently open. This prevents the embodiment from being applied unnecessarily.
- when a surrounding object is reflected in the image of the in-vehicle camera 1, it can be determined that the area where the object appears is not dirty. Furthermore, in the captured image of the in-vehicle camera 1, the outline of dirt often appears blurred, so it can be determined that there is no dirt in an area where a sharp outline appears.
- therefore, an object recognition unit, such as pedestrian detection for detecting a pedestrian in the captured image of the in-vehicle camera 1 or vehicle detection for detecting a vehicle in the captured image, may determine that there is no dirt in an area where a clear contour is detected in the captured image, and the dirt state management unit 4 may reflect this determination in the stored dirt state and dirt storage area.
- in this way, the dirt wiping area can be detected even if the past dirt-adhesion area and dirt state are unknown.
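Reflecting object-recognition results in the stored dirt-adhesion area might look like the following sketch, where detected objects are given as bounding boxes; the mask representation and the `(x0, y0, x1, y1)` box format are assumptions:

```python
import numpy as np

def clear_dirt_where_objects(dirt_mask, object_boxes):
    """Where a pedestrian or vehicle is clearly recognised, the lens
    cannot be dirty there, so remove those pixels from the stored
    dirt-adhesion mask. Boxes are (x0, y0, x1, y1) pixel coordinates."""
    out = dirt_mask.copy()                 # keep the stored mask intact
    for x0, y0, x1, y1 in object_boxes:
        out[y0:y1, x0:x1] = False          # treat the box as wiped/clean
    return out
```

The dirt state management unit would then update its stored dirt state from the reduced mask, exactly as it does for wiping areas reported by the wiping area extraction unit.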
- a vehicle-mounted camera that captures an image through a lens;
- a dirt detection unit that detects the presence or absence of dirt on the lens, or a dirt-adhesion region, based on the temporal change of captured images sequentially captured by the vehicle-mounted camera;
- a dirt wiping diagnosis unit for diagnosing the wiping of dirt from the lens;
- and a dirt state management unit for managing dirt on the lens, wherein the dirt wiping diagnosis unit includes, based on the captured images sequentially captured by the in-vehicle camera,
- a wiping area extraction unit for extracting the dirt wiping area. The dirt state management unit stores the presence or absence of dirt detected by the dirt detection unit, or the dirt-adhesion area, and reflects the dirt wiping area extracted by the wiping area extraction unit in the stored information.
- the present invention is configured as follows. (2) In the in-vehicle imaging device, the stable state determination unit sequentially acquires captured images output from the in-vehicle camera, generates, for each acquired image, a difference image between it and the image acquired immediately before, and extracts regions with large differences from the generated difference image. If the area of the large-difference regions is equal to or less than a predetermined value, it determines that the temporal change of the acquired captured image is small and the state is stable; if the area exceeds the predetermined value, it determines that the temporal change is large and the state is not stable. The stable state can thus be determined appropriately from the area of the large-difference regions.
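Configuration (2) amounts to thresholding the area of large inter-frame differences. A minimal sketch, in which both threshold values are assumed rather than taken from the patent:

```python
import numpy as np

def is_stable(prev_img, cur_img, diff_thresh=30, area_thresh=0.05):
    """Stable-state check as in configuration (2): the frame is stable
    when the fraction of pixels with a large inter-frame difference
    stays at or below `area_thresh`. Both thresholds are assumptions."""
    diff = np.abs(cur_img.astype(np.int16) - prev_img.astype(np.int16))
    changed = np.count_nonzero(diff > diff_thresh)
    return changed <= area_thresh * diff.size
```

Configuration (3) is the same test applied to feature amount images instead of raw captures.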
- the present invention is configured as follows. (3) In the in-vehicle imaging device, the stable state determination unit sequentially acquires captured images output from the in-vehicle camera, sequentially generates feature amount images by extracting feature amounts from the acquired images, and generates, each time, a difference image between the generated feature amount image and the feature amount image generated immediately before. Regions with large differences are extracted from the generated difference image.
- if the area of the large-difference regions is equal to or less than a predetermined value, it is determined that the temporal change of the acquired captured image is small and the state is stable; if the area exceeds the predetermined value, it is determined that the temporal change is large and the state is not stable. Using the feature amount image can improve detection performance and reduce the amount of data to be processed.
- the present invention is configured as follows. (4) In the in-vehicle imaging device, the wiping area extraction unit sequentially acquires captured images for which the determination result of the stable state determination unit indicates a stable state, generates a difference image between a newly acquired stable-state image and a previously acquired stable-state image, extracts a region with a large difference from the generated difference image, and uses the extracted region as the dirt wiping area. Using stable-state captured images reduces the possibility of erroneous determination.
- in another configuration, the wiping area extraction unit sequentially acquires captured images for which the determination result of the stable state determination unit indicates a stable state, generates a difference image between a newly acquired stable-state image and a previously acquired stable-state image, accumulates the generated difference images for a certain period, extracts a region with a large difference from the accumulated difference image, and uses the extracted region as the dirt wiping area. By accumulating difference images, it is possible to cope with cases where the brightness of the dirt changes, for example due to temporary illumination by the headlights of another vehicle.
- in another configuration, the wiping area extraction unit sequentially acquires captured images for which the determination result of the stable state determination unit indicates a stable state, generates a difference image between a newly acquired stable-state image and a previously acquired stable-state image, and acquires the presence or absence of dirt, or the dirt-adhesion region, stored in the dirt state management unit. The
- generated difference images are accumulated until the acquired dirt disappears or the area of the acquired dirt-adhesion region falls to a predetermined value or less; a region with a large difference is then extracted from the accumulated difference image, and the extracted region is used as the dirt wiping area. Accumulating the difference images in this way suppresses missed detection of the dirt wiping area.
- in another configuration, the wiping area extraction unit sequentially acquires captured images for which the determination result of the stable state determination unit indicates a stable state, and from these stable-state images sequentially generates
- feature amount images by extracting feature amounts; a difference image between the newly generated feature amount image and the previously generated feature amount image is generated, and a region with a large difference is extracted based on the generated difference image;
- the extracted region is used as the dirt wiping area. Using the feature amount image can improve detection performance and reduce the amount of data to be processed.
- the present invention is configured as follows.
- the dirt state management unit acquires the presence or absence of dirt, or the dirt-adhesion region, detected by the dirt detection unit, stores the acquired dirt-adhesion region as the dirt storage area, and stores, as the dirt state, the acquired presence or absence of dirt or the size of the area of the stored dirt storage area. It acquires the dirt wiping area detected by the wiping area extraction unit, deletes the acquired wiping area from the stored dirt-adhesion area, and updates the dirt state based on the updated dirt-adhesion area or the area of the deleted wiping area. The dirt state can thus be managed appropriately.
- the dirt state management unit acquires the presence or absence of dirt, or the dirt-adhesion region, detected by the dirt detection unit, stores the acquired dirt-adhesion region as the dirt storage area, and stores, as the dirt state, the acquired presence or absence of dirt or the size of the area of the stored dirt storage area. It acquires the dirt wiping areas detected by the wiping area extraction unit, accumulates them for a certain period, and, when the area of the accumulated wiping areas becomes equal to or greater than a predetermined value, clears the stored dirt. This makes it possible to deal with cases where the dirt is removed step by step.
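The configuration above, where wiping areas accumulated over a period clear the stored dirt once they cover enough of it, can be sketched as follows; the mask representation and the 90% clearing ratio are assumed values:

```python
import numpy as np

def update_dirt_state(dirt_mask, wiping_masks, clear_ratio=0.9):
    """Accumulate wiping areas observed over a period; when they cover at
    least `clear_ratio` of the stored dirt, the state flips to "no dirt".
    Returns (remaining dirt mask, no_dirt flag)."""
    accumulated = np.zeros_like(dirt_mask)
    for w in wiping_masks:
        accumulated |= w                       # OR together observed wipes
    remaining = dirt_mask & ~accumulated       # dirt not yet wiped
    total = np.count_nonzero(dirt_mask)
    wiped = total - np.count_nonzero(remaining)
    no_dirt = total == 0 or wiped / total >= clear_ratio
    return remaining, no_dirt
```

Because partial wipes only shrink `remaining`, dirt removed in several passes is handled naturally, which is the point of accumulating the wiping areas.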
- the present invention is configured as follows. (10) In the in-vehicle imaging device, the dirt state management unit notifies the user of the management status of lens dirt based on the stored presence or absence of dirt or the dirt-adhesion region. The user can thus know the management status of lens dirt and take appropriate measures.
- the present invention is configured as follows. (11)
- In the in-vehicle imaging device, the dirt state management unit notifies a system that autonomously controls the vehicle of the management status of lens dirt based on the stored presence or absence of dirt or the dirt-adhesion region. In a system that performs autonomous control, such as automatic driving, the management status of lens dirt can thus be known and appropriate control performed.
- the present invention is configured as follows. (12) A vehicle-mounted camera that captures an image through a lens; a dirt detection unit that detects the presence or absence of dirt on the lens, or a dirt-adhesion region, based on captured images sequentially captured by the vehicle-mounted camera; a dirt wiping diagnosis unit for diagnosing the wiping of dirt from the lens; and a dirt state management unit for managing dirt on the lens. The dirt wiping diagnosis unit includes a stable state determination unit that determines, based on the captured images sequentially captured by the in-vehicle camera, whether the state is stable, and a cleaning state determination unit that determines that the lens has been cleaned.
- Based on the determinations of the stable state determination unit and the cleaning state determination unit, a wiping area extraction unit acquires captured images with small temporal change before and after the period during which the lens is cleaned, and extracts the dirt wiping area from the difference between the two captured images before and after that period.
- The dirt state management unit stores the presence or absence of dirt detected by the dirt detection unit, or the dirt-adhesion region, and reflects the dirt wiping area extracted by the wiping area extraction unit in what is already stored,
- thereby managing the dirt on the lens while reflecting the cleaning condition of the lens.
- Thus, even when the timing of the operation that removes dirt attached to the lens surface of the in-vehicle camera is unknown, an in-vehicle imaging device can be provided that has a lens dirt wiping diagnostic device capable of reducing the possibility of erroneous determination when diagnosing that dirt on the lens surface has been wiped off.
- the cleaning state determination unit extracts, as a dark region, a region whose luminance is equal to or lower than a predetermined value from the captured image of the in-vehicle camera, and determines that the lens has been cleaned when the area of the extracted dark region is equal to or greater than a predetermined value.
- the cleaning state can thus be determined by extracting a dark, low-luminance region.
- when the extracted dark region does not cover the entire captured image, the cleaning state determination unit determines whether there is a large luminance change at the edge of the extracted dark region, further extracts only dark regions with a large luminance change at their edges, and determines that the lens has been cleaned when the area of such dark regions is equal to or greater than a predetermined value. Judging the luminance change at the edge of the dark region allows the cleaning state to be determined more accurately.
- the cleaning state determination unit extracts, as dark regions, regions whose luminance is equal to or lower than a predetermined value from the captured images of the in-vehicle camera, accumulates the extracted dark regions, and, when the accumulated
- area reaches a predetermined value or more, determines that the lens has been cleaned. Even when the lens is cleaned a little at a time, it can thus be detected that the lens has been cleaned.
- the dirt wiping diagnosis unit refers to the speed of the host vehicle and/or information correlated with boarding and exiting the host vehicle, and operates only when an occupant of the host vehicle could have cleaned the lens. Stopping the operation when the lens cannot be cleaned reduces the processing load.
- the present invention is configured as follows. (17)
- In the in-vehicle imaging device, the dirt detection unit, the dirt wiping diagnosis unit, and the dirt state management unit constitute a lens dirt wiping diagnostic device. The lens dirt wiping diagnostic device stores the captured image of the in-vehicle camera immediately after the engine is stopped, before the diagnostic device itself stops, and, when the engine is restarted, uses the image stored at stop time as the past captured image from before the lens was cleaned. The dirt state of the lens can thus be managed appropriately even though the lens dirt wiping diagnostic device is stopped while the engine is stopped.
- the in-vehicle imaging device includes an object recognition unit that detects an object from the captured image of the in-vehicle camera, and the dirt state management unit treats a region where the object recognition unit has detected an object as a dirt wiping area and deletes it from the stored dirt-adhesion area. Since the lens is considered to have no dirt in a region where an object is clearly detected, reflecting the object recognition result allows more appropriate management of the dirt state of the lens.
- the present invention is configured as follows. (19) A vehicle control system comprising the in-vehicle imaging device, wherein autonomous control of the vehicle is permitted to the vehicle's autonomous control system when the area of the dirt-adhesion region stored in the dirt state management unit has decreased to a predetermined value or less. Autonomous control of the vehicle can thus be performed without being affected by dirt on the lens.
- DESCRIPTION OF SYMBOLS: 1 ... in-vehicle camera, 2 ... dirt wiping diagnosis unit, 3 ... dirt detection unit, 4 ... dirt state management unit, 5 ... vehicle speed sensor, 21 ... stable state determination unit, 22 ... wiping area extraction unit, 23 ... cleaning state determination unit, 31, 32 ... region (water drop display region), 33 ... region (pedestrian display region), 34 ... region (region pressed against the cloth)
Abstract
Provided is an in-vehicle image capture device comprising a lens-contamination wiping diagnostic device that can reduce the probability of an erroneous determination. The in-vehicle image capture device comprises: an in-vehicle camera for capturing an image through a lens; a contamination detection unit that detects, on the basis of captured images sequentially captured by the in-vehicle camera, the presence or absence of contamination, or a contamination-adhesion area, on the lens; a stable-state determination unit that determines, on the basis of temporal change in the captured images sequentially captured by the in-vehicle camera, the presence or absence of a stable state; a wiped-area extraction unit that, when the determination result of the stable-state determination unit indicates a stable state, extracts a wiped area on the basis of the temporal change among the captured images sequentially captured by the in-vehicle camera; and a contamination state management unit that stores the presence or absence of contamination, or the contamination-adhesion area, detected by the contamination detection unit, and that manages contamination on the lens by reflecting the wiped area extracted by the wiped-area extraction unit in the stored presence or absence of contamination or contamination-adhesion area.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2017024810A JP6757271B2 (ja) | 2017-02-14 | 2017-02-14 | 車載用撮像装置 |
| JP2017-024810 | 2017-02-14 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018150661A1 (fr) | 2018-08-23 |
Family
ID=63169194
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2017/040721 Ceased WO2018150661A1 (fr) | 2017-02-14 | 2017-11-13 | Dispositif de capture d'images embarqué |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JP6757271B2 (fr) |
| WO (1) | WO2018150661A1 (fr) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110532876A (zh) * | 2019-07-26 | 2019-12-03 | 纵目科技(上海)股份有限公司 | 夜晚模式镜头付着物的检测方法、系统、终端和存储介质 |
| US12131549B2 (en) | 2022-03-29 | 2024-10-29 | Panasonic Automotive Systems Co., Ltd. | Image monitoring device |
| US12368834B2 (en) | 2022-12-15 | 2025-07-22 | Ford Global Technologies, Llc | Vehicle sensor assembly |
Families Citing this family (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP7394602B2 (ja) * | 2019-05-17 | 2023-12-08 | Lixil Corporation | Determination device |
| JP7251425B2 (ja) * | 2019-09-20 | 2023-04-04 | Denso Ten Ltd. | Adhering matter detection device and adhering matter detection method |
| JP7151675B2 (ja) * | 2019-09-20 | 2022-10-12 | Denso Ten Ltd. | Adhering matter detection device and adhering matter detection method |
| US20220395149A1 (en) * | 2019-09-24 | 2022-12-15 | Lixil Corporation | Toilet seat device |
| JP7442384B2 (ja) * | 2019-09-24 | 2024-03-04 | Lixil Corporation | Toilet seat device |
| JP7609570B2 (ja) * | 2019-09-24 | 2025-01-07 | Lixil Corporation | Toilet seat device |
| WO2021060212A1 (fr) * | 2019-09-24 | 2021-04-01 | Lixil Corporation | Toilet seat device |
| JP2021056882A (ja) * | 2019-09-30 | 2021-04-08 | Aisin Seiki Co., Ltd. | Periphery monitoring device and periphery monitoring program |
| CN112348784A (zh) * | 2020-10-28 | 2021-02-09 | Beijing Sensetime Technology Development Co., Ltd. | Camera lens state detection method, apparatus, device, and storage medium |
| JP7745131B2 (ja) * | 2021-11-02 | 2025-09-29 | Panasonic IP Management Co., Ltd. | Excrement image display system and toilet bowl |
| JP7398643B2 (ja) * | 2022-03-29 | 2023-12-15 | Panasonic IP Management Co., Ltd. | Image monitoring device |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2014007294A1 (fr) * | 2012-07-03 | 2014-01-09 | Clarion Co., Ltd. | On-board device |
| JP2016015583A (ja) * | 2014-07-01 | 2016-01-28 | Clarion Co., Ltd. | In-vehicle imaging device |
2017
- 2017-02-14 JP JP2017024810A patent/JP6757271B2/ja not_active Expired - Fee Related
- 2017-11-13 WO PCT/JP2017/040721 patent/WO2018150661A1/fr not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| JP6757271B2 (ja) | 2020-09-16 |
| JP2018132861A (ja) | 2018-08-23 |
Similar Documents
| Publication | Title |
|---|---|
| JP6757271B2 (ja) | In-vehicle imaging device |
| CN104395156B (zh) | Vehicle surroundings monitoring device |
| CN105142981B (zh) | Driver field-of-view adapter for a front-view camera |
| US9956941B2 (en) | On-board device controlling accumulation removing units |
| US11565659B2 (en) | Raindrop recognition device, vehicular control apparatus, method of training model, and trained model |
| JP4668838B2 (ja) | Raindrop detection device and wiper control device |
| US11237388B2 (en) | Processing apparatus, image capturing apparatus, and image processing method |
| CN112839846B (zh) | Vehicle cleaning device, vehicle cleaning method, and recording medium |
| JP2014011785A (ja) | Diagnostic device and diagnostic method for an in-vehicle camera dirt removal device, and vehicle system |
| JP6755161B2 (ja) | Adhering matter detection device and adhering matter detection method |
| US20140241589A1 (en) | Method and apparatus for the detection of visibility impairment of a pane |
| KR20170055907A (ko) | Imaging system |
| US20120200708A1 (en) | Vehicle peripheral monitoring device |
| JP6081034B2 (ja) | In-vehicle camera control device |
| CN111655540B (zh) | Method for recognizing at least one object adjacent to a motor vehicle |
| GB2570156A (en) | A Controller For Controlling Cleaning of a Vehicle Camera |
| KR20220152823A (ko) | Device and method for recording driving images of a vehicle |
| KR100659227B1 (ko) | Wiper control device for controlling a vehicle windshield wiper |
| US20200207313A1 (en) | Method for controlling at least one washing device of at least one sensor situated on an outer contour of a vehicle |
| JP6841725B2 (ja) | Other-vehicle monitoring system |
| JP3636955B2 (ja) | Raindrop detection device for vehicles |
| JPH0872641A (ja) | Image recognition device for vehicles |
| JP5041985B2 (ja) | In-vehicle imaging device |
| JPH11185023A (ja) | Window glass contamination degree detection device |
| JP2000025576A (ja) | Automatic wiper device |
Legal Events
| Code | Title | Description |
|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17896428; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17896428; Country of ref document: EP; Kind code of ref document: A1 |