WO2017098709A1 - Image recognition device and image recognition method - Google Patents
Image recognition device and image recognition method
- Publication number
- WO2017098709A1 (PCT/JP2016/005037)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- distance
- image recognition
- luminance
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/12—Acquisition of 3D measurements of objects
- G06V2201/121—Acquisition of 3D measurements of objects using special illumination
Definitions
- This disclosure relates to an image recognition apparatus and an image recognition method.
- In recent years, cameras that can acquire not only luminance images but also distance information have been attracting attention. Because such a camera can recognize a space three-dimensionally using the distance information of an image, it is expected that the distance between a person and a car can be measured and that a person can be recognized with higher accuracy.
- In contrast to the conventional human recognition method that uses a luminance gradient feature amount, Patent Document 1 proposes a method that uses the similarity of distance histograms as the feature amount. This suppresses the loss of recognition accuracy even when the background is complicated or when people overlap.
- However, the conventional human recognition method using a distance image scans the entire image, including objects other than a person such as the background, which lengthens the calculation time and lowers the recognition speed.
- The problem to be solved by the present disclosure is therefore to provide an image recognition apparatus and an image recognition method that suffer little reduction in detection speed even for high-resolution images, and little reduction in recognition accuracy even against complicated backgrounds.
- One aspect of the image recognition device includes: a camera unit that generates a distance signal and a luminance signal using reflected light from a plurality of subjects; an image generation unit that generates a distance image from the distance signal and a luminance image from the luminance signal; and an image recognition processing unit that performs image recognition. The image recognition processing unit divides the distance image and the luminance image each into a plurality of regions, determines for each of the plurality of regions whether it is a first region, in which it is clear that a specific object does not exist, or a second region other than the first region, and executes the image recognition process with the first region excluded from the plurality of regions.
- In one aspect of the image recognition method, a distance image including pixel data indicating distance values and a luminance image including pixel data indicating luminance values are generated by imaging with a camera; the distance image and the luminance image are each divided into a plurality of regions; each of the plurality of regions is determined to be either a first region, in which it is clear that a specific object does not exist, or a second region otherwise; and image recognition is executed with the first region excluded from the plurality of regions.
- FIG. 1 is a block diagram illustrating a configuration example of an image recognition apparatus according to an embodiment.
- FIG. 2 is a diagram illustrating an example of a subject assumed when an image is taken with an in-vehicle camera.
- FIG. 3A is a diagram illustrating a luminance image obtained when the subject illustrated in FIG. 2 is imaged by the image recognition apparatus according to the embodiment.
- FIG. 3B is a diagram showing a luminance value along a dotted line 3B in FIG. 3A.
- FIG. 3C is a diagram illustrating a luminance value along a dotted line 3C in FIG. 3A.
- FIG. 4 is a diagram illustrating a result of subject boundary extraction using only a luminance image.
- FIG. 5A is a diagram illustrating a distance image obtained when the subject illustrated in FIG. 2 is imaged by the image recognition apparatus according to the embodiment.
- FIG. 5B is a diagram showing a distance value along a dotted line 5B in FIG. 5A.
- FIG. 5C is a diagram illustrating a distance value along a dotted line 5C in FIG. 5A.
- FIG. 6 is a diagram illustrating a result of subject boundary extraction using only a distance image.
- FIG. 7 is a diagram in which the extracted boundary is synthesized using the luminance image and the distance image.
- FIG. 8 is a diagram illustrating a first region among a plurality of regions divided by the boundary of the subject extracted using the distance image and the luminance image.
- FIG. 9 is a flowchart of an image recognition method executed by the image recognition apparatus according to the embodiment.
- FIG. 1 is a block diagram illustrating a configuration example of an image recognition apparatus according to an embodiment.
- An image recognition apparatus 100 illustrated in FIG. 1 includes a camera unit 10, an image generation unit 20, and an image recognition processing unit 30.
- the camera unit 10 includes a light source 11, a light source control unit 12, a camera lens 13, and an image sensor 14.
- the image generation unit 20 includes a distance image generation unit 21 and a luminance image generation unit 22.
- the image recognition processing unit 30 includes an area determination unit 31, an image extraction unit 32, a feature amount calculation unit 33, and a recognition processing unit 34.
- the camera unit 10 generates a distance signal and a luminance signal using reflected light from a plurality of subjects.
- The light source 11 is mainly a near-infrared light source (such as an LED or a laser diode) and emits pulsed light at a specific frequency under the control of the light source control unit 12.
- the light source control unit 12 irradiates the subject with pulsed light from the light source 11 and forms an image of reflected light from the subject on the image sensor 14 through the camera lens 13.
- the image sensor 14 has a plurality of pixel portions arranged two-dimensionally, and each pixel portion receives reflected light.
- the distance signal can be acquired by calculating the time difference between the timing when the reflected light arrives and the timing when the light source 11 irradiates the light.
- the distance signal indicates, for example, the distance between the subject and the camera unit 10 for each pixel unit.
- The image sensor 14 also acquires a luminance signal, as in a normal camera, during periods when the light source 11 is not emitting pulsed light.
- the image generation unit 20 generates a distance image and a luminance image from the distance signal and luminance signal obtained from the camera unit 10.
- the distance image generation unit 21 generates a distance image by calculating a time difference between the timing when the light of the reflected signal arrives and the timing when the light is irradiated.
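The time-difference calculation performed by the distance image generation unit 21 can be sketched as follows. This is a minimal illustration of TOF ranging under the usual assumption that distance is half the round trip at the speed of light; the function name and timing values are illustrative, not from the patent:

```python
# Hypothetical sketch of TOF distance recovery from emission/arrival timing.
SPEED_OF_LIGHT = 299_792_458.0  # m/s (vacuum value; close enough in air)

def tof_distance(emit_time_s: float, arrival_time_s: float) -> float:
    """Distance to the subject: half the round-trip time times the speed of light."""
    round_trip = arrival_time_s - emit_time_s
    return SPEED_OF_LIGHT * round_trip / 2.0

# A round trip of about 66.7 ns corresponds to roughly 10 m.
d = tof_distance(0.0, 66.7e-9)
```

In a real TOF sensor this calculation happens per pixel, producing the per-pixel distance values that make up the distance image.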
- the luminance image generation unit 22 generates a luminance image similarly to a general camera.
- the camera unit 10 and the image generation unit 20 have a configuration in which light emission control for TOF (Time Of Flight) distance measurement is added and a distance image generation unit 21 is added as compared with a general camera.
- the image recognition processing unit 30 is configured to exclude the first region of the luminance image and the distance image from the object of image recognition.
- the first area is an area where it is clear that a specific object does not exist among all areas of the luminance image and the distance image.
- The specific object is typically a person, but it may also be a bicycle, motorcycle, or car that a person is riding, or an animal other than a person.
- the region determination unit 31 performs a process of dividing the subject into a plurality of regions using the luminance image and the distance image.
- The region determination unit 31 determines whether each of the divided regions is the first region or the second region. For the first region, the feature amount calculation unit 33 does not calculate the feature amount necessary for identifying the object; the feature amount is calculated only for the second region.
- the image extraction unit 32 reflects the determination result of the region determination unit 31 and performs image extraction of the second region.
- the feature amount calculation unit 33 calculates the feature amount only in the image extracted by the image extraction unit 32.
- the recognition processing unit 34 performs recognition processing according to the feature amount calculated by the feature amount calculation unit 33.
- FIG. 2 is a diagram showing an example of a subject assumed when an image is taken with an in-vehicle camera.
- the subject shown in FIG. 2 includes general subjects such as pedestrians, buildings, ground, roads, cars, traffic lights, pedestrian crossings, trees, sky, and clouds.
- a luminance image and a distance image obtained when the subject shown in FIG. 2 is imaged by the image recognition apparatus 100 will be described.
- FIG. 3A shows a luminance image obtained when the image recognition apparatus 100 images the subject shown in FIG.
- the luminance image includes pixel data indicating luminance.
- the contrast of the subject in FIG. 3A corresponds to the amount of the luminance signal.
- the bright portion has a large luminance value, and the dark portion has a small luminance value.
- The following describes region division using the luminance image, that is, region boundary extraction, in the region determination unit 31.
- FIG. 3B is a diagram showing a luminance value along a dotted line 3B in FIG. 3A.
- FIG. 3C is a diagram illustrating a luminance value along a dotted line 3C in FIG. 3A.
- FIG. 3B and FIG. 3C plot the horizontal pixel positions along the dotted lines 3B and 3C in FIG. 3A on the horizontal axis and the luminance value on the vertical axis.
- Region division of the subject, that is, extraction of the boundary (edge), will be described using the luminance values shown in FIG. 3B as an example.
- From FIG. 3B, the boundary between the building and the ground, the boundary between the ground and the road, and the boundary between the ground and the tree can be extracted.
- A pixel at which a luminance value difference of 5 to 10% or more is observed between adjacent pixels is extracted as a boundary.
- This value is determined as appropriate depending on camera noise and the like, and is not limited to this range.
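The adjacent-pixel luminance test described above can be sketched along one scan line as follows. This is a hedged illustration in Python/NumPy: the 5% default and the function name are assumptions, since the patent deliberately leaves the threshold to the implementation:

```python
import numpy as np

def luminance_boundaries(row, rel_threshold=0.05):
    """Mark a boundary wherever adjacent pixels differ in luminance by
    rel_threshold (5-10% in the text) or more, relative to the darker pixel."""
    a = np.asarray(row, dtype=float)
    base = np.maximum(np.minimum(a[:-1], a[1:]), 1e-6)  # avoid divide-by-zero
    return np.abs(np.diff(a)) / base >= rel_threshold

# Along a scan line like the dotted line 3B, a jump from 100 to 140
# (a 40% change) is flagged; 1% sensor noise is not.
mask = luminance_boundaries(np.array([100, 101, 100, 140, 141]))
```

Applying the same test along every row (and column) of the luminance image yields the boundary map of FIG. 4.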
- FIG. 4 shows the result when the boundary of the subject is extracted using only the luminance image.
- It can be seen that the boundary between the person and the tree (the portion 3b circled in FIG. 3B) and the boundary between the person and the building (the portion 3c circled in FIG. 3C) cannot be extracted clearly.
- Therefore, the present disclosure proposes an image recognition apparatus 100 that additionally uses a distance image so that a subject boundary can be extracted accurately even when it is difficult to extract it from the luminance image alone.
- The following describes region division using the distance image, that is, region boundary extraction, in the region determination unit 31.
- FIG. 5A shows a distance image obtained when the image recognition apparatus 100 images the subject shown in FIG.
- the distance image includes pixel data indicating a distance value.
- The contrast of the subject in FIG. 5A corresponds to the distance: a bright part is far from the image recognition apparatus 100, and a dark part is close to it.
- FIG. 5B is a diagram showing the distance value along the dotted line 5B in FIG. 5A.
- FIG. 5C is a diagram illustrating a distance value along a dotted line 5C in FIG. 5A.
- FIG. 5B and FIG. 5C are graphs in which the horizontal pixel positions along the dotted lines 5B and 5C in FIG. 5A are plotted on the horizontal axis and the distance values on the vertical axis. Since the dotted lines 5B and 5C are at the same spatial positions as the dotted lines 3B and 3C described above, the pixel addresses on the image sensor are also the same, which makes it easy to use the luminance and distance values together.
- extraction of the boundary of the subject will be described using the distance value shown in FIG. 5B as an example.
- the boundary between the building and the ground, the boundary between the ground and the traffic light pillar, the boundary between the ground and the tree, and the boundary between the person and the tree can be extracted.
- A pixel is extracted as a boundary when the distance value changes by 5 to 10% or more between adjacent pixels, or when the difference (gradient) in distance value over several pixels including the adjacent pixels changes by about 50% or more. These values are determined as appropriate depending on camera noise and the like, and are not limited to these ranges.
- From FIG. 5B, it can be seen that, owing to the difference in distance value between the tree and the person, a boundary that could not be clearly extracted from the luminance image can now be extracted.
- Likewise, the boundary between the building and the person can be extracted based on the difference in their distance values.
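The two distance-based criteria above can be sketched along one scan line as follows. This is an illustrative NumPy sketch, not the patented implementation; the default thresholds and window size are assumptions, since the text states they depend on camera noise:

```python
import numpy as np

def distance_boundaries(row, adj_threshold=0.05, window=5, grad_threshold=0.5):
    """Flag boundaries along one scan line of the distance image when:
    (a) the distance changes by adj_threshold (5-10% in the text) between
        adjacent pixels, or
    (b) the change over `window` pixels reaches grad_threshold (~50%)."""
    a = np.asarray(row, dtype=float)
    base = np.maximum(np.minimum(a[:-1], a[1:]), 1e-6)  # avoid divide-by-zero
    edges = np.abs(np.diff(a)) / base >= adj_threshold  # criterion (a)
    for i in range(len(a) - window):                    # criterion (b)
        lo, hi = sorted((a[i], a[i + window]))
        if (hi - lo) / max(lo, 1e-6) >= grad_threshold:
            edges[i] = True
    return edges

# A tree at 10 m in front of a building at 20 m produces a clear edge,
# even where the luminance values are similar.
mask = distance_boundaries([10, 10, 10, 20, 20])
```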
- FIG. 6 shows the result when the boundary of the subject is extracted using only the distance image.
- On the other hand, with the distance image alone, boundaries of subjects whose distance values change only slightly, such as the road and the ground (small unevenness) or a crosswalk on the road, cannot be extracted.
- The region determination unit 31 integrates the boundary extraction results for the plurality of subjects obtained from the luminance image and from the distance image. That is, as shown in FIG. 7, by combining the boundaries extracted from the luminance image and from the distance image, the boundaries of the subjects can be extracted accurately.
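The integration step can be sketched as a per-pixel logical OR of the two boundary maps, so an edge missed by one modality (e.g. person against a tree of similar brightness) survives if the other modality finds it. The function name is illustrative; the patent describes the combination only at the level of FIG. 7:

```python
import numpy as np

def combine_boundaries(lum_edges, dist_edges):
    """OR-combine boundary maps; the two maps share one pixel grid because
    both images come from the same sensor (no optical-axis correction)."""
    lum_edges = np.asarray(lum_edges, dtype=bool)
    dist_edges = np.asarray(dist_edges, dtype=bool)
    assert lum_edges.shape == dist_edges.shape
    return lum_edges | dist_edges

lum = np.array([[False, True], [False, False]])    # e.g. crosswalk edge
dist = np.array([[False, False], [True, False]])   # e.g. person-vs-tree edge
both = combine_boundaries(lum, dist)
```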
- Because the camera unit 10 and the image generation unit 20 used in the image recognition apparatus 100 perform TOF distance measurement, the luminance image and the distance image are acquired on the same optical axis, so optical axis correction and the like are unnecessary. For this reason, this configuration is clearly superior not only in cost but also in recognition speed compared with acquiring the luminance image and the distance image with separate cameras.
- The region determination unit 31 of the image recognition apparatus 100 determines, for each of the plurality of divided regions, whether it is a first region, in which it is clear that a specific object does not exist, or a second region otherwise.
- In other words, the region determination unit 31 determines, for each of the plurality of regions, whether or not to calculate the feature amount necessary for identifying the object.
- These determination processes will be described using identification of a person as the specific object as an example.
- FIG. 8 is a diagram illustrating the first regions among the plurality of regions divided by the subject boundaries extracted using the distance image and the luminance image.
- That is, FIG. 8 shows, among the plurality of regions divided by the extracted subject boundaries, a region 8A in which the distance value is a certain value or more (here, the camera's limit measurable distance X m or more) and a region 8B in which the distance value changes while maintaining a constant gradient in the horizontal or vertical pixel direction.
- the region 8A is a region that does not include a pixel indicating a distance smaller than a predetermined value (for example, the limit measurable distance Xm) in the distance image.
- the region 8B is a region where the difference between adjacent pixels is uniform in a certain direction (for example, the vertical direction of the distance image).
- In a region such as the region 8A, where pixels beyond the camera's limit measurable distance continue at the upper part of the screen, the subject is the sky or a distant background, so the recognition processing can be omitted.
- Likewise, in a region such as the region 8B, the subject is the ground (including a road here), so the human recognition processing can be omitted as well. That is, whereas the feature amount necessary for the human identification process has conventionally been calculated over the entire image, the image recognition apparatus 100 uses the subject boundaries obtained from the luminance image and the distance image to restrict the calculation to regions where a person may exist.
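The two first-region criteria (8A: all pixels at or beyond the limit measurable distance; 8B: constant distance gradient along a pixel direction) can be sketched for a single region as follows. The limit distance and tolerance are illustrative values, and the function name is not from the patent:

```python
import numpy as np

def is_first_region(region, limit_distance):
    """True when the region can be excluded from recognition:
    8A: every pixel is at or beyond the camera's limit measurable distance
        (sky or distant background), or
    8B: the distance changes with a constant nonzero gradient along the
        vertical pixel direction (ground or road)."""
    region = np.asarray(region, dtype=float)
    if np.all(region >= limit_distance):          # criterion 8A
        return True
    diffs = np.diff(region, axis=0)               # row-to-row distance change
    if diffs.size and not np.allclose(diffs, 0.0) \
            and np.allclose(diffs, diffs.mean(), atol=1e-6):
        return True                               # criterion 8B
    return False

sky = np.full((3, 3), 100.0)                 # all beyond a 50 m limit -> 8A
ground = np.array([[10.0], [12.0], [14.0]])  # uniform gradient -> 8B
```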
- As described above, the image recognition apparatus 100 includes the camera unit 10 that generates a distance signal and a luminance signal using reflected light from a plurality of subjects, the image generation unit 20 that generates a distance image from the distance signal and a luminance image from the luminance signal, and the image recognition processing unit 30 that performs image recognition.
- The image recognition processing unit 30 divides the distance image and the luminance image each into a plurality of regions, determines for each of the plurality of regions whether it is the first region, in which it is clear that a specific object does not exist, or the second region other than the first region, and executes the image recognition process with the first region excluded from the plurality of regions.
- the image recognition processing unit 30 may perform the determination based on a distance value obtained from the distance image divided into the plurality of regions.
- the image recognition processing unit 30 may determine, as the first region, a region that does not include a pixel indicating a distance smaller than a predetermined value in the distance image among the plurality of regions.
- Thereby, the sky or a distant background can be included in the first region.
- the image recognition processing unit 30 may determine an area where the difference between adjacent pixels is uniform among the plurality of areas in the distance image as the first area.
- Thereby, the ground, roads, and the like can be included in the first region.
- the image recognition processing unit 30 may set a boundary between the adjacent pixels as a boundary between the plurality of regions when a difference between distance values obtained by adjacent pixels in the distance image is equal to or larger than a threshold value.
- the camera unit 10 may include an image sensor 14 that generates the distance signal and the luminance signal.
- Thereby, the distance image and the luminance image are generated by the same camera, so processing such as optical axis correction is unnecessary, and this configuration is advantageous not only in cost but also in recognition speed compared with acquiring the luminance image and the distance image with separate cameras.
- In the image recognition method executed by the image recognition apparatus 100, a distance image including pixel data indicating distance values and a luminance image including pixel data indicating luminance values are generated from an image captured by the camera; the distance image and the luminance image are each divided into a plurality of regions (S12); each of the plurality of regions is determined to be either a first region, in which it is clear that a specific object does not exist, or a second region otherwise (S13); and image recognition is executed with the first region excluded from the plurality of regions (S14).
- FIG. 9 is a flowchart of an image recognition method executed by the image recognition apparatus 100.
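The flow of FIG. 9 can be sketched end to end as follows. The region splitter, the first-region classifier, and the recognizer are stubbed out, and every name here is illustrative rather than from the patent; the point is only the control flow of dividing, classifying, and skipping first regions:

```python
# Pipeline sketch of FIG. 9: given regions (S12), classify each region
# (S13) and run recognition only on second regions (S14).
def recognize_image(distance_img, luminance_img, regions,
                    is_first_region, recognize_region):
    """Run recognition, skipping first regions (no specific object possible)."""
    results = []
    for region in regions:
        if is_first_region(distance_img, region):   # S13: first region?
            continue                                # excluded from recognition
        results.append(recognize_region(luminance_img, distance_img, region))
    return results
```

Because the feature amount is never computed for the skipped regions, the calculation time falls roughly in proportion to the area excluded, which is the source of the claimed speed advantage.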
- Each component, such as the light source control unit and the image recognition processing unit, may be configured by dedicated hardware or may be realized by executing a software program suitable for the component.
- Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
- A comprehensive or specific aspect of the present disclosure may be realized by a system, a method, an integrated circuit, a computer program, or a recording medium such as a computer-readable CD-ROM, or by any combination of these.
- the image recognition apparatus according to the present disclosure can be suitably used for, for example, a vehicle-mounted sensor.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention relates to an image recognition device (100) comprising: a camera unit (10) for generating a distance signal and a luminance signal using light reflected from a plurality of subjects; an image generation unit (20) configured to generate a distance image from the distance signal and a luminance image from the luminance signal; and an image recognition processing unit (30) for recognizing images. The image recognition processing unit (30) divides the distance image and the luminance image into a plurality of regions, determines whether each of the plurality of regions is a first region, in which it is clear that a specified object is not present, or a second region, in which this is not the case, and performs image recognition excluding the first regions from the plurality of regions.
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2017554783A JP6358552B2 (ja) | 2015-12-08 | 2016-12-01 | 画像認識装置および画像認識方法 |
| CN201680065134.9A CN108351964B (zh) | 2015-12-08 | 2016-12-01 | 图像识别装置及图像识别方法 |
| EP16872611.5A EP3389008A4 (fr) | 2015-12-08 | 2016-12-01 | Dispositif de reconnaissance d'images et procédé de reconnaissance d'images |
| US15/960,050 US10339405B2 (en) | 2015-12-08 | 2018-04-23 | Image recognition device and image recognition method |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2015239726 | 2015-12-08 | ||
| JP2015-239726 | 2015-12-08 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/960,050 Continuation US10339405B2 (en) | 2015-12-08 | 2018-04-23 | Image recognition device and image recognition method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2017098709A1 (fr) | 2017-06-15 |
Family
ID=59013962
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2016/005037 Ceased WO2017098709A1 (fr) | 2015-12-08 | 2016-12-01 | Dispositif de reconnaissance d'images et procédé de reconnaissance d'images |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US10339405B2 (fr) |
| EP (1) | EP3389008A4 (fr) |
| JP (1) | JP6358552B2 (fr) |
| CN (1) | CN108351964B (fr) |
| WO (1) | WO2017098709A1 (fr) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019188389A1 (fr) * | 2018-03-29 | 2019-10-03 | ソニー株式会社 | Dispositif de traitement de signal et procédé de traitement de signal, programme, et corps mobile |
| JP2019203959A (ja) * | 2018-05-22 | 2019-11-28 | キヤノン株式会社 | 撮像装置、及びその制御方法 |
| JP2021043633A (ja) * | 2019-09-10 | 2021-03-18 | 株式会社豊田中央研究所 | 物体識別装置、及び物体識別プログラム |
| JP2023106227A (ja) * | 2022-01-20 | 2023-08-01 | 京セラ株式会社 | 深度情報処理装置、深度分布推定方法、深度分布検出システム及び学習済みモデル生成方法 |
| JP2023170536A (ja) * | 2022-05-19 | 2023-12-01 | キヤノン株式会社 | 画像処理装置、画像処理方法、移動体、及びコンピュータプログラム |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2020049906A1 (fr) * | 2018-09-03 | 2020-03-12 | パナソニックIpマネジメント株式会社 | Dispositif de mesure de distance |
| JP6726324B1 (ja) * | 2019-01-17 | 2020-07-22 | オリンパス株式会社 | 撮像装置、画像合成方法、及び画像合成プログラム |
| KR20220010885A (ko) | 2020-07-20 | 2022-01-27 | 에스케이하이닉스 주식회사 | ToF 센서를 이용한 모션 인식 장치 및 이의 동작 방법 |
| JP7695102B2 (ja) * | 2021-01-07 | 2025-06-18 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法、プログラム、および記録媒体 |
| US11854239B2 (en) | 2021-01-07 | 2023-12-26 | Canon Kabushiki Kaisha | Image processing device, imaging device, image processing method, and recording medium |
| CN113111732B (zh) * | 2021-03-24 | 2024-08-23 | 浙江工业大学 | 一种高速服务区密集行人检测方法 |
| CN116456199B (zh) * | 2023-06-16 | 2023-10-03 | Tcl通讯科技(成都)有限公司 | 拍摄补光方法、装置、电子设备及计算机可读存储介质 |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPS55559B2 (fr) | 1977-05-16 | 1980-01-08 | ||
| JP2011128756A (ja) * | 2009-12-16 | 2011-06-30 | Fuji Heavy Ind Ltd | 物体検出装置 |
| WO2014073322A1 (fr) * | 2012-11-08 | 2014-05-15 | 日立オートモティブシステムズ株式会社 | Dispositif de détection d'objet et procédé de détection d'objet |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2010140613A1 (fr) | 2009-06-03 | 2010-12-09 | 学校法人中部大学 | Dispositif de détection d'objet |
| CN102122390B (zh) * | 2011-01-25 | 2012-11-14 | 于仕琪 | 基于深度图像进行人体检测的方法 |
| CN102737370B (zh) * | 2011-04-02 | 2015-07-01 | 株式会社理光 | 检测图像前景的方法及设备 |
| CN104427291B (zh) | 2013-08-19 | 2018-09-28 | 华为技术有限公司 | 一种图像处理方法及设备 |
| CN103714321B (zh) * | 2013-12-26 | 2017-09-26 | 苏州清研微视电子科技有限公司 | 基于距离图像和强度图像的驾驶员人脸定位系统 |
-
2016
- 2016-12-01 EP EP16872611.5A patent/EP3389008A4/fr not_active Withdrawn
- 2016-12-01 JP JP2017554783A patent/JP6358552B2/ja active Active
- 2016-12-01 CN CN201680065134.9A patent/CN108351964B/zh active Active
- 2016-12-01 WO PCT/JP2016/005037 patent/WO2017098709A1/fr not_active Ceased
-
2018
- 2018-04-23 US US15/960,050 patent/US10339405B2/en active Active
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPS55559B2 (fr) | 1977-05-16 | 1980-01-08 | ||
| JP2011128756A (ja) * | 2009-12-16 | 2011-06-30 | Fuji Heavy Ind Ltd | 物体検出装置 |
| WO2014073322A1 (fr) * | 2012-11-08 | 2014-05-15 | 日立オートモティブシステムズ株式会社 | Dispositif de détection d'objet et procédé de détection d'objet |
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP3389008A4 |
Cited By (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP7351293B2 (ja) | 2018-03-29 | 2023-09-27 | ソニーグループ株式会社 | 信号処理装置、および信号処理方法、プログラム、並びに移動体 |
| WO2019188389A1 (fr) * | 2018-03-29 | 2019-10-03 | ソニー株式会社 | Dispositif de traitement de signal et procédé de traitement de signal, programme, et corps mobile |
| CN111886626A (zh) * | 2018-03-29 | 2020-11-03 | 索尼公司 | 信号处理设备和信号处理方法、程序及移动体 |
| KR20200136905A (ko) * | 2018-03-29 | 2020-12-08 | 소니 주식회사 | 신호 처리 장치 및 신호 처리 방법, 프로그램, 그리고 이동체 |
| KR102859653B1 (ko) * | 2018-03-29 | 2025-09-16 | 소니그룹주식회사 | 신호 처리 장치 및 신호 처리 방법, 컴퓨터 판독가능 매체, 그리고 이동체 |
| JPWO2019188389A1 (ja) * | 2018-03-29 | 2021-03-18 | ソニー株式会社 | 信号処理装置、および信号処理方法、プログラム、並びに移動体 |
| US11860640B2 (en) | 2018-03-29 | 2024-01-02 | Sony Corporation | Signal processing device and signal processing method, program, and mobile body |
| JP2019203959A (ja) * | 2018-05-22 | 2019-11-28 | キヤノン株式会社 | 撮像装置、及びその制御方法 |
| JP7118737B2 (ja) | 2018-05-22 | 2022-08-16 | キヤノン株式会社 | 撮像装置、及びその制御方法 |
| JP7235308B2 (ja) | 2019-09-10 | 2023-03-08 | 株式会社豊田中央研究所 | 物体識別装置、及び物体識別プログラム |
| JP2021043633A (ja) * | 2019-09-10 | 2021-03-18 | 株式会社豊田中央研究所 | 物体識別装置、及び物体識別プログラム |
| JP2023106227A (ja) * | 2022-01-20 | 2023-08-01 | 京セラ株式会社 | 深度情報処理装置、深度分布推定方法、深度分布検出システム及び学習済みモデル生成方法 |
| JP7727563B2 (ja) | 2022-01-20 | 2025-08-21 | 京セラ株式会社 | 深度情報処理装置、深度分布推定方法、深度分布検出システム及び学習済みモデル生成方法 |
| JP2023170536A (ja) * | 2022-05-19 | 2023-12-01 | キヤノン株式会社 | 画像処理装置、画像処理方法、移動体、及びコンピュータプログラム |
| JP7483790B2 (ja) | 2022-05-19 | 2024-05-15 | キヤノン株式会社 | 画像処理装置、画像処理方法、移動体、及びコンピュータプログラム |
Also Published As
| Publication number | Publication date |
|---|---|
| CN108351964B (zh) | 2019-10-18 |
| EP3389008A1 (fr) | 2018-10-17 |
| JPWO2017098709A1 (ja) | 2018-04-05 |
| EP3389008A4 (fr) | 2018-11-21 |
| US10339405B2 (en) | 2019-07-02 |
| JP6358552B2 (ja) | 2018-07-18 |
| CN108351964A (zh) | 2018-07-31 |
| US20180247148A1 (en) | 2018-08-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP6358552B2 (ja) | 画像認識装置および画像認識方法 | |
| US9047518B2 (en) | Method for the detection and tracking of lane markings | |
| JP5747549B2 (ja) | 信号機検出装置及びプログラム | |
| Eum et al. | Enhancing light blob detection for intelligent headlight control using lane detection | |
| CN110431562B (zh) | 图像识别装置 | |
| US9965690B2 (en) | On-vehicle control device | |
| CN105825495B (zh) | 物体检测装置和物体检测方法 | |
| JP4263737B2 (ja) | 歩行者検知装置 | |
| US9460343B2 (en) | Method and system for proactively recognizing an action of a road user | |
| US9619895B2 (en) | Image processing method of vehicle camera and image processing apparatus using the same | |
| JP2018063680A (ja) | 交通信号認識方法および交通信号認識装置 | |
| JP2013232091A (ja) | 接近物体検知装置、接近物体検知方法及び接近物体検知用コンピュータプログラム | |
| JP5065172B2 (ja) | 車両灯火判定装置及びプログラム | |
| CN107891808A (zh) | 行车提醒方法、装置及车辆 | |
| CN114556412A (zh) | 物体识别装置 | |
| JP2014160408A (ja) | 道路標識認識装置 | |
| JP6483360B2 (ja) | 対象物認識装置 | |
| JP4528283B2 (ja) | 車両周辺監視装置 | |
| JP4732985B2 (ja) | 画像処理装置 | |
| JP2011103058A (ja) | 誤認識防止装置 | |
| JP4765113B2 (ja) | 車両周辺監視装置、車両、車両周辺監視用プログラム、車両周辺監視方法 | |
| KR101340014B1 (ko) | 위치 정보 제공 장치 및 방법 | |
| JP2010020557A (ja) | 画像処理装置及び画像処理方法 | |
| JP2020101935A (ja) | 画像認識装置 | |
| KR20140096576A (ko) | 차량 검지 장치 및 차량 검지 방법 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16872611 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2017554783 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |