
WO2024135722A1 - Imaging device and image data generation method - Google Patents

Imaging device and image data generation method Download PDF

Info

Publication number
WO2024135722A1
WO2024135722A1 (application PCT/JP2023/045661)
Authority
WO
WIPO (PCT)
Prior art keywords
image data
imaging
imaging element
element group
gain value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2023/045661
Other languages
French (fr)
Japanese (ja)
Inventor
拓弥 片岡
大樹 角谷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koito Manufacturing Co Ltd
Original Assignee
Koito Manufacturing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koito Manufacturing Co Ltd filed Critical Koito Manufacturing Co Ltd
Priority to JP2024566102A priority Critical patent/JPWO2024135722A1/ja
Publication of WO2024135722A1 publication Critical patent/WO2024135722A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11Arrangement of colour filter arrays [CFA]; Filter mosaics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50Control of the SSIS exposure
    • H04N25/51Control of the gain

Definitions

  • This disclosure relates to an imaging device and an image data generation method.
  • Patent Document 1 discloses a vehicle lamp that employs a variable light distribution device known as an ADB (Adaptive Driving Beam), which uses a camera to detect the presence or absence of a shading target, such as an oncoming vehicle, and can dim or turn off the light in the area corresponding to the shading target.
  • The present disclosure therefore aims to provide an imaging device and an image data generation method that have a simple configuration and make objects easy to detect.
  • An imaging device according to one aspect of the present disclosure includes: a plurality of imaging elements, each of which can be set to a gain value; and an image generating unit that generates first image data and second image data based on imaging information output from the imaging elements in one imaging operation.
  • The imaging elements include a first imaging element group set to a first gain value and a second imaging element group set to a second gain value greater than the first gain value.
  • The image generating unit generates the first image data based on first imaging information output from the first imaging element group, and generates the second image data based on second imaging information output from the second imaging element group; the second image data has a higher average luminance than the first image data.
  • An image data generation method according to an aspect of the present disclosure generates image data based on imaging information output, in one imaging operation, from a plurality of imaging elements each of which can be set to a gain value. The method generates first image data based on first imaging information output from a first imaging element group, among the imaging elements, set to a first gain value, and generates second image data based on second imaging information output from a second imaging element group, among the imaging elements, set to a second gain value greater than the first gain value; the second image data has a higher average luminance than the first image data.
  • The imaging device and image data generation method of the present disclosure can generate, with a simple configuration, image data from which objects are easy to detect.
  • FIG. 1 is a block diagram of an imaging device according to an embodiment.
  • FIG. 2 is a side view illustrating an example of an imaging element according to an embodiment.
  • FIG. 3 is a front view illustrating an example of an imaging unit according to the embodiment.
  • FIG. 4 is a flowchart showing a control form of the image data generating method according to the embodiment.
  • FIG. 5 is a schematic diagram showing an image data generating method according to an embodiment.
  • FIG. 6A is a diagram comparing an actual view ahead of a vehicle with image data generated by an image data generating device according to an embodiment.
  • FIG. 6B is a diagram comparing an actual view ahead of the vehicle with image data generated by the image data generating device according to the embodiment.
  • FIG. 6C is a diagram comparing an actual view ahead of the vehicle with image data generated by the image data generating device according to the embodiment.
  • FIG. 6D is a diagram comparing an actual view ahead of the vehicle with image data generated by the image data generating device according to the embodiment.
  • FIG. 7A is a diagram showing an actual view ahead of a vehicle and image data generated by an image data generating device according to an embodiment.
  • FIG. 7B is a diagram showing an actual view ahead of the vehicle and image data generated by the image data generating device according to the embodiment.
  • FIG. 7C is a diagram showing an actual view ahead of the vehicle and image data generated by the image data generating device according to the embodiment.
  • FIG. 7D is a diagram showing an actual view ahead of the vehicle and image data generated by the image data generating device according to the embodiment.
  • Fig. 1 is a block diagram of the imaging device 1 according to this embodiment.
  • the vehicle X has a vehicle control unit X1 and a vehicle lamp 3, and the imaging device 1 is provided in the vehicle lamp 3.
  • the imaging device 1 has an imaging unit 2, a signal processing unit 20, an image generation unit 30, and an image recognition unit 40.
  • the imaging unit 2 has a plurality of imaging elements 10 and converts light from the outside into a signal.
  • the signal processing unit 20 converts the signal received from the imaging unit 2 into imaging information 100 (see Fig. 5) required to generate an image.
  • the image generation unit 30 converts the imaging information 100 received from the signal processing unit 20 into image data.
  • the image recognition unit 40 detects an object from the image data generated by the image generation unit 30. Note that, although the image recognition unit 40 is illustrated as a part of the imaging device 1 in Fig. 1, the image recognition unit 40 does not have to be a part of the imaging device 1. For example, it may be a part of the vehicle control unit X1 mounted on the vehicle X.
  • the image sensor 10 has an on-chip lens 11, a color filter 12, a photodiode 13, and an amplifier 14.
  • the on-chip lens 11 is a lens directly mounted on the photodiode 13 on which the color filter 12 is mounted.
  • An optical image of a subject is incident on the photodiode 13 through the on-chip lens 11.
  • the color filter 12 blocks light other than a specific wavelength range and transmits only light in a specific wavelength range. As a result, only light in the specific wavelength range is incident on the photodiode 13.
  • each of the image sensors 10 has an amplifier 14 that can amplify or attenuate the intensity of a signal when converting photons incident on the photodiode 13 into a signal.
  • one amplifier 14 is provided for each photodiode 13, and each amplifier 14 can set the degree of amplification and attenuation (gain) of the signal of the photodiode 13 individually.
  • the imaging unit 2 is configured with a plurality of imaging elements 10 arranged in a lattice pattern, and one color filter 12 is arranged for each imaging element 10.
  • the imaging element 10 has a red imaging element 10r that receives red light, a blue imaging element 10b that receives blue light, and a green imaging element 10g that receives green light.
  • the green imaging element 10g that receives green light has a first green imaging element 10g1 and a second green imaging element 10g2.
  • the red imaging element 10r is provided with a color filter R that transmits red light
  • the blue imaging element 10b is provided with a color filter B that transmits blue light
  • the first green imaging element 10g1 and the second green imaging element 10g2 are provided with a color filter G that transmits green light.
  • the imaging section 2 in this embodiment has a Bayer array in which one unit formed of four imaging elements 10, namely a red imaging element 10r, a blue imaging element 10b, a first green imaging element 10g1, and a second green imaging element 10g2, is repeatedly arranged.
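The repeating four-element unit described above can be sketched in code. This is an illustrative NumPy model, not part of the patent disclosure; the labels G1, R, B, and G2 are assumed names for the elements 10g1, 10r, 10b, and 10g2.

```python
import numpy as np

# Assumed labels for one Bayer unit, matching the arrangement in FIGS. 3 and 5:
# row 1: first green element 10g1 ("G1"), red element 10r ("R")
# row 2: blue element 10b ("B"), second green element 10g2 ("G2")
BAYER_UNIT = np.array([["G1", "R"],
                       ["B", "G2"]])

def bayer_mosaic(rows, cols):
    """Tile the 2x2 unit over a sensor of rows x cols elements (both even)."""
    return np.tile(BAYER_UNIT, (rows // 2, cols // 2))
```

For a 4x4 sensor, `bayer_mosaic(4, 4)` reproduces the G/R/B layout labeled G11, R12, B21, G22, and so on in FIG. 5.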
  • the imaging element 10 is divided into a first imaging element group 10A and a second imaging element group 10B.
  • the first imaging element group 10A is composed of the second green imaging element 10g2.
  • the second imaging element group 10B is composed of imaging elements that are not included in the first imaging element group 10A, and in this embodiment, is composed of a red imaging element 10r, a blue imaging element 10b, and a first green imaging element 10g1.
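The grouping into 10A and 10B can be expressed as boolean masks over the mosaic. A minimal sketch under the assumption that, as in FIG. 3, the second green element 10g2 sits at the odd-row, odd-column position of each unit when 0-indexed; the function and variable names are ours, not the patent's.

```python
import numpy as np

def group_masks(rows, cols):
    """Masks marking the first element group 10A (second green elements
    only) and the second element group 10B (all remaining elements)."""
    r = np.arange(rows)[:, None]
    c = np.arange(cols)[None, :]
    first = (r % 2 == 1) & (c % 2 == 1)  # positions of 10g2, 0-indexed
    return first, ~first
```

The first group thus covers one quarter of the elements and the second group the remaining three quarters, matching one 10g2 per four-element unit.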
  • FIG. 4 is a flowchart showing the processing of the signal processing unit 20 and the image generating unit 30 according to this embodiment.
  • When the imaging unit 2 receives light from an object, it outputs imaging information 100 corresponding to that light.
  • the signal processing unit 20 receives the imaging information 100 from the imaging unit 2 (STEP 1). Thereafter, the signal processing unit 20 extracts the first imaging information 100A acquired by the first imaging element group 10A from the imaging information 100 received from the imaging unit 2 (STEP 2) and transmits it to the image generating unit 30.
  • the image generating unit 30 generates first image data 200A based on the first imaging information 100A received from the signal processing unit 20 (STEP 3).
  • the signal processing unit 20 extracts the second imaging information 100B acquired by the second imaging element group 10B from the imaging information 100 received from the imaging element 10 (STEP 4) and transmits it to the image generating unit 30.
  • the image generation unit 30 generates second image data 200B based on the second imaging information 100B (STEP 5).
  • FIG. 5 is a schematic diagram showing a process in which the first image data 200A and the second image data 200B are generated from the imaging information 100.
  • The imaging information 100 transmitted from the imaging unit 2 is shown as data arranged in four rows and four columns in FIG. 5, but in reality the amount of data corresponds to the number of imaging elements 10.
  • Each piece of data is labeled with a letter representing the color of the color filter provided on the imaging element 10 that output the data, followed by a two-digit number corresponding to its row number and column number.
  • the imaging information 100 corresponds to the arrangement of the imaging elements 10 shown in FIG. 3, and the grouping of the first imaging element group 10A and the second imaging element group 10B is the same as in FIG. 3.
  • For example, the imaging information 100 based on the signal emitted by the imaging element 10 arranged in the first row, first column in FIG. 3 corresponds to G11 in the imaging information 100 shown in FIG. 5. Likewise, the imaging information 100 based on the signals emitted by the imaging elements 10 in the first row, first column, the first row, second column, and the second row, first column (all belonging to the second imaging element group 10B) corresponds to G11, R12, and B21, respectively, in FIG. 5.
  • When extracting the first imaging information 100A from the imaging information 100, the signal processing unit 20 extracts G22, G24, G42, and G44, which correspond to the data received from the imaging elements 10 belonging to the first imaging element group 10A.
  • The first imaging information 100A generated in this manner is output as data of two rows and two columns in the example of Fig. 5.
  • the image generating unit 30 generates first image data 200A based on the first imaging information 100A generated by the signal processing unit 20.
  • the image generating unit 30 generates a single pixel from each piece of data in the first imaging information 100A.
  • the first imaging information 100A is composed only of signals received from the second green imaging element 10g2, which is provided with the same green color filter, so the first image data 200A is output as a grayscale image.
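STEP 2 and STEP 3 above amount to slicing the second-green samples out of the raw mosaic. A hedged sketch (our own minimal implementation, assuming the raw data is a 2m x 2n NumPy array laid out as in FIG. 5):

```python
import numpy as np

def extract_first_image(raw):
    """First imaging information 100A: the G22-style samples at the
    odd-row, odd-column positions (0-indexed) of each 2x2 unit.
    For a (2m, 2n) mosaic this yields an m x n grayscale image."""
    return raw[1::2, 1::2]
```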
  • To generate the second imaging information 100B, the signal processing unit 20 extracts G11, R12, and B21, which correspond to the data received from the second imaging element group 10B.
  • From the extracted G11, R12, and B21, the signal processing unit 20 generates GRB11, one piece of the data constituting the second imaging information 100B.
  • Similarly, it extracts G13, R14, and B23 to generate GRB12, extracts G31, R32, and B41 to generate GRB21, and extracts G33, R34, and B43 to generate GRB22.
  • the signal processing unit 20 generates second imaging information 100B consisting of GRB 11 , GRB 12 , GRB 21 , and GRB 22 and arranged in two rows and two columns.
  • the image generating unit 30 uses the above-mentioned method to generate the second image data 200B based on the second imaging information 100B generated by the signal processing unit 20.
  • the data GRB11 can include color information because it is generated based on the data G11 of the first green imaging element 10g1 provided with a green color filter, the data R12 of the red imaging element 10r provided with a red color filter, and the data B21 of the blue imaging element 10b provided with a blue color filter.
  • the second image data 200B generated by such pixels is output as a color image or a grayscale image. In this embodiment, the second image data 200B is output as a color image.
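The per-unit GRB combination described above can be sketched similarly. This is an assumed vectorized implementation, not the patent's own code; it performs no interpolation, only the per-unit grouping of one G, one R, and one B sample into a pixel.

```python
import numpy as np

def extract_second_image(raw):
    """Second image data 200B: for each 2x2 unit of a (2m, 2n) mosaic,
    stack its G (top-left), R (top-right), and B (bottom-left) samples
    into one RGB pixel, yielding an m x n x 3 color image."""
    g = raw[0::2, 0::2]  # first green element 10g1
    r = raw[0::2, 1::2]  # red element 10r
    b = raw[1::2, 0::2]  # blue element 10b
    return np.stack([r, g, b], axis=-1)
```

Each output pixel corresponds to one GRB datum such as GRB11, built from G11, R12, and B21.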
  • the gain set in the imaging elements 10 belonging to the second imaging element group 10B is set higher than the gain set in the imaging elements 10 belonging to the first imaging element group 10A.
  • Accordingly, the second image data 200B generated from the signals of the second imaging element group 10B has a higher average luminance, and tends to be a brighter image, than the first image data 200A generated from the signals of the first imaging element group 10A.
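The luminance relationship can be illustrated numerically. The gain values below are arbitrary assumptions; the point is only that, for the same scene under a linear sensor model, the higher-gain group produces the brighter image.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 50.0, size=(8, 8))   # dim scene, arbitrary units

first_gain, second_gain = 1.0, 4.0            # illustrative gain values
img_a = np.clip(scene * first_gain, 0, 255)   # first image data (darker)
img_b = np.clip(scene * second_gain, 0, 255)  # second image data (brighter)
```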
  • From the generated image data, the image recognition unit 40 identifies light sources as a head lamp (hereinafter referred to as HL) or a tail lamp (hereinafter referred to as TL).
  • Specifically, the image recognition unit 40 identifies a group of pixels as an HL when the group has a luminance equal to or greater than a predetermined value and is white, and as a TL when it has a luminance equal to or greater than a predetermined value and is red.
  • the method of identifying HL or TL from an image by the image recognition unit 40 is not limited to the above embodiment.
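The identification rule just described can be sketched as a toy classifier. The threshold and the white/red tests below are our assumptions for illustration; the patent does not specify concrete values.

```python
def classify_light_spot(rgb, threshold=200):
    """Classify a bright pixel group by color: white -> head lamp (HL),
    red -> tail lamp (TL), anything else -> None."""
    r, g, b = rgb
    if min(r, g, b) >= threshold:
        return "HL"  # bright and white
    if r >= threshold and g < threshold and b < threshold:
        return "TL"  # bright and red
    return None
```

A saturated white spot such as (255, 255, 255) would classify as HL, and a bright red spot such as (240, 60, 60) as TL.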
  • the imaging device 1 of this embodiment generates first image data 200A and second image data 200B with different average luminances in a single shooting. How the HL of an oncoming vehicle and the TL of a leading vehicle are detected using such an imaging device 1 will be explained using Figures 6A to 6D and 7A to 7D.
  • the light spot that appears in image data 200 and corresponds to HL is referred to as light spot Z1
  • the light spot that appears in image data 200 and corresponds to TL is referred to as light spot Z2.
  • FIGS. 6A to 6C show a scene where there is an oncoming vehicle ahead of vehicle X.
  • FIG. 6A is a diagram showing the state ahead of vehicle X.
  • FIG. 6B is a diagram showing first image data 200A1 generated in the situation shown in FIG. 6A.
  • FIG. 6C is a diagram showing second image data 200B1 generated in the situation shown in FIG. 6A.
  • the first image data 200A1 is a grayscale image
  • the second image data 200B1 is a color image.
  • The average luminance of the second image data 200B1 is higher than the average luminance of the first image data 200A1. Note that average luminance means the average luminance of all the pixels contained in the image.
  • the imaging device 1 of this embodiment generates the first image data 200A1 in Fig. 6B at the same time as generating the second image data 200B1.
  • This first image data 200A1 is generated based on data obtained from the imaging elements 10 with a low gain setting. Therefore, when the first image data 200A1 is generated, the light emitted from the HL is not too bright, and the shape of the HL appears clearly in the first image data 200A1; the image recognition unit 40 can thus identify the HL from Fig. 6B. As shown in Figs. 6B and 6C, the average luminance of the second image data 200B1 is higher than that of the first image data 200A1 even though the images were captured at the same moment. The image recognition unit 40 can identify the light spot Z1 from each of the first image data 200A1 and the second image data 200B1.
  • FIGS. 7A to 7C show a scene where there is a leading vehicle ahead of vehicle X.
  • FIG. 7A is a diagram showing the state ahead of vehicle X.
  • FIG. 7B is a diagram showing first image data 200A2 generated in the situation shown in FIG. 7A.
  • FIG. 7C is a diagram showing second image data 200B2 generated in the situation shown in FIG. 7A.
  • the image recognition unit 40 cannot identify the TL of the vehicle in front from the first image data 200A2 shown in Figure 7B, but can identify the TL from the second image data 200B2 shown in Figure 7C.
  • the imaging device 1 of this embodiment generates the second image data 200B2 of Fig. 7C at the same time as generating the first image data 200A2.
  • This second image data 200B2 is generated based on data obtained from the imaging element 10 with a high gain setting. Therefore, when generating the second image data 200B2, the light emitted from the TL is not too dark, and the shape of the TL appears clearly as a light spot Z2 in the second image data 200B2.
  • the image recognition unit 40 can identify the TL from Fig. 7C.
  • the image recognition unit 40 in the present disclosure recognizes HL and TL in the above-mentioned manner.
  • the image recognition unit 40 transmits the recognition result to a vehicle control unit X1 provided in the vehicle X and a lamp control unit 3b provided in the vehicle lamp 3, and the vehicle control unit X1 and the lamp control unit 3b change the lighting state of the vehicle lamp 3 based on the recognition result of the image recognition unit 40.
  • "changing the lighting state of the vehicle lamp 3" means changing the light distribution pattern to be formed.
  • Figs. 6D and 7D show how the lighting state of the light source 3a is changed based on the result of the image recognition unit 40 recognizing the HL and TL.
  • In Figs. 6D and 7D, the light distribution pattern formed by the vehicle lamp 3 is shown as a hatched area.
  • the image recognition unit 40 is able to identify the light spot Z1 (HL) from the first image data 200A1, so the lamp control unit 3b controls the light source 3a to block or dim the portion where the light spot Z1 is located, thereby forming the light distribution pattern Hi1 as shown in the figure.
  • the image recognition unit 40 is able to identify the light spot Z2 (TL) from the second image data 200B2, so the lamp control unit 3b controls the light source 3a to block or dim the area where the light spot Z2 is located, forming the light distribution pattern Hi2 as shown in the figure.
  • the lighting state of the vehicle lamp 3 incorporating the imaging device 1 according to the present disclosure can be changed as described above.
  • In the imaging device and image data generation method according to the present disclosure, first image data is generated based on the first imaging information output from the first imaging element group, and second image data with a higher average luminance is generated based on the second imaging information output from the second imaging element group.
  • High average luminance here can also be rephrased as a high average pixel illuminance or a high average pixel gamma value.
  • Conventionally, to eliminate blown highlights and crushed shadows, a camera equipped with HDR (High Dynamic Range) technology, which combines images captured with different exposure times to widen the dynamic range, has been used; however, such a camera must be high-performance and is therefore costly.
  • The inventors therefore considered generating image data that makes it easy to recognize objects without using HDR technology, by generating two types of image data with different brightness from the imaging information produced by a single imaging operation.
  • In the imaging device 1, the imaging elements 10 are divided into a first imaging element group 10A and a second imaging element group 10B; first image data 200A is generated from the signals of the first imaging element group 10A, and second image data 200B from the signals of the second imaging element group 10B.
  • Since the second imaging element group 10B is set to a higher gain than the first imaging element group 10A, the second image data 200B generated from it can capture dark objects more brightly than the first image data 200A.
  • As a result, first image data and second image data with different average luminances can be generated simultaneously in a single imaging operation.
  • By using two sets of image data with different average luminances, the luminance distribution covered by the two images is expanded. This effectively widens the dynamic range, making it possible to recognize dark objects while still recognizing bright objects.
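The dynamic-range argument can be made concrete with a rough linear model; the full scale, code step, and gain values are illustrative assumptions, not from the disclosure.

```python
def representable_range(gain, full_scale=255.0, lsb=1.0):
    """Scene-luminance interval one gain setting can encode: below
    lsb/gain the signal falls under one code; above full_scale/gain
    it clips."""
    return lsb / gain, full_scale / gain

lo_a, hi_a = representable_range(1.0)  # low gain: keeps bright detail
lo_b, hi_b = representable_range(4.0)  # high gain: keeps dark detail
# Together the pair covers lo_b .. hi_a, a wider interval than either alone.
```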
  • the first image data 200A and the second image data 200B may both be grayscale images. Even if the first image data 200A and the second image data 200B are both grayscale images, the average luminance of the first image data 200A and the second image data 200B is different, so that bright objects can be recognized as bright objects, and it is also possible to recognize dark objects that were previously difficult to recognize using image data generated by a single capture.
  • the image sensor 10 has a color filter 12, and the color filter 12 has a Bayer array composed of three colors, red, green, and blue, but the present disclosure is not limited to this.
  • a white filter may be included, or a filter that enables the image sensor to detect light other than visible light may be included.
  • a configuration in which no color filter is provided at all may also be used.
  • the signal processing unit 20 and the image generating unit 30 create the first image data 200A from the imaging information 100 and then create the second image data 200B, but the present disclosure is not limited to this. In the present disclosure, it is sufficient that the first image data 200A and the second image data 200B are generated from imaging information obtained by a single imaging operation, and the first image data and the second image data may be generated sequentially or simultaneously.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

An imaging device comprising a plurality of imaging elements (10) for each of which a gain value can be set and an image generation unit (30) that generates first image data (200A) and second image data (200B) on the basis of imaging information (100) output from the imaging elements (10) in one image capture, wherein: the imaging elements (10) include a first imaging element group (10A) set to a first gain value and a second imaging element group (10B) set to a second gain value larger than the first gain value; the image generation unit (30) generates the first image data (200A) on the basis of the first imaging information (100A) output from the first imaging element group (10A), and generates the second image data (200B) on the basis of the second imaging information (100B) output from the second imaging element group (10B); and the second image data (200B) has higher average brightness than the first image data (200A).

Description

Imaging device and image data generating method

This disclosure relates to an imaging device and an image data generation method.

Patent Document 1 discloses a vehicle lamp that employs a variable light distribution device known as an ADB (Adaptive Driving Beam), which uses a camera to detect the presence or absence of a shading target, such as an oncoming vehicle, and can dim or turn off the light in the area corresponding to the shading target.

International Publication No. 2021/070783

In a vehicle lamp such as that of Patent Document 1, when an image is captured by a camera to detect an object, a camera (image sensor) equipped with HDR (High Dynamic Range) technology is used, which combines images captured with different exposure times to widen the dynamic range and thereby eliminate blown highlights and crushed shadows in the captured image.
However, realizing HDR technology requires a high-performance camera, which increases the cost of the vehicle lamp.

The present disclosure therefore aims to provide an imaging device and an image data generation method that have a simple configuration and make objects easy to detect.

An imaging device according to one aspect of the present disclosure includes:
a plurality of imaging elements, each of which can be set to a gain value; and
an image generating unit that generates first image data and second image data based on imaging information output from the imaging elements in one imaging operation, wherein
the imaging elements include
a first imaging element group set to a first gain value, and
a second imaging element group set to a second gain value greater than the first gain value, and
the image generating unit
generates the first image data based on first imaging information output from the first imaging element group, and
generates the second image data based on second imaging information output from the second imaging element group, the second image data having a higher average luminance than the first image data.

An image data generation method according to an aspect of the present disclosure is a method for generating image data based on imaging information output, in one imaging operation, from a plurality of imaging elements each of which can be set to a gain value, the method comprising:
generating first image data based on first imaging information output from a first imaging element group, among the imaging elements, set to a first gain value; and
generating second image data based on second imaging information output from a second imaging element group, among the imaging elements, set to a second gain value greater than the first gain value, the second image data having a higher average luminance than the first image data.

According to the imaging device and image data generation method of the present disclosure, it is possible to generate, with a simple configuration, image data from which objects are easy to detect.

FIG. 1 is a block diagram of an imaging device according to an embodiment.
FIG. 2 is a side view illustrating an example of an imaging element according to the embodiment.
FIG. 3 is a front view illustrating an example of an imaging unit according to the embodiment.
FIG. 4 is a flowchart showing a control form of the image data generating method according to the embodiment.
FIG. 5 is a schematic diagram showing the image data generating method according to the embodiment.
FIGS. 6A to 6D are diagrams comparing an actual view ahead of a vehicle with image data generated by the image data generating device according to the embodiment.
FIGS. 7A to 7D are diagrams showing an actual view ahead of the vehicle and image data generated by the image data generating device according to the embodiment.

[本開示の実施形態の詳細]
 本開示の実施形態に係る撮像装置1の具体例を、以下に図面を参照しつつ説明する。なお、本開示はこれらの例示に限定されるものではなく、請求の範囲によって示され、請求の範囲と均等の意味および範囲内でのすべての変更が含まれることが意図される。
[Details of the embodiment of the present disclosure]
Specific examples of the imaging device 1 according to the embodiment of the present disclosure will be described below with reference to the drawings. Note that the present disclosure is not limited to these examples, but is defined by the claims, and is intended to include all modifications within the meaning and scope equivalent to the claims.

 図1~図3を用いて、本実施形態に係る撮像装置1について詳述する。図1は、実施形態に係る撮像装置1のブロック図である。図1においては、車両Xが車両制御部X1と車両用灯具3を有しており、撮像装置1は車両用灯具3に設けられている。図1に示すように、撮像装置1は撮像部2と信号処理部20と画像生成部30と画像認識部40を有する。撮像部2は複数の撮像素子10を有し、外部からの光を信号に変換する。信号処理部20は撮像部2から受信した信号を、画像を生成するために必要な撮像情報100(図5参照)に変換する。画像生成部30は信号処理部20から受信した撮像情報100を画像データに変換する。画像認識部40は画像生成部30が生成した画像データから対象物を検出する。なお、図1においては画像認識部40を撮像装置1の一部として図示しているが、画像認識部40は撮像装置1の一部でなくともよい。例えば車両Xに搭載される車両制御部X1の一部であってもよい。 The imaging device 1 according to this embodiment will be described in detail with reference to Figs. 1 to 3. Fig. 1 is a block diagram of the imaging device 1 according to this embodiment. In Fig. 1, the vehicle X has a vehicle control unit X1 and a vehicle lamp 3, and the imaging device 1 is provided in the vehicle lamp 3. As shown in Fig. 1, the imaging device 1 has an imaging unit 2, a signal processing unit 20, an image generation unit 30, and an image recognition unit 40. The imaging unit 2 has a plurality of imaging elements 10 and converts light from the outside into a signal. The signal processing unit 20 converts the signal received from the imaging unit 2 into imaging information 100 (see Fig. 5) required to generate an image. The image generation unit 30 converts the imaging information 100 received from the signal processing unit 20 into image data. The image recognition unit 40 detects an object from the image data generated by the image generation unit 30. Note that, although the image recognition unit 40 is illustrated as a part of the imaging device 1 in Fig. 1, the image recognition unit 40 does not have to be a part of the imaging device 1. For example, it may be a part of the vehicle control unit X1 mounted on the vehicle X.

 図2は実施形態に係る撮像素子10の一例を示す側面図である。図2に示すように、撮像素子10はオンチップレンズ11とカラーフィルタ12とフォトダイオード13と増幅部14を有する。オンチップレンズ11は、カラーフィルタ12が搭載されたフォトダイオード13に直接搭載されているレンズである。被写体の光学像はオンチップレンズ11を通してフォトダイオード13に入射する。カラーフィルタ12は、特定の波長域以外の光を遮光し、特定の波長域の光のみを透過させる。これにより、特定の波長域の光のみがフォトダイオード13に入射する。フォトダイオード13はオンチップレンズ11とカラーフィルタ12を通して入射した光子(フォトン)をカウントし、信号値として出力する。
 また、撮像素子10はそれぞれ、フォトダイオード13に入射した光子を信号に変換する際に、信号の強度を増幅または減衰させることができる増幅部14を有している。本実施形態において、一つのフォトダイオード13につき一つの増幅部14が設けられており、各々の増幅部14は個別にフォトダイオード13の信号の増幅及び減衰の程度(ゲイン)を設定可能である。
FIG. 2 is a side view showing an example of an image sensor 10 according to the embodiment. As shown in FIG. 2, the image sensor 10 has an on-chip lens 11, a color filter 12, a photodiode 13, and an amplifier 14. The on-chip lens 11 is a lens directly mounted on the photodiode 13 on which the color filter 12 is mounted. An optical image of a subject is incident on the photodiode 13 through the on-chip lens 11. The color filter 12 blocks light other than a specific wavelength range and transmits only light in the specific wavelength range. As a result, only light in the specific wavelength range is incident on the photodiode 13. The photodiode 13 counts photons incident through the on-chip lens 11 and the color filter 12, and outputs the counted number of photons as a signal value.
Furthermore, each of the image sensors 10 has an amplifier 14 that can amplify or attenuate the intensity of a signal when converting photons incident on the photodiode 13 into a signal. In this embodiment, one amplifier 14 is provided for each photodiode 13, and each amplifier 14 can set the degree of amplification and attenuation (gain) of the signal of the photodiode 13 individually.
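The per-photodiode amplification described above can be sketched as a simple element-wise scaling: each photon count is multiplied by the gain set for its own amplifier. The following Python sketch is illustrative only; the function and variable names are not taken from the patent.

```python
import numpy as np

def apply_per_pixel_gain(photon_counts, gain_map):
    """Scale each photodiode's photon count by its own amplifier gain.

    Each element of gain_map plays the role of one amplifier 14:
    a value above 1 amplifies the signal, a value below 1 attenuates it.
    """
    photon_counts = np.asarray(photon_counts, dtype=float)
    gain_map = np.asarray(gain_map, dtype=float)
    if photon_counts.shape != gain_map.shape:
        raise ValueError("one gain per photodiode is required")
    return photon_counts * gain_map

# The same photon count yields different signal values under
# different per-pixel gain settings.
counts = np.full((2, 2), 100.0)
gains = np.array([[1.0, 4.0],
                  [4.0, 1.0]])
signal = apply_per_pixel_gain(counts, gains)
```

Because the gains are independent per element, two photodiodes that receive identical light can still output different signal intensities, which is the property the embodiment exploits below.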

 図3は実施形態に係る撮像部2の一例を示す正面図である。図3に示すように、撮像部2は複数の撮像素子10が格子状に配列されて構成されており、一つの撮像素子10に対して一つのカラーフィルタ12が配置されている。本実施形態において撮像素子10は赤色光を受光する赤撮像素子10r、青色光を受光する青撮像素子10b、緑色光を受光する緑撮像素子10gを有する。なお、本実施形態において緑色光を受光する緑撮像素子10gは第一緑撮像素子10g1と第二緑撮像素子10g2を有している。赤撮像素子10rには赤色光を透過するカラーフィルタRが設けられ、青撮像素子10bには青色光を透過するカラーフィルタBが設けられ、第一緑撮像素子10g1及び第二緑撮像素子10g2には緑色光を透過するカラーフィルタGが設けられる。
 本実施形態における撮像部2は、赤撮像素子10r、青撮像素子10b、第一緑撮像素子10g1、第二緑撮像素子10g2の四つの撮像素子10で構成される一単位が繰り返し配列されるベイヤー配列を有する。
FIG. 3 is a front view showing an example of the imaging unit 2 according to the embodiment. As shown in FIG. 3, the imaging unit 2 is configured with a plurality of imaging elements 10 arranged in a lattice pattern, and one color filter 12 is arranged for each imaging element 10. In this embodiment, the imaging element 10 has a red imaging element 10r that receives red light, a blue imaging element 10b that receives blue light, and a green imaging element 10g that receives green light. In this embodiment, the green imaging element 10g that receives green light has a first green imaging element 10g1 and a second green imaging element 10g2. The red imaging element 10r is provided with a color filter R that transmits red light, the blue imaging element 10b is provided with a color filter B that transmits blue light, and the first green imaging element 10g1 and the second green imaging element 10g2 are provided with a color filter G that transmits green light.
The imaging section 2 in this embodiment has a Bayer array in which one unit formed of four imaging elements 10, namely a red imaging element 10r, a blue imaging element 10b, a first green imaging element 10g1, and a second green imaging element 10g2, is repeatedly arranged.
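The repeating 2×2 Bayer unit described above can be expressed as a small position-to-filter mapping. Indexing rows and columns from 0 and placing the first green element at the top-left of each unit (consistent with the G11/R12/B21/G22 layout used later for Fig. 5) is an assumption for illustration.

```python
def bayer_color(row, col):
    """Return the filter type at (row, col) of the repeating 2x2 unit:
    first green (G1) and red (R) on even rows, blue (B) and second
    green (G2) on odd rows (0-indexed)."""
    if row % 2 == 0:
        return "G1" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G2"

# One 2x2 unit of the array sketched in Fig. 3 (0-indexed positions).
unit = [[bayer_color(r, c) for c in range(2)] for r in range(2)]
```

Because the unit repeats, the same mapping gives the filter type anywhere in the lattice, e.g. every odd-row/odd-column position holds a second green element.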

 撮像素子10は、第一撮像素子群10Aと第二撮像素子群10Bとに群分けされている。本実施形態において第一撮像素子群10Aは第二緑撮像素子10g2で構成される。第二撮像素子群10Bは第一撮像素子群10Aに含まれない撮像素子で構成され、本実施形態においては赤撮像素子10r、青撮像素子10b、第一緑撮像素子10g1で構成される。 The imaging element 10 is divided into a first imaging element group 10A and a second imaging element group 10B. In this embodiment, the first imaging element group 10A is composed of the second green imaging element 10g2. The second imaging element group 10B is composed of imaging elements that are not included in the first imaging element group 10A, and in this embodiment, is composed of a red imaging element 10r, a blue imaging element 10b, and a first green imaging element 10g1.

 次に図4及び図5を用いて本実施形態に係る撮像装置1の動作について説明する。図4は、実施形態に係る信号処理部20と画像生成部30の処理を示すフローチャートである。撮像部2が対象物からの光を受光すると、撮像部は、光に応じた撮像情報100を出力する。信号処理部20は撮像部2から撮像情報100を受信する(STEP1)。その後、信号処理部20は、撮像部2から受信した撮像情報100から第一撮像素子群10Aが取得した第一撮像情報100Aを抽出し(STEP2)、画像生成部30に送信する。画像生成部30は信号処理部20から受信した第一撮像情報100Aを基に第一画像データ200Aを生成する(STEP3)。その後、信号処理部20は撮像素子10から受信した撮像情報100から第二撮像素子群10Bが取得した第二撮像情報100Bを抽出し(STEP4)、画像生成部30に送信する。画像生成部30は第二撮像情報100Bを基に第二画像データ200Bを生成する(STEP5)。 Next, the operation of the imaging device 1 according to this embodiment will be described with reference to Figures 4 and 5. Figure 4 is a flowchart showing the processing of the signal processing unit 20 and the image generating unit 30 according to this embodiment. When the imaging unit 2 receives light from an object, the imaging unit outputs imaging information 100 according to the light. The signal processing unit 20 receives the imaging information 100 from the imaging unit 2 (STEP 1). Thereafter, the signal processing unit 20 extracts the first imaging information 100A acquired by the first imaging element group 10A from the imaging information 100 received from the imaging unit 2 (STEP 2) and transmits it to the image generating unit 30. The image generating unit 30 generates first image data 200A based on the first imaging information 100A received from the signal processing unit 20 (STEP 3). Thereafter, the signal processing unit 20 extracts the second imaging information 100B acquired by the second imaging element group 10B from the imaging information 100 received from the imaging element 10 (STEP 4) and transmits it to the image generating unit 30. The image generation unit 30 generates second image data 200B based on the second imaging information 100B (STEP 5).

 図5は撮像情報100から第一画像データ200Aおよび第二画像データ200Bが生成される過程を示した概略図である。なお図5では説明のため、撮像部2(撮像素子10)から送信される撮像情報100を四行四列のデータとして示しているが、実際には撮像情報100は撮像素子10の数に対応したデータである。また、図5では、各データに、そのデータを出力した撮像素子10に設けられたカラーフィルタの色を表す符号と、行番号・列番号にあたる二桁の数字が付されている。また、撮像情報100は図3に示した撮像素子10の並びと対応しており、第一撮像素子群10A、第二撮像素子群10Bの群分けの態様は図3と同様である。そのため、例えば、図3に示す撮像素子10のうち一行一列目に配置された撮像素子10が発した信号を基とする撮像情報100は、図5に示す撮像情報100において、G11に対応する。また、例えば図3に示す撮像素子10の内、第二撮像素子群10Bに属する撮像素子10である一行一列目、一行二列目、二行一列目に配置された撮像素子10が発した信号を基とする撮像情報100は図5においてそれぞれG11、R12、B21と対応する。 FIG. 5 is a schematic diagram showing a process in which the first image data 200A and the second image data 200B are generated from the imaging information 100. For the sake of explanation, the imaging information 100 transmitted from the imaging unit 2 (imaging element 10) is shown as data arranged in four rows and four columns in FIG. 5, but in reality, the imaging information 100 corresponds to the number of imaging elements 10. In FIG. 5, each piece of data is given a code representing the color of the color filter provided on the imaging element 10 that output the data, and a two-digit number corresponding to its row and column numbers. In addition, the imaging information 100 corresponds to the arrangement of the imaging elements 10 shown in FIG. 3, and the grouping into the first imaging element group 10A and the second imaging element group 10B is the same as in FIG. 3. Therefore, for example, the imaging information 100 based on a signal emitted by the imaging element 10 arranged in the first row and first column of the imaging elements 10 shown in FIG. 3 corresponds to G11 in the imaging information 100 shown in FIG. 5. Furthermore, for example, among the imaging elements 10 shown in FIG. 3, the imaging information 100 based on signals emitted by the imaging elements 10 arranged in the first row, first column, the first row, second column, and the second row, first column, which are imaging elements 10 belonging to the second imaging element group 10B, corresponds to G11, R12, and B21, respectively, in FIG. 5.

 信号処理部20が撮像情報100から第一撮像情報100Aを抽出する際、信号処理部20は第一撮像素子群10Aに属する撮像素子10から受信したデータにあたるG22、G24、G42、G44を抽出する。こうして生成した第一撮像情報100Aは、図5の例では二行二列のデータとして出力される。 When the signal processing unit 20 extracts the first imaging information 100A from the imaging information 100, the signal processing unit 20 extracts G22, G24, G42, and G44, which correspond to the data received from the imaging elements 10 belonging to the first imaging element group 10A. The first imaging information 100A generated in this manner is output as data of two rows and two columns in the example of Fig. 5.
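Indexing rows and columns from 0, the second-green samples of the first imaging element group sit at odd rows and odd columns, so extracting G22, G24, G42, and G44 (1-indexed notation) reduces to a strided slice. The NumPy sketch below uses placeholder values and is an assumed implementation, not the patent's.

```python
import numpy as np

# Stand-in for the 4x4 imaging information 100 of Fig. 5.
mosaic = np.arange(16).reshape(4, 4)

# First imaging information 100A: the second-green samples at odd
# rows and odd columns (G22, G24, G42, G44 in 1-indexed notation),
# yielding a 2x2 grid that becomes the grayscale first image data.
first_info = mosaic[1::2, 1::2]
```

The slice keeps every second element starting from index 1 in both axes, which is why a 4×4 input yields the 2×2 first imaging information shown in Fig. 5.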

 画像生成部30は信号処理部20で生成された第一撮像情報100Aを基に第一画像データ200Aを生成する。画像生成部30は、第一撮像情報100Aの各々のデータから、それぞれ単一の画素を生成する。本実施形態において、第一撮像情報100Aはいずれも同じ緑色のカラーフィルタが設けられた第二緑撮像素子10g2より受信した信号のみで構成されるため、第一画像データ200Aはグレースケールの画像として出力される。 The image generating unit 30 generates first image data 200A based on the first imaging information 100A generated by the signal processing unit 20. The image generating unit 30 generates a single pixel from each piece of data in the first imaging information 100A. In this embodiment, the first imaging information 100A is composed only of signals received from the second green imaging element 10g2, which is provided with the same green color filter, so the first image data 200A is output as a grayscale image.

 信号処理部20が撮像情報100から第二撮像情報100Bを抽出する際、信号処理部20は第二撮像素子群10Bから受信したデータにあたるG11、R12、B21を抽出する。信号処理部20は、抽出したG11、R12、B21から、第二撮像情報100Bを構成するデータであるGRB11を生成する。GRB11の生成と同様にして、信号処理部20はG13、R14、B23を抽出してGRB12を生成し、G31、R32、B41を抽出してGRB21を生成し、G33、R34、B43を抽出してGRB22を生成する。こうして信号処理部20は、GRB11、GRB12、GRB21、GRB22で構成された二行二列の第二撮像情報100Bを生成する。 When the signal processing unit 20 extracts the second imaging information 100B from the imaging information 100, the signal processing unit 20 extracts G11, R12, and B21, which correspond to the data received from the second imaging element group 10B. From the extracted G11, R12, and B21, the signal processing unit 20 generates GRB11, which is a piece of data constituting the second imaging information 100B. In the same manner as in generating GRB11, the signal processing unit 20 extracts G13, R14, and B23 to generate GRB12, extracts G31, R32, and B41 to generate GRB21, and extracts G33, R34, and B43 to generate GRB22. In this manner, the signal processing unit 20 generates the second imaging information 100B consisting of GRB11, GRB12, GRB21, and GRB22 and arranged in two rows and two columns.

 上述した方法により、画像生成部30は信号処理部20で生成された第二撮像情報100Bを基に第二画像データ200Bを生成する。データGRB11は、緑色のカラーフィルタが設けられた第一緑撮像素子10g1のデータG11、赤色のカラーフィルタが設けられた赤撮像素子10rのデータR12、青色のカラーフィルタが設けられた青撮像素子10bのデータB21に基づき生成されるため、色情報を含むことができる。このような画素により生成された第二画像データ200Bはカラー画像またはグレースケール画像として出力される。本実施形態では、第二画像データ200Bはカラー画像として出力される。 Using the above-mentioned method, the image generating unit 30 generates the second image data 200B based on the second imaging information 100B generated by the signal processing unit 20. The data GRB11 can include color information because it is generated based on the data G11 of the first green imaging element 10g1 provided with a green color filter, the data R12 of the red imaging element 10r provided with a red color filter, and the data B21 of the blue imaging element 10b provided with a blue color filter. The second image data 200B generated by such pixels is output as a color image or a grayscale image. In this embodiment, the second image data 200B is output as a color image.
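A minimal sketch of this second-image path: within each 2×2 unit, the first-green (top-left), red (top-right), and blue (bottom-left) samples are taken together as one color datum GRBij, halving the resolution. Stacking the three samples as RGB channels is an assumption made for illustration — the patent only states that each GRBij is generated from the G, R, and B data of its unit.

```python
import numpy as np

def build_second_image(mosaic):
    """Combine each 2x2 unit's G (top-left), R (top-right) and
    B (bottom-left) samples into one color pixel (GRB_ij)."""
    g = mosaic[0::2, 0::2]  # first green: even rows, even columns
    r = mosaic[0::2, 1::2]  # red: even rows, odd columns
    b = mosaic[1::2, 0::2]  # blue: odd rows, even columns
    return np.stack([r, g, b], axis=-1)  # one RGB triple per unit

mosaic = np.arange(16).reshape(4, 4)   # placeholder sensor data
second_info = build_second_image(mosaic)
```

With the 4×4 example, the result is a 2×2 grid of RGB triples (GRB11, GRB12, GRB21, GRB22), from which a color image can be rendered.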

 本実施形態において、第二撮像素子群10Bに属する撮像素子10において設定されるゲインは第一撮像素子群10Aに属する撮像素子10において設定されるゲインよりも高く設定されている。これにより、第二撮像素子群10Bから送信される信号を用いて生成した第二画像データ200Bは、第一撮像素子群10Aから送信される信号を用いて生成した第一画像データ200Aと比べ、平均輝度が高い。つまり、同じ対象物を撮影した画像であっても、第二素子群10Bのゲインが第一素子群10Aのゲインよりも高いため、第二素子群10Bから生成した第二画像データ200Bは第一素子群10Aから生成した第一画像データ200Aよりも明るめの画像となりやすい。 In this embodiment, the gain set in the imaging elements 10 belonging to the second imaging element group 10B is set higher than the gain set in the imaging elements 10 belonging to the first imaging element group 10A. As a result, the second image data 200B generated using the signal transmitted from the second imaging element group 10B has a higher average luminance than the first image data 200A generated using the signal transmitted from the first imaging element group 10A. In other words, even when the images are of the same object, the second image data 200B generated from the second element group 10B tends to be a brighter image than the first image data 200A generated from the first element group 10A because the gain of the second element group 10B is higher than the gain of the first element group 10A.

 画像認識部40は、例えば、所定値以上の輝度を有する画素の集合体が特定の大きさおよび特定の形状をなしており、特定の領域に現れたときに、該画素がヘッドランプ(以下、HLと称する)またはテールランプ(以下、TLと称する)であると特定する。あるいは、画像認識部40は、カラー画像である第二画像データ200Bにおいては、所定値以上の輝度を有する画素の集合体が白色であるときに該画素がHLであり、所定値以上の輝度を有する画素が赤色であるときに該画素がTLであると特定する。なお、画像認識部40による画像からHLあるいはTLを特定する手法は上記実施形態に限られない。 For example, when a group of pixels having a brightness equal to or greater than a predetermined value has a specific size and shape and appears in a specific area, the image recognition unit 40 identifies the pixel as a head lamp (hereinafter referred to as HL) or a tail lamp (hereinafter referred to as TL). Alternatively, in the second image data 200B, which is a color image, the image recognition unit 40 identifies the pixel as an HL when a group of pixels having a brightness equal to or greater than a predetermined value is white, and identifies the pixel as a TL when the pixel having a brightness equal to or greater than a predetermined value is red. Note that the method of identifying HL or TL from an image by the image recognition unit 40 is not limited to the above embodiment.
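The color-based heuristic described above can be sketched as follows: a sufficiently bright blob in the color second image data is classified as HL when roughly white and as TL when dominantly red. The brightness threshold and channel ratios below are hypothetical values chosen for illustration only.

```python
def classify_light_spot(mean_rgb, brightness_threshold=200):
    """Classify a bright blob from the color image data as a headlamp
    ('HL', roughly white) or a taillamp ('TL', dominantly red).
    Returns None when the blob is too dim or neither color fits."""
    r, g, b = mean_rgb
    if max(r, g, b) < brightness_threshold:
        return None  # below the predetermined brightness
    if min(r, g, b) > 0.8 * max(r, g, b):
        return "HL"  # all channels comparable: white light
    if r > 1.5 * max(g, b):
        return "TL"  # red clearly dominant
    return None
```

In practice the recognizer would also check the size, shape, and position of the blob, as the paragraph above notes; this sketch covers only the color test.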

 このように、本実施形態の撮像装置1は、一回の撮影で、互いに平均輝度の異なる第一画像データ200Aと第二画像データ200Bとを生成する。このような撮像装置1を用いて対向車のHLと、前走車のTLを検出する様子を図6A~図6Dと図7A~図7Dを用いて説明する。 In this way, the imaging device 1 of this embodiment generates first image data 200A and second image data 200B with different average luminances in a single shooting. How the HL of an oncoming vehicle and the TL of a leading vehicle are detected using such an imaging device 1 will be explained using Figures 6A to 6D and 7A to 7D.

 以下の説明においては画像データ200に現れるHLに対応する光点を光点Z1、画像データ200に現れるTLに対応する光点を光点Z2とする。 In the following explanation, the light spot that appears in image data 200 and corresponds to HL is referred to as light spot Z1, and the light spot that appears in image data 200 and corresponds to TL is referred to as light spot Z2.

 図6A~図6Cは、車両Xの前方に対向車がいる場面を示す。図6Aは、車両Xの前方の様子を示す図である。図6Bは、図6Aの状況で生成された第一画像データ200A1を示す図である。図6Cは、図6Aに示す状況で生成された第二画像データ200B1を示す図である。上述したように、第一画像データ200A1はグレースケール画像であり、第二画像データ200B1はカラー画像である。また、第二画像データ200B1の平均輝度は、第一画像データ200A1の平均輝度よりも高い。なお、平均輝度とは、画像中に含まれる全画素の輝度の平均を意味する。 FIGS. 6A to 6C show a scene where there is an oncoming vehicle ahead of the vehicle X. FIG. 6A is a diagram showing the view ahead of the vehicle X. FIG. 6B is a diagram showing the first image data 200A1 generated in the situation shown in FIG. 6A. FIG. 6C is a diagram showing the second image data 200B1 generated in the situation shown in FIG. 6A. As described above, the first image data 200A1 is a grayscale image, and the second image data 200B1 is a color image. Furthermore, the average luminance of the second image data 200B1 is higher than the average luminance of the first image data 200A1. Note that the average luminance means the average of the luminance of all pixels contained in the image.

 図6Cに示すように、第二画像データ200B1において、対向車のHLに対応する位置周辺の画素は白飛びを起こしている。第二画像データ200B1は、ゲインが高く設定された撮像素子10から得られたデータを基に生成しているからである。第二画像データ200B1においては、HLから出射される光が明るすぎるため、HLの周囲も明るい領域とされ、白飛びしている。このため、第二画像データ200B1にはHLの形状が明瞭に現れず、画像認識部40は図6CからはHLを特定することができない。
 ところが、本実施形態の撮像装置1は、第二画像データ200B1の生成と同時に、図6Bの第一画像データ200A1も生成している。この第一画像データ200A1は、ゲインが低く設定された撮像素子10から得られたデータを基に生成している。このため、第一画像データ200A1を生成する際にはHLから出射された光が明るすぎるということはなく、第一画像データ200A1にはHLの形状が明瞭に現れる。このため、画像認識部40は図6BからHLを特定することができる。
 なお、図6Bと図6Cとを比較した場合、同じ瞬間に撮影した画像であっても、第二画像データ200B1の平均輝度は、第一画像データ200A1の平均輝度よりも高い。画像認識部40は第一画像データ200A1、第二画像データ200B1それぞれから光点Z1を識別することができる。
As shown in Fig. 6C, in the second image data 200B1, pixels around the position corresponding to the HL of the oncoming vehicle are blown out. This is because the second image data 200B1 is generated based on data obtained from the imaging element 10 with a high gain setting. In the second image data 200B1, the light emitted from the HL is too bright, so the area around the HL is also bright and blown out. For this reason, the shape of the HL does not appear clearly in the second image data 200B1, and the image recognition unit 40 cannot identify the HL from Fig. 6C.
However, the imaging device 1 of this embodiment generates the first image data 200A1 in Fig. 6B at the same time as generating the second image data 200B1. This first image data 200A1 is generated based on data obtained from the imaging element 10 with a low gain setting. Therefore, when generating the first image data 200A1, the light emitted from HL is not too bright, and the shape of HL appears clearly in the first image data 200A1. Therefore, the image recognition unit 40 can identify HL from Fig. 6B.
Comparing Figs. 6B and 6C, the average luminance of the second image data 200B1 is higher than the average luminance of the first image data 200A1 even though the images were taken at the same moment. The image recognition unit 40 can identify the light spot Z1 from each of the first image data 200A1 and the second image data 200B1.

 図7A~図7Cは、車両Xの前方に前走車がいる場面を示す。図7Aは、車両Xの前方の様子を示す図である。図7Bは、図7Aに示す状況で生成された第一画像データ200Aを示す図である。図7Cは、図7Aに示す状況で生成された第二画像データ200Bを示す図である。 FIGS. 7A to 7C show a scene where a vehicle ahead of vehicle X is present. FIG. 7A is a diagram showing the state ahead of vehicle X. FIG. 7B is a diagram showing first image data 200A generated in the situation shown in FIG. 7A. FIG. 7C is a diagram showing second image data 200B generated in the situation shown in FIG. 7A.

 図7A~図7Cに示した例では図6A~図6Cに示した例とは異なり、画像認識部40は、図7Bで示す第一画像データ200A2から前走車のTLが特定できない。一方、図7Cで示す第二画像データ200B2からはTLが特定できる。
 図7Bに示すように、第一画像データ200A2において、前走車のTLは暗すぎて、ゲインが低く設定された撮像素子10から得られたデータでは、TLに対応する光点Z2が現れない。このため、画像認識部40は図7BからはTLを特定することができない。
 ところが、本実施形態の撮像装置1は、第一画像データ200A2の生成と同時に、図7Cの第二画像データ200B2も生成している。この第二画像データ200B2は、ゲインが高く設定された撮像素子10から得られたデータを基に生成している。このため、第二画像データ200B2を生成する際にはTLから出射された光が暗すぎるということはなく、第二画像データ200B2にはTLの形状が光点Z2として明瞭に現れる。画像認識部40は図7CからTLを特定することができる。
In the example shown in Figs. 7A to 7C, unlike the example shown in Figs. 6A to 6C, the image recognition unit 40 cannot identify the TL of the vehicle in front from the first image data 200A2 shown in Fig. 7B, but can identify the TL from the second image data 200B2 shown in Fig. 7C.
As shown in Fig. 7B, in the first image data 200A2, the TL of the vehicle ahead is too dark, and the light spot Z2 corresponding to the TL does not appear in the data obtained from the image sensor 10 with the gain set to low. Therefore, the image recognition unit 40 cannot identify the TL from Fig. 7B.
However, the imaging device 1 of this embodiment generates the second image data 200B2 of Fig. 7C at the same time as generating the first image data 200A2. This second image data 200B2 is generated based on data obtained from the imaging element 10 with a high gain setting. Therefore, when generating the second image data 200B2, the light emitted from the TL is not too dark, and the shape of the TL appears clearly as a light spot Z2 in the second image data 200B2. The image recognition unit 40 can identify the TL from Fig. 7C.

 本開示における画像認識部40は上述した形態により、HLとTLを認識する。画像認識部40は認識した結果を、車両Xに設けられた車両制御部X1や車両用灯具3に設けられる灯具制御部3bに送信し、車両制御部X1や灯具制御部3bは、画像認識部40が認識した結果を基に車両用灯具3の点灯状態を変更する。ここで、「車両用灯具3の点灯状態を変更する」とは、形成する配光パターンを変更することである。
 画像認識部40がHLとTLを認識した結果を基に光源3aの点灯状態を変更する態様を図6D、図7Dに示す。図6D、図7Dには、車両用灯具3によって形成される配光パターンをハッチングした領域で示している。
 図6Dにおいて、画像認識部40は第一画像データ200A1から光点Z1(HL)を特定できているので、灯具制御部3bは光点Z1が存在する部分を遮光または減光するように光源3aを制御し、図示したような配光パターンHi1を形成する。
 図7Dにおいて、画像認識部40は第二画像データ200B2から光点Z2(TL)を特定できているので、灯具制御部3bは光点Z2が存在する部分を遮光または減光するように光源3aを制御し、図示したような配光パターンHi2を形成する。
 本開示における撮像装置1を搭載した車両用灯具3は以上のように点灯状態を変更することができる。
The image recognition unit 40 in the present disclosure recognizes HL and TL in the above-mentioned manner. The image recognition unit 40 transmits the recognition result to a vehicle control unit X1 provided in the vehicle X and a lamp control unit 3b provided in the vehicle lamp 3, and the vehicle control unit X1 and the lamp control unit 3b change the lighting state of the vehicle lamp 3 based on the recognition result of the image recognition unit 40. Here, "changing the lighting state of the vehicle lamp 3" means changing the light distribution pattern to be formed.
Figs. 6D and 7D show a manner in which the image recognition unit 40 changes the lighting state of the light source 3a based on the result of recognizing HL and TL. In Fig. 6D and Fig. 7D, the light distribution pattern formed by the vehicle lamp 3 is shown by a hatched area.
In FIG. 6D, the image recognition unit 40 is able to identify the light spot Z1 (HL) from the first image data 200A1, so the lamp control unit 3b controls the light source 3a to block or dim the portion where the light spot Z1 is located, thereby forming the light distribution pattern Hi1 as shown in the figure.
In Figure 7D, the image recognition unit 40 is able to identify the light spot Z2 (TL) from the second image data 200B2, so the lamp control unit 3b controls the light source 3a to block or dim the area where the light spot Z2 is located, forming the light distribution pattern Hi2 as shown in the figure.
The lighting state of the vehicle lamp 3 incorporating the imaging device 1 according to the present disclosure can be changed as described above.
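As a rough sketch of "changing the lighting state", the snippet below models a segmented high beam in which any segment containing a detected light spot (the HL spot Z1 or the TL spot Z2) is switched off or dimmed, as in the patterns Hi1 and Hi2. The segment count, field of view, and angle convention are assumptions made for illustration.

```python
def update_beam_pattern(num_segments, spot_angles_deg, fov_deg=40.0):
    """Return per-segment on/off states for a segmented high beam.

    Segments whose angular range contains a detected light spot are
    turned off (shaded or dimmed). Spot angles are measured from the
    optical axis, spanning -fov/2 to +fov/2 across the segments.
    """
    pattern = [True] * num_segments
    seg_width = fov_deg / num_segments
    for angle in spot_angles_deg:
        idx = int((angle + fov_deg / 2) / seg_width)
        if 0 <= idx < num_segments:
            pattern[idx] = False  # shade or dim this segment
    return pattern

# A light spot on the optical axis darkens only the central segment.
pattern = update_beam_pattern(8, [0.0])
```

Spots detected in the first image data (HL) and in the second image data (TL) would simply contribute their angles to the same list, so both are shaded in one pass.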

 上述したように本開示に係る撮像装置によれば、一回の撮像で撮像素子から出力されるデータの内、第一撮像素子群から出力される第一撮像情報に基づき第一画像データを生成し、第二撮像素子群から出力される第二撮像情報に基づき、平均輝度が高い第二画像データを生成する。なお、ここで言う「平均輝度が高い」とは、画素の照度の平均が高い、あるいは画素のγ値の平均が高いとも言い換えることができる。 As described above, according to the imaging device of the present disclosure, among the data output from the imaging elements in one imaging operation, first image data is generated based on the first imaging information output from the first imaging element group, and second image data with high average luminance is generated based on the second imaging information output from the second imaging element group. Note that "high average luminance" here can also be rephrased as a high average pixel illuminance or a high average pixel gamma value.

 従来、撮影画像の白飛びや黒つぶれを解消するためには異なる露光時間で撮像した画像を合成してダイナミックレンジを広げるHDR(High Dynamic Range)を用いることが一般的であった。しかし、HDRをカメラに搭載する場合、異なる露光時間で撮像する関係上システムが複雑になり、コストが上昇する可能性がある。 In the past, the common way to eliminate blown-out highlights and crushed shadows in captured images was to use HDR (High Dynamic Range), which expands the dynamic range by synthesizing images captured with different exposure times. However, when HDR is installed in a camera, the system becomes complicated because images are captured with different exposure times, which can increase costs.

 そこで発明者は、一度の撮像により生成される撮像情報から明るさの異なる二種類の画像データを生成することによって、HDRの技術を用いることなく、対象物を認識しやすい画像データを生成することを検討した。
 本開示に係る撮像装置によれば、撮像装置1が有する撮像素子10は第一撮像素子群10Aと第二撮像素子群10Bに分けられ、第一撮像素子群10Aが生成する信号から第一画像データ200Aが生成され、第二撮像素子群10Bが生成する信号から第二画像データ200Bが生成される。
 また、第二撮像素子群10Bは第一撮像素子群10Aと比べ、ゲインが高く設定されているので、第二撮像素子群10Bから生成される第二画像データ200Bは第一画像データ200Aと比べて、暗い物体を明るく写すことが可能である。
The inventor therefore considered generating image data that makes it easy to recognize objects without using HDR technology, by generating two types of image data with different brightness from the imaging information generated by a single imaging session.
According to the imaging device of the present disclosure, the imaging element 10 of the imaging device 1 is divided into a first imaging element group 10A and a second imaging element group 10B, and first image data 200A is generated from a signal generated by the first imaging element group 10A, and second image data 200B is generated from a signal generated by the second imaging element group 10B.
In addition, since the second imaging element group 10B has a higher gain set than the first imaging element group 10A, the second image data 200B generated from the second imaging element group 10B is capable of capturing dark objects brighter than the first image data 200A.

 そのため、本開示の構成によれば、一回の撮影で平均輝度が異なる第一画像データと第二画像データを同時に生成することができる。平均輝度が異なる画像データを用いることにより、輝度分布を二枚の画像を使って広げることができる。これによりダイナミックレンジが実質的に広がり、暗い物体も認識可能としつつ、明るい物体も認識可能となった。このように本開示の構成によれば、簡易な構成で対象物を検知しやすい撮像装置を提供することができる。 Therefore, according to the configuration of the present disclosure, first image data and second image data with different average luminance can be generated simultaneously with one shooting. By using image data with different average luminance, the luminance distribution can be expanded using two images. This effectively expands the dynamic range, making it possible to recognize bright objects while also being able to recognize dark objects. In this way, according to the configuration of the present disclosure, it is possible to provide an imaging device that is easily capable of detecting objects with a simple configuration.

 ここまで本開示の実施形態について説明をしたが、本開示の態様は上述した実施形態に限られない。例えば、第一画像データ200Aと第二画像データ200Bは共にグレースケール画像であってもよい。第一画像データ200Aと第二画像データ200Bが共にグレースケール画像であったとしても、第一画像データ200Aと第二画像データ200Bの平均輝度は異なるので、明るい物体は明るい物体として認識可能であり、従来、一度の撮像により生成された画像データでは認識が難しかった暗い物体の認識も可能である。 Though the embodiments of the present disclosure have been described above, the aspects of the present disclosure are not limited to the above-described embodiments. For example, the first image data 200A and the second image data 200B may both be grayscale images. Even if the first image data 200A and the second image data 200B are both grayscale images, the average luminance of the first image data 200A and the second image data 200B is different, so that bright objects can be recognized as bright objects, and it is also possible to recognize dark objects that were previously difficult to recognize using image data generated by a single capture.

 また、上述した実施形態において、撮像素子10はカラーフィルタ12を有し、カラーフィルタ12は赤、緑、青の三色で構成されたベイヤー配列である例を示したが、本開示はこれに限られない。例えば白色のフィルタが含まれていてもよいし、撮像素子が可視光以外の光を検出可能とするフィルタが含まれていてもよい。また、そもそもカラーフィルタが設けられない構成であってもよい。 In addition, in the above-described embodiment, the image sensor 10 has a color filter 12, and the color filter 12 has a Bayer array composed of three colors, red, green, and blue, but the present disclosure is not limited to this. For example, a white filter may be included, or a filter that enables the image sensor to detect light other than visible light may be included. In addition, a configuration in which no color filter is provided at all may also be used.

 また、上述した実施形態において、信号処理部20及び画像生成部30は撮像情報100から第一画像データ200Aを作成した後に第二画像データ200Bを作成する例を示したが、本開示はこれに限られない。本開示において第一画像データ200Aと第二画像データ200Bは一度の撮像によって得られる撮像情報から生成されていればよく、第一画像データおよび第二画像データは順番に生成されても、同時に生成されてもよい。 In addition, in the above-described embodiment, an example has been shown in which the signal processing unit 20 and the image generating unit 30 create the first image data 200A from the imaging information 100 and then create the second image data 200B, but the present disclosure is not limited to this. In the present disclosure, it is sufficient that the first image data 200A and the second image data 200B are generated from imaging information obtained by a single imaging operation, and the first image data and the second image data may be generated sequentially or simultaneously.

 以上、本開示の実施形態について説明をしたが、本開示の技術的範囲が本実施形態の説明によって限定的に解釈されるべきではないのは言うまでもない。本実施形態は単なる一例であって、特許請求の範囲に記載された発明の範囲内において、様々な実施形態の変更が可能であることが当業者によって理解されるところである。本開示の技術的範囲は特許請求の範囲に記載された発明の範囲及びその均等の範囲に基づいて定められるべきである。 The above describes an embodiment of the present disclosure, but it goes without saying that the technical scope of the present disclosure should not be interpreted as being limited by the description of this embodiment. This embodiment is merely an example, and it will be understood by those skilled in the art that various modifications of the embodiment are possible within the scope of the invention described in the claims. The technical scope of the present disclosure should be determined based on the scope of the invention described in the claims and its equivalents.

 本出願は、2022年12月21日出願の日本特許出願2022-204744号に基づくものであり、その内容はここに参照として取り込まれる。 This application is based on Japanese Patent Application No. 2022-204744, filed on December 21, 2022, the contents of which are incorporated herein by reference.

Claims (8)

 各々ゲイン値が設定可能な複数の撮像素子と、
 一回の撮像で前記撮像素子から出力される撮像情報に基づき、第一画像データおよび第二画像データを生成する画像生成部と、を備えた撮像装置であって、
 前記撮像素子は、
  第一ゲイン値に設定された第一撮像素子群と、
  前記第一ゲイン値よりも大きな第二ゲイン値に設定された第二撮像素子群を有し、
 前記画像生成部は、
  前記第一撮像素子群から出力される第一撮像情報に基づき前記第一画像データを生成し、
  前記第二撮像素子群から出力される第二撮像情報に基づき前記第二画像データを生成し、前記第二画像データは前記第一画像データよりも平均輝度が高い、撮像装置。
A plurality of image pickup elements each capable of setting a gain value;
an image generating unit that generates first image data and second image data based on imaging information output from the imaging element in one imaging operation,
The imaging element includes:
a first imaging element group set to a first gain value;
a second imaging element group set to a second gain value greater than the first gain value;
The image generating unit includes:
generating the first image data based on first imaging information output from the first imaging element group;
An imaging device that generates the second image data based on second imaging information output from the second imaging element group, the second image data having a higher average luminance than the first image data.
 一回の撮像で、各々ゲイン値が設定可能な複数の撮像素子から出力される撮像情報に基づき画像データを生成する画像データ生成方法であって、
 前記撮像素子のうち、第一ゲイン値に設定された第一撮像素子群から出力される第一撮像情報に基づき第一画像データを生成し、
 前記撮像素子のうち、前記第一ゲイン値よりも大きな第二ゲイン値に設定された第二撮像素子群から出力される第二撮像情報に基づき、第二画像データを生成し、前記第二画像データは前記第一画像データよりも平均輝度が高い、画像データ生成方法。
An image data generating method for generating image data, in one imaging operation, based on imaging information output from a plurality of imaging elements each having a settable gain value, the method comprising:
generating first image data based on first imaging information output from a first imaging element group, among the imaging elements, that is set to a first gain value; and
generating second image data based on second imaging information output from a second imaging element group, among the imaging elements, that is set to a second gain value greater than the first gain value, the second image data having a higher average luminance than the first image data.
 一回の撮像で、各々ゲイン値が設定可能な複数の撮像素子から出力される撮像情報に基づき画像データを生成する画像データ生成装置であって、
 前記撮像素子のうち、第一ゲイン値に設定された第一撮像素子群から出力される第一撮像情報に基づき第一画像データを生成し、
 前記撮像素子のうち、前記第一ゲイン値よりも大きな第二ゲイン値に設定された第二撮像素子群から出力される第二撮像情報に基づき、第二画像データを生成し、前記第二画像データは前記第一画像データよりも平均輝度が高い、画像データ生成装置。
An image data generating device that, in one imaging operation, generates image data based on imaging information output from a plurality of imaging elements each having a settable gain value, the device being configured to:
generate first image data based on first imaging information output from a first imaging element group, among the imaging elements, that is set to a first gain value; and
generate second image data based on second imaging information output from a second imaging element group, among the imaging elements, that is set to a second gain value greater than the first gain value, the second image data having a higher average luminance than the first image data.
 前記撮像素子は、着色されたカラーフィルタと、前記カラーフィルタを透過した光を受光するフォトダイオードと、を有し、
 前記第一撮像素子群に設けられる前記カラーフィルタは同一の色であり、
 前記第二撮像素子群に設けられる前記カラーフィルタは、赤、青、緑の少なくとも3種類である、請求項1に記載の撮像装置。
the imaging element includes a color filter and a photodiode that receives light transmitted through the color filter;
the color filters provided in the first imaging element group are of the same color;
The imaging device according to claim 1 , wherein the color filters provided in the second imaging element group are of at least three types: red, blue, and green.
 前記第一画像データはグレースケールであり、前記第二画像データはカラーである、請求項1に記載の撮像装置。 The imaging device according to claim 1, wherein the first image data is grayscale and the second image data is color.
 前記第一画像データと前記第二画像データはグレースケールである、請求項1に記載の撮像装置。 The imaging device according to claim 1, wherein the first image data and the second image data are grayscale.
 請求項1に記載の撮像装置によって生成された前記第一画像データから検出した光点、および前記第二画像データから検出した光点に基づいて点灯状態を変更する、車両用灯具。 A vehicle lamp that changes its lighting state based on a light spot detected from the first image data generated by the imaging device according to claim 1 and a light spot detected from the second image data.
 前記第一画像データからヘッドランプの光点を検出し、前記第二画像データからテールランプの光点を検出する、請求項7に記載の車両用灯具。 The vehicle lamp according to claim 7, which detects a headlamp light spot from the first image data and a taillamp light spot from the second image data.
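The mechanism of claim 1 — reading out a single exposure through two pixel groups driven at different gains and assembling one image per group — can be sketched in Python. This is only an illustrative sketch, not the applicant's implementation: the row-interleaved pixel layout, the gain values, and the 8-bit clipping below are all assumptions chosen for the example.

```python
def generate_images(raw, gain1=1.0, gain2=4.0):
    """Split one exposure into two images from two gain groups.

    Hypothetical layout: even rows form the low-gain first imaging
    element group, odd rows the high-gain second group.
    """
    clip = lambda v: max(0.0, min(255.0, v))  # assumed 8-bit range
    first = [[clip(p * gain1) for p in row] for row in raw[0::2]]
    second = [[clip(p * gain2) for p in row] for row in raw[1::2]]
    return first, second

raw = [[30.0] * 4 for _ in range(4)]        # one uniform test exposure
img1, img2 = generate_images(raw)
mean = lambda img: sum(sum(r) for r in img) / (len(img) * len(img[0]))
print(mean(img1), mean(img2))               # prints 30.0 120.0
```

Because the second group's gain is larger, the second image data's average luminance exceeds the first's, matching the relation recited in the claims.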
PCT/JP2023/045661 2022-12-21 2023-12-20 Imaging device and image data generation method Ceased WO2024135722A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2024566102A JPWO2024135722A1 (en) 2022-12-21 2023-12-20

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022204744 2022-12-21
JP2022-204744 2022-12-21

Publications (1)

Publication Number Publication Date
WO2024135722A1 (en) 2024-06-27

Family

ID=91588735

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/045661 Ceased WO2024135722A1 (en) 2022-12-21 2023-12-20 Imaging device and image data generation method

Country Status (2)

Country Link
JP (1) JPWO2024135722A1 (en)
WO (1) WO2024135722A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014165528A (en) * 2013-02-21 2014-09-08 Clarion Co Ltd Image pickup device
JP2017046204A (en) * 2015-08-27 2017-03-02 クラリオン株式会社 Imaging apparatus
JP2018117309A (en) * 2017-01-20 2018-07-26 ソニーセミコンダクタソリューションズ株式会社 Imaging apparatus, image processing method, and image processing system

Also Published As

Publication number Publication date
JPWO2024135722A1 (en) 2024-06-27

Similar Documents

Publication Publication Date Title
US9906766B2 (en) Imaging device
CN109416475B (en) Beam splitting extended dynamic range image capture system
JP2013219560A (en) Imaging apparatus, imaging method, and camera system
WO2016075908A1 (en) Imaging device and imaging method
WO2011132241A1 (en) Image capture device
JP2007318753A (en) Image capturing apparatus and operation method thereof
JP5397714B1 (en) Surveillance camera device
JP2017118191A (en) IMAGING ELEMENT, ITS DRIVING METHOD, AND IMAGING DEVICE
WO2017033687A1 (en) Imaging device
KR20130139788A (en) Imaging apparatus which suppresses fixed pattern noise generated by an image sensor of the apparatus
US20130155302A1 (en) Digital image sensor
WO2020238804A1 (en) Image acquisition apparatus and image acquisition method
JP5750291B2 (en) Image processing device
WO2024135722A1 (en) Imaging device and image data generation method
JP7057818B2 (en) Low light imaging system
JP2012010282A (en) Imaging device, exposure control method, and exposure control program
JP6322723B2 (en) Imaging apparatus and vehicle
KR101475468B1 (en) Infrared camera system with infrared LED
JP4530149B2 (en) High dynamic range camera system
JP2017038311A (en) Solid-state imaging device
US10458849B2 (en) Sensor assembly for capturing spatially resolved photometric data
JP2019197948A (en) Imaging device and method for controlling the same
JP6594557B2 (en) Image processing apparatus and light distribution control system
JP7784456B2 (en) Background light subtraction for infrared images
JP2014150471A (en) Imaging device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23907082

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2024566102

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 23907082

Country of ref document: EP

Kind code of ref document: A1