US7911515B2 - Imaging apparatus and method of processing video signal - Google Patents
- Publication number
- US7911515B2 (application US12/078,919)
- Authority
- US
- United States
- Prior art keywords
- signal
- pixels
- interpolation
- frequency
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2209/00—Details of colour television systems
- H04N2209/04—Picture signal generators
- H04N2209/041—Picture signal generators using solid-state devices
- H04N2209/042—Picture signal generators using solid-state devices having a single pick-up sensor
- H04N2209/045—Picture signal generators using solid-state devices having a single pick-up sensor using mosaic colour filter
- H04N2209/046—Colour interpolation to calculate the missing colour values
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2209/00—Details of colour television systems
- H04N2209/04—Picture signal generators
- H04N2209/041—Picture signal generators using solid-state devices
- H04N2209/048—Picture signal generators using solid-state devices having several pick-up sensors
- H04N2209/049—Picture signal generators using solid-state devices having several pick-up sensors having three pick-up sensors
Definitions
- This invention generally relates to an imaging apparatus such as a video camera, and particularly relates to an imaging apparatus including solid-state image sensors and a section for processing video signals outputted from the image sensors to attain a high video definition.
- This invention also relates to a method of processing video signals to attain a high video definition.
- Japanese patent application publication number 2000-341708 discloses an imaging apparatus including a lens, a prism, and solid-state image sensors for R, G, and B (red, green, and blue).
- An incident light beam passes through the lens before reaching the prism.
- The incident light beam is separated by the prism into R, G, and B light beams applied to the R, G, and B image sensors, respectively.
- The R, G, and B light beams are converted by the R, G, and B image sensors into R, G, and B signals, respectively.
- Each of the R, G, and B image sensors has a matrix array of 640 photosensor pixels in a horizontal direction and 480 photosensor pixels in a vertical direction.
- The matrix array of the photosensor pixels has a predetermined horizontal pixel pitch and a predetermined vertical pixel pitch.
- The optical position of the R and B image sensors relative to the incident light beam slightly differs from that of the G image sensor such that the photosensor pixels in the R, G, and B image sensors are staggered at horizontal intervals equal to half the horizontal pixel pitch and at vertical intervals equal to half the vertical pixel pitch.
- The R, G, and B signals outputted from the R, G, and B image sensors are processed, through interpolation, into component video signals representing a periodically-updated frame composed of 1280 pixels in a horizontal direction and 720 pixels in a vertical direction.
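The pixel-shift principle above can be sketched in miniature. In this hypothetical Python sketch (the function name and the simple neighbour-averaging are illustrative assumptions, not the patent's actual interpolation), two equal-sized sensor grids staggered by half a pixel pitch are interleaved onto a double-density grid, and the remaining holes are filled by averaging the available neighbours:

```python
def interleave_and_interpolate(g, rb):
    """g and rb are equal-sized 2-D lists of samples; rb is assumed to be
    shifted diagonally by half a pixel pitch relative to g.  Returns a grid
    of doubled density in both directions."""
    h, w = len(g), len(g[0])
    out = [[None] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            out[2 * y][2 * x] = g[y][x]           # unshifted samples on even/even sites
            out[2 * y + 1][2 * x + 1] = rb[y][x]  # shifted samples on odd/odd sites
    # Fill the remaining checkerboard holes from existing neighbours;
    # every hole's four neighbours are original samples, so order is safe.
    for y in range(2 * h):
        for x in range(2 * w):
            if out[y][x] is None:
                nbrs = [out[j][i]
                        for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= j < 2 * h and 0 <= i < 2 * w and out[j][i] is not None]
                out[y][x] = sum(nbrs) / len(nbrs)
    return out
```

A real apparatus would of course use a better interpolator than plain averaging; this only illustrates how staggered sensors populate half the sites of a higher-definition frame.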
- Japanese patent application publication number 2000-341710 discloses an imaging apparatus including a lens, a prism, and solid-state image sensors for R, G 1 , G 2 , and B (red, first green, second green, and blue).
- An incident light beam passes through the lens before reaching the prism.
- The incident light beam is separated by the prism into R, G 1 , G 2 , and B light beams applied to the R, G 1 , G 2 , and B image sensors, respectively.
- The R, G 1 , G 2 , and B light beams are converted by the R, G 1 , G 2 , and B image sensors into R, G 1 , G 2 , and B signals, respectively.
- Each of the R, G 1 , G 2 , and B image sensors has a matrix array of 640 photosensor pixels in a horizontal direction and 480 photosensor pixels in a vertical direction.
- The matrix array of the photosensor pixels has a predetermined horizontal pixel pitch and a predetermined vertical pixel pitch.
- The optical position of the G 2 image sensor relative to the incident light beam slightly differs from that of the G 1 image sensor such that the photosensor pixels in the G 1 and G 2 image sensors are staggered at horizontal intervals equal to half the horizontal pixel pitch and at vertical intervals equal to half the vertical pixel pitch.
- The optical position of the R image sensor relative to the incident light beam slightly differs from that of the G 1 image sensor such that the photosensor pixels in the R and G 1 image sensors are staggered at horizontal intervals equal to half the horizontal pixel pitch.
- The optical position of the B image sensor relative to the incident light beam slightly differs from that of the G 1 image sensor such that the photosensor pixels in the B and G 1 image sensors are staggered at vertical intervals equal to half the vertical pixel pitch.
- The R, G 1 , G 2 , and B signals outputted from the R, G 1 , G 2 , and B image sensors are processed, through interpolation, into component video signals representing a periodically-updated frame composed of 1280 pixels in a horizontal direction and 960 pixels in a vertical direction.
- Japanese patent application publication number 11-234690/1999 discloses an imaging apparatus including two image sensors each having a matrix array of photosensor pixels.
- The positions of the two image sensors slightly differ such that the pixel array in one of the two image sensors is obliquely shifted from that in the other image sensor by a distance corresponding to half a pixel in a horizontal direction and a distance corresponding to half a pixel in a vertical direction.
- A set of output signals from the two image sensors is composed of only first signal segments representing first alternate ones of pixels forming one high-definition frame.
- Second signal segments representing second alternate ones of pixels forming one high-definition frame are generated from the first signal segments through interpolation depending on the direction of a high video correlation.
- The degree and direction of each of video correlations at and around a pixel of interest are detected.
- When the detected highest-correlation direction is horizontal, a second signal segment representing a pixel of interest is generated from first signal segments representing the left and right pixels neighboring the pixel of interest.
- When the detected highest-correlation direction is vertical, a second signal segment representing a pixel of interest is generated from first signal segments representing the upper and lower pixels neighboring the pixel of interest.
- When no clear correlation direction is detected, a second signal segment representing a pixel of interest is generated from first signal segments representing the left, right, upper, and lower pixels neighboring the pixel of interest.
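The neighbour selection described in the three cases above can be condensed into a short sketch. The function name and the string-valued direction argument are hypothetical; the actual apparatus operates on signal segments of a high-definition frame rather than a plain 2-D array:

```python
def interpolate_pixel(img, y, x, direction):
    """Fill the missing sample at (y, x) from its neighbours according to a
    previously detected correlation direction: interpolate along the
    direction in which the signal is most similar."""
    if direction == "vertical":       # high vertical correlation: use upper/lower
        samples = (img[y - 1][x], img[y + 1][x])
    elif direction == "horizontal":   # high horizontal correlation: use left/right
        samples = (img[y][x - 1], img[y][x + 1])
    else:                             # no clear direction: use all four neighbours
        samples = (img[y - 1][x], img[y + 1][x], img[y][x - 1], img[y][x + 1])
    return sum(samples) / len(samples)
```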
- Japanese patent application publication number 10-155158/1998 discloses an imaging apparatus similar to that in Japanese application 11-234690/1999 except for the following points.
- In the apparatus of Japanese application 10-155158/1998, not only horizontal and vertical video correlations but also oblique video correlations can be detected.
- Along one oblique direction, a signal segment representing a pixel of interest is generated from signal segments representing the left-upper and right-lower pixels neighboring the pixel of interest.
- Along the other oblique direction, a signal segment representing a pixel of interest is generated from signal segments representing the left-lower and right-upper pixels neighboring the pixel of interest.
- A first aspect of this invention provides an imaging apparatus comprising a color-separation optical system for separating incident light into green, red, and blue light beams; a first solid-state image sensor receiving the green light beam from the color-separation optical system and including a plurality of photosensor pixels arranged at prescribed pixel pitches along horizontal and vertical directions, the first solid-state image sensor changing the received green light beam into an analog G signal through photoelectric conversion implemented by the photosensor pixels therein; a second solid-state image sensor receiving the red light beam from the color-separation optical system and including a plurality of photosensor pixels, the second solid-state image sensor changing the received red light beam into an analog R signal through photoelectric conversion implemented by the photosensor pixels therein, wherein the photosensor pixels in the second solid-state image sensor are different in optical position from the photosensor pixels in the first solid-state image sensor by a distance corresponding to half the prescribed pixel pitch along at least one of the horizontal and vertical directions; a third solid-state image sensor receiving the blue light beam from the color-separation optical system …
- A second aspect of this invention is based on the first aspect thereof, and provides an imaging apparatus wherein the eighth means comprises means for calculating first, second, and third components of each of the standard deviations; and means for combining the calculated first, second, and third components to obtain each of the standard deviations; wherein the calculated first component is equal to a standard deviation of portions of the high-frequency components GH, RH, and BH which represent pixels in a first line passing through the interpolation pixel and parallel to each of the predetermined directions, wherein the calculated second component is equal to a standard deviation of portions of the high-frequency components GH, RH, and BH which represent pixels in a second line adjacent and parallel to the first line, and wherein the calculated third component is equal to a standard deviation of portions of the high-frequency components GH, RH, and BH which represent pixels in a third line adjacent and parallel to the first line.
- A third aspect of this invention provides a method of processing a video signal in an imaging apparatus including a color-separation optical system and first, second, and third solid-state image sensors, the color-separation optical system separating incident light into green, red, and blue light beams, the first solid-state image sensor receiving the green light beam from the color-separation optical system and including a plurality of photosensor pixels arranged at prescribed pixel pitches along horizontal and vertical directions, the first solid-state image sensor changing the received green light beam into an analog G signal through photoelectric conversion implemented by the photosensor pixels therein, the second solid-state image sensor receiving the red light beam from the color-separation optical system and including a plurality of photosensor pixels, the second solid-state image sensor changing the received red light beam into an analog R signal through photoelectric conversion implemented by the photosensor pixels therein, wherein the photosensor pixels in the second solid-state image sensor are different in optical position from the photosensor pixels in the first solid-state image sensor by a distance corresponding to half the prescribed pixel pitch along at least one of the horizontal and vertical directions …
- The method comprises the steps of converting the analog G, R, and B signals generated by the first, second, and third solid-state image sensors into digital G, R, and B signals, respectively; implementing two-dimensional interpolation in response to the digital G signal to generate signal segments representing pixels between pixels represented by the digital G signal, and thereby converting the digital G signal into an interpolation-result signal GI; implementing two-dimensional interpolation in response to the digital R signal to generate signal segments representing pixels between pixels represented by the digital R signal, and thereby converting the digital R signal into an interpolation-result signal RI; implementing two-dimensional interpolation in response to the digital B signal to generate signal segments representing pixels between pixels represented by the digital B signal, and thereby converting the digital B signal into an interpolation-result signal BI; extracting low-frequency components GL, which are defined in a two-dimensional space, from the interpolation-result signal GI through the use of a first two-dimensional low pass filter; extracting low-frequency components RL, which are defined in a two-dimensional space, from the interpolation-result signal RI through the use of a second two-dimensional low pass filter …
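The band-splitting step can be illustrated with a small sketch. The patent does not specify the low-pass kernel in this excerpt, so a 3x3 box average stands in for the two-dimensional low pass filter, and the high-frequency component is defined, as an assumption, as the difference between the signal and its low-pass version:

```python
def split_bands(img):
    """Split a 2-D signal (list of lists) into low- and high-frequency
    components.  A 3x3 box average is used as a stand-in low-pass filter;
    edges use whatever neighbours are in bounds."""
    h, w = len(img), len(img[0])
    low = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nbrs = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            low[y][x] = sum(nbrs) / len(nbrs)
    # High-frequency component = original minus low-pass result.
    high = [[img[y][x] - low[y][x] for x in range(w)] for y in range(h)]
    return low, high
```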
- A fourth aspect of this invention is based on the third aspect thereof, and provides a method wherein the step of calculating the standard deviations comprises calculating first, second, and third components of each of the standard deviations; and combining the calculated first, second, and third components to obtain each of the standard deviations; wherein the calculated first component is equal to a standard deviation of portions of the high-frequency components GH, RH, and BH which represent pixels in a first line passing through the interpolation pixel and parallel to each of the predetermined directions, wherein the calculated second component is equal to a standard deviation of portions of the high-frequency components GH, RH, and BH which represent pixels in a second line adjacent and parallel to the first line, and wherein the calculated third component is equal to a standard deviation of portions of the high-frequency components GH, RH, and BH which represent pixels in a third line adjacent and parallel to the first line.
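The three-component standard deviation of the fourth aspect can be sketched as follows. The combining rule shown here, a plain sum, is an assumption; the claims only recite "means for combining" without fixing the rule:

```python
def line_std(values):
    """Population standard deviation of the sample values on one line."""
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

def direction_deviation(center, upper, lower):
    """Combine the standard deviation of the line through the interpolation
    pixel (center) with those of the two adjacent parallel lines.  The sum
    used here is a hypothetical combining rule."""
    return line_std(center) + line_std(upper) + line_std(lower)
```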
- A fifth aspect of this invention provides an imaging apparatus comprising means for separating an R signal into a high-frequency-component signal RH and a low-frequency-component signal RL; means for separating a G signal into a high-frequency-component signal GH and a low-frequency-component signal GL; means for separating a B signal into a high-frequency-component signal BH and a low-frequency-component signal BL; wherein the high-frequency-component signals RH, GH, and BH are divided into portions representing respective pixels in every frame; means for calculating standard deviations of respective sets of selected ones among the portions of the high-frequency-component signals RH, GH, and BH, wherein each of the sets represents selected pixels including each prescribed pixel, and the selected pixels are in different predetermined directions related to the respective calculated standard deviations and passing the prescribed pixel; means for detecting the smallest one among the calculated standard deviations; means for identifying, among the predetermined directions, one related to the detected smallest standard deviation; means for using …
- This invention has the following advantages. Three solid-state image sensors are used, and thereby a high video definition is attained which corresponds to a greater number of pixels than a pixel number in each of the three image sensors.
- The effect of increasing definitions along oblique directions is provided, similar to the effect of increasing definitions along horizontal and vertical directions.
- The direction of a correlation in an image pattern is detected through the use of standard deviations, and thereby the accuracy of interpolation for high-frequency signal components can be increased. Therefore, it is possible to attain a high video definition comparable to that provided in cases where four solid-state image sensors are used.
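The standard-deviation-based direction detection can be reduced to a few lines: among the candidate directions through an interpolation pixel, the one whose high-frequency samples have the smallest standard deviation is taken as the highest-correlation direction. This is a hypothetical sketch (function and direction names are illustrative, not the patent's implementation):

```python
def pick_direction(high_freq_lines):
    """high_freq_lines maps a direction name to the high-frequency samples
    lying along that direction through the interpolation pixel.  The
    direction with the smallest standard deviation (least variation, i.e.
    highest correlation) is returned."""
    def std(vals):
        m = sum(vals) / len(vals)
        return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
    return min(high_freq_lines, key=lambda d: std(high_freq_lines[d]))
```

Unlike a MIN/MAX ratio or a pairwise difference, the standard deviation uses every sample on the line, which is why the patent argues it is more robust for high-frequency components.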
- FIGS. 1, 2, and 3 are diagrams of arrays of photosensor pixels in solid-state image sensors in a first prior-art imaging apparatus.
- FIG. 4 is a diagram of a superposition-like combination of the photosensor pixels in the image sensors in FIGS. 1, 2, and 3.
- FIGS. 5, 6, 7, and 8 are diagrams of arrays of photosensor pixels in solid-state image sensors in a second prior-art imaging apparatus.
- FIG. 9 is a diagram of a superposition-like combination of the photosensor pixels in the image sensors in FIGS. 5, 6, 7, and 8.
- FIG. 10 is a diagram of a pixel array in a color filter provided in an image sensor in a third prior-art imaging apparatus.
- FIG. 11 is a diagram of oblique lines and a superposition-like combination of the photosensor pixels in the image sensors in FIGS. 1, 2, and 3.
- FIG. 12 is a diagram showing an array of the values of pixel-corresponding data pieces in the third prior-art imaging apparatus.
- FIG. 13 is a block diagram of an imaging apparatus according to a first embodiment of this invention.
- FIG. 14 is a diagram of an array of photosensor pixels in a green image sensor in FIG. 13.
- FIG. 15 is a diagram of an array of photosensor pixels in a red image sensor in FIG. 13.
- FIG. 16 is a diagram of an array of photosensor pixels in a blue image sensor in FIG. 13.
- FIG. 17 is a diagram of a superposition-like combination of the photosensor pixels in the image sensors in FIGS. 14, 15, and 16.
- FIG. 18 is a diagram showing an array of pixel-corresponding segments of a sample-number-increased green signal generated by a horizontal-vertical interpolating section in FIG. 13.
- FIG. 19 is a diagram showing an array of pixel-corresponding segments of a sample-number-increased red signal generated by the horizontal-vertical interpolating section in FIG. 13.
- FIG. 20 is a diagram showing an array of pixel-corresponding segments of a sample-number-increased blue signal generated by the horizontal-vertical interpolating section in FIG. 13.
- FIG. 21 is a diagram showing an example of the frequency spectrum of a digital R, G, or B signal received by the horizontal-vertical interpolating section in FIG. 13.
- FIG. 22 is a diagram showing an example of the frequency spectrum of the sample-number-increased red, green, or blue signal generated by the horizontal-vertical interpolating section in FIG. 13.
- FIG. 23 is a diagram showing an example of the frequency spectrum of an interpolation-result red, green, or blue signal generated by the horizontal-vertical interpolating section in FIG. 13.
- FIG. 24 is a diagram showing an example of the frequency spectrum of a low-frequency red, green, or blue signal generated by a low-frequency component extracting section in FIG. 13.
- FIG. 25 is a diagram showing an example of the frequency spectrum of a high-frequency red, green, or blue signal generated by a high-frequency component extracting section in FIG. 13.
- FIG. 26 is a diagram of an arrangement of pixels including not only actual pixels but also nonexistent pixels for which corresponding signal segments are generated by interpolation.
- FIG. 27 is a diagram of the relation between a nonexistent pixel and a pixel-corresponding segment of the high-frequency blue signal which is adopted for the nonexistent pixel when a highest-correlation direction is a vertical direction.
- FIG. 28 is a diagram of the relation between a nonexistent pixel and a pixel-corresponding segment of the high-frequency red signal which is adopted for the nonexistent pixel when a highest-correlation direction is a horizontal direction.
- FIG. 29 is a diagram of the relation between a nonexistent pixel and pixel-corresponding segments of the high-frequency green signal which are adopted for the nonexistent pixel when a highest-correlation direction is a 45-degree oblique direction.
- FIG. 30 is a diagram of the relation between a nonexistent pixel and pixel-corresponding segments of the high-frequency green signal which are adopted for the nonexistent pixel when a highest-correlation direction is a 135-degree oblique direction.
- FIG. 31 is a flowchart of a segment of a control program for a computer forming a correlation-direction detecting section and a high-frequency component interpolating section in FIG. 13.
- FIG. 32 is a diagram showing an example of the frequency spectrum of a high-frequency interpolation-result signal generated by the high-frequency component interpolating section in FIG. 13.
- FIG. 33 is a diagram showing an example of the frequency spectrum of each of high-definition component video signals generated by a high-frequency component adding section in FIG. 13.
- FIG. 34 is a block diagram of a portion of a correlation-direction detecting section in an imaging apparatus according to a second embodiment of this invention.
- FIG. 35 is a diagram of an arrangement of pixels corresponding to signal segments used for the calculation of a standard deviation along a vertical direction in the second embodiment of this invention.
- FIG. 36 is a diagram of an arrangement of pixels corresponding to signal segments used for the calculation of a standard deviation along a horizontal direction in the second embodiment of this invention.
- FIG. 37 is a diagram of an arrangement of pixels corresponding to signal segments used for the calculation of a standard deviation along a 45-degree oblique direction in the second embodiment of this invention.
- FIG. 38 is a diagram of an arrangement of pixels corresponding to signal segments used for the calculation of a standard deviation along a 135-degree oblique direction in the second embodiment of this invention.
- A first prior-art imaging apparatus disclosed in Japanese patent application publication number 2000-341708 is designed so that an incident light beam is separated into G, R, and B (green, red, and blue) light beams, which are applied to G, R, and B solid-state image sensors respectively.
- The G image sensor has an array of photosensor pixels shown in FIG. 1, where “G” denotes the position of each photosensor pixel.
- The R image sensor has an array of photosensor pixels shown in FIG. 2, where “R” denotes the position of each photosensor pixel.
- The B image sensor has an array of photosensor pixels shown in FIG. 3, where “B” denotes the position of each photosensor pixel.
- The photosensor pixels are spaced at a pitch Px along a horizontal direction and at a pitch Py along a vertical direction.
- The optical position of the R and B image sensors relative to the incident light beam slightly differs from that of the G image sensor such that the photosensor pixels in the G, R, and B image sensors are staggered at horizontal intervals equal to half the horizontal pixel pitch Px and at vertical intervals equal to half the vertical pixel pitch Py.
- A superposition-like combination of the photosensor pixels in the G, R, and B image sensors has the configuration shown in FIG. 4.
- The positions of the photosensor pixels in the R image sensor are equal to those of the photosensor pixels in the B image sensor.
- Interpolation using video signals outputted from the G, R, and B image sensors is implemented for increasing the number of effective pixels for every frame to attain a high video definition.
- A second prior-art imaging apparatus disclosed in Japanese patent application publication number 2000-341710 is designed so that a G light beam is applied to first and second G image sensors while an R light beam and a B light beam are applied to an R image sensor and a B image sensor respectively.
- The first and second G image sensors, the R image sensor, and the B image sensor are of a solid-state type.
- The first G image sensor has an array of photosensor pixels shown in FIG. 5, where “G 1 ” denotes the position of each photosensor pixel.
- The second G image sensor has an array of photosensor pixels shown in FIG. 6, where “G 2 ” denotes the position of each photosensor pixel.
- The R image sensor has an array of photosensor pixels shown in FIG. 7, where “R” denotes the position of each photosensor pixel.
- The B image sensor has an array of photosensor pixels shown in FIG. 8, where “B” denotes the position of each photosensor pixel.
- The photosensor pixels are spaced at a pitch Px along a horizontal direction and at a pitch Py along a vertical direction.
- As shown in FIGS. 5 and 6, the optical position of the second G image sensor relative to the incident light beam slightly differs from that of the first G image sensor such that the photosensor pixels in the first and second G image sensors are staggered at horizontal intervals equal to half the horizontal pixel pitch Px and at vertical intervals equal to half the vertical pixel pitch Py.
- The optical position of the R image sensor relative to the incident light beam slightly differs from that of the first G image sensor such that the photosensor pixels in the R and first G image sensors are staggered at horizontal intervals equal to half the horizontal pixel pitch Px.
- The optical position of the B image sensor relative to the incident light beam slightly differs from that of the first G image sensor such that the photosensor pixels in the B and first G image sensors are staggered at vertical intervals equal to half the vertical pixel pitch Py. Accordingly, a superposition-like combination of the photosensor pixels in the first G, second G, R, and B image sensors has the configuration shown in FIG. 9.
- A third prior-art imaging apparatus disclosed in Japanese patent application publication number 11-234690/1999 includes an image sensor provided with a color filter having a pixel array shown in FIG. 10.
- The third prior-art imaging apparatus detects the difference in signal level between left and right pixels and the difference in signal level between upper and lower pixels, and thus detects signal edge components. Interpolation responsive to the detected differences is implemented in order to generate a video signal segment corresponding to each omitted pixel.
- For each omitted pixel of interest, the third prior-art imaging apparatus calculates the absolute value ΔH of the difference in signal level between the left and right pixels and the absolute value ΔV of the difference in signal level between the upper and lower pixels.
- Signal level changes at surrounding pixels along the horizontal direction and the vertical direction are decided on the basis of the horizontal-direction edge component (the horizontal-direction difference) ΔH and the vertical-direction edge component (the vertical-direction difference) ΔV.
- When the value ΔH or the value ΔV is greater than a predetermined value “th”, it is decided that an edge component exists for that pixel.
- When the value ΔH is greater than the value ΔV, it is further decided that a high correlation occurs in the vertical direction.
- When the value ΔH is equal to or smaller than the value ΔV, it is further decided that a high correlation occurs in the horizontal direction.
- An optimal interpolation method is selected in accordance with the results of these decisions.
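The prior-art decision rule above can be condensed into a short sketch. The pixel values and the threshold are passed in directly here (in the actual apparatus they come from the sensor signal), and the function name is illustrative:

```python
def decide_direction(left, right, upper, lower, th):
    """Prior-art edge/correlation decision: an edge exists when either
    absolute difference exceeds th; dH > dV implies a high vertical
    correlation, otherwise a high horizontal correlation."""
    dH = abs(left - right)   # horizontal-direction difference
    dV = abs(upper - lower)  # vertical-direction difference
    if dH <= th and dV <= th:
        return "no-edge"
    return "vertical" if dH > dV else "horizontal"
```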
- A fourth prior-art imaging apparatus disclosed in Japanese patent application publication number 10-155158/1998 uses a method of calculating a direction-dependent correlation.
- The correlation degree S satisfies S ≤ 1, and its maximum value is equal to 1.
- The correlation degree S is calculated according to the above equation.
- The direction in which the calculated correlation degree peaks is decided as the correlation direction. Then, optimal interpolation is implemented in accordance with the decided correlation direction.
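A minimal sketch of a MIN/MAX-style correlation degree of this kind, assuming non-negative sample values (the patent's exact equation is not reproduced in this excerpt, so the function below is an assumption based on the ratio described in the drawbacks discussion):

```python
def correlation_degree(samples):
    """Direction-dependent correlation degree as the ratio MIN/MAX of the
    sample values along a candidate direction: equal samples give S = 1
    (the maximum), and S <= 1 for non-negative samples."""
    lo, hi = min(samples), max(samples)
    return lo / hi if hi else 1.0
```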
- The first, second, third, and fourth prior-art imaging apparatuses have drawbacks as follows.
- In the first prior-art imaging apparatus, many pixels related to oblique directions are omitted. Accordingly, the higher-definition effect provided by the slight differences in optical position between the G, R, and B image sensors is reduced for oblique-direction definitions. For example, as shown in FIG. 11, pixels are absent from positions on the dashed oblique lines, and pixel data information about those pixels is unavailable. Therefore, it is difficult to attain a sufficiently high definition for oblique lines.
- The second prior-art imaging apparatus uses four image sensors (the R, G 1 , G 2 , and B image sensors). Accordingly, the number of design steps is large, the cost of the apparatus is high, the number of production steps is large, and the optical block is large in size.
- In the third prior-art imaging apparatus, the differences between pixel-corresponding data pieces are used as indications of the degrees of correlations, and an interpolation direction is decided in accordance with the degrees of correlations. Correlations in the horizontal and vertical directions are considered while those in oblique directions are not. Accordingly, for high-frequency signal components, the calculation of the degrees of correlations tends to be inaccurate, and the decided interpolation direction tends to be unreliable. Under the conditions in FIG. 12, since ΔH ≤ ΔV, it is erroneously decided that a horizontal-direction correlation is higher than a vertical-direction correlation.
- The data value about the pixel (i, j) is then calculated from the average of those about the surrounding pixels through the interpolation. Therefore, the calculated data value about the pixel (i, j) is greater than the desired value.
- The number of samples (pixels) used for the decision about the direction of a correlation is relatively small as mentioned above, and hence the decided correlation direction tends to be low in reliability.
- The fourth prior-art imaging apparatus uses an interpolation method in which the degree of each correlation is calculated from the ratio “MIN/MAX” between the minimum value and the maximum value among the data values about sample pixels. Only the minimum value and the maximum value are considered for the calculation of each correlation degree while the other (non-minimum and non-maximum) values are not. Accordingly, for high-frequency signal components, the calculation of the degrees of correlations tends to be inaccurate, and the decided interpolation direction tends to be unreliable.
- This invention has been carried out in order to remove the above-mentioned drawbacks in the first, second, third, and fourth prior-art imaging apparatuses.
- FIG. 13 shows an imaging apparatus according to a first embodiment of this invention.
- The imaging apparatus of FIG. 13 includes a color-separation optical system 11, and solid-state image sensors 12 R, 12 G, and 12 B for red, green, and blue (R, G, and B) respectively.
- The optical system 11 separates incident light into a red (R) light beam, a green (G) light beam, and a blue (B) light beam, which are applied to the image sensors 12 R, 12 G, and 12 B respectively.
- The image sensor 12 R changes the R light beam into a corresponding electric analog R signal through photoelectric conversion.
- The image sensor 12 G changes the G light beam into a corresponding electric analog G signal through photoelectric conversion.
- The image sensor 12 B changes the B light beam into a corresponding electric analog B signal through photoelectric conversion.
- The image sensors 12 R, 12 G, and 12 B output the analog R, G, and B signals, respectively.
- the imaging apparatus of FIG. 13 further includes an analog signal processing circuit 13 , A/D converters 14 R, 14 G, and 14 B, and a video signal processing circuit 1 .
- the analog signal processing circuit 13 receives the analog R, G, and B signals from the image sensors 12 R, 12 G, and 12 B.
- the analog signal processing circuit 13 subjects the received analog R signal to known analog signal processing to generate a processed analog R signal.
- the analog signal processing circuit 13 subjects the received analog G signal to known analog signal processing to generate a processed analog G signal.
- the analog signal processing circuit 13 subjects the received analog B signal to known analog signal processing to generate a processed analog B signal.
- the analog signal processing circuit 13 outputs the processed analog R, G, and B signals.
- the A/D converters 14 R, 14 G, and 14 B receive the processed analog R, G, and B signals from the analog signal processing circuit 13 , respectively.
- the A/D converter 14 R changes the received analog R signal into a corresponding digital R signal.
- the A/D converter 14 G changes the received analog G signal into a corresponding digital G signal.
- the A/D converter 14 B changes the received analog B signal into a corresponding digital B signal.
- the A/D converters 14 R, 14 G, and 14 B output the digital R, G, and B signals, respectively. For every frame, each of the digital R, G, and B signals is divided into pixel-corresponding segments, namely segments representing respective pixels.
- the video signal processing circuit 1 receives the digital R, G, and B signals from the A/D converters 14 R, 14 G, and 14 B.
- the video signal processing circuit 1 includes a horizontal-vertical interpolating section 15 , a low-frequency component extracting section 16 , a high-frequency component extracting section 17 , a correlation-direction detecting section 18 , a high-frequency component interpolating section 19 , and a high-frequency component adding section 20 .
- the horizontal-vertical interpolating section 15 receives the digital R, G, and B signals from the A/D converters 14 R, 14 G, and 14 B.
- the horizontal-vertical interpolating section 15 implements two-dimensional interpolation to convert the received digital R signal into an interpolation-result red signal RI representing not only original pixels corresponding to the received digital R signal but also added pixels horizontally and vertically located between the original pixels for every frame.
- the horizontal-vertical interpolating section 15 implements two-dimensional interpolation to convert the received digital G signal into an interpolation-result green signal GI representing not only original pixels corresponding to the received digital G signal but also added pixels horizontally and vertically located between the original pixels for every frame.
- the horizontal-vertical interpolating section 15 implements two-dimensional interpolation to convert the received digital B signal into an interpolation-result blue signal BI representing not only original pixels corresponding to the received digital B signal but also added pixels horizontally and vertically located between the original pixels for every frame.
- the low-frequency component extracting section 16 processes the interpolation-result red, green, and blue signals RI, GI, and BI generated by the horizontal-vertical interpolating section 15 . Specifically, the low-frequency component extracting section 16 extracts low-frequency components, which are defined in every two-dimensional space (every frame), from the interpolation-result red signal RI to generate a low-frequency red signal RL. In addition, the low-frequency component extracting section 16 extracts low-frequency components, which are defined in every two-dimensional space (every frame), from the interpolation-result green signal GI to generate a low-frequency green signal GL.
- the low-frequency component extracting section 16 extracts low-frequency components, which are defined in every two-dimensional space (every frame), from the interpolation-result blue signal BI to generate a low-frequency blue signal BL.
- the low-frequency components extracted by the low-frequency component extracting section 16 mean signal components having spatial frequencies lower than a first predetermined reference spatial frequency.
- the high-frequency component extracting section 17 processes the interpolation-result red, green, and blue signals RI, GI, and BI generated by the horizontal-vertical interpolating section 15 . Specifically, the high-frequency component extracting section 17 extracts high-frequency components, which are defined in every two-dimensional space (every frame), from the interpolation-result red signal RI to generate a high-frequency red signal RH. In addition, the high-frequency component extracting section 17 extracts high-frequency components, which are defined in every two-dimensional space (every frame), from the interpolation-result green signal GI to generate a high-frequency green signal GH.
- the high-frequency component extracting section 17 extracts high-frequency components, which are defined in every two-dimensional space (every frame), from the interpolation-result blue signal BI to generate a high-frequency blue signal BH.
- the high-frequency components extracted by the high-frequency component extracting section 17 mean signal components having spatial frequencies higher than a second predetermined reference spatial frequency.
- the second predetermined reference spatial frequency is equal to or higher than the first predetermined reference spatial frequency.
- the correlation-direction detecting section 18 responds to the high-frequency red, green, and blue signals RH, GH, and BH generated by the high-frequency component extracting section 17 .
- the correlation-direction detecting section 18 maps or places pixel-corresponding segments of the high-frequency red, green, and blue signals RH, GH, and BH onto correct pixel positions in every two-dimensional space (every frame).
- the correlation-direction detecting section 18 detects different-direction correlations among pixels in every two-dimensional space (every frame) represented by the mapped high-frequency red, green, and blue signals RH, GH, and BH.
- the correlation-direction detecting section 18 calculates the standard deviations of pixel signal values in every two-dimensional space along different predetermined directions, and detects different-direction correlations through the use of the calculated standard deviations. The correlation-direction detecting section 18 decides which of the detected different-direction correlations is the highest. The correlation-direction detecting section 18 identifies, among the different predetermined directions, one direction related to the highest correlation. The correlation-direction detecting section 18 notifies the identified highest-correlation direction to the high-frequency component interpolating section 19 .
- the high-frequency component interpolating section 19 receives the high-frequency red, green, and blue signals RH, GH, and BH from the high-frequency component extracting section 17 .
- the high-frequency component interpolating section 19 maps or places pixel-corresponding segments of the high-frequency red, green, and blue signals RH, GH, and BH onto correct pixel positions in every two-dimensional space (every frame).
- the high-frequency component interpolating section 19 implements interpolation responsive to the mapped high-frequency red, green, and blue signals RH, GH, and BH and also responsive to the highest-correlation direction identified by the correlation-direction detecting section 18 .
- the high-frequency component interpolating section 19 generates a high-frequency composite signal or a high-frequency interpolation-result signal RGBHI. Specifically, the high-frequency component interpolating section 19 converts the received high-frequency red, green, and blue signals RH, GH, and BH into the high-frequency interpolation-result signal (the high-frequency composite signal) RGBHI through an interpolation-based process responsive to the highest-correlation direction identified by the correlation-direction detecting section 18 .
- the high-frequency component adding section 20 receives the low-frequency red, green, and blue signals RL, GL, and BL from the low-frequency component extracting section 16 .
- the high-frequency component adding section 20 receives the high-frequency interpolation-result signal RGBHI from the high-frequency component interpolating section 19 .
- the high-frequency component adding section 20 combines the low-frequency red signal RL and the high-frequency interpolation-result signal RGBHI into a high-definition red signal R(HD).
- the high-frequency component adding section 20 combines the low-frequency green signal GL and the high-frequency interpolation-result signal RGBHI into a high-definition green signal G(HD).
- the high-frequency component adding section 20 combines the low-frequency blue signal BL and the high-frequency interpolation-result signal RGBHI into a high-definition blue signal B(HD). In this way, the high-frequency component adding section 20 adds the high-frequency interpolation-result signal RGBHI to the low-frequency red signal RL, the low-frequency green signal GL, and the low-frequency blue signal BL to generate a set of the high-definition component video signals R(HD), G(HD), and B(HD). The high-frequency component adding section 20 outputs the set of the high-definition component video signals R(HD), G(HD), and B(HD).
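The recombination performed by the high-frequency component adding section 20 is a per-pixel addition of the shared high-frequency signal onto each low-frequency color signal. A minimal sketch, with random arrays standing in for the actual signals:

```python
import numpy as np

# Hypothetical 2-D arrays standing in for the low-frequency color signals
# RL, GL, and BL and the shared high-frequency interpolation-result
# signal RGBHI (one value per pixel position of a frame).
rng = np.random.default_rng(0)
RL, GL, BL = (rng.random((4, 4)) for _ in range(3))
RGBHI = rng.random((4, 4))

# The adding section combines each low-frequency signal with the same
# high-frequency signal by per-pixel addition.
R_HD = RL + RGBHI
G_HD = GL + RGBHI
B_HD = BL + RGBHI
```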
- the image sensor 12 G has an array of photosensor pixels shown in FIG. 14 where “G” denotes the position of each photosensor pixel.
- the image sensor 12 R has an array of photosensor pixels shown in FIG. 15 where “R” denotes the position of each photosensor pixel.
- the image sensor 12 B has an array of photosensor pixels shown in FIG. 16 where “B” denotes the position of each photosensor pixel.
- photosensor pixels are spaced at a pitch Px along a horizontal direction and at a pitch Py along a vertical direction.
- the optical position of the image sensor 12 R relative to the incident light slightly differs from that of the image sensor 12 G such that the photosensor pixels in the image sensors 12 R and 12 G are staggered at vertical intervals equal to half the vertical pixel pitch Py.
- the optical position of the image sensor 12 B relative to the incident light slightly differs from that of the image sensor 12 G such that the photosensor pixels in the image sensors 12 B and 12 G are staggered at horizontal intervals equal to half the horizontal pixel pitch Px.
- a superposition-like combination of the photosensor pixels in the image sensors 12 R, 12 G, and 12 B has a configuration shown in FIG. 17 . In this combination, none of the photosensor pixels overlap, and there are only a small number of omitted or nonexistent pixel positions (blank squares in FIG. 17 ). Thus, a high definition of video can be attained.
- incident light from a subject travels to the optical system 11 through an optical low pass filter (not shown).
- the optical system 11 separates the incident light into a red (R) light beam, a green (G) light beam, and a blue (B) light beam, which are applied to the image sensors 12 R, 12 G, and 12 B respectively.
- the image sensor 12 R changes the R light beam into a corresponding electric analog R signal through photoelectric conversion.
- the image sensor 12 G changes the G light beam into a corresponding electric analog G signal through photoelectric conversion.
- the image sensor 12 B changes the B light beam into a corresponding electric analog B signal through photoelectric conversion.
- the image sensors 12 R, 12 G, and 12 B output the analog R, G, and B signals, respectively.
- each of the image sensors 12 R, 12 G, and 12 B includes CCDs (Charge Coupled Devices), and is driven by a clock signal fed from a timing signal generator (not shown).
- the R light beam forms an R image of the subject on the photoelectric conversion surface of the image sensor 12 R.
- the G light beam forms a G image of the subject on the photoelectric conversion surface of the image sensor 12 G.
- the B light beam forms a B image of the subject on the photoelectric conversion surface of the image sensor 12 B.
- the image sensor 12 R spatially samples the R image in response to the clock signal at a prescribed sampling frequency “fs”, thereby generating the analog R signal.
- the image sensor 12 G spatially samples the G image in response to the clock signal at the prescribed sampling frequency “fs”, thereby generating the analog G signal.
- the image sensor 12 B spatially samples the B image in response to the clock signal at the prescribed sampling frequency “fs”, thereby generating the analog B signal.
- the analog signal processing circuit 13 receives the analog R, G, and B signals from the image sensors 12 R, 12 G, and 12 B.
- the analog signal processing circuit 13 subjects the received analog R signal to prescribed analog signal processing such as CDS-based (correlated-double-sampling-based) noise reduction, AGC-based level adjustment, and white balance to generate a processed analog R signal.
- the analog signal processing circuit 13 subjects the received analog G signal to the prescribed analog signal processing to generate a processed analog G signal.
- the analog signal processing circuit 13 subjects the received analog B signal to the prescribed analog signal processing to generate a processed analog B signal.
- the analog signal processing circuit 13 outputs the processed analog R, G, and B signals.
- the A/D converters 14 R, 14 G, and 14 B receive the processed analog R, G, and B signals from the analog signal processing circuit 13 , respectively.
- the A/D converter 14 R digitizes the received analog R signal in response to a clock signal with the prescribed sampling frequency “fs” to generate a digital R signal.
- the A/D converter 14 G digitizes the received analog G signal in response to a clock signal with the prescribed sampling frequency “fs” to generate a digital G signal.
- the A/D converter 14 B digitizes the received analog B signal in response to a clock signal with the prescribed sampling frequency “fs” to generate a digital B signal.
- the A/D converters 14 R, 14 G, and 14 B output the digital R, G, and B signals, respectively.
- the horizontal-vertical interpolating section 15 in the video signal processing circuit 1 receives the digital R, G, and B signals from the A/D converters 14 R, 14 G, and 14 B.
- the horizontal-vertical interpolating section 15 implements two-dimensional interpolation, namely horizontal and vertical interpolation, through the use of the received digital R, G, and B signals.
- the horizontal-vertical interpolating section 15 interposes dummy pixel-corresponding signal segments (for example, “0”) among the pixel-corresponding segments of the received digital G signal to generate a sample-number-increased green signal (a pixel-number-increased green signal) representing an array of pixels shown in FIG. 18 .
- the horizontal-vertical interpolating section 15 inserts a dummy pixel-corresponding signal segment between every two adjacent pixel-corresponding segments of the received digital G signal.
- the horizontal-vertical interpolating section 15 interposes dummy pixel-corresponding signal segments (for example, “0”) among the pixel-corresponding segments of the received digital R signal to generate a sample-number-increased red signal (a pixel-number-increased red signal) representing an array of pixels shown in FIG. 19 .
- the horizontal-vertical interpolating section 15 inserts a dummy pixel-corresponding signal segment between every two adjacent pixel-corresponding segments of the received digital R signal.
- the horizontal-vertical interpolating section 15 interposes dummy pixel-corresponding signal segments (for example, “0”) among the pixel-corresponding segments of the received digital B signal to generate a sample-number-increased blue signal (a pixel-number-increased blue signal) representing an array of pixels shown in FIG. 20 .
- the horizontal-vertical interpolating section 15 inserts a dummy pixel-corresponding signal segment between every two adjacent pixel-corresponding segments of the received digital B signal.
- each of the dummy pixel-corresponding signal segments is a prescribed one, such as a signal segment of “0”.
- the dummy pixel-corresponding signal segments may be replaced by predetermined ineffective pixel-corresponding signal segments.
- the frequency spectrum of the sample-number-increased red signal generated by the horizontal-vertical interpolating section 15 assumes conditions shown in FIG. 22 .
- the digital G and B signals received by the horizontal-vertical interpolating section 15 and the sample-number-increased green and blue signals generated by the horizontal-vertical interpolating section 15 are in frequency-spectrum relations similar to the above one. Interposing the dummy pixel-corresponding signal segments among the pixel-corresponding segments of the received digital R signal increases the sampling frequency from “fs” to “2 fs”, and causes aliasing and imaging (see FIG. 22 ).
- the horizontal-vertical interpolating section 15 includes low pass filters having a cutoff frequency equal to a value of “0.5 fs”.
- the sample-number-increased red, green, and blue signals are passed through the low pass filters to generate interpolation-result red, green, and blue signals RI, GI, and BI, respectively.
- each of the low pass filters is of a two-dimensional type having a horizontal low pass filter and a vertical low pass filter.
- the low pass filters include, for example, FIR filters.
- the low pass filtering causes interpolation which changes the value of each dummy signal segment from “0” to an interpolation-result value depending on the values of actual signal segments representing pixels neighboring the pixel represented by the dummy signal segment in vertical and horizontal directions.
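The interpolation described above (zero insertion followed by two-dimensional low pass filtering) can be sketched as follows. The 3-tap kernel used here is an illustrative stand-in for the actual FIR low pass filters with a 0.5 fs cutoff; it reproduces the stated behavior of leaving original samples unchanged and filling each dummy sample from its neighbors:

```python
import numpy as np

def upsample2x(x):
    """Insert a zero (dummy) sample between every two adjacent pixels,
    horizontally and vertically, raising the sampling rate to 2*fs."""
    up = np.zeros((2 * x.shape[0], 2 * x.shape[1]))
    up[::2, ::2] = x
    return up

def lowpass_interpolate(up):
    """Separable FIR low pass.  The 3-tap kernel [0.5, 1, 0.5] is the
    simplest choice: it leaves original samples unchanged and turns each
    zero into the average of its neighbors (bilinear interpolation); a
    real design would use a longer FIR with a sharper 0.5*fs cutoff."""
    k = np.array([0.5, 1.0, 0.5])
    # horizontal pass, then vertical pass
    h = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, up)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, h)

x = np.arange(9.0).reshape(3, 3)
xi = lowpass_interpolate(upsample2x(x))
```

Note that the original samples survive the filtering untouched, consistent with the statement that components of the received digital signals up to 0.5 fs remain as they are.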
- each of the interpolation-result red, green, and blue signals RI, GI, and BI has a frequency spectrum such as shown in FIG. 23 .
- each of the interpolation-result red, green, and blue signals RI, GI, and BI consists of only components having frequencies up to a value of “0.5 fs”.
- Components of the received digital R, G, and B signals which have frequencies up to a value of “0.5 fs” remain in the interpolation-result red, green, and blue signals RI, GI, and BI as they are.
- the low-frequency component extracting section 16 includes two-dimensional low pass filters each having a horizontal low pass filter and a vertical low pass filter with a cutoff frequency equal to a value of “0.25 fs”.
- the two-dimensional low pass filters include, for example, FIR filters.
- the interpolation-result red, green, and blue signals RI, GI, and BI are passed through the two-dimensional low pass filters in the low-frequency component extracting section 16 to generate low-frequency red, green, and blue signals RL, GL, and BL, respectively.
- Each of the low-frequency red, green, and blue signals RL, GL, and BL has a frequency spectrum such as shown in FIG. 24 .
- the high-frequency component extracting section 17 subtracts the low-frequency red signal RL from the interpolation-result red signal RI to generate a high-frequency red signal RH. In addition, the high-frequency component extracting section 17 subtracts the low-frequency green signal GL from the interpolation-result green signal GI to generate a high-frequency green signal GH. Furthermore, the high-frequency component extracting section 17 subtracts the low-frequency blue signal BL from the interpolation-result blue signal BI to generate a high-frequency blue signal BH.
- Each of the high-frequency red, green, and blue signals RH, GH, and BH has a frequency spectrum such as shown in FIG. 25 .
- the high-frequency component extracting section 17 uses the following equations: RH=RI−RL, GH=GI−GL, and BH=BI−BL.
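The complementary low/high split can be sketched as follows; the separable box filter here is only a stand-in for the actual two-dimensional FIR low pass with a 0.25 fs cutoff:

```python
import numpy as np

def lowpass2d(x):
    """Stand-in for the two-dimensional low pass of section 16 (a
    horizontal and a vertical FIR filter); here a separable 3-tap box
    average, for illustration only."""
    k = np.ones(3) / 3.0
    h = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, x)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, h)

rng = np.random.default_rng(1)
RI = rng.random((8, 8))   # interpolation-result red signal
RL = lowpass2d(RI)        # low-frequency red signal
RH = RI - RL              # high-frequency red signal, by subtraction
# The split is exactly complementary: RL + RH reconstructs RI.
```

The green and blue signals are split the same way, so the sum of the two branches always reproduces the interpolation-result signal without loss.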
- the high-frequency red, green, and blue signals RH, GH, and BH are fed from the high-frequency component extracting section 17 to the correlation-direction detecting section 18 and the high-frequency component interpolating section 19 .
- Each of the correlation-direction detecting section 18 and the high-frequency component interpolating section 19 maps or places pixel-corresponding segments of the high-frequency red, green, and blue signals RH, GH, and BH onto correct pixel positions in every frame to generate a pixel arrangement shown in FIG. 26 .
- a superposition-like combination of the photosensor pixels in the image sensors 12 R, 12 G, and 12 B has a configuration shown in FIG. 17 where actual pixel positions are denoted by “R”, “G”, and “B” and nonexistent pixel positions are denoted by the blank squares.
- the pixel arrangement in FIG. 26 has nonexistent pixel positions denoted by “NON” which originate from the nonexistent pixel positions (the blank squares) in FIG. 17 . Accordingly, the nonexistent pixel positions “NON” correspond to none of the photosensor pixels in the image sensors 12 R, 12 G, and 12 B.
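Assuming the staggered layout described for FIGS. 14-17 (G on the base grid, B shifted half a horizontal pitch, R shifted half a vertical pitch), the mapped pixel arrangement of FIG. 26 can be sketched as a label grid in which the remaining positions are the nonexistent pixels “NON”:

```python
import numpy as np

# Assumed layout: on a grid with doubled horizontal and vertical
# density, G occupies the base positions, B the half-horizontal-pitch
# positions, R the half-vertical-pitch positions, and the remaining
# quarter of the positions are the nonexistent pixels "NON".
h, w = 4, 4  # hypothetical sensor size in pixels
grid = np.full((2 * h, 2 * w), "NON", dtype=object)
grid[0::2, 0::2] = "G"
grid[0::2, 1::2] = "B"
grid[1::2, 0::2] = "R"
# grid[1::2, 1::2] stays "NON": these are the interpolation pixels that
# the high-frequency component interpolating section 19 must fill.
```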
- the nonexistent pixel positions “NON” are assigned high-frequency signal components changed from dummy signal segments through the interpolation by the horizontal-vertical interpolating section 15 .
- the high-frequency component interpolating section 19 generates pixel-corresponding signal segments for the respective nonexistent pixel positions “NON” through an interpolation-based process responsive to the pixel-corresponding segments of the mapped high-frequency red, green, and blue signals RH, GH, and BH. This interpolation-based process is further responsive to the highest-correlation direction identified by the correlation-direction detecting section 18 .
- the pixels at the positions “NON” are referred to as the interpolation pixels.
- the correlation-direction detecting section 18 detects different-direction correlations on the basis of the mapped high-frequency red, green, and blue signals RH, GH, and BH for an interpolation-based process to generate a pixel-corresponding signal segment assigned to every nonexistent pixel position “NON”. Then, the correlation-direction detecting section 18 decides a direction in which the highest of the detected different-direction correlations occurs. The correlation-direction detecting section 18 notifies the decided direction to the high-frequency component interpolating section 19 . In response to the decided direction, the high-frequency component interpolating section 19 implements an interpolation-based process to generate a pixel-corresponding signal segment for the nonexistent pixel position of interest.
- the high-frequency red, green, and blue signals RH, GH, and BH are free from aliasing components and color components which would adversely affect the detection of different-direction correlations.
- the detection of different-direction correlations and the decision about a correlation-peak direction through the use of the high-frequency red, green, and blue signals RH, GH, and BH are better in accuracy than those using signals containing low-frequency components.
- the correlation-direction detecting section 18 calculates four standard deviations among the values of pixels represented by the high-frequency red, green, and blue signals RH, GH, and BH in a window centered at every nonexistent pixel position “NON”.
- the four standard deviations are along a vertical direction, a horizontal direction, a 45-degree oblique direction, and a 135-degree oblique direction, respectively. These four directions pass the nonexistent pixel position “NON” of interest.
- the correlation-direction detecting section 18 identifies the smallest one among the four standard deviations.
- the correlation-direction detecting section 18 decides that a smallest pixel-value variation occurs and hence a highest correlation occurs in the direction related to the smallest standard deviation.
- the correlation-direction detecting section 18 uses the simplified equation (5) in calculating each of the four standard deviations.
- the use of the simplified equation (5) allows a reduction in the circuit scale.
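Equation (5) itself is not reproduced in this excerpt. As one plausible illustration of a circuit-scale-reducing simplification, the sketch below replaces the squaring and square root of the textbook standard deviation with absolute deviations from the mean; the patent's actual equation (5) may differ:

```python
def simplified_std(samples):
    """Spread measure used in place of the textbook standard deviation.
    Summing absolute deviations from the mean avoids both the squaring
    and the square root, which is the kind of simplification that lets
    the circuit scale be reduced.  Illustrative only: the patent's
    actual simplified equation (5) is not reproduced here."""
    m = sum(samples) / len(samples)
    return sum(abs(s - m) for s in samples)
```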
- Pixel-corresponding segments of the high-frequency red, green, and blue signals RH, GH, and BH are taken as samples.
- five pixel-corresponding segments of the high-frequency red, green, and blue signals RH, GH, and BH are selected as samples for calculating each of the four standard deviations.
- pixel-corresponding segments BH(i, j−2), BH(i, j−1), BH(i, j), BH(i, j+1), and BH(i, j+2) of the high-frequency blue signal BH are selected as samples to calculate the standard deviation along the vertical direction.
- Pixel-corresponding segments RH(i−2, j), RH(i−1, j), RH(i, j), RH(i+1, j), and RH(i+2, j) of the high-frequency red signal RH are selected as samples to calculate the standard deviation along the horizontal direction.
- Pixel-corresponding segments GH(i−2, j+2), GH(i−1, j+1), GH(i, j), GH(i+1, j−1), and GH(i+2, j−2) of the high-frequency green signal GH are selected as samples to calculate the standard deviation along the 45-degree oblique direction.
- Pixel-corresponding segments GH(i−2, j−2), GH(i−1, j−1), GH(i, j), GH(i+1, j+1), and GH(i+2, j+2) of the high-frequency green signal GH are selected as samples to calculate the standard deviation along the 135-degree oblique direction.
- the samples for calculating the standard deviation along each of the vertical, horizontal, 45-degree oblique, and 135-degree oblique directions contain ones at nonexistent pixel positions inclusive of the nonexistent pixel position of interest. It should be noted that there are high-frequency signal components at these nonexistent pixel positions which are changed from the dummy signal segments through the interpolation by the horizontal-vertical interpolating section 15 .
- the four calculated standard deviations “σ” are used as indications of the four detected different-direction correlations.
- a correlation increases as a corresponding standard deviation decreases.
- the use of the standard deviations allows accurate detection of the different-direction correlations.
- the correlation-direction detecting section 18 decides which of the four calculated standard deviations is the smallest, that is, which of the four detected correlations is the highest. Then, the correlation-direction detecting section 18 identifies, among the four different directions, one direction related to the smallest standard deviation (or the highest correlation).
- the correlation-direction detecting section 18 labels the identified direction as the highest-correlation direction.
- the correlation-direction detecting section 18 notifies the highest-correlation direction to the high-frequency component interpolating section 19 .
- the calculated standard deviations in the vertical, horizontal, 45-degree oblique, and 135-degree oblique directions are denoted by “a”, “b”, “c”, and “d”, respectively.
- the correlation-direction detecting section 18 compares the standard deviations “a”, “b”, “c”, and “d” to decide the smallest one thereamong. When the standard deviation “a” is the smallest, the correlation-direction detecting section 18 concludes that the correlation along the vertical direction is the highest among the correlations along the four directions. Then, the correlation-direction detecting section 18 labels the vertical direction as the highest-correlation direction.
- when the standard deviation “b” is the smallest, the correlation-direction detecting section 18 concludes that the correlation along the horizontal direction is the highest among the correlations along the four directions. Then, the correlation-direction detecting section 18 labels the horizontal direction as the highest-correlation direction.
- when the standard deviation “c” is the smallest, the correlation-direction detecting section 18 concludes that the correlation along the 45-degree oblique direction is the highest among the correlations along the four directions. Then, the correlation-direction detecting section 18 labels the 45-degree oblique direction as the highest-correlation direction.
- when the standard deviation “d” is the smallest, the correlation-direction detecting section 18 concludes that the correlation along the 135-degree oblique direction is the highest among the correlations along the four directions. Then, the correlation-direction detecting section 18 labels the 135-degree oblique direction as the highest-correlation direction. As previously mentioned, the correlation-direction detecting section 18 notifies the highest-correlation direction to the high-frequency component interpolating section 19.
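The detection and decision steps above can be sketched as follows, using the five-sample lists given for each direction; ties fall to the earlier direction in the order a, b, c, d, matching the flowchart of FIG. 31:

```python
import numpy as np

def detect_direction(RH, GH, BH, i, j, spread=np.std):
    """Return the highest-correlation direction at the nonexistent pixel
    position (i, j), using the five samples listed in the text for each
    of the four directions.  Index order (i, j) follows the text, with
    i horizontal and j vertical."""
    a = spread([BH[i, j + k] for k in (-2, -1, 0, 1, 2)])      # vertical
    b = spread([RH[i + k, j] for k in (-2, -1, 0, 1, 2)])      # horizontal
    c = spread([GH[i + k, j - k] for k in (-2, -1, 0, 1, 2)])  # 45-degree oblique
    d = spread([GH[i + k, j + k] for k in (-2, -1, 0, 1, 2)])  # 135-degree oblique
    directions = ("vertical", "horizontal", "45-degree", "135-degree")
    # The smallest spread marks the smallest pixel-value variation and
    # hence the highest correlation; argmin keeps the a, b, c, d priority.
    return directions[int(np.argmin([a, b, c, d]))]
```

The `spread` parameter defaults to the true standard deviation but can be swapped for a simplified measure, mirroring the patent's use of a simplified equation to reduce circuit scale.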
- when the highest-correlation direction notified by the correlation-direction detecting section 18 is the vertical direction, the high-frequency component interpolating section 19 adopts the pixel-corresponding segment BH(i, j) of the high-frequency blue signal BH as a pixel-corresponding segment for the nonexistent pixel position (i, j) of interest.
- when the highest-correlation direction notified by the correlation-direction detecting section 18 is the horizontal direction, the high-frequency component interpolating section 19 adopts the pixel-corresponding segment RH(i, j) of the high-frequency red signal RH as a pixel-corresponding segment for the nonexistent pixel position (i, j) of interest.
- when the highest-correlation direction notified by the correlation-direction detecting section 18 is the 45-degree oblique direction, the high-frequency component interpolating section 19 generates the average of the pixel-corresponding segments GH(i−1, j+1) and GH(i+1, j−1) of the high-frequency green signal GH which correspond to the right-upper and left-lower pixel positions neighboring the nonexistent pixel position (i, j) of interest. Then, the high-frequency component interpolating section 19 adopts the generated average as a pixel-corresponding segment for the nonexistent pixel position (i, j) of interest.
- when the highest-correlation direction notified by the correlation-direction detecting section 18 is the 135-degree oblique direction, the high-frequency component interpolating section 19 generates the average of the pixel-corresponding segments GH(i−1, j−1) and GH(i+1, j+1) of the high-frequency green signal GH which correspond to the left-upper and right-lower pixel positions neighboring the nonexistent pixel position (i, j) of interest. Then, the high-frequency component interpolating section 19 adopts the generated average as a pixel-corresponding segment for the nonexistent pixel position (i, j) of interest.
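The four direction-dependent interpolation rules described above can be collected into one function; a minimal sketch:

```python
def interpolate_non_pixel(RH, GH, BH, i, j, direction):
    """Generate the pixel-corresponding signal segment for the
    nonexistent pixel position (i, j) according to the notified
    highest-correlation direction."""
    if direction == "vertical":
        # adopt the blue segment at the position of interest
        return BH[i][j]
    if direction == "horizontal":
        # adopt the red segment at the position of interest
        return RH[i][j]
    if direction == "45-degree":
        # average of the right-upper and left-lower green neighbors
        return (GH[i - 1][j + 1] + GH[i + 1][j - 1]) / 2.0
    # 135-degree: average of the left-upper and right-lower green neighbors
    return (GH[i - 1][j - 1] + GH[i + 1][j + 1]) / 2.0
```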
- the high-frequency component interpolating section 19 assigns an interpolation-based pixel-corresponding signal segment to every nonexistent pixel position “NON” (see FIG. 26 ). Then, the high-frequency component interpolating section 19 combines the segments of the high-frequency red, green, and blue signals RH, GH, and BH for the pixel positions except the nonexistent pixel positions “NON” and the assigned interpolation-based signal segments for the nonexistent pixel positions “NON” to generate the high-frequency interpolation-result signal (the high-frequency composite signal) RGBHI.
- for the pixel positions except the nonexistent pixel positions “NON”, the high-frequency component interpolating section 19 directly uses the high-frequency red, green, and blue signals RH, GH, and BH as the high-frequency interpolation-result signal RGBHI.
- for the nonexistent pixel positions “NON”, the high-frequency component interpolating section 19 uses the assigned interpolation-based signal segments as the high-frequency interpolation-result signal RGBHI.
- the correlation-direction detecting section 18 and the high-frequency component interpolating section 19 are formed by a computer having a combination of an input/output port, a CPU, a ROM, and a RAM.
- the correlation-direction detecting section 18 and the high-frequency component interpolating section 19 operate in accordance with a control program (a computer program) stored in the ROM.
- FIG. 31 is a flowchart of a segment of the control program which is executed for every nonexistent pixel position (i, j).
- a first step S 31 of the program segment decides whether or not the minimum among the standard deviations “a”, “b”, “c”, and “d” is the standard deviation “a”.
- the program advances from the step S 31 to a step S 32 . Otherwise, the program advances from the step S 31 to a step S 33 .
- the step S 32 concludes that the correlation along the vertical direction is the highest among the correlations along the four directions.
- the step S 32 adopts the pixel-corresponding segment BH(i, j) of the high-frequency blue signal BH as a pixel-corresponding segment for the nonexistent pixel position (i, j).
- the step S 33 decides whether or not the minimum among the standard deviations “b”, “c”, and “d” is the standard deviation “b”. When the minimum among the standard deviations “b”, “c”, and “d” is the standard deviation “b”, the program advances from the step S 33 to a step S 34 . Otherwise, the program advances from the step S 33 to a step S 35 .
- the step S 34 concludes that the correlation along the horizontal direction is the highest among the correlations along the four directions.
- the step S 34 adopts the pixel-corresponding segment RH(i, j) of the high-frequency red signal RH as a pixel-corresponding segment for the nonexistent pixel position (i, j).
- the step S 35 decides whether or not the minimum among the standard deviations “c” and “d” is the standard deviation “c”. When the minimum among the standard deviations “c” and “d” is the standard deviation “c”, the program advances from the step S 35 to a step S 36 . Otherwise, the program advances from the step S 35 to a step S 37 .
- the step S 36 concludes that the correlation along the 45-degree oblique direction is the highest among the correlations along the four directions.
- the step S 36 generates the average of the pixel-corresponding segments GH(i−1, j+1) and GH(i+1, j−1) of the high-frequency green signal GH which correspond to the right-upper and left-lower pixel positions neighboring the nonexistent pixel position (i, j). Then, the step S 36 adopts the generated average as a pixel-corresponding segment for the nonexistent pixel position (i, j).
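The decision cascade of steps S 31 through S 36 (with the implied final step for the 135-degree case) amounts to picking the direction whose standard deviation is smallest. A hedged sketch, with illustrative function and label names:

```python
def pick_direction(a, b, c, d):
    """Decision cascade of FIG. 31 (names illustrative).

    a, b, c, d are the standard deviations along the vertical,
    horizontal, 45-degree, and 135-degree directions respectively;
    the direction with the smallest deviation has the highest correlation.
    """
    if a <= min(b, c, d):   # steps S 31 / S 32
        return "vertical"
    if b <= min(c, d):      # steps S 33 / S 34
        return "horizontal"
    if c <= d:              # steps S 35 / S 36
        return "45-degree"
    return "135-degree"     # remaining case
```

Tie-breaking (when two deviations are equal) is not specified in the text; the `<=` comparisons here arbitrarily favour the earlier-tested direction.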
- the high-frequency component adding section 20 receives the high-frequency interpolation-result signal RGBHI from the high-frequency component interpolating section 19 .
- the high-frequency component adding section 20 receives the low-frequency red, green, and blue signals RL, GL, and BL from the low-frequency component extracting section 16 .
- the high-frequency interpolation-result signal RGBHI has a frequency spectrum such as shown in FIG. 32 .
- Each of the low-frequency red, green, and blue signals RL, GL, and BL has a frequency spectrum such as shown in FIG. 24 .
- the high-frequency component adding section 20 combines the low-frequency red signal RL and the high-frequency interpolation-result signal RGBHI into a high-definition red signal R(HD). In addition, the high-frequency component adding section 20 combines the low-frequency green signal GL and the high-frequency interpolation-result signal RGBHI into a high-definition green signal G(HD). Furthermore, the high-frequency component adding section 20 combines the low-frequency blue signal BL and the high-frequency interpolation-result signal RGBHI into a high-definition blue signal B(HD). Each of the high-definition component video signals R(HD), G(HD), and B(HD) has a frequency spectrum such as shown in FIG. 33 .
- the maximum frequency of each of the high-definition component video signals R(HD), G(HD), and B(HD) is “0.75 fs”, that is, “0.5 fs” multiplied by 1.5.
- this higher maximum frequency provides a higher video definition.
- the signal combining actions by the high-frequency component adding section 20 are implemented according to the following equations.
- R(HD)=RL+RGBHI
- G(HD)=GL+RGBHI
- B(HD)=BL+RGBHI (6)
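The combining actions of equation (6) are elementwise additions of the common high-frequency component to each low-frequency colour signal. A minimal numpy sketch, with hypothetical 2-by-2 arrays standing in for the actual signals:

```python
import numpy as np

# hypothetical small arrays standing in for the low-frequency colour
# signals RL, GL, BL and the high-frequency interpolation-result RGBHI
RL = np.array([[0.1, 0.2], [0.3, 0.4]])
GL = np.array([[0.5, 0.6], [0.7, 0.8]])
BL = np.array([[0.2, 0.1], [0.4, 0.3]])
RGBHI = np.array([[0.05, -0.05], [0.0, 0.05]])

# equation (6): the same high-frequency component is added to every
# low-frequency colour signal
R_HD = RL + RGBHI
G_HD = GL + RGBHI
B_HD = BL + RGBHI
```

Because a single shared high-frequency signal is added to all three colours, the high-frequency detail is achromatic, which avoids introducing false colour at fine edges.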
- the high-frequency component adding section 20 outputs a set of the high-definition component video signals R(HD), G(HD), and B(HD).
- a high video definition is provided by the high-definition component video signals R(HD), G(HD), and B(HD) although only the three solid-state image sensors 12 R, 12 G, and 12 B each having a relatively small number of photosensor pixels are used. Not only higher video definitions along the horizontal and vertical directions but also higher video definitions along the oblique directions are attained. Furthermore, the accuracy of interpolation with respect to high-frequency signal components is increased. Therefore, it is possible to attain a high video definition comparable to that provided in cases where four solid-state image sensors are used.
- the interpolation with respect to the high-frequency signal components is implemented through the calculation of the standard deviations along the different directions and the detection of the direction in which the highest correlation occurs in response to the calculated standard deviations.
- a second embodiment of this invention is similar to the first embodiment thereof except for points mentioned hereafter.
- the second embodiment of this invention is designed to increase the accuracy of the detection of the highest-correlation direction by the correlation-direction detecting section 18 .
- the correlation-direction detecting section 18 in the second embodiment of this invention includes standard-deviation calculating sections 41 , 42 , and 43 , and an adding section 44 .
- segments of the high-frequency red, green, and blue signals RH, GH, and BH which correspond to a set of adjacent pixels “a 1 ”, “a 2 ”, “a 3 ”, “b 1 ”, “b 2 ”, “b 3 ”, “b 4 ”, “b 5 ”, “c 1 ”, “c 2 ”, and “c 3 ” centered at the nonexistent pixel “b 3 ” are selected as samples for calculating standard deviations along the four directions (the vertical, horizontal, 45-degree oblique, and 135-degree oblique directions).
- the pixels “a 1 ”, “a 2 ”, “a 3 ”, “b 1 ”, “b 2 ”, “b 3 ”, “b 4 ”, “b 5 ”, “c 1 ”, “c 2 ”, and “c 3 ” are arranged in a frame as shown in FIG. 35 .
- the pixels “b 1 ”, “b 2 ”, “b 3 ”, “b 4 ”, and “b 5 ” are in a center vertical line having the nonexistent pixel “b 3 ”.
- the pixels “a 1 ”, “a 2 ”, and “a 3 ” are in a left vertical line extending adjacently leftward of the center vertical line.
- the pixels “c 1 ”, “c 2 ”, and “c 3 ” are in a right vertical line extending adjacently rightward of the center vertical line.
- corresponding segments of the high-frequency blue signal BH are used.
- corresponding segments of the high-frequency green signal GH are used.
- corresponding segments of the high-frequency red signal RH are used.
- the pixels “a 1 ”, “a 2 ”, “a 3 ”, “b 1 ”, “b 2 ”, “b 3 ”, “b 4 ”, “b 5 ”, “c 1 ”, “c 2 ”, and “c 3 ” are arranged in a frame as shown in FIG. 36 .
- the pixels “b 1 ”, “b 2 ”, “b 3 ”, “b 4 ”, and “b 5 ” are in a center horizontal line having the nonexistent pixel “b 3 ”.
- the pixels “a 1 ”, “a 2 ”, and “a 3 ” are in an upper horizontal line extending immediately above the center horizontal line.
- the pixels “c 1 ”, “c 2 ”, and “c 3 ” are in a lower horizontal line extending immediately below the center horizontal line.
- corresponding segments of the high-frequency red signal RH are used.
- corresponding segments of the high-frequency green signal GH are used.
- corresponding segments of the high-frequency blue signal BH are used.
- the pixels “c 1 ”, “c 2 ”, and “c 3 ” are in a right-lower oblique line extending adjacently rightward and downward of the center oblique line.
- corresponding segments of the high-frequency green signal GH are used.
- corresponding segments of the high-frequency red signal RH are used.
- corresponding segments of the high-frequency blue signal BH are used.
- the pixels “a 1 ”, “a 2 ”, “a 3 ”, “b 1 ”, “b 2 ”, “b 3 ”, “b 4 ”, “b 5 ”, “c 1 ”, “c 2 ”, and “c 3 ” are arranged in a frame as shown in FIG. 38 .
- the pixels “b 1 ”, “b 2 ”, “b 3 ”, “b 4 ”, and “b 5 ” are in a center oblique line having the nonexistent pixel “b 3 ”.
- the pixels “a 1 ”, “a 2 ”, and “a 3 ” are in a right-upper oblique line extending adjacently rightward and upward of the center oblique line.
- the pixels “c 1 ”, “c 2 ”, and “c 3 ” are in a left-lower oblique line extending adjacently leftward and downward of the center oblique line.
- corresponding segments of the high-frequency green signal GH are used.
- corresponding segments of the high-frequency red signal RH are used.
- corresponding segments of the high-frequency blue signal BH are used.
- the standard-deviation calculating section 41 computes each “σ1” of the standard deviations of the values of the selected pixel-corresponding segments of the high-frequency red, green, and blue signals RH, GH, and BH according to the following equation.
- a i denotes the value of each of the signal segments corresponding to the pixels “a 1 ”, “a 2 ”, and “a 3 ”
- a ave denotes the mean of the values of the signal segments corresponding to the pixels “a 1 ”, “a 2 ”, and “a 3 ”.
- the standard-deviation calculating section 42 computes each “σ2” of the standard deviations of the values of the selected pixel-corresponding segments of the high-frequency red, green, and blue signals RH, GH, and BH according to the following equation.
- b i denotes the value of each of the signal segments corresponding to the pixels “b 1 ”, “b 2 ”, “b 3 ”, “b 4 ”, and “b 5 ”
- b ave denotes the mean of the values of the signal segments corresponding to the pixels “b 1 ”, “b 2 ”, “b 3 ”, “b 4 ”, and “b 5 ”.
- the standard-deviation calculating section 43 computes each “σ3” of the standard deviations of the values of the selected pixel-corresponding segments of the high-frequency red, green, and blue signals RH, GH, and BH according to the following equation.
- c i denotes the value of each of the signal segments corresponding to the pixels “c 1 ”, “c 2 ”, and “c 3 ”
- c ave denotes the mean of the values of the signal segments corresponding to the pixels “c 1 ”, “c 2 ”, and “c 3 ”.
- the adding section 44 adds the computed standard deviations “σ1”, “σ2”, and “σ3” to obtain a total deviation “σmix” for each of the four directions (the vertical direction, the horizontal direction, the 45-degree oblique direction, and the 135-degree oblique direction).
- the correlation-direction detecting section 18 compares the obtained total deviations “σmix” for the four directions to detect the smallest one thereamong.
- the correlation-direction detecting section 18 identifies which of the four directions the smallest total deviation “σmix” relates to.
- the correlation-direction detecting section 18 concludes that the highest correlation occurs in the identified direction. In this way, the correlation-direction detecting section 18 detects the highest-correlation direction.
- Each of the total deviations “σmix” is equal to the sum of the standard deviations “σ1”, “σ2”, and “σ3” related to the three adjacent lines. The use of such total deviations increases the accuracy of the detection of the direction along which the highest correlation occurs.
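The total-deviation test can be sketched in a few lines. This is an illustrative reconstruction, not the patent's circuit: the function names and the dict-based direction mapping are assumptions, and the population standard deviation (numpy's default) is assumed for each sample line:

```python
import numpy as np

def total_deviation(a_vals, b_vals, c_vals):
    # sigma_mix = sigma1 + sigma2 + sigma3: one population standard
    # deviation per sample line (the "a", "b", and "c" lines)
    return float(np.std(a_vals) + np.std(b_vals) + np.std(c_vals))

def highest_correlation_direction(samples_by_direction):
    # samples_by_direction maps a direction label to its (a, b, c)
    # sample tuples; the smallest sigma_mix marks the highest correlation
    return min(samples_by_direction,
               key=lambda d: total_deviation(*samples_by_direction[d]))
```

Summing the deviations of three adjacent lines, rather than using a single line, makes the decision less sensitive to noise on any one line.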
- a third embodiment of this invention is similar to the first or second embodiment thereof except for design changes mentioned hereafter.
- the optical position of the image sensor 12 B relative to the incident light slightly differs from that of the image sensor 12 G such that the photosensor pixels in the image sensors 12 B and 12 G are staggered at only horizontal intervals equal to half the horizontal pixel pitch Px or only vertical intervals equal to half the vertical pixel pitch Py.
- the photosensor pixels in the image sensors 12 B and 12 G may be staggered at horizontal intervals equal to half the horizontal pixel pitch Px and also vertical intervals equal to half the vertical pixel pitch Py.
- the optical positions of photosensor pixels in the image sensor 12 B do not overlap those of photosensor pixels in the image sensor 12 R.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Color Television Image Signal Generators (AREA)
- Solid State Image Pick-Up Elements (AREA)
- Image Processing (AREA)
Abstract
Description
ΔH=|G(2i,2j)−G(2i,2j+2)| (1)
ΔV=|G(2i−1,2j+1)−G(2i+1,2j+1)| (2)
S=min(Yn)/max(Yn)
where “min” denotes an operator for selecting the minimum, and “max” denotes an operator for selecting the maximum. In the above equation, S ≤ 1, and the maximum value of the correlation degree S is equal to 1. The correlation degree S is calculated according to the above equation. The direction in which the calculated correlation degree peaks is decided to be the correlation direction. Then, optimal interpolation is implemented in accordance with the decided correlation direction.
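The correlation degree S above is simply the ratio of the smallest to the largest luminance sample; the closer the samples are to each other, the closer S is to 1. A one-function sketch (the function name is illustrative, and positive sample values Yn are assumed, since the ratio is meaningless at or around zero):

```python
def correlation_degree(samples):
    # S = min(Yn) / max(Yn); S <= 1, and S == 1 only when all the
    # samples along the examined direction are equal (assumes Yn > 0)
    return min(samples) / max(samples)
```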
RH=RI−RL, GH=GI−GL, BH=BI−BL (3)
The high-frequency red, green, and blue signals RH, GH, and BH are fed from the high-frequency component extracting section 17 to the correlation-direction detecting section 18 .
where “N” denotes the number of samples, “xi” denotes the value of each of the samples, and “xave” denotes the mean of the values of the samples. A simplified form of the above equation (4) is as follows.
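Given the definitions of N, xi, and xave, equation (4) is the ordinary population standard deviation, sqrt((1/N)·Σ(xi − xave)²), which coincides with numpy's default `np.std`. A small sketch under that assumption:

```python
import numpy as np

def standard_deviation(samples):
    # equation (4), read as the population standard deviation:
    # sqrt( (1/N) * sum_i (x_i - x_ave)^2 )
    x = np.asarray(samples, dtype=float)
    return float(np.sqrt(np.mean((x - x.mean()) ** 2)))
```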
R(HD)=RL+RGBHI, G(HD)=GL+RGBHI, B(HD)=BL+RGBHI (6)
where “ai” denotes the value of each of the signal segments corresponding to the pixels “a1”, “a2”, and “a3”, and “aave” denotes the mean of the values of the signal segments corresponding to the pixels “a1”, “a2”, and “a3”.
where “bi” denotes the value of each of the signal segments corresponding to the pixels “b1”, “b2”, “b3”, “b4”, and “b5”, and “bave” denotes the mean of the values of the signal segments corresponding to the pixels “b1”, “b2”, “b3”, “b4”, and “b5”.
where “ci” denotes the value of each of the signal segments corresponding to the pixels “c1”, “c2”, and “c3”, and “cave” denotes the mean of the values of the signal segments corresponding to the pixels “c1”, “c2”, and “c3”.
Claims (5)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2007244203 | 2007-09-20 | ||
| JP2007-244203 | 2007-09-20 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20090079855A1 US20090079855A1 (en) | 2009-03-26 |
| US7911515B2 true US7911515B2 (en) | 2011-03-22 |
Family
ID=40471179
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US12/078,919 Active 2029-10-10 US7911515B2 (en) | 2007-09-20 | 2008-04-08 | Imaging apparatus and method of processing video signal |
Country Status (2)
| Country | Link |
|---|---|
| US (1) | US7911515B2 (en) |
| JP (1) | JP4985584B2 (en) |
Families Citing this family (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE102006006835B4 (en) * | 2006-02-14 | 2008-05-08 | Oce Printing Systems Gmbh | Method and device for scanning images |
| US9105106B2 (en) | 2010-05-11 | 2015-08-11 | Zoran (France) S.A. | Two-dimensional super resolution scaling |
| US20110317048A1 (en) * | 2010-06-29 | 2011-12-29 | Aptina Imaging Corporation | Image sensor with dual layer photodiode structure |
| US8588535B2 (en) | 2010-09-15 | 2013-11-19 | Sharp Laboratories Of America, Inc. | Methods and systems for estimation of compression noise |
| US8600188B2 (en) | 2010-09-15 | 2013-12-03 | Sharp Laboratories Of America, Inc. | Methods and systems for noise reduction and image enhancement |
| US8175411B2 (en) | 2010-09-28 | 2012-05-08 | Sharp Laboratories Of America, Inc. | Methods and systems for estimation of compression noise |
| US8538193B2 (en) | 2010-09-28 | 2013-09-17 | Sharp Laboratories Of America, Inc. | Methods and systems for image enhancement and estimation of compression noise |
| US8532429B2 (en) | 2010-09-28 | 2013-09-10 | Sharp Laboratories Of America, Inc. | Methods and systems for noise reduction and image enhancement involving selection of noise-control parameter |
| JP5741011B2 (en) | 2011-01-26 | 2015-07-01 | 株式会社リコー | Image processing apparatus, pixel interpolation method, and program |
| JP5631769B2 (en) * | 2011-02-17 | 2014-11-26 | 株式会社東芝 | Image processing device |
| US9380320B2 (en) * | 2012-02-10 | 2016-06-28 | Broadcom Corporation | Frequency domain sample adaptive offset (SAO) |
| JP5889049B2 (en) * | 2012-03-09 | 2016-03-22 | オリンパス株式会社 | Image processing apparatus, imaging apparatus, and image processing method |
Citations (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US4285004A (en) * | 1980-02-25 | 1981-08-18 | Ampex Corporation | Total raster error correction apparatus and method for the automatic set up of television cameras and the like |
| US4334238A (en) * | 1979-06-01 | 1982-06-08 | Nippon Electric Co., Ltd. | Color image pickup apparatus with multi-solid-state image pickup devices |
| US4725881A (en) * | 1984-05-19 | 1988-02-16 | Robert Bosch Gmbh | Method for increasing the resolution of a color television camera with three mutually-shifted solid-state image sensors |
| US5471323A (en) * | 1993-05-19 | 1995-11-28 | Matsushita Electric Industrial Co., Ltd | Solid state video camera having improved chromatic aberration suppression and moire suppression |
| US5521637A (en) * | 1992-10-09 | 1996-05-28 | Sony Corporation | Solid state image pick-up apparatus for converting the data clock rate of the generated picture data signals |
| US5657082A (en) * | 1994-12-20 | 1997-08-12 | Sharp Kabushiki Kaisha | Imaging apparatus and method using interpolation processing |
| US5754226A (en) * | 1994-12-20 | 1998-05-19 | Sharp Kabushiki Kaisha | Imaging apparatus for obtaining a high resolution image |
| JPH10155158A (en) | 1996-11-20 | 1998-06-09 | Sony Corp | Image pickup device and processing method for color image signal |
| JPH11234690A (en) | 1998-02-19 | 1999-08-27 | Mitsubishi Electric Corp | Imaging device |
| JP2000341708A (en) | 1999-05-31 | 2000-12-08 | Victor Co Of Japan Ltd | Image pickup device |
| JP2000341710A (en) | 1999-05-31 | 2000-12-08 | Victor Co Of Japan Ltd | Image pickup device |
| US7570286B2 (en) * | 2005-05-27 | 2009-08-04 | Honda Motor Co., Ltd. | System and method for creating composite images |
| US7825968B2 (en) * | 2007-09-07 | 2010-11-02 | Panasonic Corporation | Multi-color image processing apparatus and signal processing apparatus |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP3651015B2 (en) * | 1994-03-16 | 2005-05-25 | ソニー株式会社 | Solid-state imaging device |
| JP4065155B2 (en) * | 2002-07-12 | 2008-03-19 | オリンパス株式会社 | Image processing apparatus and image processing program |
-
2008
- 2008-04-08 US US12/078,919 patent/US7911515B2/en active Active
- 2008-08-12 JP JP2008207635A patent/JP4985584B2/en active Active
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100104183A1 (en) * | 2008-10-23 | 2010-04-29 | Megachips Corporation | Image enlargement method |
| US8335372B2 (en) * | 2008-10-23 | 2012-12-18 | Megachips Corporation | Image enlargement method that uses information acquired in a pixel interpolation process |
| US20210342629A1 (en) * | 2019-06-14 | 2021-11-04 | Tencent Technology (Shenzhen) Company Limited | Image processing method, apparatus, and device, and storage medium |
| US11663819B2 (en) * | 2019-06-14 | 2023-05-30 | Tencent Technology (Shenzhen) Company Limited | Image processing method, apparatus, and device, and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| US20090079855A1 (en) | 2009-03-26 |
| JP2009095005A (en) | 2009-04-30 |
| JP4985584B2 (en) | 2012-07-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US7911515B2 (en) | Imaging apparatus and method of processing video signal | |
| US9191593B2 (en) | Image signal processing apparatus, imaging apparatus, image signal processing method and computer program | |
| US8199229B2 (en) | Color filter, image processing apparatus, image processing method, image-capture apparatus, image-capture method, program and recording medium | |
| US7839437B2 (en) | Image pickup apparatus, image processing method, and computer program capable of obtaining high-quality image data by controlling imbalance among sensitivities of light-receiving devices | |
| KR101695252B1 (en) | Camera system with multi-spectral filter array and image processing method thereof | |
| US8736724B2 (en) | Imaging apparatus, method of processing captured image, and program for processing captured image | |
| EP2523160A1 (en) | Image processing device, image processing method, and program | |
| US8131067B2 (en) | Image processing apparatus, image processing method, and computer-readable media for attaining image processing | |
| CN101494795B (en) | Image sensor and method using unit pixel groups with overlapping green spectral ranges | |
| EP2211554A1 (en) | Image processing device, image processing method, and image processing program | |
| US10027942B2 (en) | Imaging processing apparatus, image-capturing apparatus, and storage medium having image processing program stored thereon | |
| JP2008070853A (en) | Image array data compensation method | |
| EP2175656A1 (en) | Image processing device, image processing method, program, and imaging device | |
| EP2031881B1 (en) | Image pickup device and signal processing method | |
| US8077234B2 (en) | Image pickup device and method for processing an interpolated color signal | |
| US20070222868A1 (en) | Color interpolation processing method | |
| JP4962293B2 (en) | Image processing apparatus, image processing method, and program | |
| KR100894420B1 (en) | Apparatus and method for generating an image using a multichannel filter | |
| US10249020B2 (en) | Image processing unit, imaging device, computer-readable medium, and image processing method | |
| US20070126896A1 (en) | Pixel signal processing apparatus and pixel signal processing method | |
| JP2003158744A (en) | Pixel flaw detection / correction apparatus and imaging apparatus using the same | |
| JP5036524B2 (en) | Image processing apparatus, image processing method, program, and imaging apparatus | |
| KR20070099238A (en) | Image sensor for outputting Bayer images after widening the dynamic range | |
| JP4635769B2 (en) | Image processing apparatus, image processing method, and imaging apparatus | |
| JP4303525B2 (en) | Interpolated pixel generation apparatus and method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: VICTOR COMPANY OF JAPAN, LTD., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ITO, TAIICHI;KOBAYASHI, TOSHIHIDE;YAMADA, KAZUYA;REEL/FRAME:020821/0808 Effective date: 20080325 |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| AS | Assignment |
Owner name: JVC KENWOOD CORPORATION, JAPAN Free format text: MERGER;ASSIGNOR:VICTOR COMPANY OF JAPAN, LTD.;REEL/FRAME:028000/0827 Effective date: 20111001 |
|
| FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| FPAY | Fee payment |
Year of fee payment: 4 |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |