WO2018220993A1 - Signal processing device, signal processing method, and computer program - Google Patents
Signal processing device, signal processing method, and computer program
- Publication number
- WO2018220993A1 (PCT/JP2018/014210)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- synthesis
- motion
- images
- signal processing
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B15/00—Special procedures for taking photographs; Apparatus therefor
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B17/00—Details of cameras or camera bodies; Accessories therefor
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B7/00—Control of exposure by setting shutters, diaphragms or filters, separately or conjointly
- G03B7/28—Circuitry to measure or to take account of the object contrast
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Definitions
- the present disclosure relates to a signal processing device, a signal processing method, and a computer program.
- Japanese Patent Laid-Open No. 2013-152334; Japanese Patent Laid-Open No. 2015-186062
- in the existing technology, either the processing time for obtaining the HDR image is shortened at the cost of degraded image quality, or the image-quality degradation is suppressed but the processing time cannot easily be shortened. Moreover, the existing technology does not flexibly adjust the trade-off between image quality and processing time when obtaining an HDR image.
- the processing time is shortened without degrading the dynamic range performance, and the trade-off between image quality and processing time can be adjusted flexibly.
- a signal processing apparatus, a signal processing method, and a computer program are proposed.
- a signal processing device is provided that includes: a synthesis processing unit that performs synthesis processing at least N-1 times on images of N frames (N is an integer of 3 or more) with different exposure times; and a motion adaptation processing unit that performs motion adaptation processing at the time of one synthesis using two of the images, wherein the number of motion adaptation processes performed while the synthesis processing unit carries out the N-1 synthesis processes is set to N-2 or less.
- a signal processing method is provided in which a processor performs synthesis processing at least N-1 times on images of N frames (N is an integer of 3 or more) having different exposure times, performs motion adaptation processing at the time of one synthesis using two of the images, and sets the number of motion adaptation processes performed during the N-1 synthesis processes to N-2 or less.
- a computer program is provided that causes a computer to perform synthesis processing at least N-1 times on images of N frames (N is an integer of 3 or more) having different exposure times, to perform motion adaptation processing at the time of one synthesis using two of the images, and to keep the number of motion adaptation processes performed during the N-1 synthesis processes to N-2 or less.
- N is an integer of 3 or more
- FIG. 6 is an explanatory diagram illustrating a specific example of the operation of the signal processing device according to the embodiment.
- FIG. 7 is an explanatory diagram illustrating a specific example of the operation of the signal processing device according to the embodiment.
- FIG. 8 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile body control system to which the technology according to the present disclosure can be applied.
- A figure shows an example of the installation position of the imaging unit 12031.
- HDR (High Dynamic Range)
- an imaging technique that combines image signals captured with a plurality of exposures has been used. If the size of the image sensor is small, the dynamic range becomes small, and so-called whiteout (blown highlights) or blackout (crushed shadows) tends to occur in the captured image. An image in which whiteout or blackout has occurred differs from how the scene appears to the naked eye. HDR technology is therefore useful for expanding the dynamic range of a small image sensor.
- the HDR image is obtained by combining a plurality of images with different exposures.
- a moving object is included in the subject.
- the purpose is to reduce the blur of a moving object that occurs in images with long exposure times (long accumulation images), or, in the case of frame-composition HDR, to reduce the displacement and double images of moving objects.
- a motion adaptation process is performed in which a motion region is detected to increase a blend ratio of an image having a short exposure time (short accumulation image), or a motion vector is detected to align an object.
- short accumulation image: an image with a short exposure time
- a motion vector is detected to align an object.
- smartphones are the most popular platform for performing synthesis processing, and in this case the processing is performed in software on an AP (application processor).
- AP (application processor)
- the computing performance of APs for smartphones is not yet sufficient, and the computing performance of APs mounted on middle-class and lower models is relatively low.
- a long processing time is required, and the user must wait a long time for the HDR composite image to be output after shooting with the HDR function enabled.
- when the calculation performance is not sufficient, it is not easy to obtain a wide dynamic range in a reasonable processing time while maintaining constant image quality with respect to motion.
- the inventors of the present disclosure therefore devised a technology that, when obtaining an HDR image, can shorten the processing time without reducing dynamic range performance and can flexibly adjust the trade-off between image quality and processing time.
- FIG. 1 is an explanatory diagram showing a state of generating an HDR image that is assumed by the signal processing apparatus according to the present embodiment.
- FIG. 1 shows a state where an HDR image is generated using four image frames.
- the four image frames are designated as long accumulation, middle accumulation, short accumulation, and ultrashort accumulation in order of decreasing exposure time.
- an image with long accumulation has an exposure time of 1/30 seconds
- an image with middle accumulation has an exposure time of 1/120 seconds
- an image with short accumulation has an exposure time of 1/480 seconds
- an image with ultrashort accumulation has an exposure time of 1/1920 seconds
- FIG. 1 shows a state in which an HDR image is generated using four images having different exposure times.
- the number of images on which the HDR image is based is not limited to four; two or more images suffice.
- FIG. 1 shows synthesis proceeding from the long exposure time side, but when generating the HDR image, the synthesis may instead begin from the short exposure time side.
- gradation compression processing is performed so that the result fits within the normal dynamic range. After gradation compression, the same camera signal processing system used when HDR is not performed can be applied.
- Camera signal processing includes white balance, demosaic, color matrix, edge enhancement, noise reduction, and the like.
- FIG. 2 is an explanatory diagram showing an example of an image sensor in which differently exposed pixels are two-dimensionally arranged.
- the image sensor 10 shown in FIG. 2 has a configuration in which long accumulation pixels 11, middle accumulation pixels 12, short accumulation pixels 13, and ultrashort accumulation pixels 14 are arranged in a matrix.
- a method of changing the sensitivity without changing the shutter time may also be used. If the shutter time is shortened, flicker must be dealt with; by changing the sensitivity instead of the shutter time, flicker handling can be omitted.
- FIG. 3 is an explanatory diagram illustrating a functional configuration example of the signal processing device 100 according to the embodiment of the present disclosure.
- FIG. 3 shows a functional configuration example of the signal processing device 100 that executes processing for generating an HDR image.
- a functional configuration example of the signal processing device 100 according to the embodiment of the present disclosure will be described with reference to FIG.
- the signal processing apparatus 100 includes a saturation detection unit 102, a motion detection unit 104, a black crush detection unit 106, a blend rate determination unit 108, and a synthesis unit 110.
- the saturation detection unit 102 detects the degree of saturation of an image having a longer exposure time among two images having different exposure times input to the signal processing apparatus 100.
- the degree of image saturation is information defined as zero for pixel values at or below a certain threshold and as a multi-valued quantity for pixel values exceeding the threshold.
- the saturation detection unit 102 sends information on the detected degree of saturation to the blend rate determination unit 108.
- Information on the degree of saturation is used by the blend rate determination unit 108 to determine the blend rate (synthesis ratio) of two images.
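As a concrete illustration of this definition, the following is a minimal sketch; the function name, the 8-bit range, and the threshold value are assumptions for illustration, not taken from the disclosure. The degree of saturation is zero at or below a threshold and becomes a multi-valued quantity above it.

```python
def saturation_degree(pixel, threshold=224, max_value=255):
    """Degree of saturation for one pixel: 0.0 at or below the
    threshold, rising toward 1.0 as the value approaches full scale.
    The threshold and 8-bit range are illustrative assumptions."""
    if pixel <= threshold:
        return 0.0
    return (pixel - threshold) / (max_value - threshold)
```

A fully saturated pixel (255) maps to 1.0, so the blend rate determination unit could use the value directly as a weight.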
- the motion detection unit 104 detects a motion between images using two images input to the signal processing device 100.
- the motion detection unit 104 normalizes brightness by applying an exposure-ratio gain to the short accumulation frame in advance, and then takes the difference between the pixel values of the long accumulation frame and the short accumulation frame.
- the motion detection unit 104 determines that there is motion between the images if the difference value is greater than a predetermined threshold. The motion detection unit 104 may also calculate an average value over a certain range and take the difference as the motion detection process.
- the motion detection unit 104 may detect a change by paying attention to a frequency or a gradient in order to detect a motion more accurately as a process for detecting the motion.
- the motion detection unit 104 may detect a corresponding change in position between a plurality of frames, or may detect a motion by a method of detecting a movement vector and using the value.
- the motion detection unit 104 sends information on the area detected as motion to the blend rate determination unit 108.
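The difference-based detection described above can be sketched as follows; the function name, per-pixel granularity, and threshold are hypothetical, and a real implementation would operate on whole frames or local averages.

```python
def detect_motion(long_px, short_px, exposure_ratio, threshold):
    """Apply the exposure-ratio gain to the short-accumulation pixel to
    normalize brightness, then flag motion where the absolute difference
    from the long-accumulation pixel exceeds the threshold."""
    normalized_short = short_px * exposure_ratio
    return abs(long_px - normalized_short) > threshold
```

With a 4x exposure ratio, a static pixel pair such as (200, 50) normalizes to a zero difference, while a pair like (200, 20) yields a difference of 120 and is flagged as motion.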
- the blend rate determination unit 108 increases the blend rate of an image with a shorter exposure time for the purpose of reducing blur and misalignment in an area detected as motion.
- the black crushing detection unit 106 detects the degree of black crushing of an image having a shorter exposure time out of two images having different exposure times.
- the degree of black crushing in an image is information defined as zero for pixel values at or above a certain threshold and as a multi-valued quantity for pixel values below it.
- the black crushing detection unit 106 sends information on the detected degree of black crushing to the blend rate determination unit 108.
- Information on the degree of black crushing is used in the blend rate determination unit 108 to determine the blend rate of two images.
- the blend rate determination unit 108 determines the blend rate of the two images input to the signal processing apparatus 100. In determining the blend rate, it uses the degree of saturation detected by the saturation detection unit 102, the degree of black crushing detected by the black crush detection unit 106, and the information on the presence or absence of motion detected by the motion detection unit 104. For example, if the degree of saturation detected by the saturation detection unit 102 is equal to or greater than a predetermined threshold, the blend rate determination unit 108 increases the blend rate of the image with the shorter exposure time.
- conversely, if the degree of black crushing is equal to or greater than a predetermined threshold, the blend rate determination unit 108 increases the blend rate of the image with the longer exposure time. Furthermore, in a region detected as motion by the motion detection unit 104, the blend rate determination unit 108 increases the blend rate of the image with the shorter exposure time in order to reduce blur and misalignment.
- the blend rate determination unit 108 determines a final blend rate from these elements, and sends information on the determined blend rate to the synthesis unit 110.
- the synthesizing unit 110 performs a process of synthesizing two images input to the signal processing device 100 based on the blend rate determined by the blend rate determining unit 108.
- the image synthesized by the synthesis unit 110 is further synthesized with another image that is an HDR image generation target.
- the signal processing apparatus 100 generates an HDR image by performing a series of processes on all HDR image generation targets.
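The per-pixel blend policy and synthesis described in the preceding bullets might be sketched as below. The specific weighting formula is an assumption for illustration, since the disclosure states only the direction in which each factor moves the blend rate.

```python
def blend_rate(saturation, black_crush, is_motion):
    """Hypothetical blend rate for the shorter-exposure image: raised by
    saturation of the long image or by detected motion, lowered by black
    crushing of the short image. Degree inputs are in [0, 1]."""
    rate = 0.5 + 0.5 * saturation - 0.5 * black_crush
    if is_motion:
        rate = max(rate, 0.9)  # favor the short image in motion regions
    return min(max(rate, 0.0), 1.0)

def composite(long_px, short_px, exposure_ratio, rate):
    """Blend the gain-normalized short-accumulation pixel with the
    long-accumulation pixel at the given rate."""
    return rate * (short_px * exposure_ratio) + (1.0 - rate) * long_px
```

A saturated region drives the rate toward the short image, a black-crushed region toward the long image, and a motion region forces a high short-image weight, matching the qualitative rules above.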
- the amount of calculation of the motion adaptation processing, which includes detecting the motion of the subject between images, is relatively large. If no motion adaptation processing is performed, the amount of calculation is small and the synthesis completes in a relatively short time; instead, blurring of moving objects becomes noticeable, and artifacts with multiple contours appear.
- an object is to flexibly control the relationship between the image quality of the HDR image and the processing time required for generating the HDR image.
- FIG. 4 is an explanatory diagram illustrating HDR image generation processing by the signal processing apparatus 100 according to the present embodiment.
- FIG. 4 shows an example in which an HDR image is generated from four images having different exposure times.
- the shutter time of each frame is 1/30, 1/120, 1/480, and 1/1920 seconds from the long accumulation side; the exposure ratio between adjacent frames is 4x, and a case with a 64x overall exposure ratio is shown. The exposure ratio between adjacent frames is desirably about 16x at maximum.
- the shutter time of each frame can change according to the subject and the environment (brightness, etc.) at the time of imaging.
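The exposure-ratio arithmetic for the example shutter times can be checked directly (exact fractions avoid floating-point error):

```python
from fractions import Fraction

# Shutter times from the long accumulation side: 1/30, 1/120, 1/480, 1/1920 s
shutters = [Fraction(1, 30), Fraction(1, 120), Fraction(1, 480), Fraction(1, 1920)]
adjacent_ratios = [a / b for a, b in zip(shutters, shutters[1:])]
overall_ratio = shutters[0] / shutters[-1]
print(adjacent_ratios)  # each adjacent step is a 4x exposure ratio
print(overall_ratio)    # 64x overall exposure ratio
```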
- the motion adaptation process is executed a total of three times.
- the signal processing apparatus 100 performs the motion adaptation process only once on the long accumulation side.
- by performing the motion adaptation process only once, on the long accumulation side, it is possible to reduce the amount of calculation required for the motion adaptation process while avoiding adverse effects such as fatal blur of the moving object or a significant S/N reduction.
- in the example of FIG. 4, the motion adaptation process is performed only once, but the number of motion adaptation processes is not limited to this example.
- a feature of the signal processing apparatus 100 according to the present embodiment is that the number of motion adaptive processes is fixed to a number smaller than the number of times originally necessary.
- the amount of blur included in each frame is determined by many factors. If the moving speed of the moving object is high, the blur amount naturally increases. If the distance between the camera and the moving subject is long or the angle of view of the lens is wide, the moving speed is relatively slow and the blur amount is small. Further, the allowable limit for the blur amount is subjective, and varies depending on the evaluator (for example, the photographer) of the image.
- the actual shutter time of each frame depends on various factors, such as whether the scene to be photographed is bright or dark, the exposure ratio set between frames, how wide the overall exposure ratio (that is, the dynamic range) is, and the method and policy of AE (auto exposure).
- the S/N of the moving object decreases as the shutter time becomes shorter, so there is a trade-off between reducing blur and securing the S/N of the moving object.
- the balance with the S / N securing of the moving object is also subjective and again varies depending on the evaluator. Considering the above factors, the appropriate number of motion adaptation processes is not easily determined, and it can be said that it is better to control flexibly according to the situation.
- the signal processing apparatus 100 determines whether or not the shutter time of the image that is the basis for generating the HDR image is equal to or greater than a predetermined threshold value. If the shutter time is equal to or greater than a predetermined threshold value, the signal processing apparatus 100 performs motion adaptation processing to perform image synthesis processing because blur of the moving object may be conspicuous in the image.
- the composition process here refers to either the determination of the composition ratio of two images by the blend rate determination unit 108 or the composition of the images based on that ratio by the synthesis unit 110.
- FIG. 5 is a flowchart illustrating an operation example of the signal processing device 100 according to the embodiment of the present disclosure.
- FIG. 5 shows an operation example of the signal processing apparatus 100 when performing HDR image generation processing.
- N is the number of combined frames
- TN is the shutter time of the Nth frame
- TH Shut is a shutter time threshold defined in advance in consideration of blur and S/N.
- smaller N is the long accumulation side, and larger N is the short accumulation side.
- the signal processing apparatus 100 first initializes the value of N to 1 (step S101), and then determines whether TN is equal to or greater than TH Shut (step S102).
- the determination in step S102 is executed by, for example, the motion detection unit 104.
- in step S102, if TN is equal to or greater than TH Shut (step S102, Yes), the signal processing apparatus 100 performs motion adaptation processing using the two input images (the Nth and N+1th frames) (step S103).
- the process of step S103 is performed by, for example, the motion detection unit 104.
- step S102 if TN is not equal to or greater than TH Shut (step S102, No), the signal processing apparatus 100 skips the motion adaptation process in step S103.
- the signal processing apparatus 100 performs a process of combining the two input images (step S104).
- the compositing unit 110 executes the image compositing process.
- the signal processing apparatus 100 uses information on the degree of saturation, information on the degree of black crushing, information on movement between the two images, and the like when combining two images.
- the signal processing apparatus 100 determines whether the value of N is equal to a value that is 1 less than the number of frames to be synthesized (step S105). If so (step S105, Yes), the signal processing apparatus 100 ends the series of processes, as processing has been completed for all the frames to be synthesized. Otherwise (step S105, No), the signal processing apparatus 100 increments the value of N by 1 (step S106), and the process returns to step S102.
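The flow of FIG. 5 reduces to a short loop. Here `motion_adapt` and `blend` are placeholders for the actual motion adaptation and synthesis processing, so this is a structural sketch rather than the disclosed implementation.

```python
def hdr_compose(frames, shutter_times, th_shut, motion_adapt, blend):
    """For each adjacent frame pair (N, N+1), run motion adaptation only
    when the longer frame's shutter time is at or above TH Shut (steps
    S102/S103), then composite the pair (step S104) and carry the result
    forward to the next, shorter-exposure frame."""
    result = frames[0]
    for n in range(len(frames) - 1):
        nxt = frames[n + 1]
        if shutter_times[n] >= th_shut:              # step S102
            result, nxt = motion_adapt(result, nxt)  # step S103
        result = blend(result, nxt)                  # step S104
    return result
```

With the shutter times of the four-frame example (1/30, 1/120, 1/480 s on the longer side of each pair) and TH Shut = 1/60 s, only the first pair triggers motion adaptation.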
- FIG. 6 is an explanatory diagram illustrating a specific example of the operation of the signal processing device 100 according to the embodiment of the present disclosure. In FIG. 6, it is assumed that TH Shut is 1/60 second.
- the signal processing device 100 executes a motion adaptation process between the long storage frame and the intermediate storage frame.
- since the shutter time of the middle accumulation frame (1/120 seconds) is shorter than the threshold TH Shut, the blur risk of the middle accumulation frame is small. Furthermore, on the assumption that the S/N of the middle accumulation frame would otherwise be insufficient, the signal processing device 100 does not execute the motion adaptation process between the middle accumulation frame and the short accumulation frame.
- likewise, the signal processing apparatus 100 does not execute the motion adaptation process between the short accumulation frame and the ultrashort accumulation frame, whose shutter times are even shorter.
- the signal processing apparatus 100 according to the present embodiment can reduce the number of motion adaptation processes while considering blur and S / N.
- whereas the motion adaptation process would be executed three times if the threshold TH Shut were not taken into consideration, it is reduced to once.
- the signal processing apparatus 100 according to the present embodiment defines the shutter time threshold TH Shut as an element reflecting the image quality of the combined HDR image, such as blur and S/N, and adjusts the number of executions of the motion adaptation process from its relationship with the finally set shutter time.
- the signal processing apparatus 100 may execute the HDR synthesis process after obtaining the number of executions of the motion adaptation process in advance.
- Formulas (1) and (2) show the calculation formula for the number of executions.
- the processing time of the entire HDR process is T SUM.
- T SUM is obtained from the time T HDR required for one HDR synthesis, the time T MOVE required for one motion adaptation process, and the number of synthesized frames N.
- from this, the number M of motion adaptation processes is obtained when it is desired to keep the entire processing time T SUM below a certain value.
- the signal processing apparatus 100 according to the present embodiment can improve the image quality of the moving object within a range that does not exceed a certain processing time by performing the motion adaptation process for the number M obtained by Expression (2).
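The exact forms of Expressions (1) and (2) are not reproduced in this text, so the following is one plausible reading, stated as an assumption: if each synthesis costs T HDR and each motion adaptation pass costs T MOVE, the total is T SUM = n * T HDR + M * T MOVE, and solving for M under a time budget gives:

```python
import math

def max_motion_passes(n_syntheses, t_hdr, t_move, t_budget):
    """Assumed reading of Expressions (1)-(2): with n syntheses at T_HDR
    each and M motion adaptation passes at T_MOVE each, keep
    T_SUM = n*T_HDR + M*T_MOVE within the budget. M is clamped to
    [0, n], since each pass accompanies at most one synthesis."""
    m = math.floor((t_budget - n_syntheses * t_hdr) / t_move)
    return max(0, min(m, n_syntheses))
```

For example, with 3 syntheses at 10 ms each, 20 ms per motion adaptation pass, and a 70 ms budget, at most M = 2 passes fit.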
- the signal processing apparatus 100 can also combine the method of deciding whether to perform the motion adaptation process based on the shutter time threshold with the method of obtaining the number of executions in advance. In this case, the signal processing apparatus 100 may use the smaller of the upper limit on the number of motion adaptation processes considering blur and S/N and the number M obtained as described above.
- the signal processing apparatus 100 may determine whether or not the motion adaptation process can be performed based on information on the amount of motion between images.
- FIG. 7 is an explanatory diagram illustrating a specific example of the operation of the signal processing device 100 according to the embodiment of the present disclosure.
- the signal processing apparatus 100 first performs motion detection on the long accumulation frame and the middle accumulation frame, and detects the motion magnitude Motion12 between those images. If the detected motion is large, the moving speed of the moving object is fast relative to the shutter time, meaning a high risk of blur and double images. It may be assumed that the motion magnitude is basically proportional to the shutter time. The signal processing apparatus 100 can therefore also calculate the motion amounts Motion23 and Motion34 on the short accumulation side from Motion12.
- the signal processing device 100 performs motion adaptation processing only between frames whose motion amount is equal to or greater than TH Motion.
- the motion adaptation process is executed between the long accumulation frame and the intermediate accumulation frame, and the motion adaptation process is not performed between the other frames.
- when the signal processing apparatus 100 decides whether to perform the motion adaptation process based on the magnitude of the motion amount, it does not perform the process if the motion amount between frames is zero or sufficiently small; image quality does not degrade, and motion adaptation processing for subsequent frames can be turned off automatically. That is, the signal processing apparatus 100 according to the embodiment of the present disclosure can shorten the processing time according to the amount of motion in the shooting scene.
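Scaling the measured Motion12 to the shorter pairs, under the stated proportionality between motion magnitude and shutter time, might look like this; the scaling rule and names are assumptions for illustration.

```python
def pair_motion_gates(motion12, pair_shutters, th_motion):
    """Estimate the motion amount of each adjacent frame pair by scaling
    Motion12 (measured on the first pair) in proportion to the pair's
    longer shutter time, then enable motion adaptation only where the
    estimate reaches TH_Motion."""
    base = pair_shutters[0]
    return [motion12 * (s / base) >= th_motion for s in pair_shutters]
```

With pair shutter times of 1/30, 1/120, and 1/480 s, Motion12 = 16, and TH Motion = 8, only the first pair is gated on, matching the example in which motion adaptation runs only between the long and middle accumulation frames.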
- the signal processing apparatus 100 can also combine the method of deciding whether to perform the motion adaptation process based on the shutter time threshold with the method of deciding based on information on the amount of motion between images. In this case, the signal processing apparatus 100 may use the smaller of the upper limit on the number of motion adaptation processes considering blur and S/N and the upper limit considering information on the amount of motion between images.
- the signal processing apparatus 100 can use both a method for determining whether or not the motion adaptive process can be executed based on information on the amount of motion between images, and a method for obtaining the number of executions of the motion adaptive process in advance.
- in this case, the signal processing apparatus 100 may use the smaller of the upper limit on the number of motion adaptation processes considering information on the amount of motion between images and the number M obtained as described above.
- the signal processing device 100 can also combine all three: deciding whether to execute the motion adaptation process based on the shutter time threshold, deciding based on information on the amount of motion between images, and obtaining the number of executions in advance.
- in this case, the signal processing apparatus 100 may use the smallest of the upper limit on the number of motion adaptation processes considering blur and S/N, the upper limit considering information on the amount of motion between images, and the number M obtained as described above.
- the signal processing apparatus 100 may also decide whether to execute the motion adaptation process based on the motion amount in a predetermined partial range of the image, for example the central part, instead of the entire image.
- the technology according to the present disclosure can be applied to various products.
- the technology according to the present disclosure may be realized as a device mounted on any type of mobile body, such as an automobile, electric vehicle, hybrid electric vehicle, motorcycle, bicycle, personal mobility device, airplane, drone, ship, or robot.
- FIG. 8 is a block diagram illustrating a schematic configuration example of a vehicle control system that is an example of a mobile control system to which the technology according to the present disclosure can be applied.
- the vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001.
- the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and an integrated control unit 12050.
- as a functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio/image output unit 12052, and an in-vehicle network I/F (interface) 12053 are illustrated.
- the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
- the drive system control unit 12010 functions as a control device for a driving force generation device for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism that adjusts the steering angle of the vehicle, a braking device that generates the braking force of the vehicle, and the like.
- the body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
- the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as a headlamp, a back lamp, a brake lamp, a blinker, or a fog lamp.
- the body system control unit 12020 can receive radio waves transmitted from a portable device that substitutes for a key, or signals from various switches.
- the body system control unit 12020 receives input of these radio waves or signals, and controls a door lock device, a power window device, a lamp, and the like of the vehicle.
- the vehicle exterior information detection unit 12030 detects information outside the vehicle on which the vehicle control system 12000 is mounted.
- the imaging unit 12031 is connected to the vehicle exterior information detection unit 12030.
- the vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image outside the vehicle and receives the captured image.
- the vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for persons, cars, obstacles, signs, characters on a road surface, and the like based on the received image.
- the imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal corresponding to the amount of received light.
- the imaging unit 12031 can output an electrical signal as an image, or can output it as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared rays.
- the vehicle interior information detection unit 12040 detects vehicle interior information.
- a driver state detection unit 12041 that detects a driver's state is connected to the in-vehicle information detection unit 12040.
- the driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the vehicle interior information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing off.
- the microcomputer 12051 can calculate a control target value of the driving force generation device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and can output a control command to the drive system control unit 12010.
- the microcomputer 12051 can perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions including vehicle collision avoidance or impact mitigation, follow-up traveling based on the inter-vehicle distance, traveling while maintaining vehicle speed, vehicle collision warning, vehicle lane departure warning, and the like.
- the microcomputer 12051 can perform cooperative control for the purpose of automated driving, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
- the microcomputer 12051 can output a control command to the body system control unit 12020 based on information outside the vehicle acquired by the vehicle outside information detection unit 12030.
- the microcomputer 12051 can perform cooperative control for the purpose of anti-glare, such as switching from high beam to low beam, by controlling the headlamp according to the position of the preceding vehicle or the oncoming vehicle detected by the vehicle exterior information detection unit 12030.
- the audio image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying information to an occupant of the vehicle or to the outside of the vehicle.
- an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices.
- the display unit 12062 may include at least one of an on-board display and a head-up display, for example.
- FIG. 9 is a diagram illustrating an example of an installation position of the imaging unit 12031.
- the vehicle 12100 includes imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
- the imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as a front nose, a side mirror, a rear bumper, a back door, and an upper part of a windshield in the vehicle interior of the vehicle 12100.
- the imaging unit 12101 provided in the front nose and the imaging unit 12105 provided in the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 12100.
- the imaging units 12102 and 12103 provided in the side mirror mainly acquire an image of the side of the vehicle 12100.
- the imaging unit 12104 provided in the rear bumper or the back door mainly acquires an image behind the vehicle 12100.
- the forward images acquired by the imaging units 12101 and 12105 are mainly used for detecting a preceding vehicle or a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
- FIG. 9 shows an example of the shooting range of the imaging units 12101 to 12104.
- the imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door.
- for example, by superimposing the image data captured by the imaging units 12101 to 12104, an overhead image of the vehicle 12100 viewed from above is obtained.
- At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
- at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
- based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change in this distance (relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, in particular the nearest three-dimensional object that is on the traveling path of the vehicle 12100 and travels at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100.
- the microcomputer 12051 can set in advance an inter-vehicle distance to be secured from the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like.
- in this way, cooperative control for the purpose of automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, can be performed.
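The follow-up control described above can be sketched as a toy decision rule. All thresholds, the function name, and the three-way command output are illustrative assumptions, not values from the patent.

```python
def follow_control(distance_m, relative_speed_mps, target_gap_m=30.0):
    """Toy follow-up control: maintain a preset inter-vehicle distance
    to the preceding vehicle, given the distance and its temporal change
    (relative speed). Positive relative speed means the gap is opening."""
    gap_error = distance_m - target_gap_m
    if gap_error < 0 or relative_speed_mps < -1.0:   # too close, or closing fast
        return "brake"                               # automatic brake control
    if gap_error > 5.0 and relative_speed_mps >= 0:  # falling behind the target gap
        return "accelerate"                          # automatic acceleration control
    return "hold"
```

A production system would of course output continuous actuator targets rather than discrete commands; the sketch only shows how distance and relative speed together drive the decision.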
- based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data related to three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, extract them, and use them for automatic avoidance of obstacles.
- the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult for the driver to see.
- the microcomputer 12051 determines the collision risk indicating the degree of risk of collision with each obstacle, and in a situation where the collision risk is equal to or higher than a set value and there is a possibility of collision, it can perform driving assistance for collision avoidance by outputting an alarm to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
- At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
- the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the captured images of the imaging units 12101 to 12104.
- such pedestrian recognition is performed, for example, by a procedure for extracting feature points in the captured images of the imaging units 12101 to 12104 as infrared cameras, and a procedure for performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether or not it is a pedestrian.
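The two-step procedure above (feature point extraction, then pattern matching on the outline) can be sketched as follows. Both the edge-pixel extraction and the nearest-neighbour matching metric are simplified stand-ins chosen for illustration; the patent does not specify these details.

```python
import numpy as np

def outline_points(img, thresh=128):
    """Extract crude outline feature points: pixels at or above the
    threshold that have at least one 4-neighbour below it."""
    mask = img >= thresh
    interior = (np.roll(mask, 1, 0) & np.roll(mask, -1, 0)
                & np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
    return np.argwhere(mask & ~interior)   # (row, col) coordinates

def matches_pedestrian_template(points, template, tol=10.0):
    """Toy pattern match: compare centroid-normalized point sets by
    mean nearest-neighbour distance (illustrative metric only)."""
    if len(points) == 0 or len(template) == 0:
        return False
    p = points - points.mean(axis=0)
    t = template - template.mean(axis=0)
    dists = np.sqrt(((p[:, None, :] - t[None, :, :]) ** 2).sum(-1))
    return dists.min(axis=1).mean() <= tol
```

A real system would use far more robust descriptors and a trained classifier, but the structure — outline extraction followed by a match-or-not decision — is the same.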
- when the microcomputer 12051 determines that a pedestrian is present in the captured images of the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian.
- the audio image output unit 12052 may also control the display unit 12062 so that an icon or the like indicating a pedestrian is displayed at a desired position.
- the technology according to the present disclosure can be applied to the microcomputer 12051 among the configurations described above.
- the signal processing apparatus 100 can be applied to a microcomputer 12051.
- as described above, by setting the number of motion detections smaller than the number of synthesis executions, harmful effects such as fatal blur and significant S/N degradation of moving subjects in the HDR image can be avoided.
- a signal processing device 100 that can reduce the amount of calculation when generating the HDR image is thereby provided.
- a signal processing device 100 in which the image quality of the HDR image is automatically adjusted is also provided.
- each step in the processing executed by each device in this specification does not necessarily have to be processed in time series in the order described in the sequence diagrams or flowcharts.
- each step in the processing executed by each device may be processed in an order different from that described in the flowcharts, or may be processed in parallel.
- a synthesis processing unit that performs synthesis processing at least N−1 times on images of N frames (N is an integer of 3 or more) having different exposure times;
- a motion adaptation processing unit that performs motion adaptation processing in one synthesis using two of the group of images in the synthesis processing unit;
- the signal processing device, wherein the number of motion adaptation processes in the motion adaptation processing unit when the synthesis processing unit performs the N−1 synthesis processes is N−2 or less.
- the motion adaptation processing unit performs the motion adaptation processing only on an image whose exposure time, among the images that are targets of the synthesis processing in the synthesis processing unit, is equal to or greater than a predetermined threshold value.
- a computer program causing a computer to execute: performing synthesis processing at least N−1 times on images of N frames (N is an integer of 3 or more) having different exposure times; and performing motion adaptation processing in one synthesis using two of the group of images, wherein the number of motion adaptation processes when performing the N−1 synthesis processes is N−2 or less.
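The claimed structure — N frames combined in N−1 pairwise synthesis steps, with motion adaptation run at most N−2 times and only when the exposure-time threshold is met — can be sketched end to end. The exposure-normalized averaging used as the blend is a placeholder, not the patent's synthesis method.

```python
import numpy as np

def synthesize_hdr(frames, exposures, exposure_threshold, max_motion_passes=None):
    """Sketch of the claimed pipeline. `frames` are ordered from longest
    to shortest exposure; `exposures` are the matching exposure times."""
    n = len(frames)
    assert n >= 3                                       # claim: N is 3 or more
    if max_motion_passes is None:
        max_motion_passes = n - 2                       # claim: at most N-2 passes
    result = frames[0].astype(float) / exposures[0]     # normalize by exposure
    result_exposure = exposures[0]
    motion_passes = 0
    for i in range(1, n):                               # N-1 synthesis steps
        nxt = frames[i].astype(float) / exposures[i]
        if (motion_passes < max_motion_passes
                and min(result_exposure, exposures[i]) >= exposure_threshold):
            # placeholder for motion adaptation between the two inputs;
            # skipped for short exposures, which contribute little blur
            motion_passes += 1
        result = (result + nxt) / 2.0                   # placeholder blend
        result_exposure = min(result_exposure, exposures[i])
    return result, motion_passes
```

With four frames and a threshold excluding the shortest exposure, the sketch performs three syntheses but only two motion adaptation passes, matching the N−2 bound.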
- Image sensor / 11: Long accumulation pixel / 12: Medium accumulation pixel / 13: Short accumulation pixel / 14: Ultra-short accumulation pixel / 100: Signal processing device
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
- Exposure Control For Cameras (AREA)
- Details Of Cameras Including Film Mechanisms (AREA)
- Image Processing (AREA)
Abstract
The problem addressed by the present invention is to provide a signal processing device capable of reducing processing time without degrading dynamic range performance, or of flexibly setting a trade-off between image quality and processing time when obtaining an HDR image. The solution according to the invention is a signal processing device comprising: a synthesis processing unit that synthesizes, at least (N−1) times, an image of N frames having mutually different exposure times; and a motion adaptation processing unit that performs motion adaptation processing in a one-time synthesis using two images of the group of images in the synthesis processing unit, the number of motion adaptation processes in the motion adaptation processing unit being equal to or less than (N−2) when the synthesis processing unit performs the synthesis (N−1) times.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2017105680A JP2018201158A (ja) | 2017-05-29 | 2017-05-29 | 信号処理装置、信号処理方法及びコンピュータプログラム |
| JP2017-105680 | 2017-05-29 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018220993A1 true WO2018220993A1 (fr) | 2018-12-06 |
Family
ID=64454728
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2018/014210 Ceased WO2018220993A1 (fr) | 2017-05-29 | 2018-04-03 | Dispositif de traitement de signal, procédé de traitement de signal et programme informatique |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JP2018201158A (fr) |
| WO (1) | WO2018220993A1 (fr) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110636227A (zh) * | 2019-09-24 | 2019-12-31 | 合肥富煌君达高科信息技术有限公司 | 高动态范围hdr图像合成方法及集成该方法的高速相机 |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2021114696A (ja) * | 2020-01-17 | 2021-08-05 | キヤノン株式会社 | 画像処理装置、画像処理方法、及びプログラム |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2000050151A (ja) * | 1998-07-28 | 2000-02-18 | Olympus Optical Co Ltd | 撮像装置 |
| JP2001333420A (ja) * | 2000-05-22 | 2001-11-30 | Hitachi Ltd | 画像監視方法および装置 |
| JP2009071408A (ja) * | 2007-09-11 | 2009-04-02 | Mitsubishi Electric Corp | 画像処理装置及び画像処理方法 |
| JP2010239610A (ja) * | 2009-03-13 | 2010-10-21 | Omron Corp | 画像処理装置および画像処理方法 |
| JP2015142342A (ja) * | 2014-01-30 | 2015-08-03 | オリンパス株式会社 | 撮像装置、画像生成方法及び画像生成プログラム |
- 2017
  - 2017-05-29 JP JP2017105680A patent/JP2018201158A/ja active Pending
- 2018
  - 2018-04-03 WO PCT/JP2018/014210 patent/WO2018220993A1/fr not_active Ceased
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2000050151A (ja) * | 1998-07-28 | 2000-02-18 | Olympus Optical Co Ltd | 撮像装置 |
| JP2001333420A (ja) * | 2000-05-22 | 2001-11-30 | Hitachi Ltd | 画像監視方法および装置 |
| JP2009071408A (ja) * | 2007-09-11 | 2009-04-02 | Mitsubishi Electric Corp | 画像処理装置及び画像処理方法 |
| JP2010239610A (ja) * | 2009-03-13 | 2010-10-21 | Omron Corp | 画像処理装置および画像処理方法 |
| JP2015142342A (ja) * | 2014-01-30 | 2015-08-03 | オリンパス株式会社 | 撮像装置、画像生成方法及び画像生成プログラム |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110636227A (zh) * | 2019-09-24 | 2019-12-31 | 合肥富煌君达高科信息技术有限公司 | 高动态范围hdr图像合成方法及集成该方法的高速相机 |
| CN110636227B (zh) * | 2019-09-24 | 2021-09-10 | 合肥富煌君达高科信息技术有限公司 | 高动态范围hdr图像合成方法及集成该方法的高速相机 |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2018201158A (ja) | 2018-12-20 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10432847B2 (en) | Signal processing apparatus and imaging apparatus | |
| US12134316B2 (en) | State detection device, state detection system, and state detection method | |
| US11082626B2 (en) | Image processing device, imaging device, and image processing method | |
| US11553117B2 (en) | Image pickup control apparatus, image pickup apparatus, control method for image pickup control apparatus, and non-transitory computer readable medium | |
| WO2020230660A1 (fr) | Dispositif de reconnaissance d'image, dispositif d'imagerie à semi-conducteurs et procédé de reconnaissance d'image | |
| WO2017175492A1 (fr) | Dispositif de traitement d'image, procédé de traitement d'image, programme informatique et dispositif électronique | |
| US11025828B2 (en) | Imaging control apparatus, imaging control method, and electronic device | |
| WO2018008426A1 (fr) | Dispositif et procédé de traitement de signaux, et dispositif d'imagerie | |
| US12455382B2 (en) | Distance measuring sensor, signal processing method, and distance measuring module | |
| WO2017195459A1 (fr) | Dispositif d'imagerie et procédé d'imagerie | |
| WO2017169233A1 (fr) | Dispositif de traitement d'image, procédé de traitement d'image, programme informatique et dispositif électronique | |
| WO2018220993A1 (fr) | Dispositif de traitement de signal, procédé de traitement de signal et programme informatique | |
| WO2021065494A1 (fr) | Capteur de mesure de distances, procédé de traitement de signaux et module de mesure de distances | |
| WO2021065495A1 (fr) | Capteur de télémétrie, procédé de traitement de signal, et module de télémétrie | |
| US20200402206A1 (en) | Image processing device, image processing method, and program | |
| US10999488B2 (en) | Control device, imaging device, and control method | |
| WO2017149964A1 (fr) | Dispositif de traitement d'image, procédé de traitement d'image, programme informatique et dispositif électronique | |
| WO2022249562A1 (fr) | Dispositif de traitement de signal, procédé et programme | |
| US20210217146A1 (en) | Image processing apparatus and image processing method | |
| EP4518335A1 (fr) | Dispositif d'imagerie et procédé de traitement de signal | |
| WO2021065500A1 (fr) | Capteur de mesure de distance, procédé de traitement de signal, et module de mesure de distance | |
| EP3905656B1 (fr) | Dispositif de traitement d'image | |
| WO2018142969A1 (fr) | Dispositif et procédé d'affichage, et programme | |
| WO2022219874A1 (fr) | Dispositif et procédé de traitement de signaux, et programme | |
| US20230007146A1 (en) | Image processing device, image processing method, and program |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18810811; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18810811; Country of ref document: EP; Kind code of ref document: A1 |