
WO2025115710A1 - Method used for image correction, and processing circuit for executing said method

Info

Publication number: WO2025115710A1
Application number: PCT/JP2024/041056
Authority: WO (WIPO, PCT)
Prior art keywords: image, correction, subject, hyperspectral, camera
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: French (fr), Japanese (ja)
Inventors: 基樹 八子, 智 佐藤
Current Assignee: Panasonic Intellectual Property Management Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Panasonic Intellectual Property Management Co Ltd
Application filed by Panasonic Intellectual Property Management Co Ltd
Publication of WO2025115710A1 (en)

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N 21/25 Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N 21/27 Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands using photo-electric detection; circuits for computing concentration
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/58 Extraction of image or video features relating to hyperspectral data

Definitions

  • the present disclosure relates to a method used to correct an image, and a processing circuit that executes the method.
  • Narrow wavelength bands are simply called “bands.” An RGB image has values for three bands: red (R), green (G), and blue (B). A camera that captures images in many such wavelength bands is called a “hyperspectral camera.” Hyperspectral cameras are used in a variety of fields, such as food inspection, biomedical testing, pharmaceutical development, and mineral composition analysis.
  • Patent Document 1 discloses an image analysis device for analyzing the distribution of substances in biological tissue.
  • the image analysis device obtains multiple sample images by illuminating biological tissue with light in multiple wavelength bands selected from a predetermined wavelength range and photographing the tissue. Sample data based on the multiple sample images is compared with teacher data on the substances to generate distribution data of the substances in the tissue.
  • Patent Document 1 discloses normalizing and correcting the intensity of light reflected by a sample based on the intensity of light reflected by a reference member such as a whiteboard.
  • Patent Document 2 discloses an example of a hyperspectral imaging device that uses compressed sensing technology.
  • Compressed sensing is a technique that reconstructs more data than was actually observed by assuming that the data of the observed object is sparse in a certain space (e.g., a frequency space).
  • the imaging device disclosed in Patent Document 2 is equipped with a coding mask, which is an array of multiple optical filters with different spectral transmittances, on the optical path connecting the subject and the image sensor.
  • the imaging device can generate images of multiple wavelength bands in one shot by performing restoration calculations based on a compressed image acquired by imaging using the coding mask.
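To make the idea of compressed sensing concrete, the following is a minimal sketch, not the algorithm of Patent Document 2: the observation is modeled as y = Φx with fewer measurements than unknowns, and a sparse x is recovered by iterative soft-thresholding (ISTA). All names, sizes, and parameters here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Phi, y, lam=0.05, n_iter=500):
    # Recover a sparse x from y = Phi @ x by iterative soft-thresholding (ISTA).
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * (Phi.T @ (y - Phi @ x)), lam * step)
    return x

# Toy demo: 64 measurements of a 256-dimensional signal with 5 nonzero entries.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256)) / 8.0
x_true = np.zeros(256)
x_true[rng.choice(256, 5, replace=False)] = 1.0
x_hat = ista(Phi, Phi @ x_true)  # approximately recovers x_true
```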
  • In Patent Document 1, if the subject used for correction, such as the reference member, is not appropriate, the resulting correction is inappropriate and it is not easy to accurately obtain spectral information of the subject to be analyzed. There is therefore a demand for acquiring spectral information of the analysis subject more accurately by preventing inappropriate correction.
  • A method used for correcting an image includes: acquiring, as a first image, an image generated by photographing a first subject; acquiring, as a second image, an image generated by photographing a second subject, the first image being a correction image for correcting the second image; and determining, based on pixel values of the first image, whether the first image satisfies a suitability condition for the correction image.
  • a comprehensive or specific aspect of the present disclosure may be realized in a system, an apparatus, a method, an integrated circuit, a computer program, or a recording medium such as a computer-readable recording disk, or in any combination of a system, an apparatus, a method, an integrated circuit, a computer program, and a recording medium.
  • a computer-readable recording medium may include a non-volatile recording medium such as a CD-ROM (Compact Disc-Read Only Memory).
  • An apparatus may be composed of one or more devices. When an apparatus is composed of two or more devices, the two or more devices may be located within a single device, or may be located separately within two or more separate devices. In this specification and claims, "apparatus" may mean not only one device, but also a system consisting of multiple devices.
  • FIG. 1A is a diagram for explaining the relationship between a target wavelength range W and a plurality of wavelength bands W 1 , W 2 , . . . , Wi included therein.
  • FIG. 1B is a diagram illustrating a schematic example of a hyperspectral image.
  • FIG. 2A is a block diagram illustrating a schematic configuration of an imaging system according to an exemplary embodiment of the present disclosure that captures an image of a correction subject.
  • FIG. 2B is a block diagram illustrating a schematic configuration of an imaging system according to an exemplary embodiment of the present disclosure for imaging another subject for correction.
  • FIG. 2C is a block diagram illustrating a schematic configuration of an imaging system according to an exemplary embodiment of the present disclosure that captures an image of a subject to be analyzed.
  • FIG. 3 is a diagram illustrating a specific configuration of an imaging system according to an exemplary embodiment of the present disclosure.
  • FIG. 4A is a diagram illustrating an example of white board correction.
  • FIG. 4B is a diagram illustrating an example of black subtraction.
  • FIG. 5 is a flowchart illustrating an outline of a first example of a processing operation executed by a processing circuit in the imaging system according to the present embodiment.
  • FIG. 6A is a diagram illustrating a first hyperspectral image and a reflectance spectrum when a correction subject is appropriate.
  • FIG. 6B is a diagram illustrating a hyperspectral image and a reflectance spectrum when the correction subject is not appropriate.
  • FIG. 7 is a diagram showing a first hyperspectral image and an edge image obtained by performing edge detection on the first hyperspectral image when the correction subject is appropriate and when it is inappropriate.
  • FIG. 8A is a diagram showing a first hyperspectral image and a histogram of pixel values of the first hyperspectral image when another object for correction is appropriate.
  • FIG. 8B is a diagram showing the first hyperspectral image and a histogram of pixel values of the first hyperspectral image when another subject for correction is not appropriate.
  • FIG. 9 is a flowchart illustrating a second example of the processing operation executed by the processing circuit in the imaging system according to this embodiment.
  • FIG. 10A is a diagram showing a first compressed image and a histogram of its pixel values when the subject for correction is appropriate.
  • FIG. 10B is a diagram showing the first compressed image and a histogram of its pixel values when the correction subject is not appropriate.
  • FIG. 11 is a flowchart illustrating a third example of the processing operation executed by the processing circuit in the imaging system according to the present embodiment.
  • FIG. 12A is a diagram illustrating an example of a UI of the display device.
  • FIG. 12B is a diagram illustrating another example of the UI of the display device.
  • FIG. 12C is a diagram illustrating still another example of a UI of the display device.
  • FIG. 13 is a flowchart summarizing the first to third examples of the processing operation executed by the processing circuit in the imaging system according to this embodiment.
  • FIG. 14 is a diagram showing an example of an image generated by capturing a scene including a correction subject and an analysis target subject with a camera.
  • FIG. 15 is a flowchart illustrating an outline of a fourth example of the processing operation executed by the processing circuit in the imaging system according to the present embodiment.
  • FIG. 16A is a diagram illustrating a schematic configuration example of a compressed sensing type hyperspectral camera.
  • FIG. 16B is a diagram illustrating another schematic configuration example of a compressed sensing type hyperspectral camera.
  • FIG. 16C is a schematic diagram of yet another configuration example of a compressed sensing type hyperspectral camera.
  • FIG. 16D is a schematic diagram of yet another configuration example of a compressed sensing type hyperspectral camera.
  • FIG. 17A is a schematic diagram illustrating an example of a filter array.
  • FIG. 17B is a diagram showing an example of the spatial distribution of the light transmittance of each of the wavelength bands W 1 , W 2 , . . . , W N included in the target wavelength range.
  • FIG. 17C is a diagram showing an example of the spectral transmittance of a certain region included in the filter array shown in FIG. 17A.
  • FIG. 17D is a diagram showing an example of the spectral transmittance of another region included in the filter array shown in FIG. 17A.
  • FIG. 18A is a diagram for explaining the characteristics of the spectral transmittance in a certain region of the filter array.
  • FIG. 18B is a diagram showing the results of averaging the spectral transmittance shown in FIG. 18A for each of the wavelength bands W 1 , W 2 , . . . , W N.
  • all or part of a circuit, unit, device, member or part, or all or part of a functional block in a block diagram may be implemented by one or more electronic circuits including, for example, a semiconductor device, a semiconductor integrated circuit (IC), or an LSI (large scale integration).
  • the LSI or IC may be integrated into a single chip, or may be configured by combining multiple chips.
  • functional blocks other than memory elements may be integrated into a single chip.
  • the name may vary depending on the degree of integration, and may be called a system LSI, VLSI (very large scale integration), or ULSI (ultra large scale integration).
  • a Field Programmable Gate Array (FPGA), which is programmed after the LSI is manufactured, or a reconfigurable logic device, which can reconfigure the junctions within the LSI or set up circuit partitions within the LSI, may also be used for the same purpose.
  • all or part of the functions or operations of a circuit, unit, device, member or part can be executed by software processing.
  • the software is recorded on one or more non-transitory recording media such as ROM, optical disk, hard disk drive, etc., and when the software is executed by a processor, the functions specified in the software are executed by the processor and peripheral devices.
  • the system or apparatus may include one or more non-transitory recording media on which the software is recorded, a processor, and any necessary hardware devices, such as interfaces.
  • a hyperspectral image is image data having more wavelength information than a general RGB image.
  • An RGB image has values for each of three bands, red (R), green (G), and blue (B), for each pixel.
  • a hyperspectral image has values for more bands for each pixel than the number of bands in an RGB image.
  • a "hyperspectral image” means image data including multiple images corresponding to each of four or more multiple bands included in a predetermined target wavelength range.
  • pixel value the value that each pixel has for each band is referred to as a "pixel value.”
  • the number of bands in a hyperspectral image is typically 10 or more, and in some cases may exceed 100.
  • a “hyperspectral image” is also called a “hyperspectral data cube” or a “hyperspectral cube.”
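In code, such a hyperspectral data cube is naturally stored as a three-dimensional array, one two-dimensional image per band. The sketch below uses Python with numpy; the shapes and names are illustrative assumptions, not taken from this publication.

```python
import numpy as np

# A hyperspectral cube with N = 40 bands of 256 x 256 pixels each.
n_bands, height, width = 40, 256, 256
cube = np.zeros((n_bands, height, width), dtype=np.float32)

# The "pixel value" of pixel (y, x) in band k:
k, y, x = 10, 128, 128
value = cube[k, y, x]

# The spectrum of a single pixel across all bands:
spectrum = cube[:, y, x]  # shape (n_bands,)
```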
  • FIG. 1A is a diagram for explaining the relationship between the target wavelength range W and a plurality of wavelength bands W 1 , W 2 , ..., W i included therein.
  • the target wavelength range W can be set to various ranges depending on the application.
  • the target wavelength range W can be, for example, a visible light wavelength range of about 400 nm to about 700 nm, a near-infrared wavelength range of about 700 nm to about 2500 nm, or a near-ultraviolet wavelength range of about 10 nm to about 400 nm.
  • the target wavelength range W may be a mid-infrared or far-infrared wavelength range.
  • the wavelength range used is not limited to the visible light range.
  • not only visible light, but also electromagnetic waves of wavelengths not included in the visible light wavelength range, such as ultraviolet and near-infrared, are referred to as "light" for convenience.
  • N is an arbitrary integer equal to or greater than 4, and the wavelength bands obtained by dividing the target wavelength range W into N equal parts are designated as wavelength bands W 1 , W 2 , ..., W N.
  • the number of wavelength bands included in the target wavelength range W may be set arbitrarily.
  • Each wavelength band may be a wavelength range having a predetermined width, such as 5 nm, 10 nm, 20 nm, or 50 nm.
  • the widths of the multiple wavelength bands may be the same or different. If the number of wavelength bands is four or more, more information can be obtained from the hyperspectral image than from the RGB image.
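Dividing the target wavelength range W into N equal bands can be written out directly; a minimal sketch, with the visible-light range and band count chosen purely for illustration:

```python
import numpy as np

w_min, w_max, n_bands = 400.0, 700.0, 30  # nm; N equal-width bands

edges = np.linspace(w_min, w_max, n_bands + 1)
bands = list(zip(edges[:-1], edges[1:]))  # [(400.0, 410.0), (410.0, 420.0), ...]
band_width = (w_max - w_min) / n_bands    # 10 nm per band in this example
```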
  • FIG. 1B is a diagram showing a schematic example of a hyperspectral image.
  • the subject is an apple.
  • the hyperspectral image 36 includes images 36W 1 , 36W 2 , ..., 36W N corresponding to wavelength bands W 1 , W 2 , ..., W N , respectively.
  • Each of these images includes a plurality of pixels arranged two-dimensionally.
  • FIG. 1B shows an example of vertical and horizontal dashed lines indicating pixel divisions.
  • the actual number of pixels per image can be a large value, for example, tens of thousands to tens of millions, but in FIG. 1B, for ease of understanding, the pixel divisions are shown as if the number of pixels is extremely small.
  • Reflected light generated when the subject is irradiated with light is detected by a plurality of photodetection elements included in the image sensor.
  • a signal indicating the amount of light detected by each photodetection element represents the pixel value of the pixel corresponding to that photodetection element.
  • Each pixel in the hyperspectral image 36 has a pixel value for each wavelength band. Therefore, by acquiring the hyperspectral image 36, spectral information of the subject can be obtained. Based on the spectral information of the object, it becomes possible to accurately analyze the light characteristics of the object.
  • the method may include, for example, the following operations (1) to (3).
  • (1) A hyperspectral image for correction is obtained by photographing the correction subject with a hyperspectral camera. The hyperspectral image for correction includes a plurality of correction images respectively corresponding to a plurality of wavelength bands.
  • (2) A hyperspectral image of the analysis target is obtained by photographing the analysis target object with the hyperspectral camera. The hyperspectral image of the analysis target includes a plurality of analysis-target images respectively corresponding to the plurality of wavelength bands.
  • (3) The hyperspectral image of the analysis target is corrected based on the hyperspectral image for correction. One example of this correction is, as in the correction disclosed in Patent Document 1, normalizing each of the plurality of analysis-target images by the correction image of the same wavelength band among the plurality of correction images.
  • When the subject for correction is a white board, this correction is called white board correction. White board correction is performed to reduce the effects of the spectral shape of the irradiated light, the illumination distribution during shooting, the peripheral light falloff of the lens, the uneven sensitivity of the image sensor, and the like.
  • “Normalizing image A by image B” means dividing the pixel value of each of the multiple pixels in image A by the pixel value of the corresponding pixel among the multiple pixels in image B, and multiplying the result by the maximum pixel value that a pixel can have. If the spatial distribution of the pixel values of image B is approximately constant, the pixel value of one representative pixel, or the average of the pixel values of two or more representative pixels, may be used instead of the pixel value of the corresponding pixel.
  • For example, the maximum pixel value that a pixel can have is 255 for 8-bit data and 4095 for 12-bit data.
  • Normalizing image A by image B may be interpreted as computing (pvmax × pvA11/pvB11), ..., (pvmax × pvAmn/pvBmn), where pvA11, ..., pvAmn are the pixel values of pixels pA11, ..., pAmn included in image A, and pvB11, ..., pvBmn are the pixel values of pixels pB11, ..., pBmn included in image B.
  • Images A and B each contain m × n pixels, the position of pixel pA11 in image A corresponding to that of pixel pB11 in image B, ..., and the position of pixel pAmn in image A corresponding to that of pixel pBmn in image B.
  • pvmax is the maximum value that each of the pixel values pvA11, ..., pvAmn and pvB11, ..., pvBmn can take.
  • Pixel β in image B can be said to correspond to pixel α in image A if the position of pixel β is the same as the position of pixel α.
  • Pixel β in image B can also be said to correspond to pixel α in image A if, in the image sensor that outputs the image signal, the position of the photodetection element that outputs the signal of pixel β is the same as the position of the photodetection element that outputs the signal of pixel α.
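A minimal sketch of this normalization for numpy arrays, assuming 12-bit data (pvmax = 4095); the small epsilon guarding against division by zero and the final clipping are our additions, not part of the definition above:

```python
import numpy as np

def normalize(image_a, image_b, pv_max=4095, eps=1e-6):
    # pvmax * A / B, computed pixel by pixel.
    out = pv_max * image_a.astype(np.float64) / (image_b.astype(np.float64) + eps)
    return np.clip(out, 0, pv_max)
```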
  • Another example of correcting the hyperspectral image to be analyzed based on the hyperspectral image for correction is subtracting, from each of the images to be analyzed, the correction image of the same wavelength band among the multiple correction images.
  • When the correction subject is a light-blocking lens cap attached to the hyperspectral camera, the pixel values of the correction image are nearly zero, and this type of correction is called “black subtraction.” Black subtraction is performed to reduce the effects of image sensor dark current, bright-spot pixels, fixed-pattern noise, and fluctuations in sensor performance.
  • “Subtracting image B from image A” means subtracting the pixel value of the corresponding pixel among the multiple pixels in image B from the pixel value of each of the multiple pixels in image A.
  • “Subtracting image B from image A” may be interpreted as computing (pvA11 − pvB11), ..., (pvAmn − pvBmn).
  • Images A and B each contain m × n pixels, and the position of pixel pA11 in image A corresponds to that of pixel pB11 in image B, ..., and the position of pixel pAmn in image A corresponds to that of pixel pBmn in image B.
  • In black subtraction, correction image B is subtracted from image A to be analyzed. If the spatial distribution of pixel values in image B is approximately constant, the pixel value of one representative pixel, or the average of the pixel values of two or more representative pixels, may be used instead of the pixel values of the corresponding pixels.
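A corresponding sketch of black subtraction for numpy arrays; clipping negative results to zero follows the behavior described later in this document for incorrectly subtracted pixels:

```python
import numpy as np

def subtract(image_a, image_b):
    # A - B, pixel by pixel, with negative results output as zero.
    diff = image_a.astype(np.int64) - image_b.astype(np.int64)
    return np.clip(diff, 0, None)
```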
  • In white board correction, if the white board used as the subject for correction is dirty, or if an object other than a white board is mistakenly photographed as the subject for correction, the accuracy of the white board correction decreases.
  • In black subtraction, if the light-blocking lens cap used as the subject for correction is not properly attached to the hyperspectral camera, the accuracy of the black subtraction decreases.
  • FIG. 2A to FIG. 2C are block diagrams that show a schematic configuration of the imaging system according to the exemplary embodiment of the present disclosure.
  • FIG. 2A illustrates a white board as the correction subject 10a1.
  • the white board may be, for example, a standard white board with a uniform spatial distribution of reflectance and a small wavelength dispersion of reflectance.
  • FIG. 2B illustrates a light-shielding lens cap that can realize imaging in a light-shielding state as the correction subject 10a2.
  • FIG. 2C illustrates an apple as the subject 10b to be analyzed.
  • the subject 10b is not limited to an apple and may be any object.
  • the correction subject 10a1 or the subject 10a2 is also referred to as a "first subject”
  • the analysis subject 10b is also referred to as a "second subject”.
  • the imaging system 100 shown in Figures 2A to 2C includes a light source 20, a camera 30, a display device 40, and a processing device 50.
  • the processing device 50 includes a processing circuit 52, a memory 54, and a storage device 56.
  • the thick lines with arrows shown in Figures 2A to 2C indicate the flow of signals.
  • the camera 30 may be, for example, a line scan type or snapshot type hyperspectral camera, which will be described later.
  • a hyperspectral image for correction is generated by photographing the subject 10a1 or subject 10a2 with the camera 30 as shown in FIG. 2A or FIG. 2B.
  • the hyperspectral image for correction is generated when the user receives an instruction to photograph the subject 10a1 or subject 10a2.
  • a hyperspectral image to be analyzed is generated by photographing the subject 10b with the camera 30 as shown in FIG. 2C.
  • the hyperspectral image to be analyzed is generated when the user receives an instruction to photograph the subject 10b.
  • the camera 30 may be a compressed sensing type hyperspectral camera, which will be described later.
  • a compressed image for correction is generated by photographing the subject 10a1 or the subject 10a2 with the camera 30, and a hyperspectral image for correction is generated based on the compressed image.
  • the compressed image for correction is generated when the user receives an instruction to photograph the subject 10a1 or the subject 10a2.
  • a compressed image to be analyzed is generated by photographing the subject 10b with the camera 30, and a hyperspectral image to be analyzed is generated based on the compressed image.
  • the compressed image to be analyzed is generated when the user receives an instruction to photograph the subject 10b.
  • the hyperspectral image for correction or the compressed image for correction is also referred to as the "first image”
  • the hyperspectral image to be analyzed or the compressed image to be analyzed is also referred to as the "second image.” Since a hyperspectral image includes multiple images corresponding to multiple wavelength bands, and information of multiple wavelength bands is compressed in a compressed image, it can be said that the first and second images include information of multiple wavelength bands.
  • the first image is acquired as a correction image for correcting the second image.
  • In one example, the correction of the second image based on the first image is white board correction.
  • In another example, the correction of the second image based on the first image is black subtraction.
  • the imaging system 100 determines whether the first image satisfies the suitability conditions for an image to be corrected based on the pixel values of the first image. Therefore, even if the user believes that he or she has photographed the appropriate subject 10a1 or subject 10a2 in response to instructions, inappropriate correction can be prevented if the subject 10a1 or subject 10a2 is actually not appropriate. As a result, appropriate correction makes it possible to more accurately obtain spectral information for subject 10b.
  • spectral information means information about a spectrum that indicates the wavelength dependency of light intensity.
  • Spectral information may be, for example, information about the spectrum itself, or information from which a spectrum can be derived.
  • the spectrum may be a reflection spectrum or a transmission spectrum.
  • analysis can be, for example, determining characteristics of the subject 10b, such as sugar content and ripeness, and inspecting the subject 10b for defects and/or foreign matter.
  • inspection includes human evaluation as well as machine processing.
  • the components of the imaging system 100 are described below.
  • the light source 20 emits illumination light for illuminating the subject 10a1 or the subject 10b.
  • the illumination light includes light of a plurality of wavelength bands.
  • the light source 20 may be, for example, an incandescent lamp, a halogen lamp, a mercury lamp, a fluorescent lamp, or an LED lamp that emits white light.
  • the hollow arrows shown in Figures 2A and 2C represent the light emitted from the light source 20 and the light reflected from the subject 10a1 and the subject 10b.
  • the light source 20 is not necessarily required. If the imaging system 100 does not include the light source 20, the subject 10a1 or the subject 10b may be illuminated with sunlight or indoor light.
  • <Camera 30> The camera 30 captures an image of the subject 10a1 or 10a2 as shown in Fig. 2A or 2B. Similarly, the camera 30 captures an image of the subject 10b as shown in Fig. 2C.
  • the target wavelength range W shown in FIG. 1A is a wavelength range in which the camera 30 can detect light.
  • the target wavelength range W can be determined, for example, based on the transmission range of the optical system and the sensitivity range of the image sensor.
  • the target wavelength range W is determined by the sensitivity range of the image sensor.
  • the camera 30 further includes a bandpass filter, the target wavelength range W can be determined based on the transmission range of the bandpass filter in addition to the transmission range of the optical system and the sensitivity range of the image sensor.
  • the target wavelength range W is determined by the transmission range of the bandpass filter.
  • Examples of the camera 30 include line-scan, snapshot, and compressed sensing hyperspectral cameras. Below, we explain the representative components and operation of each hyperspectral camera.
  • When the camera 30 is a line-scan hyperspectral camera, the camera 30 includes a prism or a diffraction grating, an image sensor, and a slide mechanism for sliding the object to be photographed in one direction.
  • the light source 20 emits a line beam extending in a direction perpendicular to the one direction.
  • the object to be photographed is illuminated with the line beam emitted from the light source 20.
  • the light generated by the illumination is separated into wavelength bands via the prism or the diffraction grating and detected by the image sensor. In line scanning, such light detection is performed while the object to be photographed is moved in one direction by the slide mechanism.
  • Camera 30 generates and outputs a hyperspectral image for correction by line scanning subject 10a1 or subject 10a2. Similarly, camera 30 generates and outputs a hyperspectral image to be analyzed by line scanning subject 10b.
  • Line-scan hyperspectral cameras have high spatial and wavelength resolution, but the imaging time is long due to the line scan.
  • When the camera 30 is a snapshot-type hyperspectral camera, the camera 30 includes an image sensor and a plurality of light-transmitting regions respectively corresponding to a plurality of wavelength bands. Each of the light-transmitting regions transmits light of the corresponding wavelength band among the plurality of wavelength bands in the target wavelength range. Light from the object to be photographed is detected by the image sensor via the plurality of light-transmitting regions.
  • Camera 30 generates and outputs a hyperspectral image for correction by capturing an image of subject 10a1 or subject 10a2 in one shot. Similarly, camera 30 generates and outputs a hyperspectral image to be analyzed by capturing an image of subject 10b in one shot. This is similar to the principle of a color camera capturing an image of a target object in one shot through red, green, and blue color filters to generate and output red, green, and blue images. Although snapshot-type hyperspectral cameras are capable of capturing images in one shot, their sensitivity and spatial resolution are often insufficient.
  • When the camera 30 is a compressed sensing type hyperspectral camera as disclosed in Patent Document 2, the camera 30 includes an encoding mask, which is an array of regions having mutually different transmission spectra, an image sensor, and an image processing device.
  • Light from the object to be photographed is detected by the image sensor via the encoding mask, generating a compressed image in which information of a plurality of wavelength bands is compressed.
  • The image processing device generates a hyperspectral image of the object based on the compressed image.
  • Camera 30 photographs subject 10a1 or subject 10a2 to generate a compressed image for correction, and generates and outputs a hyperspectral image for correction based on the compressed image.
  • camera 30 photographs subject 10b to generate a compressed image of the subject to be analyzed, and generates and outputs a hyperspectral image of the subject to be analyzed based on the compressed image.
  • a compressed sensing type hyperspectral camera can generate a hyperspectral image for correction or analysis in one shot without reducing sensitivity and spatial resolution.
  • the camera 30 may be configured to include an encoding mask and an image sensor, but may not include an image processing device. In such a configuration, the camera 30 generates and outputs a compressed image, and the processing device 50 generates a hyperspectral image based on the compressed image.
  • On the display device 40, the input UI 42 and the display UI 44 are displayed as a GUI (Graphical User Interface). It can be said that the information shown on the input UI 42 and the display UI 44 is displayed on the display device 40.
  • the input UI 42 and the display UI 44 may be realized by a device capable of both input and output, such as a touch screen. In that case, the touch screen may function as the display device 40.
  • In another example, the input UI 42 is a device independent of the display device 40.
  • the processing circuitry 52 included in the processing device 50 controls the operations of the light source 20, the camera 30, and the storage device 56.
  • the processing circuitry 52 acquires the hyperspectral image for correction and the hyperspectral image to be analyzed generated by the camera 30, and performs processing based on these hyperspectral images.
  • the processing circuitry 52 acquires the compressed image for correction and the compressed image to be analyzed generated by the camera 30, and performs processing based on these compressed images.
  • the memory 54 included in the processing device 50 stores a computer program executed by the processing circuit 52.
  • the processing circuit 52 and the memory 54 may be integrated on a single circuit board or may be provided on separate circuit boards.
  • the functions of the processing circuit 52 may also be distributed across multiple circuits.
  • a part or the entirety of the processing circuit 52 may be installed in a remote location away from the light source 20, the camera 30, and the storage device 56, and may control the operation of these components via a wired or wireless communication network.
  • the storage device 56 included in the processing device 50 is a device that includes any storage medium, such as a semiconductor storage medium or a magnetic storage medium.
  • the storage device 56 is connected to the processing circuit 52 and stores the processing results of the processing circuit 52.
  • FIG. 3 is a diagram showing a schematic configuration of the imaging system according to the exemplary embodiment of the present disclosure.
  • the imaging system 100 shown in FIG. 3 further includes a stage 60, a support 70, and an adjustment device 80 in addition to the above-mentioned light source 20, the camera 30, the display device 40, and the processing device 50.
  • the number of light sources 20 is two, but may be one, or may be three or more.
  • the processing device 50 is connected to the light source 20, the camera 30, the display device 40, and the adjustment device 80 by wire or wirelessly.
  • the processing circuit 52 shown in FIG. 2A to FIG. 2C included in the processing device 50 controls the operation of the adjustment device 80 in addition to the operation of the light source 20, the camera 30, and the display device 40.
  • the stage 60 has a flat support surface for placing the subject 10a1 shown in FIG. 2A and the subject 10b shown in FIG. 2C.
  • the support 70 is fixed to the stage 60 and has a structure that extends in a direction perpendicular to the support surface of the stage 60, i.e., in the height direction.
  • the support 70 supports the light source 20, the camera 30, and the adjustment device 80.
  • the adjustment device 80 includes a mechanism for independently moving the light source 20 and the camera 30 in a direction perpendicular to the support surface of the stage 60.
  • the adjustment device 80 may include an actuator including one or more motors, such as a linear actuator.
  • the actuator may be configured to change the distance between the light source 20 and the subject 10a1 or the distance between the light source 20 and the subject 10b, and the distance between the camera 30 and the subject 10a1 or the distance between the camera 30 and the subject 10b, using, for example, an electric motor, hydraulic pressure, or pneumatic pressure.
  • the adjustment device 80 can adjust the distance between the light source 20 and the subject 10a1 or the distance between the light source 20 and the subject 10b, it is possible to appropriately adjust the amount of light emitted from the light source 20, reflected by the subject 10a1 or the subject 10b, and incident on the camera 30. This reduces the possibility that the hyperspectral image or compressed image will be too bright or too dark. Furthermore, because the adjustment device 80 can adjust the distance between the camera 30 and the subject 10a1 or the distance between the camera 30 and the subject 10b, it becomes easier to focus the camera 30.
  • the adjustment device 80 further includes a measuring device that measures the distance between the stage 60 and the light source 20, and the distance between the stage 60 and the camera 30.
  • In one example, the support 70 is provided with a scale that indicates the height from the support surface of the stage 60. The positions of the light source 20 and the camera 30 in the height direction can be known from the scale.
  • In white board correction, the hyperspectral image to be analyzed is normalized by the hyperspectral image for correction. More specifically, each of the multiple images included in the hyperspectral image to be analyzed is normalized by the image of the same wavelength band among the multiple images included in the hyperspectral image for correction.
  • A suitable white board has a reflectance at or above a certain level throughout the wavelength range covered by the hyperspectral image.
  • The white board correction is performed to reduce the effects of the spectral shape of the irradiated light, the irradiance distribution during shooting, the peripheral light falloff of the lens, and the uneven sensitivity of the image sensor. However, if there is no need to reduce such effects, the white board correction need not necessarily be performed.
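Applied band by band to the hyperspectral cubes, the white board correction might look like the following sketch (the cube layout, helper name, and bit depth are assumptions):

```python
import numpy as np

def white_board_correct(cube_analysis, cube_white, pv_max=4095, eps=1e-6):
    # Normalize each band of the analysis cube by the same band of the white cube.
    corrected = np.empty_like(cube_analysis, dtype=np.float64)
    for k in range(cube_analysis.shape[0]):  # iterate over wavelength bands
        corrected[k] = pv_max * cube_analysis[k] / (cube_white[k] + eps)
    return np.clip(corrected, 0, pv_max)
```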
  • FIG. 4A is a diagram showing a schematic example of white board correction.
  • the top three diagrams in FIG. 4A show examples of correct white board correction, while the middle three diagrams and the bottom three diagrams in FIG. 4A show examples of incorrect white board correction.
  • the diagram on the left shows a hyperspectral image containing subject 10b
  • the central diagram shows a hyperspectral image containing a white board
  • the diagram on the right shows a hyperspectral image after white board correction.
  • an image corresponding to a certain wavelength band contained in the hyperspectral image is shown as an example of the hyperspectral image.
  • When the white board is appropriate, an appropriate hyperspectral image for correction is generated, as shown in the center diagram in the top row.
  • the hyperspectral image to be analyzed is corrected based on the appropriate hyperspectral image for correction, so that a corrected hyperspectral image after white board correction is correctly generated, as shown in the right diagram in the top row.
  • In contrast, if dirt adheres to the white board, the dirt will appear in the hyperspectral image for correction, as shown in the center figure in the middle row. If the hyperspectral image to be analyzed is corrected based on such an inappropriate hyperspectral image for correction, the hyperspectral image after white board correction will not be generated correctly, as shown in the right figure in the middle row. In the hyperspectral image for correction, the parts corresponding to the dirt are darker than the other parts, so, due to the normalization described above, the parts corresponding to the dirt appear white in the hyperspectral image after white board correction.
  • If an object other than a white board is mistakenly photographed as the subject for correction, the object will appear in the hyperspectral image for correction, as shown in the center diagram in the lower row.
  • a fish is shown as an example of the object.
  • If the hyperspectral image to be analyzed is corrected based on such an inappropriate hyperspectral image for correction, the hyperspectral image after white board correction will not be generated correctly, as shown in the right diagram in the lower row.
  • In the hyperspectral image for correction, the part corresponding to the object is darker than the other parts; therefore, due to the above normalization, the part corresponding to the object appears white in the hyperspectral image after white board correction.
  • In black subtraction, a correction hyperspectral image is subtracted from the hyperspectral image to be analyzed. More specifically, from each of the images included in the hyperspectral image to be analyzed, the image of the same wavelength band among the images included in the hyperspectral image for correction is subtracted.
  • Black subtraction is performed to reduce the effects of dark current, bright-spot pixels, fixed-pattern noise, and sensor performance fluctuations of the image sensor. However, if there is no need to reduce such effects, black subtraction is not necessarily required.
  • For a compressed sensing type camera, the hyperspectral image to be analyzed may be read as the compressed image to be analyzed, and the hyperspectral image for correction as the compressed image for correction.
  • FIG. 4B is a diagram showing a schematic example of black subtraction.
  • the top three diagrams in FIG. 4B show examples of correct black subtraction, and the bottom three diagrams in FIG. 4B show examples of incorrect black subtraction.
  • the diagram on the left shows a hyperspectral image containing subject 10b
  • the diagram in the center shows a hyperspectral image representing a light-shielded image
  • the diagram on the right shows a hyperspectral image after black subtraction.
  • an image corresponding to a certain wavelength band contained in the hyperspectral image is shown as an example of the hyperspectral image.
  • an appropriate hyperspectral image for correction is generated, as shown in the center diagram in the top row.
  • The hyperspectral image for correction, which is a light-blocked image, contains noise caused by the image sensor.
  • The hyperspectral image to be analyzed contains subject 10b with the noise superimposed thereon, as shown in the left diagram in the top row. Since the hyperspectral image to be analyzed is corrected based on the appropriate hyperspectral image for correction, the hyperspectral image after black subtraction is generated correctly, as shown in the right diagram in the top row.
  • The hyperspectral image after black subtraction contains subject 10b with the noise removed.
  • In contrast, if a hyperspectral image containing the white board, rather than a light-blocked image, is generated as the hyperspectral image for correction, as shown in the center diagram in the lower row, and the hyperspectral image to be analyzed is corrected based on such an inappropriate hyperspectral image for correction, the hyperspectral image after black subtraction will not be generated correctly, as shown in the right diagram in the lower row.
  • Because the correction hyperspectral image is not a light-blocked image, larger pixel values than necessary are subtracted. This makes the incorrectly black-subtracted image darker than the correctly black-subtracted hyperspectral image. In the incorrectly black-subtracted image, pixel values may become negative; in such cases, the pixel values may be output as zero. Therefore, the corrected hyperspectral image may differ from the image obtained by simply subtracting the correction hyperspectral image from the hyperspectral image to be analyzed.
  • If the subject 10a1 or the subject 10a2 is not suitable, it is not easy to accurately obtain spectral information of the subject 10b. When the subject 10a2 is not suitable, for example, the light-blocking lens cap may not be properly attached to the camera 30, making it difficult to capture an image in a light-blocked state.
  • the inventors discovered this problem and came up with a method used to correct images in this embodiment that can solve the problem.
  • FIG. 5 is a flow chart showing an outline of example 1 of the processing operation performed by the processing circuit 52 in the imaging system according to this embodiment.
  • the processing circuit 52 performs the operations of steps S101 to S108 shown in FIG. 5.
  • the "HS image" shown in FIG. 5 represents a hyperspectral image.
  • The processing circuitry 52 causes the display device 40 to display an instruction to the user to photograph a white board or an instruction to the user to photograph in a light-shielded state. Alternatively, if the imaging system 100 includes a speaker, the processing circuitry 52 may cause the speaker to output the instruction by voice.
  • Processing circuit 52 receives input from the user via input UI 42 and causes camera 30 to capture subject 10a1 or subject 10a2 to generate a first hyperspectral image. In this way, the first hyperspectral image is generated when the user receives an instruction to capture subject 10a1 or subject 10a2.
  • processing circuitry 52 may receive input from a user via input UI 42 and adjust parameters related to camera 30.
  • the parameters related to camera 30 may be, for example, exposure time, gain, number of integrations, and/or the distance between camera 30 and subject 10a1.
  • When photographing the subject 10a1, the processing circuit 52 receives input from the user via the input UI 42 before step S101 and causes the light source 20 to emit illumination light for illuminating the subject 10a1. When photographing the subject 10a2, the processing circuit 52 does not need to cause the light source 20 to emit illumination light before step S101.
  • the processing circuit 52 may receive input from the user via the input UI 42 and adjust parameters related to the light source 20.
  • the parameters related to the light source 20 may be, for example, the distance between the light source 20 and the subject 10a1, the current for driving the light source 20, the voltage, the duty ratio of a PWM (Pulse Width Modulation) signal, and/or the attenuation rate of an ND (Neutral Density) filter (not shown) disposed between the light source 20 and the subject 10a1.
  • the processing circuitry 52 acquires a first hyperspectral image as a hyperspectral image for correction from the camera 30.
  • the processing circuitry 52 may store the acquired first hyperspectral image in the storage device 56.
  • The processing circuitry 52 causes the display device 40 to display an instruction to the user to photograph the subject to be analyzed. Alternatively, if the imaging system 100 includes a speaker, the processing circuitry 52 may cause the speaker to output the instruction as sound.
  • the user receives the instruction and places the subject 10b in front of the camera 30.
  • the processing circuit 52 receives input from the user via the input UI 42 and causes the camera 30 to capture the subject 10b and generate a second hyperspectral image. In this way, the second hyperspectral image is generated when the user receives an instruction to capture the subject 10b.
  • processing circuit 52 causes light source 20 to emit illumination light for illuminating subject 10b before step S103.
  • the processing circuitry 52 acquires a second hyperspectral image from the camera 30 as the hyperspectral image to be analyzed.
  • the processing circuitry 52 may store the acquired second hyperspectral image in the storage device 56.
  • processing circuit 52 may execute the operations of steps S103 and S104 between steps S105 and S106.
  • the processing circuit 52 determines whether the first hyperspectral image satisfies a suitability condition for use as an image for correction of the second hyperspectral image based on the pixel values of the first hyperspectral image.
  • the suitability condition for use as an image for whiteboard correction and black subtraction will be described later.
  • If the determination is Yes, the processing circuit 52 executes the operation of step S106. If the determination is No, the processing circuit 52 executes the operation of step S108.
  • the processing circuitry 52 performs whiteboard correction by normalizing the second hyperspectral image by the first hyperspectral image, or performs black subtraction by subtracting the first hyperspectral image from the second hyperspectral image.
  • black subtraction may already have been performed on the second hyperspectral image.
  • the processing circuitry 52 stores the corrected hyperspectral image in the memory device 56 .
  • the processing circuit 52 causes the display device 40 to display an error indicating that there is an abnormality in the first hyperspectral image.
  • An external server may perform the operations of steps S102 and S105.
  • The method used for image correction in this embodiment described above can prevent inappropriate white board correction when the user has received an instruction to photograph an appropriate subject 10a1 but the subject 10a1 is actually not suitable. Similarly, it can prevent inappropriate black subtraction when the user has received an instruction to photograph an appropriate subject 10a2 but the subject 10a2 is actually not suitable. As a result, it is possible to obtain spectral information of the subject 10b more accurately than when it is not determined whether the first hyperspectral image satisfies the suitability condition.
  • FIG. 6A is a diagram showing a first hyperspectral image and a reflectance spectrum when the subject 10a1 is appropriate.
  • the first hyperspectral image is shown at the top of FIG. 6A.
  • the first hyperspectral image includes five images corresponding to five wavelength bands. For the wavelength bands W 1 to W 5 , the smaller the subscript number, the shorter the central wavelength of the wavelength band. These images are smooth images without boundaries and structures. The whiter the image, the brighter it is, and the blacker the image, the darker it is.
  • the reflection spectrum of the subject 10a1 is shown at the bottom of FIG. 6A.
  • the reflection spectrum was generated based on the first hyperspectral image.
  • the reflection intensity in each wavelength band is calculated by averaging the pixel values of multiple pixels near the center of the image corresponding to the wavelength band.
  • the appropriately selected subject 10a1 has the same reflectance in each of the wavelength bands W 1 to W 5.
  • the reflection spectrum of the subject 10a1 is almost the same as the spectrum of the light emitted from the light source 20. In other words, the spectrum obtained when the camera 30 detects the reflected light is almost the same as the spectrum obtained when the camera 30 directly detects the light emitted from the light source 20.
  • The reflected light here is the light generated when the light emitted from the light source 20 is reflected by the subject 10a1.
  • the reflection intensity in the wavelength band W 1 is the highest, the reflection intensity in the wavelength band W 2 is the lowest, and the reflection intensity in the wavelength bands W 3 to W 5 is higher than the reflection intensity in the wavelength band W 2 and lower than the reflection intensity in the wavelength band W 1 .
  • The reflection intensity in wavelength band W4 is higher than the reflection intensities in wavelength bands W3 and W5.
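The per-band reflection intensity computed by averaging pixels near the image center might be sketched as follows (the window size is an illustrative assumption):

```python
import numpy as np

def center_spectrum(cube, half_window=16):
    # Average a (2*half_window)^2 patch around the image center in each band.
    _, h, w = cube.shape
    cy, cx = h // 2, w // 2
    patch = cube[:, cy - half_window:cy + half_window,
                 cx - half_window:cx + half_window]
    return patch.mean(axis=(1, 2))  # one reflection intensity per band
```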
  • FIG. 6B is a diagram schematically showing the first hyperspectral image and the reflectance spectrum when the subject 10a1 is not suitable. The diagrams shown in the upper and lower parts of FIG. 6B are as described above.
  • If the subject 10a1 is not suitable, its reflection spectrum will differ from the spectrum of the light emitted from the light source 20.
  • In the example of FIG. 6B, a blue plate was used as the subject 10a1.
  • In wavelength bands where the reflectance of the subject 10a1 is low, the reflection intensity decreases and the corresponding image becomes darker.
  • Spectral information of the light from the light source 20 is pre-stored in the storage device 56.
  • the processing circuit 52 acquires the spectral information of the light from the light source 20 from the storage device 56, and determines whether or not the first hyperspectral image satisfies the suitability conditions by comparing the spectral information acquired from the first hyperspectral image with the spectral information of the light from the light source 20. If the shapes of the two spectra are similar, the processing circuit 52 determines that the first hyperspectral image satisfies the suitability conditions.
  • The two pieces of spectral information can be compared using, for example, the spectral angle mapper (SAM) method.
  • Let vector u represent the spectral information acquired from the first hyperspectral image, and vector v represent the spectral information of the light from the light source 20. When the shapes of the two spectra are the same, vector u and vector v face in the same direction.
  • Because only the directions of vector u and vector v are compared, the comparison is possible regardless of the amount of light irradiated when generating the first hyperspectral image. If the spectral angle formed by vector u and vector v is, for example, 1° or less, 5° or less, or 10° or less, or any angle in the range of 1° to 10°, the first hyperspectral image may be determined to satisfy the suitability condition.
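A minimal sketch of this spectral-angle comparison, with the threshold in degrees as in the examples above (function names are ours):

```python
import numpy as np

def spectral_angle_deg(u, v):
    # Angle between two spectra; 0 deg means identical shapes regardless of scale.
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def satisfies_sam_condition(u, v, threshold_deg=5.0):
    return spectral_angle_deg(u, v) <= threshold_deg
```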
  • In another example, edge detection is performed on the first hyperspectral image, and whether the first hyperspectral image satisfies the suitability condition is determined based on the resulting edge image.
  • the Sobel method can be used for edge detection.
  • FIG. 7 shows the first hyperspectral image when the subject 10a1 is appropriate and when it is not appropriate, and an edge image obtained by edge detection of the first hyperspectral image.
  • the upper left side of FIG. 7 shows the first hyperspectral image when the subject 10a1 is appropriate, and the upper right side of FIG. 7 shows an edge image obtained by edge detection of the first hyperspectral image.
  • the lower left side of FIG. 7 shows the first hyperspectral image when a white board is not appropriately used as the subject 10a1, and the lower right side of FIG. 7 shows an edge image obtained by edge detection of the first hyperspectral image.
  • an image corresponding to a certain wavelength band included in the first hyperspectral image is illustrated as the first hyperspectral image.
  • the first hyperspectral image means an image corresponding to a certain wavelength band included in the first hyperspectral image.
  • the first hyperspectral image is a smooth image without boundaries or structures. Even if edge detection is performed on the first hyperspectral image, zero pixels are detected as edges.
  • the first hyperspectral image is a non-smooth image having boundaries and structures.
  • When edge detection is performed on this first hyperspectral image, some pixels are detected as edges.
  • the first hyperspectral image contains a number of vegetables. Of the 65,535 pixels contained in the first hyperspectral image, 2,109 pixels are detected as edges.
  • Whether the first hyperspectral image satisfies the suitability condition may be determined, for example, based on the number of pixels detected as edges among the multiple pixels in an image corresponding to a certain wavelength band included in the first hyperspectral image. Specifically, the suitability condition may be that the proportion of pixels detected as edges is equal to or less than a predetermined ratio.
  • the number of images for edge detection among the multiple images included in the first hyperspectral image may be one or more.
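A sketch of the edge-based check using a plain Sobel gradient implemented with numpy; the gradient threshold (which assumes pixel values normalized to [0, 1]) and the edge-ratio limit are illustrative assumptions:

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)  # Sobel x
KY = KX.T                                                              # Sobel y

def filter2d(img, kernel):
    # Same-size 2-D cross-correlation of img with a 3x3 kernel, zero-padded.
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def edge_ratio(band_image, grad_threshold=0.1):
    # Fraction of pixels whose Sobel gradient magnitude exceeds the threshold.
    g = np.hypot(filter2d(band_image, KX), filter2d(band_image, KY))
    return np.mean(g > grad_threshold)

def satisfies_edge_condition(band_image, max_ratio=0.01):
    return edge_ratio(band_image) <= max_ratio
```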
  • FIG. 8A is a diagram showing a first hyperspectral image and a histogram of pixel values of the first hyperspectral image when the subject 10a2 is appropriate.
  • the upper part of FIG. 8A shows a hyperspectral image
  • the lower part of FIG. 8A shows a histogram of pixel values of the first hyperspectral image.
  • FIG. 8A shows an example of an image corresponding to a certain wavelength band contained in the first hyperspectral image as the first hyperspectral image.
  • the first hyperspectral image means an image corresponding to a certain wavelength band contained in the first hyperspectral image.
  • the first hyperspectral image is a dark image. All pixels in the first hyperspectral image have pixel values near zero.
  • the first hyperspectral image includes 256 × 256 pixels.
  • pixel values are normalized by the maximum pixel value that the pixel can have.
  • the pixels with the lowest pixel value are the most numerous, accounting for more than 97% of all pixels.
  • the minimum normalized pixel value is 2.44 × 10⁻⁴. Pixels with normalized pixel values of 1.0 × 10⁻³ or more account for 0.07% of all pixels.
  • FIG. 8B is a diagram showing the first hyperspectral image and a histogram of pixel values of the first hyperspectral image when the subject 10a2 is not suitable.
  • the figures shown at the top and bottom of FIG. 8B are as described above.
  • the first hyperspectral image is not a dark image but a landscape image.
  • pixels with the lowest pixel value account for only 0.006% of all pixels.
  • the lowest normalized pixel value is 0.0273.
  • pixels with normalized pixel values of 1.0 × 10⁻³ or more account for 100% of all pixels.
  • if such a condition is satisfied, the first hyperspectral image may be determined to satisfy the suitability condition.
  • the condition may be determined, for example, based on the number of pixels having the lowest pixel value, or the number of pixels whose pixel values are greater than or equal to, or less than or equal to, a threshold value, in an image corresponding to a certain wavelength band included in the first hyperspectral image.
  • specifically, the condition may be that the proportion of pixels having the lowest pixel value is greater than or equal to a predetermined percentage, that the proportion of pixels whose pixel values are greater than or equal to a threshold value is less than or equal to a predetermined percentage, or that the proportion of pixels whose pixel values are less than or equal to a threshold value is greater than or equal to a predetermined percentage; a minimal sketch of this darkness check appears below.
  • the number of band images whose pixel-value histograms are checked may be one or more.
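  • The darkness check can be sketched as follows, assuming the band image is unsigned-integer sensor output; the 1.0 × 10⁻³ value threshold echoes the example above, while the allowed ratio of bright pixels is an illustrative assumption.
```python
import numpy as np

def satisfies_dark_condition(band_image: np.ndarray,
                             value_threshold: float = 1.0e-3,
                             max_bright_ratio: float = 1.0e-3) -> bool:
    """Suitability for black subtraction: after normalizing by the maximum
    value the pixel type can take (unsigned-integer data assumed), almost
    no pixel may reach the value threshold, i.e., the image must be dark."""
    normalized = band_image / np.iinfo(band_image.dtype).max
    bright = np.count_nonzero(normalized >= value_threshold)
    return bright / band_image.size <= max_bright_ratio
```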
  • FIG. 9 is a flow chart that shows an outline of a second example of the processing operation performed by the processing circuit 52 in the imaging system according to this embodiment.
  • the processing circuit 52 performs the operations of steps S201 to S210 shown in FIG. 9.
  • the processing circuitry 52 causes the display device 40 to display an instruction to the user to photograph the whiteboard. Alternatively, if the imaging system 100 includes a speaker, the processing circuitry 52 may cause the speaker to output the instruction as voice.
  • the user receives the instruction and places the subject 10a1 in front of the camera 30.
  • the processing circuit 52 receives input from the user via the input UI 42 and causes the camera 30 to capture the subject 10a1 and generate a first compressed image. In this way, the first compressed image is generated when the user receives an instruction to capture the subject 10a1.
  • the processing circuit 52 receives input from the user via the input UI 42 and causes the light source 20 to emit irradiation light for irradiating the subject 10a1.
  • the processing circuitry 52 acquires a first compressed image from the camera 30.
  • the processing circuitry 52 may store the acquired first compressed image in the storage device 56.
  • the processing circuitry 52 generates a first hyperspectral image based on the first compressed image.
  • the processing circuitry 52 may store the generated first hyperspectral image in the storage device 56.
  • the processing circuitry 52 causes the display device 40 to display an instruction to the user to photograph the subject to be analyzed. Alternatively, if the imaging system 100 includes a speaker, the processing circuitry 52 may cause the speaker to output the instruction as sound.
  • the user receives the instruction and places the subject 10b in front of the camera 30.
  • the processing circuit 52 receives input from the user via the input UI 42 and causes the camera 30 to capture the subject 10b and generate a second compressed image. In this way, the second compressed image is generated when the user receives an instruction to capture the subject 10b.
  • the subject 10b is illuminated with the above-mentioned illumination light.
  • the processing circuitry 52 acquires a second compressed image from the camera 30.
  • the processing circuitry 52 may store the acquired second compressed image in the storage device 56.
  • the processing circuitry 52 generates a second hyperspectral image based on the second compressed image.
  • the processing circuitry 52 may store the generated second hyperspectral image in the storage device 56.
  • processing circuit 52 may execute the operations of steps S203 to S206 between steps S207 and S208.
  • Step S207: The processing circuit 52 determines whether the first compressed image satisfies a suitability condition for an image for whiteboard correction, based on the pixel values of the first compressed image.
  • the suitability condition for an image for whiteboard correction will be described later.
  • if the determination is Yes, the processing circuit 52 executes the operation of step S208. If the determination is No, the processing circuit 52 executes the operation of step S210.
  • Step S208: The processing circuit 52 performs whiteboard correction by normalizing the second hyperspectral image by the first hyperspectral image; a minimal sketch of this normalization appears below.
  • the second hyperspectral image may have already been subjected to black subtraction.
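  • As an illustration, the whiteboard correction of step S208 reduces, per band and per pixel, to a division of the analysis image by the whiteboard image. The following is a minimal sketch, assuming both hyperspectral images are NumPy arrays of shape (bands, height, width) and using a small epsilon (an assumption, not from this disclosure) to avoid division by zero.
```python
import numpy as np

def whiteboard_correction(second_hsi: np.ndarray, first_hsi: np.ndarray,
                          eps: float = 1e-12) -> np.ndarray:
    """Normalize the analysis image by the whiteboard image, band by band.
    Black subtraction, if any, is assumed to have been applied already."""
    return second_hsi / np.maximum(first_hsi, eps)
```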
  • Step S209: The processing circuit 52 stores the corrected hyperspectral image in the storage device 56.
  • Step S210: The processing circuit 52 causes the display device 40 to display an error indicating that there is an abnormality in the first compressed image.
  • the external server may execute the operations of steps S202 and S207.
  • the above method used for image correction in this embodiment can prevent inappropriate whiteboard correction when the user believes that he or she has photographed an appropriate subject 10a1 in response to instructions, but the subject 10a1 is actually not appropriate. As a result, it is possible to obtain more accurate spectral information of the subject 10b than when it is not determined whether the first compressed image satisfies the suitability conditions for an image for whiteboard correction.
  • processing circuit 52 may execute the operations of S105 to S108 shown in FIG. 5 instead of the operations of S207 to S210 shown in FIG. 9.
  • Suitability conditions for an image for whiteboard correction: with reference to FIGS. 10A and 10B, examples of suitability conditions that the first compressed image should satisfy as an image for whiteboard correction are described below.
  • whether or not the first compressed image satisfies the suitability conditions as an image for whiteboard correction is determined based on a histogram of pixel values of the first compressed image.
  • FIG. 10A is a diagram showing a first compressed image and a histogram of its pixel values when the subject 10a1 is appropriate.
  • the upper part of FIG. 10A shows the first compressed image, and the lower part of FIG. 10A shows a histogram of the pixel values of the first compressed image.
  • the first compressed image has an irregular distribution of light and dark pixel values that reflects the spatial distribution of the transmittance of the encoding mask.
  • a histogram of pixel values of the first compressed image roughly shows a single peak.
  • when the subject 10a1 is appropriate, the peak width is narrower than when it is not.
  • in the example shown in FIG. 10A, σ/μ = 0.2615, where μ is the mean and σ is the standard deviation of the pixel values.
  • FIG. 10B is a diagram showing the first compressed image and its pixel value histogram when the subject 10a1 is not appropriate.
  • the upper and lower diagrams in FIG. 10B are as described above.
  • even in this case, the first compressed image has an irregular distribution of light and dark pixel values.
  • at a glance, the first compressed image shown in FIG. 10B resembles the first compressed image shown in FIG. 10A, but its pixel-value histogram has a broader spread.
  • whether or not the first compressed image satisfies the suitability conditions can therefore be determined based on a histogram of its pixel values. For example, if σ/μ of the histogram of the first compressed image is equal to or less than a predetermined value, it is determined that the first compressed image satisfies the suitability conditions; a minimal sketch of this check appears below.
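  • A minimal sketch of this histogram-based check follows; reading the ratio as σ/μ (standard deviation over mean) and the 0.3 threshold are assumptions made for illustration (the appropriate-whiteboard example above gave σ/μ = 0.2615).
```python
import numpy as np

def satisfies_cv_condition(compressed_image: np.ndarray,
                           max_cv: float = 0.3) -> bool:
    """Suitability for whiteboard correction from a compressed image:
    the pixel-value histogram must form a narrow single peak, i.e.,
    sigma/mu must not exceed a predetermined value."""
    values = compressed_image.astype(float).ravel()
    return values.std() / values.mean() <= max_cv
```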
  • FIG. 11 is a flow chart that shows an outline of example 3 of the processing operation performed by the processing circuit 52 in the imaging system according to this embodiment.
  • the processing circuit 52 performs the operations of steps S301 to S308 shown in FIG. 11.
  • the processing circuitry 52 causes the display device 40 to display an instruction to the user to take a picture in a light-shielded state.
  • alternatively, if the imaging system 100 includes a speaker, the processing circuitry 52 may cause the speaker to output the instruction as sound.
  • the user receives the instruction and attaches the subject 10a2 to the camera 30.
  • the processing circuit 52 receives input from the user via the input UI 42 and causes the camera 30 to capture the subject 10a2 and generate a first compressed image. In this way, the first compressed image is generated when the user receives an instruction to capture the subject 10a2.
  • the processing circuitry 52 acquires a first compressed image from the camera 30.
  • the processing circuitry 52 may store the acquired first compressed image in the storage device 56.
  • the processing circuitry 52 causes the display device 40 to display an instruction to the user to photograph the subject to be analyzed. Alternatively, if the imaging system 100 includes a speaker, the processing circuitry 52 may cause the speaker to output the instruction as sound.
  • the user receives the instruction and places the subject 10b in front of the camera 30.
  • the processing circuit 52 receives input from the user via the input UI 42 and causes the camera 30 to capture the subject 10b and generate a second compressed image. In this way, the second compressed image is generated when the user receives an instruction to capture the subject 10b.
  • the processing circuit 52 receives input from the user via the input UI 42 and causes the light source 20 to emit irradiation light for irradiating the subject 10b.
  • the processing circuitry 52 acquires a second compressed image from the camera 30.
  • the processing circuitry 52 may store the acquired second compressed image in the storage device 56.
  • processing circuit 52 may execute the operations of steps S303 and S304 between steps S305 and S306.
  • Step S305: The processing circuit 52 determines whether the first compressed image satisfies the suitability conditions for an image for black subtraction, based on the pixel values of the first compressed image.
  • the suitability conditions that the first compressed image should satisfy as an image for black subtraction may be the same as the suitability conditions described with reference to FIGS. 8A and 8B. More specifically, the processing circuit 52 may determine whether the first compressed image satisfies the suitability conditions based on the number of pixels in the first compressed image that have the lowest pixel value, or the number of pixels whose pixel values are equal to or greater than, or less than, a predetermined value.
  • as in the case of whiteboard correction, also in the case of black subtraction the first compressed image may be determined to satisfy the suitability conditions if a condition for determining that the spatial distribution of pixel values of the first compressed image is constant is satisfied.
  • if the determination is Yes, the processing circuit 52 executes the operation of step S306. If the determination is No, the processing circuit 52 executes the operation of step S308.
  • Step S306: The processing circuit 52 performs black subtraction by subtracting the first compressed image from the second compressed image; a minimal sketch appears after this step list.
  • Step S307: The processing circuit 52 causes the storage device 56 to store the corrected compressed image.
  • Step S308: The processing circuit 52 causes the display device 40 to display an error indicating that there is an abnormality in the first compressed image.
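  • The black subtraction of step S306 can be sketched as a saturating subtraction; clipping at zero is an illustrative choice to keep sensor noise from producing negative pixel values.
```python
import numpy as np

def black_subtraction(second_compressed: np.ndarray,
                      first_compressed: np.ndarray) -> np.ndarray:
    """Subtract the light-shielded (dark) image from the analysis image."""
    diff = second_compressed.astype(np.int64) - first_compressed.astype(np.int64)
    return np.clip(diff, 0, None).astype(second_compressed.dtype)
```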
  • the external server may execute the operations of steps S302 and S305.
  • the above method used for image correction in this embodiment can prevent inappropriate black subtraction when the user believes that he or she has photographed an appropriate subject 10a2 in response to instructions, but the subject 10a2 is actually not appropriate. As a result, it is possible to obtain more accurate spectral information of the subject 10b than when it is not determined whether the first compressed image satisfies the suitability conditions for an image for black subtraction.
  • FIGS. 12A to 12C are diagrams schematically showing examples of the UI of the display device 40 when the camera 30 is a compressed-sensing hyperspectral camera that generates and outputs a compressed image.
  • the UI serves as both an input UI 42 and a display UI 44.
  • the upper left side of FIG. 12A shows a space for a compressed image of subject 10b.
  • the lower side of FIG. 12A shows a space for a corrected hyperspectral image of subject 10b.
  • the upper right side of FIG. 12A shows the shooting conditions, such as resolution, exposure time, gain, and number of integrations.
  • the center right of FIG. 12A shows buttons for "whiteboard correction” and "black subtraction.”
  • when the "whiteboard correction" button is pressed, the processing circuit 52 executes the operation of the flowchart shown in FIG. 9.
  • when the "black subtraction" button is pressed, the processing circuit 52 executes the operation of the flowchart shown in FIG. 11.
  • if an abnormality is detected in the first compressed image during whiteboard correction, an error pop-up is displayed on the UI of the display device 40, as shown in FIG. 12B.
  • the error pop-up states, "An abnormality has been detected in the compressed image of the whiteboard. Please check whether it is an appropriate whiteboard. Do you want to continue processing?" In this way, the error may prompt the user to check whether the whiteboard has been correctly photographed as the subject 10a1 for correction. Note that the compressed image of subject 10b generated during the processing is shown on the upper left side.
  • if an abnormality is detected in the light-shielded image during black subtraction, an error pop-up is displayed on the UI of the display device 40, as shown in FIG. 12C.
  • the error pop-up states, "An abnormality has been detected in the shading image. Please check the shading state. Do you want to continue processing?" In this way, the error may prompt the user to check the shading state when photographing the correction subject 10a2.
  • an error may be displayed and the process may be interrupted, or the process may be continued after confirmation from the user.
  • the user may be requested to retake a compressed image of subject 10a1 or subject 10a2.
  • FIG. 13 is a flowchart that schematically summarizes examples 1 to 3 of the processing operations performed by the processing circuit 52 in the imaging system according to this embodiment.
  • the processing circuit 52 performs the operations of steps S401 to S406 shown in FIG. 13.
  • Step S401: The processing circuit 52 acquires a first image from the camera 30 as an image for correction.
  • in method 1 used to correct an image, the first image is a first hyperspectral image.
  • in methods 2 and 3 used to correct an image, the first image is a first compressed image.
  • Step S402: The processing circuit 52 acquires a second image from the camera 30.
  • in methods 1 and 2 used to correct an image, the second image is a second hyperspectral image.
  • in methods 1 and 2, the processing circuit 52 generates, and thereby acquires, the second hyperspectral image based on the second compressed image.
  • in method 3 used to correct an image, the second image is a second compressed image.
  • the processing circuit 52 may also execute the operation of step S402 between steps S403 and S404.
  • Step S403: The processing circuit 52 determines whether the first image satisfies the suitability conditions for a correction image for correcting the second image, based on the pixel values of the first image. If the determination is Yes, the processing circuit 52 executes the operation of step S404. If the determination is No, the processing circuit 52 executes the operation of step S406.
  • Step S404: The processing circuit 52 corrects the second image based on the first image (a generic sketch of steps S403 to S406 appears after this summary).
  • in method 1, the processing circuit 52 performs whiteboard correction by normalizing the second hyperspectral image by the first hyperspectral image, or performs black subtraction by subtracting the first hyperspectral image from the second hyperspectral image.
  • in method 2, the processing circuit 52 performs whiteboard correction by generating a first hyperspectral image based on the first compressed image and normalizing the second hyperspectral image by that first hyperspectral image.
  • in method 3, the processing circuit 52 performs black subtraction by subtracting the first compressed image from the second compressed image.
  • Step S405: The processing circuit 52 causes the storage device 56 to store the corrected image.
  • Step S406: The processing circuit 52 causes the display device 40 to display an error indicating that there is an abnormality in the first image.
  • the external server may execute the operations of steps S401 and S403.
  • the above method used for image correction in this embodiment can prevent inappropriate correction when the user believes that he or she has photographed the appropriate subject 10a1 or subject 10a2 in response to instructions, but the subject 10a1 or subject 10a2 is actually not appropriate. As a result, it is possible to obtain more accurate spectral information for the subject 10b than when it is not determined whether the first image satisfies the suitability conditions.
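  • Steps S403 to S406 can be summarized in a single generic routine. The following sketch is illustrative only; `is_suitable` and `correct` stand in for the method-specific checks and corrections sketched earlier, and these callable names are assumptions, not names from this disclosure.
```python
import numpy as np
from typing import Callable

def check_and_correct(first_image: np.ndarray, second_image: np.ndarray,
                      is_suitable: Callable[[np.ndarray], bool],
                      correct: Callable[[np.ndarray, np.ndarray], np.ndarray]
                      ) -> np.ndarray:
    """Generic flow of steps S403 to S406: verify the correction image,
    then correct the second image or report an abnormality."""
    if not is_suitable(first_image):                        # step S403
        raise ValueError("abnormality in the first image")  # step S406
    return correct(second_image, first_image)               # steps S404-S405
```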
  • Method 4 used for image correction will be described below with reference to Figures 14 and 15.
  • an image generated by capturing a scene including the subject 10a1 and the subject 10b by the camera 30 is used.
  • the image correction described below is whiteboard correction.
  • FIG. 14 is a schematic diagram showing an example of an image generated by capturing a scene including subjects 10a1 and 10b with camera 30.
  • Image 38 shown in FIG. 14 is a hyperspectral image or a compressed image.
  • Subjects 10a1 and 10b are captured in image 38.
  • FIG. 15 is a flow chart that shows an outline of example 4 of the processing operation performed by the processing circuit 52 in the imaging system according to this embodiment.
  • the processing circuit 52 performs the operations of steps S501 to S507 shown in FIG. 15.
  • the processing circuitry 52 causes the display device 40 to display instructions to the user to photograph the subject for correction and the subject to be analyzed. Alternatively, if the photographing system 100 includes a speaker, the processing circuitry 52 may cause the speaker to output the instructions as sound.
  • Processing circuit 52 receives input from the user via input UI 42 and causes a scene including subjects 10a1 and 10b to be photographed, generating image 38 in which subjects 10a1 and 10b appear. In this way, image 38 is generated when the user receives instructions to photograph subjects 10a1 and 10b.
  • Step S502: The processing circuitry 52 acquires the image 38 from the camera 30.
  • Step S503: Processing circuitry 52 generates, based on image 38, a first sub-image corresponding to subject 10a1 and a second sub-image corresponding to subject 10b.
  • if image 38 is a hyperspectral image, processing circuitry 52 extracts the first and second sub-images from image 38, for example by edge detection.
  • in this case, the first sub-image is one part of the hyperspectral image and the second sub-image is another part of the hyperspectral image.
  • the first sub-image may be an area defined by the contour of subject 10a1 appearing in image 38, or an area of any shape, such as a rectangle or a circle, that includes subject 10a1 appearing in image 38.
  • processing circuitry 52 may execute an operation of extracting the second sub-image from the hyperspectral image between steps S504 and S505.
  • if image 38 is a compressed image, processing circuitry 52 may generate a hyperspectral image based on the compressed image and extract the first and second sub-images from that hyperspectral image. In this case, the first sub-image is one part of the hyperspectral image and the second sub-image is another part of the hyperspectral image. Note that processing circuitry 52 may perform the operation of extracting the second sub-image from the hyperspectral image between steps S504 and S505.
  • alternatively, processing circuitry 52 may extract the first sub-image from the compressed image itself. Processing circuitry 52 then further generates a hyperspectral image based on the compressed image and extracts the second sub-image from the hyperspectral image. In this case, the first sub-image is part of the compressed image and the second sub-image is part of the hyperspectral image. Note that processing circuitry 52 may perform the operations of generating the hyperspectral image based on the compressed image and extracting the second sub-image from it between steps S504 and S505.
  • Step S504: The processing circuit 52 determines, based on the pixel values of the first sub-image, whether the first sub-image satisfies a suitability condition for use as a correction image for correcting the second sub-image.
  • if image 38 is a hyperspectral image, the suitability conditions for an image for whiteboard correction are as described with reference to FIGS. 6A to 7.
  • the same applies if image 38 is a compressed image and the first sub-image is part of a hyperspectral image.
  • Step S505: The processing circuitry 52 corrects the second sub-image based on the first sub-image.
  • if image 38 is a hyperspectral image, processing circuitry 52 performs whiteboard correction by normalizing the second sub-image by the first sub-image. The same is true if image 38 is a compressed image, the first sub-image is part of the hyperspectral image, and the second sub-image is another part of the hyperspectral image.
  • if the first sub-image is part of the compressed image, processing circuit 52 performs whiteboard correction by extracting a sub-image corresponding to subject 10a1 from the hyperspectral image from which the second sub-image has been extracted, and normalizing the second sub-image by the extracted sub-image; a minimal sketch of this in-scene correction appears below.
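  • The following sketch illustrates a method-4-style correction within a single scene, assuming the hyperspectral image is a NumPy array of shape (bands, height, width) and that rectangular regions for subjects 10a1 and 10b are known. Dividing each band of the target region by the mean value of the whiteboard region in that band is an illustrative choice, since the two regions generally differ in size; the function and parameter names are assumptions.
```python
import numpy as np

def whiteboard_correction_in_scene(hsi: np.ndarray,
                                   white_box: tuple,
                                   target_box: tuple,
                                   eps: float = 1e-12) -> np.ndarray:
    """In-scene whiteboard correction.

    hsi: hyperspectral image, shape (bands, height, width).
    white_box / target_box: (top, bottom, left, right) rectangles for the
    whiteboard (subject 10a1) and the analysis target (subject 10b)."""
    t0, t1, l0, l1 = white_box
    white_mean = hsi[:, t0:t1, l0:l1].mean(axis=(1, 2))  # per-band mean
    u0, u1, v0, v1 = target_box
    target = hsi[:, u0:u1, v0:v1]
    return target / np.maximum(white_mean[:, None, None], eps)
```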
  • Step S506: The processing circuitry 52 causes the storage device 56 to store the corrected sub-image.
  • Step S507: The processing circuit 52 causes the display device 40 to display an error indicating that there is an abnormality in the first sub-image.
  • the external server may execute the operations of steps S502 to S504.
  • the above method used for image correction in this embodiment can prevent inappropriate whiteboard correction when the user believes that he or she has photographed an appropriate subject 10a1 in response to instructions, but the subject 10a1 is actually not appropriate. As a result, it is possible to obtain more accurate spectral information about the subject 10b compared to a case where it is not determined whether the first sub-image satisfies the suitability conditions. Furthermore, the above method makes it possible to perform whiteboard correction even in cases where it is not easy to capture a whiteboard over the entire image, such as when photographing outdoors.
  • FIG. 16A is a diagram showing a schematic configuration example of a camera 30 which is a compressed sensing type hyperspectral camera.
  • the camera 30 shown in Fig. 16A includes an optical system 31, a filter array 32, an image sensor 33, and an image processing device 34, similar to the configuration disclosed in Patent Document 2.
  • the optical system 31 and the filter array 32 are disposed on the optical path of light incident from the subject 10b.
  • the filter array 32 is disposed between the optical system 31 and the image sensor 33.
  • the image sensor 33 generates data of a compressed image 35 in which information of a plurality of wavelength bands is compressed as a two-dimensional monochrome image.
  • the image processing device 34 generates data representing a plurality of images corresponding one-to-one to the plurality of wavelength bands included in the target wavelength range, based on the data of the compressed image 35 generated by the image sensor 33.
  • the number of wavelength bands included in the target wavelength range is set to N (N is an integer equal to or greater than 4).
  • the N images generated based on the compressed image 35 are referred to as restored images 36W1, 36W2, ..., 36WN, and these may be collectively referred to as the "hyperspectral image 36."
  • the filter array 32 is an array of multiple light-transmitting filters arranged in rows and columns.
  • the multiple filters include multiple types of filters that differ from one another in terms of spectral transmittance, i.e., the wavelength dependency of light transmittance.
  • the filter array 32 modulates the intensity of the incident light for each wavelength and outputs it. This process performed by the filter array 32 is called “encoding,” and the filter array 32 is also called an "encoding mask.”
  • the filter array 32 is placed near or directly above the image sensor 33.
  • “near” means close enough that a relatively clear image of the light from the optical system 31 is formed on the surface of the filter array 32.
  • “Directly above” means that the two are so close that there is almost no gap between them.
  • the filter array 32 and the image sensor 33 may be integrated.
  • the optical system 31 includes at least one lens. In FIG. 16A, the optical system 31 is shown as one lens, but the optical system 31 may be a combination of multiple lenses. The optical system 31 forms an image on the imaging surface of the image sensor 33 via the filter array 32.
  • FIGS. 16B to 16D are diagrams showing configuration examples of the camera 30 in which the filter array 32 is disposed away from the image sensor 33.
  • the filter array 32 is disposed between the optical system 31 and the image sensor 33 and at a position distant from the image sensor 33.
  • the filter array 32 is disposed between the subject 10b and the optical system 31.
  • the camera 30 has two optical systems 31A and 31B, and the filter array 32 is disposed between them.
  • an optical system including one or more lenses may be disposed between the filter array 32 and the image sensor 33.
  • the image sensor 33 is a monochrome type light detection device having a plurality of light detection elements (also referred to as "pixels" in this specification) arranged two-dimensionally.
  • the image sensor 33 may be, for example, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, or an infrared array sensor.
  • the light detection elements include, for example, photodiodes.
  • the image sensor 33 does not necessarily have to be a monochrome type sensor.
  • a color type sensor may be used.
  • the color type sensor may include, for example, a plurality of red (R) filters that transmit red light, a plurality of green (G) filters that transmit green light, and a plurality of blue (B) filters that transmit blue light.
  • the color type sensor may further include a plurality of IR filters that transmit infrared light.
  • the color type sensor may also include a plurality of transparent filters that transmit all red, green, and blue light.
  • the image processing device 34 may be a computer including one or more processors and one or more storage media such as a memory.
  • the image processing device 34 generates data of restored images 36W1, 36W2, ..., 36WN based on the compressed image 35 acquired by the image sensor 33.
  • FIG. 17A is a diagram showing a schematic example of a filter array 32.
  • the filter array 32 has a number of regions arranged two-dimensionally. In this specification, these regions are sometimes referred to as "cells.” In each region, an optical filter having an individually set spectral transmittance is arranged.
  • the spectral transmittance is expressed as a function T(λ), where λ is the wavelength of the incident light.
  • the spectral transmittance T(λ) can take a value between 0 and 1.
  • the filter array 32 has 48 rectangular regions arranged in 6 rows and 8 columns. This is merely an example, and in actual applications, more regions may be provided. The number may be approximately the same as the number of pixels in the image sensor 33, for example. The number of filters included in the filter array 32 is determined according to the application, ranging from several tens to tens of millions, for example.
  • FIG. 17B is a diagram showing an example of the spatial distribution of the light transmittance of each of the wavelength bands W1, W2, ..., WN included in the target wavelength range.
  • the difference in the shading of each region represents the difference in the transmittance. The lighter the region, the higher the transmittance, and the darker the region, the lower the transmittance.
  • the spatial distribution of the light transmittance differs depending on the wavelength band.
  • FIGS. 17C and 17D are diagrams showing examples of the spectral transmittance of region A1 and region A2, respectively, included in the filter array 32 shown in FIG. 17A.
  • the spectral transmittance of region A1 and the spectral transmittance of region A2 are different from each other. In this way, the spectral transmittance of the filter array 32 differs depending on the region. However, it is not necessary that the spectral transmittance of all regions is different.
  • the filter array 32 the spectral transmittance of at least some of the multiple regions is different from each other.
  • the filter array 32 includes two or more filters having different spectral transmittances from each other.
  • the number of spectral transmittance patterns of the multiple regions included in the filter array 32 may be the same as or greater than the number N of wavelength bands included in the target wavelength range.
  • the filter array 32 may be designed so that the spectral transmittance of more than half of the regions is different.
  • FIG. 18A is a diagram for explaining the characteristics of the spectral transmittance in a certain region of the filter array 32.
  • the spectral transmittance has multiple maximum values P1 to P5 and multiple minimum values for wavelengths in the target wavelength range W.
  • the maximum value of the light transmittance in the target wavelength range W is normalized to 1 and the minimum value is 0.
  • the spectral transmittance has maximum values in wavelength ranges such as wavelength band W2 and wavelength band WN-1. In this way, the spectral transmittance of each region can be designed to have maximum values in at least two wavelength ranges among the wavelength bands W1, W2, ..., WN.
  • the maximum values P1, P3, P4, and P5 are 0.5 or more.
  • FIG. 18B is a diagram showing, as an example, the result of averaging the spectral transmittance shown in FIG. 18A for each of the wavelength bands W1, W2, ..., WN.
  • the averaged transmittance is obtained by integrating the spectral transmittance T( ⁇ ) for each wavelength band and dividing by the bandwidth of the wavelength band.
  • the transmittance value averaged for each wavelength band in this way is defined as the transmittance in that wavelength band.
  • the transmittance is remarkably high in three wavelength ranges having maximum values P1, P3, and P5. In particular, the transmittance exceeds 0.8 in two wavelength ranges having maximum values P3 and P5.
  • a grayscale transmittance distribution is assumed in which the transmittance of each region can take any value between 0 and 1.
  • a binary scale transmittance distribution may be used in which the transmittance of each region can take a value of either approximately 0 or approximately 1.
  • each region transmits most of the light in at least two of the multiple wavelength ranges included in the target wavelength range, and does not transmit most of the light in the remaining wavelength ranges.
  • "most" refers to approximately 80% or more.
  • a part of all the cells may be replaced with a transparent region.
  • a transparent region transmits light in each of the wavelength bands W1, W2, ..., WN included in the target wavelength range W with a similarly high transmittance, for example, a transmittance of 80% or more.
  • the multiple transparent regions may be arranged, for example, in a checkerboard pattern. That is, in two arrangement directions of the multiple regions in the filter array 32, regions whose light transmittance varies depending on the wavelength and transparent regions may be arranged alternately.
  • the data showing the spatial distribution of the spectral transmittance of the filter array 32 is acquired in advance based on design data or actual measurement calibration, and is stored in a storage medium provided in the image processing device 34. This data is used in the calculation processing described below.
  • the filter array 32 may be configured using, for example, a multilayer film, an organic material, a diffraction grating structure, a microstructure including a metal, or a metasurface.
  • a multilayer film for example, a dielectric multilayer film or a multilayer film including a metal layer may be used.
  • at least one of the thickness, material, and stacking order of each multilayer film is formed so that it differs for each cell. This allows different spectral characteristics to be realized for each cell.
  • a configuration using an organic material can be realized by making the pigment or dye contained different for each cell, or by stacking different materials.
  • a configuration using a diffraction grating structure can be realized by providing a diffraction structure with a different diffraction pitch or depth for each cell.
  • a microstructure including a metal can be created by utilizing the spectrum due to the plasmon effect.
  • a metasurface can be created by microfabricating a dielectric material in a size smaller than the wavelength of the incident light.
  • the refractive index for the incident light is spatially modulated.
  • the incident light may be encoded by directly processing a plurality of pixels included in the image sensor 33 without using the filter array 32.
  • the camera 30 has multiple light receiving areas with different optical response characteristics.
  • the multiple light receiving areas can be realized by an image sensor 33 arranged adjacent to or directly above the filter array 32.
  • the optical response characteristics of the multiple light receiving areas are determined based on the optical transmission characteristics of the multiple filters included in the filter array 32.
  • the multiple light receiving regions can be realized by, for example, an image sensor 33 in which multiple pixels are directly processed so that their photoresponse characteristics are irregularly different from one another.
  • the photoresponse characteristics of the multiple light receiving regions are determined based on the photoresponse characteristics of the multiple pixels included in the image sensor 33.
  • the above multilayer film, organic material, diffraction grating structure, microstructure including metal, or metasurface can encode incident light if it is configured such that the spectral transmittance is modulated to vary depending on the position in a two-dimensional plane. Therefore, the above multilayer film, organic material, diffraction grating structure, microstructure including metal, or metasurface does not need to be configured with multiple filters arranged in an array.
  • the image processing device 34 reconstructs a multi-wavelength hyperspectral image 36 based on the compressed image 35 output from the image sensor 33 and the spatial distribution characteristics of the transmittance for each wavelength of the filter array 32.
  • multi-wavelength means more wavelength ranges than the wavelength ranges of the three colors RGB captured by a normal color camera, for example.
  • the number of wavelength ranges can be, for example, about 4 to 100. This number of wavelength ranges is referred to as the "number of bands.” Depending on the application, the number of bands may exceed 100.
  • the data to be obtained is the data of the hyperspectral image 36, and this data is denoted as f.
  • f is data obtained by integrating the data f1, f2, ..., fN of the N wavelength bands.
  • let the horizontal direction of the image be the x direction and the vertical direction be the y direction.
  • let m denote the number of pixels in the x direction of the image data to be obtained, and n the number of pixels in the y direction.
  • each of the image data f1, f2, ..., fN then has m × n pixel values, so the data f has m × n × N elements.
  • the data g of the compressed image 35, obtained by encoding and multiplexing with the filter array 32, is two-dimensional data having m × n pixel values corresponding to the m × n pixels.
  • the data g can be expressed by the following formula (1):

    g = Hf    (1)

  • here, f represents the data of the hyperspectral image expressed as a one-dimensional vector. Since each of f1, f2, ..., fN has m × n elements, f is a one-dimensional vector with m × n × N rows and 1 column.
  • the data g of the compressed image 35 is likewise converted into, and expressed as, a one-dimensional vector with m × n rows and 1 column.
  • the matrix H represents a transformation in which each component f1, f2, ..., fN of the vector f is encoded and intensity-modulated with different encoding information for each wavelength band, and the results are then added. Therefore, H is a matrix with m × n rows and m × n × N columns.
  • formula (1) can also be written out for each pixel; in that element-wise form, g_ij represents the pixel value in the i-th row and j-th column of the compressed image 35.
  • the image processing device 34 utilizes the sparsity of the image contained in the data f to find a solution using a compressed sensing technique.
  • the desired data f is estimated by solving the following equation (2):

    f' = argmin_f { ||g − Hf||² + τΦ(f) }    (2)
  • f' represents the estimated f data.
  • the first term in the parentheses in the above equation represents the amount of deviation between the estimated result Hf and the acquired data g, the so-called residual term.
  • the sum of squares is used as the residual term, but the absolute value or the square root of the sum of squares, etc. may also be used as the residual term.
  • the second term in the parentheses is a regularization term or stabilization term. Equation (2) means to find f that minimizes the sum of the first and second terms.
  • the function in the parentheses in equation (2) is called the evaluation function.
  • the image processing device 34 can converge the solution by recursive iterative calculations and calculate the f that minimizes the evaluation function as the final solution f'.
  • the first term in the parentheses in formula (2) means an operation to obtain the sum of squares of the difference between the acquired data g and Hf obtained by transforming f in the estimation process by the matrix H.
  • the second term Φ(f) is a constraint condition in the regularization of f, and is a function reflecting the sparse information of the estimated data. This function has the effect of smoothing or stabilizing the estimated data.
  • the regularization term can be expressed, for example, by the discrete cosine transform (DCT), wavelet transform, Fourier transform, or total variation (TV) of f. For example, when the total variation is used, stable estimated data that suppresses the influence of noise in the observed data g can be obtained.
  • the sparsity of the subject 10b in the space of each regularization term differs depending on the texture of the subject 10b.
  • a regularization term that makes the texture of the subject 10b sparser in the space of the regularization term may be selected.
  • multiple regularization terms may be included in the operation.
  • τ is a weighting coefficient. The larger the weighting coefficient τ, the more redundant data is reduced and the higher the compression ratio; the smaller τ, the weaker the convergence to a solution.
  • the weighting coefficient τ is set to an appropriate value at which f converges to a certain extent without being over-compressed; a toy sketch of solving equation (2) appears below.
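  • As a toy illustration of solving equation (2), the following sketch applies ISTA (iterative shrinkage-thresholding) with Φ(f) = ||f||₁ on a small dense H. Real systems use operator-based H, accelerated solvers, and the regularization terms listed above such as total variation; the step size and iteration count here are illustrative assumptions.
```python
import numpy as np

def ista(H: np.ndarray, g: np.ndarray, tau: float,
         step: float, n_iter: int = 500) -> np.ndarray:
    """Minimize (1/2)*||g - Hf||^2 + tau*||f||_1 by iterative
    shrinkage-thresholding. `step` should be at most 1 / L, where L is
    the largest eigenvalue of H^T H, for the iteration to converge."""
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ f - g)   # gradient of the residual term
        z = f - step * grad        # gradient descent step
        f = np.sign(z) * np.maximum(np.abs(z) - step * tau, 0.0)  # soft threshold
    return f
```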
  • the image encoded by the filter array 32 is acquired in a blurred state on the imaging surface of the image sensor 33. Therefore, by storing this blur information in advance and reflecting the blur information in the above-mentioned matrix H, the hyperspectral image 36 can be reconstructed.
  • the blur information is represented by a point spread function (PSF).
  • the PSF is a function that defines the degree of spread of a point image to surrounding pixels. For example, when a point image corresponding to one pixel on an image spreads to a region of k ⁇ k pixels around the pixel due to blurring, the PSF can be defined as a group of coefficients that indicate the influence on the pixel value of each pixel in that region, that is, a matrix.
  • the hyperspectral image 36 can be reconstructed by reflecting the influence of the blurring of the encoding pattern by the PSF in the matrix H.
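  • For illustration, the blur of the encoding pattern can be reflected before building the matrix H by convolving each band's transmittance pattern with the k × k PSF; the SciPy call and the symmetric boundary handling below are illustrative assumptions, not details from this disclosure.
```python
import numpy as np
from scipy.signal import convolve2d

def blur_encoding_pattern(pattern: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Apply the point spread function to one band's encoding pattern.
    The blurred patterns are then used to fill the rows of matrix H."""
    psf = psf / psf.sum()  # keep overall transmittance unchanged
    return convolve2d(pattern, psf, mode="same", boundary="symm")
```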
  • the position at which the filter array 32 is arranged is arbitrary, but a position at which the encoding pattern of the filter array 32 does not diffuse too much and disappear can be selected.
  • a hyperspectral image 36 can be restored based on a compressed image 35 of the subject 10b acquired by the image sensor 33 via the filter array 32. Details of the method for restoring the hyperspectral image 36 are disclosed in Patent Document 2. The disclosure of Patent Document 2 is incorporated herein by reference in its entirety.
  • This method allows for more accurate acquisition of spectral information of the subject being analyzed by preventing inappropriate corrections.
  • This method can prevent inappropriate black subtraction when photographing in a light-shielded state.
  • Determining whether the first image satisfies the suitability condition includes determining that the first image satisfies the suitability condition when a condition for determining that the pixel values of the first image are small is satisfied. The method according to technique 2.
  • This method makes it possible to determine whether or not a photograph was taken in a dark environment.
  • This method can prevent inappropriate white board correction when the white board is not photographed correctly.
  • Determining whether the first image satisfies the suitability condition includes determining that the first image satisfies the suitability condition when a condition for determining that the spatial distribution of pixel values of the first image is constant is satisfied. The method according to technique 4.
  • This method makes it possible to determine whether or not the whiteboard has been photographed correctly.
  • Determining whether the first image satisfies the suitability condition includes comparing spectral information acquired from the first image with pre-stored spectral information. The method according to technique 4.
  • This method makes it possible to determine whether or not the whiteboard has been photographed correctly.
  • The first image is a compressed image in which information of a plurality of wavelength bands relating to the first subject is compressed, and whether the first image satisfies the suitability condition is determined based on a histogram of pixel values of the compressed image. The method according to technique 4.
  • This method makes it possible to determine whether or not the whiteboard has been photographed correctly.
  • This method makes it possible to determine whether an image was captured in a darkened state, or whether a white board was captured correctly.
  • This method allows users to check whether they are photographing the whiteboard correctly.
  • This method allows the user to check the light blocking status.
  • This method allows the second image to be appropriately corrected based on the first image.
  • This method allows for more accurate acquisition of spectral information of the subject being analyzed by preventing inappropriate corrections.
  • This method can prevent inappropriate white board correction when the white board is not photographed correctly.
  • This processing circuitry prevents inappropriate corrections and allows for more accurate acquisition of spectral information of the subject being analyzed.
  • An object other than a whiteboard may be used as the object for correction.
  • the object may have a spectral reflectance characteristic in which the reflectance of light in a specific wavelength range is high and the reflectance of light in another specific wavelength range is low.
  • the spectral reflectance characteristic of the object may be stored in the memory 54. Based on the stored spectral reflectance characteristic, correction may be performed to reduce the influence of the shape of the spectrum of the irradiated light, the irradiation distribution at the time of shooting, the peripheral light falloff of the lens, the uneven sensitivity of the image sensor, and the like.
  • the first image may be an image including information of three or fewer wavelength bands, for example.
  • the second image including information of multiple wavelength bands may be an image generated based on a wavelength band corresponding to red, a wavelength band corresponding to green, and a wavelength band corresponding to blue.
  • the second image including information on multiple wavelength bands may be, for example, an image including information on three or fewer wavelength bands.
  • the corrected image generated by correcting the second image using the first image may be an image including information on three or fewer wavelength bands.
  • the technology disclosed herein is useful, for example, in cameras and measuring devices that capture multi-wavelength or high-resolution images.
  • the technology disclosed herein can also be applied, for example, to sensing for biomedical and cosmetic applications, food foreign body and pesticide residue inspection systems, remote sensing systems, and vehicle-mounted sensing systems.
  • Reference Signs List: 10a1, 10a2, 10b subject; 20 light source; 30 hyperspectral camera; 31, 31A, 31B optical system; 32 filter array; 33 image sensor; 34 image processing device; 35 compressed image; 36W1 to 36WN restored image; 38 image; 40 display device; 42 input UI; 44 display UI; 50 processing device; 52 processing circuit; 54 memory; 56 storage device; 60 stage; 70 support; 80 adjustment device


Abstract

This method used for image correction includes: employing an image that is generated by imaging a first subject as a first image, employing an image that includes information pertaining to a plurality of wavelength bands and is generated by imaging a second subject as a second image, and acquiring the first image as a correction image for correcting the second image; and determining, on the basis of a pixel value for the first image, whether the first image satisfies a suitability condition as the correction image.

Description

Method used for image correction, and processing circuit for executing said method

 The present disclosure relates to a method used to correct an image, and a processing circuit that executes the method.

 By utilizing spectral information from a large number of narrow wavelength bands (hereafter simply referred to as "bands"), for example several dozen bands, it becomes possible to grasp the detailed physical properties of a subject, which was not possible with conventional RGB images that have information from three bands of red (R), green (G), and blue (B). A camera that captures images in such many wavelength bands is called a "hyperspectral camera." These cameras are used in a variety of fields, such as food inspection, biomedical testing, pharmaceutical development, and mineral composition analysis.

 Patent Document 1 discloses an image analysis device for analyzing the distribution of substances in biological tissue. The image analysis device obtains multiple sample images by illuminating biological tissue with light in multiple wavelength bands selected from a predetermined wavelength range and photographing the tissue. Sample data based on the multiple sample images is compared with teacher data on the substances to generate distribution data of the substances in the tissue. Patent Document 1 also discloses normalizing and correcting the intensity of light reflected by a sample based on the intensity of light reflected by a reference member such as a whiteboard.

 Patent Document 2 discloses an example of a hyperspectral imaging device that uses compressed sensing technology. Compressed sensing is a technique that restores more data than was observed by assuming that the data distribution of the observation target is sparse in a certain space (for example, a frequency space). The imaging device disclosed in Patent Document 2 is equipped with a coding mask, which is an array of multiple optical filters with mutually different spectral transmittances, on the optical path connecting the subject and the image sensor. The imaging device can generate images of multiple wavelength bands in one shot by performing restoration calculations based on a compressed image acquired by imaging through the coding mask.

Patent Document 1: International Publication No. 2015/199067
Patent Document 2: U.S. Pat. No. 9,599,511
Patent Document 3: JP 2006-153498 A

 In the correction disclosed in Patent Document 1, if the subject for correction, such as a reference member, is not appropriate, it is not easy to accurately obtain spectral information of the subject to be analyzed, because the correction itself becomes inappropriate. Therefore, there is a demand for more accurate acquisition of spectral information of the subject to be analyzed by preventing inappropriate correction.

 A method used for correcting an image according to one aspect of the present disclosure includes: taking an image generated by photographing a first subject as a first image, and an image generated by photographing a second subject and including information of a plurality of wavelength bands as a second image; acquiring the first image as a correction image for correcting the second image; and determining, based on pixel values of the first image, whether the first image satisfies a suitability condition for the correction image.

 A comprehensive or specific aspect of the present disclosure may be realized as a system, an apparatus, a method, an integrated circuit, a computer program, or a recording medium such as a computer-readable recording disk, or as any combination of a system, an apparatus, a method, an integrated circuit, a computer program, and a recording medium. A computer-readable recording medium may include a non-volatile recording medium such as a CD-ROM (Compact Disc-Read Only Memory). An apparatus may be composed of one or more devices. When an apparatus is composed of two or more devices, the two or more devices may be located within a single device, or may be located separately within two or more separate devices. In this specification and the claims, "apparatus" may mean not only one device, but also a system consisting of multiple devices.

 According to one aspect of the present disclosure, by preventing inappropriate corrections, it is possible to obtain more accurate spectral information of the subject being analyzed.

FIG. 1A is a diagram for explaining the relationship between a target wavelength range W and a plurality of wavelength bands W1, W2, ..., WN included therein.
FIG. 1B is a diagram schematically showing an example of a hyperspectral image.
FIG. 2A is a block diagram schematically showing the configuration of an imaging system according to an exemplary embodiment of the present disclosure that photographs a subject for correction.
FIG. 2B is a block diagram schematically showing the configuration of an imaging system according to an exemplary embodiment of the present disclosure that photographs another subject for correction.
FIG. 2C is a block diagram schematically showing the configuration of an imaging system according to an exemplary embodiment of the present disclosure that photographs a subject to be analyzed.
FIG. 3 is a diagram schematically showing a specific configuration of an imaging system according to an exemplary embodiment of the present disclosure.
FIG. 4A is a diagram schematically showing an example of whiteboard correction.
FIG. 4B is a diagram schematically showing an example of black subtraction.
FIG. 5 is a flowchart schematically showing example 1 of the processing operations executed by the processing circuit in the imaging system according to the embodiment.
FIG. 6A is a diagram schematically showing the first hyperspectral image and a reflectance spectrum when the subject for correction is appropriate.
FIG. 6B is a diagram schematically showing a hyperspectral image and a reflectance spectrum when the subject for correction is not appropriate.
FIG. 7 is a diagram showing the first hyperspectral image, and edge images obtained by edge detection of the first hyperspectral image, when the subject for correction is appropriate and when it is not.
FIG. 8A is a diagram showing the first hyperspectral image and a histogram of its pixel values when the other subject for correction is appropriate.
FIG. 8B is a diagram showing the first hyperspectral image and a histogram of its pixel values when the other subject for correction is not appropriate.
FIG. 9 is a flowchart schematically showing example 2 of the processing operations executed by the processing circuit in the imaging system according to the embodiment.
FIG. 10A is a diagram showing the first compressed image and a histogram of its pixel values when the subject for correction is appropriate.
FIG. 10B is a diagram showing the first compressed image and a histogram of its pixel values when the subject for correction is not appropriate.
FIG. 11 is a flowchart schematically showing example 3 of the processing operations executed by the processing circuit in the imaging system according to the embodiment.
FIG. 12A is a diagram schematically showing an example of the UI of the display device.
FIG. 12B is a diagram schematically showing another example of the UI of the display device.
FIG. 12C is a diagram schematically showing still another example of the UI of the display device.
FIG. 13 is a flowchart schematically summarizing examples 1 to 3 of the processing operations executed by the processing circuit in the imaging system according to the embodiment.
FIG. 14 is a diagram schematically showing an example of an image generated by photographing, with a camera, a scene including a subject for correction and a subject to be analyzed.
FIG. 15 is a flowchart schematically showing example 4 of the processing operations executed by the processing circuit in the imaging system according to the embodiment.
FIG. 16A is a diagram schematically showing a configuration example of a compressed-sensing hyperspectral camera.
FIG. 16B is a diagram schematically showing another configuration example of a compressed-sensing hyperspectral camera.
FIG. 16C is a diagram schematically showing still another configuration example of a compressed-sensing hyperspectral camera.
FIG. 16D is a diagram schematically showing still another configuration example of a compressed-sensing hyperspectral camera.
FIG. 17A is a diagram schematically showing an example of a filter array.
FIG. 17B is a diagram showing an example of the spatial distribution of the light transmittance of each of the wavelength bands W1, W2, ..., WN included in the target wavelength range.
FIG. 17C is a diagram showing an example of the spectral transmittance of a region included in the filter array shown in FIG. 17A.
FIG. 17D is a diagram showing an example of the spectral transmittance of another region included in the filter array shown in FIG. 17A.
FIG. 18A is a diagram for explaining the characteristics of the spectral transmittance in a certain region of the filter array.
FIG. 18B is a diagram showing the result of averaging the spectral transmittance shown in FIG. 18A for each of the wavelength bands W1, W2, ..., WN.

The embodiments described below each illustrate a general or specific example. The numerical values, shapes, materials, components, arrangement and connection of the components, steps, and order of the steps shown in the following embodiments are merely examples and are not intended to limit the technology of the present disclosure. Among the components in the following embodiments, components not recited in the independent claims expressing the broadest concept are described as optional components. Each figure is a schematic diagram and is not necessarily a precise illustration. Furthermore, in each figure, substantially identical or similar components are denoted by the same reference numerals, and duplicate description may be omitted or simplified.

In the present disclosure, all or part of a circuit, unit, device, member, or section, or all or part of a functional block in a block diagram, may be implemented by one or more electronic circuits including, for example, a semiconductor device, a semiconductor integrated circuit (IC), or an LSI (large scale integration). The LSI or IC may be integrated on a single chip or may be configured by combining a plurality of chips. For example, functional blocks other than storage elements may be integrated on a single chip. Although the terms LSI and IC are used here, the name changes depending on the degree of integration, and the circuit may also be called a system LSI, a VLSI (very large scale integration), or a ULSI (ultra large scale integration). A field programmable gate array (FPGA), which is programmed after the LSI is manufactured, or a reconfigurable logic device, which allows reconfiguration of connections inside the LSI or setup of circuit sections inside the LSI, can also be used for the same purpose.

Furthermore, all or part of the functions or operations of a circuit, unit, device, member, or section can be executed by software processing. In this case, the software is recorded on one or more non-transitory recording media such as a ROM, an optical disc, or a hard disk drive, and when the software is executed by a processor, the functions specified by the software are executed by the processor and peripheral devices.

The system or device may include one or more non-transitory recording media on which the software is recorded, the processor, and any required hardware devices such as an interface.

(Findings on which the present disclosure is based)
Before describing embodiments of the present disclosure, the findings on which the present disclosure is based will be described.

First, an example of a hyperspectral image generated by a hyperspectral camera will be briefly described with reference to FIGS. 1A and 1B. A hyperspectral image is image data having information for more wavelengths than a typical RGB image. An RGB image has, for each pixel, a value for each of three bands: red (R), green (G), and blue (B). In contrast, a hyperspectral image has, for each pixel, values for more bands than the number of bands of an RGB image. In this specification, a "hyperspectral image" means image data including a plurality of images corresponding respectively to four or more bands included in a predetermined target wavelength range. The value that each pixel has for each band is referred to as a "pixel value" in the following description. The number of bands in a hyperspectral image is typically 10 or more and may exceed 100 in some cases. A "hyperspectral image" is also called a "hyperspectral data cube" or a "hyperspectral cube."

FIG. 1A is a diagram for explaining the relationship between the target wavelength range W and a plurality of wavelength bands W1, W2, ..., WN included therein. The target wavelength range W can be set to various ranges depending on the application. The target wavelength range W can be, for example, a visible-light wavelength range of about 400 nm to about 700 nm, a near-infrared wavelength range of about 700 nm to about 2500 nm, or a near-ultraviolet wavelength range of about 10 nm to about 400 nm. Alternatively, the target wavelength range W may be a mid-infrared or far-infrared wavelength range. Thus, the wavelength range used is not limited to the visible range. In this specification, not only visible light but also electromagnetic waves with wavelengths outside the visible range, such as ultraviolet and near-infrared radiation, are referred to as "light" for convenience.

In the example shown in FIG. 1A, with N being an arbitrary integer of 4 or more, the target wavelength range W is divided into N equal parts, which are designated as wavelength bands W1, W2, ..., WN. However, the bands are not limited to this example. The plurality of wavelength bands included in the target wavelength range W may be set arbitrarily. Each wavelength band may be a wavelength range having a predetermined width, such as 5 nm, 10 nm, 20 nm, or 50 nm. The widths of the wavelength bands may be the same or different. With four or more wavelength bands, more information can be obtained from a hyperspectral image than from an RGB image.

FIG. 1B is a diagram schematically showing an example of a hyperspectral image. In the example shown in FIG. 1B, the subject is an apple. The hyperspectral image 36 includes images 36W1, 36W2, ..., 36WN corresponding to the wavelength bands W1, W2, ..., WN, respectively. Each of these images includes a plurality of two-dimensionally arranged pixels. FIG. 1B illustrates vertical and horizontal dashed lines indicating pixel boundaries. The actual number of pixels per image can be large, for example from tens of thousands to tens of millions, but in FIG. 1B the pixel boundaries are drawn as if the number of pixels were extremely small for ease of understanding. Reflected light generated when the subject is irradiated with light is detected by a plurality of photodetector elements included in an image sensor. The signal indicating the amount of light detected by each photodetector element represents the pixel value of the pixel corresponding to that photodetector element. Each pixel in the hyperspectral image 36 has a pixel value for each wavelength band. Therefore, by acquiring the hyperspectral image 36, spectral information of the subject can be obtained, and based on that spectral information the optical characteristics of the subject can be analyzed accurately.
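For concreteness, the following is a minimal NumPy sketch of the data layout described above: a cube of shape (height, width, N) in which each pixel holds one pixel value per wavelength band. The dimensions and the band count N = 10 are illustrative assumptions, not values taken from this disclosure.

```python
import numpy as np

H, W_PIX, N = 4, 4, 10                    # tiny illustrative cube
cube = np.random.randint(0, 4096, size=(H, W_PIX, N), dtype=np.uint16)

spectrum = cube[0, 0, :]                  # pixel (0, 0): one pixel value per band
band_image = cube[:, :, 2]                # the 2-D image for the third band
print(spectrum.shape, band_image.shape)   # (10,) (4, 4)
```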

Next, a correction method performed when a hyperspectral camera photographs a subject to acquire a hyperspectral image of the subject will be described. The method may include, for example, the following operations (1) to (3).
(1) A correction hyperspectral image is acquired by photographing a subject for correction with the hyperspectral camera. The correction hyperspectral image includes a plurality of correction images corresponding respectively to a plurality of wavelength bands.
(2) An analysis-target hyperspectral image is acquired by photographing the subject to be analyzed with the hyperspectral camera. The analysis-target hyperspectral image includes a plurality of analysis-target images corresponding respectively to the plurality of wavelength bands.
(3) The analysis-target hyperspectral image is corrected based on the correction hyperspectral image.

In operation (3), one example of correcting the analysis-target hyperspectral image based on the correction hyperspectral image is, as in the correction disclosed in Patent Document 1, to normalize each of the analysis-target images by the correction image that corresponds to it, among the plurality of correction images, in having the same wavelength band. When the subject for correction is a white board, such correction is called "white board correction." White board correction is performed to reduce the effects of the spectral shape of the illumination light, the illumination distribution during imaging, peripheral light falloff of the lens, nonuniform sensitivity of the image sensor, and the like.

In this specification, "normalizing image A by image B" means dividing the pixel value of each of the pixels in image A by the pixel value of the corresponding pixel in image B, and multiplying the result by the maximum pixel value that the pixel can take. When the spatial distribution of the pixel values of image B is substantially constant, the pixel value of one representative pixel, or the average of the pixel values of two or more representative pixels, may be used instead of the pixel value of the corresponding pixel. The maximum pixel value that a pixel can take is 255 for 8-bit data and 4095 for 12-bit data.

"Normalizing image A by image B" may also be interpreted as calculating (pvmax × (pixel value pvA11 of pixel pA11 included in image A) / (pixel value pvB11 of pixel pB11 included in image B)), ..., (pvmax × (pixel value pvAmn of pixel pAmn included in image A) / (pixel value pvBmn of pixel pBmn included in image B)). Images A and B each include m × n pixels; the position of pixel pA11 in image A corresponds to pixel pB11 in image B, ..., and the position of pixel pAmn in image A corresponds to pixel pBmn in image B. The maximum value that each of the pixel values pvA11, ..., pvAmn, pvB11, ..., pvBmn can take may be pvmax.

In this specification, a pixel β in image B is said to correspond to a pixel α in image A when the position of pixel β is the same as the position of pixel α. Alternatively, a pixel β in image B is said to correspond to a pixel α in image A when, in the image sensor that outputs the image signals, the position of the photodetector element that outputs the signal of pixel β is the same as the position of the photodetector element that outputs the signal of pixel α.
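As a concrete illustration, the following is a minimal NumPy sketch of the normalization defined above, assuming the images are supplied as arrays of identical shape; because the operation is elementwise, it applies equally to a single band image or to a whole (height, width, N) cube. Treating zero-valued pixels of image B as producing an output of zero is an implementation assumption, not something specified above.

```python
import numpy as np

def normalize(image_a: np.ndarray, image_b: np.ndarray, pv_max: int = 4095) -> np.ndarray:
    """Return pv_max * A / B per pixel (pv_max: 255 for 8-bit data, 4095 for 12-bit data)."""
    a = image_a.astype(np.float64)
    b = image_b.astype(np.float64)
    b[b == 0] = np.nan                    # guard against division by zero (assumption)
    return np.nan_to_num(pv_max * a / b)  # pixels where B was zero become 0
```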

In operation (3), another example of correcting the analysis-target hyperspectral image based on the correction hyperspectral image is to subtract, from each of the analysis-target images, the correction image that corresponds to it, among the plurality of correction images, in having the same wavelength band. When the subject for correction is a light-shielding lens cap attached to the hyperspectral camera, the pixel values of the correction images are nearly zero, and such correction is therefore called "black subtraction." Black subtraction is performed to reduce the effects of dark current of the image sensor, bright-spot pixels, fixed-pattern noise, fluctuations in sensor performance, and the like.

In this specification, "subtracting image B from image A" means subtracting, from the pixel value of each of the pixels in image A, the pixel value of the corresponding pixel in image B. "Subtracting image B from image A" may also be interpreted as calculating ((pixel value pvA11 of pixel pA11 included in image A) - (pixel value pvB11 of pixel pB11 included in image B)), ..., ((pixel value pvAmn of pixel pAmn included in image A) - (pixel value pvBmn of pixel pBmn included in image B)). Images A and B each include m × n pixels; the position of pixel pA11 in image A corresponds to pixel pB11 in image B, ..., and the position of pixel pAmn in image A corresponds to pixel pBmn in image B. In black subtraction, the correction image B is subtracted from the image of the analysis target A. When the spatial distribution of the pixel values of image B is substantially constant, the pixel value of one representative pixel, or the average of the pixel values of two or more representative pixels, may be used instead of the pixel values of the corresponding pixels.
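A matching sketch of the subtraction defined above, under the same array assumptions as the normalization sketch; using a signed output type so that negative differences are preserved is an implementation choice, not something specified above.

```python
import numpy as np

def subtract(image_a: np.ndarray, image_b: np.ndarray) -> np.ndarray:
    """Return A - B per pixel; a signed dtype keeps negative differences."""
    return image_a.astype(np.int32) - image_b.astype(np.int32)
```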

By performing white board correction and/or black subtraction, spectral information of the subject to be analyzed can be obtained based on the plurality of corrected images corresponding respectively to the plurality of wavelength bands.
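The following sketch combines the two corrections over whole hyperspectral cubes. Applying black subtraction to both the analysis cube and the white cube before normalizing, and clamping the result to the representable range, are common calibration practices assumed here for illustration; the text above defines the two corrections individually.

```python
import numpy as np

def calibrate(analysis: np.ndarray, white: np.ndarray, dark: np.ndarray,
              pv_max: int = 4095) -> np.ndarray:
    """Black subtraction followed by white board correction on (H, W, N) cubes."""
    a = np.clip(analysis.astype(np.float64) - dark, 0.0, None)  # black subtraction
    w = np.clip(white.astype(np.float64) - dark, 1.0, None)     # avoid division by zero
    return np.clip(pv_max * a / w, 0.0, pv_max)                 # white board correction
```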

In white board correction, however, if dirt adheres to the white board used as the subject for correction, or if an object that is not a white board is mistakenly photographed as the subject for correction, the accuracy of the white board correction decreases. In black subtraction, if the light-shielding lens cap used as the subject for correction is not properly attached to the hyperspectral camera, the accuracy of the black subtraction decreases.

The present inventors identified the above problem and conceived of the method used for image correction according to the embodiments of the present disclosure to solve the problem. More specific embodiments of the present disclosure will be described below with reference to the drawings.

(Embodiment)
[1. Imaging system]
[1.1. Schematic configuration example of the imaging system]
A schematic configuration example of an imaging system according to an embodiment of the present disclosure will now be described with reference to FIGS. 2A to 2C. FIGS. 2A to 2C are block diagrams schematically showing the configuration of an imaging system according to an exemplary embodiment of the present disclosure. FIG. 2A illustrates a white board as the subject 10a1 for correction. The white board may be, for example, a standard white board whose reflectance has a uniform spatial distribution and small wavelength dispersion. FIG. 2B illustrates, as the subject 10a2 for correction, a light-shielding lens cap that enables imaging in a light-shielded state. FIG. 2C illustrates an apple as the subject 10b to be analyzed. The subject 10b is not limited to an apple and may be any object. In this specification, the subject 10a1 or 10a2 for correction is also referred to as the "first subject," and the subject 10b to be analyzed is also referred to as the "second subject."

The imaging system 100 shown in FIGS. 2A to 2C includes a light source 20, a camera 30, a display device 40, and a processing device 50. The processing device 50 includes a processing circuit 52, a memory 54, and a storage device 56. The thick arrowed lines shown in FIGS. 2A to 2C represent the flow of signals.

The camera 30 may be, for example, a line-scan or snapshot hyperspectral camera, described later. In that case, a correction hyperspectral image is generated by photographing the subject 10a1 or the subject 10a2 with the camera 30 as shown in FIG. 2A or 2B. The correction hyperspectral image is generated after the user receives an instruction to photograph the subject 10a1 or the subject 10a2. Similarly, an analysis-target hyperspectral image is generated by photographing the subject 10b with the camera 30 as shown in FIG. 2C. The analysis-target hyperspectral image is generated after the user receives an instruction to photograph the subject 10b.

Alternatively, the camera 30 may be a compressed-sensing hyperspectral camera, described later. In that case, a correction compressed image is generated by photographing the subject 10a1 or the subject 10a2 with the camera 30 as shown in FIG. 2A or 2B, and a correction hyperspectral image is generated based on the compressed image. The correction compressed image is generated after the user receives an instruction to photograph the subject 10a1 or the subject 10a2. Similarly, an analysis-target compressed image is generated by photographing the subject 10b with the camera 30 as shown in FIG. 2C, and an analysis-target hyperspectral image is generated based on the compressed image. The analysis-target compressed image is generated after the user receives an instruction to photograph the subject 10b.

In this specification, the correction hyperspectral image or the correction compressed image is also referred to as the "first image," and the analysis-target hyperspectral image or the analysis-target compressed image is also referred to as the "second image." A hyperspectral image includes a plurality of images corresponding to a plurality of wavelength bands, and a compressed image contains the information of a plurality of wavelength bands in compressed form, so the first and second images can be said to include information of a plurality of wavelength bands.

As will be described in detail later, in the imaging system 100 according to the present embodiment, the first image is acquired as a correction image for correcting the second image. When the first image is generated by photographing the subject 10a1 shown in FIG. 2A, the correction of the second image based on the first image is white board correction. When the first image is generated by photographing the subject 10a2 shown in FIG. 2B, the correction of the second image based on the first image is black subtraction.

As described above, in white board correction, if dirt adheres to the white board used as the subject 10a1, or if an object that is not a white board is mistakenly used as the subject 10a1, the accuracy of the white board correction decreases. In black subtraction, if the light-shielding lens cap serving as the subject 10a2 is not properly attached to the camera 30, the accuracy of the black subtraction decreases.

Therefore, in the imaging system 100 according to the present embodiment, whether the first image satisfies a suitability condition for a correction image is determined based on the pixel values of the first image. Thus, even when the user, having received the instruction, believes that the appropriate subject 10a1 or 10a2 has been photographed but the subject 10a1 or 10a2 is actually inappropriate, inappropriate correction can be prevented. As a result, appropriate correction makes it possible to obtain the spectral information of the subject 10b more accurately.
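A hedged sketch of such a suitability judgment based on pixel values is shown below. The use of mean brightness and spatial uniformity, and the specific thresholds, are illustrative assumptions; concrete criteria used in the embodiments (e.g. edge detection and pixel-value histograms, per the figures listed above) are described later.

```python
import numpy as np

def satisfies_white_condition(first_image: np.ndarray, pv_max: int = 4095) -> bool:
    """A suitable white board image should be bright and spatially uniform."""
    return bool(first_image.mean() > 0.8 * pv_max and first_image.std() < 0.05 * pv_max)

def satisfies_dark_condition(first_image: np.ndarray, pv_max: int = 4095) -> bool:
    """A suitable light-shielded image should be close to zero everywhere."""
    return bool(first_image.mean() < 0.02 * pv_max)
```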

In this specification, "spectral information" means information about a spectrum indicating the wavelength dependence of light intensity. The spectral information may be, for example, information on the spectrum itself or information from which the spectrum can be derived. The spectrum may be a reflection spectrum or a transmission spectrum.

In this specification, analysis may be, for example, determining characteristics of the subject 10b, such as sugar content and degree of ripeness, or inspecting the subject 10b for defects and/or foreign matter. Such analysis includes not only processing by a machine but also evaluation by a human.

Each component of the imaging system 100 will be described below.

<Light source 20>
The light source 20 emits illumination light for illuminating the subject 10a1 or the subject 10b. The illumination light includes light in a plurality of wavelength bands. The light source 20 may be, for example, an incandescent lamp, a halogen lamp, a mercury lamp, a fluorescent lamp, or an LED lamp that emits white light. The hollow arrows shown in FIGS. 2A and 2C represent the light emitted from the light source 20 and the light reflected from the subjects 10a1 and 10b.

Note that the light source 20 is not necessarily required. When the imaging system 100 does not include the light source 20, the subject 10a1 or the subject 10b may be illuminated with sunlight or room light.

<Camera 30>
The camera 30 photographs the subject 10a1 or the subject 10a2 as shown in FIG. 2A or 2B. Similarly, the camera 30 photographs the subject 10b as shown in FIG. 2C.

The target wavelength range W shown in FIG. 1A is the wavelength range in which the camera 30 can detect light. When the camera 30 includes an optical system and an image sensor, the target wavelength range W can be determined based on, for example, the transmission range of the optical system and the sensitivity range of the image sensor. When the transmission range of the optical system contains the sensitivity range of the image sensor, the target wavelength range W is determined by the sensitivity range of the image sensor. When the camera 30 further includes a band-pass filter, the target wavelength range W can be determined based on the transmission range of the band-pass filter in addition to the transmission range of the optical system and the sensitivity range of the image sensor. When the transmission range of the optical system contains the sensitivity range of the image sensor and the sensitivity range of the image sensor contains the transmission range of the band-pass filter, the target wavelength range W is determined by the transmission range of the band-pass filter.
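In other words, the target wavelength range W can be viewed as the overlap of the ranges involved, as the following small sketch illustrates; the nanometre values are illustrative assumptions.

```python
def intersect(range_a, range_b):
    """Overlap of two (low, high) wavelength ranges in nm, or None if disjoint."""
    low, high = max(range_a[0], range_b[0]), min(range_a[1], range_b[1])
    return (low, high) if low < high else None

optics = (350, 1100)    # assumed transmission range of the optical system
sensor = (400, 1000)    # assumed sensitivity range of the image sensor
bandpass = (450, 950)   # assumed transmission range of the band-pass filter

target_w = intersect(intersect(optics, sensor), bandpass)
print(target_w)  # (450, 950): here the band-pass filter determines W
```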

Examples of the camera 30 include line-scan, snapshot, and compressed-sensing hyperspectral cameras. Representative components and operations of each type of hyperspectral camera are described below.

・Line-scan hyperspectral camera
When the camera 30 is a line-scan hyperspectral camera, the camera 30 includes a prism or diffraction grating, an image sensor, and a slide mechanism that slides the object to be photographed in one direction. The light source 20 emits a line beam extending in a direction perpendicular to that one direction, and the object to be photographed is illuminated with the line beam emitted from the light source 20. The light resulting from the illumination is separated into wavelength bands via the prism or diffraction grating and detected by the image sensor. In line scanning, such light detection is performed while the slide mechanism moves the object to be photographed in the one direction.

The camera 30 generates and outputs a correction hyperspectral image by line-scanning the subject 10a1 or the subject 10a2. Similarly, the camera 30 generates and outputs an analysis-target hyperspectral image by line-scanning the subject 10b. A line-scan hyperspectral camera offers high spatial and wavelength resolution, but the line scan makes the imaging time long.

・Snapshot hyperspectral camera
When the camera 30 is a snapshot hyperspectral camera, the camera 30 includes a plurality of light-transmitting regions corresponding respectively to the plurality of wavelength bands, and an image sensor. Each of the light-transmitting regions transmits light of the corresponding wavelength band within the target wavelength range. Light from the object to be photographed is detected by the image sensor via the light-transmitting regions.

The camera 30 generates and outputs a correction hyperspectral image by photographing the subject 10a1 or the subject 10a2 in a single shot. Similarly, the camera 30 generates and outputs an analysis-target hyperspectral image by photographing the subject 10b in a single shot. This is similar in principle to a color camera capturing an object in a single shot through red, green, and blue color filters to generate and output red, green, and blue images. A snapshot hyperspectral camera enables single-shot imaging, but its sensitivity and spatial resolution are often insufficient.

・Compressed-sensing hyperspectral camera
When the camera 30 is a compressed-sensing hyperspectral camera as disclosed in Patent Document 2, the camera 30 includes a coding mask including a plurality of regions having mutually different transmission spectra, an image sensor, and an image processing device. Light from the object to be photographed is detected by the image sensor via the coding mask, and a compressed image in which the information of the plurality of wavelength bands is compressed is generated. The image processing device generates a hyperspectral image of the object based on the compressed image.

The camera 30 photographs the subject 10a1 or the subject 10a2 to generate a correction compressed image, and generates and outputs a correction hyperspectral image based on the compressed image. Similarly, the camera 30 photographs the subject 10b to generate an analysis-target compressed image, and generates and outputs an analysis-target hyperspectral image based on the compressed image. A compressed-sensing hyperspectral camera can generate a correction or analysis-target hyperspectral image in a single shot without reducing sensitivity or spatial resolution.
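A hedged sketch of the measurement model implied by the description above: each sensor pixel records a single value in which the band information, weighted by the per-region transmittance of the coding mask, is compressed. The array shapes and mask values are illustrative assumptions, and the reconstruction step (estimating the cube from the compressed image and the mask, e.g. by sparsity-regularized inversion) is omitted.

```python
import numpy as np

H, W_PIX, N = 8, 8, 10
rng = np.random.default_rng(0)
mask = rng.uniform(0.0, 1.0, size=(H, W_PIX, N))   # per-pixel, per-band transmittance
scene = rng.uniform(0.0, 1.0, size=(H, W_PIX, N))  # true hyperspectral scene

compressed = (mask * scene).sum(axis=2)            # one measurement per pixel
print(compressed.shape)                            # (8, 8)
```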

A compressed-sensing hyperspectral camera may also be configured such that the camera 30 includes the coding mask and the image sensor but not an image processing device. In such a configuration, the camera 30 generates and outputs the compressed image, and the processing device 50 generates the hyperspectral image based on the compressed image.

Details of the compressed-sensing hyperspectral camera will be described later.

<Display device 40>
The display device 40 displays an input UI (user interface) 42 and a display UI 44. The input UI 42 is used by the user to input information. Information that the user inputs to the input UI 42 is received by the processing circuit 52. The display UI 44 is used to display information generated by the processing circuit 52.

The input UI 42 and the display UI 44 are displayed as GUIs (graphical user interfaces). The information shown on the input UI 42 and the display UI 44 can be said to be displayed on the display device 40. The input UI 42 and the display UI 44 may be realized by a device capable of both input and output, such as a touch screen, in which case the touch screen may function as the display device 40. When a keyboard and/or a mouse is used as the input UI 42, the input UI 42 is a device independent of the display device 40.

<Processing device 50>
The processing circuit 52 included in the processing device 50 controls the operation of the light source 20, the camera 30, and the storage device 56. The processing circuit 52 acquires the correction hyperspectral image and the analysis-target hyperspectral image generated by the camera 30 and performs processing based on these hyperspectral images. Alternatively, the processing circuit 52 acquires the correction compressed image and the analysis-target compressed image generated by the camera 30 and performs processing based on these compressed images.

The memory 54 included in the processing device 50 stores a computer program executed by the processing circuit 52. The processing circuit 52 and the memory 54 may be integrated on a single circuit board or provided on separate circuit boards. The functions of the processing circuit 52 may also be distributed over a plurality of circuits. Part or all of the processing circuit 52 may be installed at a remote location away from the light source 20, the camera 30, and the storage device 56, and may control the operation of these components via a wired or wireless communication network.

The storage device 56 included in the processing device 50 is a device including any storage medium, such as a semiconductor storage medium or a magnetic storage medium. The storage device 56 is connected to the processing circuit 52 and stores the processing results of the processing circuit 52.

[1.2. Specific configuration example of the imaging system]
Next, a specific configuration example of the imaging system according to the embodiment of the present disclosure will be described with reference to FIG. 3. FIG. 3 is a diagram schematically showing a specific configuration of an imaging system according to an exemplary embodiment of the present disclosure. In addition to the light source 20, the camera 30, the display device 40, and the processing device 50 described above, the imaging system 100 shown in FIG. 3 further includes a stage 60, a support 70, and an adjustment device 80. In the example shown in FIG. 3, the number of light sources 20 is two, but it may be one, or three or more. The processing device 50 is connected to the light source 20, the camera 30, the display device 40, and the adjustment device 80 in a wired or wireless manner. The processing circuit 52 shown in FIGS. 2A to 2C, included in the processing device 50, controls the operation of the adjustment device 80 in addition to the operation of the light source 20, the camera 30, and the display device 40.

The stage 60 has a flat support surface on which the subject 10a1 shown in FIG. 2A and the subject 10b shown in FIG. 2C are placed. The support 70 is fixed to the stage 60 and has a structure extending in the direction perpendicular to the support surface of the stage 60, that is, in the height direction. The support 70 supports the light source 20, the camera 30, and the adjustment device 80.

The adjustment device 80 includes a mechanism for independently moving the light source 20 and the camera 30 in the direction perpendicular to the support surface of the stage 60. The adjustment device 80 may include an actuator including one or more motors, such as a linear actuator. The actuator may be configured to change the distance between the light source 20 and the subject 10a1 or 10b, and the distance between the camera 30 and the subject 10a1 or 10b, using, for example, an electric motor, hydraulic pressure, or pneumatic pressure.

Because the adjustment device 80 can adjust the distance between the light source 20 and the subject 10a1 or 10b, the amount of light that is emitted from the light source 20, reflected by the subject 10a1 or 10b, and incident on the camera 30 can be adjusted appropriately. This reduces the possibility that the hyperspectral image or the compressed image is too bright or too dark. Furthermore, because the adjustment device 80 can adjust the distance between the camera 30 and the subject 10a1 or 10b, the camera 30 can be focused more easily.

The adjustment device 80 further includes a measuring instrument that measures the distance between the stage 60 and the light source 20 and the distance between the stage 60 and the camera 30. The support 70 is provided with a scale indicating the height from the support surface of the stage 60. The positions of the light source 20 and the camera 30 in the height direction can be determined based on the scale.

[2. Problems in white board correction and black subtraction]
Problems in white board correction and black subtraction will be described below with reference to FIGS. 4A and 4B.

[2.1. White board correction]
In white board correction, the hyperspectral image to be analyzed is normalized by the correction hyperspectral image. More specifically, each of the images included in the hyperspectral image to be analyzed is normalized by the image, among those included in the correction hyperspectral image, that corresponds to it in having the same wavelength band. In many cases, a suitable white board has a reflectance at or above a certain level over the wavelength range covered by the hyperspectral image. White board correction is performed to reduce the effects of the spectral shape of the illumination light, the illumination distribution during imaging, peripheral light falloff of the lens, and nonuniform sensitivity of the image sensor. However, if there is no need to reduce such effects, white board correction need not necessarily be performed.

FIG. 4A is a diagram schematically showing examples of white board correction. The three figures in the upper row of FIG. 4A show an example of correct white board correction, and the three figures in the middle row and the three figures in the lower row show examples of incorrect white board correction. Of the three figures in each row, the left figure shows a hyperspectral image containing the subject 10b, the center figure shows a hyperspectral image containing a white board, and the right figure shows the hyperspectral image after white board correction. In FIG. 4A, each hyperspectral image is exemplified by the image corresponding to one wavelength band included in that hyperspectral image.

When an appropriate white board is used as the subject 10a1, an appropriate correction hyperspectral image is generated, as shown in the center figure of the upper row. Because the hyperspectral image to be analyzed is corrected based on the appropriate correction hyperspectral image, the hyperspectral image after white board correction is generated correctly, as shown in the right figure of the upper row.

In contrast, when dirt adheres to the white board, the dirt appears in the correction hyperspectral image, as shown in the center figure of the middle row. When the hyperspectral image to be analyzed is corrected based on such an inappropriate correction hyperspectral image, the hyperspectral image after white board correction is not generated correctly, as shown in the right figure of the middle row. The portion of the correction hyperspectral image corresponding to the dirt is darker than the remaining portions, so the normalization described above makes the portion corresponding to the dirt stand out as white in the hyperspectral image after white board correction.

Alternatively, when an object that is not a white board is mistakenly used as the subject 10a1, the object appears in the correction hyperspectral image, as shown in the center figure of the lower row. A fish is illustrated as an example of such an object. When the hyperspectral image to be analyzed is corrected based on such an inappropriate correction hyperspectral image, the hyperspectral image after white board correction is not generated correctly, as shown in the right figure of the lower row. The portion of the correction hyperspectral image corresponding to the object is darker than the remaining portions, so the normalization described above makes the portion corresponding to the object stand out as white in the hyperspectral image after white board correction.

Therefore, with incorrect white board correction, it is not easy to accurately obtain the spectral information of the subject 10b based on the hyperspectral image after white board correction.

[2.2. Black subtraction]
In black subtraction, the correction hyperspectral image is subtracted from the hyperspectral image to be analyzed. More specifically, from each of the images included in the hyperspectral image to be analyzed, the image, among those included in the correction hyperspectral image, that corresponds to it in having the same wavelength band is subtracted. Black subtraction is performed to reduce the effects of dark current of the image sensor, bright-spot pixels, fixed-pattern noise, fluctuations in sensor performance, and the like. However, if there is no need to reduce such effects, black subtraction need not necessarily be performed.

In the following description, the hyperspectral image to be analyzed may be read as the compressed image to be analyzed, and the correction hyperspectral image may be read as the correction compressed image.

FIG. 4B is a diagram schematically showing examples of black subtraction. The three figures in the upper row of FIG. 4B show an example of correct black subtraction, and the three figures in the lower row show an example of incorrect black subtraction. Of the three figures in each row, the left figure shows a hyperspectral image containing the subject 10b, the center figure shows a hyperspectral image representing a light-shielded image, and the right figure shows the hyperspectral image after black subtraction. In FIG. 4B, each hyperspectral image is exemplified by the image corresponding to one wavelength band included in that hyperspectral image.

When the light-shielding lens cap serving as the subject 10a2 is properly attached to the camera 30, that is, when the image is captured in a light-shielded state, an appropriate correction hyperspectral image is generated, as shown in the center figure of the upper row. The correction hyperspectral image, which is a light-shielded image, contains noise caused by the image sensor. As shown in the left figure of the upper row, the hyperspectral image to be analyzed shows the subject 10b with that noise superimposed. Because the hyperspectral image to be analyzed is corrected based on the appropriate correction hyperspectral image, the hyperspectral image after black subtraction is generated correctly, as shown in the right figure of the upper row. The hyperspectral image after black subtraction shows the subject 10b with the noise removed.

In contrast, when the user forgets to attach the light-shielding lens cap to the camera 30 and photographs a white board, a hyperspectral image showing the white board, rather than a light-shielded image, is generated as the correction hyperspectral image, as shown in the center figure of the lower row. When the hyperspectral image to be analyzed is corrected based on such an inappropriate correction hyperspectral image, the hyperspectral image after black subtraction is not generated correctly, as shown in the right figure of the lower row.

Because a correction hyperspectral image that is not a light-shielded image is subtracted from the hyperspectral image to be analyzed, more is subtracted from the pixel values than necessary. As a result, the image after incorrect black subtraction is darker than the hyperspectral image after correct black subtraction. In the image after incorrect black subtraction, pixel values may even become negative, in which case they may be output as zero. The corrected hyperspectral image may therefore differ from the image obtained by simply subtracting the correction hyperspectral image from the hyperspectral image to be analyzed.
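A small numerical illustration of this point, with made-up pixel values: subtracting a bright (non-dark) image drives some pixels negative, and clamping those pixels to zero makes the result differ from the plain difference.

```python
import numpy as np

analysis = np.array([[100, 200]], dtype=np.int32)
bright_by_mistake = np.array([[150, 150]], dtype=np.int32)  # not a light-shielded image

difference = analysis - bright_by_mistake  # [[-50  50]]: over-subtraction goes negative
clamped = np.clip(difference, 0, None)     # [[  0  50]]: negatives output as zero
print(difference, clamped)
```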

Therefore, with incorrect black subtraction, it is not easy to accurately obtain the spectral information of the subject 10b based on the corrected hyperspectral image.

As described above, when the subject 10a1 or the subject 10a2 is inappropriate, it is not easy to accurately obtain the spectral information of the subject 10b. A case where the subject 10a2 is inappropriate may be, for example, a case where the light-shielding lens cap is not properly attached to the camera 30 and imaging in a light-shielded state is not easily achieved.

The present inventors identified this problem and conceived of the method used for image correction according to the present embodiment, which can solve the problem.

[3. Method 1 used for image correction]
[3.1. Processing operation]
Next, an example of the method used for image correction according to the present embodiment when the camera 30 generates and outputs hyperspectral images will be described with reference to FIG. 5. The image correction described below may be either white board correction or black subtraction.

FIG. 5 is a flowchart schematically showing Example 1 of the processing operations executed by the processing circuit 52 in the imaging system according to the present embodiment. The processing circuit 52 executes the operations of steps S101 to S108 shown in FIG. 5. "HS image" in FIG. 5 denotes a hyperspectral image.

<Step S101>
The processing circuit 52 causes the display device 40 to display an instruction to the user to photograph the white board, or an instruction to the user to photograph in a light-shielded state. Alternatively, when the imaging system 100 includes a speaker, the processing circuit 52 may cause the speaker to output the instruction as audio.

 ユーザは指示を受けて、カメラ30の前に被写体10a1を配置する、またはカメラ30に被写体10a2を装着する。処理回路52は、入力UI42を介したユーザからの入力を受けて、カメラ30に、被写体10a1または被写体10a2を撮影させて第1ハイパースペクトル画像を生成させる。このように、第1ハイパースペクトル画像は、被写体10a1または被写体10a2を撮影する指示をユーザが受けて生成される。 The user receives the instruction and places subject 10a1 in front of camera 30, or attaches subject 10a2 to camera 30. Processing circuit 52 receives input from the user via input UI 42 and causes camera 30 to capture subject 10a1 or subject 10a2 to generate a first hyperspectral image. In this way, the first hyperspectral image is generated when the user receives an instruction to capture subject 10a1 or subject 10a2.

 被写体10a1または被写体10a2を撮影する前に、処理回路52は、入力UI42を介したユーザからの入力を受けて、カメラ30に関するパラメータを調整してもよい。カメラ30に関するパラメータは、例えば、露光時間、ゲイン、積算回数、および/またはカメラ30と被写体10a1との距離であり得る。 Before photographing subject 10a1 or subject 10a2, processing circuitry 52 may receive input from a user via input UI 42 and adjust parameters related to camera 30. The parameters related to camera 30 may be, for example, exposure time, gain, number of integrations, and/or the distance between camera 30 and subject 10a1.

 被写体10a1を撮影する場合、処理回路52は、ステップS101の前に、入力UI42を介したユーザからの入力を受けて、光源20に、被写体10a1を照射するための照射光を出射させる。被写体10a2を撮影する場合、処理回路52は、ステップS101の前に、光源20に照射光を出射させる必要はない。 When photographing subject 10a1, processing circuit 52 receives input from the user via input UI 42 before step S101 and causes light source 20 to emit illumination light for illuminating subject 10a1. When photographing subject 10a2, processing circuit 52 does not need to cause light source 20 to emit illumination light before step S101.

 光源20に照射光を出射させる前に、処理回路52は、入力UI42を介したユーザからの入力を受けて、光源20に関するパラメータを調整してもよい。光源20に関するパラメータは、例えば、光源20と被写体10a1との距離、光源20を駆動するための電流、電圧、PWM(Pulse Width Modulation)信号のデューティ比、および/または光源20と被写体10a1との間に配置される不図示のND(Neutral Density)フィルタの減衰率であり得る。 Before causing the light source 20 to emit irradiation light, the processing circuit 52 may receive input from the user via the input UI 42 and adjust parameters related to the light source 20. The parameters related to the light source 20 may be, for example, the distance between the light source 20 and the subject 10a1, the current for driving the light source 20, the voltage, the duty ratio of a PWM (Pulse Width Modulation) signal, and/or the attenuation rate of an ND (Neutral Density) filter (not shown) disposed between the light source 20 and the subject 10a1.

 <Step S102>
 The processing circuit 52 acquires the first hyperspectral image from the camera 30 as the hyperspectral image for correction. The processing circuit 52 may store the acquired first hyperspectral image in the storage device 56.

 <Step S103>
 The processing circuit 52 causes the display device 40 to display an instruction to the user to photograph the subject to be analyzed. Alternatively, if the imaging system 100 includes a speaker, the processing circuit 52 may cause the speaker to output the instruction by voice.

 Following the instruction, the user places the subject 10b in front of the camera 30. In response to input from the user via the input UI 42, the processing circuit 52 causes the camera 30 to photograph the subject 10b and generate a second hyperspectral image. In this way, the second hyperspectral image is generated after the user receives the instruction to photograph the subject 10b.

 When the subject 10a1 was photographed in step S101, the subject 10b is illuminated with the above-described illumination light. When the subject 10a2 was photographed in step S101, the processing circuit 52 causes the light source 20 to emit illumination light for illuminating the subject 10b before step S103.

 <Step S104>
 The processing circuit 52 acquires the second hyperspectral image from the camera 30 as the hyperspectral image to be analyzed. The processing circuit 52 may store the acquired second hyperspectral image in the storage device 56.

 Note that the processing circuit 52 may perform the operations of steps S103 and S104 between steps S105 and S106.

 <Step S105>
 Based on the pixel values of the first hyperspectral image, the processing circuit 52 determines whether the first hyperspectral image satisfies a suitability condition as an image for correcting the second hyperspectral image. The suitability conditions as images for white board correction and for black subtraction will be described later.

 If the determination is Yes, the processing circuit 52 performs the operation of step S106. If the determination is No, the processing circuit 52 performs the operation of step S108.

 <Step S106>
 The processing circuit 52 performs white board correction by normalizing the second hyperspectral image by the first hyperspectral image. Alternatively, the processing circuit 52 performs black subtraction by subtracting the first hyperspectral image from the second hyperspectral image.

 For white board correction, black subtraction may already have been applied to the second hyperspectral image.
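
 As a minimal sketch of the white board correction in step S106 (the black subtraction case was sketched earlier; the same NumPy conventions are assumed, and the epsilon guarding against division by zero is an implementation assumption, not part of the disclosure):

```python
import numpy as np

def white_board_correct(sample_cube: np.ndarray, white_cube: np.ndarray,
                        eps: float = 1e-9) -> np.ndarray:
    """Normalize the image to be analyzed by the white-board image.

    Both cubes have shape (bands, height, width). Dividing band by band
    and pixel by pixel cancels the light-source spectrum and the camera's
    spectral sensitivity, leaving an approximate reflectance of the subject.
    """
    return sample_cube.astype(np.float64) / (white_cube.astype(np.float64) + eps)
```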

 <Step S107>
 The processing circuit 52 stores the corrected hyperspectral image in the storage device 56.

 <Step S108>
 The processing circuit 52 causes the display device 40 to display an error indicating that the first hyperspectral image is abnormal.

 If part of the processing circuit 52 is an external server installed at a remote location, the external server may perform the operations of steps S102 and S105.

 With the above method used for image correction according to the present embodiment, inappropriate white board correction can be prevented when the user, having received the instruction, believes that a suitable subject 10a1 has been photographed but the subject 10a1 is in fact unsuitable. Similarly, inappropriate black subtraction can be prevented when the user believes that a suitable subject 10a2 has been photographed but the subject 10a2 is in fact unsuitable. As a result, the spectral information of the subject 10b can be obtained more accurately than when no determination is made as to whether the first hyperspectral image satisfies the suitability condition.

 [3.2. Suitability condition as an image for white board correction]
 Examples of the suitability condition that the first hyperspectral image must satisfy as an image for white board correction will be described below with reference to FIGS. 6A and 6B. Here, whether the first hyperspectral image satisfies the suitability condition as an image for white board correction is determined based on spectral information acquired from the first hyperspectral image.

 FIG. 6A schematically shows the first hyperspectral image and the reflectance spectrum when the subject 10a1 is suitable. The upper part of FIG. 6A shows the first hyperspectral image, which includes five images corresponding to five wavelength bands W1 to W5; the smaller the subscript, the shorter the center wavelength of the band. These images are smooth, with no boundaries or structure. The whiter an image, the brighter it is; the blacker an image, the darker it is.

 The lower part of FIG. 6A shows the reflectance spectrum of the subject 10a1, generated from the first hyperspectral image. The reflection intensity in each wavelength band is calculated by averaging the pixel values of a plurality of pixels near the center of the image corresponding to that band. A suitably chosen subject 10a1 has approximately the same reflectance in every one of the wavelength bands W1 to W5, so its reflectance spectrum is almost the same as the spectrum of the light emitted from the light source 20. In other words, the spectrum obtained when the camera 30 detects the reflected light, i.e., the light emitted from the light source 20 and reflected by the subject 10a1, is almost the same as the spectrum obtained when the camera 30 directly detects the light emitted from the light source 20. In the example shown in FIG. 6A, the reflection intensity is highest in band W1 and lowest in band W2; the intensities in bands W3 to W5 are higher than in band W2 and lower than in band W1, and the intensity in band W4 is higher than in bands W3 and W5.
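
 A minimal sketch of this band-by-band intensity calculation (the cube layout and the 16-pixel central window are illustrative assumptions; the disclosure only specifies averaging pixels near the center of each band image):

```python
import numpy as np

def center_spectrum(cube: np.ndarray, window: int = 16) -> np.ndarray:
    """Return one reflection intensity per band, averaged over a central window.

    `cube` has shape (bands, height, width); the result has shape (bands,).
    """
    _, h, w = cube.shape
    r0, c0 = (h - window) // 2, (w - window) // 2
    return cube[:, r0:r0 + window, c0:c0 + window].mean(axis=(1, 2))
```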

 FIG. 6B schematically shows the first hyperspectral image and the reflectance spectrum when the subject 10a1 is unsuitable. The upper and lower parts of FIG. 6B show the same kinds of diagrams as described above for FIG. 6A.

 When the subject 10a1 is unsuitable, the reflectance spectrum of the subject 10a1 differs from the spectrum of the light emitted from the light source 20. In the example shown in FIG. 6B, a blue board was used as the subject 10a1: as the center wavelength of the band increases, the reflection intensity decreases and the image becomes darker.

 The spectral information of the light from the light source 20 is stored in advance in the storage device 56. The processing circuit 52 acquires this spectral information from the storage device 56 and determines whether the first hyperspectral image satisfies the suitability condition by comparing the spectral information acquired from the first hyperspectral image with the spectral information of the light from the light source 20. If the shapes of the two spectra are close, the processing circuit 52 determines that the first hyperspectral image satisfies the suitability condition.

 The two pieces of spectral information can be compared using, for example, the SAM (Spectral Angle Mapping) method. When a spectrum contains N light intensities corresponding to N wavelength bands, the spectrum can be expressed as an N-dimensional vector whose components are those N intensities. Let u be the vector representing the spectrum acquired from the first hyperspectral image, and let v be the vector representing the spectrum of the light from the light source 20. The spectral angle formed by the vectors u and v is

$$\theta = \arccos\left(\frac{u \cdot v}{|u|\,|v|}\right)$$

 where u·v denotes the inner product of the vectors u and v, |u| denotes the magnitude of u, and |v| denotes the magnitude of v.

 When the spectral angle is zero, the vectors u and v point in the same direction, and the two spectra have the same shape. Even if |u| differs from |v|, the two spectra can be said to have the same shape as long as u and v point in the same direction. The SAM method therefore allows u and v to be compared regardless of the amount of illumination light used when the first hyperspectral image was generated. The first hyperspectral image may be determined to satisfy the suitability condition when the spectral angle between u and v is, for example, 1° or less, 5° or less, 10° or less, or no greater than any angle in the range from 1° to 10°.
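
 A minimal sketch of this SAM-based check (the clipping of the cosine to [-1, 1] guards against floating-point rounding and is an implementation detail; the 5-degree default is one of the example thresholds mentioned above):

```python
import numpy as np

def spectral_angle_deg(u: np.ndarray, v: np.ndarray) -> float:
    """Spectral angle in degrees between two N-band spectra u and v."""
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

def satisfies_white_condition(measured: np.ndarray, source: np.ndarray,
                              max_angle_deg: float = 5.0) -> bool:
    """True if the measured spectrum is close enough in shape to the
    light-source spectrum, regardless of the overall light level."""
    return spectral_angle_deg(measured, source) <= max_angle_deg
```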

 Next, another example of the suitability condition for white board correction that the first hyperspectral image must satisfy will be described with reference to FIG. 7. In practice, mistakenly photographing a board of another color, such as the blue board described above, instead of the white board is rare; it is far more common to mistakenly photograph a scene containing one or more objects. In many cases, therefore, the subject 10a1 can safely be regarded as suitable if the spatial distribution of the pixel values of the first hyperspectral image is uniform.

 Here, edge detection is applied to the first hyperspectral image, and whether the first hyperspectral image satisfies the suitability condition is determined based on the resulting edge image. For the edge detection, for example, the Sobel method can be used.

 FIG. 7 shows first hyperspectral images for the cases where the subject 10a1 is suitable and where it is unsuitable, together with the edge images obtained by applying edge detection to them. The upper left of FIG. 7 shows the first hyperspectral image when the subject 10a1 is suitable, and the upper right shows the corresponding edge image. The lower left of FIG. 7 shows the first hyperspectral image when a white board is not properly used as the subject 10a1, and the lower right shows the corresponding edge image. FIG. 7 illustrates, as the first hyperspectral image, the image corresponding to one wavelength band included in the first hyperspectral image. In the following description referring to FIG. 7, "first hyperspectral image" means the image corresponding to one wavelength band included in the first hyperspectral image.

 As shown in FIG. 7, when the subject 10a1 is suitable, the first hyperspectral image is a smooth image with no boundaries or structure; when edge detection is applied to it, zero pixels are detected as edges.

 In contrast, when the subject 10a1 is unsuitable, the first hyperspectral image is a non-smooth image with boundaries and structure, and edge detection finds pixels that qualify as edges. In the example shown in FIG. 7, the first hyperspectral image shows a number of vegetables, and 2109 of the 65535 pixels in the image were detected as edges.

 From the above, the first hyperspectral image may be determined to satisfy the suitability condition when a condition under which the spatial distribution of its pixel values is judged to be uniform is satisfied. That condition can be defined, for example, based on the number of pixels detected as edges among the pixels of the image corresponding to one wavelength band included in the first hyperspectral image; specifically, it may be that the proportion of pixels detected as edges is at most a predetermined ratio. Edge detection may be applied to one or more of the images included in the first hyperspectral image.
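
 A minimal sketch of such a uniformity check using the Sobel method (SciPy's ndimage module is assumed for the Sobel operator; the gradient threshold and the permissible edge-pixel ratio are illustrative assumptions, since the disclosure only requires that the fraction of edge pixels not exceed a predetermined ratio):

```python
import numpy as np
from scipy import ndimage

def is_spatially_uniform(band_image: np.ndarray,
                         grad_threshold: float = 0.05,
                         max_edge_ratio: float = 0.001) -> bool:
    """Sobel-based uniformity check on one band of the correction image.

    A pixel counts as an edge when its gradient magnitude exceeds
    `grad_threshold` relative to the image maximum.
    """
    img = band_image.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)  # horizontal gradient
    gy = ndimage.sobel(img, axis=0)  # vertical gradient
    magnitude = np.hypot(gx, gy)
    edges = magnitude > grad_threshold * max(img.max(), 1e-12)
    return edges.mean() <= max_edge_ratio
```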

 [3.3. Suitability condition as an image for black subtraction]
 Examples of the suitability condition that the first hyperspectral image must satisfy as an image for black subtraction will be described below with reference to FIGS. 8A and 8B. Here, whether the first hyperspectral image satisfies the suitability condition as an image for black subtraction is determined based on the pixel values of the first hyperspectral image.

 FIG. 8A shows the first hyperspectral image and a histogram of its pixel values when the subject 10a2 is suitable. The upper part of FIG. 8A shows the hyperspectral image, and the lower part shows the histogram of the pixel values of the first hyperspectral image. FIG. 8A illustrates, as the first hyperspectral image, the image corresponding to one wavelength band included in the first hyperspectral image. In the following description referring to FIG. 8A, "first hyperspectral image" means the image corresponding to one wavelength band included in the first hyperspectral image.

 As shown in FIG. 8A, the first hyperspectral image is a light-blocked image, and the pixel values of all its pixels are nearly zero. In the example shown in FIG. 8A, the first hyperspectral image contains 256×256 pixels. In the histogram, the pixel values are normalized by the maximum value a pixel can take. Pixels having the lowest pixel value are the most numerous, accounting for 97% or more of all pixels; the normalized lowest pixel value is 2.44×10⁻⁴, and pixels whose normalized pixel value is 1.0×10⁻³ or more account for 0.07% of all pixels.

 FIG. 8B shows the first hyperspectral image and a histogram of its pixel values when the subject 10a2 is unsuitable. The upper and lower parts of FIG. 8B show the same kinds of diagrams as described above. In the following description referring to FIG. 8B, "first hyperspectral image" means the image corresponding to one wavelength band included in the first hyperspectral image.

 As shown in FIG. 8B, the first hyperspectral image is not a light-blocked image but an image of a landscape. In its pixel-value histogram, pixels having the lowest pixel value account for 0.006% of all pixels; the normalized lowest pixel value is 0.0273, and pixels whose normalized pixel value is 1.0×10⁻³ or more account for 100% of all pixels.

 From the above, the first hyperspectral image may be determined to satisfy the suitability condition when a condition under which its pixel values are judged to be small is satisfied. That condition can be defined, for example, based on the number of pixels having the lowest pixel value, or the number of pixels whose pixel value is at least or at most a threshold, in the image corresponding to one wavelength band included in the first hyperspectral image. Specifically, the condition may be that the proportion of pixels having the lowest pixel value is at least a predetermined ratio, that the proportion of pixels whose pixel value is at least a threshold is at most a predetermined ratio, or that the proportion of pixels whose pixel value is at most a threshold is at least a predetermined ratio. The pixel-value histogram may be examined for one or more of the images included in the first hyperspectral image.
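
 A minimal sketch of such a darkness check on one band of the correction image (the 12-bit full scale and both ratio parameters are illustrative assumptions; the disclosure leaves the thresholds to be predetermined):

```python
import numpy as np

def is_dark_image(band_image: np.ndarray, max_pixel_value: int = 4095,
                  bright_threshold: float = 1.0e-3,
                  max_bright_ratio: float = 0.001) -> bool:
    """Accept the image as light-blocked when almost no pixel exceeds
    `bright_threshold` after normalizing by the sensor's full scale."""
    normalized = band_image.astype(np.float64) / max_pixel_value
    return (normalized >= bright_threshold).mean() <= max_bright_ratio
```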

 Alternatively, as described with reference to FIG. 7 for white board correction, the first hyperspectral image may likewise be determined, for black subtraction, to satisfy the suitability condition when a condition under which the spatial distribution of its pixel values is judged to be uniform is satisfied.

 [4. Method 2 used for image correction]
 [4.1. Processing operation]
 An example of the method used for image correction according to the present embodiment when the camera 30 is a compressed-sensing hyperspectral camera that generates and outputs a compressed image will be described below with reference to FIG. 9. The image correction described below is white board correction.

 FIG. 9 is a flowchart schematically showing example 2 of the processing operation performed by the processing circuit 52 in the imaging system according to the present embodiment. The processing circuit 52 performs the operations of steps S201 to S210 shown in FIG. 9.

 <Step S201>
 The processing circuit 52 causes the display device 40 to display an instruction to the user to photograph the white board. Alternatively, if the imaging system 100 includes a speaker, the processing circuit 52 may cause the speaker to output the instruction by voice.

 Following the instruction, the user places the subject 10a1 in front of the camera 30. In response to input from the user via the input UI 42, the processing circuit 52 causes the camera 30 to photograph the subject 10a1 and generate a first compressed image. In this way, the first compressed image is generated after the user receives the instruction to photograph the subject 10a1.

 Before step S201, the processing circuit 52 receives input from the user via the input UI 42 and causes the light source 20 to emit illumination light for illuminating the subject 10a1.

 <Step S202>
 The processing circuit 52 acquires the first compressed image from the camera 30. The processing circuit 52 may store the acquired first compressed image in the storage device 56.

 <Step S203>
 The processing circuit 52 generates a first hyperspectral image based on the first compressed image. The processing circuit 52 may store the generated first hyperspectral image in the storage device 56.

 <Step S204>
 The processing circuit 52 causes the display device 40 to display an instruction to the user to photograph the subject to be analyzed. Alternatively, if the imaging system 100 includes a speaker, the processing circuit 52 may cause the speaker to output the instruction by voice.

 Following the instruction, the user places the subject 10b in front of the camera 30. In response to input from the user via the input UI 42, the processing circuit 52 causes the camera 30 to photograph the subject 10b and generate a second compressed image. In this way, the second compressed image is generated after the user receives the instruction to photograph the subject 10b. The subject 10b is illuminated with the above-described illumination light.

 <Step S205>
 The processing circuit 52 acquires the second compressed image from the camera 30. The processing circuit 52 may store the acquired second compressed image in the storage device 56.

 <Step S206>
 The processing circuit 52 generates a second hyperspectral image based on the second compressed image. The processing circuit 52 may store the generated second hyperspectral image in the storage device 56.

 Note that the processing circuit 52 may perform the operations of steps S203 to S206 between steps S207 and S208.

 <Step S207>
 Based on the pixel values of the first compressed image, the processing circuit 52 determines whether the first compressed image satisfies a suitability condition as an image for white board correction. The suitability condition as an image for white board correction will be described later.

 If the determination is Yes, the processing circuit 52 performs the operation of step S208. If the determination is No, the processing circuit 52 performs the operation of step S210.

 <Step S208>
 The processing circuit 52 performs white board correction by normalizing the second hyperspectral image by the first hyperspectral image. Black subtraction may already have been applied to the second hyperspectral image.

 <Step S209>
 The processing circuit 52 stores the corrected hyperspectral image in the storage device 56.

 <Step S210>
 The processing circuit 52 causes the display device 40 to display an error indicating that the first compressed image is abnormal.

 If part of the processing circuit 52 is an external server installed at a remote location, the external server may perform the operations of steps S202 and S207.

 With the above method used for image correction according to the present embodiment, inappropriate white board correction can be prevented when the user, having received the instruction, believes that a suitable subject 10a1 has been photographed but the subject 10a1 is in fact unsuitable. As a result, the spectral information of the subject 10b can be obtained more accurately than when no determination is made as to whether the first compressed image satisfies the suitability condition as an image for white board correction.

 Note that the processing circuit 52 may perform the operations of steps S105 to S108 shown in FIG. 5 instead of the operations of steps S207 to S210 shown in FIG. 9.

 [4.2. Suitability condition as an image for white board correction]
 An example of the suitability condition that the first compressed image must satisfy as an image for white board correction will be described below with reference to FIGS. 10A and 10B. Here, whether the first compressed image satisfies the suitability condition as an image for white board correction is determined based on a histogram of the pixel values of the first compressed image.

 FIG. 10A shows the first compressed image and a histogram of its pixel values when the subject 10a1 is suitable. The upper part of FIG. 10A shows the first compressed image, and the lower part shows the histogram of its pixel values.

 As shown in FIG. 10A, the first compressed image has an irregular distribution of bright and dark pixel values that reflects the spatial distribution of the transmittance of the encoding mask. The histogram of the pixel values of the first compressed image shows roughly a single peak. When the subject 10a1 is a white board, the peak is narrower than when it is not. In the example shown in FIG. 10A, with μ denoting the mean of the pixel values and σ their standard deviation, σ/μ = 0.2615.

 FIG. 10B shows the first compressed image and a histogram of its pixel values when the subject 10a1 is unsuitable. The upper and lower parts of FIG. 10B show the same kinds of diagrams as described above.

 As shown in FIG. 10B, this first compressed image also has an irregular distribution of bright and dark pixel values and, to the eye, resembles the first compressed image shown in FIG. 10A. Its pixel-value histogram again shows roughly a single peak, but when the subject 10a1 is not a white board, the peak is wider. In the example shown in FIG. 10B, σ/μ = 0.3016.

 From the above, whether the first compressed image satisfies the suitability condition may be determined based on the histogram of its pixel values. For example, the first compressed image is determined to satisfy the suitability condition when σ/μ of its histogram is at most a predetermined value.
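
 A minimal sketch of this check (the 0.28 cutoff is an illustrative assumption lying between the two measured examples above, 0.2615 and 0.3016; the disclosure only requires a predetermined value):

```python
import numpy as np

def satisfies_compressed_white_condition(compressed_image: np.ndarray,
                                         max_cv: float = 0.28) -> bool:
    """Coefficient-of-variation check on a compressed (mosaic) image.

    sigma/mu stays small when the scene behind the encoding mask is a
    uniform white board, because only the mask modulates the pixel values.
    """
    pixels = compressed_image.astype(np.float64).ravel()
    return pixels.std() / max(pixels.mean(), 1e-12) <= max_cv
```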

 [5. Method 3 used for image correction]
 Another example of the method used for image correction according to the present embodiment when the camera 30 is a compressed-sensing hyperspectral camera that generates and outputs a compressed image will be described below with reference to FIG. 11. The image correction described below is black subtraction.

 FIG. 11 is a flowchart schematically showing example 3 of the processing operation performed by the processing circuit 52 in the imaging system according to the present embodiment. The processing circuit 52 performs the operations of steps S301 to S308 shown in FIG. 11.

 <Step S301>
 The processing circuit 52 causes the display device 40 to display an instruction to the user to photograph in a light-blocked state. Alternatively, if the imaging system 100 includes a speaker, the processing circuit 52 may cause the speaker to output the instruction by voice.

 Following the instruction, the user attaches the subject 10a2 to the camera 30. In response to input from the user via the input UI 42, the processing circuit 52 causes the camera 30 to photograph the subject 10a2 and generate a first compressed image. In this way, the first compressed image is generated after the user receives the instruction to photograph the subject 10a2.

 <Step S302>
 The processing circuit 52 acquires the first compressed image from the camera 30. The processing circuit 52 may store the acquired first compressed image in the storage device 56.

 <Step S303>
 The processing circuit 52 causes the display device 40 to display an instruction to the user to photograph the subject to be analyzed. Alternatively, if the imaging system 100 includes a speaker, the processing circuit 52 may cause the speaker to output the instruction by voice.

 Following the instruction, the user places the subject 10b in front of the camera 30. In response to input from the user via the input UI 42, the processing circuit 52 causes the camera 30 to photograph the subject 10b and generate a second compressed image. In this way, the second compressed image is generated after the user receives the instruction to photograph the subject 10b.

 Before step S303, the processing circuit 52 receives input from the user via the input UI 42 and causes the light source 20 to emit illumination light for illuminating the subject 10b.

 <Step S304>
 The processing circuit 52 acquires the second compressed image from the camera 30. The processing circuit 52 may store the acquired second compressed image in the storage device 56.

 Note that the processing circuit 52 may perform the operations of steps S303 and S304 between steps S305 and S306.

 <Step S305>
 Based on the pixel values of the first compressed image, the processing circuit 52 determines whether the first compressed image satisfies a suitability condition as an image for black subtraction. The suitability condition that the first compressed image must satisfy as an image for correction by black subtraction may be, for example, the same as the suitability condition described with reference to FIGS. 8A and 8B. More specifically, whether the first compressed image satisfies the suitability condition may be determined based on the number of pixels in the first compressed image having the lowest pixel value, or the number of pixels whose pixel value is at least or at most a predetermined value.

 Alternatively, as described with reference to FIG. 7 for white board correction, the first compressed image may likewise be determined, for black subtraction, to satisfy the suitability condition when a condition under which the spatial distribution of its pixel values is judged to be uniform is satisfied.

 If the determination is Yes, the processing circuit 52 performs the operation of step S306. If the determination is No, the processing circuit 52 performs the operation of step S308.

 <Step S306>
 The processing circuit 52 performs black subtraction by subtracting the first compressed image from the second compressed image.

 <Step S307>
 The processing circuit 52 stores the corrected compressed image in the storage device 56.

 <Step S308>
 The processing circuit 52 causes the display device 40 to display an error indicating that the first compressed image is abnormal.

 If part of the processing circuit 52 is an external server installed at a remote location, the external server may perform the operations of steps S302 and S305.

 With the above method used for image correction according to the present embodiment, inappropriate black subtraction can be prevented when the user, having received the instruction, believes that a suitable subject 10a2 has been photographed but the subject 10a2 is in fact unsuitable. As a result, the spectral information of the subject 10b can be obtained more accurately than when no determination is made as to whether the first compressed image satisfies the suitability condition as an image for black subtraction.

 [6. Example of displaying an error on the display device]
 Examples of errors displayed on the display device 40 will be described below with reference to FIGS. 12A to 12C. FIGS. 12A to 12C schematically show examples of the UI of the display device 40 when the camera 30 is a compressed-sensing hyperspectral camera that generates and outputs a compressed image. This UI serves as both the input UI 42 and the display UI 44.

 The upper left of FIG. 12A shows a space for the compressed image of the subject 10b, and the lower part shows a space for the corrected hyperspectral image of the subject 10b. The upper right of FIG. 12A shows the imaging conditions: resolution, exposure time, gain, and number of integrations. The middle right of FIG. 12A shows "white board correction" and "black subtraction" buttons. When the user presses the "white board correction" button, the processing circuit 52 performs the operations of the flowchart shown in FIG. 9; when the user presses the "black subtraction" button, the processing circuit 52 performs the operations of the flowchart shown in FIG. 11.

 When the user has pressed the "white board correction" button and the first compressed image does not satisfy the suitability condition, an error pop-up is displayed on the UI of the display device 40, as shown in FIG. 12B. The pop-up reads: "An abnormality was detected in the compressed image of the white board. Please check that it is a suitable white board. Continue processing?" In this way, the error may prompt the user to confirm that the white board has been correctly photographed as the subject 10a1 for correction. The upper left shows the compressed image of the subject 10b generated in the course of the processing.

 When the user has pressed the "black subtraction" button and the first compressed image does not satisfy the suitability condition, an error pop-up is displayed on the UI of the display device 40, as shown in FIG. 12C. The pop-up reads: "An abnormality was detected in the light-blocked image. Please check the light-blocked state. Continue processing?" In this way, the error may prompt the user to confirm the light-blocked state at the time the subject 10a2 for correction was photographed.

 As shown in FIGS. 12B and 12C, the processing may be interrupted when the error is displayed, or it may continue after the user's confirmation. Alternatively, the user may be requested to re-photograph the compressed image of the subject 10a1 or the subject 10a2.

 [7. Summary of methods 1 to 3 used for image correction]
 A summary of methods 1 to 3 used for image correction according to the present embodiment will be given below with reference to FIG. 13, focusing on the processing operations common to the three methods.

 FIG. 13 is a flowchart schematically summarizing examples 1 to 3 of the processing operation performed by the processing circuit 52 in the imaging system according to the present embodiment. The processing circuit 52 performs the operations of steps S401 to S406 shown in FIG. 13.

 <Step S401>
 The processing circuit 52 acquires a first image from the camera 30 as an image for correction.

 In method 1 used for image correction, the first image is the first hyperspectral image. In methods 2 and 3, the first image is the first compressed image.

 <Step S402>
 The processing circuit 52 acquires a second image from the camera 30.

 In methods 1 and 2 used for image correction, the second image is the second hyperspectral image; in method 2, the processing circuit 52 obtains it by generating the second hyperspectral image based on the second compressed image. In method 3, the second image is the second compressed image.

 Note that the processing circuit 52 may perform the operation of step S402 between steps S403 and S404.

 <Step S403>
 Based on the pixel values of the first image, the processing circuit 52 determines whether the first image satisfies a suitability condition as an image for correcting the second image. If the determination is Yes, the processing circuit 52 performs the operation of step S404. If the determination is No, the processing circuit 52 performs the operation of step S406.

 <Step S404>
 The processing circuit 52 corrects the second image based on the first image.

 In method 1 used for image correction, the processing circuit 52 performs white board correction by normalizing the second hyperspectral image by the first hyperspectral image, or performs black subtraction by subtracting the first hyperspectral image from the second hyperspectral image.

 In method 2 used for image correction, the processing circuit 52 generates the first hyperspectral image based on the first compressed image and performs white board correction by normalizing the second hyperspectral image by the first hyperspectral image.

 In method 3 used for image correction, the processing circuit 52 performs black subtraction by subtracting the first compressed image from the second compressed image.

 <Step S405>
 The processing circuit 52 stores the corrected image in the storage device 56.

 <Step S406>
 The processing circuit 52 causes the display device 40 to display an error indicating that the first image is abnormal.

 If part of the processing circuit 52 is an external server installed at a remote location, the external server may perform the operations of steps S401 and S403.

 With the above method used for image correction according to the present embodiment, inappropriate correction can be prevented when the user, having received the instruction, believes that a suitable subject 10a1 or subject 10a2 has been photographed but the subject is in fact unsuitable. As a result, the spectral information of the subject 10b can be obtained more accurately than when no determination is made as to whether the first image satisfies the suitability condition.

 [8. Method 4 used for image correction]
 Method 4 used for image correction according to the present embodiment will be described below with reference to FIGS. 14 and 15. Here, an image generated by photographing, with the camera 30, a scene containing the subject 10a1 and the subject 10b is used. The image correction described below is white board correction.

 FIG. 14 schematically shows an example of the image generated by photographing, with the camera 30, a scene containing the subject 10a1 and the subject 10b. The image 38 shown in FIG. 14 is a hyperspectral image or a compressed image, and both the subject 10a1 and the subject 10b appear in it.

 FIG. 15 is a flowchart schematically showing example 4 of the processing operation performed by the processing circuit 52 in the imaging system according to the present embodiment. The processing circuit 52 performs the operations of steps S501 to S507 shown in FIG. 15.

 <Step S501>
 The processing circuit 52 causes the display device 40 to display an instruction to the user to photograph the subject for correction and the subject to be analyzed. Alternatively, if the imaging system 100 includes a speaker, the processing circuit 52 may cause the speaker to output the instruction by voice.

 Following the instruction, the user places the subject 10a1 and the subject 10b in front of the camera 30. In response to input from the user via the input UI 42, the processing circuit 52 causes the camera 30 to photograph the scene containing the subject 10a1 and the subject 10b and generate the image 38 in which both subjects appear. In this way, the image 38 is generated after the user receives the instruction to photograph the subject 10a1 and the subject 10b.

 <Step S502>
 The processing circuit 52 acquires the image 38 from the camera 30.

 <Step S503>
 Based on the image 38, the processing circuit 52 generates a first sub-image corresponding to the subject 10a1 and a second sub-image corresponding to the subject 10b.

 When the image 38 is a hyperspectral image, the processing circuit 52 extracts the first and second sub-images from the image 38, for example by edge detection. The first sub-image is thus one part of the hyperspectral image, and the second sub-image is another part of it. The first sub-image may be the region defined by the contour of the subject 10a1 in the image 38, or a region of arbitrary shape, such as a rectangle or a circle, that contains the subject 10a1 in the image 38; the same applies to the second sub-image. Note that the processing circuit 52 may perform the operation of extracting the second sub-image from the hyperspectral image between steps S504 and S505.

 When the image 38 is a compressed image, the processing circuit 52 generates a hyperspectral image based on the compressed image and extracts the first and second sub-images from that hyperspectral image. The first sub-image is thus one part of the hyperspectral image, and the second sub-image is another part of it. Note that the processing circuit 52 may perform the operation of extracting the second sub-image from the hyperspectral image between steps S504 and S505.

 Alternatively, when the image 38 is a compressed image, the processing circuit 52 may extract the first sub-image from the compressed image itself, and further generate a hyperspectral image based on the compressed image and extract the second sub-image from that hyperspectral image. In that case, the first sub-image is part of the compressed image and the second sub-image is part of the hyperspectral image. Note that the processing circuit 52 may perform the operations of generating the hyperspectral image based on the compressed image and extracting the second sub-image from it between steps S504 and S505.

 <Step S504>
 Based on the pixel values of the first sub-image, the processing circuit 52 determines whether the first sub-image satisfies a suitability condition as an image for correcting the second sub-image.

 When the image 38 is a hyperspectral image, the suitability condition as an image for white board correction is as described with reference to FIGS. 6A to 7. The same applies when the image 38 is a compressed image and the first sub-image is part of a hyperspectral image.

 When the image 38 is a compressed image and the first sub-image is part of the compressed image, the suitability condition as an image for white board correction is as described with reference to FIGS. 10A and 10B.

 If the determination is Yes, the processing circuit 52 performs the operation of step S505. If the determination is No, the processing circuit 52 performs the operation of step S507.

 <Step S505>
 The processing circuit 52 corrects the second sub-image based on the first sub-image.

 When the image 38 is a hyperspectral image, the processing circuit 52 performs white board correction by normalizing the second sub-image by the first sub-image. The same applies when the image 38 is a compressed image and the first sub-image is one part of the hyperspectral image and the second sub-image another part of it.

 If image 38 is a compressed image, the first sub-image is part of the compressed image, and the second sub-image is part of a hyperspectral image, processing circuit 52 performs whiteboard correction by extracting, from the hyperspectral image from which the second sub-image was extracted, the sub-image corresponding to subject 10a1, and normalizing the second sub-image by that extracted sub-image.
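 The normalization itself can be sketched as a per-band division of the target sub-image by the white-reference sub-image. The sketch below is a minimal illustration of one plausible implementation; the per-band scalar reference (a spatial mean over the white region) and the epsilon guard are assumptions of this sketch, not details fixed by this disclosure.

```python
import numpy as np


def whiteboard_correct(target: np.ndarray, white: np.ndarray,
                       eps: float = 1e-6) -> np.ndarray:
    """Normalize a target sub-image (H x W x N) by a white-reference
    sub-image (h x w x N), band by band."""
    # One reference level per wavelength band, averaged over the white region.
    white_level = white.reshape(-1, white.shape[-1]).mean(axis=0)  # shape (N,)
    # Guard against division by zero in dark or dead bands.
    return target / np.maximum(white_level, eps)
```

 After this call, the pixel values express brightness relative to the white reference, so the illumination spectrum and the sensor's band-to-band sensitivity largely cancel out.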

 <Step S506>
 Processing circuit 52 causes storage device 56 to store the corrected sub-image.

 <Step S507>
 Processing circuit 52 causes display device 40 to display an error indicating that the first sub-image is abnormal.

 If part of processing circuit 52 is an external server installed at a remote location, that external server may execute the operations of steps S502 to S504.

 The above method for image correction according to this embodiment prevents inappropriate whiteboard correction when the user, following the instruction, believes an appropriate subject 10a1 has been photographed but the actual subject 10a1 is not appropriate. As a result, the spectral information of subject 10b can be acquired more accurately than when no determination is made as to whether the first sub-image satisfies the suitability condition. Furthermore, the method makes whiteboard correction possible even when it is not easy to fill the entire image with the white board, as in outdoor shooting.

 [9. Compressive-Sensing Hyperspectral Camera]
 A configuration example of a compressive-sensing hyperspectral camera is described below with reference to FIGS. 16A to 18B. FIG. 16A schematically shows a configuration example of camera 30, a compressive-sensing hyperspectral camera. Like the configuration disclosed in Patent Document 2, camera 30 shown in FIG. 16A includes optical system 31, filter array 32, image sensor 33, and image processing device 34. Optical system 31 and filter array 32 are disposed on the optical path of light incident from subject 10b. In the example of FIG. 16A, filter array 32 is disposed between optical system 31 and image sensor 33.

 Image sensor 33 generates data of compressed image 35, in which the information of multiple wavelength bands is compressed into a single two-dimensional monochrome image. Based on the compressed-image data generated by image sensor 33, image processing device 34 generates data representing multiple images corresponding one-to-one to the wavelength bands included in the target wavelength range. Let N (an integer of 4 or more) be the number of wavelength bands in the target wavelength range. In the following description, the N images generated from compressed image 35 are called restored images 36W1, 36W2, ..., 36WN, and may be collectively referred to as "hyperspectral image 36."

 Filter array 32 is an array of light-transmitting filters arranged in rows and columns. The filters include multiple types whose spectral transmittance, i.e., the wavelength dependence of light transmittance, differs from one another. Filter array 32 modulates the intensity of incident light wavelength by wavelength and outputs the result. This process performed by filter array 32 is called "encoding," and filter array 32 is also called an "encoding mask."

 In the example of FIG. 16A, filter array 32 is disposed near or directly on image sensor 33. Here, "near" means close enough that a reasonably sharp image of the light from optical system 31 is formed on the surface of filter array 32, and "directly on" means the two are so close that almost no gap remains between them. Filter array 32 and image sensor 33 may be integrated.

 Optical system 31 includes at least one lens. Although FIG. 16A shows optical system 31 as a single lens, it may be a combination of lenses. Optical system 31 forms an image on the imaging surface of image sensor 33 through filter array 32.

 Filter array 32 may also be disposed away from image sensor 33. FIGS. 16B to 16D show configuration examples of camera 30 in which filter array 32 is separated from image sensor 33. In FIG. 16B, filter array 32 is disposed between optical system 31 and image sensor 33, at a distance from the sensor. In FIG. 16C, filter array 32 is disposed between subject 10b and optical system 31. In FIG. 16D, camera 30 includes two optical systems 31A and 31B with filter array 32 between them. As in these examples, an optical system including one or more lenses may be disposed between filter array 32 and image sensor 33.

 Image sensor 33 is a monochrome photodetector having a plurality of two-dimensionally arranged photodetection elements (also called "pixels" herein). It may be, for example, a CCD (Charge-Coupled Device), a CMOS (Complementary Metal-Oxide-Semiconductor) sensor, or an infrared array sensor, and each photodetection element includes, for example, a photodiode. Image sensor 33 need not be a monochrome sensor; a color sensor may be used instead. A color sensor may include red (R) filters that transmit red light, green (G) filters that transmit green light, and blue (B) filters that transmit blue light, and may further include IR filters that transmit infrared light, or transparent filters that transmit red, green, and blue light alike. Using a color sensor increases the amount of wavelength information and can improve the accuracy of reconstruction of hyperspectral image 36. The wavelength range to be acquired may be chosen freely; it is not limited to the visible range and may be ultraviolet, near-infrared, mid-infrared, or far-infrared.

 Image processing device 34 may be a computer including one or more processors and one or more storage media such as memory. It generates the data of restored images 36W1, 36W2, ..., 36WN based on compressed image 35 acquired by image sensor 33.

 FIG. 17A schematically shows an example of filter array 32. Filter array 32 has a plurality of two-dimensionally arranged regions, sometimes called "cells" herein. An optical filter with an individually set spectral transmittance is disposed in each region. The spectral transmittance is expressed as a function T(λ), where λ is the wavelength of incident light; T(λ) can take values from 0 to 1.

 In the example of FIG. 17A, filter array 32 has 48 rectangular regions arranged in 6 rows and 8 columns. This is merely illustrative; in practical applications far more regions may be provided, possibly as many as the number of pixels of image sensor 33. The number of filters in filter array 32 is chosen according to the application, for example from several tens to tens of millions.

 FIG. 17B shows an example of the spatial distribution of the light transmittance for each of wavelength bands W1, W2, ..., WN included in the target wavelength range. The shading differences between regions represent transmittance differences: the lighter the region, the higher the transmittance; the darker, the lower. As FIG. 17B shows, the spatial distribution of light transmittance differs from band to band.

 FIGS. 17C and 17D show examples of the spectral transmittance of regions A1 and A2 of filter array 32 in FIG. 17A. The spectral transmittances of regions A1 and A2 differ from each other; the spectral transmittance of filter array 32 thus varies by region. Not every region need differ, however: it suffices that at least some of the regions have mutually different spectral transmittances, i.e., that filter array 32 includes two or more filters with different spectral transmittances. In one example, the number of distinct spectral-transmittance patterns among the regions of filter array 32 may be equal to or greater than the number N of wavelength bands in the target wavelength range. Filter array 32 may be designed so that half or more of the regions have different spectral transmittances.
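 For intuition only, a coding mask of this kind can be modeled as a random per-cell, per-band transmittance array; an actual filter array's transmittances come from its physical design, not from a random generator. In the sketch below the cell count matches FIG. 17A, while the band count, the uniform random values, and the fixed seed are assumptions.

```python
import numpy as np


def make_random_mask(rows: int, cols: int, n_bands: int,
                     seed: int = 0) -> np.ndarray:
    """Toy coding mask: one transmittance in [0, 1] per cell per band."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 1.0, size=(rows, cols, n_bands))


mask = make_random_mask(6, 8, 10)  # 6 x 8 cells as in FIG. 17A, 10 bands
assert 0.0 <= mask.min() and mask.max() <= 1.0
```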

 FIG. 18A illustrates the spectral-transmittance characteristics of one region of filter array 32. In this example, the spectral transmittance has multiple maxima P1 to P5 and multiple minima over the target wavelength range W, normalized so that the maximum transmittance within W is 1 and the minimum is 0. The spectral transmittance has maxima in wavelength ranges such as band W2 and band WN-1. The spectral transmittance of each region can thus be designed to have maxima in at least two of the bands W1, W2, ..., WN. In the example of FIG. 18A, the maxima P1, P3, P4, and P5 are 0.5 or more.

 The light transmittance of each region thus varies with wavelength: filter array 32 transmits much of the incident light in some wavelength ranges and little in others. For example, the transmittance may exceed 0.5 for k of the N wavelength bands and be below 0.5 for the remaining N-k bands, where k is an integer satisfying 2 ≤ k < N. If the incident light were white light containing all visible wavelength components evenly, filter array 32 would modulate it, region by region, into light having multiple discrete intensity peaks over wavelength, and output these multi-wavelength components superimposed.

 FIG. 18B shows, as an example, the spectral transmittance of FIG. 18A averaged over each of the wavelength bands W1, W2, ..., WN. The averaged transmittance is obtained by integrating the spectral transmittance T(λ) over each band and dividing by that band's width. Herein, this band-averaged value is taken as the transmittance in that band. In this example, the transmittance is conspicuously high in the three wavelength ranges containing the maxima P1, P3, and P5, and exceeds 0.8 in the two ranges containing P3 and P5.
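 The band averaging described here is a plain numerical integral. The sketch below computes it with the trapezoidal rule; the sample curve T(λ) and the band edges are made-up inputs for illustration.

```python
import numpy as np


def band_average(wavelengths: np.ndarray, T: np.ndarray,
                 lo: float, hi: float) -> float:
    """Average T(lambda) over [lo, hi]: integrate T, divide by bandwidth."""
    m = (wavelengths >= lo) & (wavelengths <= hi)
    w, t = wavelengths[m], T[m]
    integral = np.sum(0.5 * (t[1:] + t[:-1]) * np.diff(w))  # trapezoidal rule
    return float(integral / (hi - lo))


wl = np.linspace(400.0, 700.0, 301)                # nm, 1 nm grid (illustrative)
T = np.clip(0.5 + 0.5 * np.sin(wl / 20.0), 0, 1)   # made-up T(lambda) curve
print(band_average(wl, T, 500.0, 520.0))           # transmittance of one band
```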

 The examples of FIGS. 17A to 17D assume a grayscale transmittance distribution in which each region's transmittance can take any value from 0 to 1. A grayscale distribution is not required, however; a binary-scale distribution may be adopted in which each region's transmittance is either approximately 0 or approximately 1. In a binary-scale distribution, each region transmits most of the light in at least two of the wavelength ranges in the target wavelength range and transmits little of the light in the remaining ranges. Here "most" means roughly 80% or more.

 Some of the cells, for example half of them, may be replaced with transparent regions. Such transparent regions transmit the light of every band W1, W2, ..., WN in the target wavelength range W with a similarly high transmittance, for example 80% or more. In such a configuration, the transparent regions may be arranged, for example, in a checkerboard pattern: along the two array directions of filter array 32, regions whose transmittance varies with wavelength alternate with transparent regions.

 Data representing the spatial distribution of the spectral transmittance of filter array 32 is obtained in advance from design data or by calibration measurements and stored in a storage medium of image processing device 34. This data is used in the computation described below.

 Filter array 32 can be formed using, for example, a multilayer film, an organic material, a diffraction-grating structure, a metal-containing microstructure, or a metasurface. With a multilayer film, for example a dielectric multilayer film or a multilayer film including metal layers may be used; at least one of the thickness, material, and stacking order of the multilayer film is made to differ from cell to cell. This realizes cell-dependent spectral characteristics, and multilayer films allow sharp rises and falls in the spectral transmittance. A configuration using organic materials can be realized by giving cells different pigments or dyes, or by stacking dissimilar materials. A configuration using a diffraction-grating structure can be realized by giving each cell a grating with a different pitch or depth. A metal-containing microstructure can be fabricated by exploiting plasmonic dispersion. A metasurface can be fabricated by microfabricating a dielectric material at sizes smaller than the wavelength of the incident light, spatially modulating the refractive index seen by the incident light. Alternatively, the incident light may be encoded without filter array 32 by directly processing the pixels of image sensor 33.

 From the above, camera 30 can be said to have multiple light-receiving regions with mutually different photoresponse characteristics. When camera 30 includes filter array 32 with filters whose light-transmission characteristics differ irregularly from one another, the light-receiving regions can be realized by image sensor 33 with filter array 32 disposed near or directly on it. In that case, the photoresponse characteristics of the light-receiving regions are determined by the light-transmission characteristics of the filters in filter array 32.

 Alternatively, when camera 30 has no filter array 32, the light-receiving regions can be realized by, for example, an image sensor 33 whose pixels are directly processed so that their photoresponse characteristics differ irregularly from one another. In that case, the photoresponse characteristics of the light-receiving regions are determined by those of the pixels of image sensor 33.

 Any of the above multilayer films, organic materials, diffraction-grating structures, metal-containing microstructures, or metasurfaces can encode the incident light as long as its spectral transmittance is modulated to vary with position in a two-dimensional plane. These structures therefore need not take the form of discrete filters arranged in an array.

 Next, an example of the signal processing performed by image processing device 34 is described. Image processing device 34 reconstructs the multi-wavelength hyperspectral image 36 from compressed image 35 output by image sensor 33 and from the spatial distribution of the wavelength-dependent transmittance of filter array 32. Here, "multi-wavelength" means more wavelength ranges than the three RGB ranges acquired by an ordinary color camera. The number of such ranges, called the "number of bands," may be, for example, about 4 to 100, and may exceed 100 depending on the application.

 The data to be obtained is the data f of hyperspectral image 36. With N bands, f integrates the data f1, f2, ..., fN of the N image bands. Let the x direction be horizontal and the y direction vertical in the image, with m pixels in x and n pixels in y; each of f1, f2, ..., fN then has m × n pixel values, so f has m × n × N elements. Meanwhile, the data g of compressed image 35, obtained through the encoding and multiplexing performed by filter array 32, is two-dimensional data of m × n pixel values corresponding to m × n pixels. The data g can be expressed by the following equation (1).

$$ g = Hf = H \begin{pmatrix} f_1 \\ f_2 \\ \vdots \\ f_N \end{pmatrix} \qquad (1) $$

 In equation (1), f denotes the hyperspectral-image data expressed as a one-dimensional vector. Each of f1, f2, ..., fN has m × n elements, so the vector on the right-hand side is a one-dimensional vector of m × n × N rows and 1 column. The data g of compressed image 35 is likewise converted to and expressed as a one-dimensional vector of m × n rows and 1 column. The matrix H represents a transformation that encodes and intensity-modulates each component f1, f2, ..., fN of the vector f with band-specific encoding information and sums the results; H is therefore a matrix of m × n rows and m × n × N columns. Equation (1) can also be written as follows.

$$ g = \left( pg_{11} \cdots pg_{1n} \cdots pg_{m1} \cdots pg_{mn} \right)^{T} = H \left( f_1 \cdots f_N \right)^{T} $$
 Here, pg_{ij} denotes the pixel value in the i-th row and j-th column of compressed image 35.
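 The measurement model g = Hf can be simulated without forming the m × n by m × n × N matrix H explicitly: each band of the scene is multiplied by that band's mask pattern, and the products are summed into one monochrome frame. The sketch below does exactly that; the toy scene and toy mask are assumptions of our own, not data from this disclosure.

```python
import numpy as np


def simulate_measurement(cube: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Apply the encoding model g = Hf without building H explicitly.
    cube: scene data f of shape (m, n, N); mask: per-band transmittance
    of the same shape. Each band is modulated by its pattern, then all
    bands are summed into one monochrome measurement g of shape (m, n)."""
    assert cube.shape == mask.shape
    return (cube * mask).sum(axis=2)


rng = np.random.default_rng(0)
f = rng.uniform(size=(6, 8, 10))       # toy scene, 10 bands
H_mask = rng.uniform(size=(6, 8, 10))  # toy coding mask
g = simulate_measurement(f, H_mask)    # compressed image, shape (6, 8)
```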

 If the vector g and the matrix H are given, it might seem that f could be computed by solving the inverse problem of equation (1). However, since the number of elements m × n × N of the desired data f exceeds the number of elements m × n of the acquired data g, the problem is ill-posed and cannot be solved as it stands. Image processing device 34 therefore exploits the sparsity of the images contained in f and finds a solution by compressed sensing. Specifically, the desired data f is estimated by solving the following equation (2).

$$ f' = \arg\min_{f} \left\{ \left\| g - Hf \right\|_{\ell_2}^{2} + \tau \, \Phi(f) \right\} \qquad (2) $$

 Here, f' denotes the estimated data f. The first term inside the braces represents the deviation between the estimate Hf and the acquired data g, the so-called residual term. The sum of squares is used as the residual term here, but an absolute value or the square root of a sum of squares may be used instead. The second term is a regularization or stabilization term. Equation (2) means finding the f that minimizes the sum of the first and second terms; the function inside the braces is called the evaluation function. Image processing device 34 can converge on a solution by recursive iterative computation and output the f that minimizes the evaluation function as the final solution f'.

 The first term inside the braces of equation (2) is the sum of squared differences between the acquired data g and Hf, the transform of the estimate f by the matrix H. Φ(f) in the second term is a constraint in the regularization of f, a function reflecting sparsity information of the estimated data; it has the effect of smoothing or stabilizing the estimate. The regularization term can be expressed by, for example, the discrete cosine transform (DCT), wavelet transform, Fourier transform, or total variation (TV) of f. Using total variation, for example, yields stable estimates that suppress the influence of noise in the observed data g. How sparse subject 10b is in the space of each regularization term depends on the texture of subject 10b, so a regularization term in whose space the texture of subject 10b becomes sparser may be chosen; multiple regularization terms may also be included in the computation. τ is a weighting coefficient: the larger τ is, the more redundant data is removed and the higher the compression ratio; the smaller τ is, the weaker the convergence toward a solution. τ is set to a moderate value at which f converges to some degree without over-compression.
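 One standard solver family for a problem of this shape is proximal gradient descent (ISTA). The sketch below minimizes ||g - Hf||^2 + tau * ||f||_1 with H applied implicitly as in the previous sketch. Soft-thresholding is done in the pixel domain purely for brevity, whereas the text above contemplates sparsity in a DCT, wavelet, Fourier, or TV domain; the step size, tau, and iteration count here are arbitrary assumptions.

```python
import numpy as np


def forward(f, mask):                        # g = Hf, applied implicitly
    return (f * mask).sum(axis=2)


def adjoint(g, mask):                        # H^T g
    return g[:, :, None] * mask


def ista(g, mask, tau=0.01, step=0.02, iters=500):
    """Proximal gradient descent for min_f ||g - Hf||^2 + tau * ||f||_1."""
    f = np.zeros_like(mask)
    for _ in range(iters):
        grad = 2.0 * adjoint(forward(f, mask) - g, mask)  # gradient of residual
        f = f - step * grad
        # Soft-thresholding: the proximal step for the l1 regularization term.
        f = np.sign(f) * np.maximum(np.abs(f) - step * tau, 0.0)
    return f


rng = np.random.default_rng(1)
mask = rng.uniform(size=(6, 8, 10))   # toy coding mask
truth = np.zeros((6, 8, 10))
truth[2:4, 3:5, 4] = 1.0              # sparse toy scene
estimate = ista(forward(truth, mask), mask)
```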

 In the configurations of FIGS. 16B and 16C, the image encoded by filter array 32 is acquired in a blurred state on the imaging surface of image sensor 33. Hyperspectral image 36 can therefore be reconstructed by holding this blur information in advance and reflecting it in the matrix H described above. The blur information is represented by a point spread function (PSF), which defines how far a point image spreads to surrounding pixels. For example, when a point image corresponding to one pixel spreads, due to blur, over a region of k × k pixels around that pixel, the PSF can be defined as a group of coefficients, i.e., a matrix, indicating the effect on the pixel value of each pixel in that region. Reflecting in H the blurring of the encoding pattern described by the PSF allows hyperspectral image 36 to be reconstructed. Filter array 32 may be placed anywhere, but a position is chosen at which its encoding pattern does not diffuse so much that it disappears.
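 In the implicit-operator view used above, reflecting the PSF in H amounts to blurring each band's encoding pattern before applying it. A minimal sketch follows, assuming SciPy is available and using a hand-built Gaussian PSF; the kernel size and sigma are illustrative values, not ones given in this disclosure.

```python
import numpy as np
from scipy.ndimage import convolve  # assumed available for this sketch


def gaussian_psf(k: int = 5, sigma: float = 1.0) -> np.ndarray:
    """k x k normalized Gaussian kernel modeling the spread of a point image."""
    ax = np.arange(k) - (k - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()


def blur_mask(mask: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Convolve each band's encoding pattern with the PSF, giving the
    effective (blurred) patterns that should be reflected in H."""
    return np.stack([convolve(mask[:, :, b], psf, mode="nearest")
                     for b in range(mask.shape[2])], axis=2)
```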

 Through the above processing, hyperspectral image 36 can be restored from compressed image 35 of subject 10b acquired by image sensor 33 through filter array 32. Details of the method for restoring hyperspectral image 36 are disclosed in Patent Document 2, the entire disclosure of which is incorporated herein by reference.

 [10. Supplementary Notes]
 The above description of the embodiments discloses the following techniques.

 (Technique 1)
 A method comprising:
 acquiring a first image generated by photographing a first subject as a correction image for correcting a second image, the second image being generated by photographing a second subject and including information of a plurality of wavelength bands; and
 determining, based on pixel values of the first image, whether the first image satisfies a suitability condition for the correction image.

 This method prevents inappropriate correction and allows the spectral information of the subject under analysis to be acquired more accurately.

 (Technique 2)
 The method according to Technique 1, wherein the correction of the second image based on the first image is black subtraction.

 This method prevents inappropriate black subtraction when the image could not be captured in a light-blocked state.

 (Technique 3)
 The method according to Technique 2, wherein determining whether the first image satisfies the suitability condition is determining that the first image satisfies the suitability condition when a condition under which the pixel values of the first image are judged to be small is satisfied.

 This method makes it possible to determine whether the image was captured in a light-blocked state.

 (Technique 4)
 The method according to Technique 1, wherein the correction of the second image based on the first image is whiteboard correction.

 This method prevents inappropriate whiteboard correction when the white board has not been photographed correctly.

 (Technique 5)
 The method according to Technique 4, wherein determining whether the first image satisfies the suitability condition is determining that the suitability condition is satisfied when a condition under which the spatial distribution of the pixel values of the first image is judged to be uniform is satisfied.

 This method makes it possible to determine whether the white board was photographed correctly.
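 One plausible reading of this uniformity condition is a bound on the coefficient of variation of the pixel values in each band. The sketch below implements only that illustrative reading; the statistic and the 5% threshold are assumptions, not values given in this disclosure.

```python
import numpy as np


def is_spatially_uniform(img: np.ndarray, max_cv: float = 0.05) -> bool:
    """Suitability check: pixel values count as 'uniform' if their
    coefficient of variation (std / mean) stays below a threshold in
    every band of an (H x W x N) image."""
    flat = img.reshape(-1, img.shape[-1]).astype(float)  # (pixels, bands)
    mean = flat.mean(axis=0)
    cv = flat.std(axis=0) / np.maximum(mean, 1e-9)
    return bool(np.all(cv < max_cv))
```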

 (Technique 6)
 The method according to Technique 4, wherein determining whether the first image satisfies the suitability condition is determining whether the first image satisfies the suitability condition by comparing spectral information acquired from the first image with pre-stored spectral information.

 This method makes it possible to determine whether the white board was photographed correctly.
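 Such a comparison could use any spectral-similarity measure. The sketch below compares the mean spectrum of the first image against a stored reference using the spectral angle (SAM); the angle threshold is an assumption of this sketch.

```python
import numpy as np


def matches_reference(img: np.ndarray, reference: np.ndarray,
                      max_angle_rad: float = 0.05) -> bool:
    """Compare the mean spectrum of an (H x W x N) image against a stored
    reference spectrum of length N using the spectral angle (SAM)."""
    spectrum = img.reshape(-1, img.shape[-1]).mean(axis=0)
    cos = np.dot(spectrum, reference) / (
        np.linalg.norm(spectrum) * np.linalg.norm(reference) + 1e-12)
    angle = np.arccos(np.clip(cos, -1.0, 1.0))
    return bool(angle < max_angle_rad)
```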

 (Technique 7)
 The method according to Technique 4, wherein the first image is a compressed image in which information of a plurality of wavelength bands about the first subject is compressed, and determining whether the first image satisfies the suitability condition is determining whether the first image satisfies the suitability condition based on a histogram of pixel values of the compressed image.

 This method makes it possible to determine whether the white board was photographed correctly.
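 A histogram-based check on the compressed image could, for example, verify that the pixel values cluster in a bright, narrow range without clipping at full scale. The sketch below is one such heuristic; the bin layout, the 10-bit full scale, and both thresholds are assumptions made for illustration.

```python
import numpy as np


def histogram_looks_like_white(compressed: np.ndarray,
                               bright_frac: float = 0.9,
                               sat_frac: float = 0.01,
                               full_scale: int = 1023) -> bool:
    """Heuristic histogram check on a compressed (monochrome) image:
    most pixels should sit in the upper half of the range, and almost
    none should be saturated at full scale."""
    hist, _ = np.histogram(compressed, bins=64, range=(0, full_scale))
    upper = hist[32:].sum() / hist.sum()        # mass in the bright half
    saturated = (compressed >= full_scale).mean()
    return bool(upper >= bright_frac and saturated <= sat_frac)
```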

 (Technique 8)
 The method according to Technique 2 or 4, wherein determining whether the first image satisfies the suitability condition is determining whether the first image satisfies the suitability condition by performing edge detection on the first image.

 This method makes it possible to determine whether the image was captured in a light-blocked state, or whether the white board was photographed correctly.

 (Technique 9)
 The method according to any one of Techniques 1 to 8, further comprising displaying an error on a display device when the first image does not satisfy the suitability condition.

 This method lets the user know that the first image does not satisfy the suitability condition.

 (Technique 10)
 The method according to Technique 9, wherein the error prompts the user to confirm whether the white board has been photographed correctly as the first subject.

 This method lets the user confirm whether the white board is being photographed correctly.

 (Technique 11)
 The method according to Technique 9, wherein the error prompts the user to check the light-blocked state at the time of photographing the first subject.

 This method lets the user check the light-blocked state.

 (Technique 12)
 The method according to any one of Techniques 1 to 11, further comprising:
 acquiring the second image; and
 correcting the second image based on the first image when the first image satisfies the suitability condition.

 This method allows the second image to be corrected appropriately based on the first image.

 (Technique 13)
 A method comprising:
 acquiring an image that includes information of a plurality of wavelength bands and is generated by photographing a scene including a first subject and a second subject; and
 determining, based on pixel values of a first sub-image, whether the first sub-image satisfies a suitability condition for a correction image used to correct a second sub-image, the first sub-image being a sub-image corresponding to the first subject generated based on the image, and the second sub-image being a sub-image corresponding to the second subject generated based on the image.

 This method prevents inappropriate correction and allows the spectral information of the subject under analysis to be acquired more accurately.

 (Technique 14)
 The method according to Technique 13, wherein the correction of the second sub-image based on the first sub-image is whiteboard correction.

 This method prevents inappropriate whiteboard correction when the white board has not been photographed correctly.

 (Technique 15)
 A processing circuit that executes the method according to any one of Techniques 1 to 14.

 This processing circuit prevents inappropriate correction and allows the spectral information of the subject under analysis to be acquired more accurately.

 (Other Embodiment 1)
 A subject other than a white board may be used as the correction subject. The subject may have spectral reflectance characteristics in which, for example, the reflectance is high in one specific wavelength range and low in another. The spectral reflectance characteristics of the subject may be stored in, for example, memory 54, and correction may be performed based on the stored characteristics to reduce the influence of the spectral shape of the illumination light, the illumination distribution at the time of shooting, peripheral light falloff of the lens, nonuniform sensitivity of the image sensor, and the like.

 (Other Embodiment 2)
 The first image may be, for example, an image including information of three or fewer wavelength bands. For example, the second image including information of a plurality of wavelength bands may be an image generated based on a wavelength band corresponding to red, a wavelength band corresponding to green, and a wavelength band corresponding to blue.

 The second image including information of a plurality of wavelength bands may be, for example, an image including information of three or fewer wavelength bands. The corrected image generated by correcting the second image using the first image may likewise include information of three or fewer wavelength bands.

 The technology of the present disclosure is useful, for example, in cameras and measuring instruments that acquire multi-wavelength or high-resolution images. It can also be applied, for example, to sensing for biomedical, medical, and cosmetic applications, inspection systems for foreign matter and residual agrochemicals in food, remote sensing systems, and in-vehicle sensing systems.

Reference Signs List
10a1, 10a2, 10b    subject
20    light source
30    hyperspectral camera
31, 31A, 31B    optical system
32    filter array
33    image sensor
34    image processing device
35    compressed image
36W1 to 36WN    restored image
38    image
40    display device
42    input UI
44    display UI
50    processing device
52    processing circuit
54    memory
56    storage device
60    stage
70    support
80    adjustment device

Claims (15)

1. A method comprising:
acquiring a first image generated by photographing a first subject as a correction image for correcting a second image, the second image being generated by photographing a second subject and including information of a plurality of wavelength bands; and
determining, based on pixel values of the first image, whether the first image satisfies a suitability condition for the correction image.
2. The method according to claim 1, wherein the correction of the second image based on the first image is black subtraction.
3. The method according to claim 2, wherein determining whether the first image satisfies the suitability condition is determining that the first image satisfies the suitability condition when a condition under which the pixel values of the first image are judged to be small is satisfied.
4. The method according to claim 1, wherein the correction of the second image based on the first image is whiteboard correction.
5. The method according to claim 4, wherein determining whether the first image satisfies the suitability condition is determining that the suitability condition is satisfied when a condition under which the spatial distribution of the pixel values of the first image is judged to be uniform is satisfied.
6. The method according to claim 4, wherein determining whether the first image satisfies the suitability condition is determining whether the first image satisfies the suitability condition by comparing spectral information acquired from the first image with pre-stored spectral information.
7. The method according to claim 4, wherein the first image is a compressed image in which information of a plurality of wavelength bands about the first subject is compressed, and determining whether the first image satisfies the suitability condition is determining whether the first image satisfies the suitability condition based on a histogram of pixel values of the compressed image.
8. The method according to claim 2 or 4, wherein determining whether the first image satisfies the suitability condition is determining whether the first image satisfies the suitability condition by performing edge detection on the first image.
9. The method according to any one of claims 1 to 7, further comprising displaying an error on a display device when the first image does not satisfy the suitability condition.
10. The method according to claim 9, wherein the error prompts the user to confirm whether the white board has been photographed correctly as the first subject.
11. The method according to claim 9, wherein the error prompts the user to check the light-blocked state at the time of photographing the first subject.
12. The method according to any one of claims 1 to 7, further comprising:
acquiring the second image; and
correcting the second image based on the first image when the first image satisfies the suitability condition.
13. A method comprising:
acquiring an image that includes information of a plurality of wavelength bands and is generated by photographing a scene including a first subject and a second subject; and
determining, based on pixel values of a first sub-image, whether the first sub-image satisfies a suitability condition for a correction image used to correct a second sub-image, the first sub-image being a sub-image corresponding to the first subject generated based on the image, and the second sub-image being a sub-image corresponding to the second subject generated based on the image.
14. The method according to claim 13, wherein the correction of the second sub-image based on the first sub-image is whiteboard correction.
15. A processing circuit that executes the method according to any one of claims 1 to 7.
PCT/JP2024/041056 2023-11-29 2024-11-20 Method used for image correction, and processing circuit for executing said method Pending WO2025115710A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2023202073 2023-11-29
JP2023-202073 2023-11-29

Publications (1)

Publication Number Publication Date
WO2025115710A1 true WO2025115710A1 (en) 2025-06-05

Family

ID=95897760

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/041056 Pending WO2025115710A1 (en) 2023-11-29 2024-11-20 Method used for image correction, and processing circuit for executing said method

Country Status (1)

Country Link
WO (1) WO2025115710A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010130682A (en) * 2008-12-01 2010-06-10 Quantaview Inc Color correcting method
JP2016127567A (en) * 2015-01-08 2016-07-11 キヤノン株式会社 Image processor, information processing method and program
CN113420614A (en) * 2021-06-03 2021-09-21 江苏海洋大学 Method for identifying mildewed peanuts by using near-infrared hyperspectral images based on deep learning algorithm


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24897396

Country of ref document: EP

Kind code of ref document: A1