
WO2025126817A1 - Information processing device, information processing method, and program - Google Patents


Info

Publication number
WO2025126817A1
Authority
WO
WIPO (PCT)
Prior art keywords
spectrum
light source
sky
information processing
estimation unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2024/041564
Other languages
French (fr)
Japanese (ja)
Inventor
泰洋 八尾
隼 今村
毅 長門
祥平 鎌田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Semiconductor Solutions Corp
Original Assignee
Sony Semiconductor Solutions Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corp filed Critical Sony Semiconductor Solutions Corp
Publication of WO2025126817A1 publication Critical patent/WO2025126817A1/en

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 - Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/02 - Details
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 - Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/46 - Measurement of colour; Colour measuring devices, e.g. colorimeters
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 - Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/46 - Measurement of colour; Colour measuring devices, e.g. colorimeters
    • G01J3/50 - Measurement of colour; Colour measuring devices, e.g. colorimeters using electric radiation detectors
    • G01J3/51 - Measurement of colour; Colour measuring devices, e.g. colorimeters using electric radiation detectors using colour filters
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 - Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25 - Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/27 - Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands using photo-electric detection; circuits for computing concentration
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01W - METEOROLOGY
    • G01W1/00 - Meteorology
    • G01W1/12 - Sunshine duration recorders

Definitions

  • This technology relates to an information processing device, an information processing method, and a program, and in particular to a technology for estimating the components of a light source spectrum contained in a spectroscopic image obtained by a spectroscopic camera.
  • Spectroscopic sensors are known that obtain multiple narrowband images, which are wavelength characteristic analysis images of light from a subject, in other words, analysis images of the subject's spectral information (spectrum).
  • Applications have been developed that perform various analyses of a subject based on these multiple narrowband images obtained by the spectroscopic sensor, such as estimating the vegetation state of plants or the condition of human skin.
  • In such analyses, the spectral information of the light source illuminating the subject becomes a noise component, and it is desirable to remove this noise component. For this reason, in the field of spectroscopic sensing, the light source spectrum, which is the spectral information of the light source, is estimated.
  • Patent Document 1 discloses an image acquisition device that includes an image sensor that acquires a predetermined image, and a processor that acquires a basis based on the surrounding environment, estimates lighting information using the acquired basis, and performs color conversion of the image reflecting the estimated lighting information.
  • Patent Document 1 estimates the lighting information (light source spectrum) by preparing a basis for each expected environment in advance and selecting a basis according to the results of environmental detection by a sensor; it therefore cannot estimate the light source spectrum under environmental changes outside the expected environments.
  • This technology was developed in consideration of the above issues, and aims to improve the robustness of light source spectrum estimation.
  • The information processing device according to the present technology includes a light source spectrum estimation unit that uses a light source model that expresses a light source spectrum, which is spectral information of an outdoor light source, by weighted addition of a solar spectrum, which is spectral information of light irradiated from the sun, and a sky spectrum, which is spectral information of light irradiated from the sky, to estimate the light source spectrum based on an object spectrum, which is spectral information of a subject obtained by a spectroscopic camera.
  • This makes it possible to estimate the light source spectrum by fitting a function serving as a light source model to the object spectrum detected by the spectroscopic camera.
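  • The core of the model is a weighted addition of two component spectra, and once the light source spectrum is estimated it can be divided out of an object spectrum to approximate the object's spectral reflectance. A minimal sketch of this idea (the array layout and the epsilon guard are illustrative assumptions, not part of the patent):

```python
import numpy as np

def light_source_spectrum(sun, sky, b0, b1):
    # the model's core equation: weighted addition of the sunlight
    # spectrum and the sky spectrum
    return b0 * sun + b1 * sky

def cancel_light_source(object_spectrum, light, eps=1e-8):
    # dividing out the estimated light source leaves an approximation
    # of the object's spectral reflectance (up to scale)
    return object_spectrum / (light + eps)
```

Here sun, sky, and object_spectrum would each hold M brightness values, one per narrowband image.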
  • FIG. 1 is a diagram illustrating an example of the configuration of a light source spectrum estimation system including an information processing device according to an embodiment.
  • FIG. 2 is a block diagram showing an example of a schematic configuration of the spectroscopic camera used in the embodiment.
  • FIG. 3 is a diagram illustrating a schematic configuration example of a pixel array unit of the spectroscopic sensor.
  • FIG. 4 is an explanatory diagram of the band narrowing processing in the embodiment.
  • FIG. 5 is a block diagram illustrating an example of a hardware configuration of the information processing device according to the embodiment.
  • FIG. 6 is a functional block diagram for explaining functions of the information processing device according to the first embodiment.
  • FIG. 7 is a diagram showing an example of a solar spectrum and a sky spectrum.
  • FIG. 9 is a block diagram showing an example of the configuration of a light source spectrum estimation system according to a second embodiment.
  • FIG. 10 is a functional block diagram for explaining functions of an information processing device according to the second embodiment.
  • FIG. 11 is a functional block diagram for explaining functions of an information processing device as a modified example.
  • <1. First embodiment> [1-1. System configuration] [1-2. Spectroscopic camera configuration] [1-3. Configuration of information processing device] [1-4. Functions of information processing device] <2. Second embodiment> <3. Modifications> <4. Summary of embodiment> <5. Present technology>
  • FIG. 1 is a diagram showing an example of the configuration of a light source spectrum estimation system including an information processing device according to an embodiment.
  • The light source spectrum estimation system includes an information processing device 1, a spectroscopic camera 2, and an upward-facing camera 3.
  • The spectroscopic camera 2 refers to a camera equipped with a spectroscopic sensor as a light receiving sensor.
  • The "spectroscopic sensor" is a light receiving sensor for obtaining a plurality of narrowband images that serve as wavelength characteristic analysis images of light from a subject.
  • The information processing device 1 is configured as a computer device, and as described below, performs processing to estimate the light source spectrum, which is the spectral information of the light source, based on the spectral information of the subject obtained by the spectroscopic camera 2.
  • Spectral information refers to information that indicates the light intensity for each wavelength.
  • The information processing device 1 estimates the light source spectrum of an outdoor light source (natural light source), using a light source model that expresses the light source spectrum, which is the spectral information of an outdoor light source, by weighted addition of the sunlight spectrum, which is the spectral information of light irradiated from the sun, and the sky spectrum, which is the spectral information of light irradiated from the sky. Details will be explained later.
  • The upward-facing camera 3 is a camera that captures images of the sky, in other words, a camera whose imaging direction is directed upward and that is capable of obtaining images of the sky. The use of the upward-facing camera 3 will be explained later.
  • FIG. 2 is a block diagram showing an example of the configuration of the spectroscopic camera 2.
  • The spectroscopic camera 2 includes at least a spectroscopic sensor 4 and a spectroscopic image generating unit 5.
  • The spectroscopic camera 2 of the embodiment includes a control unit 6, a timing unit 7, a GNSS (Global Navigation Satellite System) sensor 8, and a communication unit 9 in addition to the spectroscopic sensor 4 and the spectroscopic image generating unit 5.
  • FIG. 3 is a schematic diagram showing an example of the configuration of the pixel array unit 4 a of the spectroscopic sensor 4 .
  • The pixel array section 4a has a plurality of spectroscopic pixel units Pu arranged two-dimensionally, each of which has a plurality of pixels Px, each receiving light of a different wavelength band, arranged in a predetermined pattern.
  • FIG. 3 shows an example in which each spectroscopic pixel unit Pu individually receives light in a total of eight wavelength bands, from λ1 to λ8, at each pixel Px; in other words, the number of wavelength bands received and separated within each spectroscopic pixel unit Pu (hereinafter referred to as the "number of received wavelength channels") is eight. However, this is merely one example for explanatory purposes, and the number of received wavelength channels in the spectroscopic pixel unit Pu may be set arbitrarily as long as it is at least two.
  • Hereinafter, the number of received wavelength channels in the spectroscopic pixel unit Pu is defined as "N".
  • The spectroscopic image generating unit 5 generates M narrowband images based on a RAW image output from the spectroscopic sensor 4.
  • The spectroscopic image generating unit 5 has a demosaic unit 5a and a narrowband image generating unit 5b.
  • The demosaic unit 5a performs demosaic processing on the RAW image from the spectroscopic sensor 4, and the narrowband image generating unit 5b performs band narrowing processing (linear matrix processing) based on the wavelength band images for N channels obtained by the demosaic processing, thereby generating M narrowband images from the N wavelength band images.
  • FIG. 4 is an explanatory diagram of the band narrowing process for obtaining M narrowband images.
  • As shown, a predetermined matrix operation is performed for each pixel position based on the N-channel wavelength band images obtained by the demosaic process performed by the demosaic unit 5a, thereby obtaining M-channel narrowband images.
  • The process of obtaining M-channel pixel values (I'0 to I'M-1 in the figure) by a matrix operation using the N-channel pixel values (I0 to IN-1 in the figure) for each pixel position is the band narrowing process.
  • The calculation formula for the band narrowing process can be expressed as the following [Equation 1]: Bm = Cm[0] x I0 + Cm[1] x I1 + ... + Cm[N-1] x IN-1 (m = 0, 1, ..., M-1).
  • In [Equation 1], a total of N x M narrowing coefficients C are used: C0[0] to C0[N-1] for obtaining pixel value B0, C1[0] to C1[N-1] for obtaining pixel value B1, ..., and CM-1[0] to CM-1[N-1] for obtaining pixel value BM-1.
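  • The per-pixel matrix operation of [Equation 1], applied over a whole image, can be sketched as follows (the (H, W, N) image layout is an assumption for illustration):

```python
import numpy as np

def narrowband(raw_channels, C):
    # raw_channels: (H, W, N) demosaiced wavelength-band images
    # C: (M, N) matrix of narrowing coefficients; row m holds
    #    Cm[0] ... Cm[N-1]
    # returns: (H, W, M) narrowband images, Bm = sum_n Cm[n] * In
    return np.einsum('mn,hwn->hwm', C, raw_channels)
```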
  • The control unit 6 is configured with a microcomputer having, for example, a CPU (Central Processing Unit), a ROM (Read Only Memory), and a RAM (Random Access Memory), and the CPU performs overall control of the spectroscopic camera 2 by executing processing based on, for example, a program stored in the ROM or a program loaded into the RAM.
  • The information processing device 1 is not limited to being configured by a single computer device as shown in Fig. 5, but may be configured as a system of multiple computer devices.
  • The multiple computer devices may be systemized by a LAN (Local Area Network) or the like, or may include devices in remote locations connected by a VPN (Virtual Private Network) using the Internet or the like.
  • The multiple computer devices may include computer devices as a server group (cloud) available through a cloud computing service.
  • The information processing device 1 estimates a light source spectrum based on the spectral information of the subject obtained by the spectroscopic camera 2, using a predetermined light source model.
  • Hereinafter, various functions of the information processing device 1 according to the first embodiment, including the light source spectrum estimation function, will be described.
  • In the following, the spectral information of the subject obtained by the spectroscopic camera 2 (the brightness values of each narrowband image) is referred to as the "object spectrum."
  • In the embodiment, a light source model that represents the light source spectrum by weighted addition of the sunlight spectrum and the sky spectrum is used to estimate the light source spectrum.
  • Specifically, the Bird model is used as the light source model.
  • The environmental parameters used in calculating the solar spectrum and the sky spectrum are estimated based on the image captured by the upward camera 3, the position information detected by the GNSS sensor 8, and the current time information measured by the timing unit 7.
  • Among these, the solar altitude, the solar angle, and the amount of ozone are estimated based on the position information detected by the GNSS sensor 8 and the current time information measured by the timing unit 7.
  • This estimation is performed by the first environmental parameter estimation unit F2.
  • The amount of ozone can be estimated from the latitude, longitude, and time.
  • The solar altitude can be estimated from the time.
  • The solar angle can be estimated from the latitude, longitude, and time.
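  • As an illustration of how solar geometry follows from latitude, longitude, and time, a low-accuracy approximation can be sketched as below. It uses Cooper's declination formula and ignores the equation of time; it is not the method specified by the patent:

```python
import math
from datetime import datetime, timezone

def solar_elevation(lat_deg, lon_deg, when_utc):
    # simplified solar position: adequate only to illustrate that
    # elevation is computable from latitude, longitude, and time
    day = when_utc.timetuple().tm_yday
    frac_hour = when_utc.hour + when_utc.minute / 60 + when_utc.second / 3600
    # solar declination (Cooper's approximation)
    decl = math.radians(23.45) * math.sin(math.radians(360 * (284 + day) / 365))
    # hour angle from approximate local solar time (equation of time ignored)
    solar_time = frac_hour + lon_deg / 15.0
    hour_angle = math.radians(15 * (solar_time - 12))
    lat = math.radians(lat_deg)
    elev = math.asin(math.sin(lat) * math.sin(decl)
                     + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(elev)
```

The solar azimuth (the "solar angle") follows from the same quantities with one more spherical-trigonometry step.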
  • In step S103, the CPU 11 substitutes the derived b0 and b1 and the initial value c into the error function Fd to derive m0, m1, and m2 that minimize the error Ds. Furthermore, in the following step S104, the CPU 11 substitutes the derived b0, b1, m0, m1, and m2 into the error function Fd to derive c that minimizes the error Ds.
  • In step S105, the CPU 11 substitutes the m0, m1, m2, and c derived in the immediately preceding derivation process into the error function Fd to derive b0 and b1 that minimize the error Ds. That is, for example, if the immediately preceding derivation process is the initial derivation process, the m0, m1, m2, and c finally derived in the initial derivation process are substituted into the error function Fd to derive b0 and b1 that minimize the error Ds.
  • In step S106, the CPU 11 substitutes the derived b0 and b1 and the c derived in the immediately preceding derivation process into the error function Fd to derive m0, m1, and m2 that minimize the error Ds. Furthermore, in the following step S107, the CPU 11 substitutes the derived b0, b1, m0, m1, and m2 into the error function Fd to derive c that minimizes the error Ds.
  • In step S108, the CPU 11 determines whether the derivation end condition is satisfied.
  • As the derivation end condition, a condition is set from which it can be estimated that the degree of parameter optimization (reduction of the error Ds) has reached a desired level.
  • For example, a condition based on the value of the error Ds calculated using the finally derived b0, b1, m0, m1, m2, and c can be set.
  • Specifically, a condition that the error Ds is equal to or less than a predetermined threshold value, or a condition that an inflection point of the value of the error Ds (a turn from decreasing to increasing) is detected, can be set.
  • The CPU 11 executes the above-described derivation process for each pixel. This determines the light source spectrum for each pixel, and also the weighting coefficients b0 and b1 for each pixel.
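  • The alternating derivation of steps S101 to S107 can be sketched as two least-squares subproblems solved in turn. The quadratic reflectance parameterization below, and folding the offset c into m0, are assumptions made purely for illustration; the exact roles of m0, m1, m2, and c are not defined in this excerpt:

```python
import numpy as np

def estimate_b_and_m(sun, sky, wavelengths, observed, iters=50):
    # assumed model: observed ~= (b0*sun + b1*sky) * R(lam),
    # with reflectance R(lam) = m0 + m1*lam + m2*lam^2
    basis = np.stack([np.ones_like(wavelengths), wavelengths,
                      wavelengths ** 2], axis=1)
    m = np.array([1.0, 0.0, 0.0])          # start from a flat reflectance
    b = np.zeros(2)
    for _ in range(iters):
        refl = basis @ m
        # fix the reflectance, derive b0 and b1 minimizing the error Ds
        A = np.stack([sun * refl, sky * refl], axis=1)
        b, *_ = np.linalg.lstsq(A, observed, rcond=None)
        # fix b0 and b1, derive the reflectance parameters minimizing Ds
        light = b[0] * sun + b[1] * sky
        m, *_ = np.linalg.lstsq(basis * light[:, None], observed,
                                rcond=None)
    return b, m
```

Because the product of light source and reflectance is bilinear, b and m are only determined up to a shared scale, so the reconstructed spectrum, rather than the raw parameter values, is the meaningful output.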
  • In the above, as an example of estimating the light source spectrum, a method has been described that derives the weighting coefficients b0 and b1 using a parameter optimization method that minimizes the error Ds, but the estimation method is not limited to this.
  • For example, b0 and b1 may be inferred for each pixel of the image captured by the spectroscopic camera 2 using a DNN (Deep Neural Network) capable of producing output of the same size as the input image, such as a U-Net used for segmentation; this DNN can be trained, for example, using the error Ds.
  • In this case, m0, m1, m2, and c are required for the calculation of the error Ds, and these can be derived, for example, from the b0 and b1 inferred by the DNN, using the same method as in steps S103 and S104 described above.
  • The aerosol turbidity and the amount of precipitated water vapor are estimated based on the images captured by the upward-facing camera 3.
  • In this example, the upward camera 3 is configured as a camera equipped with a spectroscopic sensor 4, similar to the spectroscopic camera 2, and is configured to obtain spectroscopic images (M narrowband images) as output images.
  • The solar spectrum and the sky spectrum are expressed as functions with the solar altitude, the solar angle, the amount of ozone, the aerosol turbidity, and the amount of precipitated water vapor as variables.
  • Hereinafter, the aerosol turbidity is represented as α and the amount of precipitated water vapor as β.
  • The aerosol turbidity α and the amount of precipitated water vapor β can be estimated by defining a predetermined error function and deriving the parameters that minimize the error.
  • Specifically, spectral information expressed by the following formula is detected on the imaging plane of the upward camera 3: sunlight spectrum x b0' + sky spectrum x b1'.
  • The weighting coefficient b0' of the sunlight spectrum and the weighting coefficient b1' of the sky spectrum in the above formula can be said to be coefficients indicating the degree of sunlight at the position where the upward camera 3 is disposed (the imaging position).
  • In the embodiment, the following error function Fd' is used as the error function for estimating the aerosol turbidity α and the amount of precipitated water vapor β.
  • Here, the "upward average spectral information" refers to the average value over the entire image of the spectral information obtained by the upward camera 3. Specifically, it is the M brightness values obtained by calculating the average brightness value of the entire image for each of the M narrowband images.
  • The solar spectrum and the sky spectrum are each expressed as functions with the solar altitude, the solar angle, the ozone amount, the aerosol turbidity α, and the precipitated water vapor amount β as variables. The solar altitude, solar angle, and ozone amount estimated by the first environmental parameter estimation unit F2, as well as the aerosol turbidity α and the precipitated water vapor amount β, are substituted into the calculation of the solar spectrum and the sky spectrum in the above formula, respectively.
  • The second environmental parameter estimation unit F3 estimates the aerosol turbidity α and the amount of precipitated water vapor β in the following procedure. First, initial values of α and β are set. Standard values may be used as the initial values.
  • The second environmental parameter estimation unit F3 substitutes α and β as initial values into the error function Fd', and derives b0' and b1' that minimize the error Ds'. Furthermore, the second environmental parameter estimation unit F3 substitutes the derived b0' and b1' and β as an initial value into the error function Fd', and derives α that minimizes the error Ds'. Next, the second environmental parameter estimation unit F3 substitutes the derived b0', b1', and α into the error function Fd' to derive β that minimizes the error Ds'.
  • In this way, the initial derivation process of b0', b1', α, and β is executed.
  • The second and subsequent derivation processes are executed as follows until a predetermined derivation end condition is met. That is, first, the α and β derived in the immediately preceding derivation process are substituted into the error function to derive b0' and b1' that minimize the error Ds'. Next, the derived b0' and b1' and the β derived in the immediately preceding derivation process are substituted into the error function Fd' to derive α that minimizes the error Ds'. Then, the derived b0', b1', and α are substituted into the error function Fd' to derive β that minimizes the error Ds'.
  • The derivation end condition may be set to the same condition as in the above-mentioned derivation process for b0, b1, m0, m1, m2, and c.
  • The above derivation process makes it possible to estimate the aerosol turbidity α and the amount of precipitated water vapor β, used for calculating the solar spectrum and the sky spectrum, in accordance with the environment.
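  • The outer search over the turbidity and water-vapor parameters can also be sketched with a simple grid search in place of the alternating updates, with b0' and b1' solved in closed form at each candidate. The toy_bird_model below is a stand-in for the Bird model (whose real inputs also include the solar altitude, solar angle, and ozone amount), introduced purely to make the sketch self-contained:

```python
import numpy as np

def toy_bird_model(alpha, beta, wavelengths):
    # placeholder spectra; the real Bird model computes the sunlight and
    # sky spectra from the full set of environmental parameters
    sun = np.exp(-alpha * wavelengths) * (1.0 - 0.1 * beta)
    sky = wavelengths ** -0.5 * np.exp(-beta * wavelengths)
    return sun, sky

def estimate_alpha_beta(upward_avg, wavelengths, alphas, betas,
                        model=toy_bird_model):
    # for each candidate (alpha, beta), derive b0' and b1' by least
    # squares and keep the pair minimizing the residual error Ds'
    best = (None, None, np.inf)
    for a in alphas:
        for b in betas:
            sun, sky = model(a, b, wavelengths)
            A = np.stack([sun, sky], axis=1)
            w, *_ = np.linalg.lstsq(A, upward_avg, rcond=None)
            err = float(np.sum((A @ w - upward_avg) ** 2))
            if err < best[2]:
                best = (a, b, err)
    return best[0], best[1]
```

Here upward_avg plays the role of the upward average spectral information, i.e. the M per-band average brightness values.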
  • The estimation of the environmental parameters by the second environmental parameter estimation unit F3 has been exemplified by a case in which the environmental parameters are derived by a parameter optimization method that minimizes the error Ds', but the estimation method is not limited to this.
  • For example, it is also possible to use an AI (artificial intelligence) model for the estimation. The machine learning of the AI model may be performed by deep learning, using images captured of the sky as learning input data and correct answer information for α and β as teacher data.
  • Furthermore, the upward camera 3 is not limited to a camera equipped with a spectroscopic sensor 4 like the spectroscopic camera 2; any camera configured to receive and separate light of at least a plurality of wavelength bands, such as an RGB camera, may be used.
  • The image processing by the image processing unit F4 may include semantic segmentation processing based on the light source spectrum estimated by the light source spectrum estimation unit F1 and the spectroscopic image obtained by the spectroscopic camera 2.
  • As the AI model for performing the semantic segmentation processing, rather than using a model trained with images from which the light source component has been removed as learning input data, it is also possible to use a model trained with images from which the light source component has not been removed, together with the light source spectrum, as learning input data. This makes it possible to realize highly accurate semantic segmentation processing that eliminates the influence of light source components (specifically, the influence of sunlight and shade) while eliminating the need for light source cancellation processing.
  • The sun/shade area determination unit F5 determines whether the subject captured by the spectroscopic camera 2 is in a sunny or shaded area, using the sunlight spectrum weighting coefficient b0 and the sky spectrum weighting coefficient b1 estimated by the light source spectrum estimation unit F1.
  • The weighting coefficient b0 of the sunlight spectrum tends to be larger in sunny areas than in shaded areas, so pixels with a large weighting coefficient b0 are determined to be in sunny areas, and other pixels are determined to be in shaded areas.
  • For example, one possible method is to generate a histogram of b0, find from the histogram a threshold value of b0 for discriminating sunny from shaded areas, and determine whether each area is sunny or shaded based on the result of comparing b0 with the threshold value.
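  • One concrete way to pick the threshold from the histogram of b0 is Otsu's method, which maximizes the between-class variance; the text leaves the thresholding method open, so this is just one assumed choice:

```python
import numpy as np

def sun_shade_mask(b0_map, bins=64):
    # histogram-based threshold on the sunlight weighting coefficient b0,
    # computed with Otsu's method (one possible choice)
    hist, edges = np.histogram(b0_map, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:i] * centers[:i]).sum() / w0
        mu1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[i]
    return b0_map >= best_t                # True = sunny, False = shaded
```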
  • In the above, the image processing by the image processing unit F4 is realized by software processing by the CPU 11, but at least a part of the image processing may be realized by a method other than software processing by the CPU 11.
  • For example, at least a part of the image processing by the image processing unit F4 may be realized by hardware processing by a signal processing unit outside the CPU 11, such as a DSP (Digital Signal Processor) or a GPU (Graphics Processing Unit).
  • In the second embodiment, the light source spectrum is estimated not for each divided region as in the first embodiment, but as an average light source spectrum for the entire image.
  • In addition, whereas all environmental parameters are estimated in the first embodiment, fixed values (standard values) are used for at least some of the environmental parameters without estimation.
  • Specifically, among the environmental parameters used to estimate the light source spectrum, namely the solar altitude, the solar angle, the amount of ozone, the aerosol turbidity, and the amount of precipitated water vapor, standard values are used for the amount of ozone, the aerosol turbidity, and the amount of precipitated water vapor, while the solar altitude and the solar angle are estimated.
  • FIG. 9 is a block diagram showing an example of the configuration of a light source spectrum estimation system according to the second embodiment.
  • Parts that are similar to parts already described are given the same reference numerals, and descriptions thereof are omitted.
  • The changes from the first embodiment are that the upward-facing camera 3 is omitted and that an information processing device 1A is used instead of the information processing device 1.
  • FIG. 10 is a functional block diagram for explaining functions of an information processing device 1A according to the second embodiment.
  • The CPU 11 included in the information processing device 1A is referred to as a CPU 11A.
  • FIG. 10 shows the functions of the CPU 11A in blocks.
  • The differences from the CPU 11 in the first embodiment are that the second environmental parameter estimation unit F3 and the sun/shade area determination unit F5 are omitted, that a first environmental parameter estimation unit F2A is provided instead of the first environmental parameter estimation unit F2, and that a light source spectrum estimation unit F1A is provided instead of the light source spectrum estimation unit F1.
  • The first environmental parameter estimation unit F2A estimates the solar altitude and the solar angle based on the current position information and current time information input from the spectroscopic camera 2.
  • The image processing unit F4A performs predetermined image processing based on the spectroscopic image input from the spectroscopic camera 2 and the average light source spectrum of the entire image obtained by the light source spectrum estimation unit F1A.
  • For example, the image processing by the image processing unit F4A may involve removing the average light source spectrum of the entire image from the spectroscopic image.
  • In the second embodiment above, standard values are used for the amount of ozone, the aerosol turbidity, and the amount of precipitated water vapor, but the combination of environmental parameters for which standard values are used is not limited to this. It is also possible to use a standard value for only the amount of ozone, or to use standard values for only the aerosol turbidity and the amount of precipitated water vapor as the second environmental parameters. Alternatively, a standard value may be used for only one of the aerosol turbidity and the amount of precipitated water vapor.
  • In such a modified example, the system includes the upward camera 3, as in the first embodiment.
  • FIG. 11 is a functional block diagram for explaining the functions of a CPU 11X corresponding to this case.
  • The sunlight spectrum and the sky spectrum are calculated using the solar altitude and the solar angle estimated by the first environmental parameter estimation unit F2A, together with standard values for the amount of ozone, the aerosol turbidity, and the amount of precipitated water vapor; the light source spectrum is then estimated by substituting the standard values for b0 and b1 in the aforementioned "sunlight spectrum * b0 + sky spectrum * b1".
  • In a modified example, the amount of ozone can be estimated based on the location information and time information, as in the first embodiment. Also, at least one of the aerosol turbidity and the amount of precipitated water vapor can be estimated based on an image captured by the upward camera 3, as in the first embodiment.
  • For b0 and b1, a configuration can also be adopted in which the b0' and b1' calculated by the second environmental parameter estimation unit F3 described with reference to FIG. 6, based on the image captured by the upward camera 3, are used.
  • In the above, the information processing device according to the present technology is configured as a device separate from the spectroscopic camera 2, but it may also be configured as a device integrated with the spectroscopic camera 2.
  • The information processing device (1 or 1A) as an embodiment includes a light source spectrum estimation unit (F1, F1A) that estimates a light source spectrum based on an object spectrum that is spectral information of a subject obtained by a spectroscopic camera, using a light source model that expresses a light source spectrum that is spectral information of an outdoor light source by weighted addition of a solar spectrum that is spectral information of light irradiated from the sun and a sky spectrum that is spectral information of light irradiated from the sky.
  • This makes it possible to estimate the light source spectrum by fitting a function serving as a light source model to the object spectrum detected by the spectroscopic camera.
  • the light source spectrum estimation unit estimates the light source spectrum by estimating the parameters of the light source model and the object spectral reflectance, using the error (Ds) between the trial object spectrum, which is a spectrum calculated by multiplying the trial light source spectrum, which is a light source spectrum calculated by setting candidate values for the parameters of the light source model, by a candidate value of the object spectral reflectance indicating the spectral reflectance of the object as the subject, and the object spectrum as a reference value for estimation.
  • the light source spectrum estimating unit estimates the weighting coefficient (b0) of the sunlight spectrum and the weighting coefficient (b1) of the sky spectrum as parameters of the light source model based on the error. This makes it possible to estimate an appropriate light source spectrum according to whether the environment is in sunlight or shade.
  • a first environmental parameter estimation unit (F2 or F2A) is provided that estimates at least a portion of the environmental parameters used by the light source spectrum estimation unit to calculate the solar spectrum and the sky spectrum based on at least one of the position information and the time information of the spectroscopic camera.
  • in a configuration in which the light source spectrum estimation unit uses environmental parameters such as the solar altitude, solar angle, and ozone amount to calculate the solar spectrum and the sky spectrum, the first environmental parameter estimation unit estimates at least one of the solar altitude, solar angle, and ozone amount based on at least one of the position information of the spectroscopic camera and current time information.
  • the solar altitude, solar angle, and amount of ozone are environmental parameters that can be estimated based on location information and current time information. Therefore, according to the above configuration, it is possible to estimate appropriate spectra according to the actual environment as the sunlight spectrum and the sky spectrum, thereby improving the accuracy of estimating the light source spectrum.
  • the information processing device is provided with a second environmental parameter estimation unit (F3) that estimates at least some of the environmental parameters used by the light source spectrum estimation unit in calculating the solar spectrum and sky spectrum, based on an image captured by an upward-facing camera that captures an image of the sky.
  • in a configuration in which the light source spectrum estimation unit uses environmental parameters such as aerosol turbidity and precipitated water vapor amount to calculate the solar spectrum and sky spectrum, the second environmental parameter estimation unit estimates at least one of the aerosol turbidity and the precipitated water vapor amount based on an image captured by the upward-facing camera.
  • the light source spectrum estimation unit (F1A) uses a fixed value for one of the environmental parameters used to calculate the sunlight spectrum and the sky spectrum. This makes it possible to use standard values corresponding to a standard environment for at least one of the environmental parameters used in calculating the solar spectrum and the sky spectrum. Some environmental parameters have little effect on the estimated light source spectrum even when standard values are used. In addition, if fixed values are used for the environmental parameters, there is no need to perform estimation processing for the environmental parameters. Therefore, with the above configuration, it is possible to reduce the processing load required for estimating the light source spectrum while suppressing a decrease in the estimation accuracy of the light source spectrum.
  • the light source spectrum estimation unit (F1) estimates the light source spectrum using a light source model for each divided region obtained by dividing the spectroscopic image obtained by the spectroscopic camera into regions each having a predetermined number of pixels. This allows estimation of the light source spectrum at a finer granularity, for example pixel by pixel, rather than for the entire image. Therefore, it is possible to improve the estimation accuracy of the light source spectrum in the spatial direction.
  • the light source spectrum estimating unit (F1A) estimates an average light source spectrum for the entire spectroscopic image obtained by the spectroscopic camera.
  • the above configuration is suitable for cases where information on the average light source spectrum over the entire subject is required.
  • the information processing device includes an image processing unit (F4 or F4A) that performs light source cancellation processing on the spectroscopic image obtained by the spectroscopic camera based on the light source spectrum estimated by the light source spectrum estimation unit.
  • the light source cancellation process is performed based on the appropriate light source spectrum estimated by the light source spectrum estimation unit. Therefore, the accuracy of the light source cancellation process is improved.
  • the information processing device includes an image processing unit (F4) that performs semantic segmentation processing based on the light source spectrum estimated by the light source spectrum estimation unit and the spectroscopic image obtained by the spectroscopic camera.
  • An information processing method as an embodiment is an information processing method in which an information processing device estimates a light source spectrum based on an object spectrum, which is spectral information of a subject obtained by a spectroscopic camera, using a light source model that represents a light source spectrum, which is spectral information of an outdoor light source, by weighted addition of a solar spectrum, which is spectral information of light irradiated from the sun, and a sky spectrum, which is spectral information of light irradiated from the sky.
  • a program that realizes the functions of the light source spectrum estimation unit F1 or F1A described with reference to FIG. 8 and elsewhere in, for example, a CPU, a DSP, or a device including these, can be considered.
  • the program of the embodiment is a program readable by a computer device, and causes the computer device to realize a function of estimating a light source spectrum based on an object spectrum, which is spectral information of a subject obtained by a spectroscopic camera, using a light source model that expresses a light source spectrum, which is spectral information of an outdoor light source, by weighted addition of a solar spectrum, which is spectral information of light irradiated from the sun, and a sky spectrum, which is spectral information of light irradiated from the sky.
  • the functions of the light source spectrum estimation unit F1 or F1A or the like described above can be realized in a device such as the information processing device 1 or 1A or the like.
  • the above-mentioned programs can be recorded in advance in a recording medium such as an HDD (Hard Disc Drive) or SSD (Solid State Drive) built into a device such as a computer device, or in a ROM within a microcomputer having a CPU.
  • the software may be temporarily or permanently stored (recorded) on a removable recording medium such as a flexible disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto Optical) disk, a DVD (Digital Versatile Disc), a Blu-ray Disc (registered trademark), a magnetic disk, a semiconductor memory, a memory card, etc.
  • a removable recording medium may be provided as so-called package software.
  • Such a program can be installed in a personal computer or the like from a removable recording medium, or can be downloaded from a download site via a network such as a LAN or the Internet.
  • Such a program is suitable for providing the light source spectrum estimation method of the embodiment in a wide range of applications. For example, by downloading the program to a personal computer, a portable information processing device, a mobile phone, a game device, a video device, a PDA (Personal Digital Assistant), etc., the personal computer, etc. can be made to function as a device that realizes the light source spectrum estimation method of the present disclosure.
  • An information processing device comprising: a light source spectrum estimation unit that estimates the light source spectrum based on an object spectrum that is spectral information of a subject obtained by a spectroscopic camera, using a light source model that expresses a light source spectrum that is spectral information of an outdoor light source by weighted addition of a solar spectrum that is spectral information of light irradiated from the sun and a sky spectrum that is spectral information of light irradiated from the sky.
  • the light source spectrum estimation unit estimates the light source spectrum by estimating the parameters of the light source model and the object spectral reflectance, using an error between a trial object spectrum, which is a spectrum calculated by multiplying a trial light source spectrum, which is a light source spectrum calculated by setting candidate values for parameters of the light source model, by a candidate value of object spectral reflectance indicating the spectral reflectance of the object as the subject, and the object spectrum, as a reference value for estimation.
  • (3) The information processing device according to (2), wherein the light source spectrum estimation unit estimates a weighting coefficient of the sunlight spectrum and a weighting coefficient of the sky spectrum as parameters of the light source model based on the error.
  • the information processing device further comprising a sun/shade area determination unit that determines a sun area and a shade area of a subject imaged by the spectroscopic camera using the weighting coefficient of the sunlight spectrum and the weighting coefficient of the sky spectrum estimated by the light source spectrum estimation unit.
  • a first environmental parameter estimation unit that estimates at least a portion of the environmental parameters used by the light source spectrum estimation unit to calculate the solar spectrum and the sky spectrum, based on at least one of position information of the spectroscopic camera and current time information.
  • The information processing device according to (5), wherein the light source spectrum estimation unit uses environmental parameters such as a solar altitude, a solar angle, and an ozone amount to calculate the solar spectrum and the sky spectrum, and the first environmental parameter estimation unit estimates at least one of the solar altitude, the solar angle, and the amount of ozone based on at least one of position information of the spectroscopic camera and current time information.
  • the information processing device according to any one of (1) to (6), further comprising a second environmental parameter estimation unit that estimates at least a portion of the environmental parameters used by the light source spectrum estimation unit to calculate the solar spectrum and the sky spectrum based on an image captured by an upward-facing camera that captures an image of the sky.
  • The information processing device according to (7), wherein the light source spectrum estimation unit uses environmental parameters such as aerosol turbidity and precipitated water vapor amount in calculating the solar spectrum and the sky spectrum, and the second environmental parameter estimation unit estimates at least one of the aerosol turbidity and the amount of precipitated water vapor based on an image captured by the upward-facing camera.
  • The information processing device according to any one of (1) to (4), wherein the light source spectrum estimation unit uses a fixed value for any of the environmental parameters used in the calculation of the solar spectrum and the sky spectrum.
  • The information processing device according to any one of (1) to (9), wherein the light source spectrum estimation unit estimates the light source spectrum using the light source model for each divided region obtained by dividing a spectroscopic image obtained by the spectroscopic camera into regions each having a predetermined number of pixels.
  • The information processing device according to any one of (1) to (9), wherein the light source spectrum estimation unit estimates an average light source spectrum of an entire spectroscopic image obtained by the spectroscopic camera.
  • the information processing device according to any one of (1) to (11), further comprising an image processing unit that performs semantic segmentation processing based on the light source spectrum estimated by the light source spectrum estimation unit and a spectroscopic image obtained by the spectroscopic camera.
  • An information processing method in which an information processing device estimates a light source spectrum based on an object spectrum, which is spectral information of a subject obtained by a spectroscopic camera, using a light source model that represents the light source spectrum, which is spectral information of an outdoor light source, by weighted addition of a solar spectrum, which is spectral information of light irradiated from the sun, and a sky spectrum, which is spectral information of light irradiated from the sky.
  • A program readable by a computer device, the program causing the computer device to realize a function of estimating a light source spectrum based on an object spectrum, which is spectral information of a subject obtained by a spectroscopic camera, using a light source model that expresses the light source spectrum, which is spectral information of an outdoor light source, by weighted addition of a solar spectrum, which is spectral information of light irradiated from the sun, and a sky spectrum, which is spectral information of light irradiated from the sky.
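The sun/shade area determination described above can be illustrated with a purely hypothetical decision rule (the document does not specify one): since direct sunlight is blocked in shade, the estimated sunlight weighting coefficient b0 should be small relative to the sky weighting coefficient b1. The function name and the threshold below are assumptions made for this sketch:

```python
# Hypothetical sketch: classify each divided region as "sun" or "shade" from the
# per-region weights estimated with the light source model
# (light source spectrum = sunlight spectrum * b0 + sky spectrum * b1).
# In shade, direct sunlight is blocked, so b0 should be small relative to b1.
# The ratio threshold is an assumed value, not taken from the document.

def classify_region(b0, b1, ratio_threshold=0.5):
    """Return 'sun' if the direct-sunlight weight dominates, else 'shade'."""
    total = b0 + b1
    if total <= 0:
        return "shade"
    return "sun" if b0 / total >= ratio_threshold else "shade"

# Two hypothetical regions: one sunlit (large b0), one shaded (small b0).
labels = [classify_region(b0, b1) for b0, b1 in [(0.9, 0.2), (0.05, 0.8)]]
# labels == ['sun', 'shade']
```

A real implementation would apply this per divided region of the spectroscopic image, using the b0 and b1 values produced by the light source spectrum estimation unit.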

Landscapes

  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Environmental & Geological Engineering (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Immunology (AREA)
  • Chemical & Material Sciences (AREA)
  • Mathematical Physics (AREA)
  • Atmospheric Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Ecology (AREA)
  • Environmental Sciences (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

An information processing device according to the present technology comprises a light source spectrum estimation unit that uses a light source model, which expresses a light source spectrum that is spectral information of an outdoor light source by weighted addition of a solar spectrum that is spectral information of emission light from the sun and a sky spectrum that is spectral information of emission light from the sky, to estimate the light source spectrum on the basis of an object spectrum that is spectral information of a subject captured by a spectroscopic camera.

Description

Information processing device, information processing method, and program

This technology relates to an information processing device, an information processing method, and a program, and in particular to a technology for estimating the components of a light source spectrum contained in a spectroscopic image obtained by a spectroscopic camera.

Spectroscopic sensors (multispectral sensors) are known that obtain multiple narrowband images that are wavelength characteristic analysis images of light from a subject, in other words, analysis images of the subject's spectral information (spectral spectrum). In addition, applications have been developed that perform various analyses of a subject based on the spectral information obtained by the spectroscopic sensor, such as estimating the vegetation state of plants or the condition of human skin, based on these multiple narrowband images.

When obtaining spectral information of a subject, the spectral information of the light source illuminating the subject becomes a noise component, and it is desirable to remove this noise component. For this reason, in the field of spectroscopic sensing, the light source spectrum, which is the spectral information of the light source, is estimated.

For example, the following Patent Document 1 discloses an image acquisition device that includes an image sensor that acquires a predetermined image, and a processor that acquires a basis based on the surrounding environment, estimates lighting information using the acquired basis, reflects the estimated lighting information, and performs color conversion related to the image.

JP 2023-048996 A

However, the technology described in Patent Document 1 uses a method for estimating lighting information (light source spectrum) by preparing a basis for each expected environment in advance and using the basis according to the results of environmental detection by a sensor, so it is not possible to estimate the light source spectrum in response to environmental changes other than the expected environment.

This technology was developed in consideration of the above issues, and aims to improve the robustness of light source spectrum estimation.

The information processing device according to the present technology includes a light source spectrum estimation unit that uses a light source model that expresses a light source spectrum, which is spectral information of an outdoor light source, by weighted addition of a solar spectrum, which is spectral information of light irradiated from the sun, and a sky spectrum, which is spectral information of light irradiated from the sky, to estimate the light source spectrum based on an object spectrum, which is spectral information of a subject obtained by a spectroscopic camera.
This makes it possible to estimate the light source spectrum by fitting a function serving as a light source model to the object spectrum detected by the spectroscopic camera.
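As a purely illustrative sketch (not the embodiment's actual implementation), the weighted-addition light source model and a brute-force fit of its weights to an observed object spectrum can be expressed as follows. The spectra, the wavelength bin count, the grid-search ranges, and the use of a single scalar reflectance are hypothetical simplifications chosen for the example:

```python
# Illustrative sketch of the weighted-addition light source model:
#   light_source(wavelength) = sun(wavelength) * b0 + sky(wavelength) * b1
# The object spectrum observed by the camera is modeled as
#   object(wavelength) = light_source(wavelength) * reflectance(wavelength).
# Here the reflectance is simplified to a single scalar r; all values are hypothetical.

def model_light_source(sun, sky, b0, b1):
    """Weighted addition of the sunlight and sky spectra (per wavelength bin)."""
    return [s * b0 + k * b1 for s, k in zip(sun, sky)]

def fit_weights(sun, sky, observed, steps=21):
    """Grid-search b0 and b1, with a closed-form scalar reflectance r,
    minimizing the squared error between trial and observed object spectra."""
    best = None
    for i in range(steps):
        for j in range(steps):
            b0, b1 = i / (steps - 1), j / (steps - 1)
            trial_light = model_light_source(sun, sky, b0, b1)
            # Least-squares scalar reflectance for this trial light source
            num = sum(t * o for t, o in zip(trial_light, observed))
            den = sum(t * t for t in trial_light) or 1e-12
            r = num / den
            err = sum((t * r - o) ** 2 for t, o in zip(trial_light, observed))
            if best is None or err < best[0]:
                best = (err, b0, b1, r)
    return best  # (error, b0, b1, reflectance)

# Hypothetical 5-bin spectra: a synthetic "observed" spectrum generated from
# known weights (b0 = 1.0, b1 = 0.5, reflectance 0.4) should be recovered.
sun = [1.0, 1.2, 1.1, 0.9, 0.8]
sky = [0.6, 0.5, 0.4, 0.3, 0.2]
true_light = model_light_source(sun, sky, 1.0, 0.5)
observed = [v * 0.4 for v in true_light]

err, b0, b1, r = fit_weights(sun, sky, observed)
```

Because b0, b1, and r share an overall scale, the fit recovers the ratio b1/b0 and the products b0·r, b1·r rather than unique absolute weights. A practical implementation would also search the environmental parameters of the sunlight and sky spectra and estimate a per-wavelength object spectral reflectance.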

  • FIG. 1 is a diagram showing an example of the configuration of a light source spectrum estimation system including an information processing device according to an embodiment.
  • FIG. 2 is a block diagram showing a schematic configuration example of a spectroscopic camera used in the embodiment.
  • FIG. 3 is a diagram schematically showing a configuration example of a pixel array unit of a spectroscopic sensor.
  • FIG. 4 is an explanatory diagram of band narrowing processing in the embodiment.
  • FIG. 5 is a block diagram showing a hardware configuration example of an information processing device according to an embodiment.
  • FIG. 6 is a functional block diagram for explaining functions of an information processing device according to the first embodiment.
  • FIG. 7 is a diagram showing an example of a solar spectrum and a sky spectrum.
  • FIG. 8 is a flowchart showing an example of a processing procedure for realizing the light source spectrum estimation method according to the first embodiment.
  • FIG. 9 is a block diagram showing a configuration example of a light source spectrum estimation system according to a second embodiment.
  • FIG. 10 is a functional block diagram for explaining functions of an information processing device according to the second embodiment.
  • FIG. 11 is a functional block diagram for explaining functions of an information processing device according to a modified example.

 Hereinafter, embodiments of the present technology will be described in the following order with reference to the accompanying drawings.
<1. First embodiment>
[1-1. System configuration]
[1-2. Spectroscopic camera configuration]
[1-3. Configuration of information processing device]
[1-4. Functions of information processing device]
<2. Second embodiment>
<3. Modifications>
<4. Summary of embodiments>
<5. Present technology>

<1. First embodiment>
[1-1. System configuration]
 FIG. 1 is a diagram showing an example of the configuration of a light source spectrum estimation system including an information processing device according to an embodiment.
 As shown in the figure, the light source spectrum estimation system includes an information processing device 1, a spectroscopic camera 2, and an upward-facing camera 3.
 Here, the spectroscopic camera 2 refers to a camera equipped with a spectroscopic sensor as a light receiving sensor. The "spectroscopic sensor" is a light receiving sensor for obtaining a plurality of narrowband images that serve as wavelength characteristic analysis images of light from a subject.

 The information processing device 1 is configured as a computer device, and as described below, performs processing to estimate the light source spectrum, which is the spectral information of the light source, based on the spectral information of the subject obtained by the spectroscopic camera 2.

 Here, spectral information refers to information indicating the light intensity at each wavelength.

 In this embodiment, the information processing device 1 estimates, as the light source spectrum, the light source spectrum of an outdoor light source (natural light source). It estimates the light source spectrum using a light source model that expresses the light source spectrum, which is the spectral information of an outdoor light source, by weighted addition of the sunlight spectrum, which is the spectral information of light irradiated from the sun, and the sky spectrum, which is the spectral information of light irradiated from the sky; details will be described later.

 The upward-facing camera 3 is a camera that captures images of the sky, in other words, a camera whose imaging direction is directed upward, and is capable of obtaining captured images of the sky.
 The use of the upward-facing camera 3 will be described later.

[1-2. Spectroscopic camera configuration]
 FIG. 2 is a block diagram showing an example of the configuration of the spectroscopic camera 2.
 The spectroscopic camera 2 includes at least a spectroscopic sensor 4 and a spectroscopic image generating unit 5. As shown in the figure, the spectroscopic camera 2 of the embodiment includes a control unit 6, a timing unit 7, a GNSS (Global Navigation Satellite System) sensor 8, and a communication unit 9 in addition to the spectroscopic sensor 4 and the spectroscopic image generating unit 5.

 FIG. 3 schematically shows an example of the configuration of the pixel array unit 4a of the spectroscopic sensor 4.
 As shown in the figure, the pixel array unit 4a is formed with spectroscopic pixel units Pu, each consisting of a plurality of pixels Px, each receiving light of a different wavelength band, arranged two-dimensionally in a predetermined pattern. The pixel array unit 4a consists of a plurality of spectroscopic pixel units Pu arranged two-dimensionally.

 The illustrated example shows a case in which each spectroscopic pixel unit Pu individually receives light in a total of eight wavelength bands, λ1 to λ8, at its pixels Px; in other words, the number of wavelength bands received separately within each spectroscopic pixel unit Pu (hereinafter referred to as the "number of received wavelength channels") is eight. However, this is merely an illustrative example; the number of received wavelength channels in the spectroscopic pixel unit Pu may be any plural number and can be set arbitrarily.
 Hereinafter, the number of received wavelength channels in the spectroscopic pixel unit Pu is denoted by "N".

 In FIG. 2, the spectroscopic image generating unit 5 generates M narrowband images based on a RAW image output from the spectroscopic sensor 4. Here, M > N; as an example, M = 41 for N = 8.

 The spectroscopic image generating unit 5 has a demosaic unit 5a and a narrowband image generating unit 5b. The demosaic unit 5a performs demosaic processing on the RAW image from the spectroscopic sensor 4, and the narrowband image generating unit 5b performs band narrowing processing (linear matrix processing) based on the N channels of wavelength band images obtained by the demosaic processing, thereby generating M narrowband images from the N wavelength band images.

 FIG. 4 is an explanatory diagram of the band narrowing processing for obtaining M narrowband images.
 Based on the N channels of wavelength band images obtained by the demosaic processing of the demosaic unit 5a, a predetermined matrix operation is performed for each pixel position to obtain M channels of narrowband images. The band narrowing processing is the processing of obtaining, for each pixel position, M channels of pixel values (I'0 to I'M-1 in the figure) by a matrix operation using N channels of pixel values (I0 to IN-1 in the figure), in order to convert the N channels of wavelength band images into M channels of narrowband images in this way.

 Here, let R be the pixel value after demosaic processing, n (0 to N-1) the input wavelength channel, C the narrowband coefficient, B the output pixel value of the band narrowing processing, and m (0 to M-1) the output wavelength channel. The band narrowing processing can then be expressed by the following [Equation 1]:

  [Equation 1]  Bm = Σ (n = 0 to N-1) Cm[n] × R[n]

 That is, the pixel value of the m = 0th output wavelength channel is B0 = R[0]×C0[0] + R[1]×C0[1] + R[2]×C0[2] + ... + R[N-1]×C0[N-1], and the pixel value of the m = 1st output wavelength channel is B1 = R[0]×C1[0] + R[1]×C1[1] + R[2]×C1[2] + ... + R[N-1]×C1[N-1].
 The same applies thereafter; the pixel value of the last, m = M-1th output wavelength channel is BM-1 = R[0]×CM-1[0] + R[1]×CM-1[1] + R[2]×CM-1[2] + ... + R[N-1]×CM-1[N-1].
 In this case, a total of N × M narrowband coefficients C are used: C0[0] to C0[N-1] for obtaining the pixel value B0, C1[0] to C1[N-1] for obtaining the pixel value B1, ..., and CM-1[0] to CM-1[N-1] for obtaining the pixel value BM-1.
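Applied per pixel position, the matrix operation of [Equation 1] can be sketched as follows; the channel counts and coefficient values are arbitrary placeholders, not values from the embodiment:

```python
# Band narrowing ([Equation 1]): for each pixel position, convert N demosaiced
# wavelength-channel values R[0..N-1] into M narrowband values B[0..M-1] using
# an M x N table of narrowing coefficients C, where
#   B[m] = sum over n of C[m][n] * R[n]

def narrow_band(R, C):
    """Apply the narrowing coefficient matrix C (M rows of N coefficients)
    to the N channel values R of one pixel position."""
    N = len(R)
    assert all(len(row) == N for row in C)
    return [sum(C[m][n] * R[n] for n in range(N)) for m in range(len(C))]

# Toy example: N = 2 input channels, M = 3 output channels (placeholder values).
R = [2.0, 3.0]
C = [
    [1.0, 0.0],   # B[0] = R[0]
    [0.0, 1.0],   # B[1] = R[1]
    [0.5, 0.5],   # B[2] = average of the two input channels
]
B = narrow_band(R, C)
# B == [2.0, 3.0, 2.5]
```

A full implementation would apply this operation at every pixel position of the demosaiced image, with N = 8 and M = 41 in the example given above.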

 In FIG. 2, the control unit 6 is configured with a microcomputer having, for example, a CPU (Central Processing Unit), a ROM (Read Only Memory), and a RAM (Random Access Memory), and the CPU performs overall control of the spectroscopic camera 2 by executing processing based on, for example, a program stored in the ROM or a program loaded into the RAM.

 The clock unit 7 keeps track of the current time.
 The GNSS sensor 8 is an example of a position detection device that detects the current position of the spectroscopic camera 2.
 The uses of the current time information obtained by the clock unit 7 and the current position information of the spectroscopic camera 2 detected by the GNSS sensor 8 will be described later.

The communication unit 9 performs wired or wireless data communication with an external device. For example, the communication unit 9 may be configured to perform wired data communication with an external device according to a predetermined wired communication standard such as the Universal Serial Bus (USB) communication standard, wireless data communication with an external device according to a predetermined wireless communication standard such as the Bluetooth (registered trademark) communication standard, or wireless or wired data communication with an external device via a predetermined network such as the Internet.
The control unit 6 can transmit and receive data to and from an external device (particularly, the information processing device 1 in this example) via the communication unit 9 .

[1-3. Configuration of information processing device]
FIG. 5 is a block diagram showing an example of the hardware configuration of the information processing device 1.
As shown in FIG. 5, the information processing device 1 includes a CPU 11, a ROM 12, and a RAM 13. The CPU 11 functions as an arithmetic processing unit that performs various processes, and executes the various processes according to a program stored in the ROM 12 or a program loaded from the storage unit 19 to the RAM 13. The RAM 13 also stores data and the like necessary for the CPU 11 to execute the various processes, as appropriate.

 The CPU 11, ROM 12, and RAM 13 are interconnected via a bus 14. An input/output interface (I/F) 15 is also connected to this bus 14.

The input/output interface 15 is connected to an input unit 16 including an operator and an operating device.
For example, various types of operators and operation devices such as a keyboard, a mouse, keys, a dial, a touch panel, a touch pad, a remote controller, etc. are assumed as the input unit 16. In this example, the touch panel in the input unit 16 is formed on the display screen of the display unit 17, and the user can perform a touch operation on the display screen.
An operation by a user is detected by the input unit 16 , and a signal corresponding to the input operation is interpreted by the CPU 11 .

In addition, the input/output interface 15 is connected, either integrally or separately, to a display unit 17 formed of an LCD (Liquid Crystal Display) or an organic EL (Electro-Luminescence) panel, or the like, and an audio output unit 18 formed of a speaker, or the like.
The display unit 17 is used to display various types of information, and is configured as a display device provided in the housing of the information processing device 1, for example.

 The display unit 17 displays images for various types of image processing, videos to be processed, etc., on the display screen based on instructions from the CPU 11. The display unit 17 also displays various operation menus, icons, messages, etc., i.e., a GUI (Graphical User Interface), based on instructions from the CPU 11.

A storage unit 19 and a communication unit 20 can be connected to the input/output interface 15 .
The storage unit 19 is configured with a hard disk drive (HDD) or a solid state drive (SSD) and stores various types of information.

The communication unit 20 performs communication processing via a transmission path such as the Internet, and communication with various devices via wired/wireless communication, bus communication, and the like.
In particular, in the case of this embodiment, the communication unit 20, like the communication unit 9 in the spectroscopic camera 2 described above, is configured to perform wired data communication with an external device according to a specified wired communication standard such as the USB communication standard, wireless data communication with an external device according to a specified wireless communication standard such as the Bluetooth communication standard, or wireless or wired data communication with an external device via a specified network such as the Internet.
This enables the CPU 11 to transmit and receive data to and from the spectroscopic camera 2 via the communication unit 20 .

 If necessary, a drive 21 is also connected to the input/output interface 15, and a removable recording medium 22 such as a memory card or optical disk is appropriately attached.

 The drive 21 makes it possible to read data files such as programs used for each process from the removable recording medium 22. The read data files are stored in the storage unit 19, and images and sounds contained in the data files are output on the display unit 17 and the audio output unit 18. In addition, computer programs and the like read from the removable recording medium 22 are installed in the storage unit 19 as necessary.

Here, the information processing device 1 is not limited to being configured by a single computer device as shown in Fig. 5, but may be configured by a system of multiple computer devices. The multiple computer devices may be systemized by a LAN (Local Area Network) or the like, or may be located in a remote location by a VPN (Virtual Private Network) using the Internet or the like. The multiple computer devices may include computer devices as a server group (cloud) available through a cloud computing service.

[1-4. Functions of information processing device]
As described above, in this embodiment, the information processing device 1 estimates a light source spectrum based on the spectral information of the subject obtained by the spectroscopic camera 2, using a predetermined light source model.
Hereinafter, various functions of the information processing device 1 according to the first embodiment, including the light source spectrum estimation function, will be described.
In the following description, the spectral information of the subject obtained by the spectroscopic camera 2 (the brightness values of each narrowband image) is referred to as the "object spectrum."

FIG. 6 is a functional block diagram for explaining various functions of the CPU 11 of the information processing device 1 according to the first embodiment.
As shown in the figure, the CPU 11 has functions as a light source spectrum estimation section F1, a first environmental parameter estimation section F2, a second environmental parameter estimation section F3, an image processing section F4, and a sun/shade area determination section F5.

 As mentioned above, in this embodiment, a light source model that represents the light source spectrum by weighted addition of the sunlight spectrum and the sky spectrum is used to estimate the light source spectrum. Specifically, in this example, the Bird model is used as the light source model.

As is well known, the Bird model is an outdoor light source model that can calculate the sunlight spectrum and the sky spectrum using the solar altitude (earth-sun distance), solar angle (solar zenith angle), aerosol turbidity in the air, amount of precipitated water vapor (Precipitable Water Vapor, also called "precipitable water"), and ozone mass as inputs. Note that the solar altitude here represents the distance between the earth and the sun, and the solar angle represents, as an angle, the amount of deviation of the sun direction from the zenith direction at the observation position on the earth.
The Bird model assumes that the target subject has both sunlit and shaded areas, and expresses the spectrum of the light source by the proportion of the subject that is illuminated by sunlight and the proportion that is illuminated by the sky.

In the Bird model, once the sunlight spectrum and the sky spectrum are obtained, their weighted addition makes it possible to approximately express the light source spectrum from the sky in sunlit or shaded areas, regardless of the weather (sunny, cloudy, or rainy).
Specifically,
"light source spectrum = sunlight spectrum * b0 + sky spectrum * b1"
expresses a light source spectrum corresponding to an environment type such as the weather or sun/shade. Note that "b0" is the weighting coefficient of the sunlight spectrum, and "b1" is the weighting coefficient of the sky spectrum.

 For reference, FIG. 7 shows an example of the sunlight spectrum and sky spectrum calculated in the Bird model.

In the first embodiment, in order to use a sunlight spectrum and a sky spectrum corresponding to the actual environment in estimating the light source spectrum, the environmental parameters used in calculating the sunlight spectrum and the sky spectrum are estimated based on the image captured by the upward camera 3, the position information detected by the GNSS sensor 8, and the current time information measured by the clock unit 7.
Specifically, in this example, the solar altitude, solar angle, and amount of ozone used in calculating the sunlight spectrum and the sky spectrum are estimated based on the position information detected by the GNSS sensor 8 and the current time information measured by the clock unit 7. This estimation is performed by the first environmental parameter estimation unit F2.
As is well known, the amount of ozone can be estimated from latitude, longitude, and time. The solar altitude can be estimated from the time, and the solar angle can be estimated from latitude, longitude, and time.
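As one concrete illustration of deriving the solar angle from position and time, the sketch below uses a common simplified astronomical approximation (solar declination and hour angle). The formula and function names are assumptions for illustration only, not the document's own method:

```python
import math

def solar_zenith_angle_deg(lat_deg, day_of_year, solar_hour):
    """Approximate solar zenith angle from latitude, day of year,
    and local solar time, using a standard simplified formula:

    declination ~ -23.44 deg * cos(360/365 * (day + 10))
    hour angle  = 15 deg per hour away from solar noon
    cos(zenith) = sin(lat)sin(decl) + cos(lat)cos(decl)cos(hour)
    """
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, d, h = map(math.radians, (lat_deg, decl, hour_angle))
    cos_z = (math.sin(lat) * math.sin(d)
             + math.cos(lat) * math.cos(d) * math.cos(h))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_z))))

# At the equator around the March equinox the sun is nearly overhead
# at solar noon, so the zenith angle is close to 0 degrees.
print(solar_zenith_angle_deg(0.0, 80, 12.0))   # ~0.5 degrees
```

A production implementation would use a full solar-position algorithm and convert clock time to solar time from the longitude, but the structure — position and time in, solar angle out — is the same.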

Furthermore, the aerosol turbidity and the amount of precipitated water vapor used in the calculation of the solar spectrum and the sky spectrum are estimated based on images captured by the upward camera 3. This estimation is performed by a second environmental parameter estimation unit F3.
A specific method for estimating the aerosol turbidity and the amount of precipitated water vapor based on images captured by the upward camera 3 will be described later.

 Here, even if a standard value for the assumed environment is used for the amount of ozone, the effect on the calculated light source spectrum is minor, so it is also conceivable to use a fixed value (a fixed value serving as a standard value) without making an estimate based on the current position information and current time information. This point will be explained later in the second embodiment.

 Furthermore, for the aerosol turbidity and the amount of precipitated water vapor, the influence on the calculated light source spectrum is relatively large during the sunrise and sunset hours, so it is desirable to perform estimation using the upward camera 3. However, standard values may be used if, for example, the intended use excludes those hours. This point will also be explained later in the second embodiment.

A specific example of light source spectrum estimation by the light source spectrum estimation unit F1 will be described.
First, the estimation uses an error function Fd that calculates an error Ds as follows. Note that "^" denotes exponentiation.

Error Ds = || object spectrum - (light source spectrum x object spectral reflectance + specular reflection) ||^2

Here, the term "spectral reflectance of an object" refers to the spectral reflectance of an object as a subject. The spectral reflectance refers to information indicating the reflectance for each wavelength.
In this example, the object spectral reflectance is modeled and handled using a polynomial of wavelength λ (particularly a function of second order or lower) as shown below.

Object spectral reflectance = m2λ^2+m1λ+m0

 Furthermore, "specular reflection" is defined as "c*sunlight spectrum."
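Putting the definitions above together, the error Ds for one pixel can be computed directly. A minimal sketch, where `sun` and `sky` stand in for the Bird-model sunlight and sky spectra sampled at the camera's output wavelengths (all names and the synthetic data are illustrative):

```python
import numpy as np

def error_ds(obj, sun, sky, wavelengths, b0, b1, m0, m1, m2, c):
    """Ds = || object spectrum
              - (light source spectrum * reflectance + specular) ||^2

    obj, sun, sky : spectra sampled at the same wavelengths
    reflectance is the quadratic model m2*l^2 + m1*l + m0
    specular reflection is c * sunlight spectrum
    """
    light = sun * b0 + sky * b1                       # light source spectrum
    reflectance = m2 * wavelengths**2 + m1 * wavelengths + m0
    specular = c * sun
    residual = obj - (light * reflectance + specular)
    return float(residual @ residual)                 # squared L2 norm

wl = np.linspace(0.4, 0.7, 5)          # example wavelengths (micrometers)
sun = np.ones(5)
sky = np.full(5, 0.3)
obj = (sun * 1.0 + sky * 0.5) * 0.5    # synthetic object spectrum
# With the true parameters (b0=1.0, b1=0.5, flat reflectance 0.5, c=0)
# the residual vanishes.
print(error_ds(obj, sun, sky, wl, 1.0, 0.5, 0.5, 0.0, 0.0, 0.0))   # -> 0.0
```

The derivation described below repeatedly minimizes exactly this quantity over one parameter group at a time.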

 In the first embodiment, the light source spectrum estimation unit F1 estimates the light source spectrum for each pixel of the spectroscopic image (each narrowband image) obtained by the spectroscopic camera 2 by deriving the parameters (specifically b0, b1, m0, m1, m2, and c) that minimize the above-mentioned error Ds while satisfying the constraint conditions described below.

The constraint conditions for the light source spectrum estimation are as follows:
1) b1 is a common value for all pixels (because the sky spectrum corresponds to ambient light)
2) b0, b1, and c are all equal to or greater than 0
3) The object spectral reflectance satisfies v ≤ object spectral reflectance ≤ 1.0 (v is a user-set value)
4) b0/b1 ≥ w (w is a user-set value)
5) If b0/b1 < z, then c = 0 (z is a user-set value)
Condition 3) above is intended to improve the convergence of parameter derivation for the spectral reflectance information modeled by a quadratic function.
Condition 4) above corresponds to cases where there is an empirical rule, for example that "b0/b1" always stays within 0.1 even in a deep shadow area, and is intended to improve the convergence of parameter derivation by allowing the user to set a lower limit value for "b0/b1" as a condition.
Condition 5) above aims to improve the convergence of parameter derivation by letting the user set a condition on specular reflection.

A specific example of the parameter derivation process will be described with reference to the flowchart of FIG. 8.
First, in step S101, the CPU 11 (light source spectrum estimation unit F1) sets initial values for m0, m1, m2, and c, excluding b0 and b1. Note that it is possible to use standard values for each of the initial values.

In step S102 following step S101, the CPU 11 substitutes initial values m0, m1, m2, and c into the error function Fd to derive b0 and b1 that minimize the error Ds. That is, m0, m1, and m2 used in the calculation of the object spectral reflectance in the error function Fd, and c used in the calculation of specular reflection are fixed to their initial values, and b0 and b1 that minimize the error Ds are derived.
Here, the calculation of the error Ds requires the calculation of the light source spectrum, which in turn requires the calculation of the solar spectrum and the sky spectrum. In the Bird model, the solar spectrum and the sky spectrum are expressed by functions with the solar altitude, solar angle, amount of ozone, aerosol turbidity, and amount of precipitated water vapor as variables, respectively. Therefore, in the calculation process of the error Ds, the CPU 11 uses the solar altitude, solar angle, and amount of ozone estimated by the first environmental parameter estimation unit F2, and the aerosol turbidity and amount of precipitated water vapor estimated by the second environmental parameter estimation unit F3 as described below, to calculate the solar spectrum and the sky spectrum.

In step S103 following step S102, the CPU 11 substitutes the derived b0 and b1 and the initial value c into the error function Fd to derive m0, m1, and m2 that minimize the error Ds.
Furthermore, in the following step S104, the CPU 11 substitutes the derived b0, b1, m0, m1, and m2 into the error function Fd to derive c that minimizes the error Ds.

The above-mentioned processes from step S101 to step S104 constitute the initial derivation process.
After completing the first derivation process, the CPU 11 performs the second and subsequent derivation processes in steps S105 and subsequent steps.

 Specifically, first in step S105, the CPU 11 substitutes m0, m1, m2, and c derived in the immediately preceding derivation process into the error function Fd to derive b0 and b1 that minimize the error Ds. That is, for example, if the immediately preceding derivation process is the initial derivation process, the m0, m1, m2, and c finally derived in the initial derivation process are substituted into the error function Fd to derive b0 and b1 that minimize the error Ds.

In step S106 following step S105, the CPU 11 substitutes the derived b0 and b1 and the c derived in the immediately preceding derivation process into the error function Fd to derive m0, m1, and m2 that minimize the error Ds.
Furthermore, in the following step S107, the CPU 11 substitutes the derived b0, b1, m0, m1, and m2 into the error function Fd to derive c that minimizes the error Ds.

In step S108 following step S107, the CPU 11 determines whether the derivation end condition is satisfied. Here, as the derivation end condition, a condition is set that can estimate that the degree of parameter optimization (reduction of error Ds) has reached a desired degree. For example, a condition based on the value of error Ds calculated using the finally derived b0, b1, m0, m1, m2, and c can be set. As an example, a condition that the error Ds is equal to or less than a predetermined threshold value or a condition that an inflection point (inflection point from a decrease to an increase) of the value of the error Ds is detected can be set. Note that, when the latter condition is set, the parameters derived in the derivation process immediately before the inflection point is detected are treated as the parameters of the final derivation.
Alternatively, the derivation end condition may not be based on the value of the error Ds, but may be, for example, that the number of derivation processes reaches a specified number.

 If it is determined in step S108 that the derivation end condition is not satisfied, the CPU 11 returns to step S105. As a result, the second and subsequent derivation processes are repeated until the derivation end condition is satisfied.

On the other hand, if it is determined in step S108 that the derivation end condition is satisfied, the CPU 11 proceeds to step S109 and calculates the light source spectrum using the finally derived b0 and b1 together with the sunlight spectrum and the sky spectrum, that is, by the above-mentioned calculation "light source spectrum = sunlight spectrum * b0 + sky spectrum * b1".
It should be noted that the light source spectrum calculated by the above formula does not take into account the spectral sensitivity of the spectroscopic sensor 4. When the spectral sensitivity of the spectroscopic sensor 4 is taken into account, the light source spectrum (N channels) calculated by the above formula is convolved with the spectral sensitivity of the spectroscopic sensor 4 to obtain a sensor signal, which is then narrowed in band by the above-mentioned [Equation 1] to obtain an M-channel light source spectrum.
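The loop of steps S101 to S108 is a coordinate-descent scheme: each parameter group is re-derived in turn while the others are held fixed, which can only decrease the error Ds at each sub-step. The sketch below is a simplified toy version: the constraint conditions 1) to 5) are ignored, each sub-step is solved by plain linear least squares, a fixed iteration count stands in for the derivation end condition, and all data and names are synthetic assumptions:

```python
import numpy as np

def fit_pixel(obj, sun, sky, wl, n_iter=10):
    """Coordinate-descent sketch of steps S101-S108: alternately derive
    (b0, b1), (m2, m1, m0), and c so as to reduce
    Ds = ||obj - ((sun*b0 + sky*b1) * reflectance + c*sun)||^2."""
    m = np.array([0.0, 0.0, 0.5])                   # initial (m2, m1, m0)
    c = 0.0
    P = np.vstack([wl**2, wl, np.ones_like(wl)]).T  # quadratic basis
    for _ in range(n_iter):
        r = P @ m                                   # object spectral reflectance
        # steps S102/S105: b0, b1 with m and c fixed (linear least squares)
        A = np.vstack([sun * r, sky * r]).T
        (b0, b1), *_ = np.linalg.lstsq(A, obj - c * sun, rcond=None)
        light = sun * b0 + sky * b1                 # light source spectrum
        # steps S103/S106: m2, m1, m0 with b0, b1, c fixed
        m, *_ = np.linalg.lstsq(P * light[:, None], obj - c * sun, rcond=None)
        # steps S104/S107: c with everything else fixed (1-D least squares)
        c = float(sun @ (obj - light * (P @ m))) / float(sun @ sun)
    return b0, b1, m, c, light

# Synthetic pixel: constant reflectance 0.3 under light 0.8*sun + 0.4*sky.
wl = np.linspace(0.4, 0.7, 8)
sun, sky = np.ones(8), np.linspace(0.5, 0.2, 8)
obj = (sun * 0.8 + sky * 0.4) * 0.3
b0, b1, m, c, light = fit_pixel(obj, sun, sky, wl)
P = np.vstack([wl**2, wl, np.ones_like(wl)]).T
resid = obj - (light * (P @ m) + c * sun)
# The absolute scale is shared between light and reflectance, but the
# ratio b0/b1 recovers the true 0.8:0.4 and the residual is near zero.
print(b0 / b1, float(resid @ resid))
```

The scale ambiguity visible here (only b0/b1 is identified, not the absolute values) is one reason the constraint conditions and the shared b1 across pixels matter in the actual method.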

 The CPU 11 (light source spectrum estimation unit F1) executes the above-described derivation process for each pixel. This determines the light source spectrum for each pixel, and also determines the weighting coefficients b0 and b1 for each pixel.

Here, the above-mentioned light source spectrum estimation method can be said to be an estimation method as follows.
In other words, this estimation method estimates the light source spectrum by estimating the parameters of the light source model and the object spectral reflectance using the error between a trial object spectrum, which is a spectrum calculated by multiplying a trial light source spectrum, which is a light source spectrum calculated by setting candidate values for the parameters of the light source model, by a candidate value of the object spectral reflectance indicating the spectral reflectance of the object as the subject, and the object spectrum, as a reference value for estimation.

In the above, an example of estimating the light source spectrum is given of a method for deriving the weighting coefficients b0 and b1 using a parameter optimization method that minimizes the error Ds. However, it is also possible to estimate the light source spectrum using an AI model that has been machine-learned to input a spectroscopic image captured by the spectroscopic camera 2 and output the weighting coefficients b0 and b1.
Specifically, b0 and b1 are inferred for each pixel of the image captured by the spectroscopic camera 2 using a DNN (Deep Neural Network) capable of outputting the same size as the input image, such as a U-net used for segmentation, and this DNN can be trained, for example, by the error Ds. During learning, m0, m1, m2, and c are required for the calculation of the error Ds, and these can be derived, for example, by using the b0 and b1 inferred by the DNN using the same method as in steps S103 and S104 described above.

 Next, the estimation process of the aerosol turbidity and the amount of precipitated water vapor by the second environmental parameter estimation unit F3 shown in FIG. 6 will be described.

As described above, the aerosol turbidity and the amount of precipitated water vapor are estimated based on the images captured by the upward-facing camera 3.
As a premise, in this example, the upward camera 3 is configured as a camera equipped with a spectroscopic sensor 4, similar to the spectroscopic camera 2, and is configured to obtain spectroscopic images (M narrowband images) as output images.
For confirmation, the solar spectrum and the sky spectrum are expressed as functions with the variables being the solar altitude, the solar angle, the amount of ozone, the aerosol turbidity, and the amount of precipitated water vapor.
For the sake of explanation, the aerosol turbidity will be represented as τ and the amount of precipitated water vapor as σ.

As with the above-described derivation process of b0 and b1, the aerosol turbidity τ and the amount of precipitated water vapor σ can be estimated by a method of defining a predetermined error function and deriving parameters that minimize the error.
Here, it can be said that spectral information expressed by the following formula is detected on the imaging plane of upward camera 3.
"Sunlight spectrum * b0' + sky spectrum * b1'"
Here, the weighting coefficient b0' of the sunlight spectrum and the weighting coefficient b1' of the sky spectrum in the above formula can be said to be coefficients indicating the degree of sunlight at the position where the upward camera 3 is disposed (imaging position).

Based on the above premise, in this example, the following error function Fd' is used as the error function used to estimate the aerosol turbidity τ and the amount of precipitated water vapor σ.

Error Ds'=
||Upward average spectral information - (sunlight spectrum * b0' + sky spectrum * b1') ||^2

Here, the "upward average spectral information" refers to the average value of the entire image of the spectral information obtained by the upward camera 3. Specifically, it is M brightness values obtained by calculating the average brightness value of the entire image for each of the M narrowband images.
As described above, the solar spectrum and sky spectrum are each expressed as a function with the solar altitude, solar angle, ozone amount, aerosol turbidity τ, and precipitated water vapor amount σ as variables, and the solar altitude, solar angle, and ozone amount estimated by the first environmental parameter estimation unit F2, as well as the aerosol turbidity τ and precipitated water vapor amount σ, are substituted into the calculation of the solar spectrum and sky spectrum in the above formula, respectively.
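Concretely, the "upward average spectral information" reduces an M-channel spectroscopic image to M values by spatial averaging. A minimal sketch (the (M, H, W) array layout is an assumption for illustration):

```python
import numpy as np

def upward_average_spectral_info(cube):
    """Average the whole image for each of the M narrowband images.

    cube : (M, H, W) spectroscopic image from the upward camera
    returns : (M,) average luminance value per wavelength channel
    """
    cube = np.asarray(cube, dtype=float)
    return cube.mean(axis=(1, 2))

# Toy cube: M=2 channels of a 2x2 image
cube = [[[1.0, 2.0], [3.0, 4.0]],
        [[0.0, 0.0], [1.0, 1.0]]]
print(upward_average_spectral_info(cube))   # -> [2.5 0.5]
```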

The second environmental parameter estimation unit F3 estimates the aerosol turbidity τ and the amount of precipitated water vapor σ in the following procedure.
First, the initial values of τ and σ are set. In this case, it is also possible to use standard values for the initial values.

Next, the second environment parameter estimation unit F3 substitutes τ and σ as initial values into the error function Fd′, and derives b0′ and b1′ that minimize the error Ds′.
Furthermore, the second environmental parameter estimation unit F3 substitutes the derived b0', b1' and σ as an initial value into the error function Fd', and derives τ that minimizes the error Ds'.
Next, the second environmental parameter estimation unit F3 substitutes the derived b0', b1', and τ into the error function Fd' to derive σ that minimizes the error Ds'.

In this manner, the initial derivation process of b0', b1', τ, and σ is executed. After that, the second and subsequent derivation processes are executed as follows until a predetermined derivation end condition is met.
That is, first, τ and σ derived in the immediately preceding derivation process are substituted into the error function to derive b0′ and b1′ that minimize the error Ds′.
Next, the derived b0' and b1' and the σ derived in the immediately preceding derivation process are substituted into the error function Fd' to derive τ that minimizes the error Ds'.
Furthermore, the derived b0', b1', and τ are substituted into the error function Fd' to derive σ that minimizes the error Ds'.
In this case as well, the derivation end condition may be set to the same condition as in the above-mentioned parameter derivation process for b0, b1, m0, m1, m2, and c.

The above derivation process makes it possible to estimate the aerosol turbidity τ and the precipitated water vapor amount σ used to calculate the sunlight spectrum and sky spectrum according to the environment.

The above describes an example in which the second environmental parameter estimation unit F3 derives the environmental parameters by a parameter optimization method that minimizes the error Ds', but the estimation method used by the second environmental parameter estimation unit F3 is not limited to this. For example, it is conceivable to use an AI (artificial intelligence) model machine-trained to derive the environmental parameters from images captured by the upward camera 3. In this case, the machine learning of the AI model may be performed by deep learning, with captured images of the sky as training input data and the correct values of τ and σ as teacher data. The above error function Fd' may then be used as the cost function.

Furthermore, in estimating the environmental parameters by the second environmental parameter estimation unit F3, the upward camera 3 is not limited to a camera equipped with a spectroscopic sensor 4 like the spectroscopic camera 2; any camera configured to separately receive light in at least a plurality of wavelength bands, such as an RGB camera, may be used.

In FIG. 6, an image processing unit F4 performs predetermined image processing based on the light source spectrum estimated by the light source spectrum estimation unit F1 and the spectroscopic image obtained by the spectroscopic camera 2.
The image processing by the image processing unit F4 may be a light source cancellation process, which is a process of removing the spectrum of the light source included in the object spectrum as the spectroscopic image by using the light source spectrum estimated by the light source spectrum estimation unit F1.
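Under the usual multiplicative model (observed object spectrum = spectral reflectance × illuminant spectrum), light source cancellation reduces to an element-wise division. A minimal sketch with hypothetical names; the multiplicative model is an assumption for illustration:

```python
import numpy as np

def cancel_light_source(object_spectrum, light_source_spectrum, eps=1e-8):
    """Remove the estimated light source spectrum from the observed object
    spectrum, leaving an estimate of the object's spectral reflectance.
    Assumes observed = reflectance * illuminant; eps guards zero bands."""
    return object_spectrum / np.maximum(light_source_spectrum, eps)

# If the observation was generated as reflectance * light, cancellation
# recovers the reflectance.
light = np.array([1.0, 0.8, 0.6, 0.4])
reflectance = np.array([0.5, 0.5, 0.9, 0.2])
observed = reflectance * light
print(cancel_light_source(observed, light))  # → [0.5 0.5 0.9 0.2]
```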

Alternatively, the image processing by the image processing unit F4 may include semantic segmentation processing based on the light source spectrum estimated by the light source spectrum estimation unit F1 and the spectroscopic image obtained by the spectroscopic camera 2.
Specifically, it is conceivable to perform light source cancellation processing on the spectroscopic image using the light source spectrum estimated by the light source spectrum estimation unit F1, and then perform semantic segmentation processing on the image after cancellation using an AI model.
In this case, as an AI model for performing semantic segmentation processing, rather than using a model trained on an image from which the light source component has been removed as learning input data, it is also possible to use a model trained on an image from which the light source component has not been removed and the light source spectrum as learning input data.
This makes it possible to realize highly accurate semantic segmentation processing that eliminates the influence of light source components (specifically, the influence of sunlight and shade) while eliminating the need for light source cancellation processing.
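One way to feed a non-cancelled image together with the light source spectrum into such a model is to broadcast the M-dimensional spectrum into M constant extra channels. This channel-concatenation design is one plausible realization, assumed here for illustration, not something the text prescribes:

```python
import numpy as np

def build_segmentation_input(spectral_image, light_source_spectrum):
    """Concatenate the estimated light source spectrum (length M) to an
    (H, W, M) spectral image as M constant channels, producing an
    (H, W, 2M) input that lets a segmentation model condition on the
    illuminant without explicit light source cancellation."""
    h, w, m = spectral_image.shape
    light = np.broadcast_to(light_source_spectrum, (h, w, m))
    return np.concatenate([spectral_image, light], axis=-1)

x = build_segmentation_input(np.random.rand(16, 16, 5), np.random.rand(5))
print(x.shape)  # → (16, 16, 10)
```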

Also, in Figure 6, the sun/shade area determination unit F5 determines whether the subject captured by the spectroscopic camera 2 is in a sun or shade area using the sunlight spectrum weighting coefficient b0 and the sky spectrum weighting coefficient b1 estimated by the light source spectrum estimation unit F1.
Basically, the weighting coefficient b0 of the sunlight spectrum tends to be larger in sunlit areas than in shaded areas, so pixels with a large weighting coefficient b0 are determined to be sunlit areas, and the other pixels are determined to be shaded areas. Various specific methods are conceivable; for example, a histogram of b0 may be generated, a threshold of b0 for the sunlit/shaded determination obtained based on the histogram, and the determination made based on a comparison of b0 against that threshold.
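The histogram-based threshold can be realized, for example, with Otsu's method applied to the per-pixel b0 values; this choice of method, like the names and synthetic data below, is an assumption for illustration:

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Pick the threshold maximizing the between-class variance of the
    histogram of b0 values (Otsu's method)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0   # mean of lower class
        m1 = (hist[i:] * centers[i:]).sum() / w1   # mean of upper class
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best_var:
            best_var, best_t = between, centers[i]
    return best_t

# Synthetic b0 map with a shaded cluster around 0.2 and a sunlit cluster
# around 0.9; pixels above the threshold are judged sunlit.
rng = np.random.default_rng(0)
b0_map = np.concatenate([rng.normal(0.2, 0.03, 500), rng.normal(0.9, 0.05, 500)])
t = otsu_threshold(b0_map)
sunlit_mask = b0_map > t
```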

In the above, an example has been given in which the image processing by the image processing unit F4 is realized by software processing of the CPU 11, but at least a part of the image processing may be realized by a method other than software processing of the CPU 11. For example, at least a part of the image processing by the image processing unit F4 may be realized by hardware processing in a signal processing unit outside the CPU 11, such as a DSP (Digital Signal Processor) or a GPU (Graphics Processing Unit). In particular, in the case of semantic segmentation processing using an AI model, the necessary convolution operations may be realized using a hardware signal processing unit such as a GPU.

Although the above describes an example in which the light source spectrum is estimated on a pixel-by-pixel basis, in the first embodiment, the light source spectrum may be estimated for each divided region obtained by dividing the spectral image into regions each having a predetermined number of pixels.
For example, it is possible to carry out the process for each divided region having a size of 2×2=4 pixels or 8×8=64 pixels.
This allows estimation of the light source spectrum at a finer granularity than the entire image, thereby improving the estimation accuracy of the light source spectrum in the spatial direction.
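Averaging the spectral image over, say, 8×8 divided regions before running the estimation can be done with a reshape; a small sketch with illustrative names:

```python
import numpy as np

def block_mean_spectra(spectral_image, block=8):
    """Average an (H, W, M) spectral image over non-overlapping
    block x block regions, giving one M-band spectrum per divided region.
    Light source estimation can then be run once per region."""
    h, w, m = spectral_image.shape
    h2, w2 = (h // block) * block, (w // block) * block
    img = spectral_image[:h2, :w2]                   # drop ragged edges
    img = img.reshape(h2 // block, block, w2 // block, block, m)
    return img.mean(axis=(1, 3))                     # (H//block, W//block, M)

cube = np.random.rand(64, 64, 5)
print(block_mean_spectra(cube).shape)  # → (8, 8, 5)
```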

<2. Second embodiment>
Next, a second embodiment will be described.
In the second embodiment, the light source spectrum is estimated not for each divided region as in the first embodiment, but rather as an average light source spectrum for the entire image.
In the second embodiment, whereas all of the environmental parameters were estimated in the first embodiment, fixed values (standard values) are used for at least some of the environmental parameters without estimating them. Specifically, in this example, among the environmental parameters used to estimate the light source spectrum, namely the solar altitude, solar angle, ozone amount, aerosol turbidity, and precipitated water vapor amount, standard values are used for the ozone amount, aerosol turbidity, and precipitated water vapor amount, that is, for all parameters other than the solar altitude and solar angle.

FIG. 9 is a block diagram showing an example of the configuration of a light source spectrum estimation system according to the second embodiment.
In the following description, parts that are similar to parts that have already been described will be given the same reference numerals and description thereof will be omitted.
The changes from the first embodiment are that the upward camera 3 is omitted and that an information processing device 1A is used instead of the information processing device 1.

FIG. 10 is a functional block diagram for explaining functions of an information processing device 1A according to the second embodiment.
Here, the CPU 11 included in the information processing device 1A is referred to as a CPU 11A.
FIG. 10 shows the functions of the CPU 11A in blocks.

The differences from the CPU 11 in the first embodiment are that the second environmental parameter estimation unit F3 and the sun/shade area determination unit F5 are omitted, and that a first environmental parameter estimation unit F2A is provided instead of the first environmental parameter estimation unit F2, and a light source spectrum estimation unit F1A is provided instead of the light source spectrum estimation unit F1.

The first environmental parameter estimation unit F2A estimates the solar altitude and solar angle based on the current position information and current time information input from the spectroscopic camera 2.

The light source spectrum estimation unit F1A inputs the standard value of the ozone amount (standard first environmental parameter in the figure), the standard values of the aerosol turbidity and precipitated water vapor amount (standard second environmental parameter in the figure), and the solar altitude and solar angle estimated by the first environmental parameter estimation unit F2A, and estimates the average light source spectrum for the entire image based on the spectroscopic image input from the spectroscopic camera 2.
Specifically, estimation of the average light source spectrum for the entire image is realized by using, as the "object spectrum" in the calculation of the error Ds described above, the image-wide average luminance value of each of the M narrowband images. That is, in the calculation of the error Ds used in the processing of FIG. 8 described above, the image-wide average luminance value of each of the M narrowband images input as the spectral image may be used as the "object spectrum."
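Computing the image-wide average "object spectrum" from the M narrowband images is a single mean per band; sketched with illustrative names:

```python
import numpy as np

def image_average_spectrum(narrowband_images):
    """Given a stack of M narrowband images with shape (M, H, W), return
    the M average luminance values used as the whole-image object
    spectrum."""
    return narrowband_images.mean(axis=(1, 2))

stack = np.stack([np.full((4, 4), v) for v in (1.0, 2.0, 3.0)])
print(image_average_spectrum(stack))  # → [1. 2. 3.]
```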

The image processing unit F4A performs predetermined image processing based on the spectroscopic image input from the spectroscopic camera 2 and the average light source spectrum of the entire image obtained by the light source spectrum estimation unit F1A. For example, image processing by the image processing unit F4A may involve removing the average light source spectrum of the entire image from the spectroscopic image.

In the above example, standard values are used for the amount of ozone, the amount of aerosol turbidity, and the amount of precipitated water vapor, but the combination of environmental parameters for which standard values are used is not limited to this, and it is also possible to use a standard value for only the amount of ozone, or to use standard values for only the amount of aerosol turbidity and the amount of precipitated water vapor as the second environmental parameters, etc. Alternatively, it is also possible to use a standard value for only one of the amount of aerosol turbidity and the amount of precipitated water vapor.
When estimating at least one of the aerosol turbidity and the amount of precipitated water vapor, the system is configured to have an upward camera 3 as in the first embodiment.

<3. Modifications>
The embodiment is not limited to the specific example described above, and various modified configurations may be adopted.
For example, in the above example, a method for estimating the weighting coefficient b0 of the sunlight spectrum and the weighting coefficient b1 of the sky spectrum is exemplified, but it is also possible to use standard values without estimating these weighting coefficients b0 and b1.
FIG. 11 is a functional block diagram for explaining the functions of a CPU 11X corresponding to this case.
Here, an example of a functional configuration is shown assuming that standard values are used for each environmental parameter, other than the solar altitude and solar angle, namely, the amount of ozone, the aerosol turbidity, and the amount of precipitated water vapor, as in the second embodiment, and the average light source spectrum for the entire image is estimated.

The difference from the CPU 11A in the second embodiment is that it has a light source spectrum estimation unit F1X instead of the light source spectrum estimation unit F1A. The light source spectrum estimation unit F1X inputs weighting coefficients b0 and b1 ("standard degree of sunlight" in the figure) that are fixed values as standard values, and estimates the average light source spectrum of the entire image using these weighting coefficients b0 and b1. Specifically, the sunlight spectrum and sky spectrum are calculated using the solar altitude and solar angle estimated by the first environmental parameter estimation unit F2A, and the ozone amount, aerosol turbidity, and precipitated water vapor amount as standard values, and then the light source spectrum is estimated by substituting the standard values for b0 and b1 in the aforementioned "sunlight spectrum * b0 + sky spectrum * b1".
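With standard (fixed) weights, estimating the light source spectrum reduces to a single weighted sum. A minimal sketch; the standard values below are hypothetical placeholders, not values given in the text, and the spectra stand in for outputs of the Bird model:

```python
import numpy as np

B0_STD, B1_STD = 0.8, 0.2  # hypothetical "standard degree of sunlight"

def light_source_spectrum(sun_spectrum, sky_spectrum, b0=B0_STD, b1=B1_STD):
    """Light source spectrum = sunlight spectrum * b0 + sky spectrum * b1,
    with fixed standard weights instead of estimated ones."""
    return sun_spectrum * b0 + sky_spectrum * b1

sun = np.array([1.0, 0.9, 0.8])
sky = np.array([0.2, 0.4, 0.6])
estimate = light_source_spectrum(sun, sky)
```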

When standard values are used for the weighting coefficients b0 and b1, the amount of ozone can be estimated based on location information and time information, as in the first embodiment. Also, at least one of the aerosol turbidity and the amount of precipitated water vapor can be estimated based on an image captured by the upward camera 3, as in the first embodiment.
In addition, in the configuration shown in Figure 11, instead of using the standard values of b0 and b1 (standard degree of sunlight) to estimate the light source spectrum, a configuration can also be adopted in which b0' and b1' calculated by the second environmental parameter estimation unit F3 described in Figure 6, based on the image captured by the upward camera 3, are used.

In the explanation so far, the Bird model has been given as an example of a light source model used to estimate the light source spectrum, but other light source models can be used as the light source model as long as they express the light source spectrum, which is spectral information of an outdoor light source, by weighted addition of the solar spectrum and the sky spectrum, such as the SMARTS2 (Simple Model of the Atmospheric Radiative Transfer of Sunshine 2) light source model or the MODTRAN6 (MODerate resolution atmospheric TRANsmission 6) light source model.
The SMARTS2 light source model has a larger number of environmental parameters than the Bird model, and is capable of generating a more accurate spectrum.
The MODTRAN6 light source model has a larger number of environmental parameters than the SMARTS2 light source model, and can generate more accurate spectra.
In addition, even when the number of environmental parameters increases, for example, when adopting a light source model such as SMARTS2 or MODTRAN6, the environmental parameters required for light source spectrum estimation can be derived in a similar manner, in which parameters other than the target parameter are fixed and parameters that minimize the error due to the error function are derived for the target parameters.

In addition, in the above description, an example has been given in which the information processing device according to the present technology is configured as a device separate from the spectroscopic camera 2, but the information processing device according to the present technology may also be configured as an integrated device with the spectroscopic camera 2.

<4. Summary of the embodiment>
As described above, the information processing device (1 or 1A) as an embodiment includes a light source spectrum estimation unit (F1, F1A) that estimates a light source spectrum based on an object spectrum that is spectral information of a subject obtained by a spectroscopic camera, using a light source model that expresses a light source spectrum that is spectral information of an outdoor light source by weighted addition of a solar spectrum that is spectral information of light irradiated from the sun and a sky spectrum that is spectral information of light irradiated from the sky.
This makes it possible to estimate the light source spectrum by fitting a function serving as a light source model to the object spectrum detected by the spectroscopic camera.
Since it is possible to eliminate the restriction imposed by the conventional technology that the environment in which the light source spectrum can be estimated is limited to the assumed environment, it is possible to improve the robustness of the light source spectrum estimation.
In addition, since the light source model used represents the light source spectrum by weighted addition of the sunlight spectrum and the sky spectrum, it is possible to realize appropriate light source spectrum estimation that takes into account whether the object is in sunlight or shade.

Furthermore, in the information processing device as an embodiment, the light source spectrum estimation unit estimates the light source spectrum by estimating the parameters of the light source model and the object spectral reflectance, using the error (Ds) between the trial object spectrum, which is a spectrum calculated by multiplying the trial light source spectrum, which is a light source spectrum calculated by setting candidate values for the parameters of the light source model, by a candidate value of the object spectral reflectance indicating the spectral reflectance of the object as the subject, and the object spectrum as a reference value for estimation.
This makes it possible to estimate candidate values that satisfy the condition that the error is small, for example, that the error between the estimated object spectrum and the object spectrum detected by a spectroscopic camera is minimized, as the correct values for the parameters of the light source model and the object spectral reflectance, and thus makes it possible to appropriately estimate the light source spectrum according to the environment.

Furthermore, in the information processing device as the embodiment, the light source spectrum estimating unit estimates the weighting coefficient (b0) of the sunlight spectrum and the weighting coefficient (b1) of the sky spectrum as parameters of the light source model based on the error.
This makes it possible to estimate an appropriate light source spectrum according to whether the environment is in sunlight or shade.

Furthermore, in the information processing device of the embodiment (same as F1), a sun/shade area determination unit (same as F5) is provided that determines the sunshine area and the shade area of the subject imaged by the spectroscopic camera using the weighting coefficient of the sunlight spectrum and the weighting coefficient of the sky spectrum estimated by the light source spectrum estimation unit.
This makes it possible to appropriately determine sunny areas and shaded areas in an image captured by the spectroscopic camera.

In addition, in the information processing device as an embodiment, a first environmental parameter estimation unit (F2 or F2A) is provided that estimates at least a portion of the environmental parameters used by the light source spectrum estimation unit to calculate the solar spectrum and the sky spectrum based on at least one of the position information of the spectroscopic camera and the current time information.
This makes it possible to estimate appropriate values for environmental parameters used in calculating the solar spectrum and sky spectrum that can be estimated based on the position information of the spectroscopic camera and current time information, and makes it possible to estimate appropriate solar spectrum and sky spectrum according to the actual environment.
Therefore, the estimation accuracy of the light source spectrum can be improved.

Furthermore, in the information processing device as an embodiment, the light source spectrum estimation unit uses environmental parameters such as the solar altitude, solar angle, and ozone amount to calculate the solar spectrum and the sky spectrum, and the first environmental parameter estimation unit estimates at least one of the solar altitude, solar angle, and ozone amount based on at least one of the position information of the spectroscopic camera and current time information.
The solar altitude, solar angle, and amount of ozone are environmental parameters that can be estimated based on location information and current time information.
Therefore, according to the above configuration, it is possible to estimate appropriate spectra according to the actual environment as the sunlight spectrum and the sky spectrum, thereby improving the accuracy of estimating the light source spectrum.

Furthermore, in the information processing device as an embodiment, the light source spectrum estimation unit is equipped with a second environmental parameter estimation unit (F3) that estimates at least some of the environmental parameters used in calculating the solar spectrum and sky spectrum based on an image captured by an upward-facing camera (F3) that captures an image of the sky.
This makes it possible to estimate appropriate values for the environmental parameters used to calculate the solar spectrum and sky spectrum that can be estimated based on images captured by the upward-facing camera, and to estimate appropriate solar spectrum and sky spectrum that are suited to the actual environment.
Therefore, the estimation accuracy of the light source spectrum can be improved.

In addition, in an information processing device as an embodiment, the light source spectrum estimation unit uses environmental parameters such as aerosol turbidity and precipitated water vapor amount to calculate the solar spectrum and sky spectrum, and the second environmental parameter estimation unit estimates at least one of the aerosol turbidity and the precipitated water vapor amount based on an image captured by an upward-facing camera.
As a result, when aerosol turbidity or precipitated water vapor amount is used to calculate the solar spectrum and sky spectrum, it becomes possible to appropriately estimate these environmental parameters based on images taken from the sky, making it possible to estimate appropriate solar spectrum and sky spectrum according to the actual environment.
Therefore, the estimation accuracy of the light source spectrum can be improved.

Furthermore, in the information processing device according to the embodiment, the light source spectrum estimation unit (F1A) uses a fixed value for one of the environmental parameters used to calculate the sunlight spectrum and the sky spectrum.
This makes it possible to use standard values corresponding to a standard environment for at least one of the environmental parameters used in calculating the solar spectrum and the sky spectrum.
Some environmental parameters have little effect on the estimated light source spectrum even when standard values are used. In addition, if fixed values are used for the environmental parameters, there is no need to perform estimation processing for the environmental parameters. Therefore, with the above configuration, it is possible to reduce the processing load required for estimating the light source spectrum while suppressing a decrease in the estimation accuracy of the light source spectrum.

Furthermore, in the information processing device of the embodiment, the light source spectrum estimation unit (F1) estimates the light source spectrum using a light source model for each divided region obtained by dividing the spectroscopic image obtained by the spectroscopic camera into regions each having a predetermined number of pixels.
This allows estimation of the light source spectrum at a finer granularity, for example pixel by pixel, rather than for the entire image.
Therefore, it is possible to improve the estimation accuracy of the light source spectrum in the spatial direction.
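The region-wise estimation could be sketched as follows: split the spectral image into blocks and fit the two model weights per block. For brevity this sketch assumes a known flat reflectance per region (a simplification; the embodiment estimates reflectance too), and all spectra are synthetic placeholders:

```python
import numpy as np

def estimate_per_region(spectral_image, sun_spec, sky_spec, block=8, refl=0.5):
    """Estimate (w_sun, w_sky) independently for each block x block region
    by least-squares fitting (w_sun*S_sun + w_sky*S_sky) * refl
    to the region's mean spectrum."""
    h, w, _ = spectral_image.shape
    basis = np.stack([sun_spec * refl, sky_spec * refl], axis=1)  # (bands, 2)
    weights = np.zeros((h // block, w // block, 2))
    for i in range(h // block):
        for j in range(w // block):
            region = spectral_image[i*block:(i+1)*block, j*block:(j+1)*block]
            mean_spec = region.reshape(-1, region.shape[-1]).mean(axis=0)
            weights[i, j], *_ = np.linalg.lstsq(basis, mean_spec, rcond=None)
    return weights

# Synthetic check: a 16x16, 6-band image, sunlit left half vs shaded right half.
rng = np.random.default_rng(2)
sun = rng.uniform(0.5, 1.0, 6)
sky = rng.uniform(0.1, 0.4, 6)
img = np.zeros((16, 16, 6))
img[:, :8] = (1.0 * sun + 0.2 * sky) * 0.5   # sunlit: w_sun=1.0, w_sky=0.2
img[:, 8:] = (0.0 * sun + 0.8 * sky) * 0.5   # shaded: w_sun=0.0, w_sky=0.8
w = estimate_per_region(img, sun, sky, block=8)
print(np.round(w, 3))
```

Shrinking `block` toward 1 recovers the per-pixel granularity mentioned above, at correspondingly higher cost.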

Furthermore, in the information processing device according to the embodiment, the light source spectrum estimating unit (F1A) estimates an average light source spectrum for the entire spectroscopic image obtained by the spectroscopic camera.
The above configuration is suitable for cases where information on the average light source spectrum over the entire subject is required.

Furthermore, the information processing device according to the embodiment includes an image processing unit (F4 or F4A) that performs light source cancellation processing on the spectroscopic image obtained by the spectroscopic camera based on the light source spectrum estimated by the light source spectrum estimation unit.
As a result, the light source cancellation process is performed based on the appropriate light source spectrum estimated by the light source spectrum estimation unit.
Therefore, the accuracy of the light source cancellation process is improved.
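Light source cancellation amounts to dividing each band of the spectral image by the estimated light source spectrum, leaving an approximation of per-pixel reflectance. A minimal sketch, with illustrative array shapes and values:

```python
import numpy as np

def cancel_light_source(spectral_image, light_spectrum, eps=1e-6):
    """Divide each band of an (H, W, B) spectral image by the estimated
    light source spectrum so that the result approximates per-pixel
    spectral reflectance, independent of the illumination."""
    return np.clip(spectral_image / np.maximum(light_spectrum, eps), 0.0, None)

# Synthetic check: the cancelled image recovers the reflectance.
rng = np.random.default_rng(1)
light = rng.uniform(0.2, 1.0, 5)         # estimated light source spectrum (5 bands)
refl = rng.uniform(0.0, 1.0, (4, 4, 5))  # ground-truth reflectance
image = refl * light                     # observed object spectra
recovered = cancel_light_source(image, light)
print(float(np.max(np.abs(recovered - refl))))
```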

Furthermore, the information processing device according to the embodiment includes an image processing unit (F4) that performs semantic segmentation processing based on the light source spectrum estimated by the light source spectrum estimation unit and the spectroscopic image obtained by the spectroscopic camera.
As a result, semantic segmentation processing is performed on the subject based on the appropriate light source spectrum estimated by the light source spectrum estimation unit.
Therefore, the accuracy of the semantic segmentation process is improved.
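As one hedged illustration of how the estimated light source spectrum could feed segmentation: cancel the illumination first, then label each pixel by its nearest known class reflectance. The nearest-centroid rule is a stand-in for whatever learned model an actual implementation would use; the prototypes and spectra are invented:

```python
import numpy as np

def segment_by_reflectance(spectral_image, light_spectrum, class_reflectances):
    """Toy per-pixel semantic segmentation: cancel the estimated light
    source, then assign each pixel the label of the nearest known class
    reflectance prototype."""
    refl = spectral_image / np.maximum(light_spectrum, 1e-6)      # (H, W, B)
    # Squared distance of every pixel reflectance to every class prototype.
    d = ((refl[..., None, :] - class_reflectances[None, None]) ** 2).sum(-1)
    return np.argmin(d, axis=-1)                                  # (H, W) labels

# Synthetic check: two materials under one light source.
rng = np.random.default_rng(3)
light = rng.uniform(0.3, 1.0, 4)
protos = np.array([[0.8, 0.7, 0.2, 0.1],   # class 0 prototype (placeholder)
                   [0.2, 0.2, 0.6, 0.7]])  # class 1 prototype (placeholder)
img = np.zeros((2, 2, 4))
img[0] = protos[0] * light                 # top row is class 0
img[1] = protos[1] * light                 # bottom row is class 1
labels = segment_by_reflectance(img, light, protos)
print(labels)
```

The point of the design is that classification operates on illumination-independent reflectance, so the same prototypes work in sun and shade.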

An information processing method as an embodiment is an information processing method in which an information processing device estimates a light source spectrum based on an object spectrum, which is spectral information of a subject obtained by a spectroscopic camera, using a light source model that represents a light source spectrum, which is spectral information of an outdoor light source, by weighted addition of a solar spectrum, which is spectral information of light irradiated from the sun, and a sky spectrum, which is spectral information of light irradiated from the sky.
With such an information processing method, it is possible to obtain the same functions and effects as those of the information processing device according to the above embodiment.
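The estimation procedure this method describes — set candidate values for the model parameters, form a trial light source spectrum, multiply it by a candidate object spectral reflectance to get a trial object spectrum, and score it by its error against the observed object spectrum — can be sketched as a small grid search. The band count, candidate grids, and synthetic spectra below are illustrative placeholders, not values from the disclosure:

```python
import numpy as np

def estimate_light_source(obj_spec, sun_spec, sky_spec, refl_candidates, weight_candidates):
    """Grid-search the light source model L = w_sun*S_sun + w_sky*S_sky.

    For every (w_sun, w_sky) candidate and every candidate reflectance,
    form the trial object spectrum L * r and keep the combination whose
    squared error against the observed object spectrum is smallest."""
    best = None
    for w_sun, w_sky in weight_candidates:
        trial_light = w_sun * sun_spec + w_sky * sky_spec      # trial light source spectrum
        for r in refl_candidates:
            trial_obj = trial_light * r                        # trial object spectrum
            err = float(np.sum((trial_obj - obj_spec) ** 2))   # estimation criterion
            if best is None or err < best[0]:
                best = (err, w_sun, w_sky, trial_light, r)
    return best

# Synthetic check: 8 spectral bands, known ground truth on the candidate grid.
rng = np.random.default_rng(0)
sun = rng.uniform(0.5, 1.0, 8)
sky = rng.uniform(0.1, 0.5, 8)
obj = (0.7 * sun + 0.3 * sky) * 0.4

weight_candidates = [(ws, wk) for ws in np.linspace(0, 1, 11) for wk in np.linspace(0, 1, 11)]
refl_candidates = [np.full(8, v) for v in np.linspace(0.1, 0.9, 9)]
err, w_sun, w_sky, light, r = estimate_light_source(obj, sun, sky, refl_candidates, weight_candidates)
print(round(w_sun, 2), round(w_sky, 2))
```

A practical implementation would replace the exhaustive grid with a continuous optimizer, but the criterion being minimized is the same.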

Here, as an embodiment, it is possible to consider a program that causes, for example, a CPU or a DSP, or a device including these, to realize the functions of the light source spectrum estimation unit F1 or F1A described with reference to FIG. 8 and elsewhere.
That is, the program of the embodiment is a program readable by a computer device, and causes the computer device to realize a function of estimating a light source spectrum based on an object spectrum, which is spectral information of a subject obtained by a spectroscopic camera, using a light source model that expresses a light source spectrum, which is spectral information of an outdoor light source, by weighted addition of a solar spectrum, which is spectral information of light irradiated from the sun, and a sky spectrum, which is spectral information of light irradiated from the sky.
By using such a program, the functions of the light source spectrum estimation unit F1 or F1A or the like described above can be realized in a device such as the information processing device 1 or 1A or the like.

The above-mentioned programs can be recorded in advance in a recording medium such as an HDD (Hard Disc Drive) or SSD (Solid State Drive) built into a device such as a computer device, or in a ROM within a microcomputer having a CPU.
Alternatively, the software may be temporarily or permanently stored (recorded) on a removable recording medium such as a flexible disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto Optical) disk, a DVD (Digital Versatile Disc), a Blu-ray Disc (registered trademark), a magnetic disk, a semiconductor memory, a memory card, etc. Such removable recording media may be provided as so-called package software.
Such a program can be installed in a personal computer or the like from a removable recording medium, or can be downloaded from a download site via a network such as a LAN or the Internet.

Furthermore, such a program is suitable for providing the light source spectrum estimation method of the embodiment in a wide range of applications. For example, by downloading the program to a personal computer, a portable information processing device, a mobile phone, a game device, a video device, a PDA (Personal Digital Assistant), etc., the personal computer, etc. can be made to function as a device that realizes the light source spectrum estimation method of the present disclosure.

It should be noted that the effects described in this specification are merely examples and are not limiting, and other effects may also be obtained.

<5. This Technology>
The present technology can also be configured as follows.
(1)
An information processing device comprising: a light source spectrum estimation unit that estimates the light source spectrum based on an object spectrum that is spectral information of a subject obtained by a spectroscopic camera, using a light source model that expresses a light source spectrum that is spectral information of an outdoor light source by weighted addition of a solar spectrum that is spectral information of light irradiated from the sun and a sky spectrum that is spectral information of light irradiated from the sky.
(2)
The information processing device according to (1), wherein the light source spectrum estimation unit estimates the light source spectrum by estimating the parameters of the light source model and the object spectral reflectance, using as an estimation criterion the error between the object spectrum and a trial object spectrum, the trial object spectrum being a spectrum calculated by multiplying a trial light source spectrum, which is a light source spectrum calculated by setting candidate values for the parameters of the light source model, by a candidate value of the object spectral reflectance indicating the spectral reflectance of the object serving as the subject.
(3)
The information processing device according to (2), wherein the light source spectrum estimation unit estimates a weighting coefficient of the sunlight spectrum and a weighting coefficient of the sky spectrum as parameters of the light source model based on the error.
(4)
The information processing device according to (3), further comprising a sun/shade area determination unit that determines a sun area and a shade area of a subject imaged by the spectroscopic camera using the weighting coefficient of the sunlight spectrum and the weighting coefficient of the sky spectrum estimated by the light source spectrum estimation unit.
(5)
The information processing device according to any one of (1) to (4), further comprising a first environmental parameter estimation unit that estimates at least a portion of the environmental parameters used by the light source spectrum estimation unit to calculate the solar spectrum and the sky spectrum, based on at least one of position information of the spectroscopic camera and current time information.
(6)
The information processing device according to (5), wherein the light source spectrum estimation unit uses environmental parameters, namely a solar altitude, a solar angle, and an ozone amount, to calculate the solar spectrum and the sky spectrum, and the first environmental parameter estimation unit estimates at least one of the solar altitude, the solar angle, and the amount of ozone based on at least one of position information of the spectroscopic camera and current time information.
(7)
The information processing device according to any one of (1) to (6), further comprising a second environmental parameter estimation unit that estimates at least a portion of the environmental parameters used by the light source spectrum estimation unit to calculate the solar spectrum and the sky spectrum based on an image captured by an upward-facing camera that captures an image of the sky.
(8)
The information processing device according to (7), wherein the light source spectrum estimation unit uses environmental parameters, namely aerosol turbidity and precipitated water vapor amount, in calculating the solar spectrum and the sky spectrum, and the second environmental parameter estimation unit estimates at least one of the aerosol turbidity and the amount of precipitated water vapor based on an image captured by the upward-facing camera.
(9)
The information processing device according to any one of (1) to (4), wherein the light source spectrum estimation unit uses a fixed value for any of the environmental parameters used in the calculation of the solar spectrum and the sky spectrum.
(10)
The information processing device according to any one of (1) to (9), wherein the light source spectrum estimation unit estimates the light source spectrum using the light source model for each divided region obtained by dividing the spectroscopic image obtained by the spectroscopic camera into regions each having a predetermined number of pixels.
(11)
The information processing device according to any one of (1) to (9), wherein the light source spectrum estimation unit estimates an average light source spectrum of the entire spectroscopic image obtained by the spectroscopic camera.
(12)
The information processing device according to any one of (1) to (11), further comprising an image processing unit that performs a light source cancellation process on a spectroscopic image obtained by the spectroscopic camera based on the light source spectrum estimated by the light source spectrum estimation unit.
(13)
The information processing device according to any one of (1) to (11), further comprising an image processing unit that performs semantic segmentation processing based on the light source spectrum estimated by the light source spectrum estimation unit and a spectroscopic image obtained by the spectroscopic camera.
(14)
An information processing method in which an information processing device estimates a light source spectrum, which is spectral information of an outdoor light source, based on an object spectrum, which is spectral information of a subject obtained by a spectroscopic camera, using a light source model that expresses the light source spectrum by weighted addition of a solar spectrum, which is spectral information of light irradiated from the sun, and a sky spectrum, which is spectral information of light irradiated from the sky.
(15)
A program readable by a computer device, the program causing the computer device to realize a function of estimating a light source spectrum, which is spectral information of an outdoor light source, based on an object spectrum, which is spectral information of a subject obtained by a spectroscopic camera, using a light source model that expresses the light source spectrum by weighted addition of a solar spectrum, which is spectral information of light irradiated from the sun, and a sky spectrum, which is spectral information of light irradiated from the sky.

REFERENCE SIGNS LIST
1 Information processing device
2 Spectroscopic camera
3 Upward camera
4 Spectroscopic sensor
4a Pixel array unit
5 Spectroscopic image generating unit
5a Demosaic unit
5b Narrowband image generating unit
6 Control unit
7 Timekeeping unit
8 GNSS sensor
9 Communication unit
Px Pixel
Pu Spectroscopic pixel unit
11, 11A CPU
12 ROM
13 RAM
14 Bus
15 Input/Output Interface
16 Input Unit
17 Display Unit
18 Audio Output Unit
19 Memory Unit
20 Communication Unit
21 Drive
22 Removable Recording Medium
F1, F1A Light Source Spectrum Estimation Unit
F2, F2A First Environmental Parameter Estimation Unit
F3, F3A Second Environmental Parameter Estimation Unit
F4, F4A Image Processing Unit

Claims (15)

An information processing device comprising: a light source spectrum estimation unit that estimates the light source spectrum based on an object spectrum that is spectral information of a subject obtained by a spectroscopic camera, using a light source model that expresses a light source spectrum that is spectral information of an outdoor light source by weighted addition of a solar spectrum that is spectral information of light irradiated from the sun and a sky spectrum that is spectral information of light irradiated from the sky.
The information processing device according to claim 1, wherein the light source spectrum estimation unit estimates the light source spectrum by estimating the parameters of the light source model and the object spectral reflectance, using as an estimation criterion the error between the object spectrum and a trial object spectrum, the trial object spectrum being a spectrum calculated by multiplying a trial light source spectrum, which is a light source spectrum calculated by setting candidate values for the parameters of the light source model, by a candidate value of the object spectral reflectance indicating the spectral reflectance of the object serving as the subject.
The information processing device according to claim 2, wherein the light source spectrum estimation unit estimates a weighting coefficient of the sunlight spectrum and a weighting coefficient of the sky spectrum as parameters of the light source model based on the error.
The information processing device according to claim 3 , further comprising a sun/shade area determination unit that determines a sunshine area and a shade area of a subject imaged by the spectroscopic camera using the weighting coefficient of the sunlight spectrum and the weighting coefficient of the sky spectrum estimated by the light source spectrum estimation unit.
The information processing device according to claim 1, further comprising a first environmental parameter estimation unit that estimates at least a part of the environmental parameters used by the light source spectrum estimation unit to calculate the solar spectrum and the sky spectrum based on at least one of position information of the spectroscopic camera and current time information.
The information processing device according to claim 5, wherein the light source spectrum estimation unit uses environmental parameters, namely a solar altitude, a solar angle, and an ozone amount, to calculate the solar spectrum and the sky spectrum, and the first environmental parameter estimation unit estimates at least one of the solar altitude, the solar angle, and the amount of ozone based on at least one of position information of the spectroscopic camera and current time information.
The information processing device according to claim 1 , further comprising a second environmental parameter estimation unit that estimates at least a portion of the environmental parameters used by the light source spectrum estimation unit to calculate the solar spectrum and the sky spectrum based on an image captured by an upward-facing camera that captures an image of the sky.
The information processing device according to claim 7, wherein the light source spectrum estimation unit uses environmental parameters, namely aerosol turbidity and precipitated water vapor amount, in calculating the solar spectrum and the sky spectrum, and the second environmental parameter estimation unit estimates at least one of the aerosol turbidity and the amount of precipitated water vapor based on an image captured by the upward-facing camera.
The information processing device according to claim 1, wherein the light source spectrum estimation unit uses a fixed value for any of the environmental parameters used in the calculation of the solar spectrum and the sky spectrum.
The information processing device according to claim 1, wherein the light source spectrum estimation unit estimates the light source spectrum using the light source model for each divided region obtained by dividing the spectroscopic image obtained by the spectroscopic camera into regions each having a predetermined number of pixels.
The information processing device according to claim 1, wherein the light source spectrum estimation unit estimates an average light source spectrum of the entire spectroscopic image obtained by the spectroscopic camera.
The information processing device according to claim 1 , further comprising an image processing unit that performs a light source cancellation process on the spectroscopic image obtained by the spectroscopic camera, based on the light source spectrum estimated by the light source spectrum estimation unit.
The information processing device according to claim 1 , further comprising an image processing unit that performs a semantic segmentation process based on the light source spectrum estimated by the light source spectrum estimation unit and a spectroscopic image obtained by the spectroscopic camera.
An information processing method in which an information processing device estimates a light source spectrum, which is spectral information of an outdoor light source, based on an object spectrum, which is spectral information of a subject obtained by a spectroscopic camera, using a light source model that expresses the light source spectrum by weighted addition of a solar spectrum, which is spectral information of light irradiated from the sun, and a sky spectrum, which is spectral information of light irradiated from the sky.
A program readable by a computer device, the program causing the computer device to realize a function of estimating a light source spectrum, which is spectral information of an outdoor light source, based on an object spectrum, which is spectral information of a subject obtained by a spectroscopic camera, using a light source model that expresses the light source spectrum by weighted addition of a solar spectrum, which is spectral information of light irradiated from the sun, and a sky spectrum, which is spectral information of light irradiated from the sky.
PCT/JP2024/041564 2023-12-12 2024-11-25 Information processing device, information processing method, and program Pending WO2025126817A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2023209455 2023-12-12
JP2023-209455 2023-12-12

Publications (1)

Publication Number Publication Date
WO2025126817A1 true WO2025126817A1 (en) 2025-06-19

Family

ID=96057114

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/041564 Pending WO2025126817A1 (en) 2023-12-12 2024-11-25 Information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2025126817A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010106582A1 (en) * 2009-03-18 2010-09-23 株式会社パスコ Method and device for evaluation of solar radiation intensity
WO2011018999A1 (en) * 2009-08-12 2011-02-17 日本電気株式会社 Obstacle detection device and method and obstacle detection system
JP2011053377A (en) * 2009-08-31 2011-03-17 Canon Inc Imaging apparatus and method for controlling the same
WO2014203453A1 (en) * 2013-06-19 2014-12-24 日本電気株式会社 Illumination estimation device, illumination estimation method, and illumination estimation program
WO2018016555A1 (en) * 2016-07-22 2018-01-25 日本電気株式会社 Image processing device, image processing method, and recording medium
WO2023068208A1 (en) * 2021-10-20 2023-04-27 国立研究開発法人情報通信研究機構 Aerosol concentration inference method and system


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HITACHI: "What is semantic segmentation? How it works and what techniques are used", HTTPS://WWW.HITACHI-SOLUTIONS-CREATE.CO.JP, HITACHI SOLUTIONS CREATE, LTD., JP, 30 September 2023 (2023-09-30), JP, pages 1 - 5, XP093324300, Retrieved from the Internet <URL:https://www.hitachi-solutions-create.co.jp/column/technology/semantic-segmentation.html> *
KANEKO EIJI, TODA MASATO, AOKI HIROFUMI, TSUKADA MASATO: "Daylight Spectrum Model under Weather Conditions from Clear Sky to Cloudy", IPSJ SIG TECHNICAL REPORT, vol. 2012-CVIM-182, no. 32, 24 May 2012 (2012-05-24), pages 1 - 6, XP093324292 *
OKE SHINICHIRO, HOASHI KAI, YAMAMOTO MASAHIRO: "A simple model using global irradiance for estimation of solar spectral irradiance in any place", JOURNAL OF JAPAN SOLAR ENERGY SOCIETY, vol. 42, no. 3 (233), 1 January 2016 (2016-01-01), JP, pages 37 - 43, XP093324306 *

Similar Documents

Publication Publication Date Title
US8509476B2 (en) Automated system and method for optical cloud shadow detection over water
Chen et al. Improving estimates of fractional vegetation cover based on UAV in alpine grassland on the Qinghai–Tibetan Plateau
US7826685B2 (en) Spatial and spectral calibration of a panchromatic, multispectral image pair
US20230026811A1 (en) System and method for removing haze from remote sensing images
CN108460739A (en) A kind of thin cloud in remote sensing image minimizing technology based on generation confrontation network
WO2020160643A1 (en) Shadow and cloud masking for agriculture applications using convolutional neural networks
CN114419392A (en) Hyperspectral snapshot image recovery method, device, equipment and medium
CN112529788B (en) Multispectral remote sensing image thin cloud removing method based on thin cloud thickness map estimation
JP6943251B2 (en) Image processing equipment, image processing methods and computer-readable recording media
US10511793B2 (en) Techniques for correcting fixed pattern noise in shutterless FIR cameras
WO2014116472A1 (en) System and method for atmospheric parameter enhancement
CN115170442B (en) Point light source-based absolute radiation correction method, apparatus, device, and medium
JPWO2016098353A1 (en) Image information processing apparatus, image information processing system, image information processing method, and image information processing program
WO2025126817A1 (en) Information processing device, information processing method, and program
US12322070B1 (en) System and method for hyperspectral image generation with quality assurance
CN120411687A (en) Cross-scene training method, device and electronic equipment for plant segmentation model
JP2010055546A (en) Image processing apparatus, image processing method and program
CN118279671B (en) Satellite inversion cloud classification method, device, electronic equipment and computer storage medium
Gao et al. An automatic exposure method of plane array remote sensing image based on two-dimensional entropy
CN115049754B (en) Method and device for generating infrared thermodynamic diagram on orbit based on satellite
Lu et al. Modification of 6SV to remove skylight reflected at the air-water interface: Application to atmospheric correction of Landsat 8 OLI imagery in inland waters
Yu et al. Efficient statistical validation of autonomous driving systems
CN119513523B (en) Illumination condition self-adaptive soil humidity estimation method based on multi-mode data fusion
JPWO2016189853A1 (en) Image processing apparatus, image processing system, image processing method, and computer program
CN118761330B (en) Surface temperature inversion method and surface temperature inversion device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24903476

Country of ref document: EP

Kind code of ref document: A1