
WO2016093071A1 - Image processing device, image processing method, and program - Google Patents

Image processing device, image processing method, and program

Info

Publication number
WO2016093071A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
blur
visual impairment
use position
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2015/083243
Other languages
English (en)
Japanese (ja)
Inventor
碧輝 江山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Publication of WO2016093071A1


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/02 Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/028 Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing visual acuity; for determination of refraction, e.g. phoropters
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 Aspects of display data processing
    • G09G 2340/04 Changes in size, position or resolution of an image
    • G09G 2340/0407 Resolution change, inclusive of the use of different resolutions for different screen areas
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2380/00 Specific applications
    • G09G 2380/08 Biomedical applications

Definitions

  • The present disclosure relates to an image processing device, an image processing method, and a program, and more particularly to an image processing device, an image processing method, and a program that can easily display an image suitable for a visually impaired user.
  • Private displays such as smartphones are generally designed for users who do not have visual impairments such as myopia and presbyopia. Therefore, when using such a display, a user with a visual impairment needs to wear a corrective device such as glasses or contact lenses, which can be inconvenient.
  • To address this, a technique has been devised in which a computer receives a display hardware characteristic, at least one parameter describing the user's visual impairment, and at least one parameter describing the conditions under which the display is viewed by the user, calculates a pupil function, calculates a point spread function (PSF) or an optical transfer function (OTF), and generates a precorrected image corresponding to at least a part of the original image to correct the user's visual impairment (see, for example, Patent Document 1).
  • In the technique of Patent Document 1, however, a large number of parameters must be acquired, so the processing and configuration become complicated and the processing cost and manufacturing cost increase. It is therefore desirable to be able to easily display an image suitable for a user with a visual impairment.
  • The present disclosure has been made in view of such a situation, and makes it possible to easily display an image suitable for a visually impaired user.
  • An image processing apparatus according to a first aspect of the present disclosure includes a position acquisition unit that acquires an optimal use position, which is a use position of a display device at which blur due to a visual impairment of a user does not occur, and a display control unit that, based on the optimal use position acquired by the position acquisition unit, the normal use position of the display device, and a display target image, causes the display device to display an image for visual impairment in which the blur does not occur on the display device at the normal use position.
  • The image processing method and program according to the first aspect of the present disclosure correspond to the image processing apparatus according to the first aspect of the present disclosure.
  • In the first aspect of the present disclosure, the optimal use position, which is the use position of the display device at which blur due to the user's visual impairment does not occur, is acquired, and based on the acquired optimal use position, the normal use position of the display device, and the display target image, an image for visual impairment in which the blur does not occur on the display device at the normal use position is displayed on the display device.
  • An image processing apparatus according to a second aspect of the present disclosure includes a blur degree acquisition unit that acquires the degree of blur caused by the user's visual impairment in an image displayed on a display device, and a display control unit that causes the display device to display an image for visual impairment in which the blur does not occur, based on the degree of blur acquired by the blur degree acquisition unit and the image to be displayed.
  • The image processing method and program according to the second aspect of the present disclosure correspond to the image processing apparatus according to the second aspect of the present disclosure.
  • In the second aspect of the present disclosure, the degree of blur caused by the user's visual impairment in the image displayed on the display device is acquired, and based on the acquired degree of blur and the image to be displayed, an image for visual impairment in which the blur does not occur is displayed on the display device.
  • According to the first and second aspects of the present disclosure, an image can be displayed. Further, according to the first and second aspects of the present disclosure, it is possible to easily display an image suitable for a user with a visual impairment.
  • 1. First embodiment: Smartphone (FIGS. 1 to 5)
  • 2. Second embodiment: Information processing system (FIGS. 6 to 8)
  • 3. Third embodiment: Smartphone (FIGS. 9 to 12)
  • 4. Fourth embodiment: Information processing system (FIGS. 13 to 17)
  • FIG. 1 is a block diagram illustrating a hardware configuration example of a first embodiment of a smartphone as an image processing apparatus to which the present disclosure is applied.
  • The smartphone 10 includes a communication unit 12, an audio codec 13, a camera unit 16, an image processing unit 17, a recording/reproducing unit 19, a recording unit 20, a touch panel 21 (display device), and a control unit 22, which are connected to one another via a bus 24.
  • An antenna 11 is connected to the communication unit 12, and a speaker 14 and a microphone 15 are connected to the audio codec 13. Furthermore, an operation unit 23, such as a power button, is connected to the control unit 22.
  • The smartphone 10 performs processing in various modes such as a communication mode, a call mode, a shooting mode, and a visual impairment setting mode.
  • The visual impairment setting mode is a mode for making settings so that a display target image suitable for a user without a visual impairment is changed into, and displayed as, an image suitable for a user with a visual impairment.
  • In the call mode, an analog audio signal generated by the microphone 15 is input to the audio codec 13.
  • The audio codec 13 converts the analog audio signal into digital audio data, compresses the converted audio data, and supplies the compressed audio data to the communication unit 12.
  • The communication unit 12 performs modulation processing, frequency conversion processing, and the like on the compressed audio data to generate a transmission signal.
  • The communication unit 12 supplies the transmission signal to the antenna 11 and transmits it to a base station (not shown).
  • The communication unit 12 also obtains digital audio data transmitted from the other party by performing amplification, frequency conversion processing, demodulation processing, and the like on the reception signal received by the antenna 11, and supplies the digital audio data to the audio codec 13.
  • The audio codec 13 decompresses the audio data, converts the decompressed audio data into an analog audio signal, and outputs it to the speaker 14.
  • The control unit 22 accepts characters input by the user operating the touch panel 21 and displays them on the touch panel 21. Further, the control unit 22 generates mail data based on an instruction input by the user operating the touch panel 21 and supplies the mail data to the communication unit 12.
  • the communication unit 12 performs mail data modulation processing, frequency conversion processing, and the like, and transmits the obtained transmission signal from the antenna 11.
  • the communication unit 12 also performs amplification, frequency conversion processing, demodulation processing, and the like of the received signal received by the antenna 11 to restore mail data.
  • This mail data is supplied to the touch panel 21 and displayed.
  • the smartphone 10 can also record the received mail data on the recording unit 20 by the recording / reproducing unit 19.
  • The recording unit 20 is, for example, a semiconductor memory such as a RAM (Random Access Memory) or a built-in flash memory, a hard disk, or a removable medium such as a magnetic disk, a magneto-optical disk, an optical disk, a USB (Universal Serial Bus) memory, or a memory card.
  • The control unit 22 supplies the camera unit 16 with a start command for the shooting preparation operation.
  • The camera unit 16 includes a back camera whose lens is on the back surface of the smartphone 10 in the normal use state (the surface on the opposite side of the surface on which the touch panel 21 is disposed), and a front camera whose lens is on the front surface (the surface on which the touch panel 21 is disposed).
  • The back camera is used when the user photographs a subject other than himself or herself, and the front camera is used when the user photographs himself or herself as the subject.
  • The back camera or the front camera of the camera unit 16 performs a shooting preparation operation, such as an AF (distance measurement) operation or provisional shooting, in response to the shooting preparation operation start command supplied from the control unit 22.
  • The control unit 22 supplies a shooting command to the camera unit 16 in accordance with a shooting command input by the user operating the touch panel 21.
  • The camera unit 16 performs actual shooting in accordance with the shooting command.
  • A captured image obtained by provisional shooting or actual shooting is supplied to the touch panel 21 and displayed.
  • The captured image obtained by actual shooting is also supplied to the image processing unit 17 and encoded by the image processing unit 17.
  • The encoded data generated as a result of the encoding is supplied to the recording/reproducing unit 19 and recorded in the recording unit 20.
  • In the visual impairment setting mode, the control unit 22 acquires the degree of blur caused by the user's visual impairment in the image displayed on the touch panel 21, which is input by the visually impaired user operating the touch panel 21.
  • When a user with a visual impairment views, at the normal use position, an image displayed on the touch panel 21 that is suitable for a user without a visual impairment (hereinafter referred to as an original image), the image seen by the user is blurred.
  • The user therefore inputs the degree of that blur by operating the touch panel 21, and the control unit 22 acquires the input degree of blur.
  • The control unit 22 then reads, from among the filter coefficients recorded in the recording unit 20 for each degree of blur that the user can input, the filter coefficient corresponding to the acquired degree of blur, and sets it as the current filter coefficient.
  • The control unit 22 performs an inverse filter process on the currently displayed image using the current filter coefficient, thereby generating an image for visual impairment in which blur due to the user's visual impairment does not occur on the touch panel 21 at the current use position, and displays it on the touch panel 21.
  • After the processing in the visual impairment setting mode is completed, the control unit 22 generates an image for visual impairment by performing the inverse filter process on the original image to be displayed, using the current filter coefficient. Then, the control unit 22 causes the touch panel 21 to display the image for visual impairment instead of the original image to be displayed.
  • The control unit 22 turns the power of the smartphone 10 on or off when the user presses the power button of the operation unit 23.
  • the control unit 22 performs the above-described processing by executing a program recorded in the recording unit 20, for example.
  • This program can be received by the communication unit 12 via a wired or wireless transmission medium and installed in the recording unit 20.
  • the program can be installed in the recording unit 20 in advance.
  • FIG. 2 is a diagram illustrating an example of a visual impairment setting screen displayed on the touch panel 21 in the visual impairment setting mode.
  • As shown in FIG. 2, an image 41 to be displayed is displayed over the entire visual impairment setting screen 40.
  • On the visual impairment setting screen 40, a Blur bar 42, which represents the degree of blur according to the position in the horizontal direction, and a knob 43, which represents the degree of blur to be input, are displayed superimposed on the image 41.
  • A position on the Blur bar 42 indicates a smaller degree of blur the closer it is to the left end, and a larger degree of blur the closer it is to the right end.
  • The user operates the touch panel 21 while viewing the visual impairment setting screen 40 and moves the knob 43 to the position on the Blur bar 42 that indicates the degree of blur that the user's visual impairment causes on the visual impairment setting screen 40, thereby inputting the degree of blur.
  • FIG. 3 is a block diagram illustrating a functional configuration example of a visual impairment processing unit that performs processing in the visual impairment setting mode realized by the smartphone 10.
  • The visual impairment processing unit 60 in FIG. 3 includes a blur degree acquisition unit 61, a coefficient acquisition unit 62, a coefficient storage unit 63, a generation unit 64, and a display control unit 65.
  • the blur degree acquisition unit 61, the coefficient acquisition unit 62, the generation unit 64, and the display control unit 65 are realized by, for example, the control unit 22 in FIG. 1, and the coefficient storage unit 63 is realized by, for example, the recording unit 20.
  • The blur degree acquisition unit 61 acquires the degree of blur that the user inputs by operating the touch panel 21 and moving the knob 43 on the visual impairment setting screen 40 in FIG. 2, and supplies it to the coefficient acquisition unit 62.
  • the coefficient acquisition unit 62 reads out the filter coefficient corresponding to the degree of blur from the coefficient storage unit 63 based on the degree of blur supplied from the blur degree acquisition unit 61, and sets it as the current filter coefficient.
  • the coefficient acquisition unit 62 supplies the current filter coefficient to the generation unit 64.
  • the coefficient storage unit 63 stores filter coefficients corresponding to the degree of blur that can be input by the user.
  • The generation unit 64 generates an image for visual impairment by performing an inverse filter process, such as a Wiener filter process, on the image 41 being displayed, using the filter coefficient supplied from the coefficient acquisition unit 62.
  • If an image for visual impairment has not yet been generated for the display target image, the image being displayed is the original image; if an image for visual impairment has already been generated for the display target image, the image being displayed is the image for visual impairment generated last time for that display target image.
  • the generation unit 64 supplies the generated visual impairment image to the display control unit 65.
  • the display control unit 65 displays the visual impairment setting screen 40 on the touch panel 21. Further, the display control unit 65 supplies the visual impairment image generated by the generation unit 64 to the touch panel 21 and displays it as the image 41 of the visual impairment setting screen 40.
  • Filter coefficients corresponding to the degree of blur that can be input by the user are generated based on a PSF (Point Spread Function) representing the blur.
  • FIG. 4 is a diagram illustrating an example of a PSF representing each degree of blur that can be input by the user.
  • In the example of FIG. 4, there are three degrees of blur that the user can input: mild, medium, and severe.
  • FIG. 4A shows the PSF for mild blur, FIG. 4B shows the PSF for medium blur, and FIG. 4C shows the PSF for severe blur.
  • The σ of the PSF is smaller as the degree of blur is lighter.
  • The filter coefficient corresponding to each degree of blur is generated by the following equation (1), using the value obtained by FFT (Fast Fourier Transform) processing of the PSF corresponding to that degree of blur:

    Coef = H* / (|H|^2 + Γ) ... (1)

  • In equation (1), Coef is the filter coefficient of the Wiener filter process, H is the PSF after the FFT process, H* is the complex conjugate of H, and Γ is a small constant that stabilizes the division. According to equation (1), it is possible to generate a filter coefficient of the Wiener filter process that removes the blur represented by the PSF from an image in which that blur has occurred.
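  • As a rough illustration, the following Python sketch builds a Wiener filter coefficient array from a PSF in the manner of equation (1); the Gaussian PSF model and the value of the stabilizing constant gamma are assumptions made for illustration, not values specified by the present disclosure.

        import numpy as np

        def gaussian_psf(size, sigma):
            """Hypothetical Gaussian PSF; the PSFs of FIG. 4 would differ only in sigma."""
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
            return psf / psf.sum()

        def wiener_coef(psf, shape, gamma=1e-2):
            """Equation (1): Coef = H* / (|H|^2 + gamma), with H the FFT of the PSF."""
            padded = np.zeros(shape)
            k = psf.shape[0]
            padded[:k, :k] = psf
            padded = np.roll(padded, (-(k // 2), -(k // 2)), axis=(0, 1))  # center the PSF at the origin
            h = np.fft.fft2(padded)
            return np.conj(h) / (np.abs(h) ** 2 + gamma)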
  • The generation unit 64 performs an FFT process on the image being displayed and, using the filter coefficient corresponding to the degree of blur input by the user, performs the Wiener filter processing (deconvolution) on the image being displayed after the FFT process, according to the following equation (2):

    G = Coef x F ... (2)

  • In equation (2), G is the image for visual impairment after the FFT process, Coef is the filter coefficient corresponding to the degree of blur input by the user, and F is the image being displayed after the FFT process. According to equation (2), the blur represented by the PSF that has occurred in an image can be removed from the image being displayed.
  • In practice, the blur represented by the PSF has not actually occurred in the image being displayed; rather, to the user with the visual impairment, the image being displayed merely appears as if that blur had occurred. Therefore, according to equation (2), it is possible to obtain the FFT-processed value of an image for visual impairment that appears free of the blur represented by the PSF to a user to whom the displayed image appears blurred by that PSF.
  • The generation unit 64 performs an IFFT (Inverse Fast Fourier Transform) process on the FFT-processed image for visual impairment obtained as a result of the Wiener filter process, thereby generating the image for visual impairment.
  • The image for visual impairment generated in this way is corrected more strongly, to compensate for a larger blur, as the degree of blur input by the user is heavier.
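  • Continuing the sketch above, applying equation (2) and the subsequent IFFT to a grayscale image could look as follows; generate_low_vision_image is a hypothetical helper name, and gaussian_psf and wiener_coef are the functions sketched after equation (1).

        import numpy as np

        def generate_low_vision_image(image, psf, gamma=1e-2):
            """F = FFT(image); G = Coef * F (equation (2)); image for visual impairment = IFFT(G)."""
            coef = wiener_coef(psf, image.shape, gamma)  # helper from the previous sketch
            g = coef * np.fft.fft2(image)                # Wiener deconvolution
            out = np.real(np.fft.ifft2(g))
            return np.clip(out, 0.0, 1.0)                # keep the result displayable

        # Example: out = generate_low_vision_image(np.random.rand(256, 256), gaussian_psf(31, 3.0))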
  • FIG. 5 is a flowchart for explaining the visual impairment setting process of the smartphone 10 of FIG. 1. This visual impairment setting process is started, for example, when the user commands the start of the visual impairment setting mode by operating the touch panel 21.
  • In step S11, the display control unit 65 (FIG. 3) of the smartphone 10 generates the visual impairment setting screen 40 and supplies it to the touch panel 21, thereby starting display of the visual impairment setting screen 40.
  • The image 41 displayed on the visual impairment setting screen 40 may be the image currently being displayed, or may be a predetermined image for the visual impairment setting process.
  • In step S12, the blur degree acquisition unit 61 determines, based on the operation signal from the touch panel 21, whether the user has input the degree of blur by operating the touch panel 21 and moving the knob 43 on the visual impairment setting screen 40.
  • If it is determined in step S12 that the user has not yet input the degree of blur, the blur degree acquisition unit 61 waits until the degree of blur is input. On the other hand, if it is determined in step S12 that the user has input the degree of blur, the process proceeds to step S13.
  • In step S13, the blur degree acquisition unit 61 acquires the degree of blur input by the user and supplies it to the coefficient acquisition unit 62.
  • In step S14, the coefficient acquisition unit 62 reads the filter coefficient corresponding to the degree of blur supplied from the blur degree acquisition unit 61 from the coefficient storage unit 63 and sets it as the current filter coefficient.
  • The coefficient acquisition unit 62 supplies the current filter coefficient to the generation unit 64.
  • In step S15, the generation unit 64 generates an image for visual impairment by performing the Wiener filter process on the image 41 being displayed, using the filter coefficient supplied from the coefficient acquisition unit 62.
  • The generation unit 64 supplies the generated image for visual impairment to the display control unit 65.
  • In step S16, the display control unit 65 supplies the image for visual impairment generated by the generation unit 64 to the touch panel 21 and displays it as the image 41 of the visual impairment setting screen 40.
  • In step S17, the control unit 22 (FIG. 1) of the smartphone 10 determines whether to end the visual impairment setting process, for example, whether the user has commanded the end of the visual impairment setting mode by operating the touch panel 21.
  • If it is determined in step S17 that the visual impairment setting process is not to be ended, the process returns to step S12, and the processes of steps S12 to S17 are repeated until it is determined that the visual impairment setting process is to be ended.
  • If it is determined in step S17 that the visual impairment setting process is to be ended, then in step S18 the display control unit 65 ends the display of the visual impairment setting screen 40 on the touch panel 21, and the process ends.
  • After the visual impairment setting process, the control unit 22 generates an image for visual impairment by performing the Wiener filter process on the original image to be displayed, using the current filter coefficient, and supplies it to the touch panel 21 for display instead of the original image to be displayed.
  • As described above, the smartphone 10 acquires the degree of blur caused by the user's visual impairment in the image displayed on the touch panel 21, and generates an image for visual impairment based on the degree of blur and the image being displayed. Therefore, compared with the case where a large number of parameters must be acquired in order to generate the image for visual impairment, an image suitable for a user with a visual impairment can be displayed easily. As a result, the processing cost and manufacturing cost of the smartphone 10 are reduced, and the hurdle to commercialization is lowered.
  • The smartphone 10 also generates the image for visual impairment by using the filter coefficient generated based on the PSF corresponding to the degree of blur input by the user, and performing, on the displayed image in which the blur represented by the PSF has not actually occurred, a Wiener filter process that removes that blur. Therefore, the image for visual impairment can be generated with a simpler architecture than when it is generated based on a model that identifies the blur due to the visual impairment. In addition, even if an unexpected parameter contributes to the blur due to the visual impairment, a highly accurate image for visual impairment can be generated.
  • Furthermore, the smartphone 10 can generate the image for visual impairment from the degree of blur that the user inputs for the image being displayed. Therefore, even when the image for visual impairment generated last time for the display target image is not sufficiently accurate, a highly accurate image for visual impairment can be generated by inputting the degree of blur again for that image for visual impairment.
  • Since the only parameter the user needs to input for the smartphone 10 to generate the image for visual impairment is the degree of blur, the user can input the parameter intuitively.
  • In the first embodiment, the filter coefficients are stored in the recording unit 20 included in the smartphone 10, but the filter coefficients may instead be stored in a server outside the smartphone 10.
  • FIG. 6 is a block diagram illustrating a configuration example of an information processing system including a second embodiment of a smartphone as an image processing apparatus to which the present disclosure is applied.
  • In the information processing system 80 of FIG. 6, a smartphone 81 and a server 82 are connected via a network 83.
  • The smartphone 81 does not store in advance the filter coefficients corresponding to the degrees of blur that the user can input; instead, when a degree of blur is input by the user, the smartphone 81 generates the filter coefficient corresponding to that degree.
  • the smartphone 81 of the information processing system 80 acquires the degree of blur input by a user with visual impairment when performing processing in the visual impairment setting mode.
  • the smartphone 81 transmits the obtained degree of blur to the server 82 via the network 83.
  • the smartphone 81 receives the PSF corresponding to the degree of blur transmitted from the server 82 in response to the transmission of the degree of blur.
  • The smartphone 81 generates a filter coefficient based on the received PSF and sets it as the current filter coefficient.
  • the smartphone 81 generates and displays an image for visual impairment by performing an inverse filter process on the currently displayed image using the current filter coefficient.
  • After the processing in the visual impairment setting mode is completed, the smartphone 81 generates an image for visual impairment by performing an inverse filter process on the original image to be displayed, using the current filter coefficient. Then, the smartphone 81 displays the image for visual impairment instead of the original image to be displayed.
  • the server 82 stores a PSF corresponding to each degree of blur that can be input by the user of the smartphone 81.
  • the server 82 receives the degree of blur transmitted from the smartphone 81.
  • the server 82 reads out the PSF corresponding to the received degree of blur and transmits it to the smartphone 81.
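  • As an illustration of this exchange, the following sketch pairs each degree of blur with a stored PSF on the server side; the degree names and the Gaussian PSFs are assumptions made for illustration, since the present disclosure does not specify how the server stores or transmits the PSFs.

        import numpy as np

        def gaussian_psf(size, sigma):
            """Hypothetical Gaussian PSF used as the stored blur model (same helper as in the earlier sketch)."""
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
            return psf / psf.sum()

        # Hypothetical server-side table: one precomputed PSF per input degree of blur.
        PSF_TABLE = {degree: gaussian_psf(31, sigma)
                     for degree, sigma in [("mild", 1.5), ("medium", 3.0), ("severe", 6.0)]}

        def handle_blur_degree_request(degree):
            """Server 82's role in FIG. 6: look up and return the PSF for the received degree."""
            return PSF_TABLE[degree]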
  • the configuration of the smartphone 81 is the same as that of the smartphone 10 in FIG. 1 except for the functional configuration of the visual impairment processing unit. Therefore, hereinafter, each hardware block of the smartphone 10 of FIG. 1 will be described as each hardware block of the smartphone 81.
  • FIG. 7 is a block diagram illustrating a functional configuration example of the visual impairment processing unit realized by the smartphone 81.
  • the transmission unit 101 and the reception unit 102 are realized by the communication unit 12, for example, and the coefficient generation unit 103 is realized by the control unit 22, for example.
  • the transmission unit 101 of the visual impairment processing unit 100 performs modulation processing or frequency conversion processing of the degree of blur acquired by the blur degree acquisition unit 61, and generates a transmission signal.
  • the transmission unit 101 transmits the transmission signal to the server 82 via the antenna 11, a base station (not shown), the network 83, and the like.
  • The reception unit 102 receives, via the network 83, a base station (not shown), the antenna 11, and the like, the reception signal of the PSF corresponding to the degree of blur transmitted from the server 82.
  • The reception unit 102 acquires the PSF by performing amplification, frequency conversion processing, demodulation processing, and the like on the reception signal, and supplies the PSF to the coefficient generation unit 103.
  • the coefficient generation unit 103 generates a filter coefficient according to the above-described equation (1) based on the PSF supplied from the reception unit 102.
  • The coefficient generation unit 103 sets the generated filter coefficient as the current filter coefficient and supplies it to the generation unit 64.
  • FIG. 8 is a flowchart for explaining the visual impairment setting process of the smartphone 81 of FIG. 6. This visual impairment setting process is started, for example, when the user commands the start of the visual impairment setting mode by operating the touch panel 21.
  • Since the processing of steps S31 to S33 is the same as the processing of steps S11 to S13 in FIG. 5, description thereof is omitted.
  • In step S34, the transmission unit 101 generates a transmission signal by performing modulation processing, frequency conversion processing, and the like on the degree of blur acquired by the blur degree acquisition unit 61, and transmits it to the server 82 via the antenna 11, a base station (not shown), the network 83, and the like.
  • In step S35, the reception unit 102 receives, via the network 83, a base station (not shown), the antenna 11, and the like, the reception signal of the PSF corresponding to the degree of blur, transmitted from the server 82 in response to the transmission of the degree of blur.
  • The reception unit 102 acquires the PSF by performing amplification, frequency conversion processing, demodulation processing, and the like on the reception signal, and supplies the PSF to the coefficient generation unit 103.
  • In step S36, the coefficient generation unit 103 generates a filter coefficient according to the above-described equation (1) based on the PSF supplied from the reception unit 102.
  • The coefficient generation unit 103 sets the generated filter coefficient as the current filter coefficient and supplies it to the generation unit 64.
  • Since the processing of steps S37 to S40 is the same as the processing of steps S15 to S18 in FIG. 5, description thereof is omitted.
  • As described above, the smartphone 81 acquires the degree of blur caused by the user's visual impairment in the image displayed on the touch panel 21, and generates an image for visual impairment based on the degree of blur and the image being displayed. Therefore, compared with the case where a large number of parameters must be acquired in order to generate the image for visual impairment, an image suitable for a user with a visual impairment can be displayed easily.
  • In addition, since the smartphone 81 generates the filter coefficient based on the PSF corresponding to the degree of blur, it does not need to hold filter coefficients in advance.
  • ⁇ Third Embodiment> (Functional configuration example of the visual impairment processing unit of the third embodiment of the smartphone)
  • the configuration of the third embodiment of the smartphone as the image processing device to which the present disclosure is applied is the same as that of the smartphone 10 in FIG. 1 except for the functional configuration of the visual impairment processing unit. Therefore, in the following description, each block of hardware of the smartphone 10 in FIG. 1 is used as each block of hardware of the smartphone according to the third embodiment.
  • FIG. 9 is a block diagram illustrating a functional configuration example of the visual impairment processing unit according to the third embodiment of the smartphone.
  • The visual impairment processing unit 120 in FIG. 9 includes a blur degree acquisition unit 61, a display control unit 65, a class tap extraction unit 121, a class classification unit 122, a coefficient storage unit 123, a coefficient acquisition unit 124, a prediction tap extraction unit 125, and a generation unit 126.
  • the class tap extraction unit 121, the class classification unit 122, the coefficient acquisition unit 124, the prediction tap extraction unit 125, and the generation unit 126 are realized by the control unit 22, for example, and the coefficient storage unit 123 is realized by the recording unit 20, for example.
  • The visual impairment processing unit 120 generates the image for visual impairment from the image being displayed by class classification adaptive processing rather than Wiener filter processing.
  • The class tap extraction unit 121 of the visual impairment processing unit 120 sequentially sets each pixel of the image for visual impairment to be predicted as the target pixel.
  • The class tap extraction unit 121 acquires, from the image being displayed, one or more pixel values used for classifying the pixel value of the target pixel into one of a plurality of classes, as a class tap. Then, the class tap extraction unit 121 supplies the class tap to the class classification unit 122.
  • the class classification unit 122 classifies the pixel value of the target pixel based on the class tap supplied from the class tap extraction unit 121.
  • the class classification unit 122 supplies the class obtained as a result to the coefficient acquisition unit 124.
  • For the class classification, for example, ADRC (Adaptive Dynamic Range Coding), DCT (Discrete Cosine Transform), VQ (Vector Quantization), or DPCM (Differential Pulse Code Modulation) of the class tap can be used.
  • The coefficient storage unit 123 stores, as filter coefficients, prediction coefficients for each class and degree of blur obtained by the learning described later with reference to FIGS. 11 and 12.
  • the coefficient acquisition unit 124 reads out the prediction coefficient corresponding to the class supplied from the class classification unit 122 and the degree of blur acquired by the blur degree acquisition unit 61 from the coefficient storage unit 123 and supplies the prediction coefficient to the generation unit 126.
  • the prediction tap extraction unit 125 acquires one or more pixel values used for predicting the pixel value of the target pixel as a prediction tap from the image being displayed. Then, the prediction tap extraction unit 125 supplies the prediction tap to the generation unit 126.
  • Using the prediction tap supplied from the prediction tap extraction unit 125 and the prediction coefficient supplied from the coefficient acquisition unit 124, the generation unit 126 performs, as a filter process, a predetermined prediction calculation for obtaining a predicted value of the true value of the pixel value of the target pixel. The generation unit 126 thereby generates the predicted value as the pixel value of the target pixel of the image for visual impairment, and supplies it to the display control unit 65.
  • For example, when a linear first-order prediction calculation is adopted as the predetermined prediction calculation, the pixel value y of each pixel of the image for visual impairment is obtained by the following linear first-order expression:

    y = Σ(i=1..n) W_i x_i ... (3)

  • In equation (3), x_i represents the i-th pixel value among the pixel values constituting the prediction tap for the pixel value y, and W_i represents the prediction coefficient multiplied by the i-th pixel value. n represents the number of pixel values constituting the prediction tap. The same applies to the expressions described later.
  • When the predicted value of the pixel value of the pixel of the k-th sample of the image for visual impairment is expressed as y_k', the predicted value y_k' is expressed by the following equation (4):

    y_k' = Σ(i=1..n) W_i x_ki ... (4)

  • In equation (4), x_ki represents the i-th pixel value among the pixel values constituting the prediction tap for the true value of the predicted value y_k'. The same applies to the equations described later. When the true value is expressed as y_k, the prediction error e_k is expressed by the following equation (5):

    e_k = y_k - y_k' ... (5)

  • The prediction coefficients W_i for which the prediction error e_k in equation (5) becomes 0 are optimal for predicting the true value y_k, but when the number of samples for learning is smaller than n, the prediction coefficients W_i are not uniquely determined. In that case, when the least squares method is adopted, for example, the optimal prediction coefficients W_i can be obtained by minimizing the sum E of the square errors expressed by the following equation (6):

    E = Σ(k=1..m) e_k^2 ... (6)

  • The minimum value (local minimum value) of the sum E of the square errors in equation (6) is given by the W_i for which the partial derivative of the sum E with respect to each prediction coefficient W_i becomes 0, as shown in the following equation (7):

    ∂E/∂W_i = Σ(k=1..m) 2 (∂e_k/∂W_i) e_k = 0 ... (7)

  • Equation (7) can be expressed in the form of a determinant (normal equations) as in the following equation (10):

    | Σ x_k1 x_k1 ... Σ x_k1 x_kn | | W_1 |   | Σ x_k1 y_k |
    |     ...              ...    | | ... | = |     ...    |
    | Σ x_kn x_k1 ... Σ x_kn x_kn | | W_n |   | Σ x_kn y_k |   ... (10)

  • The normal equations of equation (10) express the relationship among the true value y_k of the predicted value y_k' of the pixel value of each pixel of the image for visual impairment, the pixel values x_ki constituting the prediction tap corresponding to that pixel, and the prediction coefficients W_i.
  • The normal equations of equation (10) can be solved for the prediction coefficients W_i by using a general matrix solution method such as the sweep-out method (Gauss-Jordan elimination).
  • Accordingly, learning of the optimal prediction coefficients W_i for each class and degree of blur can be performed by setting up and solving the normal equations of equation (10) for each class and degree of blur.
  • Note that the pixel value y can also be obtained not by the linear first-order expression shown in equation (3) but by a second- or higher-order expression.
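  • As a concrete illustration, the following is a minimal Python sketch of this class classification adaptive processing; the 1-bit ADRC class code, the 3x3 tap shape, and the coefficient table layout are assumptions made for illustration, not details fixed by the present disclosure.

        import numpy as np

        # Assumed 3x3 tap; in the present disclosure the class tap and the prediction tap
        # are extracted separately and need not share a shape.
        TAP = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 0), (0, 1), (1, -1), (1, 0), (1, 1)]

        def adrc_class(taps):
            """1-bit ADRC: threshold each tap value at its midrange and pack the bits into a class code."""
            lo, hi = taps.min(), taps.max()
            bits = (taps >= (lo + hi) / 2.0).astype(int)
            return int("".join(map(str, bits)), 2)

        def predict_pixel(image, y, x, coef_table, blur_degree):
            """Equation (3): predicted pixel value = sum over i of W_i * x_i."""
            taps = np.array([image[y + dy, x + dx] for dy, dx in TAP])
            w = coef_table[(blur_degree, adrc_class(taps))]  # coefficients per class and blur degree
            return float(np.dot(w, taps))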
  • FIG. 10 is a flowchart illustrating the visual impairment setting process of the third embodiment of the smartphone. This visual impairment setting process is started, for example, when the user commands the start of the visual impairment setting mode by operating the touch panel 21.
  • In step S54, the class tap extraction unit 121 of the visual impairment processing unit 120 determines, as the target pixel, a pixel that has not yet been set as the target pixel among the pixels of the image for visual impairment to be predicted.
  • In step S55, the prediction tap extraction unit 125 acquires, as a prediction tap, one or more pixel values used for predicting the pixel value of the target pixel from the display target image. Then, the prediction tap extraction unit 125 supplies the prediction tap to the generation unit 126.
  • In step S56, the class tap extraction unit 121 acquires, from the display target image, one or more pixel values used for classifying the pixel value of the target pixel, as a class tap. Then, the class tap extraction unit 121 supplies the class tap to the class classification unit 122.
  • In step S57, the class classification unit 122 classifies the pixel value of the target pixel based on the class tap supplied from the class tap extraction unit 121.
  • The class classification unit 122 supplies the class obtained as a result to the coefficient acquisition unit 124.
  • In step S58, the coefficient acquisition unit 124 reads, from among the prediction coefficients stored in the coefficient storage unit 123, the prediction coefficient corresponding to the class supplied from the class classification unit 122 and the degree of blur acquired by the blur degree acquisition unit 61. Then, the coefficient acquisition unit 124 supplies the read prediction coefficient to the generation unit 126.
  • In step S59, the generation unit 126 performs the calculation of equation (3) described above as the predetermined prediction calculation, using the prediction tap supplied from the prediction tap extraction unit 125 and the prediction coefficient supplied from the coefficient acquisition unit 124. The generation unit 126 thereby generates the predicted value of the pixel value of the target pixel as the pixel value of the target pixel of the image for visual impairment, and supplies it to the display control unit 65.
  • In step S60, the prediction tap extraction unit 125 determines whether all the pixels of the image for visual impairment have been set as the target pixel. If it is determined in step S60 that not all the pixels have yet been set as the target pixel, the process returns to step S54, and the processes of steps S54 to S60 are repeated until all the pixels of the image for visual impairment have been set as the target pixel.
  • On the other hand, if it is determined in step S60 that all the pixels of the image for visual impairment have been set as the target pixel, the process proceeds to step S61.
  • Since the processing of steps S61 to S63 is the same as the processing of steps S16 to S18 in FIG. 5, description thereof is omitted.
  • As described above, the smartphone of the third embodiment acquires the degree of blur caused by the user's visual impairment in the image displayed on the touch panel 21, and generates an image for visual impairment based on the degree of blur and the image being displayed. Therefore, compared with the case where a large number of parameters must be acquired in order to generate the image for visual impairment, an image suitable for a user with a visual impairment can be displayed easily.
  • FIG. 11 is a block diagram illustrating a configuration example of the learning device 140 that learns the prediction coefficient stored in the coefficient storage unit 123 of FIG. 9.
  • The learning device 140 in FIG. 11 includes a learning pair storage unit 141, a class tap extraction unit 142, a class classification unit 143, a prediction tap extraction unit 144, a normal equation generation unit 145, a coefficient generation unit 146, and a coefficient storage unit 147.
  • The learning pair storage unit 141 of the learning device 140 stores, as learning images used for learning the prediction coefficients, a pair of a student image and a teacher image for each degree of blur.
  • As the student image, an image corresponding to the original image, that is, the display target image in the third embodiment of the smartphone, is used.
  • As the teacher image, an ideal image for visual impairment for each degree of blur generated from the student image, that is, an image corresponding to the image for visual impairment generated for each degree of blur by the generation unit 126 in FIG. 9, is used.
  • The learning pair storage unit 141 supplies the student image of each pair to the class tap extraction unit 142 and the prediction tap extraction unit 144, and supplies the teacher image to the normal equation generation unit 145.
  • The class tap extraction unit 142 sequentially sets, as the target pixel, each pixel of the teacher image that forms a pair with the student image supplied from the learning pair storage unit 141. Similar to the class tap extraction unit 121 in FIG. 9, the class tap extraction unit 142 acquires, as a class tap, one or more pixel values used for classifying the pixel value of the target pixel from the student image, and supplies the class tap to the class classification unit 143.
  • The class classification unit 143 classifies the pixel value of the target pixel based on the class tap supplied from the class tap extraction unit 142, and supplies the resulting class to the normal equation generation unit 145.
  • The prediction tap extraction unit 144 acquires, as a prediction tap, one or more pixel values used for predicting the pixel value of the target pixel from the student image supplied from the learning pair storage unit 141, and supplies the prediction tap to the normal equation generation unit 145.
  • The normal equation generation unit 145 performs the addition of the terms of the normal equations of equation (10), targeting the pixel value of the target pixel of the teacher image supplied from the learning pair storage unit 141 and the prediction tap supplied from the prediction tap extraction unit 144, for each degree of blur corresponding to the teacher image and for each class supplied from the class classification unit 143.
  • The coefficient generation unit 146 obtains the optimal prediction coefficients for each class and degree of blur by solving the normal equations for each class and degree of blur supplied from the normal equation generation unit 145, and supplies them to the coefficient storage unit 147.
  • The coefficient storage unit 147 stores the prediction coefficients for each class and degree of blur supplied from the coefficient generation unit 146.
  • The prediction coefficients for each class and degree of blur stored in this way are stored in the coefficient storage unit 123 of FIG. 9.
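  • The learning described above can be sketched in the same vein: for each class and degree of blur, the sums that make up the normal equations of equation (10) are accumulated and then solved for the prediction coefficients. The helper names and the use of numpy's solver are assumptions made for illustration; the present disclosure only requires some matrix solution method such as the sweep-out method.

        import numpy as np
        from collections import defaultdict

        def learn_coefficients(pairs, extract_taps, classify, n_taps=9):
            """pairs: iterable of (student_image, teacher_image, blur_degree) triples."""
            lhs = defaultdict(lambda: np.zeros((n_taps, n_taps)))  # sums of x_ki * x_kj
            rhs = defaultdict(lambda: np.zeros(n_taps))            # sums of x_ki * y_k
            for student, teacher, degree in pairs:
                h, w = teacher.shape
                for y in range(1, h - 1):          # skip borders so the taps stay in bounds
                    for x in range(1, w - 1):
                        taps = extract_taps(student, y, x)   # prediction tap from the student image
                        key = (degree, classify(taps))       # one normal equation per class and blur degree
                        lhs[key] += np.outer(taps, taps)
                        rhs[key] += taps * teacher[y, x]
            # Solve the normal equations of equation (10) for each class and degree of blur;
            # assumes each class has accumulated enough samples for a non-singular system.
            return {key: np.linalg.solve(lhs[key], rhs[key]) for key in lhs}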
  • FIG. 12 is a flowchart illustrating the learning process of the learning device 140 in FIG.
  • In step S71, the learning pair storage unit 141 of the learning device 140 reads, from among the stored pairs of student images and teacher images, a pair that has not yet been read.
  • The learning pair storage unit 141 supplies the read student image to the class tap extraction unit 142 and the prediction tap extraction unit 144, and supplies the teacher image to the normal equation generation unit 145.
  • In step S72, the class tap extraction unit 142 determines, as the target pixel, a pixel that has not yet been set as the target pixel among the pixels of the teacher image that forms a pair with the student image supplied from the learning pair storage unit 141.
  • In step S73, the class tap extraction unit 142 acquires the class tap of the target pixel from the student image supplied from the learning pair storage unit 141, and supplies the class tap to the class classification unit 143.
  • In step S74, the prediction tap extraction unit 144 acquires the prediction tap of the target pixel from the student image supplied from the learning pair storage unit 141, and supplies the prediction tap to the normal equation generation unit 145.
  • In step S75, the class classification unit 143 classifies the target pixel based on the class tap supplied from the class tap extraction unit 142, and supplies the resulting class to the normal equation generation unit 145.
  • In step S76, the normal equation generation unit 145 performs the addition of equation (10), targeting the pixel value of the target pixel of the teacher image supplied from the learning pair storage unit 141 and the prediction tap supplied from the prediction tap extraction unit 144, for each class supplied from the class classification unit 143 and each degree of blur corresponding to the teacher image that includes the target pixel.
  • In step S77, the normal equation generation unit 145 determines whether all the pixels of the teacher image have been set as the target pixel. If it is determined in step S77 that not all the pixels of the teacher image have been set as the target pixel, the process returns to step S72, and the processes of steps S72 to S77 are repeated until all the pixels have been set as the target pixel.
  • On the other hand, if it is determined in step S77 that all the pixels of the teacher image have been set as the target pixel, the process proceeds to step S78.
  • In step S78, the normal equation generation unit 145 determines whether all the pairs stored in the learning pair storage unit 141 have been read.
  • If it is determined in step S78 that not all the pairs have yet been read, the process returns to step S71, and the processes of steps S71 to S78 are repeated until all the pairs have been read.
  • On the other hand, if it is determined in step S78 that all the pairs have been read, the normal equation generation unit 145 supplies the normal equations of equation (10) for each class and degree of blur, generated by the addition in step S76, to the coefficient generation unit 146.
  • In step S79, the coefficient generation unit 146 solves the normal equations of equation (10) for each class and degree of blur supplied from the normal equation generation unit 145, thereby obtaining the optimal prediction coefficients for each class and degree of blur.
  • In step S80, the coefficient generation unit 146 supplies the obtained optimal prediction coefficients for each class and degree of blur to the coefficient storage unit 147 for storage. Then, the process ends.
  • In the above description, a pair of a student image and a teacher image is stored for each degree of blur, but a single student image may instead be stored together with a teacher image for each degree of blur corresponding to that student image. That is, since the student image is common to all the degrees of blur, only one student image needs to be stored for all the degrees of blur.
  • Also, in the above description, an image for visual impairment is generated for the image 41 on the visual impairment setting screen 40 and displayed as the image 41, but an image for visual impairment may instead be generated for the entire visual impairment setting screen 40 and displayed as the visual impairment setting screen 40.
  • In this case, the blur that the user's visual impairment causes on the Blur bar 42 and the knob 43 can also be removed.
  • ⁇ Fourth embodiment> (Functional configuration example of the visual impairment processing unit of the fourth embodiment of the smartphone)
  • The configuration of the fourth embodiment of the smartphone as the image processing apparatus to which the present disclosure is applied is the same as that of the smartphone 10 in FIG. 1 except for the functional configuration of the visual impairment processing unit. Therefore, below, each hardware block of the smartphone 10 in FIG. 1 is also used as each hardware block of the fourth embodiment of the smartphone.
  • FIG. 13 is a block diagram illustrating a functional configuration example of the visual impairment processing unit according to the fourth embodiment of the smartphone.
  • The configuration of the visual impairment processing unit 160 in FIG. 13 differs from the configuration of the visual impairment processing unit 60 in FIG. 3 in that a position acquisition unit 161 is provided instead of the blur degree acquisition unit 61, a coefficient generation unit 162 is provided instead of the coefficient acquisition unit 62 and the coefficient storage unit 63, and a display control unit 163 is provided instead of the display control unit 65.
  • The position acquisition unit 161, the coefficient generation unit 162, and the display control unit 163 are realized by the control unit 22, for example.
  • The visual impairment processing unit 160 generates the filter coefficient not from the PSF corresponding to the degree of blur, but based on the optimal use position, which is the use position of the touch panel 21 at which blur due to the user's visual impairment does not occur, and the user's normal use position of the touch panel 21.
  • The position acquisition unit 161 acquires, from the touch panel 21, an operation signal representing a visual impairment setting mode start command input by the user operating the touch panel 21. In response to the operation signal, the position acquisition unit 161 supplies a shooting preparation operation start command to the front camera of the camera unit 16 to start the shooting preparation operation.
  • The position acquisition unit 161 also acquires, from the touch panel 21, an operation signal representing a shooting command (shutter command) input by the user operating the touch panel 21. In accordance with the operation signal, the position acquisition unit 161 acquires, from the front camera of the camera unit 16, the distance between the subject and the camera unit 16 obtained by the AF operation, as optimal use position information representing the optimal use position or as normal use position information representing the normal use position. The position acquisition unit 161 supplies the acquired optimal use position information and normal use position information to the coefficient generation unit 162.
  • the coefficient generation unit 162 generates a PSF representing blur due to the visual impairment of the user based on the optimal use position information and the normal use position information supplied from the position acquisition unit 161.
  • the coefficient generation unit 162 generates a filter coefficient based on the generated PSF according to the above-described equation (1).
  • the coefficient generation unit 162 sets the generated filter coefficient to the current filter coefficient and supplies it to the generation unit 64.
  • the display control unit 163 causes the touch panel 21 to display a normal use position setting screen for setting normal use position information and an optimum use position setting screen for setting optimum use position information. In addition, the display control unit 163 causes the touch panel 21 to display the visual impairment image supplied from the generation unit 64.
  • FIG. 14 is a diagram illustrating an example of a normal use position setting screen.
  • On the normal use position setting screen 180, an image 181 obtained by provisional shooting or actual shooting with the front camera of the camera unit 16 is displayed.
  • A shutter button 182, a mode button 183, and the like are displayed superimposed on the image 181.
  • the shutter button 182 is a button operated when the user inputs a shooting command.
  • the mode button 183 is a button representing the current mode. In the normal use position setting screen 180, the mode button 183 has an eye mark representing the visual impairment setting mode.
  • On the normal use position setting screen 180, a message portion 184 containing, for example, the message "Please press the shutter button at the normal use position" is displayed superimposed on the image 181.
  • When the shutter button 182 is operated, the position acquisition unit 161 acquires the distance from the subject to the front camera measured by the AF operation, that is, the distance between the user and the touch panel 21, as the normal use position information.
  • FIG. 15 is a diagram illustrating an example of the optimal use position setting screen.
  • On the optimal use position setting screen 200, the user grips the touch panel 21 at the optimal use position and operates the shutter button 182 in accordance with the message displayed in the message portion 201.
  • The position acquisition unit 161 then acquires the distance from the subject to the front camera measured by the AF operation, that is, the distance between the user and the touch panel 21, as the optimal use position information.
  • FIG. 16 is a diagram illustrating the generation of the PSF by the coefficient generation unit 162 in FIG. 13.
  • θc is the angle formed by the lens surface of the eye and the light directed to the eye from the point light source M at the optimal use position on the optical axis.
  • θe is the angle formed by the lens surface of the eye and the light from the point light source M traveling from the lens surface of the eye toward the image plane on the retina of the eye.
  • θn is the angle formed by the lens surface of the eye and the light directed to the eye from the point light source M′ at the normal use position on the optical axis.
  • θp is the angle formed by the lens surface of the eye and the light from the point light source M′ traveling from the lens surface of the eye toward the image plane on the retina of the eye.
  • As shown in FIG. 16, de is the distance between the lens surface of the eye and the image plane on the retina, dc is the distance between the optimal use position and the eye lens, and dn is the distance between the normal use position and the eye lens.
  • the radius r of the image formed on the image plane on the retina of the eye by the light from the point light source M ′ is expressed by the following equation (15).
  • focal length f of the point light source M ′ is expressed by the following equation (16).
  • the radius r d of the light source at the normal use position corresponding to the image of the radius r formed on the image plane, that is, the blur caused by the visual disturbance at the normal use position is Using the focal length f, it is expressed by the following equation (17).
• By substituting equations (14) to (16) into equation (17) and substituting predetermined values for the radius A of the eye lens and for the distance d_e between the lens surface of the eye and the image plane on the retina, an expression for the radius r_d is obtained that has only the distances d_c and d_n as variables.
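• Equations (14) to (17) appear only as images in the original publication and are not reproduced in this text. As a hedged reconstruction, a standard thin-lens model of an eye focused at the optimum distance d_c yields the closed form below, which has exactly the stated property (only d_c and d_n remain as variables once A and d_e are fixed); the actual equations may differ in form.

```latex
\frac{1}{f} = \frac{1}{d_c} + \frac{1}{d_e}, \qquad
r = A\, d_e \left| \frac{1}{d_c} - \frac{1}{d_n} \right|, \qquad
r_d = \frac{d_n}{d_e}\, r = A \left| \frac{d_n}{d_c} - 1 \right|
```

• In this sketch, r is the defocus blur circle on the retina for a point at distance d_n, and r_d back-projects it to the display plane through the magnification d_n / d_e; note that the dependence on d_e then cancels, whereas the expression defined by the actual equations (14) to (17) may retain it.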
• The coefficient generation unit 162 first substitutes the distance represented by the optimum use position information for the distance d_c in this expression and the distance represented by the normal use position information for the distance d_n, thereby obtaining the radius r_d of the blur caused by the user's visual impairment at the normal use position.
• The coefficient generation unit 162 then converts the radius r_d into a number of pixels r_p by dividing the radius r_d by the pixel size of the touch panel 21.
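• A minimal numeric sketch of this conversion is shown below, assuming the thin-lens expression above and modeling the blur as a uniform disc PSF of radius r_p (a common defocus model; the PSF actually generated by the coefficient generation unit 162 is defined by the equations not reproduced here).

```python
import numpy as np

def blur_radius_px(A, d_c, d_n, pixel_pitch):
    """Blur radius r_p on the display, in pixels (all inputs in metres)."""
    r_d = A * abs(d_n / d_c - 1.0)  # blur radius on the display plane
    return r_d / pixel_pitch        # divide by the panel's pixel size

def disc_psf(r_p):
    """Uniform disc PSF of radius r_p pixels, normalised so it sums to 1."""
    R = max(int(np.ceil(r_p)), 1)
    y, x = np.mgrid[-R:R + 1, -R:R + 1]
    disc = (x * x + y * y <= r_p * r_p).astype(float)
    return disc / disc.sum()

# Example: 2 mm pupil radius, sharp at 0.25 m, panel held at 0.45 m, 90 um pixel pitch.
r_p = blur_radius_px(A=2e-3, d_c=0.25, d_n=0.45, pixel_pitch=90e-6)
psf = disc_psf(r_p)  # r_p is roughly 17.8 pixels with these numbers
```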
  • FIG. 17 is a flowchart illustrating the visual impairment setting process of the fourth embodiment of the smartphone. This visual impairment setting process is started, for example, when the user commands the start of the visual impairment setting mode by operating the touch panel 21.
• In step S101 of FIG. 17, the position acquisition unit 161 supplies a start command for the shooting preparation operation to the front camera of the camera unit 16, causing the shooting preparation operation to start.
• In step S102, the display control unit 163 displays the normal use position setting screen 180 on the touch panel 21.
• In step S103, the position acquisition unit 161 determines whether the shutter button 182 has been operated by the user. If the position acquisition unit 161 has not acquired from the touch panel 21 an operation signal representing the shooting command input by the user's operation of the shutter button 182, it determines that the shutter button 182 has not yet been operated and stands by until the operation signal is acquired.
• When the position acquisition unit 161 acquires from the touch panel 21 an operation signal representing the shooting command input by the user's operation of the shutter button 182, it determines in step S103 that the shutter button 182 has been operated. The display control unit 163 then deletes the message portion 184 from the normal use position setting screen 180, and the process proceeds to step S104.
• In step S104, the position acquisition unit 161 acquires, from the front camera of the camera unit 16, the distance between the subject and the camera unit 16 obtained by the AF operation of the shooting preparation operation as the normal use position information.
• The position acquisition unit 161 supplies the acquired normal use position information to the coefficient generation unit 162.
• In step S105, the display control unit 163 displays the optimum use position setting screen 200 on the touch panel 21 in place of the normal use position setting screen 180 from which the message portion 184 has been deleted.
• In step S106, the position acquisition unit 161 determines whether the user has operated the shutter button 182, as in the process of step S103.
• If it is determined in step S106 that the shutter button 182 has not yet been operated, the position acquisition unit 161 waits until the shutter button 182 is operated. On the other hand, when it is determined in step S106 that the shutter button 182 has been operated, the display control unit 163 deletes the message portion 201 from the optimum use position setting screen 200, and the process proceeds to step S107.
• In step S107, the position acquisition unit 161 acquires, from the front camera of the camera unit 16, the distance between the subject and the camera unit 16 obtained by the AF operation of the shooting preparation operation as the optimum use position information.
• The position acquisition unit 161 supplies the acquired optimum use position information to the coefficient generation unit 162.
• In step S108, the coefficient generation unit 162 generates the PSF representing the blur caused by the user's visual impairment, based on the optimum use position information and the normal use position information supplied from the position acquisition unit 161.
• In step S109, the coefficient generation unit 162 generates a filter coefficient according to the above-described equation (1) based on the PSF.
• The coefficient generation unit 162 sets the generated filter coefficient as the current filter coefficient and supplies it to the generation unit 64.
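• Equation (1) is defined earlier in the document and is not reproduced in this section. As one hedged possibility consistent with the Wiener filtering described in the next step, the coefficients could take the textbook frequency-domain form sketched below, where the constant k stands in for the noise-to-signal ratio; the coefficients actually defined by equation (1) may differ (for example, they may be learned, as in configuration (11) below).

```python
import numpy as np

def wiener_coefficients(psf, image_shape, k=1e-2):
    """Textbook Wiener filter W = conj(H) / (|H|^2 + k) for the PSF's
    transfer function H, computed at the size of the (single-channel)
    image to be filtered."""
    H = np.fft.fft2(psf, s=image_shape)  # PSF zero-padded to the image size
    return np.conj(H) / (np.abs(H) ** 2 + k)
```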
• In step S110, the generation unit 64 generates the visual impairment image by performing Wiener filter processing on the image 181 being displayed, using the filter coefficient supplied from the coefficient generation unit 162.
• The generation unit 64 supplies the generated visual impairment image to the display control unit 163.
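• Continuing the sketch above, applying the coefficients is a frequency-domain multiplication: the display target image is pre-corrected so that, once blurred again by the eye's own PSF at the normal use position, it appears sharp.

```python
import numpy as np

def apply_wiener(image, W):
    """Pre-correct `image` (float array in [0, 1]) with the Wiener coefficients W."""
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * W))
    return np.clip(out, 0.0, 1.0)  # keep the result within the displayable range

# Hypothetical usage, with image_181 standing in for the displayed image:
# visual_impairment_image = apply_wiener(image_181, wiener_coefficients(psf, image_181.shape))
```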
• In step S111, the display control unit 163 supplies the visual impairment image generated by the generation unit 64 to the touch panel 21 and displays it as the image 181 of the optimum use position setting screen 200.
• In step S112, the control unit 22 determines whether to end the visual impairment setting process, for example, whether the user has commanded the end of the visual impairment setting mode by operating the touch panel 21.
• If it is determined in step S112 that the visual impairment setting process is not to be ended, the process returns to step S102, and the display control unit 163 displays the normal use position setting screen 180 on the touch panel 21 in place of the optimum use position setting screen 200 from which the message portion 201 has been deleted. The processes of steps S103 to S112 are then performed again.
• On the other hand, if it is determined in step S112 that the visual impairment setting process is to be ended, then in step S113 the position acquisition unit 161 supplies an end command for the shooting preparation operation to the front camera of the camera unit 16, ending the shooting preparation operation.
• In step S114, the display control unit 163 ends the display of the optimum use position setting screen 200 from which the message portion 201 has been deleted, and the process ends.
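• The control flow of steps S101 to S114 condenses into the structural sketch below. Every name in it is a hypothetical seam standing in for the touch panel 21, the camera unit 16, the coefficient generation unit 162, or the generation unit 64; it shows the loop structure only, not the real device interfaces.

```python
def visual_impairment_setting_process(ui, camera, make_psf, make_coeffs, wiener):
    camera.start_preparation()                          # S101
    while True:
        ui.show_screen("normal use position")           # S102
        ui.wait_shutter()                               # S103
        d_n = camera.af_subject_distance_m()            # S104
        ui.show_screen("optimum use position")          # S105
        ui.wait_shutter()                               # S106
        d_c = camera.af_subject_distance_m()            # S107
        psf = make_psf(d_c, d_n)                        # S108
        coeffs = make_coeffs(psf)                       # S109
        corrected = wiener(ui.current_image(), coeffs)  # S110
        ui.show_image(corrected)                        # S111
        if ui.end_requested():                          # S112
            break
    camera.end_preparation()                            # S113
    ui.close_screen()                                   # S114
```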
• Although the smartphone generates the PSF in the description above, an external server may generate the PSF instead.
• In this case, the smartphone transmits the optimum use position information and the normal use position information to the external server, and receives from the external server the PSF generated based on the optimum use position information and the normal use position information.
• Alternatively, the smartphone may transmit the optimum use position information and the normal use position information to the external server and receive from the external server the filter coefficient based on the PSF generated from that information; a minimal sketch of such an exchange follows.
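• The sketch below assumes a JSON-over-HTTP exchange. The endpoint, payload fields, and response field are invented for illustration; the text specifies only that the two positions go up and that a PSF or filter coefficients come back.

```python
import json
import urllib.request

def fetch_filter_coefficients(server_url, d_optimum_m, d_normal_m):
    """Ask a (hypothetical) server to turn the two use positions into coefficients."""
    payload = json.dumps({"optimum_use_position_m": d_optimum_m,
                          "normal_use_position_m": d_normal_m}).encode("utf-8")
    request = urllib.request.Request(server_url, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["filter_coefficients"]
```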
• The normal use position information need not be input by the user; it may be determined in advance instead.
• The normal use position setting screen and the optimum use position setting screen may be displayed in that order, as in the case of FIG. 17, or in the reverse order, with the optimum use position setting screen first and the normal use position setting screen second.
• As described above, the fourth embodiment of the smartphone acquires the optimum use position and displays the visual impairment image based on the optimum use position, the normal use position, and the display target image. An image suitable for a user with a visual impairment can therefore be displayed more easily than in a case where a large number of parameters must be acquired to generate the visual impairment image.
• The steps describing the program stored in the program recording medium include not only processing performed in time series in the described order but also processing executed in parallel or individually rather than in time series.
• In this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), and it does not matter whether all the components are in the same housing. Accordingly, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
• The present disclosure can take a cloud computing configuration in which one function is shared by a plurality of devices via a network and processed jointly.
• Each step described in the above flowchart can be executed by one device or shared among a plurality of devices.
• When one step includes a plurality of processes, the plurality of processes included in that step can likewise be executed by one device or shared among a plurality of devices.
  • This disclosure can have the following configurations.
• (1) An image processing apparatus including: a position acquisition unit that acquires an optimum use position, which is a use position of a display device at which blur due to a visual impairment of a user does not occur; and a display control unit that causes the display device to display, based on the optimum use position acquired by the position acquisition unit, a normal use position of the display device, and an image to be displayed, a visual impairment image in which the blur does not occur on the display device at the normal use position.
• (2) The image processing apparatus according to (1), in which the position acquisition unit acquires the normal use position, and the display control unit is configured to cause the display device to display the visual impairment image based on the optimum use position and the normal use position acquired by the position acquisition unit and on the image to be displayed.
• (3) The image processing apparatus according to (1) or (2), further including a generation unit that generates the visual impairment image by performing filter processing on the display target image using a filter coefficient generated based on the optimum use position and the normal use position, in which the display control unit is configured to cause the display device to display the visual impairment image generated by the generation unit.
• (4) The image processing apparatus according to (3), further including a coefficient generation unit that generates a function representing the blur based on the optimum use position and the normal use position and generates the filter coefficient based on the function, in which the generation unit is configured to perform the filter processing on the display target image using the filter coefficient generated by the coefficient generation unit.
• (5) The image processing apparatus according to any one of (1) to (4), in which the position acquisition unit is configured to acquire the optimum use position by an AF operation.
• (6) An image processing method including: a position acquisition step in which an image processing device acquires an optimum use position, which is a use position of a display device at which blur due to a visual impairment of a user does not occur; and a display control step in which the image processing device causes the display device to display, based on the optimum use position acquired in the position acquisition step, a normal use position of the display device, and an image to be displayed, a visual impairment image in which the blur does not occur on the display device at the normal use position.
• (7) A program for causing a computer to function as: a position acquisition unit that acquires an optimum use position, which is a use position of a display device at which blur due to a visual impairment of a user does not occur; and a display control unit that causes the display device to display, based on the optimum use position acquired by the position acquisition unit, a normal use position of the display device, and an image to be displayed, a visual impairment image in which the blur does not occur on the display device at the normal use position.
• (8) An image processing apparatus including: a blur degree acquisition unit that acquires a degree of blur caused by a visual impairment of a user in an image displayed on a display device; and a display control unit that causes the display device to display, based on the degree of blur acquired by the blur degree acquisition unit and an image to be displayed, a visual impairment image in which the blur does not occur.
• (9) The image processing apparatus according to (8), further including a generation unit that generates the visual impairment image by performing filter processing on the display target image using a filter coefficient corresponding to the degree of blur acquired by the blur degree acquisition unit, in which the display control unit is configured to cause the display device to display the visual impairment image generated by the generation unit.
• (10) The image processing apparatus according to (9), further including a coefficient generation unit that generates the filter coefficient based on a function representing the blur corresponding to the degree of blur acquired by the blur degree acquisition unit, in which the generation unit performs the filter processing on the display target image using the filter coefficient generated by the coefficient generation unit.
• (11) The image processing apparatus according to (9) or (10), in which the filter coefficient is obtained by learning that uses a teacher image corresponding to the visual impairment image and a student image corresponding to the display target image, so as to statistically minimize the error between the pixel value of each pixel of the teacher image and the pixel value predicted from the pixels of the student image corresponding to that pixel.
• (12) An image processing method including: a blur degree acquisition step in which an image processing device acquires a degree of blur caused by a visual impairment of a user in an image displayed on a display device; and a display control step in which the image processing device causes the display device to display, based on the degree of blur acquired in the blur degree acquisition step and an image to be displayed, a visual impairment image in which the blur does not occur.
• (13) A program for causing a computer to function as: a blur degree acquisition unit that acquires a degree of blur caused by a visual impairment of a user in an image displayed on a display device; and a display control unit that causes the display device to display, based on the degree of blur acquired by the blur degree acquisition unit and an image to be displayed, a visual impairment image in which the blur does not occur.


Abstract

Disclosed are an image processing device, an image processing method, and a program that make it possible to display an image suitable for a user with a visual impairment in a straightforward manner. A position acquisition unit acquires an optimum use position, which is the use position of a touch panel at which no blur occurs due to the user's visual impairment. A display control unit causes a visual impairment image, with which no blur occurs on the touch panel at a normal use position, to be displayed on the touch panel on the basis of the optimum use position, the normal use position of the touch panel, and the image to be displayed. The present invention can be applied, for example, to smartphones and the like.
PCT/JP2015/083243 2014-12-10 2015-11-26 Image processing device, image processing method, and program Ceased WO2016093071A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014250166 2014-12-10
JP2014-250166 2014-12-10

Publications (1)

Publication Number Publication Date
WO2016093071A1 true WO2016093071A1 (fr) 2016-06-16

Family

ID=56107261

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/083243 Ceased WO2016093071A1 (fr) 2014-12-10 2015-11-26 Image processing device, image processing method, and program

Country Status (1)

Country Link
WO (1) WO2016093071A1 (fr)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0997333A (ja) * 1995-07-25 1997-04-08 Toyota Central Res & Dev Lab Inc 画像処理装置
JP2011095839A (ja) * 2009-10-27 2011-05-12 Olympus Corp 補正画像データ生成方法及び表示装置
JP2013528115A (ja) * 2010-06-11 2013-07-08 バック イン フォーカス 観察者の視覚障害を補正するための表示を表現するシステム、方法、およびそのプログラムを格納している記憶媒体
JP2012063589A (ja) * 2010-09-16 2012-03-29 Nippon Hoso Kyokai <Nhk> 視度補正装置
JP2015094788A (ja) * 2013-11-08 2015-05-18 富士通株式会社 表示装置および表示制御プログラム

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MIDORI AOKI ET AL.: "Jizen Boke Fukugen ni Motozuku Shiryoku Kaso Kyosei Display", SSII2014 DAI 20 KAI SYMPOSIUM ON SENSING VIA IMAGE INFORMATION KOEN RONBUNSHU, 11 June 2014 (2014-06-11), pages 223-1 - 223-7 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102023132267A1 (de) * 2023-11-20 2025-05-22 Schwind Eye-Tech-Solutions Gmbh Verfahren zum Kompensieren einer Fehlsichtigkeit eines Benutzers für eine Anzeigevorrichtung, Anzeigevorrichtung, Computerprogramm und computerlesbares Medium


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15867573

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15867573

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP