
WO2014119555A1 - Image processing device, display device and program - Google Patents


Info

Publication number
WO2014119555A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
contour
display
unit
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2014/051796
Other languages
English (en)
Japanese (ja)
Inventor
英範 栗林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nikon Corp
Original Assignee
Nikon Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nikon Corp filed Critical Nikon Corp
Publication of WO2014119555A1

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G3/003 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to produce spatial visual effects
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/302 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/305 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/302 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/31 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/366 Image reproducers using viewer tracking
    • H04N13/376 Image reproducers using viewer tracking for tracking left-right translational head movements, i.e. lateral movements
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2300/00 Aspects of the constitution of display devices
    • G09G2300/02 Composition of display devices
    • G09G2300/023 Display panel composed of stacked panels
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 Aspects of display data processing
    • G09G2340/04 Changes in size, position or resolution of an image
    • G09G2340/0464 Positioning
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory with means for controlling the display position

Definitions

  • The present invention relates to an image processing device, a display device, and a program.
  • This application claims priority based on Japanese Patent Application No. 2013-17968 filed on January 31, 2013, and Japanese Patent Application No. 2013-17969 filed on January 31, 2013, the contents of which are incorporated herein by reference.
  • Some image processing apparatuses generate image information to be displayed on a display system capable of three-dimensional display.
  • Image information displayed on such a display system capable of three-dimensional display is usually created exclusively as image information for three-dimensional display.
  • There is also a technique for performing three-dimensional display based on image information for two-dimensional display (see, for example, Patent Document 1).
  • In such a display method, however, the positions of the plurality of images viewed by the user may be shifted relative to one another, depending on the definition of the pixels.
  • For this reason, the accuracy of the stereoscopic image perceived by the user cannot always be improved with the display method described above.
  • An object of an aspect of the present invention is to provide an image processing device and a program that improve the visibility of a displayed stereoscopic image. Another object is to provide a display device that can improve the accuracy of a stereoscopic image perceived by a user.
  • According to one aspect, a display device includes: a first display surface that displays a first image based on first image data including a display target; a second display surface that displays a second image based on second image data including the display target; a detection unit that detects a position of an observer observing the first display surface and the second display surface; and a control unit that corrects the image data in the vicinity of the contour portion of the display target in the second image data, based on the position of the observer detected by the detection unit, and displays the corrected data on the second display surface.
  • According to another aspect, a program causes a computer of a display device, the display device including a first display surface that displays a first image based on first image data including a display target, a second display surface that displays a second image based on second image data including the display target, and a detection unit that detects a position of an observer observing the first display surface and the second display surface, to execute a control step of correcting the image data in the vicinity of the contour portion of the display target based on the position of the observer detected by the detection unit and displaying the corrected data on the second display surface.
  • According to another aspect, a display device includes: a first display surface that displays a first image based on first image data including an object; a detection unit that detects the relative position between an observer observing the first display surface and the first display surface; and a control unit that corrects the image data in the vicinity of the contour of the object in the first image data, based on the relative position detected by the detection unit, and displays the corrected data on the first display surface.
  • According to another aspect, a program causes a computer of a display device, the display device including a first display surface that displays a first image based on first image data including an object and a detection unit that detects the relative position between an observer observing the first display surface and the first display surface, to execute a control step of correcting the image data in the vicinity of the contour of the object in the first image data based on the relative position detected by the detection unit and displaying the corrected data on the first display surface.
  • According to another aspect, a display device includes: a first display unit that displays an image of a display target at a first depth position; a second display unit that has a plurality of two-dimensionally arranged pixels and displays, at a second depth position different from the first depth position, a contour image indicating the contour portion of the display target; and a contour correction unit that corrects the contour image based on the position of the contour pixel displaying the contour image among the pixels of the second display unit, the position on the first display unit of the contour portion corresponding to the contour pixel, and the contour position on the second display unit determined based on a predetermined viewpoint position.
  • According to another aspect, an image processing device includes a contour correction unit that, for the contour image indicating the contour portion of the display target, corrects the contour image based on the position of the contour pixel displaying the contour image among the plurality of two-dimensionally arranged pixels of the second display unit, the position on the first display unit of the contour portion corresponding to the contour pixel, and the contour position on the second display unit determined based on a predetermined viewpoint position.
  • According to another aspect, a program causes a computer to execute, for the contour image indicating the contour portion of the display target displayed by the second display unit at a second depth position different from the first depth position at which the first display unit displays the image of the display target, a contour correction step of correcting the contour image based on the position of the contour pixel displaying the contour image among the plurality of two-dimensionally arranged pixels of the second display unit, the position on the first display unit of the contour portion corresponding to the contour pixel, and the contour position on the second display unit determined based on a predetermined viewpoint position.
  • According to the aspects of the present invention, the accuracy of the stereoscopic image perceived by the user can be improved.
  • FIG. 1 is a diagram showing an overview of a display system in the present embodiment.
  • the display system 100 shown in this figure displays an image that allows stereoscopic viewing on the display unit.
  • an XYZ rectangular coordinate system is set, and the positional relationship of each part will be described with reference to this XYZ rectangular coordinate system.
  • a direction in which the display device 10 displays an image is a positive direction of the Z axis, and orthogonal directions on a plane perpendicular to the Z axis direction are an X axis direction and a Y axis direction, respectively.
  • the X-axis direction is the horizontal direction of the display device 10
  • the Y-axis direction is the vertical direction of the display device 10.
  • the observer 1 is at a position where the display surface 11S of the display unit 11 and the display surface 12S of the display unit 12 enter the visual field.
  • the display device 10 shown in FIG. 1 displays a stereoscopic image so that it can be stereoscopically viewed from a predetermined position (Ma) (viewing position) in the positive direction of the Z axis (the direction facing the display unit 12).
  • the observer 1 can stereoscopically view the stereoscopic image displayed on the display unit 11 and the display unit 12 of the display device 10 from the viewing position.
  • FIG. 2 is a configuration diagram illustrating an example of a configuration of the display system 100 including the display device 10 according to the present embodiment.
  • a display system 100 according to the present embodiment includes an image processing device 2 and a display device 10.
  • the image processing device 2 supplies the image information D11 and the image information D12 to the display device 10.
  • the image information D12 is information for displaying the image P12 displayed by the display device 10.
  • the image information D11 is information for displaying the image P11 displayed by the display device 10, and is image information of the edge image PE generated based on the image information D12, for example.
  • the edge image PE is an image showing an edge portion E in the image P12. The edge image PE will be described later with reference to FIGS. 3A and 3B.
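  • The following sketch illustrates how an edge image PE might be derived from the image information D12. The text does not fix an extraction method, so a simple horizontal luminance-difference detector (a hypothetical make_edge_image helper, using NumPy) stands in for whatever generator produces D11 from D12; it is a sketch under that assumption, not the patented method.

```python
import numpy as np

def make_edge_image(image_d12: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Sketch: derive an edge image PE from the base image P12.

    The extraction rule is an assumption; the text states only that the
    edge image is generated based on the image information D12. Changes
    along the X axis matter here because the binocular parallax in this
    design arises at left/right (vertical) edges.
    """
    lum = image_d12.astype(np.float64)
    dx = np.abs(np.diff(lum, axis=1))        # horizontal luminance change
    edges = np.zeros_like(lum)
    edges[:, 1:] = np.where(dx > threshold, lum.max(), 0.0)
    return edges
```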
  • the display device 10 includes a display unit 11 and a display unit 12.
  • The display device 10 displays the image P11 based on the image information D11 acquired from the image processing device 2, and displays the image P12 based on the image information D12 acquired from the image processing device 2.
  • The display unit 11 and the display unit 12 of this embodiment are arranged in that order in the (+Z) direction; that is, the display unit 11 and the display unit 12 are arranged at different depth positions. Here, a depth position is a position in the Z-axis direction.
  • The display unit 12 includes a display surface 12S that displays an image in the (+Z) direction, and displays the image P12 on the display surface 12S based on the image information D12 acquired from the image processing device 2.
  • a light ray (light) R12 emitted from the image P12 displayed on the display surface 12S is visually recognized by the observer 1 as an optical image.
  • The display unit 11 includes a display surface 11S that displays an image in the (+Z) direction, and displays the image P11 on the display surface 11S based on the image information D11 acquired from the image processing device 2.
  • a light ray (light) R11 emitted from the image P11 displayed on the display surface 11S is visually recognized by the observer 1 as an optical image.
  • The display unit 12 of the present embodiment is a transmissive display unit that can transmit the light ray R11 (light) corresponding to the image P11 displayed by the display unit 11. That is, the display surface 12S displays the image P12 while transmitting the light ray R11 of the image P11 displayed by the display unit 11, so that the observer 1 visually recognizes the image P11 and the image P12 overlapping each other. In this way, the display unit 11 displays the image P11, which indicates the edge portion in the image P12, at a depth position different from the depth position at which the image P12 is displayed.
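  • Since the display surface 12S emits the image P12 while transmitting the light ray R11 from the display unit 11, the optical image reaching the eye can be modelled, to a first approximation, as an additive mixture of the two layers. This is a minimal sketch assuming ideal additive combination with a constant transmittance; the text does not specify the mixing model, and transmittance is a hypothetical parameter.

```python
def composite_optical_image(p11_row, p12_row, transmittance=1.0):
    """Sketch: brightness of the optical image IM along one row.

    Assumes the front panel (display unit 12) adds its own emission to
    the rear-panel light it transmits; both the additive model and the
    transmittance value are assumptions, not values given in the text.
    """
    return [b12 + transmittance * b11 for b11, b12 in zip(p11_row, p12_row)]
```

  • Under this additive model, the brightness profile described later with FIG. 6 follows directly: where an edge image of luminance BR1 overlaps a region of luminance BR2, the combined luminance is BR3 = BR1 + BR2.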
  • FIGS. 3A and 3B are schematic diagrams illustrating an example of an image P12 and an image P11 in the present embodiment.
  • FIG. 3A shows an example of the image P12 in the present embodiment.
  • FIG. 3B shows an example of the image P11 in the present embodiment.
  • the image P12 of the present embodiment is an image showing a square pattern as shown in FIG. 3A, for example.
  • Any of the four sides constituting the quadrangle can be an edge portion, but in the following description, for convenience, the edge portion (left-side edge portion) E1 indicating the left side of the quadrangle and the edge portion (right-side edge portion) E2 indicating the right side will be described as the edge portion E.
  • As shown in FIG. 3B, for example, the image P11 of the present embodiment includes an edge image (left-side edge image) PE1 showing the left-side edge portion E1 of the square pattern and an edge image (right-side edge image) PE2 showing the right-side edge portion E2.
  • An edge portion (which may be simply expressed as an edge or an edge region) is a portion of the image where the brightness (for example, luminance) changes abruptly between adjacent or neighboring pixels.
  • The edge portion E denotes not only the theoretical line segment, having no width, on the left or right side of the quadrangle shown in FIG. 3A, but also, for example, a region around that edge having a finite width corresponding to the resolution of the display unit 11.
  • FIG. 4 is a schematic diagram illustrating an example of an image displayed by the display device 10 according to the present embodiment.
  • the display unit 11 displays the image P11 so that the viewer 1 can visually recognize it.
  • the display part 12 displays the image P12 so that the observer 1 can visually recognize it.
  • the image P12 is displayed at a position that is a predetermined distance ZD away from the position at which the image P11 is displayed in the Z-axis direction.
  • the display unit 12 of the present embodiment is a transmissive display unit that transmits light.
  • the image P11 displayed on the display unit 11 and the image P12 displayed on the display unit 12 are visually recognized by the observer 1 so as to overlap.
  • the predetermined distance ZD is a distance between the position where the image P11 is displayed and the position where the image P12 is displayed.
  • The display device 10 displays the images P11 and P12 so that the left-side edge portion E1 in the image P12 displayed by the display unit 12 and the corresponding left-side edge image PE1 are visually recognized overlapping each other.
  • Similarly, the display device 10 displays the images P11 and P12 so that the right-side edge portion E2 in the image P12 displayed by the display unit 12 and the corresponding right-side edge image PE2 are visually recognized overlapping each other.
  • For example, the display device 10 displays each image so that, for the left eye L of the observer 1, the left-side edge image PE1 is visually recognized on the (-X) side of the left-side edge portion E1 of the quadrangle indicated by the image P12 (that is, outside the quadrangle). Further, the display device 10 displays each image so that, for the left eye L, the right-side edge portion E2 and the right-side edge image PE2 are visually recognized overlapping on the (-X) side of the right-side edge portion E2 (that is, inside the quadrangle).
  • Similarly, the display device 10 displays each image so that, for the right eye R of the observer 1, the right-side edge portion E2 and the right-side edge image PE2 are visually recognized overlapping on the (+X) side of the right-side edge portion E2 (that is, outside the quadrangle), and the left-side edge portion E1 and the left-side edge image PE1 are visually recognized on the (+X) side of the left-side edge portion E1 (that is, inside the quadrangle).
  • FIG. 5 is a schematic diagram illustrating an example of the optical image IM in the present embodiment.
  • the optical image IM is an image in which the image P11 and the image P12 are visually recognized by the observer 1.
  • the optical image IM includes an optical image IML visually recognized by the left eye L of the observer 1 and an optical image IMR visually recognized by the right eye R.
  • the optical image IML visually recognized by the left eye L of the observer 1 will be described.
  • On the retina of the left eye L, the optical image IML is formed by combining the image P11L visually recognized by the left eye L and the image P12L visually recognized by the left eye L. For example, as described with reference to FIG. 4, the image indicating the left-side edge portion E1 and the left-side edge image PE1 are combined on the (-X) side of the left-side edge portion E1 of the quadrangle indicated by the image P12 (that is, outside the quadrangle), and the image indicating the right-side edge portion E2 and the right-side edge image PE2 are combined on the (-X) side of the right-side edge portion E2 (that is, inside the quadrangle), forming the optical image IML.
  • FIG. 6 is a graph showing an example of the brightness distribution of the optical image IM in the present embodiment; it shows the brightness distribution of the optical image IML visually recognized by the left eye L in the case of FIG. 5.
  • X coordinates X1 to X6 are the X coordinates corresponding to the points at which the brightness of the optical image IML changes.
  • Here, the brightness of the image P12L visually recognized by the left eye L is zero at X coordinates X1 to X2, and is the brightness value BR2 at X coordinates X2 to X6. In the following, the luminance value BR is used as an example of the brightness of an image.
  • The brightness of the image P11L visually recognized by the left eye L is the luminance value BR1 at X coordinates X1 to X2 and X coordinates X4 to X5, and is zero at X coordinates X2 to X4. Therefore, the brightness (for example, luminance) of the optical image IML visually recognized by the left eye L is the luminance value BR1 at X coordinates X1 to X2, the luminance value BR2 at X coordinates X2 to X4 and X coordinates X5 to X6, and the luminance value BR3, which is the combination of the luminance value BR1 and the luminance value BR2, at X coordinates X4 to X5.
  • FIG. 7 is a graph showing an example of binocular parallax that occurs in the left eye L and right eye R in the present embodiment.
  • The distribution of the brightness of the image recognized by the observer 1 through the optical image IML formed on the retina of the left eye L is as shown by the waveform WL in FIG. 7.
  • The observer 1 recognizes as an edge portion of an object the position on the X axis at which the change in the brightness of the visually recognized image is greatest (that is, where the gradient of the waveform WL or the waveform WR is greatest).
  • That is, for the waveform WL on the left eye L side, the observer 1 recognizes the position XEL shown in FIG. 7 (that is, the position at the distance LEL from the origin O of the X axis) as the left-side edge portion of the quadrangle.
  • Similarly, the distribution of the brightness of the image recognized by the observer 1 through the optical image IMR formed on the retina of the right eye R is as shown by the waveform WR in FIG. 7.
  • For the waveform WR on the right eye R side, the observer 1 recognizes the position XER shown in FIG. 7 (that is, the position at the distance LER from the origin O of the X axis) as the edge portion.
  • The observer 1 perceives the difference between the position XEL of the edge portion viewed by the left eye L and the position XER of the edge portion viewed by the right eye R as binocular parallax.
  • The observer 1 thus recognizes the quadrangle image as a stereoscopic image (three-dimensional image) based on the binocular parallax at the edge portions.
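  • The localisation rule just described (an edge is perceived where the luminance gradient of the retinal profile is steepest) can be sketched as follows; waveform_wl and waveform_wr are hypothetical sampled versions of the waveforms WL and WR of FIG. 7.

```python
import numpy as np

def perceived_edge_position(profile: np.ndarray, x: np.ndarray) -> float:
    """Sketch: X position at which the brightness change is greatest.

    Implements the rule stated in the text: the observer localises an
    edge at the point of maximum luminance gradient of the profile.
    """
    gradient = np.abs(np.gradient(profile, x))
    return float(x[np.argmax(gradient)])

# Binocular parallax as described for FIG. 7 (hypothetical inputs):
#   x_el = perceived_edge_position(waveform_wl, x)   # left eye,  position XEL
#   x_er = perceived_edge_position(waveform_wr, x)   # right eye, position XER
#   disparity = x_er - x_el   # the parallax that yields the perceived depth
```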
  • As described above, the display device 10 includes the display unit 12 that displays the image P12, and the display unit 11 that displays, at a depth position different from the depth position at which the image P12 is displayed, the image P11 including the edge image PE indicating the edge portion in the image P12. Accordingly, the display device 10 can display the image P12 and the edge image PE (that is, the edge portion) of the image P12 so as to overlap. In other words, the display device 10 of the present embodiment can display the image P12 without the image displayed on the display unit 11 (that is, the edge image PE) affecting any portion of the image displayed on the display unit 12 other than the edge portion.
  • FIG. 8 is a diagram for explaining the influence when the positions of the edges of two images displayed on the display device 10 are shifted.
  • The display unit 11 and the display unit 12 in the display device 10 each have a display capable of presenting a stereoscopic image (3D image).
  • Each display is provided with a pixel array composed of two-dimensionally arranged pixels on its display surface.
  • The luminance at each position on the display surface 11S is adjusted in units of pixels in the pixel array of the display unit 11. Similarly, the luminance at each position on the display surface 12S is adjusted in units of pixels in the pixel array of the display unit 12.
  • the definition of the displayable image may be limited by the resolution of the display device.
  • For example, when the edge image PE displayed on the display unit 11 consists of lines one pixel wide, the edge width (line width) can only be adjusted in units of pixels and is therefore limited by the size of the pixels. Further, for example, in order to emphasize the edge, the edge image PE may be displayed with a width larger than the size (width) of one pixel; even in this case, the edge width is limited by the size of the pixels.
  • In the following description, the edge width corresponds to the width of one pixel.
  • The present invention can also be applied to the case where an edge formed by a plurality of pixels is displayed. In that case, the description based on the pixel size may be replaced with a description based on the pixel pitch.
  • FIG. a1 in FIG. 8A shows a front view of the display unit 11 as seen from the display surface 11S side. On this display surface 11S, the right-side edge image PE2 indicated by the image P11 displayed on the display unit 11 is shown.
  • the grid shown in the figure indicates the position of each pixel.
  • Each pixel is arranged at a position corresponding to the grid.
  • The right-side edge image PE2 is displayed on the pixels in the k-th column on the X axis. In the Z-axis direction,...
  • FIG. a2 shows a front view of the display unit 12 as seen from the display surface 12S side, in a state in which the quadrangle Ob indicated by the image P12 displayed on the display unit 12 is shown.
  • Here, the right end OR of the quadrangle Ob is at a position that coincides with the right end of the right-side edge image PE2.
  • That is, the right end OR of the quadrangle Ob is at a position corresponding to the boundary between the pixel in the k-th column and the pixel in the (k+1)-th column that displays the right-side edge image PE2 shown in FIG. a1.
  • FIG. a3 shows the image (composite image) visually recognized in the state where the right-side edge image PE2 shown in FIG. a1 and the right-side edge portion E2 of the quadrangle Ob shown in FIG. a2 overlap.
  • FIG. a4 shows the brightness (luminance) along a section cut out in the horizontal direction (X-axis direction) so as to include the quadrangle Ob in the image (composite image) shown in FIG. a3.
  • It shows a contour in which the luminance V0 of the portion indicating the quadrangle Ob before the contour correction is enhanced up to the luminance Vp by the combination.
  • That is, FIG. 8A shows an image in which the right end of the right-side edge image PE2 is displayed so as to touch the (-X) side of the right end OR of the quadrangle Ob (that is, the inside of the quadrangle) at the right-side edge portion E2 of the quadrangle Ob indicated by the image P12. Since the right end of the right-side edge image PE2 is in contact with the right end OR of the quadrangle Ob in this way, an image in which the luminance changes sharply and by a large amount at the right-side edge portion E2 of the quadrangle Ob is synthesized. As a result, the observer 1 visually recognizes an image whose luminance peak value is increased to Vp (FIG. a4).
  • The image synthesized in this way is visually recognized when the viewing position of the observer 1 is at the position where the contour is emphasized most.
  • In the following description, the viewing position where this synthesized image is visually recognized is set as the predetermined position at which the observer 1 can visually recognize the stereoscopic image.
  • FIG. 8B shows a case where the observer 1 moves in the (-X) direction along the X axis, and FIG. 8C shows a case where the observer 1 moves in the (+X) direction along the X axis.
  • FIGS. b1 to b4 shown in FIG. 8B and FIGS. c1 to c4 shown in FIG. 8C correspond to FIGS. a1 to a4 described above, respectively.
  • FIG. 8B and FIG. 8C will be described in order, focusing on the differences from FIG. 8A.
  • First, the image visually recognized in the case of FIG. 8B will be described. In FIG. b2, the right end OR of the quadrangle Ob is observed at a position shifted in the (+X) direction from the boundary between the pixel in the k-th column and the pixel in the (k+1)-th column that displays the right-side edge image PE2 illustrated in FIG. b1.
  • That is, the right end OR of the quadrangle Ob is observed at a position shifted in the (+X) direction from the right end of the right-side edge image PE2.
  • Since the right end OR of the quadrangle Ob is displayed at a position shifted in the (+X) direction from the right end of the right-side edge image PE2, an image in which the position of the edge image PE2 at the right-side edge portion E2 has moved to the inside of the quadrangle Ob is synthesized.
  • At the right-side edge portion E2 of the quadrangle Ob, an image is synthesized in which the brightness of the contour image emphasized by the right-side edge image PE2 and its width in the X-axis direction are similar to those in FIG. 8A described above. However, the luminance of the image at the right-side edge portion E2 of the quadrangle Ob changes in a staircase pattern. As a result, the amount of enhancement of the contour image produced by the added right-side edge image PE2 is reduced compared with the case of FIG. 8A described above.
  • Next, in FIG. c2, the right end OR of the quadrangle Ob is observed at a position shifted in the (-X) direction from the boundary between the pixel in the k-th column and the pixel in the (k+1)-th column that displays the right-side edge image PE2 shown in FIG. c1.
  • That is, the right end OR of the quadrangle Ob is observed at a position shifted in the (-X) direction from the right end of the right-side edge image PE2.
  • Since the right end OR of the quadrangle Ob is displayed at a position shifted in the (-X) direction from the right end of the right-side edge image PE2, an image in which the position of the edge image PE2 at the right-side edge portion E2 has moved to the outside of the quadrangle Ob, protruding from the shape of the quadrangle Ob, is synthesized.
  • At the right-side edge portion E2 of the quadrangle Ob, an image in which the brightness of the contour image emphasized by the right-side edge image PE2 is the same as in FIG. 8A described above is synthesized. However, the width in the X-axis direction over which that luminance is secured is narrowed, and the luminance of the image at the right-side edge portion E2 of the quadrangle Ob changes in a staircase pattern. As a result, the amount of enhancement of the contour image produced by the added right-side edge image PE2 is reduced compared with the case of FIG. 8A described above. Moreover, since the width in the X-axis direction over which the brightness of the contour image is secured is narrower than in FIGS. 8A and 8B described above, even though the same right-side edge image PE2 is displayed, the amount of enhancement of the contour image is reduced compared with the cases of FIGS. 8A and 8B.
  • In other words, when the viewing position of the observer 1 deviates from the predetermined position, the image shown in FIG. 8A is no longer visually recognized, and the visibility of the stereoscopic image may be reduced.
  • This factor reducing the visibility of the stereoscopic image is distinct from the degradation of the displayed image due to aliasing caused by the discrete arrangement of the pixels of the display device 10.
  • In the display system 100 of the present embodiment, the visibility of the stereoscopic image is improved by adjusting the edge image PE according to the method described below.
  • the display system 100 in the present embodiment will be described in detail.
  • FIG. 9 is a schematic block diagram showing the configuration of the display system 100 according to an embodiment of the present invention.
  • a display system 100 illustrated in FIG. 9 includes an image processing device 2 and a display device 10.
  • the display device 10 has a display capable of displaying a stereoscopic image (3D image) for displaying a stereoscopic image so as to be stereoscopically viewed from a predetermined viewing position.
  • The stereoscopic image (3D image) may be either a 3D video (3D moving image) or a 3D still image.
  • A stereoscopic image (3D image) may also be either a natural image obtained by photographing an actual subject or an image generated, processed, or synthesized by software processing (a computer graphics (CG) image or the like).
  • This display is, for example, a liquid crystal display, an organic EL (Electro-Luminescence) display, a plasma display, etc., and is a display capable of displaying a stereoscopic image as described above.
  • the display display of the display device 10 includes a pixel array including pixels arranged two-dimensionally on the display surface.
  • the image processing apparatus 2 includes a contour correction unit 210, an imaging unit 230, a detection unit 250, a control unit 260, and a storage unit 270.
  • The contour correction unit 210 corrects at least one piece of the supplied image information according to the image information supplied to it, and outputs the result.
  • the image information to be processed by the contour correcting unit 210 includes image information D11P (first image information) and image information D12P (second image information).
  • The image information D11P is the image information, out of the image information for stereoscopically displaying by binocular parallax a display target on the display unit 11 (first display unit) and the display unit 12 (second display unit) at a predetermined position, that is to be displayed on the display unit 11.
  • the image information D12P is image information to be displayed on the display unit 12 among the image information for stereoscopically displaying the display target at a predetermined position by binocular parallax.
  • the contour correction unit 210 in the present embodiment corrects the image information D11P (first image information) to generate image information D11.
  • The contour correction unit 210 outputs the image information D12P (second image information) as it is, without correcting it.
  • The contour correction unit 210 corrects the image information corresponding to the contour portion of the display target (for example, the edge portion E) based on: the predetermined position (Ma in FIG. 1) at which the display target to be displayed on the display unit 11 and the display unit 12 is stereoscopically displayed by binocular parallax; the positions of the plurality of two-dimensionally arranged pixels of the display unit 11; and the pixel position on the display unit 11 corresponding to the contour portion of the display target.
  • Here, the predetermined position at which the display target to be displayed on the display unit 11 and the display unit 12 is stereoscopically displayed by binocular parallax is a position from which the observer 1 can visually recognize the stereoscopic image.
  • The positions of the plurality of two-dimensionally arranged pixels of the display unit 11 are the positions of the pixels in the pixel array provided on the display surface 11S.
  • The pixel position on the display unit 11 corresponding to the contour of the display target is the position at which the contour of the display target is displayed on the display unit 11.
  • the contour correcting unit 210 corrects the image information corresponding to the contour portion to be displayed according to the image information D12P (second image information) when correcting the image information D11P.
  • the image information D12P is image information to be displayed on the display unit 12 among image information for stereoscopically displaying a display target at a predetermined position by binocular parallax.
  • More specifically, the contour correction unit 210 corrects the image information corresponding to the contour portion of the display target in the image information D11P so as to reduce the loss of visibility of the stereoscopic display caused by the relative positional relationship between the pixel position on the display unit 11 corresponding to the contour portion of the display target and the positions of the plurality of pixels arranged in the display unit 11.
  • For example, the contour correction unit 210 corrects the luminance of the contour portion of the display target when correcting the image information D11P included in the contour portion of the display target.
  • For example, the contour correction unit 210 also corrects the width of the contour portion of the display target when correcting the image information D11P included in the contour portion of the display target. The details of the process of correcting the image information corresponding to the contour portion of the display target so as to reduce the loss of visibility of the stereoscopic display will be described later.
  • the contour correction unit 210 in the present embodiment will be described with a more specific example.
  • the contour correction unit 210 in the present embodiment includes a determination unit 213 and a correction unit 211.
  • The determination unit 213 determines the conditions for controlling the correction processing in the correction unit 211 described later. For example, in the correction of the image information D11P included in the contour portion of the display target, the determination unit 213 determines whether or not the position of the contour portion of the display target on the display unit 11 falls within the range of a first pixel and a second pixel that are adjacent to each other on the display unit 11.
  • Based on this determination result, the determination unit 213 determines that correction is necessary when the position of the contour portion of the display target on the display unit 11 falls within the range of the adjacent first and second pixels among the pixels of the display unit 11, and determines that correction is not necessary when it does not.
  • Here, the adjacent first and second pixels are pixels arranged side by side in the direction in which stereoscopic parallax occurs (the horizontal direction).
  • The correction unit 211 corrects the contour portion of the display target to be displayed on the display device 10 when it is determined, based on the determination result of the determination unit 213, that the position of the contour of the display target falls within the range of the first pixel and the second pixel.
  • For example, the correction unit 211 corrects the image information D11P to be displayed on at least one of the first pixel and the second pixel, based on the predetermined position, the positions of the plurality of two-dimensionally arranged pixels of the display unit 11, and the pixel position on the display unit 11 corresponding to the contour portion of the display target.
  • For example, the correction unit 211 corrects the image information D11P displayed on the first pixel and the second pixel according to a correction amount of the image information D11P determined for the first pixel and the second pixel as a pair. Thereby, the contour correction unit 210 can correct the image information D11P so that the position of the observer 1 (user) viewing the display unit 11 and the display unit 12 becomes the predetermined position at which the display target can be viewed stereoscopically by binocular parallax.
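  • A minimal sketch of the determination and correction just described: given the ideal sub-pixel X position of the contour on the display unit 11, decide whether it falls within the range of two adjacent pixels and, if so, split the edge weight between the pair. The linear split is an assumption; the text states only that a correction amount is determined for the first and second pixels as a pair.

```python
def correct_edge_pair(x_contour: float, pixel_pitch: float) -> dict:
    """Sketch: determination unit 213 and correction unit 211 in one step.

    Returns {column index: edge weight}. The proportional division
    between the adjacent first and second pixels is an assumed rule.
    """
    col = int(x_contour // pixel_pitch)              # first pixel column
    frac = (x_contour % pixel_pitch) / pixel_pitch   # sub-pixel remainder
    if frac == 0.0:
        # Contour sits exactly on a pixel boundary: no correction needed.
        return {col: 1.0}
    # Contour falls within the range of columns col and col + 1:
    # distribute the edge between the adjacent pair (parallax direction).
    return {col: 1.0 - frac, col + 1: frac}
```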
  • The imaging unit 230 captures an image of the region including the above-described viewing position on the display surface side of the display device 10. That is, the imaging unit 230 images the direction facing the display surface of the display device 10.
  • Although not shown, the imaging unit 230 includes an optical lens, an imaging element that captures the subject light beam (optical image) input through the optical lens, and an imaging signal processing unit that outputs the image data captured by the imaging element as digital image data. The imaging unit 230 then supplies the captured image (digital image data) to the detection unit 250 and the control unit 260.
  • the detection unit 250 includes an observer detection unit 251 (detection unit), a face detection unit 252, and a face recognition unit 253.
  • The observer detection unit 251 detects the relative position, with respect to the display device 10, of an observer on the display surface side of the display device 10, based on the image captured by the imaging unit 230. That is, the observer detection unit 251 detects the relative position, with respect to the display device 10, of an observer in the direction facing the display device 10.
  • For example, the observer detection unit 251 detects the observer's position (X-axis and Y-axis coordinates) in the plane parallel to the display surface of the display device 10 (the XY plane), based on the position of the image region of the observer's face, detected by the face detection unit 252, within the image captured by the imaging unit 230.
  • The observer detection unit 251 also detects the distance from the display device 10 to the observer with respect to the display surface (the distance in the Z-axis direction, that is, the Z-axis coordinate), based on the image region of the observer's face detected by the face detection unit 252. For example, the observer detection unit 251 detects this distance based on the size of the image region of the observer's face, or on the interval between the left-eye image region and the right-eye image region within the face image region. Note that the observer detection unit 251 may also detect an observer from the image captured by the imaging unit 230 based on human characteristics other than the face (for example, body-shape characteristics).
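  • The text states only that the Z distance is derived from the size of the face region or the spacing of the eye regions; one plausible realisation is a pinhole-camera relation, sketched below. The focal length in pixels and the 65 mm interocular default are assumptions, not values from the source.

```python
def estimate_viewer_distance(eye_gap_px: float,
                             focal_length_px: float,
                             interocular_m: float = 0.065) -> float:
    """Sketch: observer's Z distance from the left/right eye spacing.

    Pinhole model: gap_in_image = f * IPD / Z, hence Z = f * IPD / gap.
    Both the model and the default interocular distance are assumptions.
    """
    return focal_length_px * interocular_m / eye_gap_px
```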
  • the face detection unit 252 detects a face image area from the image captured by the imaging unit 230.
  • For example, the face detection unit 252 extracts image information indicating the outline of a face and the positions of the eyes from the image captured by the imaging unit 230, compares the extracted image information with information indicating the characteristics of a human face (a face detection database), and thereby detects the image region of a face in the direction facing the display device 10, that is, the face of an observer.
  • Hereinafter, "detecting a face image region" is also referred to simply as "detecting an observer's face".
  • The above-mentioned information indicating the characteristics of a human face is stored in the storage unit 270, for example as a face detection database used for detecting a face from an image. The face detection unit 252 supplies the detection result to the observer detection unit 251 or the face recognition unit 253.
  • When the face detection unit 252 fails to detect a face in the image captured by the imaging unit 230, it supplies to the observer detection unit 251 or the face recognition unit 253 a detection result indicating that there is no face in the direction facing the display device 10, that is, that there is no observer.
  • The face recognition unit 253 recognizes the face of the observer detected by the face detection unit 252, based on the image captured by the imaging unit 230. For example, the face recognition unit 253 recognizes which observer's face the detected face is, based on the detection result of the face detection unit 252 (information indicating the outline of the face, the positions of the eyes, and so on) and on information indicating the facial features of a plurality of people.
  • the information indicating the facial features of each of the plurality of persons described above is stored in the storage unit 270 as a face recognition database used to recognize a face extracted from an image, for example.
  • The face recognition database may be configured so that information can be added to it arbitrarily.
  • For example, the display system 100 may be provided with a menu for registering information indicating the features for recognizing an observer's face; by executing this menu, the imaging unit 230 images the face of the observer to be registered, and information for recognizing the face, indicating the features of the registered observer's face, is generated based on the captured face image and registered in the face recognition database.
  • the storage unit 270 stores information necessary for the detection unit 250 to detect.
  • the storage unit 270 stores, as a face detection database, information indicating human face characteristics necessary for detecting a face from an image.
  • the storage unit 270 stores, as a face recognition database, information indicating facial features of each of a plurality of persons necessary to recognize a face extracted from an image.
  • the control unit 260 includes a difference calculation unit 261, a contour correction control unit 262, and a display control unit 263.
  • The difference calculation unit 261 calculates the difference (positional difference) between the relative position of the observer detected by the observer detection unit 251 and the viewing position from which the stereoscopic image displayed on the display device 10 can be viewed stereoscopically.
  • For example, the difference calculation unit 261 calculates the difference between the detected relative position of the observer and the viewing position as coordinates on the X, Y, and Z axes.
  • The contour correction control unit 262 supplies to the contour correction unit 210 correction information for correcting the contour of the image to be displayed so that a stereoscopically viewable image can be displayed for the viewing position of the observer 1, based on the image captured by the imaging unit 230.
  • For example, the contour correction control unit 262 generates, based on the relative position of the observer 1 detected by the observer detection unit 251, position information indicating the predetermined position at which the display target to be displayed on the display unit 11 and the display unit 12 is stereoscopically displayed.
  • The contour correction control unit 262 supplies the generated position information to the contour correction unit 210, and the contour correction unit 210 generates, based on this position information, the correction information for correcting the contour of the image displayed on the display device 10.
  • the relative position of the observer may be described as the position of the observer.
  • the display control unit 263 controls the display of the display device 10.
  • the display control unit 263 causes the display device 10 to display a stereoscopic image based on the input image signal.
  • FIG. 10 is a diagram illustrating a positional relationship between an observer, a display device, and a contour portion in a target image.
  • The position of the observer in FIG. 10 is based on the position information of the observer 1 detected by the observer detection unit 251 as described above.
  • FIG. 10A shows a view of the display device 10 viewed from the display surface side ((+ Z) axis side).
  • FIG. 10B shows a plan view of the XZ plane from the (+ Y) axis side.
  • the positional relationship between the edge portion E in the image P12 and the edge image PE in the image P11 will be described in a simplified manner so that the positional relationship can be clearly shown.
  • From the position Ma where the observer 1 is present, the edge portion E in the image P12 is visually recognized overlapping the edge image PE in the image P11.
  • The positional relationship in this case is such that the image P11 and the image P12 are arranged at positions where stereoscopic viewing is easy.
  • That is, the position of one point (point POS2) representing the edge portion E in the image P12 and the position of the corresponding point (point POS1) on the image P11 coincide on the synthesized image.
  • The position of the observer 1 is separated from the surface of the display device 10 (display unit 12) by a distance LD in the (+Z) direction on the Z axis. The position of the observer 1 is denoted Ma(0, 0, LD), the position of the point POS2 is (X2, Y2, 0), and the position of the point POS1 is (X1, Y1, -ZD).
  • Since Ma(0, 0, LD), the point POS2, and the point POS1 lie on a straight line, the point POS1 is visually recognized overlapping the point POS2. Due to this positional relationship, the observer 1 located at Ma(0, 0, LD) can visually recognize the stereoscopic image formed by the image P11 and the image P12.
  • When the position of the observer 1 moves from Ma to Mb, the point POS1 no longer overlaps the point POS2, and the position of the point POS1 moves to the point POS1'. When the observer 1 moves further in the same way, the position of the point POS1 moves to the point POS1''.
  • Note that the position of the point POS1 can be moved only in units of one pixel on the display unit 11.
  • Moreover, the required movement of the position of the point POS1 is not necessarily a whole-number multiple of the width of one pixel of the display unit 11 in the X-axis direction. Since the position at which the edge image PE is displayed depends on the positions of the pixels of the display unit 11, the position at which the edge image PE can be displayed does not necessarily correspond to the position of the point POS1 calculated as described above.
  • In other words, the point POS1 may fail to overlap the point POS2: owing to the limit of the resolution of the display unit 11, the display cannot always be performed so that the two points overlap.
  • In this case, the control unit 260 controls the contour correction unit 210 to correct the contour image of the image P11 according to the amount of movement from (X1, Y1, -ZD). Note that, from the positional relationship shown in FIG. 10B, the amount of movement, on the display surface 11S, of the image displayed on the display unit 11 can be calculated from the amount of movement of the observer 1 by a proportional calculation based on the similarity of the triangles.
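  • The proportional calculation mentioned above can be written out directly. With the observer at distance LD in front of the display unit 12 and the display unit 11 a further ZD behind it (FIG. 10B), similar triangles give the shift of the aligned point POS1 on the display surface 11S for a lateral observer movement; a sketch:

```python
def edge_image_shift(observer_dx: float, ld: float, zd: float) -> float:
    """Sketch: shift of point POS1 on display surface 11S that keeps it
    on the line from the observer through point POS2 on display unit 12.

    Similar triangles (FIG. 10B): a lateral observer movement of dx
    moves the aligned rear-panel point by -dx * ZD / LD (opposite
    direction, scaled by the panel separation over the viewing distance).
    """
    return -observer_dx * zd / ld

# The ideal shift is then quantised to the pixel pitch d of the display
# unit 11; the sub-pixel remainder is what the contour correction
# described below compensates for.
```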
  • FIG. 11 is a diagram for explaining the correction of the contour image in the display system 100.
  • In the display system 100, the image processing apparatus 2 corrects the contour image so as to reduce the loss of visibility due to the above-described deviation, by widening the area having the same brightness (luminance) as the area indicating the contour in the contour image.
  • The details will be described below with an example.
  • FIG. 11 shows images of the right-side edge portion E2 of the quadrangle Ob indicated by the image P12, as in FIG. 8A described above.
  • FIGS. a1 to a4, arranged in order from the top in FIG. 11A, are the same as FIGS. a1 to a4 of FIG. 8A.
  • FIG. 11B shows a case where the observer 1 moves in the (-X) direction along the X axis, and FIG. 11C shows a case where the observer 1 moves in the (+X) direction along the X axis.
  • FIGS. b1 to b4 shown in FIG. 11B and FIGS. c1 to c4 shown in FIG. 11C correspond to FIGS. a1 to a4 described above.
  • FIG. 11B and FIG. 11C will be described in order, focusing on the differences from FIG. 11A.
  • Here, the width of the edge portion E (right-side edge portion E2) of the quadrangle Ob in the image P12 is the width (d in FIG. 10) of one pixel of the display unit 11 in the X-axis direction.
  • In FIG. b2, the right end OR of the quadrangle Ob is observed at a position shifted in the (+X) direction from the boundary between the pixel in the k-th column (column k) and the pixel in the (k+1)-th column (column (k+1)) shown in FIG. b1.
  • When the right end OR of the quadrangle Ob is at such a position, the right-side edge portion E2 of the quadrangle Ob spans pixels in columns adjacent to each other in the X-axis direction.
  • Here, the pixels in the columns adjacent in the X-axis direction are the pixel in the k-th column and the pixel in the (k+1)-th column shown in FIG. b1.
  • the contour correction unit 210 generates a right side edge image PE2 ′ from the right side edge image PE2 indicated by the image information D11P.
  • the contour correction unit 210 according to the present embodiment arranges the right side edge image PE2 ′ side by side with respect to the right side edge image PE2 so as to correct pixels in columns adjacent in the X-axis direction according to the right side edge image PE2.
  • the contour correcting unit 210 arranges the right side edge image PE2 ′ in the (k + 1) column side by side with respect to the right side edge image PE2 located in the k column.
  • FIG. 11B shows an example in which the luminances of the pixels shown as the right side edge image PE2 and the right side edge image PE2 ′ are the same. The case of adjusting the luminance of the pixels indicating the right side edge image PE2 and the right side edge image PE2 ′ will be described later.
  • upon detecting that the right end OR of the quadrangle Ob is displayed at a position moved in the (+X) axis direction from the right end of the right-side edge image PE2, the contour correcting unit 210 displays the right-side edge image PE2′. Thereby, an image in which the right-side edge portion E2 of the quadrangle Ob is corrected by the edge image PE2 and the right-side edge image PE2′ is synthesized (FIGS. b3 and b4). As a result of this correction, the edge image PE2 and the right-side edge image PE2′ are arranged side by side, and, in FIG. 11B, a contour image whose luminance is enhanced at the right-side edge portion E2 of the quadrangle Ob, as in FIG. 11A, is synthesized (FIG. b4).
  • in FIG. 11C, the right end OR of the quadrangle Ob is observed at a position moved in the (−X) axis direction from the boundary between the pixel in the k-th column (column k), which displays the right-side edge image PE2, and the pixel in the (k+1)-th column (column (k+1)).
  • in this case, the right-side edge portion E2 of the quadrangle Ob extends over pixels in columns adjacent to each other in the X-axis direction, shown as the pixel in the k-th column and the pixel in the (k−1)-th column in FIG. 11C.
  • the contour correction unit 210 generates a right side edge image PE2 ′ from the right side edge image PE2 indicated by the image information D11P.
  • the contour correction unit 210 arranges the right-side edge image PE2′ next to the right-side edge image PE2 so that the pixels in the column adjacent in the X-axis direction are corrected in accordance with the right-side edge image PE2.
  • the contour correcting unit 210 arranges the right-side edge image PE2′ in column (k−1) next to the right-side edge image PE2 located in column k.
  • FIG. 11C shows an example in which the luminance of the pixels shown as the right side edge image PE2 and the right side edge image PE2 ′ is the same. The case of adjusting the luminance of the pixels indicating the right side edge image PE2 and the right side edge image PE2 ′ will be described later.
  • upon detecting that the right end OR of the quadrangle Ob is displayed at a position moved in the (−X) axis direction from the right end of the right-side edge image PE2, the contour correcting unit 210 displays the right-side edge image PE2′. Thereby, an image in which the right-side edge portion E2 of the quadrangle Ob is corrected by the edge image PE2 and the right-side edge image PE2′ is synthesized (FIGS. c3 and c4). As a result of this correction, the edge image PE2 and the right-side edge image PE2′ are arranged side by side, and, in FIG. 11C, a contour image whose luminance is enhanced at the right-side edge portion E2 of the quadrangle Ob, as in FIG. 11A, is synthesized (FIG. c4). The column-placement logic of both cases is sketched below.
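  • a rough sketch of the column-placement logic of FIGS. 11B and 11C, assuming the signed sub-pixel offset of the right end OR from the column boundary is already known; the names are illustrative:

```python
def place_auxiliary_edge_column(k, or_offset):
    """Decide in which column to draw the auxiliary edge image PE2'
    (sketch).

    k         : index of the column displaying the base edge image PE2
    or_offset : position of the right end OR relative to the boundary
                between column k and column (k + 1), in pixel widths;
                positive = moved in the (+X) direction (FIG. 11B),
                negative = moved in the (-X) direction (FIG. 11C)

    Returns the column index for PE2', or None if no correction is
    needed.
    """
    if or_offset > 0.0:
        return k + 1   # OR moved (+X): extend the edge into column (k+1)
    if or_offset < 0.0:
        return k - 1   # OR moved (-X): extend the edge into column (k-1)
    return None        # OR sits exactly on the boundary
```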
  • even when the right end OR of the quadrangle Ob is moved in the (−X) axis direction from the right end of the right-side edge image PE2, the contour image continues to the position of the right end OR of the quadrangle Ob owing to the presence of the right-side edge image PE2 and the right-side edge image PE2′.
  • this is because the right-side edge image PE2′ is arranged on the (−X) axis side of the right-side edge image PE2 so as to be continuous with the right-side edge image PE2.
  • compared with the synthesized images shown in FIGS. 8B and 8C, the luminance peak value Vp of the contour image is the same in either case.
  • however, the correction differs from FIGS. 8B and 8C in that the width exhibiting the luminance peak value Vp of the contour image, which continues to the position of the right end OR of the quadrangle Ob, can be secured wider than the pixel width.
  • by widening the width exhibiting the luminance peak value of the synthesized image in the vicinity of the right end OR of the quadrangle Ob, the position of the end recognized as the edge of the contour image can be matched with the right end OR of the quadrangle Ob.
  • the display system 100 can show an outline in which the width of the outline indicated by the peak value Vp is wider than the width of the pixel, with the end aligned with the right end OR.
  • the display system 100 enhances the contour image by adding the right-side edge image PE2′ to the right-side edge image PE2, synthesizing a contour image in which the brightness of the edge portion is enhanced.
  • the display system 100 can reduce the influence of the shift of the edge position caused by the observer 1 moving from a predetermined position.
  • the display system 100 according to the present embodiment can improve the visibility of a stereoscopic image even when the observer 1 moves from a predetermined position where the stereoscopic image can be visually recognized.
  • FIG. 12 is a diagram for explaining a modification of the contour image correction method in the display system 100.
  • the above-described deviation can be tolerated by arranging an area whose brightness (luminance) is reduced from the brightness (luminance) of the contour area of the contour image so as to be continuous with the contour area before correction. Details will be described below.
  • FIG. 12 shows an image of the right edge portion E2 of the quadrangle Ob indicated by the image P12 as in the above-described FIG. 8A.
  • FIGS. a1 to a4, arranged in order from the top of FIG. 12A, are the same as FIGS. a1 to a4 in FIG. 8A.
  • FIG. 12B shows a case where the observer 1 moves in the (−X) axis direction along the X axis.
  • FIG. 12C shows a case where the observer 1 moves in the (+X) axis direction along the X axis.
  • FIGS. b1 to b4 shown in FIG. 12B and FIGS. c1 to c4 shown in FIG. 12C correspond to the above-described FIGS. a1 to a4, respectively.
  • FIG. 12B and FIG. 12C will be described in order, focusing on the differences from FIG. 12A.
  • the width of the edge portion E (right edge portion E2) of the quadrangle Ob in the image P12 is set as the width of the pixel in the display unit 11 in the X-axis direction.
  • the correction of the contour image in FIG. 12B is performed at the same position as the correction of the contour image in FIG. 11B.
  • the contour correcting unit 210 arranges the right-side edge image PE2 'in the (k + 1) column side by side with respect to the right-side edge image PE2 located in the k column.
  • FIG. 12B shows an example in which the luminances of the pixels shown as the right side edge image PE2 and the right side edge image PE2 ′ are different from those of FIG. 11B described above.
  • a specific difference is that, as shown in FIG. b1 of FIG. 12B, when the right-side edge image PE2′ for the pixels in the (k+1)-th column is generated from the right-side edge image PE2 indicated by the image information D11P, the brightness (luminance) of the right-side edge image PE2′ is made different.
  • specifically, the luminance of the right-side edge image PE2′ is set lower than the luminance of the right-side edge image PE2 on which it is based.
  • the correction of the contour image in this modification is different in this respect from the correction of the contour image shown as an example of the first embodiment.
  • the right edge image PE2 ′ is displayed by detecting that the right edge OR of the quadrangle Ob is displayed at a position moved in the (+ X) axis direction from the right edge of the right edge image PE2.
  • the image corrected by the edge image PE2 and the right-side edge image PE2 ′ in the right-side edge portion E2 of the quadrangle Ob is synthesized (FIGS. B3 and b4).
  • as a result of this correction, the edge image PE2 and the right-side edge image PE2′ are arranged side by side, so that, in FIG. 12B, a contour image whose luminance is enhanced at the right-side edge portion E2 of the quadrangle Ob, as in FIG. 12A, is synthesized (FIG. b4).
  • owing to the right-side edge image PE2 and the right-side edge image PE2′, the contour image continues to the position of the right end OR of the quadrangle Ob.
  • the luminance peak values of the contour image in the right side edge image PE2 and the right side edge image PE2 ′ are different. Therefore, the peak brightness value of the contour image in the right edge image PE2 ′ section is lower than the peak brightness value (Vp) of the contour image in the right edge image PE2 section.
  • the right side edge image PE2 ′ is arranged on the (+ X) axis side of the right side edge image PE2 so as to continue to the right side edge image PE2.
  • in FIG. 12B, the width exhibiting the luminance peak value Vp of the contour image is the pixel width, which is the same as in FIG. 8B; however, FIG. 12B differs from FIG. 8B in that the contour image continues to the position of the right end OR of the quadrangle Ob.
  • since the width exhibiting the peak value Vp remains at the pixel width at that position, a delicate contour having the width of a pixel can be shown.
  • the contour correcting unit 210 arranges the right side edge image PE2 'in the (k-1) column side by side with respect to the right side edge image PE2 located in the k column.
  • FIG. 12C shows an example in which the luminances of the pixels shown as the right side edge image PE2 and the right side edge image PE2 ′ are different from those of FIG. 11C described above.
  • a specific difference is that, as shown in FIG. c1 of FIG. 12C, when the right-side edge image PE2′ for the pixel in the (k−1)-th column is generated from the right-side edge image PE2 indicated by the image information D11P, the brightness (luminance) of the right-side edge image PE2′ is made different.
  • the luminance of the right side edge image PE2 ′ is set to be lower than the luminance of the right side edge image PE2 as a base. Note that this is different from the correction of the contour image shown as an example of the first embodiment.
  • upon detecting that the right end OR of the quadrangle Ob is displayed at a position moved in the (−X) axis direction from the right end of the right-side edge image PE2, the right-side edge image PE2′ is displayed.
  • the image corrected by the edge image PE2 and the right-side edge image PE2 ′ in the right-side edge portion E2 of the quadrangle Ob is synthesized (FIGS. C3 and c4).
  • as a result, the edge image PE2 and the right-side edge image PE2′ are arranged side by side, and, in FIG. 12C, a contour image whose luminance is enhanced at the right-side edge portion E2 of the quadrangle Ob, as in FIG. 12A, is synthesized (FIG. c4).
  • even when the right end OR of the quadrangle Ob is moved in the (−X) axis direction from the right end of the right-side edge image PE2, the contour image continues to the position of the right end OR of the quadrangle Ob owing to the presence of the right-side edge image PE2 and the right-side edge image PE2′.
  • the peak value of the brightness of the contour image is different between the edge image PE2 and the right-side edge image PE2 ′. Therefore, the peak brightness value of the contour image in the right edge image PE2 ′ section is lower than the peak brightness value of the contour image in the edge image PE2 section.
  • the right side edge image PE2 ′ is arranged on the ( ⁇ X) axis side of the right side edge image PE2 so as to continue to the right side edge image PE2.
  • when compared with the synthesized image shown in FIG. 8C, the width exhibiting the luminance peak value Vp of the contour image is the same as in FIG. 8C; however, FIG. 12C differs from FIG. 8C in that the contour image is corrected on the inner side of the quadrangle Ob. In short, the width recognized as the contour image can be widened by raising the luminance value of the synthesized image in the region on the inner side of the quadrangle Ob, continuing from the region of the contour image that exhibits the luminance peak value Vp.
  • thereby, the contour image can be enhanced by the added right-side edge image PE2′ as compared with FIG. 8 described above.
  • the visibility of the stereoscopic image can be improved.
  • FIG. 13 is a diagram illustrating the adjustment of the brightness (luminance) of the edge image PE ′ as a modification of the contour image correction method in the display system 100 of the present embodiment.
  • in each part of FIG. 13, the upper row shows the relative positional relationship between the observer 1 and the images P11 and P12 displayed on the display device 10 (display unit 11 and display unit 12 (FIG. 1)).
  • the brightness (luminance) of the contour image displayed on the display unit 11 is shown in the middle.
  • the result of combining the images displayed on the display device 10 is shown in the lower part.
  • the manner in which the display of the contour changes according to the amount by which the observer 1 has moved from the predetermined position (Ma (FIG. 10)) is shown in order from FIG. 13(a) to FIG. 13(e).
  • FIG. 13 (a) shows a case where the position of the observer 1 coincides with a predetermined position (Ma (FIG. 10)), and corresponds to the above-described FIG. 11 (a) and FIG. 12 (a).
  • FIG. 13B shows a first stage in which the position of the observer 1 has moved from the predetermined position (Ma (FIG. 10)) in the (+ X) axial direction.
  • this first stage corresponds to FIGS. 12(b) and 12(c) described above. That is, the contour correction unit 210 generates an edge image PE1′ (PE2′) whose luminance is lower than the luminance of the edge image PE1 (PE2).
  • FIG. 13(c) shows a second stage in which the position of the observer 1 has moved further in the (+X) axis direction from the predetermined position (Ma (FIG. 10)); this stage corresponds to FIGS. 11(b) and 11(c) described above. That is, the contour correction unit 210 generates an edge image PE1′ (PE2′) having the same luminance as the edge image PE1 (PE2).
  • FIG. 13D shows a third stage in which the position of the observer 1 is further moved from the predetermined position (Ma (FIG. 10)) in the (+ X) axis direction.
  • in the first and second stages, the luminance of the generated edge image PE1′ (PE2′) is adjusted while the luminance of the edge image PE1 (PE2) is maintained at its initial value (Vp).
  • in the third stage, conversely, the luminance of the edge image PE1 (PE2) is adjusted while the luminance of the edge image PE1′ (PE2′) is held at the value (Vp).
  • that is, the contour correction unit 210 reduces the luminance of the edge image PE1 (PE2) while maintaining the luminance of the edge image PE1′ (PE2′) at the same value (Vp) as in the second stage.
  • FIG. 13 (e) shows a fourth stage in which the position of the observer 1 is further moved from the predetermined position (Ma (FIG. 10)) in the (+ X) axis direction.
  • in this fourth stage, the movement amount of the observer 1 has increased further, and the edge image PE1′ (PE2′) generated as the correction information is displayed on the pixel next to the pixel that displayed the edge image PE1 (PE2).
  • that is, only the edge image PE1′ (PE2′) is displayed, and the edge image PE1 (PE2) is no longer displayed.
  • in this way, the region for adjusting the luminance of the edge image PE1 (PE2) and the region for adjusting the luminance of the edge image PE1′ (PE2′) are determined according to the movement amount of the observer 1.
  • the correction amount of a contour image can be continuously adjusted according to the movement of the observer 1.
  • the visibility of the stereoscopic image can be improved.
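  • a sketch of the staged adjustment of FIG. 13, with the movement normalized by one pixel width and the region switch placed at the halfway point; these thresholds are illustrative assumptions, not the patent's exact schedule:

```python
def staged_edge_luminance(move, pixel_width, vp):
    """Return the luminances (pe, pe_dash) of the edge image PE1 (PE2)
    and of PE1' (PE2') for an observer displaced by `move` (sketch of
    the FIG. 13 method).
    """
    t = min(max(move / pixel_width, 0.0), 1.0)  # normalized movement
    if t <= 0.5:
        # first and second stages: hold PE at Vp, raise PE' toward Vp
        return vp, vp * (t / 0.5)
    # third and fourth stages: hold PE' at Vp, lower PE toward zero
    return vp * ((1.0 - t) / 0.5), vp
```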
  • FIG. 14 is a diagram illustrating adjustment of the brightness (luminance) of the edge image PE ′ as a modification of the contour image correction method in the display system 100 of the present embodiment.
  • in each part of FIG. 14, the upper row shows the relative positional relationship between the observer 1 and the images P11 and P12 displayed on the display device 10 (display unit 11 and display unit 12 (FIG. 1)).
  • the brightness (luminance) of the contour image displayed on the display unit 11 is shown in the middle.
  • the result of combining the images displayed on the display device 10 is shown in the lower part.
  • in the modification shown in FIG. 13, the example was illustrated in which the region for adjusting the luminance of the edge image PE1 (PE2) and the region for adjusting the luminance of the edge image PE1′ (PE2′) are divided into segments according to the movement amount of the observer 1.
  • in the modification described below, the luminance is adjusted according to the movement amount of the observer 1 without dividing the adjustment into such regions.
  • coefficients k1 and k2 whose values change according to the movement amount of the observer 1 are determined.
  • here, the coefficient k1 changes from 1 to 0 as the movement amount of the observer 1 increases, and the coefficient k2 changes from 0 to 1 as the movement amount of the observer 1 increases.
  • the luminance of the edge image PE1 (PE2) is set by multiplying its original luminance by the coefficient k1, and the luminance of the edge image PE1′ (PE2′) is set by multiplying the original luminance by the coefficient k2.
  • thereby, as the movement amount of the observer 1 increases, the luminance of the edge image PE1 (PE2) gradually decreases and the luminance of the edge image PE1′ (PE2′) gradually rises.
  • the value of the coefficient k1 and the value of the coefficient k2 may be set so that their sum equals 1.
  • the range in which the observer 1 moves can be determined according to the pixel width in the display unit 11.
  • in that case, the luminance of the edge image PE1 (PE2) and the luminance of the edge image PE1′ (PE2′) can be set to the same value.
  • the set luminance value can be made half the luminance value of the original edge image PE1 (PE2).
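  • a sketch of this coefficient scheme, assuming the simplest complementary choice k1 = 1 − t and k2 = t (so that k1 + k2 = 1, as the text permits), with the movement normalized by the pixel width:

```python
def coefficient_edge_luminance(move, pixel_width, vs):
    """Return the luminances of PE1 (PE2) and PE1' (PE2') under the
    k1/k2 scheme: k1 falls from 1 to 0 and k2 rises from 0 to 1 as
    the movement amount of the observer 1 grows.
    """
    t = min(max(move / pixel_width, 0.0), 1.0)
    k1, k2 = 1.0 - t, t
    return k1 * vs, k2 * vs

# Example: halfway through the pixel width, both edge images are
# displayed at half the original luminance Vs.
pe, pe_dash = coefficient_edge_luminance(0.5, 1.0, 100.0)  # -> (50.0, 50.0)
```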
  • the parts of FIG. 14 are arranged in the order of FIG. 14(a) to FIG. 14(e) according to the amount by which the observer 1 has moved from the predetermined position (Ma (FIG. 10)).
  • FIG. 14(a) shows a case where the position of the observer 1 coincides with the predetermined position (Ma (FIG. 10)), and corresponds to FIGS. 11(a), 12(a), and 13(a) described above.
  • FIG. 14B shows a first stage in which the position of the observer 1 has moved from a predetermined position (Ma (FIG. 10)) in the (+ X) axis direction.
  • in this first stage, the coefficients k1 and k2 satisfy the relationship 0 < k2 < k1 < 1. That is, the luminance value of the edge image PE1 (PE2) is reduced from the initial luminance value (Vs) of the edge image PE1 (PE2) to V1.
  • in addition, the luminance value of the edge image PE1′ (PE2′) generated from the edge image PE1 (PE2) is set to V3, which is greatly reduced from the initial luminance value (Vs) of the edge image PE1 (PE2).
  • FIG. 14C shows a second stage in which the position of the observer 1 is further moved from the predetermined position (Ma (FIG. 10)) in the (+ X) axis direction.
  • FIG. 14D shows a third stage in which the position of the observer 1 is further moved from the predetermined position (Ma (FIG. 10)) in the (+ X) axis direction.
  • in this third stage, the coefficients k1 and k2 satisfy the relationship 0 < k1 < k2 < 1. That is, the luminance value of the edge image PE1 (PE2) is greatly reduced from the original luminance value (Vs) of the edge image PE1 (PE2) to V3.
  • the luminance of the edge image PE1′ (PE2′) generated from the edge image PE1 (PE2) is set to V1, which is reduced from the luminance value (Vs) of the original edge image PE1 (PE2).
  • FIG. 14E shows a fourth stage in which the position of the observer 1 is further moved from the predetermined position (Ma (FIG. 10)) in the (+ X) axis direction.
  • in this fourth stage, the movement amount of the observer 1 has increased further, and the edge image PE1′ (PE2′) generated as the correction information is displayed on the pixel next to the pixel that displayed the edge image PE1 (PE2).
  • that is, only the edge image PE1′ (PE2′) is displayed, and the edge image PE1 (PE2) is no longer displayed.
  • as described above, in this modification, the luminance of the edge image PE1 (PE2) and the luminance of the edge image PE1′ (PE2′) are each adjusted according to the movement amount of the observer 1. Thereby, although this is a simplified method, the correction amount of the contour image can be adjusted continuously according to the movement of the observer 1. Furthermore, according to the contour image correction method in this modification, the luminance of the edge image PE1 (PE2) and of the edge image PE1′ (PE2′) is reduced according to the movement amount of the observer 1.
  • the visibility of a stereoscopic image can be improved even when the observer 1 moves from a predetermined position where the stereoscopic image can be visually recognized.
  • FIGS. 15A to 15C are diagrams showing an overview of the display system in the present embodiment.
  • the display system 100A shown in this figure displays an image that allows stereoscopic viewing on the display unit.
  • FIG. 15A is a schematic diagram in which a part of a cross section of the display device 10A in the display system 100A is enlarged.
  • FIG. 15B shows the positional relationship between the display device 10A and the observer 1.
  • FIG. 15C shows the arrangement of pixels on the display surface 11S of the display unit 11A. Even when the observer 1 moves within a predetermined range from the illustrated position, the display device 10A displays the image to be displayed so that it can be viewed stereoscopically.
  • the display unit 11A and the display unit 12 of the display device 10A correspond to the display unit 11 and the display unit 12 of the display device 10 (FIGS. 1 and 2) in the first embodiment and, as in the case of the display device 10, are arranged at different depth positions.
  • in the display device 10A, similarly to the display device 10 described above, the image displayed on the display unit 11A is transmitted through the display unit 12, and the display target is thereby displayed.
  • the display device 10A in which the display unit 11A and the display unit 12 are combined displays a stereoscopic image by optically combining the images displayed on the respective display units.
  • the optically synthesized stereoscopic image is a stereoscopic image in which binocular parallax is generated between the left eye L and the right eye R, as illustrated.
  • the display system 100A displays the edge of the target image displayed on the display unit 12 on the display unit 11A, thereby displaying the image displayed on the display device 10A in a three-dimensional manner.
  • the display unit 11A of the present embodiment is a single display that can display a stereoscopic image (3D image) so as to be stereoscopically viewable from a predetermined viewing position, even when used alone.
  • as a display method for displaying a stereoscopic image that can be stereoscopically viewed with the naked eye without special glasses, there is a method in which different images that cause binocular parallax are displayed so as to be seen by the left eye and the right eye, respectively.
  • examples include the parallax barrier method, in which a parallax barrier is arranged in front of the display surface, and the lenticular lens method shown in FIG. 15A.
  • compared with other methods such as the parallax barrier method, the lenticular lens method can more easily increase the amount of light reaching the eyes (left eye (L) and right eye (R)) for the same amount of emitted light.
  • the lenticular lens method is therefore suitable for the display unit 11A of the present embodiment, whose image is further displayed through the display unit 12; an example in which the lenticular lens method is applied to the display unit 11A is shown in FIG. 15A.
  • a sheet-like lenticular lens 13 is provided on the display surface 11S of the display unit 11A.
  • the lenticular lens 13 is provided with a plurality of convex lenses (for example, cylindrical lenses) that have a curvature in one direction and no curvature in the direction orthogonal to that direction, arranged side by side in the direction orthogonal to their extending direction.
  • the extending direction of the convex lens is the vertical direction (Y-axis direction).
  • a plurality of rectangular display areas are provided along the extending direction (Y-axis direction) of the convex lens so as to correspond to the convex lenses in the lenticular lens 13.
  • to the plurality of display areas, a plurality of display areas R1, R2, R3, R4, and R5 that display images for the right eye and a plurality of display areas L1, L2, L3, L4, and L5 that display images for the left eye are assigned so as to be arranged alternately.
  • when the observer 1 is within a range in which a pair of a right-eye image (for example, the image displayed in the display area R1) and a left-eye image (for example, the image displayed in the display area L1) corresponding to a convex lens can be observed, the display on the display unit 11A can be observed as a stereoscopic image.
  • as shown in FIG. 15C, the display areas are arranged on the display surface 11S of the display unit 11A in the order L1, R1, L2, R2, L3, R3, L4, R4, L5, R5, and so on.
  • a plurality of pixels are arranged side by side in the extending direction (Y-axis direction) of each display area.
  • the pixel PICL1, the pixel PICR1, and the pixel PICL2 are provided in order along the X-axis direction.
  • the pair of the pixel PICL1 and the pixel PICR1 is a pair that allows the observer 1 to visually recognize a stereoscopic image.
  • each pixel in the display unit 11A may be any pixel as long as it can be handled as a single pixel, and each pixel may further include a plurality of sub-pixels.
  • the sub-pixels included in each pixel may be provided according to three colors (RGB).
  • the positions most suitable for observing the displayed stereoscopic image are arranged discretely, like the position of the observer 1 illustrated in FIG. 15B.
  • a predetermined range based on a position most suitable for observing the stereoscopic image to be displayed is an area where the stereoscopic image is easily observed.
  • FIG. 16 is a schematic block diagram showing the configuration of a display system 100A according to an embodiment of the present invention.
  • a display system 100A illustrated in FIG. 16 includes an image processing device 2A and a display device 10A.
  • the display device 10A in the present embodiment includes a display unit 11A and a display unit 12.
  • the image processing device 2A generates a contour image for displaying a stereoscopic image on the display device 10A.
  • the image processing apparatus 2A includes a contour correction unit 210A, an imaging unit 230, a detection unit 250, a control unit 260, and a storage unit 270.
  • the contour correcting unit 210A in the present embodiment corrects and outputs at least one of the supplied image information.
  • the contour correction unit 210A supplies, for example, image information D11 obtained by correcting the image information D11P to the display device 10A.
  • the contour correction unit 210A in the present embodiment includes a determination unit 213 and a correction unit 211A.
  • the correction unit 211A corresponds to the above-described correction unit 211 (FIG. 9).
  • the correction unit 211A of the present embodiment generates image information D11 of an image to be displayed for each of the left eye (L) and the right eye (R) based on the image information D11P.
  • image information D11 of an image displayed on the display unit 11A for the left eye (L) and the right eye (R), respectively, is referred to as image information D11L and image information D11R.
  • the contour correction unit 210A applies the contour image correction method described in the example and the modifications of the first embodiment described above, and generates the image information D11 as a contour image corresponding to the image information D12P.
  • in that case, the pixel of the display unit 11 should be read as a pixel of the display unit 11A in the present embodiment, or as one of the rectangular display areas discretely arranged in the display unit 11A.
  • in the first embodiment, the case where the contour image is corrected based on the detected position of the observer 1 was exemplified.
  • the contour correcting unit 210A of the present embodiment corrects the contour image based on the positions of the left eye (L) and the right eye (R) of the observer 1 estimated from the detected position of the observer 1, instead of the position of the observer 1 itself.
  • alternatively, the contour correcting unit 210A may correct the contour image based on the detected positions of the left eye (L) and the right eye (R) of the observer 1 instead of the position of the observer 1.
  • the contour correction unit 210A generates the image information D11L from the image information D11P with reference to the position of the left eye (L) of the observer 1, and displays it on the display unit 11A as the image for the left eye (L). Thereby, the image observed by the left eye (L) is an image obtained by optically combining the image based on the image information D11L and the image based on the image information D12.
  • similarly, the contour correction unit 210A generates the image information D11R from the image information D11P with reference to the position of the right eye (R) of the observer 1, and displays it on the display unit 11A as the image for the right eye (R).
  • the image observed in the right eye (R) is an image obtained by optically combining the image based on the image information D11R and the image based on the image information D12.
  • the images to be displayed for the left eye (L) and the right eye (R) are generated independently.
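  • a minimal sketch of this per-eye generation, where correct_fn stands in for the contour correction of the first embodiment applied with respect to one eye position; the function and parameter names are illustrative assumptions:

```python
def per_eye_contour_info(d11p, left_eye_pos, right_eye_pos, correct_fn):
    """Generate the left-eye and right-eye contour image information
    independently (sketch). correct_fn(image, eye_pos) represents one
    application of the first embodiment's contour correction.
    """
    d11l = correct_fn(d11p, left_eye_pos)   # image for the left eye (L)
    d11r = correct_fn(d11p, right_eye_pos)  # image for the right eye (R)
    return d11l, d11r
```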
  • these images are each formed as an optically synthesized image.
  • an optical image IMR formed by combining the image P11R visually recognized by the right eye R and the image P12R visually recognized by the right eye R is formed.
  • in the display system 100A, the images to be displayed for the left eye (L) and the right eye (R) are generated independently, and an image that can easily be viewed stereoscopically is displayed for each of the left eye (L) and the right eye (R).
  • the observer 1 can observe a stereoscopic image with a more stereoscopic effect.
  • furthermore, the contour image corrected as shown in the first embodiment is displayed on the display unit 11A.
  • that is, the display target to be displayed on the display unit 11A (first display unit) and the display unit 12 (second display unit) is displayed stereoscopically at a predetermined position by binocular parallax, using the image information D11 (first image information).
  • the contour correction unit 210A corrects, in the image information D11, the image information corresponding to the contour portion of the display target, based on the predetermined position, the positions of the plurality of pixels arranged two-dimensionally in the display unit 11A, and the pixel position in the display unit 11A corresponding to the contour portion of the display target.
  • the predetermined position is set, for example, to the position of each of the two eyes of the observer 1. Alternatively, instead of using the positions of both eyes of the observer 1, a single position representing the observer 1 may be used, as in the first embodiment described above.
  • a multi-lens type lenticular method can be applied to the display unit 11A.
  • a stereoscopic image can be observed from a plurality of directions in which the display device 10A is viewed.
  • for example, the display unit 11A displays the stereoscopic image so that it can be observed both from a position in front of the display unit 11A and from positions deviating from the front. In this case, it is preferable to display on the display unit 11A a contour image that allows a stereoscopic image suitable for observation from those directions to be observed continuously, even if the observer 1 moves from the position in front of the display unit 11A.
  • the contour correcting unit 210A causes the display unit 11A to display a contour image that displays a stereoscopic image suitable for the direction in which the stereoscopic image can be observed.
  • based on the supplied image information (image information D11P and image information D12P), the contour correction unit 210A generates, for each direction from which the stereoscopic image can be observed, image information D11 that displays a stereoscopic image suitable for that direction, and outputs the corresponding contour image.
  • the contour correction unit 210A applies, for example, the contour image correction method described in the first embodiment, and outputs a contour image that displays a stereoscopic image suitable for each direction, determined by the lenticular lens 13, from which the stereoscopic image can be observed.
  • the contour correction unit 210A may thus output a contour image that displays a stereoscopic image suitable for the direction from which the stereoscopic image is observable, by the contour image correction method described in the first embodiment.
  • for example, taking as a reference the pixel that indicates the contour when viewed from the front of the display unit 11A, the contour correction unit 210A adjusts the luminance of the image displayed on the pixel adjacent to that pixel according to the luminance of the contour as viewed from the front.
  • the contour correcting unit 210A generates a contour image that changes continuously in accordance with the movement of the observer 1, whereby the stereoscopic image displayed on the display unit 11A can be changed gradually and continuously according to the movement of the observer 1, and the stereoscopic image can be displayed so as to follow that movement.
  • in this way, the display system 100A can display a contour image that allows the display target to be viewed stereoscopically from the position of the observer 1, according to the stereoscopically viewable directions of the display unit 11A. Thereby, the range from which the observer 1 can observe a stereoscopic image can be expanded.
  • the visibility of a stereoscopic image can be improved even when the observer 1 moves from a predetermined position where the stereoscopic image can be visually recognized.
  • FIGS. 17A to 17C are diagrams showing an outline of the display system in the present embodiment.
  • the display system 100B shown in this figure displays an image that allows stereoscopic viewing on the display unit.
  • FIG. 17A is a schematic diagram in which a part of a cross section of the display device 10B in the display system 100B is enlarged, showing a state in which the observer 1 is located within a stereoscopically viewable range.
  • FIG. 17B shows the positional relationship between the display device 10B and the observer 1.
  • FIG. 17C shows the arrangement of pixels on the display surface 10S of the display device 10B. Even if the observer 1 moves within a predetermined range from the illustrated position, the observer 1 can stereoscopically view the image displayed on the display device 10B.
  • the display device 10B displays a stereoscopic image (3D image) so that a stereoscopic image can be stereoscopically viewed from a predetermined viewing position even when used alone without being combined with another display device.
  • a sheet-like lenticular lens 13 is provided on the display surface 10S of the display device 10B shown in FIG. 17A.
  • the lenticular lens 13 is provided with a plurality of convex lenses (for example, cylindrical lenses) that have a curvature in one direction and no curvature in the direction orthogonal to that direction, aligned in the direction orthogonal to their extending direction.
  • the extending direction of the convex lens is the vertical direction (Y-axis direction).
  • the display surface 10S of the display device 10B is provided with a plurality of rectangular display areas corresponding to the convex lenses in the lenticular lens 13 along the extending direction (Y-axis direction) of the convex lenses.
  • to the plurality of display areas, a plurality of display areas R1, R2, R3, R4, and R5 that display images for the right eye and a plurality of display areas L1, L2, L3, L4, and L5 that display images for the left eye are allocated.
  • the display device 10B displays a stereoscopic image using the parallax generated by the lenticular lens 13.
  • the display device 10B of the present embodiment includes, for example, a lenticular lens display (display unit 11B and display unit 12B).
  • the display unit 11B is provided with display areas, distributed over a plurality of areas, for displaying the image to be presented to one of the two eyes.
  • similarly, the display unit 12B is provided with display areas, distributed over a plurality of areas, for displaying the image to be presented to the other of the two eyes.
  • in the case shown in the figure, the display unit 11B is provided in the display areas R1, R2, R3, R4, and R5 (first display areas), and the display unit 12B is provided in the display areas L1, L2, L3, L4, and L5 (second display areas).
  • the arrangement of the display unit 11B and the display unit 12B in the display device 10B differs from that of the display device 10 (FIGS. 1 and 2) in the first embodiment in that they are arranged along the same surface (display surface 10S).
  • the display areas on the display surface 10S of the display device 10B are arranged in the order L1, R1, L2, R2, L3, R3, L4, R4, L5, R5, and so on.
  • a plurality of pixels are arranged side by side in the extending direction (Y-axis direction) of each display area.
  • the pixel PICL1, the pixel PICR1, and the pixel PICL2 are provided in order along the X-axis direction.
  • the pair of the pixel PICL1 and the pixel PICR1 is a pair that allows the observer 1 to visually recognize a stereoscopic image.
  • each pixel in the display unit 11B and the display unit 12B may be any pixel as long as it can be handled as a single pixel, and each pixel may further include a plurality of sub-pixels.
  • the sub pixels included in each pixel may be provided according to three colors (RGB).
  • FIG. 18 is a schematic block diagram showing the configuration of a display system 100B according to an embodiment of the present invention.
  • a display system 100B illustrated in FIG. 18 includes an image processing device 2B and a display device 10B.
  • the image processing device 2B generates a contour image for displaying a stereoscopic image on the display device 10B.
  • the image processing device 2B includes a contour correction unit 210B, a stereoscopic image generation unit 220B, an imaging unit 230, a detection unit 250, a control unit 260, and a storage unit 270.
  • the stereoscopic image generation unit 220B generates the image information D11P and the image information D12P such that the display target displayed on the display unit 11B and the display unit 12B can be viewed stereoscopically from a predetermined position by binocular parallax.
  • the predetermined position is a viewing position of the observer 1 at which the contour can be emphasized most and at which the observer 1 can visually recognize the stereoscopic image. Accordingly, the stereoscopic image generation unit 220B generates the image information D11P and the image information D12P in which the position of the contour portion of the display target displayed by the image information D11P is adjusted according to the predetermined position.
  • the stereoscopic image generation unit 220B receives the image information D11S to be displayed on the display unit 11B and the display unit 12B, and generates, based on the image information D11S, image information D11P and image information D12P such that the display target can be viewed stereoscopically from the predetermined position.
  • the stereoscopic image generation unit 220B may generate, as the first image information to be displayed on the display unit 11B, which includes the lenticular lens 13 as an optical unit that generates binocular parallax, image information in which the position of the contour portion of the display target is adjusted according to the predetermined position.
  • the contour correction unit 210B in the present embodiment includes a determination unit 213B and a correction unit 211B.
  • the determination unit 213B determines whether or not the position of the contour portion of the display target displayed on the display unit 11B falls within the range between a first pixel and a second pixel that are adjacent among the pixels of the display unit 11B.
  • the first and second pixels adjacent to each other in the display unit 11B are pixels provided in adjacent columns in the display unit 11B.
  • based on the determination result, the determination unit 213B determines that correction is necessary when the position of the contour portion of the display target displayed on the display unit 11B falls within the range between the adjacent first and second pixels among the pixels of the display unit 11B, and determines that correction is not necessary when it does not fall within that range.
  • the adjacent first display area and second display area are areas arranged side by side in a direction (horizontal direction) in which stereoscopic parallax occurs.
  • the adjacent first pixel and second pixel are pixels (display areas) arranged side by side in a direction (horizontal direction) in which stereoscopic parallax occurs.
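  • a sketch of this determination, assuming the contour position and the pixel width are expressed in the same units; the names are illustrative:

```python
def needs_correction(contour_x, pixel_width):
    """Determine whether the contour position falls strictly between
    two adjacent pixel columns (sketch).

    Returns (False, None) when the contour aligns with a column
    boundary, else (True, (k, k + 1)) for the adjacent column pair.
    """
    frac = (contour_x / pixel_width) % 1.0
    if frac == 0.0:
        return False, None           # no correction needed
    k = int(contour_x // pixel_width)
    return True, (k, k + 1)          # falls between columns k and k+1
```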
  • when it is determined, based on the determination result of the determination unit 213B, that the position of the contour of the display target is within the range between the first pixel and the second pixel, the correction unit 211B corrects the contour portion of the display target to be displayed on the display device 10B.
  • specifically, the image information D11P to be displayed on at least one of the first pixel and the second pixel is corrected based on the predetermined position, the positions of the plurality of pixels arranged two-dimensionally in the display unit 11B, and the pixel position (display area position) in the display unit 11B corresponding to the contour portion of the display target.
  • the correction unit 211B corrects the image information D11P to be displayed on either the first pixel or the second pixel according to the correction amount of the image information D11P determined by pairing the first pixel and the second pixel.
  • the correction unit 211B corrects the image information D12P to be displayed on either the first pixel or the second pixel according to the correction amount of the image information D12P determined by pairing the first pixel and the second pixel.
  • the contour correction unit 210B can correct the image information D11P and the image information D12P so that the position of the observer 1 (user) who views the display unit 11B and the display unit 12B becomes a predetermined position at which the display target can be viewed stereoscopically by binocular parallax.
  • the contour correction unit 210B corrects and outputs at least one of the supplied image information according to the image information supplied to the contour correction unit 210B.
  • the image information to be processed by the contour correcting unit 210B includes the image information D11P (first image information) and the image information D12P (second image information).
  • the image information D11P is image information to be displayed on one of the display unit 11 (first display unit) and the display unit 12 (second display unit), among the image information that displays the display target to be shown on these display units stereoscopically at a predetermined position by binocular parallax.
  • the image information D12P is image information that is displayed on either the display unit 11 or the display unit 12 among the image information for stereoscopically displaying the display target at a predetermined position by binocular parallax.
  • the contour correction unit 210B in the present embodiment corrects the image information D11P (first image information) to generate the image information D11, and corrects the image information D12P (second image information) to generate the image information D12.
  • alternatively, the contour correction unit 210B may correct the image information D11P (first image information) to generate the image information D12, and correct the image information D12P (second image information) to generate the image information D11.
  • that is, the contour correction unit 210B switches, according to the display form, the correspondence between the image information D11P (first image information) and the image information D12P (second image information) on the one hand and the image information D11 and the image information D12 on the other, and outputs the results.
  • the correction method in the first embodiment can be applied.
  • the contour correction unit 210B in the present embodiment may perform correction by applying a plurality of correction methods according to the movement amount of the observer 1 as described below.
  • FIG. 19 is a diagram for explaining a correction method in a case where the observer moves from a region where a stereoscopic image determined according to the optical characteristics of the lenticular lens 13 can be observed to a region outside the region.
  • in FIG. 19, the display unit 11B is indicated by a column S1, and the display unit 12B is indicated by a column S2.
  • FIG. 19 shows the display device 10B and the observer 1 (1 ′) at different positions in the X-axis direction.
  • region Z1 indicates the range from which the column S1 of the display unit 11B can be observed, and region Z2 indicates the range from which the column S2 of the display unit 12B can be observed.
  • when the left eye of the observer 1 is located in the region Z1 (or region Z3) and observes the image displayed on the display unit 11B (column S1), and the right eye is located in the region Z2 (or region Z4) and observes the image displayed on the display unit 12B (column S2), the observer 1 can observe a stereoscopic image.
  • conversely, when the left eye of the observer 1 is located outside the region Z1 (or region Z3), or the right eye is located outside the region Z2 (or region Z4), the observer 1 cannot observe the stereoscopic image.
  • therefore, while the position of the observer 1 moves from Ma(i) to Ma(i+1), there are areas in which the stereoscopic image cannot be observed.
  • the display system 100B performs a process that changes a part of the areas in which the stereoscopic image cannot be observed in the first display form into areas in which the stereoscopic image can be observed.
  • in a part of the regions in which the stereoscopic image cannot be observed in the first display form, a stereoscopic image can be observed when the following condition is satisfied.
  • for example, when the position of the observer 1 is at Ma′(i), the left eye of the observer 1 is located inside the region Z2 and the right eye is located inside the region Z3.
  • the display according to the second display form is a display form in which the positions for displaying the images presented to the left eye and the right eye of the observer 1 are reversed with respect to the display according to the first display form.
  • by switching to the second display form, a part of the region in which the stereoscopic image cannot be observed in the first display form can be changed into an area in which a stereoscopic image can be observed.
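  • a sketch of this display-form decision, assuming each region Z1 to Z4 is represented as a simple (lo, hi) interval on the X axis; the zone representation and names are assumptions for illustration:

```python
def in_zone(x, zone):
    lo, hi = zone
    return lo <= x <= hi

def choose_display_form(left_eye_x, right_eye_x, z1, z2, z3, z4):
    """Pick the display form (sketch).

    First form : left eye in Z1 (or Z3), right eye in Z2 (or Z4).
    Second form: left eye in Z2 and right eye in Z3, with the columns
                 presenting the left-eye and right-eye images swapped.
    Returns 1, 2, or None when no stereoscopic observation is possible.
    """
    if (in_zone(left_eye_x, z1) or in_zone(left_eye_x, z3)) and \
       (in_zone(right_eye_x, z2) or in_zone(right_eye_x, z4)):
        return 1
    if in_zone(left_eye_x, z2) and in_zone(right_eye_x, z3):
        return 2
    return None
```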
  • the contour correction unit 210B may switch the display form.
  • FIG. 20 is a diagram illustrating a correction method when the observer moves.
  • the stereoscopic image displayed on the display device 10B while the position of the observer 1 shown in FIG. 19 moves from Ma(i) to Ma′(i) will now be described.
  • FIG. 20A schematically shows a stereoscopic image observed when the observer 1 observes the target image from the position of Ma (i).
  • FIGS. 20A to 20D schematically show the stereoscopic image observed while the observer 1, observing the display device 10B, moves along the X axis from the position Ma(i) to Ma′(i).
  • in each part of FIG. 20, the top row shows the positional relationship between the display device 10B and the observer 1, as in FIG. 19. Below that are shown, in order from the top, the luminance of the edges (contour image PE1 and contour image PE2) of the image presented to the left eye, which are corrected according to the position of the observer 1, the luminance of the image information D11P (D12P), the luminance of the image presented to the left eye, and the luminance of the image presented to the right eye.
  • the correction amount of the contour image PE1 and the contour image PE2 is calculated by the contour correction unit 210B based on each of the image information D11 and the image information D12.
  • as a representative case, the process by which the image information D11 is obtained from the image information D11P will be described; for the image information D12, only the result is shown.
  • the brightness of the image presented to the left eye is calculated by adding the brightness of the contour image PE1 and the contour image PE2 generated based on the image information D11P to the brightness of the image information D11P.
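  • expressed as a sketch, treating the images as same-length per-pixel luminance arrays (an assumption made here for illustration):

```python
def left_eye_luminance(d11p, pe1, pe2):
    """Per-pixel luminance of the image presented to the left eye:
    the contour images PE1 and PE2 added onto the luminance of the
    base image information D11P (sketch)."""
    return [base + e1 + e2 for base, e1, e2 in zip(d11p, pe1, pe2)]
```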
  • the luminance of the image information D11 corresponds to an image that causes the left eye of the observer 1 to perceive the illustrated brightness IML.
  • an image in which the edge portions of the object (the left side edge portion E1 and the right side edge portion E2) are emphasized is generated and displayed on the display unit 11B as the image information D11.
  • the luminance of the image presented to the right eye is calculated based on the image information D12P.
  • the images observed with both eyes each cause binocular parallax with the luminance or width of the edge portion corrected.
  • the image perceived by the observer 1 is one in which the brightness of the edge portions differs, as in the illustrated image information D11 and image information D12.
  • in FIG. 20A, the observer 1 is located at Ma(i), and an image is displayed in which the contour image PE1 and the contour image PE2 for correcting the contour are generated at the edge portions (left-side edge portion E1 and right-side edge portion E2) of the rectangular object displayed substantially in front of the observer 1. For example, an image corrected so that the brightness of the contour image PE1 and the brightness of the contour image PE2 are equal to each other is displayed.
  • the display form in FIG. 20A is, for example, the first display form described above with reference to FIG.
  • when the position of the observer 1 moves in the X-axis direction (toward Ma′(i)) to the position shown in FIG. 20B, the object is observed further to the left, as seen from the observer 1, than the direction in which it would be expected to be seen.
  • the position of the observer 1 in the case shown in FIG. 20B is assumed to be no more than half of the way from Ma(i) to Ma′(i), within a predetermined range near the halfway position. Thus, the direction in which the object is viewed changes even if the observer 1 moves relatively little.
  • the correction amounts of the contour image PE1 and the contour image PE2 are adjusted according to the movement amount of the observer 1.
  • when the observer 1 moves in the X-axis direction from the reference position (Ma), the brightness of the edge portion on the same side as the direction in which the observer 1 moves is displayed higher.
  • the above-described various methods can be applied to adjust the correction amounts of the contour image PE1 and the contour image PE2 according to the movement amount of the observer 1.
  • a case where the correction method as shown in FIG. 13 is applied will be described.
  • thereby, a stereoscopic image can be generated that allows the observer 1 to recognize the object as having moved in the same direction as the movement direction of the observer 1, without changing the position at which the object is displayed on the display device 10B.
  • the display form in FIG. 20B is maintained as the first display form (FIG. 19), as in the case of FIG. 20A.
  • when the position of the observer 1 moves further in the X-axis direction from the position shown in FIG. 20B while the display continues by the same display method as used up to FIG. 20B, the observer reaches an area where the stereoscopic image cannot be observed, as described above with reference to FIG. 19. In short, FIG. 20B shows a state in which the observer 1, moving in the X-axis direction, is located near the limit of the region within which the image can be corrected, taking as a reference Ma(i), a position suitable for observing the stereoscopic image.
  • therefore, in such a case, the display form of the display device 10B is switched to the second display form shown in FIG. 19.
  • the position where the display form is switched is set to a half position from Ma (i) to Ma ′ (i).
  • the display form up to FIG. 20B is the first display form, and the display form after the switching is the second display form.
  • FIG. 20C shows a state immediately after switching the display form.
  • since the display form is switched to the second display form (FIG. 19), the region beyond the halfway position between Ma(i) and Ma′(i), on the side farther from Ma(i), becomes an area in which a stereoscopic image can be observed.
  • thereafter, while the position of the observer 1 moves further in the X-axis direction and reaches the position Ma′(i) shown in FIG. 20D, the observer 1 can observe the stereoscopic image displayed on the display device 10B throughout the range of movement.
  • the correction shown in FIG. 20 has been described as calculating the correction amount according to the movement amount of the observer 1; however, the amount of calculation required for this correction processing can be reduced by approximating the detected movement amount of the observer 1 with discrete values based on predetermined representative values, or by approximating the correction amount of each contour image with discrete values. For example, by limiting to several the number of images that can be displayed according to the position of the observer 1 between Ma(i) shown in FIG. 20(a) and Ma′(i) shown in FIG. 20(d), the calculation load of the correction process that follows the movement of the observer 1 can be reduced.
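  • a sketch of this discrete approximation, with the number of representative levels chosen arbitrarily for illustration:

```python
def quantize_movement(move, move_min, move_max, levels=5):
    """Snap a detected movement amount to one of `levels` representative
    values between move_min and move_max (sketch). Corrected contour
    images can then be precomputed once per level instead of per frame.
    """
    step = (move_max - move_min) / (levels - 1)
    index = round((move - move_min) / step)
    index = min(max(index, 0), levels - 1)
    return move_min + index * step, index

# Example: any position between Ma(i) and Ma'(i), here normalized to
# [0, 1], maps to one of five precomputed correction images.
snapped, img_index = quantize_movement(0.37, 0.0, 1.0)  # -> (0.25, 1)
```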
  • FIG. 21 is a diagram illustrating a modification of the display device 10B in the display system 100B.
  • FIG. 21 shows an enlarged part of a cross section of the display device 10B.
  • in the above description, the display device 10B is assumed to be provided with a binocular lenticular lens type display.
  • the display device 10B in this modification shown in FIG. 21 is provided with a multi-lens (multi-view) lenticular lens type display.
  • a stereoscopic image is displayed on the display device 10B so that stereoscopic viewing is possible from the angle at which it is viewed; images captured from a plurality of angles, corresponding to the angles seen from each of a plurality of viewing positions (viewing regions), are displayed for the respective viewpoints.
  • Such a display method is sometimes called an integral method (multi-view method).
  • the multi-lens type lenticular lens type display unit 11B and display unit 12B further divide the regions of the columns S1 and S2 in the display unit 11B and the display unit 12B shown in FIG. 19 into a plurality of columns.
  • the column S1 of the display unit 11B is divided into five columns (S11, S12, S13, S14, S15), and the column S2 of the display unit 12B is divided into five columns (S21, S22, S23, S24, S25).
  • on the display surface 10S of the display device 10B, the columns are arranged in the order S11, S12, S13, S14, S15, S21, S22, S23, S24, S25, and so on.
  • a plurality of pixels are arranged side by side in the extending direction (Y-axis direction) of each column. As a result, stereoscopic images viewed from five directions can be displayed respectively.
  • the stereoscopic image displayed on the display device 10B can be observed from the position where it is most easily observed.
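As an illustrative sketch (not taken from this disclosure) of how five viewpoint images could be interleaved into the column layout S11 to S25 described above, one column of each view image is written in turn across the display surface:

    import numpy as np

    # Sketch: interleave five view images column-by-column, in the spirit
    # of the S11..S15 layout above. Shapes and names are assumptions.
    def interleave_views(views):
        """views: list of 5 arrays, each (height, width), one per viewpoint.
        Returns a (height, width * 5) array with columns in view order."""
        h, w = views[0].shape
        out = np.empty((h, w * len(views)), dtype=views[0].dtype)
        for col in range(w):
            for v, view in enumerate(views):
                # Column group for source column `col`: one column per view.
                out[:, col * len(views) + v] = view[:, col]
        return out

    views = [np.full((4, 3), v) for v in range(5)]  # dummy 5-view input
    panel = interleave_views(views)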
FIG. 22 is a diagram illustrating the method of correcting an image observed from three directions. In FIG. 22, the directions Da0, Db0, and Dc0 indicate the directions from which the stereoscopic image is most easily observed, and three stereoscopic images based on these directions are prepared. The direction from which the observer 1 looks at the display device 10B can be calculated from the relative positional relationship with the display device 10B, based on the result of detecting the observer 1. In FIG. 22, an object close to the observer 1 is indicated by the reference numeral OBJ1, and an object far from the observer 1 by the reference numeral OBJ2. Each of the stereoscopic images observed from the directions Da0, Db0, and Dc0 has an emphasized outline as a contour image for enabling stereoscopic viewing; in the following description, the amounts of contour in the stereoscopic images observed from the directions Da0, Db0, and Dc0 are taken to be equal. Depending on the viewing direction, a part of the shape of the object OBJ2 is shielded by the object OBJ1, so that only the remainder can be observed. The three cases above will now be described in order.
Taking the stereoscopic image observable from the direction Da0 as a reference, the direction observed from the positive side of the X-axis with respect to the direction Da0 is denoted Da1, and the direction observed from the negative side of the X-axis with respect to Da0 is denoted Da2. Similarly, the directions Db1 and Db2 and the directions Dc1 and Dc2 are defined with reference to the directions Db0 and Dc0, respectively. When observed from the direction Da1, the shielded range appears wider than when observed from the direction Da0; when observed from the direction Da2, the shielded range appears narrower than when observed from the direction Da0.
The display device 10B can present images in a limited number of directions, but it is difficult for it to present an image whose observation direction changes continuously. Therefore, when the display is observed from a direction for which no image has been prepared, an image prepared for a representative direction is corrected and then displayed. The correction method is described below. To the stereoscopic image of the object OBJ1 indicated by the symbol A0, a left-side edge image PE1 added to the left side of the object and a right-side edge image PE2 added to the right side are attached as the edge images PE. Similarly, to the stereoscopic image of the object OBJ2, a left-side edge image PE1' added to the left side of the object and a right-side edge image PE2' added to the right side are attached as the edge images PE.
To generate the corrected image corresponding to the direction Da1 (symbol A1), the luminance of the left-side edge image PE1 is increased and the luminance of the right-side edge image PE2 is decreased, while for the object OBJ2 the luminance of the left-side edge image PE1' is decreased and the luminance of the right-side edge image PE2' is increased. As a result, the region in which the object OBJ1 and the object OBJ2 appear to overlap is widened as perceived by the observer 1, and the observer 1 can feel as if viewing from the direction Da1. Conversely, to generate the corrected image corresponding to the direction Da2 (symbol A2), the luminance of the left-side edge image PE1 is decreased and the luminance of the right-side edge image PE2 is increased, while the luminance of the left-side edge image PE1' is increased and the luminance of the right-side edge image PE2' is decreased.
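A minimal sketch of this direction-dependent luminance re-balancing (the gain, signs, and clamping are illustrative assumptions, not the exact correction of this disclosure):

    # Sketch: shift the perceived viewing direction by re-balancing the
    # luminance of the left/right edge images. Gain and signs are assumptions.
    def adjust_edge_luminance(pe1, pe2, direction_offset, gain=0.5):
        """pe1, pe2: luminance of the left/right edge images (0..1).
        direction_offset > 0 simulates viewing from the +X side (Da1),
        direction_offset < 0 from the -X side (Da2)."""
        delta = gain * direction_offset
        # For the near object OBJ1: raise one edge, lower the other.
        pe1_new = min(1.0, max(0.0, pe1 + delta))
        pe2_new = min(1.0, max(0.0, pe2 - delta))
        return pe1_new, pe2_new

    # For the far object OBJ2 the correction is applied with the opposite
    # sign, widening or narrowing the apparent overlap region.
    pe1p, pe2p = adjust_edge_luminance(0.5, 0.5, direction_offset=-0.2)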
Next, the cases indicated by the symbols B0, B1, and B2 will be described. When observed from the direction Db1, the shielded range appears wider than in the direction Db0; when observed from the direction Db2, it appears narrower than in the direction Db0. This tendency is the same as for the direction Da0 described above, so corrected images can be generated in the same manner as for the diagrams referenced by the symbols A0, A1, and A2.
Next, the cases indicated by the symbols C0, C1, and C2 will be described. The object OBJ1 and the object OBJ2 observed from the direction Dc0 do not shield each other. When observed from the direction Dc1, the distance between the object OBJ1 and the object OBJ2 appears narrower than in the direction Dc0; when observed from the direction Dc2, the interval appears wider than in the direction Dc0. In these cases too, corrected images are generated in the same manner as for the diagrams referenced by the symbols A0, A1, and A2, so that the observer 1 perceives the interval between the object OBJ1 and the object OBJ2 as narrowed in the one case and widened in the other. In this way, when the direction from which the observer 1 views the display device 10B changes, only a slight correction of the displayed image is needed for the observer 1 to recognize that the relative positional relationship between the objects has changed, in the same manner as when observing real objects. Thus, the visibility of a stereoscopic image can be improved even when the observer 1 moves from a predetermined position where the stereoscopic image can be visually recognized.
FIG. 23 is a schematic block diagram showing the configuration of a display system 100C according to an embodiment of the present invention. The display system 100C illustrated in FIG. 23 includes an image processing device 2C and a display device 10. The image processing apparatus 2C is characterized in that it generates contour images. The image processing apparatus 2C includes a contour correction unit 210, a stereoscopic image generation unit 220C, an imaging unit 230, a detection unit 250, a control unit 260, and a storage unit 270. The contour correcting unit 210 in the present embodiment corrects at least one of the pieces of image information (image information D11P and image information D12P) supplied from the stereoscopic image generating unit 220C and outputs the result.
The stereoscopic image generation unit 220C generates, based on the display target displayed on the display unit 11 and the display unit 12, image information D11P that allows the display target to be viewed stereoscopically from a predetermined position by binocular parallax. The predetermined position is, for example, a visual recognition position of the observer 1 at which the contour is most strongly emphasized and at which the observer 1 can visually recognize the stereoscopic image. The stereoscopic image generation unit 220C generates the image information D11P with the position of the contour portion of the displayed target adjusted according to the predetermined position. For example, the stereoscopic image generation unit 220C receives the image information D11S to be displayed on the display unit 11 and the display unit 12 and, based on it, generates image information D11P such that the display target can be viewed stereoscopically from the predetermined position; at this time, it also outputs image information D12P based on the image information D11S. The stereoscopic image generation unit 220C may instead generate image information D12P that allows the display target to be viewed stereoscopically from the predetermined position based on the image information D11S. More specifically, the stereoscopic image generation unit 220C generates, as the image information D11P, an image for adding the edge image PE to the image information D11S, based on the positional relationship between the position of the observer 1 and the display device 10. Further, the stereoscopic image generation unit 220C outputs the image information D11P with its magnification and display position relative to the image information D11S determined according to the positional relationship between the position of the observer 1 and the display device 10.
The stereoscopic image generation unit 220C may also generate image information D11P in which the display target, drawn by the perspective method, is displayed as an image rotated about a virtual axis according to the predetermined position. In this case, the stereoscopic image generation unit 220C receives the information (D11S) to be displayed on the display unit 11 and the display unit 12 and, based on the supplied information (D11S), generates the image information D11P so that the display target can be viewed stereoscopically from the predetermined position by binocular parallax. The process of displaying the display target as an image rotated about the axis can be applied to information having three-dimensional information; details of the generation method are described later. The stereoscopic image generating unit 220C may also generate the image information D11P by deforming the shape of the display target displayed based on the image information D11P according to the displacement of the predetermined position, thereby setting the position of the contour portion of the display target.
The stereoscopic image generation unit 220C generates image information for displaying the display target by transmitting one of the image P11 displayed on the display unit 11 and the image P12 displayed on the display unit 12 through the other, and may adjust the position of the contour portion of the display target according to the predetermined position. For example, the display unit 11 is arranged at a distance from the display unit 12 in the normal direction (-Z direction) of the display unit 12, and the display unit 12 is a transmissive display unit. In this case, the stereoscopic image generation unit 220C generates the image information D11P as image information in which the position of the contour of the display target has been adjusted according to the predetermined position, so that the image P11 displayed on the display unit 11 is displayed through the image P12 displayed on the display unit 12. In other words, the stereoscopic image generation unit 220C generates the image information D11P to be displayed on the display unit 11, which is arranged at a distance from the display unit 12 in the normal direction (-Z direction) of the display unit 12. Further, the stereoscopic image generation unit 220C generates, as the image information D11P, information obtained by extracting the information indicating the contour portion of the display target from the image information D12P for displaying an image on the display unit 12, out of the image information displayed stereoscopically at the predetermined position.
The stereoscopic image generating unit 220C extracts feature points for stereoscopic display from the base image information (image information D11S), and generates a stereoscopic image in which the extracted feature points are emphasized. The image information D11S on which the stereoscopic image generation unit 220C operates may be a still image, a CG image, or a moving image, and the feature points extracted from the base image information D11S for stereoscopic display can be optimized according to that base image information. For example, the stereoscopic image generation unit 220C can select processing suited to the main subject based on information associated with the image or on predetermined information. For example, the image processing apparatus 2C (stereoscopic image generation unit 220C) extracts a person or an in-focus subject as a feature point for stereoscopic display; known methods can be applied for extracting a person or an in-focus subject from the base image. The image processing device 2C (stereoscopic image generation unit 220C) then generates stereoscopic image information based on the extracted feature points (such as a person) so that the main subject corresponding to the feature points is displayed stereoscopically.
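One known way to locate an in-focus subject is a sharpness measure such as the variance of a Laplacian response; the sketch below is an illustrative implementation of that idea (the block size and the toy input are assumptions, not taken from this disclosure):

    import numpy as np
    from scipy.ndimage import laplace

    # Sketch: locate the most in-focus block of a grayscale image using the
    # variance-of-Laplacian focus measure. The block size is an assumption.
    def sharpest_block(gray, block=32):
        resp = laplace(gray.astype(np.float64))
        h, w = gray.shape
        best, best_pos = -1.0, (0, 0)
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                v = resp[y:y + block, x:x + block].var()
                if v > best:
                    best, best_pos = v, (y, x)
        return best_pos  # top-left corner of the sharpest block

    img = np.zeros((64, 64)); img[16:32, 16:32] = np.random.rand(16, 16)
    y, x = sharpest_block(img)  # the textured (sharp) region wins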
When the base image is a still image, the image processing device 2C may extract the main subject as a feature point for performing stereoscopic display; the main subject is indicated by, for example, meta information associated with the image. The image processing device 2C (stereoscopic image generation unit 220C) then generates stereoscopic image information so that the extracted main subject is displayed stereoscopically.
When the base image is a still image, the image processing apparatus 2C may also determine whether a dividing line exists at a position assumed from the golden ratio, and detect feature points based on the determination result. The image processing apparatus 2C may further extract feature points according to their arrangement in the screen. For example, by setting the extraction priority according to the position in the screen, the probability that feature points are extracted from the corners of the screen can be reduced, so that the positions from which feature points are extracted are not biased.
When it is determined that the scenery in the base image is the main subject, the image processing apparatus 2C (stereoscopic image generation unit 220C) may preferentially extract the distant scenery as feature points. Alternatively, even for such a landscape image, when an in-focus subject on the near side can be detected, the feature points may be extracted preferentially from that foreground subject. This selection may be switched by a setting.
When the base image is a moving image, the image processing device 2C can extract the main subject based on difference information between a plurality of images. For example, the image processing apparatus 2C can extract, as a feature point, a subject with a large amount of movement in the screen or a subject captured across a plurality of consecutive images. Known methods, such as inter-frame differencing and vector processing of the movement amount, can be applied to the process of extracting, from images captured over a plurality of consecutive frames, a moving subject (its movement amount and direction) whose motion differs from that of the background. The image processing apparatus 2C (stereoscopic image generation unit 220C) specifies the main subject based on the extracted subject and its feature points, and generates stereoscopic image information so that the specified main subject is displayed stereoscopically.
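A minimal inter-frame differencing sketch (the threshold and the bounding-box heuristic are illustrative assumptions):

    import numpy as np

    # Sketch: inter-frame differencing to find a moving subject. The
    # threshold and the bounding-box heuristic are assumptions.
    def moving_region(prev_frame, frame, thresh=25):
        """prev_frame, frame: grayscale uint8 arrays of equal shape.
        Returns a bounding box (y0, y1, x0, x1) of changed pixels, or None."""
        diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
        mask = diff > thresh
        if not mask.any():
            return None  # background only; no moving subject detected
        ys, xs = np.nonzero(mask)
        return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

    a = np.zeros((10, 10), np.uint8); b = a.copy(); b[2:5, 3:6] = 200
    print(moving_region(a, b))  # -> (2, 5, 3, 6)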
The main subject can also be extracted from focal length information based on history information at the time of shooting and from distance information to the subject. For example, using the focal length information and the distance information from the shooting history, the image processing apparatus 2C extracts the subject at the in-focus position at the time of shooting as the main subject, and the image processing device 2C (stereoscopic image generation unit 220C) generates stereoscopic image information so that the extracted main subject is displayed stereoscopically.
When the base image is a CG (computer graphics) image, information in the depth direction (a depth map) about the display target can be created using the information of the 3D model. A display target can then be extracted based on this depth-direction information (depth map), or the direction in which an object is displayed on the display device 10 can be set.
FIG. 24 is a diagram illustrating the process of displaying, in a pseudo-rotated manner, a display target drawn by the perspective method. FIGS. 24(a) and 24(b) show a rectangular parallelepiped in perspective (two-point perspective). FIG. 24(a) shows the rectangular parallelepiped when FP1 and FP2 are defined as the vanishing points, that is, a rectangular parallelepiped having the illustrated vertices (QA, QB, QC, QD, QE, QF, QG); vertices hidden inside the body are not included. Consider, for example, virtually generating an image of the rectangular parallelepiped of FIG. 24(a) as seen from a position closer to the front of the surface formed by QA-QB-QF-QG than the current viewpoint. When the viewpoint is moved in this way, there is a place from which the rectangular parallelepiped can be seen as shown in FIG. 24(b). The conversion from FIG. 24(a) to FIG. 24(b) can be obtained as if the coordinate system containing the rectangular parallelepiped were rotated about the rotation axis RA shown in the figure.
In this way, the stereoscopic image generation unit 220C uses an image obtained by rotating the rectangular parallelepiped (display target) drawn by the perspective method about the rotation axis RA (virtual axis) according to the predetermined position; images of the rectangular parallelepiped viewed from various directions can be obtained by calculation. For example, the movement of the position of the observer 1 may be detected, and the rectangular parallelepiped displayed on the display device 10 rotated in conjunction with the detected amount of movement. By linking the rotation to the movement of the observer 1 in this way, the displayed image can be rotated without any special input means, and the shape of the displayed rectangular parallelepiped is deformed in accordance with the rotation linked to the movement of the observer 1.
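A minimal sketch of such movement-linked rotation, with the coupling gain as an illustrative assumption:

    import numpy as np

    # Sketch: rotate the display target's vertices about a vertical axis RA
    # by an angle tied to the observer's lateral movement. The gain is an
    # illustrative assumption.
    def rotate_about_y(vertices, observer_dx, gain=0.8):
        """vertices: (N, 3) array of 3D points; observer_dx: lateral
        movement of the observer. Returns the rotated (N, 3) points."""
        theta = gain * observer_dx
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, 0.0, s],
                        [0.0, 1.0, 0.0],
                        [-s, 0.0, c]])
        return vertices @ rot.T

    cuboid = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
    rotated = rotate_about_y(cuboid, observer_dx=0.1)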
In order to stereoscopically display an image viewed from an arbitrary direction, the stereoscopic image generation unit 220C generates a contour image in which the input image information and the position of the contour portion of the displayed rectangular parallelepiped are set according to the display conditions. Thereby, even when the display conditions change, the time until the stereoscopic image is displayed can be shortened. In this way, various images such as still images, moving images, and CG images can be displayed, and the object to be displayed in an emphasized manner can be chosen according to the characteristics of each type of image; by emphasizing the main subject, the stereoscopic expressive power of the generated stereoscopic image information can be enhanced.
The contour correcting unit 210 in the image processing apparatus 2C corrects the contour images based on the image information D11P and the image information D12P (stereoscopic image information) generated by the stereoscopic image generating unit 220C. The contour correction unit 210 corrects the position and direction in which the contour image is arranged, and the brightness balance, based on the representative position of the user. For example, the contour correcting unit 210 sets the brightness of the contour portion according to the amount of movement, obtained by calculation, of the position of the contour image. The methods described in the above embodiments can be applied to the correction processing performed by the contour correction unit 210.
FIG. 25 is a flowchart illustrating the processing performed by the image processing apparatus 2C. First, the detection unit 250 detects the position of the observer 1 based on image information captured by the imaging unit 230 with the observer 1 included in the imaging range (step S10). Next, the stereoscopic image generation unit 220C generates a contour image of a stereoscopic image that can be viewed stereoscopically from the position of the observer 1 detected in step S10 (step S20). In other words, the stereoscopic image generation unit 220C generates a contour image in at least one piece of the image information constituting the superimposed image, so that the stereoscopic image can be viewed from the detected position of the observer 1; it generates at least the image information (contour image) D11P. Next, the contour correction unit 210 corrects the contour portion of the generated contour image based on the detected position of the observer 1 (step S30). For example, where the generated contour image is at least the contour image D11P, the contour correction unit 210 corrects at least the contour portion of the contour image D11P to generate the image information D11. Finally, the control unit 260 displays the corrected contour images on the display unit 11 and the display unit 12 of the display device 10 (step S40); for example, it causes the display unit 11 to display the image information D11 including the corrected contour image.
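The four steps S10 to S40 can be pictured as a per-frame loop. The following toy sketch stands in for the disclosed units (detection unit 250, stereoscopic image generation unit 220C, contour correction unit 210); every function here is a deliberately simplistic placeholder, not the disclosed implementation:

    import numpy as np

    # Toy end-to-end sketch of the FIG. 25 loop (steps S10-S40).
    def detect_observer(camera_img):                     # step S10
        ys, xs = np.nonzero(camera_img == camera_img.max())
        return float(xs.mean())                          # lateral position only

    def generate_contour(base_img):                      # step S20
        gx = np.abs(np.diff(base_img.astype(float), axis=1))
        return np.pad(gx, ((0, 0), (0, 1)))              # simple edge image

    def correct_contour(contour, position, gain=0.1):    # step S30
        shift = int(round(gain * position))              # lateral re-alignment
        return np.roll(contour, shift, axis=1)

    camera = np.zeros((8, 8)); camera[4, 6] = 1.0        # observer at x = 6
    base = np.zeros((8, 8)); base[:, 3:5] = 1.0          # display target
    frame_for_display_unit_11 = correct_contour(          # step S40
        generate_contour(base), detect_observer(camera))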
In this manner, the image processing device 2C can correct the contour of the image information displayed on the display device 10, so that the stereoscopic image remains visible even when the observer 1 moves from the predetermined position. In the example shown in the first embodiment and in each modification, a contour image showing only the contour was illustrated and described as the contour image D11P; however, instead of generating the contour image D11P alone, new image information synthesized based on the image information of the above-described contour image D11P may be used as the contour image. The image information newly obtained by such synthesis corresponds to the original image with its contour emphasized. As described above, with the display system 100C shown in the present embodiment, the visibility of a stereoscopic image can be improved even when the observer 1 moves from a predetermined position where the stereoscopic image can be visually recognized.
The display system 100 (100A, 100B, 100C) shown in the present embodiment detects the position of the moving observer 1 (user) and displays, on the display device 10 (10A, 10B), a stereoscopic image corresponding to the position of the observer 1 (user). In other words, the edge portions in the images observed by the left eye and the right eye are corrected so as to generate parallax, so that the observer 1 can view the image stereoscopically from wherever the observer 1 is. In each of the above embodiments, a person present on the display surface side of the display device 10 (10A, 10B) of the display system 100 (100A, 100B, 100C) is called the observer 1. The observer 1 is, for example, a person who is looking at the display surface of the display device 10, a person who is trying to see it, a person who can see it, or simply a person who is present on the display surface side. The viewing position at which a stereoscopic image can be viewed stereoscopically is a position within the viewing area in which stereoscopic viewing is possible, determined by, for example, the distance and angle with respect to the display surface displaying the stereoscopic image. In the above description, this "viewing area", or a "viewing position" within the "viewing area", is referred to simply as the "viewing position".
The contour of the stereoscopic image exemplified in each of the above embodiments was exemplified in the vertical direction (Y-axis, FIG. 1) of the display device 10, but the correction can also be applied to contours in the horizontal and oblique directions of the display device 10. In that case, the correction amount for correcting the contour image in the direction orthogonal to the extending direction of the target contour may be adjusted. The position of the pixel at which the correction of the contour image is performed has been described, in each of the above embodiments, using the example of a pixel adjacent to the position of the contour before correction; however, the position may instead be a predetermined number of pixels away from the position of the contour before correction. For example, when a contour is drawn by connecting a plurality of pixels, the correction position may be separated by a predetermined number of pixels corresponding to the width of the contour.
As the display unit that displays the contour image, the examples showed display on the display unit 11 on the depth side ((-Z)-axis direction) among the display units arranged so that the displayed images overlap; however, the display unit 12 disposed in front of the display surface 11S of the display unit 11 may be used instead. In short, the contour image can be displayed on at least one of the display units arranged so that the displayed images overlap, and it may also be displayed on a plurality of display units. Moreover, the display destination is not restricted to the display device 10. For example, the display system 100 may supply image information to be displayed on a head-mounted display device, a so-called head-mounted display (HMD), instead of the display device 10, or to a stereoscopic image display device using lens-shutter glasses instead of the display device 10.
FIG. 27 is a schematic diagram illustrating an example of the schematic configuration of a display device 2100 according to the fifth embodiment of the present invention. In the following, an XYZ rectangular coordinate system is set, and the positional relationship of each part is described with reference to this XYZ rectangular coordinate system. The direction in which the display device 2100 displays an image is taken as the positive direction of the Z-axis, and the mutually orthogonal directions in the plane perpendicular to the Z-axis direction are taken as the X-axis and Y-axis directions. Here, the X-axis direction is the horizontal direction of the display device 2100, and the Y-axis direction is the vertically upward direction of the display device 2100.
First, an outline of the configuration of the display device 2100 is described. The display device 2100 includes, as a display unit 2010, a first display unit 2011 and a second display unit 2012. The first display unit 2011 has a first display surface 2011S that displays an image at the depth position Z1, and the second display unit 2012 has a second display surface 2012S that displays an image at the depth position Z2. The user 1 views the first display surface 2011S and the second display surface 2012S from a viewpoint position VP predetermined at the depth position ZVP. Since the second display surface 2012S is a transmissive screen, when the user 1 views the second display surface 2012S from the viewpoint position VP, the image displayed on the first display surface 2011S and the image displayed on the second display surface 2012S appear to overlap. In the example of FIG. 27, the first display surface 2011S displays a cubic image P11 and the second display surface 2012S displays a cubic image P12. The size and position of the image P12 displayed on the second display surface 2012S are set in advance so that, when seen by the user 1 from the viewpoint position VP, the ridge lines of the cube indicated by the image P12 appear to overlap the ridge lines of the cube indicated by the image P11.
The image P12 displayed on the second display surface 2012S may be an image showing the display target OBJ as it is, or an image showing the outline (edge portion) of the display target OBJ in an emphasized manner. The image P12 may also be a contour image showing the contour portion of the display target OBJ. Such a contour image is generated by extracting the contour portion of the image P11 displayed on the first display surface 2011S using, for example, a differential filter. The contour image may be an image in which the width of the pixels representing the contour portion is one pixel, or an image in which that width is a plurality of pixels.
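As a hedged illustration of such a differential filter, a Sobel gradient magnitude with a threshold produces a contour image whose line width depends on the threshold (the filter choice and threshold are assumptions, not the disclosure's exact filter):

    import numpy as np
    from scipy.ndimage import sobel

    # Sketch: derive a contour image P12 from the base image P11 with a
    # differential (Sobel) filter. Filter and threshold are assumptions.
    def contour_image(p11, thresh=0.2):
        g = p11.astype(np.float64)
        mag = np.hypot(sobel(g, axis=0), sobel(g, axis=1))
        mag /= mag.max() if mag.max() > 0 else 1.0
        return (mag > thresh).astype(np.float64)  # binary contour image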
In the following, the case where the image P12 is a contour image, that is, the case where the second display surface 2012S displays the contour image P12, is described.
As described above, the second display surface 2012S is a transmissive screen located on the near side (+Z side) of the first display surface 2011S in the depth direction as seen from the viewpoint position VP. Conversely, the first display surface 2011S may be a transmissive screen located on the near side (+Z side) of the second display surface 2012S as seen from the viewpoint position VP. When the images displayed on the two display surfaces are precisely aligned, the display device 2100 can cause the user 1 to perceive a highly accurate stereoscopic image. For example, if the position of the ridgeline of the cube displayed on the first display surface 2011S as seen from the viewpoint position VP and the position of the ridgeline of the cube displayed on the second display surface 2012S are precisely aligned, the display device 2100 can cause the user 1 to perceive a highly accurate stereoscopic image.
The first display surface 2011S and the second display surface 2012S are, for example, liquid crystal displays or screens of liquid crystal projectors, and have two-dimensionally arranged pixels. The second display surface 2012S displays the ridge line of the cube at the pixel that includes the position Pt2 corresponding to the position Pt1 of the ridge line of the cube displayed on the first display surface 2011S as seen from the viewpoint position VP. Each pixel has an area corresponding to the definition (resolution) of the second display surface 2012S. When the position Pt2 coincides with the center position Pt3 of that pixel, the position Pt1 of the cube's ridgeline displayed on the first display surface 2011S as seen from the viewpoint position VP and the position of the ridgeline of the cube displayed on the second display surface 2012S are precisely aligned. When the position Pt2 and the center position Pt3 of the pixel do not coincide, however, the alignment accuracy is lowered. In this case, the display device 2100 may reduce the accuracy of the stereoscopic image perceived by the user 1 (for example, the sense of depth of the stereoscopic image). This alignment accuracy between the images is described with reference to FIGS. 28 and 29.
FIG. 28 is a schematic diagram illustrating an example of the pixel configuration of the second display surface 2012S of the present embodiment. The second display surface 2012S has a plurality of pixels arranged two-dimensionally in the XY plane, some of which indicate the ridge lines of the cube that is the display target OBJ. These pixels Px (pixels Px11 to Px33) indicating the ridgelines of the cube are described with reference to FIG. 29.
FIG. 29 is a schematic diagram illustrating an example of the pixels Px of the second display surface 2012S of the present embodiment. Here, the plurality of pixels Px indicating the cube's ridge lines are nine pixels consisting of a central pixel Px22 and the surrounding pixels Px11 to Px33. The central pixel Px22 is the pixel that includes the position Pt2 corresponding to the position Pt1 of the cube's ridgeline displayed on the first display surface 2011S as seen from the viewpoint position VP. The position Pt3 described above is here the center position of the pixel Px22. In the following, the case where the position Pt2 is shifted by a distance ΔPt in the (+Y) direction with respect to the position Pt3 is described. If the position Pt2 and the pixel's center position Pt3 coincided, the position Pt1 of the cube's ridgeline displayed on the first display surface 2011S as seen from the viewpoint position VP and the position of the ridgeline displayed on the second display surface 2012S would be precisely aligned. Since the position Pt2 is shifted from the center position Pt3 by the distance ΔPt in the (+Y) direction, however, the ridge line is displayed shifted in the (-Y) direction from the original position where the cube's ridge line should be displayed.
Therefore, the display device 2100 of this embodiment includes a contour correction unit 2013 that corrects the contour image P12. The contour correcting unit 2013 corrects the contour image P12 based on the direction of the deviation between the position Pt2 and the position Pt3. By means of the contour correction unit 2013, the display device 2100 improves the accuracy of the stereoscopic image perceived by the user 1. A specific configuration of the display device 2100 including the contour correction unit 2013 is described next.
FIG. 30 is a schematic diagram illustrating an example of the specific configuration of the display device 2100 of the present embodiment. The display device 2100 includes the first display unit 2011, the second display unit 2012, and the contour correction unit 2013. The first display unit 2011 includes the first display surface 2011S and emits light R11 toward the user 1 by displaying the image P11 on the first display surface 2011S; the light R11 passes through the second display surface 2012S, which is a transmissive screen, and reaches the user 1.
The first display surface 2011S converts the image information D11 supplied from the image supply device 2002 into the image P11 and displays it. The image information D11 is, for example, image information representing the cube that is the display target OBJ. The second display unit 2012 includes the second display surface 2012S and emits light R12 toward the user 1 by displaying the contour image P12 on the second display surface 2012S. The second display surface 2012S converts corrected image information D12C, obtained by the contour correcting unit 2013 correcting the image information D12 supplied from the image supply device 2002, into the contour image P12 and displays it. The image information D12 is, for example, image information of a contour image indicating the contour portion of the cube that is the display target OBJ, and the corrected image information D12C is the image information of the corrected contour image obtained by the contour correcting unit 2013 correcting the image information D12. By displaying the image P11 and the contour image P12 indicating the contour portion of the display object OBJ so that they overlap, the display device 2100 can give a stereoscopic effect to the display object OBJ observed by the user 1. That is, when the user 1 sees the image P11 and the contour image P12 overlapping, the user 1 perceives a stereoscopic image in which the display target OBJ appears to jump out in the Z-axis direction.
Next, the mechanism by which the images displayed on the display device 2100 give the user 1 a stereoscopic effect is described with reference to FIGS. 31 to 37D. FIG. 31 is a schematic diagram illustrating an example of the image P11 of the display target OBJ according to the present embodiment. The case where the display target OBJ is a square pattern is described below. This square pattern is one in which four sides of equal length intersect at right angles at each vertex, and its four sides are contour lines separating the outside of the pattern from the inside. An observer looking at the square pattern perceives a contour line as an edge portion E of the pattern when the difference in brightness between the outside and the inside of the pattern is large; that is, an edge portion E is a portion of the display object OBJ at which the difference in surrounding brightness is relatively larger than the difference in brightness in other portions. In the square pattern, each side can be an edge portion E; here, of these, the two sides parallel to the Y-axis direction of the pattern are referred to as the edge portion E1 and the edge portion E2, respectively.
FIG. 32 is a schematic diagram illustrating an example of the contour image P12 of the present embodiment. The contour image P12 is an image including an edge image PE1 indicating the edge portion E1 of the square pattern and an edge image PE2 indicating the edge portion E2. That is, when the display target OBJ is the square pattern described above, the second display surface 2012S displays the contour image P12 including the edge image PE1 and the edge image PE2 corresponding to the edge portion E1 and the edge portion E2, respectively. FIG. 33 is a schematic diagram illustrating an example of the positional relationship among the image P11, the contour image P12, and the viewpoint position VP in the present embodiment. From the viewpoint position VP at the position ZVP, the contour image P12 at the position Z2 and the image P11 at the position Z1 are seen superimposed. Here, the position in the X direction of the edge portion E1 of the square pattern is the position X2, and the position in the X direction of the edge portion E2 is the position X5. The second display surface 2012S displays the contour image P12 so that, from the viewpoint position VP, the edge portion E1 at the position X2 and the edge image PE1 of the contour image P12 appear to overlap, and the edge portion E2 at the position X5 and the edge image PE2 of the contour image P12 appear to overlap.
When the observer superimposes the contour image P12 displayed in this way on the image P11 showing the display object OBJ (for example, the square pattern), steps of brightness too small to be individually recognized arise in the observer's retinal image. In general, a virtual contour is perceived between such steps of brightness (for example, luminance), and the contour image P12 and the image P11 are perceived as one image. Between the left eye and the right eye, this virtual contour is perceived slightly shifted, and the shift acts as binocular parallax, so the apparent depth position of the image P11 changes. In the following, the optical image IML seen by the left eye L and the optical image IMR seen by the right eye R are described in this order, together with the mechanism by which the apparent depth position of the image P11 changes.
FIG. 34 is a schematic diagram illustrating an example of the optical images IM seen by the observer's eyes in the present embodiment, and FIG. 34(L) is a schematic diagram showing an example of the optical image IML seen by the observer's left eye L. When the image P11 and the contour image P12 are viewed from the position of the left eye L at the viewpoint position VP (position ZVP), the edge image PE1 and the edge portion E1 appear to overlap in the range from the position X2 to the position X3. Since the edge portion E1 of the square pattern is at the position X2, to the observer's left eye L the edge image PE1 appears to overlap the edge portion E1 on the inner side of the square pattern (the +X direction) relative to the edge portion E1. Likewise, from the position of the left eye L, the edge image PE2 and the edge portion E2 appear to overlap in the range from the position X5 to the position X6. Since the edge portion E2 of the square pattern is at the position X5, to the observer's left eye L the edge image PE2 appears to overlap the edge portion E2 on the outer side of the square pattern (the +X direction) relative to the edge portion E2.
FIG. 35 is a graph showing an example of the brightness of the optical images IM at the viewpoint position VP of the present embodiment, and FIG. 35(L) is a graph showing the brightness of the optical image IML at the position of the left eye L. At the position of the left eye L, an optical image IML is formed whose brightness is the combination of the brightness of the image P11 and the brightness of the contour image P12L seen from the position of the left eye L. As a specific example, the brightness inside the square pattern as seen from the viewpoint position VP is the brightness BR2, and the brightness outside the square pattern is 0 (zero). The edge portion E1 of the square pattern at the position Z1 is at the position X2, and the edge portion E2 is at the position X5; the brightness of the square pattern seen from the viewpoint position VP is therefore BR2 from the position X2 to the position X5, and 0 (zero) at positions in the (-X) direction from the position X2 and in the (+X) direction from the position X5. The brightness of the edge image PE1 and the edge image PE2 of the contour image P12 seen from the viewpoint position VP is the brightness BR1. As described above, the edge image PE1 of the contour image P12L seen from the position of the left eye L overlaps the edge portion E1 of the square pattern in the range from the position X2 to the position X3, and the edge image PE2 overlaps the edge portion E2 in the range from the position X5 to the position X6. Thus the brightness of the contour image P12L seen from the viewpoint position VP is BR1 at the positions X2 to X3 and X5 to X6, and 0 (zero) at the other positions in the X direction. Combining these, the brightness of the optical image IML is the brightness BR3 (the combined brightness of BR1 and BR2) from the position X2 to the position X3, BR2 from the position X3 to the position X5, BR1 from the position X5 to the position X6, and 0 (zero) at positions in the (-X) direction from the position X2 and in the (+X) direction from the position X6.
FIG. 36 is a graph showing an example of the contour portion of the image P11 perceived by the observer from the optical images, and FIG. 36(L) shows the contour portion perceived from the optical image IML at the position of the observer's left eye L. Here, the contour portion of the image P11 is the portion, within the part of the optical image showing the image P11, at which the change in brightness is larger than the change in brightness in the surrounding portions. The distribution of brightness of the image recognized by the observer through the optical image IML formed on the retina of the left eye L at the viewpoint position VP is as shown by the waveform WL in FIG. 36(L). The observer perceives the position on the X-axis at which the change in brightness of the recognized optical image IML is greatest (that is, at which the gradient of the waveform WL is maximal) as the contour portion of the image P11 being observed. Specifically, an observer observing the optical image IML perceives the position XEL shown in FIG. 36(L) (that is, the position at the distance LEL from the origin O of the X-axis) as the contour of the square pattern. The optical image IML seen by the observer's left eye L and the position of the contour portion given by it have now been described; next, the optical image IMR seen by the observer's right eye R and the position of the contour portion given by it are described.
When the image P11 and the contour image P12 are viewed from the position of the right eye R at the viewpoint position VP, the edge image PE2 and the edge portion E2 appear to overlap in the range from the position X4 to the position X5. This differs from the view at the position of the left eye L, where the edge image PE2 and the edge portion E2 appear to overlap in the range from the position X5 to the position X6. Further, as described above, since the edge portion E2 of the square pattern is at the position X5, to the observer's right eye R the edge image PE2 appears to overlap the edge portion E2 on the inner side of the square pattern (the -X direction) relative to the edge portion E2. This differs from the observer's left eye L, for which the edge image PE2 appears to overlap the edge portion E2 on the outer side of the square pattern (the +X direction) relative to the edge portion E2.
FIG. 35(R) is a graph showing an example of the brightness of the optical image IMR at the position of the right eye R at the viewpoint position VP. At the position of the right eye R, an optical image IMR is formed whose brightness is the combination of the brightness of the image P11 and the brightness of the contour image P12R seen from the position of the right eye R. The brightness of the square pattern (image P11) seen from the viewpoint position VP is the same as at the position of the left eye L. As a specific example of the brightness of the contour image P12R seen from the viewpoint position VP, the brightness of the edge image PE1 and the edge image PE2 is the brightness BR1. The edge image PE1 of the contour image P12R seen from the position of the right eye R overlaps the edge portion E1 of the square pattern in the range from the position X1 to the position X2, and the edge image PE2 overlaps the edge portion E2 in the range from the position X4 to the position X5; this differs from the contour image P12L, whose edge image PE2 overlaps the edge portion E2 in the range from the position X5 to the position X6. Thus the brightness of the contour image P12R seen from the viewpoint position VP is BR1 at the positions X1 to X2 and X4 to X5, and 0 (zero) at the other positions in the X direction. Combining these, the brightness of the optical image IMR is BR1 from the position X1 to the position X2, BR2 from the position X2 to the position X4, BR3 from the position X4 to the position X5, and 0 (zero) at positions in the (-X) direction from the position X1 and in the (+X) direction from the position X5. This differs from the brightness of the optical image IML, which is BR3 from the position X2 to the position X3, BR2 from the position X3 to the position X5, and BR1 from the position X5 to the position X6.
FIG. 36(R) is a graph showing an example of the contour portion of the image P11 perceived by the observer from the optical image IMR at the position of the right eye R. The distribution of brightness of the image recognized by the observer through the optical image IMR formed on the retina of the right eye R at the viewpoint position VP is as shown by the waveform WR in FIG. 36(R). The observer perceives the position on the X-axis at which the change in brightness of the recognized optical image IMR is greatest (that is, at which the gradient of the waveform WR is maximal) as the contour portion of the image P11 being observed. Specifically, an observer observing the optical image IMR perceives the position XER shown in FIG. 36(R) (that is, the position at the distance LER from the origin O of the X-axis) as the contour of the square pattern; this differs from the position at the distance LEL from the origin O perceived by the observer through the optical image IML. The observer thus perceives the position XEL of the contour observed by the left eye L and the position XER of the contour observed by the right eye R as binocular parallax, and on the basis of this binocular parallax of the contour portion perceives the square pattern as a stereoscopic (three-dimensional) image.
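To make this mechanism concrete, the sketch below builds the two combined brightness profiles around the edge portion E2, applies a crude retinal blur, and takes the maximum-gradient position of each profile as the perceived contour; every numeric value (positions, BR1, BR2, the blur width) is illustrative, not taken from this disclosure:

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    # Sketch: perceived contour = maximum-gradient point of the blurred
    # brightness profile. Only the right-hand edge E2 is modeled so that
    # the search is not captured by the pattern's left edge.
    x = np.linspace(0.0, 10.0, 2001)
    BR1, BR2 = 0.4, 1.0
    X4, X5, X6 = 4.5, 5.0, 5.5

    def right_side_profile(edge_lo, edge_hi):
        """Square pattern filling the left half (BR2 for x <= X5) plus an
        edge image of brightness BR1 on [edge_lo, edge_hi]."""
        b = np.where(x <= X5, BR2, 0.0)
        b += np.where((x >= edge_lo) & (x <= edge_hi), BR1, 0.0)
        return gaussian_filter1d(b, sigma=80)  # sigma in samples (~0.4 in x)

    iml = right_side_profile(X5, X6)  # left eye: PE2 outside the pattern
    imr = right_side_profile(X4, X5)  # right eye: PE2 inside the pattern
    x_el = x[np.argmax(np.abs(np.gradient(iml, x)))]
    x_er = x[np.argmax(np.abs(np.gradient(imr, x)))]
    parallax = x_el - x_er            # > 0: perceived as binocular parallax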
As described with reference to FIG. 29, the contour correction unit 2013 corrects the contour image P12 based on the position Pt2 on the second display surface 2012S, which corresponds to the position Pt1 of the ridgeline of the cube displayed on the first display surface 2011S as seen from the viewpoint position VP, and on the center position Pt3 of the pixel Px22 of the contour image P12. That is, the contour correcting unit 2013 corrects the contour image P12 based on the position Pt3 of the pixel Px22 (contour pixel) that displays the contour image P12 among the pixels of the second display unit 2012, and on the position Pt2 (contour position) on the second display unit 2012 determined from the position Pt1 of the corresponding contour on the first display unit 2011 and the predetermined viewpoint position VP. Here, the position Pt2 is the position on the second display surface 2012S corresponding to the contour displayed on the first display surface 2011S when the first display surface 2011S and the second display surface 2012S are viewed from the viewpoint position VP. If the contour image displayed on the second display surface 2012S were displayed centered on the position Pt2, the image P11 and the contour image P12 would be precisely aligned. Therefore, as shown in the figure, when the position Pt2 deviates from the center position Pt3 of the pixel, the contour correcting unit 2013 corrects the pixel values of the pixels Px11 to Px33. Specifically, the contour correcting unit 2013 corrects the pixel values of the pixels surrounding the pixel Px22 (the pixels Px11 to Px33) so that the position Pt2 becomes the center of gravity when those pixel values are averaged with weights based on distance in the XY plane (a concrete sketch of this weighting follows the direction-specific examples below).
For example, as shown in FIG. 37A, when the position Pt2 is shifted in the (+Y) direction with respect to the center position Pt3 of the pixel Px22, the contour correction unit 2013 corrects the pixel value of the pixel Px12, which is adjacent to the pixel Px22 in the (+Y) direction. As shown in FIG. 37B, when the position Pt2 is shifted in the (-Y) direction with respect to the center position Pt3 of the pixel Px22, the contour correcting unit 2013 corrects the pixel value of the pixel Px32, which is adjacent to the pixel Px22 in the (-Y) direction. As shown in FIG. 37C, when the position Pt2 is shifted in the (-X) direction with respect to the center position Pt3 of the pixel Px22, the contour correcting unit 2013 corrects the pixel value of the pixel Px21, which is adjacent to the pixel Px22 in the (-X) direction. Here, the pixel Px21 is a pixel that displays the contour portion indicated by the contour image P12 even before the correction; in this case, the contour correcting unit 2013 corrects the pixel value of the pixel Px21 to a value obtained by adding the corrected pixel value derived from the pixel Px22 to the pixel value that the pixel Px21 had in the contour image P12 before the correction. Likewise, as shown in FIG. 37D, when the position Pt2 is shifted in the (+X) direction with respect to the center position Pt3 of the pixel Px22, the contour correction unit 2013 corrects the pixel value of the pixel Px23, which is adjacent to the pixel Px22 in the (+X) direction.
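Read as a concrete sketch, the weighted-average (centroid) condition can be realized by bilinearly distributing the contour intensity over the pixels nearest the sub-pixel position Pt2; the bilinear weighting is an assumed realization, not the disclosure's exact formula:

    import numpy as np

    # Sketch: distribute a contour pixel's value over the 2x2 neighborhood
    # so that the intensity-weighted centroid lands on the sub-pixel contour
    # position Pt2. Bilinear weights are an assumed concrete realization of
    # the weighted-average description above; values add to existing ones,
    # matching the additive treatment of the pre-existing contour pixel Px21.
    def splat_contour(image, pt2_y, pt2_x, value=1.0):
        y0, x0 = int(np.floor(pt2_y)), int(np.floor(pt2_x))
        fy, fx = pt2_y - y0, pt2_x - x0      # sub-pixel offsets in [0, 1)
        for dy, wy in ((0, 1.0 - fy), (1, fy)):
            for dx, wx in ((0, 1.0 - fx), (1, fx)):
                image[y0 + dy, x0 + dx] += value * wy * wx
        return image

    img = np.zeros((5, 5))
    # Pt2 lies 0.3 pixels in +row and 0.2 pixels in +column from pixel (2, 2).
    splat_contour(img, 2.3, 2.2)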
In this way, the contour correcting unit 2013 corrects the pixel values of the pixels surrounding (for example, adjacent to) the pixel Px22 (contour pixel) that displays the contour image P12 among the pixels of the second display unit 2012. Accordingly, the display device 2100 can improve the accuracy of alignment between the image P11 displayed on the first display surface 2011S and the contour image P12 displayed on the second display surface 2012S.
The contour correction unit 2013 has been described here as correcting the pixel values of the pixels surrounding (for example, adjacent to) the pixel Px22 (contour pixel) displaying the contour image P12, but the configuration is not limited to this. The contour correction unit 2013 can also be configured to correct the pixel values of pixels in the vicinity of the pixel Px22 so that the user 1 viewing the first display surface 2011S and the second display surface 2012S from the viewpoint position VP perceives the position Pt2 as overlapping the position Pt1. Here, the neighboring pixels need not be adjacent to the pixel Px22; a neighboring pixel may be a pixel adjacent to a pixel that is itself adjacent to the pixel Px22.
Although the contour correcting unit 2013 has been described as correcting the pixel value of the pixel in the direction in which the position Pt2 deviates from the center position Pt3 of the pixel Px22, the configuration is not limited to this. The contour correcting unit 2013 may correct the pixel value of the pixel in the direction opposite to that deviation; for example, in FIG. 37A, it may correct the pixel value of the pixel Px32, which lies opposite to the direction in which the position Pt2 is shifted from the center position Pt3. The contour correcting unit 2013 may also correct the pixel value of a pixel lying in an oblique direction with respect to the deviation; specifically, in FIG. 37A, it may correct the pixel value of the pixel Px11, and it may likewise select any of the pixels Px13, Px31, and Px33, which are also oblique to the direction in which the position Pt2 is shifted from the center position Pt3. Furthermore, the contour correction unit 2013 may correct the pixel values of a plurality of pixels by combining any or all of the pixel in the direction of the shift from the position Pt3, the pixel in the opposite direction, and the pixels in the oblique directions.
Although the contour correction unit 2013 has been described here as correcting the pixel value of one pixel, it is not limited to this; it can also be configured to correct the pixel values of a plurality of pixels in the vicinity of the pixel Px22 so that the user 1 viewing the first display surface 2011S and the second display surface 2012S from the viewpoint position VP perceives the position Pt2 as overlapping the position Pt1.
  • FIG. 38 is a flowchart showing an example of the operation of the display device 2100 of the present embodiment.
  • the first display unit 2011 acquires the image information D11 from the image supply device 2002, and displays the image P11 based on the image information D11 on the first display surface 2011S (step S2010).
  • the contour correction unit 2013 acquires the image information D12 from the image supply device 2002 (step S2020).
  • the contour correcting unit 2013 generates corrected image information D12C obtained by correcting the acquired image information D12 based on the position Pt2 and the position Pt3.
  • the contour correcting unit 2013 compares the position Pt2 that is the contour position with the position Pt3 that is the center position of the pixel, and determines the direction of the shift of the contour position with respect to the center position of the pixel (step S2030). ). If the contour correction unit 2013 determines that the position Pt2 is displaced in the (+ Y) direction with respect to the position Pt3 that is the center position of the pixel (step S2030-UP), the contour correction unit 2013 sets the pixel value of the pixel Px12. to correct.
  • the contour correcting unit 2013 determines the position of the pixel Px32. Correct the pixel value. Similarly, the contour correcting unit 2013 determines that the position Pt2 is shifted in the ( ⁇ X) direction with respect to the position Pt3 (step S2030 ⁇ LEFT), or is determined to be shifted in the (+ X) direction. In the case (step S2030-RIGHT), the pixel values of the pixel Px21 and the pixel Px23 are corrected.
  • the second display unit 2012 acquires the corrected image information D12C from the contour correcting unit 2013, and displays the contour image P12 based on the corrected image information D12C on the second display surface 2012S (step S2040).
  • the display device 2100 repeats these steps S2010 to S2040 to display the image P11 and the contour image P12 while correcting the contour image P12.
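  • As a non-normative illustration of the branching in step S2030, the following sketch selects which neighbor of the contour pixel Px22 should have its pixel value corrected; the dominant-axis rule and the coordinate convention (+Y upward, row indices increasing downward) are assumptions rather than details given above.

```python
def select_neighbor(pt2, pt3):
    """Return the (row, col) offset of the neighbor of Px22 to correct."""
    dx = pt2[0] - pt3[0]   # shift of the contour position along X
    dy = pt2[1] - pt3[1]   # shift of the contour position along Y (+Y is up)
    if dx == 0 and dy == 0:
        return (0, 0)      # contour sits on the pixel center: nothing to correct
    if abs(dy) >= abs(dx):
        # step S2030-UP -> Px12 (row above); step S2030-DOWN -> Px32 (row below)
        return (-1, 0) if dy > 0 else (1, 0)
    # step S2030-LEFT -> Px21 (column to the left); step S2030-RIGHT -> Px23
    return (0, -1) if dx < 0 else (0, 1)
```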
  • the display device 2100 of this embodiment includes the first display unit 2011, the second display unit 2012, and the contour correction unit 2013.
  • the first display unit 2011 displays an image to be displayed at the first depth position.
  • the second display unit 2012 has a plurality of pixels arranged two-dimensionally, and displays a contour image indicating a contour portion to be displayed at a second depth position different from the first depth position.
  • The contour correction unit 2013 corrects the contour image based on the position of the contour pixel that displays the contour image among the pixels of the second display unit 2012, and on the contour position on the second display unit 2012 determined from the position, on the first display unit 2011, of the contour corresponding to that contour pixel and from a predetermined viewpoint position VP.
  • The first display surface 2011S and the second display surface 2012S are, for example, screens of liquid crystal displays or liquid crystal projectors and have pixels arranged two-dimensionally. Since each pixel has a finite area, there may be a deviation between the position Pt2, which corresponds to the position Pt1 of the cube ridgeline displayed on the first display surface 2011S as viewed from the viewpoint position VP, and the center position Pt3 of the pixel.
  • When the position Pt2 is shifted from the pixel center position Pt3, the accuracy of alignment between the contour image P12 displayed on the second display surface 2012S and the image P11 displayed on the first display surface 2011S decreases, and the user 1 cannot perceive an accurate stereoscopic image.
  • the contour correction unit 2013 of the present embodiment corrects the position of the contour (for example, a cubic ridge line) indicated by the contour image P12 based on the shift between the position Pt2 and the center position Pt3 of the pixel.
  • Thereby, the display device 2100 suppresses the decrease in alignment accuracy between the contour image P12 displayed on the second display surface 2012S and the image P11 displayed on the first display surface 2011S, and can thus cause the user 1 to perceive a highly accurate stereoscopic image.
  • The contour correction unit 2013 may correct the pixel value of a pixel in the vicinity of the pixel Px22 (contour pixel) by a correction amount based on the distance between the center position Pt3 of the pixel Px22 (contour pixel) and the position Pt2 (contour position). Specifically, when the position Pt2 and the position Pt3 are separated by the distance ΔPt, the contour correcting unit 2013 sets the pixel value of the pixel to be corrected by a correction amount according to the distance ΔPt. For example, the contour correction unit 2013 increases the correction amount as the position Pt2 moves farther from the center position Pt3 of the pixel. Thereby, the contour correction unit 2013 can correct the contour image P12 by an amount that reflects how far the contour position deviates from the pixel center; a sketch of one possible weighting follows.
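  • One possible weighting, assuming a linear relation between the shift distance ΔPt and the correction amount (the text above only states that the amount grows with ΔPt; the cap at half the pixel pitch and the function name are assumptions):

```python
def split_contour_value(v_contour, delta_pt, pixel_pitch):
    """Divide the contour pixel value between Px22 and the selected
    neighbor, moving a larger share to the neighbor as the contour
    position Pt2 moves farther from the pixel center Pt3."""
    w = min(delta_pt / pixel_pitch, 0.5)  # assumed linear weight, capped at 0.5
    return v_contour * (1.0 - w), v_contour * w  # (Px22 share, neighbor share)
```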
  • The contour correction unit 2013 may also be configured to correct the contour image P12 based on a first contour position Pt2L determined based on the position of the left eye L of the user 1, a second contour position Pt2R determined based on the position of the right eye R of the user 1, and the position Pt3 of the pixel Px22 (contour pixel).
  • FIG. 39 is a schematic diagram illustrating an example of a positional relationship between the user 1 and the display device 2100 according to a modification of the present embodiment.
  • The first display surface 2011S displays the image P11 of the cube.
  • The second display surface 2012S displays the contour image P12 of the cube.
  • From the viewpoint position VP at the depth position ZVP, the user 1 sees the first display surface 2011S at the depth position Z1 and the second display surface 2012S at the depth position Z2 overlapping each other.
  • the position Pt1 of the cube ridgeline displayed on the first display surface 2011S and the position Pt2 on the second display surface 2012S appear to overlap.
  • The position on the second display surface 2012S that appears to overlap the position Pt1 of the cube ridgeline displayed on the first display surface 2011S differs between the left eye L and the right eye R of the user 1.
  • To the left eye L, the position Pt1 of the cube ridgeline displayed on the first display surface 2011S and the position Pt2L on the second display surface 2012S appear to overlap.
  • To the right eye R, the position Pt1 of the cube ridgeline displayed on the first display surface 2011S and the position Pt2R on the second display surface 2012S appear to overlap.
  • In this case, the contour correcting unit 2013 corrects the contour image P12 using, as the contour position that serves as the reference for the pixel-value correction described above, the midpoint on the XY plane between the position Pt2L and the position Pt2R on the second display surface 2012S.
  • Since the contour correcting unit 2013 can thus correct the contour image P12 based on the positions of the left eye L and the right eye R of the user 1, the correction accuracy improves, and the display device 2100 can cause the user 1 to perceive a more accurate stereoscopic image. A sketch of the midpoint computation follows.
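  • A minimal sketch of that midpoint computation (the function name and the tuple representation of positions are assumptions):

```python
def binocular_contour_position(pt2_left, pt2_right):
    """Midpoint on the XY plane between the contour position Pt2L seen by
    the left eye and the contour position Pt2R seen by the right eye."""
    return ((pt2_left[0] + pt2_right[0]) / 2.0,
            (pt2_left[1] + pt2_right[1]) / 2.0)
```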
  • Next, a display device 2100a according to a sixth embodiment of the present invention will be described with reference to FIG. 40.
  • the display device 2100a of the present embodiment is different from the above-described embodiment in that the display device 2100a includes a detection unit 2014 that detects the viewpoint position VP.
  • The same reference numerals are attached to the same components as those of the above-described embodiment, and their description is omitted.
  • FIG. 40 is a schematic diagram showing an example of the configuration of the display device 2100a according to the sixth embodiment of the present invention.
  • the display device 2100a includes a contour correction unit 2013a and a detection unit 2014.
  • The detection unit 2014 includes a distance measuring sensor; it detects the position of the user 1, treats the detected position as the viewpoint position VP, and outputs information indicating the viewpoint position VP to the contour correction unit 2013a.
  • The contour correction unit 2013a acquires the information indicating the viewpoint position VP detected by the detection unit 2014 and calculates the position Pt2 (contour position) on the second display unit 2012 based on that information. The contour correcting unit 2013a then corrects the contour image P12 based on the calculated position Pt2 and the position Pt3 of the pixel Px22 (contour pixel) that displays the contour image P12. In other words, the contour correcting unit 2013a corrects the contour image P12 based on the position Pt3 of the contour pixel and on the position Pt2 (contour position) determined from the position Pt1, on the first display unit 2011, of the contour corresponding to the contour pixel and from the detected viewpoint position VP.
  • Thereby, the display device 2100a can detect the position of the user 1 as the viewpoint position VP; for example, even if the user 1 moves, the contour image P12 can be corrected according to the position to which the user 1 has moved. Therefore, even if the position of the user 1 changes, the display device 2100a can improve the accuracy of alignment between the image P11 displayed on the first display surface 2011S and the contour image P12 displayed on the second display surface 2012S. The projection used to obtain Pt2 from the viewpoint can be sketched as follows.
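  • Assuming the two display surfaces are parallel planes at the depth positions Z1 and Z2, the contour position Pt2 can be obtained by projecting Pt1 toward the detected viewpoint, as in this hypothetical sketch (the formula and names are not given in the text above):

```python
def contour_position(vp_xy, pt1_xy, z_vp, z1, z2):
    """Intersect the ray from the viewpoint VP (depth z_vp) through Pt1 on
    the first display plane (depth z1) with the second display plane (depth z2)."""
    t = (z2 - z_vp) / (z1 - z_vp)  # fraction of the ray at the Z2 plane
    return (vp_xy[0] + (pt1_xy[0] - vp_xy[0]) * t,
            vp_xy[1] + (pt1_xy[1] - vp_xy[1]) * t)
```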
  • the detection unit 2014 may detect the center position of the face of the user 1 as the viewpoint position VP.
  • In this case, the detection unit 2014 includes a video camera (not shown) and an image analysis unit; the image analysis unit analyzes the image captured by the video camera, extracts the face of the user 1, and detects the center position of the face as the viewpoint position VP.
  • The center position of the face of the user 1 includes, for example, the position of the center of gravity of the outline of the face of the user 1, or the position of the midpoint between the position of the left eye L and the position of the right eye R of the user 1.
  • Thereby, the display device 2100a can set the viewpoint position VP according to the orientation of the face. Therefore, even when the orientation of the face, as well as the position of the user 1, changes, the display device 2100a can improve the accuracy of alignment between the image P11 displayed on the first display surface 2011S and the contour image P12 displayed on the second display surface 2012S.
  • the contour correcting unit 2013a may correct the contour image P12 with a correction amount based on the distance to the user 1 detected by the detecting unit 2014.
  • When the user 1 is at a position close to the display device 2100a, detailed portions of the image are easily perceived, so a positional shift between the image P11 displayed on the first display surface 2011S and the contour image P12 displayed on the second display surface 2012S is easily perceived by the user 1.
  • Conversely, when the user 1 is at a position far from the display device 2100a, detailed portions of the image are less likely to be perceived, so such a positional shift is less likely to be perceived by the user 1.
  • Therefore, when the distance to the user 1 detected by the detection unit 2014 is large, the contour correction unit 2013a corrects the contour image P12 with a reduced correction amount.
  • The contour correction unit 2013a may omit the correction altogether when the distance to the user 1 detected by the detection unit 2014 is even larger (for example, greater than a predetermined threshold value).
  • Conversely, when the distance to the user 1 detected by the detection unit 2014 is small, the contour correction unit 2013a corrects the contour image P12 with an increased correction amount.
  • Since the contour correction unit 2013a corrects the contour image P12 according to the distance to the user 1 detected by the detection unit 2014, a positional shift between the image P11 and the contour image P12 can be made difficult for the user 1 to perceive even if the position of the user 1 changes. Therefore, even if the position of the user 1 changes, the display device 2100a can improve the accuracy of alignment between the image P11 displayed on the first display surface 2011S and the contour image P12 displayed on the second display surface 2012S. One possible distance-dependent gain is sketched below.
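  • The distance dependence can be pictured as a gain applied to the correction amount; the two thresholds and the linear taper below are assumptions, since the text above only states that the correction shrinks with distance and may be skipped beyond a threshold:

```python
def correction_gain(distance, d_near, d_far):
    """Scale factor for the correction amount as a function of the
    detected viewer distance (hypothetical thresholds d_near < d_far)."""
    if distance >= d_far:
        return 0.0  # far away: the shift is imperceptible, skip the correction
    if distance <= d_near:
        return 1.0  # close up: apply the full correction amount
    return (d_far - distance) / (d_far - d_near)  # assumed linear taper
```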
  • Next, a display device 2100b according to a seventh embodiment of the present invention will be described with reference to FIG. 41. The display device 2100b of the present embodiment differs from the above-described embodiments in that it includes a first display unit 2011b, a second display unit 2012b, and a third display unit 2015.
  • The same reference numerals are attached to the same components as those of the above-described embodiment, and their description is omitted.
  • FIG. 41 is a schematic diagram showing an example of the configuration of a display device 2100b according to the seventh embodiment of the present invention.
  • the display device 2100b includes a first display unit 2011b, a second display unit 2012b, and a third display unit 2015.
  • The first display unit 2011b includes a first display surface 2011Sb that displays an image at the depth position Z1.
  • The second display unit 2012b includes a second display surface 2012Sb that displays an image at the depth position Z2.
  • The first display unit 2011 and the second display unit 2012 may each be either a monochrome display unit or a color display unit.
  • In the present embodiment, the first display unit 2011b and the second display unit 2012b are monochrome display units. That is, the first display unit 2011b displays an image P11b, which is a monochrome image, on the first display surface 2011Sb, and the second display unit 2012b displays an image P12b, which is a monochrome image, on the second display surface 2012Sb.
  • A monochrome image is an image having only brightness (for example, luminance) pixel values, without chromaticity or saturation pixel values; it includes a black-and-white binary image and a grayscale image.
  • The image P11b includes an image of the display target OBJ. That is, the first display unit 2011b displays the image P11b including the image of the display target OBJ on the first display surface 2011Sb. Further, the image P12b includes an edge image PE′ that indicates the contour portion of the display target OBJ.
  • The edge image PE′ is a monochrome image showing the outline of the display target OBJ.
  • That is, the second display unit 2012b displays the edge image PE′ indicating the outline of the display target OBJ on the second display surface 2012Sb.
  • By displaying these images, the first display unit 2011b and the second display unit 2012b generate a stereoscopic image of the display target OBJ.
  • Here, this stereoscopic image of the display target OBJ is a monochrome stereoscopic image.
  • the third display unit 2015 is a color display unit that displays an image P15 that is a color image.
  • the image P15 is an image corresponding to the image P11b and the image P12b.
  • The third display unit 2015 gives color to the monochrome images P11b and P12b by displaying the image P15. That is, when the user 1 views these images superimposed from the viewpoint position VP, the monochrome images P11b and P12b appear to form a color image.
  • the display device 2100b can improve the accuracy of the stereoscopic image perceived by the user 1 by displaying the image P11b and the image P12b, both of which are monochrome images.
  • a color image may display more information than a monochrome image.
  • By overlapping the image P15, that is, the color image, on the stereoscopic image generated by the images P11b and P12b, the accuracy of the stereoscopic image perceived by the user 1 is improved and, at the same time, the amount of information the stereoscopic image carries is increased.
  • That is, the display device 2100b can increase the information content of the stereoscopic image while improving its accuracy, and can cause the user 1 to perceive such a stereoscopic image; one way to derive the monochrome and color layers is sketched below.
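  • The text above does not specify how the monochrome images P11b and P12b and the color image P15 are derived from the source content. As a hypothetical sketch, a luminance/color split could look like the following (the function name and the Rec. 601 luma weights are assumptions):

```python
import numpy as np

def split_layers(rgb):
    """Split an RGB image (H x W x 3, float) into a luminance image for
    the monochrome layers P11b/P12b and a color image for the third
    display unit's image P15."""
    luma = rgb @ np.array([0.299, 0.587, 0.114])  # per-pixel weighted sum
    return luma, rgb
```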
  • Each unit included in the display system 100 in the above embodiments, and each unit included in each display device in the above embodiments (the display devices 2100, 2100a, and 2100b, hereinafter collectively referred to as the display device), may be realized by dedicated hardware, or by a memory and a microprocessor.
  • Each unit included in the display system 100 and each unit included in the display device may be constituted by a memory and a CPU (central processing unit), and their functions may be realized by loading a program into the memory and executing it.
  • Alternatively, a program for realizing the functions of each unit included in the display system 100 and each unit included in the display device may be recorded on a computer-readable recording medium, and the processing by each of the above-described units may be performed by causing a computer system to read and execute the program recorded on the recording medium.
  • the “computer system” includes an OS and hardware such as peripheral devices.
  • the “computer system” includes a homepage providing environment (or display environment) if a WWW system is used.
  • The “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, and a storage device such as a hard disk incorporated in a computer system.
  • The “computer-readable recording medium” also includes a medium that dynamically holds the program for a short time, such as a communication line used when the program is transmitted via a network such as the Internet or a communication line such as a telephone line, and a medium that holds the program for a certain period of time, such as a volatile memory inside a computer system serving as a server or a client in that case.
  • The program may realize only part of the functions described above, or may realize the functions described above in combination with a program already recorded in the computer system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The present invention concerns a display device comprising: a first screen for displaying a first image that includes an object to be displayed and is based on first image data; a second screen for displaying a second image that includes the object to be displayed and is based on second image data; a detection unit for detecting the position of an observer observing the first screen and the second screen; and a control unit that corrects, on the basis of the observer position detected by the detection unit, the part of the second image data relating to the vicinity of the contour of the object to be displayed, and causes the result to be displayed on the second screen.
PCT/JP2014/051796 2013-01-31 2014-01-28 Image processing device, display device, and program Ceased WO2014119555A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2013017968 2013-01-31
JP2013-017969 2013-01-31
JP2013017969 2013-01-31
JP2013-017968 2013-01-31

Publications (1)

Publication Number Publication Date
WO2014119555A1 true WO2014119555A1 (fr) 2014-08-07

Family

ID=51262269

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/051796 Ceased WO2014119555A1 (fr) Image processing device, display device, and program

Country Status (1)

Country Link
WO (1) WO2014119555A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0927969A (ja) * 1995-05-08 1997-01-28 Matsushita Electric Ind Co Ltd 複数画像の中間像生成方法及び視差推定方法および装置
JP2008042745A (ja) * 2006-08-09 2008-02-21 Nippon Telegr & Teleph Corp <Ntt> 3次元表示方法
JP2010128450A (ja) * 2008-12-01 2010-06-10 Nippon Telegr & Teleph Corp <Ntt> 3次元表示物体、3次元画像作成装置、3次元画像作成方法およびプログラム

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11893755B2 (en) 2018-01-19 2024-02-06 Interdigital Vc Holdings, Inc. Multi-focal planes with varying positions
JP2021518701A (ja) * 2018-03-23 2021-08-02 ピーシーエムエス ホールディングス インコーポレイテッド Dibrシステムにおいて立体的視点を生じさせるための、多焦点面ベースの方法(mfp−dibr)
US12238270B2 (en) 2018-03-23 2025-02-25 Interdigital Vc Holdings, Inc. Multifocal plane based method to produce stereoscopic viewpoints in a DIBR system (MFP-DIBR)
JP7722818B2 (ja) 2018-03-23 2025-08-13 インターデイジタル ヴィーシー ホールディングス インコーポレイテッド Dibrシステムにおいて立体的視点を生じさせるための、多焦点面ベースの方法(mfp-dibr)
US12047552B2 (en) 2018-07-05 2024-07-23 Interdigital Vc Holdings, Inc. Method and system for near-eye focal plane overlays for 3D perception of content on 2D displays
US12407806B2 (en) 2018-07-05 2025-09-02 Interdigital Vc Holdings, Inc. Method and system for near-eye focal plane overlays for 3D perception of content on 2D displays
CN114078451A (zh) * 2020-08-14 2022-02-22 BOE Technology Group Co., Ltd. Display control method and display device
CN114078451B (zh) * 2020-08-14 2023-05-02 BOE Technology Group Co., Ltd. Display control method and display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 14745714; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 14745714; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)