WO2025218182A1 - Display device, grayscale control method, apparatus, and storage medium - Google Patents
Display device, grayscale control method, apparatus, and storage medium
- Publication number
- WO2025218182A1 (PCT/CN2024/135910)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sub
- area
- pixel
- gaze point
- gaze
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
Definitions
- the present disclosure relates to the field of image technology, and in particular to a display device, a grayscale control method, an apparatus, and a storage medium.
- glasses-free 3D technology can optimize the design of prisms and other light-splitting components to improve their light-splitting effect.
- Traditional image arrangement algorithms in glasses-free 3D technology generally perform black or white interpolation on sub-pixels in transition areas to reduce crosstalk.
- this single black or white interpolation method currently used on sub-pixels can lead to image distortion and increased crosstalk. Therefore, reducing crosstalk has become a key technical issue that needs to be addressed in the field of glasses-free 3D displays.
- On the one hand, a display device, a grayscale control method, an apparatus, and a storage medium are provided, which can effectively reduce the crosstalk rate and improve the naked-eye 3D viewing effect.
- the display device includes: a display panel and a processor; the processor is configured to: determine multiple gaze areas corresponding to the display panel based on the position of the target user's gaze point within the display panel; the processor is further configured to: control the grayscale value of the sub-pixels in each gaze area of the multiple gaze areas based on the control strategy corresponding to the gaze area.
- At least two of the multiple gaze areas correspond to different control strategies.
- the display panel corresponds to multiple gaze points, and each gaze point corresponds to one area, and the area corresponding to each gaze point includes a transition area and a non-transition area.
- multiple gaze points are viewpoints from which the target user gazes at the display panel from different angles;
- the transition area is the area between the critical lines of the areas corresponding to any two gaze points, and the non-transition area is the area other than the transition area.
- the processor is further configured to: determine a target gaze point area based on the gaze point position; the target gaze point area is an area among multiple gaze point areas that affects the grayscale value of the sub-pixel; the target gaze point area includes a first gaze point area and a second gaze point area, and one of the first gaze point area and the second gaze point area is an area corresponding to the gaze point position, and the other is an adjacent area to the area corresponding to the gaze point position.
- the multiple gaze areas include a central area, which is the area at the center of the gaze point position; the processor is specifically configured to: for the multiple sub-pixels in the central area, reduce the grayscale value of the sub-pixels in the transition area among the multiple sub-pixels, and/or increase the grayscale value of the sub-pixels in the non-transition area among the multiple sub-pixels.
- the processor is specifically configured to: determine the grayscale coefficient of the first sub-pixel based on the position of the center point of the first sub-pixel in the target gaze point area and the width of the first area, and reduce the grayscale value of the first sub-pixel based on the grayscale coefficient of the first sub-pixel and the first image grayscale value of the first sub-pixel in the target gaze point area.
- the first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area; the first area is a partial area in the target gaze point area; the first image grayscale value is used to represent the grayscale value of the first sub-pixel in the first gaze point area, or to represent the grayscale value of the first sub-pixel in the second gaze point area.
- the processor is specifically configured to: perform black processing on the grayscale value of the first sub-pixel to reduce the grayscale value of the first sub-pixel.
- the first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area.
- the processor is further configured to: increase the grayscale value of the second sub-pixel based on the position of the center point of the second sub-pixel in the target gaze point area and the width of the target gaze point area.
- the second sub-pixel is any sub-pixel in the non-transition area among the multiple sub-pixels in the central area.
- the processor is specifically configured to: determine the grayscale coefficient of the second sub-pixel based on the position of the center point of the second sub-pixel in the target gaze point area and the width of the target gaze point area, determine the grayscale value of the second sub-pixel based on the grayscale coefficient of the second sub-pixel and the second image grayscale value of the second sub-pixel in the target gaze point area, and then use the minimum of the grayscale value of the second sub-pixel and the grayscale threshold as the increased grayscale value of the second sub-pixel.
- the second image grayscale value is used to represent the grayscale value of the second sub-pixel in the first gaze point area, or is used to represent the grayscale value of the second sub-pixel in the second gaze point area.
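As a rough illustration of the capping step just described, the Python sketch below applies a given grayscale coefficient to the second image grayscale value and takes the minimum with the grayscale threshold. The function name and parameters are illustrative only, and the computation of the coefficient itself (Formula 1 in the detailed description) is not reproduced here.

```python
def increased_gray(k_score: float, image_gray: int, threshold: int) -> int:
    """Illustrative sketch: scale the second sub-pixel's image grayscale
    value by its grayscale coefficient, then cap the result at the
    grayscale threshold, as the claim above describes."""
    candidate = k_score * image_gray       # grayscale value of the second sub-pixel
    return int(min(candidate, threshold))  # increased grayscale value
```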
- the multiple gaze areas include an edge area, which is an area adjacent to the central area; the central area is an area at the center of the gaze point position; the processor is specifically configured to: for multiple sub-pixels in the edge area, reduce the grayscale value of the sub-pixels in the transition area among the multiple sub-pixels.
- the processor is specifically configured to: reduce the grayscale value of the third sub-pixel based on the area proportion of the third sub-pixel and the third image grayscale value of the third sub-pixel; the third image grayscale value is used to represent the image grayscale value of the third sub-pixel in the first gaze point area, or to represent the image grayscale value of the third sub-pixel in the second gaze point area.
- the processor is further configured to determine an area proportion of the third sub-pixel based on a position of a center point of the third sub-pixel in the target gaze point area and a width of the third sub-pixel.
- the third sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the edge area; the area proportion of the third sub-pixel is used to characterize the proportion of the third sub-pixel in the first gaze point area, or to characterize the proportion of the third sub-pixel in the second gaze point area.
- the processor is specifically configured to: reduce the grayscale value of the third sub-pixel based on the proportion of the third sub-pixel in the first gaze point area, the proportion of the third sub-pixel in the second gaze point area, and the image grayscale value of the third sub-pixel in the first gaze point area; or
- the processor is specifically configured to: reduce the grayscale value of the third sub-pixel based on the first grayscale value of the third sub-pixel in the first gaze point area and the second grayscale value of the third sub-pixel in the second gaze point area.
- the first grayscale value is determined based on the proportion of the third sub-pixel in the first gaze point area and the image grayscale value of the third sub-pixel in the first gaze point area;
- the second grayscale value is determined based on the proportion of the third sub-pixel in the second gaze point area and the image grayscale value of the third sub-pixel in the second gaze point area.
- the processor is further configured to: perform black processing on the grayscale value of the sub-pixel to reduce the grayscale value of the sub-pixel.
- the sub-pixel is the first sub-pixel or the third sub-pixel, the first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area; the third sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the edge area.
- the processor is further configured to: increase the coverage of the transition area, and/or decrease the coverage of the non-transition area.
- the position of the center point of the sub-pixel in the target gaze point area can be determined by: determining the target gaze point area based on the gaze point position; determining the position of the center point of the target sub-pixel in the target gaze point area based on the position of the central sub-pixel in the row where the target sub-pixel is located.
- the target sub-pixel is any one of the first sub-pixel, the second sub-pixel, and the third sub-pixel; the first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area; the second sub-pixel is any sub-pixel in the non-transition area among the multiple sub-pixels in the central area; and the third sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the edge area.
- the processor is specifically configured to: obtain the width of the target gaze point area, and based on the width of the target gaze point area and the position of the central sub-pixel, determine the width of the second area of the row where the target sub-pixel is located, and then determine the position of the center point of the target sub-pixel in the target gaze point area based on the width of the second area.
- the second area is an incomplete area on the left side of the row where the target sub-pixel is located;
- the processor is further configured to: obtain the offset between the central sub-pixel of the row where the target sub-pixel is located and the center point of the display panel, and determine the position of the central sub-pixel of the row where the target sub-pixel is located based on the offset and the pixel width.
- the multiple gaze areas include a peripheral area, which is the area excluding the central area and the edge area; the central area is the area at the center of the gaze point position, and the edge area is the area adjacent to the central area; the processor is specifically configured to: perform no adjustment or control on the multiple sub-pixels in the peripheral area.
- the display device further includes a prism disposed above the display panel; the processor is further configured to determine a placement position of the prism based on device parameters of the display device.
- the device parameters of the display device include at least two of the following: the horizontal aperture of the prism; the arch height (sagitta) of the prism; the distance between the prism and the display panel; the number of sub-pixels of the display panel covered by the prism; and the width of the sub-pixels of the display panel.
- a grayscale control method which is applied to a display device, the display device including a display panel and a processor; the method includes: based on the position of the target user's gaze point within the display panel, determining multiple gaze areas corresponding to the display panel, and for each gaze area in the multiple gaze areas, controlling the grayscale value of the sub-pixel in the gaze area based on the control strategy corresponding to the gaze area.
- At least two of the multiple gaze areas correspond to different control strategies.
- the display panel corresponds to multiple gaze points, and each gaze point corresponds to one area, and the area corresponding to each gaze point includes a transition area and a non-transition area.
- multiple gaze points are viewpoints from which the target user gazes at the display panel from different angles;
- the transition area is the area between the critical lines of the areas corresponding to any two gaze points, and the non-transition area is the area other than the transition area.
- a target gaze point area is determined based on the gaze point position; the target gaze point area is an area among multiple gaze point areas that affects the grayscale value of the sub-pixel; the target gaze point area includes a first gaze point area and a second gaze point area, one of the first gaze point area and the second gaze point area is an area corresponding to the gaze point position, and the other is an adjacent area to the area corresponding to the gaze point position.
- the multiple gaze areas include a central area, which is the area at the center of the gaze point position; for each of the multiple gaze areas, the grayscale values of the sub-pixels in the gaze area are controlled based on the control strategy corresponding to the gaze area, including: for the multiple sub-pixels in the central area, reducing the grayscale values of the sub-pixels in the transition area among the multiple sub-pixels, and/or increasing the grayscale values of the sub-pixels in the non-transition area among the multiple sub-pixels.
- the grayscale values of the sub-pixels in the transition area among the multiple sub-pixels are reduced, including: determining the grayscale coefficient of the first sub-pixel based on the position of the center point of the first sub-pixel in the target gaze point area and the width of the first area, and reducing the grayscale value of the first sub-pixel based on the grayscale coefficient of the first sub-pixel and the first image grayscale value of the first sub-pixel in the target gaze point area.
- the first image grayscale value is used to represent the grayscale value of the first sub-pixel in the first gaze point area, or to represent the grayscale value of the first sub-pixel in the second gaze point area;
- the first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area;
- the first area is a partial area in the target gaze point area.
- reducing the grayscale values of sub-pixels in the transition area among the multiple sub-pixels includes: performing black processing on the grayscale value of the first sub-pixel to reduce the grayscale value of the first sub-pixel.
- the first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area.
- increasing the grayscale value of a sub-pixel in a non-transition area among the plurality of sub-pixels includes: increasing the grayscale value of the second sub-pixel based on a position of a center point of the second sub-pixel in the target gaze point area and a width of the target gaze point area;
- the second sub-pixel is any sub-pixel in the non-transition area among the multiple sub-pixels in the central area.
- the grayscale value of the second sub-pixel is adjusted based on the position of the center point of the second sub-pixel in the target gaze point area and the width of the target gaze point area to increase the grayscale value of the second sub-pixel, including: determining the grayscale coefficient of the second sub-pixel based on the position of the center point of the second sub-pixel in the target gaze point area and the width of the target gaze point area; determining the grayscale value of the second sub-pixel based on the grayscale coefficient of the second sub-pixel and the second image grayscale value of the second sub-pixel in the target gaze point area; and using the minimum of the grayscale value of the second sub-pixel and the grayscale threshold as the increased grayscale value of the second sub-pixel.
- the second image grayscale value is used to represent the grayscale value of the second sub-pixel in the first gaze point area, or is used to represent the grayscale value of the second sub-pixel in the second gaze point area;
- the multiple gaze areas include an edge area, which is an area adjacent to the central area; the central area is an area at the center of the gaze point position; for each of the multiple gaze areas, the grayscale values of the sub-pixels in the gaze area are controlled based on the control strategy corresponding to the gaze area, including: for the multiple sub-pixels in the edge area, reducing the grayscale values of the sub-pixels in the transition area among the multiple sub-pixels.
- the grayscale value of the sub-pixels in the transition area among the multiple sub-pixels is reduced, including: reducing the grayscale value of the third sub-pixel based on the area proportion of the third sub-pixel and the third image grayscale value of the third sub-pixel.
- the third image grayscale value is used to represent the image grayscale value of the third sub-pixel in the first gaze point area, or is used to represent the image grayscale value of the third sub-pixel in the second gaze point area.
- the area proportion of the third sub-pixel is determined based on the position of the center point of the third sub-pixel in the target gaze point area and the width of the third sub-pixel.
- the third sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the edge area; the area proportion of the third sub-pixel is used to characterize the proportion of the third sub-pixel in the first gaze point area, or to characterize the proportion of the third sub-pixel in the second gaze point area.
- determining the reduced grayscale value of the third sub-pixel based on the area proportion of the third sub-pixel and the image grayscale value of the third sub-pixel includes: reducing the grayscale value of the third sub-pixel based on the proportion of the third sub-pixel in the first gaze point area, the proportion of the third sub-pixel in the second gaze point area, and the image grayscale value of the third sub-pixel in the first gaze point area; or
- reducing the grayscale value of the third sub-pixel based on the first grayscale value of the third sub-pixel in the first gaze point area and the second grayscale value of the third sub-pixel in the second gaze point area.
- the first grayscale value is determined based on the proportion of the third sub-pixel in the first gaze point area and the image grayscale value of the third sub-pixel in the first gaze point area;
- the second grayscale value is determined based on the proportion of the third sub-pixel in the second gaze point area and the image grayscale value of the third sub-pixel in the second gaze point area.
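One plausible reading of this two-step reduction is sketched in Python below: each image grayscale value is weighted by the sub-pixel's proportion in the corresponding gaze point area, giving the first and second grayscale values, which are then combined. The final summation is an assumption; the text only says the reduction is "based on" the two values.

```python
def reduced_gray_third(p_first: float, gray_first: int,
                       p_second: float, gray_second: int) -> int:
    """Hypothetical sketch of the edge-area reduction for the third
    sub-pixel: weight each image grayscale value by the sub-pixel's area
    proportion in that gaze point area, then combine the two results."""
    first_value = p_first * gray_first      # first grayscale value
    second_value = p_second * gray_second   # second grayscale value
    return int(first_value + second_value)  # assumed combination of the two
```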
- the grayscale value of the sub-pixels in the transition area among the multiple sub-pixels is reduced, including: performing black processing on the grayscale value of the third sub-pixel to reduce the grayscale value of the third sub-pixel; the third sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the edge area.
- the method further includes: increasing the coverage of the transition area, and/or decreasing the coverage of the non-transition area.
- the position of the center point of the sub-pixel in the target gaze point area can be determined by: determining the target gaze point area based on the gaze point position; determining the position of the center point of the target sub-pixel in the target gaze point area based on the position of the central sub-pixel in the row where the target sub-pixel is located.
- the target sub-pixel is any one of the first sub-pixel, the second sub-pixel, and the third sub-pixel; the first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area; the second sub-pixel is any sub-pixel in the non-transition area among the multiple sub-pixels in the central area; and the third sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the edge area.
- the position of the center point of the target sub-pixel in the target gaze point area is determined, including: obtaining the width of the target gaze point area, and determining the width of a second area in the row where the target sub-pixel is located based on the width of the target gaze point area and the position of the central sub-pixel; and then, determining the position of the center point of the target sub-pixel in the target gaze point area based on the width of the second area.
- the second area is an incomplete area on the left side of the row where the target sub-pixel is located;
- the method further includes: obtaining an offset between the central sub-pixel of the row where the target sub-pixel is located and the center point of the display panel; and determining a position of the central sub-pixel of the row where the target sub-pixel is located based on the offset and the sub-pixel width.
- the multiple gaze areas include a peripheral area, which is the area excluding the central area and the edge area; the central area is the area at the center of the gaze point position, and the edge area is the area adjacent to the central area; for each of the multiple gaze areas, the grayscale value of the sub-pixel in the gaze area is controlled based on the control strategy corresponding to the gaze area, including: for multiple sub-pixels in the peripheral area, no adjustment control is performed.
- the display device further includes a prism disposed above the display panel; the method further includes: determining a placement position of the prism based on device parameters of the display device.
- the device parameters of the display device include at least two of the following: the horizontal aperture of the prism; the arch height (sagitta) of the prism; the distance between the prism and the display panel; the number of sub-pixels of the display panel covered by the prism; and the width of the sub-pixels of the display panel.
- a grayscale control device comprising a processor and a communication interface.
- the communication interface is coupled to the processor.
- the processor is configured to execute a computer program or instruction to implement the grayscale control method of the first aspect or any embodiment of the first aspect.
- a computer-readable storage medium stores computer program instructions which, when executed on a computer (e.g., a receiving node), cause the computer to execute the grayscale control method according to any of the above embodiments.
- a computer program product is provided, which includes computer program instructions; when the computer program instructions are executed on a computer (e.g., a receiving node), they cause the computer to execute the grayscale control method according to any of the above embodiments.
- a computer program is provided.
- when the computer program is executed on a computer (e.g., a receiving node), it enables the computer to execute the grayscale control method according to any one of the above embodiments.
- the processor in the display device can determine multiple different gaze areas corresponding to the display panel based on the target user's gaze point in real time, and flexibly adjust and control the grayscale values of sub-pixels in different gaze areas.
- different control strategies are adopted for different gaze areas, effectively solving the problem of limited visual range for naked-eye 3D viewers, while also reducing crosstalk to a certain extent and improving the 3D viewing effect.
- the different control strategies affect the overall effect of the picture, thereby improving the user's viewing experience.
- FIG1 is a schematic diagram illustrating 3D effect generation in a real scene according to some embodiments.
- FIG2 is a schematic diagram showing the synthesis of a left view and a right view according to some embodiments.
- FIG3 is a schematic diagram illustrating an image ghosting phenomenon according to some embodiments.
- FIG4 is a structural diagram of a display device according to some embodiments.
- FIG5 is a structural diagram of a display device according to some other embodiments.
- FIG6 is a flow chart of a grayscale control method according to some embodiments.
- FIG7 is a schematic diagram of multiple gaze areas according to some embodiments.
- FIG8 is a schematic diagram of an area corresponding to a gaze point according to some embodiments.
- FIG9a is a schematic diagram illustrating a combination of multiple gaze areas and an area corresponding to a gaze point according to some embodiments.
- FIG9b is a schematic diagram illustrating a combination of multiple gaze areas and an area corresponding to a gaze point according to some other embodiments.
- FIG10 is a schematic diagram of a target gaze area according to some embodiments.
- FIG11 is a schematic diagram of grayscale coefficient curves according to some other embodiments.
- FIG12 is a scene diagram of a grayscale control method according to some embodiments.
- FIG13 is a scene diagram of a grayscale control method according to some other embodiments.
- FIG14 is a scene diagram of a grayscale control method according to some other embodiments.
- FIG15 is a schematic diagram of a target gaze point area according to some other embodiments.
- FIG16 is a scene diagram of a grayscale control method according to some other embodiments.
- FIG17 is a schematic diagram of a region according to some embodiments.
- FIG18 is a schematic diagram of a target gaze point area according to some other embodiments.
- FIG19 is a schematic diagram of a target gaze point area according to some other embodiments.
- FIG20 is a scene diagram of a prism according to some embodiments.
- FIG21 is a scene diagram of a layout algorithm according to some embodiments.
- FIG22 is a schematic diagram of a target gaze point area according to some other embodiments.
- FIG23 is a scene diagram of a prism according to some other embodiments.
- FIG24 is a scene diagram of a prism according to some other embodiments.
- FIG25 is a flow chart of a grayscale control method according to some other embodiments.
- FIG26 is a structural diagram of a grayscale control device according to some embodiments.
- FIG27 is a structural diagram of a grayscale control device according to some embodiments.
- "first" and "second" are used for descriptive purposes only and should not be understood to indicate or imply relative importance or to implicitly specify the number of the technical features indicated. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of such features.
- "plural" means two or more.
- "at least one of A, B and C" has the same meaning as "at least one of A, B or C", and both include the following combinations of A, B and C: A only, B only, C only, the combination of A and B, the combination of A and C, the combination of B and C, and the combination of A, B and C.
- "A and/or B" includes the following three combinations: A only, B only, and the combination of A and B.
- the term “if” is optionally interpreted to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context.
- the phrases “if it is determined that” or “if [stated condition or event] is detected” are optionally interpreted to mean “upon determining” or “in response to determining” or “upon detecting [stated condition or event]” or “in response to detecting [stated condition or event],” depending on the context.
- "equal" includes the stated conditions and conditions similar to the stated conditions, where the range of the similar conditions is within an acceptable range of deviation, the acceptable range of deviation being determined by one of ordinary skill in the art taking into account the measurement in question and the errors associated with the measurement of the particular quantity (i.e., the limitations of the measurement system).
- "equal" includes absolute equality and approximate equality, where the acceptable range of deviation for approximate equality can be, for example, that the difference between the two is less than or equal to 5% of either.
- Glasses-free 3D technology aims to provide a display technology that achieves three-dimensional visual effects without the need for auxiliary equipment. With the continuous advancement and upgrading of technology, this glasses-free 3D technology can be widely used in the field of consumer electronics, especially mainstream devices such as smartphones and TVs, and promote further innovation and development in related professional fields.
- By optimizing the display system's structure and algorithms, this glasses-free 3D technology provides users with an unprecedented immersive experience.
- this technology creates a more realistic three-dimensional visual effect, making viewers feel as if they are right in the story, greatly enhancing the interactivity and realism of the entertainment experience.
- in advertising, the glasses-free 3D technology of the present invention can attract more attention and enhance the effectiveness and memorability of advertisements. Furthermore, for professional training, this technology can simulate more realistic operational scenarios, helping trainees better master skills and knowledge and improving training efficiency and quality.
- glasses-free 3D technology still faces numerous technical challenges. Specifically, image processing capabilities need to be further optimized to ensure clear and stable three-dimensional images at varying viewing angles and distances, providing users with a more realistic glasses-free 3D effect and a more immersive user experience. This technology demonstrates significant market potential in entertainment, advertising, healthcare, and education.
- Regarding the display principle of naked-eye 3D technology: the human eye's ability to perceive depth depends primarily on binocular parallax. Binocular parallax refers to the fact that, because the two eyes are in different positions, they observe the same object from slightly different perspectives, producing slightly different images. These different images are fused and processed by the brain, allowing us to perceive the depth and three-dimensionality of objects in space. As shown in Figure 1, most naked-eye 3D display devices use light-splitting components such as prisms or gratings to direct the different images viewed by the left and right eyes to the corresponding eyes, thereby achieving the naked-eye 3D effect.
- the current naked-eye 3D technology mainly utilizes slit-type liquid crystal gratings and cylindrical prisms.
- Slit-type liquid crystal gratings place a grating in front of the screen to block part of the screen's light: the opaque stripes block the right eye from light intended for the left eye, and likewise block the left eye from light intended for the right eye, so that binocular parallax produces the 3D effect.
- a drawback of this approach is that the screen brightness is only 1/4 of that of a 2D screen.
- Regarding the crosstalk problem: crosstalk in naked-eye 3D displays is mainly caused by factors such as the image arrangement method, optical component design, and manufacturing process capabilities.
- In actual applications, the prism is usually placed at an angle so that the arrangement of pixels on the 2D display device is consistent with the prism's edge angle.
- it is impossible to fully achieve a 100% light-splitting effect by relying solely on prisms and other light-splitting components.
- the arrangement of pixels in the image arrangement algorithm cannot ensure that all sub-pixels are completely divided into the left view or the right view. In other words, it is inevitable that part of the brightness of the image that originally entered the left eye will enter the right eye, resulting in crosstalk.
- Crosstalk is a key indicator for evaluating the quality of naked-eye 3D effects, measuring the brightness crossover between the left-eye and right-eye images. When the crosstalk reaches a certain level, ghosting can be perceived. As shown in Figure 3, it is generally believed that ghosting begins to be perceptible when the crosstalk exceeds 2%, while crosstalk over 10% leads to obvious ghosting. This indicates that crosstalk can negatively impact the naked-eye 3D viewing experience.
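For reference, a commonly used formulation of the crosstalk ratio in 3D display measurement is shown below; the patent itself does not give a formula, so this is only to make the 2% and 10% thresholds concrete.

```latex
\mathrm{crosstalk} = \frac{L_{\text{unintended}} - L_{\text{black}}}{L_{\text{intended}} - L_{\text{black}}} \times 100\%
```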
- Multi-focal point technology uses specific optical devices (such as a transparent cylindrical prism) or technical processing methods to split a single 3D image into multiple images with different perspectives, allowing viewers to experience a realistic 3D effect from different viewing positions and angles.
- the 3D image is divided into multiple perspectives or focal points, each corresponding to a different viewing position and angle.
- naked-eye 3D technology can optimize the design of prisms and other light-splitting components to improve their light-splitting effect. At the same time, it can ensure the accuracy of human eye tracking to improve the viewing experience and reduce crosstalk.
- embodiments of the present application provide a display device in which a processor can determine, in real time, multiple different gaze areas corresponding to a target user's gaze point, and flexibly adjust and control the grayscale values of sub-pixels in different gaze areas.
- different control strategies are employed for different gaze areas, effectively addressing the limited visual range of naked-eye 3D viewers while also reducing crosstalk to a certain extent and enhancing the 3D viewing experience.
- Different control strategies influence the overall effect of the image, thereby improving the user's viewing experience.
- Display device 400 can be a terminal device with a display panel, such as a television.
- Display device 400 can include a display panel 401, at least one processor 402, a transceiver 403, and further include a memory 404.
- the processor 402, memory 404, and transceiver 403 can be connected via a communication line.
- the display panel 401 is used to display images.
- the display panel 401 corresponds to multiple cycles, one cycle includes multiple gaze points, and one gaze point corresponds to one area.
- the area corresponding to each gaze point includes a transition area and a non-transition area.
- the transition area is the area between the critical lines of the areas corresponding to any two gaze points, and the non-transition area is the area other than the transition area.
- the processor 402 can be a chip.
- Chips can include five major categories: logic chips, memory chips, sensor chips, power chips, and communication chips.
- the logic (processor) category mainly handles specific computing and control tasks in the system, such as microcontroller units (MCUs), central processing units (CPUs), graphics processing units (GPUs), and neural processing units (NPUs).
- the memory category mainly covers data storage chips in the system, as well as some storage controller chips, such as dynamic random access memory (DRAM), static random access memory (SRAM), and flash memory (EEPROM, Flash).
- the sensor category mainly covers information collection, presentation, and interaction chips in the system, such as input and output devices and some signal processing chips.
- communication chips mainly handle communication functions in the system, such as Ethernet chips, switching chips, wide-area and local-area network chips, point-to-point and ad hoc network chips, as well as filtering, amplification, power, and other devices that assist communication.
- these include chips commonly known to the public, such as wireless fidelity (WiFi), Bluetooth, fifth-generation mobile communication technology (5G) baseband, global positioning system (GPS), narrowband internet of things (NB-IoT), network cards, and switches.
- the communication line may include a channel for transmitting information between the above components.
- the memory 404 can be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but it is not limited to these.
- the memory 404 can exist independently of the processor 402, that is, the memory 404 can be a memory external to the processor 402.
- the memory 404 can be connected to the processor 402 via a communication line to store execution instructions or application code, with execution controlled by the processor 402 to implement the grayscale control method provided in the following embodiments of this application.
- the memory 404 can also be integrated with the processor 402, that is, the memory 404 can be the internal memory of the processor 402.
- the memory 404 is a cache that can be used to temporarily store some data and instruction information.
- the processor 402 may include one or more CPUs.
- FIG. 5 is a structural diagram of another display device provided in an embodiment of the present application.
- the display device may include an eye tracking module 501, a data transmission module 502, a display control module 503, and a prism grating module 504.
- the eye tracking module 501, the data transmission module 502, and the display control module 503 are connected via a communication network.
- the eye tracking module 501 is used to capture the gaze position of the target user on the display panel, calculate and determine the coordinates of the gaze position, and send the coordinates to the data transmission module 502.
- the eye tracking module 501 includes, but is not limited to, a binocular camera, a red, green, and blue depth camera (RGB-D camera), an infrared eye tracking camera, etc.
- the eye tracking module 501 can be installed in the middle of the top or bottom of the display device to ensure that the target user's eye position and eye movement direction can be directly captured.
- the data transmission module 502 can achieve efficient and stable data transmission through various schemes such as SPI (serial peripheral interface), USB (universal serial bus) or PCIe (PCI Express, a high-speed serial computer expansion bus standard).
- SPI serial peripheral interface
- USB universal serial bus
- PCIe PCI Express
- the SPI interface, with its high real-time performance and high-speed data transmission capability, can meet application scenarios with extremely demanding requirements for data transmission timeliness;
- the USB interface, with its high bandwidth and flexible, convenient characteristics, is an ideal choice for transmitting large amounts of data;
- the PCIe interface, as the standard for internal high-speed device connections, offers outstanding performance and shows significant advantages when extremely high data transmission rates are required.
- the data transmission module 502 can format the coordinates of the gaze position, i.e., the eye position coordinates (x, y, z), and encapsulate them into a standard data packet. Subsequently, a high-speed data bus (including but not limited to SPI, USB, PCIe, etc.) is used to establish a connection between the eye tracking module 501 and the data transmission module 502. The data transmission module 502 can send the eye position coordinates (x, y, z) of the target user to the display control module 503, which will perform subsequent processing. This design ensures low latency and high bandwidth during data transmission, thereby achieving precise synchronization control of the camera's real-time capture of the eye coordinate position and the display panel pixel arrangement.
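A minimal sketch of the encapsulation step, assuming a simple fixed binary layout (the document does not specify a packet format; the header byte and field order below are invented for illustration):

```python
import struct

def pack_gaze_packet(x: float, y: float, z: float) -> bytes:
    """Format the eye position coordinates (x, y, z) into a fixed-layout
    binary packet suitable for a high-speed bus (SPI, USB, PCIe).
    The 0xA5 sync byte and little-endian float layout are assumptions."""
    HEADER = 0xA5
    return struct.pack("<Bfff", HEADER, x, y, z)  # 1 header byte + three 32-bit floats

# e.g. pack_gaze_packet(120.5, 64.0, 550.0) yields a 13-byte packet
```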
- the embodiments of the present application can acquire and transmit accurate eye coordinate information to the display panel in real time, thereby achieving synchronous adjustment of the display panel. This process not only improves the efficiency and accuracy of data transmission, but also ensures the stability and reliability of the entire human eye tracking and display device.
- the display control module 503 receives the eye position coordinates (x, y, z) of the target user sent by the data transmission module 502.
- the display control module 503 can dynamically adjust the grayscale value of the pixel on the display panel based on the eye position coordinates and the screen coordinate data of the gaze point through a transition processing method, thereby ensuring that the user can observe the best 3D visual effect at different positions.
- the display control module 503 receives the eye position coordinates (including but not limited to position information on the x, y, and z axes) and the calculated coordinate data of the gaze point on the display panel provided by the eye tracking module 501. Subsequently, the display control module 503 uses a transition processing method to accurately and smoothly adjust the grayscale values of corresponding pixels on the display panel based on the real-time changes in the eye position coordinates and the gaze point screen coordinates.
- the display control module 503 not only considers the continuity and smoothness of eye movements, but also fully incorporates the visual perception characteristics of the human eye to ensure a natural and coherent visual effect when adjusting pixel grayscale values. In this way, the display control module 503 can dynamically optimize the image content on the display panel, allowing users to obtain the best 3D viewing experience even at different viewing positions.
- the display control module 503 technical solution proposed in the embodiment of the present application is also highly flexible and scalable, and can adapt to different types of display panels and 3D display technologies, thereby being widely used in various 3D display devices and systems, providing users with a more realistic and immersive visual experience.
- the light splitting effect of the prism grating module 504 can enable the target user to view the optimal 3D image effect at any angle.
- the embodiment of the present application can adopt the 3D display technology of cylindrical prisms, aiming to provide the target user with a stereoscopic and high-quality 3D visual experience by precisely controlling the separation and guidance of light.
- this 3D display technology utilizes the unique optical properties of a cylindrical prism to separate light from the display panel into different directions. This process involves cleverly installing the cylindrical prism at the front of the display panel. Its internal lens structure precisely controls the light emitted by each pixel, ensuring it is directed at a predetermined angle. This allows the left and right eyes to receive specially processed, differentiated image information, creating a 3D effect in the brain.
- the present invention also proposes a cylindrical prism installation method. Specifically, the cylindrical prism is tilted at a certain angle relative to the display panel. This design not only compensates for the loss of image resolution in both horizontal and vertical directions caused by parallax, maintaining high clarity in all directions, but also effectively reduces moiré patterns, avoiding disruptive streaks in the image, further enhancing the user's visual experience.
- the display device described in the embodiments of the present application is intended to more clearly illustrate the technical solutions of the embodiments, and does not constitute a limitation on the technical solutions provided therein. Those of ordinary skill in the art will appreciate that, as display devices evolve and other display devices emerge, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
- the embodiment of the present application can control and adjust the grayscale value of the sub-pixel to avoid crosstalk problems.
- the grayscale compensation method may include S601-S602.
- S601 can also be referred to as the "determining multiple gaze areas" process
- S602 can also be referred to as the "controlling the sub-pixels of the corresponding gaze areas according to different control strategies" process.
- S601-S602 are described in detail below.
- S601: Determine a plurality of gaze areas corresponding to the display panel based on a gaze point position of a target user within the display panel.
- the display device can obtain the gaze point position (gaze point coordinates) of the target user in real time through the human eye tracking module, and then the display device can divide the display panel into multiple gaze areas according to the gaze point position and preset distances.
- the multiple gaze areas include, but are not limited to, a central area, an edge area, and a peripheral area.
- the central area is the area at the center of the gaze point position; the edge area is the area adjacent to the central area; and the peripheral area is the area excluding the central area and the edge area.
- expanding to both sides by a certain preset distance (radius r1) yields a rectangular central area.
- alternatively, expanding outward in all directions by radius r1 yields a circular central area.
- This central area is an important area for the target user to gaze at. Since the fovea is the area with the most acute visual perception on the retina and is responsible for high-resolution image processing, the target user is also most sensitive to the visual effects in this central area. Accordingly, the display device has the highest requirements for visual processing in this central area.
- the display device continues to expand by a certain preset distance (radius r2) on both sides beyond the central area to obtain a rectangular edge area.
- alternatively, expanding outward by radius r2 yields a circular edge area.
- the edge area can also be understood as the area surrounding the central area, which can be defined as the area between the radius r1 and the radius r2. It should be understood that there are still higher visual requirements in this edge area, especially during the rapid eye movement of the target user.
- the peripheral area may be the area outside radius r2, which is usually not directly observed and therefore has the lowest visual demand.
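The circular variant of this division can be sketched as follows; the function and parameter names are illustrative, not from the patent:

```python
def classify_gaze_area(px: float, py: float,
                       gx: float, gy: float,
                       r1: float, r2: float) -> str:
    """Classify a panel position (px, py) relative to the gaze point
    (gx, gy): within r1 -> central area, between r1 and r2 -> edge area,
    outside r2 -> peripheral area."""
    dist = ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5
    if dist <= r1:
        return "central"
    if dist <= r2:
        return "edge"
    return "peripheral"
```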
- the display device can dynamically adjust the grayscale values of sub-pixels in multiple viewing areas according to the actual size and resolution of the display panel, thereby applying differentiated transition processing solutions in different areas to optimize the naked-eye 3D display effect.
- At least two of the multiple gaze areas correspond to different control strategies.
- the display panel corresponds to multiple cycles, one cycle includes multiple gaze points, and one gaze point corresponds to one area, and the area corresponding to each gaze point includes a transition area and a non-transition area.
- the transition region is the region between the critical lines of the regions corresponding to any two gaze points. It is understood that when the center point of a sub-pixel is in the transition region between the critical lines of the regions corresponding to the two gaze points, the sub-pixel will appear to span the regions corresponding to the two gaze points.
- the non-transition area is the area other than the transition area; a sub-pixel whose center point is in the non-transition area lies entirely within the area corresponding to a single gaze point, that is, the sub-pixel does not span the areas corresponding to two gaze points.
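The transition test implied by these definitions can be sketched as below: a sub-pixel spans two gaze point areas when its center lies within half a sub-pixel width of a critical line. This reading follows from the half-sub-pixel transition bands described later for Figure 10; the names are illustrative.

```python
def is_in_transition(center_x: float, critical_lines: list[float],
                     subpixel_width: float) -> bool:
    """Return True if a sub-pixel centered at center_x spans two gaze
    point areas, i.e. its center lies within half a sub-pixel width of
    any critical line between adjacent gaze point areas."""
    half = subpixel_width / 2
    return any(abs(center_x - line) < half for line in critical_lines)
```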
- the display panel corresponds to multiple cycles, and each cycle includes multiple gaze points, such as gaze point 1, gaze point 2, gaze point 3, gaze point 4, and gaze point 5.
- Each gaze point corresponds to an area, and the area corresponding to each gaze point includes a transition area and a non-transition area.
- the transition area is the shaded area in FIG8, and the non-transition area is the unshaded area in FIG8.
- there are transition areas and non-transition areas within the central area, edge area, and peripheral area; that is to say, the central area, edge area, and peripheral area all overlap with the areas corresponding to the gaze points.
- Figures 7 and 8 are combined to form the region diagram shown in Figure 9a, taking gaze points 1 and 2 as examples.
- with the target user's gaze point (point A) as the center, the region is expanded to both sides by a predetermined distance (radius r1) to obtain the central area, and then expanded to both sides by a further predetermined distance (radius r2) to obtain the edge area.
- the peripheral region can be the region outside radius r2.
- the gaze point position (point A) is in the area corresponding to gaze point 1, and since the central area obtained above spans the areas corresponding to gaze point 1 and gaze point 2, the central area includes a transition area (shaded part) and a non-transition area (non-shaded part).
- the edge area does not include a transition area and a non-transition area.
- the embodiment of the present application is only an example for easy understanding. In actual scenarios, the edge area may also include a transition area and a non-transition area, and the embodiment of the present application is not limited to this.
- For another example, combining Figures 7 and 8 forms the region diagram shown in Figure 9b.
- the display panel corresponds to multiple cycles, and one cycle includes multiple gaze points, such as gaze point 1, gaze point 2, gaze point 3, gaze point 4, and gaze point 5.
- expanding to both sides by a certain preset distance (radius r1) yields the central area,
- and further expanding to both sides by a certain preset distance (radius r2) yields the edge area.
- the peripheral area can be the area outside the radius r2. It should be understood that different control strategies will be adopted for point B in the central area, point C in the edge area, and point D in the peripheral area to adjust the grayscale value.
- the grayscale values of the sub-pixels in the transition area among the multiple sub-pixels are reduced and/or the grayscale values of the sub-pixels in the non-transition area among the multiple sub-pixels are increased.
- the processing method of the display device can be any one of the following methods (1), (2), (3), (4), and (5).
- Method (1): For multiple sub-pixels in the central area, the grayscale values of the sub-pixels in the transition area are reduced.
- the grayscale coefficient of the first sub-pixel is determined based on the position of the center point of the first sub-pixel in the target gaze point area and the width of the first area, and the grayscale value of the first sub-pixel is reduced based on the grayscale coefficient of the first sub-pixel and the first image grayscale value of the first sub-pixel in the target gaze point area.
- the first sub-pixel is any sub-pixel in the transition region among the multiple sub-pixels in the central region.
- the first image grayscale value is used to represent the grayscale value of the first sub-pixel in the first gazing point region, or to represent the grayscale value of the first sub-pixel in the second gazing point region.
- the target gaze point area is an area among the multiple gaze point areas that has an impact on the grayscale value of the first sub-pixel.
- the target gaze point area includes a first gaze point area and a second gaze point area, wherein one of the first gaze point area and the second gaze point area is the area corresponding to the gaze point position, and the other is an area adjacent to the area corresponding to the gaze point position.
- the display device can determine the target gaze point area based on the gaze point location.
- the gaze point position (point A) is in the area corresponding to gaze point 1, then the area corresponding to gaze point 1 is the first gaze point area, and correspondingly, the area corresponding to gaze point 2 adjacent to the first gaze point area is the second gaze point area.
- for another example, if the gaze point position (point A) is in the area corresponding to gaze point 2, then the area corresponding to gaze point 2 is the first gaze point area.
- taking the centerline of the area corresponding to gaze point 2 as the dividing line: if the gaze point position (point A) is located in the left half of the area corresponding to gaze point 2, then the second gaze point area is the area corresponding to gaze point 1. It should be understood that in this case, the areas corresponding to gaze point 1 and gaze point 2 affect the grayscale value of the first sub-pixel.
- if the gaze point position (point A) is located in the right half of the area corresponding to gaze point 2, then the second gaze point area is the area corresponding to gaze point 3; in this case, the areas corresponding to gaze point 2 and gaze point 3 affect the grayscale value of the first sub-pixel.
- similarly, if the gaze point position (point A) is in the area corresponding to gaze point 4, then the area corresponding to gaze point 4 is the first gaze point area; if the gaze point position is located in the left half of that area, the second gaze point area is the area corresponding to gaze point 3, and if it is located in the right half, the second gaze point area is the area corresponding to gaze point 5. As shown in Figure 9b, the second gaze point area is the area corresponding to gaze point 5.
- the display device may determine the target gaze point area based on the positions of the sub-pixels, where the sub-pixels may be the first sub-pixel, the second sub-pixel, or the third sub-pixel.
- Point B can be understood as the first sub-pixel. If point B is in the area corresponding to gaze point 4, then the area corresponding to gaze point 4 is the first gaze point area. Taking the center line of the area corresponding to gaze point 4 as the dividing line, if point B is located in the left area of the area corresponding to gaze point 4, then the second gaze point area is the area corresponding to gaze point 3. If point B is located in the right area of the area corresponding to gaze point 4, then the second gaze point area is the area corresponding to gaze point 5. As shown in Figure 9b, the second gaze point area is the area corresponding to gaze point 5.
- Point C can be understood as the third sub-pixel. If point C is in the area corresponding to gaze point 5, then the area corresponding to gaze point 5 is the first gaze point area. Taking the center line of the area corresponding to gaze point 5 as the dividing line, if point C is located in the left area of the area corresponding to gaze point 5, then the second gaze point area is the area corresponding to gaze point 4. If point C is located in the right area of the area corresponding to gaze point 5, then the second gaze point area is the area corresponding to gaze point 1 in the adjacent cycle. As shown in Figure 9b, the second gaze point area is the area corresponding to gaze point 4.
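The centerline rule in these examples can be sketched as follows, indexing the gaze point areas 0..n-1 within a cycle and wrapping to the adjacent cycle at the edges (as in the gaze point 5 example). The indexing scheme is an assumption for illustration:

```python
def target_gaze_point_areas(point_x: float, region_index: int,
                            region_width: float, num_regions: int) -> tuple[int, int]:
    """Pick the first and second gaze point areas for a point: the area
    containing the point is the first; the second is the neighbour on the
    side of the area's centerline where the point falls."""
    centerline = region_index * region_width + region_width / 2
    if point_x < centerline:
        neighbour = (region_index - 1) % num_regions  # left neighbour (wraps to previous cycle)
    else:
        neighbour = (region_index + 1) % num_regions  # right neighbour (wraps to next cycle)
    return region_index, neighbour
```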
- the transition area provided in the embodiment of the present application includes, but is not limited to, a first sub-transition area and a second sub-transition area.
- the first sub-transition area is the part of the transition area adjacent to the boundary of a gaze point area.
- the second sub-transition area is the part of the transition area other than the first sub-transition area.
- take the target gaze point area to consist of the first gaze point area (the area corresponding to gaze point 1) and the second gaze point area (the area corresponding to gaze point 2).
- the width of one subpixel can be D-H; the width of 0.5 subpixels can be A-C, E-G, I-K, and so on. If the center point of the first subpixel is in the C-D area, the first subpixel lies completely in the first gaze point area; if the center point of the first subpixel is in the H-I area, the first subpixel lies completely in the second gaze point area.
- A-C, D-H, and I-K are transition areas.
- A-B, E-F, F-G, and J-K are adjacent to the boundary of the gaze area
- A-B, E-F, F-G, and J-K can be the first sub-transition area.
- the remaining parts of the transition areas, namely B-C, D-E, G-H, and I-J, can be the second sub-transition areas.
- the reduced grayscale value of the first sub-pixel is calculated as follows.
- the grayscale coefficient of the first sub-pixel is determined based on the position of the center point of the first sub-pixel in the target gaze point area and the width of the first area.
- the first area is a partial area in the target gaze point area.
- position is the position of the center point of the first subpixel in the target gaze point area, that is, the distance between the center point of the first subpixel and the leftmost side of the target gaze point area; line_f is the width of the first area, that is, the width from A to F; and d is the width of the area between two adjacent labeled points.
- the width of the E-G area is 2d, where 0 < d < subpixel/2; the display device adjusts the range of the transition area by adjusting the size of d.
- the display device reduces the grayscale value of the first subpixel based on the grayscale coefficient of the first subpixel and the image grayscale value of the first subpixel in the target gaze point area to obtain a reduced grayscale value of the first subpixel.
- the image grayscale value is used to represent the grayscale value of the first sub-pixel in different gaze point areas in the target gaze point area.
- the display device may substitute the grayscale coefficient k_score of the first subpixel into the following formula 2 to determine the reduced grayscale value of the first subpixel.
- value = k_score * value_right (Formula 2)
- value_right is the grayscale value of the first sub-pixel in the first gaze point area.
- first sub-pixels at different positions have different grayscale coefficients, and therefore different reduced grayscale values.
- the display device uses Formula 1 and Formula 2 to obtain the reduced grayscale value of the first sub-pixel, and the current grayscale value of the first sub-pixel can be adjusted to the reduced grayscale value.
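- To make the computation concrete, the following is a minimal Python sketch of method (1). Formula 1 itself is not reproduced in this text, so the linear ramp used for k_score below is an assumption; the names position, line_f, and value_right follow the definitions above.

```python
def reduced_gray(position: float, line_f: float, value_right: int) -> int:
    """Sketch of method (1): attenuate a transition sub-pixel.

    position    -- distance from the sub-pixel's center point to the leftmost
                   side of the target gaze point area
    line_f      -- width of the first area (A to F in Figure 10)
    value_right -- image grayscale value of the first sub-pixel in the
                   first gaze point area (used by Formula 2)
    """
    # Formula 1 is not reproduced in the text; a linear ramp over the first
    # area is one plausible reading (hypothetical).
    k_score = max(0.0, min(1.0, position / line_f))
    # Formula 2: value = k_score * value_right
    return round(k_score * value_right)
```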
- Method (2) For the plurality of sub-pixels in the central region, the grayscale values of the sub-pixels in the transition region are reduced. For example, the grayscale value of the first sub-pixel is blacked out to reduce the grayscale value of the first sub-pixel, thereby obtaining the reduced grayscale value of the first sub-pixel.
- the first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area.
- the display device sets the grayscale value of the first subpixel to a preset value.
- As shown in Figure 10, taking a preset value of 0 as an example: if the center point of the first subpixel is in the A-C region, the D-H region, or the I-K region, the display device sets the grayscale value of the first subpixel to 0, i.e., performs black processing on the subpixels in the transition region.
- the display effect of the image can be as shown in Figure 12.
- when the center point of a first subpixel is within the first sub-transition region, the display device sets the grayscale value of the first subpixel to a preset value. As shown in Figure 10, taking a preset value of 0 as an example: if the center point of the first subpixel is within any of the first sub-transition regions A-B, E-F, F-G, and J-K, the display device may set the grayscale value of the first subpixel to 0, i.e., perform black processing on the subpixels within the transition region.
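- A minimal sketch of method (2), assuming the transition spans are available as coordinate pairs on the same A-K axis as Figure 10 (the span list and helper name are hypothetical):

```python
PRESET_BLACK = 0  # preset value; 0 is the example used in the text

def black_out_if_in_transition(center: float, transition_spans) -> int | None:
    """Sketch of method (2): if the sub-pixel's center point falls inside a
    (sub-)transition span, force its grayscale to the preset value.

    transition_spans -- e.g. the first sub-transition areas A-B, E-F, F-G, J-K,
                        given as (left, right) coordinate pairs (assumed layout).
    """
    for left, right in transition_spans:
        if left <= center <= right:
            return PRESET_BLACK
    return None  # outside every span: leave the grayscale value unchanged
```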
- Method (3) For the multiple sub-pixels in the central area, increase the grayscale value of the sub-pixels in the non-transition area among the multiple sub-pixels.
- the display device increases the grayscale value of the second subpixel based on the position of the center point of the second subpixel in the target gaze point area and the width of the target gaze point area to obtain an increased grayscale value of the second subpixel.
- the second sub-pixel is any sub-pixel in the non-transition area among the multiple sub-pixels in the central area.
- the display device determines the grayscale coefficient of the second subpixel based on the position of the center point of the second subpixel in the target gaze point area and the width of the target gaze point area, and determines the grayscale value of the second subpixel based on the grayscale coefficient of the second subpixel and the second image grayscale value of the second subpixel in the target gaze point area, and then uses the minimum grayscale value between the grayscale value of the second subpixel and the grayscale threshold as the increased grayscale value of the second subpixel.
- the second image grayscale value is used to represent the grayscale value of the second sub-pixel in the first gaze point area, or is used to represent the grayscale value of the second sub-pixel in the second gaze point area.
- the non-transition region may also be taken to include the second sub-transition region.
- in that case, the non-transition region may be B-E and G-J.
- otherwise, the non-transition region may be C-D and H-I.
- where k_ratio is the increase coefficient, position is the position of the center point of the second sub-pixel in the target gaze point area, and deltax is the width of the target gaze point area.
- value = k_score * value_right (Formula 4)
- value_right is the grayscale value of the second sub-pixel in the first gaze point area.
- value = k_score * value_left (Formula 5)
- value_left is the grayscale value of the second sub-pixel in the second gaze point area.
- after the display device determines the grayscale value of the second sub-pixel, it compares that value with the grayscale threshold 255 and takes the smaller of the two as the increased grayscale value of the second sub-pixel.
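- A minimal sketch of the clamping step of method (3); the grayscale coefficient k_score is assumed to be already computed (the formula producing it from k_ratio is not reproduced here), and 255 is the grayscale threshold named above:

```python
GRAY_MAX = 255  # grayscale threshold used in the text

def increased_gray(k_score: float, image_gray: int) -> int:
    """Sketch of method (3): boost a non-transition sub-pixel and clamp.

    k_score    -- grayscale coefficient of the second sub-pixel (Formulas 4/5
                  multiply it with value_right or value_left)
    image_gray -- value_right or value_left, depending on which gaze point
                  area the sub-pixel belongs to
    """
    value = k_score * image_gray        # Formula 4 or Formula 5
    return min(round(value), GRAY_MAX)  # take the smaller of value and 255
```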
- Method (4) For the plurality of sub-pixels in the central area, the grayscale values of the sub-pixels in the transition area among the plurality of sub-pixels are reduced, and the grayscale values of the sub-pixels in the non-transition area among the plurality of sub-pixels are increased.
- the display device may combine the above-mentioned method (1) with the method (3), that is, the grayscale values of the sub-pixels in the transition area are reduced by the method of method (1), and the grayscale values of the sub-pixels in the non-transition area are increased by the method of method (3).
- the display effect of the image can be as shown in Figure 13.
- Method (5) For the plurality of sub-pixels in the central area, the grayscale values of the sub-pixels in the transition area among the plurality of sub-pixels are reduced, and the grayscale values of the sub-pixels in the non-transition area among the plurality of sub-pixels are increased.
- the display device may combine the above-mentioned method (2) with the method (3), that is, the grayscale values of the sub-pixels in the transition area are reduced by the method of method (2), and the grayscale values of the sub-pixels in the non-transition area are increased by the method of method (3).
- the display effect of the image can be as shown in Figure 14.
- by reducing the grayscale values of the sub-pixels in the transition area of the central area and increasing the grayscale values of the sub-pixels in the non-transition area, the display device of the embodiment of the present application balances the widths of the high-brightness and low-brightness plateaus across the transition and non-transition areas, effectively enlarging the actual visual range of naked-eye 3D.
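- For illustration, a sketch of how methods (4)/(5) combine a reduction pass with a boost pass over the central area; the predicate and per-sub-pixel helpers are hypothetical stand-ins for methods (1)/(2) and method (3):

```python
def central_area_pass(subpixels, in_transition, reduce_fn, increase_fn):
    """Sketch of methods (4)/(5): one pass over the central area that reduces
    transition sub-pixels and boosts non-transition ones.

    subpixels     -- iterable of (index, current_grayscale) pairs
    in_transition -- predicate: does this sub-pixel's center lie in a
                     transition area? (assumed helper)
    reduce_fn     -- method (1) or (2) reduction for one sub-pixel
    increase_fn   -- method (3) boost for one sub-pixel
    """
    out = {}
    for idx, gray in subpixels:
        out[idx] = reduce_fn(idx, gray) if in_transition(idx) else increase_fn(idx, gray)
    return out
```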
- control strategy for the edge area is as follows:
- the grayscale values of the sub-pixels in the transition area among the multiple sub-pixels are reduced.
- the area ratio of the third sub-pixel is determined based on the position of the center point of the third sub-pixel in the target gaze point area and the width of the third sub-pixel, and the grayscale value of the third sub-pixel is reduced based on the area ratio of the third sub-pixel and the third image grayscale value of the third sub-pixel to obtain the reduced grayscale value of the third sub-pixel.
- the third sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the edge area; the area proportion of the third sub-pixel is used to characterize the proportion of the third sub-pixel in the first gaze point area, or to characterize the proportion of the third sub-pixel in the second gaze point area; the third image grayscale value is used to characterize the grayscale value of the third sub-pixel in the first gaze point area, or to characterize the grayscale value of the third sub-pixel in the second gaze point area.
- the display device can use different formulas to calculate the area ratio of the third sub-pixel when the third sub-pixel is in the first sub-transition region at different positions.
- the display device may determine the area ratio of the third sub-pixel according to any one of the following cases (1), (2), (3), and (4).
- position is the position of the third subpixel's center point within the target gaze point area
- subpixel is the width of one subpixel (the third subpixel). It should be noted that position can be understood as the distance from the third subpixel's center point to the leftmost side of the target gaze point area. For example, if the third subpixel's center point is at point B, then position is the distance from point A to point B.
- position is the position of the center point of the third sub-pixel in the target gaze point area
- subpixel is the width of a sub-pixel (the third sub-pixel)
- deltax is the width of the target gaze point area
- deltax/2 is the width of half the target gaze point area, such as the width from A to F.
- position is the width from A to E, and deltax/2 - position gives the width from E to F.
- subpixel/2 is half the width of a subpixel, that is, the width from P to E.
- the display device adds the width from E to F to the width from P to E to obtain the width from P to F. Using the width from P to F and the width of one subpixel, it calculates the proportion of the third subpixel in the first gaze point area.
- determining the area ratio of the third sub-pixel in the embodiment of the present application can thus be understood as solving for an overlap area.
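- A sketch of the worked case above, under the reading that P is the sub-pixel's left edge, E its center point, and F the midline of the target gaze point area:

```python
def ratio_in_first_area(position: float, subpixel: float, deltax: float) -> float:
    """Fraction of the third sub-pixel lying in the first gaze point area,
    following the worked case above (center point E left of the midline F).

    position -- width from A (left edge of the target gaze point area) to the
                sub-pixel's center point E
    subpixel -- width of one sub-pixel
    deltax   -- width of the target gaze point area; deltax/2 is the width A..F
    """
    e_to_f = deltax / 2 - position  # from the center point E to the midline F
    p_to_e = subpixel / 2           # from the sub-pixel's left edge P to E
    p_to_f = p_to_e + e_to_f        # part of the sub-pixel left of the midline
    return max(0.0, min(1.0, p_to_f / subpixel))
```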
- position is the position of the center point of the third sub-pixel in the target gaze point area
- subpixel is the width of a sub-pixel (the third sub-pixel)
- deltax is the width of the target gaze point area
- position is the position of the center point of the third sub-pixel in the target gaze point area
- subpixel is the width of a sub-pixel (the third sub-pixel)
- deltax is the width of the target gaze point area
- the grayscale value of the third sub-pixel after reduction can be determined through the area ratio.
- the area proportion of the third sub-pixel includes the area proportion of the third sub-pixel in the first gaze point area and the area proportion of the third sub-pixel in the second gaze point area.
- whether the display device uses the grayscale value of the third sub-pixel in the first gaze point area or in the second gaze point area is determined by which of the two areas contains the center point of the third sub-pixel. For example, if the center point of the third sub-pixel is located in the first gaze point area, the grayscale value of the third sub-pixel in the first gaze point area is used; if it is located in the second gaze point area, the grayscale value of the third sub-pixel in the second gaze point area is used.
- alternatively, the display device uses the grayscale value of the third sub-pixel in the first gaze point area or in the second gaze point area depending on whether the first sub-transition area containing the center point of the third sub-pixel lies in the first or the second gaze point area. For example, if that first sub-transition area is in the first gaze point area, the grayscale value of the third sub-pixel in the first gaze point area is used; if it is in the second gaze point area, the grayscale value of the third sub-pixel in the second gaze point area is used.
- the display device reduces the grayscale value of the third subpixel based on the proportion of the third subpixel in the first gaze point area, the proportion of the third subpixel in the second gaze point area, and the grayscale value of the third subpixel in the first gaze point area, to obtain the reduced grayscale value of the third subpixel.
- alternatively, the grayscale value of the third sub-pixel is reduced based on the two proportions and the grayscale value of the third sub-pixel in the second gaze point area, to obtain the reduced grayscale value of the third sub-pixel.
- the display device can use different formulas to obtain the reduced grayscale value of the third sub-pixel when the third sub-pixel is in the first sub-transition region at different positions. Specifically, the display device can determine the target grayscale value according to either of the following cases (5) and (6).
- in case (5), the display device uses the grayscale value of the third subpixel in the first gaze point area to determine the reduced grayscale value of the third subpixel. Combining the above cases (1) and (2), as shown in Figure 10.
- value_right is the grayscale value of the third sub-pixel in the first gaze point area.
- in case (6), the display device uses the grayscale value of the third subpixel in the second gaze point area to determine the reduced grayscale value of the third subpixel. Combining the above cases (3) and (4), as shown in Figure 10.
- value_left is the grayscale value of the third sub-pixel in the second gaze point area.
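- Formulas 10 and 11 are not reproduced in this text; a plausible reading, consistent with the area-ratio discussion above, scales the image grayscale value by the sub-pixel's proportion in the corresponding gaze point area:

```python
def reduced_gray_edge(ratio_in_area: float, image_gray: int) -> int:
    """Sketch of cases (5)/(6): scale the third sub-pixel's image grayscale
    (value_right for case (5), value_left for case (6)) by its area ratio.

    The exact Formulas 10/11 are not reproduced in the text; proportional
    scaling is an assumed form, not the patent's confirmed formula.
    """
    return round(ratio_in_area * image_gray)
```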
- the display effect of the image can be as shown in Figure 16.
- the target user is at gaze point 1.
- the light emitted by the pixels on the 3D display panel is split by the prism and enters the left and right eyes of the target user respectively.
- the image formed by the area viewed by the left eye through the prism grating is the left image.
- the image formed by the area viewed by the right eye through the prism grating is the right image.
- the third sub-pixel can correspond to an image grayscale value in the first gaze point area, namely the first grayscale value.
- it can also correspond to an image grayscale value in the second gaze point area, namely the second grayscale value.
- based on the first grayscale value and the second grayscale value, the grayscale value of the third subpixel is reduced to obtain the reduced grayscale value of the third subpixel.
- the first grayscale value is determined based on the proportion of the third sub-pixel in the first gaze point area and the image grayscale value of the third sub-pixel in the first gaze point area;
- the second grayscale value is determined based on the proportion of the third sub-pixel in the second gaze point area and the image grayscale value of the third sub-pixel in the second gaze point area.
- the display device can determine a first grayscale value based on the proportion of the third sub-pixel in the first gaze point area and the image grayscale value of the third sub-pixel in the first gaze point area, and determine a second grayscale value based on the proportion of the third sub-pixel in the second gaze point area and the grayscale value of the third sub-pixel in the second gaze point area.
- the first grayscale value is the adjusted grayscale value of the third sub-pixel in the first gaze point area
- the second grayscale value is the adjusted grayscale value of the third sub-pixel in the second gaze point area
- the display device can determine the reduced grayscale value of the third sub-pixel based on the first grayscale value and the second grayscale value, which avoids the poor results that occur when the third sub-pixel occupies only a small portion of a gaze point area but has a large grayscale value in that area.
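- Formulas 12 and 13 are likewise not reproduced; an area-weighted blend of the two gaze point areas' image grayscale values is one plausible form (an assumption, not the patent's confirmed formula):

```python
def reduced_gray_blend(ratio1: float, gray1: int, ratio2: float, gray2: int) -> int:
    """Sketch of the blended reduction: weight each gaze point area's image
    grayscale by the sub-pixel's proportion in that area and sum.

    ratio1/gray1 -- proportion and image grayscale in the first gaze point area
    ratio2/gray2 -- proportion and image grayscale in the second gaze point area
    (Formulas 12/13 are not reproduced; this weighted sum is an assumption.)
    """
    first_gray = ratio1 * gray1    # "first grayscale value"
    second_gray = ratio2 * gray2   # "second grayscale value"
    return min(255, round(first_gray + second_gray))
```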
- the display device can use different formulas to determine the grayscale value of the third sub-pixel after reduction when the third sub-pixel is in the first sub-transition region at different positions. Specifically, the display device can determine the grayscale value of the third sub-pixel after reduction according to either of the following cases (7) and (8).
- Case (8), combined with the above cases (3) and (4), is shown in Figure 10.
- the grayscale value of the third sub-pixel can be better adjusted through the above cases (7) and (8).
- the display effect of the image can be as shown in Figure 18.
- the display device can use different formulas to obtain the area ratio of the third sub-pixel. Specifically, the display device can determine the area ratio of the third sub-pixel according to any one of the following cases (9), (10), (11), and (12).
- the grayscale value of the third sub-pixel after reduction can be determined through the area ratio.
- the display device can determine the grayscale value of the third subpixel after reduction based on the difference between the proportion of the third subpixel in the first gaze point area and the proportion of the third subpixel in the second gaze point area, and the grayscale value of the third subpixel in the second gaze point area.
- the display device can also determine the grayscale value of the third subpixel after reduction based on the difference between the proportion of the third subpixel in the first gaze point area and the proportion of the third subpixel in the second gaze point area, and the grayscale value of the third subpixel in the first gaze point area.
- the display device can use different formulas to determine the grayscale value of the third sub-pixel after reduction when the third sub-pixel is in the second sub-transition region at different positions. Specifically, the display device can determine the grayscale value of the third sub-pixel after reduction according to either of the following cases (13) and (14).
- in case (13), the display device uses the grayscale value of the third sub-pixel in the first gaze point area to determine the reduced grayscale value of the third sub-pixel. Combining the above cases (9) and (10), as shown in Figure 10.
- the display device can determine the grayscale value of the third sub-pixel after the reduction by using formula 10 in the above case (5).
- in case (14), the display device uses the grayscale value of the third sub-pixel in the second gaze point area to determine the reduced grayscale value of the third sub-pixel. Combining the above cases (11) and (12), as shown in Figure 10.
- the display device can determine the grayscale value of the third sub-pixel after reduction by using formula 11 in the above case (6).
- the display device determines the reduced grayscale value of the third sub-pixel through case (13) or case (14), and adjusts the current grayscale value of the third sub-pixel accordingly.
- the display effect of the adjusted image can be as shown in Figure 14.
- the display device may determine the first grayscale value based on the proportion of the third sub-pixel in the first gaze point area and the image grayscale value of the third sub-pixel in the first gaze point area, and determine the second grayscale value based on the proportion of the third sub-pixel in the second gaze point area and the grayscale value of the third sub-pixel in the second gaze point area.
- the first grayscale value is the adjusted image grayscale value of the third sub-pixel in the first gaze point area
- the second grayscale value is the adjusted image grayscale value of the third sub-pixel in the second gaze point area
- the display device can reduce the grayscale value of the third sub-pixel according to the first grayscale value and the second grayscale value to obtain the reduced grayscale value, thereby avoiding the poor results that occur when the third sub-pixel occupies only a small portion of a gaze point area but has a large grayscale value in that area.
- the display device can use different formulas to determine the grayscale value of the third sub-pixel after reduction when the third sub-pixel is in the second sub-transition region at different positions. Specifically, the display device can determine the grayscale value of the third sub-pixel after reduction according to either of the following cases (15) and (16).
- Case (15), combined with the above cases (9) and (10), is shown in Figure 10.
- the display device can determine the grayscale value of the third sub-pixel after reduction by using Formula 12 in the above case (7).
- Case (16), combined with the above cases (11) and (12), is shown in Figure 10.
- the display device can determine the grayscale value of the third sub-pixel after reduction by using formula 13 in the above case (8).
- the display device determines the reduced grayscale value of the third sub-pixel through case (15) or case (16), and adjusts the current grayscale value of the third sub-pixel accordingly.
- the display effect of the adjusted image can be as shown in Figure 19.
- a 31.5-inch 8K naked-eye 3D engineering prototype was used as an example to test the crosstalk rate of combined black-and-white images and to photograph the 3D effect of a flower image.
- the display device reduced the grayscale values of the subpixels in the edge regions.
- a crosstalk rate test was performed on the combined image, and the actual 3D effect of the flower was photographed.
- the method of reducing the grayscale values of the subpixels in the edge regions reduced the crosstalk rate by approximately 1%, significantly reducing ghosting and effectively improving the 3D effect.
- the grayscale value of the third sub-pixel is subjected to black processing to determine the reduced grayscale value of the third sub-pixel.
- the third sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the edge area.
- when the center point of the third subpixel is in the transition region, the display device sets the grayscale value of the third subpixel to a preset value. As shown in Figure 10, taking a preset value of 0 as an example: when the center point of the third subpixel is in the A-C region, the D-H region, or the I-K region, the display device sets the grayscale value of the third subpixel to 0, i.e., performs black processing on the subpixels in the transition region.
- control strategy for the peripheral area is as follows:
- no adjustment control is performed on the multiple sub-pixels in the peripheral area.
- the peripheral area is the area other than the central area and the edge area. Since the peripheral area is usually not looked at directly, its visual requirements are the lowest; the grayscale values of the sub-pixels in the peripheral area can therefore be left unprocessed, which helps offset the brightness reduction caused by lowering the grayscale values of the sub-pixels in the other areas (the central area and the edge area).
- the embodiments of the present application can adjust the coverage of the transition area and the non-transition area based on actual usage. For example, the coverage of the transition area can be increased and/or the coverage of the non-transition area can be decreased. If the coverage of the transition area is increased, the range of the transition processing will also be increased, which will correspondingly enhance the effect of the display device on the grayscale value transition processing of the sub-pixels.
- the display device of the present application embodiment can determine multiple different gaze areas corresponding to the display panel based on the target user's gaze point in real time, and flexibly adjust and control the grayscale values of sub-pixels in different gaze areas.
- different control strategies are adopted for different gaze areas, thereby effectively reducing crosstalk rate, expanding the visible range of naked-eye 3D viewing, and improving the user's viewing experience.
- the display device can determine the positions of the center points of different sub-pixels in the target gaze point area.
- the display device may determine the target gaze point area based on the gaze point position, and determine the position of the center point of the target sub-pixel in the target gaze point area based on the position of the center sub-pixel in the row where the target sub-pixel is located.
- the target sub-pixel is any one of the first sub-pixel, the second sub-pixel, and the third sub-pixel mentioned above.
- the display device can obtain the width of the target gaze point area, and based on the width of the target gaze point area and the position of the central sub-pixel, determine the width of the second area in the row where the target sub-pixel is located, and then determine the position of the center point of the target sub-pixel in the target gaze point area based on the width of the second area.
- the second area is an incomplete area on the left side of the row where the target sub-pixel is located.
- the distance between the human eye and the prism in the display panel is z
- the distance between the prism and the display panel in the display panel is h
- the horizontal pitch value of the prism is pitch.
- edge_distance is the width of the leftmost incomplete area of the row
- xi is the position of the central sub-pixel in the i-th row (the position of the central sub-pixel)
- % denotes the modulo (remainder) operation.
- position is the position of the center point of the target sub-pixel in the target gaze point area.
- point B is the position of the center sub-pixel x i
- point C is the center point of the target sub-pixel.
- for example, if the position of the central sub-pixel xi is 13, the width deltax of the target gaze point area is 5, and the width edge_distance of the leftmost incomplete area of the row is 2, the display device can use the quotient 3 with a remainder of 2 to determine that the center point of the target sub-pixel is at the second position of the fourth target gaze point area.
- similarly, if the position of the central sub-pixel xi is 8, deltax is 5, and edge_distance is 2, the display device can use the quotient 2 with a remainder of 1 to determine that the center point of the target sub-pixel is at the first position of the third target gaze point area.
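- Formulas 14-16 are not reproduced in this text, and the two worked examples above do not fix the offset convention unambiguously; the following sketch shows one plausible quotient/remainder reading:

```python
def locate_center(xi: int, edge_distance: int, deltax: int) -> tuple[int, int]:
    """Sketch of the quotient/remainder step behind Formulas 14-16 (which are
    not reproduced here): strip the incomplete left-hand area from the row
    coordinate, then divide by the gaze point area width.

    xi            -- position of the central sub-pixel in the row (xi > edge_distance)
    edge_distance -- width of the leftmost incomplete area of the row
    deltax        -- width of one target gaze point area

    Returns (quotient, remainder); the text reads the quotient as indexing the
    target gaze point area and the remainder as the position within it. The
    exact 0- or 1-based counting is an assumption, since the worked examples
    do not pin it down.
    """
    quotient, remainder = divmod(xi - edge_distance, deltax)
    return quotient, remainder
```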
- the “row” in the embodiment of the present application can be understood as the “row where the target sub-pixel is located”. That is, the “position of the central sub-pixel” can be understood as the “position of the central sub-pixel in the row where the target sub-pixel is located”.
- the width of a target gaze point area spans multiple sub-pixels.
- the width of a sub-pixel can be 1 pixel; 0-2.5 is then the first gaze point area of the target gaze point area, and 2.5-5 is the second gaze point area of the target gaze point area.
- consider the target sub-pixel's center being at the first position of the third target gaze point area. If the target sub-pixel's center is at the first position, then, since the width of a sub-pixel is 1 pixel, the target sub-pixel spans 0.5 to 1.5. The target sub-pixel is therefore completely within the first gaze point area and not within the transition region. However, the sub-pixel immediately preceding the target sub-pixel is within the transition region, i.e., it spans from 4.5 of the second target gaze point area to 0.5 of the third target gaze point area.
- the display device of the embodiment of the present application can obtain the position of the center point of the target sub-pixel in the target gaze point area through Formula 14-Formula 16, so as to facilitate subsequent adjustment and control of the grayscale value of the target sub-pixel.
- the display device provided in the embodiment of the present application further includes a prism disposed above the display panel.
- the prism may include a cylindrical prism, which is not limited in the embodiment of the present application.
- the placement of the prism is determined based on device parameters of the display device.
- the device parameters of the display device include but are not limited to: the horizontal aperture of the prism, the arch height of the prism, the distance between the prism and the display panel, the number of sub-pixels of the display panel covered by the prism, the width of the sub-pixels of the display panel, etc.
- z is the distance from the target user's eyes to the display panel
- h is the distance between the prism and the display panel
- D2 is the coverage width of the prism
- D1 is the horizontal aperture of a prism
- D3 is the coverage width corresponding to the distance between the target user's two eyes
- w is the distance between the target user's left eye and right eye, also known as the pupil distance.
- N is the number of sub-pixels on the display panel covered by each cylindrical prism
- subpixel is the width of each sub-pixel on the display panel.
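- The text lists these parameters without their equations; the following similar-triangle sketch is one standard lenticular-geometry reading of their relationships and should be treated as an assumption:

```python
def prism_layout(z: float, h: float, d1: float, w: float, subpixel: float) -> dict:
    """Similar-triangle sketch relating the listed device parameters.

    z        -- distance from the target user's eyes to the display panel
    h        -- distance between the prism and the display panel (z > h)
    d1       -- horizontal aperture of one prism
    w        -- pupil distance (left eye to right eye)
    subpixel -- width of one sub-pixel on the display panel

    The patent text lists the parameters but not the equations; the relations
    below are the standard lenticular-geometry reading and are assumptions.
    """
    d2 = d1 * z / (z - h)  # coverage width of the prism projected on the panel
    d3 = w * h / (z - h)   # panel width covered by the eye spacing
    n = d2 / subpixel      # sub-pixels covered per prism (ideally an integer)
    return {"D2": d2, "D3": d3, "N": n}
```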
- a grayscale control method provided in an embodiment of the present application further includes the following steps S2501 to S2502.
- a display device can track the location of a target user's gaze point within a display panel using an eye tracking module.
- the eye tracking module can execute a calibration procedure to establish a mapping relationship between eye movement and the display screen coordinate system.
- the eye tracking module can capture eye movement data in real time and calculate the corresponding coordinates of the user's gaze point on the screen based on the mapping relationship, i.e., the gaze point position.
- the eye tracking module in the embodiment of the present application can also process the position information of both eyes to determine the three-dimensional position coordinates (x, y, z) of the center of both eyes, i.e., the center of the eyebrows, relative to the center point of the display screen. This enables the embodiment of the present application to more accurately track the target user's line of sight, improving the accuracy and reliability of human-computer interaction.
- S2502: Determine the gaze point position of the target user on the display panel based on the eye movement data.
- the eye tracking module can be a binocular camera.
- two cameras spaced a certain distance apart can be used as input devices.
- the relative positional relationship between the binocular cameras is known and serves as the basis for subsequent calculations.
- the binocular cameras can simultaneously capture images of the same scene and perform preprocessing on the images, including but not limited to denoising and contrast enhancement, to improve the accuracy of subsequent processing.
- the binocular camera uses an image matching algorithm to find corresponding feature points in the images captured by the two cameras and calculate the parallax between the feature points. Based on the parallax principle and known camera parameters (such as focal length and camera spacing), the depth information corresponding to each feature point is calculated. After obtaining the depth information in the scene, the binocular camera can combine human eye characteristics (such as pupil position and eye corner shape) to identify the human eye in the image and infer the position information of the human eye in three-dimensional space based on the depth information.
- parallax refers to the position difference of the same object in two camera images. This parallax is used to reflect the distance relationship between the object and the camera; depth information is used to represent the distance of the object from the camera in three-dimensional space.
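- The textbook stereo relation behind this step can be sketched as follows (a standard computer-vision formula, not one reproduced from the patent):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Standard binocular-stereo relation used as described above:
    depth = focal_length * baseline / disparity.

    focal_px     -- camera focal length, in pixels
    baseline_m   -- spacing between the two cameras, in meters
    disparity_px -- position difference of the matched feature point, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px
```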
- an RGB-D camera integrates multiple high-precision sensors, including an RGB camera and a time-of-flight (TOF) depth sensor.
- the RGB camera captures color image information of the scene, which contains rich visual features and details, providing a foundation for subsequent image processing and recognition.
- the TOF depth sensor uses the time-of-flight principle of light pulses to quickly measure the distance from each point in the scene to the camera, thereby generating a high-precision depth image.
- the RGB camera and the TOF depth sensor can be precisely calibrated so that the data captured by the two can be strictly aligned in space.
- image processing and computer vision algorithms are used to fuse the RGB image and the depth image to extract the human eye features in the scene.
- the RGB-D camera can accurately calculate the precise coordinate position of the human eye in three-dimensional space.
- the infrared eye-tracking camera integrates an infrared light source, an infrared camera, and an image processor.
- the infrared light source emits a high-intensity infrared beam, providing stable and sufficient infrared illumination for the eye tracking process.
- the infrared camera captures infrared light reflected from the eye, forming a clear, high-contrast eye image, ensuring normal operation under various ambient lighting conditions. This provides high-quality visual input for subsequent image processing and data analysis.
- the infrared camera can transmit the eye image to the image processor. The image processor processes, in real time, the eye images captured by the eye movement sensor, analyzes the transmitted data, and calculates the eye position coordinates of the target user; based on the eye position coordinates and the position of the corneal reflection point, it computes the vector from the pupil center to the corneal reflection point as the gaze direction of the target user.
- a gaze direction vector is calculated and extended to the display panel.
- the intersection of the gaze direction vector and the display panel is the screen coordinate of the gaze point.
- the two-dimensional coordinates of the gaze point on the display panel are calculated based on the relationship between the gaze direction vector and the display panel position. If the gaze direction vectors of the two eyes do not completely overlap, the final gaze point coordinates are determined through weighted averaging or other algorithms.
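- A minimal sketch of the ray-panel intersection and the weighted averaging described above, assuming the panel is modeled as the plane z = 0 of the screen coordinate frame:

```python
import numpy as np

def gaze_point_on_panel(eye: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Sketch: intersect a gaze direction vector with the display panel,
    modeled here (an assumption) as the plane z = 0 of the screen frame.

    eye       -- 3D eye position (x, y, z), with z > 0 in front of the panel
    direction -- 3D gaze direction vector pointing toward the panel (dz != 0)
    """
    t = -eye[2] / direction[2]  # ray parameter where the ray meets z = 0
    return eye + t * direction  # (x, y, 0): on-screen coordinates

def fused_gaze_point(p_left: np.ndarray, p_right: np.ndarray,
                     w_left: float = 0.5, w_right: float = 0.5) -> np.ndarray:
    """If the two eyes' gaze rays do not meet at one point, average their
    on-screen intersections (the weighting scheme is an assumption)."""
    return w_left * p_left + w_right * p_right
```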
- the physiological characteristics of the human eye, particularly the high-acuity vision of the fovea, combined with eye movement patterns, the brain's selective attention mechanism in information processing, and the natural pursuit of visual comfort, all contribute to the target user's natural tendency to focus their gaze on a specific area when viewing the screen. This focus not only improves the target user's efficiency in processing visual information but also effectively reduces visual fatigue.
- the embodiment of the present application provides a naked eye 3D display device integrated with human eye tracking technology, and a grayscale control method (display method) thereof.
- the display device uses precision sensors such as high-definition cameras to capture and accurately calculate, in real time, the eye position coordinates (gaze point position) of the target user. The coordinates are then immediately transmitted through the data transmission module to the core of display control, the display control module.
- the display control module controls the grayscale value of the sub-pixel of the display panel based on the received coordinates.
- the display control module can synchronously and intelligently adjust the grayscale value of each pixel on the display panel.
- the target user can enjoy an unparalleled 3D visual feast regardless of their viewing angle.
- This display device and grayscale control method can be applied to high-end 3D displays, such as advanced gaming monitors and virtual reality displays.
- a glasses-free 3D display device without an eye-tracking module and a corresponding grayscale control method (display method) are provided.
- This display device precisely sets display parameters at a preset fixed distance and angle, ensuring that users can experience optimal 3D visual effects at specific viewing angles.
- This display device is particularly suitable for use in scenarios such as 3D advertising screens and 3D displays for exhibitions, providing users with an immersive visual experience.
- the grayscale control device can be divided into functional modules or functional units according to the above method example.
- each functional module or functional unit can be divided according to each function, or two or more functions can be integrated into one processing module.
- the above-mentioned integrated module can be implemented in the form of hardware or in the form of software functional modules or functional units.
- the division of modules or units in the embodiment of the present application is schematic and is only a logical functional division. In actual implementation, there may be other division methods.
- Figure 26 is a structural schematic diagram of a grayscale control device provided in an embodiment of the present application.
- the device is applied to a display device, and the display device includes: a display panel and a processor; the display panel corresponds to multiple gaze points, one gaze point corresponds to one area, and the area corresponding to each gaze point includes a transition area and a non-transition area.
- multiple gaze points are viewpoints from which the target user gazes at the display panel from different angles;
- the transition area is the area between the critical lines of the areas corresponding to any two gaze points, and the non-transition area is the area other than the transition area.
- the device includes a processing unit 2501 and an acquisition unit 2502.
- the processing unit 2501 is configured to: determine multiple gaze areas corresponding to the display panel based on the gaze point position of the target user within the display panel; the processing unit 2501 is also configured to: control the grayscale value of the sub-pixel in each gaze area among the multiple gaze areas based on the control strategy corresponding to the gaze area.
- At least two of the multiple gaze areas correspond to different control strategies.
- the processing unit 2501 is further configured to: determine a target gaze point area based on the gaze point position; the target gaze point area is an area among multiple gaze point areas that affects the grayscale value of the sub-pixel; the target gaze point area includes a first gaze point area and a second gaze point area, and one of the first gaze point area and the second gaze point area is an area corresponding to the gaze point position, and the other is an adjacent area to the area corresponding to the gaze point position.
- the multiple gaze areas include a central area, which is the area at the center of the gaze point position; the processing unit 2501 is specifically configured to: for multiple sub-pixels in the central area, reduce the grayscale value of the sub-pixels in the transition area among the multiple sub-pixels, and/or increase the grayscale value of the sub-pixels in the non-transition area among the multiple sub-pixels.
- the processing unit 2501 is specifically configured to: determine the grayscale coefficient of the first subpixel based on the position of the center point of the first subpixel in the target gaze point area and the width of the first area, and reduce the grayscale value of the first subpixel based on the grayscale coefficient of the first subpixel and the first image grayscale value of the first subpixel in the target gaze point area.
- the first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area; the first area is a partial area in the target gaze point area; the first image grayscale value is used to represent the grayscale value of the first sub-pixel in the first gaze point area, or to represent the grayscale value of the first sub-pixel in the second gaze point area.
- the processing unit 2501 is specifically configured to: perform black processing on the grayscale value of the first sub-pixel to reduce the grayscale value of the first sub-pixel.
- the first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area.
- the processing unit 2501 is further configured to increase the grayscale value of the second sub-pixel based on the position of the center point of the second sub-pixel in the target gaze point area and the width of the target gaze point area.
- the second sub-pixel is any sub-pixel in the non-transition area among the multiple sub-pixels in the central area.
- the processing unit 2501 is specifically configured to: determine the grayscale coefficient of the second sub-pixel based on the position of the center point of the second sub-pixel in the target gaze point area and the width of the target gaze point area, and determine the grayscale value of the second sub-pixel based on the grayscale coefficient of the second sub-pixel and the image grayscale value of the second sub-pixel in the target gaze point area, and then use the minimum value between the grayscale value of the second sub-pixel and the grayscale threshold as the increased grayscale value of the second sub-pixel.
- the second image grayscale value is used to represent the grayscale value of the second sub-pixel in the first gaze point area, or is used to represent the grayscale value of the second sub-pixel in the second gaze point area.
- the multiple gaze areas include an edge area, which is an area adjacent to the central area; the central area is an area at the center of the gaze point position; the processing unit 2501 is specifically configured to: for multiple sub-pixels in the edge area, reduce the grayscale value of the sub-pixels in the transition area among the multiple sub-pixels.
- the grayscale value of the third sub-pixel is reduced based on the area proportion of the third sub-pixel and the third image grayscale value of the third sub-pixel; the third image grayscale value is used to represent the image grayscale value of the third sub-pixel in the first gaze point area, or to represent the image grayscale value of the third sub-pixel in the second gaze point area.
- the processing unit 2501 is further configured to determine the area proportion of the third sub-pixel based on the position of the center point of the third sub-pixel in the target gaze point area and the width of the third sub-pixel.
- the third sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the edge area; the area proportion of the third sub-pixel is used to characterize the proportion of the third sub-pixel in the first gaze point area, or to characterize the proportion of the third sub-pixel in the second gaze point area.
- the processing unit 2501 is specifically configured to:
- reduce the grayscale value of the third sub-pixel based on the proportion of the third sub-pixel in the first gaze point area, the proportion of the third sub-pixel in the second gaze point area, and the grayscale value of the third sub-pixel in the first gaze point area; or, based on the two proportions and the grayscale value of the third sub-pixel in the second gaze point area.
- the processing unit 2501 is specifically configured to:
- reduce the grayscale value of the third sub-pixel based on the first grayscale value of the third sub-pixel in the first gaze point area and the second grayscale value of the third sub-pixel in the second gaze point area.
- the first grayscale value is determined based on the proportion of the third sub-pixel in the first gaze point area and the image grayscale value of the third sub-pixel in the first gaze point area;
- the second grayscale value is determined based on the proportion of the third sub-pixel in the second gaze point area and the image grayscale value of the third sub-pixel in the second gaze point area.
- the processing unit 2501 is further configured to: perform black processing on the grayscale value of the sub-pixel to reduce the grayscale value of the sub-pixel.
- the sub-pixel is the first sub-pixel or the third sub-pixel, the first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area; the third sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the edge area.
- the processing unit 2501 is further configured to: increase the coverage of the transition area, and/or reduce the coverage of the non-transition area.
- the position of the center point of the sub-pixel in the target gaze point area can be determined by: determining the target gaze point area based on the gaze point position; determining the position of the center point of the target sub-pixel in the target gaze point area based on the position of the central sub-pixel in the row where the target sub-pixel is located.
- the target sub-pixel is any one of the first sub-pixel, the second sub-pixel, and the third sub-pixel; the first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area; the second sub-pixel is any sub-pixel in the non-transition area among the multiple sub-pixels in the central area; and the third sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the edge area.
- the processing unit 2501 is specifically configured to: obtain the width of the target gaze point area, and based on the width of the target gaze point area and the position of the central sub-pixel, determine the width of the second area in the row where the target sub-pixel is located, and then determine the position of the center point of the target sub-pixel in the target gaze point area based on the width of the second area.
- the second area is an incomplete area on the left side of the row where the target sub-pixel is located;
- the processing unit 2501 is further configured to: obtain the offset between the central sub-pixel of the row where the target sub-pixel is located and the center point of the display panel, and determine the position of the central sub-pixel of the row where the target sub-pixel is located based on the offset and the pixel width.
- the multiple gaze areas include a peripheral area, which is an area other than a central area and an edge area; the central area is an area at the center of the gaze point position, and the edge area is an area adjacent to the central area; the processing unit 2501 is specifically configured to: not adjust and control the multiple sub-pixels in the peripheral area.
- the display device further includes a prism disposed above the display panel; the processing unit 2501 is further configured to determine a placement position of the prism based on device parameters of the display device.
- the device parameters of the display device include at least two of the following: the horizontal aperture of the prism; the arch height of the prism; the distance between the prism and the display panel; the number of sub-pixels of the display panel covered by the prism; and the width of the sub-pixels of the display panel.
- the acquisition unit 2502 in the embodiment of the present application can be integrated into the communication interface, and the processing unit 2501 can be integrated into the processor.
- the specific implementation is shown in Figure 26.
- Figure 27 shows another possible structural diagram of the grayscale control device involved in the above embodiment.
- the device includes: a processor 2602 and a communication interface 2603.
- the processor 2602 is used to control and manage the operation of the device, for example, executing the steps performed by the processing unit 2501 and/or performing other processes of the technology described herein.
- the communication interface 2603 is used to support communication between the device and other network entities, for example, executing the steps performed by the acquisition unit 2502.
- the device may also include a memory 2601 and a bus 2604.
- the memory 2601 is used to store program code and data of the device.
- the memory 2601 can be a memory in the device, etc., and the memory may include a volatile memory, such as a random access memory; the memory may also include a non-volatile memory, such as a read-only memory, a flash memory, a hard disk or a solid-state drive; the memory may also include a combination of the above types of memory.
- the processor 2602 may implement or execute the various exemplary logic blocks, modules, and circuits described in conjunction with the disclosure herein.
- the processor may be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array, or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
- the processor may also be a combination that implements computing functions, such as a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
- Bus 2604 may be an Extended Industry Standard Architecture (EISA) bus, for example.
- Bus 2604 may be divided into an address bus, a data bus, a control bus, and the like.
- Figure 26 shows only one thick line, but this does not indicate that there is only one bus or only one type of bus.
- the device in Figure 26 may also be a chip, which includes one or more (including two) processors 2602 and a communication interface 2603.
- the chip further includes a memory 2605, which may include a read-only memory and a random access memory, and provides operation instructions and data to the processor 2602.
- a portion of the memory 2605 may also include a non-volatile random access memory (NVRAM).
- the memory 2605 stores the following elements, execution modules or data structures, or a subset thereof, or an extended set thereof.
- the corresponding operation is performed by calling the operation instruction stored in the memory 2605 (the operation instruction may be stored in the operating system).
- Some embodiments of the present disclosure provide a computer-readable storage medium (e.g., a non-transitory computer-readable storage medium), which stores computer program instructions.
- when the computer program instructions are run on a computer (e.g., a receiving node), they cause the computer to execute the grayscale control method of any of the above embodiments.
- the above-mentioned computer-readable storage media may include, but are not limited to: magnetic storage devices (e.g., hard disks, floppy disks or magnetic tapes, etc.), optical disks (e.g., CD (Compact Disk), DVD (Digital Versatile Disk), etc.), smart cards and flash memory devices (e.g., EPROM (Erasable Programmable Read-Only Memory), cards, sticks or key drives, etc.).
- the various computer-readable storage media described in the present disclosure may represent one or more devices and/or other machine-readable storage media for storing information.
- the term "machine-readable storage medium" may include, but is not limited to, wireless channels and various other media capable of storing, containing and/or carrying instructions and/or data.
- Some embodiments of the present disclosure further provide a computer program product, for example, stored on a non-transitory computer-readable storage medium.
- the computer program product includes computer program instructions that, when executed on a computer (e.g., a receiving node), cause the computer to perform the grayscale control method described in the above embodiments.
- Some embodiments of the present disclosure further provide a computer program.
- when the computer program is executed on a computer (e.g., a receiving node), it enables the computer to execute the grayscale control method of the above embodiments.
- the disclosed systems, devices, and methods can be implemented in other ways.
- the device embodiments described above are merely schematic.
- the division of units is only a logical function division.
- in addition, the mutual coupling, direct coupling, or communication connections shown or discussed may be implemented through some interfaces; the indirect coupling or communication connections between devices or units may be electrical, mechanical, or in other forms.
- Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of these units may be selected to achieve the purpose of this embodiment according to actual needs.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Control Of Indicators Other Than Cathode Ray Tubes (AREA)
Abstract
The embodiments of the present application relate to the field of image technology, and in particular to a display device, a grayscale control method, an apparatus, and a storage medium. The display device includes a display panel and a processor. The processor is configured to: determine multiple gaze areas corresponding to the display panel based on the position of the target user's gaze point within the display panel. The processor is further configured to: for each of the multiple gaze areas, control the grayscale values of the sub-pixels in that gaze area based on the control strategy corresponding to that gaze area, wherein at least two of the multiple gaze areas correspond to different control strategies.
Description
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on April 16, 2024, with application number PCT/CN2024/088125 and application title "Electronic device, grayscale compensation method, apparatus and storage medium", the entire contents of which are incorporated herein by reference.
The present disclosure relates to the field of image technology, and in particular to a display device, a grayscale control method, an apparatus, and a storage medium.
To reduce the crosstalk rate, naked-eye 3D technology can optimize the design of prisms and other light-splitting components to improve their splitting effect. Traditional image arrangement algorithms in naked-eye 3D technology generally perform black-insertion or white-insertion processing on the sub-pixels in transition areas to reduce crosstalk. However, the relatively simple black-insertion or white-insertion processing of sub-pixels currently in use leads to image distortion or increased crosstalk. How to reduce the crosstalk rate has therefore become a key technical problem to be solved urgently in the field of naked-eye 3D display.
In one aspect, a display device, a grayscale control method, an apparatus, and a storage medium are provided, which can effectively reduce the crosstalk rate and improve the naked-eye 3D viewing effect.
The display device includes a display panel and a processor. The processor is configured to: determine multiple gaze areas corresponding to the display panel based on the position of the target user's gaze point within the display panel. The processor is further configured to: for each of the multiple gaze areas, control the grayscale values of the sub-pixels in that gaze area based on the control strategy corresponding to that gaze area.
At least two of the multiple gaze areas correspond to different control strategies.
In some embodiments, the display panel corresponds to multiple gaze points, one gaze point corresponds to one area, and the area corresponding to each gaze point includes a transition area and a non-transition area.
The multiple gaze points are viewpoints from which the target user gazes at the display panel from different angles; the transition area is the area between the critical lines of the areas corresponding to any two gaze points, and the non-transition area is the area other than the transition area.
In some embodiments, the processor is further configured to: determine a target gaze point area based on the gaze point position; the target gaze point area is an area among the multiple gaze point areas that affects the grayscale value of the sub-pixel; the target gaze point area includes a first gaze point area and a second gaze point area, one of which is the area corresponding to the gaze point position and the other of which is an area adjacent to the area corresponding to the gaze point position.
In some embodiments, the multiple gaze areas include a central area, which is the area at the center of the gaze point position; the processor is specifically configured to: for the multiple sub-pixels in the central area, reduce the grayscale values of the sub-pixels in the transition area among the multiple sub-pixels, and/or increase the grayscale values of the sub-pixels in the non-transition area among the multiple sub-pixels.
In some embodiments, the processor is specifically configured to: determine the grayscale coefficient of the first sub-pixel based on the position of the center point of the first sub-pixel in the target gaze point area and the width of the first area, and reduce the grayscale value of the first sub-pixel based on the grayscale coefficient of the first sub-pixel and the first image grayscale value of the first sub-pixel in the target gaze point area.
The first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area; the first area is a partial area of the target gaze point area; the first image grayscale value represents the grayscale value of the first sub-pixel in the first gaze point area, or the grayscale value of the first sub-pixel in the second gaze point area.
In some embodiments, the processor is specifically configured to: perform black processing on the grayscale value of the first sub-pixel to reduce the grayscale value of the first sub-pixel.
The first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area.
In some embodiments, the processor is further configured to: increase the grayscale value of the second sub-pixel based on the position of the center point of the second sub-pixel in the target gaze point area and the width of the target gaze point area.
The second sub-pixel is any sub-pixel in the non-transition area among the multiple sub-pixels in the central area.
In some embodiments, the processor is specifically configured to: determine the grayscale coefficient of the second sub-pixel based on the position of the center point of the second sub-pixel in the target gaze point area and the width of the target gaze point area; determine the grayscale value of the second sub-pixel based on the grayscale coefficient of the second sub-pixel and the image grayscale value of the second sub-pixel in the target gaze point area; and take the minimum of the grayscale value of the second sub-pixel and the grayscale threshold as the increased grayscale value of the second sub-pixel.
The second image grayscale value represents the grayscale value of the second sub-pixel in the first gaze point area, or the grayscale value of the second sub-pixel in the second gaze point area.
In some embodiments, the multiple gaze areas include an edge area, which is an area adjacent to the central area; the central area is the area at the center of the gaze point position; the processor is specifically configured to: for the multiple sub-pixels in the edge area, reduce the grayscale values of the sub-pixels in the transition area among the multiple sub-pixels.
The grayscale value of the third sub-pixel is reduced based on the area proportion of the third sub-pixel and the third image grayscale value of the third sub-pixel; the third image grayscale value is used to represent the image grayscale value of the third sub-pixel in the first gaze point area, or the image grayscale value of the third sub-pixel in the second gaze point area.
In some embodiments, the processor is further configured to: determine the area proportion of the third sub-pixel based on the position of the center point of the third sub-pixel in the target gaze point area and the width of the third sub-pixel.
The third sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the edge area; the area proportion of the third sub-pixel characterizes the proportion of the third sub-pixel in the first gaze point area, or the proportion of the third sub-pixel in the second gaze point area.
In some embodiments, the processor is specifically configured to:
reduce the grayscale value of the third sub-pixel based on the proportion of the third sub-pixel in the first gaze point area, the proportion of the third sub-pixel in the second gaze point area, and the grayscale value of the third sub-pixel in the first gaze point area; or,
reduce the grayscale value of the third sub-pixel based on the proportion of the third sub-pixel in the first gaze point area, the proportion of the third sub-pixel in the second gaze point area, and the grayscale value of the third sub-pixel in the second gaze point area.
In some embodiments, the processor is specifically configured to:
reduce the grayscale value of the third sub-pixel based on the first grayscale value of the third sub-pixel in the first gaze point area and the second grayscale value of the third sub-pixel in the second gaze point area.
The first grayscale value is determined based on the proportion of the third sub-pixel in the first gaze point area and the image grayscale value of the third sub-pixel in the first gaze point area; the second grayscale value is determined based on the proportion of the third sub-pixel in the second gaze point area and the image grayscale value of the third sub-pixel in the second gaze point area.
In some embodiments, the processor is further configured to: perform black processing on the grayscale value of the sub-pixel to reduce the grayscale value of the sub-pixel.
The sub-pixel is the first sub-pixel or the third sub-pixel; the first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area; the third sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the edge area.
In some embodiments, the processor is further configured to: increase the coverage of the transition area, and/or reduce the coverage of the non-transition area.
In some embodiments, the position of the center point of a sub-pixel in the target gaze point area can be determined as follows: determine the target gaze point area based on the gaze point position; determine the position of the center point of the target sub-pixel in the target gaze point area based on the position of the central sub-pixel in the row where the target sub-pixel is located.
The target sub-pixel is any one of the first sub-pixel, the second sub-pixel, and the third sub-pixel; the first sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the central area; the second sub-pixel is any sub-pixel in the non-transition area among the multiple sub-pixels in the central area; the third sub-pixel is any sub-pixel in the transition area among the multiple sub-pixels in the edge area.
In some embodiments, the processor is specifically configured to: obtain the width of the target gaze point area; determine, based on the width of the target gaze point area and the position of the central sub-pixel, the width of the second area in the row where the target sub-pixel is located; and then determine the position of the center point of the target sub-pixel in the target gaze point area based on the width of the second area.
The second area is an incomplete area on the left side of the row where the target sub-pixel is located.
In some embodiments, the processor is further configured to: obtain the offset between the central sub-pixel of the row where the target sub-pixel is located and the center point of the display panel, and determine the position of the central sub-pixel of the row where the target sub-pixel is located based on the offset and the pixel width.
In some embodiments, the multiple gaze areas include a peripheral area, which is the area other than the central area and the edge area; the central area is the area at the center of the gaze point position, and the edge area is the area adjacent to the central area; the processor is specifically configured to: perform no adjustment control on the multiple sub-pixels in the peripheral area.
In some embodiments, the display device further includes a prism disposed above the display panel; the processor is further configured to: determine the placement position of the prism based on device parameters of the display device.
In some embodiments, the device parameters of the display device include at least two of the following: the horizontal aperture of the prism; the arch height of the prism; the distance between the prism and the display panel; the number of sub-pixels of the display panel covered by the prism; and the width of the sub-pixels of the display panel.
In another aspect, a grayscale control method is provided, applied to a display device that includes a display panel and a processor. The method includes: determining, based on the target user's gaze point position within the display panel, multiple gaze areas corresponding to the display panel, and, for each of the multiple gaze areas, controlling the grayscale values of the sub-pixels in that gaze area according to the control strategy corresponding to that gaze area.
At least two of the multiple gaze areas correspond to different control strategies.
In some embodiments, the display panel corresponds to multiple gaze points, each gaze point corresponds to one area, and the area corresponding to each gaze point includes a transition area and a non-transition area.
The multiple gaze points are viewpoints from which the target user gazes at the display panel from different angles; the transition area is the area between the critical lines of the areas corresponding to any two gaze points, and the non-transition area is the area other than the transition area.
In some embodiments, a target gaze point area is determined based on the gaze point position. The target gaze point area is the area, among the multiple gaze point areas, that affects the grayscale value of a sub-pixel. The target gaze point area includes a first gaze point area and a second gaze point area; one of them is the area corresponding to the gaze point position, and the other is an area adjacent to the area corresponding to the gaze point position.
In some embodiments, the multiple gaze areas include a central area, which is the area centered on the gaze point position. Controlling, for each of the multiple gaze areas, the grayscale values of the sub-pixels in that gaze area according to its corresponding control strategy includes: for the multiple sub-pixels in the central area, reducing the grayscale values of the sub-pixels located in the transition area and/or increasing the grayscale values of the sub-pixels located in the non-transition area.
In some embodiments, reducing, for the multiple sub-pixels in the central area, the grayscale values of the sub-pixels located in the transition area includes: determining the grayscale coefficient of the first sub-pixel based on the position of its center point within the target gaze point area and the width of the first area, and reducing the grayscale value of the first sub-pixel based on its grayscale coefficient and its first image grayscale value in the target gaze point area.
The first image grayscale value characterizes the grayscale value of the first sub-pixel in the first gaze point area or in the second gaze point area; the first sub-pixel is any sub-pixel of the central area located in the transition area; the first area is a partial area of the target gaze point area.
In some embodiments, reducing, for the multiple sub-pixels in the central area, the grayscale values of the sub-pixels located in the transition area includes: blacking out the grayscale value of the first sub-pixel to reduce it.
The first sub-pixel is any sub-pixel of the central area located in the transition area.
In some embodiments, increasing, for the multiple sub-pixels in the central area, the grayscale values of the sub-pixels located in the non-transition area includes: increasing the grayscale value of the second sub-pixel based on the position of its center point within the target gaze point area and the width of the target gaze point area.
The second sub-pixel is any sub-pixel of the central area located in the non-transition area.
In some embodiments, adjusting and increasing the second sub-pixel's grayscale value based on the position of its center point within the target gaze point area and the width of the target gaze point area includes: determining the second sub-pixel's grayscale coefficient based on that position and that width; determining the second sub-pixel's grayscale value based on its grayscale coefficient and its second image grayscale value in the target gaze point area; and taking the minimum of the second sub-pixel's grayscale value and the grayscale threshold of the multiple sub-pixels as the increased grayscale value of the second sub-pixel.
The second image grayscale value characterizes the grayscale value of the second sub-pixel in the first gaze point area or in the second gaze point area.
In some embodiments, the multiple gaze areas include an edge area, which is the area adjacent to the central area; the central area is the area centered on the gaze point position. Controlling, for each of the multiple gaze areas, the grayscale values of the sub-pixels in that gaze area according to its corresponding control strategy includes: for the multiple sub-pixels in the edge area, reducing the grayscale values of the sub-pixels located in the transition area.
In some embodiments, reducing, for the multiple sub-pixels in the edge area, the grayscale values of the sub-pixels located in the transition area includes: reducing the grayscale value of the third sub-pixel based on its area ratio and its third image grayscale value.
The third image grayscale value characterizes the image grayscale value of the third sub-pixel in the first gaze point area or in the second gaze point area.
In some embodiments, the area ratio of the third sub-pixel is determined based on the position of its center point within the target gaze point area and the width of the third sub-pixel.
The third sub-pixel is any sub-pixel of the edge area located in the transition area; its area ratio characterizes its proportion within the first gaze point area or within the second gaze point area.
In some embodiments, determining the reduced grayscale value of the third sub-pixel based on its area ratio and its image grayscale value includes: reducing the third sub-pixel's grayscale value based on its proportion in the first gaze point area, its proportion in the second gaze point area, and its grayscale value in the first gaze point area; or,
reducing the third sub-pixel's grayscale value based on its proportion in the first gaze point area, its proportion in the second gaze point area, and its grayscale value in the second gaze point area.
In some embodiments, determining the reduced grayscale value of the third sub-pixel based on its area ratio and its image grayscale value includes: reducing the third sub-pixel's grayscale value based on its first grayscale value in the first gaze point area and its second grayscale value in the second gaze point area.
The first grayscale value is determined from the third sub-pixel's proportion in the first gaze point area and its image grayscale value in the first gaze point area; the second grayscale value is determined from its proportion in the second gaze point area and its image grayscale value in the second gaze point area.
In some embodiments, reducing, for the multiple sub-pixels in the edge area, the grayscale values of the sub-pixels located in the transition area includes: blacking out the grayscale value of the third sub-pixel to reduce it; the third sub-pixel is any sub-pixel of the edge area located in the transition area.
In some embodiments, the method further includes: enlarging the coverage of the transition area, and/or reducing the coverage of the non-transition area.
In some embodiments, the position of a sub-pixel's center point within the target gaze point area can be determined as follows: determine the target gaze point area according to the gaze point position, and determine the position of the target sub-pixel's center point within the target gaze point area based on the position of the center sub-pixel of the row in which the target sub-pixel is located.
The target sub-pixel is any one of the first, second, and third sub-pixels: the first sub-pixel is any sub-pixel of the central area located in the transition area; the second sub-pixel is any sub-pixel of the central area located in the non-transition area; the third sub-pixel is any sub-pixel of the edge area located in the transition area.
In some embodiments, determining the position of the target sub-pixel's center point within the target gaze point area based on the position of the center sub-pixel of the target sub-pixel's row includes: obtaining the width of the target gaze point area, and determining, from that width and the position of the center sub-pixel, the width of the second area of the target sub-pixel's row; then determining, from the width of the second area, the position of the target sub-pixel's center point within the target gaze point area.
The second area is the incomplete area on the left side of the target sub-pixel's row.
In some embodiments, the method further includes: obtaining the offset between the center sub-pixel of the target sub-pixel's row and the center point of the display panel, and determining the position of that center sub-pixel based on the offset and the sub-pixel width.
In some embodiments, the multiple gaze areas include a peripheral area, which is the area other than the central area and the edge area; the central area is the area centered on the gaze point position, and the edge area is the area adjacent to the central area. Controlling, for each of the multiple gaze areas, the grayscale values of the sub-pixels in that gaze area according to its corresponding control strategy includes: applying no adjustment control to the sub-pixels in the peripheral area.
In some embodiments, the display device further includes a prism disposed above the display panel; the method further includes: determining the placement of the prism based on the device parameters of the display device.
In some embodiments, the device parameters of the display device include at least two of the following: the horizontal aperture of the prism; the arch height of the prism; the distance between the prism and the display panel; the number of sub-pixels of the display panel covered by the prism; and the width of the sub-pixels of the display panel.
In yet another aspect, a grayscale control apparatus is provided, including a processor and a communication interface coupled to the processor. The processor is configured to run a computer program or instructions to implement the grayscale control method of the first aspect or any embodiment of the first aspect.
In still another aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores computer program instructions which, when run on a computer (e.g., a receiving node), cause the computer to perform the grayscale control method of any of the above embodiments.
In yet another aspect, a computer program product is provided. The computer program product includes computer program instructions which, when executed on a computer (e.g., a receiving node), cause the computer to perform the grayscale control method of any of the above embodiments.
In yet another aspect, a computer program is provided. When executed on a computer (e.g., a receiving node), the computer program causes the computer to perform the grayscale control method of any of the above embodiments.
Based on the above technical solutions, the processor in the display device can determine, in real time and for the target user's gaze point position, multiple different gaze areas corresponding to the display panel, and flexibly adjust and control the grayscale values of the sub-pixels in the different gaze areas. In other words, different control strategies are adopted for different gaze areas, which effectively alleviates the limited visible range for glasses-free 3D viewers and reduces crosstalk to some extent, improving the 3D viewing effect; the different control strategies shape the overall picture quality and thereby improve the user's viewing experience.
To describe the technical solutions in the present disclosure more clearly, the accompanying drawings needed in some embodiments of the present disclosure are briefly introduced below. Obviously, the drawings described below are only drawings of some embodiments of the present disclosure, and a person of ordinary skill in the art can derive other drawings from them. In addition, the drawings in the following description may be regarded as schematic diagrams and are not limitations on the actual sizes of the products, the actual flows of the methods, the actual timing of the signals, and the like involved in the embodiments of the present disclosure.
FIG. 1 is a schematic diagram of how the 3D effect arises in a real scene, according to some embodiments;
FIG. 2 is a schematic diagram of the synthesis of a left view and a right view, according to some embodiments;
FIG. 3 is a schematic diagram of the image ghosting phenomenon, according to some embodiments;
FIG. 4 is a structural diagram of a display device, according to some embodiments;
FIG. 5 is a structural diagram of a display device, according to some other embodiments;
FIG. 6 is a flowchart of a grayscale control method, according to some embodiments;
FIG. 7 is a schematic diagram of multiple gaze areas, according to some embodiments;
FIG. 8 is a schematic diagram of the areas corresponding to gaze points, according to some embodiments;
FIG. 9a is a schematic diagram combining multiple gaze areas with the areas corresponding to gaze points, according to some embodiments;
FIG. 9b is a schematic diagram combining multiple gaze areas with the areas corresponding to gaze points, according to some other embodiments;
FIG. 10 is a schematic diagram of a target gaze point area, according to some embodiments;
FIG. 11 is a schematic diagram of a grayscale coefficient curve, according to some other embodiments;
FIG. 12 is a scene diagram of a grayscale control method, according to some embodiments;
FIG. 13 is a scene diagram of a grayscale control method, according to some other embodiments;
FIG. 14 is a scene diagram of a grayscale control method, according to some other embodiments;
FIG. 15 is a schematic diagram of a target gaze point area, according to some other embodiments;
FIG. 16 is a scene diagram of a grayscale control method, according to some other embodiments;
FIG. 17 is a schematic diagram of areas, according to some embodiments;
FIG. 18 is a schematic diagram of a target gaze point area, according to some other embodiments;
FIG. 19 is a schematic diagram of a target gaze point area, according to some other embodiments;
FIG. 20 is a scene diagram of a prism, according to some embodiments;
FIG. 21 is a scene diagram of an image arrangement algorithm, according to some embodiments;
FIG. 22 is a schematic diagram of a target gaze point area, according to some other embodiments;
FIG. 23 is a scene diagram of a prism, according to some other embodiments;
FIG. 24 is a scene diagram of a prism, according to some other embodiments;
FIG. 25 is a flowchart of a grayscale control method, according to some other embodiments;
FIG. 26 is a structural diagram of a grayscale control apparatus, according to some embodiments;
FIG. 27 is a structural diagram of a grayscale control apparatus, according to some embodiments.
The technical solutions in some embodiments of the present disclosure will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided by the present disclosure fall within the protection scope of the present disclosure.
Unless the context requires otherwise, throughout the specification and claims the term "comprise" and its other forms, such as the third-person singular "comprises" and the present participle "comprising", are interpreted in an open, inclusive sense, i.e., "including, but not limited to". In the description of the specification, terms such as "one embodiment", "some embodiments", "exemplary embodiments", "example", "specific example", or "some examples" are intended to indicate that a particular feature, structure, material, or characteristic related to the embodiment or example is included in at least one embodiment or example of the present disclosure. The schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics may be included in any one or more embodiments or examples in any suitable manner.
Hereinafter, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present disclosure, unless otherwise specified, "multiple" means two or more.
"At least one of A, B, and C" has the same meaning as "at least one of A, B, or C", and both include the following combinations of A, B, and C: A alone, B alone, C alone, A and B, A and C, B and C, and A, B, and C.
"A and/or B" includes the following three combinations: A alone, B alone, and the combination of A and B.
As used herein, the term "if" is optionally construed, depending on the context, to mean "when", "upon", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined that ..." or "if [a stated condition or event] is detected" is optionally construed to mean "upon determining ...", "in response to determining ...", "upon detecting [the stated condition or event]", or "in response to detecting [the stated condition or event]".
The use of "adapted to" or "configured to" herein implies open and inclusive language that does not exclude devices adapted to or configured to perform additional tasks or steps.
In addition, the use of "based on" is open and inclusive, since a process, step, calculation, or other action "based on" one or more stated conditions or values may in practice be based on additional conditions or values beyond those stated.
As used herein, "about", "roughly", or "approximately" includes the stated value as well as an average value within an acceptable range of deviation for the particular value, as determined by a person of ordinary skill in the art in view of the measurement in question and the errors associated with the measurement of the particular quantity (i.e., the limitations of the measurement system).
As used herein, "equal" includes the stated case and cases approximating the stated case within an acceptable range of deviation, as determined by a person of ordinary skill in the art in view of the measurement in question and the errors associated with the measurement of the particular quantity (i.e., the limitations of the measurement system). "Equal" includes absolute equality and approximate equality, where, for example, within the acceptable deviation range for approximate equality the difference between the two "equal" quantities is less than or equal to 5% of either of them.
The terms involved in the embodiments of the present application are explained below for the reader's convenience.
1. Glasses-free 3D technology aims to provide a display technology that achieves a three-dimensional visual effect without auxiliary devices. With continued technological progress and upgrading, glasses-free 3D can be widely applied in consumer electronics, especially in mainstream devices such as smartphones and televisions, and can drive further innovation and development in related professional fields.
By optimizing the structure and algorithms of the display system, glasses-free 3D technology can provide users with an unprecedented immersive experience. In movies, games, and virtual reality scenarios, it can create more realistic three-dimensional visual effects, making viewers feel as if they were inside the story and greatly enhancing the interactivity and realism of the entertainment experience.
In addition, in advertising, the glasses-free 3D technology of the present invention can attract more attention, improving the communication effect and memorability of advertisements. For professional training, it can simulate more realistic operating scenarios, helping trainees better master skills and knowledge and improving training efficiency and quality.
However, glasses-free 3D technology still needs to overcome many technical barriers. Specifically, its image processing capability needs further optimization to ensure that clear and stable three-dimensional images are presented at different viewing angles and distances, providing users with a more realistic glasses-free 3D effect and a more immersive experience; on that basis, the technology shows considerable market potential in entertainment, advertising, healthcare, and education.
2. The display principle of glasses-free 3D. The human eye can perceive depth, mainly thanks to binocular parallax. Binocular parallax means that, because the two eyes are located at different positions, they observe the same object from different visual angles and thus capture slightly different images. After the brain fuses these differing images, we can perceive the depth and solidity of objects in space. As shown in FIG. 1, most glasses-free 3D display devices use the light-splitting principle of components such as prisms or gratings to deliver the different images seen by the left eye and the right eye into the corresponding eyes, thereby achieving the glasses-free 3D effect.
As shown in FIG. 2, current glasses-free 3D techniques mainly use slit-type liquid crystal gratings and cylindrical prisms (lenticular lenses).
(1) A slit-type liquid crystal grating places a grating in front of the screen to block the screen light: when the image intended for the left eye is displayed on the LCD panel, opaque stripes block the right eye; likewise, when the image intended for the right eye is displayed, opaque stripes block the left eye. Binocular parallax then produces the 3D effect. However, because the screen light is partially blocked, the picture brightness is only about 1/4 of that of the 2D screen.
(2) A cylindrical prism uses the refraction principle of the prism to project the pixels corresponding to the left and right eyes into the left and right eyes respectively, separating the images so that the observer sees a 3D stereoscopic image. Compared with the slit-grating technique, its biggest advantage is that the prism does not block light, so the picture brightness is essentially unaffected and the 3D display effect is better.
3. The crosstalk problem. Crosstalk in glasses-free 3D display is mainly caused by the image arrangement method, the optical component design, and manufacturing process capability. In practice, the prisms are usually placed at a tilt, so that the pixel arrangement on the 2D display device matches the edge angle. In actual applications, light-splitting components such as prisms alone cannot achieve one hundred percent light splitting, and the image arrangement algorithm cannot assign every sub-pixel entirely to the left view or the right view; inevitably, part of the brightness of an image intended for the left eye enters the right eye, producing crosstalk.
The crosstalk rate is an important metric of glasses-free 3D quality; it measures the brightness leakage between the left-eye and right-eye images. When the crosstalk rate reaches a certain level, ghosting becomes perceptible. As shown in FIG. 3, it is generally considered that ghosting starts to be perceptible when the crosstalk rate exceeds 2%, and that an obvious ghosting phenomenon appears when it exceeds 10%. The occurrence of crosstalk therefore degrades the actual glasses-free 3D viewing experience.
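For reference, crosstalk between stereo views is commonly quantified as the black-corrected luminance leaking into the unintended eye divided by the luminance intended for that view. The following sketch illustrates this commonly used definition; it is a general industry convention rather than a formula given in this application, and the function and variable names are illustrative:

```python
def crosstalk_rate(l_leak, l_signal, l_black):
    """Common definition of the crosstalk rate between stereo views.

    l_leak:   luminance measured at one eye when only the *other* view shows white
    l_signal: luminance measured at that eye when its own view shows white
    l_black:  luminance measured when both views show black
    Returns the crosstalk rate in percent.
    """
    return (l_leak - l_black) / (l_signal - l_black) * 100.0
```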
4. Multi-gaze-point technology refers to splitting a single 3D image, through specific optical devices or processing techniques, into multiple images with different viewing angles, so that viewers obtain a realistic stereoscopic effect at different viewing positions and angles. That is, through optical devices or processing, such as light-transmitting prisms (cylindrical prisms), the 3D image is split into multiple viewing angles or gaze points, each corresponding to a different viewing position and angle.
The above is a brief introduction to the technologies related to the present application.
To reduce the crosstalk rate, glasses-free 3D technology can optimize the design of light-splitting components such as prisms to improve their splitting effect, and can also guarantee the accuracy of eye tracking to improve the viewing experience and reduce crosstalk.
Traditional image arrangement algorithms in glasses-free 3D generally apply black or white interpolation to the sub-pixels in the transition area to reduce crosstalk. However, such a single black-or-white interpolation of sub-pixels leads to image distortion or increased crosstalk. How to reduce the crosstalk rate has therefore become a key technical problem to be solved urgently in the field of glasses-free 3D display.
In view of this, embodiments of the present application provide a display device whose processor can, in real time and for the target user's gaze point position, determine multiple different gaze areas corresponding to the display panel and flexibly adjust and control the grayscale values of the sub-pixels in the different gaze areas. That is, different control strategies are adopted for different gaze areas, which effectively alleviates the limited visible range for glasses-free 3D viewers, reduces crosstalk to some extent, and improves the 3D viewing effect; the different control strategies shape the overall picture quality and thereby improve the user's viewing experience.
The implementation of the embodiments of the present application is described in detail below with reference to the accompanying drawings.
As shown in FIG. 4, FIG. 4 is a structural diagram of a display device 400 provided by an embodiment of the present application. The display device 400 may be a terminal device with a display panel, such as a television set. The display device 400 may include a display panel 401, at least one processor 402, and a transceiver 403, and may further include a memory 404. The processor 402, the memory 404, and the transceiver 403 may be connected through a communication line.
In the embodiments of the present application, the display panel 401 is used to display images. The display panel corresponds to multiple periods, each period includes multiple gaze points, each gaze point corresponds to one area, and the area corresponding to each gaze point includes a transition area and a non-transition area.
The transition area is the area between the critical lines of the areas corresponding to any two gaze points, and the non-transition area is the area other than the transition area.
In the embodiments of the present application, the processor 402 may be a chip. Chips fall into five broad categories: logic chips, memory chips, sensor chips, power chips, and communication chips. Processor-type chips mainly carry out specific computation and control tasks in a system, such as a microcontroller unit (MCU), a central processing unit (CPU), a graphics processing unit (GPU), and a neural processing unit (NPU). Memory-type chips mainly store data in a system and include storage-controller chips, such as dynamic random access memory (DRAM), static random-access memory (SRAM), and flash memory (Flash). Sensor-type chips mainly handle information acquisition, presentation, and interaction in a system, such as input/output devices and some signal processing chips. Communication-type chips (wired and wireless) mainly carry out communication functions in a system, such as Ethernet chips, switching chips, wide-area/local-area network chips, and point-to-point and ad hoc network chips, as well as auxiliary filtering, amplification, and power components; widely known examples such as wireless fidelity (WiFi), Bluetooth, 5th generation mobile communication technology (5G) baseband, global positioning system (GPS), narrow band internet of things (NB-IoT), network cards, and switches can all be placed in this category.
The communication line may include a path for transferring information between the above components.
The memory 404 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compressed discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
In one possible design, the memory 404 may exist independently of the processor 402, i.e., the memory 404 may be a memory external to the processor 402. In this case, the memory 404 may be connected to the processor 402 through the communication line and is used to store execution instructions or application program code, with execution controlled by the processor 402, to implement the grayscale control method provided in the following embodiments of the present application. In another possible design, the memory 404 may be integrated with the processor 402, i.e., the memory 404 may be an internal memory of the processor 402; for example, the memory 404 is a cache that can temporarily store some data, instruction information, and the like.
As one implementation, the processor 402 may include one or more CPUs.
As shown in FIG. 5, FIG. 5 is a structural diagram of another display device provided by an embodiment of the present application. The display device may include an eye tracking module 501, a data transmission module 502, a display control module 503, and a prism-grating module 504. The eye tracking module 501, the data transmission module 502, and the display control module 503 are connected through a communication network.
The eye tracking module 501 captures the position at which the target user gazes at the display panel, computes the coordinates of that gaze position, and sends the coordinates to the data transmission module 502.
Exemplarily, the eye tracking module 501 includes, but is not limited to, a binocular camera, an RGB-depth camera (RGB-D camera), an infrared eye tracking camera, and the like. The eye tracking module 501 may be mounted at the top or the bottom center of the display device to ensure that it can directly capture the positions of the target user's eyes and the movement directions of the eyeballs.
In the embodiments of the present application, the data transmission module 502 can achieve efficient and stable data transmission through various schemes such as SPI (Serial Peripheral Interface), USB (Universal Serial Bus), or PCIe (PCI Express, a high-speed serial computer expansion bus standard).
Specifically, the SPI interface, with its high real-time performance and high-speed data transfer capability, can satisfy application scenarios with extremely high timeliness requirements for data transmission; the USB interface, with its high bandwidth and flexible convenience, is an ideal choice for transferring large amounts of data; and the PCIe interface, as the standard for internal high-speed device connections, shows significant advantages with its excellent performance when extremely high data rates are required.
Exemplarily, the data transmission module 502 may format the coordinates of the gaze position, i.e., the eyeball position coordinates (x, y, z), and encapsulate them into standard data packets. A high-speed data bus (including but not limited to SPI, USB, PCIe, etc.) then establishes the connection between the eye tracking module 501 and the data transmission module 502. The data transmission module 502 may send the target user's eyeball position coordinates (x, y, z) to the display control module 503 for subsequent processing. This design ensures low latency and high bandwidth during data transmission, achieving precise synchronization between the eyeball coordinates captured by the camera in real time and the pixel arrangement of the display panel.
Further, through the cooperation of the modules, the embodiments of the present application can acquire and transmit accurate eyeball coordinate information to the display panel in real time, thereby achieving synchronized adjustment of the display panel. This process not only improves the efficiency and accuracy of data transmission but also ensures the stability and reliability of the entire eye-tracking and display device.
In the embodiments of the present application, the display control module 503 receives the target user's eyeball position coordinates (x, y, z) sent by the data transmission module 502. Based on the eyeball position coordinates and the screen coordinates of the gaze point, the display control module 503 can dynamically adjust the grayscale values of the pixels on the display panel through a transition-processing method, ensuring that the user observes the best 3D visual effect at different positions.
Specifically, the display control module 503 receives the eyeball position coordinates provided by the eye tracking module 501 (including but not limited to position information on the x, y, and z axes) and the computed coordinates of the gaze point on the display panel. The display control module 503 then uses transition processing to adjust the grayscale values of the corresponding pixels on the display panel precisely and smoothly according to the real-time changes of the eyeball position coordinates and the gaze point screen coordinates.
The display control module 503 takes into account not only the continuity and smoothness of eye movement but also the perceptual characteristics of human vision, ensuring natural and coherent visual effects when adjusting pixel grayscale values. In this way, the display control module 503 can dynamically optimize the image content on the display panel, so that the user obtains the best 3D viewing experience even at different observation positions.
Further, the display control module 503 proposed in the embodiments of the present application is highly flexible and scalable, can adapt to different types of display panels and 3D display technologies, and can therefore be widely applied in various 3D display devices and systems, providing users with a more realistic and immersive visual experience.
In the embodiments of the present application, the light-splitting action of the prism-grating module 504 allows the target user to see the best 3D image effect from any angle. In other words, the embodiments of the present application may adopt cylindrical-prism 3D display technology, which aims to provide the target user with a stereoscopic, high-quality 3D visual experience by precisely controlling the separation and direction of light.
Specifically, this 3D display technology uses the special optical properties of cylindrical prisms to separate the light from the display panel into different directions. The cylindrical prisms are mounted at the front of the display panel, and their internal lens structure can precisely regulate the light emitted by each pixel, ensuring that the light is directed at predetermined angles. As a result, the left eye and the right eye respectively receive specially processed, differing image information, forming a 3D stereoscopic effect in the brain.
To further improve the 3D display effect, the embodiments of the present application also propose a cylindrical-prism mounting scheme in which the prisms are tilted at a certain angle relative to the display panel. This design not only balances, to some extent, the horizontal and vertical resolution losses caused by parallax, keeping the 3D image sharp in all directions, but also effectively weakens the moiré phenomenon and avoids disturbing stripes in the image, further improving the user's visual experience.
It should be noted that the display device described in the embodiments of the present application is intended to explain the technical solutions of the embodiments more clearly and does not limit them. A person of ordinary skill in the art will appreciate that, as display devices evolve and new display devices emerge, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
The methods in the following embodiments can all be implemented in the display device 400 having the above hardware structure. The methods of the embodiments of the present application are described below.
The grayscale control method provided by the embodiments of the present application is described in detail below with reference to the accompanying drawings.
The embodiments of the present application can control and adjust the grayscale values of sub-pixels to avoid the crosstalk problem. As shown in FIG. 6, the grayscale control method may include S601-S602, where S601 may be called the "determine multiple gaze areas" procedure and S602 the "control the sub-pixels of each gaze area according to its control strategy" procedure. S601-S602 are described in detail below.
S601: Determine, based on the target user's gaze point position within the display panel, multiple gaze areas corresponding to the display panel.
In the embodiments of the present application, the display device can obtain the target user's gaze point position (gaze point coordinates) in real time through the eye tracking module, and then divide the display panel into multiple gaze areas according to the gaze point position and preset distances.
Exemplarily, the multiple gaze areas include, but are not limited to, a central area, an edge area, and a peripheral area. The central area is the area centered on the gaze point position; the edge area is the area adjacent to the central area; the peripheral area is the area other than the central area and the edge area.
For example, as shown in FIG. 7, a rectangular central area is obtained by extending a preset distance (radius r1) to both sides around the target user's gaze point position (point A); alternatively, a circular central area is obtained by extending radius r1 outward with the gaze point position (point A) as the center. The central area is the key area the target user watches: since the fovea is the region of the retina with the sharpest visual perception and is responsible for high-resolution image processing, the target user is most sensitive to the visual effect within the central area. Correspondingly, the display device has the highest visual-processing requirements for the central area.
Further, a rectangular edge area is obtained by continuing to extend a preset distance (radius r2) on both sides beyond the central area; alternatively, a circular edge area is obtained by extending radius r2 outward with the gaze point position (point A) as the center. The edge area can also be understood as the area surrounding the central area and can be defined as the region between radius r1 and radius r2. It should be understood that the edge area still has relatively high visual requirements, especially during the target user's rapid eye movements.
The peripheral area may then be the area beyond radius r2; it is usually not gazed at directly, and its visual requirements are the lowest.
Specifically, the display device can dynamically adjust the grayscale values of the sub-pixels in the multiple gaze areas according to the size and resolution of the actual display panel, applying differentiated transition-processing schemes in different areas to optimize the glasses-free 3D display effect.
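As an illustration of the area division described above, the following is a minimal sketch (in Python, for illustration only; the patent does not prescribe an implementation) that classifies a sub-pixel by its distance from the gaze point, assuming the circular-area variant with radii r1 and r2. All function and variable names are ours:

```python
import math

def classify_gaze_area(px, py, gaze_x, gaze_y, r1, r2):
    """Classify a sub-pixel into a gaze area by its distance from the gaze point.

    px, py:          sub-pixel center coordinates on the panel
    gaze_x, gaze_y:  gaze point position (point A)
    r1, r2:          preset radii of the central and edge areas (r1 < r2)
    """
    dist = math.hypot(px - gaze_x, py - gaze_y)
    if dist <= r1:
        return "central"     # highest visual requirement: full transition processing
    elif dist <= r2:
        return "edge"        # reduce grayscale of transition-area sub-pixels only
    else:
        return "peripheral"  # no adjustment control
```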
S602: For each of the multiple gaze areas, control the grayscale values of the sub-pixels in that gaze area according to the control strategy corresponding to that gaze area.
At least two of the multiple gaze areas correspond to different control strategies.
In the embodiments of the present application, the display panel corresponds to multiple periods, each period includes multiple gaze points, each gaze point corresponds to one area, and the area corresponding to each gaze point includes a transition area and a non-transition area.
The transition area is the area between the critical lines of the areas corresponding to any two gaze points. It can be understood that when the center point of a sub-pixel falls in the transition area between the critical lines of the areas corresponding to two gaze points, that sub-pixel straddles the areas corresponding to the two gaze points.
In the embodiments of the present application, the non-transition area is the area other than the transition area; it characterizes the case where a sub-pixel lies entirely within the area corresponding to one gaze point, i.e., the sub-pixel does not straddle the areas corresponding to two gaze points.
Exemplarily, as shown in FIG. 8, the display panel corresponds to multiple periods, and one period includes multiple gaze points, for example gaze point 1, gaze point 2, gaze point 3, gaze point 4, and gaze point 5. Each gaze point corresponds to one area, and each such area includes a transition area and a non-transition area; for example, the transition area is the shaded part in FIG. 8 and the non-transition area is the unshaded part.
Meanwhile, transition and non-transition areas exist in the central area, the edge area, and the peripheral area alike; that is, the central, edge, and peripheral areas all overlap the areas corresponding to the gaze points.
For example, combining FIG. 7 and FIG. 8 yields the area map shown in FIG. 9a; take gaze point 1 and gaze point 2 as an example. The central area is obtained by extending a preset distance (radius r1) to both sides around the target user's gaze point position (point A), and the edge area is obtained by continuing to extend a preset distance (radius r2) beyond the central area. Correspondingly, the peripheral area may be the area beyond radius r2.
As can be seen from FIG. 9a, the gaze point position (point A) lies in the area corresponding to gaze point 1, and because the central area obtained above straddles the areas corresponding to gaze points 1 and 2, the central area includes a transition area (the shaded part) and a non-transition area (the unshaded part). In FIG. 9a, the edge area happens not to include transition and non-transition areas; this is only an example given for ease of understanding. In actual scenarios, the edge area may also include transition and non-transition areas, which is not limited by the embodiments of the present application.
For another example, combining FIG. 7 and FIG. 8 yields the area map shown in FIG. 9b. The display panel corresponds to multiple periods, and one period includes multiple gaze points, for example gaze points 1 through 5. The central area is obtained by extending radius r1 to both sides around the target user's gaze point position (point A), the edge area by continuing to extend radius r2 beyond the central area, and the peripheral area may be the area beyond radius r2. It should be understood that different control strategies are used to adjust the grayscale values for point B in the central area, point C in the edge area, and point D in the peripheral area.
The control strategy for the central area in the embodiments of the present application is as follows.
In the embodiments of the present application, for the multiple sub-pixels in the central area, the grayscale values of the sub-pixels located in the transition area are reduced and/or the grayscale values of the sub-pixels located in the non-transition area are increased.
Specifically, for the multiple sub-pixels in the central area, the display device may process them in any one of the following manners (1), (2), (3), (4), and (5).
Manner (1): For the multiple sub-pixels in the central area, reduce the grayscale values of the sub-pixels located in the transition area. Exemplarily, determine the grayscale coefficient of a first sub-pixel based on the position of the first sub-pixel's center point within the target gaze point area and the width of the first area, and reduce the grayscale value of the first sub-pixel based on its grayscale coefficient and its first image grayscale value in the target gaze point area.
The first sub-pixel is any sub-pixel of the central area located in the transition area. The first image grayscale value characterizes the grayscale value of the first sub-pixel in the first gaze point area or in the second gaze point area.
In some embodiments, the target gaze point area is the area, among the multiple gaze point areas, that affects the grayscale value of the first sub-pixel.
Exemplarily, the target gaze point area includes a first gaze point area and a second gaze point area; one of them is the area corresponding to the gaze point at the gaze point position, and the other is the area adjacent to it. In other words, the display device can determine the target gaze point area from the gaze point position.
For example, with reference to FIG. 9a, take the gaze point position (point A) as an example. Point A lies in the area corresponding to gaze point 1, so the area corresponding to gaze point 1 is the first gaze point area; correspondingly, the adjacent area corresponding to gaze point 2 is the second gaze point area.
For another example, suppose the gaze point position (point A) lies in the area corresponding to gaze point 2; then the area corresponding to gaze point 2 is the first gaze point area. Taking the center line of gaze point 2's area as the dividing line, if point A lies in the left half of gaze point 2's area, the second gaze point area is the area corresponding to gaze point 1. It should be understood that, in this case, the areas corresponding to gaze points 1 and 2 affect the grayscale value of the first sub-pixel.
If point A lies in the right half of gaze point 2's area, the second gaze point area is the area corresponding to gaze point 3; in this case, the areas corresponding to gaze points 2 and 3 are the areas affecting the grayscale value of the first sub-pixel.
For another example, with reference to FIG. 9b, take the gaze point position (point A) as an example. Point A lies in the area corresponding to gaze point 4, so that area is the first gaze point area. Taking the center line of gaze point 4's area as the dividing line, if point A lies in its left half, the second gaze point area is the area corresponding to gaze point 3; if point A lies in its right half, the second gaze point area is the area corresponding to gaze point 5. As shown in FIG. 9b, the second gaze point area is the area corresponding to gaze point 5.
In another example, the display device may determine the target gaze point area from the position of the sub-pixel, where the sub-pixel may be the first, second, or third sub-pixel.
For example, with reference to FIG. 9b, take point B, which can be understood as a first sub-pixel. Point B lies in the area corresponding to gaze point 4, so that area is the first gaze point area. Taking the center line of gaze point 4's area as the dividing line, if point B lies in its left half, the second gaze point area is the area corresponding to gaze point 3; if point B lies in its right half, the second gaze point area is the area corresponding to gaze point 5. As shown in FIG. 9b, the second gaze point area is the area corresponding to gaze point 5.
For another example, with reference to FIG. 9b, take point C, which can be understood as a third sub-pixel. Point C lies in the area corresponding to gaze point 5, so that area is the first gaze point area. Taking the center line of gaze point 5's area as the dividing line, if point C lies in its left half, the second gaze point area is the area corresponding to gaze point 4; if point C lies in its right half, the second gaze point area is the area corresponding to gaze point 1 of the adjacent period. As shown in FIG. 9b, the second gaze point area is the area corresponding to gaze point 4.
It should be understood that the transition area provided by the embodiments of the present application includes, but is not limited to, a first sub-transition area and a second sub-transition area. The first sub-transition area is the area adjacent to the boundary of a gaze point area, and the second sub-transition area is the transition area other than the first sub-transition area.
Exemplarily, take the case where the target gaze point area consists of the first gaze point area (the area corresponding to gaze point 1) and the second gaze point area (the area corresponding to gaze point 2). As shown in FIG. 10, the width of one sub-pixel, subpixel, may be D-H, and half a sub-pixel width may be A-C, E-G, I-K, etc. When the center point of the first sub-pixel lies in the C-D region, the first sub-pixel lies entirely within the first gaze point area; when its center point lies in the H-I region, it lies entirely within the second gaze point area. In other words, A-C, D-H, and I-K are transition areas.
Since A-B, E-F, F-G, and J-K are adjacent to the boundaries of the gaze point areas, A-B, E-F, F-G, and J-K may serve as first sub-transition areas. Correspondingly, the remaining transition areas B-C, D-E, G-H, and I-J may serve as second sub-transition areas.
Further, after the target gaze point area is determined, the reduced grayscale value of the first sub-pixel is computed. First, the grayscale coefficient of the first sub-pixel is determined from the position of its center point within the target gaze point area and the width of the first area.
The first area is a partial area of the target gaze point area.
With reference to FIG. 10 and as shown in FIG. 11, when the center point of the first sub-pixel lies in any of the regions A-B, E-F, F-G, and J-K, the display device can reduce, via an x² function, the grayscale values of the first sub-pixel's position in the first and second gaze point areas. Specifically, the display device can determine the first sub-pixel's grayscale coefficient k_score through Formula 1 below.
k_score = (position − line_f)^2 / d^2    (Formula 1)
Here, position is the position of the first sub-pixel's center point within the target gaze point area, i.e., the distance from the center point to the leftmost edge of the target gaze point area; line_f is the width of the first area, i.e., the width from A to F; and d is the width of the region between two letters.
It should be noted that, taking the width of the A-B or J-K region as d, the width of the E-G region is 2d, where 0 < d ≤ subpixel/2; the display device adjusts the range of the transition area by adjusting d.
Further, the display device reduces the grayscale value of the first sub-pixel based on its grayscale coefficient and its image grayscale value in the target gaze point area, obtaining the reduced grayscale value of the first sub-pixel.
The image grayscale value characterizes the grayscale values of the first sub-pixel in the different gaze point areas within the target gaze point area.
Exemplarily, take the case where the first sub-pixel's center point lies in a first sub-transition area within the first gaze point area, i.e., in the A-B or E-F region. The display device can substitute the first sub-pixel's grayscale coefficient k_score into Formula 2 below to obtain the reduced grayscale value value of the first sub-pixel.
value = k_score × value_right    (Formula 2)
Here, value_right is the grayscale value of the first sub-pixel in the first gaze point area.
It can be understood that first sub-pixels at different positions have different grayscale coefficients and hence different grayscale values.
In the embodiments of the present application, after obtaining the reduced grayscale value of the first sub-pixel via Formulas 1 and 2, the display device can adjust the first sub-pixel's current grayscale value to the reduced grayscale value.
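The computation of Formulas 1 and 2 can be sketched as follows (illustrative Python; the function and variable names are ours, and the sketch covers the case where the center point lies in a segment adjacent to the line at line_f within the first gaze point area, where value_right applies):

```python
def reduce_transition_grayscale(position, line_f, d, value_right):
    """Formulas 1 and 2: x^2 falloff for a central-area sub-pixel
    in the transition area.

    position:    distance of the sub-pixel center from the left edge of the
                 target gaze point area
    line_f:      width of the first area (A to F in FIG. 10)
    d:           width of one sub-transition segment, 0 < d <= subpixel / 2
    value_right: image grayscale value of the sub-pixel in the first
                 gaze point area
    """
    k_score = (position - line_f) ** 2 / d ** 2   # Formula 1
    return k_score * value_right                  # Formula 2
```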
Manner (2): For the multiple sub-pixels in the central area, reduce the grayscale values of the sub-pixels located in the transition area. Exemplarily, black out the grayscale value of the first sub-pixel to reduce it, obtaining the reduced grayscale value of the first sub-pixel.
The first sub-pixel is any sub-pixel of the central area located in the transition area.
For example, when the center point of the first sub-pixel lies in the transition area, the display device sets the grayscale value of the first sub-pixel to a preset value. As shown in FIG. 10, taking the preset value 0 as an example, when the first sub-pixel's center point lies in the A-C, D-H, or I-K region, the display device sets its grayscale value to 0, i.e., performs blackening processing on the sub-pixels in the transition area. In the embodiments of the present application, after the display device applies this blackening processing, the display effect of the image may be as shown in FIG. 12.
For another example, when the center point of the first sub-pixel lies in a first sub-transition area, the display device sets its grayscale value to a preset value. As shown in FIG. 10, taking the preset value 0 as an example, when the first sub-pixel's center point lies in any of the first sub-transition areas A-B, E-F, F-G, and J-K, the display device may set its grayscale value to 0, i.e., black out the sub-pixels in the transition area.
Manner (3): For the multiple sub-pixels in the central area, increase the grayscale values of the sub-pixels located in the non-transition area.
In some embodiments, the display device increases the grayscale value of a second sub-pixel based on the position of the second sub-pixel's center point within the target gaze point area and the width of the target gaze point area, obtaining the increased grayscale value of the second sub-pixel.
The second sub-pixel is any sub-pixel of the central area located in the non-transition area.
Exemplarily, the display device determines the grayscale coefficient of the second sub-pixel based on the position of its center point within the target gaze point area and the width of the target gaze point area, determines the second sub-pixel's grayscale value based on its grayscale coefficient and its second image grayscale value in the target gaze point area, and then takes the smaller of that grayscale value and the grayscale threshold as the increased grayscale value of the second sub-pixel.
The second image grayscale value characterizes the grayscale value of the second sub-pixel in the first gaze point area or in the second gaze point area.
Optionally, the non-transition area may include the second sub-transition area. For example, with reference to FIG. 10, the non-transition area may be B-E and G-J; the non-transition area may also be C-D and H-I.
Exemplarily, take B-E and G-J as the non-transition areas. The display device can increase, via a sin function, the grayscale values at the positions of the non-transition-area sub-pixels in the first and second gaze point areas. Specifically, the display device can determine the second sub-pixel's grayscale coefficient k_score_non through Formula 3 below.
k_score_non = k_ratio × sin(2π(position − d) / (deltax − 4d)) + 1    (Formula 3)
Here, k_ratio is the boost coefficient; position is the position of the second sub-pixel's center point within the target gaze point area; deltax is the width of the target gaze point area.
Further, taking the case where the second sub-pixel's center point lies in B-E, the display device can substitute the grayscale coefficient k_score_non into Formula 4 below to determine the second sub-pixel's grayscale value value_non.
value_non = k_score_non × value_right_non    (Formula 4)
Here, value_right_non is the grayscale value of the second sub-pixel in the first gaze point area.
Correspondingly, if the second sub-pixel's center point lies in G-J, the display device can substitute the grayscale coefficient k_score_non into Formula 5 below to determine the second sub-pixel's grayscale value value_non.
value_non = k_score_non × value_left_non    (Formula 5)
Here, value_left_non is the grayscale value of the second sub-pixel in the second gaze point area.
In the embodiments of the present application, after determining the second sub-pixel's grayscale value value_non, the display device compares it with the grayscale threshold 255 and takes the smaller of the two as the increased grayscale value of the second sub-pixel.
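Formulas 3-5 and the threshold comparison can be sketched as follows (illustrative Python; names are ours, and the caller passes value_right_non or value_left_non as image_gray depending on whether the center point lies in B-E or G-J):

```python
import math

def boost_non_transition_grayscale(position, d, deltax, image_gray, k_ratio):
    """Formulas 3-5: sine-shaped boost for a central-area sub-pixel in the
    non-transition area, clamped to the grayscale threshold 255.

    position:   distance of the sub-pixel center from the left edge of the
                target gaze point area
    d:          width of one sub-transition segment
    deltax:     width of the target gaze point area
    image_gray: value_right_non (center in B-E) or value_left_non (center in G-J)
    k_ratio:    boost coefficient
    """
    # Formula 3: coefficient rises from 1 at the segment ends to 1 + k_ratio
    k_score_non = k_ratio * math.sin(2 * math.pi * (position - d) / (deltax - 4 * d)) + 1
    value_non = k_score_non * image_gray   # Formula 4 or Formula 5
    return min(value_non, 255)             # clamp to the grayscale threshold
```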
Manner (4): For the multiple sub-pixels in the central area, reduce the grayscale values of the sub-pixels located in the transition area and increase the grayscale values of the sub-pixels located in the non-transition area. Exemplarily, the display device may combine manner (1) with manner (3): use the method of manner (1) to reduce the grayscale values of the transition-area sub-pixels and the method of manner (3) to increase those of the non-transition-area sub-pixels.
In the embodiments of the present application, after combining manners (1) and (3), the display effect of the image may be as shown in FIG. 13.
Manner (5): For the multiple sub-pixels in the central area, reduce the grayscale values of the sub-pixels located in the transition area and increase the grayscale values of the sub-pixels located in the non-transition area. Exemplarily, the display device may combine manner (2) with manner (3): use the method of manner (2) to reduce the grayscale values of the transition-area sub-pixels and the method of manner (3) to increase those of the non-transition-area sub-pixels.
In the embodiments of the present application, after combining manners (2) and (3), the display effect of the image may be as shown in FIG. 14.
Based on the above scheme, the display device in the embodiments of the present application reduces the grayscale values of the central area's sub-pixels located in the transition area and increases the grayscale values of the sub-pixels located in the non-transition area, thereby balancing the widths of the high-brightness and low-brightness plateaus in the transition and non-transition areas and further effectively enlarging the actual visible range of glasses-free 3D.
The above describes in detail how the multiple sub-pixels in the central area are processed.
The following details how the display device processes the multiple sub-pixels in the edge area, i.e., the control strategy for the edge area is as follows.
In the embodiments of the present application, for the multiple sub-pixels in the edge area, the grayscale values of the sub-pixels located in the transition area are reduced.
In some embodiments, the area ratio of a third sub-pixel is determined based on the position of its center point within the target gaze point area and the width of the third sub-pixel, and the grayscale value of the third sub-pixel is reduced based on its area ratio and its third image grayscale value, obtaining the reduced grayscale value of the third sub-pixel.
The third sub-pixel is any sub-pixel of the edge area located in the transition area; its area ratio characterizes its proportion within the first gaze point area or within the second gaze point area; the third image grayscale value characterizes its grayscale value in the first gaze point area or in the second gaze point area.
Exemplarily, with reference to FIG. 10, suppose the third sub-pixel lies in the first sub-transition area. Since the first sub-transition area comprises transition segments at four different positions (e.g., A-B, E-F, F-G, J-K), the display device can obtain the third sub-pixel's area ratio through different formulas depending on which segment it lies in.
Specifically, the display device may determine the third sub-pixel's area ratio in any one of the following cases (1), (2), (3), and (4).
Case (1): When the third sub-pixel's center point lies in the first sub-transition area A-B, the display device can determine its proportion k_right in the first gaze point area through Formula 6 below.
k_right = (position + subpixel/2) / subpixel    (Formula 6)
Here, position is the position of the third sub-pixel's center point within the target gaze point area, and subpixel is the width of one sub-pixel (the third sub-pixel). It should be noted that position can be understood as the distance from the third sub-pixel's center point to the leftmost edge of the target gaze point area; for example, if the center point lies on the line through point B, position is the distance from point A to point B.
Further, the display device can obtain the third sub-pixel's proportion k_left in the second gaze point area from its proportion k_right in the first gaze point area, i.e., k_left = 1 − k_right.
Case (2): When the third sub-pixel's center point lies in the first sub-transition area E-F, the display device can determine its proportion k_right in the first gaze point area through Formula 7 below.
k_right = (deltax/2 − position + subpixel/2) / subpixel    (Formula 7)
Here, position is the position of the third sub-pixel's center point within the target gaze point area, subpixel is the width of one sub-pixel (the third sub-pixel), and deltax is the width of the target gaze point area.
For example, as shown in FIG. 15, the third sub-pixel's center point lies at point E. deltax/2 is half the width of the target gaze point area, e.g., the width from A to F. position is the width from A to E, and deltax/2 − position gives the width from E to F. subpixel/2 is half a sub-pixel's width, i.e., the width from P to E. The display device adds the E-to-F width to the P-to-E width to obtain the P-to-F width, and computes the third sub-pixel's proportion in the first gaze point area from the P-to-F width and the width of one sub-pixel.
It can be understood that determining the third sub-pixel's area ratio in the embodiments of the present application can be regarded as an area-computation process.
Correspondingly, the display device can obtain the third sub-pixel's proportion k_left in the second gaze point area from k_right, i.e., k_left = 1 − k_right.
Case (3): When the third sub-pixel's center point lies in the first sub-transition area F-G, the display device can determine its proportion k_left in the second gaze point area through Formula 8 below.
k_left = (position − deltax/2 + subpixel/2) / subpixel    (Formula 8)
Here, position is the position of the third sub-pixel's center point within the target gaze point area, subpixel is the width of one sub-pixel (the third sub-pixel), and deltax is the width of the target gaze point area.
The display device can then obtain the third sub-pixel's proportion k_right in the first gaze point area from k_left, i.e., k_right = 1 − k_left.
Case (4): When the third sub-pixel's center point lies in the first sub-transition area J-K, the display device can determine its proportion k_left in the second gaze point area through Formula 9 below.
k_left = (deltax − position + subpixel/2) / subpixel    (Formula 9)
Here, position is the position of the third sub-pixel's center point within the target gaze point area, subpixel is the width of one sub-pixel (the third sub-pixel), and deltax is the width of the target gaze point area.
The display device can then obtain the third sub-pixel's proportion k_right in the first gaze point area from k_left, i.e., k_right = 1 − k_left.
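Cases (1)-(4) can be summarized in one sketch (illustrative Python; the segment labels refer to FIG. 10 and the function name is ours):

```python
def third_subpixel_area_ratio(position, subpixel, deltax, segment):
    """Formulas 6-9: area ratio of an edge-area sub-pixel whose center lies
    in a first sub-transition segment of FIG. 10.

    position: distance of the sub-pixel center from the left edge of the
              target gaze point area
    subpixel: width of one sub-pixel
    deltax:   width of the target gaze point area
    segment:  one of "A-B", "E-F", "F-G", "J-K"
    Returns (k_right, k_left): proportions in the first / second gaze point area.
    """
    if segment == "A-B":
        k_right = (position + subpixel / 2) / subpixel                 # Formula 6
        return k_right, 1 - k_right
    if segment == "E-F":
        k_right = (deltax / 2 - position + subpixel / 2) / subpixel    # Formula 7
        return k_right, 1 - k_right
    if segment == "F-G":
        k_left = (position - deltax / 2 + subpixel / 2) / subpixel     # Formula 8
        return 1 - k_left, k_left
    if segment == "J-K":
        k_left = (deltax - position + subpixel / 2) / subpixel         # Formula 9
        return 1 - k_left, k_left
    raise ValueError("unknown segment")
```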
In the embodiments of the present application, after determining the third sub-pixel's area ratio through any of the formulas in cases (1)-(4) above, the display device can determine the reduced grayscale value of the third sub-pixel from the area ratio.
The third sub-pixel's area ratio includes its proportion in the first gaze point area and its proportion in the second gaze point area.
It should be noted that whether the display device uses the third sub-pixel's grayscale value in the first gaze point area or in the second gaze point area is determined by whether the third sub-pixel's center point lies in the first or the second gaze point area. For example, if the center point lies in the first gaze point area, the grayscale value in the first gaze point area is used; if the center point lies in the second gaze point area, the grayscale value in the second gaze point area is used.
Further, whether the display device uses the third sub-pixel's grayscale value in the first or in the second gaze point area is determined by whether the first sub-transition area containing the third sub-pixel's center point lies in the first or the second gaze point area. For example, if that first sub-transition area lies in the first gaze point area, the grayscale value in the first gaze point area is used; if it lies in the second gaze point area, the grayscale value in the second gaze point area is used.
In some embodiments, the display device reduces the third sub-pixel's grayscale value based on its proportion in the first gaze point area, its proportion in the second gaze point area, and its grayscale value in the first gaze point area, obtaining the reduced grayscale value of the third sub-pixel.
Alternatively, the third sub-pixel's grayscale value is reduced based on its proportion in the first gaze point area, its proportion in the second gaze point area, and its grayscale value in the second gaze point area, obtaining the reduced grayscale value of the third sub-pixel.
Exemplarily, with reference to FIG. 10, since the first sub-transition area comprises transition segments at four different positions, the display device can obtain the reduced grayscale value of the third sub-pixel through different formulas depending on which segment it lies in. Specifically, the display device may determine the target grayscale value in either of the following cases (5) and (6).
Case (5): The display device determines the reduced grayscale value using the third sub-pixel's grayscale value in the first gaze point area. With reference to cases (1) and (2) and as shown in FIG. 10: when the third sub-pixel's center point lies in the first sub-transition area A-B or E-F, since the first sub-transition areas A-B and E-F lie in the first gaze point area of the target gaze point area, the display device can determine the reduced grayscale value value through Formula 10 below.
value = (k_right − k_left) × value_right    (Formula 10)
Here, value_right is the third sub-pixel's grayscale value in the first gaze point area.
Case (6): The display device determines the reduced grayscale value using the third sub-pixel's grayscale value in the second gaze point area. With reference to cases (3) and (4) and as shown in FIG. 10: when the third sub-pixel's center point lies in the first sub-transition area F-G or J-K, since the first sub-transition areas F-G and J-K lie in the second gaze point area of the target gaze point area, the display device can determine the reduced grayscale value value through Formula 11 below.
value = (k_left − k_right) × value_left    (Formula 11)
Here, value_left is the third sub-pixel's grayscale value in the second gaze point area.
In the embodiments of the present application, after the display device processes through case (5) or (6), the display effect of the image may be as shown in FIG. 16.
It can be understood that, assuming the target user is at the position of gaze point 1 and as shown in FIG. 17, the light emitted by the pixels on the 3D display panel is split by the prism and enters the target user's left and right eyes respectively. The image formed by the region the left eye sees through the prism grating is the left image; likewise, the image formed by the region the right eye sees is the right image. In other words, the third sub-pixel can correspond to one image grayscale value in the first gaze point area, i.e., the first grayscale value, and correspondingly to another image grayscale value in the second gaze point area, i.e., the second grayscale value.
In some embodiments, the third sub-pixel's grayscale value is reduced based on its first grayscale value in the first gaze point area and its second grayscale value in the second gaze point area, obtaining the reduced grayscale value of the third sub-pixel.
The first grayscale value is determined from the third sub-pixel's proportion in the first gaze point area and its image grayscale value in the first gaze point area; the second grayscale value is determined from its proportion in the second gaze point area and its image grayscale value in the second gaze point area.
Exemplarily, the display device may determine the first grayscale value from the third sub-pixel's proportion in the first gaze point area and its image grayscale value there, and the second grayscale value from its proportion in the second gaze point area and its grayscale value there.
It should be understood that the first grayscale value is the adjusted grayscale value of the third sub-pixel in the first gaze point area, and the second grayscale value is the adjusted grayscale value of the third sub-pixel in the second gaze point area.
In the embodiments of the present application, the display device can determine the reduced grayscale value of the third sub-pixel from the first and second grayscale values, which avoids the poor processing results that occur when the third sub-pixel's area proportion in a gaze point area is small but its grayscale value there is large.
Exemplarily, with reference to FIG. 10, since the first sub-transition area comprises transition segments at four different positions (A-B, E-F, F-G, J-K), the display device can obtain the reduced grayscale value of the third sub-pixel through different formulas depending on which segment it lies in. Specifically, the display device may determine the reduced grayscale value in either of the following cases (7) and (8).
Case (7): With reference to cases (1) and (2) and as shown in FIG. 10: when the third sub-pixel's center point lies in the first sub-transition area A-B or E-F, since the first sub-transition areas A-B and E-F lie in the first gaze point area, the display device can determine the reduced grayscale value value through Formula 12 below.
value = k_left × value_left − k_right × value_right    (Formula 12)
Case (8): With reference to cases (3) and (4) and as shown in FIG. 10: when the third sub-pixel's center point lies in the first sub-transition area F-G or J-K, since the first sub-transition areas F-G and J-K lie in the second gaze point area, the display device can determine the reduced grayscale value value through Formula 13 below.
value = k_right × value_right − k_left × value_left    (Formula 13)
It can be understood that if the third sub-pixel's area proportion in a gaze point area is small but its grayscale value there is large, the sub-pixel's brightness is pulled brighter by that grayscale value; likewise, if its area proportion is large but its grayscale value there is small, its brightness is pulled darker. Cases (7) and (8) above therefore adjust the third sub-pixel's grayscale value more appropriately. In the embodiments of the present application, after the display device processes with case (7) or (8), the display effect of the image may be as shown in FIG. 18.
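Formulas 10-13 can be sketched together as follows (illustrative Python; which pair applies depends on whether the sub-transition segment containing the sub-pixel center lies in the first or the second gaze point area):

```python
def reduce_edge_grayscale(k_right, k_left, value_right, value_left, in_first_area):
    """Formulas 10-13: reduced grayscale value for an edge-area sub-pixel.

    k_right, k_left:         proportions in the first / second gaze point area
    value_right, value_left: image grayscale values in the first / second
                             gaze point area
    in_first_area:           True when the sub-transition segment containing
                             the sub-pixel center lies in the first gaze
                             point area
    Returns (simple, weighted): the Formula 10/11 result and the
    Formula 12/13 result.
    """
    if in_first_area:
        simple = (k_right - k_left) * value_right                # Formula 10
        weighted = k_left * value_left - k_right * value_right   # Formula 12
    else:
        simple = (k_left - k_right) * value_left                 # Formula 11
        weighted = k_right * value_right - k_left * value_left   # Formula 13
    return simple, weighted
```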
In another example, with reference to FIG. 10, suppose the third sub-pixel lies in the second sub-transition area. Since the second sub-transition area comprises transition segments at four different positions (e.g., B-C, D-E, G-H, I-J), the display device can obtain the third sub-pixel's area ratio through different formulas depending on which segment it lies in. Specifically, the display device may determine the third sub-pixel's area ratio in any one of the following cases (9), (10), (11), and (12).
Case (9): When the third sub-pixel's center point lies in the second sub-transition area B-C, the display device can determine its proportion k_right in the first gaze point area through Formula 6 of case (1) above; correspondingly, its proportion in the second gaze point area is k_left = 1 − k_right.
Case (10): When the third sub-pixel's center point lies in the second sub-transition area D-E, the display device can determine its proportion k_right in the first gaze point area through Formula 7 of case (2) above; correspondingly, k_left = 1 − k_right.
Case (11): When the third sub-pixel's center point lies in the second sub-transition area G-H, the display device can determine its proportion k_left in the second gaze point area through Formula 8 of case (3) above; correspondingly, k_right = 1 − k_left.
Case (12): When the third sub-pixel's center point lies in the second sub-transition area I-J, the display device can determine its proportion k_left in the second gaze point area through Formula 9 of case (4) above; correspondingly, k_right = 1 − k_left.
In the embodiments of the present application, after determining the third sub-pixel's area ratio through any of cases (9)-(12) above, the display device can determine the reduced grayscale value of the third sub-pixel from the area ratio.
In one possible implementation, the display device may determine the reduced grayscale value of the third sub-pixel from the difference between its proportion in the first gaze point area and its proportion in the second gaze point area, together with its grayscale value in the second gaze point area.
Optionally, the display device may also determine the reduced grayscale value from that difference together with the third sub-pixel's grayscale value in the first gaze point area.
Exemplarily, with reference to FIG. 10, since the second sub-transition area comprises transition segments at four different positions, the display device can obtain the reduced grayscale value of the third sub-pixel through different formulas depending on which segment it lies in. Specifically, the display device may determine the reduced grayscale value in either of the following cases (13) and (14).
Case (13): The display device determines the reduced grayscale value using the third sub-pixel's grayscale value in the first gaze point area. With reference to cases (9) and (10) and as shown in FIG. 10: when the third sub-pixel's center point lies in the second sub-transition area B-C or D-E, since the second sub-transition areas B-C and D-E lie in the first gaze point area, the display device can determine the reduced grayscale value value through Formula 10 of case (5) above.
Case (14): The display device determines the reduced grayscale value using the third sub-pixel's grayscale value in the second gaze point area. With reference to cases (11) and (12) and as shown in FIG. 10: when the third sub-pixel's center point lies in the second sub-transition area G-H or I-J, since the second sub-transition areas G-H and I-J lie in the second gaze point area, the display device can determine the reduced grayscale value value through Formula 11 of case (6) above.
In the embodiments of the present application, when the third sub-pixel's center point lies in the second sub-transition area, the display device determines the reduced grayscale value through case (13) or (14) and adjusts the third sub-pixel's current grayscale value; the adjusted display effect of the image may be as shown in FIG. 14.
In yet another possible implementation, the display device may determine the first grayscale value from the third sub-pixel's proportion in the first gaze point area and its image grayscale value there, and the second grayscale value from its proportion in the second gaze point area and its grayscale value there.
It should be understood that the first grayscale value is the adjusted image grayscale value of the third sub-pixel in the first gaze point area, and the second grayscale value is the adjusted image grayscale value of the third sub-pixel in the second gaze point area.
In the embodiments of the present application, the display device can reduce the third sub-pixel's grayscale value from the first and second grayscale values, obtaining the reduced grayscale value, which avoids the poor processing results that occur when the third sub-pixel's area proportion in a gaze point area is small but its grayscale value there is large.
Exemplarily, with reference to FIG. 10, since the second sub-transition area comprises transition segments at four different positions (B-C, D-E, G-H, I-J), the display device can obtain the reduced grayscale value of the third sub-pixel through different formulas. Specifically, the display device may determine the reduced grayscale value in either of the following cases (15) and (16).
Case (15): With reference to cases (9) and (10) and as shown in FIG. 10: when the third sub-pixel's center point lies in the second sub-transition area B-C or D-E, since the second sub-transition areas B-C and D-E lie in the first gaze point area, the display device can determine the reduced grayscale value value through Formula 12 of case (7) above.
Case (16): With reference to cases (11) and (12) and as shown in FIG. 10: when the third sub-pixel's center point lies in the second sub-transition area G-H or I-J, since the second sub-transition areas G-H and I-J lie in the second gaze point area, the display device can determine the reduced grayscale value value through Formula 13 of case (8) above.
In the embodiments of the present application, when the third sub-pixel's center point lies in the second sub-transition area, the display device determines the reduced grayscale value through case (15) or (16) and adjusts the third sub-pixel's current grayscale value; the adjusted display effect of the image may be as shown in FIG. 19.
In the embodiments of the present application, take a 31.5-inch 8K glasses-free 3D project sample, a composite-image crosstalk test with black-and-white images, and 3D photographs of a flower as examples. For the edge area, compared with an image arrangement algorithm without transition processing, the method of reducing the grayscale values of the edge area's sub-pixels lowers the crosstalk rate by about 1% in the composite-image crosstalk test and in the actual flower 3D-effect comparison, the ghosting phenomenon is clearly weakened, and the 3D effect is effectively improved.
In the embodiments of the present application, the grayscale value of the third sub-pixel may also be blacked out to determine its reduced grayscale value.
The third sub-pixel is any sub-pixel of the edge area located in the transition area.
Exemplarily, when the third sub-pixel's center point lies in the transition area, the display device sets its grayscale value to a preset value. As shown in FIG. 10, taking the preset value 0 as an example, when the third sub-pixel's center point lies in the A-C, D-H, or I-K region, the display device sets its grayscale value to 0, i.e., blacks out the sub-pixels in the transition area.
The above details how the embodiments of the present application process the edge area's sub-pixels located in the transition area.
The following outlines how the embodiments of the present application process the peripheral area, i.e., the control strategy for the peripheral area is as follows.
In the embodiments of the present application, no adjustment control is applied to the multiple sub-pixels in the peripheral area.
It should be understood that the peripheral area is the area other than the central and edge areas. Since the peripheral area is usually not gazed at directly and its visual requirements are the lowest, the grayscale values of its sub-pixels can be left unprocessed, compensating for the brightness reduction caused by reducing the sub-pixel grayscale values in the other areas (the central and edge areas).
It should be noted that the embodiments of the present application can adjust the coverage of the transition and non-transition areas according to actual use, e.g., enlarge the coverage of the transition area and/or reduce the coverage of the non-transition area. If the coverage of the transition area is enlarged, the range of transition processing grows with it, correspondingly strengthening the effect of the display device's transition processing of sub-pixel grayscale values.
Based on the above technical solutions, the display device of the embodiments of the present application can determine, in real time and for the target user's gaze point position, multiple different gaze areas corresponding to the display panel, and flexibly adjust and control the grayscale values of the sub-pixels in different gaze areas. That is, different control strategies are adopted for different gaze areas, effectively lowering the crosstalk rate, enlarging the visible range of glasses-free 3D viewing, and improving the user's viewing experience.
In the embodiments of the present application, the display device can determine the positions of the center points of different sub-pixels within the target gaze point area.
In one possible implementation, the display device may determine the target gaze point area according to the gaze point position, and determine the position of the target sub-pixel's center point within the target gaze point area based on the position of the center sub-pixel of the row in which the target sub-pixel is located.
The target sub-pixel is any one of the first, second, and third sub-pixels mentioned above.
Exemplarily, the display device may obtain the width of the target gaze point area, determine, from that width and the position of the center sub-pixel, the width of the second area of the target sub-pixel's row, and then determine, from the width of the second area, the position of the target sub-pixel's center point within the target gaze point area.
The second area is the incomplete area on the left side of the target sub-pixel's row.
Exemplarily, as shown in FIG. 20, the distance between the human eye and the prism of the display device is z, the distance between the prism and the display panel is h, and the horizontal pitch of the prism is pitch. The display device can determine the coverage of one prism on the display panel, i.e., one target gaze point area deltax satisfies Formula 14 below:
deltax = (z + h) × pitch / z    (Formula 14)
The display device substitutes the target gaze point area deltax into Formula 15 to obtain the width of the leftmost incomplete area of the row:
edge_distance = (xi − deltax/2) % deltax    (Formula 15)
Here, edge_distance is the width of the partial area of the row, xi is the position of the center sub-pixel of row i (the position of the center sub-pixel), and % denotes the remainder operation.
Then, based on the distance from the position of the center sub-pixel to the leftmost edge, the display device obtains the specific position of the area in which the target sub-pixel is located, i.e., the display device substitutes the center sub-pixel position xi and the partial period width edge_distance of the row into Formula 16 below:
position = (xi + deltax − edge_distance) % deltax    (Formula 16)
Here, position is the position of the target sub-pixel's center point within the target gaze point area. As shown in FIG. 21, point B is the center sub-pixel position xi, and point C is the target sub-pixel's center point.
For example, if the center sub-pixel position xi is 13, the target gaze point area deltax is 5, and the width edge_distance of the leftmost incomplete area of the row is 2, then Formula 16 gives (13 + 5 − 2)/5 = 3 remainder 1, so the display device can conclude from "3 remainder 1" that the target sub-pixel's center point is at position 1 of the 4th target gaze point area. For another example, if the center sub-pixel position xi is 8, the target gaze point area width deltax is 5, and edge_distance is 2, then Formula 16 gives (8 + 5 − 2)/5 = 2 remainder 1, so the display device can conclude from "2 remainder 1" that the target sub-pixel's center point is at position 1 of the 3rd target gaze point area.
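Formulas 14-16 can be sketched as follows (illustrative Python; names are ours). With xi = 13 and z, h, and pitch chosen so that deltax = 5, the function reproduces the worked example above:

```python
def locate_subpixel(xi, z, h, pitch):
    """Formulas 14-16: position of a target sub-pixel's center within its
    target gaze point area.

    xi:    position of the center sub-pixel of the target sub-pixel's row
    z:     distance from the eye to the prism
    h:     distance from the prism to the display panel
    pitch: horizontal pitch of the prism
    """
    deltax = (z + h) * pitch / z                        # Formula 14
    edge_distance = (xi - deltax / 2) % deltax          # Formula 15
    position = (xi + deltax - edge_distance) % deltax   # Formula 16
    return deltax, edge_distance, position
```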
It should be noted that "the row" in the embodiments of the present application can be understood as "the row in which the target sub-pixel is located"; that is, "the position of the center sub-pixel" can be understood as "the position of the center sub-pixel of the target sub-pixel's row".
It can be understood that, since the width of one target gaze point area contains multiple sub-pixels, when the target gaze point area width deltax is 5, the width pixel of one sub-pixel may be 1, with 0-2.5 being the first gaze point area and 2.5-5 the second gaze point area of the target gaze point area.
As shown in FIG. 22, take the case where the target sub-pixel's center point is at position 1 of the 3rd target gaze point area. When the target sub-pixel's center point is at position 1, since the width pixel of one sub-pixel is 1, the target sub-pixel occupies 0.5-1.5. The target sub-pixel therefore lies entirely within the first gaze point area and not in the transition area. However, the sub-pixel preceding it is a transition-area sub-pixel, occupying from 4.5 of the 2nd target gaze point area to 0.5 of the 3rd target gaze point area.
It can be understood, with reference to FIG. 22, that when the target gaze point area width deltax is 5 and the sub-pixel width pixel is 1, the target sub-pixel's center point lies in the transition area if its position satisfies any of the following: less than 0.5, between 2 and 3, or greater than 4.5.
Through the above technical solution, the display device of the embodiments of the present application can obtain, via Formulas 14-16, the position of the target sub-pixel's center point within the target gaze point area, facilitating the subsequent adjustment and control of the target sub-pixel's grayscale value.
It should be understood that the display device provided by the embodiments of the present application further includes a prism disposed above the display panel; the prism may include a cylindrical prism, which is not limited by the embodiments of the present application.
In some embodiments, the placement of the prism is determined based on the device parameters of the display device.
The device parameters of the display device include, but are not limited to: the horizontal aperture of the prism, the arch height of the prism, the distance between the prism and the display panel, the number of display-panel sub-pixels covered by the prism, the width of the display panel's sub-pixels, and so on.
Exemplarily, according to the imaging principle of geometric optics and the geometric relationships in FIG. 20 and FIG. 23, Formulas 17 and 18 below can be obtained:
(z + h)/z = D2/D1    (Formula 17)
h/z = D3/w    (Formula 18)
Here, z is the distance from the target user's eyes to the display panel, h is the distance between the prism and the display panel, D2 is the coverage width of the prism, D1 is the horizontal aperture of one prism, D3 is the width covered by the distance between the target user's two eyes, and w is the distance between the target user's left and right eyes, also called the interpupillary distance.
At the optimal viewing distance z, D3 = 0.5 × D2. Substituting the value of D3 into Formula 18 gives:
h/z = 0.5 × D2/w    (Formula 19)
Solving this equation gives the expression for the placement height h of the air layer, i.e., Formula 20:
h = 0.5 × N × subpixel × z/w    (Formula 20)
Here, N is the number of display-panel sub-pixels covered by each cylindrical prism, and subpixel is the width of each sub-pixel of the display panel.
Further, to compute the prism's horizontal aperture D1, the obtained value of h is substituted into the first formula, giving the expression for D1, i.e., Formula 21:
D1 = N × subpixel × z/(z + h)    (Formula 21)
Further, the radius R of the cylindrical prism is computed; from the refractive-index formula of the lens, the expression for R is obtained, i.e., Formula 22:
R = (n1 − n2) × h    (Formula 22)
Furthermore, the arch height L of the cylindrical prism is computed; from the geometric relationship in FIG. 24, the expression for L is obtained, i.e., Formula 23:
L = R − √(R² − (D1/2)²)    (Formula 23).
Through the above design steps, all key parameters required by the cylindrical-prism 3D display technology can be obtained. These parameters constrain one another and together determine the placement of the cylindrical prisms, ensuring that the prisms precisely separate and direct light in different directions, thereby providing the observer with a high-quality 3D stereoscopic visual effect.
It should be noted that the formulas and parameters in the above design method are derived from geometric-optics principles under ideal conditions. In practical applications, other factors, such as the resolution of the display panel and the machining precision of the cylindrical prisms, may also need to be considered to ensure the best final 3D display effect.
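Under the ideal-case assumptions above, Formulas 20-23 can be computed as in the following sketch (illustrative Python; names are ours, and Formula 23 requires R ≥ D1/2):

```python
import math

def prism_parameters(z, w, N, subpixel, n1, n2):
    """Formulas 20-23: key cylindrical-prism parameters under the ideal
    geometric-optics assumptions of the design method above.

    z:        optimal viewing distance
    w:        interpupillary distance
    N:        number of sub-pixels covered by each cylindrical prism
    subpixel: width of one sub-pixel
    n1, n2:   refractive indices used in Formula 22
    """
    h = 0.5 * N * subpixel * z / w              # Formula 20: air-layer height
    D1 = N * subpixel * z / (z + h)             # Formula 21: horizontal aperture
    R = (n1 - n2) * h                           # Formula 22: prism radius
    L = R - math.sqrt(R**2 - (D1 / 2) ** 2)     # Formula 23: arch height (sagitta)
    return h, D1, R, L
```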
As shown in FIG. 25, the grayscale control method provided by the embodiments of the present application further includes the following S2501-S2502.
S2501: Acquire the target user's eye movement data.
In the embodiments of the present application, the display device can track the target user's gaze point position within the display panel through the eye tracking module. The eye tracking module can run a calibration procedure to establish the mapping between eye movements and the display screen's coordinate system. When the target user gazes at the display panel, the eye tracking module can capture eye movement data in real time and compute, according to the mapping, the coordinates of the user's gaze point on the screen, i.e., the gaze point position.
Further, the eye tracking module of the embodiments of the present application can also process binocular position information to determine the three-dimensional position coordinates (x, y, z) of the midpoint between the two eyes, i.e., the position between the eyebrows, relative to the center point of the display screen. This allows the embodiments of the present application to track the target user's line of sight more accurately, improving the precision and reliability of human-computer interaction.
S2502: Determine, based on the eye movement data, the target user's gaze point position on the display panel.
In the embodiments of the present application, the eye tracking module may be a binocular camera. Two cameras separated by a certain distance may be set up as input devices; the relative positional relationship between the two cameras is known and serves as the basis for subsequent computation. The binocular camera can capture images of the same scene simultaneously and preprocess them, the preprocessing including but not limited to denoising and contrast enhancement, to improve the accuracy of subsequent processing.
Exemplarily, the binocular camera uses an image matching algorithm to find the corresponding feature points in the images captured by the two cameras and computes the disparity between the feature points. From the disparity principle and the known camera parameters (such as focal length and camera spacing), the depth corresponding to each feature point is computed; having obtained the depth information of the scene, the binocular camera can combine human-eye features (such as pupil position and eye-corner shape) to recognize the human eyes in the images and infer, from the depth information, the eyes' position in three-dimensional space.
Disparity is the positional difference of the same object in the two cameras' images and reflects the distance relationship between the object and the cameras; depth information characterizes the object's distance from the cameras in three-dimensional space.
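For a rectified camera pair, the disparity-to-depth relation described above is the standard triangulation formula Z = f × B / d. The following sketch illustrates it (this is the general stereo-vision relation, not a formula given in this application; names are illustrative):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Standard rectified-stereo triangulation: Z = f * B / d.

    f_px:         focal length in pixels
    baseline_m:   distance between the two cameras, in meters
    disparity_px: horizontal position difference of the same feature
                  point in the two images, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return f_px * baseline_m / disparity_px  # depth in meters
```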
In another example, take the eye tracking module being an RGB-depth camera (RGB-D camera). In the embodiments of the present application, the RGB-D camera integrates multiple high-precision sensors, including a red-green-blue (RGB) camera and a time-of-flight (TOF) depth sensor.
Specifically, the RGB camera captures the color image information of the scene, which contains rich visual features and details and provides the basis for subsequent image processing and recognition. Meanwhile, the TOF depth sensor uses the time-of-flight principle of light pulses to quickly measure the distance from each point in the scene to the camera, generating a high-precision depth image.
In the embodiments of the present application, the RGB camera and the TOF depth sensor can be precisely calibrated so that the data they capture are strictly aligned in space. On this basis, image processing and computer vision algorithms fuse the RGB image and the depth image and extract the human-eye features in the scene. Further, combining the depth information, the RGB-D camera can accurately compute the precise coordinate position of the eyes in three-dimensional space.
In yet another example, take the eye tracking module being an infrared eye tracking camera. The infrared eye tracking camera integrates an infrared light source, an infrared camera, and an image processor. The infrared light source emits a high-intensity infrared beam, providing stable and sufficient infrared illumination for the eye tracking process; the infrared camera captures the infrared light reflected from the eyes, forming clear, high-contrast eye images, works normally under various ambient lighting conditions, and thus provides high-quality visual input for subsequent image processing and data analysis.
Specifically, the infrared camera can transmit the eye images to the image processor, which then processes in real time the eye images captured by the eye movement sensor, analyzes the transmitted data, computes the target user's eyeball position coordinates, and, based on the eyeball position coordinates and the corneal reflection point position, computes the vector from the pupil center to the corneal reflection point as the target user's gaze direction.
For each eye, a gaze direction vector can be computed and extended to the display panel; its intersection with the display panel is the screen coordinate of the gaze point. That is, the two-dimensional coordinates of the gaze point on the display panel are computed from the gaze direction vector and the position of the display panel. If the gaze direction vectors of the two eyes do not coincide exactly, the final gaze point coordinates are determined by weighted averaging or another algorithm.
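The intersection of a gaze direction vector with the display panel can be computed as a standard ray-plane intersection, sketched below (illustrative Python with NumPy; the patent does not prescribe this implementation and all names are ours):

```python
import numpy as np

def gaze_point_on_panel(eye_pos, gaze_dir, panel_point, panel_normal):
    """Intersect a gaze ray with the display panel plane.

    eye_pos:      3D eye position, e.g., np.array([x, y, z])
    gaze_dir:     gaze direction vector (pupil center to corneal reflection)
    panel_point:  any point on the panel plane
    panel_normal: unit normal of the panel plane
    Returns the 3D intersection point, or None if the ray is parallel
    to the panel.
    """
    denom = np.dot(panel_normal, gaze_dir)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(panel_normal, panel_point - eye_pos) / denom
    return eye_pos + t * gaze_dir

# If the two eyes' rays give different points, average them, e.g.:
# gaze = 0.5 * (gaze_point_left + gaze_point_right)
```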
The embodiments of the present application use the vector from the pupil center to the corneal reflection point as the gaze direction.
It should be understood that the physiological structure of the human eye, especially the high-resolution vision of the fovea, combined with eye movement patterns, the brain's selective attention mechanism in information processing, and the natural pursuit of visual comfort, together cause the target user to naturally focus on a particular region when watching a screen. This focusing not only improves the efficiency with which the target user processes visual information but also effectively relieves visual fatigue.
Based on the above technical solutions, the embodiments of the present application provide a glasses-free 3D display device integrating eye tracking technology, together with a matching grayscale control method (display method). The display device uses precise sensors such as high-definition cameras to capture and accurately compute the target user's eyeball position coordinates (gaze point position) in real time. The coordinates are then immediately passed via the data transmission module to the core of display control, i.e., the display control module. The display control module controls the grayscale values of the display panel's sub-pixels according to the received coordinates.
It can be understood that, since the coordinates change dynamically and are accompanied by parameters such as the specific affected areas, the display control module can synchronously and intelligently adjust the grayscale value of every pixel on the display panel.
Finally, after the precise light splitting of the prism-grating module, the target user enjoys an exceptional 3D visual experience from any viewing angle. The display device and grayscale control method are suitable for high-end 3D display, such as advanced gaming monitors and virtual displays.
In addition, for a glasses-free 3D display device without an eye tracking module and its matching grayscale control method (display method): such a display device precisely sets the display parameters through preset fixed distances and angles, ensuring that users see the best 3D visual effect at specific viewing positions. It is particularly suitable for scenarios such as 3D advertising screens and 3D display monitors for exhibitions, providing users with an immersive visual experience.
It should be pointed out that the embodiments of the present application may draw on or refer to one another; for example, the same or similar steps, and the method, system, and apparatus embodiments, may all refer to one another without limitation.
The embodiments of the present application may divide the grayscale control apparatus into functional modules or functional units according to the above method examples. For example, each functional module or unit may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module or unit. The division of modules or units in the embodiments of the present application is schematic and is only a division by logical function; other divisions are possible in actual implementation.
As shown in FIG. 26, which is a schematic structural diagram of a grayscale control apparatus provided by an embodiment of the present application, the apparatus is applied to a display device. The display device includes a display panel and a processor; the display panel corresponds to multiple gaze points, each gaze point corresponds to one area, and the area corresponding to each gaze point includes a transition area and a non-transition area.
The multiple gaze points are viewpoints from which the target user gazes at the display panel from different angles; the transition area is the area between the critical lines of the areas corresponding to any two gaze points, and the non-transition area is the area other than the transition area.
The apparatus includes a processing unit 2501 and an acquisition unit 2502. The processing unit 2501 is configured to: determine, based on the target user's gaze point position within the display panel, multiple gaze areas corresponding to the display panel; the processing unit 2501 is further configured to: for each of the multiple gaze areas, control the grayscale values of the sub-pixels in that gaze area according to the control strategy corresponding to that gaze area.
At least two of the multiple gaze areas correspond to different control strategies.
In some embodiments, the processing unit 2501 is further configured to: determine a target gaze point area based on the gaze point position. The target gaze point area is the area, among the multiple gaze point areas, that affects the grayscale value of a sub-pixel. The target gaze point area includes a first gaze point area and a second gaze point area; one of them is the area corresponding to the gaze point position, and the other is an area adjacent to the area corresponding to the gaze point position.
In some embodiments, the multiple gaze areas include a central area, which is the area centered on the gaze point position; the processing unit 2501 is specifically configured to: for the multiple sub-pixels in the central area, reduce the grayscale values of the sub-pixels located in the transition area and/or increase the grayscale values of the sub-pixels located in the non-transition area.
In some embodiments, the processing unit 2501 is specifically configured to: determine the grayscale coefficient of the first sub-pixel based on the position of its center point within the target gaze point area and the width of the first area, and reduce the first sub-pixel's grayscale value based on its grayscale coefficient and its first image grayscale value in the target gaze point area.
The first sub-pixel is any sub-pixel of the central area located in the transition area; the first area is a partial area of the target gaze point area; the first image grayscale value characterizes the first sub-pixel's grayscale value in the first gaze point area or in the second gaze point area.
In some embodiments, the processing unit 2501 is specifically configured to black out the grayscale value of the first sub-pixel to reduce it.
The first sub-pixel is any sub-pixel of the central area located in the transition area.
In some embodiments, the processing unit 2501 is further configured to increase the second sub-pixel's grayscale value based on the position of its center point within the target gaze point area and the width of the target gaze point area.
The second sub-pixel is any sub-pixel of the central area located in the non-transition area.
In some embodiments, the processing unit 2501 is specifically configured to: determine the second sub-pixel's grayscale coefficient based on the position of its center point within the target gaze point area and the width of the target gaze point area; determine the second sub-pixel's grayscale value based on its grayscale coefficient and its image grayscale value in the target gaze point area; and take the minimum of that grayscale value and the grayscale threshold as the increased grayscale value of the second sub-pixel.
The second image grayscale value characterizes the second sub-pixel's grayscale value in the first gaze point area or in the second gaze point area.
In some embodiments, the multiple gaze areas include an edge area, which is the area adjacent to the central area; the central area is the area centered on the gaze point position; the processing unit 2501 is specifically configured to: for the multiple sub-pixels in the edge area, reduce the grayscale values of the sub-pixels located in the transition area.
The third sub-pixel's grayscale value is reduced based on its area ratio and its third image grayscale value; the third image grayscale value characterizes the third sub-pixel's image grayscale value in the first gaze point area or in the second gaze point area.
In some embodiments, the processing unit 2501 is further configured to determine the third sub-pixel's area ratio based on the position of its center point within the target gaze point area and the width of the third sub-pixel.
The third sub-pixel is any sub-pixel of the edge area located in the transition area; its area ratio characterizes its proportion within the first gaze point area or within the second gaze point area.
In some embodiments, the processing unit 2501 is specifically configured to:
reduce the third sub-pixel's grayscale value based on its proportion in the first gaze point area, its proportion in the second gaze point area, and its grayscale value in the first gaze point area; or,
reduce the third sub-pixel's grayscale value based on its proportion in the first gaze point area, its proportion in the second gaze point area, and its grayscale value in the second gaze point area.
In some embodiments, the processing unit 2501 is specifically configured to:
reduce the third sub-pixel's grayscale value based on its first grayscale value in the first gaze point area and its second grayscale value in the second gaze point area.
The first grayscale value is determined from the third sub-pixel's proportion in the first gaze point area and its image grayscale value in the first gaze point area; the second grayscale value is determined from its proportion in the second gaze point area and its image grayscale value in the second gaze point area.
In some embodiments, the processing unit 2501 is further configured to black out a sub-pixel's grayscale value to reduce it.
The sub-pixel is the first sub-pixel or the third sub-pixel: the first sub-pixel is any sub-pixel of the central area located in the transition area; the third sub-pixel is any sub-pixel of the edge area located in the transition area.
In some embodiments, the processing unit 2501 is further configured to enlarge the coverage of the transition area and/or reduce the coverage of the non-transition area.
In some embodiments, the position of a sub-pixel's center point within the target gaze point area can be determined as follows: determine the target gaze point area according to the gaze point position, and determine the position of the target sub-pixel's center point within the target gaze point area based on the position of the center sub-pixel of the target sub-pixel's row.
The target sub-pixel is any one of the first, second, and third sub-pixels: the first sub-pixel is any sub-pixel of the central area located in the transition area; the second sub-pixel is any sub-pixel of the central area located in the non-transition area; the third sub-pixel is any sub-pixel of the edge area located in the transition area.
In some embodiments, the processing unit 2501 is specifically configured to: obtain the width of the target gaze point area; determine, from that width and the position of the center sub-pixel, the width of the second area of the target sub-pixel's row; and then determine, from the width of the second area, the position of the target sub-pixel's center point within the target gaze point area.
The second area is the incomplete area on the left side of the target sub-pixel's row.
In some embodiments, the processing unit 2501 is further configured to obtain the offset between the center sub-pixel of the target sub-pixel's row and the center point of the display panel, and determine the position of that center sub-pixel based on the offset and the sub-pixel width.
In some embodiments, the multiple gaze areas include a peripheral area, which is the area other than the central area and the edge area; the central area is the area centered on the gaze point position, and the edge area is the area adjacent to the central area; the processing unit 2501 is specifically configured to apply no adjustment control to the sub-pixels in the peripheral area.
In some embodiments, the display device further includes a prism disposed above the display panel; the processing unit 2501 is further configured to determine the placement of the prism based on the device parameters of the display device.
In some embodiments, the device parameters of the display device include at least two of the following: the horizontal aperture of the prism; the arch height of the prism; the distance between the prism and the display panel; the number of sub-pixels of the display panel covered by the prism; and the width of the sub-pixels of the display panel.
When implemented in hardware, the acquisition unit 2502 in the embodiments of the present application may be integrated on a communication interface, and the processing unit 2501 may be integrated on a processor.
FIG. 27 shows another possible schematic structural diagram of the grayscale control apparatus involved in the above embodiments. The apparatus includes: a processor 2602 and a communication interface 2603. The processor 2602 is configured to control and manage the actions of the apparatus, e.g., to perform the steps performed by the processing unit 2501 above, and/or to perform other processes of the techniques described herein. The communication interface 2603 is configured to support communication between the apparatus and other network entities, e.g., to perform the steps performed by the acquisition unit 2502 above. The apparatus may further include a memory 2601 and a bus 2604, the memory 2601 being configured to store the program code and data of the apparatus.
The memory 2601 may be a memory in the apparatus; it may include a volatile memory, such as a random access memory, and may also include a non-volatile memory, such as a read-only memory, a flash memory, a hard disk, or a solid-state drive; it may also include a combination of the above kinds of memory.
The processor 2602 may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of the present application. The processor may be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination implementing computing functions, for example a combination including one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 2604 may be an Extended Industry Standard Architecture (EISA) bus or the like. The bus 2604 may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 27, but this does not mean there is only one bus or one type of bus.
The apparatus in FIG. 27 may also be a chip. The chip includes one or more (including two) processors 2602 and a communication interface 2603.
Optionally, the chip further includes a memory 2605, which may include a read-only memory and a random access memory and provides operation instructions and data to the processor 2602. Part of the memory 2605 may also include a non-volatile random access memory (NVRAM).
In some implementations, the memory 2605 stores the following elements: execution modules or data structures, or subsets thereof, or extended sets thereof.
In the embodiments of the present application, the corresponding operations are performed by calling the operation instructions stored in the memory 2605 (the operation instructions may be stored in an operating system).
Some embodiments of the present disclosure provide a computer-readable storage medium (e.g., a non-transitory computer-readable storage medium) storing computer program instructions which, when run on a computer (e.g., a receiving node), cause the computer to perform the grayscale control method of any of the above embodiments.
Exemplarily, the computer-readable storage medium may include, but is not limited to: magnetic storage devices (e.g., hard disks, floppy disks, or magnetic tapes), optical discs (e.g., CD (Compact Disk), DVD (Digital Versatile Disk)), smart cards, and flash memory devices (e.g., EPROM (Erasable Programmable Read-Only Memory), cards, sticks, or key drives). The various computer-readable storage media described in this disclosure may represent one or more devices and/or other machine-readable storage media for storing information. The term "machine-readable storage medium" may include, but is not limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
Some embodiments of the present disclosure further provide a computer program product, for example stored on a non-transitory computer-readable storage medium. The computer program product includes computer program instructions which, when executed on a computer (e.g., a receiving node), cause the computer to perform the grayscale control method of the above embodiments.
Some embodiments of the present disclosure further provide a computer program. When executed on a computer (e.g., a receiving node), the computer program causes the computer to perform the grayscale control method of the above embodiments.
The beneficial effects of the above computer-readable storage medium, computer program product, and computer program are the same as those of the grayscale control method of some of the above embodiments and are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division into units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling, direct coupling, or communication connections shown or discussed may be implemented through interfaces; the indirect coupling or communication connections of devices or units may be electrical, mechanical, or take other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit.
The above are only specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any changes or substitutions conceivable to a person skilled in the art within the technical scope disclosed by the present disclosure shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (24)
- A display device, wherein the display device comprises: a display panel and a processor; the processor is configured to: determine, based on the position of a target user's gaze point within the display panel, multiple gaze areas corresponding to the display panel; the processor is further configured to: for each of the multiple gaze areas, control the grayscale values of the sub-pixels in the gaze area based on the control strategy corresponding to the gaze area; wherein at least two of the multiple gaze areas correspond to different control strategies.
- The display device according to claim 1, wherein the display panel corresponds to multiple gaze points, one gaze point corresponds to one area, and the area corresponding to each gaze point comprises a transition area and a non-transition area; wherein the multiple gaze points are viewpoints from which the target user gazes at the display panel from different angles; the transition area is the area between the critical lines of the areas corresponding to any two gaze points, and the non-transition area is the area other than the transition area.
- The display device according to claim 2, wherein the processor is further configured to: determine a target gaze point area based on the gaze point position; the target gaze point area is the area, among multiple gaze point areas, that affects the grayscale value of a sub-pixel; the target gaze point area comprises a first gaze point area and a second gaze point area, one of which is the area corresponding to the gaze point position and the other of which is an area adjacent to the area corresponding to the gaze point position.
- The display device according to claim 3, wherein the multiple gaze areas comprise a central area, the central area being the area in which the gaze point position is located; the processor is specifically configured to: for the multiple sub-pixels in the central area, reduce the grayscale values of the sub-pixels, among the multiple sub-pixels, located in the transition area, and/or increase the grayscale values of the sub-pixels, among the multiple sub-pixels, located in the non-transition area.
- The display device according to claim 4, wherein the processor is specifically configured to: determine the grayscale coefficient of a first sub-pixel based on the position of the first sub-pixel's center point within the target gaze point area and the width of a first area, the first sub-pixel being any sub-pixel of the central area located in the transition area and the first area being a partial area of the target gaze point area; and reduce the grayscale value of the first sub-pixel based on its grayscale coefficient and its first image grayscale value in the target gaze point area, the first image grayscale value characterizing the grayscale value of the first sub-pixel in the first gaze point area or in the second gaze point area.
- The display device according to claim 4 or 5, wherein the processor is specifically configured to: increase the grayscale value of a second sub-pixel based on the position of the second sub-pixel's center point within the target gaze point area and the width of the target gaze point area; wherein the second sub-pixel is any sub-pixel of the central area located in the non-transition area.
- The display device according to claim 6, wherein the processor is further configured to: determine the grayscale coefficient of the second sub-pixel based on the position of its center point within the target gaze point area and the width of the target gaze point area; determine the grayscale value of the second sub-pixel based on its grayscale coefficient and its second image grayscale value in the target gaze point area, the second image grayscale value characterizing the grayscale value of the second sub-pixel in the first gaze point area or in the second gaze point area; and take the minimum of the second sub-pixel's grayscale value and the grayscale threshold of the multiple sub-pixels as the increased grayscale value of the second sub-pixel.
- The display device according to claim 3, wherein the multiple gaze areas comprise an edge area, the edge area being the area adjacent to a central area and the central area being the area in which the gaze point position is located; the processor is specifically configured to: for the multiple sub-pixels in the edge area, reduce the grayscale values of the sub-pixels, among the multiple sub-pixels, located in the transition area.
- The display device according to claim 8, wherein the processor is specifically configured to: reduce the grayscale value of a third sub-pixel based on the area ratio of the third sub-pixel and the third image grayscale value of the third sub-pixel, the third image grayscale value characterizing the image grayscale value of the third sub-pixel in the first gaze point area or in the second gaze point area.
- The display device according to claim 9, wherein the processor is further configured to: determine the area ratio of the third sub-pixel based on the position of its center point within the target gaze point area and the width of the third sub-pixel; the third sub-pixel is any sub-pixel of the edge area located in the transition area; the area ratio of the third sub-pixel characterizes the proportion of the third sub-pixel within the first gaze point area or within the second gaze point area.
- The display device according to claim 9, wherein the processor is specifically configured to: reduce the grayscale value of the third sub-pixel based on its proportion in the first gaze point area, its proportion in the second gaze point area, and its grayscale value in the first gaze point area; or reduce the grayscale value of the third sub-pixel based on its proportion in the first gaze point area, its proportion in the second gaze point area, and its grayscale value in the second gaze point area.
- The display device according to claim 9, wherein the processor is specifically configured to: reduce the grayscale value of the third sub-pixel based on a first grayscale value of the third sub-pixel in the first gaze point area and a second grayscale value of the third sub-pixel in the second gaze point area; wherein the first grayscale value is determined based on the third sub-pixel's proportion in the first gaze point area and its image grayscale value in the first gaze point area, and the second grayscale value is determined based on its proportion in the second gaze point area and its image grayscale value in the second gaze point area.
- The display device according to claim 4 or 8, wherein the processor is further configured to: black out the grayscale value of a sub-pixel to reduce the grayscale value of the sub-pixel; the sub-pixel is a first sub-pixel or a third sub-pixel, the first sub-pixel being any sub-pixel of the central area located in the transition area and the third sub-pixel being any sub-pixel of the edge area located in the transition area.
- The display device according to any one of claims 2-13, wherein the processor is further configured to: enlarge the coverage of the transition area, and/or reduce the coverage of the non-transition area.
- The display device according to any one of claims 4-6 or any one of claims 8-12, wherein the position of a sub-pixel's center point within the target gaze point area can be determined as follows: determining the target gaze point area according to the gaze point position; determining the position of a target sub-pixel's center point within the target gaze point area based on the position of the center sub-pixel of the row in which the target sub-pixel is located; wherein the target sub-pixel is any one of a first sub-pixel, a second sub-pixel, and a third sub-pixel; the first sub-pixel is any sub-pixel of the central area located in the transition area; the second sub-pixel is any sub-pixel of the central area located in the non-transition area; the third sub-pixel is any sub-pixel of the edge area located in the transition area.
- The display device according to claim 15, wherein the processor is specifically configured to: obtain the width of the target gaze point area; determine, based on the width of the target gaze point area and the position of the center sub-pixel, the width of a second area of the row in which the target sub-pixel is located, the second area being the incomplete area on the left side of that row; and determine, according to the width of the second area, the position of the target sub-pixel's center point within the target gaze point area.
- The display device according to claim 15 or 16, wherein the processor is further configured to: obtain the offset between the center sub-pixel of the target sub-pixel's row and the center point of the display panel; and determine the position of the center sub-pixel of the target sub-pixel's row based on the offset and the sub-pixel width.
- The display device according to any one of claims 2-17, wherein the multiple gaze areas comprise a peripheral area, the peripheral area being the area other than a central area and the edge area; the central area is the area centered on the gaze point position, and the edge area is the area adjacent to the central area; the processor is further configured to: apply no adjustment control to the multiple sub-pixels in the peripheral area.
- The display device according to any one of claims 1-18, wherein the display device further comprises a prism disposed above the display panel; the processor is further configured to: determine the placement of the prism based on the device parameters of the display device.
- The display device according to claim 19, wherein the device parameters of the display device comprise at least two of the following: the horizontal aperture of the prism; the arch height of the prism; the distance between the prism and the display panel; the number of sub-pixels of the display panel covered by the prism; and the width of the sub-pixels of the display panel.
- A grayscale control method, wherein the method comprises: determining, based on the position of a target user's gaze point within the display panel, multiple gaze areas corresponding to the display panel; and, for each of the multiple gaze areas, controlling the grayscale values of the sub-pixels in the gaze area based on the control strategy corresponding to the gaze area; wherein at least two of the multiple gaze areas correspond to different control strategies.
- A grayscale control apparatus, wherein the apparatus comprises: a processing unit; the processing unit is configured to: determine, based on the position of a target user's gaze point within the display panel, multiple gaze areas corresponding to the display panel; the processing unit is further configured to: for each of the multiple gaze areas, control the grayscale values of the sub-pixels in the gaze area based on the control strategy corresponding to the gaze area; wherein at least two of the multiple gaze areas correspond to different control strategies.
- A computer-readable storage medium, wherein the computer-readable storage medium stores instructions which, when executed by a computer, cause the computer to perform the grayscale control method according to claim 21.
- A computer program product, wherein the computer program product comprises instructions which, when executed on a computer, cause the computer to perform the grayscale control method according to claim 21.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202480002849.4A CN121153075A (zh) | 2024-04-16 | 2024-11-29 | Display device, grayscale control method, apparatus, and storage medium |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2024/088125 WO2025217817A1 (zh) | 2024-04-16 | 2024-04-16 | Electronic device, grayscale compensation method, apparatus, and storage medium |
| CNPCT/CN2024/088125 | 2024-04-16 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025218182A1 true WO2025218182A1 (zh) | 2025-10-23 |
Family
ID=97402684
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2024/088125 Pending WO2025217817A1 (zh) | Electronic device, grayscale compensation method, apparatus, and storage medium |
| PCT/CN2024/135910 Pending WO2025218182A1 (zh) | Display device, grayscale control method, apparatus, and storage medium |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2024/088125 Pending WO2025217817A1 (zh) | Electronic device, grayscale compensation method, apparatus, and storage medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (2) | CN121153074A (zh) |
| WO (2) | WO2025217817A1 (zh) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104205199A (zh) * | 2012-04-06 | 2014-12-10 | 株式会社半导体能源研究所 | Display device and electronic device |
| KR20150037231A (ko) * | 2013-09-30 | 2015-04-08 | 엘지디스플레이 주식회사 | Multi-view image generation method and stereoscopic image display device using the same |
| KR20150077167A (ko) * | 2013-12-27 | 2015-07-07 | 엘지디스플레이 주식회사 | Three-dimensional image display device and driving method thereof |
| US20150245007A1 (en) * | 2014-02-21 | 2015-08-27 | Sony Corporation | Image processing method, image processing device, and electronic apparatus |
| CN116391223A (zh) * | 2020-10-27 | 2023-07-04 | 索尼集团公司 | Information processing device, information processing method, and program |
Family Cites Families (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4375468B2 (ja) * | 2007-09-26 | 2009-12-02 | エプソンイメージングデバイス株式会社 | Dual-screen display device |
| TW201441667A (zh) * | 2013-04-26 | 2014-11-01 | Wintek Corp | Stereoscopic image display method and related device |
| CN104965308B (zh) * | 2015-08-05 | 2017-12-22 | 京东方科技集团股份有限公司 | Three-dimensional display device and display method thereof |
| CN110830783B (zh) * | 2019-11-28 | 2021-06-01 | 歌尔光学科技有限公司 | VR image processing method and apparatus, VR glasses, and readable storage medium |
| CN111292701B (zh) * | 2020-03-31 | 2022-03-08 | Tcl华星光电技术有限公司 | Display panel compensation method and apparatus |
| CN113538271B (zh) * | 2021-07-12 | 2025-02-14 | Oppo广东移动通信有限公司 | Image display method and apparatus, electronic device, and computer-readable storage medium |
- 2024
- 2024-04-16 WO PCT/CN2024/088125 patent/WO2025217817A1/zh active Pending
- 2024-04-16 CN CN202480000741.1A patent/CN121153074A/zh active Pending
- 2024-11-29 WO PCT/CN2024/135910 patent/WO2025218182A1/zh active Pending
- 2024-11-29 CN CN202480002849.4A patent/CN121153075A/zh active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| CN121153075A (zh) | 2025-12-16 |
| WO2025217817A1 (zh) | 2025-10-23 |
| CN121153074A (zh) | 2025-12-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11474597B2 (en) | Light field displays incorporating eye trackers and methods for generating views for a light field display using eye tracking information | |
| US11132056B2 (en) | Predictive eye tracking systems and methods for foveated rendering for electronic displays | |
| US10871825B1 (en) | Predictive eye tracking systems and methods for variable focus electronic displays | |
| RU2541936C2 (ru) | Система трехмерного отображения | |
| JP7094266B2 (ja) | 単一深度追跡型の遠近調節-両眼転導ソリューション | |
| CN113196136B (zh) | 增强现实头戴送受话器中的动态会聚调整 | |
| CN107071382B (zh) | 立体图像显示装置 | |
| US11659158B1 (en) | Frustum change in projection stereo rendering | |
| EP3409013B1 (en) | Viewing device adjustment based on eye accommodation in relation to a display | |
| US10553014B2 (en) | Image generating method, device and computer executable non-volatile storage medium | |
| JP2021518679A (ja) | ディスプレイシステムのための深度ベースの中心窩化レンダリング | |
| US20150312558A1 (en) | Stereoscopic rendering to eye positions | |
| JP2020514926A (ja) | ディスプレイシステムのための深度ベース中心窩化レンダリング | |
| CN105992965A (zh) | 响应于焦点移位的立体显示 | |
| US11641455B2 (en) | Method and apparatus for measuring dynamic crosstalk | |
| Cebeci et al. | Gaze-directed and saliency-guided approaches of stereo camera control in interactive virtual reality | |
| CN104216126A (zh) | 一种变焦3d显示技术 | |
| US20130265398A1 (en) | Three-Dimensional Image Based on a Distance of a Viewer | |
| CN117412020A (zh) | 视差调整方法、装置、存储介质和计算设备 | |
| WO2025218182A1 (zh) | 显示设备、灰阶控制方法、装置及存储介质 | |
| US20250067982A1 (en) | Controllable aperture projection for waveguide display | |
| US11921295B1 (en) | Eyewear with lenses for reduced discrepancy between accommodation and convergence | |
| Liu et al. | Computational 3D displays | |
| GB2636997A (en) | Computer-implemented method and system | |
| Gurrieri | Improvements in the visualization of stereoscopic 3D imagery |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24935710; Country of ref document: EP; Kind code of ref document: A1 |