CN116466903A - Image display method, device, equipment and storage medium
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1407—General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros
Abstract
The embodiment of the application provides an image display method, apparatus, device and storage medium, relating to the technical field of computers. The method comprises: generating the respective display images of the left and right displays based on the background image and the superimposed object image in the same source image to be displayed, which simplifies image processing, lowers the requirements on the processor, and reduces power consumption and cost. Further, based on the target depth information carried by the superimposed object image, the horizontal distance between the copy of the superimposed object image displayed on the left display and the copy displayed on the right display is adjusted; the adjusted first offset image is superimposed on the background image and displayed on the left display, and the adjusted second offset image is superimposed on the background image and displayed on the right display. This achieves a stereoscopic display effect for the superimposed object and gives the user a sense of depth and space.
Description
Technical Field
The embodiments of the application relate to the technical field of computers, and in particular to an image display method, apparatus, device and storage medium.
Background
In fields such as video, gaming, online education, web conferencing, social networking and shopping, the demand for augmented reality (AR) and virtual reality (VR) technology keeps growing. AR/VR technology uses display optics to create a virtual world in three-dimensional space, providing the user with visual and other sensory simulation so that the user feels immersed in the environment and can observe objects in three-dimensional space.
In the related art, AR/VR devices achieve three-dimensional (3D) effects with complex optics, which makes the head-mounted wearable device heavy, power-hungry and costly.
Disclosure of Invention
The embodiments of the application provide an image display method, apparatus, device and storage medium for reducing the weight, power consumption and cost of head-mounted wearable devices.
In one aspect, an embodiment of the present application provides an image display method applied to a head-mounted wearable device, including:
acquiring a source image to be displayed, wherein the source image to be displayed comprises a background image and at least one superimposed object image carrying target depth information;
for each superimposed object image, adjusting the horizontal distance between the superimposed object image displayed on the left display and the superimposed object image displayed on the right display based on the corresponding target depth information, to obtain a first offset image corresponding to the left display and a second offset image corresponding to the right display;
superimposing the obtained first offset image on the background image to obtain a first display image, and displaying the first display image on the left display;
and superimposing the obtained second offset image on the background image to obtain a second display image, and displaying the second display image on the right display.
In one aspect, an embodiment of the present application provides an image display apparatus applied to a head-mounted wearable device, including:
an acquisition module, configured to acquire a source image to be displayed, wherein the source image to be displayed comprises a background image and at least one superimposed object image carrying target depth information;
a processing module, configured to adjust, for each superimposed object image, the horizontal distance between the superimposed object image displayed on the left display and the superimposed object image displayed on the right display based on the corresponding target depth information, to obtain a first offset image corresponding to the left display and a second offset image corresponding to the right display;
a display module, configured to superimpose the obtained first offset image on the background image to obtain a first display image, and display the first display image on the left display;
the display module being further configured to superimpose the obtained second offset image on the background image to obtain a second display image, and display the second display image on the right display.
Optionally, the superimposed object image further carries a reference position relative to the source image to be displayed;
the processing module is specifically configured to:
determine a target adjustment value for the horizontal distance based on the target depth information;
and control the superimposed object image to move relative to the reference position based on the target adjustment value, obtaining the first offset image and the second offset image.
Optionally, an increase of the horizontal distance is taken as the positive direction and a decrease as the negative direction;
the processing module is further configured such that:
if the image depth characterized by the target depth information is smaller than the reference image depth, the target adjustment value is less than zero, where the reference image depth refers to the focal length of the left display or the right display;
and if the image depth characterized by the target depth information is larger than the reference image depth, the target adjustment value is greater than zero.
Optionally, the processing module is specifically configured to:
determine an adjustment upper limit value of the horizontal distance based on the target depth information;
and select, according to a preset rule, the target adjustment value within the adjustment upper limit value.
Optionally, the processing module is further configured to:
before controlling the superimposed object image to move relative to the reference position based on the target adjustment value to obtain the first offset image and the second offset image, reduce the absolute value of the target adjustment value if the user's viewing duration is longer than a preset duration.
Optionally, the target adjustment value includes a first adjustment value and a second adjustment value;
the display module is specifically configured to:
control the superimposed object image to move horizontally by the first adjustment value relative to the reference position, obtaining the first offset image; and control the superimposed object image to move horizontally by the second adjustment value relative to the reference position, obtaining the second offset image.
Optionally, the display module is specifically configured to:
control the superimposed object image to move horizontally by the target adjustment value relative to the reference position, obtaining the first offset image, and use the superimposed object image as the second offset image; or
control the superimposed object image to move horizontally by the target adjustment value relative to the reference position, obtaining the second offset image, and use the superimposed object image as the first offset image.
In one aspect, embodiments of the present application provide a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the image display method described above when executing the program.
In one aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program executable by a computer device, which when run on the computer device, causes the computer device to perform the steps of the above-described image display method.
In the embodiment of the application, the first display image shown on the left display and the second display image shown on the right display are both generated from the background image and the superimposed object image in the same source image to be displayed, which simplifies image processing steps such as image decompression and image rendering, lowers the requirements on the processor, and reduces power consumption and cost. Further, based on the target depth information carried by the superimposed object image, the horizontal distance between the copy of the superimposed object image displayed on the left display and the copy displayed on the right display is adjusted; the adjusted first offset image is superimposed on the background image and displayed on the left display, and the adjusted second offset image is superimposed on the background image and displayed on the right display, achieving a stereoscopic display effect for the superimposed object and giving the user a sense of depth and space. In addition, in the method, the background image of the same source image to be displayed is shown directly on both the left display and the right display; the background image does not need to be shifted, and only the superimposed object image is translated to achieve the stereoscopic effect, which reduces the complexity of processing the background image. Meanwhile, the background image is always present in a source image to be displayed and covers a large area, whereas the superimposed object images are usually small by comparison and not every source image contains one; adjusting only the translation distance of the superimposed object images, rather than translating the background image, therefore adds depth and spatial impression to the source image while greatly easing the vergence-accommodation conflict. Moreover, adjusting the translation distance of the superimposed object image shortens the distance between the position of the superimposed object as perceived by the user and the virtual image plane of the head-mounted wearable device, which eases the vergence-accommodation conflict and thus reduces symptoms such as eye strain, blurred vision, headache and dizziness when the user watches the related video.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention; a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of a head wearable device according to an embodiment of the present application;
fig. 2 is a schematic flow chart of an image display method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a source image to be displayed according to an embodiment of the present application;
fig. 4 is a schematic diagram of left and right displays displaying a source image to be displayed according to an embodiment of the present application;
fig. 5 is another schematic diagram of left and right displays displaying a source image to be displayed according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image display device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantageous effects of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
For ease of understanding, the terms involved in the embodiments of the present invention are explained below.
Vergence-accommodation conflict: vergence means that the two eyes rotate inwards or outwards to adjust the positions of the two retinal images so that the brain can fuse them into a single image of the object. Accommodation (focusing) means that the eyes automatically adjust their focal length according to the distance of an object so that its image falls sharply on the retina and the world is seen clearly. Vergence converges the lines of sight of both eyes onto the same object, while accommodation focuses at the distance of that object. When vergence and accommodation work in synchrony and single binocular vision is achieved, the distance, size, position and direction of an object, and hence its relation to the surrounding environment, can be judged accurately. When the vergence distance and the accommodation distance separate, a vergence-accommodation conflict arises.
For example, when watching a 3D movie, the distance between the viewer and the screen is constant, so the accommodation distance cannot change. Accommodation therefore cannot follow vergence to the same distance as it normally would; the vergence and accommodation distances separate, which is the vergence-accommodation conflict. Because of this conflict, the brain is forced to fuse visual information whose vergence and focus are at different positions, causing confusion; prolonged viewing can also produce adverse reactions such as eye strain, dizziness and headache.
Referring to fig. 1, which is a schematic structural diagram of a head-mounted wearable device applicable to an embodiment of the present application, the wearable device 100 may be smart glasses, such as AR glasses, VR glasses or MR glasses. The head-mounted wearable device 100 includes a left display 101, a right display 102 and a processor 103.
The processor 103 may use a reduced instruction set computer (RISC), a digital signal processor (DSP), a central processing unit (CPU), a graphics processing unit (GPU) or other processor hardware circuits as processing units to perform the corresponding functions. The processor 103 may include only one processing unit or several. In some embodiments, the left display 101 and the right display 102 each have a separate processor 103.
In some embodiments, the head-mounted wearable device 100 further includes an image sensor, a simultaneous localization and mapping (SLAM) module, a wireless module and a memory.
The image sensor may be a camera, a depth sensor, an infrared sensor, etc., and is used to acquire images or video. The head-mounted wearable device includes one or more image sensors. The memory stores the images or video acquired by the image sensor.
The SLAM module is configured to determine the pose of the head-mounted wearable device 100 and the surrounding environment at different moments based on the images or video acquired by the image sensor.
The wireless module is used for wireless communication with surrounding devices and may be a Wi-Fi module, a classic Bluetooth module, a Bluetooth Low Energy (BLE) module, an LE Audio module, a Zigbee module, a near field communication (NFC) module, an ultra-wideband (UWB) module, etc.
Based on the system architecture shown in fig. 1, an embodiment of the application provides the flow of an image display method, as shown in fig. 2. The flow is performed by a computer device, which may be the head-mounted wearable device shown in fig. 1, and includes the following steps:
Step 201, a source image to be displayed is acquired.
Specifically, the source image to be displayed may be a video frame of a video source to be played, or a standalone image. Each source image to be displayed comprises a background image and at least one superimposed object image carrying target depth information. The background image carries no depth information; it is displayed on both the left display and the right display and has the same size as the source image to be displayed.
The superimposed object image may be the image of some superimposed object in the source image to be displayed. The superimposed object may be a bird, a dog, a bottle, a person, etc. When the source image to be displayed is shown, the superimposed object needs to be displayed stereoscopically, i.e., its three-dimensional character needs to be presented. A source image to be displayed contains one or more superimposed objects to be displayed stereoscopically. For example, referring to fig. 3, the source image to be displayed includes an image 301 of a dog, an image 302 of a bird, and a background image 303, where the dog and the bird are the superimposed objects that need stereoscopic display.
The image depth characterized by the target depth information of a superimposed object image is the intended distance between the position of the superimposed object as perceived by the user and the user's eyes. That perceived position may lie in front of, behind, or in the focal plane of the head-mounted wearable device, where the focal plane of the head-mounted wearable device is also the virtual image plane on which the device displays the virtual image of an image or video.
In some embodiments, the target depth information of the superimposed object image includes the ratio of the characterized image depth to the reference image depth. For example, the target depth information may characterize an image depth that is 0.5, 0.8, 1, 1.2, 1.5 or 2 times the reference image depth.
The reference image depth is the distance between the virtual image plane on which the background image lies and the user's eyes. Since the background image carries no depth information and is displayed directly on the left display and the right display, the reference image depth is also the distance between the focal plane (i.e., virtual image plane) of the left or right display and the user's eyes. Typically, the reference image depth is a fixed value.
In some embodiments, the reference image depth refers to the focal length of the left display or the right display.
In some embodiments, the superimposed object image further carries a reference position relative to the source image to be displayed. The reference position may be the position, relative to the source image to be displayed, of the region occupied by the superimposed object image; for example, it may be any predetermined pixel of that region in the source image to be displayed.
For example, let each pixel of the source image to be displayed be (i, j), where 0 <= i < N and 0 <= j < M, and N x M is the resolution of the source image to be displayed; N x M may be, for example, 640 x 480, 800 x 600, 1024 x 768, 1280 x 720, 1920 x 1080, 2560 x 1440, 4096 x 2160, etc.
The reference position may be (i0, j0), with 0 <= i0 < N and 0 <= j0 < M; each pixel of the superimposed object image is (i1, j1), with i0 <= i1 < N1 and j0 <= j1 < M1, where N1 <= N and M1 <= M.
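The patent does not prescribe a concrete data layout for the source image and its overlays. The following minimal Python sketch is one possible representation; all names and field choices are illustrative assumptions, not part of the application.

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class OverlayImage:
    """A superimposed object image (all names here are assumed, not from the patent)."""
    pixels: np.ndarray         # the object patch; no larger than the source image
    depth_ratio: float         # image depth as a multiple of the reference image depth
    ref_pos: tuple[int, int]   # (i0, j0): reference pixel of the patch in the source image


@dataclass
class SourceImage:
    """A source image to be displayed: one background plus zero or more overlays."""
    background: np.ndarray                      # same resolution as the source image
    overlays: list[OverlayImage] = field(default_factory=list)
```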
Step 202, for each superimposed object image, adjusting a horizontal distance between the superimposed object image displayed on the left display and the superimposed object image displayed on the right display based on the corresponding target depth information, and obtaining a first offset image corresponding to the left display and a second offset image corresponding to the right display.
In some embodiments, a target adjustment value for the horizontal distance is determined based on the target depth information; then, based on the target adjustment value, the superimposed object image is controlled to move relative to the reference position, and a first offset image and a second offset image are obtained.
Specifically, the copies of the superimposed object image displayed on the left display and on the right display may be translated towards each other to reduce the horizontal distance between them, or away from each other to increase it. Alternatively, only the copy displayed on the left display, or only the copy displayed on the right display, may be shifted to increase or decrease the horizontal distance; this application does not specifically limit this.
In practical application, the horizontal distance between the superimposed object image displayed on the left display and the superimposed object image displayed on the right display determines the distance between the position of the superimposed object perceived by the user and the eyes of the user. The target adjustment value may be a distance of at least one pixel.
In a specific implementation, the present application obtains the target adjustment value at least in the following ways:
in the first embodiment, the distance between the position of the superimposed object perceived by the user and the eyes of the user is equal to the depth of the image represented by the target depth information, and the target adjustment value is determined.
In the second embodiment, an adjustment upper limit value of the horizontal distance is first determined based on the target depth information, and the target adjustment value is then selected within that upper limit according to a preset rule.
Specifically, the adjustment upper limit value is the value at which the distance between the position of the superimposed object perceived by the user and the user's eyes equals the image depth characterized by the target depth information. The adjustment upper limit value may be an upper limit for increasing the horizontal distance or an upper limit for decreasing it.
When the target adjustment value is selected within the adjustment upper limit value, the perceived position of the superimposed object ends up closer to the virtual image plane of the head-mounted wearable device than if the image were adjusted to the full image depth characterized by the target depth information. The user still perceives the stereoscopic effect of the superimposed object, while the vergence-accommodation conflict is effectively mitigated, easing symptoms such as eye strain, blurred vision, headache and dizziness when the user watches the related video.
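As a concrete illustration of this second embodiment, the sketch below takes the target adjustment value as a fixed fraction of the adjustment upper limit. The patent only says "according to a preset rule"; the fraction-based rule and its value are assumptions.

```python
def select_target_adjustment(upper_limit_px: float, fraction: float = 0.8) -> float:
    """Pick a target adjustment value no larger in magnitude than the upper limit.

    A fraction < 1 keeps the perceived object closer to the virtual image plane
    than the full target depth would, softening the vergence-accommodation
    conflict at the cost of a slightly flattened depth effect (assumed rule).
    """
    return fraction * upper_limit_px
```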
In the third embodiment, the target adjustment value may be preset and carried directly in the target depth information; once the target depth information of the superimposed object image is obtained, the target adjustment value is read from it directly.
In some embodiments, an increase of the horizontal distance is taken as the positive direction and a decrease as the negative direction. If the image depth characterized by the target depth information is smaller than the reference image depth, the target adjustment value is less than zero; if the image depth characterized by the target depth information is greater than the reference image depth, the target adjustment value is greater than zero.
Specifically, the smaller the image depth characterized by the target depth information, the smaller the absolute value of the correspondingly set target adjustment value; the larger the image depth, the larger the absolute value of the correspondingly set target adjustment value.
Of course, when a decrease of the horizontal distance is taken as the positive direction and an increase as the negative direction, the target adjustment value is less than zero if the image depth characterized by the target depth information is greater than the reference image depth, and greater than zero if it is smaller.
In some embodiments, considering that continuously watching stereoscopic video for a long time (i.e., the continuous playing duration of the wearable device reaching a preset duration) easily causes symptoms such as eye fatigue, dizziness and headache, the application proposes reducing the absolute value of the target adjustment value when the user's viewing duration exceeds the preset duration. The adjusted target adjustment value is then used to control the superimposed object image to move relative to the reference position, obtaining the first offset image and the second offset image.
Specifically, the preset duration may be 20 minutes, 30 minutes, etc. In practice, an adjustment weight greater than 0 and less than 1 may be set, and the target adjustment value multiplied by it to obtain the adjusted target adjustment value. Several viewing-duration thresholds may also be set, each corresponding to one adjustment weight; the larger the threshold, the smaller the weight, so that the absolute value of the target adjustment value keeps shrinking as the continuous playing duration grows. If playback pauses and the pause reaches a preset threshold (e.g., 1 minute, 2 minutes, 5 minutes, etc.), the adjustment weight may be restored to 1, i.e., the absolute value of the target adjustment value is no longer reduced.
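A minimal sketch of this viewing-duration attenuation. The thresholds and weights below mirror the examples in the text (20/30-minute thresholds, a pause that resets the weight), but their exact values are assumptions.

```python
def adjustment_weight(viewing_minutes: float, paused_minutes: float = 0.0) -> float:
    """Weight in (0, 1] applied to the target adjustment value.

    Longer continuous viewing gives a smaller weight, i.e. a smaller
    |target adjustment| and a perceived object closer to the virtual image plane.
    """
    if paused_minutes >= 2.0:        # a long enough pause restores full effect (assumed)
        return 1.0
    for threshold_min, weight in [(30.0, 0.6), (20.0, 0.8)]:   # assumed schedule
        if viewing_minutes > threshold_min:
            return weight
    return 1.0


def attenuated_adjustment(target_adjustment_px: float,
                          viewing_minutes: float,
                          paused_minutes: float = 0.0) -> float:
    """Scale the target adjustment value by the viewing-duration weight."""
    return target_adjustment_px * adjustment_weight(viewing_minutes, paused_minutes)
```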
Because the absolute value of the target adjustment value is smaller, adjusting the superimposed object image with it places the perceived position of the superimposed object closer to the virtual image plane of the head-mounted wearable device, which effectively eases the vergence-accommodation conflict and reduces symptoms such as eye strain, blurred vision, headache and dizziness when the user watches the related video.
In some embodiments, the target adjustment value includes a first adjustment value and a second adjustment value. The superimposed object image is controlled to move horizontally by the first adjustment value relative to the reference position, obtaining the first offset image, and to move horizontally by the second adjustment value relative to the reference position, obtaining the second offset image.
Specifically, the target adjustment value is the sum of the first adjustment value and the second adjustment value, and the two values may be equal or unequal.
For the left display, take right-to-left as the positive direction and left-to-right as the negative direction. When the first adjustment value is greater than 0, the superimposed object image is controlled to move horizontally leftwards by the first adjustment value relative to the reference position, obtaining the first offset image. When the first adjustment value is less than 0, the superimposed object image is controlled to move horizontally rightwards by the absolute value of the first adjustment value relative to the reference position, obtaining the first offset image.
For the right display, take right-to-left as the negative direction and left-to-right as the positive direction. When the second adjustment value is greater than 0, the superimposed object image is controlled to move horizontally rightwards by the second adjustment value relative to the reference position, obtaining the second offset image. When the second adjustment value is less than 0, the superimposed object image is controlled to move horizontally leftwards by the absolute value of the second adjustment value relative to the reference position, obtaining the second offset image.
It should be noted that the conventions may also be reversed: the right display may take right-to-left as positive and left-to-right as negative, while the left display takes right-to-left as negative and left-to-right as positive; this application does not specifically limit this.
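The sketch below applies this split: the target adjustment value is divided into a first and a second adjustment value, and the overlay's reference position is shifted in opposite horizontal directions for the two displays, following the sign conventions above. The even split and the function name are assumptions; the patent also allows unequal splits.

```python
def split_and_shift(ref_pos: tuple[int, int],
                    target_adjustment_px: int) -> tuple[tuple[int, int], tuple[int, int]]:
    """Return the shifted reference positions (left display, right display).

    ref_pos is (i0, j0) with i0 the horizontal pixel coordinate (assumed layout).
    A positive target adjustment increases the horizontal distance between the
    two displayed copies; a negative one decreases it.
    """
    first = target_adjustment_px // 2           # left-display share of the adjustment
    second = target_adjustment_px - first       # right-display share
    i0, j0 = ref_pos
    left_pos = (i0 - first, j0)    # positive share moves the left copy leftwards
    right_pos = (i0 + second, j0)  # positive share moves the right copy rightwards
    return left_pos, right_pos
```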
For example, for the left display take right-to-left as the positive direction and left-to-right as the negative direction; for the right display take right-to-left as the negative direction and left-to-right as the positive direction. As shown in fig. 4, the source image to be displayed includes a letter-A image, a letter-B image and a background image, where both letter A and letter B are superimposed objects requiring stereoscopic display.
When the target adjustment value X10 = 0 for the letter-A image, the horizontal distance between the letter-A image displayed on the left display and that displayed on the right display is z1, and the two copies have the same position relative to the background image.
Similarly, when the target adjustment value X20 = 0 for the letter-B image, the horizontal distance between the letter-B image displayed on the left display and that displayed on the right display is z2, and the two copies have the same position relative to the background image.
Referring to fig. 5, the target adjustment value X10 of the letter-A image is determined based on its depth information. Since the depth information of the letter-A image characterizes an image depth greater than the reference image depth, X10 is greater than 0. The letter-A image is controlled to move horizontally leftwards by a first adjustment value X11 (X11 > 0) relative to the reference position and is then displayed on the left display, and to move horizontally rightwards by a second adjustment value X12 (X12 > 0) relative to the reference position and is then displayed on the right display, where X11 + X12 = X10 and the reference position is a designated pixel in the source image to be displayed. The horizontal distance between the letter-A image on the left display and that on the right display is then z1 + X10.
The target adjustment value X20 of the letter-B image is determined based on its depth information. Since the depth information of the letter-B image characterizes an image depth smaller than the reference image depth, X20 is less than 0. The letter-B image is controlled to move horizontally rightwards by the absolute value of a first adjustment value X21 (X21 < 0) relative to the reference position and is then displayed on the left display, and to move horizontally leftwards by the absolute value of a second adjustment value X22 (X22 < 0) relative to the reference position and is then displayed on the right display, where X21 + X22 = X20 and the reference position is a designated pixel in the source image to be displayed. The horizontal distance between the letter-B image on the left display and that on the right display is then z2 + X20.
In some embodiments, the superimposed object image is controlled to move horizontally by the target adjustment value relative to the reference position to obtain the first offset image, and the superimposed object image itself is used as the second offset image.
Specifically, only the copy displayed on the left display is moved; the copy displayed on the right display is not.
For example, for the left display, take right-to-left as the positive direction and left-to-right as the negative direction. When the target adjustment value is greater than 0, the superimposed object image is controlled to move horizontally leftwards by the target adjustment value relative to the reference position, obtaining the first offset image, and the unshifted superimposed object image is used as the second offset image.
When the target adjustment value is less than 0, the superimposed object image is controlled to move horizontally rightwards by the absolute value of the target adjustment value relative to the reference position, obtaining the first offset image, and the unshifted superimposed object image is used as the second offset image.
In some embodiments, the superimposed object image is controlled to move horizontally by the target adjustment value relative to the reference position to obtain the second offset image, and the superimposed object image itself is used as the first offset image.
Specifically, only the copy displayed on the right display is moved; the copy displayed on the left display is not.
For example, for the right display, take right-to-left as the negative direction and left-to-right as the positive direction. When the target adjustment value is greater than 0, the superimposed object image is controlled to move horizontally rightwards by the target adjustment value relative to the reference position, obtaining the second offset image, and the unshifted superimposed object image is used as the first offset image.
When the target adjustment value is less than 0, the superimposed object image is controlled to move horizontally leftwards by the absolute value of the target adjustment value relative to the reference position, obtaining the second offset image, and the unshifted superimposed object image is used as the first offset image.
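For these single-display embodiments only one reference position moves, as in this sketch (the parameter for choosing which display to shift is an assumption; the application leaves the choice open).

```python
def shift_one_display(ref_pos: tuple[int, int],
                      target_adjustment_px: int,
                      shift_left_display: bool = True) -> tuple[tuple[int, int], tuple[int, int]]:
    """Shift the overlay's reference position on one display only.

    Returns (left-display position, right-display position); the unshifted copy
    keeps the original reference position. Sign conventions as above.
    """
    i0, j0 = ref_pos
    if shift_left_display:
        return (i0 - target_adjustment_px, j0), (i0, j0)
    return (i0, j0), (i0 + target_adjustment_px, j0)
```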
In the embodiment of the application, the superimposed object image displayed on one display (the left or the right) is adjusted while the copy on the other display is not, and a stereoscopic image is still obtained. This simplifies reading, rendering and displaying the source image and reduces the complexity, amount of computation, power consumption and cache requirements of image processing. For video in particular, whose data volume is huge while the computing resources and cache memory for processing video data are scarce, adjusting the superimposed object image on only one display effectively improves video processing performance.
Step 203, the obtained first offset image is superimposed on the background image to obtain a first display image, and the first display image is displayed on the left display.
Specifically, there may be one or more first offset images. In practice, the first display image is obtained by weighted summation of the first offset image and the corresponding overlapping region of the background image, as shown in formula (1):
kx = alpha * px + (1 - alpha) * qx    (1)
where qx is the value of the x-th pixel of the first offset image, px is the value of the x-th pixel of the corresponding overlapping region of the background image, and kx is the value of the x-th pixel of the corresponding region of the first display image, with 0 < x <= m and m the total number of pixels of the first offset image; alpha is a weight with 0 <= alpha < 1, and different alpha values may be set depending on the region in which each of the m pixels lies; for example, alpha = 0.5 may be set for a preset number of pixels in the edge region of the first offset image and alpha = 0 for pixels in its other regions.
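A runnable sketch of formula (1): the offset image is written into a copy of the background, with the blending weight alpha nonzero only near the patch border. The two alpha values mirror the example above; the border width, the RGB layout and all names are assumptions.

```python
import numpy as np


def composite(background: np.ndarray, offset_img: np.ndarray,
              pos: tuple[int, int], edge: int = 2) -> np.ndarray:
    """Blend an offset image into the background per k = alpha*p + (1-alpha)*q.

    p is the background pixel and q the offset-image pixel. alpha = 0.5 within
    `edge` pixels of the patch border and 0 elsewhere, as in the example above.
    Assumes H x W x 3 arrays and pos = (i0, j0) with i0 the horizontal coordinate.
    """
    display = background.copy()
    i0, j0 = pos
    h, w = offset_img.shape[:2]
    alpha = np.zeros((h, w, 1), dtype=np.float32)   # 0 in the interior
    alpha[:edge, :] = 0.5                           # feather the top border
    alpha[-edge:, :] = 0.5                          # bottom border
    alpha[:, :edge] = 0.5                           # left border
    alpha[:, -edge:] = 0.5                          # right border
    region = display[j0:j0 + h, i0:i0 + w].astype(np.float32)
    blended = alpha * region + (1.0 - alpha) * offset_img.astype(np.float32)
    display[j0:j0 + h, i0:i0 + w] = blended.astype(background.dtype)
    return display
```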
Step 204, the obtained second offset image is superimposed on the background image to obtain a second display image, and the second display image is displayed on the right display.
Specifically, there may be one or more second offset images. In practice, the second display image may be obtained by weighted summation of the second offset image and the corresponding overlapping region of the background image. The superposition principle is the same as for the first offset image and the background image, and is not repeated here.
In the embodiment of the application, the first display image shown on the left display and the second display image shown on the right display are both generated from the background image and the superimposed object image in the same source image to be displayed, which simplifies image processing steps such as image decompression and image rendering, lowers the requirements on the processor, and reduces power consumption and cost. Further, based on the target depth information carried by the superimposed object image, the horizontal distance between the copy of the superimposed object image displayed on the left display and the copy displayed on the right display is adjusted; the adjusted first offset image is superimposed on the background image and displayed on the left display, and the adjusted second offset image is superimposed on the background image and displayed on the right display, achieving a stereoscopic display effect for the superimposed object and giving the user a sense of depth and space. In addition, in the method, the background image of the same source image to be displayed is shown directly on both the left display and the right display; the background image does not need to be shifted, and only the superimposed object image is translated to achieve the stereoscopic effect, which reduces the complexity of processing the background image. Meanwhile, the background image is always present in a source image to be displayed and covers a large area, whereas the superimposed object images are usually small by comparison and not every source image contains one; adjusting only the translation distance of the superimposed object images, rather than translating the background image, therefore adds depth and spatial impression to the source image while greatly easing the vergence-accommodation conflict. Moreover, adjusting the translation distance of the superimposed object image shortens the distance between the position of the superimposed object as perceived by the user and the virtual image plane of the head-mounted wearable device, which eases the vergence-accommodation conflict and thus reduces symptoms such as eye strain, blurred vision, headache and dizziness when the user watches the related video.
Based on the same technical concept, an embodiment of the present application provides an image display apparatus applied to a head-mounted wearable device. As shown in fig. 6, the apparatus 600 includes:
an acquisition module 601, configured to acquire a source image to be displayed, wherein the source image to be displayed comprises a background image and at least one superimposed object image carrying target depth information;
a processing module 602, configured to adjust, for each superimposed object image, the horizontal distance between the superimposed object image displayed on the left display and the superimposed object image displayed on the right display based on the corresponding target depth information, to obtain a first offset image corresponding to the left display and a second offset image corresponding to the right display;
a display module 603, configured to superimpose the obtained first offset image on the background image to obtain a first display image, and display the first display image on the left display;
the display module 603 being further configured to superimpose the obtained second offset image on the background image to obtain a second display image, and display the second display image on the right display.
Optionally, the superimposed object image further carries a reference position relative to the source image to be displayed;
the processing module 602 is specifically configured to:
determine a target adjustment value for the horizontal distance based on the target depth information;
and control the superimposed object image to move relative to the reference position based on the target adjustment value, obtaining the first offset image and the second offset image.
Optionally, an increase of the horizontal distance is taken as the positive direction and a decrease as the negative direction;
the processing module 602 is further configured such that:
if the image depth characterized by the target depth information is smaller than the reference image depth, the target adjustment value is less than zero, where the reference image depth refers to the focal length of the left display or the right display;
and if the image depth characterized by the target depth information is larger than the reference image depth, the target adjustment value is greater than zero.
Optionally, the processing module 602 is specifically configured to:
determine an adjustment upper limit value of the horizontal distance based on the target depth information;
and select, according to a preset rule, the target adjustment value within the adjustment upper limit value.
Optionally, the processing module 602 is further configured to:
before controlling the superimposed object image to move relative to the reference position based on the target adjustment value to obtain the first offset image and the second offset image, reduce the absolute value of the target adjustment value if the user's viewing duration is longer than a preset duration.
Optionally, the target adjustment value includes a first adjustment value and a second adjustment value;
the display module 603 is specifically configured to:
control the superimposed object image to move horizontally by the first adjustment value relative to the reference position, obtaining the first offset image; and control the superimposed object image to move horizontally by the second adjustment value relative to the reference position, obtaining the second offset image.
Optionally, the display module 603 is specifically configured to:
control the superimposed object image to move horizontally by the target adjustment value relative to the reference position, obtaining the first offset image, and use the superimposed object image as the second offset image; or
control the superimposed object image to move horizontally by the target adjustment value relative to the reference position, obtaining the second offset image, and use the superimposed object image as the first offset image.
In the embodiment of the application, the first display image shown on the left display and the second display image shown on the right display are both generated from the background image and the superimposed object image in the same source image to be displayed, which simplifies image processing steps such as image decompression and image rendering, lowers the requirements on the processor, and reduces power consumption and cost. Further, based on the target depth information carried by the superimposed object image, the horizontal distance between the copy of the superimposed object image displayed on the left display and the copy displayed on the right display is adjusted; the adjusted first offset image is superimposed on the background image and displayed on the left display, and the adjusted second offset image is superimposed on the background image and displayed on the right display, achieving a stereoscopic display effect for the superimposed object and giving the user a sense of depth and space.
Based on the same technical concept, an embodiment of the present application provides a computer device, which may be the head-mounted wearable device shown in fig. 1. As shown in fig. 7, it includes at least one processor 701 and a memory 702 connected to the at least one processor. The embodiment of the present application does not limit the specific connection medium between the processor 701 and the memory 702; in fig. 7 they are connected by a bus, as an example. Buses may be divided into address buses, data buses, control buses, etc.
In the embodiment of the present application, the memory 702 stores instructions executable by the at least one processor 701, and by executing these instructions the at least one processor 701 can perform the steps of the image display method described above.
The processor 701 is the control center of the computer device; it can connect the various parts of the computer device through various interfaces and lines, and achieves the three-dimensional display effect by running or executing the instructions stored in the memory 702 and invoking the data stored in the memory 702. Optionally, the processor 701 may include one or more processing units and may integrate an application processor, which mainly handles the operating system, user interface, applications and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor need not be integrated into the processor 701. In some embodiments, the processor 701 and the memory 702 may be implemented on the same chip; in other embodiments they may be implemented on separate chips.
The processor 701 may be a general-purpose processor such as a central processing unit (CPU), a digital signal processor, an application-specific integrated circuit (ASIC), a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and can implement or execute the methods, steps and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present application may be executed directly by a hardware processor, or by a combination of hardware and software modules within the processor.
The memory 702, as a non-volatile computer-readable storage medium, can store non-volatile software programs, non-volatile computer-executable programs and modules. The memory 702 may include at least one type of storage medium, for example flash memory, hard disk, multimedia card, card memory, random access memory (RAM), static random access memory (SRAM), programmable read-only memory (PROM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), magnetic memory, magnetic disk, optical disc, etc. The memory 702 may also be any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer device, but is not limited to these. The memory 702 in the embodiments of the present application may also be a circuit or any other device capable of a storage function, for storing program instructions and/or data.
Based on the same inventive concept, the embodiments of the present application provide a computer-readable storage medium storing a computer program executable by a computer device, which when run on the computer device, causes the computer device to perform the steps of the above-described image display method.
Based on the same inventive concept, embodiments of the present application provide a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when executed by a computer device, cause the computer device to perform the steps of the above-described image display method.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, or as a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer device or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer device or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer device or other programmable apparatus to produce a computer device implemented process such that the instructions which execute on the computer device or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (10)
1. An image display method applied to a head-mounted wearable device, comprising the following steps:
acquiring a source image to be displayed, wherein the source image to be displayed comprises a background image and at least one superimposed object image carrying target depth information;
for each superimposed object image, adjusting the horizontal distance between the superimposed object image displayed on the left display and the superimposed object image displayed on the right display based on corresponding target depth information to obtain a first offset image corresponding to the left display and a second offset image corresponding to the right display;
superposing the obtained first offset image with the background image to obtain a first display image, and displaying the first display image in the left display;
and superposing the obtained second offset image with the background image to obtain a second display image, and displaying the second display image in the right display.
2. The method of claim 1, wherein the overlay object image further comprises a reference position relative to the source image to be displayed;
based on the corresponding target depth information, adjusting a horizontal distance between the superimposed object image displayed on the left display and the superimposed object image displayed on the right display to obtain a first offset image corresponding to the left display and a second offset image corresponding to the right display, including:
determining a target adjustment value for the horizontal distance based on the target depth information;
and controlling the superimposed object image to move relative to the reference position based on the target adjustment value, and obtaining the first offset image and the second offset image.
3. The method of claim 2, wherein:
an increase in the horizontal distance is taken as the positive direction, and a decrease in the horizontal distance is taken as the negative direction;
if the image depth represented by the target depth information is smaller than a reference image depth, the target adjustment value is smaller than zero, the reference image depth being a focal length of the left display or the right display;
and if the image depth represented by the target depth information is larger than the reference image depth, the target adjustment value is larger than zero.
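A short numeric illustration of this sign convention, using the toy disparity model sketched after claim 1 with an assumed 2 m focal plane:

```python
focal_plane_m = 2.0  # assumed focal length of the left/right displays
for depth_m in (1.0, 2.0, 4.0):
    adj = disparity_from_depth(depth_m, focal_plane_m)
    print(f"depth {depth_m} m -> target adjustment {adj:+.1f} px")
# depth 1.0 m -> target adjustment -20.0 px  (nearer than the focal plane)
# depth 2.0 m -> target adjustment +0.0 px   (on the focal plane)
# depth 4.0 m -> target adjustment +10.0 px  (farther than the focal plane)
```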
4. The method of claim 2, wherein the determining the target adjustment value for the horizontal distance based on the target depth information comprises:
determining an adjustment upper limit value of the horizontal distance based on the target depth information;
and selecting the target adjustment value within the adjustment upper limit value according to a preset rule.
5. The method of claim 4, wherein before the controlling the superimposed object image to move relative to the reference position based on the target adjustment value to obtain the first offset image and the second offset image, the method further comprises:
if a viewing duration of the user is longer than a preset duration, reducing the absolute value of the target adjustment value.
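One plausible reading of claims 4 and 5 in the same sketch style; the preset rule (a fixed fraction), the duration threshold, and the decay factor are all invented for illustration:

```python
def target_within_limit(upper_limit_px, rule_fraction=0.8):
    # Claim 4: a preset rule selects the target adjustment value within
    # the depth-derived upper limit; here, a fixed fraction of it.
    return rule_fraction * upper_limit_px

def comfort_adjust(target_px, viewing_s, threshold_s=1800.0, decay=0.5):
    # Claim 5: once the viewing duration exceeds the preset duration,
    # shrink the magnitude of the adjustment (keeping its sign), which
    # pulls the overlay back toward the focal plane to ease eye strain.
    return target_px * decay if viewing_s > threshold_s else target_px
```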
6. The method of any of claims 2 to 5, wherein the target adjustment value comprises a first adjustment value and a second adjustment value;
the controlling the superimposed object image to move relative to the reference position based on the target adjustment value, to obtain the first offset image and the second offset image, includes:
controlling the superimposed object image to move horizontally by the first adjustment value relative to the reference position to obtain the first offset image; and controlling the superimposed object image to move horizontally by the second adjustment value relative to the reference position to obtain the second offset image.
7. The method according to any one of claims 2 to 5, wherein the controlling the superimposed object image to move relative to the reference position based on the target adjustment value, to obtain the first offset image and the second offset image, includes:
controlling the superimposed object image to move horizontally by the target adjustment value relative to the reference position to obtain the first offset image, and taking the superimposed object image as the second offset image; or,
controlling the superimposed object image to move horizontally by the target adjustment value relative to the reference position to obtain the second offset image, and taking the superimposed object image as the first offset image.
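Claims 6 and 7 describe two ways of distributing the target adjustment value between the two offset images; a sketch of both follows, with the function names invented here:

```python
def split_symmetric(target_px):
    # Claim 6: a first and a second adjustment value, e.g. opposite
    # halves of the target adjustment (the exact split is unspecified).
    return -target_px / 2.0, +target_px / 2.0

def split_one_sided(target_px, shift_first=True):
    # Claim 7: the full adjustment goes to one offset image while the
    # unshifted overlay is reused as the other offset image.
    return (target_px, 0.0) if shift_first else (0.0, target_px)
```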
8. An image display device applied to a head-mounted wearable device, comprising:
the acquisition module is used for acquiring a source image to be displayed, wherein the source image to be displayed comprises a background image and at least one superimposed object image carrying target depth information;
the processing module is used for adjusting the horizontal distance between the superimposed object image displayed on the left display and the superimposed object image displayed on the right display based on the corresponding target depth information for each superimposed object image to obtain a first offset image corresponding to the left display and a second offset image corresponding to the right display;
the display module is used for superimposing the obtained first offset image on the background image to obtain a first display image, and displaying the first display image on the left display;
and the display module is further used for superimposing the obtained second offset image on the background image to obtain a second display image, and displaying the second display image on the right display.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any of claims 1-7 when the program is executed.
10. A computer readable storage medium, characterized in that it stores a computer program executable by a computer device, which program, when run on the computer device, causes the computer device to perform the steps of the method according to any one of claims 1-7.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310458371.3A CN116466903A (en) | 2023-04-25 | 2023-04-25 | Image display method, device, equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN116466903A true CN116466903A (en) | 2023-07-21 |
Family
ID=87182245
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310458371.3A Pending CN116466903A (en) | 2023-04-25 | 2023-04-25 | Image display method, device, equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116466903A (en) |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |