Head-mounted display device
Technical Field
The present invention relates to a three-dimensional display device, and more particularly, to a head-mounted display device.
Background
Virtual Reality (VR) technology is a computer simulation technology that creates a virtual world for the user to experience: a computer generates a simulated environment in which the user is immersed, providing interactive three-dimensional dynamic views based on the fusion of multi-source information and the simulation of entity behavior. In existing virtual reality systems, binocular stereoscopic vision is the key to achieving three-dimensional perception. Because the images seen by the user's two eyes differ in reality, two different images are generated and shown on different displays, and each image enters the corresponding left or right eye as required, so that the user perceives image parallax and can thereby perceive the depth information of the scene.
Augmented Reality (AR) is a technology that superimposes virtual objects onto a real scene, applying virtual information to the real world so that the real environment and the virtual objects are superimposed in real time in the same picture or space and can be perceived by the human senses, thereby providing a sensory experience that goes beyond reality.
Currently, Head Mounted Displays (HMDs) are the key devices used to present virtual reality and augmented reality. However, head-mounted displays in the prior art have two main problems. First, depth perception is achieved mainly through binocular stereo, which cannot resolve the conflict between binocular focusing (accommodation) and convergence, so prolonged wearing easily causes uncomfortable symptoms such as fatigue and vertigo. Second, the resolution is uniform over the entire field of view of the display: the resolution at the periphery of the region the eyes attend to is the same as the resolution within that region, whereas the perceptual resolution of the human eye differs between the attended region and its periphery, so a uniform resolution does not make full use of the visual characteristics of the human eye. In view of this, some related inventions introduce a transmissive holographic screen into a head-mounted display device, but such devices are still essentially binocular or multi-view three-dimensional displays that only partially adopt holographic elements; they cannot achieve true holographic display, cannot resolve the conflict between binocular focusing and convergence, and cannot fully exploit the visual characteristics of the human eye to improve the efficiency of the head-mounted display device.
SUMMARY OF THE UTILITY MODEL
In view of the above, it is necessary to provide a head-mounted display device that addresses at least one of the above-mentioned problems.
A head-mounted display device, the device comprising:
a light modulation unit, configured to perform integral imaging on the image output by a two-dimensional display panel so as to form a three-dimensional virtual image;
and an eyepiece, configured to magnify the three-dimensional virtual image formed by the light modulation unit.
In one embodiment, the apparatus further comprises:
a light barrier, positioned between the light modulation unit and the eyepiece, configured to prevent the light field image projected to the left eye from being projected to the right eye and to prevent the light field image projected to the right eye from being projected to the left eye.
In one embodiment, the apparatus further comprises:
a diaphragm, positioned between the light modulation unit and the eyepiece, configured to filter out the light rays of the viewing areas other than the main viewing area.
Further, the center of the diaphragm is aligned with the center of the eyepiece.
Further, the apparatus further comprises:
a light barrier, positioned between the light modulation unit and the diaphragm, configured to prevent the light field image projected to the left eye from being projected to the right eye and to prevent the light field image projected to the right eye from being projected to the left eye.
Further, the light barrier is placed at right angles to the light modulation unit.
In one embodiment, the apparatus further comprises:
an eyepiece distance adjusting unit, configured to adjust the distance between the eyepiece corresponding to the left eye and the eyepiece corresponding to the right eye on the head-mounted display device, so that the adjusted eyepiece distance matches the interpupillary distance of the user.
In one embodiment, the apparatus further comprises:
an eyepiece and light modulation unit distance adjusting unit, configured to adjust the distance between the eyepiece and the light modulation unit on the head-mounted display device, so that the adjusted distance between the eyepiece and the light modulation unit matches the eyesight of the user.
In one embodiment, the apparatus further comprises the two-dimensional display panel:
the two-dimensional display panel is used for image output.
Further, the two-dimensional display panel is a two-dimensional display panel with uniform resolution or a two-dimensional display panel with non-uniform resolution;
the light modulation unit is a light modulation unit with a uniform dot pitch or a light modulation unit with a non-uniform dot pitch.
In one embodiment, the apparatus further comprises:
a positioning unit, configured to determine the position of the pupil of the human eye relative to the head-mounted display device, so as to adjust the position of the main viewing area such that the main viewing area is aligned with the pupil area of the human eye and the pupil lies within the main viewing area.
In one embodiment, the light modulation unit is a microlens array or a pinhole (micro-aperture) array.
In one embodiment, the apparatus further comprises:
an error storage unit, configured to store the error values corresponding to the parameter information of the head-mounted display device and the eyepiece therein.
The head-mounted display device provided by the utility model can be used in virtual reality and augmented reality environments. Compared with the prior art, the device has at least the following advantages:
1. By using the principle of integral imaging, the scene presented to the human eyes is a natural three-dimensional one, which alleviates the visual fatigue caused by long-term use.
2. The three-dimensional virtual image obtained by integral imaging is magnified by the eyepiece, which greatly improves the user's sense of three-dimensional immersion in the head-mounted display device.
Further, by means of the preferred embodiments of the present invention, the following advantages can be achieved:
1. By providing a light modulation unit with a non-uniformly distributed dot pitch and a two-dimensional display panel with a non-uniformly distributed resolution, the head-mounted display device provided by the utility model makes full use of the non-uniform resolution characteristic of the human eye, and improves the working efficiency of the device while still ensuring that the device provides a high-resolution three-dimensional scene.
2. The use of the diaphragm significantly reduces the possibility of crosstalk and improves the three-dimensional perception experience.
Drawings
Fig. 1 is a schematic diagram of hardware distribution in a head-mounted display device according to an embodiment of the present invention;
fig. 2 is a schematic diagram of hardware distribution in a head-mounted display device according to another embodiment of the present invention;
fig. 3 is a schematic diagram illustrating hardware distribution in a head-mounted display device according to still another embodiment of the present invention;
FIG. 4 is a schematic diagram of a design of parameters of a head mounted display device;
FIG. 5 is a schematic diagram of a three-dimensional object display formed in a head-mounted display device;
fig. 6 is a flowchart illustrating a generation process of three-dimensional image information in the head-mounted display device according to an embodiment of the present invention;
fig. 7 is a schematic diagram of an apparatus for generating three-dimensional image information in a head-mounted display device according to an embodiment of the present invention;
fig. 8 is a schematic view illustrating a correspondence relationship between a viewing zone shape and a sub-image shape covered by a microlens array according to an embodiment of the present invention;
fig. 9 is a schematic view of different sub-pattern distribution shapes under the same microlens array according to an embodiment of the present invention;
fig. 10 is a flowchart illustrating a detailed process of generating three-dimensional image information in a head-mounted display device according to an embodiment of the present invention;
fig. 11 is a schematic diagram of hardware distribution in a head-mounted display device according to a second embodiment of the present invention;
FIG. 12 is a graph of human eye resolution versus viewing angle;
FIG. 13 is a schematic diagram of a non-uniform resolution light field display composed of a two-dimensional display panel and a microlens array;
FIG. 14 is a schematic diagram of the generation of three-dimensional image information for a non-uniform resolution light field display;
FIG. 15 is a schematic diagram of the resolution smooth transition principle of a non-uniform resolution light field display.
Detailed Description
Reference will now be made in detail to embodiments of the invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the utility model and are not construed as limiting the utility model.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
A head-mounted display device is typically referred to as a Head Mounted Display (HMD), or simply a head display. Its general working principle is that the image on a display screen is magnified by a group of optical elements (mainly precision optical lenses) and projected onto the retina, so that a large-screen image is presented to the viewer's eyes; in this way, different effects such as Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) can be realized.
In one implementation of the present invention, as shown in fig. 1, a head-mounted display device is provided, the device including a light modulation unit and an eyepiece. The light modulation unit is used for performing integral imaging on the image output by the two-dimensional display panel to form a three-dimensional virtual image; the eyepiece is used for magnifying the three-dimensional virtual image formed by the light modulation unit. Preferably, the two-dimensional display panel may be included in the head-mounted display device. Preferably, in the head-mounted display device, the two-dimensional display panel is a two-dimensional display panel with uniform or non-uniform resolution, and the light modulation unit has a uniform or non-uniform dot pitch.
As a preferable scheme, as shown in fig. 2, a light barrier is further disposed in the head-mounted display device. The light barrier is disposed between the light modulation unit and the eyepiece, and is used for keeping the light rays that express the three-dimensional virtual image on their pre-designed routes, so as to avoid mutual crosstalk when the light rays are projected to the two eyes: the light field image that should be projected to the left eye is projected only to the left eye and not to the right eye, and likewise the light field image that should be projected to the right eye is not projected to the left eye; the image projected to the human eyes through the light modulation unit is a light field image. When the viewing ranges of the user's two eyes have an overlapping area on the display panel, the overlapping area may interfere with viewing and impair the three-dimensional display effect; the light barrier is introduced to shield the overlapping range on the display panel where crosstalk would otherwise occur, so that the image quality within the viewing ranges of both eyes is better. Preferably, the light barrier is placed at right angles to the light modulation unit, i.e. the plane of the light barrier is perpendicular to the plane of the light modulation unit.
As a preferable mode, as shown in fig. 3, a stop (diaphragm) is further provided in the head-mounted display device. The stop may be the edge of an optical element in the optical module, a frame, or a specially provided perforated barrier, and its size is determined by the lens frame and other metal frames. The stop is provided between the light modulation unit and the eyepiece and is used for filtering out the light rays of the viewing areas other than the main viewing area. The main viewing area is determined by the two-dimensional display unit and the light modulation unit, and the image in the main viewing area is characterized by sufficient light and a clear image. After the content displayed by the two-dimensional display panel passes through the light modulation unit, a three-dimensional virtual image of the integral imaging display is formed. This three-dimensional virtual image has a plurality of viewing areas, including the main viewing area and the other viewing areas; if it is not processed, crosstalk between the main viewing area and the other viewing areas will occur, which is unfavorable for the high-quality display of the three-dimensional virtual image that finally enters the eyepiece and is magnified by it. Therefore, the parts of the three-dimensional virtual image outside the main viewing area are filtered out by the diaphragm, and only the three-dimensional virtual image in the main viewing area is transmitted to the eyepiece, which guarantees the quality of the image display. The center of the diaphragm is preferably aligned with the center of the eyepiece; when the related hardware is installed, the position of the eyepiece can be adjusted according to the position of the diaphragm, or the position of the diaphragm can be adjusted according to the position of the eyepiece, so that the two centers are aligned. Of course, as a more preferable scheme, as shown in fig. 3, a diaphragm and a light barrier may be disposed simultaneously in a head-mounted display device, with the light barrier located between the light modulation unit and the diaphragm, so that a high-quality three-dimensional virtual image is provided while an unnecessary overlapping region in the viewing ranges of the two eyes is avoided.
As a preferable scheme, the head-mounted display device further includes an eyepiece distance adjusting unit, configured to adjust a distance between an eyepiece corresponding to the left eye and an eyepiece corresponding to the right eye on the head-mounted display device, so that the adjusted eyepiece distance is adapted to the interpupillary distance of the user. The eyepiece distance adjusting unit can automatically identify the left eye and the right eye of a user, and adjust the distance between the eyepiece corresponding to the left eye and the eyepiece corresponding to the right eye on the head-mounted display device according to an identification result.
As a preferable scheme, the head-mounted display device further comprises an eyepiece and light modulation unit distance adjusting unit, which is used for adjusting the distance between the eyepiece and the light modulation unit on the head-mounted display device, so that the adjusted eyepiece and light modulation unit distance is adapted to the vision of the user.
As a preferable aspect, the head-mounted display device further includes a positioning unit, which is configured to determine the position of the pupil of the human eye relative to the head-mounted display device, so as to adjust the position of the main viewing area such that it is aligned with the pupil area of the human eye and the pupil lies within the main viewing area.
As a preferable scheme, as shown in fig. 4, the light modulation unit in the head-mounted display device is a microlens array or a pinhole (micro-aperture) array. As shown in fig. 5, in the integral imaging display, the image displayed on the two-dimensional display panel is imaged by the corresponding microlenses to form a three-dimensional object light field in space, and the human eyes can perceive a real three-dimensional object by capturing this light field. Both the microlens array and the pinhole array have the effect of causing the light rays at specific locations to propagate in specific directions. The microlens array or pinhole array in the utility model may be a dynamic liquid crystal lens array or a liquid crystal pinhole array composed of liquid crystal elements; by controlling the liquid crystal elements, part or all of the array can be switched between having and not having a refractive effect, and the elements become transparent when they have no refractive effect, thereby realizing switching between two-dimensional and three-dimensional display, or mixed display of two-dimensional and three-dimensional objects.
As a preferred scheme, the head-mounted display device provided by the utility model further includes an error storage unit, which is used for storing the error values corresponding to the parameter information of the head-mounted display device and the eyepiece therein. Since the head-mounted display device includes several optical devices, especially the eyepieces, these optical devices must reach a certain precision to ensure a good final image display effect. However, both during manufacturing and during use, slight differences between the actual performance parameters and the design parameters of an optical device may occur: for example, the temperature during production may differ from the temperature during use, temperature differences in different usage situations may change the performance parameters of some devices through thermal expansion and contraction, and small dimensional deviations may also arise during production. Where high precision is required, these small differences can lead to a large difference in overall performance. It is therefore necessary to provide an error storage unit in which these errors are stored in preparation for error correction.
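Purely as an illustration of how such stored error values might be applied (the data format, parameter names and numbers below are hypothetical and not prescribed by the utility model), a minimal Python sketch could look as follows:

```python
# Minimal sketch (hypothetical names): applying stored error values to design parameters.
# The error storage unit is modelled here as a simple mapping from parameter name to a
# measured deviation that is added to the nominal (design) value before use.

NOMINAL_PARAMETERS = {               # nominal design values (placeholder numbers)
    "U_LCD": 30.0,                   # distance real 2D panel -> eyepiece, mm (assumed)
    "V_LCD": 1000.0,                 # distance virtual 2D panel -> eyepiece, mm (assumed)
    "eyepiece_focal_length": 31.0,   # mm (assumed)
}

STORED_ERRORS = {                    # deviations read from the error storage unit
    "U_LCD": +0.12,                  # e.g. assembly tolerance, mm
    "eyepiece_focal_length": -0.05,  # e.g. thermal drift, mm
}

def corrected_parameters(nominal: dict, errors: dict) -> dict:
    """Return parameters corrected by their stored error values (error defaults to 0)."""
    return {name: value + errors.get(name, 0.0) for name, value in nominal.items()}

if __name__ == "__main__":
    print(corrected_parameters(NOMINAL_PARAMETERS, STORED_ERRORS))
```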
An embodiment of the present invention further provides a method for generating three-dimensional image information, as shown in fig. 6, including the following steps:
S100: parameter information of the head-mounted display device and the eyepiece therein is acquired. The head-mounted display device is the head-mounted display device provided in the apparatus section. Once the head-mounted display device is determined, various device parameters, such as the external dimensions and focal length of the eyepiece, the distance between the eyepiece and the light modulation unit, and the spatial position of the eyepiece, are basically determined; these are the preset parameters of the head-mounted display device and, in particular, of the eyepiece therein, they are important, and the determination of the other relevant parameters is influenced by them.
The parameter information of the head-mounted display device and the eyepiece therein is determined based on preset parameters of the head-mounted display device and the eyepiece therein, wherein the preset parameters include but are not limited to: at least one of a distance between the real two-dimensional display panel and the eyepiece, a distance between the virtual two-dimensional display panel and the eyepiece, a diameter of the eyepiece, a width of the virtual two-dimensional display panel, a number of pixels covered by one lens, and a pixel dot pitch of the two-dimensional display panel.
The parameter information of the head-mounted display device and the ocular determined by the preset parameters comprises at least one of the following items: the zoom magnification of the two-dimensional display panel, the focal length of the eyepiece, the dot pitch of the virtual light modulation unit, the distance between the eyepiece and the vertex of the main viewing area, the visual angle of the main viewing area, the distance between the virtual two-dimensional display panel and the virtual light modulation unit, the distance between the real light modulation unit and the eyepiece, the distance between the real light modulation unit and the real two-dimensional display panel, the zoom magnification of the light modulation unit, and the dot pitch of the real light modulation unit.
Specifically, as shown in fig. 4, the following preset parameters are defined first: the distance between the real two-dimensional display panel and the eyepiece is U_LCD, the distance between the virtual two-dimensional display panel and the eyepiece is V_LCD, the diameter of the eyepiece is A, the width of the virtual two-dimensional display panel is W, the number of pixels covered by one microlens of the microlens array is N, and the pixel dot pitch of the two-dimensional display panel is P_LCD. The parameter information of the head-mounted display device and the eyepiece is then calculated by the following formulas:
calculating the magnification of the two-dimensional display panel, M = V_LCD / U_LCD, and calculating the focal length f of the eyepiece according to the thin-lens relation 1/f = 1/U_LCD − 1/V_LCD;
calculating the dot pitch of the virtual light modulation unit, VP_Len = N * P_LCD * M;
calculating the distance between the eyepiece and the vertex of the main viewing area, d = A * V_LCD / (A + W); wherein the main viewing area forms a conical region, and the vertex of the main viewing area is the vertex of this conical region;
calculating the viewing angle of the main viewing area, Ω = 2 * atan(A / (2d));
calculating the distance between the virtual two-dimensional display panel and the virtual light modulation unit, Vg = VP_Len / (2 * tan(Ω/2));
calculating the distance U_Len between the real light modulation unit and the eyepiece according to the thin-lens relation 1/f = 1/U_Len − 1/V_Len, where V_Len = V_LCD − Vg is the distance between the virtual light modulation unit and the eyepiece;
calculating the distance between the real light modulation unit and the real two-dimensional display panel, g = U_LCD − U_Len;
calculating the magnification of the real light modulation unit, K = V_Len / U_Len;
calculating the dot pitch of the real light modulation unit, P_Len = VP_Len / K.
The real light field display consists of the real two-dimensional display panel and the real light modulation unit, and correspondingly the virtual light field display consists of the virtual two-dimensional display panel and the virtual light modulation unit. Obviously, based on the determined parameter information of the head-mounted display device and its eyepiece, the parameter information can be corrected with the corresponding stored error values, so that the head-mounted display device adapts to the usage scenario and the actual state of the device and maintains optimal performance.
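For illustration only, the calculation of S100 described above can be written as the following Python sketch; the parameter names follow fig. 4, and the thin-lens relations used for f and U_Len are the ones stated above, which should be taken as an assumption rather than a definitive implementation:

```python
import math
from dataclasses import dataclass

@dataclass
class DesignParameters:
    """Derived optical parameters of the head-mounted display (names follow fig. 4)."""
    M: float        # magnification of the two-dimensional display panel
    f: float        # focal length of the eyepiece
    VP_Len: float   # dot pitch of the virtual light modulation unit
    d: float        # distance between the eyepiece and the vertex of the main viewing area
    Omega: float    # viewing angle of the main viewing area (radians)
    Vg: float       # distance between virtual 2D panel and virtual light modulation unit
    U_Len: float    # distance between the real light modulation unit and the eyepiece
    g: float        # distance between the real light modulation unit and the real 2D panel
    K: float        # magnification of the real light modulation unit
    P_Len: float    # dot pitch of the real light modulation unit

def compute_design_parameters(U_LCD, V_LCD, A, W, N, P_LCD):
    """Sketch of the parameter calculation in S100 (thin-lens relations assumed)."""
    M = V_LCD / U_LCD                          # panel magnification
    f = 1.0 / (1.0 / U_LCD - 1.0 / V_LCD)      # eyepiece focal length (virtual image)
    VP_Len = N * P_LCD * M                     # virtual lens-array dot pitch
    d = A * V_LCD / (A + W)                    # eyepiece to main-viewing-zone vertex
    Omega = 2.0 * math.atan(A / (2.0 * d))     # main viewing zone angle
    Vg = VP_Len / (2.0 * math.tan(Omega / 2))  # virtual panel to virtual lens array
    V_Len = V_LCD - Vg                         # virtual lens array to eyepiece
    U_Len = 1.0 / (1.0 / f + 1.0 / V_Len)      # from 1/f = 1/U_Len - 1/V_Len
    g = U_LCD - U_Len                          # real lens array to real panel
    K = V_Len / U_Len                          # lens-array magnification
    P_Len = VP_Len / K                         # real lens-array dot pitch
    return DesignParameters(M, f, VP_Len, d, Omega, Vg, U_Len, g, K, P_Len)
```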
S200: and generating three-dimensional image information of the head-mounted display equipment according to the parameter information.
From a system point of view, an embodiment of the present invention further provides an apparatus for generating three-dimensional image information, corresponding to the above method for generating three-dimensional image information, as shown in fig. 7, including:
an acquisition module 10, configured to acquire parameter information of the head-mounted display device and the eyepiece therein;
and a generating module 20, configured to generate three-dimensional image information of the head-mounted display device according to the parameter information.
The present invention will be described in further detail with reference to specific embodiments.
Example one
The present embodiment provides a head-mounted display device with uniform resolution, as shown in fig. 3, which comprises the following elements arranged in sequence: a two-dimensional display panel with uniform resolution, a microlens array with a uniformly distributed dot pitch, a diaphragm, and an eyepiece.
The two-dimensional display panel as the two-dimensional display unit may be a single display panel, and the images entering the left eye and the right eye of the user are respectively displayed through two display regions. The two-dimensional display panel may also be two panels, each panel displaying images entering the left and right eyes of a user, respectively. As a device installation manner, the panel may be fixedly installed on the head-mounted display device, that is, the head-mounted display device integrates all the structural functional units into one device without using another independent device. The panel may also be a display screen of another device (e.g., a mobile phone or a tablet computer, etc.), i.e., one or more independent mobile devices plus other structural functional units in the head-mounted display device. Thus, when the panel is the display screen of another device, the device includes the following elements in sequential arrangement: a bracket for placing other equipment, a micro lens array with uniformly distributed point distances, a diaphragm and an ocular lens. Of course, the two-dimensional display unit in the head-mounted display device may also be implemented on a separate computer, and the calculated image is sent to the two-dimensional display unit in a wired or wireless manner.
In this embodiment, after the content displayed by the two-dimensional display panel passes through the microlens array, an integral imaging display is formed; that is, the content displayed by the two-dimensional display panel is processed by the microlens array into a virtual object light field. The virtual object light field transmitted by the microlens array has a plurality of viewing zones, and without further processing the main viewing zone would suffer crosstalk from the other viewing zones. In this embodiment, the part of the virtual object light field outside the main viewing zone is filtered out by the diaphragm, the main viewing zone of the virtual object light field is passed to the eyepiece, and the eyepiece magnifies it and sends it to the human eye, so that crosstalk from the other viewing zones into the main viewing zone is avoided and the quality of the image display is improved.
The main function of the diaphragm is to let the light rays in the main viewing area pass while blocking the light rays outside the main viewing area, thereby eliminating crosstalk and providing the user with correct three-dimensional display content. Without the diaphragm, the light entering the pupil of an eye could come from two adjacent viewing zones instead of from a single viewing zone as in the ideal case, causing image defects such as double images of objects. Therefore, the shape of the stop should match the shape of the main viewing area; specifically, the position and size of the stop may be determined according to the position and size of the main viewing area, and the stop is preferably centered on the eyepiece. When the hardware is installed, the relative positions of the diaphragm and the eyepiece can be adjusted so that the center of the diaphragm is aligned with the center of the eyepiece. In the utility model, because of the introduction of the eyepiece, the shape of the main viewing area should be taken as that of the virtual light field display obtained after the real light field display (composed of the two-dimensional display panel and the light modulation unit) is magnified by the eyepiece, rather than that of the original real light field display.
The shape of a viewing zone is determined by the shape of the sub-image covered by the microlens array. As shown in fig. 8, the shape of a viewing zone is identical to the shape of the sub-image magnified by the microlens, but it is located on the other side of the microlens. Two adjacent sub-images form two adjacent viewing zones, and the viewing zone formed by the sub-image directly below the microlens is called the main viewing zone. For a given microlens array, as shown in fig. 9, there can be multiple sub-image arrangements; different sub-image arrangements form main viewing zones of different shapes and are suitable for different applications, and in practice the main viewing zone shape is chosen according to the requirements of the application. For example, the sub-image shape in the middle of fig. 9 is suitable for applications that need a larger moving range in the horizontal direction, whereas the sub-image on the left side of fig. 9 forms a main viewing zone close to a circle; since the pupil of the human eye is circular, that arrangement is better suited to the application of the utility model, i.e. to a head-mounted display.
When the viewing ranges of the user's two eyes have an overlapping area on the display panel, the overlapping area may interfere with viewing and impair the display of the three-dimensional effect. Therefore, in this embodiment, a light barrier is further added to the head-mounted display device. As shown in fig. 3, the light barrier is located between the microlens array and the diaphragm and ensures that the viewing ranges of the user's two eyes do not overlap on the display panel. The light barrier may be at an angle close to a right angle, or at a right angle, to the plane of the diaphragm. By providing the light barrier, the display quality of the head-mounted display device is further guaranteed. Of course, when the viewing ranges of the two eyes have no overlapping region on the display panel, the head-mounted display device may omit the light barrier, thereby reducing the complexity and weight of the device.
The interpupillary distance of different users may differ. In order to improve the adaptability of the device to different users, the head-mounted display device of this embodiment may further include an eyepiece distance adjusting unit, configured to adjust the distance between the eyepiece corresponding to the left eye and the eyepiece corresponding to the right eye on the head-mounted display device. The eyepiece distance adjusting unit can adjust this distance according to an operation of the user; it can also automatically identify the left eye and the right eye of the user and automatically adjust the distance according to the identification result. After the eyepiece distance adjusting unit is added, the head-mounted display device provided by the utility model can adjust the eyepiece distance according to the distance between the user's eyes, so that the eyepiece distance matches the user's interpupillary distance, which improves the user experience. In other words, with the eyepiece distance adjusting unit, users with various interpupillary distances can each obtain the best user experience.
In addition, different users may have different eyesight, and some users even wear glasses for myopia or hyperopia. In order to make the head-mounted display device convenient for users with different eyesight, the device may further include an eyepiece and light modulation unit distance adjusting unit, configured to adjust the distance between the eyepiece and the microlens array on the head-mounted display device. After this adjusting unit is added, the distance between the eyepiece and the microlens array can be adjusted according to the user's eyesight, so as to adapt to users with different eyesight and ensure that every user obtains the best user experience.
A method for generating three-dimensional image information in a head-mounted display device comprises the following steps:
in the head-mounted display device of the present embodiment, parameters between the two-dimensional display panel, the microlens array, the stop, and the eyepiece are associated, and the parameters between these elements and their association relationship will be described below.
The following preset parameters are read from the memory unit of the head mounted display device:
U_LCD: the distance between the real two-dimensional display panel and the eyepiece;
V_LCD: the distance between the virtual two-dimensional display panel and the eyepiece;
A: the diameter of the eyepiece;
W: the width of the virtual two-dimensional display panel;
N: the number of pixels covered by one microlens;
P_LCD: the pixel dot pitch of the two-dimensional display panel.
The head-mounted display device here is the head-mounted display device provided in the apparatus section. Once the head-mounted display device is determined, various device parameters, such as the external dimensions and focal length of the eyepiece, the distance between the eyepiece and the light modulation unit, and the spatial position of the eyepiece, are basically determined; these are the preset parameters of the head-mounted display device and, in particular, of the eyepiece therein, they are important, and the determination of the other relevant parameters is influenced by them.
The visual meaning of the above parameters is shown in fig. 4. Based on these preset parameters, parameters such as the dot pitch of the real microlens array and the spacing between the real microlens array and the real two-dimensional panel can be calculated according to the following steps. The parameters to be determined specifically include: the magnification of the two-dimensional display panel, the focal length of the eyepiece, the dot pitch of the virtual microlens array, the distance between the eyepiece and the vertex of the main viewing area, the viewing angle of the main viewing area, the distance between the virtual two-dimensional display panel and the virtual microlens array, the distance between the real microlens array and the eyepiece, the distance between the real microlens array and the real two-dimensional display panel, the magnification of the real microlens array, and the dot pitch of the real microlens array.
With these parameters, the main optical parameters of the entire head-mounted display device are determined as follows:
Step 1, calculate the magnification of the two-dimensional panel, M = V_LCD / U_LCD, and calculate the focal length f of the eyepiece according to formula (1);
Step 2, calculate the dot pitch of the virtual microlens array, VP_Len = N * P_LCD * M;
Step 3, calculate the distance between the eyepiece and the vertex of the main viewing area, d = A * V_LCD / (A + W);
Step 4, calculate the viewing angle of the main viewing area, Ω = 2 * atan(A / (2d));
Step 5, calculate the spacing between the virtual two-dimensional panel and the virtual microlens array, Vg = VP_Len / (2 * tan(Ω/2));
Step 6, calculate the distance U_Len between the real microlens array and the eyepiece according to formula (2);
Step 7, calculate the spacing between the real microlens array and the real two-dimensional panel, g = U_LCD − U_Len;
Step 8, calculate the magnification of the microlens array, K = V_Len / U_Len, where V_Len = V_LCD − Vg is the distance between the virtual microlens array and the eyepiece;
Step 9, calculate the dot pitch of the real microlens array, P_Len = VP_Len / K.
Formula (1): 1/f = 1/U_LCD − 1/V_LCD, the thin-lens relation between the real two-dimensional panel, its virtual image and the eyepiece;
Formula (2): 1/f = 1/U_Len − 1/V_Len, i.e. 1/f = 1/U_Len − 1/(V_LCD − Vg), the thin-lens relation between the real microlens array, the virtual microlens array and the eyepiece.
in the following, the embodiment will be explained with reference to fig. 8, where the two-dimensional display panel area under the microlens is formed by arranging three rows of sub-pixel point groups, the sub-pixel point group associated with the microlens is indicated by hatching, and the distribution shape of the sub-pixel point groups is as shown in fig. 8. The shape of the main viewing area transmitted by the microlens array is the same as the shape of the sub-pixel arrangement. The vertex position of the main viewing zone is a parameter that can be obtained by calculation using a design parameter, and the size of the center depth cross section of the main viewing zone is the size of the cross sectional shape of the main viewing zone and the eyepiece lens plane.
In the embodiment of the present invention, because of the introduction of the eyepiece, the parameters of the virtual light field display need to be calculated first (or calculated in advance and then loaded from memory); then, taking the head center as the origin, the pose of the virtual light field display relative to the head center is calculated according to the current head pose; finally, the content generation can be performed.
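Steps S110 to S160 below detail this procedure. Purely as an illustration of the geometry involved (hypothetical names; the two-point ray construction described in S150; a dummy stand-in for the ray tracer), a minimal Python sketch could look as follows:

```python
import numpy as np

def pose_matrix(rotation: np.ndarray, position: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation matrix and a position vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

def virtual_display_pose(head_pose: np.ndarray, display_offset: np.ndarray) -> np.ndarray:
    """S140: pose of the virtual light field display with the head centre as the origin.
    head_pose comes from the head pose measurement unit; display_offset is the fixed
    transform from the head centre to the virtual display given by the mechanical design."""
    return head_pose @ display_offset

def pixel_ray(pixel_pos: np.ndarray, lens_center: np.ndarray):
    """S150: the ray of one pixel passes through the pixel position on the virtual display
    and the optical centre of the microlens that covers this pixel."""
    direction = lens_center - pixel_pos
    return pixel_pos, direction / np.linalg.norm(direction)

def trace(origin: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Stand-in for a standard ray tracer; returns a dummy grey that depends on direction."""
    return np.full(3, 0.5 + 0.5 * direction[2])

def render_real_panel(pixels, lens_centers, display_pose):
    """S150/S160: colour every virtual-display pixel and copy it one-to-one to the real panel."""
    panel = np.zeros((len(pixels), 3))
    for i, (pix, lens) in enumerate(zip(pixels, lens_centers)):
        pix_h = (display_pose @ np.append(pix, 1.0))[:3]    # pixel in the head-centred frame
        lens_h = (display_pose @ np.append(lens, 1.0))[:3]  # lens centre in the same frame
        origin, direction = pixel_ray(pix_h, lens_h)
        panel[i] = trace(origin, direction)                 # S160: one-to-one assignment
    return panel
```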
Specifically, as shown in fig. 10, the content generating method according to the embodiment of the present invention includes the following steps:
s110: the 3D model is loaded by the VR device or a mobile device built into the VR device.
The 3D model may be in various 3D model formats, such as three-dimensional mesh, three-dimensional voxel, etc.; the mobile device can be a portable device such as a mobile phone and a tablet computer.
S120: preset parameters of the head mounted display, in particular ocular parameters thereof, are acquired.
The acquired preset parameters may include: U_LCD, V_LCD, W, N, P_LCD. These parameters may be provided by the manufacturer, stored in a memory unit of the device, and read by the light field image rendering software.
S130: generating a virtual light field display according to the eyepiece parameters; the generation method can refer to step 2 and step 5 in the calculation process above and is not repeated here.
S140: calculating the pose of the virtual light field display relative to the head center, taking the head center as the origin, according to the current head pose. The head pose parameters include the orientation and position of the head in three-dimensional space and can be read from the head pose measurement unit. The position of the virtual light field display relative to the head center is determined by the mechanical design parameters of the device (including the eyepiece-to-eye distance and the interpupillary distance between the eyepieces) and the eyepiece parameters, which can also be read from the memory unit of the device.
S150: calculating the color of each pixel of the transformed virtual light field display using a ray tracing method. For the virtual light field display, the ray in space corresponding to each pixel on the display is calculated according to the preset parameters read from the storage unit of the device. For example, the ray corresponding to a pixel can be determined by connecting two points: one point is the position of the pixel itself on the display, and the other is the position of the optical center of the microlens corresponding to that pixel. The position of the pixel is determined by the position of the display panel and the position of the pixel in the image, the optical center of the microlens is determined by the design parameters of the device, and the positions of the display panel and the microlenses can be read from the memory unit of the device. Then, using the ray tracing method, the color value of the pixel is calculated. Ray tracing is a standard method in computer graphics and is not described in detail here.
S160: assigning the colors of the pixels of the virtual light field display one to one to the corresponding pixels of the two-dimensional display panel in the real light field display. When all the pixels of the two-dimensional display panel in the real light field display have been given their corresponding color values, the head-mounted display device has generated the three-dimensional image information and can display it.
Example two
The present embodiment provides a head-mounted display device with non-uniform resolution, as shown in fig. 11. The apparatus comprises: the display device comprises a two-dimensional display panel with non-uniform resolution, a micro-lens array with non-uniform point distance distribution, a diaphragm and an eyepiece.
The two-dimensional display panel as the two-dimensional display unit may be a single piece of display panel that displays images entered into the left and right eyes of the user through two display regions, respectively. The two-dimensional display panel may also be two panels, each panel displaying images entering the left and right eyes of a user, respectively. As a device installation manner, the panel may be fixedly installed on the head-mounted display device, that is, the head-mounted display device integrates all the structural functional units into one device without using another independent device. The panel may also be a display screen of another device (e.g., a mobile phone or a tablet computer, etc.), i.e., one or more independent mobile devices plus other structural functional units in the head-mounted display device. Thus, when the panel is the display screen of another device, the device includes the following elements in sequential arrangement: a bracket for placing other equipment, a micro-lens array with non-uniform point distance distribution, a diaphragm and an ocular lens. Of course, the two-dimensional display unit in the head-mounted display device may also be implemented on a separate computer, and the calculated image is sent to the two-dimensional display unit in a wired or wireless manner.
In this embodiment, the resolution of the two-dimensional display panel is non-uniform; this structure is an optimization of the device arrangement based on the characteristics of human vision. Fig. 12 shows the relationship between the resolution of the human eye and the viewing angle. It is clear from this figure that the resolution of the human retina is not the same at different viewing angles: only the central region, i.e. around a viewing angle of zero degrees, has a very high resolution, covering a viewing angle of approximately 5 degrees. Outside this region the resolution drops sharply; for example, at a position 12 degrees from the center, the resolving power of the human eye drops to about one fifth of that at the center. Therefore, by exploiting this non-uniform distribution of the resolution of the human eye, the relevant elements of the head-mounted display device are designed accordingly: a higher resolution is used at the center of the two-dimensional display device and a lower resolution at the edge, so that the human eye sees a high-resolution three-dimensional image while still perceiving the scene around the center. With this optimized design, the high-resolution two-dimensional panel is utilized to the greatest extent while the amount of computation is reduced, which lowers the manufacturing cost of the device. At the same time, the dot pitch of the microlens array is also non-uniformly distributed and matched to the non-uniform two-dimensional display panel; this display mode is adapted to the viewing habits of the human eye and can further improve the visual experience of the user.
As can be seen from fig. 13, a two-dimensional display panel with non-uniform resolution and a microlens array with non-uniform dot pitch can constitute a light field display with non-uniform resolution. The high-resolution region of the two-dimensional display panel corresponds to the high-density microlens array, and the low-resolution region corresponds to the low-density microlens array. The benefit of such a design is that the spatial resolution is increased in the area of the high-density microlens array while the spatial resolution of the low-density microlens array area is maintained.
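Purely as an illustration of this design rule, the following sketch approximates the acuity falloff of fig. 12 with a crude assumed piecewise curve (full acuity within about 5 degrees, about one fifth at 12 degrees) and uses it to pick a microlens-density zone; the threshold and the interpolation are illustrative assumptions, not values from the utility model:

```python
def relative_acuity(eccentricity_deg: float) -> float:
    """Crude assumed approximation of fig. 12: full acuity within about 5 degrees of the
    centre, falling to roughly one fifth of the central value at about 12 degrees."""
    if eccentricity_deg <= 5.0:
        return 1.0
    if eccentricity_deg >= 12.0:
        return 0.2
    # linear interpolation between 5 and 12 degrees (assumption, for illustration only)
    return 1.0 - 0.8 * (eccentricity_deg - 5.0) / 7.0

def lens_density_zone(eccentricity_deg: float) -> str:
    """Pick the microlens-array density for a panel region from the eye's acuity there."""
    return ("high-density / high-resolution"
            if relative_acuity(eccentricity_deg) > 0.5
            else "low-density / low-resolution")

print([lens_density_zone(e) for e in (0, 4, 8, 15)])
```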
In this embodiment, at least one of the light barrier, the eyepiece distance adjusting unit, and the eyepiece-to-light modulation unit distance adjusting unit is further added, and the detailed working principle and the position structure thereof are as described in the first embodiment and are not described herein again.
The generation of three-dimensional image information for a non-uniform resolution light field display is illustrated in fig. 14. Because the low-density microlens array and the high-density microlens array differ only in their optical parameters, when the three-dimensional image information is generated, the corresponding image information is generated using the respective optical parameters; in a concrete implementation, the sub-images can be generated separately and then combined.
In order to avoid abrupt changes in resolution that might cause visual discomfort to the user, a smooth transition may be used. As shown in fig. 15, at the common boundary between the high-resolution area and the low-resolution area, the high-resolution area first generates image content at a resolution close to that of the low-resolution area, i.e. pixels are drawn with the parameters corresponding to the low-resolution image, and the resolution is then gradually increased as the position approaches the center of the high-resolution area.
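A minimal sketch of such a smooth transition, assuming a simple linear ramp between the two resolutions (the ramp shape and the example numbers are illustrative assumptions, not prescribed by the utility model):

```python
def rendering_resolution(distance_to_center: float, boundary_distance: float,
                         low_res: float, high_res: float) -> float:
    """Sketch of the smooth transition of fig. 15: at the common boundary the
    high-resolution area renders at (nearly) the low resolution, and the effective
    rendering resolution rises gradually as the position approaches the centre
    of the high-resolution area. A linear ramp is assumed here for illustration."""
    if distance_to_center >= boundary_distance:
        return low_res                                      # plain low-resolution area
    t = 1.0 - distance_to_center / boundary_distance        # 0 at boundary, 1 at centre
    return low_res + t * (high_res - low_res)

# Example: an effective resolution rising from 15 at the boundary to 60 at the centre
# (assumed units and numbers, for illustration only).
for d in (0.0, 2.5, 5.0):
    print(d, rendering_resolution(d, boundary_distance=5.0, low_res=15.0, high_res=60.0))
```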
The determination of the hardware device parameters, the determination of the diaphragm parameters and the detailed three-dimensional image information generation scheme in this second embodiment adopt the corresponding schemes of the first embodiment of the utility model.
Example three
The present embodiment provides a head-mounted display device with uniform resolution. The apparatus comprises the following elements arranged in sequence: a two-dimensional display panel with uniform resolution, a microlens array with a uniformly distributed dot pitch, and an eyepiece.
The arrangement positions and the structural modes of the two-dimensional display panel, the micro lens array and the eyepiece are the same as those of the first embodiment and the second embodiment, and are not described again here.
The head mounted display device may also include: the function and structure of the light barrier, the ocular lens distance adjusting unit, and the ocular lens and light modulating unit distance adjusting unit are as described in the first and second embodiments, and are not described herein again.
In this embodiment, the head-mounted display device further includes a positioning unit, which is used for determining the position of the pupil of the human eye relative to the head-mounted display device, so as to adjust the position of the main viewing area such that the main viewing area is aligned with the pupil area of the human eye and the pupil lies within the main viewing area. For example, the positioning unit may be a camera mounted around an eyepiece inside the head-mounted display device, which can capture an image of the eye. The position of the pupil relative to the camera is obtained by detecting the pupil in the image, and the position of the pupil relative to the head-mounted display device is then obtained from the known position of the camera relative to the head-mounted display device. Based on the position of the pupil relative to the head-mounted display device, the generation parameters of the main viewing region are controlled so that the main viewing region includes the pupil region of the human eye. More detailed processing methods are known to those skilled in the art and are not described here.
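Purely as an illustration of this positioning step (hypothetical function names; a dark-region centroid as a crude stand-in for a real pupil detector; a pinhole camera model with an assumed eye-relief depth), a minimal sketch could look as follows:

```python
import numpy as np

def pupil_centroid(eye_image: np.ndarray, dark_threshold: int = 40) -> np.ndarray:
    """Estimate the pupil centre in image coordinates: the pupil is assumed to be the
    dark region of the grayscale eye image; the centroid of that region is returned
    as (row, col)."""
    rows, cols = np.nonzero(eye_image < dark_threshold)
    return np.array([rows.mean(), cols.mean()])

def pupil_in_hmd_frame(centroid_rc: np.ndarray, camera_intrinsics: np.ndarray,
                       assumed_depth: float, camera_to_hmd: np.ndarray) -> np.ndarray:
    """Back-project the pupil centre to a 3D point in the camera frame (pinhole model with
    an assumed depth), then transform it by the known camera-to-HMD pose (4x4 matrix)."""
    fx, fy = camera_intrinsics[0, 0], camera_intrinsics[1, 1]
    cx, cy = camera_intrinsics[0, 2], camera_intrinsics[1, 2]
    row, col = centroid_rc
    p_cam = np.array([(col - cx) / fx, (row - cy) / fy, 1.0]) * assumed_depth
    p_hmd = camera_to_hmd @ np.append(p_cam, 1.0)
    return p_hmd[:3]   # pupil position in the HMD frame, used to steer the main viewing area
```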
The determination of the hardware device parameters and the detailed three-dimensional image information generation scheme in this embodiment may adopt the corresponding scheme in the first embodiment, and details are not described herein again.
Example four
The present embodiment provides a head-mounted display device with non-uniform resolution, the device comprising: a two-dimensional display panel with non-uniform resolution, a microlens array with non-uniform distribution of dot pitch, and an eyepiece.
The arrangement positions and the structural modes of the two-dimensional display panel, the micro lens array and the eyepiece are the same as those of the first embodiment and the second embodiment, and are not described again here.
The head mounted display device may also include: the function and structure of the light barrier, the ocular lens distance adjusting unit, and the ocular lens and light modulating unit distance adjusting unit are as described in the first and second embodiments, and are not described herein again.
In this embodiment, the head-mounted display device further includes a positioning unit, which is used for determining the position of the pupil of the human eye relative to the head-mounted display device, so as to adjust the position of the main viewing area such that the main viewing area is aligned with the pupil area of the human eye and the pupil lies within the main viewing area. For example, the positioning unit may be a camera mounted around an eyepiece inside the head-mounted display device, which can capture an image of the eye. The position of the pupil relative to the camera is obtained by detecting the pupil in the image, and the position of the pupil relative to the head-mounted display device is then obtained from the known position of the camera relative to the head-mounted display device. Based on the position of the pupil relative to the head-mounted display device, the generation parameters of the main viewing region are controlled so that the main viewing region includes the pupil region of the human eye. More detailed processing methods are known to those skilled in the art and are not described here.
The determination of the hardware device parameters and the detailed three-dimensional image information generation scheme in this embodiment may adopt the corresponding scheme in the first embodiment, and details are not described here.