WO2024228015A1 - Display device for a user-mountable display system
- Publication number
- WO2024228015A1 (application PCT/GB2024/051141)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- optical signals
- user
- focal length
- image source
- receive
- Prior art date: 2023-05-03
- Legal status: Pending (assumed; not a legal conclusion)
Classifications
- G—PHYSICS; G02—OPTICS; G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS; G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays; G02B27/017—Head mounted
- G02B27/0172—Head mounted characterised by optical features
- G02B27/0179—Display position adjusting means not related to the information to be displayed; G02B2027/0185—Displaying image at variable distance
Abstract
There is disclosed a display device (100) for a user-mountable display system (200), the device comprising: a first image source (1) arranged to generate first optical signals (s1), the first optical signals being for bearing imagery; a second image source (2) arranged to generate second optical signals (s2), the second optical signals being for bearing imagery; and relay optics (4) arranged to receive the first and second optical signals, combine the first and second optical signals, and output the combined first and second optical signals for viewing; the first image source (1) being configured such that the first optical signals have a first, and substantially fixed, focal length (6), and the second image source being configured such that the second optical signals have a second, and substantially fixed, focal length (7) substantially different to the first focal length, such that the user may perceive simultaneously virtual imagery at a first depth corresponding to the first focal length (6) and at a second depth corresponding to the second focal length (7).
Description
DISPLAY DEVICE FOR A USER-MOUNTABLE DISPLAY SYSTEM
FIELD
The present invention relates to a display device for a user-mountable display system.
BACKGROUND
Head-worn displays are known which present virtual images to a user. These virtual images can be presented by projecting light onto a semi-reflective visor or eyepiece such that they appear superimposed onto the ambient environment being viewed by the user.
Typically such virtual images (also referred to in this context as augmenting images) are presented at a single specific focal depth in the ambient scene.
SUMMARY
According to an aspect of the present invention, there is provided a display device for a user-mountable display system, the device comprising:
A first image source arranged to generate first optical signals, the first optical signals being for bearing imagery;
A second image source arranged to generate second optical signals, the second optical signals being for bearing imagery;
Relay optics arranged to: receive the first and second optical signals; combine the first and second optical signals; and output the combined first and second optical signals for viewing,
The first image source being configured such that the first optical signals have a first, and substantially fixed, focal length,
The second image source being configured such that the second optical signals have a second, and substantially fixed, focal length being substantially different to the first focal length.
Such that the user may perceive simultaneously virtual imagery at a first depth corresponding to the first focal length and at a second depth corresponding to the second focal length.
Such a device can provide a two-depth augmented reality or virtual reality environment and thereby help the user to view augmented images (and hence the associated information) with reduced need to adjust their eye-focus. This can be particularly useful in contexts where the user is presented with information relating to either of two environments, for example an in-vehicle environment and an external environment.
The relay optics may comprise: a first combiner element arranged to receive the first optical signals and the second optical signals and output the first and second optical signals in substantial alignment. The first combiner element may be partially transmissive, partially reflective and comprise a first face arranged to receive and transmit the first optical signals and a second face arranged to receive and reflect the second optical signals.
The relay optics may comprise a set of lenses arranged in series.
The device may further comprise: a second combiner element arranged to receive the combined first and second optical signal from the relay optics and output the combined first and second optical signal to the user. The second combiner element may comprise an at least partially reflective face arranged to receive the combined first and second optical signals from the relay optics and reflect the combined first and second optical signals to the user.
The second combiner element may be partially transmissive.
As such the device is capable of mixing the augmenting imagery with the user's view of the ambient environment.
The second combiner element may be formed as a visor or eyepiece or pair of eyepieces for the user.
The focal length of the first optical signals may correspond to a distance between 0.5 and 10 metres ahead of the user, or may correspond to a distance between 50 cm and 200 cm, or may correspond to approximately 1 metre.
The focal length of the second optical signals may correspond to infinity.
According to a second aspect of the invention, there is provided a head mounted display system comprising the display device of the first aspect.
According to a third aspect there is provided a helmet comprising a display device according to the first aspect.
BRIEF DESCRIPTION OF THE FIGURES
Embodiments of the invention will now be described by way of example only with reference to the figures, in which:
Figure 1 shows a display device;
Figure 2 shows a display system comprising the display device of Figure 1;
Figure 3 shows a further view of a display system;
Figure 4 shows a mapping of an ambient scene into a far field zone and a near field zone; and
Figure 5 shows an object comparison using two camera feeds.
DETAILED DESCRIPTION
With reference to Figure 1, an example display device 100 will now be described.
Display device 100 comprises a first image source 1 and a second image source 2.
Each of these image sources is able to generate and output light signals, respectively s1, s2, bearing imagery such as virtual images. These signals are output towards a combiner element 3.
The first light output signal s1 is associated with a first, nearer, focal depth 6, and the second light output signal s2 is associated with a second focal depth 7, different to the first. This difference can be achieved in a number of different ways. For example, the stand-off distance between the respective image sources and the combiner element 3 could be different. Alternatively or additionally, the focal length could be determined by the image source itself and the output light it generates.
In the present example, the first and second focal lengths are fixed. The nearer focal depth may be configured to lie between 0.5 and 2 metres ahead of the user. The second focal depth may be configured to be at infinity.
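The patent does not specify the collimating optics, but a simple thin-lens model (an assumption, not part of the disclosure) illustrates how a different stand-off distance yields a different fixed focal depth. With the image source placed at object distance d_o inside the focal length f of a collimating lens, a virtual image forms on the source side at distance |d_i|:

```latex
\frac{1}{d_i} = \frac{1}{f} - \frac{1}{d_o}
\qquad\Rightarrow\qquad
|d_i| = \frac{f\,d_o}{f - d_o}, \quad d_o < f .
```

With illustrative values f = 50 mm and d_o ≈ 47.6 mm, the virtual image sits at roughly 1 metre; moving the source out to d_o = f collimates the light, placing the virtual image at infinity.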
The first image source 1 is arranged generally perpendicular to the second image source 2.
The combiner element 3 is semi-reflective, semi-transmissive and is arranged to receive light output from the first and second source. The combiner 3 is inclined at 45 degrees to the output light from each of the sources. (Other forms of partially-reflective, partially-transmissive combiner elements could be used in alternate embodiments).
As shown, light from the first image source 1 is transmitted through the combiner element 3 whereas light from the second image source 2 is reflected through 90 degrees. Accordingly, the light signals s1 output by the first image source 1 are combined with the light signals s2 from the second image source 2 at the semi-reflective combiner element 3.
Thus there is output from the combiner element 3 a combined light beam. This combined light beam is received by relay optics 4, a set of lenses arranged in series that condition the combined beam. The appropriate conditioning of the beam, and hence the appropriate configuration of the relay optics 4, would be apparent to the skilled optics designer.
The relay optics 4 output the combined, conditioned light to a second combiner element 5, which is partially-reflective and partially-transmissive. Typically the combiner element 5 will be integrated into a visor or eyepiece or pair of eyepieces for positioning in a user’s view.
The combiner element 5 is configured to receive the combined, conditioned light and reflect at least a portion of it into the eye of a user. As shown for this example, the user's boresight view is perpendicular to the output from the relay optics 4, and the combiner element 5 is inclined at 45 degrees to that output and positioned on the user's boresight view.
Accordingly, the virtual images carried by the light signal s1 output by image source 1 and the virtual images carried by the light signal s2 output by the image source 2 are presented to the user.
The second combiner 5 is partially reflective and partially transmissive. As such, the user is able to see the virtual images superimposed on the user's ambient view.
Given the respective focal depths of the light signals (s1, s2) and the respective virtual images they bear, the user will perceive the virtual images at one of two focal depths: a nearer focal depth 6 and a farther focal depth 7. In the present example, the first image source 1 generates virtual images for the nearer focal depth 6 whereas the second image source 2 generates virtual images for the farther focal depth 7.
Figure 2 and Figure 3 set out an example display system 200 utilising the display device 100. (For ease of viewing not all components of the system are shown in Figure 3).
Also shown are real world objects R, T and S, and the user’s eyeball.
The system 200 comprises the display device 100, a virtual image database 40, a processing unit 50 and a camera device 10.
The virtual image database 40 electronically stores a number of virtual images (43a, b, ..., n) and accompanying metadata. In particular, each virtual image may be listed alongside a particular target real-world object (T, R, S, ..., m), and/or a particular real-world region (302, 304, ..., p), and/or a particular focal depth (near, far, ..., q) as an image data set 42. A virtual image data signal (VID) is output from the virtual image database 40 to the processing unit 50.
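As a minimal sketch, an image data set 42 might be represented as below. The patent defines no schema, so all field names and types here are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImageDataSet:
    """One image data set 42 from the virtual image database 40 (field names assumed)."""
    virtual_image: bytes          # a stored virtual image 43a, 43b, ..., n
    target_object: Optional[str]  # target real-world object, e.g. "dashboard" (T, R, S, ..., m)
    target_region: Optional[str]  # real-world region, e.g. "near_field_302" or "far_field_304"
    focal_depth: str              # the depth at which to present: "near" or "far"
```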
The camera device 10 may be one or more cameras. The camera device 10 is arranged to substantially view the same ambient scene as the user, and generate ambient imaging data (AID).
The processing unit 50 comprises an image-to-display mapping module 52 and an image processing module 56. The processing unit 50 is operably connected to the virtual image database 40 and the camera device 10 such that it can receive image data sets 42 and AID respectively. Further, the processing unit 50 is operably connected to both the first image source 1 and the second image source 2 such that it may address appropriate imagery-bearing signals to each.
The image-to-display mapping module 52 comprises a transformation submodule 53, which may be used to apply a scaling, rotation or skewing to virtual images.
The image processing module 56 comprises an image recognition submodule 57 and a ranging submodule 58.
As shown in Figure 3, the camera device 10 comprises a left camera 22 and a right camera 24. The left camera generates first camera data (CD1) and the right camera generates second camera data (CD2). CD1 and CD2 combined constitute the AID.
The display system 200 is at least partially arranged on a mount structure or frame 26 having the form of a head-worn structure, e.g. a pair of glasses or goggles. As such the mount tends to comprise arms for resting on the user's ears, linked by a member where eyepieces may be mounted, and a bridge to rest on the user's nose. (Other head-worn structures are contemplated and would include helmet-mounted structures.)
The left and right cameras 22, 24 are mounted on the left and right outermost sides of the mount structure 26, separated by dimension 500. The mount structure accommodates the second combiner 5. Here the second combiner 5 is shown as a pair of eyepieces, one for each eye. In alternative embodiments, the second combiner may be a single visor member.
The eyepieces are located on the mount 26 in between the left camera 22 and the right camera 24. The left camera 22 defines a separation 501 between itself and the left eye. The right camera 24 defines a separation 502 between itself and the right eye.
The mount structure 26 is arranged such that when worn on the user's head, the combiners 5 are positioned over the user's eyes.
As depicted in Figure 3, the left camera 22 has a field of view ABC. The right camera 24 has a field of view DEF. The fields of view of the left and right cameras overlap at a common portion BDG. The nearest point of the common portion to the user is point G. The system is configured to have a minimal separation between point G and the user, thereby covering substantially the user's field of view.
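Assuming, for illustration only, that both cameras face straight ahead with equal horizontal fields of view, the distance from the cameras to point G follows from the baseline:

```latex
g \;\approx\; \frac{b}{2\,\tan\!\left(\theta/2\right)}
```

where b is the camera separation (dimension 500) and θ is each camera's horizontal field of view. With assumed values b = 140 mm and θ = 90°, the common portion begins about 70 mm ahead of the cameras; widening the field of view brings point G closer and so covers more of the user's view.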
Further, the user's left eye has a field of view IHK and the right eye IHL. There is an overlap in the region IHJ where the user would have binocular vision.
Figure 4 shows a view of an ambient scene in which the display system 200 may be used. In particular, Figure 4 shows an ambient scene a user may view while seated in a car. The scene has a distinct internal zone (including the dashboard, steering wheel, rear-view mirror and windscreen frame) and a distinct outside view (including the road and roadside). This ambient scene may be converted into a map 300 comprising a near field zone 302 and a far field zone 304. For the near field zone 302 a near field display of virtual images is preferable, and for the far field zone 304 a far field display of virtual images is preferable.
In operation the display system 200 is able to recognise objects (e.g. R, T, S) or zones (304, 302) in a scene and then match predetermined virtual objects to the respective objects or zones according to predetermined rules. In particular it is provided that certain virtual images are to be presented at a near focal length and others are to be presented at a far focal length.
By way of operational example, a user may wear the display system 200 and view an ambient scene. Objects T, S and R are present in the scene.
(Alternatively, the scene may be pre-defined and divided into distinct zones, each having a characteristic focal length. Figure 4 shows such an arrangement where a vehicle cockpit/dashboard represents a first zone, near field map 302, and the outside scene the second zone, far field map 304.)
In viewing the ambient scene, the user directs cameras 22, 24 towards the scene and imaging data (AID) is generated by the cameras and sent to the processing unit 50.
The imaging data (AID) is received by the processing unit 50 and directed to the image processing module 56. At the image processing module 56, the AID is used by an image recognition module 57 which scans the data for objects or zones of interest. Such zones or objects will generally have been pre-defined in accordance with the intended use of the system.
As a result of such scanning, the image recognition module 57 may generate a signal indicating the presence (e.g. yes or no), and direction (e.g. as a bearing), of an object (or zone) in the scene.
Further, a ranging module 58 may use the AID to determine the distance to the recognised object or zone. Such ranging may be performed using standard rangefinder geometrical techniques, parallax determinations, or may use alternative methods (see the discussion of Figure 5 below).
Therefore, as a result of the image processing module 56 using the AID, the processing unit 50 may generate a signal denoting the presence, and location (e.g. bearing and range) of a particular object or zone.
The processing unit 50 can address this presence/location signal to the image-to-display mapping module 52. The mapping module 52, making reference to the virtual image database 40, uses the presence/location signal to select any appropriate virtual image that is to be associated with the object/zone.
Moreover, the mapping module 52 uses the presence/location signal to determine a focal depth for the virtual image.
Once the desired focal depth for a virtual image has been determined, given the identified objects or zones, the processing unit 50 can address the virtual image, as a suitable signal, to the relevant image source 1 or 2.
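A minimal sketch of this selection-and-routing step is given below. It reuses the ImageDataSet sketch above; the function name and the matching rule are illustrative assumptions rather than details from the patent.

```python
from typing import Iterable, Optional, Tuple

def route_virtual_image(recognised: str,
                        entries: Iterable[ImageDataSet]) -> Optional[Tuple[bytes, int]]:
    """Select the virtual image mapped to a recognised object/zone and choose its image source.

    Returns (virtual image, source number): source 1 presents at the nearer
    focal depth 6, source 2 at the farther focal depth 7.
    """
    for entry in entries:
        if recognised in (entry.target_object, entry.target_region):
            source = 1 if entry.focal_depth == "near" else 2
            return entry.virtual_image, source
    return None  # nothing in the database 40 is mapped to this object/zone
```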
As a contextual example, where the user is in control of a vehicle, the system could be pre-configured such that: a speedometer reading is to be presented as a virtual image 43a on the dashboard at the near focal length 6, and a directional arrow (e.g. for navigation) is to be presented as a virtual image 43b to appear centrally in the windscreen at the far focal length 7.
Accordingly, when the image processing module 56 recognises the dashboard in the AID, the presence/location signal will be used by the mapping module 52 to select the speedometer virtual image 43a and address it to the near depth projector 1.
Further, when the image processing module 56 recognises the windscreen in the AID, the presence/location signal will be used by the mapping module 52 to select the directional arrow virtual image 43b and address it to the far depth projector 2.
Figure 5 illustrates steps in a process for determining the range of an object, such as may be used with the system 200.
Box 522 represents an image captured at a certain instant by left camera 22 (as such, box 522 represents camera data CD1). Box 524 represents an image captured at that same instant by right camera 24 (as such, box 524 represents camera data CD2).
The object S is present in the ambient scene and in each of the images 522, 524 (both associated with the same time).
However, object S is relatively close to the imaging devices 22, 24, which are set apart by separation 500. Thus the location of object S is different in each of the images 522 and 524. Accordingly an offset 530, representing the camera-to-camera discrepancy of close objects, is defined.
This can present a dilemma for the wider imaging system in determining where in the display to place certain virtual images that are to be, from the user's view, superimposed on object S.
However, if the offset between the respective images of object S is determined (e.g. by overlaying the images 522 and 524 and counting the intervening pixels), then this offset can be used to estimate a specific value for the range to the object S, e.g. through use of a look-up table.
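The patent leaves the offset-to-range conversion to a look-up table. The standard stereo relation sketched below is one way such a table could be populated; the baseline and per-pixel focal length are assumed values, not figures from the disclosure.

```python
def offset_to_range(offset_px: float,
                    baseline_m: float = 0.14,    # camera separation 500 (assumed value)
                    focal_px: float = 1000.0     # camera focal length in pixels (assumed)
                    ) -> float:
    """Estimate the range to an object from its camera-to-camera offset 530.

    Standard stereo relation: range = baseline * focal length / disparity.
    """
    if offset_px <= 0:
        return float("inf")  # no measurable disparity: treat the object as far field
    return baseline_m * focal_px / offset_px
```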
Further, the location of S, for the purposes of locating any relevant virtual images, can be taken as the average position of S between the two images. (This assumes that the left and right cameras are mounted at the same distance from the centre of the user's field of view.) As shown in Figure 3, the separation 501 between the left eye and the left camera is equal to the separation 502 between the right eye and the right camera. (Of course, if there were differences between separation 501 and separation 502, then an aggregated position of the object S, for the purposes of superimposing virtual images, could be calculated by taking a corresponding weighted average of the positions.)
A still further use for the offset 530 is in addressing virtual images to either image source 1 or 2 without having to determine a specific range to the associated object. For example, if the offset 530 for an object S is above a predetermined threshold, it may be determined that any virtual images mapped to the object should be sent to the first image source 1 for near focal depth display. Conversely, if the offset is below the predetermined threshold, it may be determined that any virtual images mapped to that object should be sent to the second image source 2 for far focal depth display. Such a further use could find particular utility where particular objects or zones are likely to shift between the near field and the far field.
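A sketch of that threshold rule follows; the threshold value is an illustrative assumption and would in practice be derived from the camera geometry.

```python
def source_for_offset(offset_px: float, threshold_px: float = 20.0) -> int:
    """Route a virtual image by disparity alone, with no range computation.

    A large offset 530 means the object is close, so image source 1 (near
    focal depth 6) is used; otherwise image source 2 (far focal depth 7).
    """
    return 1 if offset_px > threshold_px else 2
```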
In the above examples, the camera device 10 has been used both as a ranging device and as an imaging device. In alternative examples, it may be possible to provide a ranging device which is separate from an imaging device.
The display device and system may be provided at a helmet. The helmet may be for use in managing or controlling a vehicle, especially an aircraft.
Claims
1. A display device for a user-mountable display system, the device comprising:
A first image source arranged to generate first optical signals, the first optical signals being for bearing imagery;
A second image source arranged to generate second optical signals, the second optical signals being for bearing imagery;
Relay optics arranged to: receive the first and second optical signals; combine the first and second optical signals; and output the combined first and second optical signals for viewing,
The first image source being configured such that the first optical signals have a first, and substantially fixed, focal length,
The second image source being configured such that the second optical signals have a second, and substantially fixed, focal length being substantially different to the first focal length,
Such that the user may perceive simultaneously virtual imagery at a first depth corresponding to the first focal length and at a second depth corresponding to the second focal length.
2. The device according to claim 1, wherein the relay optics comprise:
A first combiner element arranged to receive the first optical signals and the second optical signals and output the first and second optical signals in substantial alignment.
3. The device according to claim 2 wherein the first combiner element is partially transmissive, partially reflective and comprises: a first face arranged to receive and transmit the first optical signals; and a second face arranged to receive and reflect the second optical signals.
4. The device according to any of the preceding claims wherein the relay optics comprise a set of lenses arranged in series.
5. The device according to any of the preceding claims wherein the device further comprises:
A second combiner element arranged to receive the combined first and second optical signal from the relay optics and output the combined first and second optical signal to the user.
6. The device according to claim 5 wherein the second combiner element comprises an at least partially reflective face arranged to:
Receive the combined first and second optical signals from the relay optics; and
Reflect the combined first and second optical signals to the user.
7. The device according to claim 5 or 6 wherein the second combiner element is partially transmissive.
8. The device according to claim 5, 6, or 7 wherein the second combiner element is formed as a visor or eyepiece or pair of eyepieces for the user.
9. The device according to any of the previous claims wherein the focal length of the first optical signals corresponds to a distance between 0.5 and 10 metres ahead of the user.
10. The device according to claim 9 wherein the focal length of the first optical signals corresponds to a distance between 50 cm and 200 cm.
11. The device according to any of the previous claims wherein the focal length of the second optical signals corresponds to infinity.
12. A head mounted display system comprising the display device of any of the preceding claims.
13. A helmet comprising a display device according to any one of claims 1 to 11.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2306504.8A (published as GB2631378A) | 2023-05-03 | 2023-05-03 | Display device for a user-mountable display system |
| EP23275071.1A (published as EP4459360A1) | 2023-05-03 | 2023-05-03 | Display device for a user-mountable display system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024228015A1 (en) | 2024-11-07 |
Family
ID=91067246
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/GB2024/051141 (pending; published as WO2024228015A1) | Display device for a user-mountable display system | 2023-05-03 | 2024-04-30 |
Country Status (2)
| Country | Publication |
|---|---|
| TW (1) | TW202509574A (en) |
| WO (1) | WO2024228015A1 (en) |
- 2024-04-30: PCT/GB2024/051141 filed; published as WO2024228015A1 (pending)
- 2024-05-03: TW application 113116501 filed; published as TW202509574A
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2005062629A1 (en) * | 2003-12-12 | 2005-07-07 | Headplay, Inc. | Multiple imaging arrangements for head mounted displays |
| US20180239145A1 (en) * | 2017-02-21 | 2018-08-23 | Oculus Vr, Llc | Focus adjusting multiplanar head mounted display |
| EP3839608A1 (en) * | 2019-12-16 | 2021-06-23 | Acer Incorporated | Near-eye display apparatus |
Also Published As
| Publication number | Publication date |
|---|---|
| TW202509574A (en) | 2025-03-01 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24725001; Country of ref document: EP; Kind code of ref document: A1 |