WO1996021909A1 - Method for reducing computer calculations when generating virtual images - Google Patents
Method for reducing computer calculations when generating virtual images
- Publication number
- WO1996021909A1 (PCT/SE1996/000012)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- zone
- image
- calculated
- point
- generated
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
Abstract
This invention relates to a method for reducing the computer calculations when generating virtual images. The invention is based on the principle that image details which are redundant as regards the function of vision are neither calculated nor displayed. This is achieved by taking the following steps. The direction of the eyes of an observer is sensed and used as a basis for calculating the observer's momentary point of fixation in the image. At least one zone of the image is defined with the point of fixation as centre point, and if a plurality of zones are defined, these zones are of gradually increasing size. The image information in the smallest zone is calculated and generated with high resolution, the image information in each larger zone is calculated and generated with lower and lower resolution, and the image information outside the largest zone is finally calculated and generated with the lowest resolution.
Description
Method for Reducing Computer Calculations When Generating Virtual Images
The present invention relates to a method for reducing the computer calculations when generating virtual images.
As a rule, virtual images are generated with the same resolution over the whole image, which requires a computer of considerable calculating capacity. In many cases, however, the human eye and the human brain are incapable of taking in all this image information as it is being displayed on a display device.
When produced, computer-generated images are normally built up of a number of surfaces in the form of polygons. These surfaces may have different colours or shades as well as different densities. The more details in the image, the more polygons per image area.
This invention is based on the principle that image details which are redundant as regards the function of vision are neither calculated nor displayed. As a result, the number of calculations required to produce virtual video images can be drastically reduced. However, this method implies that there is only one observer in front of the display device, since it is the observer's point of fixation, i.e. the centre of the observer's visual field, that controls the resolution and, hence, the exactitude of the calculations. This means that the detail resolution of the displayed image is a function of the resolution characteristics of the eye.
The most immediate application concerns simulators of the "virtual reality" type (also referred to as "cyberspace"), where the user wears a helmet on his head, covering his eyes and ears. The user may also carry a number of sensors on different parts of his body, sensing his movements and position. With the aid of the information obtained from the sensors, the computer creates the illusion of an artificial world in the form of images (optionally three-dimensional ones) which are projected in front of the user's eyes by means of display devices, as well as stereo sound through earphones. This technique has been developed by, inter alia, the US Air Force and NASA. One purpose of virtual reality is to create realistic simulators.
The present invention solves the problem of how to considerably reduce the computer calculations required when generating images, by providing a method having the distinctive features recited in the appended independent claim.
In the following, the invention will be described in more detail with reference to the accompanying drawings, in which
Fig. 1 illustrates the visual acuity of an eye at high luminance as a function of the horizontal angular distance from the fovea, and a three-step approximation thereof,
Fig. 2 is a three-dimensional view of the approximation, presenting three zones, and
Fig. 3 shows an example of the surface exactitude in a generated image with respect to the point of observation of the eye.
The invention makes use of the fact that the human eye has a gradually poorer capacity to distinguish details in visual impressions the greater their angular distance from the area of highest visual acuity, the fovea. In Fig. 1, the visual acuity is shown as a function of the horizontal angular distance from the fovea. It appears from this Figure that the eye is highly sensitive to spatial distinction within an area corresponding to ±1 degree in the horizontal direction of the visual field. This applies approximately to the vertical direction as well.
When the invention is put to use, the visual field is therefore divided into a number of areas in which the eye has varying capacities to distinguish details. The number of areas may vary, depending on the degree of accuracy with which one wishes to follow the acuity curve. For exemplifying purposes, Fig. 1 thus includes a three-step approximation of the visual acuity. In the following, these three areas will be referred to as zone 1, zone 2 and zone 3.
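By way of illustration only, the zone to which a given image detail belongs can be determined from its angular offset from the point of fixation. The following sketch is not part of the original disclosure; it uses the square zones of the example, with boundary half-angles of 5° and 20° derived from the 10° × 10° and 40° × 40° zone sizes given further on, and the function and constant names are assumptions.

```python
# Sketch only: classify an image detail into zone 1, 2 or 3 from its angular
# offset from the point of fixation. The 5 and 20 degree half-angles follow
# the 10 x 10 and 40 x 40 degree example zones of this application.

ZONE1_HALF_ANGLE_DEG = 5.0
ZONE2_HALF_ANGLE_DEG = 20.0

def zone_of(dx_deg: float, dy_deg: float) -> int:
    """Return 1, 2 or 3 for a detail offset (dx, dy) degrees from the fixation point."""
    offset = max(abs(dx_deg), abs(dy_deg))  # square zones: use the larger axis offset
    if offset <= ZONE1_HALF_ANGLE_DEG:
        return 1
    if offset <= ZONE2_HALF_ANGLE_DEG:
        return 2
    return 3

# A detail 12 degrees to the right of the fixation point falls in zone 2.
assert zone_of(12.0, 0.0) == 2
```

Round or oval zones, mentioned later in the description, would replace the max() test with a radial or elliptical distance test.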
Further, Fig. 2 illustrates how these three zones are distributed three-dimensionally when the observer's point of fixation is located at the centre of the image. The "empty" space in the cube represents redundant image details. As appears from this Figure, a vast number of image details need not be calculated or displayed.
The method according to the invention presupposes that the point of fixation of the eye in a projected image can be detected. Various types of equipment for performing this detection exist today. The observer may then sit in front of some kind of display device. It is particularly simple to detect the point of fixation when the user is wearing a helmet of the type employed in connection with "virtual reality". This helmet is a natural attachment for the detection apparatus. In addition, this disposes of the source of error that may arise in systems where the apparatus is not fixed to the user's head.
Suppose that an image is projected in front of the user in such a manner as to take up his entire visual field, i.e. 185° in the horizontal direction and 155° in the vertical direction. In front of each eye, there is thus projected an image taking up approximately 120° in the horizontal direction. It appears from Fig. 1 that the eye has a perfect capacity to distinguish details only in an area of up to a few degrees round the point of fixation. This means that the computer has to calculate details with full resolution in this area only.
For exemplifying purposes, the visual field in the Figures referred to in this application has been divided into three zones. In actual practice, use may conveniently be made of more than three zones. The area with full resolution is roughly described by zone 1, taking up 10° × 10° of the image (see Figs 1 and 2). At a certain distance outside this area, the capacity to distinguish details is much poorer. This latter area is roughly described by zone 2, taking up 40° × 40° of the image, minus the area of zone 1. In zone 2, the computer is obliged to calculate details with a 1/16 surface exactitude. In the remainder of the image, i.e. the image minus zones 1 and 2, which is referred to as zone 3, the computer is obliged to calculate details with a 1/256 surface exactitude.
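A rough estimate of the saving implied by these example figures can be made by assuming, purely for illustration, that the calculation cost is proportional to the image area covered by a zone multiplied by its surface exactitude; this proportionality is an assumption made here, not a statement from the application. Using the 120° × 75° image of Fig. 3, the sketch below arrives at roughly 1/40 of the cost of a uniformly full-resolution image.

```python
# Back-of-the-envelope estimate, assuming cost ~ covered image area x surface
# exactitude (an assumption made here for illustration, not stated in the text).

image_area = 120.0 * 75.0              # the Fig. 3 image, in square degrees
zone1_area = 10.0 * 10.0
zone2_area = 40.0 * 40.0 - zone1_area
zone3_area = image_area - 40.0 * 40.0

full_cost     = image_area                                  # everything at full exactitude
foveated_cost = zone1_area + zone2_area / 16 + zone3_area / 256

print(f"relative cost: {foveated_cost / full_cost:.3f}")    # about 0.025, i.e. roughly 40x fewer calculations
```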
Although the zones described in this example are square, they may also, when use is made of refined techniques, be round or oval so as to better suit the function of the eye.
With the aid of four illuminated toroids placed at different locations in an image, Fig. 3 illustrates how the surface exactitude can be reduced with increasing distance to the point of fixation. The image in the Figure takes up 120° × 75° of the visual field. The computer produces these toroids with different degrees of resolution, depending on where the toroids are located in relation to the point of fixation of the eye. The parts of the toroids situated in zone 1 are rendered in great detail, the parts situated in zone 2 in less detail, and the parts situated in zone 3 in the least detail.
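One way to realise this varying rendition, sketched here as an assumption rather than as the method actually used in the application, is to give each zone a polygon budget scaled by its surface exactitude; the base count of 4096 polygons is an arbitrary example value.

```python
# Illustrative polygon budget per zone, scaled by the surface exactitude
# (1, 1/16, 1/256). The base count of 4096 polygons is an arbitrary example.
SURFACE_EXACTITUDE = {1: 1.0, 2: 1.0 / 16.0, 3: 1.0 / 256.0}
BASE_POLYGONS = 4096

def polygon_budget(zone: int) -> int:
    """Polygons to spend on the part of an object (e.g. a toroid) lying in the given zone."""
    return max(1, round(BASE_POLYGONS * SURFACE_EXACTITUDE[zone]))

# zone 1 -> 4096 polygons, zone 2 -> 256 polygons, zone 3 -> 16 polygons
```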
As is customary in the generation of virtual images, the computer contains information on the "objects" that are to be rendered in the virtual image, their positions, shapes, sizes, colours, manner of illumination, and so forth. On the basis of this information, the computer calculates the appearance of the image. In all generation of virtual images, the computer is able to calculate the image with a varying wealth of detail. The more detailed the rendition is, the longer it takes for the computer to calculate the details. What is new about the invention is that, in the generation of an image, the computer calculates the image with a varying wealth of detail in different parts, as controlled by the observer's momentary point of fixation.
Thus, the basic idea is that the details of the image are calculated only in so far as they can be perceived by the observer. An example thereof will now be described with reference to Fig. 3; a code sketch of the four steps follows the list.
1. The computer calculates the entire image with the resolution according to zone 3 and stores this information (part image 3) in an image memory.
2. The computer calculates the image described by zone 2 (including zone 1) with the resolution according to zone 2 and stores this information (part image 2) in an image memory.
3. The computer calculates the image described by zone 1 with the resolution according to zone 1 and stores this information (part image 1) in an image memory.
4. The computer generates a new image from these three part images, in which part image 1 takes precedence over part image 2, which in turn takes precedence over part image 3.
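A minimal sketch of steps 1-4 follows, under several assumptions not made in the original text: the part images are stored on a common pixel grid, the three resolutions are represented by calculation block sizes of 16, 4 and 1 pixels, and render() is a hypothetical placeholder for the actual image generation.

```python
import numpy as np

H, W = 480, 768                      # example frame size in pixels (assumed)
FIX_X, FIX_Y = 400, 250              # example point of fixation in pixels (assumed)
ZONE1_HALF, ZONE2_HALF = 40, 160     # example zone half-sizes in pixels (assumed)

def render(y0, y1, x0, x1, block):
    """Placeholder renderer: calculate the region with one value per block x block pixels."""
    img = np.zeros((y1 - y0, x1 - x0, 3), np.uint8)
    # A real renderer would draw the scene here, spending one calculation per block.
    return img

# 1. Part image 3: the entire image at the coarsest resolution.
frame = render(0, H, 0, W, block=16)

# 2. Part image 2: the region of zone 2 (including zone 1) at medium resolution.
y0, y1 = max(0, FIX_Y - ZONE2_HALF), min(H, FIX_Y + ZONE2_HALF)
x0, x1 = max(0, FIX_X - ZONE2_HALF), min(W, FIX_X + ZONE2_HALF)
frame[y0:y1, x0:x1] = render(y0, y1, x0, x1, block=4)

# 3. Part image 1: the region of zone 1 at full resolution.
y0, y1 = max(0, FIX_Y - ZONE1_HALF), min(H, FIX_Y + ZONE1_HALF)
x0, x1 = max(0, FIX_X - ZONE1_HALF), min(W, FIX_X + ZONE1_HALF)
frame[y0:y1, x0:x1] = render(y0, y1, x0, x1, block=1)

# 4. Part image 1 takes precedence over part image 2, which takes precedence over
#    part image 3, because each later assignment overwrites the earlier one.
```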
It may furthermore be assumed that the capacity to distinguish different density levels decreases in a manner similar to that of detail vision. This fact may be used for reducing the number of steps of the grey scales employed. Likewise, the number of levels in the definition of various colour components may also be reduced.
If, say, 256 levels are used in the grey scale of zone 1, then 64 levels in zone 2 and 16 levels in zone 3 may suitably be employed.
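As an illustration of this grey-scale reduction, an 8-bit grey value can be snapped to the nearest of 256, 64 or 16 levels depending on the zone; the simple mid-step quantiser used below is an assumption chosen for the example, not taken from the application.

```python
# Sketch of the suggested grey-scale reduction: 256 levels in zone 1,
# 64 in zone 2 and 16 in zone 3. The mid-step rounding is an assumption.
GREY_LEVELS = {1: 256, 2: 64, 3: 16}

def quantise_grey(value: int, zone: int) -> int:
    """Map an 8-bit grey value (0-255) onto the reduced set of levels for the zone."""
    levels = GREY_LEVELS[zone]
    step = 256 // levels
    return (value // step) * step + step // 2

# The value 190 is kept as 190 in zone 1 but falls onto one of only 16 levels in zone 3.
print(quantise_grey(190, 1), quantise_grey(190, 3))   # 190 184
```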
Claims
1. A method for reducing computer calculations when generating virtual images, c h a r a c t e r i s e d in that the direction of the eyes of an observer is sensed and used as a basis for calculating the observer's momentary point of fixation in the image, at least one zone of the image is defined with the point of fixation as centre point and, if a plurality of zones are defined, these zones are of gradually increasing size, and the image information in the smallest zone is calculated and generated with high resolution and, for each larger zone, is calculated and generated with lower and lower resolution and, finally, in the image outside the largest zone is calculated and generated with the lowest resolution.
2. A method as claimed in claim 1, c h a r a c t e r i s e d in that a smaller zone which is calculated and generated with high resolution is not also calculated and generated with lower resolution as part of a larger zone.
3. A method as claimed in claim 1 or 2, c h a r a c t e r i s e d in that the number of density levels is reduced from a smaller zone to a larger zone.
4. A method as claimed in any one of claims 1-3, c h a r a c t e r i s e d in that the number of levels in the definition of different colour components is reduced from a smaller zone to a larger zone.
5. A method as claimed in any one of claims 1-4, c h a r a c t e r i s e d in that each zone round the point of observation has a substantially oval shape.
6. A method as claimed in any one of claims 1-4, c h a r a c t e r i s e d in that each zone round the point of observation has a substantially circular shape.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| SE9500035-2 | 1995-01-10 | | |
| SE9500035A SE9500035L (en) | 1995-01-10 | 1995-01-10 | Ways to reduce computer computations when generating virtual images |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO1996021909A1 (en) | 1996-07-18 |
Family
ID=20396753
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/SE1996/000012 (Ceased) WO1996021909A1 (en) | Method for reducing computer calculations when generating virtual images | 1996-01-10 | 1996-01-10 |
Country Status (2)
| Country | Link |
|---|---|
| SE (1) | SE9500035L (en) |
| WO (1) | WO1996021909A1 (en) |
- 1995-01-10: SE application SE9500035A filed (published as SE9500035L), not active: IP Right Cessation
- 1996-01-10: PCT application PCT/SE1996/000012 filed (published as WO1996021909A1), not active: Ceased
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE3603552C2 (en) * | 1985-02-06 | 1987-09-17 | Rca Corp., Princeton, N.J., Us |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110728744A (en) * | 2018-07-16 | 2020-01-24 | 青岛海信电器股份有限公司 | Volume rendering method and device and intelligent equipment |
| CN110728744B (en) * | 2018-07-16 | 2023-09-19 | 海信视像科技股份有限公司 | Volume rendering method and device and intelligent equipment |
Also Published As
| Publication number | Publication date |
|---|---|
| SE9500035D0 (en) | 1995-01-10 |
| SE502975C2 (en) | 1996-03-04 |
| SE9500035L (en) | 1996-03-04 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AK | Designated states | Kind code of ref document: A1; Designated state(s): CA FI JP US |
| | AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE |
| | DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | |
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| | 122 | Ep: pct application non-entry in european phase | |
| | NENP | Non-entry into the national phase | Ref country code: CA |