
GB2440197A - 3D Perspective Image of a User Interface for Manipulating Focus and Context - Google Patents

3D Perspective Image of a User Interface for Manipulating Focus and Context Download PDF

Info

Publication number
GB2440197A
GB2440197A (application GB0614419A)
Authority
GB
United Kingdom
Prior art keywords
images
image
grid
curvilinear
array
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0614419A
Other versions
GB0614419D0 (en)
Inventor
Alan Stuart Radley
Matt Services Ltd
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB0614419A priority Critical patent/GB2440197A/en
Publication of GB0614419D0 publication Critical patent/GB0614419D0/en
Publication of GB2440197A publication Critical patent/GB2440197A/en
Withdrawn legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/048023D-info-object: information is displayed on the internal or external surface of a three dimensional manipulable object, e.g. on the faces of a cube that can be rotated by the user
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04806Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A curvilinear perspective grid containing three vanishing points is used to display a 2-D array of 3-D and/or 2-D images. The resulting smooth magnification continuum provides an expanded representational space for images to reside in, whilst preserving any spatially represented ordinal relationships between the images. A smooth optical blending of focus and context is projected to a nominally fixed view point towards which the entire optical panorama is presented. The 2-D array of images can be animated either to bring new regions of the 2-D image array into view and/or to bring distant images closer to the station point. The method has the key efficiency advantage that simulated optical zooming and panning operations can be performed simultaneously whilst also presenting a measured range of perspective views of the displayed constituent objects.

Description

<p>1 2440197</p>
<p>Specification</p>
<p>Field of the Invention</p>
<p>The present invention relates generally to interactive visual displays, and more specifically to a user interface mechanism for manipulating focus plus context on a visual display system.</p>
<p>Background</p>
<p>Over the last 50 years a rapid evolution has taken place in the techniques of interactive visual display. Computer system input/output has steadily progressed from the punched cards and mechanical switches commonly employed in the early 1950s, to 2-D (two-dimensional) visual display methods such as the command-driven systems first developed in the 1960s. Later, in the early 1980s, Xerox [1] developed the desktop metaphor, with electronic counterparts to physical objects on the so-called "Star Office" screen, which strongly influenced much of what followed.</p>
<p>Many computer system environments employ graphical user interface (GUI) capabilities in order to aid the user in interacting with information views in a straightforward and intuitive manner. In particular the use of a scalable and scrollable display frame or "window" has become a popular way to display system input and output on visual displays.</p>
<p>Typically a GUI environment employs several different methods to display information inside windows, including menus, lists of text items, groups of icons, and also groups of bitmap images. A disadvantage of these techniques relates to the fact that all of the objects in the window are normally displayed at a fixed spatial scale, resulting in a corresponding limit to the number of items that can be simultaneously displayed within a window region of a specific size.</p>
<p>A second disadvantage of these techniques as applied to groups of bitmap and icon images, relates to situations where images are to be magnified. It is common practice here to show the magnified view either in another separate spatial region of the same window or in an entirely different window altogether. Typically only one individual image is shown magnified at any particular instant. This method prevents the display of the magnified image features of several images simultaneously, whilst at the same time preserving the view of these same images within a wider group context.</p>
<p>Therefore no capability is provided to make comparative geometrical measurements and observations between two or more magnified images, whilst simultaneously making similar measurements and observations across multiple images displayed at a lower magnification.</p>
<p>Another common characteristic of graphical user interfaces is that typically no efficient methods are provided to manipulate zoom and pan views of a plurality of 3-D (three-dimensional) object representations or images. In particular no methods have been developed to display a measured range of perspective views of the individual images which are to be manipulated in this way. Reference here is made specifically to methods which do not exhibit the two aforementioned display-related efficiency disadvantages.</p>
<p>One example scenario is the display of an assortment of different 3-D diamond ring designs together on a visual display system. A typical way to perform this task would be to present a view of a group of 3-D images of diamond rings arranged together on a visual display system and projected according to the rules of linear perspective.</p>
<p>Here individual ring images can be automatically (for example) rotated, whilst various camera zooming and panning operations are performed, specifically in order to facilitate the detailed display, comparative measurement and also comparative observation, of the 3-D form related features of the different ring images.</p>
<p>Reference to a "camera" here refers not to a real optical device with mirrors and/or lenses, but rather to the entities and procedures existing in the system conjoined to the visual display system, which effect "camera-like" operations, including control of aspects of the scene projection such as "plate scale" (overall scene magnification), angle of view, and field of view. These camera operations are typically performed automatically in response to the conjoined system's events and processes, and possibly also partially in response to user selections and/or actions.</p>
<p>This method of displaying 3-D object groups runs into a number of representation related efficiency limitations which are characteristic of linear perspective.</p>
<p>Linear perspective based methods typically employ a 2-D array or a series of images laid out in a plane (of any orientation) in an apparent or real 3-D space, and spread out across the full width of the display aperture (DA). The unavoidable result of zooming scenes along the central axis of the projection here is a smaller number of displayed images within the DA because as the overall pictorial scale increases images are occluded at the edges of the DA.</p>
<p>The alternative technique of moving individual items, one after another, first closer to the perspective window and then back into position in the overall view, prevents the presentation (and measurement) of the magnified features of several items simultaneously. Moving several items towards the perspective window in this way inevitably leads to the obscuration of other parts of the scene (or images) and thus to a loss of context.</p>
<p>An additional disadvantage here is that zooming and panning of scenes according to the methods of linear perspective typically does not facilitate the production of a measured range of perspective views of the individual objects present in the represented scene. Accurate comparative analysis of the forms of 3-D objects becomes difficult to achieve due to the lack of a systematic framework within which notionally identical and repeatable comparative measurements and observations can be made.</p>
<p>Linear perspective based methods of zooming and panning 3-D object groupings exhibit a number of disadvantages in terms of the efficient use of display area real-estate. For example, zooming operations typically fail to present magnified detail across several items whilst simultaneously maintaining the overall context, either because objects readily become occluded one behind the other, or alternatively because objects are lost off the edges of the DA.</p>
<p>Additionally scene panning and/or object orientation manipulations typically occur without the aid of a framework in which a measured range of perspective views of the objects can be reliably and rapidly obtained. Thus complex camera manipulations and/or object transformations are often required in order to set up the exacting perspectives from which comparative measurements and observations can be made.</p>
<p>The need for these operations results partially from the intrinsic operational separation of zooming and panning operations with this method. With linear perspective based zooming these camera operations are also required in order to compensate for the loss of field information (or context) as a result of the occlusion of images at the edge of the DA, and potentially also due to the loss of visibility of images as they pass through the perspective window.</p>
<p>These camera and object manipulations incur a system performance overhead, in addition to leading to an extra cognitive load on the user. Additionally whilst these manipulations are taking place the visual display space on the DA is not efficiently employed, in terms of preserving the number of images from which systematic measurements and observations can be made. These complex manipulations therefore reduce the number of measurements and observations achievable per unit time on the visual display, and hence reduce the efficiency of the overall visual display system.</p>
<p>There is therefore an unmet need to be able to maximise the efficiency of display area real estate in five specific ways. Firstly, needed is a way to provide an expanded display space which is not (in principle) limited in terms of the number of objects which can be simultaneously displayed within an enclosing DA. Secondly, there is a need to provide the capability to simultaneously present the magnified image features of several object images together whilst at the same time preserving the relational context of these same images within a broader spectrum or group of images.</p>
<p>Thirdly a method is needed whereby the zooming and panning of 2-D image arrays can be achieved simultaneously in one operation, and thus without the need for complex camera manipulations.</p>
<p>Fourthly a need exists for a way to achieve focus plus context manipulations which largely overcomes the aforementioned display area efficiency limitations of linear perspective, specifically without the zooming related drawbacks of occlusion of objects at the edge of the DA, and without the loss of object visibility as objects move through the perspective window.</p>
<p>Fifthly, needed is a way to zoom and pan a plurality of 3-D (three-dimensional) objects together in a unified perspective view, whilst simultaneously presenting the individual objects from a measured range of perspective views.</p>
<p>Summary</p>
<p>It is therefore an object of the present invention to be able to manipulate zooming and panning operations on a 2-D array of objects (or images of objects), within a simulated or real 3-D space, and specifically without the aforementioned display related efficiency problems inherent in scenes depicted according to the rules of linear perspective.</p>
<p>It is a further object of the present invention to overcome the stated representational disadvantages of 2-D window based methods, specifically when displaying 1-D or 2-D information groupings, and in particular in regard to the automatic preservation of context during zooming operations.</p>
<p>It is a further object of the present invention to be able to manipulate zooming and panning operations on a plurality of objects (specifically a 2-D array of 3-D object images) located together in a single unified perspective view within a simulated or real 3-D space, whilst preserving the context of the images so displayed, and whilst simultaneously (and potentially automatically) displaying a measured range of perspective views of the same objects.</p>
<p>Therefore, according to the present invention, a method and structure for providing focus plus context on a visual display system is provided.</p>
<p>Brief description of the drawings</p>
<p>The present invention, illustrated in Figures 1 and 2 and employed according to the invention claims detailed in the present document, and according to the listed aspects [1-7], consists of a number of representational methods, geometrical regions and functional behaviours as follows: 1. A region marked "curvilinear grid" (see Figures 1 and 2, where this curvilinear grid is labelled as item 1) as described and referred to in aspects 1-7, and also a region consisting of a group of object representations (or images) arranged in a 2-D image array, as described also in aspects 1-7. These regions have specific common geometrical features, whereby together they apply specific representational and procedural effects to the 2-D image array, as described in aspects 1-7.</p>
<p>Figure 1 shows an oblique view of some features of our method consisting of the prescriptions and aspects described in 1 above, and which is projected according to the aspects 1-7 of our method.</p>
<p>Figure 2 depicts one example of the application of our method, where the curvilinear grid has been projected onto the DA towards a nominal station point located in front of an invisible perspective window. This station point is located above the plane of the curvilinear grid, which here has also been slightly inclined forwards to improve the visibility of images on a 2-D image array which is located co-planar to, and a short distance above, the curvilinear grid.</p>
<p>Detailed description of the drawings</p>
<p>Our method employs two separate display regions, these being the curvilinear grid and the 2-D image array respectively. These regions are co-located in the lateral dimension, the 2-D image array being located above the curvilinear grid relative to a fixed reference or nominal station point. Labelled as item 1 is the curvilinear grid, which consists of a nominally opaque surface delineated into a number of cells by a series of grid lines arranged in a curvilinear shape and according to the influence of three vanishing points. The curvilinear grid contains three vanishing points (lying outside of its boundaries) as described in aspects 1-7 of our method. We note here that the 2-D image array uses these same vanishing points (or ones very closely located in spatial terms) in order to aid in the production of specific representational and functional effects. [See aspects 1-7]</p>
<p>Figure 1 shows an oblique view of our method; identified are three vanishing point locations, labelled items 15 and 17 (the lateral vanishing points), and also item 16, which is the central vanishing point. All three vanishing points are co-located in approximately the same plane relative to the curvilinear grid. The relative positioning of these vanishing points is shown in Figure 1.</p>
<p>Figure 1 depicts the locations of these three vanishing points in the lateral and depth (or the simulated third dimension) dimensions in relative terms and the indicated positions are shown for the purposes of illustration only. The specific location of these points in the lateral and depth dimensions may change from one application (or usage instance) of our method to another, and also according to the spatial position and orientation of our method in the DA, and also according to specific required geometrical and apparent representational effects in the associated projection.</p>
<p>Figure 2 depicts a 2-D array of images projected according to our method as it would appear in a DA. Note that the boundary edges of the DA are not shown in this image.</p>
<p>Marked as items 2 and 3 are 3-D images which adopt the central vanishing point aligning behaviour as detailed in aspect five of our method. Items 12 and 13 show 2-D images at one extreme lateral position on the curvilinear grid, which are scaled according to the influence of the central and also one lateral vanishing point, whilst adopting the station point facing behaviour according to aspect six of our method.</p>
<p>Item 7 on Figure 2 also shows another 2-D image at a different position on the curvilinear grid, which is displayed according to the specific prescriptions of our method.</p>
<p>Item 5 on Figure 2 shows a text label item as described in aspect four of the claims.</p>
<p>Also shown on Figure 2 is item 6 which is a non-essential aspect of our invention, being a representation of an object "shadow" which is present for aesthetic reasons to apparently "lift" objects from the curvilinear grid. Items 4 and 8 represent specific examples of the change of the form of the 3-D images present on the 2-D image array which may (for example) be used to indicate "state" aspects of those individual object items.</p>
<p>For example the presence of item 4 may indicate a selection event for that image, whilst item 8 is a "pin marker" which may (for example) indicate that this image contains and/or refers to and/or exhibits other related facts and/or features which are not overtly displayed here in any other way.</p>
<p>Note that items 4 and 8 are non-essential aspects of our method and merely serve to illustrate the broad diversity of the types of visual information that it is possible to convey using our method.</p>
<p>Of particular note here is that the images (2-D and 3-D) present on the 2-D image array, and displayed according to our method as detailed in claims 1-7, may in accordance with events and processes in the conjoined system, change their visible form over a variety of different time-scales, and also in a variety of different ways.</p>
<p>Item 3 on Figure 2 serves to illustrate the alignment twisting effect on another image instance [as detailed in aspect five of our method], the image being a 3-D model of a floppy disc. Item 10 on Figure 2 is a 3-D model of a flat screen television, which serves to indicate that where a 3-D object is largely flat in form, it still adopts the alignment twisting behaviour.</p>
<p>The two grid positions labelled as item 9 on the curvilinear grid are used to indicate specific locations on the 2-D image array which are empty or devoid of an image instance.</p>
<p>A notable aspect of the depicted geometry of the curvilinear grids shown in Figure 1 and Figure 2 is that 9 rows of 4 columns are displayed, whereby each successive row (in the direction away from the perspective window) is depicted as receding slightly in terms of its position in the "vertical" dimension (orthogonal to the lateral grid dimension) relative to the previous row. This geometrical feature is a non-essential aspect of our method and here serves to preserve the amount of available display space (vertically on the DA).</p>
<p>Detailed description</p>
<p>Linear perspective is a widely used representational technique, often employed for projecting a view of 3-D space. An inherent representational drawback of linear perspective is a lack of realism in allocating a correct scale to represented objects located at large lateral distances from the central axis of the projection. [2,3] This characteristic causes objects to be magnified at the edges of the picture plane, relative to the optical view seen in reality, and the amount of projected or picture space is then reduced unnaturally towards the sides of the projection. Curvilinear perspective overcomes this particular problem, allowing lateral objects to be correctly represented as reducing in scale at increasing distances from the central axis of the projection. [2,3]</p>
<p>According to the first aspect of the present invention, an important submission has been the application of curvilinear perspective to the 3-D representation and display of groups of images arranged in a 2-D ordered arrangement, here referred to as a 2-D image array. Our method employs an exaggerated form of curvilinear perspective, specifically in order to aid in the creation of a deep magnification continuum for images to reside in.</p>
<p>In our method two lateral "vanishing points" are arranged on either side of a central vanishing point, creating an optical vista which bulges outwards towards a nominally fixed station point or alternatively another reference point within the representation space. Note that these "vanishing points" serve as general reference points from which to define the scale and the local angular aspect positions of 3-D images within the 2-D image array, whilst also influencing the shape of the curvilinear grid. Note here that the individual scales of any 2-D images present on the 2-D image array are similarly affected by these vanishing points.</p>
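The patent gives no formula for how the vanishing points set image scale. The following Python sketch illustrates one plausible reading of the "bulging" magnification continuum: scale peaks at the central axis and falls smoothly towards either lateral vanishing point. The function name `lateral_scale`, the cosine profile, and the `bulge` parameter are our assumptions for illustration only, not part of the patent.

```python
import math

def lateral_scale(x, half_width, bulge=0.6):
    """Illustrative scale factor for an image at lateral position x.

    x runs from -half_width (left lateral vanishing point) to
    +half_width (right lateral vanishing point). Scale is maximal on
    the central axis (x = 0) and decreases smoothly towards either
    lateral vanishing point, producing the outward-bulging optical
    vista described in the text. The cosine falloff is assumed.
    """
    t = x / half_width                              # normalised position in [-1, 1]
    return (1.0 - bulge) + bulge * math.cos(t * math.pi / 2.0)
```

With these assumptions an image on the central axis is shown at full scale (1.0), while one at a lateral vanishing point is reduced to `1 - bulge` (here 0.4), with a smooth, monotonic transition in between.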
<p>The first aspect of the invention adapts curvilinear perspective to the display of 2-D image arrays, in particular creating a curvilinear grid of visible "cells" [See Figures 1 and 2, item 1] within which individual images can reside. Lying coplanar to, and slightly above, the grid is a 2-D array of images (according to the first aspect of the invention) which represents a specific sub-region of a potentially extended 2-D array of images.</p>
<p>According to the third aspect of the invention our method displays a small portion of a potentially infinitely extended 2-D image array, which may extend in two dimensions for as many rows and columns, and hence images, as required.</p>
<p>According to the prescriptions of our method detailed in claims 1-7, each cell on the curvilinear grid may (or may not) contain an image, the same being located slightly above the plane of the cell to facilitate an unobstructed simulated optical path between the image and a notionally fixed station point. The curvilinear grid provides a fixed background reference grid for the array of images to move against or animate across, and also a series of fixed container cell area locations for the images to reside in whenever they are stationary.</p>
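The cell-and-image arrangement above can be sketched as a simple data structure: a fixed grid of cells, each of which may or may not hold an image (cf. the empty positions marked as item 9 in Figure 2). The class and function names here (`Cell`, `build_grid`) and the 9×4 default matching the figures are our illustrative assumptions, not terminology from the patent.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Cell:
    """One fixed container location on the curvilinear grid."""
    row: int
    col: int
    image_id: Optional[str] = None   # a cell may be empty (item 9 in Figure 2)

def build_grid(rows: int = 9, cols: int = 4) -> List[List[Cell]]:
    """Build the fixed background grid; images animate across it and
    occupy cells only when stationary."""
    return [[Cell(r, c) for c in range(cols)] for r in range(rows)]
```

A usage example: `grid = build_grid()` yields the 9-row, 4-column arrangement depicted in Figures 1 and 2, with every cell initially empty until an image is assigned to it.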
<p>Our method assumes that the images (prior to applying our method) are either all of closely comparable physical size, or alternatively that they have been nominally scaled (prior to applying our method) so as to appear as such.</p>
<p>According to aspects 1, 2 and 3 of our method, the entire 2-D image array (consisting of all the presently displayed images within the 2-D array of images) can be visually animated or scrolled (between the two lateral vanishing points) to simulate a combined panning plus zooming operation, and hence move the visible region of high magnification (the central area of the projection) smoothly from one region of the 2-D image array to another. Images are first gradually magnified, and then gradually de-magnified, as they move across the projection from left to right (for example). Here the equivalent of "camera" panning is performed as the images themselves are animated across the field of view, instead of the notional camera itself being "panned" across the field.</p>
<p>Intrinsic to our method is the fact that lateral zooming and panning operations performed in this way do not produce and/or require any changes to the notional camera's position, angle of view, or field of view of the projection.</p>
<p>According to aspects 1, 2 and 3 of our method, animation across the curvilinear grid can be initiated, and also manipulated by, the conjoined system's processes and events, in order to facilitate the fine-grained adjustment of scene characteristics.</p>
<p>When the array of images extends beyond the lateral visible bounds of the 2-D image array, lateral animation across the curvilinear grid involves the opening up of a vista opposite to the direction of motion of the images and the closing off of the vista on the opposite side of the projection. Images are thus rendered invisible as they move off one side of the visible region of the 2-D image array.</p>
<p>In this way new images can therefore enter the projection (laterally) on the side close to one lateral vanishing point, whilst simultaneously other images leave the display area on the opposite side close to the opposing lateral vanishing point. With our method a similar animation procedure can also occur in the depth dimension of the represented 2-D image array.</p>
<p>According to aspect five of our method, as individual 3-D images move laterally across the curvilinear grid they also present different aspects of themselves (towards a nominally fixed station point or other reference point); that is, they automatically rotate to align their local "front elevation" with the instantaneous nadir point of the central vanishing point. This technique facilitates the automatic display of a measured range of local perspective views of all the individual objects as they move across the curvilinear grid. In Figure 1, 3-D images are seen to rotate by an overall angle of close to 180 degrees (in the lateral direction) as they move from one side of the projection to the other. The size of this overall rotation angle is dependent both on the relative positioning of the three vanishing points and on both the size and shape of the curvilinear grid.</p>
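The aspect-changing behaviour can be sketched as a mapping from lateral position to yaw angle. A linear mapping with a total sweep of 180 degrees matches the figure, but the patent notes that the true sweep depends on the vanishing-point layout and grid shape; the function name `aspect_angle` and the linear interpolation are our illustrative assumptions.

```python
def aspect_angle(x: float, half_width: float, total_sweep: float = 180.0) -> float:
    """Illustrative yaw angle (degrees) of a 3-D image at lateral position x.

    As an image moves from one side of the grid (x = -half_width) to
    the other (x = +half_width) it rotates through `total_sweep`
    degrees in all, keeping its local front elevation aligned with
    the central vanishing point. 0 degrees on the central axis.
    """
    t = x / half_width                 # normalised position in [-1, 1]
    return t * (total_sweep / 2.0)
```

So an image entering at the left edge faces -90 degrees, turns through its front elevation (0 degrees) at the centre, and exits facing +90 degrees: a measured, repeatable range of perspective views with no per-object camera work.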
<p>Note that any 2-D images present in the array do not exhibit the aspect-changing behaviour described above for 3-D objects. In order to preserve the 2-D projected face area of this type of image, with our method 2-D images automatically face the intended station point (or other reference point) at all times as they animate across the display. (See item six in the attached claims.) According to the fourth aspect of the invention, each image has a small associated text label [Item 5] located close by, which similarly exhibits this station point (or other reference point) facing behaviour.</p>
<p>Another vanishing point (item 16 on Figure 1) is located towards the back of the curvilinear grid, and images located towards the back of the grid are therefore reduced in scale according to their position relative to this rear vanishing point. The overall effect is that pictorial space is greatly expanded in the lateral directions, and partially in the depth dimension as well, providing an increased amount of representational space as compared with linear perspective.</p>
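The depth-dimension scaling can be sketched similarly. The patent requires only that scale decreases for rows further from the station point; the perspective-like falloff, the function name `depth_scale`, and the `rear_strength` parameter below are our assumptions.

```python
def depth_scale(row: int, num_rows: int, rear_strength: float = 0.5) -> float:
    """Illustrative scale reduction for successively deeper grid rows.

    Row 0 is the front row (full scale); deeper rows shrink towards
    the rear central vanishing point. A 1/(1 + k*d) falloff is assumed.
    """
    d = row / max(1, num_rows - 1)          # normalised depth in [0, 1]
    return 1.0 / (1.0 + rear_strength * d)
```

For the 9-row grid of the figures, the front row is displayed at full scale and the rearmost row at roughly two-thirds scale under these assumed parameters, giving the partial depth expansion described above.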
<p>Our method is assumed to work in conjunction with a conjoined system consisting perhaps of a computer, visual display (VD), computer program(s) and a database, which effects those actions (electronic and physical) that affect the graphical generation, display and animation of a 2-D array of images represented according to our method inside a display aperture (DA), as detailed in claims 1-7.</p>
<p>In one application of our method, the visual display may include an interactive display system which incorporates a user controlled selection device or pointer and a corresponding cursor which is displayed in front of the enclosing DA and which, combined with the selection device, effects the interactive selection of images from within the 3-D representation.</p>
<p>On screen graphical images can be used to represent a wide variety of different types of information. Correspondingly we foresee that our method can be applied to a wide variety of situations in which such images are to be displayed.</p>
<p>Our method intrinsically possesses a number of distinct efficiency advantages over linear perspective based zooming methods, in particular by allowing a 2-D image array to be zoomed in a lateral direction (in one dimension of the array) whilst preserving the number of images displayed on a display aperture (DA). Our method has a further advantage in this respect.</p>
<p>With our method, images do not become invisible as they approach and exceed the point of maximum magnification, as they would in linear perspective based zooming where images pass through the perspective window as they move beyond this point.</p>
<p>With our method images remain visible (during lateral zooming across the 2-D image array) as they pass through this region, thus facilitating comparative measurements and observations at a specific range of visual scales.</p>
<p>An additional advantage of our method (with respect to linear perspective based methods) relates to the fact that a combined zooming and panning operation is achieved simultaneously as images move laterally across the DA. With our method zooming and panning operations can be combined into a single (inherently reversible) process which both preserves the number of displayed items on a 2-D image array and which also simultaneously displays a measured range of local perspective views of all the objects so represented. Here the complex, repetitive and often disjointed camera manipulations required with linear perspective based zooming methods are avoided.</p>
<p>Therefore our method has the advantage that zooming can be achieved using a simplified manipulative process, which can be partially and/or fully automated. A further advantage in this respect is a corresponding reduction in the required conjoined system graphical display management overheads. Here simplified manipulations result in a smaller number of object and/or camera transforms per unit time interval.</p>
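<p>As an illustration of how such a single, reversible zoom-and-pan step might be realised, consider the following sketch. The helper <code>scroll_columns</code> and its parameters are hypothetical, not taken from the specification. Because each on-screen column has a fixed scale determined by its grid position, advancing the visible window by one column is simultaneously a pan (columns scroll in on one side and out on the other) and a zoom (every image is re-scaled as it changes column), and the step is undone simply by negating it.</p>

```python
def scroll_columns(num_columns, offset, visible_cols, step):
    """One combined zoom-and-pan step over a 2-D image array.

    Returns the new window offset and the indices of the columns
    now visible on the curvilinear grid.  No per-image camera
    transform is needed: each image's scale is implied by which
    column it now occupies, and the operation is reversible by
    applying the opposite step.
    """
    # Clamp so the visible window never runs off the backing array.
    new_offset = max(0, min(num_columns - visible_cols, offset + step))
    return new_offset, list(range(new_offset, new_offset + visible_cols))
```

<p>For example, with a 10-column array and a 5-column window, stepping forward and then back returns the display to its original state, which is the reversibility property noted above.</p>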
<p>A key advantage of our method relates to the fact that it provides a systematic framework for obtaining a specific range of perspective views of a 2-D array of 3-D objects. Within this framework a measured set of local perspective views of the individual constituent objects is presented. A key advantage here over linear perspective (in terms of zooming and panning manipulations) is the production of a series of individual perspective views of objects which are uniformly scaled or separated, relative to each other, and which are therefore inherently comparable and also repeatable.</p>
<p>A framework is provided within which systematic measurements and observations can be rapidly obtained whilst performing simplified focus plus context manipulations on a plurality of images.</p>
<p>The inherent simplicity of the interrelated lateral zooming and panning procedure allows the process to be rendered in a fully automatic manner, and also affords improved fine-grained control, either partially by a human user and/or automatically as a result of conjoined system events and processes.</p>
<p>An important aspect of the present invention is the production of a measured range of perspective views of a 2-D array of 3-D images. A key feature here is that individual objects experience a specific series of angular rotations as they move or change their lateral position on the curvilinear grid. These aspect changes occur with respect to, or relative to, a specific "front elevation" starting view.</p>
<p>A related advantage of our method is that this "front-elevation" angle can be set (according to the conjoined system's functionality) at a nominal local orientation or starting angle in the central curvilinear grid "cell" position (see item 2 in Figure 2). Therefore in principle this starting elevation angle could be individual or unique to the object in question, and also potentially re-orientable (with respect to each of three orthogonal reference planes) between individual applications of the animations made according to our method. Thus our method [as described in claims 1-7] facilitates the complete angular presentation of individual objects, specifically by enabling objects to be presented in a measured or scaled angular continuum which is fully configurable in terms of both extent and beginning reference orientation.</p>
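<p>The measured angular continuum described above might be modelled as follows. This is an illustrative sketch only: the function name <code>front_elevation_yaw</code>, the linear mapping and the 60-degree maximum are our assumptions, not details of the specification. The point it demonstrates is that every object is rotated through the same repeatable range of aspect views as a function of its lateral cell, relative to a configurable starting front-elevation angle shown at the central cell.</p>

```python
def front_elevation_yaw(column, num_columns, max_yaw_deg=60.0, start_deg=0.0):
    """Yaw applied to a 3-D image as a function of its lateral cell.

    At the central cell the image shows its configurable starting
    "front elevation" (start_deg); moving laterally it rotates
    through a measured, symmetric angular continuum, so every
    object on the grid passes through an identical and repeatable
    range of aspect views.  Assumes an odd number of columns.
    """
    centre = (num_columns - 1) / 2.0
    t = (column - centre) / centre     # -1 .. +1 across the grid
    return start_deg + max_yaw_deg * t
```

<p>Because the mapping depends only on the cell position, two different objects occupying the same column are always seen from the same relative viewpoint, which is what makes their views directly comparable.</p>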
<p>Our method is applicable in a variety of information display scenarios, and in particular it provides specific advantages in those situations where the manipulation of focus plus context is to be achieved whilst maximising the efficiency of the overall visual display system. One example here is the presentation of an engineering and electrical component catalogue, whereby a plurality of 3-D models (and possibly 2-D images) of the various components present in a region of the overall catalogue are to be displayed on a visual display system. Our method would here be used with a conjoined system which includes a database containing 3-D models (and possibly also 2-D images) of the components.</p>
<p>Our method has a number of distinct efficiency advantages over the typical ways in which such a component catalogue is presented on a visual display system.</p>
<p>Firstly, an expanded display space is provided, relative to 2-D methods such as menu and text lists, specifically in terms of the number of components that can be displayed simultaneously inside a DA of a limited size. With our method more components can be displayed per unit of display area (within the limits of component item visibility and also the limitations of display system resolution) than can be shown in the list, menu and icon group views which are typically employed to show views of component catalogues. This specific advantage relates to the fact that our method is intrinsically of the perspective type, and hence reducible in terms of overall DA scale without loss of displayed context information.</p>
<p>Secondly, with our method it is feasible to manipulate and/or configure a zoomed view of the catalogue such that the magnified features of several components (either component images of the 2-D or 3-D class) are displayed simultaneously whilst preserving the relational context of these same components within a broader spectrum or group of images. Using standard display techniques (either 2-D text list or menu based methods or linear perspective based displays) such views are not efficiently achievable, thus preventing the rapid attainment of comparative measurements and observations over a range of component magnifications.</p>
<p>Thirdly, with ordinary 2-D list and menu based methods, and in particular with linear perspective based methods, no means is typically provided to achieve the simultaneous zooming and panning of 2-D component arrays, or of components arranged in any other way. Thus complex camera and/or other display related manipulations are typically required in order to perform zooming and panning operations on component catalogue views, whereby comparative measurements and observations can be rapidly obtained between the different components. Such operations inevitably incur a conjoined system management overhead.</p>
<p>Our method provides comparative observations and measurements between a measured series of individual perspective views which are inherently projected from different viewpoints at any instant, but which are viewed over an identical range of perspective views as the 2-D image array is zoomed and panned.</p>
<p>An important factor here is that comparative measurements and observations can be split into two broad classes: one in which the component feature to be measured (either form or colour related) is largely invariant with respect to a specific set of changes to the perspective view, and one in which the feature is strongly variant with respect to those same changes. Our method has the advantage that it aids in the making of efficient measurements for both of these fundamental classes of measurements and observations.</p>
<p>In the case of measurements and observations which are of the strongly feature variant class, specifically with respect to changes in perspective view, our method provides a way of projecting, in a timed series, an identical range of perspective views for each of the different components. Thus the features of different components can be rapidly compared and contrasted during efficient focus plus context manipulations. Note that complex camera manipulations are not required in order to perform these same measurements and observations with our method.</p>
<p>In the case of measurements and observations which are of the strongly feature invariant class, our method facilitates the simultaneous comparison of features across a plurality of components during focus plus context manipulations.</p>
<p>Our method has the distinct advantage that zooming and panning of 2-D component arrays can be performed simultaneously, thus avoiding complex camera operations and the associated system management overheads.</p>
<p>Fourthly, our method has the advantage that efficient use of display area real-estate is automatically achieved, in terms of preserving the number of individual components in view (on the DA) at any instant, and specifically during zooming and panning operations on a plurality of components.</p>
<p>Fifthly, with our method a framework is provided in which a systematic (and configurable) range of perspective views of a plurality of components can be reliably obtained. Our method provides distinct advantages in terms of obtaining measurements and observations between a large number of components. In particular, with our method such measurements can occur either simultaneously (where the component features to be measured are of the perspective invariant class) or alternatively by means of an identical range of perspective viewpoints (where the component features to be measured are of the perspective variant class).</p>
<p>With our method such comparative measurements and observations occur within a systematic framework whereby individual components are automatically presented in terms of a measured range of perspective views. Thus rapid comparative measurements and observations can be reliably obtained whilst making efficient use of display area real estate per unit time.</p>
<p>Therefore our method facilitates the simultaneous manipulation of focus plus context views of a plurality of components, whilst also presenting perspective views within a framework in which all the generated views of the displayed components occur along a measured or scaled continuum.</p>
<p>The present invention is suitable for application on any kind of visual display system which facilitates the display of simulated perspective scenes. Our method is serviceable for use in a variety of different types of display systems, including those of either the 2-D or 3-D class, and regardless of whether the display method in question produces projected, real or virtual images.</p>

Claims (1)

  1. <p>Claims What we claim is: 1. A method for manipulating focus plus
context on a visual display system, the method comprising: the generation of a) a 3-D (three dimensional) perspective image, consisting of a real, virtual or projected image of a curvilinear grid [Figure 1, item 1] in a simulated or real 3-D space, on a visual display system, and the curvilinear grid consisting of a notionally planar grid of individual segment areas (or "cells") which converge towards three vanishing points, a central vanishing point lying outside and towards the back of the grid, and two other lateral vanishing points located outside of the grid on the lateral edges of the grid respectively, the vanishing points co-located together in the same plane as the curvilinear grid, and the grid being angled or tilted towards a fixed station point or other fixed reference point positionally located within the space in such a way as to facilitate the unobstructed projected view of any and all objects lying in close proximity to the curvilinear grid, and b) a group of images arranged notionally in a 2-D (two dimensional) array, consisting of a series of "rows" and "columns" of images, and here referred to as a 2-D image array, and consisting of a group of 2-D or 3-D object representations (or images), consisting of a real, virtual or projected series of images arranged in a notionally 2-D (two-dimensional) plane, whereby a region of this 2-D image array is projected into the same simulated or real 3-D space as in (a), and here referred to as a visible region of the 2-D image array, whereupon this visible region of the 2-D image array consists of 2-D or 3-D images which are individually located laterally inside of, and vertically above, the individual boundaries defined by the segment areas on the curvilinear grid in (a); and the display of images present on the visible region of the 2-D image array in the same simulated or real 3-D space described in (a) and (b), on a visual display system, and potentially in accordance with events and processes in the 
conjoined system and as described in claims 1-7, which conforms to the shape and form of the curvilinear grid described in (a) and (b) above, whereupon these same images automatically scale themselves according to the geometrical distance from the aforementioned vanishing points, or other vanishing points lying very close by in the simulated or real 3-D space, whereupon a simulated "optical depth" is automatically produced between the fixed station point or other fixed reference point and each image present in each grid segment area (or "cell") lying on top of the curvilinear grid described in (a) and (b), whereupon images lying locationally "close" to a vanishing point are scaled to a relatively smaller size whilst those lying relatively further away are scaled to a relatively larger apparent size accordingly.</p>
<p>2. The method of claim 1 whereupon, when the visible region of the 2-D array of images lying coplanar with the curvilinear grid is animated, all the constituent images animate either in a row or column direction on the visible region of the 2-D image array (in unison), under control and/or by means of affected actions emanating from, and controlled by, the conjoined system's events and processes, whereupon no movement occurs as a result on the curvilinear grid itself, but rather the images themselves animate smoothly and automatically in unison and in response to the said actions, across the curvilinear grid, whilst at the same time scaling themselves automatically and smoothly depending upon their instantaneous (relative) position on the curvilinear grid, and hence distance from each of the three vanishing points described in claim 1, whereupon the images move one or more grid "places" (i.e. grid or "cell" positions on the curvilinear grid) in a series of smooth conjoined "animation frames" depending on the particular action initiated in the enclosing system and/or controlled by events in the conjoined system and being dependent on the instantaneous number and presence of images present on the curvilinear grid; and the term "animation frame" referring to a single still "frame" or instantaneous "snapshot" image of our method as described in claims 1-7 (within the DA) which can be subsequently unified or combined (in rapid succession) with other such similar frames into a smooth apparent movement action, through a combination of processes and actions in the display system and conjoined system, and in accordance with claims 1-7; and images in the visible region of the 2-D image array, whenever they are stationary, adopt a position so as to reside in the geometric centre of an individual display "cell" on the curvilinear grid.</p>
<p>3. The method of claims 1 and 2, wherein, when the 2-D image array has additional image items located out of the field of view of the displayed or visible region of the 2-D image array, and the images are animated, new images not previously present on the curvilinear grid can be animated into view (that is, enter the visible region of the 2-D image array) on the side of the curvilinear grid diametrically opposed to the direction of movement, whilst others on the opposite side of the curvilinear grid animate smoothly out of view in the direction of movement, and this scrolling feature occurs at an animation speed, and with an extent in terms of image numbers scrolled in and out of the projected view of the visible region of the 2-D image array, according to, and in response to, either individual or combined actions and processes in the visual display system and/or the conjoined system(s), and also automatically in terms of a movement speed according to events and processes in the conjoined system and according to the extent of the 2-D image array being scrolled.</p>
<p>4. The method of claims 1, 2 and 3 whereupon any image on the curvilinear grid displays a section of text a short distance either in front of the image, or slightly above it in the simulated optical space, whereupon this text adopts an automatic scaling and also station point facing or instantaneous reference point facing behaviour, regardless of the instantaneous image location on the visible region of the 2-D image array, and in accordance with claims 1-7.</p>
<p>5. The method of claims 1, 2 and 3 whereupon any 3-D image on the curvilinear grid is automatically animated to a) adopt a local angle (that is, adjust the angle of its local "front-elevation" in the lateral direction) to face a direction that instantaneously follows a direction directly opposite to the central vanishing point (the nadir of that point in space with respect to the instantaneous position of the image in question) [Figure 2, item 14]; and b) the front elevation angle for an individual 3-D image adopts its beginning reference angle or front elevation initial angle in the centre of the projection, that angle being identical to the angle adopted when the image is stationary and located in the central cell position on the curvilinear grid (see item 2 in Figure 2, which is located in this position), whereupon this angle is subsequently changed (in accordance with (a) above and in accordance with claims 1-7) as the image moves (or is otherwise located) to another point on the curvilinear grid, and in principle this starting front elevation angle can be individual or unique to the object in question, and also potentially re-orientable (with respect to each of the three 3-D spatial reference planes) between individual applications of the animations made according to our method.</p>
<p>6. The method of claims 1, 2 and 3 whereupon any 2-D image on the curvilinear grid is automatically animated to adopt a local angle (that is, adjust the angle of its local "front-elevation") to face a direction that instantaneously faces either a nominal fixed station point, or another reference point in the depicted space. [Figure 2, item 7]</p>
    <p>7. A machine readable medium storing a sequence of instructions that, when executed by a machine, generates a graphical representation of a region of a 2-D array of images on a display unit, which takes a form such that it causes the machine to perform the steps of: displaying, on the display unit, a visual representation corresponding to a plurality of displayed images in form and/or sequence partially or wholly identical to the design features laid out in any of the said claims 1-7 inclusive above; displaying, on the display unit, a visual representation corresponding to the generation, layout, relocation and display of a plurality of images in a form partially or wholly identical to any of the said claims 1-7 inclusive above; and, responsive to the detection of the identified image, relocating the display of a plurality of images such that the images align themselves in a fashion partially or wholly identical to the geometry and perspective representations and visual methods listed in claims 1-7 inclusive above.</p>
GB0614419A 2006-07-20 2006-07-20 3D Perspective Image of a User Interface for Manipulating Focus and Context Withdrawn GB2440197A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0614419A GB2440197A (en) 2006-07-20 2006-07-20 3D Perspective Image of a User Interface for Manipulating Focus and Context

Publications (2)

Publication Number Publication Date
GB0614419D0 GB0614419D0 (en) 2006-08-30
GB2440197A true GB2440197A (en) 2008-01-23

Family

ID=36998414

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5678015A (en) * 1995-09-01 1997-10-14 Silicon Graphics, Inc. Four-dimensional graphical user interface
US20010028369A1 (en) * 2000-03-17 2001-10-11 Vizible.Com Inc. Three dimensional spatial user interface
US20050086612A1 (en) * 2003-07-25 2005-04-21 David Gettman Graphical user interface for an information display system
US20060274060A1 (en) * 2005-06-06 2006-12-07 Sony Corporation Three-dimensional object display apparatus, three-dimensional object switching display method, three-dimensional object display program and graphical user interface

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9632659B2 (en) 2008-02-27 2017-04-25 Google Inc. Using image content to facilitate navigation in panoramic image data
CN103824317B (en) * 2008-02-27 2017-08-08 谷歌公司 Help to navigate in panoramic image data using picture material
US10163263B2 (en) 2008-02-27 2018-12-25 Google Llc Using image content to facilitate navigation in panoramic image data
US10025455B2 (en) 2010-06-01 2018-07-17 Sphere Technology Limited Method, apparatus and system for a graphical user interface
US10802666B2 (en) 2010-06-01 2020-10-13 Sphere Technology Limited Method, apparatus and system for a graphical user interface
US11366565B2 (en) 2010-06-01 2022-06-21 Sphere Research Ltd. Method, apparatus and system for a graphical user interface
US11768580B2 (en) 2010-06-01 2023-09-26 Sphere Research Ltd. Method, apparatus and system for a graphical user interface
WO2012089270A1 (en) * 2010-12-30 2012-07-05 Telecom Italia S.P.A. 3d interactive menu
US9442630B2 (en) 2010-12-30 2016-09-13 Telecom Italia S.P.A. 3D interactive menu
US9754413B1 (en) 2015-03-26 2017-09-05 Google Inc. Method and system for navigating in panoramic images using voxel maps
US10186083B1 (en) 2015-03-26 2019-01-22 Google Llc Method and system for navigating in panoramic images using voxel maps

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)