WO2000065461A1 - Tools for interacting with virtual environments - Google Patents
- Publication number
- WO2000065461A1 (PCT/US1999/028930)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- virtual environment
- environment system
- set forth
- virtual
- system set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/22—Character recognition characterised by the type of writing
- G06V30/228—Character recognition characterised by the type of writing of three-dimensional handwriting, e.g. writing in the air
Definitions
- the invention relates generally to tools by means of which users of an interactive environment may interact with the environment and more specifically to tools for interactively manipulating objects belonging to a virtual environment.
- the virtual table is a workbench-like back-projection system.
- Stereoscopic rendering allows representations of objects being worked on by the designer to appear as if they were floating in space above the table, permitting the designer to grab them, move them, and otherwise manipulate them in the virtual reality produced by the table.
- the virtual table seems perfectly suited as a virtual environment for interactive product design. The designer can either start with 2D sketches on the tabletop and then successively extrude them into 3D space or work directly in 3D from the beginning.
- the mere size of the table allows for intuitive, gesture-based interaction of the user's hand in the space above the virtual environment that appears on the tabletop.
- the virtual table - as an interactive medium - combines the strengths of conventional drawing boards, current CAD systems and interactive virtual environments; hence, the virtual table has the potential to make the CAD workplace into a 3D workspace.
- Stork and Maidhof introduced 3D interaction techniques for modeling precisely with 3D input devices. See A. Stork and M. Maidhof. "Efficient and Precise Solid Modelling Using 3D Input Devices". Proceedings of ACM Symposium on Solid Modelling and Applications, pp. 181 - 194, May 14-16, Atlanta, Georgia, 1997.
- the user can look at the miniature representation from a point of view different from that from which the user is looking at the full-sized virtual environment.
- the user can also use a pointing device to select and manipulate items on either the miniature representation or the full-sized virtual environment, with manipulations in the one being reflected in the other. If the user's position in the full-sized virtual environment is recorded on the miniature representation, the miniature representation serves as a map.
- the miniature representation may work as a "magic lens" to show aspects of the virtual environment that are not visible in the full-sized version.
- Angus, et al. disclose the use of a virtual tool called a paddle to interact with a virtual environment in their paper, Ian G. Angus, et al., "Embedding the 2D Interaction Metaphor in a Real 3D Virtual Environment," Proceedings of IS&T/SPIE's Electronic Imaging Symposium, 1995.
- the paddle is hand-held and has three real buttons.
- the virtual environment knows the position of the paddle and of a pointing device, and projects 2-D representations onto the paddle's opaque surface. The user can then employ the buttons and the pointing device to interact with the display on the paddle.
- One use of the paddle is to give the user of the virtual environment a 2-D map of the environment with the user's current position in the environment marked on the map. The user employs the map to locate him or herself within the virtual environment and to navigate to other locations in the environment.
- Reference No. 3 discloses an opaque pad and pen where the pad is used as a display and the pen is used to select items on the pad and in the virtual environment.
- when the pen is used to select an item in the virtual environment, the item appears on the pad.
- the manner in which it is selected by the pen determines how it appears on the pad.
- the items on the pad may include objects to be added to the virtual environment and tools for manipulating objects in the environment.
- the user employs the pen to drag the object from the pad to the desired position in the environment.
- To employ a tool on the virtual pad the user employs the pen to select the object and then employs the pen to operate the tool.
- the tools disclosed are scaling tools, rotation tools, cut and paste tools, magic lenses, and coloring tools.
- because the pad is opaque, it has generally been used only to control or display information about the virtual environment, not to interact directly with it. Thus, the most typical use of an opaque pad is as an analogue to a menu or dialog box in a two-dimensional GUI.
- the invention provides tools for working with the virtual environment that are themselves part of the virtual environment.
- the user employs a device to view a portion of the virtual environment and the virtual environment system responds to the position and orientation of the device and the direction and point of view of the user by producing a modification of the portion as required for the tool.
- the device is a mirror and the virtual environment system responds to the mirror's position and orientation and the given direction and point of view by modifying the portion of the virtual environment that is reflected in the mirror.
- the reflected portion may be modified to show the virtual environment as it would be seen from the direction and point of view that is the reflection of the given direction and point of view in the mirror, so that it appears to the viewer in the same fashion as a real environment seen through a mirror.
- the reflected portion may also be used as a "magic lens" to show a different view of the virtual environment, for example, an X-ray view. What is seen may further depend on which side of the mirror is being used. Ray tools may be used with the mirror by simply pointing the ray tool at the desired point in the reflection in the mirror.
- the device is transflective, that is, the user can simultaneously see objects through the device and objects reflected on the device.
- the objects seen through the device belong to a physical space and the objects reflected on the device belong to a reflection space.
- Given the location and orientation of the device, the direction and point of view of the user's eyes, and a location in the physical space, the virtual environment system can define the reflection space so that it exactly overlaps the physical space and can define a portion of the virtual environment so that the reflection of the portion in the device appears at the given location in the physical space.
- the technique can be used to deal with occlusion of the virtual environment by a real object or to augment a real object with objects created by the virtual environment.
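- A minimal C++ sketch of the geometry implied above (not taken from the patent; all names are illustrative): to make the reflection of a virtual object appear at a chosen location in physical space, the object can be rendered at the mirror image of that location with respect to the plane of the transflective device.

#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// panelPoint / panelNormal describe the tracked plane of the transflective
// device (panelNormal is assumed to be unit length).
Vec3 mirrorAcrossPanel(Vec3 p, Vec3 panelPoint, Vec3 panelNormal) {
    double d = dot(sub(p, panelPoint), panelNormal);   // signed distance to the plane
    return {p.x - 2.0 * d * panelNormal.x,
            p.y - 2.0 * d * panelNormal.y,
            p.z - 2.0 * d * panelNormal.z};
}

int main() {
    Vec3 physicalTarget = {0.2, 0.1, 0.3};   // where the reflection should appear
    Vec3 panelPoint = {0.0, 0.0, 0.0};
    Vec3 panelNormal = {0.0, 1.0, 0.0};
    Vec3 renderAt = mirrorAcrossPanel(physicalTarget, panelPoint, panelNormal);
    std::printf("render the virtual object at (%.2f, %.2f, %.2f)\n",
                renderAt.x, renderAt.y, renderAt.z);
    return 0;
}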
- the device may simply be a handle.
- the virtual environment system uses the given direction and point of view and the device position and orientation to determine where the image of a tool controlled by the handle appears in the virtual environment.
- the handle may be attached to a transparent component such as a transparent tube or a transparent panel.
- the extent of the tool's image is determined by the extent of the transparent component and to the user, the transparent component and the tool's image appear to merge, that is, the tool appears to be whatever the portion of the virtual environment that the user sees through the transparent component shows.
- a transparent tube may appear in the virtual environment as a stylus, a net, a trowel, a knife, or a saw and may be used to perform corresponding operations on the virtual environment.
- the transparent panel similarly merges with whatever the portion of the virtual environment viewed through the panel shows, and may therefore be as polymorphic as the transparent tube.
- the device may have two or more sides, with the modification of the portion depending on which side is used to view the virtual environment.
- Useful modifications include ones in which the portion of the virtual environment is modified to appear as though it were viewed through a transparent panel, ones in which the portion is modified to function as a "magic lens", and ones where a "snapshot” can be made of the portion and the device used to move the "snapshot" to another part of the virtual environment.
- the portion can also be used to select an object in the virtual environment, either by being moved over the object or by sweeping through the volume of the virtual environment occupied by the object.
- the portion may also be context sensitive in that its appearance changes in response to the selection of a particular object.
- the portion may further include one or more objects that move with the portion, and the device may be used with a selection device.
- the objects that move with the portion may include objects to be placed in the virtual environment or objects that have been taken from the virtual environment and placed in the portion.
- the objects that move with the portion may further include representations of operations to be performed on selected objects of the virtual environment. Examples of such representations are buttons and sliders. Among the operations is changing the manner in which the virtual environment system modifies the portion.
- the portion thus functions as a tool palette for working with the virtual environment.
- the tool palette is context sensitive in that the representations depend on the mode of operation of the portion and the object being operated on.
- the selection device may be transparent and the virtual environment system may use the position and orientation of the selection device and the given direction and point of view to produce an image of the selection device in the portion.
- the image may contain a visual representation of the operation presently being performed using the selection device.
- the selection device may be a stylus and the device itself may be a transparent panel.
- the virtual environment system may respond to movements of an end of the stylus on or near the transparent panel by making corresponding marks in the portion of the virtual environment.
- the virtual environment system may further include a gesture manager which interprets the movements of the end of the stylus as gestures indicating objects, operations, or letters and/or numbers.
- the device that is used to view the portion of the virtual environment may operate in either the reflective or transparent modes just described, with the mode of operation being determined by the orientation of the device relative to the virtual environment.
- the device that is used to view the portion may be fixed or movable relative to the virtual environment, as may be the given direction and point of view. Where both are movable, the given direction and point of view may be those of the eyes of a user of the virtual environment system and the device may be held in the user's hand. Modes of operation of the device may depend on the position of the device relative to the virtual environment or on separate mode inputs.
- FIG. 1 is an overview of an implementation of the invention in a virtual table
- FIG. 2 shows pad image 125 and pen image 127 as they appear when used as a palette tool
- FIG. 3 shows pad image 125 and pen image 127 as they appear when used as a first "through the plane tool"
- FIG. 4 shows pad image 125 and pen image 127 as they appear when used as a second "through the plane” tool
- FIG. 5 shows the geometry of object selection in tool 401;
- FIG. 6 shows operations performed by means of gestures written on the transparent pad
- FIG. 7 shows an "x-ray" pad image
- FIG. 8 shows the optics of the foil used in the reflective pad
- FIG. 9 shows how the angle of the transparent pad relative to the virtual table surface determines whether it is transparent or reflective
- FIG. 10 shows how the transparent pad can be used in reflective mode to examine a portion of the virtual environment that is otherwise not visible to the user
- FIG. 11 shows how the transparent pad may be used to capture a snapshot
- FIG. 12 shows a state diagram for the transparent pad when used to capture 3-D snapshots
- FIG. 13 shows projected pad 125 being used to select items in the virtual environment by sweeping through the virtual environment
- FIG. 14 shows the use of gestures to create objects in the virtual environment and to control the objects
- FIG. 15 shows the gestures used to create objects in a preferred embodiment
- FIG. 16 shows the gestures used to control objects and change modes in a preferred embodiment
- FIG. 17 shows how the portion of the virtual environment that is reflected in a mirror is determined
- FIG. 18 shows how ray pointing devices may be used with a mirror to manipulate a virtual environment reflected in the mirror
- FIG. 19 is an overview of virtual reality system program 109
- FIG. 20 shows a portion of the technique used to determine whether the pad is operating in transparent or reflective mode
- FIG. 21 shows how a transflective panel may be used with a virtual environment to produce reflections of virtual objects that appear to belong to a physical space
- FIG. 22 shows how the transflective panel may be used to prevent a virtual object from being occluded by a physical object
- FIG. 23 shows how the transflective panel may be used to augment a physical object with a virtual object.
- Reference numbers in the drawing have three or more digits: the two right-hand digits are reference numbers in the drawing indicated by the remaining digits. Thus, an item with the reference number 203 first appears as item 203 in FIG. 2.
- Overview of the pad and pen interface: FIG. 1
- the pad and pen interface of the invention employs a transparent pad and a large, pen-shaped plastic tube that functions as a pen and is further equipped with a button that is used in the same fashion as a mouse button to activate an object selected by the pen.
- the interface is used with a virtual table, but can be advantageously employed in any virtual reality system that uses back projection to create at least part of the virtual environment.
- FIG. 1 shows a system 101 for creating a virtual environment on a virtual table 111 in which the pad and pen interface of the invention is employed.
- Processor 103 is executing a virtual reality system program 109 that creates stereoscopic images of a virtual environment. The stereoscopic images are back- projected onto virtual table 111.
- a user of virtual table 111 views the images through LCD shutter glasses 117. When so viewed, the images appear to the user as a three-dimensional virtual environment.
- Shutter glasses 117 have a magnetic tracker attached to them which tracks the position and orientation of the shutter glasses, and by that means, the position and orientation of the user's eyes. Any other kind of 6DOF tracker could be used as well.
- the position and orientation are input (115) to processing unit 105, and virtual reality system program 109 uses the position and orientation information to determine the point of view and viewing direction from which the user is viewing the virtual environment. It then uses the point of view and viewing direction to produce stereoscopic images of the virtual reality that show the virtual reality as it would be seen from the point of view and viewing direction indicated by the position and orientation information.
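- As a rough illustration of how the tracked pose of the shutter glasses might be turned into the two eye positions needed for stereoscopic rendering, the following C++ sketch assumes a fixed interocular distance and a right-pointing unit vector derived from the tracked orientation; the names and values are hypothetical, not taken from the patent.

#include <cstdio>

struct Vec3 { double x, y, z; };

// headPos: tracked position of the shutter glasses.
// headRight: unit vector from the left eye toward the right eye, derived from
//            the tracked orientation of the glasses.
// eyeSeparation: assumed interocular distance in the table's units.
void eyePositions(Vec3 headPos, Vec3 headRight, double eyeSeparation,
                  Vec3* leftEye, Vec3* rightEye) {
    double h = 0.5 * eyeSeparation;
    *leftEye  = {headPos.x - h * headRight.x, headPos.y - h * headRight.y, headPos.z - h * headRight.z};
    *rightEye = {headPos.x + h * headRight.x, headPos.y + h * headRight.y, headPos.z + h * headRight.z};
}

int main() {
    Vec3 head = {0.0, -0.5, 0.7};            // tracked pose of the shutter glasses
    Vec3 right = {1.0, 0.0, 0.0};
    Vec3 l, r;
    eyePositions(head, right, 0.065, &l, &r);
    std::printf("left eye (%.3f %.3f %.3f)  right eye (%.3f %.3f %.3f)\n",
                l.x, l.y, l.z, r.x, r.y, r.z);
    return 0;
}

- Each off-axis stereo frustum is then built from the respective eye position and the corners of the table surface; in the preferred embodiment that step is handled by the custom viewer class mentioned below.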
- When the transparent pad and pen are being used with virtual table 111, the user holds transparent pad 123 in the subordinate hand and pen 121 in the dominant hand. Both pad 123 and pen 121 have magnetic trackers 119, and position and orientation information from pad 123 and pen 121 are provided to processing unit 105 along with the position and orientation information from shutter glasses 117.
- Virtual reality system program 109 knows the actual sizes of transparent pad 123 and pen 121, and uses that information, together with the position and orientation information, to generate stereoscopic images 125 of pad 123 and stereoscopic images 127 of pen 121 and back project them onto the surface of virtual table 111 at locations such that they cover the same portions of the surface of virtual table 111 that pad 123 and pen 121 appear to the user to cover.
- Projected image 125 of transparent pad 123 may be transparent, or it may function as a "magic lens" to show a view of the virtual environment that is different from what is seen outside the area of projected image 125. Projected image 125 may further appear to carry tools and objects that can be used in manipulating the virtual environment. Projected image 127 of pen 121 will further include a cursor at the pen's point. When the user looks at projected pad 125 through transparent pad 123, projected object 129 appears to be on transparent pad 123, and when real pen 121 is touching transparent pad 123, projected pen image 127 appears to be touching projected pad 125. By moving real pen 121 on transparent pad 123, the user can thus touch and manipulate projected object 129 on projected pad 125.
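- The patent does not spell out the projection math, but one way to make projected pad image 125 cover the same part of the tabletop that pad 123 appears to cover is to cast a ray from the tracked eye position through each tracked pad corner and intersect it with the table plane. The following C++ sketch assumes the tabletop is the plane z = 0; all names are illustrative.

#include <cstdio>

struct Vec3 { double x, y, z; };

// Intersect the ray from 'eye' through 'padCorner' with the table plane z = 0.
// Returns false if the ray does not head down toward the table.
bool projectToTable(Vec3 eye, Vec3 padCorner, Vec3* onTable) {
    double dz = padCorner.z - eye.z;
    if (dz >= 0.0) return false;             // corner is not below the eye
    double t = -eye.z / dz;                  // ray parameter where z reaches 0
    onTable->x = eye.x + t * (padCorner.x - eye.x);
    onTable->y = eye.y + t * (padCorner.y - eye.y);
    onTable->z = 0.0;
    return true;
}

int main() {
    Vec3 eye = {0.0, -0.4, 0.7};             // tracked eye position above the table
    Vec3 corner = {0.1, 0.0, 0.3};           // one tracked corner of pad 123
    Vec3 p;
    if (projectToTable(eye, corner, &p))
        std::printf("draw this pad corner at (%.3f, %.3f) on the tabletop\n", p.x, p.y);
    return 0;
}

- Repeating this for the four corners of pad 123 (and for the tip of pen 121) yields the outlines at which pad image 125 and pen image 127 are back-projected.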
- System 101 thus unifies several previously isolated approaches to 3D user-interface design, such as two-handed interaction and the use of multiple coordinate systems.
- Our interface supports the following important features:
- pad image 125, which takes on many forms; the image 127 of tube 121 always remains a pen.
- Image 127 can, however, be just as polymorphic as pad image 125: it can be the image of a pen, the image of a paint brush, the image of a net with a handle, and so on and so forth.
- both pad 123 and pen 121 can be reduced to handles: system 101 uses the position and orientation information from the handle and from shutter glasses 117 together with parameters about the size and type of tool to make an image of the tool at the proper position in the virtual environment.
- the handles are hidden by the user's hand and consequently need not be transparent.
- the handles could also have forms corresponding to the forms of the handles with real tools. For instance, the handle for a tool in the virtual environment whose real-world equivalent is two-handed can be two-handed.
- the advantage of using a real transparent pen and a real transparent panel instead of just handles is, of course, that the real transparent pen and panel are as polymorphic as handles by themselves, but also provide the user with tactile feedback.
- a preferred embodiment of system 101 uses the Baron Virtual Table produced by the Barco Group as its display device.
- This device offers a 53"x40" display screen built into a table surface.
- the display is produced by an Indigo2™ Maximum Impact workstation manufactured by Silicon Graphics, Incorporated.
- transparent pad 123 is an 8"x10" Plexiglas® acrylic plastic sheet and pen 121 is a large, transparent, pen-shaped plastic tube with a button.
- Both props and the shutter glasses in the preferred embodiment are equipped with 6DOF (six degrees of freedom) Flock of Birds® trackers made by Ascension Technology Corporation, for position and orientation tracking.
- the material for the pen and pad was also selected for minimal reflectivity, so that with dimmed lights — the usual setup for working with the virtual table — the props become almost invisible. While they retain their tactile property, in the user's perception they are replaced by the projected pad 125 and projected pen 127 produced by the virtual table.
- Our observations and informal user studies indicate that virtual objects can even appear floating above pad 123's surface, and that conflicting depth cues resulting from such scenarios are not perceived as disturbing. Conflicts occur only if virtual objects protrude from the outline of the prop as seen by the user because of the depth discontinuity. The most severe problem is occlusion from the user's hands. Graphical elements on pad 123 are placed in a way so that such occlusions are minimized, but they can never be completely avoided.
- the pen was chosen to be relatively large to provide room for graphics displayed inside the pen. In that way, the pen also provides visual feedback by e.g., displaying the tool it is currently associated with. So far, however, we have made only basic use of this capability and have instead focused on the pad as a carrier for the user interface.
- virtual reality system program 109 is based on the Studierstube software framework described in D. Schmalstieg, A. Fuhrmann, Z. Szalavari, M. Gervautz: "Studierstube" - An Environment for Collaboration in Augmented Reality. Extended abstract appeared in Proc. of Collaborative Virtual Environments '96, Nottingham, UK, Sep. 19-20, 1996. Full paper in: Virtual Reality - Systems, Development and Applications, Vol. 3, No. 1, pp. 37-49, 1998. Studierstube is realized as a collection of C++ classes that extend the Open Inventor toolkit, described in P. Strauss and R. Carey: An Object Oriented 3D Graphics Toolkit.
- Open Inventor's rich graphical environment approach allows rapid prototyping of new interaction styles, typically in the form of Open Inventor node kits.
- Tracker data is delivered to the application via an engine class, which forks a lightweight thread to decouple graphics and I/O.
- Off-axis stereo rendering on the VT is performed by a special custom viewer class.
- Studierstube extends Open Inventor's event system to process 3D (i. e., true 6DOF) events, which is necessary for choreographing complex 3D interactions like the ones described in this paper.
- the .iv file format, which includes our custom classes, allows convenient scripting of most of an application's properties, in particular the scene's geometry. Consequently, very little application-specific C++ code — mostly in the form of event callbacks — was necessary.
- The rendering of window tools generally follows the method proposed in J. Viega, M. Conway, G. Williams, and R. Pausch: 3D Magic Lenses. In Proceedings of ACM UIST'96, pages 51-58. ACM, 1996, except that it uses hardware stencil planes.
- a way of using user interface props to interact with a virtual environment is termed in the following a metaphor.
- one such metaphor is the palette: projected pad 125 carries tools and controls that the user can employ to interact with the virtual environment.
- the metaphors for which the transparent pad and pen can be employed are the following:
- the pad can carry tools and controls, much like a dialog box works in the desktop world. This way it can serve as carrier of 3D user-interface elements (widgets) and for 3D menu selection. Furthermore, it can offer collections of 3D objects to choose from, parameter fields to edit, or additional information to be displayed in a handheld private view. Because the pad is transparent, it works as a see-through tool. For example, a context-sensitive menu can be displayed on the pad image that adapts itself to the objects that the user currently is viewing through the pad. This allows a reduction of the complexity of the 2D interface at any point in time, thus allowing for the display of more data that is specific to what is being viewed through the pad.
- Window tools: the pad image defines a "window" into the virtual reality.
- the window can be used in a number of ways:
- 1. Through-the-plane tool: When the window is used as a through-the-plane tool, the user orients the "window" defined by the pad image and then manipulates objects seen through the window by applying the pen to the pad. Objects can, e.g., be selected by using the pen to make a circle on the pad around the object as seen through the pad. The circle appears on the pad image and the circled object is selected.
- 2. Lens and mirror tools: The window defined by the pad image can be used as a "magic lens" to see things that the user otherwise cannot see in the virtual environment, for example to obtain an "X-ray" view of an object in the virtual environment. The pad can also be used as a mirror to reflect a view of the back side of an object in the virtual environment.
- 3. Volumetric manipulation tool: The pad itself can be used for active object manipulation, exploiting the fact that the pad has a spatial extent (unlike the point represented by the pen tip). For example, reference planes and axes can be specified with the aid of this tool and sweeps can be interactively defined.
- FIG. 2 shows transparent pad image 125 and pen image 127 when the two are being used according to a palette metaphor 201.
- the transparent pad and pen are being used with a virtual reality program that generates a virtual environment for use in large-scale landscaping or town planning.
- Pad image 125 includes a set of buttons along the top; included in these buttons are buttons 203 indicating objects such as buildings that can be placed in the virtual environment and buttons 205 indicating tools that can be used to manipulate objects in the virtual environment.
- Button 209 is an exit button; it is used to exit from the application program being controlled by the transparent pad and pen.
- a painting tool (indicated by the image of a painter's palette) has been selected and virtual reality program 109 has responded to the selection by making image 125 into a tool for defining a color. It has done this by adding images of sliders 207 to image 125.
- These sliders control the amount of red, green, and blue in the color being defined, and are manipulated by placing pen 121 in a position on pad 123 such that the cursor on its image 127 touches a slider 207 in image 125 and then moving pen 121 in the direction that the slider 207 is to move.
- Virtual reality program 109 responds by moving the slider as indicated by pen 121 and changing the color being defined to agree with the movement of the slider.
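- A minimal sketch, under the assumption that each slider 207 maps the cursor's position along its track linearly to one color component; the function and variable names are hypothetical.

#include <algorithm>
#include <cstdio>

struct Color { double r, g, b; };

// Map the cursor's position along a slider track to a value in [0,1].
double sliderValue(double cursorPos, double trackStart, double trackLength) {
    return std::clamp((cursorPos - trackStart) / trackLength, 0.0, 1.0);
}

int main() {
    Color c;
    c.r = sliderValue(0.09, 0.02, 0.10);     // cursor near the right end of the red slider
    c.g = sliderValue(0.05, 0.02, 0.10);
    c.b = sliderValue(0.02, 0.02, 0.10);     // cursor at the left end of the blue slider
    std::printf("defined color: %.2f %.2f %.2f\n", c.r, c.g, c.b);
    return 0;
}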
- pad image 125 corresponding to transparent pad 123 resembles the dialog boxes used in desktop system GUIs in that it groups various application controls such as buttons, sliders, dials etc. Since transparent pad 123 is hand-held, it is always in convenient reach for the user, which is an advantage if working on different areas of the table. It is easy to remember where the controls are, and transparent pad 123 can even be put aside temporarily without causing confusion.
- the basic mode of our sample landscaping application is object placement.
- the pad serves as an object browser presenting a collection of objects to select from.
- the user then moves pen 121 in a fashion such that projected pen image 127 appears to drag objects from pad image 125 and drop them into the virtual environment via direct 3D manipulation.
- Additional controls, some implemented as space-saving pop-up button bars, allow the user to scale, colorize, and delete objects.
- 2D controls and 3D direct manipulation naturally blend as the pad represents a 2D surface similar to many real-world control containers in other application areas (e. g., a remote control, a radio, a dishwasher's front panel).
- pad image 125, when it is being used according to palette metaphor 201, is context sensitive; it contains only those tools which are relevant to the particular situation in which it is being used.
- tools 205 include a selection tool for selecting an object in the virtual environment, and when an object has been selected, the buttons and tools on image 125 are those that are relevant to that object.
- Another interesting property is the multiple surfaces of reference with which the user simultaneously interacts. The user can thus select a representation of an object 203 from image 125 and use pen image 127 to drag the representation from pad image 125 to a location in the virtual environment and add it to the virtual environment. We make further use of these multiple surfaces of reference with the window and through-the-plane tools.
- the fact that pad image 125 may operate in modes in which it is transparent, i.e., that the virtual environment may be seen through pad image 125, allows the development of yet another class of user interface tools that we call through-the-plane tools. With these tools, pad image 125 is used to select an object in the virtual environment and the tools that are carried on pad image 125 are used to operate on the selected object.
- the first tool is a context sensitive information and manipulation dialog, shown in FIG. 3.
- When pad image 125 is being used as a tool 301, pad image 125 has cross-hairs 303 marking its center.
- object 309 (here, a house) is selected for manipulation, and pad image 125 displays buttons 307 for the tools needed to manipulate object 309, as well as the name of the object at 305.
- the buttons are those needed to colorize the object 309.
- context-sensitive menus and toolbars that appear and disappear as different objects are selected are well known from desktop systems; the context-sensitive tool brings these possibilities into a virtual reality system.
- context-sensitive manipulation requires several steps in a one-handed desktop system: The user selects an object, looks for context-sensitive controls to appear somewhere on the screen, and then manipulates the controls. Manipulation of pad and pen can be almost instantaneous and is cognitively more similar to context-sensitive pop-up menus, but without the corresponding disadvantages (e.g., display often obscured by menu, mouse button must be held and cannot be used for interaction).
- Another approach to specifying operations on objects seen through transparent pad image 125 is using pen 121 to write on pad 123, which of course results in projected pen 127 appearing to write on projected pad 125 and marks appearing on projected pad 125.
- a user may select an object in the virtual environment by using pen 121 to write on pad 123 in such a fashion that projected pen 127 makes marks on projected pad 125 which encircle the object to be selected with a "lasso".
- This technique is shown in Fig. 4.
- the user has written with pen 121 on pad 123 such that "lasso" 405 appears to encircle objects 403 as seen through pad 123, thereby selecting objects 403.
- pad image 125 may change upon selection to show those tools appropriate for the selected objects.
- an outline drawn on the pad while it is held into the scene defines a conical sweep volume that has its tip in the user's eye point and its contour defined by the "lasso". All objects contained within this volume are selected, as shown in FIG. 5.
- the user's eye point 503 and lasso outline 505 that the user draws with pen 121 on pad 123 define volume 507 in the virtual environment produced by virtual table 111, and volume 507 contains selected objects 403.
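- A hedged C++ sketch of the selection test implied by FIG. 5: an object counts as selected if the line from eye point 503 through the object's center pierces the pad plane inside lasso outline 505. The pad-plane parameterization and the even-odd point-in-polygon test are assumptions, not taken from the patent.

#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Pad plane given by an origin, two in-plane unit axes u and v, and unit normal n.
struct PadPlane { Vec3 origin, u, v, n; };

// Intersect the eye-to-object line with the pad plane and return 2D pad coordinates.
bool toPadCoords(Vec3 eye, Vec3 object, const PadPlane& pad, Vec2* out) {
    Vec3 dir = sub(object, eye);
    double denom = dot(dir, pad.n);
    if (denom == 0.0) return false;          // line parallel to the pad plane
    double t = dot(sub(pad.origin, eye), pad.n) / denom;
    Vec3 hit = {eye.x + t * dir.x, eye.y + t * dir.y, eye.z + t * dir.z};
    Vec3 rel = sub(hit, pad.origin);
    *out = {dot(rel, pad.u), dot(rel, pad.v)};
    return true;
}

// Standard even-odd point-in-polygon test against the lasso drawn on the pad.
bool insideLasso(Vec2 p, const std::vector<Vec2>& lasso) {
    bool inside = false;
    for (size_t i = 0, j = lasso.size() - 1; i < lasso.size(); j = i++) {
        if ((lasso[i].y > p.y) != (lasso[j].y > p.y) &&
            p.x < (lasso[j].x - lasso[i].x) * (p.y - lasso[i].y) /
                      (lasso[j].y - lasso[i].y) + lasso[i].x)
            inside = !inside;
    }
    return inside;
}

int main() {
    PadPlane pad = {{0, 0, 0.4}, {1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
    std::vector<Vec2> lasso = {{-0.05, -0.05}, {0.05, -0.05}, {0.05, 0.05}, {-0.05, 0.05}};
    Vec3 eye = {0, 0, 0.8}, objectCenter = {0.02, 0.01, 0.0};
    Vec2 q;
    bool selected = toPadCoords(eye, objectCenter, pad, &q) && insideLasso(q, lasso);
    std::printf("object %s\n", selected ? "selected" : "not selected");
    return 0;
}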
- Lasso tool 401 is just one example of a wide design space of tools based on 2D gestures that specify operations on the 3D objects of the virtual environment (e.g., objects may be deleted by using pen 121 on pad 123 to "cross the objects out" on pad image 125).
- FIG. 6 shows an example of a sequence of gestures 601 in which objects are selected using a lasso, deleted by crossing them out, and restored by means of an "undo" gesture.
- the views of pad image 125 and pen image 127 are read as indicated by the arrows.
- 603 shows how a lasso is used to select a group of objects; 605 shows how a "cross out” gesture is used to delete one of the selected objects; 607, finally, shows how an "undo" gesture is used to restore the deleted object.
- 2D gestures may be used to create objects, to operate on objects, and to indicate a change in the mode in which pen 121 and pad 123 are operating.
- a user may teach new gestures to the system.
- the through-the-plane tool allows us to reuse all the metaphors that are used in the desktop world to manipulate 2D environments to manipulate virtual environments. It remains to be verified, however, in which cases this 2D manipulation is more capable than direct 3D manipulation. From our observations we conclude that the power of such 2D gesture tools lies in manipulation at-a-distance, for example when attempting to manipulate objects on one side of the table when standing at the other side.
- pad image 125 in the virtual environment defines an area of the virtual environment which has properties that are different from the remainder of the virtual environment.
- pad 123 defines an area in which operations such as selection of objects can be performed.
- window tools are the following:
- snapshot tools which permit a portion of the virtual environment that is visible through pad image 125 to be captured and moved elsewhere, for example for comparison.
- the presently-preferred embodiment employs two examples of magic lens tools: an x-ray vision tool and a mirror tool.
- FIG. 7 One kind of x-ray vision tool is shown in FIG. 7.
- the area being landscaped has an underground telecommunications network connecting the buildings.
- the user employs x-ray vision tool 701.
- the area within virtual environment 703 covered by pad image 125 shows the positions of cables 707.
- Using pen image 127, the user may modify the positions of the cables and connect them to buildings in the virtual environment.
- x-ray vision tool 701 is bound to the back side of transparent pad 123, and consequently, to employ x-ray vision tool 701, the user simply turns transparent pad 123 over.
- the system can easily determine which side of the pad the user is looking at by examining the pad's normal vector.
- any tool can be bound to the back side of transparent pad 123, and when properly used, this feature permits transparent pad 123 to be "overloaded” without confusing the user.
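- A minimal sketch of the side test mentioned above, assuming the sign of the dot product between the pad's tracked normal and the vector from the pad to the user's eye is what distinguishes the two sides; names are illustrative.

#include <cstdio>

struct Vec3 { double x, y, z; };

bool frontSideVisible(Vec3 padNormal, Vec3 padCenter, Vec3 eye) {
    Vec3 toEye = {eye.x - padCenter.x, eye.y - padCenter.y, eye.z - padCenter.z};
    double d = padNormal.x * toEye.x + padNormal.y * toEye.y + padNormal.z * toEye.z;
    return d > 0.0;   // positive: front side (e.g. palette); negative: back side (e.g. x-ray tool)
}

int main() {
    Vec3 n = {0, 0, 1}, c = {0, 0, 0.4}, eye = {0, -0.3, 0.8};
    std::printf("%s side of the pad faces the user\n",
                frontSideVisible(n, c, eye) ? "front" : "back");
    return 0;
}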
- the mirror tool is a special application of a general technique for using real mirrors to view portions of a virtual environment that would otherwise not be visible to the user from the user's current viewpoint and to permit more than one user to view a portion of a virtual environment simultaneously.
- the general technique will be explained in detail later on.
- When transparent pad 123 is being used as a mirror tool, it is made reflective instead of transparent.
- One way of doing this is to use a material which can change from a transparent mode to a reflective mode and vice-versa.
- Another, simpler way is to apply a special foil that is normally utilized as view protection for windows (such as Scotchtint P-18, manufactured by Minnesota Mining and Manufacturing Company) to one side of transparent pad 123.
- foils either reflect or transmit light, depending on which side of the foil the light source is on, as shown in FIG. 8.
- At 801 is shown how foil 809 is transparent when light source 805 is behind foil 809 relative to the position 807 of the viewer's eye, so that the viewer sees object 811 behind foil 809.
- At 806 is shown how foil 809 is reflective when light source 805 is on the same side of foil 809 as position 807 of the viewer's eye, so that the viewer sees the reflection 815 of object 813 in foil 809, but does not see object 811.
- When a transparent pad 123 with foil 809 applied to one side is used to view a virtual environment, the light from the virtual environment is the light source. Whether transparent pad 123 is reflective or transparent depends on the angle at which the user holds transparent pad 123 relative to the virtual environment. How this works is shown in FIG. 9. The transparent mode is shown at 901. There, transparent pad 123 is held at an angle relative to the surface 111 of the virtual table which defines plane 905. Light from table surface 111 which originates to the left of plane 905 will be transmitted by pad 123; light which originates to the right of plane 905 will be reflected by pad 123.
- the angle between plane 905, the user's physical eye 807, and surface 111 of the virtual table (the light source) is such that only light which is transmitted by pad 123 can reach physical eye 807; any light reflected by pad 123 will not reach physical eye 807. What the user sees through pad 123 is thus the area of surface 111 behind pad 123.
- the reflective mode is shown at 903; here, pad 123 defines plane 907.
- light from surface 111 which originates to the left of plane 907 will be transmitted by pad 123; light which originates to the right of plane 907 will be reflected.
- the angle between plane 907, the user's physical eye 807, and surface 111 is such that only light from surface 111 which is reflected by pad 123 will reach eye 807. Further, since pad 123 is reflecting, physical eye 807 will not be able to see anything behind pad 123 in the virtual environment.
- pad 123 When pad 123 is held at an angle to surface 111 such that it reflects the light from the surface, it behaves relative to the virtual environment being produced on surface 111 in exactly the same way as a mirror behaves relative to a real environment: if a mirror is held in the proper position relative to a real environment, one can look into the mirror to see things that are not otherwise visible from one's present point of view.
- This behavior 1001 relative to the virtual environment is shown in FIG. 10.
- virtual table 1007 is displaying a virtual environment 1005 showing the framing of a self-propelled barge.
- Pad 123 is held at an angle such that it operates as a mirror and at a position such that what it would reflect in a real environment would be the back side of the barge shown in virtual environment 1005. As shown at 1003, what the user sees reflected by pad 123 is the back side of the barge.
- virtual reality system program 109 tracks the position and orientation of pad 123 and the position and orientation of shutter glasses 117. When those positions and orientations indicate that the user is looking at pad 123 and is holding pad 123 at an angle relative to table surface 111 and user eye position 807 such that pad 123 is behaving as a mirror, virtual reality system program 109 determines which portion of table surface 111 is being reflected by pad 123 to user eye position 807 and what part of the virtual environment would be reflected by pad 123 if the environment were real, and displays that part of the virtual environment on the portion of table surface 111 being reflected by pad 123. Details of how that is done will be explained later.
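- One way to realize this, sketched below under the assumption that the reflected view is rendered from the mirror image of the tracked eye position with respect to the pad plane (consistent with the reflected direction and point of view described earlier); the helper names are hypothetical.

#include <cstdio>

struct Vec3 { double x, y, z; };

// padPoint / padNormal describe the tracked plane of pad 123 (unit normal).
Vec3 reflectEye(Vec3 eye, Vec3 padPoint, Vec3 padNormal) {
    double d = (eye.x - padPoint.x) * padNormal.x +
               (eye.y - padPoint.y) * padNormal.y +
               (eye.z - padPoint.z) * padNormal.z;     // signed distance to the pad plane
    return {eye.x - 2.0 * d * padNormal.x,
            eye.y - 2.0 * d * padNormal.y,
            eye.z - 2.0 * d * padNormal.z};
}

int main() {
    Vec3 eye = {0.0, -0.5, 0.6};
    Vec3 padPoint = {0.0, 0.0, 0.3};
    Vec3 padNormal = {0.0, -0.7071, 0.7071};           // pad tilted 45 degrees to the table
    Vec3 r = reflectEye(eye, padPoint, padNormal);
    std::printf("render the reflected portion from (%.3f, %.3f, %.3f)\n", r.x, r.y, r.z);
    return 0;
}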
- pad 123 can function in both reflective and transparent modes as a magic lens, or looked at somewhat differently, as a hand-held clipping plane that defines an area of the virtual environment which is viewed in a fashion that is different from the manner in which the rest of the virtual environment is viewed.
- Scotchtint P-18 is designed not only to provide privacy, but also to protect against sunlight. This sun protection feature blocks a fraction of the transmitted light. Thus, a virtual environment that is observed through the pad appears to be darker than a virtual environment that is looked at without the pad. In the preferred embodiment, this problem is dealt with by setting up the virtual environment so that it includes light sources that brighten the portion of the virtual environment that is being viewed through pad 123 and thereby overcome the effects of the foil. Other techniques for making pad 123 reflective may not require such tactics.
- virtual reality system program 109 determines whether pad 123 is operating in see-through or reflective mode using two criteria. The first, as shown in FIG. 9, is whether the user's eye position 807 is on the same or the other side of the pad plane: if E is the user's generalized physical eye position 807 and R_m is a point on pad plane 905 that is projected onto the projection plane at 909, the transparent mode is active if E and R_m satisfy a first inequality, and the reflective mode is active if they satisfy the complementary inequality.
- The second criterion is the value c, which indicates whether the pad is relatively perpendicular to or parallel to the projection plane and therefore whether the pad is being used in reflective or transparent mode.
- FIG. 20 shows how this is done in a preferred embodiment.
- Graph 2001 shows curve 2003 with the values of c for differences in the solid angles between the pad and the projection plane ranging from 0° (pad parallel to the projection surface) through 90° (pad perpendicular to the projection surface), 180° (pad again parallel), and 270° (pad again perpendicular) to 360° (pad again parallel).
- T1 and T2 are threshold values that define how the virtual environment system is to interpret the value of c. If c's value is between T1 and T2, the pad is in reflective mode 2007, and if it is above T1 or below T2, it is in transparent mode 2005. In the preferred embodiment, T1 is set to 0.5 and T2 to -0.5.
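- The formulas themselves appear only in the figures, but the description of curve 2003 and the thresholds suggests that c behaves like the cosine of the angle between the pad and the projection plane. The following C++ sketch makes that assumption explicit (c as the dot product of the two unit normals) and applies the T1/T2 thresholds of the preferred embodiment; it is an interpretation, not the patent's code.

#include <cstdio>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

enum class PadMode { Transparent, Reflective };

PadMode padMode(Vec3 padNormal, Vec3 projectionNormal) {
    const double T1 = 0.5, T2 = -0.5;              // thresholds from the preferred embodiment
    double c = dot(padNormal, projectionNormal);   // both normals assumed unit length
    return (c < T1 && c > T2) ? PadMode::Reflective : PadMode::Transparent;
}

int main() {
    Vec3 tableNormal = {0, 0, 1};
    Vec3 padAlmostParallel = {0.1, 0.0, 0.995};
    Vec3 padSteep = {0.0, 0.94, 0.342};            // pad roughly 70 degrees to the table
    std::printf("%s\n", padMode(padAlmostParallel, tableNormal) == PadMode::Transparent
                            ? "transparent mode" : "reflective mode");
    std::printf("%s\n", padMode(padSteep, tableNormal) == PadMode::Transparent
                            ? "transparent mode" : "reflective mode");
    return 0;
}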
- Reflective mode: complementary navigation (difficult-to-reach viewing/interaction, clipping-plane-in-hand, etc.). Even though the modes are complementary in most cases, a certain overlap exists. On the one hand, the two-handed interaction in combination with a tracked pen would also be supported in the reflective mode (interaction with the reflection space), and seems to be an interesting possibility to interact from "difficult-to-reach" positions (e.g. in the inside of objects, etc.). On the other hand, navigation (clipping-plane-in-hand, etc.) can also be realized in the transparent mode. Note that this is an example of an overlap of the application possibilities, but it is still complementary in the interaction range.
- the transparent mode, as well as the reflective mode, can be overloaded with a multitude of different functionality.
- the user can activate different modes that are supported by the two different sides of the transparent pad.
- window-controls such as buttons, sliders, etc.
- through-the-plane tools such as magic lenses, etc.
- the user can switch between them at pleasure, by turning over the pad.
- pad 123 is given a reflective mode, it effectively has four sides, two in each mode, and each of these sides can have a different functionality.
- FIGs. 11 and 12: While the X-ray tool is an example of a modified view of the environment, a window can also show different content.
- Windows in the desktop world usually show different content: multiple windows can either be entirely unrelated, or they can show data from different points in space (different viewpoints) or time (different versions).
- CAD systems normally use four windows with different viewpoints, and text tools like xdiff show a side-by-side comparison of different versions of data.
- Such a snapshot 1111 may be decoupled from pad image 125 and left floating in the scene at any position, and possibly be picked up again later.
- a user can inspect multiple views at once from inside a virtual environment, a strategy equivalent to the aforementioned multiple views of CAD systems.
- Tools that select objects at a distance use a ray to select the object and therefore have a dimension of one.
- Errors introduced by human inaccuracy make it difficult to perform precise manipulation with tools like these, which have essentially no spatial extent, unlike real world tools, which always have a spatial extent.
- the lack of spatial extent of the tools used to manipulate virtual environments is one reason for the development of techniques such as 3D snap-dragging.
- Pad image 125, unlike the points and rays produced by conventional 3D manipulation tools, has two dimensions, and thus has spatial extent in the same way that a real world tool does.
- An example of how the spatial extent of pad image 125 can be used to manipulate the virtual environment is pad image 125's use as a fishnet selection tool for the landscaping application, shown in FIG. 13.
- the user selects the fishnet mode (for example, by pushing a button on pad image 125) and then moves transparent pad 123 so that pad image 125 sweeps through the virtual environment, as shown at 1303, where pad image 125 appears at 1307.
- Objects in the virtual environment that are encountered by pad image 1307 during the sweep, such as object 1309, are selected.
- the small replicas on the pad can be discarded by using pen 121 to cause pen image 127 to remove them from pad image 125, thereby deselecting the corresponding object in the virtual environment.
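- A hedged per-frame sketch of the fishnet test: an object is treated as encountered when its center lies on, or very close to, the rectangle currently occupied by pad image 125. The thickness tolerance and all names are assumptions.

#include <cmath>
#include <cstdio>
#include <set>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Pad pose for one frame: center, in-plane unit axes u and v, unit normal n,
// and the half extents of the sheet in the table's units.
struct PadPose { Vec3 center, u, v, n; double halfU, halfV; };

bool padTouches(const PadPose& pad, Vec3 objectCenter, double thickness) {
    Vec3 rel = sub(objectCenter, pad.center);
    return std::fabs(dot(rel, pad.n)) < thickness &&
           std::fabs(dot(rel, pad.u)) < pad.halfU &&
           std::fabs(dot(rel, pad.v)) < pad.halfV;
}

int main() {
    std::set<int> selected;                        // ids of objects caught by the sweep
    PadPose pad = {{0.0, 0.0, 0.2}, {1, 0, 0}, {0, 1, 0}, {0, 0, 1}, 0.127, 0.102};
    Vec3 object1309 = {0.05, 0.02, 0.205};         // object crossed during the sweep
    if (padTouches(pad, object1309, 0.02)) selected.insert(1309);
    std::printf("objects selected so far: %zu\n", selected.size());
    return 0;
}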
- pad image 125 is placed in sketch pad mode by using pen image 127 to depress a button 205 on the surface of pad image 125.
- Shown in each of the images of FIG. 14 are pad image 125 and pen image 127.
- Set of images 1401 shows the gesture that specifies a truncated cone.
- the gesture is made up of three strokes.
- the first stroke is a circle 1425, which defines the base contour of the cone; the second and third strokes are shown at 1407.
- the vertical stroke defines the height of a cone having the base contour defined by the first stroke; the horizontal stroke indicates where the cone defined by the circle and the vertical stroke is to be truncated.
- the strokes are of course made by moving pen 121 on or near pad 123.
- Set of images 1409 shows the two gestures needed to make a torus at 1411, namely two circles that are more or less concentric, with one circle representing the outside diameter of the torus and the other the inside diameter.
- At 1413 is shown how the cross-out gesture 1415 can be used to remove an object from pad image 125 and at 1417 is shown how "undo" gesture 1419 can undo the deletion of 1413.
- Details of object creation with gestures: FIG. 15
- the gestures that are used for object creation were developed to be as intuitive as possible, to ensure easy memorization.
- the gestures have been designed to follow the contours of the top-down projection of the corresponding solid geometries as closely as possible. This differentiates our approach from the one presented in the SKETCH system referred to in the Description of related art.
- In the SKETCH system, the user mainly outlines the side-view contours of an object to generate basic solids. Top-view outlines are used in SKETCH only in a special bird's-eye mode to produce 2D contours to define the silhouettes of 3D geometry created in a subsequent step.
- FIG. 15 shows the gestures employed to create objects in a presently-preferred embodiment.
- Each gesture consists of one or more strokes of pen 121 that are performed on or close to pad 123.
- the entire gesture is defined by pressing pen 121 's button before the first stroke of the gesture and releasing it after the last stroke.
- the pen and pad are used much like pen and paper, except that instead of actually drawing a shape, the strokes made on the pad are scanned by the computer. The strokes' proximity to the pad determines whether or not they contribute to the gesture to be recognized.
- Table 1501 of FIG. 15 shows the gestures: row 1503 shows gestures that create objects having a rectangular base structure; row 1505 shows gestures that create objects having a circular base structure.
- Column 1507 shows the pen strokes for rectangular solids, spheres, and toruses;
- column 1509 shows the pen strokes for pyramids and cones;
- column 1511 shows the pen strokes for truncated pyramids and cones.
- one pen stroke indicates the base shape and another indicates the extent of the base shape's third dimension.
- Gestures for truncated solids resemble their non-truncated equivalent in that a horizontally cutting finishing stroke is added to the height stroke.
- the cylinder gesture is the exception: it employs the side-view contour and is defined by two parallel lines.
- the torus is defined by two circular gestures. This leaves for the sphere a circular gesture and an arc gesture, which indicates the sphere's curvature in all dimensions.
- the rate of recognition of the gestures by virtual reality system program 109 is generally between 95% and 100%.
- FIG. 16 shows all of the presently-defined gestures for object control and mode change in table 1601.
- the gestures for object control are shown at 1603; those for mode change are shown at 1605.
- the recognition rate by the system is again between 95% and 100%.
- Objects are selected by circling their projected images when the images are viewed through the pad. In a similar way, objects are deleted by "crossing them out” on the pad. Undo is represented by a "scribbling" on the pad, thus resembling the erasure of mistakes on common paper. All of this is shown at row 1603 of table 1601.
- Gestures representing letters and numbers: In many cases, the easiest way for a user of a program to input information to it is by means of strings of letters or numbers.
- In modern two-dimensional GUIs, the user typically employs a keyboard to input the letters or numbers into fields in a dialog box. The fields are typed, and if the user attempts to put the wrong kinds of values in them, for example, letters in a number field, the GUI indicates an error. Inputting strings of letters or numbers when the interface to the program is a virtual environment is much more difficult.
- First, dialog boxes are much less intuitive in virtual environments than they are in two-dimensional GUIs, and second, even if one had a dialog box, it is not clear how one would write to it.
- Virtual environment system 101 with the interface provided by transparent pad 123 and pen 121 solves both of these problems. Since virtual environment system 101 can display anything on projected pad 125, it can also display the equivalent of a dialog box on projected pad 125, and since the user can use pen 121 to "write" on transparent pad 123, with corresponding marks appearing on projected pad 125, all that is needed to be able to use pen 121 and pad 123 to fill in fields in a dialog box appearing on projected pad 125 is to extend the gestures which virtual environment system 101 recognizes to include gestures representing letters and numbers.
- projected pad 125 includes not only the fields of the dialog box, but also a large area for numeric inputs and another large area for character-string inputs.
- the user uses the pen to select a field in the dialog box and then writes in the proper large area using Graffiti strokes.
- When virtual environment system 101 recognizes a gesture, it places the character corresponding to the gesture in the selected field of the dialog box. Because the Graffiti gestures are single-stroke gestures and the range of possibilities is limited by the context in which the user is making the Graffiti gestures, there is no need to train virtual environment system 101 to recognize Graffiti input from different users.
- gestures are made up of one or more strokes, with the user indicating the beginning of a gesture by pushing the button on pen 121 and the end of the gesture by releasing the button. Recognizing a gesture in the preferred embodiment is a two-step process: first, recognizing what strokes have been made, and then determining whether there is a gesture that consists of that combination of strokes. A gesture is thus represented simply as a set of strokes.
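- The following C++ sketch illustrates this representation: samples gathered between the pen-button press and release are split into strokes by their proximity to the pad, and the gesture is simply the resulting set of strokes. The proximity threshold and type names are illustrative, not from the patent.

#include <cstdio>
#include <vector>

struct Sample { double x, y, distanceToPad; };   // pen tip in pad coordinates
using Stroke  = std::vector<Sample>;
using Gesture = std::vector<Stroke>;             // a gesture is simply a set of strokes

Gesture splitIntoStrokes(const std::vector<Sample>& samples, double nearPad) {
    Gesture gesture;
    Stroke current;
    for (const Sample& s : samples) {
        if (s.distanceToPad <= nearPad) {
            current.push_back(s);                // pen close enough: extend the stroke
        } else if (!current.empty()) {
            gesture.push_back(current);          // pen lifted away: stroke finished
            current.clear();
        }
    }
    if (!current.empty()) gesture.push_back(current);
    return gesture;
}

int main() {
    std::vector<Sample> betweenButtonPressAndRelease = {
        {0.0, 0.0, 0.001}, {0.1, 0.0, 0.002},    // first stroke
        {0.1, 0.1, 0.050},                       // pen moved away from the pad
        {0.2, 0.1, 0.001}, {0.2, 0.2, 0.001}};   // second stroke
    Gesture g = splitIntoStrokes(betweenButtonPressAndRelease, 0.01);
    std::printf("gesture consists of %zu stroke(s)\n", g.size());
    return 0;
}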
- buttons 205 are associated with each of the following events: • Loading a set of gestures from a file,
- this button can be used to add a variation of a gesture to help clarify and hone the process.
- the creation of a torus as shown at 1411 of FIG. 14 can be accomplished by penciling two circles onto the pad, whether or not they intersect. This allows for much more 'sloppiness' in the user's drawing, i.e. for true sketching.
- Transforming an expressed gesture into a meaningful statement can be a computationally intensive task and may not be easy to achieve in real-time (i.e. together with rendering, tracking etc.).
- techniques of artificial intelligence and machine learning are applied to solve classification problems such as the recognition of gestures or speech.
- the method described recognizes previously learned gestures.
- System 101 was taught these gestures by having a user perform them at runtime. Any kind of 2D, 3D, or 6DOF input device can be used to gather the motion data that makes up the gesture.
- the reliability of the recognition process can be improved by repeating the same gestures several times and correcting incorrect recognition results. This extends the system's knowledge.
- Once the system has learned the gesture it translates the recognized gesture into an object which processor 103 can identify (numbers, strings, events, etc.) and process further.
- the strokes of a gesture are recognized using software in virtual reality system program 109 that is executing in processor 103 as follows: first, the software accepts the raw position and orientation data for a stroke. Then, in a three-stage process, the raw position and orientation data is first used to continuously update the stroke-specific basic information (e.g. bounding box, length, orientation etc.) on the fly. Once updated, the stroke-specific basic information serves as a basis for calculating a set of fuzzy values that characterizes the stroke in the second stage. Approximate reasoning is dynamically applied to express the trust in each characterization criterion (a so-called aspect), depending on the appearance of extreme situations.
- each individual set of aspects represents a product rule that describes the degree of membership of the stroke being interpreted in a specific class of strokes.
- These product rules are finally used to find the stroke with the best match among the already learned ones. To do so, we compare the aspect set of the stroke to be identified with the ones stored in the knowledge-base. The stroke with the smallest total deviation is suggested as the most-likely candidate.
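- a minimal sketch of this comparison, assuming each learned stroke is stored in the knowledge base as its name plus its aspect set, and that every aspect carries a weight (names and weights here are illustrative only):

```python
from typing import List, Tuple

Aspects = List[int]  # fuzzy aspect values, each normalized to [0, 100]

def total_deviation(scanned: Aspects, learned: Aspects, weights: List[float]) -> float:
    """Weighted sum of per-aspect deviations between the scanned stroke and one
    product rule stored in the knowledge base."""
    return sum(w * abs(s - l) for s, l, w in zip(scanned, learned, weights))

def best_match(scanned: Aspects,
               knowledge_base: List[Tuple[str, Aspects]],
               weights: List[float]) -> str:
    """Return the name of the learned stroke whose rule has the smallest total deviation."""
    name, _ = min(knowledge_base,
                  key=lambda rule: total_deviation(scanned, rule[1], weights))
    return name
```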
- the basic information consists of stroke-dependent properties that are continually updated during scanning of a stroke. This information is the basis for computing the characterization aspects. Because all of the basic information is treated similarly, we will only discuss the processing of position data in detail. Orientation information (e.g. azimuth, elevation and roll alignments) is treated in the same way.
- the total length of the stroke is l = Σ_{i=1..n−1} |p_{i+1} − p_i|, where p_i = (x_i, y_i, z_i) is the i-th scanned position and n is the number of positions.
- the center of the stroke is c_x = (1/n) Σ_{i=1..n} x_i, with c_y and c_z computed correspondingly from the y and z components.
- the angle coefficients (legs of a triangle) used to calculate the directions of the stroke's start and end sections are sa_x = Σ_{i=1..n−1} (x_{i+1} − x_i)(1 − l_i/l) and ea_x = Σ_{i=1..n−1} (x_{i+1} − x_i)(l_i/l), with sa_y, sa_z, ea_y and ea_z formed in the same way from the y and z components, where l_i is the momentary length of the stroke (from segment 1 to segment i) and l is its total length.
- the angle coefficients are used later to calculate the stroke's start and end angles, and to represent its directions (x,y,z components) at the beginning and the end. Note that with increasing distance from the start position (and decreasing distance to the end position), the weights of the start- angle-coefficients (sax,say,saz) decrease while the weights of the end-angle-coefficients (eax,eay,eaz) increase. This causes a sufficiently realistic weighting of the stroke's start and end sections and prevents the computation from taking the distortion into account that usually appears at the beginning and the end (e.g. introduced by the tracking device or the user).
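- a batch version of these computations is sketched below; the preferred embodiment updates the same quantities on the fly while the stroke is scanned, and the weighting shown follows the reconstructed formulas above, so it should be read as an approximation rather than the exact implementation:

```python
import math

def stroke_basics(samples):
    """Total length and start/end angle coefficients for a list of 3D positions."""
    seg = [tuple(b[k] - a[k] for k in range(3)) for a, b in zip(samples, samples[1:])]
    seg_len = [math.sqrt(sum(c * c for c in s)) for s in seg]
    total = sum(seg_len)
    momentary = 0.0
    sa = [0.0, 0.0, 0.0]   # start-angle coefficients (sax, say, saz)
    ea = [0.0, 0.0, 0.0]   # end-angle coefficients (eax, eay, eaz)
    for s, sl in zip(seg, seg_len):
        momentary += sl                      # l_i: length from segment 1 to segment i
        w_end = momentary / total if total else 0.0
        for k in range(3):
            sa[k] += s[k] * (1.0 - w_end)    # start weight decreases along the stroke
            ea[k] += s[k] * w_end            # end weight increases along the stroke
    return total, sa, ea

# Example: a short, nearly straight stroke in the x/y-plane
print(stroke_basics([(0, 0, 0), (1, 0.2, 0), (2, 0, 0)]))
```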
- in the second stage, the basic information is used to compute fuzzy values that express the grade of membership of the stroke in fuzzy sets (e.g. the set of intricate strokes, the set of flat strokes, etc.). Note that in the following we will discuss only a subset of the defined aspects to illustrate the principles. For performance reasons, we use an integer representation; thus the aspects are normalized to a range of [0,100].
- a stroke's length is at least as long as the diagonal of its bounding box.
- Values of around 50 indicate a square height/width relation.
- Values close to 100 indicate that the stroke does begin close to the specific face (right, top, front). Values of around 50 indicate that the stroke begins at the center, and values close to 0 indicate that it begins at the corresponding opposite sides (left, bottom, back) of the bounding box. Similar ratios can be formulated for the end position (a_10, a_11, a_12).
- we compute the start and end angles using the horizontal/vertical and depth angle-coefficients, which have been tracked during the scanning process. These computed angles represent the stroke's direction at the beginning and the end (x,y,z components). The angles are calculated based on the stroke's projection onto the x/y-plane, x/z-plane and z/y-plane. Note that the angle-coefficients represent a weighted sum of the movements with increasing weights for the end-angle-coefficients (the closer we get to the end) and decreasing weights for the start-angle-coefficients (the further the distance from the beginning). This causes a sufficiently realistic weighting of the stroke's start and end sections and prevents the computation from taking into account the distortion that usually appears at the beginning and the end (e.g. introduced by the tracking device or the user).
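- for illustration, a few such normalized aspects might be computed as follows; the aspect names and the exact normalizations are assumptions, only the [0,100] range and the interpretations given above are taken from the description:

```python
import math

def clamp100(v: float) -> int:
    return max(0, min(100, int(round(v))))

def example_aspects(total_len, bbox_min, bbox_max, start, sa):
    """A handful of illustrative aspects in the range [0, 100]."""
    ext = [bbox_max[k] - bbox_min[k] for k in range(3)]
    diagonal = math.sqrt(sum(e * e for e in ext))
    a = {}
    # close to 100: the stroke is essentially straight (length ~ bounding-box diagonal)
    a["straightness"] = clamp100(100 * diagonal / max(total_len, 1e-9))
    # around 50: square height/width relation of the bounding box
    a["height_vs_width"] = clamp100(100 * ext[1] / max(ext[0] + ext[1], 1e-9))
    # close to 100: the stroke begins near the right face; close to 0: near the left face
    a["starts_right"] = clamp100(100 * (start[0] - bbox_min[0]) / max(ext[0], 1e-9))
    # start direction in the x/y-plane, mapped from (-180, 180] degrees onto [0, 100]
    a["start_angle_xy"] = clamp100((math.degrees(math.atan2(sa[1], sa[0])) + 180) / 3.6)
    return a

print(example_aspects(2.05, [0, 0, 0], [2, 0.2, 0], (0, 0, 0), [2.0, 0.1, 0.0]))
```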
- the aspects form a product rule that classifies the stroke. For example:
- An important feature of our fuzzy system (in contrast to others) is that we do not want the user to define rules that characterize motion-based strokes; rather, we want the system to learn these rules in terms of differentiating between strokes and recognizing them. Thus the system must be able to automatically generate a new product rule every time it is taught a new stroke (or a new representation of an already learned stroke). Note that the strokes that must be recognized are not known in advance; thus a manual specification of product rules that describe them is not possible. It is important that the system evaluates these rules in a way that allows it to draw the right inference, depending on a possibly large number of generated rules.
- a rule is represented by a set of fuzzy values (i.e. the specific set of aspects computed for a stroke).
- Some fuzzy systems allow the user to specify the degree of faith in these rules. This is called approximate reasoning and is usually implemented by multiplying the inferred fuzzy values by predefined weights that represent the user's faith in particular rules (e.g. 0.8 represents high, 0.2 represents low). We apply approximate reasoning by weighting each aspect deviation to indicate its importance in terms of inferring the correct conclusion. Some of these weights can be set up manually by the user to indicate the general importance of the aspects, but most of the weights are calculated dynamically (e.g. depending on the appearance of extreme cases), because the rules (as well as an indication of the faith in these rules) are not known in advance, but are learned by the system.
- the specific weights can also be set up in advance to indicate an aspect's importance and strength (the larger the weight, the stronger the aspect). For example, a_0 would have a large weight, because the peculiarity 'whether the stroke is straight or intricate' is important information.
- Y = max(0, min(100, b + g·(X − a))), where g is the equation's gradient and (a, b), with 0 ≤ a ≤ 100 and 0 ≤ b ≤ 100, is the center of rotation (for varying gradients).
- the divisors of the critical aspects are 20, and all weights range from 0 (no effect) to 5 (very strong aspect).
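- a sketch of such a dynamically calculated weight follows; the default center of rotation and the way the divisor of 20 maps onto the 0-5 scale are assumptions made only to tie the two statements above together:

```python
def dynamic_weight(x: float, gradient: float, a: float = 50.0, b: float = 50.0,
                   divisor: float = 20.0) -> float:
    """Approximate reasoning: evaluate the clamped linear rule
    Y = max(0, min(100, b + g*(X - a))) for an aspect value X in [0, 100];
    dividing by 20 maps Y onto the 0 (no effect) .. 5 (very strong aspect) scale."""
    y = max(0.0, min(100.0, b + gradient * (x - a)))
    return y / divisor

# Example: a steep gradient makes the weight react strongly to extreme aspect values.
print(dynamic_weight(95.0, gradient=2.0))  # -> 5.0 (very strong)
```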
- Learning new strokes is achieved by simply adding a new product rule (i.e. the specific set of aspects) to the knowledge base. From then on, the rule is compared with other, new strokes.
- Several product rules for the same stroke can be stored (in different representations) to increase the reliability.
- the system can, for example, be trained by correcting it each time it fails to recognize a stroke. In this case, the aspects of the failed stroke can be added to the set again, and so extend the system's knowledge of different representations of the same stroke. The failure rate of the system decreases as it becomes more knowledgeable about the strokes (i.e. if many different representations of the same strokes are stored in the knowledge base).
- Similar strokes should be taught to the system in a similar way (i.e. same dimensions, etc.) to emphasize details. Because the amount of memory required to represent the knowledge base is minimal, a scanned stroke can be recognized very quickly (i.e. the comparison process is very effective), even if the knowledge base contains many different representations of the same strokes. Each aspect ranges from 0 to 100, thus a 7-bit representation is sufficient. Multiplied by 56 (for 56 position/orientation aspects), a stroke representation requires less than 50 bytes of memory, no matter how complex the stroke is. Even smaller representations can be used if fewer aspects are sufficient (e.g. for 2D or 3D recognition).
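- one possible packing that reproduces this size estimate is sketched below; the patent gives only the figure of less than 50 bytes, not a storage format, so the layout is an assumption:

```python
def pack_aspects(aspects):
    """Pack aspect values (each 0..100, i.e. 7 bits) into a compact byte string:
    56 aspects * 7 bits = 392 bits = 49 bytes, matching the 'less than 50 bytes
    per stroke representation' figure above."""
    bits = 0
    for a in aspects:
        bits = (bits << 7) | (a & 0x7F)
    return bits.to_bytes((7 * len(aspects) + 7) // 8, "big")

print(len(pack_aspects([50] * 56)))  # -> 49
```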
- stroke recognition is further enhanced with additional techniques known to the art.
- feed-forward perceptron neural networks for classification, as described in D. Rumelhart and D. Zipser, "Feature Discovery by Competitive Learning", Parallel Distributed Processing, MIT Press, Cambridge, Massachusetts, USA, 1986; and neural networks built from linear associators that perform principal component analysis to reduce the amount of redundancy in the knowledge base, as described in E. Oja, "A Simplified Neuron Model as a Principal Component Analyzer", Journal of Mathematical Biology, vol. 15, pp. 267-273, 1982, and in T. Sanger, "Optimal Unsupervised Learning in a Single-Layer Linear Feedforward Neural Network", Neural Networks, vol. 2, pp. 459-473, 1989.
- gestures may be defined as sets of strokes
- stroke information as recognized above can be used in many ways to interact with virtual environments on an intuitive and natural basis.
- motion-based strokes may be used for gestures representing letters and numbers, gestures representing objects in the virtual environment, and gestures for performing operations in the virtual environment. Disambiguation of gestures is made easier in a preferred embodiment by the fact that the gestures are made on pad 123 within a precisely-defined context (supplied by pad image 125).
- gestures can be easily combined with other techniques of interaction such as manipulation of buttons or sliders and direct manipulation of objects in the virtual environment. Additional means of interaction could be used as well, for example, simple voice commands for changing modes or for indicating approval or disapproval of responses made by the virtual environment.
- Using real mirrors to reflect virtual environments: FIGs. 17 and 18
- the mirror tool is a special application of a general technique for using mirrors to view virtual environments.
- Head tracking, as achieved for example in the preferred embodiment of system 101 by attaching a magnetic tracker to shutter glasses 117, represents one of the most common and most intuitive methods for navigating within immersive or semi-immersive virtual environments.
- Back-screen-projection planes are widely employed in industry and the R&D community in the form of virtual tables or responsive workbenches, virtual walls or powerwalls, or even surround-screen projection systems or CAVEs. Applying head-tracking while working with such devices can, however, lead to an unnatural clipping of objects at the edges of projection plane 111.
- Standard techniques for overcoming this problem include panning and scaling techniques (triggered by pinch gestures) that reduce the projected scene to a manageable size.
- these techniques do not work well when the viewpoint of the user of the virtual environment is continually changing.
- the method employs a planar mirror to reflect the virtual environment and can be used to increase the perceived viewing volume of the virtual environment and to permit multiple observers to simultaneously gain a perspectively correct impression of the virtual environment.
- the method is based on the fact that a planar mirror enables us to perceive the reflection of stereoscopically projected virtual scenes three-dimensionally.
- the stereo images that are projected onto the portion of surface 111 that is reflected in the planar mirror must be computed on the basis of the positions of the reflection of the user's eyes in the reflection space (i.e. the space behind the mirror plane).
- the physical eyes perceive the same perspective by looking from the physical space through the mirror plane into the reflection space, as the reflected eyes do by looking from the reflection space through the mirror plane into the physical space.
- Mirror 1703 defines a plane 1705 which divides what a user's physical eye 1713 sees into two spaces: physical space 1709, to which physical eye 1713 and physical projection plane 1717 belong, and reflection space 1707, to which reflection 1711 of physical eye 1713 and reflection 1715 of physical projection plane 1717 appear to belong when reflected in mirror 1703. Because reflection space 1707 and physical space 1709 are symmetrical, the portion of the virtual environment that physical eye 1713 sees in mirror 1703 is the portion of the virtual environment that reflected eye 1711 would see if it were looking through mirror 1703.
- virtual reality system program 109 need only know the position and orientation of physical eye 1713 and the size and position of mirror 1703. Using this information, virtual reality system program 109 can determine the position and orientation of reflected eye 1711 in reflected space 1707 and from that, the portion of physical projection plane 1717 that will be reflected and the point of view which determines the virtual environment to be produced on that portion of physical projection plane 1717.
- mirror plane 1705 is represented, for example, by a unit normal n and an offset d, so that the points x of the plane satisfy n · x = d. X is the point on the mirror plane that is visible to the user, and Y is the point on the projection plane that is reflected towards the user at X.
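- under the plane representation suggested above (unit normal n, offset d), the reflection computation can be sketched as follows; this reproduces the symmetry argument only and is not the literal implementation of program 109:

```python
import numpy as np

def reflect_point(p, n, d):
    """Reflect a point p across the mirror plane {x : n.x = d}; n must be a unit
    normal.  Used to obtain reflected eye 1711 from physical eye 1713."""
    p, n = np.asarray(p, dtype=float), np.asarray(n, dtype=float)
    return p - 2.0 * (np.dot(n, p) - d) * n

def reflect_direction(v, n):
    """Reflect a direction vector (e.g. the viewing direction) across the same plane."""
    v, n = np.asarray(v, dtype=float), np.asarray(n, dtype=float)
    return v - 2.0 * np.dot(n, v) * n

# Example: mirror plane x = 1 (normal along +x), physical eye at the origin
print(reflect_point([0, 0, 0], [1, 0, 0], 1.0))  # -> [2. 0. 0.]
```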
- transparent pad 123 may be made reflective and may be used in its reflective mode to view a virtual environment in the manner just described. All that is required to use any reflective surface to view a virtual environment is that virtual reality system program 109 know the shape, location, and orientation of the mirror and the location and orientation of physical eyes 1713 that are using the mirror to view the virtual environment.
- because mirror tracking permits virtual reality system program 109 to adjust what is projected on the portion of physical projection plane 1717 that is reflected in mirror 1703 as required for both the position and orientation of mirror 1703 and the position and orientation of physical eye 1713, it allows us to observe unnaturally clipped areas intuitively, even when the observer's viewpoint is changing continuously.
- the mirror itself can also be used as a clipping plane that enables us to investigate the interiors of objects:
- ⁇ is the clipping plane offset.
- the offset is particularly useful to reflect the intersection in the mirror.
- Mirror tracking and head tracking are complementary. To switch from head tracking to mirror tracking, all the user needs to do is look at a mirror that is in a position where what the user will see in the mirror is a reflection of a portion of projection plane 1717. If the user is holding the mirror, the user can manipulate it until it is in the proper position. To return to head tracking, all the user has to do is cease looking into the mirror. If the mirror is hand-held, the user can simply lay it down.
- mirror tracking can also be done simply by tracking the mirror. Even though this approximation does not result in a mirrored perspective that is absolutely correct for each observer viewing the mirror, it does allow multiple observers to view the virtual environment simultaneously by means of the mirror. By moving the mirror, different portions of the virtual environment may be displayed to all of those looking at the mirror. The perspective seen by the observers can be thought of as the perspective two stereo-cameras would capture from eye positions that are kept constant relative to the mirror plane. Everyone looking at the mirror can then observe this perspective.
- mirror-tracking also enables a group to increase its viewing volume in the environment.
- FIG. 18 shows the geometric and computational basis for all ray-pointing interactions with reflection space 1711. Note that direct manipulation (such as virtual hands, direct picking) of the reflections is not possible because of the physical constraints of mirror 1703.
- Using transflective tools with virtual environments: FIGs. 21-23
- when the reflecting pad is made using a clear panel and film such as Scotchtint P-18, it is able not only to alternately transmit light and reflect light, but also to do both simultaneously, that is, to operate transflectively.
- a pad with this capability can be used to augment the image of a physical object seen through the clear panel by means of virtual objects produced on projection plane 111 and reflected by the transflective pad. This will be described with regard to FIG. 21.
- the plane of transflective pad 2117 divides environment 2101 into two subspaces.
- subspace 2107, which contains the viewer's physical eyes 2115 and (at least a large portion of) projection plane 111, is 'the projection space' (or PRS); subspace 2103, which contains physical object 2119 and additional physical light sources 2111, is 'the physical space' (or PHS).
- PHS 2103 is exactly overlaid by reflection space 2104, which is the space that physical eye 2115 sees reflected in mirror 2117.
- the objects that physical eye 2115 sees reflected in mirror 2117 are virtual objects that the virtual environment system produces on projection plane 111.
- the virtual environment system uses the definition of virtual graphical element 2121 to produce virtual graphical element 2127 at a location and orientation on projection plane 111 such that when element 2127 is reflected in mirror 2117, the reflection 2122 of virtual graphical element 2127 appears in reflection space 2104 at the location of virtual graphical element 2121. Since mirror 2117 is transflective, physical eye 2115 can see both physical object 2119 through mirror 2117 and virtual graphical element 2127 reflected in mirror 2117 and consequently, reflected graphical element 2122 appears to physical eye 2115 to overlay physical object 2119.
- the virtual environment system computes the location and direction of view of reflected eye 2109 from the location and direction of view of physical eye 2115 and the location and orientation of mirror 2117 (as shown by arrow 2113).
- the virtual environment system computes the location of inverse reflected virtual graphical element 2127 in projection space 2107 from the location and point of view of reflected eye 2109, the location and orientation of mirror 2117, and the definition of virtual graphical element 2121, as shown by arrow 2123.
- the definition of virtual graphical element 2121 will be relative to the position and orientation of physical object 2119.
- the virtual environment system then produces inverse reflected virtual graphical element 2127 on projection plane 111, which is then reflected to physical eye 2115 by mirror 2117.
- because reflection space 2104 exactly overlays physical space 2103, the reflection 2122 of virtual graphical element 2127 exactly overlays defined graphical element 2121.
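- a minimal sketch of this placement, again assuming the mirror plane is given by a unit normal n and offset d: because reflection in a plane is its own inverse, the projection-space position at which inverse reflected element 2127 must be produced is simply the reflection of the defined position of element 2121 across transflective mirror 2117:

```python
import numpy as np

def place_inverse_reflected_element(element_pos, n, d):
    """Position in projection space at which inverse reflected element 2127 is
    produced so that its reflection appears at the defined position of element
    2121 in physical space (mirror plane {x : n.x = d}, n a unit normal)."""
    p, n = np.asarray(element_pos, dtype=float), np.asarray(n, dtype=float)
    return p - 2.0 * (np.dot(n, p) - d) * n

# The reflected eye used to render that element is obtained with the same operation
# applied to the physical eye position.
```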
- physical object 2119 has a tracking device and a spoken command is used to indicate to the virtual environment system that the current location and orientation of physical object 2119 are to be registered in the coordinate system of the virtual environment being projected onto projection plane 111. Since graphical element 2121 is defined relative to physical object 2119, registration of physical object 2119 also defines the location and orientation of graphical element 2121. In other embodiments, of course, physical object 2119 may be continually tracked.
- Transflective mirror 2117 thus solves an important problem of back-projection environments, namely that the presence of physical objects in PRS 2107 occludes the virtual environment produced on projection plane 111 and thereby destroys the stereoscopic illusion.
- the virtual elements will always overlay the physical objects.
- because reflection space 2104 exactly overlays PHS 2103, the reflected virtual element 2127 will appear at the same position (2122) within the reflection space as virtual element 2121 would occupy within PHS 2103 if virtual element 2121 were real and PHS 2103 were being viewed by physical eye 2115 without mirror 2117.
- FIG. 22 illustrates a simple first example at 2201.
- a virtual sphere 2205 is produced on projection plane 111. If hand 2203 is held between the viewer's eyes and projection plane 111, hand 2203 occludes sphere 2205. If transflective mirror 2207 is placed between hand 2203 and the viewer's eyes in the proper position, the virtual environment system will use the position of transflective mirror 2207, the original position of sphere 2205 on projection plane 111, and the position of the viewer's eyes to produce a new virtual sphere at a position on projection plane 111 such that when the viewer looks at transflective mirror 2207 the reflection of the new virtual sphere in mirror 2207 appears to the viewer to occupy the same position as the original virtual sphere 2205; however, since mirror 2207 is in front of hand 2203, hand 2203 cannot occlude virtual sphere 2205 and virtual sphere 2205 overlays hand 2203.
- the user can intuitively adjust the ratio between transparency and reflectivity by changing the angle between transflective mirror 2207 and projection plane 111. While acute angles highlight the virtual augmentation, obtuse angles let the physical objects show through more brightly. As with most augmented environments, proper illumination is decisive for good quality. The technique would of course also work with fixed transflective mirrors 2207.
- FIG. 23 shows an example of how a transflective mirror might be used to augment a transmitted image.
- physical object 2119 is a printer 2303.
- Printer 2303's physical cartridge has been removed.
- Graphical element 2123 is a virtual representation 2305 of the printer's cartridge which is produced on projection plane 111 and reflected in transflective mirror 2207.
- Printer 2303 was registered in the coordinate system of the virtual environment and the virtual environment system computed reflection space 2104 as described above so that it exactly overlays physical space 2103.
- virtual representation 2305 appears to be inside printer 2303 when printer 2303 is viewed through transflective mirror 2207.
- because virtual representation 2305 is generated on projection plane 111 according to the positions of printer 2303, physical eye 2115, and mirror 2117, mirror 2117 can be moved by the user and the virtual cartridge will always appear inside printer 2303.
- Virtual arrow 2307, which shows the direction in which the printer's cartridge must be moved to remove it from printer 2303, is another example of augmentation. Like the virtual cartridge, it is produced on projection plane 111. Of course, with this technique, anything which can be produced on projection plane 111 can be used to augment a real object.
- the normal/inverse reflection must be applied to every aspect of graphical element 2127, including vertices, normals, clipping planes, textures, light sources, etc., as well as to the physical eye position and virtual head-lights. Since these elements are usually difficult to access, hidden below some internal data structure (generation functions, scene graphs, etc.), and an iterative transformation would be too time-intensive, we can express the reflection as a 4x4 transformation matrix. Note that this complex transformation cannot be approximated with an accumulation of basic transformations (such as translation, rotation and scaling).
- every graphical element will be reflected with respect to the mirror-plane.
- a side-effect of this is that the order of polygons will also be reversed (e.g. from counterclockwise to clockwise), which, due to the wrong front-face determination, results in wrong rendering (e.g. lighting, culling, etc.). This can easily be solved by explicitly reversing the polygon order.
- Any complex graphical element (normals, material properties, textures, text, clipping planes, light sources, etc.) is reflected by applying the reflection matrix, as shown in the pseudo-code above.
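- the pseudo-code referred to above is not reproduced here; the following sketch shows one standard way to build such a 4x4 reflection matrix for a mirror plane given by a unit normal n and offset d, together with the explicit reversal of the polygon winding order (e.g. in OpenGL):

```python
import numpy as np

def reflection_matrix(n, d):
    """4x4 homogeneous reflection about the mirror plane {x : n.x = d}, n a unit
    normal.  Applying it to vertices, normals, clip planes, light and eye
    positions reflects the whole scene in the mirror plane."""
    n = np.asarray(n, dtype=float)
    m = np.eye(4)
    m[:3, :3] -= 2.0 * np.outer(n, n)   # linear part: I - 2 n n^T
    m[:3, 3] = 2.0 * d * n              # translation part: 2 d n
    return m

# The reflection reverses the polygon winding order, so front-face determination
# must be flipped explicitly, e.g. in OpenGL:
#   glFrontFace(GL_CW)   # instead of the default GL_CCW
print(reflection_matrix([1, 0, 0], 1.0) @ np.array([0, 0, 0, 1.0]))  # -> [2. 0. 0. 1.]
```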
- Virtual reality system program 109 in system 101 is able to deal with inputs giving the user's eye positions and locations, together with position and orientation inputs from transparent pad 123 to make pad image 125, with position and orientation inputs from pen 121 to make projected pen 127, with inputs from pen 121 as applied to pad 123 to perform operations on the virtual environment, and with position and orientation inputs from a mirror to operate on the virtual environment so that the mirror reflects the virtual environment appropriately for the mirror's position and orientation and the eye positions. All of these inputs are shown at 115 of FIG. 1. As also shown at 113 in FIG. 1, the resulting virtual environment is output to virtual table 111.
- FIG. 19 provides an overview of major components of program 109 and their interaction with each other.
- the information needed to produce a virtual environment is contained in virtual environment description 1933 in memory 107.
- virtual environment generator 1943 reads data from virtual environment description 1933 and makes stereoscopic images from it. Those images are output via 113 for back projection on table surface 111.
- Pad image 125 and pen image 127 are part of the virtual environment, as is the portion of the virtual environment reflected by the mirror, and consequently, virtual environment description 1933 contains a description of a reflection (1937), a description of the pad image (1939), and a description of the pen image (1941).
- Virtual environment description 1933 is maintained by virtual environment description manager 1923 in response to parameters 1913 indicating the current position and orientation of the user's eyes, parameters 1927, 1929, 1930, and 1931 from the interfaces for the mirror (1901), the transparent pad (1909), and the pen (1919), and the current mode of operation of the mirror and/or pad and pen, as indicated in mode specifier 1910.
- Mirror interface 1901 receives mirror position and orientation information 1903 from the mirror, eye position and orientation information 1805 for the mirror's viewer, and if a ray tool is being used, ray tool position and orientation information 1907.
- Mirror interface 1901 interprets this information to determine the parameters that virtual environment description manager 1923 requires to make the image to be reflected in the mirror appear at the proper point in the virtual environment and provides the parameters (1927) to manager 1923, which produces or modifies reflection description 1937 as required by the parameters and the current value of mode 1910. Changes in mirror position and orientation 1903 may of course also cause mirror interface 1901 to provide a parameter to which manager 1923 responds by changing the value of mode 1910. The other interfaces work in much the same fashion.
- Transparent pad interface 1909 receives position and orientation information 1911 from transparent pad 123, the position 1915 of the point of pen 121, and the state 1917 of pen 121's button, and interprets this information to provide pad image parameters 1929 to virtual environment description manager 1923, which manager 1923 can interpret to determine the part of the virtual environment upon which pad image 125 is to appear and the mode of appearance of pad image 125.
- the pad image parameters 1929 specify the gestures and the pen strokes that make them up.
- Virtual environment description manager passes the gesture and pen stroke specifications to gesture manager 1925, which uses gesture descriptions 1935 to interpret them and return the results of the interpretation to manager 1923. If transparent pad 123 is operating in gesture learning mode, gesture manager 1925 adds descriptions of the gestures and their meanings to gesture descriptions 1935.
- Pen interface 1919 provides the information to manager 1923 which manager 1923 needs to make projected pen 127.
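- the following skeleton (class and method names are hypothetical, chosen only to mirror the description above) illustrates how the device interfaces feed parameters to virtual environment description manager 1923, which maintains descriptions 1937, 1939 and 1941 and consults gesture manager 1925 for strokes made on pad 123:

```python
class VirtualEnvironmentDescriptionManager:
    """Maintains the reflection, pad-image and pen-image descriptions in
    response to parameters from the device interfaces (hypothetical skeleton)."""
    def __init__(self, description, gesture_manager):
        self.description = description          # virtual environment description 1933
        self.gestures = gesture_manager          # gesture manager 1925
        self.mode = "head_tracking"              # mode specifier 1910

    def on_mirror_params(self, params):
        self.description["reflection"] = params  # reflection description 1937

    def on_pad_params(self, params):
        if "strokes" in params:                  # gestures made on transparent pad 123
            params["meaning"] = self.gestures.interpret(params["strokes"])
        self.description["pad_image"] = params   # pad image description 1939

    def on_pen_params(self, params):
        self.description["pen_image"] = params   # pen image description 1941
```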
- the device that is used to view the portion of the virtual environment to be modified may be the mirror or transparent panel used in the preferred embodiment, but may also be any other device which can be used to view a portion of the virtual environment.
- one example of such a device is a simple wire frame.
- Another is a touch-sensitive transparent tablet.
- a device that is both reflective and transparent may be implemented as described in the preferred embodiment or using any other technique, for example, a panel that becomes reflective or transparent in response to electrical inputs.
- the transflective panel can be used as described to augment real objects or to deal with occlusion of virtual objects by real objects, but can also be used generally where it is useful to introduce a virtual object into an environment seen through the transflective panel.
- where a tool is always transparent, the tool can be reduced to a handle, with the image of the tool being computed from the position and location of the user's eyes, the position and orientation of the handle, and parameters defining the tool's type and size.
- the portion may be modified in any way which is useful for the particular task or virtual environment and may include any kind of object which is useful for the particular task or virtual environment.
- the tools are consequently completely polymorphic.
- Mode and operation specification inputs may be the position, button selection, and gesture inputs described herein, but also any other kind of user input that the virtual environment will accept, for example voice input.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Processing Or Creating Images (AREA)
Abstract
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU18427/00A AU1842700A (en) | 1999-04-22 | 1999-12-07 | Tools for interacting with virtual environments |
| US09/959,087 US6842175B1 (en) | 1999-04-22 | 1999-12-07 | Tools for interacting with virtual environments |
| PCT/US2001/018327 WO2001095061A2 (fr) | 1999-12-07 | 2001-06-06 | Table virtuelle etendue: rallonge optique pour systemes de projection de type table |
| PCT/US2001/025186 WO2002015110A1 (fr) | 1999-12-07 | 2001-08-10 | Vitrines d'exposition virtuelle |
Applications Claiming Priority (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13059099P | 1999-04-22 | 1999-04-22 | |
| US60/130,590 | 1999-04-22 | ||
| US13700799P | 1999-06-01 | 1999-06-01 | |
| US60/137,007 | 1999-06-01 | ||
| US15356799P | 1999-09-13 | 1999-09-13 | |
| US60/153,567 | 1999-09-13 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2000065461A1 true WO2000065461A1 (fr) | 2000-11-02 |
Family
ID=27384023
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US1999/028930 Ceased WO2000065461A1 (fr) | 1999-04-22 | 1999-12-07 | Outils d'interaction avec des environnements virtuels |
Country Status (2)
| Country | Link |
|---|---|
| AU (1) | AU1842700A (fr) |
| WO (1) | WO2000065461A1 (fr) |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2002087227A3 (fr) * | 2001-04-20 | 2003-12-18 | Koninkl Philips Electronics Nv | Appareil d'affichage et image codee destinee a etre presentee sur un tel appareil |
| US6803928B2 (en) | 2000-06-06 | 2004-10-12 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Extended virtual table: an optical extension for table-like projection systems |
| EP1507192A3 (fr) * | 2003-06-09 | 2007-06-20 | Microsoft Corporation | Détection d'un geste d'attente en examinant des paramètres associés à la motion du stylo |
| CN106445345A (zh) * | 2016-09-30 | 2017-02-22 | 北京金山安全软件有限公司 | 一种悬浮窗显示方法、装置及电子设备 |
| EP3020026A4 (fr) * | 2013-07-10 | 2017-05-31 | Samsung Electronics Co., Ltd. | Procédé et appareil permettant d'appliquer un effet graphique dans un dispositif électronique |
| CN110099649A (zh) * | 2016-12-19 | 2019-08-06 | 爱惜康有限责任公司 | 具有用于工具致动的虚拟控制面板的机器人外科系统 |
| US10554931B1 (en) | 2018-10-01 | 2020-02-04 | At&T Intellectual Property I, L.P. | Method and apparatus for contextual inclusion of objects in a conference |
| CN115830112A (zh) * | 2022-11-25 | 2023-03-21 | 之江实验室 | 一种基于手持实物的混合现实交互方法和系统 |
| CN116540872A (zh) * | 2023-04-28 | 2023-08-04 | 中广电广播电影电视设计研究院有限公司 | Vr数据处理方法、装置、设备、介质及产品 |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5684943A (en) * | 1990-11-30 | 1997-11-04 | Vpl Research, Inc. | Method and apparatus for creating virtual worlds |
| US5917495A (en) * | 1995-11-30 | 1999-06-29 | Kabushiki Kaisha Toshiba | Information presentation apparatus and method |
1999
- 1999-12-07 AU AU18427/00A patent/AU1842700A/en not_active Abandoned
- 1999-12-07 WO PCT/US1999/028930 patent/WO2000065461A1/fr not_active Ceased
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5684943A (en) * | 1990-11-30 | 1997-11-04 | Vpl Research, Inc. | Method and apparatus for creating virtual worlds |
| US5917495A (en) * | 1995-11-30 | 1999-06-29 | Kabushiki Kaisha Toshiba | Information presentation apparatus and method |
Cited By (17)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6803928B2 (en) | 2000-06-06 | 2004-10-12 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Extended virtual table: an optical extension for table-like projection systems |
| WO2002087227A3 (fr) * | 2001-04-20 | 2003-12-18 | Koninkl Philips Electronics Nv | Appareil d'affichage et image codee destinee a etre presentee sur un tel appareil |
| EP1507192A3 (fr) * | 2003-06-09 | 2007-06-20 | Microsoft Corporation | Détection d'un geste d'attente en examinant des paramètres associés à la motion du stylo |
| EP3020026A4 (fr) * | 2013-07-10 | 2017-05-31 | Samsung Electronics Co., Ltd. | Procédé et appareil permettant d'appliquer un effet graphique dans un dispositif électronique |
| US10134161B2 (en) | 2013-07-10 | 2018-11-20 | Samsung Electronics Co., Ltd. | Method and apparatus for applying graphic effect in electronic device |
| CN106445345A (zh) * | 2016-09-30 | 2017-02-22 | 北京金山安全软件有限公司 | 一种悬浮窗显示方法、装置及电子设备 |
| CN106445345B (zh) * | 2016-09-30 | 2019-06-28 | 北京金山安全软件有限公司 | 一种悬浮窗显示方法、装置及电子设备 |
| US11547494B2 (en) | 2016-12-19 | 2023-01-10 | Cilag Gmbh International | Robotic surgical system with virtual control panel for tool actuation |
| CN110099649A (zh) * | 2016-12-19 | 2019-08-06 | 爱惜康有限责任公司 | 具有用于工具致动的虚拟控制面板的机器人外科系统 |
| US12396631B2 (en) | 2016-12-19 | 2025-08-26 | Cilag Gmbh International | Robotic surgical system with virtual control panel for tool actuation |
| CN110099649B (zh) * | 2016-12-19 | 2022-07-29 | 爱惜康有限责任公司 | 具有用于工具致动的虚拟控制面板的机器人外科系统 |
| US10554931B1 (en) | 2018-10-01 | 2020-02-04 | At&T Intellectual Property I, L.P. | Method and apparatus for contextual inclusion of objects in a conference |
| US11108991B2 (en) | 2018-10-01 | 2021-08-31 | At&T Intellectual Property I, L.P. | Method and apparatus for contextual inclusion of objects in a conference |
| CN115830112A (zh) * | 2022-11-25 | 2023-03-21 | 之江实验室 | 一种基于手持实物的混合现实交互方法和系统 |
| CN115830112B (zh) * | 2022-11-25 | 2023-09-22 | 之江实验室 | 一种基于手持实物的混合现实交互方法和系统 |
| CN116540872A (zh) * | 2023-04-28 | 2023-08-04 | 中广电广播电影电视设计研究院有限公司 | Vr数据处理方法、装置、设备、介质及产品 |
| CN116540872B (zh) * | 2023-04-28 | 2024-06-04 | 中广电广播电影电视设计研究院有限公司 | Vr数据处理方法、装置、设备、介质及产品 |
Also Published As
| Publication number | Publication date |
|---|---|
| AU1842700A (en) | 2000-11-10 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US6842175B1 (en) | Tools for interacting with virtual environments | |
| Zeleznik et al. | SKETCH: An interface for sketching 3D scenes | |
| O'Hagan et al. | Visual gesture interfaces for virtual environments | |
| US8643569B2 (en) | Tools for use within a three dimensional scene | |
| US6091410A (en) | Avatar pointing mode | |
| US7750911B2 (en) | Pen-based 3D drawing system with 3D mirror symmetric curve drawing | |
| Kolsch et al. | Multimodal interaction with a wearable augmented reality system | |
| Shaw et al. | Two-handed polygonal surface design | |
| Millette et al. | DualCAD: integrating augmented reality with a desktop GUI and smartphone interaction | |
| EP0856786A2 (fr) | Dispositif et méthode d'affichage de fenêtres | |
| Thomas et al. | Spatial augmented reality—A tool for 3D data visualization | |
| Kim et al. | Tangible 3D: Hand Gesture Interaction for Immersive 3D Modeling. | |
| CN112527112B (zh) | 一种多通道沉浸式流场可视化人机交互方法 | |
| JP2007323660A (ja) | 描画装置、及び描画方法 | |
| Encarnaĉão et al. | A Translucent Sketchpad for the Virtual Table Exploring Motion‐based Gesture Recognition | |
| WO2000065461A1 (fr) | Outils d'interaction avec des environnements virtuels | |
| Kruszyński et al. | Tangible props for scientific visualization: concept, requirements, application | |
| Kiyokawa et al. | A tunnel window and its variations: Seamless teleportation techniques in a virtual environment | |
| US11694376B2 (en) | Intuitive 3D transformations for 2D graphics | |
| Oh et al. | A system for desktop conceptual 3D design | |
| De Amicis et al. | Parametric interaction for cad application in virtual reality environment | |
| JP2025131492A (ja) | 3次元カーソルによる仮想タッチ方法、記憶媒体及びチップ | |
| Song et al. | Real-time 3D finger pointing for an augmented desk | |
| Serra et al. | Interaction techniques for a virtual workspace | |
| Kolaric et al. | Direct 3D manipulation using vision-based recognition of uninstrumented hands |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| ENP | Entry into the national phase |
Ref country code: AU Ref document number: 2000 18427 Kind code of ref document: A Format of ref document f/p: F |
|
| AK | Designated states |
Kind code of ref document: A1 Designated state(s): AU CA CN JP SG US |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
| WWE | Wipo information: entry into national phase |
Ref document number: 09959087 Country of ref document: US |
|
| 122 | Ep: pct application non-entry in european phase |