HK1030667A - Method of sequencing computer controlled tasks based on the relative spatial location of task objects in a directional field - Google Patents
- Publication number
- HK1030667A (application HK01101620.5A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- task
- objects
- master
- sequence
- directional
- Prior art date
Description
Technical Field
The present invention relates generally to graphical user interfaces for programmed computers, and more particularly to graphical user interfaces for sequencing tasks to be performed by a computer.
Background
The traditional method of sequencing a series of tasks is to employ a program, script, or Graphical User Interface (GUI). A program or script comprises a sequence of instructions that direct a computer to perform a specific task. However, most programming and scripting languages require an understanding of the programming method and syntax of the language used. Most computer users do not have the time or skill required to make these tools useful, so this approach is not feasible for them. Another problem with this method of ordering tasks is that the code must be rewritten whenever the task order is to be modified.
In a GUI, the tasks to be performed may be represented as objects. After defining the default properties of the objects, the user must manually connect the objects in the desired order to define the task sequence. This approach is time consuming and prone to error. For example, many GUI-based programs order objects by having the user manually select a first object, referred to as a source object, manually select a second object, referred to as a target object, and generate a link between the two. This link is typically displayed as a line extending between the source object and the target object, often with an arrow indicating the sequencing direction. The process is repeated to build a continuous chain in which each target object becomes the source object for the next link. Once constructed, the entire sequence or chain can be triggered and the underlying tasks performed in the order specified by the user.
This task sequencing method is described in "Graphic Process Capability," IBM Technical Disclosure Bulletin, Vol. 33, No. 11 (April 1991). The article describes a general method of connecting iconic representations of objects and ordering their respective tasks in a graphical user interface. Many software programs and program-generation tools, commonly referred to as Rapid Application Development (RAD) programs, use this approach. Examples of these tools and strategies for their use can be found in Steve McConnell, Rapid Development: Taming Wild Software Schedules (Microsoft Press, 1996) and James Kobielus, Workflow Strategies (IDG Books Worldwide, 1997).
One inherent problem with this approach becomes apparent as the number of objects in a sequence grows. Because prior art methods require explicit user action to associate and link each object to the next, any reordering that requires removing existing links or generating new ones is very time consuming and error prone. Complex task sequences must often be modified to meet the rapidly changing needs of the user or organization. If the user repositions any linked object in the sequence, subsequent tasks or links related to the moved object may be affected; that is, the ordering of a particular object may depend on the tasks performed by preceding objects. In this case, the user must also change all subsequent related links to achieve the desired reordering. This is extremely time consuming and error prone in prior art programs.
Another problem with prior art methods of ordering tasks is the amount of information displayed to the user as the number of objects and their associated links grows. With a large number of objects, the user is easily overwhelmed by the density of the visual information. Some prior art programs attempt to address this problem by allowing the user to selectively hide certain links (see, for example, Lazar, "Hello, World! ParcPlace-Digitalk's Parts for Java 1.0," Software Development, February 1997). However, none has provided a way to represent objects or links that have been hidden.
In prior art programs regarding workflow or task ordering, there is no correlation between the position of the task object in the graphical user interface and the order of the tasks in the sequence. This lack of correlation is very confusing for the end user. In fact, everyone regardless of cultural background has an innate understanding of patterns. Prior art programs do not take advantage of the human innate understanding of patterns, which makes these programs more difficult to learn and use.
Accordingly, there is a need for an improved graphical user interface for sequencing computer-controlled tasks in a more efficient and less time-consuming manner.
Summary of the Invention
The invention provides a graphical method for sequencing computer-controlled tasks. According to the invention, computer-controlled tasks are represented as objects in a graphical user interface. The task objects are placed by the user in a directional field in the user interface. The directional field includes a directional attribute, represented in the user interface by a directional indicator. The directional attribute specifies how the order of the tasks in the field is determined. When the sequence is executed, the tasks are automatically ordered by the computer according to the relative positions of the objects in the directional field and the field's directional attribute. The user does not need to explicitly link one object to another; rather, the links may be automatically generated when the sequence is executed.
The user may modify the task order in one of two ways. First, the order of tasks may be changed by moving objects in the directional field to change the relative positions of the objects. Second, the directional property of the directional field may be changed to change the order. The links between objects will be dynamically regenerated the next time the sequence is executed.
In the preferred embodiment of the invention, the links between objects are drawn on the screen as the sequencing is performed. A link appears on the interface as a line connecting two objects together. The links form a geometric pattern that provides information about the underlying sequence regardless of the particular application selected for use by any individual, organization, or group. For example, the pattern reflects both the ordering of the tasks that comprise the sequence and the spatial preferences of the individual user who generated the sequence.
These patterns rely on the innate ability, common to all people, to perceive and recognize patterns. Innate human pattern recognition provides the basis for a uniform method of generating and ordering a sequence of operations. A knowledgeable user who changes from one work environment to another, such as from research to statistics, will adapt to the new work more quickly because the patterns represent information loosely coupled to the underlying sequence tasks. Also, if an additional object is placed in an existing sequence, the user will intuitively know how to include the object in the chain without having to manually draw a link connecting the new object to an existing object. Such a system not only provides reliable information in the form of geometric patterns naturally generated from the sequencing, but also allows meaning to be conveyed between individuals without a common language or shared cultural background.
The invention seeks to improve computer use by providing a system for interacting with a computer that is more natural for humans. The directional field indicator allows a knowledgeable user to intuitively estimate the correct location of an object within the directional field to achieve a desired result. The user does not need to explicitly generate links between objects; rather, the links may be dynamically generated as the sequence is executed. Eliminating the need to explicitly generate links makes sequence generation more efficient and allows the user to focus on the sequence as a whole.
One of the main advantages of the present invention is the improvement in the efficiency of generating complex task sequences. There is no longer a need to manually generate or modify links as in prior art programs. A user of any skill level, novice or experienced, can quickly and easily modify the sequence by rearranging the objects in the directional field. Fewer actions are required since the user does not need to explicitly define the links between objects.
Another advantage of the present invention resides in the loose coupling between the spatial arrangement of objects in the user interface and the resulting geometric pattern formed by the lines connecting the objects together. These geometric patterns provide the user with information about the nature of the underlying sequence regardless of the particular application used. The user can easily interpret the meaning of a pattern without prior knowledge of the context in which the application was initially built. Also, the user can easily transfer patterns from one environment to the next without having to spend time and effort gaining expertise in the particular program that created the initial patterns. A knowledgeable user who changes from one program to another will adapt faster without retraining to learn the nuances of the new program.
Another advantage is that the present invention allows expertise to be transferred through task sequence patterns. The geometric patterns carry expert knowledge beyond the context in which they were generated. Thus, a knowledgeable user can more easily understand and modify a sequence generated by another.
The present invention may, of course, be carried out in other specific ways than those herein set forth without departing from the spirit and essential characteristics of the invention. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.
Brief description of the drawings
Fig. 1 illustrates a plurality of task objects and a single master object arranged in a directional field whose directional attribute is set to UPPER RIGHT TO LOWER LEFT.
Fig. 2 is a diagram illustrating a task sequence pattern resulting from the object arrangement in fig. 1.
FIG. 3 illustrates the same object as shown in FIG. 1, wherein one task object has been moved.
Fig. 4 shows a task sequence of the object arrangement shown in fig. 3.
FIG. 5 shows the same objects as shown in FIG. 1, wherein the directional attribute has been changed to UPPER LEFT TO LOWER RIGHT.
Fig. 6 shows a task sequence of the object arrangement shown in fig. 5.
Fig. 7 is a diagram illustrating two observers observing one task sequence pattern.
FIG. 8 shows a plurality of task objects and a master object placed in a directional field, where the master object has a limited area of influence.
FIG. 9 shows the same task objects and master object as shown in FIG. 8, where the master object and its associated area of influence have been moved.
FIG. 10 shows a plurality of task objects and two master objects placed in a directional field, where each master object has a limited area of influence and each area of influence has its interaction attribute set to NONE.
FIG. 11 shows the same task objects and master objects as shown in FIG. 10, however, the interaction attribute for each master object's area of influence is set to CALL OTHER MASTER.
Fig. 12 shows the same task objects and master objects as shown in fig. 10, however, the interaction attribute of each master object's area of influence is set to CALL ALL OBJECTS.
FIG. 13 shows a plurality of different types of task objects and a plurality of type-specific master objects placed in a directional field.
FIG. 14 is a flow chart depicting the sequencing method of the present invention.
FIG. 15 is a diagram illustrating a typical user interface of a software program incorporating the sequencing method of the present invention.
Fig. 16 is a diagram illustrating the respective spatial sequence indicators for a 2-dimensional directional field.
FIG. 17 is a diagram illustrating a 3-dimensional directional field having a plurality of task objects and a master object with limited areas of influence.
FIG. 18 is an exterior view of an outside-inside menu object.
FIG. 19 is an inside view of an outside-inside menu object.
Fig. 20 is a diagram illustrating a virtual office incorporating the sequencing method of the present invention.
Detailed description of the invention
Referring now to the drawings, and in particular to FIG. 1, the method of sequencing computer controlled tasks of the present invention is illustrated in greater detail. As shown in FIG. 1, the method is implemented via a user interface 10, wherein computer-controlled tasks are graphically represented on a computer display as task objects 14 in a spatial field 12. The tasks are automatically ordered by the computer according to the relative positions of the task objects 14 in the spatial field 12. The spatial field 12 has a directional attribute specifying how the order of tasks is determined. To use this method, the user generates task objects and places them in the spatial field 12. The task objects 14 represent specific tasks in a sequence or process. A task object may be represented as a button or icon that informs the user of the task associated with the particular object instance.
Once a task object 14 is generated, or instantiated, its default behavior or functionality is set by the user. The behavior of the task object 14 may be set, for example, by a property page that provides access to the various properties and methods included with the task object. By setting or changing the properties of the task object 14, the user may specify the function or task that the task object 14 performs. The task object 14 may represent virtually any task that can be executed or controlled by a computer. For example, a task object 14 may be used to execute other programs on a computer or to send keystrokes to an application. For present purposes, it is sufficient to understand that, once a task object 14 has been instantiated, the properties that affect its basic functionality are accessible to the user.
A computer-controlled process or procedure typically includes a plurality of tasks, represented in a user interface as a series of task objects 14. Tasks represented as task objects 14 are automatically ordered and executed by the computer. As each task is executed, a line, referred to as a sequence line 20, may be drawn between each task object 14 in the sequence. The sequence lines 20 extend between various points on the object, referred to herein as object location points 24. In the example given, the object location point 24 is located in the upper left corner of each object in the user interface 10. However, the object location point 24 may also be placed at a location outside the object. The object location points 24, whether inside or outside, are used by the computer to determine the sequence of task objects 14. That is, the position of the object location point 24 is used to determine the ordering of the objects. When the sequence is initiated, the sequence line 20 is drawn from the object position point 24 of one task object 14 to the object position point 24 of the next task object 14 in the sequence. Thus, it can be readily seen that the sequence line 20 also serves as a form of progress indicator. As shown in fig. 2, the sequence line 20 forms a pattern, referred to herein as a task sequence pattern 22.
The user may order the computer-controlled tasks in one of two ways. First, the task sequence can be changed by moving the corresponding task object 14 in the spatial field 12 to a new position. That is, the relative positions of the tasks in the sequence can be changed by merely moving the iconic representation of the tasks (i.e., task objects) in the spatial field 12. Once a task object 14 is moved within the spatial field 12, the computer automatically reorders the tasks without the user having to explicitly reconnect the object.
A second way of changing the task sequence is to change the directional attribute of the spatial field 12. The directional attribute specifies how the tasks are ordered according to their positions in the spatial field 12. For example, in a two-dimensional spatial field 12, the directional attribute may specify that tasks are ordered from top right to bottom left according to the positions of the corresponding task objects in the spatial field 12. If the directional attribute changes to specify a bottom-right-to-top-left sequence, the task order will change even if all task objects 14 remain in the same positions. In the present invention, the directional attribute is represented by an icon, called a spatial sequence indicator 18, displayed on the user interface.
The directional attribute of the spatial field is set by accessing its property page, for example by right-clicking in the spatial field 12. The property page allows a user to set properties of the spatial field, one of which is the directional attribute. This attribute specifies how the objects placed in the spatial field are ordered. In the disclosed 2D embodiment, the directional attribute has six possible settings, each represented by a different spatial sequence indicator as shown in fig. 16. The values of the directional attribute are UPPER LEFT TO LOWER RIGHT, LOWER RIGHT TO UPPER LEFT, LOWER LEFT TO UPPER RIGHT, UPPER RIGHT TO LOWER LEFT, CURRENT POSITION OUTWARD, and OUTER MOST POINTS INWARD. The spatial field 12 may also have other properties, such as color or font, which may be set by the user.
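To make the six settings concrete, the following sketch expresses each directional value as a sort key over object location points. This is an illustration, not the patent's implementation: the function name, the screen-coordinate convention, the tie-breaking rule (horizontal position before vertical, as described for fig. 1), and the treatment of the two distance-based settings are all assumptions.

```python
import math

def sort_objects(objects, direction, origin=(0, 0)):
    """Order (name, x, y) tuples by an assumed directional attribute.

    Screen coordinates are assumed: x grows rightward, y grows downward.
    For the four corner-to-corner settings, horizontal position takes
    priority over vertical position.
    """
    keys = {
        "UPPER LEFT TO LOWER RIGHT": lambda o: (o[1], o[2]),
        "LOWER RIGHT TO UPPER LEFT": lambda o: (-o[1], -o[2]),
        "LOWER LEFT TO UPPER RIGHT": lambda o: (o[1], -o[2]),
        "UPPER RIGHT TO LOWER LEFT": lambda o: (-o[1], o[2]),
        # The last two settings are read here as ordering by distance
        # from a reference point (an assumption).
        "CURRENT POSITION OUTWARD": lambda o: math.hypot(o[1] - origin[0],
                                                         o[2] - origin[1]),
        "OUTER MOST POINTS INWARD": lambda o: -math.hypot(o[1] - origin[0],
                                                          o[2] - origin[1]),
    }
    return sorted(objects, key=keys[direction])
```

Changing only the `direction` argument reorders the same set of objects without moving any of them, which is the second reordering method described above.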
In fig. 1, six task objects 14 are placed in the spatial field 12, namely object 1, object 2, object 3, object 4, object 5 and object 6. The directional attribute specifies a sort order of UPPER RIGHT TO LOWER LEFT. In this sort order, horizontal position has priority over vertical position. When the process is initiated, the computer executes the tasks represented by the task objects 14 in the spatial field 12. The sequence is initiated by a triggering event. In the preferred embodiment of the present invention, execution of the sequence is triggered using a master object 16. A master object 16 is an object whose purpose is to activate the default behavior of one or more task objects 14. In the disclosed embodiment, the master object 16 initiates the sequence when it is "clicked".
The order of the tasks is determined by the relative positions of the task objects in the spatial field 12 and the field's directional attribute. In a preferred embodiment of the present invention, a sequence line 20 is drawn from one task object 14 to the next as each task is performed. Preferably, when the task represented by the preceding object is completed, a sequence line 20 is drawn connecting the two task objects 14. In the example given, object 2 is executed first, followed by object 1, object 3, object 4, and object 6. When the task associated with object 2 is completed, a sequence line 20 is drawn from object 2 to object 1. This process is repeated until the last task in the sequence is completed. The task sequence pattern 22 for the sequence shown in fig. 1 is illustrated in fig. 2.
It should be noted that object 5 is not included in the sequence in fig. 1. Each task object 14 has an attribute, referred to as the include attribute, that causes the task object 14 to be either included or excluded, depending on its setting. When the include attribute is set to "YES," the object is included in the sequence. Conversely, when the include attribute is set to "NO," the object is excluded. The include attribute is set via the property page of the task object.
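The include attribute can be sketched as a simple filter applied before sequencing. The dictionary keys, the `eligible` function name, and the default of including objects that lack the attribute are illustrative assumptions:

```python
def eligible(task_objects):
    """Keep only task objects whose include attribute is YES (True)."""
    return [obj for obj in task_objects if obj.get("include", True)]

# Object 5's include attribute is NO, so it drops out of the
# sequence, mirroring the situation described for fig. 1.
tasks = [
    {"name": "object 4", "include": True},
    {"name": "object 5", "include": False},
    {"name": "object 6", "include": True},
]
```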
FIG. 3 shows the same task objects 14 as shown in FIG. 1, but the relative position of the first task object 14 has changed. In this example, the order of execution of the tasks has changed. More specifically, object 2 executes first, then object 3, object 4, object 6, and finally object 1. It should also be noted that the task sequence pattern 22 as shown in fig. 4 is formed differently than in fig. 1. When an end user works with a particular task sequence pattern 22, a natural correlation is made by the user between the task sequence pattern 22 and the underlying operation. Once the correlation is made, the task sequence pattern 22 provides a more intuitive modeling environment.
Fig. 5 shows the same task objects 14 in the same positions as shown in fig. 1, but the directional attribute of the spatial field 12 has been changed to specify an UPPER LEFT TO LOWER RIGHT sequence. In this example, the order of execution of the tasks has been affected and a different task sequence pattern 22 is generated. More specifically, the new sequence starts from object 1 and then proceeds in order to object 2, object 3, object 4, and object 6. Fig. 6 shows the task sequence pattern 22 for this sequence. This example demonstrates how the sequence of tasks can be changed without changing the relative positions of the objects in the spatial field 12.
An important feature of the present invention is the task sequence pattern 22 that is generated when the tasks represented by the objects are sequenced. The pattern 22, while unique to the particular environment in which it is generated, hides the underlying complexity of the particular objects and their representation and allows the viewer to focus on the sequence itself. By generating a higher, more general level of abstraction, the task sequence patterns 22 take advantage of commonly shared human perceptual abilities and enable people to interact at a more natural, intuitive level than previously possible. Fig. 7 shows two users from different backgrounds observing a sequence pattern that hides the underlying objects. The sequence pattern enables the two users to understand and communicate ideas regardless of cultural differences.
In the foregoing discussion, it has been assumed that all task objects 14 in the spatial field 12 are controlled by a master object 16. However, in a preferred embodiment of the present invention, an area of influence 26 may be defined for the master object 16. When the scope of the master object 16 is limited in this way, only those task objects 14 that fall within the area of influence 26 of the master object 16 can be controlled by the master object 16. The area of influence 26 of the master object 16 is represented by a line of demarcation on the computer display. By default, the area of influence 26 is unrestricted.
Figs. 8 and 9 illustrate how the area of influence 26 may be used in sequencing. In fig. 8, five task objects 14 and one master object 16 are placed in the spatial field 12. The area of influence 26 of the master object 16 is shown by a boundary line, which is visible on the user interface 10. Two task objects 14, object 1 and object 2, fall within the area of influence 26 of the master object 16. Object 3, object 4 and object 5 are located outside the area of influence 26 of the master object 16. When the master object 16 is triggered, object 1 and object 2 are both included in the sequence. Object 3, object 4, and object 5 are excluded because they are outside the area of influence 26 of the master object 16.
FIG. 9 shows the same spatial arrangement of task objects 14 as shown in FIG. 8, but with the area of influence 26 of the master object 16 having moved to the right. By moving the area of influence 26, tasks represented by object 1 and object 2 are excluded, and tasks represented by object 3, object 4, and object 5 are included. The new execution order is object 3, object 4, and object 5.
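The containment test behind figs. 8 and 9 can be sketched as follows. Modelling the area of influence as an axis-aligned rectangle, the function names, and the coordinates in the layout are all assumptions made for illustration; the patent only shows the boundary as a line on screen.

```python
def within_influence(point, area):
    """True if an object's location point lies inside the area.

    `area` is (left, top, right, bottom) in screen coordinates and
    `point` is an (x, y) object location point.
    """
    x, y = point
    left, top, right, bottom = area
    return left <= x <= right and top <= y <= bottom

def controlled_objects(objects, area):
    """Names of the task objects a master object can control."""
    return [name for name, point in objects if within_influence(point, area)]

# A layout loosely mirroring figs. 8 and 9 (coordinates invented).
layout = [("object 1", (2, 2)), ("object 2", (8, 4)),
          ("object 3", (22, 3)), ("object 4", (25, 8)),
          ("object 5", (28, 2))]
```

Moving the area rectangle, as in fig. 9, changes which objects the master controls without moving any task object.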
The method of the present invention for sequencing computer controlled tasks supports a plurality of master objects 16, each having its own area of influence 26. The present invention also supports interactions between master objects 16. Each master object 16 has attributes that can be set by the user. One attribute, mentioned above, is the directional attribute. Another attribute of the master object 16 that may be set by the user is a mode attribute. The mode attribute specifies the interaction mode between one master object 16 and other master objects 16. In the present invention, there are three modes: NONE, CALL OTHER MASTER, and CALL ALL OBJECTS. If the interaction mode is set to NONE, the master object 16 orders the task objects 14 within its own area of influence 26, while ignoring the task objects 14 outside it. If the interaction mode is set to CALL OTHER MASTER, the master object 16, once it has sequenced the task objects 14 within its own area of influence 26, triggers other master objects 16 whose mode attributes are set appropriately. The calling master object 16 must be specified by name or type in the property page of the called master object 16. If the interaction mode attribute is set to CALL ALL OBJECTS, the master object 16 processes all task objects 14 in the entire spatial field 12 as if they were all within its own area of influence 26. In this case, all objects are ordered as if the master object 16 had an unlimited scope.
FIGS. 10-12 show interactions between different master objects 16. FIG. 10 shows two master objects with intersecting areas of influence 26. Each master object 16 has two task objects 14. Object 1 and object 2 belong to master 1. Object 3 and object 4 belong to master 2. When master 1 is triggered, the tasks represented by object 1 and object 2 are performed. Likewise, when the second master object, master 2, is triggered, the tasks represented by object 3 and object 4 are performed. It should be noted that even though object 2 appears to fall within the area of influence 26 of master 2, object 2 is not triggered by master 2. This is because object 2 is a child of master 1 and not of master 2.
FIG. 11 shows the same master objects 16 and task objects 14 as shown in FIG. 10, but with the interaction mode attribute of each master object 16 set to CALL OTHER MASTER. In this case, when the first master object 16, master 1, is triggered, the tasks represented by object 1 and object 2 are executed. When master 1 completes its ordering of the objects within its area of influence 26, it calls the second master object 16, master 2. Note that master 2 must have its reaction attribute set to respond to master 1. Master 2 then orders and executes the tasks represented by the objects within its area of influence 26. Specifically, master 2 causes the tasks associated with object 3 and object 4 to be performed.
The CALL OTHER MASTER setting allows a master object to respond to other master objects 16 as if it were a task object 14. All master objects 16 have a reaction attribute that can be set by the user so that one master object 16 responds to other master objects by type, or to a particular master object by name. The reaction attribute identifies, by type or name, the particular master objects 16 to which the object will respond. By setting this attribute, one master object 16 can be called by another master object 16. A master object 16 is referred to as a slave master object when its default behavior is triggered by another master object 16 on its reaction list. The reaction list serves as a safety measure in large programs, where there may be hundreds of master objects and task objects.
Fig. 12 shows the same master objects 16 and task objects 14 as shown in figs. 10 and 11, but in fig. 12 the mode attribute of each master object 16, i.e., master 1 and master 2, is set to CALL ALL OBJECTS. In this case, when either master object 16, i.e., master 1 or master 2, is executed, all objects in the areas of both master objects 16 are ordered as if they all belonged to a single unrestricted area of influence. In this example, the sequence would be object 4, object 1, object 2, and object 3. Since both master objects 16 have the same directional attribute, the same sequence is triggered regardless of which master object 16 starts the sequence. But if the directional attribute of master 2 differed from that of master 1, the sequence would change. Thus, when the interaction attribute is set to CALL ALL OBJECTS, all task objects 14 are processed as if they all belonged to a single unrestricted area of influence.
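The three interaction modes discussed in figs. 10-12 can be sketched with a toy master-object class. The class layout and attribute names are assumptions; cycles between masters and the reaction-list check described above are deliberately not modelled.

```python
class Master:
    """Toy model of a master object's interaction modes (an assumption,
    not the patent's implementation)."""

    def __init__(self, name, children, mode="NONE", calls=None):
        self.name = name
        self.children = children   # task objects in this area of influence
        self.mode = mode           # NONE, CALL_OTHER_MASTER, CALL_ALL_OBJECTS
        self.calls = calls         # master triggered after this one finishes

    def trigger(self, all_objects, log):
        if self.mode == "CALL_ALL_OBJECTS":
            # Behave as if the area of influence were unlimited.
            log.extend(all_objects)
            return
        log.extend(self.children)  # sequence objects in this area only
        if self.mode == "CALL_OTHER_MASTER" and self.calls is not None:
            self.calls.trigger(all_objects, log)
```

With master 1 set to CALL_OTHER_MASTER and calling master 2, triggering master 1 sequences its own objects and then master 2's, as in fig. 11.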
In the case where there are many task objects 14 within a particular area of influence, it may be desirable to perform only a subset of the tasks represented by the task objects 14 in that area. In light of the foregoing discussion, one approach is to set the include attribute of the task objects 14 that the user wishes to sequence to YES and the include attributes of all other task objects 14 to NO. However, setting the include attribute in this way can be cumbersome when a large number of task objects 14 is involved.
Another solution to this problem is to use different types of objects to represent the tasks to be performed. The default behavior of a master object 16 is then set to order only task objects 14 of a particular type. Thus, for each type of task object 14, there is a corresponding master object 16 that orders only objects of that particular type. A generic master object orders all objects regardless of their type. In a preferred embodiment of the invention, the different types of objects have different appearances and are therefore easily distinguishable by the end user.
FIG. 13 illustrates how different types of objects are used for task sequencing. FIG. 13 shows four task objects 14 and three master objects 16. The task objects 14 are of two types, type A and type B. There are two type-specific master objects and one generic master object. When the type A master object is clicked, only the type A objects are sequenced, specifically type-A-object 1 and type-A-object 2. Likewise, when the type B master object is clicked, only the type B objects are sequenced, i.e., type-B-object 1 and type-B-object 2. However, when the generic master object is clicked, the object type is ignored and all task objects 14 are sequenced. In this case, the sequence is type-A-object 1, type-B-object 1, type-A-object 2, and type-B-object 2. Where there are many objects of mixed types, it is convenient to be able to sequence only objects of a particular type without having to change the locations of the objects, delete existing objects, or change the include state of the objects.
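The type-based selection of fig. 13 can be sketched as a filter over typed objects. The tuple layout, the function name, and the use of `None` to denote the generic master are illustrative assumptions:

```python
def objects_for_master(task_objects, master_type=None):
    """Task objects a master will sequence.

    A type-specific master (e.g. master_type="A") sequences only
    objects of its own type; a generic master (master_type=None)
    ignores type entirely.
    """
    if master_type is None:
        return [name for name, _ in task_objects]
    return [name for name, obj_type in task_objects if obj_type == master_type]

# The four objects of fig. 13, listed in their sequencing order.
field = [("type-A-object 1", "A"), ("type-B-object 1", "B"),
         ("type-A-object 2", "A"), ("type-B-object 2", "B")]
```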
FIG. 14 is a flow chart illustrating the process by which the computer sequences tasks represented as task objects 14 in the user interface. The process is generally triggered by an event (block 100). In the described embodiment, this event triggers the default behavior of one of the master objects 16. It will be apparent to those of ordinary skill in the art that the master object 16 is not an essential part of the triggering process; other techniques, such as a timer event or an external input, may be used to trigger the sequence. A dynamic data structure is generated (block 102) to store information about objects in the user interface, including the locations of the objects. After the dynamic data structure is generated, a function is called to return the number of objects to be sequenced (block 104). The computer then iterates over all task objects 14 within the area of influence 26 of the master object 16 (block 106). After each iteration, the computer determines whether all objects have been examined (block 108). If not, the attributes of the next object are checked (block 110). Based on the attributes of the object, the computer determines whether to include the particular object in the sequence (block 112). If the object is to be included, the object is added to the dynamic data structure (block 114). After all objects have been examined, the objects listed in the dynamic data structure are sorted according to their spatial locations and the directional attribute of the master object 16 (block 116). After sorting is complete, the objects perform their designated tasks in sequence (block 118). After all tasks have been completed, the process ends.
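The flow of FIG. 14 can be condensed into a short Python sketch. This is a minimal illustration under assumed data structures (all names and types are mine, not the patent's): the include-attribute check of blocks 110-114 and the spatial sort of block 116 are the essential steps.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TaskObject:
    """Hypothetical stand-in for a task object 14."""
    name: str
    x: float
    y: float
    include: bool = True                      # the include attribute (YES/NO)
    task: Callable[[], None] = lambda: None   # the task to be performed

def run_sequence(objects: List[TaskObject], in_area, direction_key):
    """Blocks 100-118 of FIG. 14, condensed.

    in_area       -- predicate modeling the master object's area of influence 26
    direction_key -- sort key encoding the directional attribute
    """
    # Blocks 102-114: build the dynamic data structure from task objects
    # that lie in the area of influence and whose include attribute is YES.
    selected = [o for o in objects if in_area(o) and o.include]
    # Block 116: sort by spatial location per the directional attribute.
    selected.sort(key=direction_key)
    # Block 118: perform the designated tasks in the resulting sequence.
    for o in selected:
        o.task()
    return [o.name for o in selected]
```

For a LEFT TO RIGHT - TOP TO BOTTOM directional attribute, `direction_key=lambda o: (o.y, o.x)` suffices, assuming screen coordinates with y increasing downward.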
Referring now to FIG. 15, a user interface of a Windows-based application is shown. As shown, the interface includes a main window 200, which is made up of a number of standard window components typically found in most Windows applications. The main window 200 includes a frame 202 that surrounds the remainder of the main window 200. A title bar 206 extends along the top of the main window 200. The system menu and application icon 204 are placed at one end of the title bar 206, in the upper left corner of the window. Three title bar buttons are placed at the right end of the title bar 206. The left-most of these buttons is a minimize window button 208, which allows the user to minimize the window. The button to the right of the minimize window button 208 is a maximize window button 210, which allows the user to maximize the main window 200. The rightmost button is a close window button 212 that allows the user to close the main window 200. Along the right side of the window, a vertical scroll bar 214 provides vertical scrolling in the main window 200. Adjacent to the bottom edge of the main window 200 is a horizontal scroll bar 216 for horizontal scrolling. The area enclosed by the main window 200 is referred to as the visible client area 218. The vertical scroll bar 214 and the horizontal scroll bar 216 allow the visible client area 218 to be moved vertically and horizontally to view objects that lie outside the boundaries of the main window 200. A cursor 222 appears in the visible client area 218 and may be manipulated by a mouse or other input device. At the bottom right corner of the main window 200 is a window size control 220 that allows the user to change the window size.
The main functions of the user application are accessed through the menu bar 224 and the tool bar 226. The menu bar 224 is located just below the title bar 206 and provides a number of menu options, such as File and Help. When File is selected, a list of menu options appears (e.g., New, Open, Save, Exit). The Help menu activates a help file. It will be understood by those of ordinary skill in the art that each menu may include many menu items as well as submenus. The menu structure of an application is well known to those of ordinary skill in the art.
Immediately below the menu bar 224 is the tool bar 226. The tool bar 226 generally includes a series of buttons, some of which provide access to the same functionality as the menu bar 224. At the left end of the tool bar 226, for example, are three buttons that duplicate functions of the menu bar 224. Specifically, the file open button 228 opens a standard Windows dialog box for opening a file. The file save button 230 opens a standard Windows dialog box for saving the file. The exit button 232 closes the application.
The remaining buttons on the user interface 10 are arranged in two groups. As will be described below, buttons 240, 242, 244, 246, 248, 250, and 252 are used to instantiate task objects 14 in the user interface 10. Buttons 260 and 262 are used to instantiate master objects 16 in the user interface 10. To instantiate an object in the user interface 10, the user selects a button (typically by clicking the button with a pointing device, such as a mouse), positions the cursor 222 over the visible client area 218, and then clicks with the cursor 222 at the desired location. An object of the selected type is instantiated where the cursor 222 is positioned. This method of using buttons in conjunction with the cursor 222 to instantiate objects in the user interface 10 is common in Windows applications and is well known to those of ordinary skill in the art.
Once an object is instantiated, the default behavior of the object is set via its property page. The property page can be accessed, for example, by right-clicking or double-clicking the object with a pointing device. The user can set or change the properties of the objects instantiated in the user interface 10 from the property page. The property page is used, for example, to set default behaviors and the tasks to be performed by the objects. The include attribute is also accessed through the property page. Another useful attribute is the hidden attribute of the object. The hidden attribute allows an instantiated object to be hidden from view. This attribute is useful, for example, for controlling access to object settings. For example, when a new employee is hired, particular object instances may be hidden so that those instances cannot be accessed while still allowing the new employee to activate the program. As the employee becomes more familiar with the work environment and the context of the sequenced tasks that comprise the program, the employee may be given more access. Hiding an object instance makes it impossible for the user to interact with it, i.e., to change its properties, spatial position, etc., but the object is still included in the sequence. The hidden attribute thus provides an additional measure of control, flexibility, and security for the program.
In the preferred embodiment of the present invention, the top-level pop-up menu for each object is approximately the same, even if the objects are of different types. This consistency in the user interface 10 helps the user to quickly interact with the object instances.
As described above, buttons 240 through 252 allow the user to instantiate task objects 14 in the user interface 10. Each of these buttons represents a different type of task object 14. In the disclosed embodiment, button 240 is used to instantiate an exit button object. The exit button object provides various ways of exiting the current instance of the application.
Button 242 allows the user to instantiate a run button object. Button 244 allows the user to instantiate a run image object. Both the run button object and the run image object are used to run other applications and to send keystrokes to those applications. The main difference is how the objects appear in the interface: the run button object is displayed as a simple button in the user interface 10, while the run image object is displayed as a bitmap image or icon.
Button 246 allows the user to instantiate an SQL button object. SQL button objects differ from run button objects and run image objects in that their default behavior allows access to a database through ODBC (Open Database Connectivity), JDBC (Java Database Connectivity), and SQL (Structured Query Language). Button 248 allows the user to instantiate a QBE button object, which allows access to, for example, a third-party query engine containing the query. SQL, QBE, JDBC, and ODBC are well known to those of ordinary skill in the art.
Button 250 allows the user to access an Automated Object Maker (AOM). The AOM allows a user to connect to an existing database file and select those fields of the database file that correspond to the attributes required to instantiate a particular object type. The top-level menu of the AOM exposes a drop-down list of the available object types that can be instantiated. The user selects a record from the database for constructing an object instance and places it on the visible client area 218 as if it had been manually instantiated.
Button 252 allows the user to instantiate an Override Line Object (OLO). This object is the same as any other task object except that it has the attributes of a line and displays itself as a line. When an OLO is generated, it is linked to other task objects 14 of the user interface 10. The OLO is used primarily when the automatically generated pattern is not the one desired to be displayed; it provides a method for creating exceptions to automatically generated patterns.
It will be apparent to those skilled in the art of object-based interface design that these tools can be easily extended by adding additional object types with different behavior characteristics.
The method of ordering objects described above in relation to a two-dimensional space can also be implemented in a three-dimensional space. FIG. 17 shows a three-dimensional "virtual reality" space employing the sequencing method of the present invention. Object instantiation in 3-D is the same as in the 2-D medium, but an object in the 3-D virtual medium has at least three coordinates. Objects in the 3-D virtual environment also have an inner surface and an outer surface, each comprising one or more faces. Texture may be added to the outer and inner surfaces of an object. These textures typically provide additional information to the user, and can be used to enhance sensory attributes that reflect underlying object properties specific to the application.
FIG. 17 shows two master objects 16, each having a limited area of influence 26. Each area of influence 26 includes a plurality of task objects 14. A spatial sequence indicator 18 is associated with each area of influence 26 and indicates how the objects within that area of influence 26 are ordered. For example, in the illustrated embodiment, the spatial sequence indicator 18 reflects the directional attribute FRONT TO BACK - UPPER LEFT TO LOWER RIGHT.
A three-dimensional embodiment of the present invention has several objects that are not present in the 2-D embodiment and may be generally described as outside-inside objects. An outside-inside object is a three-dimensional object that a user can enter. In the disclosed embodiments, the outside-inside object may serve several purposes. One purpose of an outside-inside object is to display information to the user. Information is displayed on both the inner and outer surfaces of the outside-inside object. Thus, the information displayed on the outside-inside object is visible to the user whether the user is inside or outside the object. Outside-inside objects may also serve as a means for the user to input commands or otherwise interact with the environment.
Three types of outside-inside objects are used: an orientation viewing box 30, a spatial sequence indicator 18, and an outside-inside menu object. The orientation viewing box 30 (OVB) is a six-sided box, each face labeled with the corresponding view in the virtual environment: FRONT, RIGHT, BACK, LEFT, TOP, or BOTTOM. These labels also appear inside the orientation viewing box 30 in case the viewer enters the box. The purpose of the orientation viewing box 30 is to provide a quick reference tool for orienting the viewer within a particular area of influence 26.
Another outside-inside object is the spatial sequence indicator 18. In the 3-D embodiment, the spatial sequence indicator 18 is represented as a cube with an icon displayed on each of the six interior and exterior surfaces of the cube. This allows the user to determine the directional attribute of the directional field 12 from many different vantage points inside and outside the object.
The third type of outside-inside object is the outside-inside menu object. The outside-inside menu object takes the place of the pop-up menu of the 2-D embodiment and provides the same functionality. Unlike a 2-D display medium, however, a 3-D virtual medium gains efficiency by presenting menu items on multiple object surfaces simultaneously, so that the menu can be viewed, and interacted with, from any viewpoint outside or inside the object. If users find themselves within an outside-inside menu object, they can interact with the inner surface of the object without having to exit the object or change position, thereby saving time.
In the 3-D embodiment, the outside-inside menu object of the 3-D directional field is located at one corner of the field. The outside-inside menu objects of the task objects 14 and master objects 16 are located inside those objects. It should be understood that the task objects 14 and master objects 16 are themselves also outside-inside objects.
FIG. 18 shows an outside-inside menu object viewed from outside it. FIG. 19 shows the same outside-inside menu object viewed from inside it. As shown, menu items appear on both the inner and outer surfaces of the object, allowing a user to interact with the menu object whether the user is inside or outside the object.
The sequencing method in the 3-D embodiment is the same as that used in the 2-D embodiment. When the master object 16 is triggered, the tasks represented by the task objects 14 are ordered and executed. The spatial sequence indicator 18 determines the ordering criterion to be applied. In the embodiment shown in FIG. 17, the objects are sorted from front to back, and then from top left to bottom right. As the action associated with each task object 14 is performed, a sort line is drawn connecting consecutive objects. The objects in the 3-D embodiment also have an include attribute, as described above, that can be used to exclude certain tasks from the sequence.
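As an illustration only, a FRONT TO BACK - UPPER LEFT TO LOWER RIGHT ordering reduces to a lexicographic sort on the three coordinates. The axis conventions here are my assumptions, not the patent's: smaller z is nearer the front, y grows downward (so smaller y is upper), and x grows to the right.

```python
def order_front_to_back(objects):
    """Sort per an assumed FRONT TO BACK - UPPER LEFT TO LOWER RIGHT attribute.

    Primary key: depth z (front first); secondary: y (upper first);
    tertiary: x (left first).
    """
    return sorted(objects, key=lambda o: (o["z"], o["y"], o["x"]))
```

Any other directional attribute amounts to a different key function (or a different sign convention) on the same coordinates.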
FIG. 20 shows a practical application of the sequencing method discussed above. FIG. 20 is a virtual office display that allows ordering and reordering by moving objects in the virtual office space. The purpose of the virtual office environment is to demonstrate the manner in which documents to be faxed are ordered. The background image 300 is simply a graphic image of an office background stored in one image object instance. By right-clicking anywhere on this background, the user can access the properties of the object and make any changes required, including changing the background by loading a new graphical image. As shown in FIG. 20, the background depicts a conventional office with a virtual desk and a door. The door carries an exit symbol that is actually an exit button object 302. Four objects appear on the virtual desk and are in fact instances of the aforementioned run image objects. The first object 304 appears as a folder on the desk. The other three objects 306, 308, and 310 are shaped like letters. The start button 312 in the lower left corner of the virtual office is a master object with an external location point 314 on the fax machine image. The spatial sequence indicator 18 indicates that the directional attribute is set to point from outermost to innermost. When the master object 16 is activated, the tasks associated with the folder object are performed first, followed by the three letter objects in sequence. Tasks associated with the objects are performed in order from the object farthest from the fax machine to the object closest to the fax machine. The virtual office represents one way in which the present invention can be used to order tasks.
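The outermost-to-innermost ordering of the virtual office can be sketched as a sort on distance from the master object's external location point. The data layout and function names are my own assumptions for illustration:

```python
import math

def order_outermost_to_innermost(task_objects, external_point):
    """Order tasks by distance from the master object's external
    location point (e.g., the fax machine of FIG. 20), farthest first."""
    def dist(o):
        # Euclidean distance from the external location point
        return math.hypot(o["x"] - external_point[0],
                          o["y"] - external_point[1])
    return sorted(task_objects, key=dist, reverse=True)
```

With the folder placed farther from the fax machine than the letters, this yields the folder first, matching the behavior described for FIG. 20.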
The present invention provides an easy and convenient way for a user or programmer to control task sequencing by manipulating objects within a graphical user interface. An object representing a task to be performed is placed in the direction field. The tasks associated with each object are automatically ordered by the computer according to the relative spatial positions of the objects in the directional field. The concept of task ordering according to relative spatial position is a new paradigm in the programming arts.
The present invention may, of course, be carried out in other specific ways than those herein set forth without departing from the spirit and essential characteristics of the invention. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.
Claims (28)
1. A method of sequencing a plurality of tasks performed or controlled by a computer, comprising:
a) placing a task object in a directional field having a directional attribute, wherein the task object represents a task to be performed by the computer; and
b) ordering, by the computer, the one or more task objects in the directional field according to the relative spatial locations of the task objects in the directional field and the directional attributes of the directional field.
2. The sequencing method of claim 1 further including the step of reordering the task objects by changing the relative spatial positions of the task objects in the directional field.
3. The sorting method according to claim 1, further comprising the step of selecting directional properties of the directional field.
4. The sequencing method of claim 1 wherein said task objects have one or more modifiable attributes for controlling the behavior of task objects.
5. The sequencing method of claim 4 wherein one of said modifiable attributes is used to include or exclude a task object in the directional field from said sequence.
6. The sequencing method of claim 4 wherein at least one modifiable attribute specifies a task to be performed by a task object.
7. The sequencing method of claim 1 further including the step of placing a master object in the directional field for initiating said sequence of tasks.
8. The sequencing method of claim 7 wherein said task object is responsive to said master object to perform its associated task.
9. The sequencing method of claim 8 further including the step of defining a restricted area of influence for said master object, wherein said master object is used to initiate a sequence including task objects falling within the master object's area of influence.
10. The sequencing method of claim 9 wherein the sequence includes only those task objects that fall within the region of influence of the master object.
11. The sequencing method of claim 8 including a plurality of master objects, each master object having an area of influence, wherein at least one master object is responsive to activation of one or more other master objects to initiate sequencing of task objects within its own area of influence.
12. The sequencing method of claim 8 further including the step of selecting a type for each task object from a predefined list of types, wherein each master object is programmed to sequence only certain specified types of task objects.
13. The sequencing method of claim 12 further including the step of defining a generic type for master objects used to sequence all types of task objects.
14. A method of sequencing a plurality of tasks performed or controlled by a computer, comprising:
a) displaying a user interface having a directional field on a computer display;
b) in response to a user input, placing a task object in the directional field, wherein the task object represents a task to be performed by the computer;
c) selecting a directional attribute for the directional field;
d) ordering, by the computer, the one or more task objects in the directional field according to the relative spatial locations of the task objects in the directional field and the directional attributes of the directional field.
15. The sequencing method of claim 14 further including the step of reordering the task objects by changing the relative spatial positions of the task objects in the directional field.
16. The sequencing method of claim 14 wherein said task objects have one or more modifiable attributes for controlling the behavior of task objects.
17. The sequencing method of claim 16 wherein one of said modifiable attributes is used to include or exclude a task object in the directional field from said sequence.
18. The sequencing method of claim 16 wherein at least one user definable attribute is used to specify a task to be performed by the task object.
19. The sequencing method of claim 14 further including the step of placing a master object in the directional field for initiating said task sequence.
20. The sequencing method of claim 19 wherein said task object is responsive to said master object to perform its associated task.
21. The sequencing method of claim 20 further including the step of defining a restricted area of influence for said master object, wherein said master object is used to initiate a sequence including task objects falling within the master object's area of influence.
22. The sequencing method of claim 21 wherein the sequence includes only those task objects that fall within the region of influence of the master object.
23. The sequencing method of claim 20 including a plurality of master objects, each master object having an area of influence, wherein at least one master object is responsive to activation of one or more other master objects to initiate sequencing of task objects within its own area of influence.
24. The sequencing method of claim 20 further including the step of selecting a type for each task object from a predefined list of types, wherein each master object is programmed to sequence only certain specified types of task objects.
25. The sequencing method of claim 24 further including the step of defining a generic type for master objects used to sequence all types of task objects.
26. A computing method for displaying information to a user and receiving input from the user, comprising:
a) displaying a three-dimensional object in a 3-D virtual environment on a computer display, wherein the object comprises an outer surface and an inner surface; and
b) displaying information on both the inner and outer surfaces of the object such that the information is visible to the user whether the user is inside or outside the object.
27. The computing method of claim 26 wherein the menu items are displayed on both an inner surface and an outer surface of the object.
28. The computing method of claim 27, further comprising the step of selecting a menu item on an interior or exterior surface of the menu object in response to user input.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US60/043,371 | 1997-04-04 | ||
| US08/905,701 | 1997-08-04 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK1030667A true HK1030667A (en) | 2001-05-11 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US6948173B1 (en) | Method of sequencing computer controlled tasks based on the relative spatial location of task objects in a directional field | |
| JPH06208448A (en) | Method and computer controlled display for allowing an application to provide a collective browser with browser items | |
| US5903271A (en) | Facilitating viewer interaction with three-dimensional objects and two-dimensional images in virtual three-dimensional workspace by drag and drop technique | |
| JP2675987B2 (en) | Data processing method and processing system | |
| EP0752640B1 (en) | Representation of inter-relationships between graphical objects in a computer display device | |
| EP0453386B1 (en) | Hierarchical inter-panel process flow control | |
| US5956032A (en) | Signalling a user attempt to resize a window beyond its limit | |
| CN1790241A (en) | Apparatus and method for chaining objects in a pointer drag path | |
| US8656292B2 (en) | Accentuated graphical user interface | |
| JPH103375A (en) | Method for arranging window position and graphical user interface | |
| WO2005103874A2 (en) | Modelling relationships within an on-line connectivity universe | |
| US20110148918A1 (en) | Information processing apparatus and control method therefor | |
| KR20060052717A (en) | Virtual desktops, how to recall an array of program examples, how to manage application examples, and how to manage applications | |
| WO2007122145A1 (en) | Capturing image data | |
| US20080072234A1 (en) | Method and apparatus for executing commands from a drawing/graphics editor using task interaction pattern recognition | |
| EP0923759A2 (en) | Apparatus and method for creating and controlling a virtual workspace of a windowing system | |
| EP0873548B1 (en) | Extensible selection feedback and graphic interaction | |
| JP4449183B2 (en) | Image editing system, image editing method, and storage medium | |
| JP4148721B2 (en) | Shared terminal for conference support, information processing program, storage medium, information processing method, and conference support system | |
| HK1030667A (en) | Method of sequencing computer controlled tasks based on the relative spatial location of task objects in a directional field | |
| CN1266510A (en) | Method for computer-controlled task sequencing based on relative spatial positions of task objects in an orientation field | |
| CN1248016A (en) | Method of Realizing Graphical Interface Simulation in Single Task System | |
| JP4893060B2 (en) | Search system screen display method | |
| Tomitsch | Trends and evolution of window interfaces | |
| CN1755594A (en) | A computer-executable shortcut menu system and its operating method |