HK1181861A - Method and computer system for rendering a target item on a display
- Publication number
- HK1181861A (application HK13108873.0A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- target item
- display
- selection
- user
- target
- Prior art date
Description
This application is a divisional of the patent application with international application number PCT/US2007/086707, international filing date December 7, 2007, Chinese national application number 200780045422.9, entitled "Operating a Touch Screen Interface".
Technical Field
The present invention relates to operating touch screen interfaces, and more particularly, to a method and computer system for presenting a target item on a display.
Background
Many devices, such as Personal Digital Assistants (PDAs), mobile phone-PDA hybrids, and Ultra Mobile Personal Computers (UMPCs), utilize pen-based input, which helps users define precise selection points on the screen; these devices also support touch input. The pen or stylus is typically thin and also creates a vertical offset between the user's hand and the screen, so that objects appearing on the screen are not obscured by the user's finger or hand. However, retrieving the stylus takes time and may be inconvenient, for example in the context of one-handed operation, or inefficient, for example in the context of short, intermittent interactions.
Users sometimes use their fingers or other "touch inputs" to select objects displayed on the device screen when use of the stylus is inefficient or inconvenient. This is often the case, for example, for intermittent or short-time interactions such as verifying meeting times, navigating maps, or controlling media players.
Disclosure of Invention
A shift pointing technique is provided that is designed to allow a user to operate a user interface with a selection entity such as a finger, by preventing occlusion and defining a clear selection point when the user operates a touch screen device using touch. When a user attempts to select a small target displayed on the screen of the touch-sensitive display device, the shift pointing technique creates and displays a callout showing a representation of the occluded screen area, and places that representation in a non-occluded screen location. An occluded area is an area of the touch screen that is hidden by the user's finger or other selection entity. The callout also displays a pointer representing the current selection point of the user's finger or other selection entity. Using the visual feedback provided by the callout, the user can guide the pointer onto the target by moving (e.g., dragging or scrolling) their finger or other selection entity on the touch screen. The user may then commit the target acquisition (e.g., select the small target) by lifting the finger or other selection entity off the screen of the device. Conversely, when a user attempts to select a larger target on the screen of a touch screen device, no callout is created and the user enjoys the full capabilities of the touch screen unaltered.
Thus, in addition to offsetting the pointer, the shift pointing technique offsets the screen content to provide much better targeting performance. These techniques may allow a user to select small targets with a much lower error rate than an unassisted touch screen, and may reduce errors due to the target being occluded by the user's finger (or other selection entity) and ambiguity as to which portion of the finger (or other selection entity) defines a selection point on the display or screen. Thus, an error rate can be reduced when using touch input to the touch screen device.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Drawings
A more complete understanding of an example embodiment may be derived by referring to the detailed description and claims when considered in conjunction with the following figures, wherein like reference numbers refer to similar elements throughout the figures.
FIG. 1 is a simplified schematic representation of an example computer system according to an example implementation;
FIG. 2 is a simplified schematic representation of a front view of a touch screen device according to one exemplary implementation;
FIG. 3 is an exemplary flow diagram of a technique for using touch input to select a desired target displayed on a screen of a touch screen device according to one exemplary implementation;
FIGS. 4(a)-4(e) are a series of exemplary diagrams illustrating an escalation or "shift pointing" technique for selecting a relatively small target displayed on the screen of a touch screen device using touch input according to one exemplary implementation;
FIGS. 5(a)-5(b) are a series of exemplary diagrams illustrating a conventional technique for using touch input to select a larger target displayed on the screen of a touch screen device according to another exemplary implementation;
FIG. 6(a) is a diagram showing the contact area of a user's finger when the user attempts to select a target;
FIG. 6(b) is a graph showing how the ratio S_F/S_T may be mapped to a dwell timeout using a logarithmic function;
FIGS. 7(a)-7(d) are diagrams illustrating exemplary positioning of a callout and a pointer relative to different locations of a user's finger on the screen of a touch screen device;
FIG. 8(a) is a diagram showing a target, a user's finger, and an input point from the perspective of the user;
FIG. 8(b) is a diagram showing a target, the contact area of a user's finger, and an input point from the hardware's point of view; and
FIG. 9 is a diagram illustrating zoom enhancements that may be applied to a callout when a user attempts to select a small target.
Detailed Description
The following detailed description is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other implementations. All of the implementations described below are exemplary implementations provided to enable persons skilled in the art to make or use the invention and are not intended to limit the scope of the invention, which is defined by the appended claims.
Example embodiments may be described herein in terms of functional and/or logical block components and processing steps. It should be appreciated that these block components may be realized by any number of hardware, software, and/or firmware components configured to perform the specified functions. For example, an embodiment may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. Moreover, those skilled in the art will appreciate that the various practical embodiments may be implemented in connection with any number of data transmission protocols and that the system described herein is merely one example embodiment.
For the sake of brevity, conventional techniques related to computing device operation, touch screen operation, the presentation of graphics on display elements, and other functional aspects of the systems (and the individual operating components of the systems) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent example functional relationships and/or physical couplings between the various elements. It should be noted that many alternative or additional functional relationships or physical connections may be present in an example embodiment.
FIG. 1 is a simplified schematic representation of an example computer system 100 for implementing a touch screen device. Computer system 100 is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of implementations described herein. Other well known computing systems, environments, and/or configurations that may be suitable for use with these implementations include, but are not limited to, personal computers, server computers, hand-held or laptop devices, personal digital assistants, mobile telephones, kiosk-based computers such as Automated Teller Machines (ATMs) and in-flight entertainment systems, retail product information systems, Global Positioning System (GPS) navigation devices, location maps, building directories, portable media players, electronic books, transit kiosks, museum information displays, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Computer system 100 may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, and/or other elements that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various implementations.
Computer system 100 typically has at least some form of computer readable media. Computer readable media can be any available media that can be accessed by computer system 100 and/or by an application executed by computer system 100. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer system 100. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
Referring again to FIG. 1, in its most basic configuration, computer system 100 typically includes at least one processing unit 102 and some amount of memory 104. Depending on the exact configuration and type of computer system 100, memory 104 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two. This most basic configuration is identified in FIG. 1 by reference numeral 106. In addition, computer system 100 may also have additional features/functionality. For example, computer system 100 may also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in FIG. 1 by removable storage 108 and non-removable storage 110. Memory 104, removable storage 108, and non-removable storage 110 are all examples of computer storage media, as defined above.
Computer system 100 may also contain communication connections 112 that allow the system to communicate with other devices. Communication connections 112 are an example of the communication media defined above.
Computer system 100 may also include input device(s) 114 such as a keyboard, mouse or other pointing device, voice input device, pen, stylus or other input device, etc., or communicate with input device(s) 114. In an example embodiment described below with reference to FIG. 2, the computer system 100 includes a screen, display, or other User Interface (UI) that can accept touch input and allow a user to select a particular object displayed on the screen. Although the example embodiments described herein utilize touch input, the embodiments described herein may be equivalently configured to also support any equivalent touch-based input, such as that occurring with a pen or stylus. The computer system 100 may also include or communicate with output devices 116, such as a display, speakers, printer, etc. All of these devices are well known in the art and need not be discussed at length here.
Overview
Although convenient, using touch input can increase targeting times and error rates. Unfortunately, user interfaces designed for pen or stylus input often contain small targets, and selection with a finger can become slow and error prone in these cases. For example, small targets are occluded by the finger or other form of "touch input," forcing target selection and acquisition to be completed without visual feedback. This makes selection and acquisition error prone.
Although fingers are somewhat less accurate than styluses in fine control, accuracy is not the only reason for the high error rate associated with touch input. Another reason is the ambiguity of the selection point created by the finger's contact area, combined with occlusion of the target. For example, it is difficult for users to determine whether they have acquired a target that is smaller than the finger contact area. Unfortunately, the user's finger also occludes targets smaller than its contact area, preventing the user from seeing visual feedback.
Broadly speaking, techniques and technologies are provided that can improve the operation of a pen-based or touch screen device, such as a PDA or UMPC. These techniques and technologies may allow for touch input when a user's finger or other selection entity (e.g., another body part) touches the screen of a touch screen device in an attempt to select an object displayed on the screen. When a user attempts to select a target, a callout can be presented within the non-occluded screen area of the screen. The callout includes a representation of the area of the screen that is occluded by the user's finger (or other selection entity). In some implementations, the "representation of the occluded screen area" may include a copy of the screen area occluded by the user's finger (or other selection entity).
In the following description, escalation or "shift pointing" techniques will be described with reference to a scenario in which a user attempts to select a target using their finger. However, it will be appreciated that generally escalation or "shift pointing" techniques may be applied whenever a user attempts to select a target using any "selection entity". As used herein, the term "selection entity" may include a body part such as a finger or fingernail or other selection instrument that blocks or occludes an area of the touch screen device when a user attempts to use the selection entity to select a target displayed in the occluded area.
Fig. 2 is a simplified schematic representation of a front view of a touch screen device 200. Touch screen device 200 may be implemented in any suitably configured computing device or system, such as computer system 100.
The touch screen device 200 includes a touch screen 202 for displaying information including a desired target that a user wants to select. As used herein, the term "touch screen" refers to a screen, display, or other UI that is configured or designed to allow touch input by pressing an area of the screen, display, or other UI to select an object displayed on the screen, display, or other UI. For example, the user may press the screen with a stylus or pen, for example, or touch the screen with the user's finger or other appendage. The touch screen device may be implemented in any of a variety of electronic devices, including, for example, portable appliances for any number of different applications, such as cellular telephones, PDAs, laptop computers, video game consoles, electronic toys, electronic control pads, and the like; fixed service stations for information distribution, such as ATMs and the like.
When the user attempts to select a desired target (not shown in FIG. 2) displayed on the touch screen 202, the user may place his or her finger on the desired target on the touch screen 202. The area of the touch screen 202 covered by the user's finger may be referred to as the occluded screen area 204 of the touch screen 202. The occluded screen area 204 comprises the area of the screen 202 that is covered by the user's finger and that includes the desired target that the user is attempting to select. The desired target occupies a first area of the screen 202 within the occluded screen area 204.
When a user's finger touches the surface of the touch screen 202 in an attempt to select a desired target displayed on the screen 202, one or more modules in the touch screen device 200 cooperate with the processing unit 102 to determine whether occlusion is a problem with the desired target (under the user's finger).
When it is determined that occlusion may be a problem for the desired target under the user's finger, the callout 206 and pointer 208 can be displayed or rendered. The decision to display or render the callout 206 and pointer 208 can be referred to as "escalation". Exemplary techniques for determining whether to display or render the callout 206 and pointer 208 (or "escalate") can include, but are not limited to, a user input based trigger, a dwell timer based trigger, or a target size based trigger. These techniques for determining whether to escalate are described in more detail below.
As used herein, the term "callout" refers to a shifted representation of the occluded screen area (which typically includes a representation of the desired target). In some implementations, the "representation of the occluded screen area" may include a copy of the screen area occluded by the user's finger (or other selection entity). In some implementations, annotations can move in response to input movement, display updates, or for other reasons, and thus do not have to be statically placed. The callout can generally be of any suitable size and any suitable shape. In this particular example, as shown in FIG. 2, the copy portion of the callout 206 is shown as having a circular shape or frame, however, the copy portion can have a rectangular shape or frame, a square shape or frame, an oval shape or frame, a cartoon bubble shape or frame, or any combination thereof. The callout 206 can also be located or placed (or transitioned) in any suitable location in the unobstructed screen area (shown with cross-hatching in FIG. 2). An example of callout placement will be provided below with reference to FIG. 7. Further, the callout 206 can be the same size as the occluded area, smaller than the occluded area, or larger than the occluded area, depending on the implementation. In one exemplary "zoom" implementation described below with reference to FIG. 9, the callout 206 is larger than the occluded area. This implementation is particularly helpful in situations where the desired target is particularly small and difficult to select.
As used herein, the term "pointer" refers to the current system input coordinates specified by an input device, such as a user's finger, and represents the actual contact or selection point on the screen. In one implementation, the shifted pointer 208 and the actual point of contact under the finger are connected with a dashed line as shown in FIG. 2. The actual contact point represents the current actual contact or selection point of the user's finger within the occluded screen area 204. Thus, in addition to shifting the pointer 208, the callout 206 shifts the representation of the occluded screen content, which can result in much better targeting performance.
When a decision is made to escalate, one or more modules in the touch screen device 200 cooperate with the processing unit 102 to execute computer instructions for displaying or rendering the callout 206 and pointer 208 in the non-occluded screen area (shown cross-hatched in FIG. 2) of the touch screen 202.
The pointer 208 moves as the user attempts to select the desired target; that is, the pointer 208 can be moved by moving a finger over the surface of the screen 202. The visual feedback provided by the callout 206 allows the user to move the pointer 208 over the representation of the desired target displayed in the callout 206. For example, the user can guide the pointer 208 over the representation of the desired target by keeping their finger on the occluded screen area 204 of the touch screen 202 and moving or scrolling the finger over the surface of the touch screen 202 (within the occluded screen area 204) until the pointer 208 is over the representation of the desired target.
To select the desired target, the user submits a target acquisition by lifting their finger off the surface of the screen 202 while the pointer 208 is over the representation of the desired target displayed in the callout 206. In one implementation, successful target acquisition may be confirmed with a click sound, while unsuccessful target acquisition attempts may result in an error sound. One or more modules in the touch screen device 200 cooperate with the processing unit 102 to remove the callout 206 and the pointer 208 when the user lifts their finger off the surface of the touch screen 202.
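The interaction just described (touch, optional escalation, finger-guided fine-tuning, lift-off commit) can be summarized as a small event-driven controller. The following Python sketch illustrates that flow only; the ShiftPointingController class, the ui helper object, and all of its method names are illustrative assumptions, not part of the disclosed system.

```python
# Hypothetical sketch of the touch-event flow described above. The `ui`
# object and its methods (hit testing, callout drawing) are assumed helpers.

class ShiftPointingController:
    def __init__(self, ui):
        self.ui = ui              # assumed UI helper with hit testing and drawing
        self.escalated = False
        self.pointer = None       # current selection point (x, y)

    def on_touch_down(self, x, y):
        self.pointer = (x, y)
        if self.ui.occlusion_is_a_problem(x, y):  # e.g., smallest target under finger is small
            self.escalated = True
            self.ui.show_callout(occluded_center=(x, y))  # copy of occluded area plus pointer
        # otherwise the device behaves exactly like an unmodified touch screen

    def on_touch_move(self, x, y):
        self.pointer = (x, y)     # finger drag fine-tunes the selection point
        if self.escalated:
            self.ui.update_callout_pointer(self.pointer)

    def on_touch_up(self):
        target = self.ui.target_at(self.pointer)  # lift-off ("take-off") selection
        if self.escalated:
            self.ui.remove_callout()              # callout and pointer are removed
            self.escalated = False
        return target
```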
FIG. 3 is an exemplary flow diagram 300 of a technique for using touch input to select a desired target displayed on the screen of a touch screen device according to one exemplary implementation. FIG. 3 will be described with reference to FIGS. 4(a)-4(e) and FIGS. 5(a)-5(b) to illustrate how the technique of FIG. 3 can be applied in one exemplary implementation. FIGS. 4(a)-4(e) are a series of exemplary diagrams 400 illustrating an escalation or "shift pointing" technique for selecting a relatively small target displayed on the screen of a touch screen device using touch input according to one exemplary implementation. FIGS. 5(a)-5(b) are a series of exemplary diagrams 500 illustrating a conventional technique for using touch input to select a larger target displayed on the screen of a touch screen device according to another exemplary implementation.
At step 310, a user attempts to acquire or select a desired target displayed on the display or screen of a touch screen device by touching the display surface of the device with their finger. For example, as shown in FIGS. 4(a) and 5(a), the user presses the screen surface with their finger 410, 510 (or another object, including other body parts or devices) to attempt to select a desired target 401, 501. In FIG. 4(a), the desired target 401 occupies a first, small area displayed on the screen under the user's finger 410. The desired target 401 is near a number of other possible targets (shown as small rectangles). The area of the screen covered by the user's finger 410 (which includes the desired target 401 and possibly other targets) is referred to below as an "occluded" area that is not visible to the user. In FIG. 5(a), the desired target 501 occupies a relatively large area displayed on the screen that is not completely covered by the user's finger 510. In other words, in FIG. 5(a), the desired target 501 is only partially occluded because some portion of it is still visible.
Conditional escalation overcomes occlusion problems and allows the user to reliably select small targets. This escalation or shift pointing technique helps ensure that the interaction overhead is limited to cases where it is really necessary (e.g., small targets), which can save a significant amount of time. At step 320, a processor or other module in the touch screen device determines whether "escalation" is needed for the particular desired target. In general, the processor or other module determines whether occlusion is a problem given the possible targets displayed in the occluded screen area under the user's finger. Any number of different techniques may be used to determine whether to escalate (e.g., whether to display or render a callout and pointer). These techniques may include, but are not limited to, a user input based trigger, a dwell timer based trigger, or a target size based trigger, and are described below.
If it is determined that escalation is not needed (e.g., occlusion is not a problem for the desired target under the user's finger), then at step 325 the touch screen device continues to operate in its normal or conventional manner, like an unmodified touch screen (i.e., without invoking escalation). The process 300 waits for the next desired target and loops back to step 310. In the exemplary case depicted in FIGS. 5(a) and 5(b), no callout is created or displayed when the user attempts to select the larger target on the screen of the touch screen device. By lifting their finger immediately, the user makes the selection as if using an unassisted touch screen. Here, the simplicity of unassisted touch screen input is sufficient for larger targets.
The escalation or shift pointing technique also works as touch screen users expect, in that it allows the user to aim at the actual target itself. By allowing the user to aim at the actual target, the escalation or shift pointing technique remains compatible with conventional pen and touch input. This compatibility keeps the interaction consistent when switching back and forth between pen and touch input, and also makes it easy to deploy the technique in walk-up scenarios or to retrofit existing systems.
If it is determined that escalation is required (e.g., occlusion is a problem for the desired target under the user's finger), then at step 330, a callout and pointer can be rendered or displayed on the non-occluded area of the screen.
The callout and pointer can help eliminate problems associated with occlusion, and can also help reduce problems associated with actual contact or selection point ambiguity. For example, as shown in FIG. 4(b), a callout 406 and a pointer 408 can be provided or displayed in a non-occluded area of the screen. The callout 406 displays a representation of the occluded screen area (e.g., the area covered by the user's finger 410) over the non-occluded area of the screen. The representation of the occluded screen area can include, for example, a copy 401' of the desired target 401. The pointer 408 represents the actual contact or selection point of the user's finger on the screen. When the pointer 408 is initially displayed, the pointer 408 does not coincide with the copy 401' of the desired target 401.
Further, it should be understood that while the callout 406 is shown displayed above the target and the user's finger, as will be described below with reference to FIGS. 7(a)-7(d), the callout 406 can be positioned at any convenient location within the non-occluded area of the screen relative to the target or the user's finger. The placement of the callout and pointer should be done in a way that helps minimize occlusion and maximizes predictability, to speed up visual redirection.
At step 340, the user guides the pointer over the representation of the desired target in order to select it. For example, as shown in FIG. 4(c), while keeping their finger 410 in contact with the screen, the user can guide the position of the pointer 408 based on the visual feedback provided by the callout 406. The user can make corrective movements and fine-tune the pointer position by moving the finger over the surface of the screen until the pointer 408 is over the copy 401' of the desired target 401 displayed in the non-occluded screen area.
When the pointer is over the representation of the desired target, the user submits a target acquisition of the desired target at step 350. For example, as shown in FIG. 4(d), to select a desired target, the user submits a target acquisition of the desired target 401 by lifting their finger 410 off the surface of the screen (e.g., a take-off selection) while the pointer 408 is over the copy 401' of the desired target 401 displayed in the unobstructed screen area. In one implementation, successful target acquisition may be confirmed with a click sound, while unsuccessful target acquisition attempts may result in an error sound. In another implementation, lifting finger 410 to select target 401 once the correct position is visually verified may result in a brief starburst afterglow and complete the selection.
At step 360, the callout and pointer are removed when the user lifts their finger from the surface of the screen. For example, as shown in FIG. 4(e), the callout 406 and pointer 408 are removed when the user lifts their finger (not shown) off the surface of the screen, and the desired target has been selected.
Techniques for determining whether to escalate
In one implementation, a user input based trigger may be used to trigger the escalation or "shift pointing" technique. For example, the user may press a button or actuate another input device to trigger escalation.
In another implementation, a target size based trigger may be used to trigger the escalation or "shift pointing" technique. The processor or other module may determine whether occlusion is a problem for the desired target based on the size of the desired target relative to the contact area of the selection entity (e.g., the user's finger). For example, because occlusion can be problematic when the smallest dimension of the desired target is smaller than a typical finger contact diameter, the processor or other module may determine whether there is a target small enough to be occluded by the finger (e.g., whether the desired target is small relative to the contact area of the selection entity). In one implementation, there is an approximate threshold size, or "occlusion threshold", below which occlusion makes selecting a target error prone. When a user presses the surface of the screen with their finger to attempt to select a desired target (e.g., touches and presses the occluded screen area), a processor or other module in the touch screen device determines whether the desired target is smaller than the occlusion threshold. If the desired target is smaller than the occlusion threshold, the escalation or shift pointing technique is invoked. In contrast, occlusion will generally not be a problem when the user attempts to select a larger target on the screen. Thus, for targets larger than the occlusion threshold, the escalation or shift pointing technique does not render or display a callout on the screen, but instead behaves like an unmodified touch screen.
In yet another implementation, a dwell timer based trigger may be used to trigger the escalation or "shift pointing" technique. For example, the processor or other module determines whether the user's finger has been in contact with the display for more than a threshold time. If so, the processor or other module determines that escalation or shift pointing should be invoked. If the user's finger has been in contact with the display for less than or equal to the threshold time, the processor or other module determines that escalation should not be invoked and that the device should behave as a conventional, unassisted touch screen.
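The three triggers above can be combined into a single decision routine. The sketch below is an illustration under stated assumptions: the threshold constants are placeholders, not values prescribed by this disclosure, and the function name is hypothetical.

```python
OCCLUSION_THRESHOLD_PX = 15    # placeholder for S_F; the text leaves it device-dependent
DWELL_TIMEOUT_MS = 300         # fixed fallback timeout mentioned in the next subsection

def should_escalate(smallest_target_px, contact_duration_ms, user_requested=False):
    """Combine the triggers described above: user input, target size, dwell timer."""
    if user_requested:                                  # user input based trigger
        return True
    if smallest_target_px < OCCLUSION_THRESHOLD_PX:     # target size based trigger
        return True
    return contact_duration_ms > DWELL_TIMEOUT_MS       # dwell timer based trigger
```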
Escalation based on dwell and selection ambiguity
In yet another implementation, rather than basing the decision of whether to escalate solely on a target size-based trigger or a dwell timer-based trigger, the concepts from both implementations may be combined when deciding whether to escalate and use a "shift pointing" technique in an upcoming targeting attempt.
By using dwell time, the final decision as to whether to upgrade is left to the user. For example, a fixed dwell timeout (e.g., 300 milliseconds) may be used in the complete absence of additional knowledge about the target size and location. When the fixed dwell timeout expires, escalation or shift pointing should be implemented. However, when the touchscreen device provides information about the size and location of the target, the shift pointing technique may determine or calculate a dwell timeout based on "selection ambiguity". In one embodiment described below with reference to fig. 6(a) and 6(b), a dwell timeout between screen contact and upgrade may be defined. The duration of the dwell timeout may vary depending on the size of the target under the user's finger, and the selection ambiguity may be determined or estimated by comparing the minimum target size found under the user's finger to an occlusion threshold.
When the target is small compared to the occlusion threshold, the selection ambiguity is relatively high, and the dwell timeout can be set to a very short duration and escalation occurs almost immediately. However, if the target is much larger than the occlusion threshold, occlusion is not an issue. In this case, an upgrade is not necessary, and thus the dwell timeout may be set to a longer time, enabling the user to utilize a simple, direct touch. Thus, for relatively large targets, the dwell timeout is relatively long and the user can acquire the target without an upgrade, resulting in the same performance as an unmodified touchscreen.
For targets approximately the same size as the occlusion threshold, the level of selection ambiguity is itself ambiguous (the user may or may not need escalation, depending on their confidence in the selection). In this case, escalation occurs after a short delay, just long enough that the user can control its invocation by dwelling. If the user wants to escalate or invoke the shift pointing technique, they can dwell by holding their finger on the surface of the screen for a moment. To avoid escalation, the user can immediately lift the finger off the screen surface.
FIG. 6(a) is a diagram showing the contact area 605 of a user's finger 610 when the user attempts to select a target 601. FIG. 6(a) also shows the occlusion threshold S_F and the minimum size S_T of the smallest target 601 found under the user's finger 610. In one implementation, the occlusion threshold S_F is the maximum dimension of the contact area 605 of the user's finger 610. The occlusion threshold S_F and the minimum target size S_T can be used to calculate the ratio of the occlusion threshold S_F to the minimum size S_T of the smallest target found under the finger.
FIG. 6(b) is a graph showing how the ratio S_F/S_T can be mapped to a dwell timeout using a logarithmic function. The logarithmic function is parameterized by four real-number parameters: a, m, n, and τ.
The ratio of the occlusion threshold S_F to the minimum size S_T of the smallest target found under the finger can be mapped to the dwell timeout using this logarithmic function. In one implementation, the real-number parameters may be set to a = 1, m = 0, n = 4, and τ = 3. As shown in FIG. 6(b), using these parameters results in a smooth curve that maps small targets to a dwell timeout of about 0 milliseconds, large targets to about 1500 milliseconds, and targets near the occlusion threshold to about 300 milliseconds. In other words, the curve approaches a minimum delay of 0 milliseconds for very small targets, a maximum delay of about 1500 milliseconds for large targets, and a delay of close to 300 milliseconds for targets whose size is near the occlusion threshold.
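The exact formula is not reproduced in this text, so the following sketch substitutes a generalized logistic mapping calibrated to the behavior described above; the constants MAX_DELAY_MS, K, and C are fitted assumptions, not the patent's parameters a, m, n, and τ.

```python
import math

MAX_DELAY_MS = 1500.0   # ceiling for very large targets, per the described curve
K, C = 4.0, -0.3466     # fitted so that delay(S_F/S_T = 1) is about 300 ms

def dwell_timeout_ms(s_f, s_t):
    """Map the ratio S_F/S_T to a dwell timeout (assumed logistic stand-in).

    Reproduces the described behavior: ~0 ms for very small targets,
    ~300 ms near the occlusion threshold, ~1500 ms for large targets.
    """
    sigmoid = 1.0 / (1.0 + math.exp(-K * (math.log(s_f / s_t) - C)))
    return MAX_DELAY_MS * (1.0 - sigmoid)
```

For example, dwell_timeout_ms(15, 15) returns 300 ms, while dwell_timeout_ms(15, 2) (a very small target) returns close to 0 ms.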
Estimating an occlusion threshold
The occlusion threshold S_F is roughly related to the finger contact area, but touch-sensitive screens commonly used on PDAs and UMPCs report only a single input point rather than the finger contact area. The occlusion threshold S_F may therefore be estimated over time, based on the target sizes for which escalation was used and the target sizes for which it was not. Starting with an initial guess of S_F, the occlusion threshold S_F is increased by an amount s if the user escalates when S_F < S_T, and decreased by s if the user does not escalate and S_F > S_T, where s = w·|S_F - S_T| and w is a manually adjusted weight used to smooth the estimate over time. In one implementation, a weight w equal to 0.125 may be used to provide a good balance between smoothness and learning rate.
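Written as code, this update rule is a simple exponentially smoothed online estimate. A minimal sketch, using the weight w = 0.125 given in the text (variable names are illustrative):

```python
W = 0.125  # smoothing weight from the text

def update_occlusion_threshold(s_f, s_t, user_escalated):
    """Online estimate of the occlusion threshold S_F.

    Grow S_F if the user escalated although S_F < S_T; shrink it if the user
    did not escalate although S_F > S_T. The step size is s = w * |S_F - S_T|.
    """
    step = W * abs(s_f - s_t)
    if user_escalated and s_f < s_t:
        return s_f + step
    if not user_escalated and s_f > s_t:
        return s_f - step
    return s_f
```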
A potential benefit of this approach is that if the user prefers to use their fingernail (rather than their finger or fingertip) to select targets, the occlusion threshold S_F will shrink, so that escalation is immediate only for very small targets. For devices capable of sensing whether the stylus is in its slot, this method allows independent occlusion threshold S_F values to be learned for the finger and the pen. In the absence of such sensor data, setting the weight w to a relatively large value allows a new occlusion threshold S_F to be learned quickly in response to a change in the user's input style.
FIGS. 7(a)-7(d) are diagrams illustrating exemplary positioning of the callout 706 and pointer 708 relative to different locations of the user's finger 710 on the screen of a touch screen device. FIGS. 7(a)-7(d) show that the escalation or shift pointing technique does not result in any inaccessible screen areas. The callout can be displayed at any location in the non-occluded area of the screen relative to the desired target 701 and/or the user's finger 710. For example, in the diagram shown in FIG. 7(a), the callout 706A is offset directly above the user's finger 710A and the desired target 701A within the non-occluded area of the screen. In FIG. 7(b), to avoid clipping at the edge of the screen, the callout 706B is offset to the upper right of the user's finger 710B and the desired target 701B within the non-occluded area of the screen; positioning the callout 706B further toward the middle of the screen helps avoid clipping near the edge. In FIG. 7(c), the desired target 701C is near the top edge of the display, so to avoid clipping at the top edge, the callout 706C is offset to the left of the user's finger 710C and slightly below the desired target 701C within the non-occluded area of the screen. It will be appreciated that if it is not possible to shift the callout to the left, the callout 706D can instead be shifted to the right of the user's finger 710D and slightly below the desired target 701D, as shown in FIG. 7(d). By adjusting the relative callout position, the escalation or shift pointing technique handles targets 701 anywhere on the screen and can prevent clipping problems that might otherwise occur at the screen edges. Additionally, it is to be appreciated that "handedness detection" can be employed to mirror the placement of the callout 706 for left-handed users.
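A placement routine following these figures might look like the sketch below. The offset distance, the fallback order, and the clamping rules are all assumptions made for illustration; the patent does not prescribe specific values.

```python
def place_callout(finger, callout_size, screen_size, offset=60, left_handed=False):
    """Illustrative callout placement following FIGS. 7(a)-7(d); offsets are assumptions."""
    (fx, fy), (cw, ch), (sw, sh) = finger, callout_size, screen_size
    x, y = fx - cw / 2, fy - offset - ch        # default: directly above the finger (FIG. 7(a))
    if y < 0:                                   # near the top edge (FIGS. 7(c)/7(d)):
        y = min(fy + offset / 2, sh - ch)       # drop slightly below the target
        x = fx + offset if left_handed else fx - offset - cw   # beside the finger
        if x < 0 or x + cw > sw:                # cannot fit on that side: mirror (FIG. 7(d))
            x = fx - offset - cw if left_handed else fx + offset
    # clamp toward the middle to avoid clipping at the side edges (FIG. 7(b))
    return max(0, min(x, sw - cw)), max(0, min(y, sh - ch))
```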
Correction of user-perceived input points
Fig. 8(a) is a diagram showing a target 801, a user's finger 810, and an input point 807 from the user's perspective. In many touch screen devices, a single selection point is calculated and placed approximately in the middle of the finger contact area. Fig. 8(b) is a diagram showing the target 801, the contact region 809 of the user's finger, and the input point 807' from the viewpoint of hardware. For some users, the contact point is often slightly below the intended target. The shift pointing technique displays the pointer position relative to the initial point of contact. In some implementations, the position of the pointer relative to the initial point of contact may be adjusted to reflect the point of contact perceived by the user.
For example, in one implementation, the shift pointing technique may adjust the input position based on a single contact point. An estimate of a correction vector V that maps the hardware input point 807' to the user-perceived input point 807 may be updated periodically. For example, in one implementation, the correction vector V may be estimated by adding a weighted vector from the initial contact point P_1 to the final lift-off point P_2: V_{t+1} = V_t + w(P_2 - P_1), where w is a manually adjusted weight. In one implementation, the manually adjusted weight w may be set approximately equal to 0.33, to smooth the estimate without making iterative refinement too slow. This reduces fine-tuning time once the estimate of V converges, allowing the user to simply verify the selected target without further adjustment. Unlike other fingers, however, the contact shape of the thumb tends to vary depending on the contact location on the display, which makes a single adjustment vector insufficient. Linear interpolation between location-specific adjustment vectors may alleviate this problem.
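The update just described is a weighted running estimate. A minimal sketch, with points and vectors kept as plain tuples (names are illustrative):

```python
W = 0.33  # manually adjusted weight suggested in the text

def update_correction_vector(v, p1, p2, w=W):
    """Implements V_{t+1} = V_t + w * (P_2 - P_1).

    v  -- current correction vector (vx, vy)
    p1 -- initial contact point reported by the hardware
    p2 -- final lift-off point after the user fine-tuned the selection
    """
    return (v[0] + w * (p2[0] - p1[0]),
            v[1] + w * (p2[1] - p1[1]))

# The user-perceived input point is then estimated as hardware_point + V.
```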
Callout magnification or "zooming"
One purpose of the escalation or shift pointing technique is to enable users to acquire targets by avoiding target occlusion. In some use cases, the target may be particularly small. For example, while the shift pointing technique described above works well for acquiring targets of 6 pixels or more (about 2.6 millimeters), in some cases a user may want to acquire targets smaller than 6 pixels. In some implementations, the shift pointing technique may be enhanced with zooming and Control Display (CD) ratio (gain) manipulation to improve targeting precision and allow high-precision pointing.
FIG. 9 is a diagram showing zoom enhancements that can be applied to the callout 906 produced by the escalation or shift pointing technique when a user attempts to select a small target. For particularly small targets, the techniques described above can also implement a zoom function by enlarging the callout 906 and increasing the ratio of the callout 906's displayed size to that of the occluded screen area it renders. When the zoom function is implemented, the rendered size of the occluded screen area displayed in the callout 906 is larger than the actual area occluded by the user's finger, so that the callout 906 presents a magnified version of the occluded screen area.
In some implementations, the callout 906 can be modified so that it travels with the finger, similar to a tracking menu, enabling the user to reach content beyond the original callout. Because the finger no longer corresponds directly to the pointer 908 position, the callout 906 is moved so that it does not become occluded during the adjustment phase. The initial position of the callout 906 is placed relative to the initial contact point. If the contact point moves beyond a threshold diameter, the callout 906 moves along with the finger, similar to a tracking menu. Given the increased zoom space (or the increased motor space resulting from a high CD ratio), this allows fine adjustment beyond the area initially covered by the callout if the initial contact point is too far from the desired target.
In this particular example, escalation has been performed and the representation of the occluded screen area displayed in the callout 906 has been magnified. It will be appreciated that any suitable magnification factor may be used, depending on the size of the display, the size of the occluded area, or the size of the particular target. The higher the magnification of the callout, the less content the callout displays. While magnification guarantees the visibility of a pixel-sized target, it may not by itself be sufficient to allow reliable target acquisition. In some implementations, zooming can therefore be complemented with an enhancement of the Control Display (CD) ratio.
Control Display (CD) ratio enhancement
The Control Display (CD) ratio is a mapping between the actual finger movement ("control") and the movement of the system pointer on the display ("display"). By increasing the CD ratio above 1, the finger needs to move further than the pointer to cover a certain pointer distance. By reducing the CD ratio below 1, the finger can move a shorter distance than the pointer to cover a certain pointer distance. This manipulation is also referred to as "gain", which is the inverse of the CD ratio. Given a certain control movement, the gain increases or decreases the resulting pointer movement. If the gain is low, the pointer movement is less than some control movement.
To allow the user to aim at the target, many touch screen devices operate with a CD ratio of 1; for example, the pointer location may correspond to the finger input location in a 1:1 manner. However, once the user's finger is in contact with the screen, a pointer may be displayed to provide visual feedback, and finger movement can then control the pointer in a relative manner, with the pointer moving faster or slower than the finger guiding it. In the enhanced shift pointing technique, the CD ratio may be adjusted to as much as 8:1 upon escalation. Movement of the pointer across the screen is slowed, expanding a 1-pixel target to 8 pixels in motor space. In alternative implementations, the CD ratio may be adjusted with a pantograph-like handle, or based on the distance from the initial touch point for stability purposes.
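Relative pointer control under a high CD ratio amounts to scaling the finger's displacement. A minimal sketch under the 8:1 ratio mentioned above (the function and parameter names are illustrative):

```python
CD_RATIO = 8.0   # CD ratio used upon escalation (up to 8:1, per the text)

def pointer_position(anchor, finger, cd_ratio=CD_RATIO):
    """Relative pointer control: scale finger ("control") movement by 1/CD.

    anchor -- (pointer position, finger position) captured at escalation time
    finger -- current finger position reported by the hardware
    With an 8:1 ratio the pointer moves 1 pixel per 8 pixels of finger travel,
    expanding a 1-pixel target to 8 pixels in motor space.
    """
    (px, py), (fx, fy) = anchor
    cx, cy = finger
    return px + (cx - fx) / cd_ratio, py + (cy - fy) / cd_ratio
```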
As discussed above, the callout is positioned to avoid occlusion by the finger regardless of the initial position of the target. In some cases, the finger moves far enough that the original target position is no longer occluded. Because the input area of a touch-sensitive display is limited, increasing the CD ratio above 1 reduces the range of the "motor space" to 1/CD of the display space. (Finger movement in the control space is referred to as "motor space" movement because people control the movement with their cognitive motor processes.) This can be problematic if the initial contact point is X pixels away from the edge of the display and the target is more than X/CD pixels away. Because the shift pointing technique employs lift-off selection, the user would be unable to select the target. To address this issue, the shift pointing technique can be modified to snap the pointer to a point closer to the edge, keeping all intermediate pixels selectable, or to use pointer acceleration so that a quick succession of long-slow and short-fast movements can simulate clutching.
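The edge constraint can be made concrete with a one-dimensional reachability check. This sketch assumes a simplified geometry (finger travel limited only by the screen edge in the target's direction) and is not part of the disclosed method:

```python
def target_reachable(contact_x, target_x, screen_width, cd_ratio=8.0):
    """1-D sketch of the edge problem described above (assumed geometry).

    With lift-off selection and a CD ratio above 1, the pointer can only
    travel (finger travel available toward the edge) / CD in display space.
    If the target lies farther away, it cannot be reached without snapping
    the pointer or using pointer acceleration.
    """
    room = (screen_width - contact_x) if target_x > contact_x else contact_x
    return abs(target_x - contact_x) <= room / cd_ratio
```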
While the above detailed description has given at least one example embodiment, it should be appreciated that a vast number of variations exist. It should also be appreciated that the example embodiment or embodiments described herein are not intended to limit the scope, applicability, or configuration of the systems, methods, or devices in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the described embodiment or embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope defined by the claims, which includes known equivalents and foreseeable equivalents at the time of filing this patent application.
Claims (20)
1. A method for presenting a target item on a display, the method comprising:
receiving a selection of the target item on a first portion of the display;
determining whether the selection of the target item satisfies a predetermined condition; and
displaying a representation of the target item on a second portion of the display when the selection of the target item satisfies the predetermined condition.
2. The method of claim 1, wherein determining whether the selection of the target item satisfies a predetermined condition comprises determining whether an input associated with the selection of the target item on the display exceeds an input threshold.
3. The method of claim 2, wherein the input threshold is associated with a temporal threshold.
4. The method of claim 1, wherein determining whether the selection of the target item satisfies a predetermined condition comprises determining whether a size of the target item does not exceed a size threshold.
5. The method of claim 1, wherein the representation of the target item in the second portion of the display is larger than the target item in the first portion of the display.
6. The method of claim 1, further comprising adjusting a position of the second portion of the display based on the received input.
7. The method of claim 1, further comprising removing the representation of the target item in the second portion of the display when a selection of the target item is no longer received.
8. A computer storage medium encoding computer-executable instructions that, when executed by at least one processor, perform a method for presenting a target item on a display, the method comprising:
receiving a selection of the target item on a first portion of the display;
determining whether the selection of the target item satisfies a predetermined condition; and
displaying a representation of the target item on a second portion of the display when the selection of the target item satisfies the predetermined condition.
9. The computer-readable storage medium of claim 8, wherein determining whether the selection of the target item satisfies a predetermined condition comprises determining whether an input associated with the selection of the target item on the display exceeds an input threshold.
10. The computer-readable storage medium of claim 9, wherein the input threshold is associated with a temporal threshold.
11. The computer-readable storage medium of claim 8, wherein determining whether the selection of the target item satisfies a predetermined condition comprises determining whether a size of the target item does not exceed a size threshold.
12. The computer-readable storage medium of claim 8, wherein the representation of the target item in the second portion of the display is larger than the target item in the first portion of the display.
13. The computer-readable storage medium of claim 8, further comprising adjusting a position of the second portion of the display based on the received input.
14. The computer-readable storage medium of claim 8, further comprising removing the representation of the target item in the second portion of the display when a selection of the target item is no longer received.
15. A computer system for presenting a target item on a display, the system comprising:
one or more processors; and
a memory coupled to the one or more processors, the memory to store instructions that, when executed by the one or more processors, cause the one or more processors to perform a method for presenting a target item on a display, the method comprising:
receiving a selection of the target item on a first portion of the display;
determining whether the selection of the target item satisfies a predetermined condition; and
displaying a representation of the target item on a second portion of the display when the selection of the target item satisfies the predetermined condition.
16. The computer system of claim 15, wherein determining whether the selection of the target item satisfies a predetermined condition comprises determining whether an input associated with the selection of the target item on the display exceeds an input threshold.
17. The computer system of claim 16, wherein the input threshold is associated with a temporal threshold.
18. The computer system of claim 15, wherein determining whether the selection of the target item satisfies a predetermined condition comprises determining whether a size of the target item does not exceed a size threshold.
19. A method for presenting a target item on a display, the method comprising:
receiving a selection of the target item on a first portion of the display;
in response to receiving a selection of the target item, displaying a representation of the target item in a second portion of the display; and
removing the representation of the target item in the second portion of the display when a selection of the target item is no longer received.
20. A computer storage medium encoding computer-executable instructions that, when executed by at least one processor, perform a method for presenting a target item on a display, the method comprising:
receiving a selection of the target item on a first portion of the display;
in response to receiving a selection of the target item, displaying a representation of the target item in a second portion of the display; and
removing the representation of the target item in the second portion of the display when a selection of the target item is no longer received.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US11/635,730 | 2006-12-07 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1181861A (en) | 2013-11-15 |
| HK1181861B (en) | 2018-06-29 |
Similar Documents
| Publication | Title |
|---|---|
| CN101553775B (en) | Operating a touch screen interface |
| AU2013223015B2 (en) | Method and apparatus for moving contents in terminal |
| US12299275B2 (en) | Devices, methods, and systems for performing content manipulation operations |
| US8531410B2 (en) | Finger occlusion avoidance on touch display devices |
| US20130215018A1 (en) | Touch position locating method, text selecting method, device, and electronic equipment |
| US9841890B2 (en) | Information processing device and information processing method for improving operability in selecting graphical user interface by generating multiple virtual points of contact |
| US8525854B2 (en) | Display device and screen display method |
| CN108064371A (en) | A kind of control method and device of flexible display screen |
| US20110043453A1 (en) | Finger occlusion avoidance on touch display devices |
| CN112083989A (en) | Interface adjusting method and device |
| AU2011253778B2 (en) | Operating touch screen interfaces |
| HK1181861A (en) | Method and computer system for rendering a target item on a display |
| HK1181861B (en) | Method and computer system for rendering a target item on a display |
| AU2016203054B2 (en) | Operating touch screen interfaces |
| CN105190512A (en) | Screen operation method for electronic device based on electronic device and control action |
| US20090237357A1 (en) | Method And Cursor-Generating Device For Generating A Cursor Extension On A Screen Of An Electronic Device |
| CN117666856A (en) | Control methods, devices and equipment for virtual interactive interfaces in extended real space |