HK1137525B - Language input interface on a device - Google Patents
- Publication number
- HK1137525B (application HK10100973.9A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- candidate
- virtual keyboard
- symbol string
- phonetic symbol
- candidates
- Prior art date
- 2008-03-04
Description
Technical Field
The subject matter of this specification relates generally to text input interfaces.
Background
A conventional computer keyboard may be too large for a portable device such as a mobile phone, a multimedia player, or a personal digital assistant (PDA). Some portable devices have smaller versions of traditional computer keyboards or use virtual keyboards to receive user input. A virtual keyboard may take the form of a software application, or a feature of a software application, that simulates a computer keyboard. For example, on a portable device with a touch-sensitive display, a user may enter text with a virtual keyboard by selecting or tapping areas of the touch-sensitive display that correspond to keys of the virtual keyboard.
These smaller keyboards and virtual keyboards may have keys that correspond to more than one character. For example, some keys may by default correspond to a character of the English language, e.g., the letter "a", and may also correspond to other additional characters, such as another letter or the letter with an accent option, e.g., the character "ä", or other characters with accent options. Because of the physical limitations (e.g., size) of the virtual keyboard, a user may find it difficult to type characters that are not readily available on the virtual keyboard.
Input methods for devices with multi-lingual environments may present unique challenges for input and spelling correction that may need to be adapted to the selected language to ensure accuracy and efficient workflow.
Disclosure of Invention
In general, one aspect of the subject matter described in this specification can be implemented in a method comprising the acts of: presenting a virtual keyboard in a first area of a touch-sensitive display of a device; receiving, on the virtual keyboard, an input representing a phonetic symbol string; presenting the entered phonetic symbol string in a second area of the touch-sensitive display; identifying one or more candidate objects based on the phonetic symbol string; presenting at least a subset of the candidate objects in the first area or the second area; receiving an input selecting one of the candidate objects; and replacing the entered phonetic symbol string with the selected candidate object. Other embodiments of this aspect include corresponding systems, apparatus, computer program products, and computer-readable media.
Particular embodiments of the subject matter described in this specification can be implemented to realize one or more of the following advantages. Text in languages requiring phonetic-symbol-string-to-character conversion can be entered more efficiently on portable devices. Error correction and word prediction techniques may be applied to East Asian language input.
The details of one or more embodiments of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
Drawings
FIG. 1 is a block diagram of an exemplary mobile device;
FIG. 2 is a block diagram of an exemplary implementation of the mobile device of FIG. 1;
FIGS. 3A-3F illustrate exemplary user interfaces for entering text; and
FIG. 4 illustrates an exemplary text entry process.
Like reference numbers and designations in the various drawings indicate like elements.
Detailed Description
Exemplary Mobile device
Fig. 1 is a block diagram of an exemplary mobile device 100. The mobile device 100 may be, for example, a handheld computer, a personal digital assistant, a cellular telephone, a network appliance, a camera, a smart phone, an Enhanced General Packet Radio Service (EGPRS) mobile phone, a network base station, a media player, a navigation device, an email device, a game console, or a combination of any two or more of these or other data processing devices.
Overview of Mobile devices
In some implementations, the mobile device 100 has a touch-sensitive display 102. The touch-sensitive display 102 may implement liquid crystal display (LCD) technology, light emitting polymer display (LPD) technology, or some other display technology. The touch-sensitive display 102 may be sensitive to haptic and/or tactile contact by the user.
In some implementations, the touch-sensitive display 102 may be a multi-touch-sensitive display. The multi-touch-sensitive display 102 can, for example, process multiple simultaneous touch points, including processing data related to the pressure, degree, and/or position of each touch point. Such processing facilitates gestures and interactions with multiple fingers, chording, and other interactions. Other touch-sensitive display technologies may also be used, such as displays in which contact is made using a stylus or other pointing device. Some examples of multi-touch-sensitive display technologies are described in U.S. Patents 6,323,846, 6,570,557, 6,677,932, and 6,888,536, each of which is incorporated herein by reference in its entirety.
In some implementations, the mobile device 100 can display one or more graphical user interfaces on the touch-sensitive display 102 to provide a user with access to various system objects and to communicate information to the user. In some implementations, the graphical user interface may include one or more display objects 104, 106. In the illustrated example, the display objects 104, 106 are graphical representations of system objects. Some examples of system objects include device functions, applications, windows, files, alarms, events, or other recognizable system objects.
Exemplary Mobile device functionality
In some implementations, mobile device 100 can implement multiple device functions, such as a telephony device represented by telephony object 110; an email device represented by email object 112; a network data communication device represented by a Web object 114; Wi-Fi base station equipment (not shown); and a media processing device represented by media player object 116. In some implementations, particular display objects 104, such as phone object 110, email object 112, network (Web) object 114, and media player object 116, may be displayed in a menu bar 118. In some implementations, device functionality may be accessed from a top-level graphical user interface, such as the graphical user interface shown in FIG. 1. For example, touching one of the objects 110, 112, 114, or 116 may invoke a corresponding function.
In some implementations, the mobile device 100 can implement network distribution functionality. For example, this functionality may enable a user to take the mobile device 100 along while traveling and still provide access to its associated network. In particular, the mobile device 100 may extend Internet access (e.g., Wi-Fi) to other wireless devices in the vicinity. For example, mobile device 100 may be configured as a base station for one or more devices. As such, the mobile device 100 may grant or deny network access to other wireless devices.
In some implementations, after device functionality is enabled, the graphical user interface of mobile device 100 changes, or is added to or replaced by another user interface or user interface element, to facilitate user access to particular functionality associated with the respective device functionality. For example, in response to a user touching the phone object 110, the graphical user interface of the touch-sensitive display 102 may present display objects related to various phone functions; likewise, touching email object 112 may cause the graphical user interface to present display objects related to various email functions; touching the web object 114 may cause the graphical user interface to present display objects related to various web surfing functions; and touching media player object 116 may cause the graphical user interface to present display objects related to various media processing functions.
In some implementations, the top-level graphical user interface environment or state of FIG. 1 can be restored by pressing a button 120 located near the bottom of the mobile device 100. In some implementations, each respective device function can have a respective "home" display object displayed on the touch-sensitive display 102, and the graphical user interface environment of FIG. 1 can be restored by pressing the "home" display object.
In some embodiments, the top-level graphical user interface may include additional display objects 106, such as a Short Messaging Service (SMS) object 130, a calendar object 132, a photos object 134, a camera object 136, a calculator object 138, a stocks object 140, a weather object 142, a maps object 144, a notes object 146, a clocks object 148, an address book object 150, and a settings object 152. For example, touching the SMS display object 130 may invoke an SMS messaging environment and support functions; likewise, each selection of a display object 132, 134, 136, 138, 140, 142, 144, 146, 148, 150, and 152 may invoke a corresponding object environment and functionality.
Additional and/or different display objects may also be displayed in the graphical user interface of FIG. 1. For example, if the device 100 is acting as a base station for other devices, one or more "connection" objects may appear in the graphical user interface to indicate a connection. In some implementations, the user can configure the display objects 106, e.g., the user can specify which display objects 106 to display, and/or can download additional applications or other software that provide other functionality and corresponding display objects.
In some implementations, the mobile device 100 can include one or more input/output (I/O) devices and/or sensor devices. For example, a speaker 160 and a microphone 162 may be included to facilitate voice-enabled functions, such as telephone and voice-mail functions. In some implementations, an up/down button 184 may be included for volume control of the speaker 160 and microphone 162. The mobile device 100 may also include an on/off button 182 for a ringer for incoming phone calls. In some implementations, a loudspeaker 164 may be included to facilitate hands-free voice functions, such as a speakerphone function. An audio jack 166 may also be included for headphones and/or a microphone.
In some implementations, a proximity sensor 168 can be included to facilitate detecting that the user is positioning the mobile device 100 proximate to the user's ear and, in response, disengaging the touch-sensitive display 102 to prevent accidental function invocations. In some implementations, the touch-sensitive display 102 can be turned off when the mobile device 100 is proximate to the user's ear to conserve additional power.
Other sensors may also be used. For example, in some implementations, an ambient light sensor 170 may be utilized to facilitate adjusting the brightness of the touch-sensitive display 102. In some implementations, an accelerometer 172 can be utilized to detect movement of the mobile device 100, as indicated by directional arrow 174. Accordingly, display objects and/or media may be presented according to a detected orientation, e.g., portrait or landscape. In some implementations, the mobile device 100 can include circuitry and sensors to support location determination capabilities, such as those provided by the Global Positioning System (GPS) or other positioning systems (e.g., systems using Wi-Fi access points, television signals, cellular grids, Uniform Resource Locators (URLs)). In some implementations, a positioning system (e.g., a GPS receiver) can be integrated into the mobile device 100 or provided as a stand-alone device that can be connected to the mobile device 100 through an interface (e.g., the port device 190) to provide access to location-based services.
In some implementations, a port device 190, such as a Universal Serial Bus (USB) port, or a docking port, or some other wired port connection, may be included. For example, port device 190 may be utilized to establish a wired connection to other computing devices, such as other communication devices 100, network access devices, personal computers, printers, display screens, or other processing devices capable of receiving and/or transmitting data. In some implementations, the port device 190 allows the mobile device 100 to synchronize with a host device using one or more protocols, such as, for example, TCP/IP, HTTP, UDP, and any other known protocols.
The mobile device 100 may also include a camera lens and sensor 180. In some implementations, the camera lens and sensor 180 can be located on the back surface of the mobile device 100. The camera may capture still images and/or video.
The mobile device 100 may also include one or more wireless communication subsystems, such as an 802.11b/g communication device 186 and/or a Bluetooth™ communication device 188. Other communication protocols may also be supported, including other 802.x communication protocols (e.g., WiMax, Wi-Fi, 3G), Code Division Multiple Access (CDMA), Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), etc.
Exemplary Mobile device architecture
Fig. 2 is a block diagram 200 of an exemplary implementation of the mobile device 100 of fig. 1. The mobile device 100 may include a memory interface 202, one or more data processors, image processors and/or central processing units 204, and a peripheral interface 206. The memory interface 202, the one or more processors 204, and/or the peripherals interface 206 can be separate components or can be integrated in one or more integrated circuits. The various components in the mobile device 100 may be coupled by one or more communication buses or signal lines.
Sensors, devices, and subsystems can be coupled to peripheral interface 206 to facilitate multiple functions. For example, a motion sensor 210, a light sensor 212, and a proximity sensor 214 may be coupled to the peripheral interface 206 to facilitate the orientation, lighting, and proximity functions described with reference to fig. 1. Other sensors 216 may also be connected to the peripheral interface 206, such as a positioning system (e.g., a GPS receiver), a temperature sensor, a biometric sensor (biometric sensor), or other sensing device, in order to implement the relevant functions.
A camera subsystem 220 and an optical sensor 222, such as a Charge Coupled Device (CCD) or Complementary Metal Oxide Semiconductor (CMOS) optical sensor, may be utilized to facilitate camera functions, such as recording photographs and video clips.
Communication functions may be facilitated by one or more wireless communication subsystems 224, which may include radio-frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of the communication subsystem 224 may depend on the communication network(s) over which the mobile device 100 is intended to operate. For example, the mobile device 100 may include communication subsystems 224 designed to operate over a GSM network, a GPRS network, an EDGE network, a Wi-Fi or WiMax network, and a Bluetooth™ network. In particular, the wireless communication subsystem 224 may include a hosting protocol such that the device 100 may be configured as a base station for other wireless devices.
The audio subsystem 226 may be coupled to a speaker 228 and a microphone 230 to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions.
The I/O subsystem 240 may include a touchscreen controller 242 and/or other input controller(s) 244. The touchscreen controller 242 may be coupled to a touchscreen 246. Touch screen 246 and touch screen controller 242 may, for example, detect contact and movement or breaking thereof using any of a number of touch sensitive technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other elements or other proximity sensor arrays for determining one or more points of contact with touch screen 246.
Other input controller(s) 244 may be coupled to other input/control devices 248, such as one or more buttons, rocker switches, thumbwheels, infrared ports, USB ports, and/or a pointer device such as a stylus. The one or more buttons (not shown) may include an up/down button for volume control of the speaker 228 and/or the microphone 230.
In one implementation, pressing the button for a first duration may unlock the touch screen 246; and pressing the button for a second duration longer than the first duration may turn power to the mobile device 100 on or off. The user may be able to customize the functionality of one or more buttons. The touch screen 246 may also be used to implement virtual or soft buttons and/or a keyboard, for example.
In some implementations, the mobile device 100 can present recorded audio and/or video files, such as MP3, AAC, and MPEG files. In some implementations, the mobile device 100 may include the functionality of an MP3 player, such as an iPod™. Thus, the mobile device 100 may include a 30-pin connector that is compatible with the iPod. Other input/output and control devices may also be used.
The memory interface 202 may be coupled to a memory 250. Memory 250 may include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). The memory 250 may store an operating system 252, such as Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks. The operating system 252 may include instructions for handling basic system services and for performing hardware-dependent tasks. In some implementations, the operating system 252 may be a kernel (e.g., a UNIX kernel).
Memory 250 may also store communication instructions 254 to facilitate communication with one or more additional devices, one or more computers, and/or one or more servers. Memory 250 may include graphical user interface instructions 256 for facilitating graphical user interface processing; sensor processing instructions 258 for facilitating sensor-related processing and functions; telephony instructions 260 for facilitating telephony-related processes and functions; electronic messaging instructions 262 for facilitating electronic-messaging-related processes and functions; web browsing instructions 264 for facilitating web-browsing-related processes and functions; media processing instructions 266 for facilitating media-processing-related processes and functions; GPS/navigation instructions 268 for facilitating GPS- and navigation-related processes and functions; camera instructions 270 for facilitating camera-related processes and functions; and/or other software instructions 272 for facilitating other processes and functions, such as security processes and functions. Memory 250 may also store other software instructions (not shown), such as network video instructions for facilitating network-video-related processes and functions, and/or online shopping instructions for facilitating online-shopping-related processes and functions. In some implementations, the media processing instructions 266 are divided into audio processing instructions and video processing instructions to facilitate audio-processing-related processes and functions and video-processing-related processes and functions, respectively. An activation record and an International Mobile Equipment Identity (IMEI) 274 or similar hardware identifier may also be stored in the memory 250.
Language data 276 may also be stored in memory 250. The language data 276 may include, for example, a dictionary of one or more languages (i.e., a list of possible words in a language), a dictionary of characters and corresponding phonetics, one or more corpora of characters and character combinations, and so on.
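The specification does not prescribe a storage format for the language data 276. The following Python sketch shows one plausible in-memory organization of such data; all names, sample entries, and frequency counts are illustrative assumptions and are not taken from the patent.

```python
# Illustrative sketch of language data 276 (names and entries are hypothetical).
language_data = {
    # word list for a language (a simple lexicon)
    "lexicon": {"ja": ["線", "千", "先生"]},

    # mapping from a phonetic reading to candidate characters/words
    "readings": {
        "せん": ["線", "千", "先", "銭"],
        "せんせい": ["先生"],
    },

    # corpus-derived frequency counts used to rank candidates
    "frequency": {"線": 1200, "千": 950, "先": 800, "銭": 40, "先生": 2100},
}

def candidates_for_reading(reading: str) -> list[str]:
    """Return candidates for a phonetic reading, most frequent first."""
    cands = language_data["readings"].get(reading, [])
    return sorted(cands, key=lambda c: -language_data["frequency"].get(c, 0))

print(candidates_for_reading("せん"))  # ['線', '千', '先', '銭']
```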
Each of the above instructions and applications may correspond to a set of instructions for performing one or more of the functions described above. These instructions need not be implemented as separate software programs, procedures or modules. Memory 250 may include additional instructions or fewer instructions. In addition, various functions of the mobile device 100 may be implemented in hardware and/or software, including in one or more signal processing and/or application specific integrated circuits.
Language input interface
FIGS. 3A-3F illustrate exemplary user interfaces for entering multi-lingual text on the mobile device 100. The mobile device 100 may display a text entry area 302 and a virtual keyboard 304 on the touch-sensitive display 102. The text entry area 302 may be any area in which entered text can be displayed, e.g., in a note-taking application, an email application, and so on. In some implementations, the text entry area 302 can be one or more text fields located in a document (e.g., a web page rendered in a web browser application). The virtual keyboard 304 includes one or more virtual keys 303, where each virtual key corresponds to a letter of an alphabet (e.g., the Latin alphabet). The virtual keyboard 304 may include a keyboard toggle key 308 for switching between letter keys and keys for numbers, punctuation marks, etc. (i.e., either letter keys or number/punctuation keys may be displayed in the virtual keyboard 304). The user may enter text by touching the touch-sensitive display 102 over the area of a desired key of the virtual keyboard 304; the user selects or clicks the desired key of the virtual keyboard 304. The letters, numbers, etc. corresponding to the clicked keys are displayed in the text entry area 302 as unconverted current input 310-A. The user may click the backspace key 306 to delete the last entered character.
In some implementations, the mobile device 100 has the capability to enter non-English text using a Latin-alphabet virtual keyboard. For example, the mobile device 100 may have the capability to enter Chinese and/or Japanese text (including Chinese or Japanese characters and symbols) using a Latin-alphabet virtual keyboard (e.g., a virtual keyboard with letters arranged in a QWERTY layout). For example, the mobile device 100 may include a Chinese or Japanese text entry mode that utilizes a Latin-alphabet keyboard. The user may use the virtual keyboard to type alphabetic phonetic symbol strings representing sounds or syllables of the non-English language. For example, a user may type the romanization (e.g., pinyin or romaji) of one or more Chinese or Japanese characters or symbols using the virtual keyboard.
For convenience, implementations in this specification will be described with reference to the entry of Japanese text. It should be understood, however, that the described implementations may be applied to other non-English languages (e.g., Chinese). More generally, regardless of the language, the described implementations may be applied to any text input interface that involves identifying, presenting, and selecting candidates for input (e.g., Latin-alphabet spelling to non-Latin-alphabet text, spelling and grammar correction, thesaurus features, etc.).
When the user enters the first letter of the phonetic symbol string, as shown in FIG. 3A, that letter is displayed in the text entry area 302 as unconverted current input 310-A. In some implementations, the input 310-A is displayed underlined or in some other format (e.g., bold text, italics, highlighting). The underlining/formatting indicates that the input is a provisional input that is subject to conversion pending additional input from the user, whether that additional input is additional letters or the user's selection of a candidate object. For example, in FIG. 3A, the user clicks the "s" key, and the letter "s" is displayed underlined in the text entry area 302 as the current input 310-A.
The virtual keyboard 304 may include a "confirm" key 314 that accepts the displayed input 310-a as it is when clicked by the user. The accepted input is displayed without underlining. For example, in fig. 3A, the user may click on the "ok" key 314 to accept the entered string "s" as is; "s" is shown without underlining. In some implementations, clicking the "OK" key 214 also adds a space after the accepted input. In some other implementations, adding a space after the accepted input depends on whether the accepted input is a language in which a space separates words (words) and/or whether the accepted input is the end of a sentence, for example. In some implementations, the key 314 is a "space" key that accepts the current input as is when pressed, effectively functioning as a "confirm" key.
The virtual keyboard 304 may also include a "show candidates" key 312. By clicking the "show candidates" key 312, the user may bring up a candidate options box containing candidate characters, symbols, and combinations thereof (e.g., kanji, kana combinations) to replace the input 310-A. The candidate options box is described further below.
Continuing from the exemplary input 310-A shown in FIG. 3A, the user then clicks the letter "e" on the keyboard, resulting in the string "se". The device 100 may convert the string "se" to the hiragana symbol "せ", where "se" is the romanization of the hiragana symbol "せ", and, as shown in FIG. 3B, the hiragana symbol "せ" is displayed as the underlined converted current input 310-B. The user may click the "confirm" key 314 to accept the hiragana symbol "せ" as-is; "せ" is then displayed without underlining. Alternatively, the user may click the "show candidates" key 312 to bring up a candidate options box for the string "se" (e.g., characters whose phonetic reading begins with "se").
Continuing from the exemplary input 310-B shown in FIG. 3B, the user then clicks the "n" key, resulting in the string "sen". The trailing letter "n" is converted into the hiragana symbol "ん", where "n" is the romanization of the hiragana symbol "ん", and it is appended to the already converted hiragana symbol "せ". As shown in FIG. 3C, the hiragana string "せん" is displayed as the underlined converted current input 310-B.
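The incremental conversion from typed letters to kana illustrated in FIGS. 3A-3C can be sketched as a greedy longest-match lookup. The table and function below are illustrative assumptions (a real input method would use a complete romanization table) and are not the patent's implementation.

```python
# Tiny romaji-to-hiragana table; a real input method would use a complete one.
ROMAJI_TO_KANA = {"se": "せ", "n": "ん", "sa": "さ", "shi": "し", "ka": "か"}

def romaji_to_hiragana(s: str) -> str:
    """Greedy longest-match conversion of a romaji string to hiragana.
    Unmatched trailing letters are kept as-is (e.g. 's' stays 's' until
    the next keystroke completes a syllable)."""
    out, i = "", 0
    while i < len(s):
        for length in (3, 2, 1):                 # try longest syllables first
            chunk = s[i:i + length]
            if chunk and chunk in ROMAJI_TO_KANA:
                out += ROMAJI_TO_KANA[chunk]
                i += len(chunk)
                break
        else:
            out += s[i]                          # keep unconverted letter
            i += 1
    return out

print(romaji_to_hiragana("s"))    # 's'    (FIG. 3A)
print(romaji_to_hiragana("se"))   # 'せ'   (FIG. 3B)
print(romaji_to_hiragana("sen"))  # 'せん' (FIG. 3C)
```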
In some implementations, the device 100 may display one or more suggested candidates 318 in a row for the input 310-B. Suggested candidates may include single characters, phonetic symbols (e.g., Japanese kana), and combinations of multiple characters and/or phonetic symbols. For example, in FIG. 3C, the kanji character "線" (thread) is shown as a suggested candidate for "せん"; "せん" ("sen") is the on'yomi (sound) reading of the kanji character "線". In some implementations, the user may click a suggested candidate (i.e., touch the touch-sensitive display 102 over the area of the desired suggested candidate) to select it, continue to tap letter keys on the virtual keyboard 304 to add to the input 310-B, or click the "show candidates" key 312 to bring up a candidate options box, among other actions. If the user selects a suggested candidate, the selected suggested candidate is displayed as accepted input 336, as shown in FIG. 3F. If the user continues to type letter keys on the virtual keyboard 304, the current input 310-B is extended and the possible candidates for the current input 310-B are narrowed.
In some implementations, the device 100 determines one or more suggested candidates 318 presented to the user as the best matches for the input 310-B based on one or more criteria (e.g., frequency in language, exact matches, etc.).
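One plausible way to realize the "best match" determination described above is to score candidates by exact-reading match and corpus frequency. The following sketch assumes hypothetical `readings` and `frequency` tables; it illustrates the criteria named in the paragraph and is not the device's actual algorithm.

```python
def rank_candidates(phonetic: str, readings: dict, frequency: dict, top_n: int = 3):
    """Rank candidates for the current input: exact-reading matches first,
    then by corpus frequency. 'readings' maps a reading to candidate strings
    and 'frequency' maps a candidate to a corpus count; both are assumed inputs."""
    scored = []
    for reading, cands in readings.items():
        if not reading.startswith(phonetic):
            continue                        # only readings extending the input
        for cand in cands:
            exact = (reading == phonetic)   # exact-reading matches come first
            scored.append((not exact, -frequency.get(cand, 0), cand))
    return [cand for _, _, cand in sorted(scored)[:top_n]]

readings = {"せん": ["線", "千"], "せんせい": ["先生"]}
frequency = {"線": 1200, "千": 950, "先生": 2100}
print(rank_candidates("せん", readings, frequency))  # ['線', '千', '先生']
```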
In some implementations, the device 100 may display more candidates when the user clicks an arrow graphical object 319 or the like on the touch-sensitive display 102. For example, when the user clicks the arrow 319, the candidate options box 322 may be displayed. Alternatively, the row of suggested candidates may be expanded to display more candidates. The arrow 319 gives the user a cue that additional candidates are available.
In some implementations, the user may click the confirmation key 314 once to select a first one of the suggested candidates 318, quickly click the confirmation key 314 twice in succession to select a second one of the suggested candidates 318, and so on.
If the user clicks the "show candidates" key 312 or arrow 319, a candidate options box 322 may be displayed as shown in FIG. 3D. In some implementations, the candidate object checkbox 322 is displayed in place of the virtual keyboard 304. In some implementations, the candidate options box 322 is displayed over all or a portion of the text entry area 302. In some implementations, the candidate object checkbox 322 slides (slide) over the virtual keyboard 304 or the text entry area 302 and the slide is displayed as an animation effect. When the candidate object box 322 is moved out of view, the candidate object box 322 may slide out (slide off) of the touch-sensitive display 102.
The candidate options box 322 may include one or more candidate keys 330, where each candidate key 330 corresponds to a candidate for conversion of the input 310-B. The candidates, whether for the candidate keys 330 or the suggested candidates 318, may be characters, phonetic or syllabic symbols (e.g., kana symbols), pinyin, multi-character combinations forming words or phrases, multi-symbol combinations forming words or phrases, combinations of characters and symbols forming words or phrases, and so on. Candidates may include characters whose phonetic reading is the input 310-B, characters whose phonetic reading begins with the input 310-B, words beginning with the input 310-B, etc. For example, in FIG. 3D, the candidate options box 322 includes candidate keys 330 corresponding to kanji characters having the reading "せん". In some implementations, the candidates in the candidate options box are ordered based on various criteria regarding which candidate is the best candidate.
In some implementations, candidates for the suggested candidates 318 and the candidate options box 322 are identified and ordered using predictive text and/or error correction techniques, examples of which include fuzzy matching, techniques for determining a cursor position based on finger contact, and so on. An example of a predictive text technique is disclosed in "An Efficient Text Input Method for Pen-based Computers" by Masui, Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '98), Addison-Wesley, April 1998, pages 328-335, which is incorporated herein by reference in its entirety. An example of a technique for determining a cursor position based on finger contact is disclosed in U.S. Patent Application No. 11/850,015 (Publication No. US 2008/0094356), entitled "Methods for Determining a Cursor Position from a Finger Contact with a Touch Screen Display", filed September 4, 2007, which is hereby incorporated by reference in its entirety. For example, determining a cursor position based on finger contact may include (a) detecting a contact area of a finger with the touch screen display, (b) determining a first position associated with the contact area, and (c) determining the cursor position based on one or more factors. These factors may include (1) the first position, (2) one or more distances between the first position and one or more user interface objects associated with the touch screen display (e.g., icons including an open icon, a close icon, a delete icon, an exit icon, or a soft-key icon), and (3) one or more activity-sensitive numbers, each of which is associated with a respective user interface object.
The contact area may be, for example, an elliptical area having a major axis and a perpendicular minor axis. The first location may be, for example, the centroid of the contact region.
The distance between the first location and the user interface object may be the distance between the first location and a point on the user interface object that is closest to the first location. Alternatively, the distance may be a distance between the first location and a center point of the user interface object. In some implementations, if the determined cursor position is on a particular user interface object (or in a "click region" of an object), the user interface object is activated to perform a predetermined operation.
A particular user interface object may be assigned a particular activity sensitive number, for example, based on the operation associated with each object. The activity-sensitive number may, for example, adjust the determined cursor position such that the cursor position is dragged closer to the particular user interface object, thereby making it easier to activate.
In some implementations, the cursor position is determined based on the first location, the activity-sensitive number associated with the user interface object closest to the first location, and the distance between the first location and the user interface object closest to the first location. In these embodiments, the cursor position is not affected by parameters associated with other neighboring user interface objects.
In some implementations, when one or more user interface objects fall within a predetermined distance of the first location, the cursor position is determined based on the first location, the activity sensitive number associated with each user interface object that falls within the predetermined distance, and the distance between the first location and each of the user interface objects. Alternatively, in some implementations, when one or more user interface objects fall within a contact area (or within a predetermined distance of the contact area) of a user's finger in contact with the touch screen display, the cursor position is determined based on the first position, an activity sensitive number associated with each user interface object that falls within the contact area (or within a predetermined distance within the contact area), and a distance between the first position and each of the user interface objects.
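The cursor-adjustment technique summarized above (a first position, distances to nearby user interface objects, and per-object activity-sensitive numbers) can be illustrated with a simple weighted average. The weighting formula below is only an illustrative assumption and is not taken from the cited application.

```python
import math

def adjusted_cursor(first_pos, objects, radius=40.0):
    """Nudge the cursor from the centroid of the finger contact toward nearby
    user interface objects, weighting each object by its activity-sensitive
    number. first_pos is the (x, y) centroid of the contact area; objects is
    a list of dicts {"pos": (x, y), "sensitivity": float}."""
    fx, fy = first_pos
    wx, wy, total = fx, fy, 1.0
    for obj in objects:
        ox, oy = obj["pos"]
        d = math.hypot(ox - fx, oy - fy)
        if d <= radius:                          # only objects near the contact count
            w = obj["sensitivity"] / (1.0 + d)   # closer and more sensitive => stronger pull
            wx += w * ox
            wy += w * oy
            total += w
    return (wx / total, wy / total)

delete_icon = {"pos": (100.0, 20.0), "sensitivity": 0.5}   # harder to activate by accident
send_button = {"pos": (70.0, 25.0), "sensitivity": 3.0}    # easier to activate
print(adjusted_cursor((80.0, 22.0), [delete_icon, send_button]))
```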
In some implementations, if the candidate object box 322 is displayed on the virtual keyboard 304, the candidate object box 322 may further include a keyboard toggle 328 for switching back to the virtual keyboard 304. Candidate box 322 may also include a previous candidate key 326 and/or a next candidate key 324 for moving back and forth in the set of candidate keys 330 within candidate box 322. In some implementations, the candidate box 322 also includes a confirmation key 314.
The user may click a candidate key 330 to replace the input 310-B with the candidate corresponding to the clicked candidate key 330. For example, as seen in FIG. 3D, if the user clicks the key corresponding to the candidate character "千" (thousand) (key 332), the input 310-B is replaced with the character "千". As shown in FIG. 3E, the character "千" is displayed as accepted input 336. In FIG. 3E, the candidate options box 322 reverts to the virtual keyboard 304. The virtual keyboard 304 may include a "space" key 334 and an "enter" key 332 in place of the "confirm" key 314 and the "show candidates" key 312, respectively. As can be seen in FIG. 3F, the user can then enter a new phonetic symbol string input.
In some implementations, virtual keyboard 304 may include keys for switching between multiple input keyboards for multiple languages.
In some implementations, the candidate options box 322 includes a cancel key 331 for returning to the virtual keyboard 304 from the candidate options box 322 without selecting a candidate object.
In some implementations, one of the suggested candidates 318 or one of the candidates in the candidate options box 322 is highlighted as the "currently selected" candidate. When the suggested candidates 318 or the candidate options box 322 is first displayed after the phonetic symbol string is entered, the initially highlighted candidate may be the phonetic symbol string itself or the "best" candidate among the suggested candidates 318 or in the candidate options box 322. The key 312 may be a "next candidate" key, where pressing the key moves the highlight to the next candidate. In some implementations, there may be a "previous candidate" key to move the highlight back to the previous candidate. The confirm key 314 may be used to accept the highlighted candidate.
In some other implementations, when the user inputs a phonetic symbol string, no candidate object is automatically selected or highlighted by default; the user can click the confirm key 314 to accept the phonetic symbol string as-is. The user may click the next candidate key (and optionally the previous candidate key) to move between candidates and highlight one of them. When a different candidate is highlighted, the current input 310-B changes to show the currently highlighted candidate, still underlined or otherwise formatted to indicate that the current input 310-B remains provisional. Clicking the enter key (e.g., enter key 332) confirms the currently selected candidate object or phonetic symbol string (i.e., whichever is displayed in the current input 310-B). Adding more phonetic symbols by tapping the virtual keyboard 304 also automatically accepts the currently selected candidate object or phonetic symbol string (i.e., whichever is displayed in the current input 310-B).
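The highlight-and-accept behavior described in the two preceding paragraphs can be sketched as a small state object. The class and method names below are illustrative assumptions, not part of the specification.

```python
class CandidateSelection:
    """Sketch of the highlight/accept behavior; the state layout is assumed."""

    def __init__(self, phonetic: str, candidates: list[str]):
        self.options = [phonetic] + candidates  # the phonetic string itself is selectable
        self.index = 0                          # initially the phonetic string is "selected"

    def next_candidate(self) -> str:
        self.index = (self.index + 1) % len(self.options)
        return self.options[self.index]         # shown underlined as current input 310-B

    def confirm(self) -> str:
        return self.options[self.index]         # accepted, displayed without underline

    def type_letter(self, letter: str) -> str:
        # adding more phonetic symbols auto-accepts whatever is currently shown
        return self.confirm() + letter

sel = CandidateSelection("せん", ["線", "千"])
sel.next_candidate()        # highlight '線'
print(sel.confirm())        # '線'
```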
Fig. 4 illustrates an exemplary text entry process 400. For convenience, process 400 will be described with reference to a device (e.g., device 100) that performs process 400.
A virtual keyboard is displayed in a first region of a touch-sensitive display of a device (402). For example, the device displays a virtual keyboard 304 on a portion of the touch-sensitive display 102.
An input is received to type a string of phonetic symbols on a virtual keyboard (404). The user may type one or more letters using the virtual keyboard. The typed letters may constitute a string of phonetic symbols. For example, the phonetic symbol string may be a pinyin for a character, word, etc. of a language that does not use the Latin alphabet.
The input phonetic symbol string is displayed in a second area of the display (406). The device 100 may display a string of phonetic symbols in a text entry area on the touch-sensitive display 102. In some implementations, the apparatus 100 converts the phonetic symbol string, for example, into symbols (e.g., japanese kana, chinese phonetic notation, etc.) corresponding to the phonetic symbol string.
One or more candidates matching the phonetic symbol string are identified (408). For example, the device 100 may look up the phonetic symbol string in a dictionary, character database, or the like, and find a matching character for the phonetic symbol string. In some implementations, the device 100 may segment the string of phonetic symbols based on syllables or other criteria and find candidates for each segment.
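The segmentation mentioned for step 408 can be sketched as a greedy longest-match walk over a reading dictionary, collecting candidates for each segment. The dictionary contents and function name below are illustrative assumptions.

```python
def segment_and_lookup(phonetic: str, readings: dict) -> list[list[str]]:
    """Greedy longest-match segmentation of a phonetic string, returning the
    candidates found for each segment (a simple stand-in for step 408)."""
    segments, i = [], 0
    while i < len(phonetic):
        for end in range(len(phonetic), i, -1):       # try the longest segment first
            piece = phonetic[i:end]
            if piece in readings:
                segments.append(readings[piece])
                i = end
                break
        else:
            segments.append([phonetic[i]])            # unknown symbol: keep as-is
            i += 1
    return segments

readings = {"せん": ["線", "千"], "せい": ["生", "正"], "せんせい": ["先生"]}
print(segment_and_lookup("せんせい", readings))   # [['先生']]
print(segment_and_lookup("せんか", readings))     # [['線', '千'], ['か']]
```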
At least a subset of the identified candidate objects is displayed in a first area of the touch-sensitive display (410). For example, the candidate may be displayed in a candidate options box 322 displayed in place of the virtual keyboard 304. In some implementations, if there are more candidates than can be placed within the options box 322, the user may navigate to the overflow candidate by clicking on the previous candidate key 326 or the next candidate key 324.
An input is received selecting one of the candidates (412). For example, the user may click on one of the candidate keys 330 in the candidate box 322 to select the corresponding candidate.
The displayed phonetic symbol string is replaced with the selected candidate (414). In some implementations, the selected candidate object is displayed on the touch-sensitive display in place of the input phonetic symbol string.
In some implementations, the virtual keyboard 304 and the candidate options box 322 may be dynamically resized based on the orientation of the touch-sensitive display 102. For example, FIGS. 3A-3F illustrate the virtual keyboard 304 or candidate options box 322 in a portrait orientation. If the device 100, and thus the touch-sensitive display 102, is rotated to a landscape orientation, the device 100 may detect the rotation and resize the virtual keyboard 304 and candidate options box 322 to fit the landscape width of the touch-sensitive display 102.
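A minimal sketch of the orientation-dependent resizing described above follows; the pixel heights and function name are made-up example values, not taken from the patent.

```python
def keyboard_frame(display_width: int, display_height: int,
                   portrait_height: int = 216, landscape_height: int = 162):
    """Return (x, y, width, height) for the virtual keyboard / candidate box,
    resized to span the full width of the display in its current orientation."""
    is_portrait = display_height >= display_width
    height = portrait_height if is_portrait else landscape_height
    return (0, display_height - height, display_width, height)

print(keyboard_frame(320, 480))   # portrait:  (0, 264, 320, 216)
print(keyboard_frame(480, 320))   # landscape: (0, 158, 480, 162)
```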
In some implementations, suggested candidate objects 318 are displayed in the same orientation as the text input, which may vary by language. For example, if text is displayed from left to right, suggested candidate 318 is displayed from left to right. If the text is displayed from right to left, the suggested candidate object 318 is displayed from right to left. If the text is displayed from top to bottom, the suggested candidate object 318 is displayed from top to bottom.
In some implementations, the phonetic symbol string may be entered by voice rather than tapping on the virtual keyboard 304. For example, device 100 may include a speech recognition module that receives and processes speech input from a user and generates a string of phonetic symbols based on the speech input. The device 100 may identify candidates for selection by the user for the phonetic symbol string generated by the speech recognition module.
The embodiments and functional operations disclosed in this specification may be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments may be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer-readable medium for execution by, or to control the operation of, data processing apparatus. The computer-readable medium can be a machine-readable storage medium, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software application, script, or code) can be written in any programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can be implemented as, special purpose logic circuitry, such as an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, the computer need not have such devices. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, such as internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, the disclosed embodiments can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other types of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be in any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; input from the user may be received in any form, including acoustic, speech, or tactile input.
The disclosed embodiments can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the techniques disclosed herein), or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected in any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), such as the Internet.
While this specification contains many specifics, these should not be construed as limitations on the scope as claimed or as that which may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. In addition, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features may in some cases be excised from the claimed combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the division of multiple system components in the embodiments described above should not be understood as requiring such division in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Specific embodiments of the subject matter described in this specification have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes illustrated in the figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
Claims (14)
1. An information processing method comprising:
presenting a virtual keyboard in a first region of a touch-sensitive display of a device;
receiving an input representing a string of phonetic symbols on the virtual keyboard;
presenting the string of phonetic symbols in a second area of the touch-sensitive display;
identifying one or more candidate objects based on the phonetic symbol string;
presenting a candidate object box in the first area in place of a virtual keyboard, the candidate object box including at least a subset of the candidate objects;
receiving input selecting one of the candidate objects; and
replacing the typed phonetic symbol string with the selected candidate.
2. The method of claim 1, wherein:
the phonetic symbol string comprises Chinese pinyin; and
the candidate object includes a chinese character.
3. The method of claim 1, wherein:
the phonetic symbol string comprises romanized Japanese (romaji); and
the candidate objects include one or more of the group consisting of japanese kanji characters and japanese kana symbols.
4. The method of claim 1, wherein the virtual keyboard comprises keys corresponding to letters of the Latin alphabet.
5. The method of claim 1, wherein the candidate objects comprise multi-character words.
6. The method of claim 1, wherein identifying one or more candidates based on the phonetic symbol string comprises identifying one or more candidates using text prediction from the phonetic symbol string.
7. The method of claim 6, wherein said presenting a candidate box comprising at least a subset of the candidates in the first area in place of a virtual keyboard comprises presenting the subset of candidates in an order determined based on the text prediction.
8. An information processing apparatus comprising:
means for presenting a virtual keyboard in a first region of a touch-sensitive display of the device;
means for receiving an input representing a string of phonetic symbols on the virtual keyboard;
means for presenting the string of phonetic symbols in a second area of the touch-sensitive display;
means for identifying one or more candidates based on the phonetic symbol string;
means for presenting a candidate object box in the first area in place of the virtual keyboard that includes at least a subset of the candidate objects;
means for receiving an input selecting one of the candidate objects; and
means for replacing the typed phonetic symbol string with the selected candidate.
9. The apparatus of claim 8, wherein:
the phonetic symbol string comprises Chinese pinyin; and
the candidate object includes a chinese character.
10. The apparatus of claim 8, wherein:
the phonetic symbol string comprises romanized Japanese (romaji); and
the candidate objects include one or more of the group consisting of japanese kanji characters and japanese kana symbols.
11. The device of claim 8, wherein the virtual keyboard comprises keys corresponding to letters of the Latin alphabet.
12. The apparatus of claim 8, wherein the candidate objects comprise multi-character words.
13. The apparatus of claim 8, wherein said means for identifying one or more candidates based on said phonetic symbol string comprises means for identifying one or more candidates using text prediction from said phonetic symbol string.
14. The apparatus of claim 13, wherein the means for presenting a candidate box in the first area in place of the virtual keyboard that includes at least a subset of the candidates comprises means for presenting the subset of candidates in an order determined based on the text prediction.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US 12/042,309 (US8289283B2) | 2008-03-04 | 2008-03-04 | Language input interface on a device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1137525A1 (en) | 2010-07-30 |
| HK1137525B (en) | 2013-09-19 |