WO2004010275A1 - Method and system for information input comprising microphones - Google Patents
Method and system for information input comprising microphones
- Publication number
- WO2004010275A1 (application PCT/SE2003/001231)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- microphones
- sound signals
- sound
- calculating
- calculation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/043—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using propagating acoustic waves
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1626—Constructional details or arrangements for portable computers with a single-body enclosure integrating a flat display, e.g. Personal Digital Assistants [PDAs]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1662—Details related to the integrated keyboard
- G06F1/1673—Arrangements for projecting a virtual keyboard
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
Definitions
- the present invention relates to a method for information input in an information processing system as well as an information processing system and a pointing device comprising such an information processing system.
- the information input is primarily carried out with a built-in keyboard.
- the keyboard has buttons that correspond to the desired character set.
- This keyboard is of normal size for certain computer types and in smaller format in portable computers of small size. It is a problem that the keyboard is relatively large, thus putting certain demands on the outer dimensions of the computer. It is also a problem that, due to the large number of electro-mechanical parts, the keyboard is relatively costly to manufacture. The smaller keyboards can also be quite slow to type on.
- the information input is normally carried out with a small built-in keyboard.
- the keys have numbers and when inputting text the user has to push the buttons several times to uniquely determine the correct character.
- a previous data input device is disclosed in US-patent numbers 3,909,785 and 3,838,212.
- Acoustic signals generated by a stylus are received by elongated capa- citive bar-type microphones.
- the stylus as well as the microphones are galvanically connected to processing circuitry.
- a time delay between a triggering pulse and an acoustic signal received by the bar-type microphones is used by the processing circuitry to calculate the shortest spatial distance between the stylus and the respective microphone.
- a drawback of such acoustic data input devices is hence that they require a configuration involving two-way communication between the stylus and the processing circuitry. That is, there is a need to establish an absolute timing frame of reference, e.g. by way of a triggering pulse, between the creation of an acoustic pulse and the reception of the acoustic pulse in the processing circuitry.
- Another drawback is the elongated microphones, which are large and expensive.
- the object of the invention is hence to solve problems related to how to simplify inputting text and other information into mobile phones, personal digital assistants, portable computers and other similar information systems.
- a method and a system where position information is input into an information processing system, the system comprising a plurality of microphones located at known microphone positions and connected to processing circuitry capable of interpreting sound signals from the microphones.
- a sound signal is received from each microphone, the signals originating from an acoustic wave emanating from an acoustic sound source located at a first position.
- a respective difference in distance between the sound source at said first position and respective microphone is then calculated, followed by a calculation, using a geometric model, of an estimate of said first position, said estimate being position information intended for use in the information processing system.
- the system comprises at least three microphones and the calculation of a respective difference in distance involves calculating differences in propagation delay for said sound signals.
- the calculation of propagation delay differences is performed, for example, by way of calculating differences between times of arrival for said sound signals or by way of calculating cross-correlation functions between said sound signals.
- the calculation of a respective difference in distance involves calculating Doppler shifts for said sound signals.
- a mobile communication terminal comprises a system as described above.
- a pointing device comprises means for transmitting a sound signal and is capable of interacting with a system and a mobile communication terminal as discussed above.
- An advantage of the invention is that it simplifies information input into an information handling system such as a PDA, mobile phone, etc. Used together with an optically projected virtual keyboard, as will be explained below in a preferred embodiment, the invention provides a user with a much simplified way of information input.
- Figure 1 illustrates schematically a diagram of a system according to the present invention.
- Figure 2 illustrates a flow chart of a method according to the present invention.
- Figure 3 illustrates schematically a mobile telephone terminal according to the present invention.
- Figure 4 illustrates schematically a mobile telephone terminal according to the present invention.
- Figure 5 illustrates schematically a pointing device according to the present invention.
- Figure 1 shows a system 100 capable of performing a method according to the invention. It is to be noted that only the essential function blocks of the system 100 are shown in figure 1. The skilled person will realize that any implementation of the system 100 requires such functions as a power supply and an appropriately designed data communication bus between the different function blocks. Moreover, the system 100 may be implemented in intelligent devices such as a mobile communication terminal, as will be further exemplified below, or a portable computer, etc.
- the system 100 comprises a processing unit 104, a memory unit 105 and a display 106, as well as a plurality of microphones as indicated by a first 101 and a second 102 microphone.
- a third microphone 103 is drawn using a dashed line in order to emphasize that it is in some cases optional, and also to indicate that any number of additional microphones may be used within the scope of the invention.
- embodiments of the invention may be further improved by using more than two or three microphones, as will be discussed below.
- the processing unit 104 comprises all circuitry needed to perform the method of the invention, including all signal processing, filtering, analog-to-digital conversion etc. Moreover, as the person skilled in the art will readily understand, the processor 104 is controlled by way of software instructions contained in the memory unit 105 and/or contained in the processor 104 itself.
- Figure 2 is a flow chart of a generic method according to the present invention. More detailed inventive methods will be disclosed below, where reference will be made to functional steps performed by the processor 104 of figure 1. Although not explicitly stated below, reference can be made to the generic steps illustrated in figure 2. The actual coding of the method into software instructions is performed according to techniques known in the art.
- the method commences with a reception step 201, where a number of sound signals are received by microphones in the system.
- in a second step 202, all calculations are performed on the signals. These calculations include conversion of sound power into electric signal power, filtering, and digitizing, followed by processing according to algorithms and schemes as will be discussed in some detail below in connection with the different embodiments.
- a step 203 of displaying the information after processing has been performed is shown. However, any other post-processing is of course possible.
- a mobile phone has several microphones, at least three, that are located at known positions relative to the mobile phone;
- a user places the mobile phone on a surface and activates a text input function. This starts a projection of a virtual keyboard on the surface in front of the mobile phone. This projection is performed by a small diode projector, which is powered by the mobile phone.
- the user uses his finger or other pointing device against the surface in front of the mobile phone, and impacts the surface, thus generating a sound at a certain position.
- This position corresponds to a character in the character-set displayed by the projection of said virtual keyboard.
- the microphones are constantly registering the received sound when the text input function is activated.
- the registered sound is converted into digital form by an analog-to-digital converter, a so-called AD converter, comprised within the hardware/software of the phone. Accordingly, a number of signals is obtained, one for each microphone.
- An algorithm in the mobile phone detects when a sound above a certain volume has been registered, where the volume is estimated as the sum of the squared signal values during a given number of samples.
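The volume-threshold detector described here can be sketched in a few lines; the windowing scheme, threshold value and function name are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def detect_event(samples, window=256, threshold=1.0):
    """Return the start index of the first window whose energy (sum of
    squared sample values) exceeds the threshold, or None if no event
    is found."""
    for start in range(0, len(samples) - window + 1, window):
        energy = float(np.sum(samples[start:start + window] ** 2))
        if energy > threshold:
            return start
    return None

# Quiet noise followed by a loud tap-like burst.
rng = np.random.default_rng(0)
signal = np.concatenate([0.01 * rng.standard_normal(1024),
                         np.ones(256)])
print(detect_event(signal, window=256, threshold=10.0))  # 1024, the burst window
```

In practice the threshold would be tuned to the microphone gain and the ambient noise level.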
- another algorithm estimates the difference in propagation delay between the different received sounds. This is accomplished by calculating the cross-correlation function between the different signals. The propagation delay is obtained as the value where the cross-correlation function has its maximum. Then, by taking the sound velocity into account, the estimated propagation delays are translated to estimated differences in distance between the generated sound and the respective microphones. These estimated differences in distance are in turn used to estimate the position of the generated sound. This is performed by using a model-fitting algorithm.
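The delay estimation by cross-correlation, and its conversion into a distance difference via the speed of sound, can be illustrated as follows (a minimal sketch with synthetic signals; the sample rate and signal shape are assumptions):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def tdoa_seconds(sig_a, sig_b, sample_rate):
    """Estimate the delay of sig_b relative to sig_a (in seconds) as
    the lag at which their cross-correlation has its maximum."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / sample_rate

# Synthetic example: the same click reaches microphone B 20 samples later.
fs = 48_000
click = np.zeros(400)
click[100:110] = 1.0
mic_a = click
mic_b = np.roll(click, 20)

delay = tdoa_seconds(mic_a, mic_b, fs)
distance_difference = delay * SPEED_OF_SOUND  # metres
print(delay * fs)  # 20.0 samples
```

The lag is then multiplied by the speed of sound to obtain the distance difference used by the model fit.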
- the estimated position of the generated sound is then converted into a character by utilizing the fact that the positions of the different characters on the virtual keyboard are known.
- the character whose position is closest to the estimated position is chosen as estimated character and the so-obtained character is displayed on the screen of the mobile phone.
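The final character selection, picking the key whose known centre lies closest to the estimated tap position, might look like this (the key coordinates are hypothetical):

```python
import math

# Hypothetical key centres (metres) for a few keys of the projected keyboard.
KEY_POSITIONS = {"a": (0.00, 0.10), "s": (0.02, 0.10), "d": (0.04, 0.10)}

def nearest_key(estimate):
    """Pick the character whose key centre is closest to the estimated
    tap position."""
    return min(KEY_POSITIONS,
               key=lambda ch: math.dist(estimate, KEY_POSITIONS[ch]))

print(nearest_key((0.021, 0.104)))  # 's' is the closest key centre
```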
- a second embodiment of the invention solves the text input problem by utilizing a pointing device in the shape of, e.g., a pen, and it involves the following steps:
- a user places the mobile phone on a surface, or holds it in his hand, and activates a text input function from a menu choice in the mobile phone. This makes a number of microphones, with known positions relative to the mobile phone, register received sound.
- the user takes in his hand the pen-shaped pointing device, which comprises an ultrasound transmitter, which has the transmitting element in the tip.
- the user activates, e.g. by pushing a button on the ultrasound transmitter, the transmission of an ultrasound signal, in a certain narrow frequency spectrum, for example the sum of two sinusoids with slightly different center frequency.
- while the button is pressed, the user moves the ultrasound transmitter in a certain pattern corresponding to an input character.
- the registered sound is converted into digital form by an analog-to-digital converter, a so-called AD converter. Accordingly, a number of signals is obtained, one for each microphone.
- An algorithm in the mobile phone detects when the sound volume in said narrow frequency band is above a certain level, where the volume is estimated as the sum of the squared signal values during a given number of samples. During the time when said volume is above said level, the following procedure is constantly repeated:
- a second algorithm uses a certain number of samples of the sound signals to estimate the difference in propagation delay between the different received sounds. This is accomplished by calculating the cross-correlation function between the different signals. The propagation delay is obtained as the value where the cross-correlation function has its maximum.
- the estimated propagation delays are translated to estimated differences in distance between the generated sound and the respective microphones. These estimated differences in distance are in turn used to estimate the position of the generated sound. This is performed by using a model-fitting algorithm.
- the stored movement pattern which best matches the estimated movement pattern gives an estimated input character.
- the so-obtained character is then displayed on the screen of the mobile phone.
- the above procedure of inputting a movement pattern may be used without the final steps of matching the pattern with characters.
- the pointing device may be used as an equivalent of a computer mouse.
- the text input function described in the previously described embodiments is further improved by supporting the writing of words with a dictionary built into the mobile phone.
- the mobile phone presents a number of words beginning with the characters entered so far. If the user sees the word that he intends to write, he can pick this word by selecting it from the menu. This is accomplished by first writing a special character signifying that the next character represents a menu choice. The user then writes the number that corresponds to the menu choice that he wants to make. After this step the inputting of the current word is ended and the inputting of the next word starts.
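The dictionary lookup behind such a menu is a simple prefix filter; a sketch with a toy word list (the dictionary contents, menu size and function name are illustrative):

```python
# Toy built-in dictionary; a real phone would ship a much larger word list.
DICTIONARY = ["hello", "help", "helmet", "hand"]

def suggestions(prefix, limit=3):
    """List up to `limit` dictionary words beginning with the characters
    entered so far, in dictionary order."""
    return [w for w in DICTIONARY if w.startswith(prefix)][:limit]

print(suggestions("hel"))  # ['hello', 'help', 'helmet']
```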
- the estimation of the positions is improved by also measuring the power of the received signals.
- the position can be estimated by using only two microphones. If there are more than two microphones the quality of the position can be improved.
- the power is estimated by summing the squares of the signal samples. Theoretically, with ideal microphones and ideal free-space propagation of the sound, the power of the sound is inversely proportional to the squared distance between the generated sound and the respective microphone. In the example with two microphones, the estimated difference in distance forms a hyperbola on which the generated sound is estimated to be located. The estimated powers of the signals form an ellipse on which the generated sound is estimated to be located. The generated sound is estimated to be in one of the intersections between this hyperbola and ellipse. One of the intersections can be neglected since it is behind the microphones.
- the estimated position is obtained by model fitting of the estimated distance differences and powers. This is standard practice in the system identification area; see, e.g., "System Identification", 1998, Torsten Söderström and Petre Stoica.
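One possible reading of this combined fit, here solved by a brute-force grid search rather than the iterative methods of the system identification literature, is sketched below; the microphone layout, grid bounds and residual weighting are assumptions:

```python
import numpy as np

# Two microphones on the x-axis (metres); the source is assumed in front (y > 0).
MICS = np.array([[-0.05, 0.0], [0.05, 0.0]])

def fit_position(distance_diff, power_ratio, weight=1.0):
    """Grid-search the position whose model predictions best match the
    measured distance difference (d1 - d0) and power ratio (p0 / p1).
    Under ideal free-space propagation p_i is proportional to 1/d_i**2,
    so p0 / p1 == (d1 / d0)**2."""
    best, best_cost = None, np.inf
    for x in np.linspace(-0.3, 0.3, 121):
        for y in np.linspace(0.01, 0.6, 120):
            d = np.hypot(MICS[:, 0] - x, MICS[:, 1] - y)
            cost = ((d[1] - d[0]) - distance_diff) ** 2 \
                 + weight * ((d[1] / d[0]) ** 2 - power_ratio) ** 2
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return best

# Simulate measurements from a known source position.
true = np.array([0.10, 0.20])
d = np.hypot(MICS[:, 0] - true[0], MICS[:, 1] - true[1])
est = fit_position(d[1] - d[0], (d[1] / d[0]) ** 2)
print(est)  # close to (0.10, 0.20)
```

Restricting the search to y > 0 discards the spurious intersection behind the microphones mentioned above.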
- the estimation of the positions is improved by first calibrating the system.
- the system is calibrated by generating a sound at a known position.
- Model fitting is then used to calibrate parameters such as the speed of sound, the frequency spectrum of the sound, the weight of power versus distance differences in the model, location and orientation of the projected keyboard and parameters for the sound propagation.
- the estimation of the positions is improved by also computing the Doppler frequency shift of the received signals .
- the position can be estimated by using only two microphones. If there are more than two microphones the quality of the position can be improved.
- the positions estimated after the initial one can be computed by taking the Doppler frequency shift into account.
- the Doppler frequency shift corresponds to the velocity of the sound-transmitting source, in this case the pen.
- the velocity towards the microphones is thus obtained from the Doppler frequency shift.
- These velocities are transformed into a velocity vector in Cartesian coordinates. By integrating all velocity vectors obtained in this manner, a movement pattern originating in the initial position is obtained. The computation of the Doppler frequency is performed in the time domain. This is computationally more efficient and equivalent according to the time-frequency duality. In addition, when the computation is done in the time domain, no errors are accumulated in the integration step.
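One way to turn per-microphone Doppler shifts into a Cartesian velocity vector is a small least-squares solve over the source-to-microphone directions; the carrier frequency, geometry and function name below are illustrative assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def velocity_from_doppler(shifts, f0, position, mics):
    """Solve for the 2-D velocity vector whose projections onto the
    source-to-microphone directions match the radial velocities implied
    by the Doppler shifts (v_radial = c * delta_f / f0)."""
    dirs = mics - position
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radial = SPEED_OF_SOUND * np.asarray(shifts) / f0
    v, *_ = np.linalg.lstsq(dirs, radial, rcond=None)
    return v

# Hypothetical setup: two microphones, pen at the origin moving at (0.1, 0) m/s.
mics = np.array([[1.0, 0.0], [0.0, 1.0]])
pos = np.array([0.0, 0.0])
true_v = np.array([0.1, 0.0])
# Simulated shifts a 40 kHz carrier would show for that motion.
dirs = (mics - pos) / np.linalg.norm(mics - pos, axis=1, keepdims=True)
shifts = 40_000.0 * dirs @ true_v / SPEED_OF_SOUND
print(velocity_from_doppler(shifts, 40_000.0, pos, mics))  # close to [0.1, 0.0]
```

Integrating the velocity vectors returned at successive time steps then traces out the movement pattern, as described above.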
- the estimation of the positions is improved by knowing the transmission times of the sound signals. Accordingly the position can be estimated by using only two microphones. By knowing the time the sound is transmitted the position can easily be estimated by simply time-stamping the reception time at the different microphones.
- the reception time is estimated by cross-correlating the received signals with a known stored signal.
- This stored signal can be either the signal that is transmitted or a filtered version of this signal that accounts for filters and other analog and digital components that alter the signal from the source to reception.
- the propagation times are then obtained as the differences between the reception times and the transmission time.
- the distances between the microphones and the sound transmitter are then simply obtained by multiplying the propagation times by the speed of sound.
- the position of the transmitter is then obtained by simple geometric calculations.
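With the transmission time known, the two-microphone case admits a closed-form solution: each reception time fixes a circle around its microphone, and the front-side intersection of the two circles is taken. A sketch under assumed geometry (microphones on the x-axis, source in front):

```python
import math

def position_from_toa(t_tx, t_rx0, t_rx1, mic_sep, c=343.0):
    """Two-microphone position fix when the transmission time is known:
    each reception time gives an absolute distance (a circle), and the
    front-side intersection of the two circles is the source position.
    Microphones sit at (-mic_sep/2, 0) and (+mic_sep/2, 0)."""
    d0 = c * (t_rx0 - t_tx)
    d1 = c * (t_rx1 - t_tx)
    a = mic_sep / 2.0
    x = (d0 ** 2 - d1 ** 2) / (4.0 * a)
    y = math.sqrt(max(d0 ** 2 - (x + a) ** 2, 0.0))  # keep the y > 0 solution
    return x, y

# Simulated timestamps for a source at (0.05, 0.20) with mics 0.10 m apart.
src = (0.05, 0.20)
m0, m1 = (-0.05, 0.0), (0.05, 0.0)
t_tx = 0.0
t0 = math.dist(src, m0) / 343.0
t1 = math.dist(src, m1) / 343.0
print(position_from_toa(t_tx, t0, t1, 0.10))  # close to (0.05, 0.20)
```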
- the transmission time can be made known in a number of different ways.
- by using a transmission medium that has a significantly faster transmission speed than sound, e.g. infrared communication or radio transmission, or by having a cable between the phone and the transmitter.
- the time in the transmitter can be made known by calibrating it at a known position, e.g. by placing the transmitter adjacent to one of the microphones and emitting a sound. The transmission times in the pen are then known, as long as the clock in the pen does not drift significantly compared to the clock in the mobile phone.
- Figure 3 illustrates a system 300 for fast text input in a mobile phone 301 without using a real keyboard.
- a mobile phone 301 has a plurality of microphones 302, at least three, located at well-known positions relative to the mobile phone 301.
- a coordinate system, x-y, is indicated. The user places the mobile phone on the surface 307 and activates a text inputting function by a menu choice presented on the screen 304 of the mobile phone 301. This starts the projection of a virtual keyboard 305 on the surface 307 in front of the mobile phone 301.
- This projection is performed with a small diode projector 303, powered by the mobile phone 301.
- the user moves a finger or other object against the surface 307 in front of the mobile phone 301, thus generating a sound at a certain position 306.
- the position 306 corresponds to a certain character in the character set used in the virtual keyboard 305.
- the microphones 302 constantly register the received sound when the text inputting function is activated.
- the registered sound is converted into digital form by an analog-to-digital converter, a so-called AD converter.
- a number of digital signals, one for each microphone 302, is obtained.
- An algorithm in the mobile phone 301 detects when a sound above a certain volume is registered, where the volume is estimated as the sum of the squared signal values for a given number of samples.
- a second algorithm estimates the propagation difference between the different received sounds.
- the propagation differences are obtained as the values where the cross-correlation functions have their maxima.
- the estimated propagation differences are converted into estimated differences in distance between the generated sound and the respective microphone, by using the velocity of sound. These estimated differences in distance are in turn used to estimate the position of the generated sound. This is done with a so-called model-fitting algorithm, as briefly described in the following. Theoretically, the distance between a microphone located at the coordinate (x_i, y_i) and a sound generated at (x, y) is given by the expression: d_i = sqrt((x - x_i)^2 + (y - y_i)^2).
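A minimal model-fitting sketch built on this distance expression, matching modelled distance differences against measured ones over a search grid (the microphone coordinates and grid bounds are assumptions, and a real implementation would likely use an iterative solver):

```python
import numpy as np

# Three microphone coordinates (x_i, y_i) in metres (illustrative layout).
MICS = np.array([[-0.06, 0.0], [0.0, 0.02], [0.06, 0.0]])

def model_distance(pos, mic):
    """Theoretical distance between a sound at (x, y) and a microphone
    at (x_i, y_i): sqrt((x - x_i)**2 + (y - y_i)**2)."""
    return np.hypot(pos[0] - mic[0], pos[1] - mic[1])

def fit_position(measured_diffs):
    """Find the grid position whose modelled distance differences
    (d_i - d_0, i = 1, 2) best match the measured ones in the
    least-squares sense."""
    best, best_cost = None, np.inf
    for x in np.linspace(-0.3, 0.3, 121):
        for y in np.linspace(0.02, 0.6, 117):
            d = np.array([model_distance((x, y), m) for m in MICS])
            cost = np.sum((d[1:] - d[0] - measured_diffs) ** 2)
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return best

# Simulate measured distance differences from a known tap position.
true = (0.10, 0.30)
d = np.array([model_distance(true, m) for m in MICS])
print(fit_position(d[1:] - d[0]))  # close to (0.10, 0.30)
```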
- the estimate of the position 306 of the generated sound is converted into a character by utilizing the fact that the locations of the characters displayed in the virtual keyboard are known.
- the character whose position is closest to the estimated position is selected as the estimated character.
- the so-obtained character is displayed on the screen 304 of the mobile phone 301.
- Figure 4 illustrates a system for text input in a mobile phone 401, by writing with an ultrasound transmitter 407 equipped pen 405.
- the user places the mobile phone on a surface 409 or holds it in his hand, and activates a text inputting function by making menu selection in the mobile phone 401. This will start the recording of sound in several microphones 402, at least two, with known positions relative to the mobile phone 401. A coordinate system, x-y, is indicated.
- the user grasps the pen 405 equipped with an ultrasound transmitter 407.
- the ultrasound transmitter is located in the tip of the pen.
- the user pushes the button 406 on the pen 405, which activates the transmission of an ultrasound signal with a narrow frequency spectrum, for example the sum of two sinusoids with slightly different frequency.
- the sound registered by the microphones 402 is converted into digital form by an analog-to-digital converter, a so-called AD converter. Accordingly, a number of digital signals is obtained, one for each microphone 402.
- An algorithm in the mobile phone 401 detects when the sound in said narrow frequency spectrum exceeds a certain volume, where the volume is estimated as the summed signal squares during a certain number of samples. During the time when the sound volume is above said level, the following positioning procedure is continuously repeated:
- a second algorithm uses a certain number of samples of the sound signals to estimate the differences in propagation between the different received sounds. This is accomplished by computing the cross-correlation function between the different signals. The propagation difference is obtained as the value where the cross-correlation function has its maximum. The estimated propagation differences are converted into estimated differences in distance between the generated sound and the respective microphones 402 by utilizing the speed of sound. These estimated differences in distance are in turn used by a third algorithm to estimate the position of the generated sound. This is accomplished with a so-called model-fitting algorithm, as briefly described in the previous embodiment. When the user releases the button 406 there is no longer any detected sound in said narrow frequency spectrum, and consequently the above-described positioning procedure stops.
- the positions obtained in the above-described positioning procedure form an estimate of the movement pattern 408 of the pen 405 with ultrasound transmitter 407.
- This estimated pattern is matched against a number of stored patterns, representing certain characters, by using algorithms well known to persons skilled in the art.
- the stored pattern that best matches the estimated pattern gives the estimated input character.
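One simple matching algorithm of this kind resamples both the estimated stroke and each stored template to a common length and picks the template with the smallest summed point-wise distance; the template strokes below are toy examples, not part of the disclosure:

```python
import numpy as np

def resample(path, n=32):
    """Linearly resample a 2-D path to n evenly spaced points so paths
    of different lengths can be compared point by point."""
    path = np.asarray(path, dtype=float)
    t = np.linspace(0.0, 1.0, len(path))
    ti = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(ti, t, path[:, 0]),
                            np.interp(ti, t, path[:, 1])])

def match_character(estimated_path, templates):
    """Return the template character whose stored path is closest
    (summed point-wise distance) to the estimated movement pattern."""
    probe = resample(estimated_path)
    return min(templates, key=lambda ch: np.sum(
        np.linalg.norm(resample(templates[ch]) - probe, axis=1)))

# Hypothetical stored strokes: 'i' is a vertical bar, '-' a horizontal bar.
templates = {"i": [(0.0, 0.0), (0.0, 1.0)],
             "-": [(0.0, 0.0), (1.0, 0.0)]}
print(match_character([(0.01, 0.0), (0.02, 0.5), (0.0, 1.0)], templates))  # 'i'
```

More robust matchers (e.g. dynamic time warping) follow the same template-comparison idea.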
- the so-obtained character is displayed on the screen 404 of the mobile phone 401.
- Figure 5 illustrates schematically a pointing device 500 capable of interacting with a system 506 according to the present invention (e.g. system 100 in figure 1).
- the device 500 is generally pen-shaped with a pointed end 501.
- the device 500 comprises a processing unit 502 to which a speaker 503 and a transceiver 504 are connected.
- the device is capable of producing sound signals 507 that are intended for reception by the system 506.
- the device is also capable of exchanging information with the system via a communication channel 505.
- Information transferred via the communication channel 505 includes timing information as discussed above.
- One typical implementation of the present invention is in the form of a computer program comprising a plurality of software instructions, programmed using a suitable programming tool known in the art, performing a method according to the embodiments disclosed.
- the computer program may reside in a physical form on any kind of distribution media, including a diskette, hard disk, CD or DVD, or as a collection of signals transferred via communication channels such as the Internet.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Telephone Function (AREA)
- Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
Abstract
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2003247307A AU2003247307A1 (en) | 2002-07-23 | 2003-07-21 | Method and system for information input comprising microphones |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| SE0202296-0 | 2002-07-23 | ||
| SE0202296A SE0202296D0 (sv) | 2002-07-23 | 2002-07-23 | System och metod för informationsinmatning med alstrat ljuds position |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2004010275A1 true WO2004010275A1 (fr) | 2004-01-29 |
Family
ID=20288615
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/SE2003/001231 Ceased WO2004010275A1 (fr) | 2002-07-23 | 2003-07-21 | Procede et systeme d'entree d'informations associes a des microphones |
Country Status (3)
| Country | Link |
|---|---|
| AU (1) | AU2003247307A1 (fr) |
| SE (1) | SE0202296D0 (fr) |
| WO (1) | WO2004010275A1 (fr) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP0773494A1 (fr) * | 1995-11-13 | 1997-05-14 | Motorola, Inc. | Curseur réagissant au mouvement pour contrÔler le mouvement dans un dispositif d'image virtuelle |
| EP0982676A1 (fr) * | 1998-08-27 | 2000-03-01 | Hewlett-Packard Company | Procédé et dispositif pour une affichage/clavier pour un assistant numérique personnel |
| EP1039365A2 (fr) * | 1999-03-26 | 2000-09-27 | Nokia Mobile Phones Ltd. | Un dispositif de saisie de données pour l'introduction manuelle de données dans un téléphone mobile |
| WO2001093182A1 (fr) * | 2000-05-29 | 2001-12-06 | Vkb Inc. | Dispositif de saisie de donnees virtuelles et procede de saisie de donnees alphanumeriques et analogues |
| WO2002050762A1 (fr) * | 2000-12-19 | 2002-06-27 | Ubinetics Limited | Clavier virtuel |
2002
- 2002-07-23: SE application SE0202296A filed (publication SE0202296D0, status unknown)
2003
- 2003-07-21: AU application AU2003247307A filed (publication AU2003247307A1, not active, Abandoned)
- 2003-07-21: WO application PCT/SE2003/001231 filed (publication WO2004010275A1, not active, Ceased)
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1696306A1 (fr) * | 2005-02-25 | 2006-08-30 | Siemens Aktiengesellschaft | Mobile device with a scalable display |
| WO2006089842A1 (fr) * | 2005-02-25 | 2006-08-31 | Siemens Aktiengesellschaft | Mobile terminal with a scalable display |
| GB2438796A (en) * | 2005-02-25 | 2007-12-05 | Palm Inc | Mobile scalable display terminal |
| GB2438796B (en) * | 2005-02-25 | 2011-02-09 | Palm Inc | Mobile terminal comprising a scalable display |
| WO2008128989A1 (fr) * | 2007-04-19 | 2008-10-30 | Epos Technologies Limited | Voice and position localization |
| JP2010525646A (ja) * | 2007-04-19 | 2010-07-22 | Epos Development Limited | Sound and position measurement |
| EP2528354A1 (fr) | 2007-04-19 | 2012-11-28 | Epos Development Ltd. | Voice and position localization |
| US8787113B2 (en) | 2007-04-19 | 2014-07-22 | Qualcomm Incorporated | Voice and position localization |
Also Published As
| Publication number | Publication date |
|---|---|
| SE0202296D0 (sv) | 2002-07-23 |
| AU2003247307A1 (en) | 2004-02-09 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Wang et al. | Ubiquitous keyboard for small mobile devices: harnessing multipath fading for fine-grained keystroke localization | |
| US8436808B2 (en) | Processing signals to determine spatial positions | |
| EP2544077B1 (fr) | Method and apparatus for providing a user interface using an acoustic signal, and device comprising a user interface |
| US7158117B2 (en) | Coordinate input apparatus and control method thereof, coordinate input pointing tool, and program | |
| CN105320452B (zh) | Wearable device using the human body as an input mechanism |
| US6690618B2 (en) | Method and apparatus for approximating a source position of a sound-causing event for determining an input used in operating an electronic device | |
| US7852318B2 (en) | Acoustic robust synchronization signaling for acoustic positioning system | |
| US10331166B2 (en) | User interfaces | |
| US20070130547A1 (en) | Method and system for touchless user interface control | |
| US20120139863A1 (en) | Apparatus and method for inputting writing information according to writing pattern | |
| US20030132950A1 (en) | Detecting, classifying, and interpreting input events based on stimuli in multiple sensory domains | |
| WO2000039663A1 (fr) | Virtual input device |
| EP2064618A2 (fr) | Dispositif mobile avec saisie de texte entraînée de manière acoustique, et son procédé | |
| WO2004029866A1 (fr) | Method and system for 3D handwriting recognition |
| JP2012503244A (ja) | Finger-worn device and methods of interaction and communication |
| Kim et al. | UbiTap: Leveraging acoustic dispersion for ubiquitous touch interface on solid surfaces | |
| EP1228480B1 (fr) | Method for digitizing text and drawings using an erasing and/or pointing function |
| WO2008097024A1 (fr) | Method and apparatus for handwriting input, and input system using the same |
| CN108920052 (zh) | Page display control method and related products |
| CN110276328 (zh) | Fingerprint identification method and related products |
| Zhao et al. | UltraSnoop: Placement-agnostic keystroke snooping via smartphone-based ultrasonic sonar | |
| WO2000065530A1 (fr) | Pen-type input device for a computer |
| US20050148870A1 (en) | Apparatus for generating command signals to an electronic device | |
| WO2004010275A1 (fr) | Method and system for information input associated with microphones |
| WO2018149318A1 (fr) | Input method, device, apparatus, system, and computer storage medium |
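The acoustic-input documents collected above generally rest on one shared technique: locating a tap or pen stroke from the difference in arrival times of its sound at two or more microphones (time difference of arrival, TDOA). As an illustrative sketch only, not the specific method claimed in WO2004010275A1, the delay between two microphone channels can be estimated from the peak of their cross-correlation and converted to a far-field bearing; all function names and parameters here are assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def estimate_delay(sig_a, sig_b, fs):
    """Arrival-time difference (seconds) of the same sound at two
    microphones, taken from the peak of the full cross-correlation.
    A positive result means sig_a is a delayed copy of sig_b."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)  # lag in samples
    return lag / fs

def bearing_from_delay(delay, mic_spacing):
    """Far-field source bearing in radians, relative to the broadside of
    a two-microphone pair separated by mic_spacing metres."""
    s = np.clip(SPEED_OF_SOUND * delay / mic_spacing, -1.0, 1.0)
    return float(np.arcsin(s))
```

With a real two- or three-microphone array, each microphone pair yields one such delay; intersecting the resulting bearings (or hyperbolae) gives a 2D position, which is the general multilateration idea behind the virtual-keyboard and handwriting-input patents listed here.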
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
| AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| 122 | Ep: pct application non-entry in european phase | ||
| NENP | Non-entry into the national phase |
Ref country code: JP |
| WWW | Wipo information: withdrawn in national office |
Country of ref document: JP |