CN104199555A - Terminal setting method and terminal setting device - Google Patents
- Publication number
- CN104199555A CN104199555A CN201410483867.7A CN201410483867A CN104199555A CN 104199555 A CN104199555 A CN 104199555A CN 201410483867 A CN201410483867 A CN 201410483867A CN 104199555 A CN104199555 A CN 104199555A
- Authority
- CN
- China
- Prior art keywords
- body language
- information
- terminal
- standard
- limb action
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
An embodiment of the invention provides a terminal setting method and a terminal setting device. In the method, a terminal acquires first body language information of a user; the terminal determines, according to the first body language information and body language template information, a first standard body movement corresponding to the first body language information, wherein the body language template information includes at least one standard body movement and the standard body language information corresponding to each standard body movement; further, the terminal determines, according to the first standard body movement and the correspondence between each standard body movement and a terminal setting, the terminal setting corresponding to the first standard body movement, and executes that terminal setting. The method is convenient and rapid, and brings great convenience to the user.
Description
Technical field
Embodiments of the present invention relate to the field of electronic technology, and in particular to a terminal setting method and device.
Background art
With the development of modern communication technology, intelligent terminals such as mobile phones, tablet computers and landline telephones have also developed rapidly, providing many conveniences for people's life and/or work. When using such an intelligent terminal, a user needs to perform certain interactive operations with it in order to configure it.
In the prior art, the intelligent terminal is configured by pressing keys and/or touching the screen, where the keys include physical keys and/or virtual keys, and touching the screen includes tapping, double-tapping, dragging, sliding and long-pressing a control or entry on the screen.
Existing terminal setting methods are therefore rather cumbersome, and when it is inconvenient for the user to press keys and/or touch the screen, for example while the user is driving on an expressway, this causes inconvenience to the user.
Summary of the invention
This embodiment provides a terminal setting method and device; the method is very convenient and brings much convenience to the user.
In a first aspect, an embodiment of the present invention provides a terminal setting method, including:
a terminal acquiring first body language information of a user;
the terminal determining, according to the first body language information and body language template information, a first standard body movement corresponding to the first body language information, wherein the body language template information includes at least one standard body movement and the standard body language information corresponding to each standard body movement;
the terminal determining, according to the first standard body movement and the correspondence between each standard body movement and a terminal setting, the terminal setting corresponding to the first standard body movement, and executing that terminal setting.
Optionally, the terminal determining, according to the first body language information and the body language template information, the first standard body movement corresponding to the first body language information includes:
the terminal comparing the first body language information with the at least one piece of standard body language information in the body language template information, and, if the first body language information is consistent with first standard body language information in the body language template information, determining, according to the body language template information and the first standard body language information, the first standard body movement corresponding to the first body language information, wherein the first body language information includes angle information, angular velocity information and distance information, and the first standard body language information includes standard angle information, standard angular velocity information and standard distance information.
Optionally, the terminal determining, according to the first standard body movement and the correspondence between each standard body movement and a terminal setting, the terminal setting corresponding to the first standard body movement includes:
the terminal querying, according to the first standard body movement, the correspondence between each standard body movement and a terminal setting, and obtaining the terminal setting corresponding to the first standard body movement.
Optionally, the terminal acquiring the first body language information of the user includes:
the terminal receiving the first body language information of the user sent by a motion capture device.
In a second aspect, an embodiment of the present invention provides a terminal setting method, including:
a motion capture device acquiring first body language information of a user, the first body language information including angle information, angular velocity information and distance information;
the motion capture device sending the first body language information to a terminal, so that the terminal determines, according to the first body language information, the body movement corresponding to the first body language information.
In a third aspect, an embodiment of the present invention provides a terminal, including:
an acquisition module, configured to acquire first body language information of a user;
a first determination module, configured to determine, according to the first body language information and body language template information, a first standard body movement corresponding to the first body language information, wherein the body language template information includes at least one standard body movement and the standard body language information corresponding to each standard body movement;
a second determination module, configured to determine, according to the first standard body movement and the correspondence between each standard body movement and a terminal setting, the terminal setting corresponding to the first standard body movement, and to execute that terminal setting.
Optionally, the first determination module is specifically configured to:
compare the first body language information with the at least one piece of standard body language information in the body language template information, and, if the first body language information is consistent with first standard body language information in the body language template information, determine, according to the body language template information and the first standard body language information, the first standard body movement corresponding to the first body language information, wherein the first body language information includes angle information, angular velocity information and distance information, and the first standard body language information includes standard angle information, standard angular velocity information and standard distance information.
Optionally, the second determination module is specifically configured to query, according to the first standard body movement, the correspondence between each standard body movement and a terminal setting, and obtain the terminal setting corresponding to the first standard body movement.
Optionally, the acquisition module is specifically configured to receive the first body language information of the user sent by a motion capture device.
In a fourth aspect, an embodiment of the present invention provides a motion capture device, including:
an acquisition module, configured to acquire first body language information of a user, the first body language information including angle information, angular velocity information and distance information;
a sending module, configured to send the first body language information to a terminal, so that the terminal determines, according to the first body language information, the body movement corresponding to the first body language information.
In the present invention, the terminal acquires first body language information of a user and determines, according to that information and body language template information, the first standard body movement corresponding to it, the template information including at least one standard body movement and the standard body language information corresponding to each standard body movement. Further, the terminal determines, according to the first standard body movement and the correspondence between each standard body movement and a terminal setting, the terminal setting corresponding to the first standard body movement, and executes it. The user can thus configure the terminal simply by making a simple body movement; the terminal setting method of the embodiment of the present invention is therefore very convenient and brings much convenience to the user.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and persons of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of Embodiment 1 of the terminal setting method of the present invention;
Fig. 2 is a schematic flowchart of Embodiment 2 of the terminal setting method of the present invention;
Fig. 3 is a schematic structural diagram of Embodiment 1 of the terminal of the present invention;
Fig. 4 is a schematic structural diagram of Embodiment 1 of the motion capture device of the present invention;
Fig. 5 is a schematic structural diagram of Embodiment 1 of the terminal setting system of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of Embodiment 1 of the terminal setting method of the present invention. The execution subject of this embodiment is a terminal, which may be implemented in software and/or hardware and includes, but is not limited to, a mobile phone, a tablet computer with a call function, a landline telephone, and the like. As shown in Fig. 1, the method of this embodiment may include:
S101: the terminal acquires first body language information of a user.
In the embodiment of the present invention, when the user needs to configure the terminal, optionally while the terminal's operating mode is a daily-life mode, the user can make certain body movements, such as stretching or holding one hand in the shape of answering a call, and the terminal acquires the user's first body language information so that it can further judge the user's body movement from that information and perform the corresponding terminal setting. Optionally, the first body language information may include angle information, angular velocity information and distance information.
Specifically, step S101 may include: the terminal receiving the first body language information of the user sent by a motion capture device.
In the embodiment of the present invention, the terminal may receive the user's first body language information from a motion capture device. Optionally, the motion capture device may be a smart wristband worn on the user's hand; when the user makes a gesture, the device captures the user's body language information. Optionally, the motion capture device includes a direction sensor, a gyroscope sensor, a proximity sensor and a wireless transmission apparatus, where the direction sensor calculates the angle φ through which the device moves and/or rotates, the gyroscope sensor calculates the angular velocity data ω(x), ω(y), ω(z) about the x, y and z coordinate axes per unit time, the proximity sensor detects the distance L between the device and the user's body, and the wireless transmission apparatus uploads the captured body language information to the terminal for processing, analysis, matching and the like. That is, when the user's body movements differ, the device reports different φ, ω(x), ω(y), ω(z) and L data to the terminal. Optionally, the wireless transmission apparatus may be a Bluetooth device bound to the terminal; when the sensors of the motion capture device detect the user's body language information, that information is sent to the terminal through the Bluetooth device. Optionally, the embodiment of the present invention does not limit the specific location of the motion capture device; it may also be arranged at any other position from which the user's body language information can be captured, which is not repeated here. Optionally, the motion capture device may also be arranged in the terminal.
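The φ, ω(x), ω(y), ω(z) and L values above form one self-contained report per gesture. As an illustrative sketch only (the patent specifies no data format; the class and field names here are hypothetical), one reading uploaded by the wristband could be modeled as:

```python
from dataclasses import dataclass

@dataclass
class BodySample:
    """One body-language reading captured by the motion capture device."""
    phi: float                         # movement/rotation angle from the direction sensor
    omega: tuple                       # (w_x, w_y, w_z) angular velocities from the gyroscope
    distance: float                    # proximity-sensor distance L to the user's body

    def as_report(self) -> dict:
        # Flat payload uploaded to the terminal over the wireless (e.g. Bluetooth) link.
        return {
            "phi": self.phi,
            "omega_x": self.omega[0],
            "omega_y": self.omega[1],
            "omega_z": self.omega[2],
            "distance": self.distance,
        }
```

A gesture then corresponds to one such report, which the terminal matches against its stored templates.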
S102: the terminal determines, according to the first body language information and body language template information, the first standard body movement corresponding to the first body language information.
In the embodiment of the present invention, the terminal determines, from the captured first body language information and preset body language template information, the first standard body movement corresponding to that information, where the body language template information includes at least one standard body movement and the standard body language information corresponding to each standard body movement, i.e. at least one standard body movement and at least one piece of standard body language information. Optionally, the standard body movements and the standard body language information correspond one to one, or one standard body movement may correspond to at least two pieces of standard body language information.
Optionally, step S102 may include: the terminal comparing the first body language information with the at least one piece of standard body language information in the body language template information, and, if the first body language information is consistent with first standard body language information in the body language template information, determining, according to the body language template information and the first standard body language information, the first standard body movement corresponding to the first body language information, wherein both the first body language information and the first standard body language information include angle information, angular velocity information and distance information.
In the embodiment of the present invention, the terminal compares the first body language information with the at least one piece of standard body language information in the template. If the first body language information is consistent with first standard body language information in the template, or the error between them is less than a preset value (in which case the two can be considered approximately consistent), the terminal determines, according to the first standard body language information, the first standard body movement corresponding to it in the template; otherwise, the terminal does not recognize the first body language information and, optionally, may prompt the user to repeat the body movement until the terminal can recognize the input. Optionally, the first body language information includes angle information, angular velocity information and distance information, and the first standard body language information includes standard angle information, standard angular velocity information and standard distance information; that is, the angle, angular velocity and distance components of the first body language information are compared with the corresponding standard components, and if each pair is consistent, or each error is less than its respective preset value (approximately consistent), the first standard body movement corresponding to the first body language information is determined according to the body language template information and the first standard body language information.
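The component-wise "consistent, or within a preset value" check described above can be sketched as follows. This is a hypothetical illustration; the patent fixes neither the tolerance values nor the key names.

```python
def matches_standard(sample: dict, standard: dict, tolerance: dict) -> bool:
    """Return True when every component of the captured body language
    information is equal to, or within the preset tolerance of, the
    corresponding standard component ("approximately consistent")."""
    return all(abs(sample[key] - standard[key]) <= tolerance[key]
               for key in standard)

# Hypothetical angle / angular-velocity / distance components and tolerances.
standard = {"phi": 30.0, "omega_x": 0.1, "omega_y": 0.2, "omega_z": 0.3, "L": 12.0}
tolerance = {"phi": 5.0, "omega_x": 0.05, "omega_y": 0.05, "omega_z": 0.05, "L": 2.0}
```

A sample passing this check is mapped to the standard body movement that owns the matching template entry.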
Table 1 shows a body language template storage format. Optionally, each standard body movement in the preset body language template, together with its corresponding standard body language information, is established as follows: the user performs a certain body movement (such as stretching) three times; the direction sensor, gyroscope sensor and proximity sensor in the motion capture device each record the data of the three trials; optionally, the two recordings with the smallest mutual error (for example, the first and the second) are taken as the template, i.e. the standard body language information, of that body movement, and the body language template information is stored in a database. Optionally, the storage format may be as shown in Table 1.
Table 1: body language template storage format
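The three-trial template construction described above might be sketched as follows. This is illustrative only: averaging the two closest trials into the template is an assumption about how the best pair is combined, and the flat vector layout is hypothetical.

```python
from itertools import combinations

def build_template(trials: list) -> tuple:
    """From three recordings of the same action, keep the pair of trials
    with the smallest mutual error and average it into the template
    (the standard body language information)."""
    def error(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    best = min(combinations(trials, 2), key=lambda pair: error(*pair))
    return tuple((x + y) / 2 for x, y in zip(*best))
```

For example, with trials laid out as (φ, ω, L) vectors, the outlier third trial is discarded and the two consistent recordings define the stored template.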
Optionally, the concrete comparison process is as follows: the terminal reads the body language template information from the database and places it in a data set; it then traverses the whole set and matches each entry against the first body language information uploaded by the motion capture device (comprising the φ, ω(x), ω(y), ω(z) and L information). Optionally, the match may be computed with a fuzzy algorithm to judge whether the body movement corresponding to the first body language information is a standard body movement defined in the body language template.
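That traversal can be sketched as below; the fuzzy matching is simplified here to a nearest-template search with a cutoff, and the names and error metric are hypothetical.

```python
def recognize(sample: tuple, templates: dict, max_error: float = 5.0):
    """Traverse the whole template set, match the uploaded sample against
    every standard body movement, and return the name of the closest
    template when it is within max_error; otherwise return None
    (the body language information is not recognized)."""
    def error(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    best = min(templates, key=lambda name: error(sample, templates[name]))
    return best if error(sample, templates[best]) <= max_error else None
```

When `recognize` returns None, the terminal would, per the text above, prompt the user to repeat the movement.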
S103: the terminal determines, according to the first standard body movement and the correspondence between each standard body movement and a terminal setting, the terminal setting corresponding to the first standard body movement, and executes that terminal setting.
In the embodiment of the present invention, the terminal first determines the terminal setting corresponding to the already-determined first standard body movement from the preset correspondence between each standard body movement and a terminal setting, and then executes that setting. The content of the correspondence may include, but is not limited to, the following: "A: stretching — open normal mode; B: one hand cupped at the ear — start playing music; C: both hands on the steering wheel — open vehicle-mounted mode; D: one hand in the shape of answering a call — answer the call; E: forefinger on the lips — open conference mode; F: both hands closed at the ear — open flight mode". Optionally, the correspondence between standard body movements and terminal settings may be stored in the terminal in the form of a table; the embodiment of the present invention places no restriction on this.
Optionally, step S103 may include: the terminal querying, according to the first standard body movement, the correspondence between each standard body movement and a terminal setting, obtaining the terminal setting corresponding to the first standard body movement, and executing that setting.
In the embodiment of the present invention, the terminal matches the already-determined first standard body movement against the standard body movement entries in the correspondence between each standard body movement and a terminal setting, obtains the terminal setting corresponding to the first standard body movement, and executes it. For example, if the first standard body movement is "stretching", the corresponding terminal setting is determined to be "open normal mode", and the terminal then opens normal mode.
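The table-based lookup can be sketched as a plain mapping from standard body movements to terminal settings, with entries taken from the A–F list above (the key spellings and function name are illustrative):

```python
ACTION_TO_SETTING = {
    "stretch":             "open normal mode",           # A
    "hand_cupped_at_ear":  "start playing music",        # B
    "hands_on_wheel":      "open vehicle-mounted mode",  # C
    "answer_call_gesture": "answer the call",            # D
    "forefinger_on_lips":  "open conference mode",       # E
    "hands_closed_at_ear": "open flight mode",           # F
}

def setting_for(action: str):
    """Query the correspondence table and return the terminal setting
    for a recognized standard body movement, or None if absent."""
    return ACTION_TO_SETTING.get(action)
```

The terminal would then execute whatever setting the lookup yields, e.g. switching to normal mode for a "stretch".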
In the embodiment of the present invention, the terminal acquires first body language information of a user and determines, according to that information and body language template information, the first standard body movement corresponding to it, the template information including at least one standard body movement and the standard body language information corresponding to each standard body movement. Further, the terminal determines, according to the first standard body movement and the correspondence between each standard body movement and a terminal setting, the terminal setting corresponding to the first standard body movement, and executes it. The user can thus configure the terminal simply by making a simple body movement; the terminal setting method of the embodiment of the present invention is therefore very convenient and brings much convenience to the user.
Fig. 2 is a schematic flowchart of Embodiment 2 of the terminal setting method of the present invention. The execution subject of this embodiment is a motion capture device, which may be implemented in software and/or hardware and may include a direction sensor, a gyroscope sensor, a proximity sensor and a wireless transmission apparatus. As shown in Fig. 2, the method of this embodiment may include:
S201: the motion capture device acquires first body language information of a user, the first body language information including angle information, angular velocity information and distance information.
In the embodiment of the present invention, when the user needs to configure the terminal, optionally while the terminal's operating mode is a daily-life mode, the user can make certain body movements, such as stretching or holding one hand in the shape of answering a call, and the motion capture device acquires the user's first body language information, which includes angle information, angular velocity information and distance information. Optionally, the motion capture device may be a smart wristband worn on the user's hand; when the user makes a gesture, the device captures the user's body language information. Optionally, the motion capture device includes a direction sensor, a gyroscope sensor and a proximity sensor, where the direction sensor calculates the angle φ through which the device moves and/or rotates, the gyroscope sensor calculates the angular velocity data ω(x), ω(y), ω(z) about the x, y and z coordinate axes per unit time, and the proximity sensor detects the distance L between the device and the user's body; that is, different body movements of the user yield different φ, ω(x), ω(y), ω(z) and L data. Optionally, a switch may be provided on the motion capture device; when the device is in the switched-off state, it does not acquire body language information even if the user makes a body movement, and accordingly the terminal does not perform the operations S101-S103 of the above embodiment. Optionally, the embodiment of the present invention does not limit the specific location of the motion capture device; it may also be arranged at any other position from which the user's body language information can be captured, which is not repeated here.
S202: the motion capture device sends the first body language information to a terminal, so that the terminal determines, according to the first body language information, the body movement corresponding to it.
In the embodiment of the present invention, after capturing the user's first body language information, the motion capture device sends it to the terminal. Optionally, the motion capture device also includes a wireless transmission apparatus, which uploads the captured body language information to the terminal for processing, analysis, matching and the like; that is, the motion capture device sends the first body language information to the terminal through the wireless transmission apparatus, so that the terminal can determine the body movement corresponding to that information and, when that movement is a first standard body movement, determine, according to the first standard body movement and the correspondence between each standard body movement and a terminal setting, the terminal setting corresponding to the first standard body movement and execute it. Optionally, the wireless transmission apparatus may be a Bluetooth device bound to the terminal; whenever the motion capture device captures first body language information of the user, it sends that information to the terminal through the Bluetooth device. Optionally, if the Bluetooth device of the motion capture device is not bound to a terminal, or Bluetooth is not switched on, the motion capture device is in a dormant state: when the user makes a body movement, the device still acquires the body language information but does not report it to the terminal.
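The gating just described — upload only when the device is active and the Bluetooth link is bound — can be sketched as follows (illustrative; the patent defines no API, and the two conditions are folded into one guard here):

```python
def maybe_report(sample: dict, switch_on: bool, bluetooth_bound: bool):
    """Upload the captured body language information only when the device
    switch is on and the Bluetooth device is bound to a terminal;
    otherwise stay dormant and report nothing."""
    if switch_on and bluetooth_bound:
        return sample   # stand-in for the actual Bluetooth upload
    return None
```

A dormant device thus still sees the gesture but never triggers S101-S103 on the terminal side.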
In the embodiment of the present invention, the motion capture device acquires first body language information of a user, the information including angle information, angular velocity information and distance information; further, the device sends that information to a terminal, so that the terminal determines the body movement corresponding to it and, when that movement is a first standard body movement, determines, according to the first standard body movement and the correspondence between each standard body movement and a terminal setting, the terminal setting corresponding to the first standard body movement and executes it. The user can thus configure the terminal simply by making a simple body movement; the terminal setting method of the embodiment of the present invention is therefore very convenient and brings much convenience to the user.
Fig. 3 is a schematic structural diagram of Embodiment 1 of a terminal of the present invention. As shown in Fig. 3, the terminal 30 provided by this embodiment comprises: an acquisition module 301, a first determination module 302, and a second determination module 303.
The acquisition module 301 is configured to acquire first body language information of a user;
The first determination module 302 is configured to determine, according to the first body language information and body language template information, a first standard limb action corresponding to the first body language information, wherein the body language template information comprises at least one standard limb action and standard body language information corresponding to each standard limb action;
The second determination module 303 is configured to determine, according to the first standard limb action and the correspondence between each standard limb action and a terminal setting, the terminal setting corresponding to the first standard limb action, and to execute the terminal setting.
Optionally, the first determination module is specifically configured to:
compare the first body language information with at least one item of standard body language information in the body language template information, and if the first body language information is consistent with first standard body language information in the body language template information, determine, according to the body language template information and the first standard body language information, the first standard limb action corresponding to the first body language information, wherein the first body language information comprises angle information, angular velocity information, and distance information, and the first standard body language information comprises standard angle information, standard angular velocity information, and standard distance information.
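The comparison step performed by the first determination module can be sketched as follows. The patent requires only that the captured information be "consistent" with a standard entry; the template entries and tolerance thresholds below are assumptions made for illustration.

```python
# Illustrative sketch: matching captured body language information
# (angle, angular velocity, distance) against the standard entries in
# the body language template. Template values and tolerances are
# hypothetical; the patent does not specify them.

TEMPLATE = [
    # (standard limb action, standard angle, standard angular velocity, standard distance)
    ("raise_hand", 90.0, 2.0, 0.5),
    ("wave",       45.0, 4.0, 0.3),
]

def match_standard_action(angle, angular_velocity, distance,
                          tol_angle=10.0, tol_av=1.0, tol_dist=0.1):
    """Return the first standard limb action whose standard body language
    information is consistent with the captured values, or None."""
    for action, s_angle, s_av, s_dist in TEMPLATE:
        if (abs(angle - s_angle) <= tol_angle
                and abs(angular_velocity - s_av) <= tol_av
                and abs(distance - s_dist) <= tol_dist):
            return action
    return None

assert match_standard_action(88.0, 1.8, 0.45) == "raise_hand"
assert match_standard_action(10.0, 0.1, 2.0) is None
```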
Optionally, the second determination module is specifically configured to: query, according to the first standard limb action, the correspondence between each standard limb action and a terminal setting, to obtain the terminal setting corresponding to the first standard limb action.
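The query performed by the second determination module amounts to a lookup in the correspondence between standard limb actions and terminal settings. A minimal sketch, with hypothetical actions and settings:

```python
# Illustrative sketch: the correspondence between standard limb actions
# and terminal settings, queried by the second determination module.
# The specific actions and settings are assumptions for illustration.

ACTION_TO_SETTING = {
    "raise_hand": "mute",
    "wave":       "unlock_screen",
}

def apply_setting(standard_action, execute):
    """Look up the terminal setting for the standard limb action and,
    if one exists, execute it via the supplied callback."""
    setting = ACTION_TO_SETTING.get(standard_action)
    if setting is not None:
        execute(setting)
    return setting

performed = []
assert apply_setting("wave", performed.append) == "unlock_screen"
assert performed == ["unlock_screen"]
assert apply_setting("unknown", performed.append) is None
```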
Optionally, the acquisition module is specifically configured to receive the first body language information of the user sent by a motion capture device.
The terminal of this embodiment can execute the technical solution of Embodiment 1 of the setting method of a terminal of the present invention; its implementation principle and technical effect are similar and are not repeated here.
Fig. 4 is a schematic structural diagram of Embodiment 1 of a motion capture device of the present invention. As shown in Fig. 4, the motion capture device 40 provided by this embodiment comprises: an acquisition module 401 and a sending module 402.
The acquisition module 401 is configured to acquire first body language information of a user, the first body language information comprising angle information, angular velocity information, and distance information;
The sending module 402 is configured to send the first body language information to a terminal, so that the terminal determines, according to the first body language information, the limb action corresponding to the first body language information.
The motion capture device of this embodiment can execute the technical solution of Embodiment 2 of the setting method of a terminal of the present invention; its implementation principle and technical effect are similar and are not repeated here.
Fig. 5 is a schematic structural diagram of Embodiment 1 of a setting system of a terminal of the present invention. As shown in Fig. 5, the setting system 50 of the terminal provided by this embodiment comprises a terminal 501 and a motion capture device 502. The terminal 501 may adopt the structure of the foregoing terminal embodiment and, accordingly, can execute the technical solution of Embodiment 1 of the setting method of the terminal; the motion capture device 502 may adopt the structure of the foregoing motion capture device embodiment and, accordingly, can execute the technical solution of Embodiment 2 of the setting method of the terminal. Their implementation principles and technical effects are similar and are not repeated here. Optionally, the terminal 501 and the motion capture device 502 are connected wirelessly, or the motion capture device 502 may be arranged in the terminal 501; this embodiment of the present invention does not limit the position of the motion capture device 502.
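The interaction between the terminal 501 and the motion capture device 502 can be sketched end to end as follows; all names, template values, and tolerances are hypothetical illustrations, not taken from the patent.

```python
# Illustrative end-to-end sketch of the setting system: a motion capture
# device acquires the first body language information and sends it to the
# terminal, which matches it against the template and executes the
# corresponding terminal setting.

def device_acquire():
    # Hypothetical capture, e.g. a raised hand: (angle, angular velocity, distance)
    return (90.0, 2.0, 0.5)

def terminal_handle(info, template, action_to_setting, executed):
    """Match the received information against the template; on a match,
    record the corresponding terminal setting as executed."""
    angle, av, dist = info
    for action, (s_angle, s_av, s_dist) in template.items():
        if (abs(angle - s_angle) < 10 and abs(av - s_av) < 1
                and abs(dist - s_dist) < 0.1):
            executed.append(action_to_setting[action])
            return True
    return False

template = {"raise_hand": (90.0, 2.0, 0.5)}
settings = {"raise_hand": "mute"}
executed = []
assert terminal_handle(device_acquire(), template, settings, executed) is True
assert executed == ["mute"]
```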
A person of ordinary skill in the art will appreciate that all or part of the steps of the foregoing method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions recorded in the foregoing embodiments, or equivalent replacements may be made to some or all of the technical features thereof; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A setting method for a terminal, characterized by comprising:
acquiring, by a terminal, first body language information of a user;
determining, by the terminal according to the first body language information and body language template information, a first standard limb action corresponding to the first body language information, wherein the body language template information comprises at least one standard limb action and standard body language information corresponding to each standard limb action; and
determining, by the terminal according to the first standard limb action and a correspondence between each standard limb action and a terminal setting, the terminal setting corresponding to the first standard limb action, and executing the terminal setting.
2. The method according to claim 1, characterized in that the determining, by the terminal according to the first body language information and body language template information, a first standard limb action corresponding to the first body language information comprises:
comparing, by the terminal, the first body language information with at least one item of standard body language information in the body language template information, and if the first body language information is consistent with first standard body language information in the body language template information, determining, according to the body language template information and the first standard body language information, the first standard limb action corresponding to the first body language information, wherein the first body language information comprises angle information, angular velocity information, and distance information, and the first standard body language information comprises standard angle information, standard angular velocity information, and standard distance information.
3. The method according to claim 2, characterized in that the determining, by the terminal according to the first standard limb action and the correspondence between each standard limb action and a terminal setting, the terminal setting corresponding to the first standard limb action comprises:
querying, by the terminal according to the first standard limb action, the correspondence between each standard limb action and a terminal setting, to obtain the terminal setting corresponding to the first standard limb action.
4. The method according to any one of claims 1 to 3, characterized in that the acquiring, by the terminal, first body language information of a user comprises:
receiving, by the terminal, the first body language information of the user sent by a motion capture device.
5. A setting method for a terminal, characterized by comprising:
acquiring, by a motion capture device, first body language information of a user, the first body language information comprising angle information, angular velocity information, and distance information; and
sending, by the motion capture device, the first body language information to a terminal, so that the terminal determines, according to the first body language information, a limb action corresponding to the first body language information.
6. A terminal, characterized by comprising:
an acquisition module, configured to acquire first body language information of a user;
a first determination module, configured to determine, according to the first body language information and body language template information, a first standard limb action corresponding to the first body language information, wherein the body language template information comprises at least one standard limb action and standard body language information corresponding to each standard limb action; and
a second determination module, configured to determine, according to the first standard limb action and a correspondence between each standard limb action and a terminal setting, the terminal setting corresponding to the first standard limb action, and to execute the terminal setting.
7. The terminal according to claim 6, characterized in that the first determination module is specifically configured to:
compare the first body language information with at least one item of standard body language information in the body language template information, and if the first body language information is consistent with first standard body language information in the body language template information, determine, according to the body language template information and the first standard body language information, the first standard limb action corresponding to the first body language information, wherein the first body language information comprises angle information, angular velocity information, and distance information, and the first standard body language information comprises standard angle information, standard angular velocity information, and standard distance information.
8. The terminal according to claim 7, characterized in that the second determination module is specifically configured to: query, according to the first standard limb action, the correspondence between each standard limb action and a terminal setting, to obtain the terminal setting corresponding to the first standard limb action.
9. The terminal according to any one of claims 6 to 8, characterized in that the acquisition module is specifically configured to receive the first body language information of the user sent by a motion capture device.
10. A motion capture device, characterized by comprising:
an acquisition module, configured to acquire first body language information of a user, the first body language information comprising angle information, angular velocity information, and distance information; and
a sending module, configured to send the first body language information to a terminal, so that the terminal determines, according to the first body language information, a limb action corresponding to the first body language information.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410483867.7A CN104199555A (en) | 2014-09-19 | 2014-09-19 | Terminal setting method and terminal setting device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410483867.7A CN104199555A (en) | 2014-09-19 | 2014-09-19 | Terminal setting method and terminal setting device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN104199555A true CN104199555A (en) | 2014-12-10 |
Family
ID=52084856
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201410483867.7A Pending CN104199555A (en) | 2014-09-19 | 2014-09-19 | Terminal setting method and terminal setting device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN104199555A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106406075A (en) * | 2016-11-15 | 2017-02-15 | 广东小天才科技有限公司 | Alarm clock closing method and device |
| CN106878390A (en) * | 2017-01-09 | 2017-06-20 | 北京奇虎科技有限公司 | Electronic pet interactive control method, device and wearable device |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP0823683A1 (en) * | 1995-04-28 | 1998-02-11 | Matsushita Electric Industrial Co., Ltd. | Interface device |
| CN1834853A (en) * | 2005-03-14 | 2006-09-20 | 佛山市顺德区顺达电脑厂有限公司 | Intelligence input recognizer and method thereof |
| CN101561708A (en) * | 2008-04-15 | 2009-10-21 | 宏碁股份有限公司 | Method for determining input mode by using motion sensing and input device thereof |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| RJ01 | Rejection of invention patent application after publication | ||
| RJ01 | Rejection of invention patent application after publication |
Application publication date: 20141210 |