WO2012142869A1 - Method and apparatus for automatically adjusting terminal interface display
- Publication number
- WO2012142869A1 (PCT/CN2012/071468)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target area
- data
- terminal
- target
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
Definitions
- the present invention relates to the field of mobile communications and, in particular, to a method and apparatus for automatically adjusting terminal interface display.
Background Technique
- when a user interacts with a mobile phone, the user can operate the phone through its touch screen. For example, to enlarge an image, the user only needs to slide two fingers from the inside to the outside on the screen; the phone calculates a magnification factor from the speed and distance of the finger slide and displays the enlarged image on the interface according to the calculated magnification.
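The background calculation above can be illustrated in code. The patent does not give the actual formula, so the ratio-of-finger-spreads rule below is only an assumed approximation, and `pinch_zoom_factor` is a hypothetical name:

```python
import math

def pinch_zoom_factor(start_points, end_points):
    """Estimate a magnification factor from a two-finger pinch gesture.

    start_points/end_points are ((x, y), (x, y)) pixel coordinates of the
    two finger contacts at the start and end of the slide. Assumed rule:
    the zoom multiple is the ratio of the finger spreads.
    """
    def spread(points):
        (x1, y1), (x2, y2) = points
        return math.hypot(x2 - x1, y2 - y1)

    start = spread(start_points)
    end = spread(end_points)
    if start == 0:
        return 1.0
    return end / start

# Fingers slide outward from 100 px apart to 250 px apart.
factor = pinch_zoom_factor(((100, 300), (200, 300)), ((25, 300), (275, 300)))
```

A real gesture recognizer would also weight the sliding speed, as the text mentions; this sketch uses distance only.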
- an embodiment of the present invention provides a method and apparatus for automatically adjusting a terminal interface display.
- An embodiment of the present invention provides a method for automatically adjusting a terminal interface display, including: acquiring physical distance data of a target object to a terminal and/or a target area model of a target object; and determining target area tracking according to the target area model and/or physical distance data.
- Data model and Obtaining a terminal interface display parameter corresponding to the target area tracking data model according to a mapping relationship between the target area tracking data model and the terminal interface display parameter;
- An embodiment of the present invention further provides an apparatus for automatically adjusting a terminal interface display, including: an acquiring unit, configured to acquire physical distance data of a target object to a terminal and/or a target area model of the target object;
- the monitoring unit is configured to determine a target area tracking data model according to the target area model and/or the physical distance data, and obtain a terminal interface corresponding to the target area tracking data model according to the mapping relationship between the target area tracking data model and the terminal interface display parameter Display parameter
- the control unit is configured to adjust the display content and/or display mode of the current terminal interface according to the acquired terminal interface display parameters.
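The three claimed units can be sketched as follows. This is a hypothetical illustration only: the class names, the string-keyed mapping, and the dict-based interface are assumptions, not structures disclosed by the patent.

```python
class MonitoringUnit:
    """Maps a target-area tracking state to terminal display parameters."""
    def __init__(self, mapping):
        # mapping: tracking state -> display parameters (assumed shape).
        self.mapping = mapping

    def display_parameters(self, state):
        return self.mapping[state]

class ControlUnit:
    """Applies display parameters to the current terminal interface."""
    def adjust(self, interface, params):
        interface.update(params)  # adjust display content and/or mode
        return interface

mapping = {"near": {"font_size": 24}, "far": {"font_size": 12}}
monitor = MonitoringUnit(mapping)
control = ControlUnit()
ui = {"font_size": 16}
control.adjust(ui, monitor.display_parameters("near"))
```

An acquiring unit (omitted here) would supply the tracking state from camera data; the mapping lookup plus update mirrors the claimed monitor/control split.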
- the existing terminal operation modes are expanded, and a new human-computer interaction mode is provided.
- the embodiment of the invention can simulate the user's reading habits, adjust the content displayed on the screen and the font size according to those habits, reduce the user's learning cost, and improve the user experience. It makes terminal operation more convenient and flexible and helps improve the market competitiveness of the terminal. In particular, for users with arm disabilities, the content displayed on the terminal screen and the manner of display can be adjusted without using the hands.
- FIG. 1 is a schematic structural diagram of an apparatus for automatically adjusting a terminal interface display according to an embodiment of the present invention
- FIG. 2 is a schematic diagram of a preferred structure of an embodiment of the present invention
- FIG. 3 is a schematic flow chart of a method for automatically adjusting a terminal interface display according to an embodiment of the present invention.
Detailed Description
- Embodiments of the present invention provide a method and apparatus for automatically adjusting a terminal interface display.
- when a person cannot see an item clearly, the most natural behavior is to move closer to it.
- the embodiment of the invention provides a method and an apparatus for automatically adjusting the display of the terminal interface according to the distance between the human eye and the screen.
- the method and apparatus simulate this natural human behavior and form a more humanized interaction scheme that does not require the user to pay any learning cost.
- FIG. 1 is a schematic structural diagram of an apparatus for automatically adjusting a terminal interface display according to an embodiment of the present invention.
- the apparatus for automatically adjusting the display of the terminal interface includes: an acquisition unit 10, a monitoring unit 12, and a control unit 14.
- the respective modules of the embodiments of the present invention will be described in detail below.
- the obtaining unit 10 is configured to acquire physical distance data of the target object to the terminal and/or a target area model of the target object.
- the acquiring unit 10 includes: an image stream collecting unit, a ranging unit, an identifying unit, and a smart unit.
- the image stream collecting unit is configured to collect the image stream and collect the image data of the target object in the image stream.
- the target object is a face or a person.
- the image stream collecting unit needs to collect at least three sets of image streams and separately collect image data of the target objects in the at least three sets of image streams;
- the image stream collecting unit has the function of collecting corresponding images in a dynamic or static image stream.
- the image stream collecting unit can transmit image data to the ranging unit through a proprietary protocol.
- the ranging unit is configured to calculate the physical distance data from the target object to the terminal (i.e., the distance from the person or face to the terminal) according to the image data of the target object. In actual applications, the ranging unit marks the at least three sets of collected image data with specific codes, analyzes and calculates the relationship between the codes, and obtains the physical distance data from the target object to the terminal.
- the ranging unit supports preprocessing of the image stream; in addition, the ranging unit also has the capability of transmitting the physical distance data to the identification unit through a proprietary protocol;
- the image stream collecting unit and the ranging unit may constitute a terminal tracking sensing module.
- the terminal tracking sensing module refers to a device or device having certain intelligent acquisition and computing capabilities and capable of acquiring corresponding object images.
- the terminal tracking sensing module is also capable of calculating a specific physical distance from the corresponding object to the terminal, and transmitting the data by a proprietary protocol.
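The patent does not disclose how the ranging unit turns marked image data into a physical distance. One common approach that fits the description is the pinhole-camera relation, shown here as an assumed sketch; `estimate_distance_cm`, the focal length, and the 15 cm face width are illustrative values, not the patent's method:

```python
def estimate_distance_cm(real_width_cm, focal_length_px, width_in_pixels):
    """Pinhole-camera distance estimate: an object of known physical
    width that appears smaller in the image is farther from the camera.

    distance = real_width * focal_length / apparent_width_in_pixels
    """
    return real_width_cm * focal_length_px / width_in_pixels

# A ~15 cm wide face imaged at 300 px with a 600 px focal length.
d = estimate_distance_cm(15.0, 600.0, 300.0)
```

With three or more cameras, as the text describes, a multi-view triangulation would give a more robust estimate; the single-camera formula above is the simplest stand-in.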
- the identification unit is configured to distinguish the target area from the target object to generate the target area data; preferably, in the embodiment of the invention, the target area is a human eye.
- the identification unit includes: a segmentation processing module and a parameter matching module.
- the segmentation processing module is configured to initially distinguish the target region from the target object according to an intelligent segmentation strategy and generate preliminary target region data; that is, the segmentation processing module has the capability of collecting the image data stream and the physical distance data through the transmission protocol, and at the same time has an intelligent segmentation strategy that distinguishes the human eye area from the background and/or face.
- the parameter matching module is configured to match the preliminary target area data with the target area model attribute characteristic parameter, determine a real image of the target area, and finally obtain the target area data according to the real image of the target area.
- the parameter matching module determines the human eye attributes by calculating the feature vector of the human eye and matching it, and thereby has the ability to recognize the real image of the human eye.
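The feature-vector matching described above might be sketched as follows. The two-component feature vector, the threshold, and the model values are all illustrative assumptions, not the patent's actual attribute characteristic parameters:

```python
import math

def feature_vector(region):
    """Hypothetical attribute features of a candidate eye region:
    (aspect ratio, normalized mean intensity). A real system would use
    richer descriptors."""
    w, h, mean_intensity = region
    return (w / h, mean_intensity / 255.0)

def matches_eye_model(region, model_vec, threshold=0.2):
    """Accept the candidate if its feature vector lies within a
    Euclidean-distance threshold of the stored eye-model parameters."""
    return math.dist(feature_vector(region), model_vec) <= threshold

eye_model = (1.6, 0.35)        # assumed model attribute parameters
candidate = (32, 20, 92)       # width 32 px, height 20 px, intensity 92
ok = matches_eye_model(candidate, eye_model)
```

Candidates that pass the match are treated as real eye images; the rest (reflections, eyebrows, background) are discarded.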
- the smart unit is configured to receive the target area data (human eye real image data) matched by the parameter matching module and determine the target area model according to the target area data and the physical distance data. Specifically, the smart unit calculates a focus area of the target area and an angle between the target area and the terminal according to the target area data and the physical distance data, and generates the target area model from the focus area and the angle;
- the smart unit has the ability to calculate the human eye focus area and the human eye and screen angle data based on the distance data, and establish an intelligent human eye model having a mapping relationship of human eye image, distance angle data, and focus data.
- the parameter matching module and the intelligent unit may constitute a system intelligent module, which has data calculation, intelligent identification, memory storage, and provides a function of establishing a smart human eye model;
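A minimal sketch of the geometry the smart unit computes, assuming the eye's projected position can be mapped onto screen coordinates; the function name and the pixels-per-centimetre conversion are hypothetical, not disclosed by the patent:

```python
import math

def gaze_geometry(eye_px, screen_center_px, px_per_cm, distance_cm):
    """Estimate the focus-point offset and the eye-screen viewing angle.

    The offset of the eye's projection from the screen centre, combined
    with the physical eye-screen distance, gives a rough angle between
    the line of sight and the screen normal.
    """
    dx = (eye_px[0] - screen_center_px[0]) / px_per_cm
    dy = (eye_px[1] - screen_center_px[1]) / px_per_cm
    offset_cm = math.hypot(dx, dy)
    angle_deg = math.degrees(math.atan2(offset_cm, distance_cm))
    return offset_cm, angle_deg

# Eye projected 6 cm off-centre while 30 cm from the screen.
offset, angle = gaze_geometry((760, 400), (400, 400), 60.0, 30.0)
```

The resulting (offset, angle, distance) triple is exactly the kind of mapping the text says the intelligent human eye model stores.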
- the monitoring unit 12 is configured to determine a target area tracking data model according to the target area model and/or the physical distance data, and to obtain a terminal interface display parameter corresponding to the tracking data model according to the mapping relationship between the target area tracking data model and the terminal interface display parameter;
- the monitoring unit 12 has the ability to establish a smart human eye tracking data model having a human eye tracking function according to the relationship between the smart human eye model and the physical distance data, and to trigger the control unit 14 by monitoring changes in the physical distance data.
- the preset module may be further configured to adjust the display parameters of the terminal interface according to the user's settings, where the display parameters of the terminal interface specifically include: content layout, interface style, font size, and moving mode. That is, the preset module has the function of storing the terminal interface display parameters set by the user;
- the control unit 14 is configured to adjust the current terminal interface display content and/or display mode according to the acquired terminal interface display parameters.
- the terminal display module performs display of the user interface (UI) under the control of the control unit 14. That is, the control unit 14 supports associating the data of the monitoring unit 12 and the preset module, establishes a mapping relationship between them, and manages and uses the terminal display module by sending an instruction to call the configuration parameters in the preset module.
- the display content may be adjusted up, down, left, and right only according to the change of the target area model; if the physical distance data in the target area tracking data model changes, the size of the font in the display mode can be adjusted only according to the change of the physical distance data; when both the target area model and the physical distance data change, the display content and the display mode can be adjusted simultaneously.
- the mapping relationships leading to the terminal interface display parameters are specifically: establishing a mapping relationship between each set of collected image streams and the physical distance data; establishing a mapping relationship among the collected image streams, the physical distance data, and the target area model (smart human eye model); establishing a correspondence among the target area model (smart human eye model), the physical distance data, and the target area tracking data model (smart human eye tracking model); and establishing a correspondence between the target area tracking data model and the terminal interface display parameters.
- the monitoring unit 12 monitors changes in the target area and starts the control unit 14, which works through these mapping relationships to achieve the purpose of controlling the terminal display module.
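The chain of mapping relationships can be represented, for illustration only, as nested lookups; the keys and values below are placeholders, not the patent's data formats:

```python
# Chain: image stream -> distance -> eye model -> tracking model -> params.
image_to_distance = {"frame_set_1": 30.0}
to_eye_model = {("frame_set_1", 30.0): "eye_model_a"}
to_tracking_model = {("eye_model_a", 30.0): "tracking_a"}
tracking_to_display = {"tracking_a": {"font_size": 18, "layout": "full"}}

def display_params_for(frame_set):
    """Walk the four mapping relationships in order to reach the
    terminal interface display parameters for one set of frames."""
    d = image_to_distance[frame_set]
    eye = to_eye_model[(frame_set, d)]
    tracking = to_tracking_model[(eye, d)]
    return tracking_to_display[tracking]

params = display_params_for("frame_set_1")
```

When the monitored distance changes, the lookup walks the same chain with new keys and yields the new display parameters to apply.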
- in a preferred structure, the apparatus specifically includes: a terminal tracking sensing module, a segmentation processing module, a system intelligence module, a monitoring module (i.e., the monitoring unit in FIG. 1), a preset module, a system control module (i.e., the control unit in FIG. 1), and a terminal display module, wherein the terminal tracking sensing module includes an image stream collecting unit and a ranging unit, and the system intelligence module includes a parameter matching module and a smart unit, as shown in FIG. 2.
- the apparatus for automatically adjusting the terminal interface shown above needs to perform the following processing when the terminal interface is automatically adjusted according to the human eye:
- Step 1. The user picks up the terminal, and the terminal tracking sensing module of the terminal starts immediately.
- the terminal tracking sensing module synchronously collects the image stream, calculates the physical distance between the object and the terminal screen, and transmits the data to the segmentation processing module through a proprietary protocol.
- three or more cameras in the image stream collecting unit of the terminal tracking sensing module collect dynamic and static image streams of the corresponding object in front of the terminal screen from multiple directions. After receiving the image streams of the object, the ranging unit quickly marks the images and calculates the physical distance data between the corresponding object and the sensing module.
- the collected sets of image streams and physical distance data are transmitted to the segmentation processing module through a proprietary protocol.
- Step 2. By acquiring the image stream and physical distance data, the segmentation processing module first distinguishes the human eye from the image stream through techniques such as intelligent image segmentation and image processing, generates multiple sets of human eye image data, and establishes each group of human eye image streams.
- the human eye image streams are transmitted to the parameter matching module in the system intelligence module through a proprietary protocol for re-identification.
- Step 3. After receiving the multiple sets of human eye image streams, the parameter matching module of the system intelligence module matches the human eye by analyzing the feature vector of the human eye multiple times to further determine the real image of the human eye.
- the multiple sets of human eye real image data are transmitted to the smart unit for further analysis.
- Step 4. The system intelligence module stores the identified sets of human eye real image data through the smart unit and synchronously receives the mapping relationship of the physical distance data corresponding to each group of real image data. According to this mapping relationship, it obtains the human eye focus area and the angle data between the human eye and the screen, establishes a mapping relationship among the human eye focus area, the human eye real image, the eye-screen distance, and the eye-screen angle, generates a human eye intelligent model, and transmits it to the monitoring module.
- Step 5. The monitoring module generates a smart human eye tracking data model by mapping the physical distance data to the human eye intelligent model.
- the system control module sends an instruction by monitoring the physical distance data collected by the terminal tracking sensing module. Once the user moves the terminal, the distance between the user and the terminal screen changes; the human eye focus area and the eye-screen angle in the mapping relationship of the human eye intelligent model change accordingly, and the system control module is started. At the same time, data such as the human eye focus area, the eye-screen angle, and the eye-screen distance in the intelligent human eye tracking model corresponding to that distance are transmitted to the system control module.
- Step 6. The system control module calls the preset parameters of the preset module through the interface.
- the preset data includes all configuration parameters that the terminal UI interface can display as the distance changes, for example, content typesetting, interface style, content size, and moving mode.
- the terminal interface display parameters may be set in the preset module according to the user's habits.
- Step 7. The system control module establishes a mapping relationship between the monitoring module and the preset module. When the monitoring module detects a distance change, the system control module is triggered; it controls the display content and manner of the terminal display module according to the parameters of the preset module. After receiving the operation instruction, the terminal display module calls the terminal interface display parameters according to data such as the human eye focus area, the eye-screen angle, and the eye-screen distance, and sends a command to adjust the display layout and style of the content on the terminal screen.
- the monitoring module detects a change in the horizontal distance between the human eye and the screen, and the terminal display module zooms the human eye focus area on the terminal UI interface in or out; meanwhile, according to the change of the eye-screen angle, the terminal display module moves the content of the terminal UI interface up and down. Specifically, when the user's eye moves horizontally 5 cm closer to the screen, the monitoring module calculates that the distance data has shortened by 5 cm and triggers the system control module to perform the control operation of the preset module; the system control module then enlarges the content of the human eye focus area of the UI interface by a factor of two according to the human eye focus area and eye-screen angle data.
- when the terminal is moved vertically up by 3 cm, the monitoring module monitors the distance between the human eye and the focus area on the screen, and the corresponding eye-screen angle in the intelligent human eye tracking model becomes larger, so the UI interface content moves down and the user can easily see the upper part of the UI interface content.
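The worked example (leaning in 5 cm doubles the zoom) suggests a trigger rule like the following sketch. The 5 cm threshold and the doubling factor come from the example; everything else (names, the symmetric lean-away rule) is assumed:

```python
def on_distance_change(prev_cm, new_cm, zoom):
    """Hypothetical trigger rule: when the eye moves at least 5 cm
    closer, double the zoom on the focus area; when it moves at least
    5 cm away, halve it; otherwise leave the zoom unchanged."""
    delta = prev_cm - new_cm
    if delta >= 5.0:          # user leaned in
        return zoom * 2.0
    if delta <= -5.0:         # user leaned away
        return zoom / 2.0
    return zoom

new_zoom = on_distance_change(30.0, 25.0, 1.0)
```

In the apparatus, the monitoring module would compute `delta` from consecutive ranging samples and the control module would apply the resulting zoom to the focus area only.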
- the content of the terminal interface exhibits sufficient flexibility and freedom.
- the terminal system can adjust the layout and style of the content displayed on the terminal screen according to the distance data between the screen and the human eye. For example, after the distance between the screen and the human eye becomes smaller, the screen content automatically becomes larger. Suppose a picture is stored on the terminal: the image on the terminal screen can be automatically adjusted to a suitable view according to the distance between the human eye and the screen. When the human eye is 30 cm from the screen, the entire character avatar can be seen; when the human eye is 20 cm from the screen, the screen automatically adjusts so that the lips of the avatar can be seen.
- the terminal system can display the most suitable interface according to the habits and characteristics of the terminal holder. For example, an elderly person may be farsighted. After initializing the farsightedness data, within the effective value range the interface can be viewed clearly regardless of how far the terminal is from the eye, without being affected by the farsightedness disorder.
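One way such a preset could interact with the distance data is sketched below; the scaling rule and the preset fields are assumptions, chosen to match the "closer means larger" behaviour described above, and are not the patent's algorithm:

```python
def font_size_for(distance_cm, preset):
    """As the eye moves closer, content is enlarged (per the examples
    above). 'preset' holds the user's configured range; a farsighted
    user could raise the sizes so near text stays legible."""
    base, base_dist = preset["base_size"], preset["base_distance_cm"]
    size = round(base * base_dist / max(distance_cm, 1))
    return max(preset["min_size"], min(preset["max_size"], size))

# Hypothetical preset initialized from a farsighted user's settings.
farsighted = {"base_size": 18, "base_distance_cm": 30,
              "min_size": 14, "max_size": 48}
near = font_size_for(20, farsighted)   # closer -> larger
far = font_size_for(40, farsighted)    # farther -> clamped to min_size
```

The min/max clamp is what makes the "effective value range" in the example: outside it, the preset keeps the rendering legible rather than scaling further.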
- in the embodiment of the invention, the target area tracking data model is established, and the display of the terminal screen is controlled according to the mapping relationship between the target area tracking data model and the terminal interface display parameters. This solves the problems in the prior art that touch-sensing-based intelligent interaction is not conducive to operation by disabled people and requires users to pay learning costs. It expands the existing terminal operation modes, simulates the user's living habits, reduces the user's learning cost, narrows individual user differences, and improves the user experience, making terminal operation more convenient and flexible and helping to improve the market competitiveness of the terminal.
- FIG. 3 is a schematic flowchart of a method for automatically adjusting a terminal interface display according to an embodiment of the present invention.
- the method for automatically adjusting the display of the terminal interface includes the following processing: Step 301: Obtain physical distance data of the target object to the terminal and/or a target area model of the target object;
- obtaining the physical distance data of the target object to the terminal in step 301 includes: collecting the image stream, collecting the image data of the target object in the image stream, and calculating the physical distance data of the target object to the terminal according to the image data of the target object;
- the target object is a face or a person.
- in step 301, the following processing needs to be performed: 1. collect at least three sets of image streams and separately collect the image data of the target object in the at least three sets of image streams; 2. mark the at least three sets of collected image data as specific codes, analyze and calculate the mutual relationship between the specific codes, and acquire the physical distance data of the target object to the terminal.
- acquiring the target area model of the target object in step 301 specifically includes: 1. collecting the image stream, collecting the image data of the target object in the image stream, and calculating the physical distance data of the target object to the terminal according to the image data of the target object; 2. distinguishing the target area from the target object to generate the target area data, and determining the target area model according to the target area data and the physical distance data.
- the target area is a human eye.
- distinguishing the target area from the target object to generate the target area data specifically includes: initially distinguishing the target region from the target object to generate preliminary target region data; matching the preliminary target region data with the target area model attribute characteristic parameters to determine the real image of the target area; and finally obtaining the target area data according to the real image of the target area.
- determining the target area model according to the target area data and the physical distance data specifically includes: calculating a focus area of the target area and an angle between the target area and the terminal according to the target area data and the physical distance data, and generating the target area model according to the focus area and the angle.
- Step 302: Determine a target area tracking data model according to the target area model and/or physical distance data, and obtain a terminal interface display parameter corresponding to the tracking data model according to the mapping relationship between the target area tracking data model and the terminal interface display parameter;
- Step 303: Adjust the display content and/or display mode of the current terminal interface according to the obtained terminal interface display parameter.
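Steps 301-303 can be summarised as a small pipeline; the callables standing in for the acquiring and monitoring units below are hypothetical stubs, and the data shapes are illustrative only:

```python
def adjust_interface(frame, acquire, determine_tracking, mapping, ui):
    """Sketch of the method flow: acquire distance and area model
    (step 301), derive the tracking model and look up the display
    parameters (step 302), then apply them to the interface (step 303)."""
    distance, area_model = acquire(frame)                 # step 301
    tracking = determine_tracking(area_model, distance)   # step 302
    params = mapping[tracking]                            # step 302 lookup
    ui.update(params)                                     # step 303
    return ui

# Hypothetical stand-ins for the real units:
acquire = lambda frame: (25.0, "eyes_centered")
determine = lambda model, d: ("near" if d < 30 else "far", model)
mapping = {("near", "eyes_centered"): {"font_size": 24}}
ui = adjust_interface(None, acquire, determine, mapping, {"font_size": 12})
```

Each stub corresponds to one claimed unit, so swapping in real camera-driven implementations would not change the pipeline's shape.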
- the display content may be adjusted up, down, left, and right only according to the change of the target area model. If the physical distance data in the target area tracking data model changes, the size of the font in the display mode can be adjusted only according to the change of the physical distance data; when both the target area model and the physical distance data change, the display content and the display mode can be adjusted at the same time.
- the terminal interface display parameters may also be adjusted according to the user's settings, wherein the terminal interface display parameters specifically include: content typesetting, interface style, font size, and moving mode.
- the mapping relationships leading to the terminal interface display parameters are specifically: establishing a mapping relationship between each set of collected image streams and the physical distance data; establishing a mapping relationship among the collected image streams, the physical distance data, and the target area model (smart human eye model); establishing a correspondence among the target area model (smart human eye model), the physical distance data, and the target area tracking data model (smart human eye tracking model); and establishing a correspondence between the target area tracking data model and the terminal interface display parameters.
- the monitoring unit monitors changes in the target area and starts the control unit, which works through these mapping relationships to achieve the purpose of controlling the terminal display module.
- Step 1. The user picks up the terminal, and the terminal tracking sensing module of the terminal starts immediately.
- the terminal tracking sensing module synchronously collects the image stream, calculates the physical distance between the object and the terminal screen, and transmits the data to the segmentation processing module through a proprietary protocol.
- three or more cameras in the image stream collecting unit of the terminal tracking sensing module collect dynamic and static image streams of the corresponding object in front of the terminal screen from multiple directions. After receiving the image streams of the object, the ranging unit quickly marks the images and uses an internal calculation method to obtain the physical distance data between the corresponding object and the sensing module. The collected sets of image streams and physical distance data are transmitted to the segmentation processing module through a proprietary protocol.
- Step 2. By acquiring the image stream and physical distance data, the segmentation processing module first distinguishes the human eye from the image stream through techniques such as intelligent image segmentation and image processing, generates multiple sets of human eye image data, and establishes each group of human eye image streams.
- the human eye image streams are transmitted to the identification unit in the system intelligence module through a proprietary protocol for re-identification.
- Step 3. After receiving the multiple sets of human eye image streams, the identification unit of the system intelligence module matches the human eye by analyzing the feature vector of the human eye multiple times to further determine the real image of the human eye, and transmits the multiple sets of human eye real image data to the smart unit for further analysis.
- Step 4. The system intelligence module stores the identified sets of human eye real image data through the smart unit and synchronously receives the mapping relationship of the physical distance data corresponding to each group of real image data. According to this mapping relationship, it obtains the human eye focus area and the angle data between the human eye and the screen, establishes a mapping relationship among the human eye focus area, the human eye real image, the eye-screen distance, and the eye-screen angle, generates a human eye intelligent model, and transmits it to the monitoring module.
- Step 5. The monitoring module generates a smart human eye tracking data model by mapping the physical distance data to the human eye intelligent model.
- the system control module sends an instruction by monitoring the physical distance data collected by the terminal tracking sensing module. Once the user moves the terminal, the distance between the user and the terminal screen changes; the human eye focus area and the eye-screen angle in the mapping relationship of the human eye intelligent model change accordingly, and the system control module is activated. At the same time, data such as the human eye focus area, the eye-screen angle, and the eye-screen distance in the intelligent human eye tracking model corresponding to that distance are transmitted to the system control module.
- Step 6. The system control module calls the preset parameters of the preset module through the interface.
- the preset data includes all configuration parameters that the terminal UI interface can display as the distance changes, for example, content typesetting, interface style, content size, and moving mode.
- the terminal interface display parameters are set in the preset module according to the user's habits.
- Step 7. The system control module establishes a mapping relationship between the monitoring module and the preset module. When the monitoring module detects a distance change, the system control module is triggered; it controls the display content and manner of the terminal display module according to the parameters of the preset module. After receiving the operation instruction, the terminal display module calls the terminal interface display parameters according to data such as the human eye focus area, the eye-screen angle, and the eye-screen distance, and sends a command to adjust the display layout and style of the content on the terminal screen.
- the monitoring module detects a change in the horizontal distance between the human eye and the screen, and the terminal display module zooms the human eye focus area on the terminal UI interface in or out; meanwhile, according to the change of the eye-screen angle, the terminal display module moves the content of the terminal UI interface up and down. Specifically, when the user's eye moves horizontally 5 cm closer to the screen, the monitoring module calculates that the distance data has shortened by 5 cm and triggers the system control module to perform the control operation of the preset module; the system control module then enlarges the content of the human eye focus area of the UI interface by a factor of two according to the human eye focus area and eye-screen angle data.
- the terminal is vertically moved up to 3 cm, and the monitoring module monitors the distance between the human eye and the human focus area of the screen, and the angle of the corresponding human eye and the screen in the intelligent human eye tracking model becomes larger, so that the UI interface content is increased. Moving down, the user can easily see the upper part of the UI interface content.
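The zoom-and-pan behaviour described in these bullets can be read as a simple rule table. The function name, thresholds, and return format below are illustrative assumptions, not part of the patent:

```python
def adjust_display(prev_distance_cm, new_distance_cm, prev_angle_deg, new_angle_deg):
    """Map a change in eye-screen distance/angle to display actions."""
    actions = {}
    delta = prev_distance_cm - new_distance_cm  # positive: eye moved closer
    if delta >= 5:
        actions["zoom"] = 2.0   # e.g. 5 cm closer -> double the focus area
    elif delta <= -5:
        actions["zoom"] = 0.5   # moved away -> shrink
    if new_angle_deg > prev_angle_deg:
        actions["pan"] = "down"  # larger eye-screen angle -> shift content down
    elif new_angle_deg < prev_angle_deg:
        actions["pan"] = "up"
    return actions

# Eye moves 5 cm closer while the terminal is lifted (angle grows):
print(adjust_display(30, 25, 10, 15))  # {'zoom': 2.0, 'pan': 'down'}
```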
- The content of the terminal interface is presented with ample flexibility and freedom.
- The terminal system can adjust the layout and style of the content displayed on the screen according to the distance between the screen and the eye; for example, when that distance decreases, the screen content automatically becomes larger.
- For a picture stored on the terminal, the image on the terminal screen can be automatically adjusted to a suitable view according to the distance between the eye and the screen.
- When the eye is 30 cm from the screen, the entire avatar is visible; at 20 cm, the screen automatically adjusts to show the lips of the avatar.
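The 30 cm / 20 cm avatar example amounts to a distance-to-crop mapping. The thresholds below are assumptions chosen only to reproduce the two distances mentioned:

```python
def crop_for_distance(distance_cm):
    """Pick a crop of a stored portrait based on eye-screen distance (cm)."""
    if distance_cm >= 30:
        return "full avatar"   # far away: show the whole head
    elif distance_cm >= 25:
        return "face"          # assumed intermediate step
    else:
        return "lips"          # very close: zoom to the lips

print(crop_for_distance(30))  # full avatar
print(crop_for_distance(20))  # lips
```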
- The terminal system can present the most suitable interface according to the habits and characteristics of the terminal holder. For example, an elderly user may be farsighted; once the farsightedness data has been set during initialization, the interface remains clearly readable within the effective range, whether the terminal is far from or near the eye, unaffected by the farsightedness.
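The farsighted-user behaviour suggests a per-user preset consulted within an effective distance range. Every field name and value below is an invented illustration of such a preset, not the patent's actual data format:

```python
presets = {
    "farsighted_user": {
        "base_font_scale": 1.6,  # larger text regardless of distance
        "min_distance_cm": 20,   # effective range in which the preset applies
        "max_distance_cm": 60,
        "layout": "large-print",
    }
}

def effective_font_scale(user, distance_cm):
    """Return the font scale for a user at a given eye-screen distance."""
    p = presets[user]
    if p["min_distance_cm"] <= distance_cm <= p["max_distance_cm"]:
        return p["base_font_scale"]
    return 1.0  # outside the effective range: default scale

print(effective_font_scale("farsighted_user", 40))  # 1.6
```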
- By establishing a target area tracking data model and controlling the display of the terminal screen according to the mapping relationship between that model and the terminal interface display parameters, the existing terminal operation modes are extended and a new mode of intelligent human-computer interaction is provided. The embodiments of the invention can simulate the user's reading habits and, based on them, pan the content displayed on the screen and/or adjust the font size.
- This reduces the user's learning cost, improves the user experience, makes terminal operation more convenient and flexible, and helps improve the market competitiveness of the terminal; in particular, users with impaired arms can adjust the content and display mode of the terminal screen without using their hands.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Description
Method and device for automatically adjusting terminal interface display

Technical Field
The present invention relates to the field of mobile communications, and in particular to a method and device for automatically adjusting terminal interface display.

Background
In the prior art, when a user interacts with a mobile phone, the user operates it through its touch screen. For example, to enlarge a picture, the user simply slides two fingers outward on the screen; the phone automatically calculates the magnification from the speed and distance of the finger movement and displays the enlarged picture on the interface accordingly.
This touch-based intelligent interaction is a breakthrough compared with earlier key-press interaction: it better matches the trend toward humanized interaction design and users' natural behavior, and it is a basic principle and requirement of next-generation phone interface design. However, touch-based interaction still has shortcomings. To enlarge a picture, the user must operate with at least one hand, so a disabled person without arms cannot complete the operation. In addition, the user must perform the sliding gesture by hand; even though the operation is simple, a beginner still has to invest learning effort to get used to this interaction mode.

Summary of the Invention
To solve the above technical problem, embodiments of the present invention provide a method and device for automatically adjusting terminal interface display.
An embodiment of the present invention provides a method for automatically adjusting terminal interface display, including: acquiring physical distance data from a target object to the terminal and/or a target area model of the target object; determining a target area tracking data model from the target area model and/or the physical distance data, and obtaining, according to the mapping relationship between the target area tracking data model and the terminal interface display parameters, the terminal interface display parameters corresponding to the target area tracking data model; and adjusting the display content and/or display mode of the current terminal interface according to the obtained terminal interface display parameters.
An embodiment of the present invention further provides a device for automatically adjusting terminal interface display, including: an acquisition unit, configured to acquire physical distance data from a target object to the terminal and/or a target area model of the target object;
a monitoring unit, configured to determine a target area tracking data model from the target area model and/or the physical distance data, and to obtain, according to the mapping relationship between the target area tracking data model and the terminal interface display parameters, the terminal interface display parameters corresponding to the target area tracking data model;
and a control unit, configured to adjust the display content and/or display mode of the current terminal interface according to the obtained terminal interface display parameters.
The beneficial effects of the embodiments of the present invention are as follows:

By establishing a target area tracking data model and controlling the display of the terminal screen according to the mapping relationship between that model and the terminal interface display parameters, the existing terminal operation modes are extended and a new mode of intelligent human-computer interaction is provided. The embodiments of the invention can simulate the user's reading habits and, based on them, pan the content displayed on the screen and/or adjust the font size, reducing the user's learning cost, improving the user experience, and making terminal operation more convenient and flexible, which helps improve the market competitiveness of the terminal. In particular, users with impaired arms can adjust the content and display mode of the terminal screen without using their hands.

Brief Description of the Drawings
Figure 1 is a schematic structural diagram of a device for automatically adjusting terminal interface display according to an embodiment of the present invention; Figure 2 is a schematic diagram of a preferred structure of an embodiment of the present invention; Figure 3 is a schematic flowchart of a method for automatically adjusting terminal interface display according to an embodiment of the present invention.

Detailed Description
Embodiments of the present invention provide a method and device for automatically adjusting terminal interface display. In real life, when a person needs to look closely at an object, the most natural behavior is to move closer to it, in one of two ways: (1) bring the eyes closer to the object; or (2) bring the object closer. It follows that, without using the hands, the most natural way to magnify an object is to move the eyes toward it; with a hand, the only action needed is to bring the object nearer, and to see its upper or lower half, simply move it up or down. Based on this analysis, embodiments of the present invention propose a method and device that automatically adjust the terminal interface display according to the distance between the eye and the screen. By imitating this natural human behavior, the scheme offers a more humanized interaction and imposes no learning cost on the user.
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are merely illustrative of the invention and do not limit it.
Device Embodiment
According to an embodiment of the present invention, a device for automatically adjusting terminal interface display is provided. Figure 1 is a schematic structural diagram of the device. As shown in Figure 1, the device includes an acquisition unit 10, a monitoring unit 12, and a control unit 14. Each module is described in detail below.
Specifically, the acquisition unit 10 is configured to acquire physical distance data from the target object to the terminal and/or a target area model of the target object. The acquisition unit 10 includes an image stream collection unit, a ranging unit, an identification unit, and an intelligence unit.

The image stream collection unit is configured to collect image streams and capture the image data of the target object in them. Preferably, in the embodiments of the present invention, the target object is a human face or a person. In practice, at least three points are needed to determine the position of an object in space, so the image stream collection unit needs to collect at least three groups of image streams and capture the image data of the target object in each.

As the above shows, the image stream collection unit can capture corresponding images from dynamic or static image streams; in practice, it also transmits the image data to the ranging unit through a proprietary protocol.
The ranging unit is configured to calculate the physical distance from the target object to the terminal (that is, the distance from the person or face to the terminal) from the image data of the target object. In practice, the ranging unit marks each of the at least three groups of captured image data with a specific code, then analyzes and computes the relationships between these codes to obtain the physical distance data.
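The patent does not give the ranging formula. As one plausible sketch, a two-camera subset of the collected image groups could estimate depth from stereo disparity (Z = f * B / d); the formula is a standard substitute for whatever the ranging unit actually computes, and all numbers below are assumptions:

```python
def stereo_depth_cm(focal_px, baseline_cm, disparity_px):
    """Depth from stereo disparity: Z = f * B / d.

    focal_px: focal length in pixels; baseline_cm: camera separation;
    disparity_px: horizontal shift of the target between the two views.
    """
    if disparity_px <= 0:
        raise ValueError("target not visible in both views")
    return focal_px * baseline_cm / disparity_px

# e.g. 700 px focal length, 6 cm baseline, 140 px disparity -> 30 cm
print(stereo_depth_cm(700, 6.0, 140))  # 30.0
```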
That is, the ranging unit supports preprocessing of the image stream and can also transmit the physical distance data to the identification unit through the proprietary protocol.

Preferably, in practice, the image stream collection unit and the ranging unit can form a terminal tracking sensor module, that is, a device with certain intelligent acquisition and computing capabilities that can capture images of corresponding objects, calculate the specific physical distance from those objects to the terminal, and transmit the data over a proprietary protocol.
The identification unit is configured to separate the target area from the target object and generate target area data; preferably, in the embodiments of the present invention, the target area is the human eye. Specifically, the identification unit includes a segmentation processing module and a parameter matching module.

The segmentation processing module is configured to preliminarily separate the target area from the target object according to an intelligent segmentation strategy and generate preliminary target area data. In other words, the segmentation processing module can collect the image data stream and the physical distance data over the transmission protocol, and its intelligent segmentation strategy can distinguish the eye region from the background and/or the face.
The parameter matching module is configured to match the preliminary target area data against the attribute feature parameters of the target area model, determine the real image of the target area, and finally obtain the target area data from that real image. In practice, the parameter matching module computes the eye's feature vector from the determined eye attributes and matches the eye, giving it the ability to recognize the real image of the eye. The intelligence unit is configured to receive the target area data (real eye image data) matched by the parameter matching module and to determine the target area model from the target area data and the physical distance data. Specifically, the intelligence unit calculates the focus area of the target area and the angle between the target area and the terminal, and generates the target area model from the focus area and that angle.
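The feature-vector matching performed by the parameter matching module might look like the following sketch, which scores candidate eye regions against a stored template by cosine similarity. The vectors and threshold are stand-ins, and real feature extraction is out of scope here:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(candidates, template, threshold=0.9):
    """Return the candidate most similar to the template, or None."""
    scored = [(cosine(c, template), c) for c in candidates]
    score, region = max(scored, key=lambda s: s[0])
    return region if score >= threshold else None

template = [0.9, 0.1, 0.4]                       # stored eye feature vector
candidates = [[0.88, 0.12, 0.41], [0.1, 0.9, 0.3]]
print(best_match(candidates, template))  # [0.88, 0.12, 0.41]
```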
That is, the intelligence unit can calculate the eye focus area and the eye-to-screen angle data from the distance data, and can establish an intelligent eye model holding the mapping relationships among the eye image, the distance and angle data, and the focus data.

Preferably, in practice, the parameter matching module and the intelligence unit can form a system intelligence module, which provides data computation, intelligent identification, and memory storage, and builds the intelligent eye model.
The monitoring unit 12 is configured to determine the target area tracking data model from the target area model and/or the physical distance data, and to obtain, according to the mapping relationship between the target area tracking data model and the terminal interface display parameters, the terminal interface display parameters corresponding to the tracking data model.

That is, from the relationship between the intelligent eye model and the physical distance data, the monitoring unit 12 can build an intelligent eye-tracking data model with eye-tracking capability, and can trigger the control unit 14 by monitoring changes in the physical distance data.
Preferably, in practice, a preset module may also be included, configured to adjust the terminal interface display parameters according to the user's settings, where the terminal interface display parameters specifically include content layout, interface style, font size, and movement mode. That is, the preset module stores the terminal interface display parameters set by the user.
The control unit 14 is configured to adjust the display content and/or display mode of the current terminal interface according to the obtained terminal interface display parameters. The terminal display module displays the user interface (UI) under the control of the control unit 14. That is, the control unit 14 associates the data of the monitoring unit 12 with the preset module, establishes the mapping relationship, and manages the terminal display module by sending instructions that invoke the configuration parameters of the preset module. Specifically, in practice, if the target area model in the target area tracking data model changes, the display content can be panned up, down, left, or right according to that change alone; if the physical distance data changes, the font size in the display mode can be adjusted according to that change alone; and when both the target area model and the physical distance data change, the display content and display mode can be adjusted simultaneously.
As the above processing shows, mapping relationships must be established among the captured image streams, the physical distance from the target object to the terminal, the target area model (for example, the intelligent eye model), the target area tracking data model (for example, the intelligent eye-tracking model), and the terminal interface display parameters. Specifically: a mapping between each group of captured image streams and the physical distance data; a mapping between the captured image streams, the physical distance data, and the target area model; a correspondence between the target area model, the physical distance data, and the target area tracking data model; and a correspondence between the target area tracking data model and the terminal interface display parameters. In practice, when the monitoring unit 12 detects a change in the target area, the control unit 14 is started to act on these mappings and thereby control the terminal display module.
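The final link in this mapping chain, deciding which display properties to adjust from what changed, can be sketched minimally as follows; the dictionary layout is an assumed stand-in for the patent's models:

```python
def on_tracking_update(prev, new):
    """Decide which display properties to adjust.

    prev/new: dicts with 'region_model' (focus area + angle) and
    'distance_cm' keys, standing in for the tracking data model.
    """
    adjustments = []
    if new["region_model"] != prev["region_model"]:
        adjustments.append("pan content (up/down/left/right)")
    if new["distance_cm"] != prev["distance_cm"]:
        adjustments.append("resize fonts/content")
    return adjustments

prev = {"region_model": ("center", 10), "distance_cm": 30}
new = {"region_model": ("upper", 15), "distance_cm": 25}
print(on_tracking_update(prev, new))  # both kinds of adjustment apply
```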
The processing flow of the above modules is described below with reference to the accompanying drawings, taking the human eye as an example. Figure 2 is a schematic diagram of a preferred structure of an embodiment of the present invention. As shown in Figure 2, it includes a terminal tracking sensor module, a segmentation processing module, a system intelligence module, a monitoring module (the monitoring unit in Figure 1), a preset module, a system control module (the control unit in Figure 1), and a terminal display module. The terminal tracking sensor module includes the image stream collection unit and the ranging unit; the system intelligence module includes the parameter matching module and the intelligence unit. When the device shown in Figure 2 automatically adjusts the terminal interface according to the human eye, the following processing is performed:

Step 1: The user picks up the terminal, and its terminal tracking sensor module starts immediately, simultaneously collecting image streams and computing the physical distance between the object and the terminal screen, and transmitting the data to the segmentation processing module over a proprietary protocol. Specifically, three or more cameras in the image stream collection unit begin to collect, from multiple directions, dynamic and static image streams of the objects in front of the terminal screen; on receiving these streams, the ranging unit quickly marks the images and computes the physical distance between each object and the sensor module. The collected groups of image streams and physical distance data are transmitted to the segmentation processing module over the proprietary protocol.
Step 2: Using techniques such as intelligent image segmentation and image processing, the segmentation processing module, after obtaining the image streams and physical distance data, preliminarily separates the eye from the image streams, generates several groups of eye image data, and establishes the mapping between each group of eye image streams and its corresponding physical distance data. The eye image streams are transmitted over the proprietary protocol to the parameter matching module of the system intelligence module for further recognition.
Step 3: After receiving the groups of eye image streams, the parameter matching module of the system intelligence module matches the eye by repeatedly analyzing and comparing its feature vectors, further confirms and recognizes the real image of the eye, and transmits the groups of real eye image data to the intelligence unit for the next stage of analysis.
Step 4: Through the intelligence unit, the system intelligence module stores the recognized groups of real eye image data and synchronously receives the mapping between each group and its corresponding physical distance data. From this mapping it obtains the eye focus area and the eye-to-screen angle data, establishes the mappings among the eye focus area, the real eye image, the eye-to-screen distance, and the eye-to-screen angle, generates the intelligent eye model, and transmits it to the monitoring module.
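The intelligent eye model of Step 4 can be pictured as a record linking each real eye image to its distance, angle, and focus-area data; the field names below are assumptions made for illustration:

```python
def build_eye_model(samples):
    """Build a toy eye model.

    samples: list of (eye_image_id, distance_cm, angle_deg, focus_area)
    tuples, one per recognized eye image.
    """
    model = {}
    for image_id, distance, angle, focus in samples:
        model[image_id] = {"distance_cm": distance,
                           "angle_deg": angle,
                           "focus_area": focus}
    return model

model = build_eye_model([("img0", 30, 10, "center"),
                         ("img1", 25, 15, "upper")])
print(model["img1"]["focus_area"])  # upper
```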
Step 5: The monitoring module generates an intelligent eye-tracking data model from the mapping between the physical distance data and the intelligent eye model, and sends instructions to the system control module by monitoring the physical distance data collected by the terminal tracking sensor module. Once the user moves the terminal and the distance between the user and the terminal screen changes, the eye focus area and the eye-to-screen angle in the model's mappings change accordingly, and the system control module is started. At the same time, the eye focus area, eye-to-screen angle, and eye-to-screen distance data corresponding to that distance in the intelligent eye-tracking model are transmitted to the system control module.
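The monitoring behaviour of Step 5 resembles an observer that notifies the control module only when the monitored distance actually changes; the class and callback below are illustrative, not the patent's interfaces:

```python
class Monitor:
    """Watch distance samples and fire a callback on change."""

    def __init__(self, on_change):
        self.last = None
        self.on_change = on_change  # stands in for the system control module

    def feed(self, distance_cm, focus, angle):
        if self.last is not None and distance_cm != self.last:
            self.on_change({"distance_cm": distance_cm,
                            "focus": focus, "angle": angle})
        self.last = distance_cm

events = []
m = Monitor(events.append)
m.feed(30, "center", 10)   # first sample: baseline only
m.feed(30, "center", 10)   # unchanged: no trigger
m.feed(25, "upper", 15)    # changed: control module notified once
print(len(events))  # 1
```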
Step 6: The system control module calls the preset parameters of the preset module through an interface. The preset data includes all configuration parameters that the terminal UI can present as the distance changes, for example, content layout, interface style, content size, and movement mode. Preferably, on first use, the terminal interface display parameters can be set in the preset module according to the user's habits.
Step 7: The system control module establishes the mapping between the monitoring module and the preset module; once the monitoring module detects a distance change, it triggers the system control module, which controls the display content and manner of the terminal display module according to the preset module's parameters. After receiving the operation instruction, the terminal display module uses the received eye focus area, eye-to-screen angle, and eye-to-screen distance data to look up the terminal interface display parameters and issues commands that adjust the layout and style of the content on the terminal screen. For example, when the monitoring module detects a change in the horizontal distance between the eye and the screen, the terminal display module zooms the eye focus area of the terminal UI in or out; when the eye-to-screen angle changes, it moves the UI content up or down. Specifically, when the user's eye moves horizontally 5 cm closer to the screen, the monitoring module calculates that the distance has shortened by 5 cm and triggers the system control module to perform the preset module's control operation; according to the eye focus area and the eye-to-screen angle data, the system control module then doubles the size of the content in the eye focus area of the UI. If the terminal is then moved vertically up by 3 cm, the monitoring module detects that the distance between the eye and the eye focus area on the screen has changed; the corresponding eye-to-screen angle in the intelligent eye-tracking model becomes larger, the UI content moves down, and the user can easily see the upper part of the UI content.
As the above processing shows, in the embodiments of the present invention the content of the terminal interface is presented with ample flexibility and freedom. The terminal system can adjust the layout and style of the content displayed on the screen according to the distance between the screen and the eye; for example, when that distance decreases, the screen content automatically becomes larger. For a picture stored on the terminal, the image on the screen can be automatically adjusted to a suitable view according to the eye-to-screen distance: at 30 cm the whole avatar is visible, and at 20 cm the screen automatically adjusts to show the lips of the avatar. In addition, the terminal system can present the most suitable interface according to the habits and characteristics of the terminal holder. For example, an elderly user may be farsighted; after the farsightedness data is set during initialization, the interface remains clearly readable within the effective range, whether the terminal is far from or near the eye, unaffected by the farsightedness.
With the above technical solution of the embodiments of the present invention, a target-area tracking data model is established, and the display of the terminal screen is controlled according to the mapping relationship between the target-area tracking data model and the terminal interface display parameters. This solves the problems of the prior art, in which touch-based intelligent interaction is inconvenient for disabled users and imposes a learning cost on the user. The solution extends existing terminal operation modes, can simulate the user's habits, reduces the user's learning cost, narrows individual differences between users, and improves the user experience, making terminal operation more convenient and flexible and helping to improve the market competitiveness of the terminal.
Method embodiment
According to an embodiment of the present invention, a method for automatically adjusting the display of a terminal interface is provided. FIG. 3 is a schematic flowchart of this method. As shown in FIG. 3, the method includes the following processing: Step 301: obtain the physical distance data from the target object to the terminal and/or a target-area model of the target object.
Specifically, obtaining the physical distance data from the target object to the terminal in step 301 includes the following processing: collect an image stream, capture the image data of the target object in the image stream, and calculate the physical distance from the target object to the terminal from that image data. Preferably, in the embodiment of the present invention, the target object is a human face or a person.
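As an illustrative sketch only (not taken from the patent), distance can be estimated from a single image of an object of known physical size using the pinhole-camera relation; all names and values here are assumptions for illustration:

```python
# Hedged sketch: estimating subject-to-camera distance from one image
# via the pinhole-camera relation
#   distance = focal_length_px * real_width / width_in_pixels.

def estimate_distance_cm(focal_length_px: float,
                         real_width_cm: float,
                         width_px: float) -> float:
    """Distance of an object of known physical width from one camera."""
    if width_px <= 0:
        raise ValueError("detected width must be positive")
    return focal_length_px * real_width_cm / width_px

# Example: a face ~15 cm wide imaged at 300 px by a camera whose focal
# length is 600 px appears to be 30 cm away.
print(estimate_distance_cm(600, 15.0, 300))  # 30.0
```

A multi-camera arrangement, as the patent describes next, avoids having to assume the object's real size.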
In practice, at least three points are needed to determine the specific position of an object in space. Therefore, step 301 performs the following processing: 1. collect at least three image streams and capture the image data of the target object in each of them; 2. mark the three or more sets of captured image data with specific codes, analyze and calculate the relationships between those codes, and obtain the physical distance data from the target object to the terminal.
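The patent does not specify the calculation between the coded frames; a minimal version of the underlying idea is stereo triangulation between two horizontally offset cameras, where depth follows from the disparity Z = f·B/d. The function and parameter names below are illustrative assumptions:

```python
# Hedged sketch: depth from the disparity between two camera views,
#   Z = focal_px * baseline / (x_left - x_right).
# A third camera, as in the patent, would add a second baseline and
# allow the same point to be located fully in 3-D space.

def depth_from_disparity(focal_px: float, baseline_cm: float,
                         x_left_px: float, x_right_px: float) -> float:
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("target must project with positive disparity")
    return focal_px * baseline_cm / disparity

# Pupil seen at x=320 px in the left frame and x=290 px in the right,
# cameras 6 cm apart with a 500 px focal length:
print(depth_from_disparity(500, 6.0, 320, 290))  # 100.0 cm
```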
Specifically, obtaining the target-area model of the target object in step 301 includes: 1. collect an image stream, capture the image data of the target object in the image stream, and calculate the physical distance data from the target object to the terminal from that image data; 2. separate the target area from the target object to generate target-area data, and determine the target-area model from the target-area data and the physical distance data. In the embodiment of the present invention, the target area is the human eye.
Specifically, separating the target area from the target object to generate the target-area data includes:
1. preliminarily separate the target area from the target object according to an intelligent segmentation strategy, generating preliminary target-area data; 2. match the preliminary target-area data against the attribute feature parameters of the target-area model to determine the real image of the target area, and obtain the final target-area data from that real image.
Determining the target-area model from the target-area data and the physical distance data specifically includes:
1. calculate the focus area of the target area and the angle between the target area and the terminal from the target-area data and the physical distance data; 2. generate the target-area model from the focus area and that angle.
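The geometry behind these two calculations can be sketched as follows; the function names, coordinate convention (screen plane at z = 0), and sample values are assumptions for illustration, not part of the patent:

```python
import math

# Hedged sketch: the eye-to-screen angle follows from the eye's offset
# relative to the screen and its distance, and the focus point is where
# the gaze ray intersects the screen plane z = 0.

def eye_screen_angle_deg(vertical_offset_cm: float, distance_cm: float) -> float:
    """Angle between the gaze direction and the screen normal."""
    return math.degrees(math.atan2(vertical_offset_cm, distance_cm))

def focus_point_on_screen(eye_xy_cm, gaze_dir, distance_cm):
    """Intersect a gaze ray from the eye with the screen plane z = 0."""
    (ex, ey), (dx, dy, dz) = eye_xy_cm, gaze_dir
    t = distance_cm / -dz          # ray parameter at which z reaches 0
    return (ex + t * dx, ey + t * dy)

print(round(eye_screen_angle_deg(3.0, 30.0), 1))                 # 5.7
print(focus_point_on_screen((0.0, 3.0), (0.0, -0.5, -1.0), 30.0))  # (0.0, -12.0)
```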
Step 302: determine a target-area tracking data model from the target-area model and/or the physical distance data, and obtain the terminal interface display parameters corresponding to the tracking data model according to the mapping relationship between the target-area tracking data model and the terminal interface display parameters.
Step 303: adjust the content and/or manner of the current terminal interface display according to the obtained terminal interface display parameters.
Specifically, in practice, if the target-area model within the tracking data model changes, the displayed content can be panned up, down, left, or right according to that change alone; if the physical distance data within the tracking data model changes, the font size of the display can be adjusted according to that change alone; and if both the target-area model and the physical distance data change, the display content and display manner can be adjusted simultaneously.
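The decision rule just described can be written down directly; the action strings are illustrative placeholders, not part of the patent:

```python
# Hedged sketch of the step-303 adjustment rule: a model change pans the
# content, a distance change rescales the font, and both together
# trigger both adjustments.

def plan_adjustments(model_changed: bool, distance_changed: bool) -> list:
    actions = []
    if model_changed:
        actions.append("pan content up/down/left/right")
    if distance_changed:
        actions.append("rescale font size")
    return actions

print(plan_adjustments(True, False))  # ['pan content up/down/left/right']
print(plan_adjustments(True, True))   # both adjustments
```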
Preferably, in practice, the terminal interface display parameters can also be adjusted according to the user's settings; these parameters specifically include content layout, interface style, font size, and movement mode.
As the above processing shows, mapping relationships must be established between the captured image streams, the physical distance data from the target object to the terminal, the target-area model (for example, an intelligent eye model), the target-area tracking data model (for example, an intelligent eye-tracking model), and the terminal interface display parameters. Specifically: establish a mapping between each captured image stream and its physical distance data; establish a mapping between the captured image streams, the physical distance data, and the target-area model; establish a correspondence between the target-area model, the physical distance data, and the target-area tracking data model; and establish a correspondence between the target-area tracking data model and the terminal interface display parameters. In practice, the monitoring unit detects changes in the target area and starts the control unit, which acts on these mappings, thereby controlling the terminal display module.
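This chain of mappings can be sketched as a pipeline of lookups, so that a monitored change at one stage propagates through to the display parameters. Every key, threshold, and value below is invented for illustration; the patent only specifies that the mappings exist:

```python
# Hedged sketch: each stage maps the previous stage's output onward,
# frame -> distance -> eye model -> display parameters.

pipeline = {
    "frame_to_distance": lambda frame: frame["distance_cm"],
    "distance_to_model": lambda d: {"focus": "centre",
                                    "angle_deg": 0 if d > 25 else 5},
    "model_to_display":  lambda m: {"font_pt": 12 if m["angle_deg"] == 0 else 16,
                                    "scroll": m["focus"]},
}

def display_params(frame):
    d = pipeline["frame_to_distance"](frame)
    m = pipeline["distance_to_model"](d)
    return pipeline["model_to_display"](m)

print(display_params({"distance_cm": 30}))  # {'font_pt': 12, 'scroll': 'centre'}
print(display_params({"distance_cm": 20}))  # {'font_pt': 16, 'scroll': 'centre'}
```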
The processing flow of each of the above modules in the embodiment of the present invention is described below, taking the human eye as an example, with reference to the accompanying drawings.
Step 1: The user picks up the terminal, and its tracking sensor module starts immediately. The tracking sensor module synchronously collects image streams, calculates the physical distance between objects and the terminal screen, and transmits the data to the segmentation processing module over a proprietary protocol. Specifically, three or more cameras in the image-stream collection unit of the tracking sensor module collect dynamic and static image streams of the objects in front of the terminal screen from multiple directions; after receiving these image streams, the ranging unit quickly marks the images and obtains the physical distance between each object and the sensor module through an internal calculation method. The collected sets of image streams and physical distance data are then transmitted to the segmentation processing module over the proprietary protocol.
Step 2: After obtaining the image streams and physical distance data, the segmentation processing module preliminarily separates the human eyes from the image streams using techniques such as intelligent image segmentation and image processing, generates multiple sets of eye image data, and establishes the mapping between each eye image stream and its corresponding physical distance data. The eye image streams are transmitted over the proprietary protocol to the recognition unit in the system intelligence module for further recognition.
Step 3: After receiving the sets of eye image streams, the recognition unit of the system intelligence module matches the eyes by repeatedly analyzing and comparing their feature vectors, determines and recognizes the real image of the eyes, and transmits the sets of real-image data to the intelligence unit for the next stage of analysis.
Step 4: Through its intelligence unit, the system intelligence module stores the recognized sets of real eye-image data and synchronously receives the mapping between each set and its corresponding physical distance data. From this mapping it derives the eye focus area and the eye-to-screen angle data; it then establishes the mapping between the eye focus area, the real eye image, the eye-to-screen distance, and the eye-to-screen angle, generates the intelligent eye model, and transmits it to the monitoring module.
Step 5: The monitoring module generates an intelligent eye-tracking data model from the mapping between the physical distance data and the intelligent eye model, and sends instructions to the system control module based on monitoring the physical distance data collected by the tracking sensor module. Once the user moves the terminal and the distance between the user and the screen changes, the eye focus area and the eye-to-screen angle in the intelligent eye model's mapping change accordingly, and the system control module is activated. At the same time, the eye focus area, eye-to-screen angle, and eye-to-screen distance corresponding to that distance in the intelligent eye-tracking model are transmitted to the system control module.
Step 6: The system control module calls the preset parameters of the preset module through an interface. The preset data includes all the configuration parameters that the terminal UI can present as the distance changes, for example content layout, interface style, content size, and movement mode. Preferably, on first use the terminal interface display parameters can be set in the preset module according to the user's habits.
Step 7: The system control module establishes the mapping between the monitoring module and the preset module; as soon as the monitoring module detects a change in distance, it triggers the system control module. The system control module controls the content and manner of the terminal display module's output according to the parameters of the preset module. After receiving the operation instruction, the terminal display module retrieves the terminal interface display parameters according to the received eye focus area, eye-to-screen angle, and eye-to-screen distance data, and issues instructions to adjust the layout and style of the content on the screen. For example, when the monitoring module detects a change in the horizontal distance between the eyes and the screen, the terminal display module zooms the eye focus area of the UI in or out; likewise, according to changes in the eye-to-screen angle, the terminal display module moves the content of the UI up or down.
Specifically, when the user's eyes move horizontally 5 cm closer to the screen, the monitoring module calculates that the distance data has shortened by 5 cm and triggers the system control module to execute the control operations of the preset module; based on the eye focus area and the eye-to-screen angle data, the system control module doubles the size of the content in the eye focus area of the UI. If the terminal is then moved vertically upward by 3 cm, the monitoring module detects the change in the distance between the eyes and the focus area on the screen; the corresponding eye-to-screen angle in the intelligent eye-tracking model increases, so the UI content moves downward and the user can easily see the upper part of the UI content.
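The worked example above can be sketched as a small reaction function; the 5 cm trigger threshold, the doubling factor, and the scroll rule are taken from the example, while the function signature and the use of atan2 for the angle are illustrative assumptions:

```python
import math

# Hedged sketch of the step-7 example: moving >= 5 cm closer doubles the
# focus-area zoom; raising the terminal increases the eye-to-screen
# angle, which scrolls the UI content downward.

def react_to_motion(zoom, distance_cm, d_distance_cm, d_height_cm):
    """Return (new_zoom, scroll_direction) after the user/terminal moves."""
    if d_distance_cm <= -5:                 # eyes moved at least 5 cm closer
        zoom *= 2
    angle = math.degrees(math.atan2(d_height_cm,
                                    distance_cm + d_distance_cm))
    scroll = "down" if angle > 0 else "none"
    return zoom, scroll

# Eyes 30 cm away move 5 cm closer while the terminal rises 3 cm:
print(react_to_motion(1.0, 30, -5, 3))  # (2.0, 'down')
```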
As can be seen from the above processing, in the embodiment of the present invention the presentation of terminal interface content has ample flexibility and freedom. The terminal system can adjust the layout and style of the content displayed on the screen according to the eye-to-screen distance data; for example, as that distance decreases, the screen content automatically becomes larger. If a picture is stored on the terminal, the image shown on the screen can be adjusted automatically to suit the distance between the eyes and the screen: at 30 cm the user sees the whole portrait, while at 20 cm the screen automatically adjusts to show the lips of the portrait. Furthermore, in the embodiment of the present invention, the terminal system can present the optimal interface according to the habits and characteristics of the terminal holder. For example, for an elderly user who is farsighted, once the farsightedness data has been set during initialization, the interface can be viewed clearly within the effective range regardless of whether the terminal is held far from or near to the eyes, without being hindered by the farsightedness.
With the above technical solution of the embodiments of the present invention, a target-area tracking data model is established, and the display of the terminal screen is controlled according to the mapping relationship between the target-area tracking data model and the terminal interface display parameters. This extends the existing terminal operation modes and provides a new mode of intelligent human-computer interaction. The embodiments of the present invention can simulate the user's reading habits and, according to those habits, pan the displayed content up, down, left, or right and/or adjust the font size. This reduces the user's learning cost, improves the user experience, and makes terminal operation more convenient and flexible, helping to improve the market competitiveness of the terminal; in particular, users with impaired arms can have the content and manner of the terminal display adjusted without using their hands.
Although the preferred embodiments of the present invention have been disclosed for purposes of illustration, those skilled in the art will recognize that various improvements, additions, and substitutions are possible; therefore, the scope of the present invention should not be limited to the embodiments described above.
Claims
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201110099276.6A CN102752438B (en) | 2011-04-20 | 2011-04-20 | Method and device for automatically regulating terminal interface display |
| CN201110099276.6 | 2011-04-20 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2012142869A1 true WO2012142869A1 (en) | 2012-10-26 |
Family
ID=47032332
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2012/071468 Ceased WO2012142869A1 (en) | 2011-04-20 | 2012-02-22 | Method and apparatus for automatically adjusting terminal interface display |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN102752438B (en) |
| WO (1) | WO2012142869A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114845165A (en) * | 2022-04-28 | 2022-08-02 | 深圳创维-Rgb电子有限公司 | Interface display method, apparatus, device and readable storage medium |
Families Citing this family (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103076957B (en) * | 2013-01-17 | 2016-08-03 | 上海斐讯数据通信技术有限公司 | A kind of display control method and mobile terminal |
| CN103491230B (en) * | 2013-09-04 | 2016-01-27 | 三星半导体(中国)研究开发有限公司 | Can the mobile terminal of automatic regulating volume and font and Automatic adjustment method thereof |
| CN104866082B (en) * | 2014-02-25 | 2019-03-26 | 北京三星通信技术研究有限公司 | Method and device for browsing based on user behavior |
| CN103869978A (en) * | 2014-02-25 | 2014-06-18 | 上海聚力传媒技术有限公司 | Method and equipment for determining display size of screen element |
| CN105607733B (en) * | 2015-08-25 | 2018-12-25 | 宇龙计算机通信科技(深圳)有限公司 | Adjusting method, regulating device and terminal |
| CN106919247B (en) * | 2015-12-25 | 2020-02-07 | 北京奇虎科技有限公司 | Virtual image display method and device |
| CN106412232A (en) * | 2016-08-26 | 2017-02-15 | 珠海格力电器股份有限公司 | Method and device for controlling zooming of operation interface and electronic equipment |
| US20210072802A1 (en) * | 2016-12-27 | 2021-03-11 | Shenzhen Royole Technologies Co., Ltd. | Electronic device and display control method thereof |
| CN109584285B (en) * | 2017-09-29 | 2024-03-29 | 中兴通讯股份有限公司 | A control method, device and computer-readable medium for display content |
| CN110765847B (en) * | 2019-09-06 | 2023-08-04 | 平安科技(深圳)有限公司 | Font adjustment method, device, equipment and medium based on face recognition |
| CN113495661A (en) * | 2020-04-03 | 2021-10-12 | 阿里巴巴集团控股有限公司 | Display method and device of electronic device and computer storage medium |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6603491B2 (en) * | 2000-05-26 | 2003-08-05 | Jerome H. Lemelson | System and methods for controlling automatic scrolling of information on a display or screen |
| CN101587704A (en) * | 2008-05-20 | 2009-11-25 | 鸿富锦精密工业(深圳)有限公司 | Portable video unit and method for adjusting display font size thereof |
| CN101751209A (en) * | 2008-11-28 | 2010-06-23 | 联想(北京)有限公司 | Method and computer for adjusting screen display element |
| CN101815126A (en) * | 2010-03-19 | 2010-08-25 | 中兴通讯股份有限公司 | Method and device for automatically adjusting display scale |
| CN101924825A (en) * | 2010-07-14 | 2010-12-22 | 康佳集团股份有限公司 | Mobile terminal and method for automatically regulating display size of characters |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101271678A (en) * | 2008-04-30 | 2008-09-24 | 深圳华为通信技术有限公司 | Screen font zooming method and terminal unit |
| CN101893934A (en) * | 2010-06-25 | 2010-11-24 | 宇龙计算机通信科技(深圳)有限公司 | Method and device for intelligently adjusting screen display |
- 2011-04-20: CN application CN201110099276.6A granted as patent CN102752438B (not active: Expired - Fee Related)
- 2012-02-22: WO application PCT/CN2012/071468 published as WO2012142869A1 (not active: Ceased)
Also Published As
| Publication number | Publication date |
|---|---|
| CN102752438A (en) | 2012-10-24 |
| CN102752438B (en) | 2014-08-13 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 12773786; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 12773786; Country of ref document: EP; Kind code of ref document: A1 |