CN108600797B - Information processing method and electronic equipment - Google Patents
- Publication number
- CN108600797B (application CN201810297237.9A)
- Authority
- CN
- China
- Prior art keywords
- information
- media
- media information
- environment
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/4104—Peripherals receiving signals from specially adapted client devices
- H04N21/4122—Peripherals receiving signals from specially adapted client devices; additional display device, e.g. video projector
- H04N21/4126—The peripheral being portable, e.g. PDAs or mobile phones
- H04N21/41415—Specialised client platforms involving a public display, viewable by several users in a public space outside their home, e.g. movie theatre, information kiosk
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
- H04N21/439—Processing of audio elementary streams
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/47217—End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
- H04N21/812—Monomedia components thereof involving advertisement data
- H04N21/858—Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure provides an information processing method and an electronic device. The method includes: processing environmental information obtained by a first acquisition device, the environmental information being associated with first media information; obtaining second media information corresponding to a portion of the environmental information based at least on a processing result; and outputting the second media information, the second media information matching the first media information in content. The method and device enable convenient, associated playback of different media information and improve the user experience.
Description
Technical Field
The embodiments of the present application relate to the field of multimedia devices, and in particular to an information processing method and an electronic device.
Background
Currently, to promote a product or increase its familiarity, more and more merchants turn to advertising. For example, advertisement information is displayed on an outdoor display screen, played on a television, or distributed as audio or video through network and broadcasting equipment. However, due to environmental factors or the limitations of the playback device, the user may only be able to watch the picture or only listen to the sound, and cannot do both at once, which degrades the user experience and reduces the effectiveness of the advertisement.
Disclosure of Invention
The embodiments of the present application provide an information processing method and an electronic device that can conveniently play different media information in an associated manner.
In order to solve the above technical problem, an embodiment of the present application provides the following technical solutions:
an information processing method comprising:
processing environmental information obtained by a first acquisition device, the environmental information being associated with first media information;
obtaining second media information corresponding to a portion of the environment information based at least on a processing result;
and outputting the second media information, wherein the second media information is matched with the first media information in content.
In an embodiment of the present application, processing the environmental information obtained by the first acquisition device includes:
acquiring a first part of information in the environment information;
processing the first part of information to obtain the processing result;
or acquiring a second part of information in the environment information;
processing the second part of information to obtain the processing result;
the first part of information and the second part of information are different, and the first part of information comprises identification information corresponding to the second media information; the second part of information comprises at least one of image information, character information and audio information corresponding to the first media information.
In an embodiment of the present application, processing the environmental information obtained by the first acquisition device includes:
if the first preset condition is met, acquiring the environmental information through a first acquisition device;
triggering a first part of information about second media information in the environment information;
wherein satisfying the first preset condition includes at least one of the following:
obtaining a spatial parameter representing the posture of the first acquisition device; if the change rule of the spatial parameter indicates that the posture change of the first acquisition device conforms to a preset rule, or if the spatial parameter indicates that the first acquisition device has been kept in a first spatial posture for longer than a first preset time, the first preset condition is met; or
acquiring a behavior parameter representing the user; if the behavior parameter indicates that the user is paying attention to the environmental information, or has been paying attention to it for longer than a second preset time, the first preset condition is met; or
obtaining a playing parameter; if a change of the playing parameter indicates that the user is paying attention to the environmental information, the first preset condition is met; or
if a first instruction instructing acquisition of the environmental information is obtained, the first preset condition is met.
In an embodiment of the present application, processing the environmental information obtained by the first acquisition device includes:
acquiring environmental information through the first acquisition device;
if a second preset condition is met, triggering a first part of information about second media information in the environment information;
wherein satisfying the second preset condition includes at least one of the following:
if partial information corresponding to the first media information in the environmental information is continuously acquired within a third preset time, the second preset condition is met; or
if a second instruction indicating triggering of the first part of information is obtained, the second preset condition is met.
In an embodiment of the application, the triggering the first part of the environment information about the second media information includes one of the following:
identifying identification information about second media information in the environment information; triggering the link corresponding to the identification information based on the identification result; wherein the first part of information comprises the identification information; or
Triggering a website link corresponding to second media information in the environment information, wherein the first part of information comprises the website link;
the first part of information is displayed on the first media information or displayed in an associated area outside a display area of the first media information.
In an embodiment of the present application, the matching of the second media information and the first media information content includes:
the first media information is image information or video information, and the second media information is matched audio information; or when the first media information is audio information, the second media information is matched image information or video information; and/or
The output progress of the first media information and the output progress of the second media information are matched.
In an embodiment of the present application, wherein,
the first media information is from a first source address, and the first source address is a playing address of a first source file, wherein the first source file comprises first media information and second media information;
wherein, the processing the environmental information obtained by the first acquisition device comprises:
and acquiring a first source address corresponding to first media information of the environment information, so as to acquire the second media information based on the first source address.
In an embodiment of the application, the outputting the second media information includes:
determining the current playing progress based on the first part of information, corresponding to the second media information, in the environmental information acquired in real time; the first part of information is updated in real time based on the playing progress of the first media information;
synchronizing the second media information according to the playing progress; or
Acquiring a second part of information except the first part of information in the environment information;
matching the playing progress matched with the second part of information in a database;
and synchronizing the second media information according to the playing progress.
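The two synchronization strategies above — reading the playing progress directly from real-time first-part information, versus matching a captured second-part sample against a database — can be sketched as follows. This is an illustrative sketch, not part of the patent; the `@t=` payload format and all names are invented for clarity:

```python
from typing import Dict, Optional

def progress_from_first_part(first_part: str) -> Optional[float]:
    """Strategy 1: the first part of information is updated in real time
    with the playing progress of the first media information, here assumed
    to carry a payload of the form 'media42@t=73.5' (seconds)."""
    if "@t=" not in first_part:
        return None
    return float(first_part.split("@t=", 1)[1])

def progress_from_database(sample: bytes, db: Dict[bytes, float]) -> Optional[float]:
    """Strategy 2: match a captured second-part sample (e.g. a frame or
    audio fingerprint) against a database mapping samples to positions."""
    return db.get(sample)

def synchronize_second_media(progress: Optional[float]) -> str:
    """Seek the second media information to the detected progress;
    a real player would perform the seek, here we describe it."""
    if progress is None:
        return "no progress detected; play from start"
    return f"seek to {progress:.1f}s"
```

Either strategy yields a progress value that the output step then uses to keep the second media information in step with the first.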
In addition, an embodiment of the present application further provides an electronic device, which includes:
the first acquisition device is used for acquiring environment information, and the environment information is associated with first media information;
and the processor is configured to process the environment information obtained by the first acquisition device, obtain second media information corresponding to a part of the environment information at least based on a processing result, and control output of the second media information, wherein the second media information is matched with the first media information in content.
In an embodiment of the present application, the electronic device further includes:
a retaining device for securing the electronic device, configured as a wearable apparatus, relative to at least a portion of the user's body.
Based on the disclosure of the above embodiments, it can be known that the embodiments of the present application have the following beneficial effects:
according to the embodiment of the application, the audio or video information matched with the media information can be acquired through the processing operation of the media information when a user can only watch or listen to the media information due to the influence of environmental limitation, the limitation of a playing source or other factors, so that the related playing of different matched media information is realized, and the characteristics of simplicity, convenience and better user experience are realized; in addition, the user can selectively acquire and output the matched media information to the environment information which is interested by the user, and the applicability is better.
Drawings
FIG. 1 is a schematic flow chart of an information processing method in an embodiment of the present application;
FIG. 2 is a schematic flow chart of processing environmental information in one embodiment of the present application;
FIG. 3 is a schematic flow chart of processing environmental information in another embodiment of the present application;
FIG. 4 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
Specific embodiments of the present application will be described in detail below with reference to the accompanying drawings, but the present application is not limited thereto.
It will be understood that various modifications may be made to the embodiments disclosed herein. Accordingly, the foregoing description should not be construed as limiting, but merely as exemplifications of embodiments. Other modifications will occur to those skilled in the art within the scope and spirit of the disclosure.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and, together with a general description of the disclosure given above, and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the present application will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the attached drawings.
It should also be understood that, although the present application has been described with reference to some specific examples, a person of skill in the art shall certainly be able to achieve many other equivalent forms of application, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.
The above and other aspects, features and advantages of the present disclosure will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present disclosure are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms. Well-known and/or repeated functions and structures have not been described in detail so as not to obscure the present disclosure with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the disclosure.
In the following, embodiments of the present application are described in detail with reference to the accompanying drawings. An embodiment of the present application provides an information processing method that can obtain, based on acquired environmental information, second media information matching first media information in that environmental information. The first media information and the second media information may be, for example, image information and audio information, respectively, so that the image information can be viewed while the audio information is listened to, improving the user experience.
FIG. 1 is a schematic flow chart of an information processing method in an embodiment of the present application. As shown in fig. 1, the information processing method may include:
processing environmental information obtained by a first acquisition device, the environmental information being associated with first media information;
obtaining second media information corresponding to a portion of the environment information based at least on a processing result;
and outputting the second media information, wherein the second media information is matched with the first media information in content.
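The three steps of fig. 1 can be sketched as a small pipeline. This is a hedged illustration only — the function names and the dictionary-based catalog are invented, not part of the patent:

```python
def process_environment(env: dict) -> str:
    """Step 1: process environmental info associated with the first media info.
    Here we assume the acquisition device yielded an identification payload."""
    return env["identification"]

def obtain_second_media(result: str, catalog: dict) -> str:
    """Step 2: obtain second media info corresponding to a portion of the
    environmental info, based at least on the processing result."""
    return catalog[result]

def output_media(media: str) -> str:
    """Step 3: output the second media info, matched in content
    with the first media info."""
    return f"playing {media}"

def pipeline(env: dict, catalog: dict) -> str:
    return output_media(obtain_second_media(process_environment(env), catalog))
```

For instance, an environment carrying the identifier of an outdoor video advertisement resolves to its matching audio track, which is then played on the user's device.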
The information processing method provided by the embodiments of the present application can be applied to any electronic device capable of playing audio and/or video. For example, the electronic device may be a mobile phone, a computer, or a smart device worn by the user; the method may also be applied to augmented reality (AR), virtual reality (VR), or mixed reality (MR) devices. The electronic device may obtain the environmental information through a first acquisition device disposed in it. The environmental information may relate to broadcast information or other wireless signals transmitted from the outside, to video information played or dynamic or static image information displayed on an external display device, or to media information such as audio information played on an external audio device; it may also relate to audio, video, or image information played by the electronic device itself. That is, the environmental information may relate to media information played by the user's own electronic device, or to media information played or transmitted in the external environment. In addition, in the embodiments, associating the environmental information with the first media information may mean that the environmental information includes the first media information together with identification information matching it, or that the environmental information includes identification information that is associated with the second media information and loaded in the media information.
In the embodiments of the present application, the first media information may be at least one of video information, image information, and audio information. Specifically, after detecting the environmental information, the electronic device may process it to obtain and output second media information matching the first media information. The second media information may also be at least one of video information, image information, and audio information, but is different from the first media information.
For example, in one embodiment of the present application, the environmental information may include first media information configured as video or image information displayed on a display device, together with identification information loaded on the first media information or on an associated area; alternatively, the environmental information may consist of identification information loaded on the first media information. When the user pays attention to the first media information, the identification information in the environmental information can be recognized and triggered by the electronic device, so that second media information matching the first media information and configured as audio information can be obtained. That is, in practice, even if video information played outdoors or indoors is not accompanied by its corresponding audio, or the user cannot hear the audio clearly because of external conditions, a user who wants to listen can acquire the matching second media information through the carried electronic device and listen to the audio. With this configuration, the second media information corresponding to the first media information can be conveniently acquired and played, and the user can selectively acquire it according to his or her own needs or interests — acquisition and playback are performed only when the user wants to listen to the corresponding audio — giving a better user experience.
In another embodiment of the present application, the environmental information may include first media information configured as audio information played by an audio device or transmitted in a space, together with identification information transmitted with the first media information or displayed around it; alternatively, the environmental information may consist of identification information associated with the first media information and transmitted or displayed with it. Based on recognition and triggering of the identification information, the electronic device can acquire second media information matching the first media information and configured as video or image information. That is, in practice, even when audio information played outdoors or indoors is not accompanied by its corresponding video, or the video cannot be played through the audio equipment because of external conditions, the user can acquire the matching second media information through the carried electronic device and view the video or image information. With this configuration, the second media information corresponding to the first media information can be conveniently acquired and played, and the user can selectively acquire it according to his or her own needs or interests — acquisition and playback are performed only when the user wants to watch the corresponding video or image information — giving a better user experience.
Further, when playing second media information in audio form obtained through the electronic device, the user can choose to listen through earphones, avoiding disturbance to the surrounding environment or other users. Conversely, for advertisement information in video or image form, an advertiser can choose to play only the video or image, while users listen to the matching audio through their own electronic devices, which avoids noise pollution and interference between audio streams played by multiple sources.
In addition, in other embodiments of the present application, the first media information and the second media information may be configured in other forms, and the environmental information may be any form of information loaded with the first media information. As long as second media information matching the associated first media information can be obtained by processing the environmental information, the configuration falls within the scope of the embodiments of the present application, and details are not repeated here.
In addition, in this embodiment of the application, the matching of the second media information and the first media information content may include: the first media information is image information or video information, and the second media information is matched audio information; or when the first media information is audio information, the second media information is matched image information or video information. Namely, the associated playing of the matched audio information and the video information/image information can be realized.
In another embodiment, the matching of the second media information with the first media information content may further comprise: the output progress of the first media information is matched with the output progress of the second media information, namely, not only the content is matched, but also the playing progress of the first media information is matched, so that the use experience of a user is improved.
The specific configuration of the embodiments of the present application will be described in detail below. The environment information in the embodiment of the present application as described above may be associated with the first media information, or may also include the first media information. In this embodiment of the application, processing the environmental information obtained by the first collecting device may include:
acquiring a first part of information in the environment information;
and processing the first part of information to obtain the processing result.
The first part of information in the embodiment of the present application may be configured as identification information corresponding to the second media information, for example a two-dimensional code identifier, a one-dimensional code identifier, or a website link. The electronic device can recognize and trigger the identification information, so as to obtain the second media information corresponding to the first media information. For example, the electronic device may further include a second acquisition device used for recognizing and triggering the first part of information; the second acquisition device may be a two-dimensional code recognition module, a one-dimensional code recognition module, or a website link triggering module. The processor can execute the recognition and triggering of the identification information through the second acquisition device, so as to obtain the second media information.
That is, the identification information can be used to obtain the second media information in an associated manner: the identification information is recognized, a website or a storage address of the corresponding second media information is obtained, and the second media information is then acquired.
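As an illustrative sketch only (the function and table names below are assumptions, not part of the claimed method), resolving recognized identification information to the storage address of the second media information could look like:

```python
from typing import Optional

def resolve_identification(payload: str, media_index: dict) -> Optional[str]:
    """Map a decoded identifier (e.g. a two-dimensional code payload or a
    website link) to the storage address of the second media information.

    `media_index` stands in for the lookup table or server that associates
    identifiers with second-media addresses; both names are hypothetical.
    """
    if payload.startswith(("http://", "https://")):
        return payload  # a website link can be triggered directly
    return media_index.get(payload)  # otherwise query the identifier table

# Example: a two-dimensional code encodes an opaque media identifier.
index = {"media:ad-42": "https://example.invalid/audio/ad-42.mp3"}
```

A recognized link is returned as-is for direct triggering, while an opaque code identifier is resolved through the index first.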
Alternatively, in another embodiment of the present application, the processing the environmental information obtained by the first collecting device may also include:
acquiring a second part of information in the environment information;
processing the second part of information to obtain the processing result;
wherein the first part of information and the second part of information are different.
As described above, the environment information in the embodiment of the present application may include the first media information. For example, when the first media information is a video, the second part of information may be part of the video data in the first media information, such as a partial video segment or a plurality of images captured from the video. Alternatively, when the first media information is audio, the second part of information may be part of the audio data in the first media information, such as a partial audio segment, or text information recognized from the audio.
Specifically, the data of the second media information may be stored in the electronic device, in another electronic device in communication with the electronic device, in a server, or in a database in the cloud. After the second part of information is obtained, the corresponding second media information can be obtained by querying based on it.
For example, when the first media information is a video or an image, the first acquisition device acquires sub-video information or image information (the second part of information) included in the environment information, and second media information matched with the sub-video information or image information is acquired in the database, where the second media information can comprise audio information.
Or, when the first media information is audio information, sub-audio information included in the environment information, or text information recognized from the audio information, is acquired through the first acquisition device;
and second media information matched in the database is acquired based on the sub-audio information or the recognized text information, where the second media information comprises video information or image information.
That is, in the embodiment of the application, the second media information matched with the second part of information can be identified through image matching, video matching, text matching, or audio matching, so that the second media information corresponding to the first media information is obtained. Different processing modes can thus be executed for different types of first media information to acquire the corresponding second media information, which makes the method more flexible and simple.
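The database query described above can be sketched as follows; the fingerprint function and database layout are illustrative assumptions standing in for whatever image-, video-, or audio-matching scheme an implementation actually uses:

```python
import hashlib

def fingerprint(fragment: bytes) -> str:
    """Reduce a captured sub-video frame or sub-audio chunk to a lookup key.

    A real system would use a perceptual fingerprint robust to noise; a plain
    hash is used here only to keep the sketch self-contained.
    """
    return hashlib.sha256(fragment).hexdigest()[:16]

def match_second_media(fragment: bytes, database: dict):
    """Return the second media entry whose fingerprint matches the captured
    second part of information, or None when the database has no match."""
    return database.get(fingerprint(fragment))

# Hypothetical database: fingerprints of first-media fragments mapped to
# the matched second media information.
db = {fingerprint(b"frame-0042"): {"media": "matched-audio.mp3"}}
```

The same lookup shape covers all four matching modes; only the fingerprint function changes per media type.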
Further, as shown in fig. 2, a schematic flow chart of processing environment information in an embodiment of the present application is shown. Which may include:
if the first preset condition is met, acquiring the environmental information through a first acquisition device;
triggering the first part of information about the second media information in the environment information.
That is, in the embodiment of the present application, the condition for acquiring the environment information may be limited, and the acquisition of the environment information is executed only when the first preset condition is satisfied, so as to process the first part of information in the environment information, or acquire the second media information based on the second part of information.
Specifically, the first preset condition in the embodiment of the present application may be met from three angles: first, the acquisition of the environment information is executed when the spatial posture of the acquisition device meets a preset condition; second, when the behavior of the user meets a preset condition; and third, when a preset control instruction is received. These three cases are detailed below.
The embodiment of the application can obtain, in real time, spatial parameters representing the posture of the first acquisition device. For example, a posture sensor, such as a gravity sensor, an acceleration sensor, or a gyroscope, can be arranged in the first acquisition device, and posture information of the first acquisition device, such as its current position and orientation, can be obtained in real time through the posture sensor, so that the posture change of the first acquisition device can be tracked. If the change rule of the spatial parameters indicates that the posture change of the first acquisition device accords with a preset rule, or if the spatial parameters indicate that the time for which the first acquisition device is maintained in a first spatial posture exceeds a first preset time, the first preset condition is met.
The change rule of the spatial parameters indicating that the posture change of the first acquisition device accords with the preset rule includes: judging, from the acquired spatial position, orientation, and other information, that the posture of the first acquisition device has changed from bottom to top, the rule being met when the moving height is larger than a preset height. When the first acquisition device is judged to have moved from a lower first position to a higher second position, and the height of the movement is larger than the preset height, it is judged to accord with the preset rule, namely, the first preset condition is met. In this case, the need to acquire the environment information can be recognized automatically when the user lifts the first acquisition device, which is simple and convenient. Or, when the first acquisition device is judged to have turned from a first direction to a second direction, and the included angle between the first direction and the second direction is larger than a preset angle, it is judged to accord with the preset rule, namely, the first preset condition is met. For example, when the user turns suddenly, it may be determined that the environment information needs to be acquired, and the environment information may be automatically collected or recognized. The preset height and the preset angle can be configured according to requirements, and different range values can be set in different embodiments. Or, in another embodiment, when it is determined that the time for which the first acquisition device is kept in the first spatial posture exceeds the first preset time, it is determined that the first preset condition is met.
That is, when the user needs to perform the collection and recognition of the environment information through the first acquisition device, the first acquisition device will usually be kept at a position corresponding to the environment information, i.e., held in the first spatial posture; if the time for maintaining this posture exceeds the first preset time, it may be determined that the environment information needs to be acquired, that is, the first preset condition is satisfied, and the acquisition of the environment information may then be performed. In this way, the judgment of the first preset condition is executed from the angle of the acquisition device: when the posture of the acquisition device accords with the preset rule, it can be determined that the environment information needs to be acquired.
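The posture-based judgment of the first preset condition can be sketched as follows; the threshold values and parameter names are illustrative assumptions, not values from the embodiments:

```python
def posture_condition_met(start_height: float, end_height: float,
                          turn_angle_deg: float, hold_seconds: float,
                          preset_height: float = 0.3,
                          preset_angle: float = 45.0,
                          first_preset_time: float = 2.0) -> bool:
    """True when any of the three posture rules described above holds:
    the device was raised by more than the preset height, turned by more
    than the preset angle, or held in one posture past the preset time."""
    raised = (end_height - start_height) > preset_height  # bottom-to-top lift
    turned = turn_angle_deg > preset_angle                # sudden turn
    held = hold_seconds > first_preset_time               # steady posture
    return raised or turned or held
```

Each rule is independent, matching the "or" structure of the embodiment: any one of the three triggers acquisition of the environment information.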
In addition, the determining whether the first preset condition is satisfied from the perspective of the user may include:
and acquiring a behavior parameter representing the user, and if the behavior parameter indicates that the user pays attention to the environment information or the time for paying attention to the environment information exceeds second preset time, meeting a first preset condition.
The behavior parameters of the user may include the direction in which the user's eyes are looking, image parameters of the pupils of the user's eyes, and the like. For example, the electronic device may acquire a facial image of the user and determine, based on the facial image, whether the gaze direction of the user's eyes corresponds to the environment information; if so, it is determined that the user is paying attention to the environment information, that is, the first preset condition is satisfied. In this way, when it is determined that the user is paying attention to the environment information, the acquisition of the environment information may be performed. Alternatively, the electronic device may determine that the first preset condition is satisfied when, based on the acquired facial image, it is determined that the user is gazing at the environment information and, further, that the user's pupils are in a dilated state, or that the user's eyes remain open within a fourth preset time. That is, when a user is paying attention to a piece of environment information, the pupils commonly dilate, and the eyes often stay open without blinking while watching a video or an image; therefore, the present application may determine that the first preset condition is satisfied by detecting the pupil-dilation state or the eyes-open state of the user. Or, if the time for which the user pays attention to the environment information exceeds a second preset time, it is judged that the first preset condition is met; that is, if the user has paid attention to one piece of environment information for at least the second preset time, the acquisition of the environment information may be performed.
Through the method, whether the acquisition operation of the environment information is executed or not can be determined through the behavior parameters of the user, and the method has the characteristics of being more intelligent and convenient.
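The behavior-parameter judgment described above can be sketched as follows, assuming the gaze direction, pupil state, and timing values have already been extracted from the facial image (all names and thresholds are illustrative):

```python
def attention_condition_met(gaze_on_target: bool,
                            pupils_dilated: bool,
                            eyes_open_seconds: float,
                            gaze_seconds: float,
                            second_preset_time: float = 1.5,
                            fourth_preset_time: float = 3.0) -> bool:
    """True when the user's behavior parameters indicate attention to the
    environment information, per the rules in the embodiment above."""
    if not gaze_on_target:
        return False  # gaze direction must correspond to the environment info
    # Gazing plus dilated pupils, eyes held open long enough,
    # or a sufficiently long dwell time on the environment information.
    return (pupils_dilated
            or eyes_open_seconds > fourth_preset_time
            or gaze_seconds > second_preset_time)
```

Gaze correspondence is the gate; the pupil, eyes-open, and dwell-time checks are the alternative confirmations listed in the embodiment.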
In addition, the embodiment of the application can acquire the playing parameters of the electronic device, such as volume parameter information, parameter information of a noise-reduction mode, or display parameter information. If the obtained change of a playing parameter represents that the user is paying attention to the environment information, the first preset condition is met. For example, when the first media information is audio information and a user wants the environment information to be acquired, the user may perform an operation of increasing the volume or an operation of controlling audio playing; when the electronic device detects a change of the volume parameter, and the change indicates a volume increase, it may be determined that the first preset condition is met, that is, the user is paying attention to the audio information, and the environment information then needs to be acquired through the first acquisition device. Or, the user may input a control instruction for the noise-reduction mode; when the instruction is obtained, the electronic device determines that the playing parameters meet the first preset condition, that is, that the user is paying attention to the audio information, and the first acquisition device then needs to obtain the environment information. The user can also perform an adjustment operation of the display parameters to control the acquisition. For example, when the electronic device detects an adjustment instruction of the display parameters, it is determined that the user is paying attention to the second media information corresponding to the audio information; at this time, the first acquisition device may obtain the environment information, so that the environment information is processed to obtain the second media information.
The adjusting instruction of the display parameter comprises the following steps: at least one of a brightness adjustment instruction, a contrast adjustment instruction, and a display interface size adjustment instruction.
In another embodiment, when the first media information is image information or video information, the user may likewise perform an adjustment operation of the playing parameters when the environment information needs to be acquired. For example, an adjustment operation of the display parameters is performed, so that the electronic device obtains a display parameter adjustment instruction; when the electronic device obtains this instruction, it is judged that the user is paying attention to the video information, and the environment information can be acquired through the first acquisition device. Alternatively, the user may perform other adjustment operations of the playing parameters to control the acquisition of the audio information: for example, a volume-increase adjustment operation, a control operation of playing audio or video, or a control instruction for the noise-reduction mode. When the electronic device obtains such a playing parameter, it is determined that the user is paying attention to the second media information corresponding to the video information, and the first acquisition device then acquires the environment information, so that the environment information is processed to obtain the second media information. Based on the above embodiments, the acquisition of the environment information can be performed intelligently and in real time based on changes of the playing parameters of the electronic device.
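A minimal sketch of the playing-parameter trigger, with hypothetical event names standing in for the volume, noise-reduction, and display-parameter changes described above:

```python
# Hypothetical event names for playing-parameter changes that, per the
# embodiments above, indicate the user is paying attention to the media.
ATTENTION_EVENTS = {
    "volume_up",           # user raised the volume
    "noise_reduction_on",  # user switched on the noise-reduction mode
    "brightness_adjust",   # display-parameter adjustment instructions
    "contrast_adjust",
    "display_resize",
}

def playback_condition_met(event: str) -> bool:
    """True when the playing-parameter change meets the first preset
    condition, i.e. it represents attention to the environment information."""
    return event in ATTENTION_EVENTS
```

Events outside the set (e.g. lowering the volume) do not trigger acquisition of the environment information.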
Further, in another embodiment of the present application, the electronic device may also directly receive a first instruction, input by a user, for acquiring the environment information. For example, a control key may be disposed on the electronic device; the user may input the first instruction by triggering the control key, and the electronic device acquires the environment information through the first acquisition device when receiving the first instruction. The control key can be a mechanical key or a touch key. Or, the electronic device may receive gesture information of the user; when the gesture information matches a preset gesture, it is determined that the first instruction has been received. In other embodiments, the input of the first instruction may be implemented in other manners, which is not limited herein.
Through the above configuration, the judgment of whether the first preset condition is met can be realized, and when the first preset condition is met, the environment information is acquired through the first acquisition device, so that automatic control of the first acquisition device can be realized and the requirements of users can be met. The first acquisition device in the embodiment of the application can comprise a camera device and an audio receiving device, which can be selected and set according to requirements.
Fig. 3 is a schematic flow chart of processing environment information according to another embodiment of the present application, which may include:
acquiring environmental information through the first acquisition device;
and, if a second preset condition is met, triggering the first part of information about the second media information in the environment information, for example by means of the second acquisition device.
In the embodiment of the application, the environment information may be acquired in real time through the first acquisition device, acquired based on a received instruction, or acquired when the first preset condition described above is met. In the process of acquiring the environment information, if a second preset condition is met, the recognition and triggering of the first part of information in the environment information is executed so as to acquire the second media information, or the second media information is obtained based on the second part of information. That is to say, in the present application, the triggering of the first part of information, or the comparison and recognition of the second part of information, is executed only when the second preset condition is met; in other words, the acquisition of the second media information is executed when the second preset condition is met, so as to avoid triggering the first part of information or performing matching on the second part of information when the user does not need to obtain the second media information, which would affect the user experience.
Next, the second preset condition in the embodiment of the present application is described in detail: the second preset condition is satisfied if the first acquisition device continuously collects, within a third preset time, partial information in the environment information corresponding to the second media information. That is, in the process of acquiring the environment information by the first acquisition device, if it is determined that the environment information acquired within the third preset time corresponds to the same partial information of the first media information, it may be determined that the user has been paying attention to the first media information all along; at this time, the triggering of the first part of information of the environment information, or the recognition and matching of the second part of information, may be performed, so as to obtain the second media information.
Or, in another embodiment of the present application, if a second instruction for instructing the triggering of the first part of information is obtained in the process of acquiring the environment information, the second preset condition is satisfied. The electronic device may receive, in real time, a second instruction input by a user; for example, the electronic device may be provided with a control key, the user may input the second instruction by triggering the control key, and when receiving the second instruction, the electronic device may trigger the first part of information in the environment information through the first acquisition device, or perform a comparison and matching operation on the second part of information. The control key can be a mechanical key or a touch key. Or, the electronic device may receive gesture information of the user; when the gesture information matches a preset gesture, it is determined that the second instruction has been received. In other embodiments, the input of the second instruction may be implemented in other manners, which is not limited herein.
Based on the above, the first part of information may be selected to be triggered by whether the acquired environment information corresponds to the same first media information, or may be selected to be triggered based on an operation instruction of a user, which has better applicability.
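The duration-based second preset condition can be sketched as follows; the sample format, a list of timestamped media identifiers produced by the first acquisition device, is an illustrative assumption:

```python
def second_condition_met(samples, third_preset_time: float = 2.0) -> bool:
    """`samples` is a list of (timestamp_seconds, media_id) pairs collected
    by the first acquisition device; return True when the same media id has
    been observed continuously for at least `third_preset_time` seconds."""
    if len(samples) < 2:
        return False
    media_ids = {media_id for _, media_id in samples}
    duration = samples[-1][0] - samples[0][0]
    # One media id throughout, observed for long enough.
    return len(media_ids) == 1 and duration >= third_preset_time
```

Only when the captured environment information keeps corresponding to the same first media information for the full window does the sketch report that the second media information should be fetched.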
In the following, the manner of triggering the first part of information in the embodiment of the present application is described in detail. As described above, the first part of information may be identification information corresponding to the second media information; for example, the identification information may be a two-dimensional code identifier, a one-dimensional code identifier, or a website link identifier. For the case that the first part of information is identification information, triggering the first part of information about the second media information in the environment information may include: recognizing the identification information about the second media information in the environment information; and triggering the link corresponding to the identification information based on the recognition result. That is, the recognition module in the first acquisition device may perform the recognition of the identification information, so as to recognize a website or a storage address corresponding to the second media information, and then trigger the website or the storage address, thereby acquiring the second media information. For embodiments in which the first part of information includes a website link, the website link corresponding to the second media information in the environment information may be triggered; through this triggering operation, the corresponding second media information can be acquired. In the embodiment of the application, the first part of information may be displayed on the first media information, or displayed in an associated area outside the display area of the first media information.
For example, when the first media information is image information or video information, the first part of information may be displayed on a display area of the first media information, preferably on an area that does not affect the playing and displaying of the first media information, or the first part of information may be displayed on an associated area other than the display area of the first media information, such as an area adjacent to the periphery, so as to conveniently trigger the first part of information displayed thereon when the user acquires the first media information through the first acquisition device.
Further, in order to implement synchronous playing of the first media information and the second media information in the present application, the output progress of the second media information may be correspondingly output according to the progress of the first media information associated with the obtained environment information.
For example, in an embodiment of the present application, the current playing progress may be determined based on the first part of information, corresponding to the second media information, in the environment information acquired in real time, and the second media information is synchronized according to that playing progress, where the first part of information is updated in real time based on the playing progress of the first media information. That is, the first part of information in the present application may include the playing progress of the first media information and update its data in real time based on that progress; when the acquisition device recognizes the identifier of the first part of information, the second media information and the playing progress information can be acquired synchronously, so that second media information synchronized with the first media information is acquired and output. In another embodiment of the present application, the current playing progress of the first media information may also be determined based on the obtained second part of information. That is, the first acquisition device may acquire the second part of information, other than the first part of information, in the environment information, and match the second part of information against the playing progress stored in the database, thereby synchronizing the second media information according to that playing progress.
For example, when the first media information is video information, the second part of information may be sub-video information or image information; after the second part of information is obtained, the playing progress currently corresponding to it may be determined by using the correspondence, stored in the database, between progress and images/videos. Or, when the second media information is video information, the second part of information may be sub-audio information or recognized text information; after the second part of information is obtained, the playing progress currently corresponding to it may be determined by using the correspondence, stored in the database, between the sub-audio information/text information and progress, so as to obtain the playing progress of the first media information. Based on this configuration, the synchronous playing of the second media information and the first media information can be conveniently realized.
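The progress synchronization described above can be sketched as follows; the fingerprint-to-progress table is an illustrative stand-in for the database correspondence between captured fragments and playing progress:

```python
import hashlib

def fragment_key(fragment: bytes) -> str:
    """Reduce a captured fragment to a database lookup key (illustrative)."""
    return hashlib.sha256(fragment).hexdigest()[:16]

def synced_offset(fragment: bytes, progress_table: dict,
                  latency: float = 0.0):
    """Look up the first media's playing progress for a captured fragment
    and return the offset (in seconds) at which the second media should
    start, or None when the fragment is not found in the table."""
    progress = progress_table.get(fragment_key(fragment))
    if progress is None:
        return None
    return progress + latency  # compensate for lookup and startup delay

# Hypothetical correspondence: fragment fingerprint -> progress in seconds.
table = {fragment_key(b"frame-at-12s"): 12.0}
```

Adding a latency allowance lets the second media start slightly ahead of the looked-up progress so that output remains synchronized after the lookup delay.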
In summary, the embodiment of the present application can realize that when a user can only watch or listen to media information due to environmental restrictions, limitations of a playing source, or influences of other factors, audio or video information matched with the media information can be obtained through processing operations on the media information, thereby realizing associated playing of different media information in a matching manner, and having the characteristics of simplicity, convenience and better user experience; in addition, the user can selectively acquire and output the matched media information to the environment information which is interested by the user, and the applicability is better.
In addition, the embodiment of the application also provides an electronic device, which can apply the above information processing method to obtain second media information matched with the first media information associated with the environment information, thereby realizing combined listening and viewing of audio and video.
Fig. 4 is a schematic block diagram of an electronic device in an embodiment of the present application, where the electronic device in the embodiment of the present application may include a mobile phone, a computer, or a smart device wearable on a user, or the electronic device in the embodiment of the present application may also be configured as an augmented reality AR or a virtual reality VR or a mixed reality MR device.
Specifically, the electronic device in the embodiment of the present application may include: a first acquisition device 1, a processor 2, and a player 3. When the electronic device is a wearable electronic device, it may further comprise a holding means for holding the electronic device on at least a part of the body of the user, for example the holding means may be a helmet, a wrist band, a support for smart glasses, a holder or the like, or the electronic device may also be configured as a smart garment or the like.
Wherein the first collecting device 1 may be configured to obtain environment information, the environment information being associated with the first media information. For example, the environment information in the embodiment of the present application may be related to broadcast information or other wireless signals wirelessly transmitted from the outside, or may be related to video information played on a display device of the outside, or dynamic or static image information displayed, or may be related to media information such as audio information played on an external audio device, or the environment information in the embodiment of the present application may also be related to media information such as audio information, video information, and image information played by the electronic device itself. That is, the environment information in the embodiment of the present application may be media information played by the electronic device associated with the user itself, or may be media information played or transmitted in an external environment. In addition, in this embodiment, the associating of the environment information with the first media information may mean that the environment information includes the first media information and identification information matched with the first media information, or that the environment information includes identification information associated with the second media information and loaded in the media information.
The first acquisition device 1 may be constructed in different configurations for different types of environment information and first media information. For example, when the first media information is a video or an image, the first acquisition device 1 may be an image acquisition module such as a camera; when the first media information is audio, the first acquisition device 1 may be an audio acquisition module; or, when the environment information is a wirelessly transmitted signal, the first acquisition device 1 may be a wireless communication module. For different configuration requirements, the first acquisition device 1 in the embodiment of the present application may include at least one of the above configurations to complete the acquisition of the environment information.
The processor 2 may receive the environment information acquired by the first acquisition device 1, process the environment information, obtain second media information corresponding to a part of the environment information at least based on the processing result, and control the player 3 to output the second media information, where the second media information matches with the first media information in content.
The processor 2 in this embodiment of the application may process the environment information after the first acquisition device 1 detects the environment information, to obtain and output second media information matched with the first media information, where the second media information may also be at least one of the video information, the image information, and the audio information, but the second media information is different from the first media information.
For example, in one embodiment of the present application, the environment information may include first media information in the form of video information or image information displayed on a display screen, together with identification information presented on the first media information or in an associated area; or the environment information may consist of identification information loaded on the first media information. When the user focuses on the first media information, the processor can, by recognizing and triggering the identification information, acquire second media information in the form of audio information matched with the first media information. That is, in practical applications, even if the corresponding audio information is not played along with video information played outdoors or indoors, or the user cannot clearly hear the audio information due to environmental restrictions, a user who wants to listen can acquire, through the carried electronic device, the second media information matched with the first media information in the environment information, and then listen to the audio. With this configuration, the second media information corresponding to the first media information can be conveniently acquired and played, and the user can acquire it selectively according to personal needs or interests; that is, the acquisition and playing of the second media information is performed only when the user wants to listen to the corresponding audio information, giving a better user experience.
In another embodiment of the present application, the environment information may include first media information in the form of audio information played by an audio device or transmitted in space, together with identification information transmitted with or displayed around the first media information; or the environment information may consist of identification information associated with the first media information and transmitted or displayed with it. The processor 2 may, by recognizing and triggering the identification information, acquire second media information in the form of video information matched with the first media information. That is, in practical applications, even if the corresponding video information is not played along with audio information played outdoors or indoors, or the audio device cannot play video or image information due to its limitations, the user can acquire, through the carried electronic device, the second media information matched with the first media information in the environment information, and display and view the video or image information. With this configuration, the second media information corresponding to the first media information can be conveniently acquired and played, and the user can acquire it selectively according to personal needs or interests; that is, the acquisition and playing of the second media information is performed only when the user wants to watch the corresponding video or image information, giving a better user experience.
Further, the user can obtain and play, through the electronic device, the second media information matched with the first media information; when second media information in audio form is played, the user can choose to listen through an earphone, avoiding disturbance to the surrounding environment or other users. On the other hand, for advertisement information in the form of videos or images, an advertiser can choose to play only the video or image information, while each user listens to the matched audio information through his or her own electronic device, which avoids noise pollution and interference among the different audio streams of multiple playing sources.
In addition, in other embodiments of the present application, the first media information and the second media information may take other forms, and the environment information may be any form of information loaded with the first media information. As long as second media information matched with the first media information can be obtained by processing the environment information, it falls within the embodiments of the present application, and details are not repeated herein.
In addition, in the embodiment of the present application, the matching of the second media information with the first media information includes: when the first media information is image information or video information, the second media information is the matched audio information; or when the first media information is audio information, the second media information is the matched image information or video information. That is, matched presentation of audio information and video/image information can be achieved.
In another embodiment, the matching of the second media information with the first media information may further include: the output progress of the second media information matches the output progress of the first media information; that is, not only the content but also the playing progress is matched, which improves the user experience.
The specific configuration of the embodiments of the present application will be described in detail below. The environment information of the embodiment of the present application as described above may be associated with the first media information or may also include the first media information. In the embodiment of the present application, the processing, by the processor 2, the environmental information obtained by the first acquisition device 1 may include: and acquiring a first part of information in the environment information, and processing the first part of information to obtain the processing result.
The first part of information in the embodiment of the present application may be identification information corresponding to the second media information, for example a two-dimensional code identifier, a one-dimensional code identifier, or a website link. The processor 2 may control the recognition and triggering of the identification information so that second media information corresponding to the first media information can be obtained. For example, the electronic device may further include a second acquisition device 4 for recognizing and triggering the first part of information; the second acquisition device 4 may be a two-dimensional code recognition module, a one-dimensional code recognition module, or a website link triggering module. The processor 2 may perform the recognition and triggering of the identification information through the second acquisition device 4. The identification information is associated with the second media information: by recognizing the identification information through the second acquisition device 4, the website or storage address of the corresponding second media information is obtained, and the second media information is then acquired.
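A minimal sketch of this recognize-and-trigger flow is given below. The registry, the identifier format, and the decoding stub are all hypothetical stand-ins for a real two-dimensional-code decoder and a network fetch:

```python
from typing import Optional

# Hypothetical registry mapping a decoded identifier to the storage
# address of the matching second media information.
MEDIA_REGISTRY = {
    "qr:ad-campaign-042": "https://example.com/media/ad-042-audio",
}

def decode_identifier(captured_frame: str) -> str:
    """Stand-in for the second acquisition device's recognition module;
    a real implementation would decode a two-dimensional code from an
    image frame."""
    return captured_frame

def resolve_second_media(captured_frame: str) -> Optional[str]:
    """Recognize the identification information, then follow the link
    it carries to locate the second media information."""
    identifier = decode_identifier(captured_frame)
    return MEDIA_REGISTRY.get(identifier)
```

If the identifier is not known, the lookup returns nothing and no second media information is played.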
Alternatively, in another embodiment of the present application, the processing of the environmental information obtained by the first acquisition device 1 by the processor 2 may also include: acquiring a second part of information in the environment information, and processing the second part of information to obtain a processing result; wherein the first part of information and the second part of information are different.
As described above, the environment information in the embodiment of the present application may include the first media information. For example, when the first media information is a video, the second part of information may be part of the video data in the first media information, such as a video segment or a plurality of images captured from the video. Alternatively, when the first media information is audio, the second part of information may be part of the audio data in the first media information, such as an audio segment, or text information recognized from the audio.
Specifically, the data of the second media information may be stored in the electronic device, another electronic device in communication with the electronic device, a server, or a database in the cloud. The second part of information may be obtained by the first collecting device 1, and after obtaining the second part of information, the processor 2 may query the database and obtain the corresponding second media information based on the obtained second part of information.
For example, when the first media information is a video or an image, the processor 2 acquires, through the first acquisition device 1, sub-video information or image information (the second part of information) included in the environment information, and, based on the sub-video information or image information, acquires from the database the matched second media information, which may include audio information.
Alternatively, when the first media information is audio information, the processor 2 acquires, through the first acquisition device 1, sub-audio information included in the environment information, or text information recognized from the audio information (the second part of information), and, based on the sub-audio information or the recognized text information, acquires from the database the matched second media information, which includes video information or image information.
That is, the processor 2 in the embodiment of the present application may identify the second media information matched with the second part of information through image matching, video matching, text matching, or audio matching, and thereby obtain the second media information corresponding to the first media information. Different processing modes can be executed for different types of first media information to acquire the corresponding second media information, which is more flexible and simpler.
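The matching step can be sketched as a lookup of the captured fragment against a database of known first media information. A production system would use perceptual audio or image fingerprints; here a simple substring match stands in for that step, and all names are illustrative:

```python
from typing import Optional

# Hypothetical database keyed by recognizable content of the first
# media information, mapping to the matched second media information.
MEDIA_DATABASE = {
    "the quick brown fox documentary narration": "audio:nature-doc-track",
    "hello world advertising jingle": "video:ad-clip-07",
}

def match_fragment(fragment: str) -> Optional[str]:
    """Return the second media id whose stored key contains the
    captured fragment (the second part of information)."""
    for key, media_id in MEDIA_DATABASE.items():
        if fragment in key:
            return media_id
    return None
```

The database may live on the device, on another device, on a server, or in the cloud, as the surrounding text notes; only the lookup interface matters here.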
Further, in another embodiment of the present application, the processor 2 may acquire the environmental information through the first acquisition device when determining that the first preset condition is met, and trigger a first part of information about the second media information in the environmental information.
That is, in this embodiment of the present application, the condition for acquiring the environment information may be limited, and only when the first preset condition is satisfied, the processor 2 controls the first acquisition device 1 to execute the acquisition of the environment information, so as to process the first part of information in the environment information, or acquire the second media information based on the second part of information.
Specifically, in the embodiment of the present application, the processor 2 may determine that the first preset condition is satisfied from three angles: first, the acquisition of the environment information is performed when the spatial posture of the acquisition device satisfies a preset condition; second, when the behavior of the user satisfies a preset condition; and third, when a preset control instruction is received. Each is detailed below.
In this embodiment, the electronic device may include a detection module to obtain, in real time, the spatial parameters representing the posture of the first acquisition device. For example, the detection module may include a posture sensor provided in the first acquisition device 1; the posture sensor may be a gravity sensor, an acceleration sensor, a gyroscope, or another sensing device, through which the posture information of the first acquisition device, such as its current position and orientation, can be obtained in real time, so as to track the posture change of the first acquisition device 1. The processor 2 determines that the first preset condition is satisfied if the change rule of the spatial parameters detected by the detection module indicates that the posture change of the first acquisition device 1 conforms to a preset rule, or if the spatial parameters indicate that the first acquisition device 1 has been maintained in a first spatial posture for longer than a first preset time.
The posture change of the first acquisition device conforming to the preset rule may include: based on the acquired spatial position, orientation, and other information, the posture change of the first acquisition device is determined to be from bottom to top, and the rule is met when the height of the movement is greater than a preset height. That is, when the processor 2 determines that the first acquisition device 1 has moved from a lower first position to a higher second position, and the rise exceeds the preset height, it determines that the preset rule is conformed to, that is, the first preset condition is satisfied. In this way, the device can automatically recognize that the environment information needs to be acquired when the user lifts the first acquisition device, which is simple and convenient. Alternatively, the processor 2 determines that the preset rule is conformed to, that is, the first preset condition is satisfied, when the first acquisition device turns from a first direction to a second direction and the included angle between the two directions is greater than a preset angle. For example, when the user turns suddenly, it may be determined that the environment information needs to be acquired, and the environment information may then be collected or recognized automatically. The preset height and the preset angle can be configured according to requirements, and different ranges of values can be set in different embodiments. Alternatively, in another embodiment, the processor 2 determines that the first preset condition is satisfied when the first acquisition device 1 has been kept in the first spatial posture for longer than the first preset time.
That is, when the user needs to collect and recognize the environment information through the first acquisition device, the first acquisition device is usually held at a position corresponding to the environment information and is thus kept in the first spatial posture; if the time for maintaining this posture exceeds the first preset time, it can be determined that the environment information needs to be acquired, that is, the first preset condition is satisfied, and the acquisition operation of the environment information can then be performed. Here the judgment of the first preset condition is performed from the angle of the acquisition device: when the posture of the acquisition device conforms to the preset rule, it can be determined that the environment information needs to be acquired.
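The posture rules above (a bottom-to-top lift beyond a preset height, or a turn beyond a preset angle) can be sketched as follows; the thresholds and the sample format are illustrative assumptions, not values from the embodiment:

```python
def posture_meets_rule(samples, preset_height=0.3, preset_angle=45.0):
    """samples: chronological (height_m, heading_deg) readings from the
    posture sensor. Returns True when the device was lifted by more
    than preset_height metres or turned by more than preset_angle
    degrees between the first and last reading."""
    first_h, first_a = samples[0]
    last_h, last_a = samples[-1]
    lifted = (last_h - first_h) > preset_height
    turn = abs(last_a - first_a)
    turn = min(turn, 360.0 - turn)  # shortest angular difference
    return lifted or turn > preset_angle
```

The dwell-time variant (holding the first spatial posture longer than the first preset time) would be an analogous check on the timestamps of near-identical readings.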
In addition, the determining whether the first preset condition is satisfied from the perspective of the user may include:
A behavior parameter representing the user is obtained through the detection module; if the behavior parameter indicates that the user is paying attention to the environment information, or that the time spent paying attention to it exceeds a second preset time, the first preset condition is satisfied. The behavior parameters of the user may include the direction in which the user's eyes are looking, image parameters of the pupils of the user's eyes, and the like. Here, the detection module may include an image recognition module that receives the facial information of the user acquired by the first acquisition device 1 and performs an image recognition operation.
For example, the electronic device may acquire a facial image of the user through the first acquisition device 1, and the detection module determines, based on the facial image, whether the gaze direction of the user's eyes corresponds to the environment information; if so, the processor 2 determines that the user is interested in the environment information, that is, the first preset condition is satisfied, and the acquisition of the environment information can then be performed through the first acquisition device 1. Alternatively, the electronic device may determine that the first preset condition is satisfied when, based on the acquired facial image, it determines that the user is gazing at the environment information and further determines that the user's pupils are dilated, or that the user's eyes remain open throughout a fourth preset time. That is, because a user's pupils commonly dilate when paying attention to environment information, and because the eyes are often kept open without blinking while watching a video or an image, the present application may determine that the first preset condition is satisfied from the user's pupil-dilation state or eye-open state. Or, if the processor 2 determines that the time for which the user pays attention to the environment information exceeds the second preset time, it determines that the first preset condition is satisfied; that is, if the user has paid attention to a piece of environment information for at least the second preset time, the acquisition operation of the environment information may be performed.
Through the method, whether the acquisition operation of the environment information is executed or not can be determined through the behavior parameters of the user, and the method has the characteristics of being more intelligent and convenient.
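The behavior-based judgment can be sketched as below; the thresholds (second preset time, pupil-dilation ratio) are illustrative assumptions:

```python
def user_attends(gaze_on_target: bool, gaze_seconds: float,
                 pupil_ratio: float = 1.0,
                 second_preset_time: float = 2.0,
                 dilation_threshold: float = 1.2) -> bool:
    """First preset condition judged from the user's behavior: the gaze
    must fall on the environment information and either be held past
    the second preset time or coincide with pupil dilation
    (pupil_ratio is current/baseline pupil diameter)."""
    if not gaze_on_target:
        return False
    return (gaze_seconds >= second_preset_time
            or pupil_ratio >= dilation_threshold)
```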
In addition, the detection module in the embodiment of the present application may also obtain playing-parameter information of the electronic device, such as volume parameter information, noise-reduction-mode parameter information, or display parameter information. If the processor 2 determines that a change of a playing parameter indicates that the user is paying attention to the environment information, the first preset condition is satisfied. For example, when the first media information is audio information and the user needs the environment information to be acquired, the user may perform an adjustment operation of increasing the volume or a control operation of playing the audio; the processor 2 may then detect the change of the volume parameter, and if the change indicates that the volume is increased, it may be determined that the first preset condition is satisfied, that is, that the user is paying attention to the audio information and the environment information needs to be acquired through the first acquisition device. Alternatively, the user may input a control instruction for the noise-reduction mode; when the detection module obtains this instruction, the processor 2 determines that the playing parameter satisfies the first preset condition, that is, that the user is paying attention to the audio information and the first acquisition device needs to obtain the environment information. The user may also perform an adjustment operation of the display parameters to control the acquisition of the audio information.
For example, when the processor 2 detects an adjustment instruction of a display parameter, it determines that the user is focusing on the second media information corresponding to the audio information; the environment information may then be acquired through the first acquisition device and processed to acquire the second media information. The adjustment instruction of the display parameter includes at least one of a brightness adjustment instruction, a contrast adjustment instruction, and a display-interface size adjustment instruction.
In another embodiment, when the first media information is image information or video information, the user may likewise perform an adjustment operation of the playing parameters when the environment information needs to be acquired. For example, an adjustment operation of the display parameters may be performed so that the processor 2 obtains a display-parameter adjustment instruction; on acquiring this instruction, the processor 2 determines that the user is paying attention to the video information, and the environment information can then be acquired through the first acquisition device. Alternatively, the user may perform other adjustment operations of the playing parameters to control the acquisition of the audio information, for example a volume-up adjustment operation, a control operation instructing audio or video playback, or a control instruction for the noise-reduction mode. When the detection module acquires the change of the playing parameter, the processor 2 determines that the user is paying attention to the second media information corresponding to the video information; the first acquisition device then acquires the environment information so that it can be processed to acquire the second media information. Based on the above embodiment, the acquisition of the environment information can be triggered in real time and intelligently based on changes of the playing parameters of the electronic device.
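The playing-parameter judgment described in the last three paragraphs can be sketched as follows; the parameter names are hypothetical:

```python
def playback_change_signals_attention(old: dict, new: dict) -> bool:
    """Return True when a playing-parameter change suggests the user is
    paying attention to the environment information: volume raised,
    noise-reduction mode switched on, or a display parameter
    (brightness, contrast, interface size) adjusted."""
    if new.get("volume", 0) > old.get("volume", 0):
        return True
    if new.get("noise_reduction") and not old.get("noise_reduction"):
        return True
    for key in ("brightness", "contrast", "interface_scale"):
        if key in new and new.get(key) != old.get(key):
            return True
    return False
```

On any True result, the processor would instruct the first acquisition device to begin collecting the environment information.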
Further, in another embodiment of the present application, the processor 2 may also directly receive a first instruction, input by the user, for acquiring the environment information. For example, a control key may be disposed on the electronic device; the user inputs the first instruction by triggering the control key, and on receiving the first instruction the processor 2 acquires the environment information through the first acquisition device. The control key can be a mechanical key or a touch key. Alternatively, the electronic device may receive gesture information from the user; when the gesture matches a preset gesture, the first instruction is deemed received. In other embodiments, the input of the first instruction may be implemented in other manners, which is not limited herein.
Through the configuration, whether the judgment of the first preset condition is met or not can be realized, and when the first preset condition is met, the environmental information is acquired through the first acquisition device, so that the automatic control of the first acquisition device can be realized, and the requirements of users can be met. The first acquisition device in the embodiment of the application can comprise a camera device and an audio receiving device, and can be specifically selected and set according to requirements.
In addition, in another embodiment of the present application, the processing of the environment information by the processor 2 may include: acquiring environmental information through the first acquisition device; and if a second preset condition is met, triggering a first part of the environment information about the second media information through the second acquisition device 4.
In the embodiment of the present application, the first acquisition device 1 may obtain the environment information in real time, obtain it based on a received instruction, or, as described above, obtain it when the first preset condition is satisfied. In the process of acquiring the environment information, if a second preset condition is satisfied, the processor 2 controls the second acquisition device 4 to perform the recognition and triggering operation of the first part of information in the environment information so as to acquire the second media information, or obtains the second media information based on the second part of information. That is to say, in the present application, the triggering operation of the first part of information, or the comparison and recognition operation of the second part of information, is executed only when the second preset condition is satisfied; in other words, the acquisition of the second media information is performed only under the second preset condition, which avoids triggering the first part of information or matching the second part of information when the user does not need the second media information, which would harm the user experience.
Next, the second preset condition in the embodiment of the present application is described in detail. If, within a third preset time, the first acquisition device 1 continuously collects the part of the environment information corresponding to the second media information, the processor 2 determines that the second preset condition is satisfied. That is, in the process of acquiring the environment information through the first acquisition device 1, if the environment information acquired throughout the third preset time corresponds to the same part of the first media information, it can be determined that the user has been paying attention to the first media information, and the triggering operation of the first part of information or the recognition and matching operation of the second part of information can then be performed to obtain the second media information.
Alternatively, in another embodiment of the present application, if, during the acquisition of the environment information, the processor 2 obtains a second instruction instructing the triggering of the first part of information, the second preset condition is satisfied. That is, the processor 2 may receive a second instruction input by the user in real time. For example, a control key may be disposed on the electronic device; the user inputs the second instruction by triggering the control key, and on receiving the second instruction the electronic device may trigger the first part of information in the environment information, or perform a comparison and matching operation based on the second part of information. The control key can be a mechanical key or a touch key. Alternatively, the electronic device may receive gesture information from the user; when the gesture matches a preset gesture, the second instruction is deemed received. In other embodiments, the input of the second instruction may be implemented in other manners, which is not limited herein.
Based on the above, the first part of information may be selected to be triggered by whether the acquired environment information corresponds to the same first media information, or may be selected to be triggered based on an operation instruction of a user, which has better applicability.
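The dwell-based variant of the second preset condition can be sketched as below; the sample format and threshold are illustrative assumptions:

```python
def second_condition_met(samples, third_preset_time: float) -> bool:
    """samples: chronological (timestamp_s, media_id) observations from
    the first acquisition device. The condition holds when every
    observation refers to the same first media information and the
    observations span at least third_preset_time seconds."""
    if not samples:
        return False
    start_t, target = samples[0]
    if any(media_id != target for _, media_id in samples):
        return False
    return (samples[-1][0] - start_t) >= third_preset_time
```

The instruction-based variant would simply bypass this check when the user's second instruction is received.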
A manner of triggering the first part of information in the embodiment of the present application is now described in detail. As described above, the first part of information may be identification information corresponding to the second media information, for example a two-dimensional code identifier, a one-dimensional code identifier, or a website link identifier. When the first part of information is identification information, triggering, through the second acquisition device 4, the first part of information about the second media information in the environment information may include: recognizing the identification information about the second media information in the environment information, and triggering the link corresponding to the identification information based on the recognition result. That is, the recognition module in the second acquisition device may recognize the identification information so as to identify the website or storage address corresponding to the second media information, and then trigger that website or storage address, thereby acquiring the second media information. For embodiments in which the first part of information includes a website link, the website link in the environment information corresponding to the second media information may be triggered; through this triggering operation, the corresponding second media information can be acquired. In the embodiment of the present application, the first part of information may be displayed on the first media information or in an associated area outside the display area of the first media information.
For example, when the first media information is image information or video information, the first part of information may be displayed on a display area of the first media information, preferably on an area that does not affect the playing and displaying of the first media information, or the first part of information may be displayed on an associated area other than the display area of the first media information, such as an area adjacent to the periphery, so as to conveniently trigger the first part of information displayed thereon when the user acquires the first media information through the first acquisition device.
Further, in order to implement synchronous playing of the first media information and the second media information in the present application, the output progress of the second media information may be correspondingly output according to the progress of the first media information associated with the obtained environment information.
For example, in an embodiment of the present application, a current playing progress may be determined based on a first part of information, corresponding to second media information, of the environment information acquired in real time; and synchronizing the second media information according to the playing progress. Wherein the first part of information is updated in real time based on the playing progress of the first media information. That is, the first part of information in the present application may include a playing progress of the first media information, and update its data in real time based on the playing progress. When the first acquisition device identifies the identifier of the first part of information, the second media information and the playing progress information can be synchronously acquired, so that the second media information synchronized with the first media information is acquired and output. In another embodiment of the present application, a current playing progress of the first media information may also be determined based on the obtained second part of information. That is, the first acquisition device may acquire a second part of information other than the first part of information in the environment information, and match the playing schedule in the database with the second part of information, thereby synchronizing the second media information according to the playing schedule.
For example, when the first media information is video information, the second part of information may be sub-video information or image information; after the second part of information is obtained, the playing progress it currently corresponds to may be determined from the correspondence between progress and images/videos stored in the database. Alternatively, when the second media information is video information, the second part of information may be sub-audio information or recognized text information; after the second part of information is obtained, the playing progress it currently corresponds to may be determined from the correspondence between the sub-audio/text information stored in the database and the progress, thereby obtaining the playing progress of the first media information. With this configuration, synchronous playing of the second media information and the first media information can be conveniently achieved.
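The database-matching branch can be sketched similarly. The fragment table and the matching strategy (exact lookup of recognized text) are simplified assumptions for illustration; a real system would use audio or image fingerprint matching rather than literal string lookup.

```python
# Sketch of the database branch: the second part of information (here,
# a recognized text fragment of the first media) is looked up in a table
# mapping fragments to playing progress. Table contents are illustrative.

progress_db = {
    # recognized fragment               -> progress in seconds
    "to be or not to be": 612.0,
    "all the world's a stage": 1245.5,
}

def lookup_progress(fragment: str):
    """Return the playing progress whose stored fragment matches the
    captured second part of information (exact match in this sketch)."""
    return progress_db.get(fragment.strip().lower())

def sync_second_media(fragment: str):
    """Seek the second media to the matched progress, or keep acquiring
    environment information when no match is found."""
    pos = lookup_progress(fragment)
    if pos is None:
        return None  # no match yet: continue acquisition
    return pos       # seek second media to this progress
```

The same lookup shape applies when the second part of information is sub-video or sub-audio: only the key type (image/audio fingerprint instead of text) changes, not the progress-correspondence structure.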
In summary, when a user can only watch or only listen to media information because of environmental restrictions, limitations of the playing source, or other factors, the embodiments of the present application can obtain audio or video information matched with that media information through processing operations on it, thereby playing different media information in association with each other. This approach is simple and convenient and improves the user experience; in addition, the user can selectively acquire and output matched media information only for the environment information of interest, giving better applicability.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, for the electronic device to which the above information processing method is applied, reference may be made to the corresponding description in the foregoing product embodiments; details are not repeated herein.
The above embodiments are only exemplary embodiments of the present application and are not intended to limit it; the protection scope of the present application is defined by the claims. Those skilled in the art may make various modifications and equivalents within the spirit and scope of the present application, and such modifications and equivalents should also be considered to fall within its protection scope.
Claims (10)
1. An information processing method comprising:
processing environment information obtained through a first acquisition device, wherein the environment information is associated with first media information played by external display equipment or electronic equipment;
obtaining second media information corresponding to a portion of the environment information based at least on a processing result;
and outputting the second media information, wherein the second media information is matched with the first media information in content.
2. The method of claim 1, wherein processing the environmental information obtained by the first acquisition device comprises at least one of:
acquiring a first part of information in the environment information;
processing the first part of information to obtain the processing result;
or acquiring a second part of information in the environment information;
processing the second part of information to obtain the processing result;
wherein the first part of information and the second part of information are different;
wherein the first part of information includes identification information corresponding to the second media information; and the second part of information comprises at least one of video information, image information, text information and audio information corresponding to the first media information.
3. The method of claim 1, wherein processing environmental information obtained by the first acquisition device comprises:
if the first preset condition is met, acquiring the environmental information through a first acquisition device;
triggering a first part of information about second media information in the environment information;
wherein satisfying the first preset condition includes at least one of the following:
obtaining a spatial parameter representing the posture of the first acquisition device, and if the change rule of the spatial parameter represents that the posture change of the first acquisition device conforms to a preset rule, or if the spatial parameter indicates that the time for which the first acquisition device is kept in a first spatial posture exceeds a first preset time, the first preset condition is met; or
acquiring a behavior parameter representing a user, and if the behavior parameter indicates that the user pays attention to the environment information or the time for paying attention to the environment information exceeds a second preset time, the first preset condition is met; or
obtaining a playing parameter, and if a change of the playing parameter represents that the user pays attention to the environment information, the first preset condition is met; or
if a first instruction for instructing acquisition of the environment information is obtained, the first preset condition is met.
4. The method of claim 1, wherein processing environmental information obtained by the first acquisition device comprises:
acquiring environmental information through the first acquisition device;
if a second preset condition is met, triggering a first part of information about second media information in the environment information;
wherein satisfying the second preset condition includes at least one of:
if partial information corresponding to the first media information in the environment information is continuously acquired within a third preset time, the second preset condition is met; or
if a second instruction for indicating triggering of the first part of information is obtained, the second preset condition is met.
5. The method of claim 3 or 4, wherein the triggering of the first part of information about the second media information in the environment information comprises one of:
identifying identification information about the second media information in the environment information; triggering the link corresponding to the identification information based on the identification result, wherein the first part of information comprises the identification information; or
triggering a website link corresponding to the second media information in the environment information, wherein the first part of information comprises the website link;
the first part of information is displayed on the first media information or displayed in an associated area outside a display area of the first media information.
6. The method of claim 1, wherein the second media information content matching the first media information comprises:
when the first media information is image information or video information, the second media information is matched audio information; or when the first media information is audio information, the second media information is matched image information or video information; and/or
the output progress of the first media information and the output progress of the second media information are matched.
7. The method of claim 1 or 6, wherein,
the first media information is from a first source address, and the first source address is a playing address of a first source file, wherein the first source file comprises first media information and second media information;
wherein, the processing the environmental information obtained by the first acquisition device comprises:
acquiring a first source address corresponding to the first media information in the environment information, so as to obtain the second media information based on the first source address.
8. The method of claim 1 or 6, wherein the outputting the second media information comprises:
determining the current playing progress based on first part information corresponding to second media information in the environment information acquired in real time; the first part of information is updated in real time based on the playing progress of the first media information;
synchronizing the second media information according to the playing progress; or
acquiring a second part of information, other than the first part of information, in the environment information;
matching the playing progress matched with the second part of information in a database;
and synchronizing the second media information according to the playing progress.
9. An electronic device, comprising:
the first acquisition device is used for acquiring environment information, and the environment information is associated with first media information played by external display equipment or electronic equipment;
and the processor is configured to process the environment information obtained by the first acquisition device, obtain second media information corresponding to a part of the environment information at least based on a processing result, and control output of the second media information, wherein the second media information is matched with the first media information in content.
10. The electronic device of claim 9, further comprising:
a retaining device for relatively securing the electronic device to at least a portion of a user's body.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810297237.9A CN108600797B (en) | 2018-03-30 | 2018-03-30 | Information processing method and electronic equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN108600797A CN108600797A (en) | 2018-09-28 |
| CN108600797B true CN108600797B (en) | 2021-02-19 |
Family
ID=63624404
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810297237.9A Active CN108600797B (en) | 2018-03-30 | 2018-03-30 | Information processing method and electronic equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN108600797B (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110738512A (en) * | 2019-09-10 | 2020-01-31 | 深圳市元征科技股份有限公司 | multimedia advertisement putting method and device, server and storage medium |
| CN115695711B (en) * | 2022-11-01 | 2025-10-28 | 联想(北京)有限公司 | A processing method and collection device |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103188564A (en) * | 2011-12-28 | 2013-07-03 | 联想(北京)有限公司 | Electronic equipment and information processing method thereof |
| CN103873935A (en) * | 2012-12-17 | 2014-06-18 | 联想(北京)有限公司 | Data processing method and device |
| CN104378576A (en) * | 2013-08-15 | 2015-02-25 | 联想(北京)有限公司 | Information processing method and electronic equipment |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7213254B2 (en) * | 2000-04-07 | 2007-05-01 | Koplar Interactive Systems International Llc | Universal methods and device for hand-held promotional opportunities |
| US9237377B2 (en) * | 2011-07-06 | 2016-01-12 | Symphony Advanced Media | Media content synchronized advertising platform apparatuses and systems |
| US8966525B2 (en) * | 2011-11-08 | 2015-02-24 | Verizon Patent And Licensing Inc. | Contextual information between television and user device |
| US10432996B2 (en) * | 2014-11-07 | 2019-10-01 | Kube-It Inc. | Matching data objects to video content |
- 2018-03-30: Application CN201810297237.9A filed in China; granted as CN108600797B (status: Active)
Also Published As
| Publication number | Publication date |
|---|---|
| CN108600797A (en) | 2018-09-28 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111079012B (en) | Live broadcast room recommendation method and device, storage medium and terminal | |
| CN109600678B (en) | Information display method, device and system, server, terminal and storage medium | |
| CN110572711B (en) | Video cover generation method and device, computer equipment and storage medium | |
| US9491401B2 (en) | Video call method and electronic device supporting the method | |
| EP2899618A1 (en) | Control device and recording medium | |
| CN113613028B (en) | Live broadcast data processing method, device, terminal, server and storage medium | |
| CN110557683B (en) | Video playing control method and electronic equipment | |
| CN109729372B (en) | Live broadcast room switching method, device, terminal, server and storage medium | |
| KR102238330B1 (en) | Display device and operating method thereof | |
| CN110163066B (en) | Multimedia data recommendation method, device and storage medium | |
| CN108712603B (en) | Image processing method and mobile terminal | |
| CN111432245B (en) | Multimedia information playing control method, device, equipment and storage medium | |
| CN110650379A (en) | Video abstract generation method and device, electronic equipment and storage medium | |
| CN110933468A (en) | Playing method, playing device, electronic equipment and medium | |
| CN107786827A (en) | Video capture method, video broadcasting method, device and mobile terminal | |
| CN110958465A (en) | Video stream pushing method and device and storage medium | |
| US12337226B2 (en) | Home training service providing method and display device performing same | |
| CN111836069A (en) | Virtual gift presenting method, device, terminal, server and storage medium | |
| CN108174109B (en) | A kind of photographing method and mobile terminal | |
| KR20160014513A (en) | Mobile device and method for pairing with electric device | |
| CN113038165B (en) | Method, apparatus and storage medium for determining encoding parameter set | |
| CN108012026A (en) | One kind protection eyesight method and mobile terminal | |
| CN109982129B (en) | Short video playing control method and device and storage medium | |
| CN108600797B (en) | Information processing method and electronic equipment | |
| JPWO2017104089A1 (en) | Head-mounted display cooperative display system, system including display device and head-mounted display, and display device therefor |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||