Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application provides a video playing method and device, electronic equipment and a storage medium.
The video playing device may be specifically integrated in a terminal, and the terminal may be a smart television, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, or the like, but is not limited thereto. The terminal can be directly or indirectly connected with the server in a wired or wireless communication mode, where the server can be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a CDN (content delivery network), big data, and an artificial intelligence platform.
For example, referring to fig. 1a, the video playing apparatus is integrated in a smart television. When the smart television receives a video playing operation (which may be triggered by a user) via a playing device, the smart television obtains a target video, reports a video identifier of the target video to a server, and receives a play mode corresponding to the target video returned by the server according to the video identifier. The smart television then performs content identification on the target video to obtain a content identification result of the target video, and finally plays the target video based on the content identification result and the play mode.
According to this video playing method, the target video is played based on the content identification result and the play mode that the server returns according to the video identifier, so the user does not need to perform complicated operations during video playing, which improves the user's viewing experience.
The following are detailed below. It should be noted that the description sequence of the following embodiments is not intended to limit the priority sequence of the embodiments.
A video playing method, comprising: obtaining a target video; reporting a video identifier of the target video to a server; receiving a play mode corresponding to the target video returned by the server according to the video identifier; performing content identification on the target video to obtain a content identification result of the target video; and playing the target video based on the content identification result and the play mode.
Referring to fig. 1b, fig. 1b is a schematic flow chart of a video playing method provided in the present application. The specific flow of the video playing method can be as follows:
101. Acquiring a target video.
For example, the target video may be obtained according to a video playing operation, where the video playing operation may be triggered by a user on the playing device or by the playing device itself. For example, a playing control is displayed on the display screen of the playing device, and the user may trigger the video playing operation by clicking the playing control; for another example, the playing device may trigger the video playing operation according to a preset policy, e.g., the playing device plays a certain video within a preset time period, that is, the playing device triggers the video playing operation within the preset time period.
Video generally refers to various technologies for capturing, recording, processing, storing, transmitting, and reproducing a series of still images as electrical signals. When continuous images change at more than 24 frames per second, the human eye cannot distinguish a single static picture, in accordance with the persistence-of-vision principle, and the images appear as a smooth, continuous visual effect; such continuous pictures are called a video. In the present application, the video corresponding to the video playing operation is determined as the target video.
102. Reporting the video identifier of the target video to a server, and receiving a play mode corresponding to the target video returned by the server according to the video identifier.
The video identifier is used to identify the video and may include the name of the target video (such as the name of a TV show, a movie, or a variety program), the number of the target video in the server of the video client, the serial number of the target video in a video list, and the like.
For example, the video platform may add an identifier to its own videos so that the corresponding video can subsequently be determined according to the video identifier. Specifically, the video attribute information may include the identifier of the product to which the video belongs, the video identifier of the specific video, the location of the video in the video list to which it belongs, the video type of the video, and the like.
In the present application, the video identifier of the target video is reported to the server, and the server returns the play mode corresponding to the target video according to the video identifier, which reduces local overhead on the terminal. The server may obtain the play mode recommended for the target video through statistical analysis of data, where the play mode may include image parameters, sound effect parameters, language parameters, and the like used when the target video is played.
Optionally, in some embodiments, the server may determine, through statistical analysis of data, the play mode most frequently selected by users when viewing the target video, so as to obtain the play mode recommended for the target video. For another example, the server may determine, through statistical analysis, the number of selections corresponding to each play mode in which the target video is viewed, then obtain a user portrait corresponding to the user viewing the target video, and determine the play mode corresponding to the target video based on the numbers of selections and the user portrait. That is, optionally, in some embodiments, the step "reporting the video identifier of the target video to the server, and receiving the play mode corresponding to the target video returned by the server according to the video identifier" may specifically include:
(11) determining target user account information corresponding to a target video;
(12) constructing a user portrait of the target user account information according to the historical browsing data of the target user account information;
(13) and reporting the user portrait and the video identifier of the target video to a server, and receiving video image parameters and video sound effect parameters corresponding to the target video returned by the server according to the user portrait and the video identifier.
The target user account information is the account information of the user currently logged in to the video application. The historical browsing data includes videos browsed by the user account in a historical time period (hereinafter referred to as historical browsing videos); the historical browsing videos include watched videos and marked videos, where the watched videos are videos the user watched in the historical time period, and the marked videos are videos the user marked in the historical time period, such as videos the user has collected.
Specifically, a user portrait of the target user account information may be constructed according to the user's watched videos and marked videos. A user portrait is a tagged user model abstracted from information such as user attributes, user preferences, living habits, and user behaviors. Colloquially, the user is labeled, and each label is a highly refined characteristic mark obtained by analyzing user information. Through labels, a user may be described with highly generalized, easily understood features that make it easier for people to understand the user and facilitate computer processing. After the user portrait is constructed, the user portrait and the video identifier of the target video are reported to the server, and the video image parameters and video sound effect parameters corresponding to the target video, returned by the server according to the user portrait and the video identifier, are received.
Optionally, in some embodiments, historical browsing data of the target user account information may be acquired, historical browsing videos meeting a preset condition may be selected from the historical browsing data, and the user portrait of the target user account information may be constructed according to the selected historical browsing videos. For example, the preset condition may be set as: videos browsed in the last 7 days; it may also be set as: the video carries a label containing a specified keyword (such as "action"). The preset condition may be set according to the specific situation, which is not described herein again.
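The selection-and-portrait step above can be sketched as follows; this is a minimal illustration, assuming hypothetical record fields (`viewed_at`, `labels`) that are not specified by the method:

```python
from datetime import datetime, timedelta

def build_user_portrait(history, now, days=7, keyword=None):
    """Filter historical browsing records by the preset conditions from the
    text (browsed within the last `days` days, optionally carrying a label
    with the given keyword), then build a tag-frequency portrait."""
    cutoff = now - timedelta(days=days)
    selected = [
        v for v in history
        if v["viewed_at"] >= cutoff
        and (keyword is None or keyword in v["labels"])
    ]
    # The portrait here is simply the label frequency of the selected videos.
    portrait = {}
    for v in selected:
        for label in v["labels"]:
            portrait[label] = portrait.get(label, 0) + 1
    return portrait
```

The frequency map stands in for the "tagged user model" described above; a production system would use richer attributes.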
103. Performing content identification on the target video to obtain a content identification result of the target video.
In the identification process, the target video may be segmented into a plurality of video segments, so that the content of the target video is identified based on the video segments. The target video may be segmented according to video playing time; for example, segmenting the target video every 5 s of playing time yields a plurality of video segments each 5 s long. The target video may also be segmented according to the number of video frames; for example, segmenting the target video every 240 frames yields a plurality of video segments each containing 240 frames of video images. In practical applications, the manner of video segmentation and the granularity of video segmentation (the playing duration of a video segment or the number of video frames it contains) may be set according to the practical application scene, and are not limited herein.
Performing video segmentation on the target video is equivalent to finely dividing the target video content, so that the content granularity on which identification is based is smaller, providing a data basis for subsequently identifying the target video content based on the video segments. Next, a convolutional neural network may be used to perform content recognition on each video segment to obtain the content recognition result of the target video, and then step 104 is performed.
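The two segmentation granularities described above (by playing duration or by frame count) can be sketched as follows; the 5 s and 240-frame values are the examples from the text, not fixed by the method:

```python
def segment_by_duration(total_duration_s, segment_s=5):
    """Split a video of total_duration_s seconds into (start, end) spans."""
    segments = []
    start = 0
    while start < total_duration_s:
        end = min(start + segment_s, total_duration_s)
        segments.append((start, end))
        start = end
    return segments

def segment_by_frames(total_frames, frames_per_segment=240):
    """Split a video into (first_frame, last_frame_exclusive) spans."""
    return [
        (i, min(i + frames_per_segment, total_frames))
        for i in range(0, total_frames, frames_per_segment)
    ]
```

Either list of spans can then be fed, segment by segment, to the content-recognition step.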
104. Playing the target video based on the content identification result and the play mode.
The playing mode indicates the video image parameters and video sound effect parameters used when the target video is played. The video image parameters may include image resolution, image size, and image color. The image resolution refers to the number of pixels contained in a unit length of the image in the width and height directions (in units of pixels per inch or pixels per centimeter). For the same image, the higher the resolution, the more detailed the description of the image and the larger the required data amount; the lower the resolution, the coarser the image and the smaller the data amount. The image size is the total number of pixels contained in the entire image, expressed as the product of the pixels in the width direction and the pixels in the height direction; the size of multimedia image material generally does not exceed the size of the presentation window. The image color is the number of colors contained in the image, which is related to the number of bits used to describe each color; the former is called the color depth, the latter the bit depth, and their relationship is color depth = 2^(bit depth). The lower the bit depth of the image, the smaller the data amount and the lower the display quality; the higher the bit depth, the larger the data amount and the higher the display quality. A sound effect is an effect produced by sound, i.e., noise or sound added to the soundtrack to enhance the realism, atmosphere, or dramatic message of a scene; sound includes musical tones and sound effects. In this application, the video sound effect refers to the ambient sound effect, which mainly processes sound through a digital audio processor so that the sound has different spatial characteristics, for example those of a hall, opera house, cinema, karst cave, or stadium.
The ambient sound effect is mainly realized by processing the sound through ambient filtering, ambient displacement, ambient reflection, ambient transition and the like, so that a listener feels like being in different environments.
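The color-depth relation noted above, color depth = 2^(bit depth), can be checked with a one-line computation:

```python
def color_count(bit_depth):
    """Number of distinct colors an image with the given bit depth can encode."""
    return 2 ** bit_depth
```

For example, an 8-bit image encodes 256 colors and a 24-bit image encodes 16,777,216 colors.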
It can be understood that, the video image parameters and the video sound effect parameters corresponding to the target video in the play mode may be determined, and then, based on the content recognition result, the video image parameters, and the video sound effect parameters, the target video is played, that is, optionally, in some embodiments, the step "playing the target video based on the content recognition result and the play mode" may specifically include:
(21) determining video image parameters and video sound effect parameters corresponding to a target video in a play mode;
(22) and playing the target video based on the content identification result, the video image parameters and the video sound effect parameters.
After the video image parameters and the video sound effect parameters are determined, they can be adjusted according to the content identification result, for example, by adjusting the image color of the target video and the playback volume, so that the adjusted image color and sound effect better match the target video.
However, it should be noted that because image colors are varied, even for the same target video the corresponding image colors differ under different policies. To reduce the amount of calculation, the video sound effect parameters may be adjusted based on the content recognition result, and the target video played with the video image parameters and the adjusted sound effect. That is, optionally, in some embodiments, the step "playing the target video based on the content identification result, the video image parameters, and the video sound effect parameters" may specifically include:
(31) adjusting the video sound effect parameters based on the content identification result to obtain an adjusted sound effect;
(32) and playing the target video according to the video image parameters and the adjusted sound effect.
It should be further noted that, during video playing, the video may stall because of network jitter. Therefore, the network transmission speed may be detected, and when the network transmission speed is detected to be less than a preset value, the video image parameters may be reduced to reduce the amount of data transmitted, thereby alleviating stalls caused by network jitter. That is, optionally, in some embodiments, the step "playing the target video according to the video image parameters and the adjusted sound effect" may specifically include:
(41) detecting the current network transmission speed;
(42) when the network transmission speed is detected to be smaller than a preset value, reducing video image parameters;
(43) and playing the video image corresponding to the target video with the reduced video image parameters, and playing the audio corresponding to the target video with the adjusted sound effect.
For example, the preset value is 100 kb/s. When the network transmission speed is detected to be less than 100 kb/s, the image resolution in the video image parameters is reduced to 300 PPI; finally, the video image corresponding to the target video is played with the reduced video image parameters, and the audio corresponding to the target video is played with the adjusted sound effect. The preset value can be set according to the actual situation, which is not described herein again.
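Steps (41)-(43) can be sketched as follows; the 100 kb/s threshold and 300 PPI reduced resolution are the example values from the text, and the function name is a hypothetical illustration:

```python
PRESET_SPEED_KBPS = 100       # preset value from the example: 100 kb/s
REDUCED_RESOLUTION_PPI = 300  # reduced image resolution from the example

def select_image_resolution(speed_kbps, normal_resolution_ppi):
    """Return the resolution to play at: when the detected network
    transmission speed falls below the preset value, cap the resolution
    to reduce the amount of data transmitted; otherwise keep it."""
    if speed_kbps < PRESET_SPEED_KBPS:
        return min(normal_resolution_ppi, REDUCED_RESOLUTION_PPI)
    return normal_resolution_ppi
```

A real player would re-check the speed periodically and adjust other image parameters (size, color) in the same way.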
In addition, in the present application, there are two types of adjustment of video sound effect parameters based on the content recognition result:
1) adjusting the video sound effect of the whole target video based on the content identification result, that is, optionally, in some embodiments, the step "adjusting the video sound effect parameter based on the content identification result to obtain the adjusted sound effect" may specifically include:
(51) determining the video type of the target video according to the content identification result;
(52) and adjusting the video sound effect parameters based on the video type to obtain the adjusted sound effect.
For example, the target video is determined to be a suspense-type video based on the content recognition result; then the video sound effect parameters may be adjusted so that the adjusted sound effect better matches the suspense-type target video, and finally the target video is played according to the video image parameters and the adjusted sound effect.
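The whole-video adjustment in 1) can be sketched as a lookup from the recognized video type to a sound-effect preset; the type names and preset values here are illustrative assumptions, not part of the method:

```python
# Hypothetical mapping from recognized video type to a sound-effect preset.
SOUND_EFFECT_BY_TYPE = {
    "suspense": {"reverb": "hall", "bass_boost_db": 3},
    "thriller": {"reverb": "cinema", "bass_boost_db": 6},
}
DEFAULT_EFFECT = {"reverb": "none", "bass_boost_db": 0}

def adjust_sound_effect(video_type):
    """Return the sound-effect preset for the recognized video type,
    falling back to a neutral default for unknown types."""
    return SOUND_EFFECT_BY_TYPE.get(video_type, DEFAULT_EFFECT)
```
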
2) Based on the content recognition result, the target video is segmented, and the video audio effect parameter is adjusted according to the video content of the video clip, that is, optionally, in some embodiments, the step "adjusting the video audio effect parameter based on the content recognition result to obtain the adjusted sound effect" may specifically include:
(61) segmenting the target video based on the content identification result to obtain a video segment corresponding to the target video;
(62) and detecting the video content of the video clip, and adjusting the video sound effect parameters based on the detected video content to obtain the adjusted sound effect.
For example, the target video is segmented based on the content recognition result to obtain video clip A, video clip B, and video clip C, where the content types of video clip A and video clip B are both the suspense type and the content type of video clip C is the thriller type. The video sound effect parameters of video clip A and video clip B can therefore be adjusted to the video sound effect parameters corresponding to the suspense type, and the video sound effect parameters of video clip C adjusted to the video sound effect parameters corresponding to the thriller type.
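The per-clip adjustment in 2) can be sketched as follows; the clip names, content types, and preset names are illustrative assumptions:

```python
# Hypothetical presets keyed by the content type detected for each clip.
PRESETS = {"suspense": "hall", "thriller": "cinema"}

def adjust_per_clip(clips):
    """clips: list of (clip_name, content_type) pairs from the content
    recognition result. Returns clip_name -> sound-effect preset, so each
    clip is played with the preset matching its own content."""
    return {name: PRESETS.get(ctype, "none") for name, ctype in clips}
```
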
It should be further noted that, for a target video with multiple languages, the language used when the target video is played may be determined according to the user portrait. For example, the language is determined to be Chinese according to the user portrait, but when the language information of the target video indicates that the target video does not contain Chinese, the language used for playing is determined to be English. For another example, when the language information of the target video indicates that the target video does not contain Chinese, prompt information is generated and a language selection control is displayed, so that the user can select a language according to actual requirements.
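The language-selection logic described above can be sketched as follows; the function and its fallback behavior are an illustrative assumption:

```python
def choose_language(preferred, available, fallback="english"):
    """Pick the playback language: prefer the language suggested by the
    user portrait; if the video does not carry it, fall back to another
    available language; if neither is available, return None so the
    caller can prompt the user with a language-selection control."""
    if preferred in available:
        return preferred
    if fallback in available:
        return fallback
    return None
```
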
In addition, when the target video is played or after the target video is played, the target video may be switched, that is, optionally, in some embodiments, after the step "playing the target video based on the content identification result and the play mode", the method may specifically include:
(71) when a video switching operation triggered for the target video is detected, determining a switching video corresponding to the video switching operation;
(72) and acquiring video playing parameters corresponding to the target video, and switching the target video to the switching video for playing based on the playing parameters.
The video switching operation may be triggered by a user or by a system, for example, when the user watches a target video, the video switching operation is triggered by a remote controller or a touch display screen; or, after the target video is played, the system automatically switches to the next video, so as to trigger the video switching operation, and please refer to the foregoing embodiment for the way of switching the target video to the switched video based on the playing parameters, which is not described herein again.
After the target video is played, the playing record of the target video can be reported to the server, so that big-data analysis can be used to count the image and sound effect modes most frequently selected by users viewing the video, thereby obtaining the image and sound effect modes recommended for the target video.
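The server-side statistic described above can be sketched with a simple frequency count; the record format (pairs of image mode and sound-effect mode) is a hypothetical assumption:

```python
from collections import Counter

def recommend_mode(play_records):
    """play_records: list of (image_mode, sound_mode) tuples reported by
    terminals. Returns the most frequently selected combination, or None
    when no records have been reported yet."""
    if not play_records:
        return None
    return Counter(play_records).most_common(1)[0][0]
```
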
Therefore, after the target video is obtained, the video identifier of the target video is reported to the server, and the play mode corresponding to the target video returned by the server according to the video identifier is received; then content identification is performed on the target video to obtain a content identification result, and finally the target video is played based on the content identification result and the play mode, so that the user does not need to perform complicated operations during video playing, which improves the viewing experience.
In order to better implement the video playing method provided by the embodiments of the present application, an embodiment further provides a video playing device. The meanings of the terms are the same as those in the video playing method above, and specific implementation details can be found in the description of the method embodiments.
The video playing apparatus may be specifically integrated in an electronic device, as shown in fig. 2a, and the video playing apparatus may include: the acquisition module 201, the reporting module 202, the identification module 203 and the playing module 204 are as follows:
an obtaining module 201, configured to obtain a target video.
For example, the obtaining module 201 may obtain the target video according to a video playing operation, where the video playing operation may be triggered by a user on the playing device or by the playing device itself. For example, a playing control is displayed on the display screen of the playing device, and the user may trigger the video playing operation by clicking the playing control; for another example, the playing device may trigger the video playing operation according to a preset policy, e.g., the playing device plays a certain video within a preset time period, that is, the playing device triggers the video playing operation within the preset time period.
The reporting module 202 is configured to report the video identifier of the target video to the server, and receive a play mode corresponding to the target video returned by the server according to the video identifier.
The reporting module 202 reports the video identifier of the target video to the server, and the server returns the play mode corresponding to the target video according to the video identifier, which reduces local overhead. The server may obtain the play mode recommended for the target video through statistical analysis of data, where the play mode may include image parameters, sound effect parameters, language parameters, and the like used when the target video is played.
The server can determine, through statistical analysis of data, the play mode most frequently selected by users when watching the target video, so as to obtain the recommended play mode for the target video. For another example, the server may determine the number of selections corresponding to each play mode in which the target video is viewed, then obtain a user portrait corresponding to the user viewing the target video, and determine the play mode corresponding to the target video based on the numbers of selections and the user portrait, where the user portrait is a tagged user model abstracted from information such as user attributes, user preferences, living habits, and user behaviors. Colloquially, the user is labeled, and each label is a highly refined characteristic mark obtained by analyzing user information. Through labels, a user may be described with highly generalized, easily understood features that make it easier for people to understand the user and facilitate computer processing.
Optionally, in some embodiments, the reporting module 202 may specifically include:
the first determining unit is used for determining target user account information corresponding to a target video;
the system comprises a construction unit, a display unit and a display unit, wherein the construction unit is used for constructing a user portrait of target user account information according to historical browsing data of the target user account information;
and a reporting unit, configured to report the user portrait and the video identifier of the target video to the server, and receive the video image parameters and the video sound effect parameters corresponding to the target video returned by the server according to the user portrait and the video identifier.
Further, in some embodiments, the construction unit is specifically configured to: acquiring historical browsing data of the account information of the target user; selecting a historical browsing video meeting a preset condition from historical browsing data; and constructing a user portrait of the target user account information according to the selected historical browsing video.
Optionally, in some embodiments, the reporting module 202 is further specifically configured to: and reporting the playing record of the playing target video to a server.
The identifying module 203 is configured to perform content identification on the target video to obtain a content identification result of the target video.
In the identification process, the identification module 203 may first segment the target video into a plurality of video segments, so that the content of the target video is identified based on the video segments. The target video may be segmented according to video playing time; for example, segmenting the target video every 5 s of playing time yields a plurality of video segments each 5 s long. The target video may also be segmented according to the number of video frames; for example, segmenting the target video every 240 frames yields a plurality of video segments each containing 240 frames of video images. In practical applications, the identification module 203 may set the manner of video segmentation and the granularity of video segmentation (the playing duration of a video segment or the number of video frames it contains) according to the practical application scene, which is not limited herein.
Performing video segmentation on the target video is equivalent to finely dividing the target video content, so that the content granularity on which identification is based is smaller, providing a data basis for subsequently identifying the target video content based on the video segments. Next, the identifying module 203 may perform content identification on each video segment by using a convolutional neural network to obtain the content identification result of the target video.
The playing module 204 is configured to play the target video based on the content identification result and the playing mode.
The playing module 204 may determine a video image parameter and a video sound effect parameter corresponding to the target video in the playing mode, and then the playing module 204 plays the target video based on the content identification result, the video image parameter, and the video sound effect parameter, that is, optionally, in some embodiments, the playing module 204 may specifically include:
the second determining unit is used for determining video image parameters and video sound effect parameters corresponding to the target video in the play mode;
and the playing unit is used for playing the target video based on the content identification result, the video image parameter and the video sound effect parameter.
Optionally, in some embodiments, the playing unit may specifically include:
the adjusting subunit is used for adjusting the video audio effect parameters based on the content identification result to obtain an adjusted audio effect;
and the playing subunit is used for playing the target video according to the video image parameters and the adjusted sound effect.
Optionally, in some embodiments, the playing subunit may specifically be configured to: and detecting the current network transmission speed, and when the network transmission speed is detected to be smaller than a preset value, reducing the video image parameters so as to play the video image corresponding to the target video by the reduced video image parameters and play the audio corresponding to the target video by the adjusted sound effect.
Optionally, in some embodiments, the adjusting subunit may specifically be configured to: and determining the video type of the target video according to the content identification result, and adjusting the video audio effect parameters based on the video type to obtain the adjusted audio effect.
Optionally, in some embodiments, the adjusting subunit may specifically be configured to: segmenting the target video based on the content identification result to obtain a video segment corresponding to the target video; and detecting the video content of the video clip, and adjusting the video sound effect parameters based on the detected video content to obtain the adjusted sound effect.
Optionally, in some embodiments, referring to fig. 2b, the playing apparatus may further include a switching module 205, where the switching module 205 may be specifically configured to: when a video switching operation triggered for the target video is detected, determine the switching video corresponding to the video switching operation; and acquire the video playing parameters corresponding to the target video, and switch the target video to the switching video for playing based on the playing parameters.
As can be seen from the above, after the obtaining module 201 of the embodiment of the present application obtains the target video, the reporting module 202 reports the video identifier of the target video to the server, and receives the play mode corresponding to the target video returned by the server according to the video identifier, then, the identifying module 203 performs content identification on the target video to obtain a content identification result of the target video, and finally, the playing module 204 plays the target video based on the content identification result and the play mode.
An embodiment of the present application further provides an electronic device, where the electronic device may be a terminal, as shown in fig. 3, which shows a schematic structural diagram of the electronic device according to the embodiment of the present application, and specifically:
the electronic device may include components such as a processor 1001 with one or more processing cores, a memory 1002 with one or more computer-readable storage media, a power source 1003, and an input unit 1004. Those skilled in the art will appreciate that the electronic device structure shown in fig. 3 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used. Wherein:
the processor 1001 is the control center of the electronic device, and connects various parts of the entire electronic device using various interfaces and lines; by running or executing the software programs and/or modules stored in the memory 1002 and calling the data stored in the memory 1002, the processor 1001 performs various functions of the electronic device and processes data, thereby monitoring the electronic device as a whole. Optionally, the processor 1001 may include one or more processing cores; preferably, the processor 1001 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 1001.
The memory 1002 may be used to store software programs and modules, and the processor 1001 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 1002. The memory 1002 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 1002 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices. Accordingly, the memory 1002 may further include a memory controller to provide the processor 1001 with access to the memory 1002.
The electronic device further includes a power source 1003 for supplying power to each component. Preferably, the power source 1003 may be logically connected to the processor 1001 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The power source 1003 may further include one or more of a DC or AC power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other components.
The electronic device may further include an input unit 1004, and the input unit 1004 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the electronic device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 1001 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 1002 according to the following instructions, and the processor 1001 runs the application programs stored in the memory 1002, so as to implement various functions as follows:
obtaining a target video; reporting a video identifier of the target video to a server, and receiving a play mode corresponding to the target video returned by the server according to the video identifier; performing content identification on the target video to obtain a content identification result of the target video; and playing the target video based on the content identification result and the play mode.
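The end-to-end flow implemented by the processor can be sketched as below; the four steps are injected as callables because the embodiment does not fix their interfaces, so every name here is an illustrative assumption:

```python
def play_target_video(get_video, report_to_server, identify_content, play):
    """Sketch of the overall method; get_video, report_to_server,
    identify_content, and play are hypothetical stand-ins for the
    corresponding steps of the embodiment."""
    target_video = get_video()                        # 1. obtain the target video
    play_mode = report_to_server(target_video["id"])  # 2. report the identifier, receive the play mode
    result = identify_content(target_video)           # 3. content identification on the target video
    return play(target_video, result, play_mode)      # 4. play based on result and play mode
```

Note that steps 2 and 3 are independent of each other, so an implementation could also run the server round-trip and the local content identification concurrently.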
For the specific implementation of the above operations, reference may be made to the foregoing embodiments, and details are not described herein again.
Therefore, after the target video is obtained, the video identifier of the target video is reported to the server, and the play mode corresponding to the target video returned by the server according to the video identifier is received; then, content identification is performed on the target video to obtain a content identification result of the target video; finally, the target video is played based on the content identification result and the play mode.
According to an aspect of the present application, a computer program product or a computer program is provided, which includes computer instructions stored in a computer-readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium and executes the computer instructions, so that the electronic device performs the method provided in the various alternative implementations of the above embodiments.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by a computer program, or by related hardware controlled by the computer program; the computer program may be stored in a computer-readable storage medium, and loaded and executed by a processor.
To this end, the present application provides a storage medium, in which a computer program is stored, where the computer program can be loaded by a processor to execute the steps in any one of the video playing methods provided in the present application.
For the specific implementation of the above steps, reference may be made to the foregoing embodiments, and details are not described herein again.
The storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
Since the computer program stored in the storage medium can execute the steps in any video playing method provided in the embodiments of the present application, beneficial effects that can be achieved by any video playing method provided in the embodiments of the present application can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The video playing method and apparatus, the electronic device, and the storage medium provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific implementations and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.