HK40081524A - Time delay parameter acquisition method, device, electronic equipment and storage medium - Google Patents
Time delay parameter acquisition method, device, electronic equipment and storage medium
- Publication number
- HK40081524A (application HK42023070508.9A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- information
- execution engine
- general execution
- task
- delay parameter
- Prior art date
Description
Technical Field
The present invention relates to delay parameter acquisition technology in network communication, and in particular to a delay parameter acquisition method, apparatus, system, device, and storage medium.
Background
In the related art, there are several ways to improve the utilization rate of cluster resources. One is for the cluster itself to configure application resources reasonably and run as many jobs as possible. Another is to fill in other jobs during trough periods so that more jobs run. Online-offline co-location fills offline jobs in while online jobs are running, so as to improve resource utilization. However, offline tasks cannot be filled in without limit: online jobs must be guaranteed to be unaffected, the SLO of the offline tasks must stay within an acceptable range, and offline tasks must be able to go online and offline quickly, yielding resources in time when online tasks need them. It is therefore necessary to obtain the delay parameter accurately and in time in order to monitor the task processing quality of the online service process.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for acquiring a delay parameter, an electronic device, and a storage medium, which can accurately acquire the delay parameter in time and monitor the task processing quality of the online service process, while acquiring the delay parameter without intruding into the service process, thereby reducing the influence on task execution.
The technical scheme of the embodiment of the invention is realized as follows:
the embodiment of the invention provides a time delay parameter acquisition method, which comprises the following steps:
acquiring the use environment information of a general execution engine;
determining a hook point matched with the general execution engine based on the using environment information of the general execution engine;
mounting the general execution engine in the hook point;
when a client sends task request information to a server, recording first timestamp information through the general execution engine;
when the server sends a task processing result to the client, recording second timestamp information through the general execution engine;
determining a latency parameter of a use environment of the generic execution engine based on the first timestamp information and the second timestamp information.
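The six steps above amount to bracketing one request/response exchange with two timestamps and taking their difference. A minimal user-space Python sketch of that bracketing (all names are illustrative, not from the patent; in the claimed method the timestamps are taken at kernel hook points):

```python
import time

def record_latency(handle_request):
    """Bracket one request/response exchange with two timestamps."""
    t1 = time.monotonic_ns()   # first timestamp: client sends the task request
    result = handle_request()  # server produces the task processing result
    t2 = time.monotonic_ns()   # second timestamp: server returns the result
    return result, t2 - t1     # latency parameter, in nanoseconds

result, latency_ns = record_latency(lambda: "ok")
```

A monotonic clock is used so the difference is never negative even if the wall clock is adjusted mid-measurement.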
The embodiment of the present invention further provides a time delay parameter collecting device, including:
the information transmission module is used for acquiring the use environment information of the general execution engine;
the information processing module is used for determining a hook point matched with the general execution engine based on the using environment information of the general execution engine;
the information processing module is used for mounting the general execution engine in the hook point;
the information processing module is used for recording first timestamp information through the general execution engine when the client sends task request information to the server;
the information processing module is used for recording second timestamp information through the general execution engine when the server sends a task processing result to the client;
the information processing module is configured to determine a latency parameter of a usage environment of the general execution engine based on the first timestamp information and the second timestamp information.
In the above scheme,
the information processing module is used for determining a filtering condition matched with the task request information;
the information processing module is used for filtering the task request information through a filtering condition matched with the task request information to obtain the filtered task request information;
the information processing module is used for sending the filtered task request information to a server through the general execution engine and recording first timestamp information corresponding to the filtered task request information.
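The filtering step can be sketched in a few lines of Python (the condition shown, keeping only port-80 traffic, is an invented example; the patent does not specify concrete conditions):

```python
def filter_requests(requests, condition):
    """Keep only task request information matching the filtering condition;
    only surviving requests would get a first timestamp recorded."""
    return [r for r in requests if condition(r)]

requests = [{"port": 80, "op": "read"}, {"port": 22, "op": "ssh"}]
# Illustrative condition: trace only web traffic.
web_only = filter_requests(requests, lambda r: r["port"] == 80)
```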
In the above scheme,
the information processing module is used for determining the configuration information of the general execution engine;
the information processing module is used for determining a detection code corresponding to a function call path of the general execution engine based on the configuration information of the general execution engine;
and the information processing module is used for configuring a corresponding data storage structure for the general execution engine based on the detection code.
In the above scheme,
the information processing module is used for enabling the user-mode application process to obtain the delay parameter and the interface address information of the usage environment of the general execution engine from the ring buffer of the general execution engine;
and the information processing module is used for evaluating the online service quality of the server based on the time delay parameter and the interface address information of the using environment of the general execution engine.
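The ring-buffer hand-off to user mode and the quality check can be sketched in Python (a user-space stand-in: a real eBPF ring buffer lives in the kernel, and the SLO threshold here is an invented example):

```python
from collections import deque

class RingBuffer:
    """User-space stand-in for the engine's ring buffer: bounded, the oldest
    samples are overwritten, drained by the user-mode application process."""
    def __init__(self, size):
        self.buf = deque(maxlen=size)

    def push(self, sample):
        self.buf.append(sample)

    def drain(self):
        out = list(self.buf)
        self.buf.clear()
        return out

def evaluate(samples, slo_ns):
    """Service quality is acceptable when every sampled latency meets the SLO."""
    return all(s["latency_ns"] <= slo_ns for s in samples)

rb = RingBuffer(3)
for lat in [120, 80, 300, 90]:
    rb.push({"latency_ns": lat, "iface": "eth0"})
samples = rb.drain()               # the oldest sample (120) was overwritten
healthy = evaluate(samples, slo_ns=200)
```

The bounded buffer means a slow user-mode reader loses old samples rather than stalling the kernel-side producer, which matches the non-intrusive goal of the scheme.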
In the above scheme,
the information processing module is used for judging, according to the delay parameter and the interface address information of the usage environment of the general execution engine, whether idle node resources in the cluster resources satisfy the task to be processed;
the information processing module is used for processing the task to be processed through the idle node resources when the idle node resources in the cluster resources satisfy the task to be processed;
and the information processing module is used for selecting corresponding idle node resources based on the delay parameter of the usage environment of the general execution engine when the idle node resources in the cluster resources do not satisfy the task to be processed.
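This scheduling logic can be sketched as follows (a simplification under assumed data shapes; node names, the single-CPU resource check, and the fallback rule of picking the lowest-latency node are illustrative):

```python
def schedule(task, nodes, latency_ns_by_node):
    """Prefer an idle node whose free resources satisfy the task; otherwise
    fall back to the node with the lowest measured delay parameter."""
    fits = [n for n in nodes if n["free_cpu"] >= task["cpu"]]
    if fits:
        return fits[0]["name"]
    return min(nodes, key=lambda n: latency_ns_by_node[n["name"]])["name"]

nodes = [{"name": "node-a", "free_cpu": 2}, {"name": "node-b", "free_cpu": 8}]
latency_ns_by_node = {"node-a": 500, "node-b": 900}
chosen = schedule({"cpu": 4}, nodes, latency_ns_by_node)
```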
In the above scheme,
the information processing module is used for analyzing the task to be processed and acquiring the priority identifier of the task to be processed;
the information processing module is used for sorting the received tasks to be processed by priority according to the priority identifiers of the tasks to be processed;
and the information processing module is used for creating a corresponding task queue to be processed according to the priority of the task to be processed.
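The queue construction above can be sketched in Python (assuming a lower priority identifier means higher priority, which the patent does not specify):

```python
def build_queue(tasks):
    """Sort received tasks by their priority identifier to form the
    to-be-processed task queue (lower value = higher priority here)."""
    return sorted(tasks, key=lambda t: t["priority"])

tasks = [{"name": "t1", "priority": 2},
         {"name": "t2", "priority": 0},
         {"name": "t3", "priority": 1}]
queue = build_queue(tasks)
```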
In the above scheme,
the information processing module is used for traversing the task queue to be processed and determining the task to be processed with the highest priority;
the information processing module is used for determining the link quality of each link in the network resources;
the information processing module is configured to configure a link with the highest link quality in the network resources for the to-be-processed task with the highest priority, so as to implement processing of the to-be-processed task in the to-be-processed task queue through the configured link.
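A sketch of this link-assignment step (the data shapes and the scalar link-quality score are assumptions for illustration):

```python
def assign_link(queue, links):
    """Pair the highest-priority pending task with the highest-quality link."""
    task = min(queue, key=lambda t: t["priority"])
    link = max(links, key=lambda l: l["quality"])
    return task["name"], link["name"]

queue = [{"name": "t1", "priority": 2}, {"name": "t2", "priority": 0}]
links = [{"name": "link-1", "quality": 0.7}, {"name": "link-2", "quality": 0.95}]
pair = assign_link(queue, links)
```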
In the above scheme,
the information processing module is used for sending the general execution engine identifier, the first timestamp information, the second timestamp information, and the delay parameter to the blockchain network, so that
a node of the blockchain network fills the general execution engine identifier, the first timestamp information, the second timestamp information, and the delay parameter into a new block, and when consensus on the new block is reached, appends the new block to the tail of the blockchain.
An embodiment of the present invention further provides an electronic device, where the electronic device includes:
a memory for storing executable instructions;
and the processor is used for implementing the delay parameter acquisition method when executing the executable instructions stored in the memory.
The embodiment of the present invention further provides a computer-readable storage medium, which stores executable instructions, and when the executable instructions are executed by a processor, the method for acquiring the delay parameter is implemented.
The embodiment of the invention has the following beneficial effects:
the method comprises: acquiring the usage environment information of a general execution engine; determining a hook point matched with the general execution engine based on the usage environment information; mounting the general execution engine in the hook point; recording first timestamp information through the general execution engine when a client sends task request information to a server; recording second timestamp information through the general execution engine when the server sends a task processing result to the client; and determining a latency parameter of the usage environment of the general execution engine based on the first timestamp information and the second timestamp information. In this way, the delay parameter can be obtained accurately and in time and the task processing quality of the online service process monitored, while acquiring the delay parameter does not require intruding into the service process, reducing the influence on task execution.
Drawings
Fig. 1 is a schematic diagram of a usage environment of a time delay parameter acquisition method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a time delay parameter acquisition apparatus according to an embodiment of the present invention;
fig. 3 is an optional schematic flow chart of a method for acquiring a delay parameter according to an embodiment of the present invention;
fig. 4 is an optional schematic flow chart of the time delay parameter acquisition method according to the embodiment of the present invention;
fig. 5 is a schematic view of an optional flow of a delay parameter collecting method according to an embodiment of the present invention;
fig. 6 is a schematic architecture diagram of a target object determining apparatus 100 according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a block chain in the block chain network 200 according to an embodiment of the present invention;
fig. 8 is a functional architecture diagram of a blockchain network 200 according to an embodiment of the present invention;
fig. 9 is a schematic processing process diagram of a time delay parameter acquisition method according to an embodiment of the present invention;
fig. 10 is a schematic view of a processing effect of the time delay parameter acquisition method in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be further described in detail with reference to the accompanying drawings, the described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Before further detailed description of the embodiments of the present invention, terms and expressions referred to in the embodiments of the present invention are described, and the terms and expressions referred to in the embodiments of the present invention are applicable to the following explanations.
1) General execution engine: eBPF (extended Berkeley Packet Filter) provides the general capability of efficiently and safely executing specific code on system or program events, and users of this capability are no longer limited to kernel developers. An eBPF program consists of bytecode instructions, storage objects (maps), and helper functions; the bytecode must pass the BPF verifier before the kernel executes it, and in kernels with BPF JIT enabled, the bytecode is compiled directly into native instructions executable by the kernel.
2) Terminals, including but not limited to: the system comprises a common terminal and a special terminal, wherein the common terminal is in long connection and/or short connection with a sending channel, and the special terminal is in long connection with the sending channel.
3) Client: the carrier implementing a specific function in a terminal; for example, a mobile client (APP) is the carrier of a specific function in a mobile terminal, such as online live streaming or online video playback.
4) In response to: indicates the condition or state on which a performed operation depends; when the condition or state is satisfied, the one or more operations performed may be in real time or may have a set delay. Unless otherwise specified, there is no restriction on the execution order of the operations.
5) The runtime environment, the engine for interpreting and executing code, for example, for the applet, may be JavaScript Core of the iOS platform, X5 JS Core of the android platform.
6) Cloud technology: a hosting technology that unifies series of resources such as hardware, software, and network in a wide area network or a local area network to realize computation, storage, processing, and sharing of data. It is the general name for network technology, information technology, integration technology, management platform technology, application technology, and the like applied under the cloud computing business model; resources can be pooled and used on demand, flexibly and conveniently. Cloud computing technology will become an important support: background services of technical network systems, such as video websites, picture websites, and portal websites, require large amounts of computing and storage resources. With the development of the internet industry, each article may carry its own identification mark that must be transmitted to a background system for logic processing; data at different levels are processed separately, and industrial data of all kinds need strong system background support, which can only be realized through cloud computing.
7) Cloud game: the game is a game which runs in a cloud server device, encodes a game picture rendered by the cloud device, transmits the encoded game picture to a user terminal through a network, decodes an encoded file by the user terminal, and renders the encoded file to a display screen for displaying, so that a user does not need to install the game locally, and can complete a game interaction process only by establishing communication network connection with the cloud.
8) Server cluster: a group of servers that collectively provide the same service and appear to a client as a single server. A server cluster can use multiple computers for parallel computation to obtain high computing speed, and can also use multiple computers for backup, so that the whole system still runs normally even if any single machine fails. The server-cluster fault handling method provided by the present application can be applied to cloud server and distributed server scenarios.
9) A Block chain (Block chain) is an encrypted, chained storage structure formed of blocks (blocks).
For example, the header of each block may include hash values of all transactions in the block, and also include hash values of all transactions in the previous block, so as to achieve tamper resistance and forgery resistance of the transactions in the block based on the hash values; newly generated transactions, after being filled into the tiles and passing through the consensus of nodes in the blockchain network, are appended to the end of the blockchain to form a chain growth.
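The chaining described above can be illustrated with a few lines of Python (a toy model: SHA-256 over the previous block's hash plus the serialized records, without transactions, Merkle trees, or consensus):

```python
import hashlib
import json

def make_block(prev_hash, records):
    """Each block's hash covers its own records and the previous block's
    hash, which is what makes tampering with history detectable."""
    body = json.dumps(records, sort_keys=True)
    block_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return {"prev": prev_hash, "records": records, "hash": block_hash}

genesis = make_block("0" * 64, [{"engine": "eBPF", "latency_ns": 1200}])
nxt = make_block(genesis["hash"], [{"engine": "eBPF", "latency_ns": 900}])
# Changing even one recorded value produces a different hash.
tampered = make_block("0" * 64, [{"engine": "eBPF", "latency_ns": 1201}])
```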
10) Blockchain network (Blockchain Network): the collection of nodes that incorporate new blocks into a blockchain by consensus.
Fig. 1 is a schematic view of a usage scenario of the delay parameter collecting method according to an embodiment of the present invention. Referring to fig. 1, terminals (including a terminal 10-1 and a terminal 10-2) are provided with clients capable of executing different functions, through which the terminals obtain different task processing results from the corresponding server 200 over a network 300. The server 200 can receive task processing requests from different terminals; the network 300 may be a wide area network or a local area network, or a combination of the two, with data transmission over wireless links. The server 200 may be part of a server cluster in which offline jobs are filled in while online jobs run, in an online-offline co-located manner, to improve resource utilization. However, offline tasks cannot be filled in without limit: online jobs must be guaranteed to be unaffected and the SLO kept within an acceptable range; offline jobs must go online and offline quickly and yield resources in time when online jobs need them. In addition, after offline jobs are placed, their success rate must also be guaranteed, so that frequent resource yielding does not drive the failure rate up.
In the prior art, the quality of online services is often monitored through hardware indexes such as CPI (Cycles Per Instruction) or through collection and reporting by the online service itself. Hardware indexes such as CPI mainly reflect instruction execution efficiency; this approach usually needs to compare CPI data across other nodes running the same type of service to judge whether the current service deviates from its normal value, so detection efficiency is low and the approach depends on multiple replicas of the online service existing. Moreover, CPI is a hardware index: if the host is a virtual machine on the cloud, CPI data cannot be collected because of restrictions on data pass-through. Meanwhile, in the collect-and-report scheme, different services may report to different monitoring platforms, and a monitoring platform may not provide an API for index queries, so monitoring timeliness cannot be guaranteed.
As an example, the server 200 is configured to deploy a delay parameter collecting apparatus to implement the delay parameter collecting method provided by the present invention, so as to: obtain the usage environment information of the general execution engine; determine a hook point matched with the general execution engine based on the usage environment information; mount the general execution engine in the hook point; record first timestamp information through the general execution engine when a client sends task request information to a server; record second timestamp information through the general execution engine when the server sends a task processing result to the client; and determine the delay parameter of the usage environment of the general execution engine based on the first timestamp information and the second timestamp information, using this delay parameter to monitor the task processing quality of the online service process.
The structure of the delay parameter collecting device according to the embodiment of the present invention is described in detail below. The delay parameter collecting device may be implemented in various forms, such as a dedicated terminal with delay-parameter-collection processing functions, or a server or server group with such functions deployed in a target system, for example the server 200 in fig. 1 above. Fig. 2 is a schematic structural diagram of a delay parameter collecting device according to an embodiment of the present invention. It can be understood that fig. 2 shows only an exemplary structure, not the whole structure, and part or all of the structure shown in fig. 2 may be implemented as needed.
The time delay parameter acquisition device provided by the embodiment of the invention comprises: at least one processor 201, memory 202, user interface 203, and at least one network interface 204. The various components in the latency parameter acquisition device are coupled together by a bus system 205. It will be appreciated that the bus system 205 is used to enable communications among the components of the connection. The bus system 205 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 205 in fig. 2.
The user interface 203 may include, among other things, a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, or a touch screen.
It will be appreciated that the memory 202 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The memory 202 in embodiments of the present invention is capable of storing data to support operation of the terminal (e.g., 10-1). Examples of such data include: any computer program, such as an operating system and application programs, for operating on a terminal (e.g., 10-1). The operating system includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application program may include various application programs.
In some embodiments, the delay parameter collecting apparatus provided in the embodiments of the present invention may be implemented by a combination of software and hardware, and as an example, the delay parameter collecting apparatus provided in the embodiments of the present invention may be a processor in the form of a hardware decoding processor, which is programmed to execute the delay parameter collecting method provided in the embodiments of the present invention. For example, a processor in the form of a hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
As an example that the delay parameter collecting apparatus provided in the embodiment of the present invention is implemented by combining software and hardware, the delay parameter collecting apparatus provided in the embodiment of the present invention may be directly embodied as a combination of software modules executed by the processor 201, where the software modules may be located in a storage medium, the storage medium is located in the memory 202, the processor 201 reads executable instructions included in the software modules in the memory 202, and the delay parameter collecting method provided in the embodiment of the present invention is completed by combining necessary hardware (for example, including the processor 201 and other components connected to the bus 205).
By way of example, the Processor 201 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor or the like.
As an example that the delay parameter collecting apparatus provided in the embodiment of the present invention is implemented by hardware, the apparatus provided in the embodiment of the present invention may be implemented directly using a processor 201 in the form of a hardware decoding processor, for example, one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components, to implement the delay parameter collecting method provided in the embodiment of the present invention.
The memory 202 in the embodiment of the present invention is used for storing various types of data to support the operation of the delay parameter collecting apparatus. Examples of such data include: any executable instructions for operating on the latency parameter collecting apparatus, such as executable instructions, may be included in the program for implementing the method for collecting latency parameters according to the embodiment of the present invention.
In other embodiments, the delay parameter collecting apparatus provided in the embodiments of the present invention may be implemented in software. Fig. 2 shows the delay parameter collecting apparatus stored in the memory 202, which may be software in the form of a program, a plug-in, or the like, comprising a series of modules. As an example of the program stored in the memory 202, the delay parameter collecting apparatus includes the following software modules: an information transmission module 2081 and an information processing module 2082. When the software modules in the delay parameter collecting apparatus are read into RAM by the processor 201 and executed, the delay parameter collecting method provided by the embodiment of the present invention is implemented. The functions of each software module include:
the information transmission module 2081 is used for acquiring the use environment information of the general execution engine;
and the information processing module 2082 is configured to determine a hook point matched with the general execution engine based on the usage environment information of the general execution engine.
The information processing module 2082 is configured to mount the general execution engine in the hook point;
the information processing module 2082 is configured to record first timestamp information through the general execution engine when the client sends the task request information to the server.
The information processing module 2082 is configured to record second timestamp information through the general execution engine when the server sends the task processing result to the client.
The information processing module 2082 is configured to determine a latency parameter of a usage environment of the general execution engine based on the first timestamp information and the second timestamp information.
The delay parameter collecting method provided in the embodiment of the present invention is described with reference to the delay parameter collecting device shown in fig. 2, wherein it can be understood that the steps shown in fig. 3 may be executed by various electronic devices of the delay parameter collecting device, for example, the steps may be executed by a dedicated terminal with a resource scheduling function, a server or a server cluster controller, or a control terminal of a cloud network server. The dedicated terminal with the delay parameter collecting device can be packaged in the server 200 shown in fig. 1 to execute the corresponding software module in the delay parameter collecting device shown in the foregoing fig. 2. The following is a description of the steps shown in fig. 3.
Step 301: the time delay parameter acquisition device acquires the using environment information of the general execution engine.
Step 302: and the time delay parameter acquisition device determines a hook point matched with the general execution engine based on the use environment information of the general execution engine.
The hook point matched with the general execution engine may use hook technology: after a specific system event is hooked, once the hook event occurs, the hook program for that event receives a notification from the system, and the program can respond to the event at the first moment. Referring to fig. 1, an open service for communicating with the message processing device is deployed on the server; the open service may specifically be a service with an interface capability. Communication between the server and the message processing device may be achieved through the open service. For example, the server may provide a webhook address through the open service, and the message processing device may pass messages to the server through that webhook address; here the webhook address is the interface address, i.e., the address the server provides for communicating with the message processing device. The configuration parameters corresponding to the message processing device may specifically include a preset format for the response message, a virtual user identifier, a network address, a communication token, an encryption key, and the like, where the network address is the address in the message processing service for receiving session messages.
Step 303: and the time delay parameter acquisition device mounts the general execution engine in the hook point.
In some embodiments of the present invention, the general execution engine may be eBPF, a general execution engine that provides the capability to efficiently and safely execute specific code on system or program events, with users no longer limited to kernel developers. An eBPF program consists of bytecode instructions, storage objects (maps), and helper functions; the bytecode must pass the BPF verifier before the kernel executes it, and in kernels with BPF JIT enabled, the bytecode is compiled directly into native instructions executable by the kernel.
When the generic execution engine is mounted at a hook point, configuration information of the generic execution engine may be determined first; a detection code corresponding to a function call path of the generic execution engine is determined based on that configuration information; and a corresponding data storage structure is configured for the generic execution engine based on the detection code. Specifically, when a user program makes a certain system call, kprobe or kretprobe detection code can be added on the related function call path, and the related information is recorded into a Map storage structure built into eBPF, so that the user-mode program can perform task processing according to the data in the Map to be monitored. kprobe has two main usage methods: one is loading through a kernel module, and the other is use through the debugfs interface. For module loading, the directory samples/kprobes under the kernel source tree contains several kprobe examples. Taking kprobe_example.c as an example, a kprobe structure is declared first, and then several key member variables are defined, including symbol_name, pre_handler, and post_handler. Here symbol_name is the name of the probed function (do_fork in kprobe_example.c), and pre_handler and post_handler are the hook functions executed before and after the probe point, respectively. The kprobe is then registered through the register_kprobe function, which completes the addition of kprobe detection code on the relevant function call path.
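The pre-handler/post-handler ordering described above can be illustrated with a user-space sketch (this is not the kernel kprobe API itself; the names and structure only mirror the pre_handler/post_handler members of struct kprobe):

```c
#include <assert.h>
#include <stdio.h>

/* User-space analogue of a kprobe: a pre-handler runs before the probed
 * function and a post-handler runs after it, in the same order the kernel
 * invokes the handlers around a probe point. */
typedef void (*handler_t)(const char *symbol_name);

static int call_sequence = 0;
static int pre_seq = 0, post_seq = 0, fn_seq = 0;

static void pre_handler(const char *symbol_name)
{
    (void)symbol_name;
    pre_seq = ++call_sequence;   /* fires before the probed function */
}

static void post_handler(const char *symbol_name)
{
    (void)symbol_name;
    post_seq = ++call_sequence;  /* fires after the probed function */
}

static void probed_function(void)
{
    fn_seq = ++call_sequence;
}

/* Analogue of register_kprobe plus hitting the probe point once. */
static void run_probed(handler_t pre, handler_t post, void (*fn)(void))
{
    if (pre)
        pre("probed_function");
    fn();
    if (post)
        post("probed_function");
}
```

In a real module, register_kprobe would wire the handlers to the kernel symbol named by symbol_name instead of a local function pointer.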
Step 304: and when the client sends the task request information to the server, the time delay parameter acquisition device records the first timestamp information through the general execution engine.
In some embodiments of the present invention, referring to fig. 4, fig. 4 is a schematic diagram of a process of recording timestamp information by a general execution engine in the embodiments of the present invention, and specifically includes the following steps:
step 401: determining a filtering condition matched with the task request information;
step 402: filtering the task request information through a filtering condition matched with the task request information to obtain filtered task request information;
step 403: sending, by the general execution engine, the filtered task request information to a server, and recording first timestamp information corresponding to the filtered task request information.
In fig. 4, for example, two functions, skb_copy_datagram_iter and tcp_sendmsg, are called during TCP network communication for processing, where the sendmsg system call is used to send a message to another socket.
If no condition filtering is performed, the processing efficiency of the whole protocol stack is affected, so some condition filtering is required when the eBPF program executes, which improves overall execution efficiency. For the request latency that needs to be captured, the program uses part of the request information as meta information in the eBPF program as filtering conditions, such as the port number of the server and the HTTP method of the request. Specifically, the HTTP method may be one of the following types:
1) GET: requests the specified page information and returns the entity body.
2) HEAD: similar to a GET request, except that the returned response has no message body; it is used to retrieve the headers only.
3) POST: submits data to the specified resource for processing (e.g., submitting a form or uploading a file). The data is contained in the request body. A POST request may result in the creation of a new resource and/or the modification of an existing resource.
4) PUT: replaces the content of the specified document with data transmitted from the client to the server.
5) DELETE: requests that the server delete the specified page.
6) CONNECT: reserved in HTTP/1.1 for proxy servers that can change the connection into a pipe (tunnel) mode.
7) OPTIONS: allows the client to view the capabilities of the server.
8) TRACE: echoes the request received by the server, mainly for testing or diagnosis.
9) PATCH: the entity contains a description of the differences from the original content represented by the URI, applied as a partial modification.
10) COPY: requests that the server copy the specified page to another network address.
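The filtering of steps 401-403 can be sketched as follows (a minimal user-space illustration; the field names, the monitored port, and the chosen methods are assumptions, not taken from the patent):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* A task request is kept only when it targets the monitored server port and
 * uses one of the monitored HTTP methods; only kept requests get a first
 * timestamp recorded, so unrelated traffic never enters the latency path. */
struct task_request {
    uint16_t dst_port;   /* destination (server) port */
    char     method[8];  /* HTTP method of the request */
};

static const uint16_t kMonitoredPort = 8080;   /* assumed filter condition */

static bool matches_filter(const struct task_request *req)
{
    if (req->dst_port != kMonitoredPort)
        return false;
    return strcmp(req->method, "GET") == 0 || strcmp(req->method, "POST") == 0;
}

/* Returns the recorded first timestamp, or 0 when the request is filtered out. */
static uint64_t record_first_timestamp(const struct task_request *req,
                                       uint64_t now_ns)
{
    return matches_filter(req) ? now_ns : 0;
}
```

In an actual eBPF program the same comparisons would run in kernel context against the packet/socket metadata before any Map write.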
Step 305: and the time delay parameter acquisition device records second timestamp information through the general execution engine when the server sends a task processing result to the client.
Step 306: and the time delay parameter acquisition device determines the time delay parameter of the using environment of the general execution engine based on the first time stamp information and the second time stamp information.
After the delay parameter of the use environment of the general execution engine is determined, a user-mode application process obtains the delay parameter and the interface address information of the use environment of the general execution engine from a ring buffer area of the general execution engine; and the online service quality of the server is evaluated based on the delay parameter and the interface address information of the use environment of the general execution engine.
Referring to fig. 5, fig. 5 is a schematic diagram of a process of evaluating the online service quality of a server by a user mode application process in the embodiment of the present invention, which specifically includes the following steps:
step 501: the general execution engine monitors the task processing progress of the online service program through the hook point.
In the processing shown in fig. 5, kernel mode is a privileged state: it holds, or can access, essentially all resources of the machine, while user mode is an unprivileged state in which the resources that can be accessed are limited. If a program runs in the privileged state, it can access any resource of the computer without restriction of resource access rights; if a program runs in user mode, its resource requirements are subject to various restrictions. For example, to access a kernel data structure of the operating system, such as the process table, the program must first switch into the privileged state. In the processing of fig. 5, the general execution engine runs in kernel mode, and the user-mode application process can only acquire the delay data in the fixed ring buffer area; when the usage environment information of the execution engine changes, the application process can switch from user mode to kernel mode so as to acquire more delay data.
Step 502: and the universal execution engine records the time delay parameters and sends the time delay parameters to the annular buffer area.
Step 503: and the user mode application process acquires the time delay data in the ring buffer area.
In some embodiments of the invention, the online service quality of the server may be evaluated in the following manner:
judging, according to the delay parameter and the interface address information of the use environment of the general execution engine, whether the idle node resources in the cluster resources can satisfy the task to be processed; when the idle node resources in the cluster resources satisfy the task to be processed, processing the task through those idle node resources; and when they do not, selecting corresponding idle node resources based on the delay parameter of the use environment of the general execution engine. In some embodiments of the present invention, the task to be processed may be analyzed to obtain its priority identifier; the received tasks to be processed are sorted by priority according to their priority identifiers; and a corresponding to-be-processed task queue is created according to that priority ordering, so that different customers can be treated differently. Meanwhile, the to-be-processed task queue is traversed to determine the task with the highest priority; the link quality of each link in the network resources is determined; and the link with the highest link quality in the network resources is configured for the highest-priority task, so that the tasks in the to-be-processed task queue are transmitted through the configured links. This improves the utilization efficiency of resources, guarantees timely processing of high-priority tasks and the data processing speed for cloud server users, and improves the use experience of the users.
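The queue-building step above can be sketched as a priority sort (illustrative names; a higher priority value is assumed to mean a more urgent task):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Pending tasks are sorted by their priority identifier to build the
 * to-be-processed queue, so traversal dequeues the most urgent task first. */
struct pending_task {
    int id;
    int priority;   /* higher value = more urgent (assumed convention) */
};

static int by_priority_desc(const void *a, const void *b)
{
    const struct pending_task *ta = a;
    const struct pending_task *tb = b;
    return tb->priority - ta->priority;   /* descending order */
}

static void build_queue(struct pending_task *tasks, size_t n)
{
    qsort(tasks, n, sizeof(tasks[0]), by_priority_desc);
}
```

Configuring the best-quality link for tasks[0] then corresponds to serving the head of this queue first.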
Further, referring to fig. 6, fig. 6 is a schematic diagram of an architecture of the data processing apparatus 100 according to an embodiment of the present invention, which includes a blockchain network 200 (exemplarily showing consensus nodes 210-1 to 210-3), a certificate authority 300, a service entity 400, and a service entity 500, which are separately described below.
The type of the blockchain network 200 is flexible: it may be any of a public chain, a private chain, or a consortium chain, for example. Taking a public chain as an example, electronic devices such as user terminals and servers of any business entity can access the blockchain network 200 without authorization; taking a consortium chain as an example, an electronic device (e.g., a terminal/server) under the jurisdiction of a business entity can access the blockchain network 200 after obtaining authorization, at which point it becomes a client node in the blockchain network 200.
In some embodiments, the client node may act as a mere observer of the blockchain network 200, i.e., provide only the functionality that supports the business entity in initiating transactions (e.g., for uplink storage of data or querying of data on the chain), while the functions of the consensus node 210 of the blockchain network 200, such as the ordering function, the consensus service, and the accounting function, may be forgone by default or selectively (e.g., depending on the specific business requirements of the business entity). In this way, the data and the service processing logic of the service subject can be migrated into the blockchain network 200 to the maximum extent, and the credibility and traceability of the data and the service processing are realized through the blockchain network 200.
Consensus nodes in the blockchain network 200 receive transactions submitted by the client nodes of different business entities (e.g., business entity 400 and business entity 500 shown in fig. 6, with client node 410 attributed to business entity 400 and client node 510 attributed to business entity 500), execute the transactions to update or query the ledger, and the various intermediate or final results of executing the transactions may be returned to the business entities' client nodes for display.
For example, the client node 410/510 may subscribe to events of interest in the blockchain network 200, such as transactions occurring in a particular organization/channel in the blockchain network 200, and the corresponding transaction notifications are pushed by the consensus node 210 to the client node 410/510, thereby triggering the corresponding business logic in the client node 410/510.
An exemplary application of the blockchain network is described below by taking an example in which a plurality of service entities access the blockchain network to implement distributed data processing.
Referring to fig. 6, a plurality of business entities are involved in the management link: for example, the business entity 400 may be a server with a data processing function, and the business entity 500 may be the front ends of different service clusters with self-developed resource scheduling systems. Each registers with the certificate authority 300 to obtain its own digital certificate, which includes the public key of the business entity and a digital signature made by the certificate authority 300 over that public key and the identity information of the business entity. The digital certificate is attached, together with the business entity's digital signature over the transaction, to each transaction sent to the blockchain network, so that the blockchain network can take the digital certificate and signature out of the transaction, verify the authenticity of the message (i.e., whether it has been tampered with) and the identity information of the sending business entity, and check according to that identity, for example, whether the business entity has the authority to initiate the transaction. Clients run on electronic devices (e.g., terminals or servers) hosted by the business entities may request access to the blockchain network 200 to become client nodes.
The client node 410 of the service agent 400 is configured to obtain job data to be processed and submit it to the cluster resource manager; trigger the corresponding component according to the job data through the cluster resource manager, convert the object-oriented query language instruction in the job data into a task matched with the corresponding compute engine, and start the job manager of that compute engine; send, based on the resource quantity submitted by the data warehouse tool driver component, a resource application request corresponding to the job data to the job manager of the self-developed resource scheduling system; convert the received resource application request through the job manager of the self-developed resource scheduling system so that the request matches that scheduling system; and, based on the converted resource application request, trigger the corresponding task execution component, process the job data through the task execution component, and send the general execution engine identifier, the first timestamp information, the second timestamp information, and the delay parameter to the blockchain network 200.
When the general execution engine identifier, the first timestamp information, the second timestamp information, and the delay parameter are sent to the blockchain network 200, service logic may be set in the client node 410 in advance, so that once the corresponding information is formed, the client node 410 automatically sends these items to the blockchain network 200; alternatively, a service person of the service agent 400 logs in to the client node 410, manually packages the general execution engine identifier, the first timestamp information, the second timestamp information, and the delay parameter, and sends the package to the blockchain network 200. During sending, the client node 410 generates a transaction corresponding to the update operation from these items, specifies in the transaction the intelligent contract that needs to be invoked to implement the update operation and the parameters passed to the intelligent contract, carries in the transaction the digital certificate of the client node 410 and a signed digital signature (for example, a digest of the transaction encrypted using the private key in the digital certificate of the client node 410), and broadcasts the transaction to the consensus nodes 210 in the blockchain network 200.
When a consensus node 210 in the blockchain network 200 receives the transaction, it verifies the digital certificate and the digital signature carried by the transaction; after that verification succeeds, it determines, according to the identity of the service agent 400 carried in the transaction, whether the service agent 400 has the transaction authority. If either the digital signature verification or the authority verification fails, the transaction fails. Upon successful verification, the node appends its own digital signature (e.g., a digest of the transaction encrypted using the private key of node 210-1) and continues to broadcast the transaction in the blockchain network 200.
After receiving the successfully verified transaction, the consensus node 210 in the blockchain network 200 fills the transaction into a new block and broadcasts the new block. When a consensus node 210 in the blockchain network 200 broadcasts a new block, a consensus process is performed on it; if the consensus succeeds, the new block is appended to the tail of the blockchain stored by the node, the state database is updated according to the transaction result, and the transactions in the new block are executed: for a transaction submitting an updated general execution engine identifier, first timestamp information, second timestamp information, and delay parameter, a key-value pair comprising the general execution engine identifier, the first timestamp information, the second timestamp information, and the delay parameter is added to the state database.
A service person of the service agent 500 logs in the client node 510, inputs a target video or text information query request, the client node 510 generates a transaction corresponding to an update operation/query operation according to the target video or text information query request, specifies an intelligent contract that needs to be called to implement the update operation/query operation and parameters transferred to the intelligent contract in the transaction, and broadcasts the transaction to the consensus node 210 in the blockchain network 200, where the transaction also carries a digital certificate of the client node 510 and a signed digital signature (for example, a digest of the transaction is encrypted by using a private key in the digital certificate of the client node 510).
After the consensus node 210 in the blockchain network 200 receives the transaction, it verifies the transaction, fills it into a block, and reaches consensus; the filled new block is appended to the tail of the blockchain stored by the node, the state database is updated according to the transaction result, and the transactions in the new block are executed: for a submitted transaction that updates the manual identification result corresponding to a certain target video, the key-value pair corresponding to the target video in the state database is updated according to the manual identification result; for a submitted transaction that queries a certain target video, the key-value pair corresponding to the target video is queried from the state database, and the transaction result is returned.
It should be noted that fig. 6 exemplarily shows a process of directly placing the general execution engine identifier, the first timestamp information, the second timestamp information, and the delay parameter on the chain; in other embodiments, when the data size of the target video is large, the client node 410 may instead store the hash of the target video and the hash of the corresponding text information on the chain in pairs, and store the original target video and the corresponding text information in a distributed file system or a database. After obtaining the target video and the corresponding text information from the distributed file system or the database, the client node 510 may verify them against the corresponding hashes in the blockchain network 200, thereby reducing the workload of the uplink operation.
As an example of a blockchain, referring to fig. 7, fig. 7 is a schematic structural diagram of a blockchain in the blockchain network 200 according to an embodiment of the present invention. The header of each block may include the hash values of all transactions in that block, and also the hash values of all transactions in the previous block. Records of newly generated transactions are filled into a block and, after consensus by the nodes in the blockchain network, appended to the tail of the blockchain to form chain growth; the hash-based chain structure between blocks ensures that the transactions in the blocks are tamper-proof and forgery-proof.
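The tamper-resistance property of the chain structure in fig. 7 can be illustrated with a toy sketch (FNV-1a stands in for a real cryptographic hash here; the structure and names are illustrative only):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Each block header stores a digest of its own payload seeded with the
 * previous block's digest, so altering an earlier block invalidates every
 * later link in the chain. */
static uint64_t fnv1a(const void *data, size_t len, uint64_t seed)
{
    const unsigned char *p = data;
    uint64_t h = seed ? seed : 1469598103934665603ULL;  /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 1099511628211ULL;                          /* FNV prime */
    }
    return h;
}

struct block {
    char     payload[32];   /* stand-in for the transaction records */
    uint64_t prev_hash;     /* digest of the previous block (0 for genesis) */
    uint64_t hash;          /* digest of payload, chained to prev_hash */
};

static void seal_block(struct block *b, uint64_t prev_hash)
{
    b->prev_hash = prev_hash;
    b->hash = fnv1a(b->payload, strlen(b->payload), prev_hash);
}

/* Returns 1 when every link in the chain is intact. */
static int chain_valid(const struct block *chain, size_t n)
{
    uint64_t prev = 0;
    for (size_t i = 0; i < n; i++) {
        if (chain[i].prev_hash != prev ||
            chain[i].hash != fnv1a(chain[i].payload,
                                   strlen(chain[i].payload), prev))
            return 0;
        prev = chain[i].hash;
    }
    return 1;
}
```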
An exemplary functional architecture of a block chain network provided in the embodiment of the present invention is described below, referring to fig. 8, fig. 8 is a functional architecture schematic diagram of a block chain network 200 provided in the embodiment of the present invention, which includes an application layer 201, a consensus layer 202, a network layer 203, a data layer 204, and a resource layer 205, which are described below respectively.
The resource layer 205 encapsulates the computing, storage, and communications resources that implement each node 210 in the blockchain network 200.
The data layer 204 encapsulates various data structures that implement the ledger, including blockchains implemented as files in a file system, keyed state databases, and presence certificates (e.g., hash trees of transactions in blocks).
The network layer 203 encapsulates the functions of a Point-to-Point (P2P) network protocol, a data propagation mechanism and a data verification mechanism, an access authentication mechanism, and a service agent identity management.
The P2P network protocol implements communication between nodes 210 in the blockchain network 200, the data propagation mechanism ensures propagation of transactions in the blockchain network 200, and the data verification mechanism is used for implementing reliability of data transmission between nodes 210 based on an encryption method (e.g., digital certificate, digital signature, public/private key pair); the access authentication mechanism is used for authenticating the identity of the service subject added into the block chain network 200 according to an actual service scene, and endowing the service subject with the authority of accessing the block chain network 200 when the authentication is passed; the business entity identity management is used to store the identity of the business entity that is allowed to access blockchain network 200, as well as the permissions (e.g., the types of transactions that can be initiated).
The consensus layer 202 encapsulates the functions of the mechanism for the nodes 210 in the blockchain network 200 to agree on a block (i.e., a consensus mechanism), transaction management, and ledger management. The consensus mechanism comprises consensus algorithms such as POS, POW and DPOS, and the pluggable consensus algorithm is supported.
The transaction management is configured to verify the digital signature carried in the transaction received by the node 210, verify the identity information of the service entity, and determine whether the service entity has the right to perform the transaction according to the identity information (read the relevant information from the service entity identity management); for the service agents authorized to access the blockchain network 200, the service agents all have digital certificates issued by the certificate authority, and the service agents sign the submitted transactions by using private keys in the digital certificates of the service agents, so that the legal identities of the service agents are declared.
The ledger administration is used to maintain blockchains and state databases. For the block with the consensus, adding the block to the tail of the block chain; executing the transaction in the acquired consensus block, updating the key-value pairs in the state database when the transaction comprises an update operation, querying the key-value pairs in the state database when the transaction comprises a query operation and returning a query result to the client node of the business entity. Supporting query operations for multiple dimensions of a state database, comprising: querying the block based on the block vector number (e.g., a hash value of the transaction); inquiring the block according to the block hash value; inquiring a block according to the transaction vector number; inquiring the transaction according to the transaction vector number; inquiring account data of a business main body according to an account (vector number) of the business main body; and inquiring the block chain in the channel according to the channel name.
The application layer 201 encapsulates various services that the blockchain network can implement, including tracing, crediting, and verifying transactions.
In the following, taking a container cloud platform executing the delay parameter acquisition method provided by the present application as an example, the process of implementing delay parameter acquisition in a full-scene online/offline co-location (mixed deployment) scenario is described. There are various reasons for low cluster utilization, for example: 1) too many cluster fragments; 2) services monopolizing clusters, so resources cannot be shared; 3) multiple replicas deployed for disaster recovery; 4) idle buffer resource pools kept for temporary capacity expansion; 5) users being unable to predict resource needs accurately, applying for more resources than they actually use; 6) tidal patterns in applied resources, with users applying for resources according to peak-hour usage. In order to improve the resource utilization rate and avoid the hardware cost increase caused by expanding hardware facilities, the delay parameter acquisition therefore needs to be performed in a timely and accurate manner.
Referring to fig. 9, fig. 9 is a schematic processing procedure diagram of a time delay parameter acquisition method in an embodiment of the present invention, which specifically includes the following steps:
step 901: and determining a hook point matched with the general execution engine based on the using environment information of the general execution engine.
Step 902: and mounting the general execution engine in the hook point.
Step 903: and when the client sends the task request information to the server, recording different timestamp information through the general execution engine.
Step 904: and determining a time delay parameter according to different time stamp information.
Step 905: and monitoring the service process in the full-scene online/offline co-location (mixed deployment) scenario according to the time delay parameter.
As shown in fig. 9, after the eBPF collection device acquires the delay data, the delay data is handed to the co-location component for comprehensive judgment as an important basis reflecting the service quality of the job. If the co-location component judges that the currently acquired delay data is too large, exceeds the normally set threshold, or fluctuates greatly within a short time, resource limitation and adjustment are performed on the offline jobs so as to guarantee the service quality of the online jobs.
Step 906: and adjusting the cluster resource configuration according to the monitoring result of the service process.
Referring to fig. 10, fig. 10 is a schematic view of the processing effect of the delay parameter acquisition method in the embodiment of the present invention. With the conventional technology, the utilization rate of the online application cluster remains at about 10.5%. By using the delay parameter acquisition method provided in the present application to adjust the cluster resource configuration, and without any hardware expansion of the server cluster, the CPU utilization rate of the server cluster rises to 69.875%; that is, the CPU utilization rate of the server cluster is increased by roughly 60 percentage points.
The beneficial technical effects are as follows:
the method comprises the steps of acquiring the use environment information of a general execution engine; determining a hook point matched with the general execution engine based on the using environment information of the general execution engine; mounting the general execution engine in the hook point; when a client sends task request information to a server, recording first timestamp information through the general execution engine; when the server sends a task processing result to the client, recording second timestamp information through the general execution engine; determining a latency parameter of a use environment of the generic execution engine based on the first timestamp information and the second timestamp information. Therefore, the time delay parameters can be timely and accurately obtained, the task processing quality of the online service process is monitored, meanwhile, when the time delay parameters are obtained, the service processing process does not need to be invaded, and the influence on task execution is reduced.
The above description is only exemplary of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (13)
1. A time delay parameter acquisition method is characterized by comprising the following steps:
acquiring the use environment information of a general execution engine;
determining a hook point matched with the general execution engine based on the using environment information of the general execution engine;
mounting the general execution engine in the hook point;
when a client sends task request information to a server, recording first timestamp information through the general execution engine;
when the server sends a task processing result to the client, recording second timestamp information through the general execution engine;
determining a latency parameter of a use environment of the generic execution engine based on the first timestamp information and the second timestamp information.
2. The method of claim 1, wherein recording, by the generic execution engine, first timestamp information when the client sends the task request information to the server comprises:
determining a filtering condition matched with the task request information;
filtering the task request information through a filtering condition matched with the task request information to obtain filtered task request information;
sending, by the general execution engine, the filtered task request information to a server, and recording first timestamp information corresponding to the filtered task request information.
3. The method of claim 1, wherein said mounting said generic execution engine in said hooking point comprises:
determining configuration information of the general execution engine;
determining a detection code corresponding to a function call path of the general execution engine based on the configuration information of the general execution engine;
and configuring a corresponding data storage structure for the general execution engine based on the detection code.
4. The method of claim 1, further comprising:
the user mode application process obtains the time delay parameter and the interface address information of the using environment of the general execution engine from the annular buffer area of the general execution engine;
and evaluating the online service quality of the server based on the time delay parameter and the interface address information of the using environment of the general execution engine.
5. The method according to claim 4, wherein evaluating the online service quality of the server based on the delay parameter and the interface address information of the usage environment of the general execution engine comprises:
determining, according to the delay parameter and the interface address information of the usage environment of the general execution engine, whether idle node resources in the cluster resources satisfy the task to be processed;
when the idle node resources in the cluster resources satisfy the task to be processed, processing the task to be processed through the idle node resources;
and when the idle node resources in the cluster resources do not satisfy the task to be processed, selecting corresponding idle node resources based on the delay parameter of the usage environment of the general execution engine.
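The scheduling decision in claim 5 — prefer an idle node with sufficient capacity, otherwise fall back to the measured delay parameter — can be sketched as follows. The node and task field names (`free`, `demand`, `delay_ms`) are illustrative assumptions:

```python
def pick_node(task, nodes):
    """Sketch of claim 5 (node/task fields are assumptions): run the
    task on an idle node with enough free capacity; otherwise choose
    the node whose usage environment shows the smallest delay."""
    idle = [n for n in nodes if n["free"] >= task["demand"]]
    if idle:
        return idle[0]                              # resources suffice
    return min(nodes, key=lambda n: n["delay_ms"])  # fall back on delay

nodes = [{"name": "a", "free": 1, "delay_ms": 40},
         {"name": "b", "free": 2, "delay_ms": 15}]
print(pick_node({"demand": 4}, nodes)["name"])  # b  (no idle fit, lowest delay)
print(pick_node({"demand": 1}, nodes)["name"])  # a  (first idle fit)
```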
6. The method of claim 5, further comprising:
analyzing the task to be processed to obtain a priority identifier of the task to be processed;
sorting the received tasks to be processed by priority according to their priority identifiers;
and creating a corresponding task queue to be processed according to the priority of the task to be processed.
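The priority queue of claim 6 can be sketched with a binary heap: parse each task's priority identifier, then build the to-be-processed queue ordered by it. Treating a lower number as higher priority is an assumption of this sketch:

```python
import heapq

def build_queue(tasks):
    """Sketch of claim 6: sort tasks by their priority identifier
    (lower number = higher priority here, an assumption) and create
    the task queue to be processed."""
    # the enumeration index breaks ties while keeping arrival order
    heap = [(t["priority"], i, t["name"]) for i, t in enumerate(tasks)]
    heapq.heapify(heap)
    return heap

queue = build_queue([{"name": "report", "priority": 3},
                     {"name": "online-serving", "priority": 1},
                     {"name": "batch", "priority": 2}])
order = [heapq.heappop(queue)[2] for _ in range(3)]
print(order)  # ['online-serving', 'batch', 'report']
```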
7. The method of claim 6, wherein adjusting the corresponding network resource configuration according to the priority ranking of the tasks to be processed comprises:
traversing the task queue to be processed, and determining the task to be processed with the highest priority;
determining link quality of each link in the network resource;
and configuring the link with the highest link quality in the network resources for the task to be processed with the highest priority so as to process the task to be processed in the task queue to be processed through the configured link.
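Claim 7 pairs the highest-priority pending task with the best-quality link. A minimal sketch, assuming a numeric quality score per link (the metric itself is not specified in the claim):

```python
def assign_best_link(task_queue, links):
    """Sketch of claim 7 (quality metric is an assumption): take the
    highest-priority pending task and configure for it the link with
    the best measured quality."""
    top_task = min(task_queue, key=lambda t: t["priority"])  # traverse queue
    best_link = max(links, key=lambda l: l["quality"])       # best link
    return top_task["name"], best_link["id"]

pair = assign_best_link(
    [{"name": "serve", "priority": 1}, {"name": "batch", "priority": 9}],
    [{"id": "link-1", "quality": 0.7}, {"id": "link-2", "quality": 0.95}])
print(pair)  # ('serve', 'link-2')
```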
8. The method according to any one of claims 1-7, further comprising:
sending the general execution engine identifier, the first timestamp information, the second timestamp information and the delay parameter to the blockchain network, so that a node of the blockchain network fills the general execution engine identifier, the first timestamp information, the second timestamp information and the delay parameter into a new block, and when consensus on the new block is reached, appends the new block to the tail of the blockchain.
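The on-chain recording of claim 8 can be sketched as a hash-chained append gated by a consensus check. The majority-vote stand-in below is an assumption; the claim does not specify a consensus algorithm:

```python
import hashlib, json

def make_block(prev_hash, record):
    # claim 8 sketch: pack the engine identifier, both timestamps and
    # the delay parameter into a new block chained by hash
    body = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"prev": prev_hash, "record": record,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def append_if_consensus(chain, record, votes):
    # stand-in consensus check (an assumption): append the new block
    # to the tail only when a majority of nodes agrees
    block = make_block(chain[-1]["hash"] if chain else "0" * 64, record)
    if sum(votes) > len(votes) / 2:
        chain.append(block)
    return chain

chain = append_if_consensus([], {"engine_id": "e1", "t1": 100, "t2": 250,
                                 "delay_ms": 150}, votes=[1, 1, 0])
print(len(chain))  # 1
```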
9. The method of claim 8, further comprising:
receiving data synchronization requests of other nodes in the blockchain network;
verifying, in response to the data synchronization request, the authority of the other nodes;
and when the authority of the other nodes passes verification, synchronizing data between the current node and the other nodes so that the other nodes obtain the general execution engine identifier, the first timestamp information, the second timestamp information and the delay parameter.
10. The method of claim 8, further comprising:
parsing, in response to a query request, the query request to obtain a corresponding object identifier;
acquiring authority information in a target block in the blockchain network according to the object identifier;
checking whether the authority information matches the object identifier;
when the authority information matches the object identifier, acquiring the corresponding general execution engine identifier, first timestamp information, second timestamp information and delay parameter in the blockchain network;
and in response to the query request, pushing the acquired data to the corresponding client, so that the client obtains the corresponding general execution engine identifier, first timestamp information, second timestamp information and delay parameter stored in the blockchain network.
11. A delay parameter acquisition apparatus, the apparatus comprising:
the information transmission module is used for acquiring the use environment information of the general execution engine;
the information processing module is used for determining a hook point matched with the general execution engine based on the using environment information of the general execution engine;
the information processing module is used for mounting the general execution engine at the hook point;
the information processing module is used for recording first timestamp information through the general execution engine when the client sends task request information to the server;
the information processing module is used for recording second timestamp information through the general execution engine when the server sends a task processing result to the client;
the information processing module is configured to determine a delay parameter of the usage environment of the general execution engine based on the first timestamp information and the second timestamp information.
12. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor, configured to execute the executable instructions stored in the memory to implement the time delay parameter acquisition method according to any one of claims 1 to 10.
13. A computer-readable storage medium storing executable instructions, wherein the executable instructions, when executed by a processor, implement the time delay parameter acquisition method according to any one of claims 1 to 10.
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK40081524A true HK40081524A (en) | 2023-05-19 |