CN116701011A - Algorithm service dispatching management system based on rule engine - Google Patents
Algorithm service dispatching management system based on rule engine
- Publication number
- CN116701011A CN116701011A CN202310539996.2A CN202310539996A CN116701011A CN 116701011 A CN116701011 A CN 116701011A CN 202310539996 A CN202310539996 A CN 202310539996A CN 116701011 A CN116701011 A CN 116701011A
- Authority
- CN
- China
- Prior art keywords
- algorithm
- rule engine
- server
- rule
- service
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/541—Interprogram communication via adapters, e.g. between incompatible applications
- G06F9/5044—Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals, considering hardware capabilities
- G06F9/505—Allocation of resources to service a request, the resource being a machine, considering the load
- G06F9/5083—Techniques for rebalancing the load in a distributed system
- G06F9/548—Object oriented; Remote method invocation [RMI]
- G06F2209/541—Client-server (indexing scheme relating to G06F9/54)
- G06F2209/549—Remote execution (indexing scheme relating to G06F9/54)
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/1008—Server selection for load balancing based on parameters of servers, e.g. available memory or workload
- H04L67/1029—Server selection for load balancing using data related to the state of servers by a load balancer
- H04L67/2895—Intermediate processing functionally located close to the data provider application, e.g. reverse proxies
- H04L67/30—Profiles
- H04L67/565—Conversion or adaptation of application format or content
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
An algorithm service dispatch management system based on a rule engine comprises a server and a client. The client is used for user management and authorization, for configuring proxy algorithm service information, for displaying the algorithm service information list, and for configuring rules based on the rule engine. The server interacts with external services: it receives the tasks they issue, converts parameters into the format accepted by the corresponding algorithm according to the configured rules, invokes the algorithm according to the configured proxy service information, and adapts every algorithm's return result through the rule engine. The disclosed system exposes a unified interface that adapts to different algorithm vendors and simplifies adaptation work in external projects. A rule engine is introduced to define field mapping rules, so that the differing data formats of the algorithms are mapped into a unified transmission and return format.
Description
Technical Field
The application relates to the technical field of algorithm services, and in particular to an algorithm service scheduling management system based on a rule engine.
Background
When developing a new project, the required application algorithms must be integrated one by one. It is common for a single algorithm to be supplied by several vendors, and for several servers running the same algorithm to be deployed independently. The multitude of differing algorithm standards makes project adaptation difficult, and the independent deployment leads to unreasonable task allocation when project algorithm tasks are issued.
Disclosure of Invention
The present application has been made in view of the above problems, and it is an object of the present application to provide a rule-engine-based algorithm service scheduling management system that overcomes, or at least partially solves, them.
In order to solve the technical problems, the embodiment of the application discloses the following technical scheme:
an algorithm service dispatch management system based on a rule engine, comprising a server and a client, wherein: the client is used for user management and authorization, for configuring proxy algorithm service information, for displaying the algorithm service information list, and for configuring rules based on the rule engine;

the server is used for interacting with external services: it receives the tasks they issue, converts parameters into the format accepted by the corresponding algorithm according to the configured rules, invokes the algorithm according to the configured proxy service information, and adapts every algorithm's return result through the rules configured in the rule engine.
Further, the client configures external service permissions, and generates and stores a key for each external service; the key carries an authorization period, and external access is possible only when the corresponding key is presented.
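The check described above can be sketched as a small key store; the class and field names here are illustrative, since the patent does not specify an implementation, and a real system would also sign or encrypt the key material.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: keys issued to external services carry an expiry time,
// and every incoming request must present a known, unexpired key.
public class KeyStore {
    // service key -> expiry (epoch seconds); names are illustrative
    private final Map<String, Long> keys = new ConcurrentHashMap<>();

    public void issue(String key, long expiresAtEpochSecond) {
        keys.put(key, expiresAtEpochSecond);
    }

    /** A request is accepted only if it carries a known key that has not expired. */
    public boolean isAuthorized(String key, long nowEpochSecond) {
        Long expiry = keys.get(key);
        return expiry != null && nowEpochSecond < expiry;
    }
}
```

An unknown key and an expired key are rejected identically, so callers learn nothing about which keys exist.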
Further, the client configures proxy service information, i.e. algorithm-related information including the algorithm vendor number, algorithm type number, algorithm version number, the load balancing strategy of the algorithm servers, and the strategy for acquiring analysis results; it also associates and configures the server list of the corresponding algorithm, the server weights, and the upper limit on the number of concurrent analysis tasks. The configuration is synchronized into the service cache in real time, which speeds up retrieval of the proxy information.
Further, the client displays the proxy service and task information, specifically: the client polls the configured algorithm server list by heartbeat, updates in real time each server's online state, memory consumption, and the state of the analysis tasks running on it, and displays these on the client page.
Further, the server comprises a user permission management module, an interaction module, a management module, and a reverse proxy module, wherein: the user permission module exchanges user-related commands with the client, generates user keys, and checks whether the key in an external request is correct;

the interaction module interacts with external services, provides a unified algorithm interface, receives the requests sent by external services, and converts parameters into the format accepted by the corresponding algorithm based on a rule chain of the rule engine;

the management module interacts with the client and handles the configuration of algorithm services and rule engine rules;

the reverse proxy module interacts with the algorithm services, implements the load balancing strategy, issues tasks, acquires the corresponding analysis results in real time, converts them into a unified format based on the rule engine's rule chain, and writes them into Elasticsearch.
Further, during algorithm invocation, when the current algorithm is deployed on several servers, invocation can be controlled by a weighted round-robin (polling weighting) strategy, a memory space strategy, or an analysis path count strategy.
Further, the polling weighting strategy distributes algorithm tasks according to the configured server weights: the higher a server's weight, the more likely it is to be selected for issuing;

the memory space strategy distributes algorithm tasks according to each server's remaining memory: the server with more free memory is preferred;

the analysis path count strategy distributes algorithm tasks according to the number of analyses in progress: the server with fewer running tasks is preferred.
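The three selection criteria above can be sketched as comparators over a server descriptor. This is an illustrative minimal form, not the patent's implementation: the class and field names are invented, and a production weighted round-robin would rotate among servers in proportion to weight rather than always picking the maximum.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of the three dispatch strategies described above.
public class Dispatcher {
    public static class Server {
        public final String id;
        public final int weight;        // configured weight (weighted strategy)
        public final long freeMemory;   // remaining memory (memory strategy)
        public final int activeTasks;   // analyses in progress (path-count strategy)
        public Server(String id, int weight, long freeMemory, int activeTasks) {
            this.id = id; this.weight = weight;
            this.freeMemory = freeMemory; this.activeTasks = activeTasks;
        }
    }

    // Weighted strategy: the highest configured weight wins.
    public static Optional<Server> byWeight(List<Server> servers) {
        return servers.stream().max(Comparator.comparingInt(s -> s.weight));
    }

    // Memory strategy: the most free memory wins.
    public static Optional<Server> byMemory(List<Server> servers) {
        return servers.stream().max(Comparator.comparingLong(s -> s.freeMemory));
    }

    // Path-count strategy: the fewest in-progress analysis tasks wins.
    public static Optional<Server> byActiveTasks(List<Server> servers) {
        return servers.stream().min(Comparator.comparingInt(s -> s.activeTasks));
    }
}
```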
Further, during algorithm invocation, issuing tasks to a server is prohibited when the corresponding algorithm service is offline, its storage resources are insufficient, or its number of analysis tasks has reached the configured upper limit. When every server in the algorithm cluster is out of memory or at its task limit, subsequent tasks wait in a queue and are issued once a server has spare capacity.
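The queueing behaviour can be sketched as a capacity gate: a task is issued immediately when a slot is free, otherwise it waits and is released when a running task completes. The class and method names are illustrative assumptions; the patent describes the behaviour, not an API.

```java
import java.util.ArrayDeque;
import java.util.Optional;
import java.util.Queue;

// Hypothetical sketch: tasks beyond the cluster's capacity wait in a queue
// and are issued again once a slot frees up.
public class TaskGate {
    private final int maxActive;
    private int active = 0;
    private final Queue<String> waiting = new ArrayDeque<>();

    public TaskGate(int maxActive) { this.maxActive = maxActive; }

    /** Try to issue a task; returns false (and queues it) when at the limit. */
    public boolean submit(String taskId) {
        if (active < maxActive) { active++; return true; }  // issued now
        waiting.add(taskId);                                 // wait for capacity
        return false;
    }

    /** Called when a task finishes: free the slot and issue the next waiter. */
    public Optional<String> complete() {
        active--;
        String next = waiting.poll();
        if (next != null) { active++; return Optional.of(next); }
        return Optional.empty();
    }
}
```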
Further, the rule engine strips business decisions out of the program: decisions are made from preset statements and externally supplied rules. The easy-rules engine is used to recognize a Java expression string on the JDK, execute it, and return true or false as the judgment result. Field dictionary mapping, data filtering, field deletion, and intelligent matching of field data are all controlled by supplying a Java expression string externally and letting the rule engine evaluate it.
Further, a rule chain connects several rules in series in a fixed order. Starting from the first rule, a rule whose condition evaluates to true is executed and one whose condition evaluates to false is skipped, until the last rule has been processed. On this basis the chain performs complex processing on the data, completing the adaptation of parameters and task results.
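A minimal sketch of such a chain is shown below. It is illustrative only: the patent uses the easy-rules Java engine with conditions supplied as expression strings, whereas this sketch hard-codes conditions and actions as lambdas to stay self-contained.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

// Hypothetical rule-chain sketch: each rule pairs a condition with an action;
// rules fire in configured order, and a rule whose condition is false is skipped.
public class RuleChain<T> {
    private static final class Step<U> {
        final Predicate<U> condition;
        final UnaryOperator<U> action;
        Step(Predicate<U> condition, UnaryOperator<U> action) {
            this.condition = condition;
            this.action = action;
        }
    }

    private final List<Step<T>> steps = new ArrayList<>();

    public RuleChain<T> add(Predicate<T> condition, UnaryOperator<T> action) {
        steps.add(new Step<>(condition, action));
        return this;
    }

    /** Run every rule in order; apply an action only when its condition holds. */
    public T run(T input) {
        T value = input;
        for (Step<T> s : steps) {
            if (s.condition.test(value)) value = s.action.apply(value);
        }
        return value;
    }
}
```

Chaining the rules this way lets each vendor adaptation stay a small, independently configurable unit.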
The technical solutions provided by the embodiments of the application have at least the following beneficial effects:
the application discloses an algorithm service scheduling management system based on a rule engine, which comprises the following components: the system comprises a server and a client; the client is used for user management authorization and configuration proxy algorithm service information, is also used for displaying an algorithm service information list, and is also used for configuring rules based on a rule engine; the server is used for interacting with external services, receiving tasks issued by the external services, converting parameters into parameter formats acceptable by corresponding algorithms based on configured rules, calling algorithm functions according to configured proxy service information, and adapting all algorithms to return results through a rule engine.
The application supports visual configuration: the proxy is configured directly on the client, simply and quickly, and the state of the configured proxy services and of issued analysis tasks is displayed in real time. Several load balancing strategies are supported, and the strategy can be adjusted per algorithm according to the memory or running-task count of the algorithm servers. The application provides a unified interface that adapts different algorithm vendors and eases adaptation work in external projects. It introduces a rule engine to define field mapping rules, mapping the differing algorithm data formats into a unified transmission and return format.
The technical scheme of the application is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate the application and together with the embodiments of the application, serve to explain the application. In the drawings:
fig. 1 is a structural diagram of an algorithm service scheduling management system based on a rule engine in embodiment 1 of the present application.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In order to solve the problems in the prior art, the embodiment of the application provides an algorithm service scheduling management system based on a rule engine.
Example 1
This embodiment discloses an algorithm service scheduling management system based on a rule engine which, as shown in fig. 1, comprises a server and a client, wherein:

the client is used for user management and authorization, for configuring proxy algorithm service information, for displaying the algorithm service information list, and for configuring rules based on the rule engine;
specifically, the client configures the external service authority, generates and stores the external service corresponding secret key, the secret key comprises the authorized time, and the external access can be performed only by carrying the corresponding secret key. The client configures proxy service information, specifically configures related information of an algorithm, including an algorithm manufacturer number, an algorithm type number, an algorithm version number, an algorithm server load balancing strategy and an analysis result acquisition strategy, and associates and configures a server information list of a corresponding algorithm, server weight and analysis task path number upper limit; and updating the configuration information into the service cache in real time, so that the agent information acquisition speed is improved. The client displays the proxy service and the task information, and specifically comprises the following steps: the heartbeat of the client accesses the configured algorithm server list, updates the online state of the algorithm server, the consumption condition of the memory space resources and the state of the algorithm task being analyzed on the server in real time, and displays the state on the page of the client.
The server interacts with external services: it receives the tasks they issue, converts parameters into the format accepted by the corresponding algorithm according to the configured rules, invokes the algorithm according to the configured proxy service information, and adapts every algorithm's return result through the rule engine.
In this embodiment, the server includes a user permission management module, an interaction module, a management module, and a reverse proxy module, where: the user permission module exchanges user-related commands with the client, generates user keys, and checks whether the key in an external request is correct;

the interaction module interacts with external services, provides a unified algorithm interface, receives the requests sent by external services, and converts parameters into the format accepted by the corresponding algorithm based on a rule chain of the rule engine;

the management module interacts with the client and handles the configuration of algorithm services and rule engine rules;

the reverse proxy module interacts with the algorithm services, implements the load balancing strategy, issues tasks, acquires the corresponding analysis results in real time, converts them into a unified format based on the rule engine's rule chain, and writes them into Elasticsearch.
In this embodiment, during algorithm invocation, when the current algorithm is deployed on several servers, invocation is controlled by a weighted round-robin strategy, a memory space strategy, or an analysis path count strategy. Issuing tasks to a server is prohibited when the corresponding algorithm service is offline, its storage resources are insufficient, or its number of analysis tasks has reached the configured upper limit; when every server in the algorithm cluster is out of memory or at its task limit, subsequent tasks wait in a queue and are issued once a server has spare capacity.
Specifically, the polling weighting strategy distributes algorithm tasks according to the configured server weights: the higher a server's weight, the more likely it is to be selected for issuing;

the memory space strategy distributes algorithm tasks according to each server's remaining memory: the server with more free memory is preferred;

the analysis path count strategy distributes algorithm tasks according to the number of analyses in progress: the server with fewer running tasks is preferred.
In this embodiment, the reverse proxy module acquires task analysis results in real time in one of three modes: interface callback, message middleware consumption, or timed query; wherein:
the interface callback mode works as follows: when a task is issued, the information of the result-receiving interface, including its IP address and port, is passed along; after the algorithm has analyzed the task, it calls back this interface and pushes the data to the management system;
in the message middleware consumption mode, the algorithm writes its results to the designated message middleware (Kafka), and the management system consumes the corresponding Kafka topic directly;
in the timed query mode, the management system periodically calls the task-result query interface provided by the algorithm and queries for incremental results.
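The incremental part of the timed query can be sketched as a poller that remembers the last offset it has seen and asks only for newer results on each tick. The query function below is a stand-in for the vendor's task-result API, which the patent does not specify; names are illustrative.

```java
import java.util.List;
import java.util.function.BiFunction;

// Hypothetical sketch of the timed incremental query: the management system
// tracks the last result offset and fetches only what arrived since then.
public class IncrementalPoller {
    private long lastOffset = 0;
    // (fromOffset, limit) -> batch of results; stands in for the vendor API
    private final BiFunction<Long, Integer, List<String>> query;

    public IncrementalPoller(BiFunction<Long, Integer, List<String>> query) {
        this.query = query;
    }

    /** One polling tick: fetch new results since the last offset and advance it. */
    public List<String> pollOnce(int limit) {
        List<String> batch = query.apply(lastOffset, limit);
        lastOffset += batch.size();
        return batch;
    }
}
```

In a real deployment this `pollOnce` would be driven by a scheduler (e.g. a `ScheduledExecutorService`) at the configured interval.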
In this embodiment, the rule engine strips business decisions out of the program, making them from preset statements and externally supplied rules. The easy-rules engine, a Java rule engine, can recognize a Java expression string on the JDK and return true or false as the judgment result. That is, a Java expression string can be supplied from outside, evaluated by the easy-rules engine, and its result used to drive the subsequent steps; this is how field dictionary mapping, data filtering, deletion of a given field from the returned result, and similar functions are realized. Specifically, a rule chain connects several rules in series in a fixed order: starting from the first rule, a rule whose condition evaluates to true is executed and one whose condition evaluates to false is skipped, until the last rule has been processed; on this basis the chain performs complex processing on the data, completing the adaptation of parameters and task results.
Field dictionary mapping: different algorithms and vendor standards lead to differences in data formats. To keep the external data format consistent, the type dictionaries of the various vendors must be adapted for both external input parameters and algorithm return results; with many vendors, hand-written matching and conversion becomes tedious. The rule engine lets mapping rules be defined directly, completing the mapping through field-substitution statements.
Data filtering: some data in the results returned directly by an algorithm may be meaningless, and writing it into Elasticsearch would pollute the database. A filtering rule can be set through the rule engine, for example: if the similarity of a result is below 90%, it is filtered out and not returned.
Field deletion: in practice some data, such as ID-card numbers, must not be returned to external projects. The rule engine can check whether such a field, or ID-card-like data, is present and, if so, delete the corresponding field.
Intelligent matching of field data: some sensitive data may appear in unknown fields and cannot be deleted by naming a field, so a simple logic statement is not enough for matching. easy-rules provides an import facility: when the project starts, a processing class object can be registered with the engine, and rules can then reference the imported object directly to perform the processing.
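The first three adaptations above can be sketched as plain methods over a map-shaped vendor result. This is illustrative only: in the patent these rules are expression strings evaluated by the rule engine, and the field names, dictionary, and 90% threshold here are example values.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of dictionary mapping, filtering, and field deletion
// over a vendor result represented as a key-value map.
public class ResultAdapter {
    // Dictionary mapping: rename vendor-specific field names to unified ones.
    public static Map<String, Object> mapFields(Map<String, Object> r, Map<String, String> dict) {
        Map<String, Object> out = new HashMap<>();
        r.forEach((k, v) -> out.put(dict.getOrDefault(k, k), v));
        return out;
    }

    // Data filtering: keep a result only if its similarity meets the threshold.
    public static boolean keep(Map<String, Object> r, double minSimilarity) {
        Object s = r.get("similarity");
        return s instanceof Number && ((Number) s).doubleValue() >= minSimilarity;
    }

    // Field deletion: remove a sensitive field such as an ID-card number.
    public static Map<String, Object> dropField(Map<String, Object> r, String field) {
        Map<String, Object> out = new HashMap<>(r);
        out.remove(field);
        return out;
    }
}
```

Composed in a rule chain, these three steps turn each vendor's raw result into the unified return format before it is written to Elasticsearch.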
This embodiment discloses an algorithm service scheduling management system based on a rule engine, comprising a server and a client. The client configures external service permissions and proxy service information, displays the proxy services and task information, and configures field mapping rules based on the rule engine. The server interacts with the external services according to the permissions configured on the client, receives the tasks they issue, parses them, invokes the algorithms according to the parsing result and the configured proxy service information, and adapts every algorithm's return result through the rule engine.

This embodiment supports visual configuration: the proxy is configured directly on the client, simply and quickly, and the state of the configured proxy services and of issued analysis tasks is displayed in real time. Several load balancing strategies are supported, and the strategy can be adjusted per algorithm according to the memory or running-task count of the algorithm servers. The embodiment provides a unified interface that adapts different algorithm vendors and eases adaptation work in external projects. It introduces a rule engine to define field mapping rules, mapping the differing algorithm data formats into a unified transmission and return format.
It should be understood that the specific order or hierarchy of steps in the processes disclosed are examples of exemplary approaches. Based on design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of this application.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. The processor and the storage medium may reside as discrete components in a user terminal.
For a software implementation, the techniques described in this disclosure may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. These software codes may be stored in memory units and executed by processors. The memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.
The foregoing description includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, as used in the specification or claims, the term "including" is intended to be inclusive in a manner similar to the term "comprising" as interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or the claims is intended to mean a "non-exclusive or".
Claims (10)
1. An algorithm service scheduling management system based on a rule engine, comprising a server and a client; wherein:
the client is used for user management and authorization and for configuring proxy algorithm service information; it is also used for displaying the algorithm service information list and for configuring rules based on the rule engine;
the server is used for interacting with external services: it receives tasks issued by the external services, converts parameters into the parameter format acceptable to the corresponding algorithm based on the configured rules, calls algorithm functions according to the configured proxy service information, and adapts the return results of all algorithms through rules configured in the rule engine.
2. The algorithm service scheduling management system according to claim 1, wherein the client configures external service rights and generates and stores a key corresponding to each external service, the key including an authorization time; external access must carry the corresponding key.
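The patent does not publish source code; the key check of claim 2 (a key that embeds its authorization time and must accompany every external request) might be sketched as follows. The class, method names, and the token layout are illustrative assumptions, not the patented implementation:

```java
import java.time.Instant;
import java.util.Base64;

// Illustrative sketch of claim 2: the key embeds its authorization expiry,
// and every external request must present a key that is still valid.
public class KeyCheck {

    // Encode "serviceId:expiryEpochSeconds" as an opaque token (hypothetical layout).
    static String issueKey(String serviceId, Instant expiresAt) {
        String payload = serviceId + ":" + expiresAt.getEpochSecond();
        return Base64.getUrlEncoder().withoutPadding()
                     .encodeToString(payload.getBytes());
    }

    // Reject requests whose key is malformed or past its authorization time.
    static boolean isValid(String key, Instant now) {
        try {
            String[] parts = new String(Base64.getUrlDecoder().decode(key)).split(":");
            return parts.length == 2 && now.getEpochSecond() < Long.parseLong(parts[1]);
        } catch (IllegalArgumentException e) {
            return false; // bad Base64 or non-numeric expiry
        }
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        String key = issueKey("svc-1", now.plusSeconds(3600));
        System.out.println(isValid(key, now));                   // true
        System.out.println(isValid(key, now.plusSeconds(7200))); // false: expired
    }
}
```

A production system would sign or encrypt the token rather than merely encode it; this sketch only demonstrates the expiry check implied by "the key includes an authorization time."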
3. The algorithm service scheduling management system based on the rule engine according to claim 1, wherein the client configures the proxy service information, specifically the related information of an algorithm, including the algorithm manufacturer number, algorithm type number, algorithm version number, algorithm server load balancing strategy and analysis result acquisition strategy, and associates with it the server information list of the corresponding algorithm, the server weights and the upper limit on the number of analysis task paths; the configuration information is updated into the server cache in real time to speed up retrieval of the proxy information.
4. The algorithm service scheduling management system based on the rule engine according to claim 1, wherein the client displays the proxy service and task information, specifically: the client polls the configured algorithm server list by heartbeat, updates in real time the online state of each algorithm server, the consumption of its memory space resources and the state of the algorithm tasks being analyzed on that server, and displays these on the client page.
5. The algorithm service scheduling management system based on a rule engine according to claim 1, wherein the server comprises a user rights management module, an interaction module, a management module and a reverse proxy module; the user rights management module is used for exchanging user-related commands with the client, generating user keys, and checking whether the key in an external request is correct;
the interaction module is used for interacting with external services, providing a unified algorithm-related interface, receiving requests sent by the external services, and converting parameters into the parameter format acceptable to the corresponding algorithm based on a rule chain of the rule engine;
the management module is used for interacting with the client and processing algorithm service and rule engine rule related configuration;
the reverse proxy module is used for interacting with the algorithm service, realizing a load balancing strategy, completing task issuing, acquiring a corresponding task analysis result in real time, converting the task result into a uniform format based on a rule engine rule chain, and writing the task result into an elastic search.
6. The algorithm service scheduling management system based on the rule engine according to claim 5, wherein, in the process of calling an algorithm, when multiple servers are deployed for the current algorithm, algorithm call control supports a polling weighting policy, a memory space policy and an analysis path number policy.
7. The algorithm service scheduling management system based on the rule engine according to claim 6, wherein the polling weighting policy is specifically: algorithm tasks are distributed according to the configured server weights, and the higher a server's weight, the more preferentially that server is selected for task issuing;
the memory space policy is specifically: algorithm tasks are distributed according to the remaining memory space of each server, and the more free memory a server has, the more preferentially it is selected for task issuing;
the analysis path number policy is specifically: algorithm tasks are distributed according to the number of analysis tasks in progress, and the fewer analysis tasks a server is running, the more preferentially it is selected for task issuing.
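The three policies of claims 6-7 might be sketched as below. This is one plausible reading, not the patented implementation; all identifiers are hypothetical, and the polling weighting policy is rendered here with the classic smooth weighted round-robin scheme, which the patent does not name:

```java
import java.util.List;

// Illustrative sketch of the claim-6/7 server-selection policies.
public class DispatchPolicies {

    static class Server {
        final String name;
        final int weight;       // configured weight (polling weighting policy)
        final long freeMemory;  // remaining memory space (memory space policy)
        final int activeTasks;  // analysis tasks in progress (path number policy)

        Server(String name, int weight, long freeMemory, int activeTasks) {
            this.name = name;
            this.weight = weight;
            this.freeMemory = freeMemory;
            this.activeTasks = activeTasks;
        }
    }

    private final List<Server> servers;
    private final int[] current; // running counters for smooth weighted round-robin

    DispatchPolicies(List<Server> servers) {
        this.servers = servers;
        this.current = new int[servers.size()];
    }

    // Polling weighting: higher weight -> proportionally more selections.
    Server pickByWeight() {
        int total = 0, best = 0;
        for (int i = 0; i < servers.size(); i++) {
            current[i] += servers.get(i).weight;
            total += servers.get(i).weight;
            if (current[i] > current[best]) best = i;
        }
        current[best] -= total;
        return servers.get(best);
    }

    // Memory space policy: the server with the most free memory wins.
    Server pickByFreeMemory() {
        Server best = servers.get(0);
        for (Server s : servers) if (s.freeMemory > best.freeMemory) best = s;
        return best;
    }

    // Analysis path number policy: the server with the fewest running tasks wins.
    Server pickByActiveTasks() {
        Server best = servers.get(0);
        for (Server s : servers) if (s.activeTasks < best.activeTasks) best = s;
        return best;
    }

    public static void main(String[] args) {
        List<Server> list = List.of(
                new Server("a", 5, 4_000, 3),
                new Server("b", 1, 9_000, 1));
        DispatchPolicies d = new DispatchPolicies(list);
        System.out.println(d.pickByFreeMemory().name);  // b (more free memory)
        System.out.println(d.pickByActiveTasks().name); // b (fewer active tasks)
        for (int i = 0; i < 6; i++) System.out.print(d.pickByWeight().name); // aaabaa
    }
}
```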
8. The algorithm service scheduling management system based on a rule engine according to claim 5, wherein, in the process of calling an algorithm, task issuing to a server is forbidden when the corresponding algorithm service is offline, its storage space resources are insufficient, or the number of analysis tasks in progress on that algorithm server equals the configured upper limit; when the memory resources of all services in the algorithm cluster are insufficient or the number of tasks reaches the upper limit, subsequent tasks enter a queue to wait and are issued again once an algorithm server has spare capacity.
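The back-pressure behavior of claim 8 (issue only to servers with spare capacity; queue when the whole cluster is saturated; re-issue when a path frees up) might be sketched as follows. Class and method names are illustrative, not from the patent:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of claim 8's queue-and-retry dispatch gate.
public class TaskGate {

    static class Server {
        final int maxPaths;  // configured upper limit of analysis task paths
        int activePaths;     // analysis tasks currently in progress
        boolean online = true;

        Server(int maxPaths) { this.maxPaths = maxPaths; }

        // Issuing is forbidden when offline or at the path-number upper limit.
        boolean canAccept() { return online && activePaths < maxPaths; }
    }

    private final Server[] cluster;
    private final Deque<String> waiting = new ArrayDeque<>();

    TaskGate(Server... cluster) { this.cluster = cluster; }

    // Try to issue; if no server can accept, park the task in the wait queue.
    boolean submit(String taskId) {
        for (Server s : cluster) {
            if (s.canAccept()) { s.activePaths++; return true; }
        }
        waiting.addLast(taskId);
        return false;
    }

    // Called when a task finishes: free the path and re-issue a waiting task.
    void complete(Server s) {
        s.activePaths--;
        String next = waiting.pollFirst();
        if (next != null) submit(next);
    }

    int queued() { return waiting.size(); }
}
```

With a single server limited to one path, a second submitted task waits in the queue and is issued automatically when the first completes.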
9. The algorithm service scheduling management system according to claim 5, wherein the rule engine strips business decisions out of the program, so that business decisions are made through externally input rules expressed as predefined statements; an easy-rule engine based on the JDK is used to parse a Java expression string, execute it, and return a Boolean judgment result of true or false; field dictionary comparison and mapping, data filtering, field deletion and intelligent matching of field data are controlled by externally inputting a Java statement string and having the rule engine parse the corresponding string.
10. The algorithm service scheduling management system according to claim 9, wherein the rule chain connects a plurality of rules in series in a defined order; starting from the first rule, execution follows the rule corresponding to true when a rule executes successfully and the rule corresponding to false when it fails, until the last rule has been executed; based on the results, complex processing is performed on the data to complete the adaptation of parameters and task results.
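The claim-10 rule chain (rules in series, each branching to a true-successor or false-successor) might be sketched as below. This is a minimal reading of the claim, not the patented implementation; all names are hypothetical, and the example conditions stand in for the externally supplied Java expression strings of claim 9:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

// Illustrative sketch of the claim-10 rule chain: each node evaluates a
// condition on the message, applies its action when the condition holds,
// and hands off to its true-branch or false-branch successor.
public class RuleChain {

    static class RuleNode {
        final Predicate<Map<String, Object>> condition;
        final UnaryOperator<Map<String, Object>> action; // runs when condition is true
        RuleNode onTrue;   // next rule when the condition evaluates to true
        RuleNode onFalse;  // next rule when the condition evaluates to false

        RuleNode(Predicate<Map<String, Object>> condition,
                 UnaryOperator<Map<String, Object>> action) {
            this.condition = condition;
            this.action = action;
        }
    }

    // Walk the chain from the first rule to the last, branching on each result.
    static Map<String, Object> fire(RuleNode first, Map<String, Object> msg) {
        RuleNode node = first;
        while (node != null) {
            if (node.condition.test(msg)) {
                msg = node.action.apply(msg);
                node = node.onTrue;
            } else {
                node = node.onFalse;
            }
        }
        return msg;
    }

    public static void main(String[] args) {
        // Rule 1: map a raw code via a dictionary (claim 9's field dictionary
        // comparison); Rule 2: delete an internal field (claim 9's field deletion).
        RuleNode mapType = new RuleNode(
                m -> "01".equals(m.get("type")),
                m -> { m.put("type", "face"); return m; });
        RuleNode dropRaw = new RuleNode(
                m -> m.containsKey("raw"),
                m -> { m.remove("raw"); return m; });
        mapType.onTrue = dropRaw;
        mapType.onFalse = dropRaw;

        Map<String, Object> msg = new HashMap<>();
        msg.put("type", "01");
        msg.put("raw", new byte[0]);
        System.out.println(fire(mapType, msg)); // {type=face}
    }
}
```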
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310539996.2A CN116701011A (en) | 2023-05-15 | 2023-05-15 | Algorithm service dispatching management system based on rule engine |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN116701011A true CN116701011A (en) | 2023-09-05 |
Family
ID=87838354
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310539996.2A Pending CN116701011A (en) | 2023-05-15 | 2023-05-15 | Algorithm service dispatching management system based on rule engine |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116701011A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117579477A (en) * | 2023-10-27 | 2024-02-20 | 中科驭数(北京)科技有限公司 | A VPP-based configuration saving and restoration system, method and device |
| CN118779115A (en) * | 2024-09-09 | 2024-10-15 | 紫金诚征信有限公司 | Massive data decision engine system, method and computer program based on Java |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |