
CN118694812B - Service domain deployment reconstruction method and system for distributed ERP system - Google Patents

Info

Publication number
CN118694812B
CN118694812B
Authority
CN
China
Prior art keywords
data
service
test
domain
business
Prior art date
Legal status
Active
Application number
CN202411186819.1A
Other languages
Chinese (zh)
Other versions
CN118694812A (en)
Inventor
虎威
金鑫
龙安菊
Current Assignee
Guizhou Zhuoxun Software Co ltd
Original Assignee
Guizhou Zhuoxun Software Co ltd
Priority date
Filing date
Publication date
Application filed by Guizhou Zhuoxun Software Co ltd filed Critical Guizhou Zhuoxun Software Co ltd
Priority to CN202411186819.1A priority Critical patent/CN118694812B/en
Publication of CN118694812A publication Critical patent/CN118694812A/en
Application granted granted Critical
Publication of CN118694812B publication Critical patent/CN118694812B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/51Discovery or management thereof, e.g. service location protocol [SLP] or web services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/08Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0805Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability
    • H04L43/0817Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking functioning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/10Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/2866Architectures; Arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/34Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/03Protocol definition or specification 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/30Definitions, standards or architectural aspects of layered protocol stacks
    • H04L69/32Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Environmental & Geological Engineering (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment provides a service domain deployment reconstruction method and system for a distributed ERP system. The reconstruction method comprises: carding the service functions of each service domain on the ERP system and the interaction relationships among the data entities of the service functions across different service domains; dividing according to the service functions of each service domain to obtain the micro services corresponding to each service domain; setting the data flow directions and the functional interfaces between each service domain and the other service domains according to the service functions and the interaction relationships; performing database adaptation and source code transformation operations; performing independent tests and integrated tests on each micro service in the migration platform system, together with integrated tests and performance tests on the data in the migration platform system; and performing trial operation and staged operation in all business departments based on the test results. By adopting this technical scheme, the reconstruction efficiency of the ERP system can be improved.

Description

Service domain deployment reconstruction method and system for distributed ERP system
Technical Field
The embodiment of the specification belongs to the technical field of information management, and particularly relates to a service domain deployment and reconstruction method and system for a distributed ERP system.
Background
With the rapid development of Internet of Things information technology, the demand of enterprises for Enterprise Resource Planning (ERP) systems has become more urgent. As an enterprise's informatization decision-making and management platform, the ERP system is important to the operation of the whole enterprise.
As enterprise scale grows and business complexity increases, conventional centralized ERP platforms face challenges in handling large-scale concurrent requests, data processing capacity, and system extensibility. Distributed deployment has become a key means of improving ERP platform performance, reliability, and flexibility. However, when the existing ERP system cannot be used or cannot meet enterprise demands, the office efficiency of the enterprise is severely restricted.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide a method and a system for service domain deployment and reconfiguration of a distributed ERP system, which can improve efficiency of ERP system reconfiguration.
The embodiment of the specification provides a service domain deployment reconstruction method of a distributed ERP system, which comprises the following steps:
service functions of each service domain on the ERP system are combed, and interaction relations among data entities of the service functions among different service domains are obtained;
Dividing according to service functions of each service domain to obtain micro services corresponding to each service domain;
setting data flow directions among the business domains and different business domains and setting functional interfaces among the business domains and different business domains according to business functions of the business domains and the interaction relation;
performing database adaptation and source code transformation operations, so that a first database of the ERP system is adapted to a second database of a migration platform system, micro services corresponding to each service domain are migrated to the migration platform system, and data flows among data entities of each micro service, each service domain and different service domains are migrated to the migration platform system, and security inspection is performed on the migration platform system;
And performing independent test and integrated test on each micro service in the migration platform system, performing integrated test and performance test on data in the migration platform system, performing test operation based on test results, and operating in all business departments in stages.
Optionally, the carding of the service functions of each service domain on the ERP system and the interaction relationships between the data entities of the service functions between different service domains includes:
based on the collected workflow information of each business domain in the ERP system, a business flow chart is generated by adopting a flow modeling tool, wherein the business flow chart comprises the action relation and the data flow direction of each business function of each business domain;
Analyzing a database of the ERP system, and identifying data entities in each service domain and interaction relations among the data entities by establishing an entity relation diagram;
According to the service flow chart, based on the identified service activities, decomposing the service functions of each service domain into a plurality of management modules, and determining the functional boundaries and data entities of each management module and the interaction relationship among each management module according to the calling relationship among each management module;
according to the interaction relation among the management modules and the interaction relation among the data entities, a data flow diagram among the management modules is constructed, and the interaction relation among the data entities of the business functions among different business domains is determined by identifying the flow paths of the data entities of the management modules among the different management modules.
Optionally, in the step of analyzing the database of the ERP system and identifying the data entities in each service domain and the interaction relationship between each data entity by establishing an entity relationship graph, the reconstruction method further includes:
and establishing a data dictionary corresponding to the data entity, wherein the data dictionary comprises definition, data type, value range and business meaning of the data entity.
Optionally, the setting the data flow between each service domain and different service domains according to the service function of each service domain and the interaction relationship includes:
The method comprises the steps of identifying key events in service functions of each service domain, defining structures and metadata of each key event, establishing a key event processing and routing mechanism to store data of each service domain in areas corresponding to different routes, selecting a message queue technology matched with the key event, and adopting a predefined format of an output transmission protocol to enable the data of each service domain and the data of different service domains to be transmitted in the areas corresponding to the different routes;
The setting the functional interfaces between each service domain and different service domains comprises the following steps:
functional interfaces among various service domains are standardized, a version control mechanism is adopted, and discarding and migration flows of the functional interfaces are planned.
Optionally, the performing database adaptation and source code transformation operations to adapt a first database of the ERP system to a second database of the migration platform system includes:
Based on SQL grammar compatibility test, identifying incompatible SQL sentences of the first database and the second database, rewriting the incompatible SQL sentences, and adjusting the JOIN operation of the first database to adapt to the optimizer of the second database;
identifying incompatible code segments of the first database and the second database in an operation environment by adopting a static code analysis tool, and carrying out code reconstruction operation to replace an incompatible third party library;
The migration of the micro service corresponding to each service domain, the data entity of each micro service, each service domain, and the data flow between different service domains to the migration platform system includes:
Designing a table structure of the second database, and partitioning the table structure to store micro services corresponding to each service domain, and data entities of each micro service, each service domain and data flow directions among different service domains in different areas;
setting a data extraction strategy and a data conversion rule of the second database in the migration process so as to process different data types and format differences and execute a rollback mechanism;
And setting an index in the table structure to perform a create, modify, or delete operation on the data in the second database.
Optionally, the performing an independent test and an integrated test on each micro service in the migration platform system, performing an integrated test and a performance test on data in the migration platform system, and operating in all business departments in stages based on a test result, including:
Writing test cases for each micro service based on the functions and boundary conditions of each micro service, wherein the test cases comprise normal flow cases and abnormal flow cases; identifying the dependency relationship among the micro services, and creating a simulation object to replace the dependency relationship so as to model various response scenes;
Setting up an integrated test environment to perform integrated test on each micro-service, wherein the integrated test environment comprises the steps of configuring a test server simulating a production environment, deploying all the micro-services and dependent components among the micro-services on the test server, configuring a network model, simulating the deployment topology of the micro-services, designing an end-to-end test scene, designing a test scene covering all the micro-services based on key business processes and user trips of each micro-service, executing communication test among the micro-services, including testing synchronous and asynchronous communication modes, verifying load balancing and fault transfer functions, safety authentication and authorization among the test services, simulating network delay and disconnection conditions and verifying consistency of distributed transactions;
Based on the dependency relationship between the data entities of each micro service, designing a data consistency test case to test the consistency of the copying and the buffering of the data entities and a data synchronization mechanism, executing the data entity migration test and the cross-service domain data synchronization test, and designing an automatic data verification script to verify the integrity of the data;
The method comprises the steps of defining key performance indexes and purposes of performance test, designing concurrency for operating scripts, simulating high concurrency scenes, monitoring resource use conditions of the migration platform system, and optimizing database query and index strategies based on the resource use conditions of the migration platform system;
The method comprises the steps of selecting a test point service department, making a test operation plan and an emergency plan, collecting user feedback and system operation data, and performing risk assessment based on a test operation result and the collected user feedback and system operation data.
Optionally, the reconstruction method further includes:
When determining that the service domain reconstruction in the migration platform system fails, executing the service domain reconstruction rollback operation comprises the steps of establishing a rollback mechanism, configuring a quick switching entry, establishing a data rollback mechanism, performing rollback exercise, including simulating a rollback scene in a test scene and evaluating the influence degree of the rollback operation on each service domain, optimizing the rollback flow and executing training operation.
Optionally, the reconstruction method further includes:
Performing end-to-end encryption operation on data transmission and storage in the ERP system, including constructing a multi-level encryption system by adopting a national encryption algorithm and an international general encryption algorithm, performing dynamic desensitization operation on sensitive data in the ERP system to process the sensitive data, and performing privacy-enhanced conditional text anonymization operation by adopting private attribute randomization, including identifying private attributes needing anonymization and generating random parameters;
performing identity authentication and access control to determine the authority level of an access user so as to display data in the ERP system which is matched with the authority level;
The method comprises the steps of setting a log collection system in the ERP system, detecting abnormal behaviors, adopting a value penalty auxiliary control method without rewards or demonstration learning examples and a discrete latent variable enhanced continuous diffusion model, training an abnormal detection model, deploying the abnormal detection model to a working node of the ERP system, detecting the abnormal behaviors, and generating an abnormal behavior alarm and a detailed analysis report.
Optionally, the reconstruction method further includes:
Performing security audit on source codes of the ERP system, including classifying and grading the source codes of the ERP system and generating a multi-dimensional security target, adopting a multi-target combined optimization framework synthesized by a large-scale hierarchical population, executing code scanning, and performing audit operations on source codes of different types and different levels;
The method comprises the steps of executing remote multi-center data backup and data backup scheduling based on data priority, wherein the data backup scheduling based on data priority comprises the steps of analyzing relevance and dependence among different data in the ERP system based on a dependence sensing priority adjustment technology to determine the priority of each data on the ERP system, establishing a key time sensitive network flow model, preferentially distributing time slots for data with higher priority to backup the data with higher priority, and adjusting the priority of the data according to the change frequency and backup time interval of the data.
The embodiment of the specification also provides a service domain deployment reconstruction system of the distributed ERP system, which comprises the following steps:
The processing unit is suitable for combing the business functions of each business domain on the ERP system and the interaction relation among the data entities of the business functions among different business domains, and dividing according to the business functions of each business domain to obtain micro-services corresponding to each business domain;
The configuration unit is suitable for setting the data flow direction between each service domain and different service domains and setting the function interfaces between each service domain and different service domains according to the service functions of each service domain and the interaction relation;
The reconfiguration unit is suitable for executing database adaptation and source code transformation operations, so that a first database of the ERP system is adapted to a second database of the migration platform system, micro services corresponding to each service domain are migrated to the migration platform system, and data flow directions among data entities of each micro service, each service domain and different service domains are migrated to the migration platform system, and security inspection is carried out on the migration platform system;
The test unit is suitable for performing independent test and integrated test on each micro service in the migration platform system, performing integrated test and performance test on data in the migration platform system, and generating a test result;
And the execution unit is suitable for executing test operation and operating in all business departments in stages based on the test result.
By adopting the service domain deployment reconstruction method of the distributed ERP system, on one hand, by combing the service functions of each service domain on the ERP system and the interaction relations among the data entities of the service functions among different service domains, a division operation can be carried out according to the service functions of each service domain to obtain the micro services corresponding to each service domain, which facilitates modularized development and deployment in the subsequent reconstruction and reduces the implementation difficulty; on the other hand, determining the data flow directions and configuring the functional interfaces facilitates the standardization of communication among different service domains and improves the response speed and stability of the subsequent reconstruction process. After that, by executing database adaptation and source code transformation operations, the first database of the ERP system is adapted to the second database of the migration platform system, so that the micro services corresponding to each service domain, the data entities of each micro service and each service domain, and the data flows among different service domains can be migrated to the migration platform system, and the ERP system is deployed in different migration platform systems; performing security inspection on the migration platform system can further improve data security. Then, by performing independent tests and integrated tests on each micro service in the migration platform system, performing integrated tests and performance tests on the data in the migration platform system, performing trial operation based on the test results, and operating in all business departments in stages, the stable and orderly progress of the reconstruction process can be ensured.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for reconstructing service domain deployment of a distributed ERP system according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a data information carding process according to an embodiment of the present disclosure;
FIG. 3 illustrates a flow chart for determining a data flow in an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a service domain deployment reconfiguration system of a distributed ERP system in an embodiment of the present disclosure.
Detailed Description
As described in the background, when the existing ERP system cannot be used or cannot meet the needs of the enterprise, the office efficiency of the enterprise is severely restricted. For this reason, it is necessary to reconstruct the ERP system to meet the use requirement, however, the existing reconstruction scheme is inefficient.
In order to solve the technical problems, the embodiment of the specification provides a service domain deployment reconstruction method of a distributed ERP system, on one hand, through combing the service functions of each service domain on the ERP system and the interaction relation between data entities of the service functions among different service domains, division operation can be carried out according to the service functions of each service domain to obtain micro services corresponding to each service domain, thereby being beneficial to realizing modularized development and deployment in subsequent reconstruction and reducing realization difficulty, and on the other hand, through determining data flow direction and configuring function interfaces, communication standardization among different service domains is facilitated, and response speed and stability of a subsequent reconstruction process are improved. After that, through executing database adaptation and source code transformation operations, a first database of the ERP system is adapted to a second database of the migration platform system, so that micro services corresponding to each service domain, data entities of each micro service, each service domain and data flow directions among different service domains can be migrated to the migration platform system, the ERP system is deployed in different migration platform systems, and the security inspection is carried out on the migration platform system, so that the data security can be improved; then, by performing independent test and integrated test on each micro service in the migration platform system, performing integrated test and performance test on data in the migration platform system, performing test operation based on test results, and operating in all business departments in stages, the stable and orderly performance of the reconstruction process can be ensured.
In order to better understand the inventive concepts, operating principles and advantages of the embodiments of the present disclosure, the reconstruction schemes in the embodiments of the present disclosure are described in detail below.
Referring to fig. 1, which is a flowchart of a service domain deployment reconstruction method of a distributed ERP system in an embodiment of the present disclosure, in some embodiments of the present disclosure, as shown in fig. 1, a reconstruction operation may be performed according to the following steps:
S11, combing business functions of each business domain on the ERP system and interaction relations among data entities of the business functions among different business domains.
Specifically, there are often multiple business domains on one ERP system, and there are differences between different business domains. Therefore, the business functions of each business domain (such as production planning, purchasing management, and financial settlement) may be determined by performing a carding operation on the ERP system, and the business flow directions of the business functions of each business domain may be analyzed to determine the interaction relationships between the data entities of the business functions across different business domains.
As an example, for any two service domains, if one service domain needs to use data generated by another service domain, it is indicated that there is a data dependency relationship between the two service domains.
And S12, dividing according to the service functions of each service domain to obtain the micro service corresponding to each service domain.
Specifically, different business domains in the ERP system have different functions, and each business domain can be split for processing by carrying out business function analysis on each business domain, so that a plurality of micro services representing the business domain can be obtained, thus, complex business functions can be split into manageable small modules, and the functional boundaries of each micro service, input and output data and interaction relations with other micro services are defined, which is beneficial to realizing modularized development and deployment in subsequent reconstruction.
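For illustration only, the following is a minimal Python sketch of how the split described above might be recorded: each business domain maps to a micro service with explicit functions, inputs, and outputs, and a data dependency is flagged wherever one service consumes what another produces. The domain names, service names, and data entities are hypothetical and not taken from the embodiment.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MicroService:
    """One micro service derived from a single business domain."""
    domain: str                                         # owning business domain
    name: str                                           # service name
    functions: List[str]                                # business functions it encapsulates
    inputs: List[str] = field(default_factory=list)     # data entities consumed
    outputs: List[str] = field(default_factory=list)    # data entities produced

# Hypothetical split of two business domains into manageable modules.
services = [
    MicroService("procurement", "purchase-order-service",
                 functions=["create_purchase_order", "approve_purchase_order"],
                 inputs=["Supplier", "MaterialRequest"],
                 outputs=["PurchaseOrder"]),
    MicroService("finance", "settlement-service",
                 functions=["generate_invoice", "settle_payment"],
                 inputs=["PurchaseOrder"],
                 outputs=["Invoice", "PaymentRecord"]),
]

# A data dependency exists wherever one service consumes what another produces.
for a in services:
    for b in services:
        if a is not b and set(a.outputs) & set(b.inputs):
            print(f"{b.name} depends on data produced by {a.name}")
```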
S13, setting data flow directions among the service domains and different service domains and setting functional interfaces among the service domains and different service domains according to service functions of the service domains and the interaction relation.
Specifically, when the service functions of each service domain and the interaction relationships between the service domains are determined, the data flow relationships can be set so that data is transmitted efficiently and securely between the service domains. By setting the functional interfaces between each service domain and the other service domains, clear and consistent API interface specifications can be defined, which improves the standardization of communication between different service domains and reduces or avoids situations in which data cannot be transmitted, or is transmitted too slowly, because of interface mismatches.
In some embodiments, in the process of setting the data flow directions, technologies such as event driving and message queues can be adopted to reduce direct coupling, so as to improve the response speed and stability of the system.

S14, performing database adaptation and source code transformation operations, so that a first database of the ERP system is adapted to a second database of the migration platform system, migrating the micro services corresponding to each service domain, the data entities of each micro service and each service domain, and the data flows between different service domains to the migration platform system, and performing security inspection on the migration platform system.
Specifically, by adopting the preceding steps to determine the interaction relationships between different service domains and to set the data flow directions, the data on the existing ERP system can be migrated to the migration platform system, so as to realize the distributed deployment of the ERP system. During the migration, database adaptation and source code transformation operations can be performed so that the first database of the ERP system is adapted to the second database of the migration platform system. This adapts the operating environment, allows the ERP system migrated to the migration platform system to run smoothly, and avoids compatibility problems after the micro services corresponding to each service domain, the data entities of each micro service and each service domain, and the data flows between different service domains are migrated to the migration platform system.
And by carrying out security inspection on the migration platform system, the data security can be improved, and security measures such as data encryption and access control are ensured to meet design requirements.
S15, performing independent test and integrated test on each micro service in the migration platform system, performing integrated test and performance test on data in the migration platform system, performing test operation based on test results, and operating in all business departments in stages.
Specifically, when data on the ERP system is transferred to the migration platform system, the migration platform system can be tested to determine that the existing migration platform system can correctly and stably operate service functions of each service domain, so that problems can be found out in time, and the problems are solved by adopting a corresponding strategy.
After the test results meet expectations, trial operation can be executed to evaluate the performance of the migration platform system in the real business environment, and the migration platform system can then be put into operation in all business departments in stages according to the trial operation results, so that the business of each department can transition stably.
In short, by implementing detailed carding of service domain function data, function and data reconstruction planning, Xinchuang transformation (i.e., compatibility adaptation and testing), and comprehensive service domain testing with staged implementation, the distributed deployment of the ERP platform can be ensured to proceed smoothly, realizing efficient operation and sustainable development of the business.
In some embodiments of the present disclosure, when it is determined that a service domain deployment reconfiguration scheme of a distributed ERP system needs to be executed, functional data of a service domain on the ERP system needs to be first carded, so as to comprehensively examine an existing service flow and a data model.
In some embodiments, referring to the flowchart of a data information carding process in the embodiment of the present specification shown in fig. 2, as described in fig. 2, the following carding steps may be performed:
S21, generating a business flow chart by adopting a flow modeling tool based on the collected workflow information of each business domain in the ERP system.
Specifically, the workflow of each business domain in the current ERP system may be collected and analyzed through close collaboration with the business departments, and business process documents may be created, including but not limited to production planning, purchasing management, and financial settlement. Thereafter, a business flow chart is drawn using a flow modeling tool (e.g., BPMN).
In some embodiments, the business flow diagrams include the roles and data flow directions of the individual business functions of the individual business domains. By drawing the service flow chart, the interrelation and the data flow path of each service link can be intuitively displayed.
In some embodiments, after the flow chart is drawn, the business expert may also be organized for review to refine and modify the business flow chart.
S22, analyzing the database of the ERP system, and identifying the data entities in each service domain and the interaction relation among the data entities by establishing an entity relation diagram.
Specifically, the interactions between data entities may be determined by examining the databases of the current ERP system, identifying the primary tables and views, and analyzing the relationships between tables. By building an entity relationship graph, each data entity (e.g., customer, order, and product) and its necessary attributes can be determined from a business perspective, so as to determine the relationships between the data entities (e.g., one-to-many or many-to-many relationships).
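As an illustration of this step, the sketch below builds a small in-memory database and lists its foreign keys; each foreign key is one edge of the entity relationship graph. The tables are invented stand-ins for the ERP's primary tables, and a real analysis would query the first database's own catalog (e.g., information_schema) instead.

```python
import sqlite3

# Illustrative in-memory schema standing in for the ERP's first database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE sales_order (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customer(id),
    amount REAL
);
""")

# Walk every table and list its foreign keys: each foreign key is an edge
# in the entity relationship graph (a many-to-one relationship by default).
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
for table in tables:
    for fk in conn.execute(f"PRAGMA foreign_key_list({table})"):
        _, _, parent, child_col, parent_col, *_ = fk
        print(f"{table}.{child_col} -> {parent}.{parent_col}  (many-to-one)")
```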
S23, according to the service flow chart, based on the identified service activities, decomposing the service functions of each service domain into a plurality of management modules, and determining the functional boundaries and data entities of each management module and the interaction relationship among each management module according to the calling relationship among each management module.
Specifically, because the service flow chart can intuitively display the interrelation and the data flow path of each service link, key service activities (such as production, marketing, finance and the like) can be identified based on the service flow chart, so that the service functions of each service domain can be decomposed into a plurality of management modules, and the consistency of module division and service flow can be realized. On the basis, the core functions of each management module are defined, the input and output of each management module are defined, the business rule and constraint of each management module, the user role and authority requirements of the identification module are determined, the functional boundary and responsibility of each management module are determined, and then through drawing the call relation diagram among the management modules, the interaction relation among the management modules can be realized by identifying the communication modes among shared data, resources and the management modules.
In some alternative examples, the reusability and extensibility of the management module may also be evaluated. For example, identifying common functions that may be used by multiple business processes, evaluating the module's adaptability in different scenarios, considering the module's configurability and degree of parameterization, and evaluating the module's adaptability to future business changes, etc.
S24, constructing a data flow diagram among the management modules according to the interaction relation among the management modules and the interaction relation among the data entities, and determining the interaction relation among the data entities of the business functions among different business domains by identifying the flow paths of the data entities of the management modules among the different management modules.
Specifically, after determining the interaction relationship between the management modules and the interaction relationship between the data entities, a DFD tool adapted to the reconstruction process may be selected, and a data flow graph in which data flows between the management modules may be drawn by identifying the primary source and the data endpoint of the ERP system. And determining the data sharing range and mode between the management modules by analyzing the data exchange requirements between the management modules so as to determine the interaction relationship between the data entities of the business functions between different business domains.
In some alternative embodiments, the data access frequency and access pattern of each management module may also be evaluated to predict data growth trends and peak load conditions by analyzing the frequency of high frequency access data items and low frequency access data items and various types of data operations (read, write, update, delete) to formulate storage strategies.
And optionally, designing a data synchronization mechanism to solve the problem of data anomalies caused by distributed transactions.
Thus, a solid foundation is laid for the subsequent reconstruction work by comprehensively knowing the data structure, the functional modules and the data flow of the ERP system. Such systematic partitioning can help embodiments of the present invention identify potential optimization points, improving overall efficiency and maintainability of the system.
In some embodiments of the present disclosure, in the step of analyzing the database of the ERP system and identifying the data entities in each business domain and the interaction relationship between each data entity by establishing an entity relationship graph, the reconstruction method further includes establishing a data dictionary corresponding to the data entities, where the data dictionary includes definitions, data types, value ranges, and business meanings of the data entities.
By establishing the data dictionary, each data entity can be queried, and the reconstruction efficiency is further improved.
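A minimal sketch of such a data dictionary follows; the entities, attributes, and their descriptions are illustrative assumptions, not values prescribed by the embodiment.

```python
# A minimal data dictionary keyed by (entity, attribute); all content is illustrative.
data_dictionary = {
    ("Customer", "credit_limit"): {
        "definition": "Maximum outstanding receivable allowed for the customer",
        "data_type": "DECIMAL(12,2)",
        "value_range": ">= 0",
        "business_meaning": "Used by financial settlement to block over-limit orders",
    },
    ("PurchaseOrder", "status"): {
        "definition": "Lifecycle state of a purchase order",
        "data_type": "VARCHAR(16)",
        "value_range": "{draft, approved, received, closed}",
        "business_meaning": "Drives procurement workflow routing",
    },
}

def lookup(entity: str, attribute: str) -> dict:
    """Query a single attribute's definition during reconstruction."""
    return data_dictionary[(entity, attribute)]

print(lookup("Customer", "credit_limit")["business_meaning"])
```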
In some embodiments of the present disclosure, after determining the service functions of each service domain and the interaction relationship between the data entities of the service functions between different service domains, the data flow direction between each service domain and different service domains may be determined.
Referring to a flowchart for determining a data flow in the embodiment of the present specification shown in fig. 3, the following steps may be performed as shown in fig. 3:
s31, identifying key events in service functions of each service domain, defining the structure and metadata of each key event, and establishing a key event processing and routing mechanism so as to store data of each service domain in areas corresponding to different routes.
Specifically, different business domains have different business functions, and the corresponding key events in those functions differ (for example, for purchasing, the key events can be order creation and inventory changes). In this case, the structure and metadata of each key event can be determined by designing a publish-subscribe model for key events; then, considering the ordering and consistency of the key events, a key event processing and routing mechanism is established, that is, one key event corresponds to one route, and the data of each business domain can then be stored in the area corresponding to the different routes.
For example, individual slices may be designed according to data partition dimensions (e.g., time, geographic location, customer ID, etc.), and such that one slice corresponds to one route, to achieve dynamic balancing of slices and routing mechanisms.
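The sketch below illustrates one possible routing mechanism under these assumptions: each key event type maps to one route, and a shard within that route is chosen from a partition key such as a customer ID. The event types, route names, and shard count are hypothetical.

```python
import hashlib

# One key event type maps to one route (i.e. one storage area / topic).
EVENT_ROUTES = {
    "OrderCreated":     "procurement-events",
    "InventoryChanged": "inventory-events",
    "InvoiceIssued":    "finance-events",
}

def route_event(event_type: str, partition_key: str, shard_count: int = 4) -> tuple:
    """Return (route, shard) for a key event.

    The shard is derived from a partition dimension such as customer ID or
    geographic location, so data of each business domain lands in the area
    corresponding to its route.
    """
    route = EVENT_ROUTES[event_type]
    digest = hashlib.sha256(partition_key.encode("utf-8")).hexdigest()
    shard = int(digest, 16) % shard_count
    return route, f"{route}-shard-{shard}"

print(route_event("OrderCreated", partition_key="customer-10086"))
```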
S32, selecting a message queue technology adapted to the key event, and adopting a predefined format of an output transmission protocol to enable data between each service domain and different service domains to be transmitted in areas corresponding to different routes.
Specifically, after the key event processing and routing mechanism is established, the message queue technology adapted to the key events is selected by evaluating the characteristics (such as throughput, latency, and reliability) of different message queue technologies and considering the extensibility requirements and expected data volume of the system, so as to meet the persistence and fault-tolerance requirements of the messages. Then, the serialization/deserialization efficiency of different data formats is evaluated, and an output transmission protocol and format (e.g., JSON) can be defined, so that data transmission is performed according to the above rules.
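As a hedged example of such a predefined format, the following sketch serializes a key event into a JSON envelope that producers and consumers in different business domains could share; the envelope fields and schema version are assumptions rather than a format fixed by the embodiment.

```python
import json
import uuid
from datetime import datetime, timezone

def build_event(event_type: str, source_domain: str, payload: dict) -> str:
    """Serialize a key event into an assumed, predefined JSON transmission format."""
    envelope = {
        "event_id": str(uuid.uuid4()),
        "event_type": event_type,
        "source_domain": source_domain,
        "occurred_at": datetime.now(timezone.utc).isoformat(),
        "schema_version": "1.0",
        "payload": payload,
    }
    return json.dumps(envelope, ensure_ascii=False)

message = build_event("InventoryChanged", "warehouse",
                      {"sku": "A-001", "delta": -5})
print(message)                      # producer publishes this string to the queue
restored = json.loads(message)      # consumer deserializes with the same schema
assert restored["event_type"] == "InventoryChanged"
```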
By adopting the mode, the key event processing and routing mechanism is established, so that the data can be transmitted in the areas corresponding to different routes, and the data flow direction can be clarified.
In some embodiments of the present disclosure, setting functional interfaces between each service domain and different service domains includes normalizing the functional interfaces between each service domain and different service domains, and planning a discarding and migrating process of the functional interfaces by using a version control mechanism.
More specifically, this may include: 1) defining RESTful API design specifications, such as formulating URL naming conventions (e.g., using plural noun forms), standardizing the use of HTTP methods (GET, POST, PUT, DELETE, etc.), designing unified request and response formats, defining standard methods for paging, sorting, and filtering, and standardizing the formats of error codes and error messages; and 2) implementing an API version control mechanism, such as selecting a version control strategy (e.g., URL path, request header, or media type), defining the format and increment rules for version numbers, designing backward compatibility policies, planning API deprecation and migration flows, and establishing version documentation and change log maintenance mechanisms.
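For illustration, the sketch below shows one way the unified request/response and error formats mentioned above could look; the field names, error codes, and URL conventions are assumptions chosen only to make the idea concrete.

```python
# Illustrative helpers for a unified response format shared by all business-domain APIs.

def ok(data, page=None, page_size=None, total=None):
    """Standard success envelope; paging fields are included only for list endpoints."""
    body = {"code": 0, "message": "success", "data": data}
    if page is not None:
        body["pagination"] = {"page": page, "page_size": page_size, "total": total}
    return body

def error(code: int, message: str, details=None):
    """Standard error envelope with a machine-readable error code."""
    return {"code": code, "message": message, "details": details or {}}

# URL and versioning conventions: plural nouns, version in the path.
#   GET  /api/v1/purchase-orders?page=2&page_size=20&sort=-created_at
#   POST /api/v1/purchase-orders
print(ok([{"id": 1}], page=2, page_size=20, total=135))
print(error(40401, "purchase order not found", {"id": 999}))
```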
In some alternative examples, a unified error handling and logging mechanism may also be designed, enabling a unified logging framework by defining a standardized error response format, and building a global exception handling mechanism and designing a fine-grained error code system.
An API document auto-generation mechanism may also be established to generate documents, taking into account the differences in interfaces between different business domains.
In some embodiments of the present description, the manner in which the micro-services are obtained may include determining responsibilities and boundaries of the respective services (i.e., determining micro-service granularity and boundaries) based on the business services and the independence and cohesiveness of the business services.
In some alternative examples, a service registry may be selected, service registration and de-registration flows designed to enable registration of micro-services, and a health detection mechanism may be established to evaluate the performance and extensibility of service discovery.
In some alternative examples, a load balancing algorithm (e.g., polling, minimum number of connections) may be selected to achieve balancing among loads, considering the role of server health in load balancing. Further, applicable scenarios for synchronous and asynchronous communications are evaluated to enable tracking and monitoring of cross-service calls.
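A minimal sketch of the minimum-connections strategy, with server health taken into account, is given below; the instance names and health flags are illustrative, and in practice the health state would come from the registry's health detection mechanism.

```python
class LeastConnectionsBalancer:
    """Pick the healthy instance with the fewest active connections."""

    def __init__(self, instances):
        self.active = {name: 0 for name in instances}
        self.healthy = {name: True for name in instances}

    def acquire(self):
        candidates = [n for n in self.active if self.healthy[n]]
        if not candidates:
            raise RuntimeError("no healthy instance available")
        chosen = min(candidates, key=lambda n: self.active[n])
        self.active[chosen] += 1
        return chosen

    def release(self, name):
        self.active[name] -= 1

lb = LeastConnectionsBalancer(["order-svc-1", "order-svc-2", "order-svc-3"])
lb.healthy["order-svc-2"] = False            # instance failing health checks is skipped
for _ in range(4):
    print(lb.acquire())                       # calls spread over svc-1 and svc-3
```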
Through the above detailed steps, the function and data reconstruction of the ERP system can be planned systematically. This comprehensive design can not only improve the extensibility and maintainability of the system, but also provide a flexible infrastructure for future business growth and technical evolution. The introduction of an event-driven architecture and micro services makes the system more responsive and easier to extend, while the standardized interface design ensures the consistency and interoperability of all parts of the system.
By adopting the scheme in the example, the data flow direction and service functions between the services can be combed, so that the existing ERP system can be transferred to the migration platform system.
In some embodiments of the present disclosure, to prevent the occurrence of a failure such as a usage failure in the migration platform system, an adaptation operation may be performed.
For example, based on SQL grammar compatibility testing, incompatible SQL statements of the first database and the second database are identified, and the incompatible SQL statements are rewritten, and JOIN operations of the first database are adjusted to accommodate optimizers of the second database.
Specifically, when the database of the migration platform system is selected, SQL grammar compatibility test can be performed, if incompatible SQL sentences are identified, the sentences can be modified to realize the adaptation between the first database and the second database, and the compatibility is improved.
And adjusting the JOIN operation of the first database to adapt the transaction mechanism of the second database in accordance with the transaction logic of the second database, such that the optimizer of the second database may be adapted.
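As an illustrative sketch of the SQL compatibility step, the snippet below scans a statement against a few example rewrite rules and reports whether it needed changes; the specific incompatibilities (quoting style, non-standard functions) are assumptions, and JOIN adjustments for the second database's optimizer would be expressed as further rules of the same kind.

```python
import re

# Example incompatibility rules: each maps a first-database construct to a
# rewrite for the second database. Real rules come out of the SQL grammar
# compatibility test; these are placeholders.
REWRITE_RULES = [
    (re.compile(r"`(\w+)`"), r'"\1"'),                  # backtick -> standard quoting
    (re.compile(r"\bIFNULL\(", re.I), "COALESCE("),     # non-standard function
    (re.compile(r"\bNOW\(\)", re.I), "CURRENT_TIMESTAMP"),
]

def check_and_rewrite(sql: str):
    """Return (is_compatible, rewritten_sql) for one statement."""
    rewritten, hits = sql, 0
    for pattern, replacement in REWRITE_RULES:
        rewritten, n = pattern.subn(replacement, rewritten)
        hits += n
    return hits == 0, rewritten

stmt = "SELECT IFNULL(`qty`, 0), NOW() FROM `stock` s JOIN `item` i ON s.item_id = i.id"
compatible, fixed = check_and_rewrite(stmt)
print(compatible)   # False: the statement needed rewriting
print(fixed)        # portable form handed to the second database's optimizer
```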
In some embodiments of the present description, a static code analysis tool is used to identify code segments of the first database and the second database that are incompatible in a runtime environment, perform code reconstruction operations to replace incompatible third party libraries, and modify and adjust system calls and network programming interfaces associated with the first database process and thread management.
More specifically, the static code analysis tool is used to scan the entire code library, identify all code that interacts directly with the operating system to determine code segments that are incompatible with the second database and the runtime environment, and rewrite the code associated with the file system operation, and study API and system call differences for the first database and the second database, adjust process and thread management related system calls (e.g., modify network programming interfaces, adapt memory management and resource allocation related system calls, etc.), and improve compatibility of the first database and the second database.
In some alternative examples, the operation performance of the second database may also be improved by analyzing hardware architecture information of the second database, by optimizing operations such as computationally intensive algorithms, adjusting memory access patterns, optimizing concurrent processing logic, and the like.
In some embodiments of the present disclosure, the data migration operation may include designing a table structure of the second database, partitioning the table structure to store micro services corresponding to each service domain, and data entities of each micro service, each service domain, and data flow directions between different service domains in different areas, setting a data extraction policy and a data conversion rule of the second database to process different data types and format differences during migration, and executing a rollback mechanism, and setting an index in the table structure to perform a create, modify, or delete operation on data in the second database.
In other words, partition control is achieved by adjusting the second database, e.g., analyzing and optimizing table structure design, adjusting index policies, including creating, modifying or deleting indexes, and adjusting database parameter configurations, such as buffer pool size, number of concurrent connections, etc. After the above operation is performed on the second database, a data migration scheme may be set based on the second database, for example, a data extraction policy is formulated, a data conversion rule is designed, a data type and format difference are processed, and a data loading process is planned, so that the second database is more adapted to the requirements of the ERP system.
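A minimal sketch of such a migration step is shown below: rows are extracted in batches, passed through a data conversion rule, and written to the target inside per-batch transactions so that a failure rolls the batch back. The toy tables and conversion logic are assumptions used only for illustration.

```python
import sqlite3

# Toy source (first database) and target (second database); a real migration
# would read from the ERP DBMS and write to the migration platform's database.
src = sqlite3.connect(":memory:")
src.executescript("""
CREATE TABLE orders (id INTEGER, amount TEXT, created TEXT);
INSERT INTO orders VALUES (1, '12.50', '2024/01/05'), (2, '8.00', '2024/02/11');
""")
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE orders (id INTEGER, amount REAL, created TEXT)")

def transform(row):
    """Data conversion rule: cast amount to a number, normalise the date format."""
    oid, amount, created = row
    return oid, float(amount), created.replace("/", "-")

def migrate(batch_size=100):
    cur = src.execute("SELECT id, amount, created FROM orders")
    while True:
        batch = cur.fetchmany(batch_size)
        if not batch:
            break
        # Each batch is one transaction on the target: if a conversion or insert
        # fails, the whole batch is rolled back, leaving no partial rows behind.
        with dst:
            dst.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                            [transform(r) for r in batch])

migrate()
print(list(dst.execute("SELECT * FROM orders")))
```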
In some embodiments of the present description, after the adaptation operation is completed, a security compliance check may be performed. For example, security vulnerability scanning is performed by selecting a security scanning tool, performing a comprehensive code security audit, checking common vulnerabilities such as SQL injection and XSS, and carrying out penetration tests that simulate attack scenarios. Encryption and desensitization of data are performed, including identifying sensitive data to be encrypted, selecting encryption algorithms, implementing data storage and transmission encryption schemes, operating a key management mechanism to ensure the secure storage and use of keys, and developing data desensitization strategies to protect personal privacy information. Access control and identity authentication mechanisms are established, including a multi-factor authentication mechanism, role-based access control and authority control, and a single sign-on (SSO) system so that one login can query information across all business domains. Finally, security audit and log management are performed to realize centralized log collection and storage and to detect abnormal behaviors.
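For illustration, the sketch below shows one possible dynamic desensitization rule set: masked values are returned unless the viewer's permission level grants full access. Which fields count as sensitive and how they are masked are assumptions, not requirements of the embodiment.

```python
import re

# Illustrative dynamic desensitization rules.
def mask_phone(value: str) -> str:
    return re.sub(r"(\d{3})\d{4}(\d{4})", r"\1****\2", value)

def mask_name(value: str) -> str:
    return value[0] + "*" * (len(value) - 1) if value else value

MASKERS = {"phone": mask_phone, "contact_name": mask_name}

def desensitize(record: dict, viewer_permission: str) -> dict:
    """Return the record as the viewer is allowed to see it.

    Viewers whose permission level grants full access see raw data; everyone
    else sees masked values, so sensitive data never leaves the system in clear form.
    """
    if viewer_permission == "full":
        return record
    return {k: MASKERS.get(k, lambda v: v)(v) for k, v in record.items()}

row = {"contact_name": "Alice", "phone": "13812345678", "city": "Guiyang"}
print(desensitize(row, viewer_permission="restricted"))
# {'contact_name': 'A****', 'phone': '138****5678', 'city': 'Guiyang'}
```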
By implementing the above steps, the Xinchuang (information technology application innovation) transformation of the ERP system can be realized. This process not only involves adaptation and optimization at the technical level, but also ensures the security and compliance of the system in a domestic environment. The database adaptation ensures smooth migration and efficient processing of data, the code transformation enables the system to fully utilize the characteristics of domestic software and hardware, and the security compliance check ensures safe operation of the system in the new environment. This comprehensive reconstruction approach can help enterprises smoothly complete the transition to a domestic IT infrastructure while improving the overall performance and security of the system.
After the ERP system is migrated to the migration platform system, test tasks can be executed to execute corresponding processing processes. For ease of understanding, the following description is given by way of example.
The method comprises the steps of performing independent test on each micro service, writing test cases for each micro service based on functions and boundary conditions of each micro service, wherein the test cases comprise normal flow cases and abnormal flow cases, identifying dependency relationships among each micro service, creating simulation objects to replace the dependency relationships so as to model various response scenes, and performing test processes so as to perform independent test on each micro service.
Specifically, the core functions and boundary conditions of each micro service are identified, and test cases are designed to cover normal flows and abnormal conditions. A Test-Driven Development (TDD) approach is used to write the tests, the independence and repeatability of the tests are guaranteed, edge conditions and limit values are considered, and negative test cases are written to verify the error-handling mechanism. The dependency relationships among the micro services are then identified, a suitable mocking framework (such as Mockito or Sinon.js) is selected, and simulation objects are created to replace external dependencies; various response scenarios, including success, failure, and timeout, are simulated to determine the processing logic of each micro service for the simulated responses. A code coverage tool is then selected, coverage targets (such as line coverage and branch coverage) are set, the tests are executed, and coverage reports are generated. Finally, the unit tests are integrated into a Continuous Integration (CI) pipeline and an automated test execution environment is configured to realize the independent tests.
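The embodiment names Mockito and Sinon.js; purely as an analogous illustration, the sketch below uses Python's unittest.mock to replace a micro service's dependency with a simulation object and to cover a normal flow, an abnormal flow, and a boundary condition. The service and its methods are hypothetical.

```python
import unittest
from unittest.mock import Mock

# Hypothetical order service whose dependency on an inventory service is
# replaced by a simulation object, so the unit test runs in isolation.
class OrderService:
    def __init__(self, inventory_client):
        self.inventory = inventory_client

    def create_order(self, sku, qty):
        if qty <= 0:
            raise ValueError("quantity must be positive")      # boundary condition
        if not self.inventory.reserve(sku, qty):
            return {"status": "rejected", "reason": "out_of_stock"}
        return {"status": "created", "sku": sku, "qty": qty}

class OrderServiceTest(unittest.TestCase):
    def test_normal_flow(self):
        inventory = Mock()
        inventory.reserve.return_value = True                  # simulated success response
        result = OrderService(inventory).create_order("A-001", 2)
        self.assertEqual(result["status"], "created")
        inventory.reserve.assert_called_once_with("A-001", 2)

    def test_abnormal_flow_out_of_stock(self):
        inventory = Mock()
        inventory.reserve.return_value = False                 # simulated failure response
        result = OrderService(inventory).create_order("A-001", 2)
        self.assertEqual(result["status"], "rejected")

    def test_boundary_condition(self):
        with self.assertRaises(ValueError):
            OrderService(Mock()).create_order("A-001", 0)

if __name__ == "__main__":
    unittest.main()
```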
To carry out the integrated test for each micro service, the following steps can be executed: an integrated test environment is built, and each micro service is subjected to the integrated test.
The method comprises the steps of configuring a test server simulating a production environment, deploying all micro services and dependent components among the micro services on the test server, configuring a network model, simulating the deployment topology of the micro services, designing an end-to-end test scene, designing a test scene covering all the micro services based on key business processes and user tours of all the micro services, executing communication tests among the micro services, including testing synchronous and asynchronous communication modes, verifying load balancing and fault transfer functions, testing safety authentication and authorization among the services, simulating network delay and disconnection conditions and verifying consistency of distributed transactions.
It should be noted that, when performing the integration test, it should be ensured that the existing functions are not affected.
The integrated test of the data in the migration platform system comprises the steps of designing a data consistency test case based on the dependency relationship among the data entities among the micro services so as to test the consistency of the copying and the buffering of the data entities and a data synchronization mechanism, executing the data entity migration test and the cross-service domain data synchronization test, and designing an automatic data verification script so as to verify the integrity of the data.
The method comprises the steps of identifying cross-service data dependency, designing a test scene verification data synchronization mechanism to test the ACID attribute of distributed transactions and verify a data version control and conflict resolution mechanism, executing a data migration test, testing data conversion logic in the migration process and verifying the integrity and consistency of migrated data, testing data circulation among different service domains, verifying the consistency of cross-domain transactions and testing a data update propagation mechanism to verify authority control of cross-domain data access, developing an automatic data verification script, executing comprehensive data comparison and verification to verify the accuracy of calculation fields and summarized data, and checking the maintenance of data relationship and constraint.
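As a sketch of such an automated data verification script, the snippet below compares row counts and an order-independent checksum between a source and a target table; the toy tables stand in for the first and second databases, and the fingerprint scheme is an assumption.

```python
import hashlib
import sqlite3

def table_fingerprint(conn, table, columns):
    """Row count plus an order-independent checksum over the listed columns."""
    count = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    digest = 0
    for row in conn.execute(f"SELECT {', '.join(columns)} FROM {table}"):
        digest ^= int(hashlib.sha256(repr(row).encode()).hexdigest(), 16)
    return count, digest

def verify(src, dst, table, columns):
    """Data integrity check used after migration and cross-domain synchronization."""
    same = table_fingerprint(src, table, columns) == table_fingerprint(dst, table, columns)
    print(f"{table}: {'consistent' if same else 'MISMATCH'}")
    return same

# Toy source and target databases; a real run would point at the first and
# second databases of the ERP and migration platform systems.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for c in (src, dst):
    c.execute("CREATE TABLE invoice (id INTEGER, total REAL)")
    c.executemany("INSERT INTO invoice VALUES (?, ?)", [(1, 10.0), (2, 25.5)])
verify(src, dst, "invoice", ["id", "total"])     # consistent
```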
The performance test may include defining key performance indicators and objectives for performance testing, designing concurrency for operating scripts, simulating high concurrency scenarios, and monitoring resource usage of the migration platform system, and optimizing database query and index policies based on resource usage of the migration platform system.
In some examples, this may include defining Key Performance Indicators (KPIs) and targets, designing test scenarios and planning test datasets for different load levels and test durations; then selecting performance test tools and setting the monitored system components and indicators; executing high-concurrency scenarios, simulating user access from different geographic locations through progressively increasing load patterns to test the performance of the system under peak load; and finally tracking CPU, memory, disk I/O and network usage, monitoring database performance, connection pool status, and the response time and throughput of the micro services, identifying operations and services with longer response times, optimizing database queries and indexes, adjusting caching policies and configurations, and optimizing code and algorithm efficiency.
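A minimal sketch of such a concurrency script follows; the endpoint URL and load levels are hypothetical, and in practice a dedicated tool (e.g. JMeter or Locust) would normally be used instead.

```python
# Illustrative high-concurrency probe: fires N parallel requests against a
# hypothetical endpoint and reports latency percentiles and throughput.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "http://test-server.example/api/orders"   # hypothetical test endpoint


def timed_call(_):
    start = time.perf_counter()
    try:
        ok = requests.get(ENDPOINT, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    return ok, time.perf_counter() - start


def run_load(concurrency=50, total_requests=500):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(timed_call, range(total_requests)))
    elapsed = time.perf_counter() - start
    latencies = sorted(lat for _, lat in results)
    return {
        "throughput_rps": total_requests / elapsed,
        "error_rate": 1 - sum(ok for ok, _ in results) / total_requests,
        "p50_s": statistics.median(latencies),
        "p95_s": latencies[int(0.95 * len(latencies)) - 1],
    }


if __name__ == "__main__":
    print(run_load())
```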
The test run comprises selecting a pilot business department, formulating a test run plan and an emergency plan, collecting user feedback and system operation data, and performing a risk assessment based on the test run results and the collected user feedback and system operation data.
Operating in all business departments in stages comprises: formulating a comprehensive deployment plan, including adjusting the deployment strategy based on the test run experience, making a detailed deployment schedule and milestones, and planning the required hardware and network resources; ordering implementation according to business importance, including evaluating the criticality of each business domain, considering the dependency relationships among the business domains, evaluating the readiness and acceptance capacity of each department, balancing risks and benefits, and formulating a priority order; gradually expanding the deployment scope, including deploying business domains or departments one by one according to the plan, performing a stability evaluation after each deployment, resolving problems found during deployment, collecting and analyzing feedback from each stage, and ensuring a smooth transition between the old and new systems; and continuously monitoring and optimizing system performance, including building a comprehensive system monitoring mechanism, periodically analyzing performance data and user feedback, and implementing a continuous performance tuning plan.
Through these detailed steps, embodiments of the present invention may systematically perform comprehensive testing and staged implementation of the ERP system. This process not only ensures the technical quality of the system, but also reduces implementation risk through the test run and staged deployment. The unit tests and integration tests ensure the functional correctness of each component and of the system as a whole, the data integration tests ensure the consistency and integrity of data, and the performance tests verify the performance of the system under actual load. The test run phase provides valuable practical operational experience in preparation for full deployment. Finally, through staged implementation and continuous optimization, the ERP system can be put into use stably and efficiently and can continuously meet business requirements.
In practical application, when the reconstructed service domain cannot achieve the expected effect due to uncontrollable factors such as poor hardware performance, insufficient network bandwidth or excessive server load, the system can be rolled back to the pre-reconstruction state through a simple configuration entry.
Specifically, when the service domain reconstruction in the migration platform system is determined to fail, a service domain reconstruction rollback operation is executed.
In some embodiments, the rollback operation includes establishing a rollback mechanism including defining a rollback trigger condition and formulating a rollback operation flow. For example, key Performance Indicators (KPIs) and system health metrics are identified, and thresholds such as system response time, error rate, transaction success rate, etc., and criteria for interruption of service continuity are set as rollback trigger conditions.
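A minimal sketch of such a trigger check follows; the metric names and threshold values are illustrative assumptions, not values prescribed by this embodiment.

```python
# Illustrative rollback-trigger evaluation: compares live KPIs against
# configured thresholds and reports whether a rollback should be initiated.
from dataclasses import dataclass


@dataclass
class RollbackThresholds:
    max_p95_response_ms: float = 2000.0   # assumed threshold values
    max_error_rate: float = 0.05
    min_txn_success_rate: float = 0.98


def should_rollback(metrics: dict, thresholds: RollbackThresholds):
    """Return (trigger, reasons) based on the configured trigger conditions."""
    reasons = []
    if metrics["p95_response_ms"] > thresholds.max_p95_response_ms:
        reasons.append("response time exceeded")
    if metrics["error_rate"] > thresholds.max_error_rate:
        reasons.append("error rate exceeded")
    if metrics["txn_success_rate"] < thresholds.min_txn_success_rate:
        reasons.append("transaction success rate below target")
    if metrics.get("service_interrupted", False):
        reasons.append("service continuity interrupted")
    return bool(reasons), reasons


if __name__ == "__main__":
    live = {"p95_response_ms": 2600, "error_rate": 0.02, "txn_success_rate": 0.99}
    print(should_rollback(live, RollbackThresholds()))  # (True, ['response time exceeded'])
```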
Configuring the quick switching entry comprises developing a configuration management system that supports quick-switching feature flags, deploying a distributed configuration center to ensure that configuration changes take effect in real time, and dynamically configuring the routing layer to realize fast traffic switching.
Establishing the data rollback mechanism comprises designing an incremental data backup strategy, adopting a data snapshot function to support data recovery at a specific point in time, setting up a data version control system to track the data change history, implementing a transaction log replay mechanism to realize fine-grained data rollback, and designing a cross-service data consistency rollback strategy.
the rollback exercise comprises the steps of simulating a rollback scene in a test scene, evaluating the influence degree of rollback operation on each business domain, optimizing a rollback flow and executing training operation.
The method comprises simulating rollback scenarios in a test environment, constructing a test environment similar to the production environment, designing multiple rollback scenarios to cover different trigger conditions, simulating various system faults and performance problems, executing the end-to-end rollback flow, measuring the execution time of the rollback operation, determining the time required for the system to return to a usable state and the data loss risk during the rollback, calculating the potential financial loss during the rollback, evaluating the impact of the rollback on customer experience and company reputation, analyzing the bottlenecks and problems found in the rollback exercise, and using rollback scripts and automation tools to improve the data rollback strategy and reduce the risk of data loss.
In some alternative examples, rollback operations may also include formulating rollback operation flows and responsibility division mechanisms. For example, establishing evaluation and approval flows of rollback decisions, defining steps and checkpoints of rollback operations, assigning responsibilities and permissions of each role in the rollback process, establishing system verification and business validation flows after rollback, establishing real-time communication mechanisms of the rollback process, and establishing problem analysis and improvement plans after rollback.
The personnel involved are trained to be familiar with rollback operations.
Monitoring and early-warning operations are performed, including establishing system performance monitoring metrics, such as defining Key Performance Indicators (KPIs) and Service Level Agreements (SLAs), implementing a comprehensive system health check mechanism, monitoring the response time and throughput of each micro service, tracking database performance and query execution time in real time, and monitoring network delay and bandwidth usage; setting alert thresholds, such as setting performance baselines based on historical data and business demand, configuring dynamic thresholds that adapt to business peak-valley variations, implementing a multi-level early-warning mechanism covering warning, severity and urgency levels, setting composite early-warning rules that integrate multiple metrics, and establishing an early-warning escalation mechanism to ensure timely handling; and implementing real-time log analysis, such as collecting and storing the logs of all services centrally, performing real-time log stream processing and analysis, identifying abnormal log patterns using machine learning algorithms, and establishing a fast response mechanism.
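The dynamic-threshold idea can be sketched as below; the baseline window length, sigma multipliers and alert levels are illustrative assumptions.

```python
# Illustrative dynamic early-warning thresholds: baselines are derived from a
# sliding window of historical samples, and breaches are graded into levels.
from collections import deque
from statistics import mean, pstdev


class DynamicThreshold:
    def __init__(self, window=288, warn_sigma=2.0, severe_sigma=3.0, urgent_sigma=4.0):
        self.history = deque(maxlen=window)        # e.g. one day of 5-minute samples
        self.levels = [("urgent", urgent_sigma), ("severe", severe_sigma), ("warning", warn_sigma)]

    def observe(self, value):
        """Record a sample and return the triggered alert level, if any."""
        level = None
        if len(self.history) >= 30:                # need a minimal baseline first
            mu, sigma = mean(self.history), pstdev(self.history) or 1e-9
            for name, k in self.levels:            # check the most severe level first
                if value > mu + k * sigma:
                    level = name
                    break
        self.history.append(value)
        return level


if __name__ == "__main__":
    monitor = DynamicThreshold()
    for sample in [100] * 60:                      # normal response times (ms)
        monitor.observe(sample)
    print(monitor.observe(100), monitor.observe(450))  # None, then an alert level
```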
Through the detailed steps above, the embodiment of the invention can establish a comprehensive and effective service domain reconstruction rollback mechanism. This mechanism includes not only fast switching and data rollback capabilities at the technical level, but also processes, division of responsibilities and training at the organizational level. Through regular rollback exercises, the embodiment of the invention can continuously optimize the rollback process and improve the response capability of teams. Meanwhile, the powerful monitoring and early-warning system helps the embodiment of the invention to discover problems early and make rollback decisions quickly when necessary. These preparations can greatly reduce the risk in the reconstruction process, ensure that the impact on the business is minimized even when accidents occur, and ensure the stability and reliability of the system.
In some embodiments, the end-to-end encryption operation may also be performed on data transmission and storage in the ERP system when data migration or data storage is performed.
In this step, embodiments of the present description will implement comprehensive end-to-end encryption of data transmission and storage in an ERP system. This means that the entire life cycle of the data from generation to use is in an encrypted state. The method comprises the steps of constructing a multi-level encryption system by adopting a national encryption algorithm and an international general encryption algorithm, performing dynamic desensitization operation on sensitive data in the ERP system to process the sensitive data, and performing privacy-enhanced conditional text anonymization operation by adopting private attribute randomization, wherein the operation comprises the steps of identifying private attributes needing anonymization and generating random parameters, and changing the private attributes by adopting a randomization algorithm and reconstructing text contents according to conditional text semantics.
In one embodiment, the TLS/SSL protocol is used at the data transport layer to ensure security of the data during network transport, and a strong encryption algorithm, such as AES-256, is used on sensitive information stored in the database, and an encryption key management system is implemented to periodically rotate the keys and ensure secure storage of the keys.
Specifically, in order to meet national information security standards and requirements, the embodiment of the specification adopts a hybrid encryption scheme combining national cryptographic algorithms and internationally accepted encryption algorithms: for example, the SM2 algorithm is used for asymmetric encryption, serving key exchange and digital signatures, and the SM4 algorithm is used for symmetric encryption to protect the confidentiality of large amounts of data. International algorithms such as AES and RSA are combined to construct a multi-level encryption system, thereby ensuring security while taking system performance and compatibility into account.
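A minimal envelope-encryption sketch in this spirit is shown below; it uses AES-256-GCM from the `cryptography` package for the bulk data layer, and the key-wrapping step is only stubbed. In a real deployment SM2/SM4 or RSA key wrapping would replace the stub, and key management would sit in a dedicated key management system.

```python
# Illustrative envelope encryption for stored ERP records: each record is
# encrypted with a fresh AES-256-GCM data key; wrapping that data key with a
# master key (SM2 / RSA / KMS) is left as a stub here.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def wrap_data_key(data_key: bytes) -> bytes:
    # Stub: in production this would call the KMS / SM2 / RSA key-wrapping API.
    return data_key


def encrypt_record(plaintext: bytes, associated_data: bytes = b"erp-record"):
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, associated_data)
    return {"wrapped_key": wrap_data_key(data_key), "nonce": nonce, "ciphertext": ciphertext}


def decrypt_record(envelope: dict, associated_data: bytes = b"erp-record") -> bytes:
    data_key = envelope["wrapped_key"]             # stub: unwrap via KMS in production
    return AESGCM(data_key).decrypt(envelope["nonce"], envelope["ciphertext"], associated_data)


if __name__ == "__main__":
    box = encrypt_record(b"supplier=ACME;amount=1200.00")
    print(decrypt_record(box))
```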
Dynamic desensitization of sensitive data is then implemented. Dynamic desensitization is a key technology for protecting sensitive data: it processes sensitive information in real time when the data is accessed. It comprises establishing a sensitive data classification system that clearly defines the data types needing desensitization, such as personal identity information and financial data; developing a dynamic desensitization rule engine that determines the desensitization strategy dynamically according to user role, access context and data type; implementing various desensitization techniques, including but not limited to data masking, data substitution, scoping and tokenization; implementing the desensitization logic at the database query layer to ensure that sensitive data is desensitized before being returned to the application layer; and establishing a desensitization audit log that records all desensitization operations for subsequent security audit and compliance checks.
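The rule-engine idea can be sketched as follows; the roles, field names and masking rules are hypothetical examples rather than the rule set of this embodiment.

```python
# Illustrative dynamic desensitization rule engine: the masking applied to a
# field depends on the requesting user's role.
import re


def mask_id_number(value: str) -> str:
    return value[:3] + "*" * (len(value) - 7) + value[-4:]


def mask_phone(value: str) -> str:
    return re.sub(r"(\d{3})\d{4}(\d{4})", r"\1****\2", value)


# role -> {field: masking function}; "auditor" sees raw data, "clerk" does not.
RULES = {
    "clerk": {"id_number": mask_id_number, "phone": mask_phone, "salary": lambda v: "***"},
    "auditor": {},
}


def desensitize(record: dict, role: str) -> dict:
    rules = RULES.get(role, {})
    return {k: rules.get(k, lambda v: v)(v) for k, v in record.items()}


if __name__ == "__main__":
    row = {"name": "Li Lei", "id_number": "510104199001011234",
           "phone": "13812345678", "salary": "18000"}
    print(desensitize(row, "clerk"))    # masked view
    print(desensitize(row, "auditor"))  # full view
```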
Finally, privacy enhanced conditional text anonymization is realized by adopting private attribute randomization, wherein the technology can furthest protect personal privacy while preserving the use value of data.
The method specifically comprises analyzing the text data in the ERP system and identifying attributes possibly related to personal privacy, such as names, IDs and addresses; establishing a private attribute dictionary and rule base that support automatic identification and marking of information needing anonymization, and using natural language processing technology to identify private information implied in the context; generating a unique randomization parameter for each identified private attribute, with a cryptographically secure random number generator ensuring the unpredictability of the parameters; and finally, applying a series of randomization algorithms such as substitution, additively homomorphic encryption and local differential privacy, where the algorithm is selected according to the data type and usage scenario and the generated random parameters are used to transform the private attributes. The semantic structure of the text is analyzed, key semantic elements and relations are identified, and natural language generation technology is used to reconstruct the text containing the anonymized private attributes, thereby ensuring the grammatical and semantic consistency and naturalness of the reconstructed text.
And optionally, developing an anonymization quality assessment model, and assessing from two dimensions of privacy protection degree and information retention degree. For example, the privacy protection intensity of the anonymized data is verified by using a differential privacy theory, and information entropy analysis is performed to ensure that the loss of key information in the anonymization process is within an acceptable range.
Through the detailed steps, the method and the device can realize comprehensive and deep data encryption and privacy protection in the ERP system. The method not only meets the increasingly strict data protection regulation requirements, but also provides powerful data security for enterprises, and simultaneously maintains the availability and value of the data.
Identity authentication and access control are executed to determine the permission level of the accessing user, so that only the data in the ERP system matching that permission level is displayed.
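A minimal sketch of permission-level data filtering follows; the permission levels and field classifications are assumptions for illustration.

```python
# Illustrative permission-level filtering: a record's fields are exposed only
# up to the requesting user's clearance level.
FIELD_LEVELS = {          # hypothetical classification of ERP record fields
    "order_id": 1,
    "customer_name": 2,
    "unit_price": 3,
    "profit_margin": 4,
}


def visible_view(record: dict, user_level: int) -> dict:
    """Return only the fields whose required level does not exceed the user's."""
    return {k: v for k, v in record.items() if FIELD_LEVELS.get(k, 99) <= user_level}


if __name__ == "__main__":
    order = {"order_id": "SO-1001", "customer_name": "ACME",
             "unit_price": 12.5, "profit_margin": 0.31}
    print(visible_view(order, user_level=2))  # clerk: order id + customer only
    print(visible_view(order, user_level=4))  # finance manager: full record
```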
A log collection system is arranged in the ERP system and performs abnormal behavior detection. The abnormal behavior detection comprises training an anomaly detection model using a value penalty auxiliary control method without rewards or demonstration learning examples together with a discrete latent variable enhanced continuous diffusion model, deploying the anomaly detection model on the working nodes of the ERP system to detect abnormal behaviors, and generating abnormal behavior alarms and detailed analysis reports.
Specifically, the abnormal behavior detection based on the AI assistance is realized, so that the safety and the reliability of the system can be greatly improved. This process involves multiple complex steps, each incorporating advanced machine learning techniques.
Specifically, a value penalty auxiliary control method without rewards or demonstration learning examples is adopted; this step introduces a novel learning method that does not depend on a traditional reward mechanism or a large amount of labeled data. The control method comprises establishing a behavior baseline model by analyzing historical data to build a standard model of normal user behaviors and system operations; designing a cost function for evaluating the 'normality' of a behavior; implementing a penalty mechanism in which operations deviating from the normal behavior pattern are penalized, the penalty being reflected in the cost function score; adaptive learning, in which the system continuously adjusts and optimizes the cost function through continuous observation and learning to improve its ability to identify novel abnormal behaviors; and unsupervised anomaly detection, in which the system autonomously learns to identify potential abnormal behaviors without explicitly labeled anomaly samples, so that this function does not depend on external rewards but on the internal rules of the system and the predefined security policy.
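A heavily simplified sketch of the cost-function idea follows; the behavior features, penalty weights and baseline statistics are illustrative assumptions and the sketch is not the patented method itself.

```python
# Illustrative value-penalty style scoring: behaviour features are compared to
# a learned baseline, deviations are penalised, and a high total cost flags an anomaly.
from statistics import mean, pstdev


class BehaviorBaseline:
    def __init__(self, history):
        # history: list of feature dicts, e.g. {"requests_per_min": 12, "failed_logins": 0}
        keys = history[0].keys()
        self.stats = {k: (mean(h[k] for h in history),
                          pstdev(h[k] for h in history) or 1.0) for k in keys}

    def cost(self, sample, weights=None):
        """Penalty grows with the deviation of each feature from its baseline."""
        weights = weights or {k: 1.0 for k in self.stats}
        total = 0.0
        for k, (mu, sigma) in self.stats.items():
            deviation = abs(sample[k] - mu) / sigma
            total += weights[k] * max(0.0, deviation - 1.0)   # tolerate one sigma
        return total

    def is_anomalous(self, sample, threshold=3.0):
        return self.cost(sample) > threshold


if __name__ == "__main__":
    normal = [{"requests_per_min": 10 + i % 3, "failed_logins": 0} for i in range(100)]
    baseline = BehaviorBaseline(normal)
    print(baseline.is_anomalous({"requests_per_min": 11, "failed_logins": 0}))   # False
    print(baseline.is_anomalous({"requests_per_min": 300, "failed_logins": 8}))  # True
```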
In addition, a discrete latent variable enhanced continuous diffusion model is introduced to better capture the temporal characteristics and latent patterns of the behavior data. This comprises constructing, based on the time-series data of user behaviors and system logs, a continuous diffusion model capable of simulating the data generation process; introducing discrete latent variables into the continuous diffusion model to capture the discrete state transitions in behavior patterns; designing an encoder-decoder structure, i.e. a network capable of mapping observed behavior data to a latent space and reconstructing the behavior data from that latent space; optimizing the training process using variational inference so that the model both reconstructs data accurately and remains sensitive to abnormal behaviors; and detecting anomalies by comparing the differences between actually observed behaviors and the behaviors generated by the trained model.
The methods developed in the first two steps are combined to train a powerful anomaly detection model, comprising data preprocessing (cleaning, standardizing and extracting features from the collected user behavior data and system logs), model architecture design (designing an end-to-end anomaly detection network that combines the value penalty auxiliary control and the discrete latent variable enhanced diffusion model), staged training (self-supervised learning on a large amount of unlabeled data followed by fine-tuning on a small amount of labeled data), cross-validation (using k-fold cross-validation and similar techniques to ensure the generalization ability of the model), and model interpretation (introducing attention mechanisms and explainable AI techniques so that the decision process of the model is more transparent). Continuous learning is also included, namely designing an online learning mechanism so that the model can keep learning from new data and adapt to a constantly changing environment.
Finally, the anomaly detection model is deployed into the production environment to realize real-time anomaly detection, which comprises streaming data processing, real-time feature extraction, model inference optimization, multi-scale analysis, context awareness and adaptive thresholds. Streaming data processing builds a real-time data pipeline to efficiently process continuously generated system logs and user behavior data; real-time feature extraction develops efficient algorithms capable of extracting key features from the data stream in real time; model inference optimization uses techniques such as model quantization and pruning to achieve low-latency real-time inference; multi-scale analysis performs short-term (second-level), medium-term (minute-level) and long-term (hour/day-level) behavior analysis simultaneously to comprehensively capture different types of anomalies; context awareness takes contextual information such as user role, time and location into account to improve detection accuracy; and the adaptive threshold dynamically adjusts the anomaly decision threshold according to factors such as system load and time.
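The multi-scale analysis idea can be sketched as below, scoring the same metric stream against three sliding windows of different lengths; the window sizes and the scoring rule are illustrative assumptions.

```python
# Illustrative multi-scale analysis: the same metric stream is scored against
# short, medium and long windows so that both bursts and slow drifts are caught.
from collections import deque
from statistics import mean


class MultiScaleDetector:
    def __init__(self):
        # assumed windows: roughly seconds, minutes and hours worth of samples
        self.windows = {"short": deque(maxlen=10),
                        "medium": deque(maxlen=120),
                        "long": deque(maxlen=1440)}

    def observe(self, value, ratio=2.0):
        """Flag the scales at which the new value is far above the window mean."""
        flagged = []
        for scale, window in self.windows.items():
            if len(window) >= window.maxlen // 2 and value > ratio * mean(window):
                flagged.append(scale)
            window.append(value)
        return flagged


if __name__ == "__main__":
    detector = MultiScaleDetector()
    for v in [50] * 200:                 # steady baseline traffic
        detector.observe(v)
    print(detector.observe(400))         # burst flagged at the filled scales
```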
Abnormal behavior alerts and detailed analysis reports may then be generated to convert the detection results into actionable information supporting a security team quick response.
The method comprises classifying alarms according to the severity and potential impact of the anomaly to ensure that important alarms are handled first, correlating the detected anomaly with historical events and known threat intelligence to provide a comprehensive threat assessment, developing an intuitive dashboard that displays the anomaly detection results, trend analysis and the overall security state of the system, designing report templates to automatically generate detailed reports containing anomaly details, impact assessment and suggested actions, and finally providing preliminary response suggestions and mitigation measures based on the type and severity of the detected anomaly, while continuously optimizing the model and the report generation process.
Through the series of steps, the ERP system has advanced AI auxiliary abnormal behavior detection capability, can timely discover and cope with various potential security threats, and greatly improves the overall safety and reliability of the system. The method not only can detect the known attack mode, but also can identify novel and unseen abnormal behaviors, thereby providing omnibearing security for enterprises.
In some embodiments of the present disclosure, in the reconfiguration process of the ERP system, the source code security audit is a key step of ensuring the security of the system, so the reconfiguration method may further include performing the security audit on the source code of the ERP system.
The method comprises classifying and grading the source code of the ERP system, generating multi-dimensional security targets, adopting a multi-objective combinatorial optimization framework based on large-scale hierarchical population synthesis, executing code scanning, performing audit operations on source code of different types and different levels, and generating a security audit report and remediation suggestions according to the results of the audit operations.
In an alternative example, an innovative multi-objective combinatorial optimization framework based on large-scale hierarchical population synthesis may be employed to significantly improve the efficiency and effectiveness of the audit.
The multi-objective combinatorial optimization framework based on large-scale hierarchical population synthesis allows efficient security auditing of a highly complex code base, and comprises constructing a multi-level optimization framework in which each layer represents a code audit strategy at a different abstraction level; initializing a population for each layer, in which each individual represents a possible combination of audit strategies; defining a fitness function, i.e. a multi-objective fitness function that considers factors such as audit coverage, resource consumption and vulnerability discovery rate; implementing an evolutionary algorithm, using an improved genetic algorithm or particle swarm optimization to exchange and optimize information among the layers; balancing exploration and exploitation, i.e. balancing global search (exploring new strategies) and local search (improving existing strategies) during optimization; and using distributed computing technology to accelerate the evaluation and evolution of the large-scale population.
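A heavily simplified, single-layer sketch of the evolutionary search is given below; the module list, cost figures and fitness weights are invented for illustration, and the framework described above is hierarchical and distributed rather than this flat toy version.

```python
# Illustrative genetic search over audit strategies: an individual is a bit
# vector choosing which modules receive a deep audit; fitness trades coverage
# of high-risk code against audit cost.
import random

MODULES = [  # (name, risk score, audit cost) -- hypothetical values
    ("auth", 9, 5), ("payments", 8, 6), ("reporting", 4, 3),
    ("ui", 2, 2), ("integration", 7, 5), ("batch", 3, 4),
]
BUDGET = 14


def fitness(individual):
    risk_covered = sum(m[1] for m, bit in zip(MODULES, individual) if bit)
    cost = sum(m[2] for m, bit in zip(MODULES, individual) if bit)
    return risk_covered - 5 * max(0, cost - BUDGET)   # penalise exceeding the budget


def evolve(pop_size=30, generations=40, mutation_rate=0.1):
    population = [[random.randint(0, 1) for _ in MODULES] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(MODULES))
            child = a[:cut] + b[cut:]                 # one-point crossover
            children.append([1 - g if random.random() < mutation_rate else g
                             for g in child])
        population = parents + children
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print([m[0] for m, bit in zip(MODULES, best) if bit], fitness(best))
```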
Further, for more targeted auditing, the source code may also be systematically categorized, where the division may be based on multiple dimensions: 1) Functional modules - the code is divided into different modules according to its function, such as user authentication, data processing and network communication. 2) Risk ranking - each module is assigned a risk rank based on the importance, complexity and potential security impact of the code. 3) Technology stack classification - the code is classified according to the programming language, framework and libraries used, so that specific audit rules can be applied. 4) Update frequency analysis - frequently changed "hot spot" code regions that may require more frequent auditing are identified. 5) Dependency graph construction - the dependencies between code modules are analyzed and visualized to understand potential security propagation paths. 6) Historical vulnerability association - code modules are associated with the types of vulnerabilities discovered in the past, so as to identify high-risk areas.
And finally, generating a multi-dimensional safety target so as to comprehensively ensure the safety of the system.
For example, vulnerability type coverage-ensure the audit covers the OWASP Top 10, common vulnerability types, and security risks specific to the ERP system.
Compliance requirements-compliance targets are defined according to industry standards and regulatory requirements (e.g., GDPR, PCI DSS, etc.).
Performance impact assessment-security and efficiency are balanced by considering the potential impact of security measures on system performance.
Maintainability goal-ensuring that security practices do not unduly increase code complexity, affecting long-term maintenance.
Security boundary integrity-the boundary between different security domains is assessed and enforced to prevent unauthorized access.
Encryption standard-defining and executing proper encryption standard to protect sensitive data.
An audit strategy is then generated using the multi-objective combinatorial optimization framework based on large-scale hierarchical population synthesis: optimized audit strategies are generated for different code modules, audit resources are allocated reasonably based on the risk assessment results with a focus on high-risk areas, the most suitable static and dynamic analysis tools are selected for different types of code, custom code audit rules are generated based on the specific requirements of the ERP system, the audit depth and breadth are dynamically adjusted according to the complexity and risk level of the code, and an optimal audit schedule is formulated to balance audit frequency against development progress.
Finally, a comprehensive report is generated and feasible improvement suggestions are provided. For example: vulnerability classification and summarization-discovered security issues are classified and summarized, and key risks are highlighted; severity assessment-a severity level is assigned to each discovered issue to help the team address the key problems first; root cause analysis-the root causes of security issues are analyzed in depth to avoid recurrence of similar problems; repair advice-specific remediation suggestions are provided for each security issue, including code examples and best practices; trend analysis-historical audit results are compared to identify improving or deteriorating trends in the security posture; visual presentation-charts and visualization tools are used to present audit results and security states; and knowledge base construction-audit findings and remediation experience are organized into a knowledge base to serve future development and audit work.
In some alternative examples, an audit process combining automated code scanning and manual audit may also be performed, i.e., combining automated tools and manual expertise, to achieve efficient and comprehensive code audit to reduce false positives.
By implementing this series of steps, the source code security audit process of the ERP system will become more systematic, intelligent and efficient. The method not only can comprehensively identify and repair the existing security holes, but also can continuously improve development practice and improve the security of the system from the source. Meanwhile, through optimizing resource allocation and strategy generation, the maximum safe benefit can be realized in limited time and resources, and powerful guarantee is provided for reliable operation of the ERP system.
In some embodiments of the present disclosure, to further improve data security, data on the ERP system may be backed up and restored.
The method comprises executing remote multi-center data backup and data-priority-based backup scheduling, wherein the data-priority-based backup scheduling comprises analyzing the relevance and dependencies among different data in the ERP system based on a dependency-aware priority adjustment technique to determine the priority of each data item on the ERP system, establishing a critical time-sensitive network flow model that preferentially allocates time slots to higher-priority data so that it is backed up first, and adjusting the priority of data according to its change frequency and backup interval.
In an alternative example, in an ERP system, an efficient and reliable data backup strategy is critical to ensuring business continuity and data security. By implementing priority-based backup scheduling, the protection of critical data can be maximized within a limited backup window, and with the dependency-aware priority adjustment technique the backup priority can be dynamically adjusted according to the complex dependency relationships among data. The method comprises building a dependency graph, i.e. a comprehensive data dependency graph reflecting the associations between different data sets in the ERP system; designing a mathematical model that calculates the initial priority of data according to the depth and breadth of its dependencies; analyzing cascade impact, i.e. evaluating the cascade effect that the loss of a certain data set might cause on the whole system; dynamically assigning weights to different data sets according to the real-time state of the business processes; building a feedback loop that continuously optimizes the priority calculation according to actual backup performance and recovery test results; and anomaly detection, i.e. implementing a detection mechanism to quickly identify and respond to sudden changes in data dependencies.
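A minimal sketch of depth-and-breadth-based initial priority follows; the dependency graph and weighting are hypothetical.

```python
# Illustrative dependency-aware priority: a data set's initial backup priority
# grows with how many other data sets (directly or transitively) depend on it.
from collections import defaultdict

# edges: dependent -> depended-on (hypothetical ERP data sets)
DEPENDS_ON = {
    "sales_orders": ["customers", "products"],
    "invoices": ["sales_orders", "customers"],
    "shipments": ["sales_orders", "warehouses"],
}


def dependents_closure(graph):
    """Invert the graph and collect all transitive dependents of each data set."""
    inverted = defaultdict(set)
    for node, deps in graph.items():
        for dep in deps:
            inverted[dep].add(node)

    def reach(node, seen):
        for child in inverted.get(node, ()):
            if child not in seen:
                seen.add(child)
                reach(child, seen)
        return seen

    return {node: reach(node, set()) for node in set(graph) | set(inverted)}


def initial_priorities(graph, base=1.0, weight=2.0):
    closure = dependents_closure(graph)
    return {node: base + weight * len(deps) for node, deps in closure.items()}


if __name__ == "__main__":
    for name, prio in sorted(initial_priorities(DEPENDS_ON).items(),
                             key=lambda kv: -kv[1]):
        print(f"{name}: {prio}")   # data sets that many others depend on rank highest
```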
Next, a hybrid critical TSN (time sensitive network) flow model is built, applying the concept of time sensitive networks to data backups to better handle data flows of different priorities. The method allocates proper time slots for different types of data streams, ensures the timely backup of high-priority data, reserves necessary network bandwidth for key data streams, and prevents congestion in the backup process.
Finally, the priority of backup tasks is dynamically adjusted, realizing a system that can adapt backup task priority to real-time conditions. This comprises a real-time monitoring system (deploying monitoring tools to track key indicators such as data change rate and system load in real time), a priority scoring mechanism (establishing a scoring system that comprehensively considers factors such as data importance, change frequency and time since the last backup), dynamic queue management (a queue management system that can dynamically reorder backup tasks according to their real-time priorities), resource contention handling (an intelligent decision mechanism for when multiple high-priority tasks compete for resources), an emergency insertion mechanism (allowing urgent backup tasks to be quickly inserted into the current backup queue), and priority adjustment auditing (recording all priority adjustment operations to facilitate subsequent analysis and optimization).
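A minimal sketch of the scoring mechanism and dynamic queue is shown below; the weighting of importance, change frequency and backup age is an assumption.

```python
# Illustrative dynamic backup queue: tasks are scored from importance, change
# frequency and time since last backup, and the queue always yields the
# currently highest-scoring task.
import heapq
import time


def priority_score(task, now=None):
    now = now or time.time()
    hours_since_backup = (now - task["last_backup_ts"]) / 3600.0
    return (3.0 * task["importance"]          # assumed weights
            + 2.0 * task["changes_per_hour"]
            + 1.0 * hours_since_backup)


class BackupQueue:
    def __init__(self):
        self._heap = []

    def push(self, task):
        # negative score because heapq is a min-heap
        heapq.heappush(self._heap, (-priority_score(task), task["name"], task))

    def pop(self):
        return heapq.heappop(self._heap)[2]


if __name__ == "__main__":
    q = BackupQueue()
    now = time.time()
    q.push({"name": "general_ledger", "importance": 9, "changes_per_hour": 40,
            "last_backup_ts": now - 2 * 3600})
    q.push({"name": "hr_documents", "importance": 4, "changes_per_hour": 1,
            "last_backup_ts": now - 24 * 3600})
    print(q.pop()["name"])   # general_ledger is backed up first
```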
And optionally, optimizing a backup window to ensure timely backup of the critical data, comprising:
Backup window management-the backup window is managed effectively to maximize backup efficiency; for example, the business operation pattern is analyzed to identify the optimal backup time window, an incremental backup strategy is implemented to reduce the time and resources required for each backup, and parallel backup techniques are used to back up multiple non-conflicting data sets simultaneously.
Compression and deduplication-efficient data compression and deduplication techniques are applied to reduce the amount of backup data.
Task splitting-the backup task is divided into multiple small tasks, improving overall backup flexibility.
Adaptive scheduling-the execution time of backup tasks is dynamically adjusted according to the real-time system load and network conditions.
Backup performance is monitored and adjusted to ensure that the Recovery Point Objective (RPO) is met, including displaying the current RPO status of each key data set and tracking key performance indicators such as backup completion time and data transfer rate; notifying the administrator in time when an RPO is about to be violated and analyzing the root cause of backup failures or delays; performing forward-looking backup capacity planning based on historical data and growth trends; and arranging periodic recovery tests to verify the effectiveness of backups and the attainability of the Recovery Time Objective (RTO), so as to establish a continuous improvement process in which the whole backup strategy is periodically reviewed and optimized.
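A minimal RPO-compliance check might look like the following; the data set names and RPO targets are hypothetical.

```python
# Illustrative RPO check: warns when the time since the last successful backup
# approaches or exceeds each data set's Recovery Point Objective.
import time

RPO_TARGETS_S = {           # hypothetical per-data-set RPOs, in seconds
    "general_ledger": 15 * 60,
    "inventory": 60 * 60,
    "hr_documents": 24 * 3600,
}


def rpo_status(last_backup_ts: dict, warn_ratio=0.8, now=None):
    now = now or time.time()
    report = {}
    for dataset, rpo in RPO_TARGETS_S.items():
        age = now - last_backup_ts.get(dataset, 0)
        if age > rpo:
            report[dataset] = "VIOLATED"
        elif age > warn_ratio * rpo:
            report[dataset] = "AT RISK"
        else:
            report[dataset] = "OK"
    return report


if __name__ == "__main__":
    now = time.time()
    print(rpo_status({"general_ledger": now - 20 * 60,
                      "inventory": now - 50 * 60,
                      "hr_documents": now - 3600}, now=now))
    # general_ledger: VIOLATED, inventory: AT RISK, hr_documents: OK
```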
By implementing this series of steps, the data backup process of the ERP system will become more intelligent, efficient and reliable. The priority-based data backup scheduling method not only can ensure that key service data is timely and reliably protected, but also can optimize resource use and reduce the influence on daily service operation. By utilizing innovative technologies such as dependency sensing priority adjustment and TSN stream model, the system can better cope with complex data dependency and time sensitivity requirements, and a powerful and flexible data protection solution is provided for enterprises.
In summary, the embodiment of the specification provides a systematic method for reconstructing a distributed deployment service domain of an ERP system, which efficiently completes the whole process from service carding to system reconstruction through a series of normalized steps, and ensures the smooth implementation of the distributed reconstruction of the ERP system. The method not only improves the efficiency of system reconstruction, but also enhances the stability and maintainability of the system, and provides a solid foundation for the digital transformation of enterprises.
The embodiment of the present disclosure further provides a system corresponding to the service domain deployment and reconfiguration method of the distributed ERP system, as shown in fig. 4, which is a structural schematic diagram of the service domain deployment and reconfiguration system of the distributed ERP system in an embodiment of the present disclosure. As shown in fig. 4, the service domain deployment and reconfiguration system 100 of the distributed ERP system may include:
The processing unit 110 is suitable for combing the business functions of each business domain on the ERP system and the interaction relation among the data entities of the business functions among different business domains, and dividing according to the business functions of each business domain to obtain micro-services corresponding to each business domain;
The configuration unit 120 is adapted to set the data flow direction between each service domain and different service domains and set the function interfaces between each service domain and different service domains according to the service functions of each service domain and the interaction relationship;
The reconfiguration unit 130 is adapted to perform database adaptation and source code transformation operations, so that a first database of the ERP system is adapted to a second database of the migration platform system, so as to migrate the micro services corresponding to each service domain, and data flows of data entities of each micro service, each service domain, and different service domains to the migration platform system, and perform security check on the migration platform system;
The test unit 140 is adapted to perform an independent test and an integrated test on each micro service in the migration platform system, and perform an integrated test and a performance test on data in the migration platform system, so as to generate a test result;
The execution unit 150 is adapted to perform a commissioning operation based on the test results and to operate in stages in all business departments.
The specific operation and principle of the processing unit 110, the configuration unit 120, the reconstruction unit 130, the test unit 140 and the execution unit 150 may be found in the relevant description of the previous examples.
It should be understood that the above division of each unit is only a division of a logic function, and may be fully or partially integrated into a physical entity or may be physically separated when actually implemented. Furthermore, the above units may be implemented in the form of processor-invoked software.
The present invention also provides a computer system adapted to implement the distributed ERP system business domain deployment reconfiguration method, wherein the computer system includes a central processing unit (Central Processing Unit, CPU) that can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) or a program loaded from a storage portion into a random access Memory (Random Access Memory, RAM). In the RAM, various programs and data required for the system operation are also stored. The CPU, ROM and RAM are connected to each other by a bus. An Input/Output (I/O) interface is also connected to the bus.
Connected to the I/O interface are an input section including a keyboard, a mouse, and the like; an output section including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like; a storage section including a hard disk and the like; and a communication section including a network interface card such as a LAN (Local Area Network) card, a modem, and the like. The communication section performs communication processing via a network such as the Internet. The driver 310 is also connected to the I/O interface as needed. Removable media such as magnetic disks, optical disks, magneto-optical disks, and semiconductor memories are mounted on the drive as needed, so that a computer program read therefrom is installed into the storage section as needed.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowchart. In such embodiments, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium. When executed by the Central Processing Unit (CPU), the computer program performs the various functions defined in the system of the present application.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), a flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with a computer-readable computer program embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. A computer program embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Where each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present invention may be implemented by software, or may be implemented by hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from the computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the methods provided in the various alternative implementations described above.
As another aspect, the present application also provides a computer-readable medium that may be included in the electronic device described in the above embodiment, or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a touch terminal, or a network device, etc.) to perform the method according to the embodiments of the present application.
Although the embodiments of the present specification are disclosed above, the present invention is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the invention, and the scope of the invention should be assessed accordingly to that of the appended claims.

Claims (7)

1. A service domain deployment reconstruction method for a distributed ERP system, characterized in that the method comprises the following steps:
service functions of each service domain on the ERP system are combed, and interaction relations among data entities of the service functions among different service domains are obtained;
Dividing according to service functions of each service domain to obtain micro services corresponding to each service domain;
setting data flow directions among the business domains and different business domains and setting functional interfaces among the business domains and different business domains according to business functions of the business domains and the interaction relation;
performing database adaptation and source code transformation operations, so that a first database of the ERP system is adapted to a second database of a migration platform system, micro services corresponding to each service domain are migrated to the migration platform system, and data flows among data entities of each micro service, each service domain and different service domains are migrated to the migration platform system, and security inspection is performed on the migration platform system;
performing independent test and integrated test on each micro service in the migration platform system, performing integrated test and performance test on data in the migration platform system, performing test operation based on test results, and operating in all business departments in stages;
The service functions of each service domain on the carding ERP system and the interaction relationship between the data entities of the service functions among different service domains comprise:
based on the collected workflow information of each business domain in the ERP system, a business flow chart is generated by adopting a flow modeling tool, wherein the business flow chart comprises the action relation and the data flow direction of each business function of each business domain;
Analyzing a database of the ERP system, and identifying data entities in each service domain and interaction relations among the data entities by establishing an entity relation diagram;
According to the service flow chart, based on the identified service activities, decomposing the service functions of each service domain into a plurality of management modules, and determining the functional boundaries and data entities of each management module and the interaction relationship among each management module according to the calling relationship among each management module;
according to the interaction relation between the management modules and the interaction relation between the data entities, constructing a data flow diagram between the management modules, and determining the interaction relation between the data entities of the business functions between different business domains by identifying the flow paths of the data entities of the management modules between the different management modules;
The setting of the data flow direction between each service domain and different service domains according to the service functions of each service domain and the interaction relation comprises the following steps:
The method comprises the steps of identifying key events in service functions of each service domain, defining structures and metadata of each key event, establishing a key event processing and routing mechanism to store data of each service domain in areas corresponding to different routes, selecting a message queue technology matched with the key event, and adopting a predefined format of an output transmission protocol to enable the data of each service domain and the data of different service domains to be transmitted in the areas corresponding to the different routes;
The setting the functional interfaces between each service domain and different service domains comprises the following steps:
Functional interfaces among various service domains and different service domains are standardized, a version control mechanism is adopted, and discarding and migration flows of the functional interfaces are planned;
The performing database adaptation and source code transformation operations to adapt a first database of the ERP system to a second database of the migration platform system includes:
Based on SQL grammar compatibility test, identifying incompatible SQL sentences of the first database and the second database, rewriting the incompatible SQL sentences, and adjusting the JOIN operation of the first database to adapt to the optimizer of the second database;
identifying incompatible code segments of the first database and the second database in an operation environment by adopting a static code analysis tool, and carrying out code reconstruction operation to replace an incompatible third party library;
The migration of the micro service corresponding to each service domain, the data entity of each micro service, each service domain, and the data flow between different service domains to the migration platform system includes:
Designing a table structure of the second database, and partitioning the table structure to store micro services corresponding to each service domain, and data entities of each micro service, each service domain and data flow directions among different service domains in different areas;
setting a data extraction strategy and a data conversion rule of the second database in the migration process so as to process different data types and format differences and execute a rollback mechanism;
And setting an index in the table structure to perform a create, modify, or delete operation on the data in the second database.
2. The method according to claim 1, wherein in the step of analyzing the database of the ERP system and identifying the data entities in each business domain and the interaction relationship between each data entity by establishing an entity relationship graph, the method further comprises:
and establishing a data dictionary corresponding to the data entity, wherein the data dictionary comprises definition, data type, value range and business meaning of the data entity.
3. The reconfiguration method according to claim 1, wherein the independent testing and the integrated testing are performed on each of the micro services in the migration platform system, and the integrated testing and the performance testing are performed on the data in the migration platform system, and the reconfiguration method is performed in all business departments in stages based on the test results, including:
Writing test cases for each micro service based on the functions and boundary conditions of each micro service, wherein the test cases comprise normal flow cases and abnormal flow cases; identifying the dependency relationship among the micro services, and creating a simulation object to replace the dependency relationship so as to model various response scenes;
Setting up an integrated test environment to perform integrated test on each micro-service, wherein the integrated test environment comprises the steps of configuring a test server simulating a production environment, deploying all the micro-services and dependent components among the micro-services on the test server, configuring a network model, simulating the deployment topology of the micro-services, designing an end-to-end test scene, designing a test scene covering all the micro-services based on key business processes and user trips of each micro-service, executing communication test among the micro-services, including testing synchronous and asynchronous communication modes, verifying load balancing and fault transfer functions, safety authentication and authorization among the test services, simulating network delay and disconnection conditions and verifying consistency of distributed transactions;
Based on the dependency relationship between the data entities of each micro service, designing a data consistency test case to test the consistency of the copying and the buffering of the data entities and a data synchronization mechanism, executing the data entity migration test and the cross-service domain data synchronization test, and designing an automatic data verification script to verify the integrity of the data;
The method comprises the steps of defining key performance indexes and purposes of performance test, designing concurrency for operating scripts, simulating high concurrency scenes, monitoring resource use conditions of the migration platform system, and optimizing database query and index strategies based on the resource use conditions of the migration platform system;
The method comprises the steps of selecting a test point service department, making a test operation plan and an emergency plan, collecting user feedback and system operation data, and performing risk assessment based on a test operation result and the collected user feedback and system operation data.
4. The reconstruction method according to claim 1, further comprising:
When determining that the service domain reconstruction in the migration platform system fails, executing the service domain reconstruction rollback operation comprises the steps of establishing a rollback mechanism, configuring a quick switching entry, establishing a data rollback mechanism, performing rollback exercise, including simulating a rollback scene in a test scene and evaluating the influence degree of the rollback operation on each service domain, optimizing the rollback flow and executing training operation.
5. The reconstruction method according to claim 1, further comprising:
Performing end-to-end encryption operation on data transmission and storage in the ERP system, including constructing a multi-level encryption system by adopting a national encryption algorithm and an international general encryption algorithm, performing dynamic desensitization operation on sensitive data in the ERP system to process the sensitive data, and performing privacy-enhanced conditional text anonymization operation by adopting private attribute randomization, including identifying private attributes needing anonymization and generating random parameters;
performing identity authentication and access control to determine the authority level of an access user so as to display data in the ERP system which is matched with the authority level;
The method comprises the steps of setting a log collection system in the ERP system, detecting abnormal behaviors, adopting a value penalty auxiliary control method without rewards or demonstration learning examples and a discrete latent variable enhanced continuous diffusion model, training an abnormal detection model, deploying the abnormal detection model to a working node of the ERP system, detecting the abnormal behaviors, and generating an abnormal behavior alarm and a detailed analysis report.
6. The reconstruction method according to claim 1, further comprising:
Performing a security audit of the ERP system source code, including classifying and grading the source code and generating multi-dimensional security objectives, adopting a multi-objective combined optimization framework based on large-scale hierarchical population synthesis, executing code scanning, and performing audit operations on source code of different types and levels;
Backing up and restoring data of the ERP system, including executing remote multi-center data backup and priority-based data backup scheduling, wherein the priority-based data backup scheduling comprises: analyzing the relevance and dependencies among different data in the ERP system based on a dependency-aware priority adjustment technique to determine the priority of each data item, establishing a critical time-sensitive network flow model, preferentially allocating time slots to higher-priority data so that it is backed up first, and adjusting data priorities according to the change frequency and backup interval of the data.
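Claim 6 above schedules backups by priority, raising the priority of data that other data depends on and allocating time slots to higher-priority items first. Below is a simplified, dependency-aware slot allocator as a sketch; the dataset names, base priorities, change frequencies, and slot count are illustrative assumptions, and the time-sensitive network flow model itself is not modelled.

```python
from collections import defaultdict

def dependency_aware_priorities(base_priority, dependencies):
    """Raise an item's priority by the number of other items that depend on it."""
    dependents = defaultdict(int)
    for item, deps in dependencies.items():
        for dep in deps:
            dependents[dep] += 1
    return {item: base_priority[item] + dependents[item] for item in base_priority}

def allocate_backup_slots(priorities, change_frequency, slots):
    """Order items by (priority, change frequency) and hand out backup time slots."""
    order = sorted(priorities,
                   key=lambda item: (priorities[item], change_frequency.get(item, 0)),
                   reverse=True)
    return {item: slot for slot, item in enumerate(order[:slots])}

if __name__ == "__main__":
    base = {"ledger": 3, "orders": 2, "inventory": 2, "audit_log": 1}
    deps = {"orders": ["ledger", "inventory"], "audit_log": ["orders"]}
    changes = {"orders": 120, "inventory": 80, "ledger": 15, "audit_log": 200}
    prio = dependency_aware_priorities(base, deps)
    print(prio)                                    # ledger boosted by its dependents
    print(allocate_backup_slots(prio, changes, slots=3))
```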
7. A business domain deployment reconstruction system for a distributed ERP system, comprising:
a processing unit adapted to sort out the business functions of each business domain in the ERP system and the interaction relationships among the data entities of business functions across different business domains, and to partition each business domain according to its business functions to obtain the micro-service corresponding to each business domain;
wherein sorting out the business functions of each business domain in the ERP system and the interaction relationships between the data entities of business functions across different business domains comprises:
generating, based on collected workflow information of each business domain in the ERP system, a business flow chart by using a process modeling tool, wherein the business flow chart captures the activity relationships and data flow directions of each business function in each business domain;
analyzing the database of the ERP system and identifying the data entities in each business domain and the interaction relationships among those data entities by building an entity-relationship diagram;
decomposing, according to the business flow chart and based on the identified business activities, the business functions of each business domain into a plurality of management modules, and determining the functional boundaries and data entities of each management module and the interaction relationships among the management modules according to the call relationships between them;
constructing, according to the interaction relationships between the management modules and between the data entities, a data flow diagram across the management modules, and determining the interaction relationships between the data entities of business functions across different business domains by identifying the flow paths of the management modules' data entities between the different management modules;
a configuration unit adapted to set, according to the business functions of each business domain and the interaction relationships, the data flow directions between each business domain and the other business domains, and to set the functional interfaces between each business domain and the other business domains;
a reconstruction unit adapted to execute database adaptation and source code transformation operations so that a first database of the ERP system is adapted to a second database of the migration platform system, to migrate the micro-service corresponding to each business domain, the data entities of each micro-service, and the data flow directions between each business domain and the other business domains to the migration platform system, and to perform a security check on the migration platform system;
a test unit adapted to perform independent testing and integration testing on each micro-service in the migration platform system, to perform integration testing and performance testing on data in the migration platform system, and to generate test results;
an execution unit adapted to carry out trial operation and, based on the test results, to roll out operation to all business departments in stages;
wherein the configuration unit being adapted to set the data flow directions between each business domain and the other business domains according to the business functions of each business domain and the interaction relationships comprises:
identifying key events in the business functions of each business domain, defining the structure and metadata of each key event, establishing a key-event processing and routing mechanism so that the data of each business domain is stored in the area corresponding to its route, selecting a message queue technology matched to the key events, and adopting a predefined transport protocol format so that the data of each business domain and of the other business domains is transmitted to the areas corresponding to the different routes (a key-event routing sketch follows this claim);
wherein setting the functional interfaces between each business domain and the other business domains comprises:
standardizing the functional interfaces among the business domains, adopting a version control mechanism, and planning the deprecation and migration procedures for the functional interfaces;
wherein the reconstruction unit being adapted to perform database adaptation and source code transformation operations so that the first database of the ERP system is adapted to the second database of the migration platform system comprises:
identifying, based on SQL syntax compatibility tests, SQL statements that are incompatible between the first database and the second database, rewriting the incompatible SQL statements, and adjusting JOIN operations of the first database to suit the optimizer of the second database (an SQL-adaptation sketch follows this claim);
identifying, by means of a static code analysis tool, code segments that are incompatible between the runtime environments of the first database and the second database, and performing code refactoring to replace incompatible third-party libraries;
wherein migrating the micro-service corresponding to each business domain, the data entities of each micro-service, and the data flow directions between each business domain and the other business domains to the migration platform system comprises:
designing the table structure of the second database and partitioning it so that the micro-services corresponding to the business domains, the data entities of each micro-service, and the data flow directions among the business domains are stored in different areas;
setting a data extraction strategy and data conversion rules for the second database during migration so as to handle differences in data types and formats and to support a rollback mechanism;
and setting indexes on the table structure so that create, modify, and delete operations can be performed on the data in the second database.
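The key-event routing clause of claim 7 (key events, routes to storage areas, and a message queue with a predefined transport format) can be pictured with the sketch below: an in-memory queue stands in for the selected message queue technology, and the event envelope fields, routing table, and business domains are illustrative assumptions.

```python
import json
import queue
import time
import uuid

# Hypothetical routing table: key-event type -> storage area / route.
ROUTES = {"OrderCreated": "sales_area", "StockReserved": "inventory_area"}

class DomainEventBus:
    """Stand-in for the selected message-queue technology (one queue per route)."""
    def __init__(self):
        self.queues = {route: queue.Queue() for route in set(ROUTES.values())}

    def publish(self, event_type, source_domain, payload):
        envelope = {                           # predefined transport format
            "event_id": str(uuid.uuid4()),
            "type": event_type,
            "source_domain": source_domain,
            "occurred_at": time.time(),
            "payload": payload,
        }
        route = ROUTES[event_type]
        self.queues[route].put(json.dumps(envelope))
        return route

    def consume(self, route):
        return json.loads(self.queues[route].get_nowait())

if __name__ == "__main__":
    bus = DomainEventBus()
    route = bus.publish("OrderCreated", "sales", {"order_id": "SO-1001"})
    event = bus.consume(route)
    print(route, event["type"], event["payload"])
```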
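The database-adaptation clauses of claim 7 (rewriting incompatible SQL and partitioning the target table structure with indexes) might look like the sketch below, which applies simple textual rewrite rules and emits PostgreSQL-style partition and index DDL as strings; the dialect-specific rewrite rules, table definition, and business-domain partitions are illustrative assumptions rather than the patent's actual adaptation rules.

```python
import re

# Hypothetical rewrite rules from the source dialect to the target dialect.
REWRITE_RULES = [
    (re.compile(r"\bNVL\(", re.IGNORECASE), "COALESCE("),        # function rename
    (re.compile(r"\bSYSDATE\b", re.IGNORECASE), "CURRENT_TIMESTAMP"),
]

def rewrite_sql(statement):
    """Rewrite SQL constructs the second database's parser would reject."""
    for pattern, replacement in REWRITE_RULES:
        statement = pattern.sub(replacement, statement)
    return statement

def partitioned_table_ddl(table, columns, partition_column, domains):
    """Emit DDL storing each business domain's rows in its own partition, plus an index."""
    cols = ", ".join(f"{name} {ctype}" for name, ctype in columns)
    ddl = [f"CREATE TABLE {table} ({cols}) PARTITION BY LIST ({partition_column});"]
    for domain in domains:
        ddl.append(f"CREATE TABLE {table}_{domain} PARTITION OF {table} "
                   f"FOR VALUES IN ('{domain}');")
    ddl.append(f"CREATE INDEX idx_{table}_{partition_column} "
               f"ON {table} ({partition_column});")
    return "\n".join(ddl)

if __name__ == "__main__":
    print(rewrite_sql("SELECT NVL(amount, 0), SYSDATE FROM orders"))
    print(partitioned_table_ddl(
        "domain_data",
        [("id", "BIGINT"), ("business_domain", "TEXT"), ("payload", "TEXT")],
        "business_domain",
        ["finance", "inventory", "sales"],
    ))
```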

Priority Applications (1)

Application Number: CN202411186819.1A; Priority Date: 2024-08-28; Filing Date: 2024-08-28; Title: Service domain deployment reconstruction method and system for distributed ERP system


Publications (2)

CN118694812A (application publication): 2024-09-24
CN118694812B (granted publication): 2024-12-17

Family

ID=92778382

Family Applications (1)

Application CN202411186819.1A, status Active, publication CN118694812B: Service domain deployment reconstruction method and system for distributed ERP system

Country Status (1)

CN: CN118694812B

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119597363A (en) * 2024-11-14 2025-03-11 北京创想天空科技有限公司 Enterprise software system construction method and device based on micro-service
CN119273480A (en) * 2024-12-06 2025-01-07 深圳市法本信息技术股份有限公司 A data testing method and system
CN119396466A (en) * 2025-01-03 2025-02-07 北京零壹视界科技有限公司 Method, device, equipment and medium for migrating business systems to low-code platforms
CN120218056B (en) * 2025-03-14 2025-09-02 钛脉商学科技(北京)有限公司 Automatic testing method and system for message creation equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191008A (en) * 2018-09-30 2019-01-11 江苏农牧科技职业学院 A kind of micro services frame system for fish quality supervisory systems
CN114416174A (en) * 2022-01-22 2022-04-29 平安科技(深圳)有限公司 Metadata-based model reconstruction method, device, electronic device and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8849862B2 (en) * 2004-05-21 2014-09-30 Rsvpro, Llc Architectural frameworks, functions and interfaces for relationship management (AFFIRM)
CN114756174B (en) * 2022-04-18 2024-12-27 中国电信股份有限公司 A method and device for storing data in a hybrid cloud environment
CN115454452A (en) * 2022-09-22 2022-12-09 中能融合智慧科技有限公司 Application platform loading method suitable for energy industry internet platform
CN116860288B (en) * 2023-06-20 2025-06-17 昆仑数智科技有限责任公司 ERP system upgrade method, device, equipment and medium
CN117149146A (en) * 2023-08-08 2023-12-01 神州数码融信软件有限公司 Service model construction method, system, device and storage medium
CN117435582B (en) * 2023-10-11 2024-04-19 广东美尼科技有限公司 Method and device for capturing and processing ERP temporary data


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant