
CN119561987A - Request processing method, device, equipment and medium - Google Patents

Request processing method, device, equipment and medium

Info

Publication number
CN119561987A
Authority
CN
China
Prior art keywords
service instance
hash
service
target
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411735154.5A
Other languages
Chinese (zh)
Inventor
叶锋玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202411735154.5A priority Critical patent/CN119561987A/en
Publication of CN119561987A publication Critical patent/CN119561987A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/60Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/63Routing a service request depending on the request content or context
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • H04L49/9063Intermediate storage in different physical parts of a node or terminal

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the disclosure relate to a request processing method, device, equipment and medium. The method is applied to a proxy gateway and comprises: receiving a service request sent by a requester; acquiring a user identifier based on the service request; determining a target service instance based on the user identifier and service instance information of a distributed cluster; forwarding the service request to the target service instance; receiving a request result produced by the target service instance in response to the service request based on locally cached data; and forwarding the request result to the requester. The embodiments of the disclosure can route multiple service requests carrying the same user identifier to the same service instance, thereby greatly improving the hit rate of the service instance's local cache, reducing the number of calls to external interfaces, and improving the stability and efficiency of request processing.

Description

Request processing method, device, equipment and medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a method, a device, equipment and a medium for processing a request.
Background
Currently, in large-scale distributed systems, service instances are typically supported by a cluster of multiple servers. To increase the response speed of user requests, a distributed system typically caches data acquired through external interfaces. For example, when a user accesses an application, a service instance may need to call an external interface to acquire data; caching that data for a period of time reduces the number of repeated calls to the external interface and improves system performance.
However, due to the randomness of service request distribution, multiple service requests from the same user may fall on different service instances, resulting in a decrease in the cached-data hit rate, so the number of external interface calls cannot be reduced.
Disclosure of Invention
In order to solve the above technical problems or at least partially solve the above technical problems, the present disclosure provides a request processing method, apparatus, device, and medium.
The embodiment of the disclosure provides a request processing method applied to a proxy gateway, comprising: receiving a service request sent by a requester and acquiring a user identifier based on the service request; determining a target service instance based on the user identifier and service instance information of a distributed cluster; forwarding the service request to the target service instance; receiving a request result of the target service instance responding to the service request based on locally cached data; and forwarding the request result to the requester.
Optionally, the determining of the target service instance based on the user identifier and the service instance information of the distributed cluster includes: performing a hash calculation on the user identifier and the total number of current service instances of the distributed cluster based on a preset hash algorithm to obtain a target service instance identifier; and taking the service instance corresponding to the target service instance identifier in the distributed cluster as the target service instance.
Optionally, the determining of a target service instance based on the user identifier and the service instance information of the distributed cluster includes: obtaining a target hash space corresponding to the service instance identifiers of the distributed cluster; performing a hash calculation on the user identifier based on a preset hash algorithm to obtain a user hash value; mapping the user hash value to the target hash space to obtain a user hash position; determining a target service instance hash position in the target hash space based on the user hash position and a preset query direction; and taking the service instance corresponding to the target service instance hash position as the target service instance.
Optionally, the method further comprises: obtaining a service instance identifier of each service instance in the distributed cluster; performing a hash calculation on each service instance identifier based on the hash algorithm to obtain a service instance hash value of each service instance; and mapping the service instance hash value of each service instance to a pre-built initial hash space to obtain the target hash space comprising the service instance hash position corresponding to each service instance.
Optionally, the method further comprises: obtaining instance update information of the distributed cluster; determining an added service instance based on the instance update information; calculating an added service instance hash value of the added service instance and mapping it to the target hash space to obtain an added service instance hash position; determining a shared service instance hash position based on the added service instance hash position and the query direction; and taking the service instance corresponding to the shared service instance hash position as a shared service instance of the added service instance.
Optionally, the method further comprises: determining a deleted service instance based on the instance update information; determining the deleted service instance hash position on the target hash space corresponding to the deleted service instance; determining a candidate service instance hash position based on the deleted service instance hash position and the query direction; and taking the service instance corresponding to the candidate service instance hash position as the candidate service instance of the deleted service instance.
Optionally, the method further comprises: configuring a plurality of virtual service instances corresponding to each service instance in the distributed cluster; obtaining a virtual service instance identifier of each virtual service instance; performing a hash calculation on each virtual service instance identifier based on the hash algorithm to obtain a virtual service instance hash value of each virtual service instance; and mapping the virtual service instance hash value of each virtual service instance to the target hash space to obtain a virtual service instance hash position corresponding to each virtual service instance in the target hash space. The determining of a target service instance hash position in the target hash space based on the user hash position and a preset query direction, and the taking of the service instance corresponding to the target service instance hash position as the target service instance, include: determining a target virtual service instance position from all the virtual service instance positions based on the user hash position and the query direction, and obtaining the target virtual service instance corresponding to the target virtual service instance position, wherein the target virtual service instance belongs to the target service instance.
Optionally, the method further comprises: obtaining a historical service request amount and historical access service instances over a historical time period; analyzing the historical service request amount and the historical access service instances based on a preset machine learning algorithm; and adjusting the hash algorithm and/or the initial hash space based on the analysis result.
Optionally, when the service request is a first request, the request result is data that the target service instance obtains from an external system through an external interface in response to the service request, wherein the target service instance caches the data obtained from the external system locally and sets a data caching time based on the request frequency of the service request.
The embodiment of the disclosure also provides a request processing device applied to the proxy gateway, comprising a receiving and acquiring module, a determining module, and a processing module, wherein: the receiving and acquiring module is used for receiving a service request sent by a requester and acquiring a user identifier based on the service request; the determining module is used for determining a target service instance based on the user identifier and service instance information of a distributed cluster; and the processing module is used for forwarding the service request to the target service instance, receiving a request result of the target service instance responding to the service request based on locally cached data, and forwarding the request result to the requester.
The embodiment of the disclosure also provides electronic equipment comprising a processor and a memory for storing executable instructions of the processor, wherein the processor is used for reading the executable instructions from the memory and executing them to implement the request processing method provided by the embodiments of the disclosure.
The present disclosure also provides a computer-readable storage medium storing a computer program for executing the request processing method as provided by the embodiments of the present disclosure.
The disclosed embodiments also provide a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the request processing method as provided by the embodiments of the present disclosure.
According to the technical solutions provided by the embodiments of the disclosure, multiple service requests carrying the same user identifier can be routed to the same service instance. This greatly improves the hit rate of the service instance's local cache and reduces the number of external interface calls, which in turn reduces the distributed system's dependence on external resources, lowers its resource consumption and operating cost, improves its performance and stability, and reduces operation and maintenance costs. When a large number of service requests are processed, repeated external interface calls are reduced, effectively lowering request processing time and resource consumption and improving request processing efficiency.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a request processing flow provided in the related art;
Fig. 2 is a flow chart of a request processing method according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of a request processing flow provided in an embodiment of the disclosure;
FIG. 4 is a schematic diagram of a request processing flow provided in an embodiment of the disclosure;
FIG. 5 is a schematic diagram of a request processing flow provided in an embodiment of the disclosure;
FIG. 6 is a schematic diagram of a request processing flow provided in an embodiment of the disclosure;
FIG. 7 is a schematic diagram of a request processing flow provided in an embodiment of the disclosure;
FIG. 8 is a schematic diagram of a request processing flow provided in an embodiment of the disclosure;
FIG. 9 is a schematic diagram of a request processing flow provided in an embodiment of the disclosure;
FIG. 10 is a flowchart of a request processing method according to an embodiment of the disclosure;
FIG. 11 is a flowchart of a request processing method according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a request processing apparatus according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein, and it is apparent that the embodiments in the specification are only some, rather than all, of the embodiments of the present disclosure.
In existing request processing scenarios, a user may send a service request through an application program, a front-end webpage, and the like; service instances are provided by a cluster formed of multiple servers and respond to the service requests. To improve response efficiency, a service instance may cache data acquired through an external interface, that is, the cached content is stored in the memory of the service instance itself, and while the cache is valid, calls to the external interface can be reduced. However, due to the randomness of service request distribution, multiple accesses by the same UID (User Identifier) may fall on different instances, resulting in a decrease in the cache hit rate.
For ease of understanding, reference may be made to the schematic request processing flow of the related art shown in fig. 1. As shown in fig. 1, service requests are sent to the distributed cluster and randomly distributed across different service instances. For example, the first request with UID "123456" in fig. 1 is distributed to instance 1, so for UID "123456" the external interface is called a first time to obtain data from the external system and cache it in instance 1; that is, the external interface is queried according to the UID and the data is cached. The second request with UID "123456" is then distributed to instance 2, so for UID "123456" the external interface is called a second time to obtain data from the external system and cache it in instance 2.
Therefore, in the related art, when the same UID requests a service multiple times, the results are cached in each instance's local cache; but because the requests are distributed across different service instances, the cache hit rate cannot be improved, and the number of external interface calls cannot be reduced.
In summary, in the prior art, although large-scale distributed systems use local caches to reduce calls to external interfaces, the randomness of service request distribution means that multiple accesses by the same UID may be scattered across different service instances. The cached data therefore cannot be reused, the cache hit rate remains low, and the number of external interface calls remains high. The embodiments of the present disclosure are explained in detail below:
Fig. 2 is a flow chart of a method for processing a request according to an embodiment of the disclosure, where the method may be applied to a proxy gateway, such as a computer, a server, etc., and is not limited herein. As shown in FIG. 2, the method mainly comprises the following steps S202-S206:
step S202, receiving a service request sent by a requester, and acquiring a user identifier based on the service request.
In the embodiment of the present disclosure, a requester may be understood as an application program, a front-end webpage, and the like, through which a user may send a service request according to actual use requirements. For example, when a user accesses an application to watch a video, a service request for playing the video is sent through the application program.
In the embodiment of the disclosure, the user identifier uniquely identifies a user. When a service request is sent by the requester, the request contains the user identifier, which can be obtained by parsing the service request.
Step S204, a target service instance is determined based on the user identification and the service instance information of the distributed cluster.
In the embodiment of the disclosure, the service instance information includes the total number of current service instances in the distributed cluster (that is, the number of service instances currently deployed), a service instance identifier capable of uniquely identifying a service instance, and the like, selected and set according to the actual application scenario. A service instance may be understood as one server node in the distributed cluster; the distributed cluster includes a plurality of service instances, and the target service instance is one specific instance among them.
In the embodiment of the present disclosure, there are various ways of determining the target service instance based on the user identifier and the service instance information of the distributed cluster. As one example, when the number of service instances in the distributed cluster is generally unchanged, a target service instance identifier may be obtained by performing a hash calculation on the user identifier and the total number of current service instances based on a preset hash algorithm, and the service instance corresponding to that identifier in the distributed cluster is used as the target service instance.
As another example, service instances in the distributed cluster often change: new instances come online, or some instances go offline. In this case, a hash calculation is performed on the user identifier based on a preset hash algorithm to obtain a user hash value; the user hash value is mapped to a target hash space containing the hash position of each service instance, yielding a user hash position; the target service instance hash position is found in the target hash space based on the user hash position and a preset query direction; and the service instance at that position is taken as the target service instance.
As yet another example, a service instance that is frequently accessed by the user identification may be determined as the target service instance based on the historical service request amount and the historical access service instance corresponding to the user identification.
The above three ways are merely examples of determining a target service instance based on the user identification and the service instance information of the distributed cluster, and the embodiments of the present disclosure do not specifically limit the way in which the target service instance is determined based on the user identification and the service instance information of the distributed cluster.
Step S206, the service request is forwarded to the target service instance, the request result of the target service instance responding to the service request based on the locally cached data is received, and the request result is forwarded to the requester.
In the embodiment of the disclosure, when the service request is a first request, the request result is data that the target service instance obtains from the external system through the external interface in response to the service request, wherein the target service instance caches the data obtained from the external system locally and sets a data caching time based on the request frequency of the service request.
That is, when the target service instance responds to a service request of the user identifier for the first time, the data acquired from the external system needs to be cached locally. To further improve data processing efficiency and stability, a data caching time needs to be set; and to better meet user requirements, the caching time can be updated in real time according to the request frequency of the user identifier, for example, how many times the service is requested within one hour.
In the embodiment of the disclosure, when the target service instance again receives a service request with the same user identifier from the proxy gateway, it can obtain the request result by responding to the service request from the locally cached data and send it to the proxy gateway, which forwards the request result to the requester.
That is, within the data caching time, as long as the data is cached locally in the target service instance, the request result can be obtained from the locally cached data and sent to the proxy gateway, improving data processing efficiency.
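The local caching behaviour described above, including a caching time that scales with request frequency, can be sketched as follows. The class name and the frequency-to-TTL policy are illustrative assumptions, not the disclosure's concrete scheme:

```python
import time

class LocalCache:
    """Per-instance cache whose TTL can scale with request frequency.

    On a first request the instance fetches from the external
    interface and caches locally; as a hypothetical policy, hotter
    UIDs (higher request frequency) get a longer caching time.
    """
    def __init__(self, base_ttl: float = 60.0):
        self.base_ttl = base_ttl
        self._store = {}   # uid -> (data, expires_at)
        self._hits = {}    # uid -> request count (frequency proxy)

    def get(self, uid, fetch_from_external):
        now = time.monotonic()
        self._hits[uid] = self._hits.get(uid, 0) + 1
        entry = self._store.get(uid)
        if entry and entry[1] > now:
            return entry[0]                # cache hit, no external call
        data = fetch_from_external(uid)    # first request or expired
        # Illustrative policy: more frequent UIDs keep data longer.
        ttl = self.base_ttl * min(self._hits[uid], 10)
        self._store[uid] = (data, now + ttl)
        return data
```

On the first request the external fetch runs once; subsequent requests arriving within the caching time are answered from the local store without touching the external interface.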
For ease of understanding, reference may also be made to the schematic request processing flow shown in fig. 3, which takes determining the target service instance from the user identifier and the total number of current service instances of the distributed cluster as an example. Compared with fig. 1, the embodiment of the disclosure introduces a UID routing policy at the proxy gateway layer: a hash calculation is performed on the user identifier and the total number of current service instances based on a Hash algorithm to determine the target service instance, so that service requests with the same UID are always routed to the same target service instance, greatly improving the hit rate of the service instance's local cache and reducing the number of external interface calls.
Specifically, as shown in fig. 3, a proxy gateway is added between the requester and the distributed cluster, and the request processing method of the embodiment of the disclosure is applied to this proxy gateway. When the proxy gateway receives a service request, it obtains the user identifier, for example UID "123456" in fig. 3, and performs a hash calculation on the UID "123456" and the total number of current service instances to determine the target service instance, for example instance 2 in fig. 3. As long as a service request carries UID "123456", the proxy gateway routes it to instance 2 for a response; the cached data in instance 2 can thus serve multiple service requests with UID "123456", improving the local cache hit rate and reducing the number of external interface calls.
In this way, multiple service requests with the same user identifier can be routed to the same service instance. This greatly improves the hit rate of the service instance's local cache and reduces the number of external interface calls, which reduces the distributed system's dependence on external resources, lowers its resource consumption and operating cost, improves its performance and stability, and reduces operation and maintenance costs. When a large number of service requests are processed, repeated external interface calls are reduced, effectively lowering the time and resource consumption of request processing and improving request processing efficiency.
In some embodiments, determining the target service instance based on the user identifier and the service instance information of the distributed cluster includes: performing a hash calculation on the user identifier and the total number of current service instances of the distributed cluster based on a preset hash algorithm to obtain the target service instance identifier; and taking the service instance corresponding to the target service instance identifier in the distributed cluster as the target service instance.
In the embodiment of the disclosure, the hash algorithm may select a hash function, such as SHA-1, MD5, etc., according to the actual application.
In the embodiment of the disclosure, the total number of current service instances refers to the number of service instances in the distributed cluster that can currently respond to a service request. As one example, after a service instance is deployed in the distributed cluster, it can send a message to the proxy gateway, so that the proxy gateway obtains the number of all deployed service instances in the cluster. As another example, the distributed cluster is monitored at a preset frequency to obtain the total number of current service instances. As yet another example, heartbeat messages are exchanged between the proxy gateway and the service instances to determine the total number of service instances currently online, that is, the total number of current service instances.
In the embodiment of the disclosure, the target service instance identifier can uniquely identify a target service instance, and in this way, multiple service requests of the user identifier can be routed to the target service instance for responding.
Specifically, the proxy gateway obtains the total number of current service instances of the distributed cluster, for example 100 instances. Hash-modulo distribution is then performed on the user identifier UID according to this total, for example "uid.hashCode() % 100", ensuring that service requests with the same UID are routed to a fixed service instance; for example, service requests whose UID is "803803123" are always routed to the server with service instance number 23.
Specifically, the UID routing policy is implemented through the proxy gateway, ensuring that the same UID is routed to the same service instance. That is, the user's first service request calls the external system, and the data is then cached locally at the service instance; if the same UID accesses multiple times before the cached data expires, the external system need not be called again, which improves the cache hit rate and reduces the number of interface calls to the external system. As shown in fig. 3, if the service request with UID "123456" is routed by the proxy gateway to instance 2 and the data requested from the external system's interface is cached locally, then when the user sends the service request multiple times, the relevant data is taken directly from instance 2's cache to answer the service request, without acquiring the data from the external system's interface.
The above embodiment solves the problem of a low cache hit rate caused by service requests with the same UID being randomly distributed to different service instances: the proxy gateway performs hash-modulo distribution on the UID according to the total number of current service instances, so that service requests with the same UID are always routed to a specific service instance, greatly improving the hit rate of the service instance's local cache and reducing the number of external interface calls.
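The hash-modulo routing above can be sketched as follows; CRC32 stands in for the unspecified hash function (the text mentions hashCode(), SHA-1 and MD5 as candidates), and the function name is illustrative:

```python
import zlib

def route_request(uid: str, total_instances: int) -> int:
    """Map a user identifier to a fixed service instance number.

    Hashing the UID and taking the result modulo the total number of
    current service instances guarantees that every request carrying
    the same UID is routed to the same instance, as long as the total
    stays unchanged.
    """
    # CRC32 is an illustrative stand-in for the preset hash algorithm.
    hash_value = zlib.crc32(uid.encode("utf-8"))
    return hash_value % total_instances

# With 100 instances, a given UID always lands on the same instance.
instance = route_request("803803123", 100)
assert instance == route_request("803803123", 100)
```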
Based on the foregoing description, if the total number of current service instances changes, service requests of the same UID may be routed to different service instances. Service instances in a distributed cluster may go online or offline elastically, which changes the routing policy and invalidates a large portion of the cache. When a new service instance is added or an existing one is removed, the hash-modulo algorithm described above generally causes a large amount of data to be redistributed: if the total number of current service instances is N (N being a positive integer), then when N changes, the hash calculation result of most data also changes, so most data must be redistributed to new service instances, causing large-scale data migration.
For example, as shown in fig. 4, assume a distributed cluster whose initial number of service instances is 2 (i.e., N = 2), and suppose the hash value of a user identifier is 15, i.e., "hash(UID) = 15". Then "hash(UID) % 2", i.e., "15 % 2 = 1", routes the user's service request to instance 1. When the distributed cluster adds one service instance, the total number of current service instances becomes 3 (i.e., N = 3), and the same hash value 15 yields a new instance assignment, "15 % 3 = 0", so the user's service request will be routed to instance 0.
That is, when the total number of current service instances changes, the hash-modulo results of almost all user identifiers change, causing a significant drop in the cache hit rate whenever the set of service instances changes.
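The fraction of UIDs that hash-modulo routing remaps when the instance count changes can be checked with a short sketch (the UID format and instance counts below are illustrative, not taken from the embodiment):

```python
import hashlib

def stable_hash(uid: str) -> int:
    # Process-independent hash of the UID (Python's built-in hash() is
    # salted per run, so a digest-based value is used instead).
    return int(hashlib.md5(uid.encode()).hexdigest(), 16)

def route(uid: str, n_instances: int) -> int:
    # The hash-modulo routing described above: hash(UID) % N.
    return stable_hash(uid) % n_instances

# Count how many of 1000 UIDs land on a different instance when the
# cluster grows from N=2 to N=3.
uids = [f"user-{i}" for i in range(1000)]
moved = sum(1 for u in uids if route(u, 2) != route(u, 3))
print(f"{moved} of {len(uids)} UIDs were remapped")
```

For uniformly distributed hash values, a UID keeps its instance only when its hash has the same residue modulo 2 and modulo 3, which happens for roughly one UID in three; about two thirds of the UIDs are remapped, illustrating the large-scale cache invalidation described above.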
In order to ensure the hit rate of the local cache of the service instance and reduce the number of external interface calls, in some embodiments, determining the target service instance based on the user identifier and the service instance information of the distributed cluster includes: obtaining a target hash space corresponding to the service instance identifiers of the distributed cluster; performing hash calculation on the user identifier based on a preset hash algorithm to obtain a user hash value; mapping the user hash value to the target hash space to obtain a user hash position; determining a target service instance hash position in the target hash space based on the user hash position and a preset query direction; and taking the service instance corresponding to the target service instance hash position as the target service instance.
In some embodiments, the method further includes: obtaining a service instance identifier of each service instance in the distributed cluster; performing hash computation on the service instance identifiers based on the hash algorithm to obtain a service instance hash value for each service instance; and mapping the service instance hash value of each service instance to a pre-built initial hash space to obtain a target hash space including a service instance hash position corresponding to each service instance.
In the embodiment of the disclosure, an initial hash space is pre-built; that is, the whole initial hash space (usually a fixed range, for example 0 to 2^32−1) is organized into a virtual ring by connecting its start point and end point, forming the ring structure shown in fig. 5. The range can be adjusted according to the number of service instances; for example, when there are relatively many service instances, 0 to 2^32−1 can be enlarged to 0 to 2^64−1, with the specific setting chosen according to the requirements of the application scenario.
In the embodiment of the present disclosure, after the initial hash space is constructed, the service instance identifier of each service instance is obtained and hashed to obtain a service instance hash value for each service instance. The service instance hash values are mapped onto the initial hash space to obtain a target hash space that includes a service instance hash position for each service instance; that is, each service instance is mapped, through a hash function (for example, SHA-1, MD5, etc.), to a point on the initial hash space, such as the hash ring shown in fig. 5, and that point is referred to as the service instance hash position. Instance 1, instance 2, and instance 3 shown in fig. 6 represent the service instance hash positions corresponding to instance 1, instance 2, and instance 3 on the target hash space (the hash ring shown in fig. 6).
In the embodiment of the present disclosure, the service instance identifiers of a distributed cluster have a corresponding target hash space; that is, different distributed clusters have different target hash spaces. After the target hash space corresponding to the service instance identifiers of the distributed cluster is obtained, hash calculation is performed on the user identifier using the same hash algorithm to obtain a user hash value, and the user hash value is mapped onto the target hash space to obtain a user hash position; that is, the UID is mapped, using the same hash function, to a point on the hash ring, referred to as the user hash position. User identifiers K1, K2, and K3 shown in fig. 7 respectively represent the user hash positions corresponding to user identifier K1, user identifier K2, and user identifier K3 in the target hash space (the hash ring shown in fig. 7).
In the embodiment of the present disclosure, the query direction is preset and may be clockwise or counterclockwise. The target service instance hash position is determined in the target hash space based on the user hash position and the preset query direction, and the service instance corresponding to that position is taken as the target service instance. For example, by searching for the nearest service instance in the clockwise direction, each UID is assigned to a corresponding service instance; that is, each UID is handled by the first service instance whose hash position, in the clockwise direction, is greater than the current user hash value. As shown in fig. 7, instance 1 handles the service requests of user identifier K1, instance 2 handles those of user identifier K2, and instance 3 handles those of user identifier K3.
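The ring lookup described above can be sketched as follows (a minimal illustration assuming MD5 as the hash function, a 0 to 2^32−1 hash space, and hypothetical instance names):

```python
import bisect
import hashlib

RING_SIZE = 2 ** 32  # initial hash space 0 .. 2^32 - 1, joined into a ring

def ring_hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % RING_SIZE

class HashRing:
    """Consistent-hash ring; 'clockwise' = the next position >= the key's."""

    def __init__(self, instances):
        # Each service instance identifier is hashed onto the ring.
        self.positions = sorted((ring_hash(i), i) for i in instances)

    def lookup(self, uid: str) -> str:
        # First service instance clockwise from the UID's hash position,
        # wrapping around to the start of the ring if none lies ahead.
        idx = bisect.bisect_left(self.positions, (ring_hash(uid), ""))
        if idx == len(self.positions):
            idx = 0
        return self.positions[idx][1]

ring = HashRing(["instance-1", "instance-2", "instance-3"])
target = ring.lookup("user-42")
print(target)  # the same UID always resolves to the same instance
```

Because both the instance positions and the UID position depend only on the hash function, repeated lookups for the same UID are deterministic, which is what keeps the local cache of the chosen instance warm.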
In the above embodiment, even when the total number of current service instances changes, multiple service requests of the same user identifier can still be routed to the same service instance, which preserves the hit rate of each service instance's local cache and reduces the number of external interface calls. The method accommodates dynamic addition and deletion of service instances, and thus addresses the low local cache hit rate that otherwise occurs when a distributed system scales out or in.
In some embodiments, the method further comprises obtaining instance update information of the distributed cluster, determining an added service instance based on the instance update information, calculating an added service instance hash value of the added service instance to map to a target hash space to obtain an added service instance hash position, determining a shared service instance hash position based on the added service instance hash position and a query direction, and taking a service instance corresponding to the shared service instance hash position as a shared service instance of the added service instance.
In some embodiments, the method further comprises: determining a deleted service instance based on the instance update information; determining a deleted service instance hash position on the target hash space corresponding to the deleted service instance; determining a candidate service instance hash position based on the deleted service instance hash position and the query direction; and taking the service instance corresponding to the candidate service instance hash position as a candidate service instance for the deleted service instance.
In the embodiment of the disclosure, whenever a service instance is added to or deleted from the distributed cluster, the proxy gateway can acquire instance update information; for example, the proxy gateway can identify added and/or deleted service instances by receiving heartbeat messages transmitted by the service instances or by monitoring each service instance.
In the embodiment of the disclosure, after an added service instance is identified, its hash value is calculated and mapped to the target hash space to obtain an added service instance hash position; a shared service instance hash position is then determined along the query direction, and the service instance at that position is taken as a shared service instance of the added service instance. For example, for the added instance 4 shown in fig. 8, following the clockwise direction, instance 2 is determined as the shared service instance of instance 4; that is, instance 2 and instance 4 are jointly responsible for the service requests of user identifier K2. Conversely, if instance 1 in fig. 7 is deleted, then based on instance 1's hash position and the clockwise direction, the service instance at the candidate service instance hash position is determined as the candidate service instance for the deleted instance; that is, instance 2 takes over the service requests of user identifier K1 in addition to those of user identifier K2.
In the embodiment of the disclosure, when a new service instance is added, its position on the target hash space is determined, and the new instance only needs to take over part of the data of the next service instance along the query direction (e.g., clockwise); the data distribution of all other service instances is unchanged, which reduces the amount of data reassignment. Likewise, when a service instance is removed, the data it carried is transferred to the next service instance along the query direction, with the data distribution of the other service instances unchanged, so the amount of cache invalidation is small.
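This limited remapping can be verified with a small sketch (instance names and counts are assumed; MD5 and a 2^32 ring as before): when one instance is added, the only UIDs whose route changes are exactly those taken over by the new instance.

```python
import bisect
import hashlib

RING_SIZE = 2 ** 32

def ring_hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % RING_SIZE

def build_ring(instances):
    # Sorted (hash position, instance) pairs form the ring.
    return sorted((ring_hash(i), i) for i in instances)

def lookup(ring, uid):
    # First instance clockwise from the UID's position, with wrap-around.
    idx = bisect.bisect_left(ring, (ring_hash(uid), ""))
    return ring[0][1] if idx == len(ring) else ring[idx][1]

before = build_ring(["instance-1", "instance-2", "instance-3"])
after = build_ring(["instance-1", "instance-2", "instance-3", "instance-4"])

uids = [f"user-{i}" for i in range(1000)]
moved = [u for u in uids if lookup(before, u) != lookup(after, u)]

# Every remapped UID now belongs to the newly added instance; all other
# UIDs keep their original instance, so their cached data stays valid.
print(f"{len(moved)} of {len(uids)} UIDs moved")
```

This contrasts with the hash-modulo case, where growing the cluster remaps the majority of UIDs rather than only the arc claimed by the new instance.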
In the above embodiment, when the distributed system scales out or in, multiple service requests of the same user identifier can still be routed to the same service instance, which preserves the hit rate of the service instance's local cache and further improves data processing efficiency.
Based on the description of the foregoing embodiment, as shown in fig. 8, after instance 4 is added, the traffic originally carried by instance 2 is shared between instance 2 and instance 4, so the traffic of each of these two instances is halved relative to instances 1 and 3, and the load becomes unbalanced.
In order to further balance the load, in some embodiments the method further includes: configuring a plurality of virtual service instances corresponding to each service instance in the distributed cluster; obtaining a virtual service instance identifier for each virtual service instance; performing hash computation on the virtual service instance identifiers based on the hash algorithm to obtain a virtual service instance hash value for each virtual service instance; and mapping the virtual service instance hash value of each virtual service instance to the target hash space to obtain a virtual service instance hash position corresponding to each virtual service instance in the target hash space.
In some embodiments, determining the target service instance hash position in the target hash space based on the user hash position and the preset query direction, and taking the service instance corresponding to that position as the target service instance, includes: determining a target virtual service instance hash position from all the virtual service instance hash positions based on the user hash position and the query direction; acquiring the target virtual service instance corresponding to the target virtual service instance hash position; and taking the service instance to which the target virtual service instance belongs as the target service instance.
In the embodiment of the disclosure, each service instance (physical instance) may correspond to a plurality of virtual service instances, so each service instance can occupy multiple hash positions on the hash ring (target hash space); the virtual service instances achieve a load balancing effect even when the actual service instances are unevenly distributed on the ring.
Specifically, each service instance in the distributed cluster is configured with a plurality of corresponding virtual service instances. For example, given three service instances A, B, and C, each configured with three virtual service instances: service instance A has virtual service instances A1, A2, and A3; service instance B has B1, B2, and B3; and service instance C has C1, C2, and C3. The hash position of each virtual service instance is computed through the hash function, and these positions are scattered over the whole hash ring (target hash space).
Specifically, a target virtual service instance hash position is determined from all virtual service instance hash positions based on the user hash position and the query direction, the target virtual service instance corresponding to that position is acquired, and the service instance to which it belongs is taken as the target service instance. For example, the user hash position of user identifier K1 on the target hash space is calculated through the hash function, and then the nearest virtual service instance hash position is found along the query direction (e.g., clockwise); if the target virtual service instance found is A2, then service instance A, to which A2 belongs, is taken as the target service instance, and the service request of user identifier K1 is allocated to service instance A for processing.
For example, as shown in fig. 9, the virtual service instances corresponding to the three service instances A, B, and C occupy the positions on the target hash space (hash ring) shown in the figure. If service instance A is deleted, virtual service instances A1, A2, and A3 become unavailable; the service requests originally handled by A1, A2, and A3 are then transferred, along the query direction (e.g., clockwise), to virtual service instances B1, B2, B3 and/or C1, C2, C3. The change on the target hash space is thus confined to a few virtual service instances, and only a small range of service requests needs to be reassigned.
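A sketch of the virtual-node variant (the replica count, instance names, and hash function below are illustrative assumptions): each physical instance contributes many ring positions, so the load evens out, and removing one instance redistributes its keys among all the others rather than dumping them on a single neighbor.

```python
import bisect
import hashlib
from collections import Counter

RING_SIZE = 2 ** 32
REPLICAS = 100  # virtual service instances per physical instance (assumed)

def ring_hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % RING_SIZE

def build_ring(instances):
    # "A#0" .. "A#99" are the virtual service instance identifiers of A;
    # each hashes to its own position but maps back to the physical instance.
    return sorted((ring_hash(f"{inst}#{r}"), inst)
                  for inst in instances for r in range(REPLICAS))

def lookup(ring, uid):
    idx = bisect.bisect_left(ring, (ring_hash(uid), ""))
    return ring[0][1] if idx == len(ring) else ring[idx][1]

ring = build_ring(["A", "B", "C"])
counts = Counter(lookup(ring, f"user-{i}") for i in range(3000))
print(counts)  # requests spread roughly evenly across A, B, and C
```

With only one position per instance, a single unlucky hash can leave one instance owning most of the ring; with many virtual positions, each instance's share converges toward its fair fraction.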
In the above embodiment, when service instances (physical instances) are added or removed, only the individual virtual service instances that change on the target hash space affect the distribution of service requests, and only within a small range; a large amount of data therefore need not be rerouted, further improving system stability and the request processing effect.
In some embodiments, the method further comprises: obtaining the historical service request volume and historical access service instances over a historical time period; analyzing the historical service request volume and the historical access service instances based on a preset machine learning algorithm; and adjusting the hash algorithm and/or the initial hash space based on the analysis result.
In the embodiment of the disclosure, the historical request volume refers to the total number of requests for each user identifier (UID) over a historical time period; by analyzing it, one can identify which UIDs have a higher request frequency and which have a lower one. The historical access service instances can be understood as the service instances to which each UID was routed in the past, together with those instances' cache hit rates and performance, which allows analysis of whether certain UIDs hit the cache more readily on certain service instances, or whether certain service instances process the service requests of certain UIDs more efficiently.
In the embodiment of the disclosure, the machine learning algorithm may be a linear regression algorithm, a support vector machine algorithm, or the like, chosen according to the needs of the actual application.
In the embodiment of the disclosure, the machine learning algorithm can predict future access patterns from historical data and thus make smarter decisions about adjusting the hash algorithm or the initial hash space; that is, by analyzing the historical request volume and the historical access service instances, the routing policy of each UID can be adjusted in real time so that requests are more accurately distributed to suitable service instances, maximizing the cache hit rate and reducing the number of external interface calls.
In some embodiments, real-time load information may also be obtained, i.e., the current load of each service instance, such as CPU usage, memory usage, network traffic, and number of concurrent requests, so that it can be determined whether service requests need to be redistributed to service instances with lower load.
In some embodiments, the real-time request volume may also be obtained, i.e., changes in the current service request volume are analyzed, especially during peak hours, so that overload of particular service instances can be avoided by dynamically adjusting the routing policy of the UIDs.
In the embodiment of the disclosure, the historical service request volume and historical access service instances are analyzed based on the preset machine learning algorithm, and the hash algorithm and/or the initial hash space are adjusted based on the analysis result. For example, if a service instance has historically shown a higher cache hit rate for certain UIDs, the probability of those UIDs being allocated to that instance can be increased; conversely, if a service instance performs poorly under certain conditions (such as excessive load), the probability of allocation to it can be temporarily reduced.
In the embodiment of the disclosure, the distribution density of UIDs among service instances is controlled by adjusting the size of the hash space; for example, when the number of service instances increases, the hash space can be enlarged appropriately to reduce hash collisions and further improve routing accuracy.
It should be noted that historical data, such as the historical service request volume and historical access service instances, is mainly used for analysis and modeling: it reveals long-term access patterns and instance performance and helps optimize the overall routing policy. Real-time data, such as real-time load information and real-time request volume, is mainly used for dynamic adjustment and real-time optimization, ensuring load balance and efficient processing at a specific point in time. In general, historical data suits long-term trend analysis, while real-time data suits immediate load balancing and optimization.
In the above embodiment, the routing policy of each UID is dynamically adjusted by a machine learning algorithm according to historical access data and real-time load conditions, further improving routing accuracy and the cache hit rate.
In some embodiments, when the service request is a first request, the request result is obtained by the target service instance fetching data from the external system through the external interface in response to the service request; the target service instance caches the data obtained from the external system locally and sets the data caching time based on the request frequency of the service request.
Specifically, when the target service instance responds to a service request of a user identifier for the first time, the data acquired from the external system is cached locally. To further improve data processing efficiency and stability, a data caching time is set; and to better meet user needs, the caching time can be updated in real time according to the request frequency of the user identifier, for example, according to how many times the user identifier has requested the service.
In the above embodiment, the expiration time of the cache is dynamically adjusted according to the access frequency and the rate of change of the data, optimizing cache utilization efficiency.
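One possible shape of such a frequency-aware local cache is sketched below (the policy constants and the linear TTL growth rule are assumptions for illustration, not the embodiment's actual implementation):

```python
import time

class LocalCache:
    """Local cache whose per-UID TTL grows with request frequency (sketch)."""

    BASE_TTL = 60.0    # seconds for a rarely requested UID (assumed constant)
    MAX_TTL = 3600.0   # upper bound on the caching time (assumed constant)

    def __init__(self):
        self.entries = {}  # uid -> (value, expires_at)
        self.hits = {}     # uid -> observed request count

    def ttl_for(self, uid: str) -> float:
        # Frequently requested UIDs keep their data cached longer.
        return min(self.BASE_TTL * (1 + self.hits.get(uid, 0)), self.MAX_TTL)

    def get(self, uid: str):
        self.hits[uid] = self.hits.get(uid, 0) + 1
        entry = self.entries.get(uid)
        if entry is not None and entry[1] > time.time():
            return entry[0]  # local cache hit
        return None          # miss: caller fetches from the external system

    def put(self, uid: str, value) -> None:
        self.entries[uid] = (value, time.time() + self.ttl_for(uid))

cache = LocalCache()
if cache.get("uid-1") is None:                 # first request: cache miss
    cache.put("uid-1", {"data": "from external system"})
print(cache.get("uid-1"))                      # second request: cache hit
```

Since popular UIDs are the ones whose expiry would cause the most external interface calls, lengthening their TTL is one way to realize the frequency-based caching time described above.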
Fig. 10 is a flowchart of a request processing method according to an embodiment of the present disclosure, where the method mainly includes steps S1002 to S1008:
Step S1002, a service request sent by a requester is received, and a user identifier is obtained based on the service request.
Step S1004, hash calculation is carried out on the user identification and the total number of the current service instances of the distributed cluster based on a preset hash algorithm, and a target service instance identification is obtained.
Step S1006, the service instance corresponding to the target service instance identifier in the distributed cluster is used as the target service instance.
Step S1008, forwarding the service request to the target service instance, receiving a request result of the target service instance to respond to the service request based on the locally cached data, and forwarding the request result to the requester.
In the embodiment of the disclosure, a user may send a service request via an application program, a front-end webpage, etc., according to actual usage needs. The proxy gateway obtains the total number of current service instances of the distributed cluster, computes a target service instance identifier by hashing the user identifier UID against that total, and takes the service instance corresponding to the target service instance identifier in the distributed cluster as the target service instance. When the target service instance responds to a service request of the user identifier for the first time, it caches the data acquired from the external system; when it later receives another service request with the same user identifier from the proxy gateway, it responds based on the locally cached data and sends the request result to the proxy gateway, which forwards it to the requester.
Therefore, hash-modulo distribution of UIDs at the proxy gateway side fixes the routing of service requests of the same UID to a specific service instance, improving the cache hit rate and reducing the number of external interface calls.
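The end-to-end flow of steps S1002 to S1008 can be sketched as follows (all names and the in-memory "external system" are hypothetical; routing is by hash modulo as in step S1004):

```python
import hashlib

class ServiceInstance:
    """Service instance with a local cache; misses go to the external system."""

    def __init__(self, index: int):
        self.index = index
        self.cache = {}
        self.external_calls = 0

    def handle(self, uid: str):
        if uid not in self.cache:                # first request for this UID
            self.external_calls += 1             # call the external interface
            self.cache[uid] = f"data-for-{uid}"  # cache the result locally
        return self.cache[uid]

def route(uid: str, n: int) -> int:
    # Step S1004: hash the UID and take it modulo the instance count.
    return int(hashlib.md5(uid.encode()).hexdigest(), 16) % n

instances = [ServiceInstance(i) for i in range(3)]

def gateway(uid: str):
    # Steps S1002/S1006/S1008: receive the request, pick the target
    # instance, forward the request, and return the result.
    return instances[route(uid, len(instances))].handle(uid)

for _ in range(5):
    gateway("user-7")  # five requests from the same UID

# Only the first request reaches the external interface.
print(sum(inst.external_calls for inst in instances))  # 1
```

Because the same UID always routes to the same instance, the external interface is called once per UID rather than once per request.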
Fig. 11 is a flowchart of a request processing method according to an embodiment of the present disclosure, where the method mainly includes steps S1102 to S1108:
Step S1102, a service request sent by a requester is received, and a user identifier is obtained based on the service request.

Step S1104, obtaining a target hash space corresponding to the service instance identifier of the distributed cluster, performing hash calculation on the user identifier based on a preset hash algorithm to obtain a user hash value, and mapping the user hash value to the target hash space to obtain a user hash position.

Step S1106, a target service instance hash position is determined in the target hash space based on the user hash position and a preset query direction, and the service instance corresponding to the target service instance hash position is used as the target service instance.

Step S1108, forwarding the service request to the target service instance, receiving a request result of the target service instance responding to the service request based on the locally cached data, and forwarding the request result to the requester.
In the embodiment of the disclosure, a service instance identifier of each service instance in a distributed cluster is obtained, hash calculation is performed on the service instance identifier based on a hash algorithm to obtain a service instance hash value of each service instance, and the service instance hash value of each service instance is mapped to a pre-built initial hash space to obtain a target hash space comprising a service instance hash position corresponding to each service instance.
In the embodiment of the disclosure, a user may send a service request based on an application program, a front-end webpage, etc. according to actual use requirements, obtain a target hash space corresponding to a service instance identifier of a distributed cluster, perform hash calculation on a user identifier by using the same hash algorithm to obtain a user hash value, and map the user hash value to the target hash space to obtain a user hash position.
In the embodiment of the disclosure, the query direction is preset and may be clockwise or counterclockwise. The target service instance hash position is determined in the target hash space based on the user hash position and the preset query direction, and the service instance at that position is taken as the target service instance. When the target service instance responds to a service request of the user identifier for the first time, it caches the data acquired from the external system locally; when it later receives another service request with the same user identifier from the proxy gateway, it obtains the request result from the locally cached data in response to the service request and sends it to the proxy gateway, which forwards it to the requester.
Therefore, the consistent hashing algorithm solves the cache invalidation problem caused by dynamically adjusting the number of instances, further improving system performance and stability: when the number of instances changes, only the affected service requests are remapped, avoiding large-scale cache invalidation and preserving the validity of most cached data.
In summary, by improving the local cache hit rate and reducing the number of external interface calls, the request processing method of the embodiment of the disclosure can significantly improve the overall performance and stability of the distributed system, reduce the system's dependence on external resources, lower its response latency and failure risk, and improve user experience and service availability. In addition, a higher cache hit rate means fewer external interface calls, reducing resource consumption and operating cost; optimized load balancing and cache management lower operation and maintenance complexity and cost while improving maintainability and management efficiency; and when a large number of user requests is processed, repeated external interface calls are reduced, effectively cutting the time and resources consumed by data processing, optimizing the system's data access path, accelerating data responses, and improving the system's processing efficiency and throughput.
Corresponding to the foregoing request processing method, the embodiment of the present disclosure further provides a request processing apparatus, and fig. 12 is a schematic structural diagram of a request processing apparatus provided in the embodiment of the present disclosure, where the apparatus may be implemented by software and/or hardware, and is applied to a proxy gateway, and the apparatus includes:
A receiving and acquiring module 1202, configured to receive a service request sent by a requester, and acquire a user identifier based on the service request;
A determining module 1204, configured to determine a target service instance based on the user identifier and service instance information of the distributed cluster;
a processing module 1206 is configured to forward the service request to the target service instance, receive a request result of the target service instance in response to the service request based on locally cached data, and forward the request result to the requestor.
The device provided by the embodiment of the disclosure can route multiple service requests of the same user identifier to the same service instance, greatly improving the hit rate of the service instance's local cache, reducing the number of external interface calls, lowering the distributed system's dependence on external resources and its operating resource consumption and cost, and improving its performance and stability while reducing operation and maintenance costs. When a large number of service requests is processed, repeated external interface calls are reduced, effectively cutting the time and resources consumed and improving request processing efficiency.
In some embodiments, the determining module 1204 is specifically configured to perform hash calculation on the user identifier and a total number of current service instances of the distributed cluster based on a preset hash algorithm to obtain a target service instance identifier, and use a service instance corresponding to the target service instance identifier in the distributed cluster as the target service instance.
In some embodiments, the determining module 1204 includes: a first obtaining unit configured to obtain a target hash space corresponding to the service instance identifiers of the distributed cluster; a computation mapping unit configured to perform hash computation on the user identifier based on a preset hash algorithm to obtain a user hash value, and to map the user hash value to the target hash space to obtain a user hash position; and a query unit configured to determine a target service instance hash position in the target hash space based on the user hash position and a preset query direction, and to take the service instance corresponding to the target service instance hash position as the target service instance.
In some embodiments, the device further comprises: an acquisition computing unit configured to acquire the service instance identifier of each service instance in the distributed cluster and to perform hash computation on the service instance identifiers based on the hash algorithm to obtain a service instance hash value for each service instance; and a first mapping unit configured to map the service instance hash value of each service instance to a pre-built initial hash space to obtain the target hash space, which includes a service instance hash position corresponding to each service instance.
In some embodiments, the device further comprises: a second obtaining unit configured to obtain instance update information of the distributed cluster; a determining and calculating unit configured to determine an added service instance based on the instance update information, and to calculate an added service instance hash value of the added service instance and map it to the target hash space to obtain an added service instance hash position; and a first determining unit configured to determine a shared service instance hash position based on the added service instance hash position and the query direction, and to take the service instance corresponding to the shared service instance hash position as a shared service instance of the added service instance.
In some embodiments, the device further comprises: a third obtaining unit configured to determine a deleted service instance based on the instance update information and to determine a deleted service instance hash position on the target hash space corresponding to the deleted service instance; and a second determining unit configured to determine a candidate service instance hash position based on the deleted service instance hash position and the query direction, and to take the service instance corresponding to the candidate service instance hash position as a candidate service instance for the deleted service instance.
In some embodiments, the device further comprises a configuration unit, a calculation unit and a second mapping unit. The configuration unit is configured to configure each service instance in the distributed cluster to correspond to a plurality of virtual service instances and obtain a virtual service instance identifier of each virtual service instance. The calculation unit is configured to perform hash calculation on the virtual service instance identifier based on the hash algorithm to obtain a virtual service instance hash value of each virtual service instance. The second mapping unit is configured to map the virtual service instance hash value of each virtual service instance to the target hash space to obtain a virtual service instance hash position corresponding to each virtual service instance in the target hash space. The query unit is specifically configured to determine a target virtual service instance hash position from all the virtual service instance hash positions based on the user hash position and the query direction, obtain the target virtual service instance corresponding to the target virtual service instance hash position, and take the service instance to which the target virtual service instance belongs as the target service instance.
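The virtual service instance scheme above can be sketched as follows. The sketch is illustrative: the replica count, the `"#vn{r}"` identifier scheme for deriving virtual identifiers, and the MD5 hash are all assumptions, since the disclosure only requires that each service instance correspond to a plurality of virtual service instances.

```python
import bisect
import hashlib

def hash_value(key: str) -> int:
    # Assumption: MD5 folded into a 32-bit hash space.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

def build_virtual_ring(instance_ids, replicas=100):
    """Each physical service instance contributes `replicas` virtual
    positions on the ring, which smooths the key distribution across
    instances. The identifier scheme is an illustrative assumption."""
    ring = []
    for inst in instance_ids:
        for r in range(replicas):
            ring.append((hash_value(f"{inst}#vn{r}"), inst))
    ring.sort()
    return ring

def route(ring, user_id):
    """Map the user hash position to the nearest virtual position in the
    query direction and return the physical instance it belongs to."""
    point = hash_value(user_id)
    positions = [pos for pos, _ in ring]
    idx = bisect.bisect_left(positions, point) % len(ring)
    return ring[idx][1]
```

Because each physical instance owns many small arcs of the ring instead of one large arc, the load spreads more evenly and the key range that moves when an instance joins or leaves is split across the remaining instances.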
In some embodiments, the device further comprises a data acquisition module and an analysis and adjustment module. The data acquisition module is used for acquiring the historical service request quantity and the historical access service instances in a historical time period. The analysis and adjustment module is used for analyzing the historical service request quantity and the historical access service instances based on a preset machine learning algorithm, and adjusting the hash algorithm and/or the initial hash space based on the analysis result.
In some embodiments, when the service request is a first request, the request result is data that the target service instance obtains from an external system through an external interface in response to the service request, wherein the target service instance caches the data obtained from the external system locally and sets a data cache time based on the request frequency of the service request.
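The frequency-dependent cache time described above could look like the following sketch. The linear grow-with-frequency TTL policy, the parameter defaults, and the class and method names are illustrative assumptions; the disclosure only states that the data cache time is set based on the request frequency.

```python
import time

class FrequencyAwareCache:
    """Local cache whose TTL grows with how often a key is requested, so
    frequently requested data stays cached longer. The linear TTL policy
    and the parameter defaults are illustrative assumptions."""

    def __init__(self, base_ttl=60.0, max_ttl=3600.0):
        self.base_ttl = base_ttl
        self.max_ttl = max_ttl
        self.store = {}  # key -> (value, expires_at)
        self.hits = {}   # key -> request count

    def get(self, key, fetch):
        now = time.monotonic()
        self.hits[key] = self.hits.get(key, 0) + 1
        entry = self.store.get(key)
        if entry is not None and entry[1] > now:
            return entry[0]          # served from the local cache
        value = fetch(key)           # first request: pull from the external system
        ttl = min(self.base_ttl * self.hits[key], self.max_ttl)
        self.store[key] = (value, now + ttl)
        return value
```

Combined with the user-to-instance hash routing, this keeps a given user's data warm on the single instance that always serves that user, so the external system is only hit on the first request or after the cache time expires.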
The request processing device provided by the embodiment of the disclosure can execute the request processing method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described apparatus embodiments may refer to corresponding procedures in the method embodiments, which are not described herein again.
An embodiment of the present disclosure provides an electronic device, which includes a storage device having a computer program stored thereon, and a processing device configured to execute the computer program in the storage device to implement steps of any one of the methods of the present disclosure.
Referring now to fig. 13, a schematic diagram of an electronic device 1300 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 13 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 13, the electronic device 1300 may include a processing apparatus 1301 (e.g., a central processor, a graphics processor, etc.), which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1302 or a program loaded from a storage apparatus 1308 into a Random Access Memory (RAM) 1303. In the RAM 1303, various programs and data necessary for the operation of the electronic apparatus 1300 are also stored. The processing device 1301, the ROM 1302, and the RAM 1303 are connected to each other through a bus 1304. An input/output (I/O) interface 1305 is also connected to bus 1304.
In general, the following devices may be connected to the I/O interface 1305: input devices 1306 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; output devices 1307 including a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 1308 including a magnetic tape, a hard disk, and the like; and communication devices 1309. The communication devices 1309 may allow the electronic device 1300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 13 shows an electronic device 1300 having various devices, it is to be understood that not all of the illustrated devices are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communications device 1309, or installed from the storage device 1308, or installed from the ROM 1302. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 1301.
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the request processing method provided by the embodiments of the present disclosure. The computer program product may write program code for performing the operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Further, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the request processing method provided by the embodiments of the present disclosure.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of a readable storage medium include an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The disclosed embodiments also provide a computer program product comprising a computer program/instruction which, when executed by a processor, implements the request processing method in the disclosed embodiments.
It will be appreciated that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, usage scope, usage scenario, and the like of the personal information involved in the present disclosure, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly inform the user that the operation it requests to perform will require obtaining and using the user's personal information. Thus, the user can autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, an application program, a server or a storage medium, that executes the operations of the technical solution of the present disclosure.
As an alternative but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user, for example, in the form of a popup window, in which the prompt information may be presented as text. In addition, the popup window may carry a selection control for the user to choose whether to "agree" or "disagree" to provide personal information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative, and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
It should be noted that in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A method for processing a request, applied to a proxy gateway, the method comprising:
receiving a service request sent by a requester, and acquiring a user identifier based on the service request;
Determining a target service instance based on the user identification and service instance information of the distributed cluster;
forwarding the service request to the target service instance, receiving a request result of the target service instance responding to the service request based on locally cached data, and forwarding the request result to the requester.
2. The method of claim 1, wherein the determining a target service instance based on the user identification and service instance information of the distributed cluster comprises:
carrying out hash calculation on the user identification and the total number of the current service instances of the distributed cluster based on a preset hash algorithm to obtain a target service instance identification;
And taking the service instance corresponding to the target service instance identifier in the distributed cluster as the target service instance.
3. The method of claim 1, wherein the determining a target service instance based on the user identification and service instance information of the distributed cluster comprises:
Acquiring a target hash space corresponding to a service instance identifier of the distributed cluster;
Carrying out hash calculation on the user identifier based on a preset hash algorithm to obtain a user hash value, and mapping the user hash value to the target hash space to obtain a user hash position;
And determining a target service instance hash position in the target hash space based on the user hash position and a preset query direction, and taking a service instance corresponding to the target service instance hash position as the target service instance.
4. A method according to claim 3, characterized in that the method further comprises:
acquiring a service instance identifier of each service instance in the distributed cluster, and performing hash calculation on the service instance identifier based on the hash algorithm to obtain a service instance hash value of each service instance;
And mapping the service instance hash value of each service instance to a pre-constructed initial hash space to obtain the target hash space comprising the service instance hash position corresponding to each service instance.
5. The method according to claim 4, wherein the method further comprises:
acquiring instance update information of the distributed cluster;
Determining an added service instance based on the instance update information, calculating an added service instance hash value of the added service instance, and mapping the added service instance hash value to the target hash space to obtain an added service instance hash position;
and determining a shared service instance hash position based on the added service instance hash position and the query direction, and taking the service instance corresponding to the shared service instance hash position as the shared service instance of the added service instance.
6. The method according to claim 4, wherein the method further comprises:
Determining a deleted service instance based on the instance update information, and determining a deleted service instance hash position on the target hash space corresponding to the deleted service instance;
And determining a candidate service instance hash position based on the deleted service instance hash position and the query direction, and taking the service instance corresponding to the candidate service instance hash position as the candidate service instance of the deleted service instance.
7. The method according to claim 4, wherein the method further comprises:
configuring each service instance in the distributed cluster to correspond to a plurality of virtual service instances, and acquiring a virtual service instance identifier of each virtual service instance;
Performing hash calculation on the virtual service instance identifier based on the hash algorithm to obtain a virtual service instance hash value of each virtual service instance;
Mapping the virtual service instance hash value of each virtual service instance to the target hash space to obtain a virtual service instance hash position corresponding to each virtual service instance in the target hash space;
The determining a target service instance hash position in the target hash space based on the user hash position and a preset query direction, and taking a service instance corresponding to the target service instance hash position as the target service instance includes:
And determining a target virtual service instance hash position from all the virtual service instance hash positions based on the user hash position and the query direction, acquiring a target virtual service instance corresponding to the target virtual service instance hash position, and taking a service instance to which the target virtual service instance belongs as the target service instance.
8. The method according to claim 4, wherein the method further comprises:
Acquiring a history service request amount and a history access service instance in a history time period;
Analyzing the historical service request quantity and the historical access service instance based on a preset machine learning algorithm, and adjusting the hash algorithm and/or the initial hash space based on an analysis result.
9. The method of claim 1, wherein when the service request is a first request, the request result is data that the target service instance obtains from an external system through an external interface in response to the service request, wherein the target service instance caches the data obtained from the external system locally and sets a data cache time based on the request frequency of the service request.
10. A request processing apparatus for use in a proxy gateway, the apparatus comprising:
The receiving and acquiring module is used for receiving a service request sent by a requester and acquiring a user identifier based on the service request;
The determining module is used for determining a target service instance based on the user identification and the service instance information of the distributed cluster;
and the processing module is used for forwarding the service request to the target service instance, receiving a request result of the target service instance responding to the service request based on the locally cached data, and forwarding the request result to the requester.
11. An electronic device, the electronic device comprising:
A storage device having a computer program stored thereon;
Processing means for executing said computer program in said storage means to carry out the steps of the request processing method according to any one of claims 1 to 9.
12. A computer readable storage medium, characterized in that the storage medium stores a computer program for executing the request processing method according to any one of the preceding claims 1-9.