
WO2018103320A1 - Gated launch method, system, server, and storage medium - Google Patents


Info

Publication number
WO2018103320A1
WO2018103320A1 (PCT/CN2017/091179)
Authority
WO
WIPO (PCT)
Prior art keywords
policy
file
parsing
user information
parsing file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/091179
Other languages
French (fr)
Chinese (zh)
Inventor
俞晓鸣
顾钰芬
李金龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OneConnect Financial Technology Co Ltd Shanghai
Original Assignee
OneConnect Financial Technology Co Ltd Shanghai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OneConnect Financial Technology Co Ltd Shanghai filed Critical OneConnect Financial Technology Co Ltd Shanghai
Publication of WO2018103320A1 publication Critical patent/WO2018103320A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/60Software deployment
    • G06F8/65Updates
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/70Software maintenance or management
    • G06F8/71Version control; Configuration management

Definitions

  • the present invention relates to the field of computer processing, and in particular, to a grayscale publishing method, system, server, and storage medium.
  • Grayscale publishing (gray release) is a release method that transitions smoothly from the old ("black") version to the new ("white") version.
  • An AB test is a grayscale publishing method that lets some users continue to use A while other users start to use B; if users raise no objection to B, its scope is gradually expanded until all users are moved to B. Grayscale release preserves the stability of the overall system, because problems can be found and corrected at the initial gray stage, limiting their impact.
  • An AB test splits traffic according to the distribution policies configured in the system. During the test phase, when a problem is found, a new traffic distribution policy is added to the Redis server (a cache server); each traffic distribution policy corresponds to a policy parsing file and a user information parsing file. Traditionally, the parsing files corresponding to a policy are stored in the memory of the Nginx server (a performance-oriented HTTP server), so adding a distribution policy requires uploading the corresponding policy parsing file and user information parsing file to the Nginx server, and the Nginx server must be reloaded or restarted in the process. Restarting the Nginx server is both time-consuming and cumbersome.
  • a grayscale publishing method and system is provided.
  • a grayscale publishing system comprising:
  • the Redis server is configured to receive and store the uploaded traffic-splitting policy and the policy parsing file and user information parsing file corresponding to that policy, where the policy parsing file and the user information parsing file are stored in the form of strings.
  • the Nginx server is configured to check whether the traffic distribution policy identifier exists in the Cache. If the traffic distribution policy identifier does not exist in the Cache, the current traffic distribution policy identifier is read from the preset location in the Redis server.
  • the Nginx server is further configured to search the memory, according to the current splitting policy identifier, for the policy parsing file and the user information parsing file corresponding to that identifier; if they are not in the memory, the string-form policy parsing file and user information parsing file in the Redis server are loaded into the memory through lua, by way of loading the string, according to the current splitting policy identifier.
  • the Nginx server is further configured to perform the publishing according to the offloading policy parsed by the policy parsing file.
  • a grayscale publishing method comprising:
  • the Nginx server periodically searches the Cache for the traffic distribution policy identifier according to a preset rule; if the identifier does not exist in the Cache, the current traffic distribution policy identifier is read from the preset location in the Redis server, where the Redis server stores the current splitting policy identifier and the corresponding policy parsing file and user information parsing file in the form of strings;
  • if the policy parsing file and the user information parsing file corresponding to the current splitting policy identifier are not in the memory, the string-form policy parsing file and user information parsing file in the Redis server are loaded into the memory through lua, by way of loading the string, according to the current splitting policy identifier; and
  • the release is performed according to the splitting policy parsed from the policy parsing file.
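  • The three claimed steps (check the Cache for the policy identifier, fall back to Redis, load the string-form policy into memory, then route) can be illustrated with a minimal Python sketch. Dicts stand in for the Redis server and the Nginx Cache, and the JSON string format, key names, and `publish` function are illustrative assumptions, not part of the claims:

```python
import json

# Stand-ins for the Redis server and the Nginx worker (illustrative).
redis_store = {
    "current_policy_id": "city_policy_v1",
    "policy:city_policy_v1": json.dumps({"SH": "beta1", "BJ": "beta2"}),
}
cache = {}   # Nginx Cache: holds the current splitting policy identifier
memory = {}  # Nginx worker memory: holds parsed policy tables

def publish(user_city):
    # Step 1: look up the policy identifier in the Cache; on a miss,
    # read it from the preset location in the Redis stand-in.
    policy_id = cache.get("policy_id")
    if policy_id is None:
        policy_id = redis_store["current_policy_id"]
        cache["policy_id"] = policy_id
    # Step 2: if the parsed policy is not in memory, load the
    # string-form policy from Redis and parse it into a table (dict).
    if policy_id not in memory:
        memory[policy_id] = json.loads(redis_store["policy:" + policy_id])
    # Step 3: forward according to the parsed splitting policy.
    return memory[policy_id].get(user_city, "stable")
```

Subsequent requests hit the Cache and the in-memory table directly, which is the behavior the claims rely on to avoid reloading Nginx.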
  • a server comprising a memory and a processor, the memory storing computer readable instructions, the computer readable instructions being executed by the processor such that the processor performs the following steps:
  • the current traffic distribution policy identifier is read from the preset location in the Redis server.
  • the Redis server stores the current splitting policy identifier and the corresponding policy parsing file and user information parsing file, which exist in the form of strings;
  • if the policy parsing file and the user information parsing file corresponding to the current splitting policy identifier are not in the memory, loading the string-form policy parsing file and user information parsing file in the Redis server into the memory through lua, by way of loading the string, according to the current splitting policy identifier; and
  • performing the release according to the splitting policy parsed from the policy parsing file.
  • One or more non-volatile readable storage media storing computer-executable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps:
  • the current traffic distribution policy identifier is read from the preset location in the Redis server.
  • the Redis server stores the current splitting policy identifier and the corresponding policy parsing file and user information parsing file, which exist in the form of strings;
  • if the policy parsing file and the user information parsing file corresponding to the current splitting policy identifier are not in the memory, loading the string-form policy parsing file and user information parsing file in the Redis server into the memory through lua, by way of loading the string, according to the current splitting policy identifier; and
  • performing the release according to the splitting policy parsed from the policy parsing file.
  • FIG. 1 is an architecture diagram of a grayscale publishing system in an embodiment;
  • FIG. 2 is a flowchart of a grayscale publishing method in an embodiment;
  • FIG. 3 is a flowchart of a method for loading a policy parsing file and a user information parsing file according to the current splitting policy in an embodiment;
  • FIG. 5 is a diagram of the internal structure of an Nginx server in an embodiment.
  • a grayscale publishing system comprising: a Redis server 102 and an Nginx server 104;
  • the Redis server 102 is configured to receive and store the uploaded traffic-splitting policy and the policy parsing file and user information parsing file corresponding to that policy, where the policy parsing file and the user information parsing file are stored in the form of strings.
  • the ABtest (AB test) is based on the offloading policy configured in the system. For each of the traffic distribution policies, a policy resolution file and a user information analysis file are required.
  • the policy analysis file is used to parse the traffic distribution policy, and the correspondence between the user features and the forwarding path in the traffic distribution policy is analyzed.
  • the user information parsing file is used to parse the obtained user information, parse the user feature in the user information, and then determine a specific forwarding path corresponding to the user feature, and forward the packet according to the determined forwarding path.
  • the traffic distribution policy and the policy parsing file and user information parsing file corresponding to it are uploaded directly to the Redis server through the background management page; the policy parsing file and user information parsing file corresponding to the traffic distribution policy are first converted into string form and then uploaded to the Redis server. That is, the policy parsing file and the user information parsing file in the Redis server are stored in the form of strings.
  • the Nginx server 104 is configured to search for a traffic distribution policy identifier in the Cache. If not, the current traffic distribution policy identifier is read from a preset location in the Redis server.
  • the traffic distribution policy identifier is used to uniquely identify a traffic distribution policy
  • the current traffic distribution policy identifier may be an identifier corresponding to the currently used traffic distribution policy.
  • the current traffic distribution policy identifier is stored in the Cache (cache memory) of the Nginx server. However, if the traffic distribution policy needs to be replaced, the content in the Cache is first cleared; that is, if a new traffic distribution policy is to be used, the traffic distribution policy identifier in the Cache must be cleared. Thus, when a new traffic distribution policy is used for the first time, there is no traffic distribution policy identifier in the Cache.
  • the Cache belongs to the internal memory, and its contents can be cleared by a special clearing mechanism: an interface for clearing the contents of the Cache is provided, and the contents are cleared through that interface. A time limit can also be set on the contents of the Cache; for example, if the splitting policy identifier in the Cache is not used for more than one minute, it is automatically cleared. There is no restriction on how the contents of the Cache are cleared.
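  • The two clearing mechanisms just described (an explicit clearing interface and a time limit) can be sketched as follows; the class name, TTL default, and method names are illustrative assumptions:

```python
import time

class PolicyIdCache:
    """Toy stand-in for the Nginx Cache entry holding the policy identifier,
    with an explicit clearing interface and a TTL-based expiry."""

    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds       # e.g. one minute, as in the description
        self._value = None
        self._stamp = 0.0

    def put(self, policy_id):
        self._value = policy_id
        self._stamp = time.monotonic()

    def get(self):
        # Time-limit mechanism: an identifier unused for longer than the
        # TTL is treated as cleared.
        if self._value is not None and time.monotonic() - self._stamp > self.ttl:
            self._value = None
        return self._value

    def clear(self):
        # Explicit clearing interface, invoked when the policy is replaced.
        self._value = None
```

A cleared (or expired) entry is exactly the miss condition that makes the Nginx server re-read the current identifier from the Redis server.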
  • the Nginx server periodically searches the Cache for a traffic distribution policy identifier according to a preset rule. If none is found, the current traffic distribution policy has been replaced, and the new current traffic distribution policy identifier is stored in the Redis server.
  • the Nginx server needs to read the current traffic distribution policy identifier from the preset location in the Redis server.
  • the Nginx server stores the location (ie, the address) of the current traffic distribution policy identifier, reads the current traffic distribution policy identifier from the Redis server according to the storage location, and saves the read current traffic distribution policy identifier to the Cache, which facilitates subsequent offload forwarding.
  • the Nginx server 104 is further configured to search the memory, according to the current traffic distribution policy identifier, for the policy parsing file and the user information parsing file corresponding to that identifier; if they are not found, the string-form policy parsing file and user information parsing file in the Redis server are loaded into the memory through lua according to the current splitting policy identifier.
  • After obtaining the current traffic distribution policy identifier, the Nginx server first searches the memory for the policy parsing file and the user information parsing file corresponding to that identifier; if they are not found, the traffic distribution policy corresponding to the current identifier is a new traffic distribution policy.
  • the corresponding policy resolution file and user information analysis file exist in the form of a string on the Redis server.
  • the Nginx server needs to load the string-form policy parsing file and user information parsing file in the Redis server into the memory by loading the string (lua's loadstring).
  • Lua is a dynamic scripting language that can be embedded into Nginx server configuration files.
  • the Nginx server first loads the string-form policy parsing file and user information parsing file into lua, converts them from string form into lua's Table form, and then stores them in memory. The Table form is a form that the Nginx server can call directly.
  • Because the Nginx server can load the policy parsing file and the user information parsing file from the Redis server into memory by loading strings, adding a file only requires converting the new file into a string, uploading it to the Redis server through the background management page, and then setting the new traffic distribution policy as the current one. The splitting policy is thus updated without restarting the Nginx server.
  • the Nginx server 104 is further configured to perform the distribution according to the offloading policy parsed by the policy parsing file.
  • the corresponding grayscale publishing may be performed according to the shunting policy parsed by the policy parsing file.
  • The Nginx server parses the corresponding traffic-splitting policy using the policy parsing file and obtains the specific correspondences within it, that is, the correspondence between at least one kind of parameter information and the forwarding path (upstream). For example, if the traffic distribution policy splits traffic by city information, the policy parsing file resolves the correspondence between city information and forwarding paths in the policy; the user feature then extracted from the user information by the user information parsing file must also be city information, so the forwarding path corresponding to the user feature can be determined from the parsed correspondence between city information and forwarding paths.
  • the parsed city information and forwarding path are: Shanghai (SH) corresponds to forwarding path 1, Beijing (BJ) corresponds to forwarding path 2, and the remaining cities correspond to forwarding path 3. If the extracted user feature is Shanghai, then the forwarding path corresponding to the user feature is 1.
  • the corresponding relationship between the parsed at least one parameter information and the forwarding path is saved to the Cache, so that when the next user requests to come over, the parsed shunting policy can be directly used for offloading and forwarding.
  • the policy analysis file and the user information analysis file corresponding to the new traffic distribution policy are converted into a string and uploaded to the Redis server.
  • When the newly added traffic distribution policy takes effect, the Nginx server searches the Redis server for the corresponding new policy parsing file and user information parsing file, loads the string-form files into the memory through lua by loading the string, and then performs the release according to the splitting policy parsed from the policy parsing file.
  • the policy resolution file and the user information parsing file corresponding to the traffic splitting policy need to be converted into a string and stored in the Redis server.
  • the Nginx server can dynamically load new policy resolution files and user information parsing files from the Redis server into memory. It does not require reloading and restarting the Nginx server. It is easy to operate and saves time, thus improving the forwarding efficiency.
  • the Nginx server is further configured to search, according to the current distribution policy identifier, for the policy parsing file ID and the user information parsing file ID corresponding to that identifier, and to load the string-form policy parsing file and user information parsing file from the Redis server into the memory according to those IDs, converting them into Table form for storage.
  • After obtaining the current traffic distribution policy identifier, the Nginx server searches the Redis server for the policy parsing file ID and the user information parsing file ID corresponding to that identifier.
  • the ID is used to uniquely identify a file or content.
  • the mapping between the traffic policy identifier and the policy resolution file ID and the user information resolution file ID is pre-stored in the Redis server, and the current traffic distribution policy can be found in the Redis server according to the current traffic policy identifier. Identifies the corresponding policy resolution file ID and user information resolution file ID.
  • the Nginx server can find the corresponding policy resolution file and user information analysis file according to the policy resolution file ID and the user information resolution file ID in the Redis server.
  • the policy parsing file and the user information parsing file exist in the Redis server in the form of strings (String), and string form cannot be called directly. The string-form files therefore need to be loaded into the memory through lua and converted from String form into Table form for storage, so that the Nginx server can call the policy parsing file and the user information parsing file directly to parse a user request and determine the corresponding forwarding path, and so that the corresponding files can be called directly from memory next time.
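  • The two-step lookup (policy identifier → file IDs → string-form files, converted to tables) can be sketched with a dict standing in for the Redis server; all key names, IDs, and the JSON string format are illustrative assumptions:

```python
import json

# Illustrative Redis contents: the file-ID mapping pre-stored for a policy
# identifier, plus the two string-form parsing files themselves.
redis_store = {
    "policy:city_policy_v1:files": json.dumps(
        {"policy_file_id": "pf_001", "user_file_id": "uf_001"}),
    "file:pf_001": json.dumps({"SH": "beta1", "BJ": "beta2"}),
    "file:uf_001": json.dumps({"feature": "city"}),
}

def load_policy_files(policy_id):
    # Step 1: find the policy-parsing-file ID and user-information-
    # parsing-file ID corresponding to the policy identifier.
    ids = json.loads(redis_store["policy:" + policy_id + ":files"])
    # Step 2: fetch the string-form files and convert them to tables.
    policy_table = json.loads(redis_store["file:" + ids["policy_file_id"]])
    user_table = json.loads(redis_store["file:" + ids["user_file_id"]])
    return policy_table, user_table
```

The returned tables would then be kept in worker memory so later requests skip both lookups.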
  • the Nginx server is further configured to parse the corresponding offloading policy by using the policy parsing file, and parse the correspondence between the at least one parameter information and the forwarding path in the offloading policy. And saving the correspondence between the at least one parameter information and the forwarding path to the Cache.
  • the corresponding splitting policy is parsed by calling the policy parsing file, and the correspondence between the at least one parameter information and the forwarding path in the splitting policy is parsed, and the parsed at least one parameter information and the forwarding path are The corresponding relationship is saved to the Cache, so that when the next user requests to come over, the split traffic policy can be directly used for offload forwarding.
  • the parameter information has various types according to the type and the corresponding position.
  • UID (User Identification): user identifier
  • IP: IP address
  • URL: URL information
  • the UID information may be divided into a UID suffix shunt, a specified special UID shunt, and a UID user segment shunt according to different extraction locations.
  • the shunting strategy is divided into single-level shunt and multi-level shunt. Single-stage splitting is relatively simple, which means that only one parameter information needs to be referenced to find the corresponding forwarding path. Multi-level shunting needs to refer to a variety of parameter information. Among them, multi-level shunting is divided into two types: union and intersection.
  • For example, the first-level split is by city, divided into Shanghai (SH) and Beijing (BJ): the upstream (forwarding path) for SH is beta1, and the upstream for BJ is beta2.
  • The second-level split is by UID set: the upstream for UIDs 123, 124, and 125 is beta1, and the upstream for UIDs 567, 568, and 569 is beta2.
  • The third-level split is by IP range: IPs whose long value is in the range 1000001 ~ 2000000 go to upstream beta1, and IPs whose long value is in the range 2000001 ~ 3000000 go to upstream beta2.
  • If the union method is used, the city information is extracted from the user information first. If it is SH or BJ, the request is forwarded to beta1 or beta2 accordingly, and the UID and IP information are not considered. If no city information is extracted, or the city is not SH or BJ, the UID is checked next; if the UID is 567, for example, the request is forwarded to beta2, and so on: as soon as one forwarding condition is met, the remaining information need not be checked.
  • If the intersection method is used, all three conditions must be met at the same time. That is, if the acquired city information is SH, whose upstream is beta1, the UID set and IP range must also be obtained, and both must likewise correspond to beta1 for the request to be forwarded to beta1. If some information cannot be obtained, or the upstreams corresponding to the obtained information differ — for example, the city information is SH but the UID is 567, which correspond to beta1 and beta2 respectively — the three conditions are not met simultaneously, so the request cannot be forwarded by this split. In practice, the intersection form is rarely used alone; intersection and union are used together.
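  • The three-level example above, with its union and intersection variants, can be sketched directly; the function names and user-dict field names are illustrative assumptions, while the cities, UIDs, and IP ranges come from the example:

```python
def match_city(user):  # level 1: split by city
    return {"SH": "beta1", "BJ": "beta2"}.get(user.get("city"))

def match_uid(user):   # level 2: split by UID set
    uid = user.get("uid")
    if uid in (123, 124, 125):
        return "beta1"
    if uid in (567, 568, 569):
        return "beta2"
    return None

def match_ip(user):    # level 3: split by the IP's long value range
    ip = user.get("ip_long")
    if ip is not None and 1000001 <= ip <= 2000000:
        return "beta1"
    if ip is not None and 2000001 <= ip <= 3000000:
        return "beta2"
    return None

LEVELS = (match_city, match_uid, match_ip)

def route_union(user):
    # Union: the first level that matches decides; later levels are skipped.
    for match in LEVELS:
        upstream = match(user)
        if upstream is not None:
            return upstream
    return None

def route_intersection(user):
    # Intersection: all three levels must agree on the same upstream.
    upstreams = {match(user) for match in LEVELS}
    if len(upstreams) == 1 and None not in upstreams:
        return upstreams.pop()
    return None
```

Under union, a Shanghai user with UID 567 goes to beta1 (city wins); under intersection the same user matches nothing, since the levels disagree.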
  • the Nginx server is further configured to receive a request sent by the client, extract user information in the request, and extract at least one parameter information from the user information according to a preset extraction manner in the user information parsing file, where The extracted at least one type of parameter information is used as a user feature, and the forwarding path corresponding to the user feature is determined according to the correspondence between the at least one parameter information and the forwarding path stored in the Cache, and the corresponding forwarding is performed according to the forwarding path.
  • the Nginx server receives the request sent by the client, extracts the user information in the request, and parses the user information according to the user information parsing file corresponding to the current traffic distribution policy identifier previously stored in the memory, from the user information. Extracting at least one parameter information, and then extracting the extracted at least one parameter information as a user feature.
  • the extracting of the user feature is related to the corresponding splitting strategy.
  • For example, if the traffic distribution policy splits traffic according to city information (City), the user information parsing file corresponding to that policy extracts the city information from the user information, and the extracted city information is the user feature.
  • the traffic offloading strategy may be based on one parameter information, or may be based on multiple parameter information.
  • If the splitting is performed according to multiple kinds of parameter information, multiple kinds of parameter information need to be extracted from the user information: for example, the city information and the UID information are extracted simultaneously, and if neither is extracted, the IP information may need to be extracted further. Exactly which information is extracted as the user feature is determined by the corresponding splitting policy. After the user feature is extracted, the forwarding path corresponding to it is determined from the correspondence between the at least one kind of parameter information and the forwarding path stored in the Cache, and the request is then forwarded along that path.
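  • The extraction-then-lookup flow can be sketched as follows; the field names, the tuple-keyed Cache table, and the default path are illustrative assumptions, while the city-to-path mapping follows the earlier SH/BJ example:

```python
def extract_user_feature(user_info, feature_names):
    # The user information parsing file specifies which parameter(s) to
    # extract; modeled here as field names tried in order.
    for name in feature_names:
        value = user_info.get(name)
        if value is not None:
            return (name, value)
    return (None, None)

# Correspondence table as it would sit in the Cache after the policy
# parsing file has been applied (remaining cities -> forward_path_3).
cache_correspondence = {
    ("city", "SH"): "forward_path_1",
    ("city", "BJ"): "forward_path_2",
}

def forward(user_info, feature_names, default="forward_path_3"):
    # Determine the forwarding path from the extracted user feature and
    # the correspondence stored in the Cache.
    feature = extract_user_feature(user_info, feature_names)
    return cache_correspondence.get(feature, default)
```

A request carrying `{"city": "SH"}` lands on forward_path_1; any other city falls through to the default path, matching the three-path example above.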
  • the Nginx server is further configured to parse the corresponding traffic distribution policy by using a policy resolution file, parse the corresponding percentage policy, and perform corresponding release according to the percentage policy.
  • In this embodiment, the traffic distribution policy forwards traffic according to a percentage policy.
  • the Nginx server parses the corresponding traffic distribution policy using the policy parsing file and resolves the specific percentage policy, that is, what percentage of traffic is forwarded to path A and what percentage is forwarded to path B.
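  • One common way to realize a percentage split is deterministic bucketing of the user identifier; the document does not specify the mechanism, so the hash-into-100-buckets scheme, function name, and path labels below are illustrative assumptions:

```python
import zlib

def percentage_route(uid, percent_to_b):
    # Deterministic percentage split: hash the user identifier into a
    # bucket 0-99; buckets below the threshold go to path B (the new
    # version), the rest to path A. The same user always gets the same
    # path for a given threshold.
    bucket = zlib.crc32(str(uid).encode()) % 100
    return "path_B" if bucket < percent_to_b else "path_A"
```

Raising `percent_to_b` gradually (0 → 10 → 50 → 100) is exactly the gray-release expansion described earlier: more users move to B without any server restart.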
  • a grayscale publishing method comprising:
  • Step 202 The Nginx server periodically searches for a traffic distribution policy identifier in the Cache according to the preset rule. If yes, the process proceeds directly to step 206. If not, the process proceeds to step 204.
  • Step 204 Read the current offload policy identifier from a preset location in the Redis server.
  • the traffic distribution policy identifier is used to uniquely identify a traffic distribution policy
  • the current traffic distribution policy identifier may be an identifier corresponding to the currently used traffic distribution policy.
  • the Redis server stores the current traffic distribution policy identifier and the corresponding policy analysis file and user resolution file that exist in the form of a string.
  • the current traffic distribution policy identifier is stored in the Cache (cache memory) of the Nginx server. However, if the traffic distribution policy needs to be replaced, the content in the Cache is first cleared; that is, if a new traffic distribution policy is to be used, the traffic distribution policy identifier in the Cache must be cleared.
  • the Nginx server periodically searches the Cache for a traffic distribution policy identifier according to a preset rule. If none is found, the current traffic distribution policy has been replaced, and the new current traffic distribution policy identifier is stored in the Redis server.
  • the Nginx server needs to read the current traffic distribution policy identifier from the preset location in the Redis server.
  • the Nginx server stores the location (ie, the address) of the current traffic distribution policy identifier, reads the current traffic distribution policy identifier from the Redis server according to the storage location, and saves the read current traffic distribution policy identifier to the Cache, which facilitates subsequent offload forwarding.
  • Step 206 Search for the policy analysis file and the user information analysis file corresponding to the current traffic distribution policy identifier in the memory according to the current traffic distribution policy identifier. If yes, go directly to step 210. If not, go to step 208.
  • After obtaining the current traffic distribution policy identifier, the Nginx server first searches the memory for the policy parsing file and the user information parsing file corresponding to that identifier; if they are not found, the traffic distribution policy corresponding to the current identifier is a new traffic distribution policy.
  • the corresponding policy resolution file and user information analysis file exist in the form of a string on the Redis server.
  • the traffic distribution policy and the policy parsing file and user information parsing file corresponding to it are uploaded directly to the Redis server through the background management page; the policy parsing file and user information parsing file corresponding to the traffic distribution policy are first converted into string form and then uploaded to the Redis server. That is, the policy parsing file and the user information parsing file in the Redis server are stored in the form of strings.
  • Step 208 The policy parsing file and the user information parsing file in the form of a string in the Redis server are loaded into the memory by using lua according to the current shunting policy identifier.
  • the Nginx server needs to load the string-form policy parsing file and user information parsing file in the Redis server into the memory by loading the string (lua's loadstring).
  • Lua is a dynamic scripting language that can be embedded into Nginx server configuration files.
  • the Nginx server first loads the string-form policy parsing file and user information parsing file into lua, converts them from string form into lua's Table form, and then stores them in memory, where the Table form can be called directly by the Nginx server.
  • Because the policy parsing file and the user information parsing file are initially stored in the Redis server in the form of strings, rather than directly in the Nginx server, a new file only needs to be converted into a string and uploaded to the Redis server through the background management page; the new traffic distribution policy is then set as the current one to complete the splitting-policy update.
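  • The update path just described (convert the new file to a string, upload it through the background management page, set the new policy as current) can be sketched as follows; the dict stand-ins, key names, and JSON string format are illustrative assumptions:

```python
import json

redis_store = {}  # stand-in for the Redis server
cache = {}        # stand-in for the Nginx Cache

def upload_new_policy(policy_id, policy_table):
    # Background-management step: convert the new policy parsing file to
    # a string, upload it, and set it as the current splitting policy.
    redis_store["policy:" + policy_id] = json.dumps(policy_table)
    redis_store["current_policy_id"] = policy_id
    # Replacing the policy clears the Cache, so the Nginx stand-in will
    # re-read the current identifier on its next periodic check.
    cache.clear()

upload_new_policy("city_policy_v2", {"SH": "beta2", "BJ": "beta1"})
```

Nothing here touches the Nginx process itself, which is the point of the claims: the update is a pure data change in Redis plus a Cache clear.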
  • Step 210 Publish according to the offloading policy parsed by the policy parsing file.
  • the corresponding grayscale publishing may be performed according to the shunting policy parsed by the policy parsing file.
  • The Nginx server parses the corresponding traffic-splitting policy using the policy parsing file and obtains the specific correspondences within it, that is, the correspondence between at least one kind of parameter information and the forwarding path (upstream). For example, if the traffic distribution policy splits traffic by city information, the policy parsing file resolves the correspondence between city information and forwarding paths in the policy; the user feature then extracted from the user information by the user information parsing file must also be city information, so the forwarding path corresponding to the user feature can be determined from the parsed correspondence between city information and forwarding paths.
  • the parsed city information and forwarding path are: Shanghai (SH) corresponds to forwarding path 1, Beijing (BJ) corresponds to forwarding path 2, and the remaining cities correspond to forwarding path 3. If the extracted user feature is Shanghai, then the forwarding path corresponding to the user feature is 1.
  • the corresponding relationship between the parsed at least one parameter information and the forwarding path is saved to the Cache, so that when the next user requests to come over, the parsed shunting policy can be directly used for offloading and forwarding.
  • In one embodiment, the policy parsing file and the user information parsing file corresponding to a newly added traffic distribution policy are converted into strings and uploaded to the Redis server. When the Nginx server needs to use the newly added policy, it looks up the corresponding new policy parsing file and user information parsing file in the Redis server, loads the string-form files into memory via lua by loading the strings, and then publishes according to the traffic distribution policy parsed from the policy parsing file.
  • Because a new traffic distribution policy only requires its policy parsing file and user information parsing file to be converted into strings and stored in the Redis server, the Nginx server can dynamically load the new files from the Redis server into memory without reloading or restarting the Nginx server. This is simple to operate and saves time, thereby improving forwarding efficiency.
  • In one embodiment, Step 208 includes:
  • Step 208A: Search for the policy parsing file ID and the user information parsing file ID corresponding to the current traffic distribution policy identifier. After obtaining the current identifier, the Nginx server searches the Redis server for the two file IDs corresponding to it.
  • An ID is used to uniquely identify a file or piece of content.
  • The mapping between traffic distribution policy identifiers and the corresponding policy parsing file IDs and user information parsing file IDs is pre-stored in the Redis server, so the file IDs corresponding to the current identifier can be found in the Redis server according to the current identifier.
  • Step 208B: According to the policy parsing file ID and the user information parsing file ID, the string-form policy parsing file and user information parsing file in the Redis server are loaded into memory via lua by loading the strings, converted into Table form, and stored.
  • The Nginx server can locate the corresponding policy parsing file and user information parsing file in the Redis server according to the two file IDs. Because both files exist in the Redis server as strings, and the string form cannot be called directly, the string-form files must be loaded into memory via lua and converted from String form into Table form for storage. The Nginx server can then call the policy parsing file and the user information parsing file directly to parse user requests and determine the corresponding forwarding path, and can also conveniently call the files directly from memory the next time they are needed.
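The string-to-Table conversion described above would, in OpenResty, typically rely on Lua's load/loadstring facilities; the patent gives no code. The sketch below is a rough Python analogue of the same idea: a string-form parsing file keyed by file ID is fetched, evaluated into an in-memory mapping, and cached for the next call. The store layout, key names, and file contents here are all illustrative assumptions, not the patent's data:

```python
import ast

# Stand-in for the Redis server: string-form parsing files keyed by file ID.
# In the patent's design the strings live in Redis and are loaded via lua;
# this in-process dict is only an illustration.
redis_store = {
    "policy_file_7": "{'SH': 'beta1', 'BJ': 'beta2', '*': 'beta3'}",
}

memory_cache = {}  # plays the role of the Nginx worker's in-memory Table store


def load_parsing_file(file_id):
    """Load a string-form parsing file and convert it to a dict ('Table' form)."""
    if file_id in memory_cache:          # already loaded: call it directly
        return memory_cache[file_id]
    raw = redis_store[file_id]           # string form, not directly callable
    table = ast.literal_eval(raw)        # convert String form into Table form
    memory_cache[file_id] = table        # keep it in memory for next time
    return table


table = load_parsing_file("policy_file_7")
print(table["SH"])  # prints beta1
```

Once converted, lookups hit the in-memory mapping directly, which mirrors why the Nginx worker only pays the Redis round-trip and conversion cost on the first request after a policy change.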
  • In one embodiment, the step of publishing according to the traffic distribution policy parsed from the policy parsing file includes: parsing the corresponding policy with the policy parsing file; resolving the correspondence between at least one item of parameter information and the forwarding path in the policy; storing that correspondence in the Cache; and publishing according to the correspondence stored in the Cache.
  • The corresponding traffic distribution policy is parsed by calling the policy parsing file, the correspondence between the parameter information and the forwarding paths is resolved, and the parsed correspondence is saved to the Cache, so that the next user request can be split and forwarded directly using the already-parsed policy.
  • The parameter information comes in several varieties depending on its type and where it is extracted from. For example, UID information may be split by UID suffix, by a specified set of special UIDs, or by UID user segment, depending on the extraction location.
  • Traffic distribution strategies are divided into single-level and multi-level splitting. Single-level splitting is relatively simple: only one item of parameter information is needed to find the corresponding forwarding path. Multi-level splitting references several items of parameter information and comes in two forms: union and intersection.
  • For example, suppose three levels of splitting are configured. The first level splits by city, with two branches, Shanghai (SH) and Beijing (BJ): the upstream (forwarding path) for SH is beta1 and the upstream for BJ is beta2. The second level splits by UID set: UIDs 123, 124 and 125 map to upstream beta1, while UIDs 567, 568 and 569 map to upstream beta2. The third level splits by IP range: IPs whose long value lies in 1000001~2000000 map to upstream beta1, and IPs whose long value lies in 2000001~3000000 map to upstream beta2. If the union form is used, the city information is extracted from the user information first; if it matches, the request is forwarded to beta1 or beta2 and the UID and IP information are not considered. If no city information is extracted, or the city is neither SH nor BJ, the UID is examined next: if the UID is 567, for instance, the request is forwarded to beta2, and so on. As soon as one forwarding condition is met, the remaining information need not be checked. If the intersection form is used, all three conditions must be satisfied at the same time: if the acquired city information is SH, whose upstream is beta1, then the acquired UID set and IP range must also correspond to beta1 before the request is forwarded to beta1. If some information cannot be obtained, or the upstreams implied by the obtained information disagree, the request cannot be split and forwarded; for example, if the acquired city information is SH but the acquired UID is 567, the two correspond to beta1 and beta2 respectively, so the three conditions are not met simultaneously. In practice, the intersection form is rarely used alone; intersection and union are used in combination.
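The union and intersection behaviour of the three-level example above can be sketched as follows. This is an illustrative model, not the patent's Lua implementation; the function names and user-info fields are assumptions:

```python
# Hypothetical three-level policy from the example above: city, UID set, IP range.
# Each level maps a user feature to an upstream; None means "no match at this level".

def match_city(user):
    return {"SH": "beta1", "BJ": "beta2"}.get(user.get("city"))

def match_uid(user):
    uid = user.get("uid")
    if uid in (123, 124, 125):
        return "beta1"
    if uid in (567, 568, 569):
        return "beta2"
    return None

def match_ip(user):
    ip = user.get("ip_long", 0)
    if 1000001 <= ip <= 2000000:
        return "beta1"
    if 2000001 <= ip <= 3000000:
        return "beta2"
    return None

LEVELS = [match_city, match_uid, match_ip]

def route_union(user):
    """Union: the first level that matches decides; later levels are skipped."""
    for level in LEVELS:
        upstream = level(user)
        if upstream is not None:
            return upstream
    return None  # no condition met: fall through to the default upstream

def route_intersection(user):
    """Intersection: every level must agree on the same upstream."""
    upstreams = {level(user) for level in LEVELS}
    if len(upstreams) == 1 and None not in upstreams:
        return upstreams.pop()
    return None  # missing or conflicting information: do not split

print(route_union({"city": "XX", "uid": 567}))                       # prints beta2
print(route_intersection({"city": "SH", "uid": 567, "ip_long": 1}))  # prints None
```

Note how the union form stops at the first matching level, while the intersection form requires every level to agree on the same upstream before forwarding.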
  • In one embodiment, the grayscale publishing method further includes:
  • Step 402: The Nginx server periodically checks, according to a preset rule, whether a traffic distribution policy identifier exists in the Cache. If it does, the process proceeds directly to step 406; if not, the process proceeds to step 404.
  • Step 404: Read the current traffic distribution policy identifier from a preset location in the Redis server.
  • Step 406: Check, according to the current traffic distribution policy identifier, whether the policy parsing file and the user information parsing file corresponding to the identifier exist in memory. If they do, the process proceeds directly to step 410; if not, the process proceeds to step 408.
  • Step 408: According to the current traffic distribution policy identifier, the string-form policy parsing file and user information parsing file in the Redis server are loaded into memory via lua by loading the strings.
  • Step 410: The policy parsing file is used to parse the corresponding traffic distribution policy, the correspondence between at least one item of parameter information and the forwarding path in the policy is resolved, and that correspondence is saved to the Cache.
  • Step 412: Receive a request sent by the client and extract the user information in the request.
  • The Nginx server receives the request sent by the client and extracts the user information in it. The user information includes at least one of city information, IP address information, UID information, remote address information, and the like.
  • Step 414: Extract at least one item of parameter information from the user information according to the preset extraction manner in the user information parsing file, and use the extracted parameter information as the user feature.
  • The Nginx server parses the user information according to the current user information parsing file in memory, extracts at least one item of parameter information from the user information according to the preset extraction manner, and uses the extracted parameter information as the user feature.
  • Which user feature is extracted depends on the corresponding traffic distribution strategy. For example, if the traffic distribution policy splits by city information (City), the user information parsing file corresponding to that policy naturally extracts the city information from the user information, and the extracted city information is the user feature.
  • A traffic distribution strategy may be based on one item of parameter information or on several. When splitting is performed according to multiple items of parameter information, multiple items need to be extracted from the user information, for example extracting city information and UID information simultaneously; if these cannot be extracted, the IP information may need to be extracted further. Exactly what information is extracted as the user feature is determined by the corresponding traffic distribution strategy.
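A minimal sketch of the feature-extraction step, assuming the user information parsing file boils down to an ordered list of fields to extract. The field names and fallback order are illustrative assumptions, not taken from the patent:

```python
# Hypothetical "user information parsing file" content: an ordered list of the
# parameter fields the current policy cares about (names are assumptions).
EXTRACTION_ORDER = ["city", "uid", "ip"]

def extract_user_feature(user_info, extraction_order=EXTRACTION_ORDER):
    """Walk the preset extraction order and collect whatever is present;
    later fields (e.g. IP) serve as fallbacks when earlier ones are missing."""
    return {k: user_info[k] for k in extraction_order if k in user_info}

print(extract_user_feature({"uid": 123, "ip": "10.0.0.1"}))  # prints {'uid': 123, 'ip': '10.0.0.1'}
```

The extracted mapping is then matched against the parameter-to-upstream correspondence held in the Cache.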
  • Step 416: Determine the forwarding path corresponding to the user feature according to the correspondence between the parameter information and the forwarding paths stored in the Cache, and forward the request accordingly.
  • The forwarding path corresponding to the user feature is determined according to the correspondence between the parameter information and the forwarding paths stored in the Cache, and the request is then forwarded along that path. Take multi-level splitting as an example, using a strategy that combines intersection and union.
  • The first level of splitting contains two policies in an intersection relationship: policy ID 0 splits by city, where the upstream for Shanghai (SH) is beta1 and the upstream for Beijing (BJ) is beta2; policy ID 1 splits by UID set, where UIDs 123, 124 and 125 map to upstream beta1 and UIDs 567, 568 and 569 map to upstream beta2. The second level of splitting is in a union relationship with the first level and contains only one policy: policy ID 2 splits by IP range, where IPs whose long value lies in 1000001~2000000 map to upstream beta1 and IPs whose long value lies in 2000001~3000000 map to upstream beta2. If the city information in the user information is SH and the UID is 123, the first level matches and the request is forwarded to beta1; otherwise, the second-level policy is evaluated, and if the long value of the IP is 1200000, the request is forwarded to beta1.
  • In one embodiment, the step of publishing according to the traffic distribution policy parsed from the policy parsing file includes: parsing the corresponding policy with the policy parsing file, resolving the corresponding percentage policy, and publishing according to the percentage policy.
  • In this embodiment, the traffic distribution policy forwards requests according to a percentage policy. The Nginx server parses the corresponding traffic distribution policy with the policy parsing file and resolves the specific percentage policy, that is, what percentage of requests is forwarded to path A and what percentage is forwarded to path B.
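A percentage policy like the one above can be sketched as a stable hash-based split, so that a given user always lands on the same path. The 30/70 split, the bucket scheme, and the use of CRC32 are assumptions for illustration; the patent does not specify how the percentage is applied:

```python
import zlib

# Hypothetical percentage policy: 30% of requests to path A, 70% to path B.
PERCENT_POLICY = [(30, "pathA"), (100, "pathB")]  # cumulative percentage bounds

def route_by_percentage(uid, policy=PERCENT_POLICY):
    """Hash the UID into a stable 0-99 bucket, then pick the first cumulative
    bound that covers it, so a given user always takes the same path."""
    bucket = zlib.crc32(str(uid).encode()) % 100
    for bound, path in policy:
        if bucket < bound:
            return path
    return policy[-1][1]

counts = {"pathA": 0, "pathB": 0}
for uid in range(10000):
    counts[route_by_percentage(uid)] += 1
print(counts)  # roughly a 30/70 split
```

Hashing on a per-user value rather than picking randomly per request keeps each user pinned to one version for the duration of the test.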
  • In one embodiment, the internal structure of the Nginx server 104 is as shown in FIG. 5, which includes a processor, a non-volatile storage medium, an internal memory, and a network interface connected through a system bus.
  • The non-volatile storage medium of the Nginx server stores an operating system and computer-readable instructions that can be executed by the processor to implement a grayscale publishing method suitable for the Nginx server.
  • The processor provides computing and control capabilities to support the operation of the entire server.
  • The internal memory in the Nginx server 104 provides an environment for the operation of the operating system and the computer-readable instructions in the non-volatile storage medium, and the network interface is used for network communication. It will be understood by those skilled in the art that the structure shown in FIG. 5 is only a block diagram of the partial structure related to the solution of the present application and does not constitute a limitation on the Nginx server 104 to which the solution is applied; a specific Nginx server 104 may include more or fewer components than shown, combine certain components, or arrange the components differently.
  • When the computer-readable instructions in the Nginx server of FIG. 5 are executed by the processor, the processor is configured to: periodically check, according to a preset rule, whether a traffic distribution policy identifier exists in the Cache, and if not, read the current traffic distribution policy identifier from a preset location in the Redis server; search, according to the current identifier, for the policy parsing file and the user information parsing file corresponding to it; if they do not exist in memory, load the string-form policy parsing file and user information parsing file in the Redis server into memory via lua by loading the strings according to the current identifier; and publish according to the traffic distribution policy parsed from the policy parsing file.
  • In one embodiment, if the policy parsing file and the user information parsing file corresponding to the current traffic distribution policy identifier do not exist in memory, the step of loading the string-form files from the Redis server into memory via lua includes: searching for the policy parsing file ID and the user information parsing file ID corresponding to the current identifier; and, according to the two file IDs, loading the string-form policy parsing file and user information parsing file in the Redis server into memory via lua by loading the strings and converting them into Table form for storage.
  • In one embodiment, publishing according to the traffic distribution policy parsed from the policy parsing file includes: parsing the corresponding policy with the policy parsing file; resolving the correspondence between at least one item of parameter information and the forwarding path in the policy; storing that correspondence in the Cache; and publishing according to the correspondence stored in the Cache.
  • In one embodiment, the processor is further configured to: receive a request sent by the client and extract the user information in the request; extract at least one item of parameter information from the user information according to the preset extraction manner in the user information parsing file, and use the extracted parameter information as the user feature; and determine the forwarding path corresponding to the user feature according to the correspondence between the parameter information and the forwarding paths stored in the Cache, and forward the request along that path.
  • In one embodiment, publishing according to the traffic distribution policy parsed from the policy parsing file includes: parsing the corresponding policy with the policy parsing file, resolving the corresponding percentage policy, and publishing according to the percentage policy.
  • The foregoing storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, or a read-only memory (ROM), or may be a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A gated launch system, comprising a Redis server (102) and an Nginx server (104). The Nginx server (104) is used to check whether the cache contains a traffic division policy identifier. If not, the Nginx server reads the current traffic division policy identifier from a predetermined location in the Redis server (102), and checks, according to the current traffic division policy identifier, whether the memory contains the policy parsing file and the user information parsing file corresponding to that identifier. If not, the Nginx server loads, according to the current traffic division policy identifier and by means of character-string loading, the policy parsing file and the user information parsing file stored on the Redis server (102) in character-string format into memory by means of Lua, and performs a launch according to the traffic division policy obtained by parsing the policy parsing file.

Description

Grayscale publishing method, system, server and storage medium

This application claims priority to Chinese Patent Application No. 2016111239034, filed with the China Patent Office on December 8, 2016 and entitled "Grayscale publishing method and system", the entire contents of which are incorporated herein by reference.

[Technical Field]

The present invention relates to the field of computer processing, and in particular to a grayscale publishing method, system, server, and storage medium.

[Background Art]

Grayscale publishing is a publishing method that allows a smooth transition between black and white. An AB test is one form of grayscale publishing: some users continue to use version A while other users start to use version B. If the users raise no objections to B, the scope is gradually expanded until all users have been migrated to B. Grayscale publishing keeps the overall system stable, because problems can be discovered and corrected during the initial grayscale stage, which limits their impact. An ABtest splits traffic according to the traffic distribution policies configured in the system. During the test phase, when a problem is found, a new traffic distribution policy often needs to be added to the Redis server (a cache server), and each traffic distribution policy corresponds to one policy parsing file and one user information parsing file. Because the policy parsing file and user information parsing file corresponding to a traditional traffic distribution policy are stored in the memory of the Nginx server (a performance-oriented HTTP server), adding a traffic distribution policy requires uploading the corresponding policy parsing file and user information parsing file to the Nginx server, and the Nginx server must be reloaded or restarted in the process. Restarting the Nginx server is not only time-consuming but also very troublesome.

[Summary of the Invention]

According to various embodiments of the present application, a grayscale publishing method and system are provided.

A grayscale publishing system includes:

a Redis server, configured to receive and store an uploaded traffic distribution policy together with the policy parsing file and the user information parsing file corresponding to the traffic distribution policy, where the policy parsing file and the user information parsing file are stored in the form of strings; and

an Nginx server, configured to check whether a traffic distribution policy identifier exists in the Cache, and if the Cache contains no traffic distribution policy identifier, to read the current traffic distribution policy identifier from a preset location in the Redis server; where

the Nginx server is further configured to check, according to the current traffic distribution policy identifier, whether the policy parsing file and the user information parsing file corresponding to the current identifier exist in memory, and if they do not, to load the string-form policy parsing file and user information parsing file in the Redis server into memory via lua by loading the strings according to the current identifier; and where

the Nginx server is further configured to publish according to the traffic distribution policy parsed from the policy parsing file.

A grayscale publishing method includes:

an Nginx server periodically checking, according to a preset rule, whether a traffic distribution policy identifier exists in the Cache, and if not, reading the current traffic distribution policy identifier from a preset location in a Redis server, where the Redis server stores the current traffic distribution policy identifier and the corresponding policy parsing file and user information parsing file in the form of strings;

checking, according to the current traffic distribution policy identifier, whether the policy parsing file and the user information parsing file corresponding to the current identifier exist in memory;

if the policy parsing file and the user information parsing file corresponding to the current identifier do not exist in memory, loading the string-form policy parsing file and user information parsing file in the Redis server into memory via lua by loading the strings according to the current identifier; and

publishing according to the traffic distribution policy parsed from the policy parsing file.

A server includes a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:

periodically checking, according to a preset rule, whether a traffic distribution policy identifier exists in the Cache, and if not, reading the current traffic distribution policy identifier from a preset location in a Redis server, where the Redis server stores the current traffic distribution policy identifier and the corresponding policy parsing file and user information parsing file in the form of strings;

checking, according to the current traffic distribution policy identifier, whether the policy parsing file and the user information parsing file corresponding to the current identifier exist in memory;

if the policy parsing file and the user information parsing file corresponding to the current identifier do not exist in memory, loading the string-form policy parsing file and user information parsing file in the Redis server into memory via lua by loading the strings according to the current identifier; and

publishing according to the traffic distribution policy parsed from the policy parsing file.

One or more non-volatile readable storage media store computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:

periodically checking, according to a preset rule, whether a traffic distribution policy identifier exists in the Cache, and if not, reading the current traffic distribution policy identifier from a preset location in a Redis server, where the Redis server stores the current traffic distribution policy identifier and the corresponding policy parsing file and user information parsing file in the form of strings;

checking, according to the current traffic distribution policy identifier, whether the policy parsing file and the user information parsing file corresponding to the current identifier exist in memory;

if the policy parsing file and the user information parsing file corresponding to the current identifier do not exist in memory, loading the string-form policy parsing file and user information parsing file in the Redis server into memory via lua by loading the strings according to the current identifier; and

publishing according to the traffic distribution policy parsed from the policy parsing file.

Details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will become apparent from the description, the drawings, and the claims.

[Description of the Drawings]

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.

FIG. 1 is an architecture diagram of a grayscale publishing system in an embodiment;

FIG. 2 is a flowchart of a grayscale publishing method in an embodiment;

FIG. 3 is a flowchart of a method for loading a policy parsing file and a user information parsing file according to the current traffic distribution policy in an embodiment;

FIG. 4 is a flowchart of a grayscale publishing method in another embodiment;

FIG. 5 is a diagram of the internal structure of an Nginx server in an embodiment.

【具体实施方式】 【detailed description】

为了使本发明的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本发明进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本发明,并不用于限定本发明。The present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It is understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

如图1所示,在一个实施例中,提出了一种灰度发布系统,该系统包括:Redis服务器102和Nginx服务器104;其中,As shown in FIG. 1, in one embodiment, a grayscale publishing system is proposed, the system comprising: a Redis server 102 and an Nginx server 104;

Redis服务器102,用于接收上传的分流策略以及与分流策略对应的策略解析文件和用户信息解析文件并进行存储,其中,策略解析文件和用户信息解析文件是以字符串的形式进行存储的。The Redis server 102 is configured to receive and upload the uploaded traffic splitting policy and the policy parsing file and the user information parsing file corresponding to the traffic diverting policy, where the policy parsing file and the user information parsing file are stored in the form of a string.

在一个实施例中,ABtest(AB测试)是依据系统中配置的分流策略进行分流工作的, 而针对每个分流策略系统都需要一个策略解析文件和一个用户信息解析文件;其中,策略解析文件用于对分流策略进行解析,解析出分流策略中用户特征和转发路径之间的对应关系。用户信息解析文件用于对获取到的用户信息进行解析,解析出用户信息中的用户特征,继而确定出与该用户特征对应的具体转发路径,根据该确定的转发路径进行转发。在本实施例中,当需要新增分流策略时,通过后台管理页面直接将分流策略和分流策略对应的策略解析文件和用户信息解析文件上传到Redis服务器,其中,需要将分流策略对应的策略解析文件和用户信息解析文件先转换为字符串的形式,然后再上传到Redis服务器,即在Redis服务器中策略解析文件和用户信息解析文件是以字符串的形式进行存储的。In one embodiment, the ABtest (AB test) is based on the offloading policy configured in the system. For each of the traffic distribution policies, a policy resolution file and a user information analysis file are required. The policy analysis file is used to parse the traffic distribution policy, and the correspondence between the user features and the forwarding path in the traffic distribution policy is analyzed. The user information parsing file is used to parse the obtained user information, parse the user feature in the user information, and then determine a specific forwarding path corresponding to the user feature, and forward the packet according to the determined forwarding path. In this embodiment, when a new traffic distribution policy is required, the policy analysis file and the user information analysis file corresponding to the traffic distribution policy and the traffic distribution policy are directly uploaded to the Redis server through the background management page, where the policy corresponding to the traffic distribution policy needs to be resolved. The file and user information parsing file is first converted into a string form and then uploaded to the Redis server. That is, the policy parsing file and the user information parsing file in the Redis server are stored in the form of a string.

The Nginx server 104 is configured to check whether a splitting policy identifier exists in its Cache; if not, the current splitting policy identifier is read from a preset location on the Redis server.

In this embodiment, the splitting policy identifier uniquely identifies one splitting policy; since there may be multiple splitting policies, the current splitting policy identifier refers to the identifier corresponding to the splitting policy currently in use. In general, the current splitting policy identifier is stored in the Cache (cache memory) of the Nginx server, but when the splitting policy needs to be replaced, the contents of the Cache must first be cleared. That is, if a new splitting policy is to be used, the previous splitting policy identifier must be removed from the Cache, so the first time a new splitting policy is used, no splitting policy identifier exists in the Cache. The Cache belongs to internal memory, and its contents may be cleared through a dedicated clearing mechanism, namely an interface that empties the Cache, through which the contents are removed. A validity period may also be set for the contents of the Cache; for example, if the splitting policy identifier in the Cache has not been used for more than one minute, it is cleared automatically. No restriction is placed here on how the Cache is emptied. In one embodiment, the Nginx server periodically checks, according to a preset rule, whether a splitting policy identifier exists in the Cache; if it does not, the current splitting policy has most likely been replaced. Because the current splitting policy is set on the Redis server, if no current splitting policy identifier exists in the Cache, the Nginx server needs to read the current splitting policy identifier from a preset location on the Redis server. In one embodiment, the Nginx server stores the location (that is, the address) at which the current splitting policy identifier resides, reads the current splitting policy identifier from the Redis server according to that location, and saves the identifier it reads back into the Cache to facilitate subsequent splitting and forwarding.
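The Cache lookup with a one-minute expiry and Redis fallback can be sketched as below. Plain dicts stand in for the Nginx Cache and the Redis server, and the key names are illustrative assumptions.

```python
import time

CACHE_TTL = 60.0                            # one-minute validity period
cache = {}                                  # "policy_id" -> (identifier, stored_at)
redis_store = {"current_policy_id": "p2"}   # preset location on the Redis server

def current_policy_id(now=None):
    now = time.time() if now is None else now
    entry = cache.get("policy_id")
    if entry is not None and now - entry[1] <= CACHE_TTL:
        return entry[0]                     # Cache hit: identifier still valid
    # Cache miss (cleared, or expired because unused for over a minute):
    # read the current identifier from the preset Redis location.
    pid = redis_store["current_policy_id"]
    cache["policy_id"] = (pid, now)         # save back for subsequent forwarding
    return pid
```

Changing the value at the preset Redis location (switching the current splitting policy) only takes effect once the cached identifier expires or is cleared, matching the behavior described above.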

The Nginx server 104 is further configured to check, according to the current splitting policy identifier, whether a policy parsing file and a user information parsing file corresponding to that identifier exist in memory; if not, the string-form policy parsing file and user information parsing file on the Redis server are loaded into memory through lua, according to the current splitting policy identifier, by loading them as strings.

In this embodiment, after obtaining the current splitting policy identifier, the Nginx server first checks, according to that identifier, whether a policy parsing file and a user information parsing file corresponding to the current splitting policy identifier exist in memory. If they do not, the splitting policy corresponding to the identifier is a newly added splitting policy whose policy parsing file and user information parsing file exist on the Redis server in the form of strings. The Nginx server then loads these string-form files from the Redis server into memory through lua by means of loadString, where lua is a dynamic scripting language that can be embedded in the Nginx server configuration file. In one embodiment, the Nginx server first loads the string-form policy parsing file and user information parsing file into lua, converts them within lua from string form into Table form, and then stores them in memory; the Table form can be invoked directly by the Nginx server. Because the Nginx server can load the policy parsing file and user information parsing file from the Redis server into memory by loading strings, adding a new file only requires converting it to a string, uploading it to the Redis server through the background management page, and setting the new splitting policy as the current splitting policy; the splitting policy is thus updated without restarting the Nginx server.
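The loadString step turns source stored as a plain string into something the server can invoke. As a loose Python analogue (with `exec` playing the role of lua's loadstring, and all names hypothetical), the idea can be sketched as:

```python
# The parsing file travels as a string, exactly as it is stored on Redis.
policy_parser_src = """
def parse(policy):
    # Derive the user-feature -> upstream table from the raw policy.
    return {feature: upstream for feature, upstream in policy.items()}
"""

memory = {}      # stands in for the Nginx worker's in-process memory
namespace = {}
exec(policy_parser_src, namespace)            # string -> executable object
memory["policy_parser:p1"] = namespace["parse"]

# The loaded parser can now be called directly, with no server restart.
table = memory["policy_parser:p1"]({"SH": "beta1", "BJ": "beta2"})
```

In the actual system the conversion target is a lua Table rather than a Python dict, but the essential point is the same: the string form cannot be called, while the loaded form can.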

The Nginx server 104 is further configured to perform the gated launch according to the splitting policy parsed out by the policy parsing file.

In this embodiment, once the Nginx server has loaded the policy parsing file and the user information parsing file into memory, it can perform the corresponding gated launch according to the splitting policy parsed out by the policy parsing file. In one embodiment, the Nginx server uses the policy parsing file to parse the corresponding splitting policy and obtain its concrete correspondences, that is, to parse out the correspondence between at least one kind of parameter information in the splitting policy and the forwarding paths (upstreams). For example, if the splitting policy splits traffic by city information, the policy parsing file resolves the correspondence between city information and forwarding paths in the splitting policy, and the user feature that the user information parsing file extracts from the user information must likewise be city information, so the forwarding path corresponding to a user feature can be determined from the parsed correspondence between city information and forwarding paths. For instance, the parsed city information and forwarding paths might be: Shanghai (SH) corresponds to forwarding path 1, Beijing (BJ) corresponds to forwarding path 2, and all other cities correspond to forwarding path 3; if the extracted user feature is Shanghai, the forwarding path corresponding to that user feature is path 1. The parsed correspondence between the at least one kind of parameter information and the forwarding paths is saved to the Cache, so that when the next user request arrives, the already-parsed splitting policy can be used directly for splitting and forwarding.
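The SH/BJ example above reduces to a lookup with a default, which can be sketched as below ("default" standing for "all other cities correspond to forwarding path 3"):

```python
# Parsed correspondence between city information and forwarding paths,
# as it would sit in the Cache after parsing (names are illustrative).
parsed_policy = {"SH": "path1", "BJ": "path2", "default": "path3"}

def upstream_for(city):
    # Shanghai -> path 1, Beijing -> path 2, every other city -> path 3.
    return parsed_policy.get(city, parsed_policy["default"])
```

A request whose extracted user feature is "SH" resolves to path1, while any city outside SH/BJ falls through to path3.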

In this embodiment, when a new splitting policy needs to be added, the policy parsing file and user information parsing file corresponding to the new splitting policy merely need to be converted into strings and uploaded to the Redis server. When the Nginx server needs to use the newly added parsing files, it looks up the corresponding new policy parsing file and user information parsing file on the Redis server, loads these string-form files into memory through lua by loading them as strings, and then performs the launch according to the splitting policy parsed out by the policy parsing file. Throughout this process, adding a splitting policy only requires converting its policy parsing file and user information parsing file into strings and storing them on the Redis server; the Nginx server can dynamically load the newly added files from the Redis server into memory without a reload or restart, which is simple to operate and saves time, thereby improving the corresponding forwarding efficiency.

In one embodiment, the Nginx server is further configured to look up, according to the current splitting policy identifier, the policy parsing file ID and user information parsing file ID corresponding to that identifier, and, according to those IDs, load the string-form policy parsing file and user information parsing file on the Redis server into memory through lua by loading them as strings, converting them into Table form for storage.

In this embodiment, after obtaining the current splitting policy identifier, the Nginx server looks up the corresponding policy parsing file ID and user information parsing file ID on the Redis server according to that identifier, where an ID uniquely identifies a file or a piece of content. In one embodiment, the Redis server pre-stores the correspondence between splitting policy identifiers and the policy parsing file ID and user information parsing file ID, so the IDs corresponding to the current splitting policy identifier can be found on the Redis server from that identifier. Having obtained the policy parsing file ID and the user information parsing file ID, the Nginx server can locate the corresponding policy parsing file and user information parsing file on the Redis server according to those IDs. Because these files exist on the Redis server as strings (String), and the string form cannot be invoked directly, the string-form files must be loaded into memory through lua and converted there from String form into Table form for storage. The Nginx server can then directly invoke the policy parsing file and the user information parsing file to parse a user request and determine the corresponding forwarding path, and the next time they are needed, the files can be invoked directly from memory.
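The two-step lookup (identifier to file IDs, then file IDs to string-form files) can be sketched as below; the key layout is an assumption for illustration, since the patent only states that the correspondence is pre-stored on the Redis server.

```python
redis_store = {
    # Pre-stored mapping: splitting policy identifier -> parsing-file IDs.
    "ids:p1": {"policy_file": "pf-01", "user_file": "uf-01"},
    # The files themselves, stored in string form.
    "file:pf-01": '{"SH": "beta1", "BJ": "beta2"}',
    "file:uf-01": '{"feature": "city"}',
}

def fetch_parser_files(policy_id):
    ids = redis_store[f"ids:{policy_id}"]
    return (redis_store[f"file:{ids['policy_file']}"],
            redis_store[f"file:{ids['user_file']}"])

policy_src, user_src = fetch_parser_files("p1")
```

Both values come back as strings; the load-and-convert-to-Table step described above happens afterwards, in memory.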

In one embodiment, the Nginx server is further configured to use the policy parsing file to parse the corresponding splitting policy, parse out the correspondence between at least one kind of parameter information in the splitting policy and the forwarding paths, and save that correspondence to the Cache.

In this embodiment, although the correspondence between parameter information and forwarding paths has already been set in the splitting policy, a computer cannot directly recognize that correspondence, so the policy parsing file must be invoked to parse the corresponding splitting policy, parse out the correspondence between at least one kind of parameter information and the forwarding paths, and save the parsed correspondence to the Cache; when the next user request arrives, the already-parsed splitting policy can then be used directly for splitting and forwarding. The parameter information comes in several kinds, depending on its type and location; four types are common: city information (City), UID (User Identification) information, IP information, and URL information. Depending on where it is extracted, UID information can be further divided into UID-suffix splitting, specified special-UID splitting, UID user-segment splitting, and so on. Splitting policies, in turn, are divided into single-level and multi-level splitting. Single-level splitting is relatively simple: only one kind of parameter information needs to be consulted to find the corresponding forwarding path. Multi-level splitting consults several kinds of parameter information and comes in two variants, union and intersection: under a union, a request is forwarded as soon as any one condition is satisfied, while under an intersection, multiple conditions must be satisfied simultaneously before the request can be forwarded. Take three-level splitting as an example. The first level splits by city into two cases, Shanghai (SH) and Beijing (BJ), with the policy: the upstream (forwarding path) for SH is beta1 and the upstream for BJ is beta2. The second level splits by UID set: the upstream for UIDs 123, 124, and 125 is beta1, and the upstream for UIDs 567, 568, and 569 is beta2. The third level splits by IP range: the upstream for IPs whose long value lies in 1000001–2000000 is beta1, and the upstream for long values in 2000001–3000000 is beta2. Under the union variant, the city information is extracted from the user information first; if city information SH or BJ can be extracted, the request is forwarded to beta1 or beta2 without considering the UID or IP information. If no city information can be extracted, or the city is neither SH nor BJ, the UID is checked next; if the UID is 567, the request is forwarded to beta2, and so on, forwarding as soon as one condition is met without fetching the remaining information. Under the intersection variant, all three conditions must be satisfied at the same time: if the extracted city information is SH, whose corresponding upstream is beta1, the UID-set information and IP-range information must still be obtained, and the user request is forwarded to beta1 only when the UID-set information also corresponds to beta1 and the IP-range information also corresponds to beta1. If the corresponding information cannot be obtained, or the obtained information maps to inconsistent upstreams, for example when the city information is SH while the UID-set information is 567, which correspond to beta1 and beta2 respectively, the three conditions are not satisfied simultaneously and the request cannot be forwarded. In practice, the intersection form is generally not used alone; intersections and unions are used in combination.
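The three-level example above can be sketched directly, with the union variant forwarding on the first matching level and the intersection variant requiring all three levels to agree on one upstream:

```python
# Level 1: city; Level 2: UID set; Level 3: IP long-value range.
CITY = {"SH": "beta1", "BJ": "beta2"}
UID = {**{u: "beta1" for u in (123, 124, 125)},
       **{u: "beta2" for u in (567, 568, 569)}}

def ip_upstream(ip_long):
    if ip_long is not None and 1000001 <= ip_long <= 2000000:
        return "beta1"
    if ip_long is not None and 2000001 <= ip_long <= 3000000:
        return "beta2"
    return None

def route_union(city, uid, ip_long):
    # Forward as soon as any one level yields an upstream; later levels
    # are never consulted once an earlier one matches.
    for hit in (CITY.get(city), UID.get(uid), ip_upstream(ip_long)):
        if hit is not None:
            return hit
    return None

def route_intersection(city, uid, ip_long):
    # All three levels must resolve, and to the same upstream.
    hits = [CITY.get(city), UID.get(uid), ip_upstream(ip_long)]
    if None in hits or len(set(hits)) != 1:
        return None
    return hits[0]
```

For example, under the union variant a request with city SH forwards to beta1 regardless of its UID; under the intersection variant a request with city SH but UID 567 is not forwarded, because the levels map to beta1 and beta2 respectively.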

In one embodiment, the Nginx server is further configured to receive a request sent by a client, extract the user information in the request, extract at least one kind of parameter information from the user information according to the extraction manner preset in the user information parsing file, take the extracted parameter information as the user feature, determine the forwarding path corresponding to that user feature according to the correspondence between parameter information and forwarding paths stored in the Cache, and forward the request along that path.

In this embodiment, the Nginx server receives a request sent by a client, extracts the user information in the request, parses the user information according to the user information parsing file corresponding to the current splitting policy identifier previously stored in memory, extracts at least one kind of parameter information from it, and takes the extracted parameter information as the user feature. Which features are extracted is related to the corresponding splitting policy. In one embodiment, if the splitting policy splits by city information (City), the user information parsing file for that policy naturally extracts the city information from the user information, and the extracted city information is the user feature. A splitting policy may split according to one kind of parameter information or several kinds at once; in the latter case, several kinds of parameter information must be extracted from the user information, for example city information and UID information together, with IP information extracted further if neither city nor UID information can be obtained. Exactly what information is extracted as the user feature is decided by the corresponding splitting policy. Once the user feature is extracted, the forwarding path corresponding to it is determined from the correspondence between parameter information and forwarding paths stored in the Cache, and the request is then forwarded along that path.
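Handling a single request per the description above can be sketched as follows; the request is modeled as a dict, and the extraction rule and field names are illustrative assumptions.

```python
# Correspondence saved in the Cache after the policy was parsed.
cache_mapping = {"SH": "beta1", "BJ": "beta2"}
# Preset extraction manner from the user information parsing file:
# this policy splits by city, so the city field is the user feature.
extract_rule = {"feature": "city"}

def forward(request):
    # Extract the user feature named by the parsing rule,
    # then resolve the forwarding path from the cached correspondence.
    feature = request.get(extract_rule["feature"])
    return cache_mapping.get(feature)  # None means no path matched

assert forward({"city": "SH", "uid": 123}) == "beta1"
```

A request carrying no recognized city yields no upstream here; in a fuller sketch the next parameter kind (UID, then IP) would be consulted, as in the multi-level example.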

In one embodiment, the Nginx server is further configured to use the policy parsing file to parse the corresponding splitting policy, parse out the corresponding percentage policy, and perform the launch according to that percentage policy.

In this embodiment, the splitting policy forwards requests according to a percentage policy. The Nginx server uses the policy parsing file to parse the corresponding splitting policy and obtain the concrete percentage policy, that is, what percentage of requests is forwarded to path A and what percentage to path B. For example, with two upstreams, beta1 and beta2, 30% of requests can be directed to beta1 and 70% to beta2, with the launch then performed according to that percentage policy. In addition, to guarantee that the same user is always forwarded to the same path, the customer number is introduced as a parameter: as long as the percentages remain unchanged, requests carrying the same customer number are always forwarded to the same upstream, so the 30%/70% allocation is only reached in the ideal case of a large number of requests.
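A percentage policy keyed on the customer number can be sketched as below. Hashing the customer number into a fixed bucket makes the choice stable per customer, so the 30/70 split only emerges statistically over many customers; `zlib.crc32` is just one convenient stable hash used for illustration, not a mechanism stated in the patent.

```python
import zlib

# 30% of traffic to beta1, 70% to beta2; percentages sum to 100.
BUCKETS = [("beta1", 30), ("beta2", 70)]

def pick_upstream(customer_no):
    # Map the customer number to a stable point in [0, 100), then walk
    # the cumulative percentage edges to find its bucket.
    point = zlib.crc32(str(customer_no).encode()) % 100
    edge = 0
    for upstream, pct in BUCKETS:
        edge += pct
        if point < edge:
            return upstream
    return BUCKETS[-1][0]
```

Because the hash of a fixed customer number never changes, repeated requests from the same customer always land on the same upstream while the percentages stay unchanged.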

As shown in FIG. 2, in one embodiment, a gated launch method is proposed. The method includes:

Step 202: the Nginx server periodically checks, according to a preset rule, whether a splitting policy identifier exists in the Cache; if so, proceed directly to step 206; if not, proceed to step 204.

Step 204: read the current splitting policy identifier from a preset location on the Redis server.

In this embodiment, the splitting policy identifier uniquely identifies one splitting policy; since there may be multiple splitting policies, the current splitting policy identifier refers to the identifier corresponding to the splitting policy currently in use. The Redis server stores the current splitting policy identifier together with the corresponding string-form policy parsing file and user information parsing file. In general, the current splitting policy identifier is stored in the Cache (cache memory) of the Nginx server, but when the splitting policy needs to be replaced, the contents of the Cache must first be cleared; that is, if a new splitting policy is to be used, the previous splitting policy identifier must be removed from the Cache, so the first time a new splitting policy is used, no splitting policy identifier exists in the Cache. The contents of the Cache may be cleared through a dedicated clearing mechanism, namely an interface that empties the Cache, through which the contents are removed. A validity period may also be set for the contents of the Cache; for example, if the splitting policy identifier in the Cache has not been used for more than one minute, it is cleared automatically. No restriction is placed here on how the Cache is emptied. In one embodiment, the Nginx server periodically checks, according to a preset rule, whether a splitting policy identifier exists in the Cache; if it does not, the current splitting policy has most likely been replaced. Because the current splitting policy is set on the Redis server, if no current splitting policy identifier exists in the Cache, the Nginx server needs to read the current splitting policy identifier from a preset location on the Redis server. In one embodiment, the Nginx server stores the location (that is, the address) at which the current splitting policy identifier resides, reads the current splitting policy identifier from the Redis server according to that location, and saves the identifier it reads back into the Cache to facilitate subsequent splitting and forwarding.

Step 206: check, according to the current splitting policy identifier, whether a policy parsing file and a user information parsing file corresponding to that identifier exist in memory; if so, proceed directly to step 210; if not, proceed to step 208.

In this embodiment, after obtaining the current splitting policy identifier, the Nginx server first checks, according to that identifier, whether a policy parsing file and a user information parsing file corresponding to the current splitting policy identifier exist in memory. If they do not, the splitting policy corresponding to the identifier is a newly added splitting policy whose policy parsing file and user information parsing file exist on the Redis server in the form of strings. In one embodiment, when a new splitting policy needs to be added, the splitting policy and its corresponding policy parsing file and user information parsing file are uploaded directly to the Redis server through the background management page, with the policy parsing file and user information parsing file first converted into string form before upload; on the Redis server, both files are stored as strings.

Step 208: according to the current splitting policy identifier, load the string-form policy parsing file and user information parsing file on the Redis server into memory through lua by loading them as strings.

In this embodiment, the Nginx server loads the string-form policy parsing file and user information parsing file on the Redis server into memory through lua by means of loadString, where lua is a dynamic scripting language that can be embedded in the Nginx server configuration file. In one embodiment, the Nginx server first loads the string-form policy parsing file and user information parsing file into lua, converts them within lua from string form into Table form, and then stores them in memory; the Table form can be invoked directly by the Nginx server. Because the policy parsing file and user information parsing file initially exist on the Redis server as strings rather than being stored directly on the Nginx server, adding a new file only requires converting it to a string, uploading it to the Redis server through the background management page, and setting the new splitting policy as the current splitting policy to update the splitting policy.

Step 210: perform the launch according to the splitting policy parsed out by the policy parsing file.

In this embodiment, after the Nginx server loads the policy parsing file and the user information parsing file into memory, it can perform the corresponding gated launch according to the traffic-splitting policy parsed out by the policy parsing file. In one embodiment, the Nginx server uses the policy parsing file to parse the corresponding traffic-splitting policy and obtain the policy's concrete mappings, that is, the correspondence between at least one kind of parameter information in the policy and a forwarding path (upstream). For example, if the policy splits traffic by city information, the policy parsing file resolves the correspondence between city information and forwarding paths; the user feature that the user information parsing file extracts from the user information is accordingly also city information, so the forwarding path corresponding to the user feature can be determined from the resolved city-to-path correspondence. For instance, the resolved mapping may be: Shanghai (SH) corresponds to forwarding path 1, Beijing (BJ) corresponds to forwarding path 2, and all other cities correspond to forwarding path 3. If the extracted user feature is Shanghai, the corresponding forwarding path is path 1. The resolved correspondence between the at least one kind of parameter information and the forwarding paths is saved to the Cache, so that when the next user request arrives, the already-parsed splitting policy can be used directly for split forwarding.
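The resolve-then-cache step above can be sketched as follows. This is an illustrative Python analogy, not the patent's actual Lua code; the names `resolve_policy` and `route`, the dict-based `CACHE`, and the policy encoding are assumptions.

```python
# Resolve a city-based splitting policy into a city -> upstream mapping,
# cache the mapping, and route subsequent requests by the extracted feature.
CACHE = {}

def resolve_policy(policy):
    """Parse a splitting policy into {city: upstream} plus a default path."""
    mapping = {rule["city"]: rule["upstream"] for rule in policy["rules"]}
    CACHE[policy["id"]] = (mapping, policy["default_upstream"])
    return mapping

def route(policy_id, user_city):
    """Pick the forwarding path (upstream) for an extracted user feature."""
    mapping, default = CACHE[policy_id]
    return mapping.get(user_city, default)

policy = {
    "id": "city-policy",
    "rules": [{"city": "SH", "upstream": 1}, {"city": "BJ", "upstream": 2}],
    "default_upstream": 3,
}
resolve_policy(policy)
print(route("city-policy", "SH"))  # SH corresponds to forwarding path 1
print(route("city-policy", "GZ"))  # any other city falls through to path 3
```

Once the mapping is in `CACHE`, later requests skip the parsing step entirely, which is the efficiency gain the passage describes.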

In this embodiment, when a new traffic-splitting policy needs to be added, only the policy parsing file and the user information parsing file corresponding to the new policy need to be converted into strings and uploaded to the Redis server. When the Nginx server needs to use the newly added policy parsing file and user information parsing file, it looks up the corresponding newly added files in the Redis server, loads the string-form policy parsing file and user information parsing file from the Redis server into memory through Lua by loading the strings, and then performs the release according to the traffic-splitting policy parsed out by the policy parsing file. In this process, adding a splitting policy only requires converting its policy parsing file and user information parsing file into strings and storing them in the Redis server; the Nginx server can dynamically load the newly added files from the Redis server into memory without reloading or restarting Nginx. This is simple to operate and saves time, thereby improving forwarding efficiency.

As shown in FIG. 3, in one embodiment, step 208, loading the string-form policy parsing file and user information parsing file in the Redis server into memory through Lua by loading strings according to the current traffic-splitting policy identifier, includes:

Step 208A: Look up, according to the current traffic-splitting policy identifier, the policy parsing file ID and the user information parsing file ID corresponding to that identifier.

In this embodiment, after the Nginx server obtains the current traffic-splitting policy identifier, it looks up the corresponding policy parsing file ID and user information parsing file ID in the Redis server according to that identifier. An ID uniquely identifies a file or a piece of content. In one embodiment, the Redis server pre-stores the correspondence between traffic-splitting policy identifiers and the policy parsing file IDs and user information parsing file IDs, so the policy parsing file ID and user information parsing file ID corresponding to the current identifier can be found in the Redis server.

Step 208B: According to the policy parsing file ID and the user information parsing file ID, load the string-form policy parsing file and user information parsing file in the Redis server into memory through Lua by loading the strings, and convert them into Table form for storage.

In this embodiment, after the Nginx server obtains the policy parsing file ID and the user information parsing file ID, it can look up the corresponding policy parsing file and user information parsing file in the Redis server according to those IDs. Because the policy parsing file and the user information parsing file exist in the Redis server in the form of strings (String), and the string form cannot be invoked directly, the string-form files need to be loaded into memory through Lua and converted there from String form into Table form for storage. The Nginx server can then directly invoke the policy parsing file and the user information parsing file to parse user requests and determine the corresponding forwarding path, and the next time they are needed, the corresponding files can be invoked directly from memory.
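The string-to-Table conversion above can be sketched as deserializing a string fetched from a key-value store into a reusable in-memory structure. The following is a minimal Python analogy only; the JSON encoding, the `fake_redis` dict, and the function name are assumptions, whereas the patent itself loads Lua strings and converts them into Lua Tables.

```python
import json

# Hypothetical stand-in for the Redis server: parsing files stored as strings.
fake_redis = {
    "policy_file:42": json.dumps({"SH": "beta1", "BJ": "beta2"}),
}

memory = {}  # stands in for the Nginx worker's in-process memory

def load_policy_file(file_id):
    """Load a string-form parsing file and convert it into a structure that
    can be invoked directly, analogous to the String -> Table conversion."""
    raw = fake_redis["policy_file:%s" % file_id]  # string form: not directly usable
    table = json.loads(raw)                       # converted to an in-memory mapping
    memory[file_id] = table                       # kept in memory for the next call
    return table

table = load_policy_file(42)
print(table["SH"])  # beta1
```

After the first load, `memory` can serve the file without another round trip to the store, matching the "invoked directly from memory next time" behavior described.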

In one embodiment, the step of performing the release according to the traffic-splitting policy parsed out by the policy parsing file includes: using the policy parsing file to parse the corresponding traffic-splitting policy, resolving the correspondence between at least one kind of parameter information in the policy and the forwarding paths, saving the correspondence between the at least one kind of parameter information and the forwarding paths to the Cache, and performing the release according to the correspondence between the at least one kind of parameter information and the forwarding paths in the Cache.

In this embodiment, although the correspondence between parameter information and forwarding paths has already been set in the traffic-splitting policy, a computer cannot recognize that correspondence directly, so the policy parsing file must be invoked to parse the corresponding policy, resolve the correspondence between at least one kind of parameter information in the policy and the forwarding paths, and save the resolved correspondence to the Cache, so that when the next user request arrives, the already-parsed policy can be used directly for split forwarding. The parameter information comes in several kinds depending on its type and location; four types are common: city information (City), UID information, IP information, and URL information. Depending on the extraction location, UID information can be further divided into UID-suffix splitting, designated special-UID splitting, UID user-segment splitting, and so on. Traffic-splitting policies are divided into single-level and multi-level splitting. Single-level splitting is simple: only one kind of parameter information needs to be consulted to find the corresponding forwarding path. Multi-level splitting consults multiple kinds of parameter information and is further divided into union and intersection modes. In union mode, traffic is forwarded as long as any one condition is satisfied; in intersection mode, several conditions must be satisfied at the same time. Take a three-level split as an example. Level 1 splits by city, with two cases, Shanghai (SH) and Beijing (BJ); the corresponding policy is that SH's upstream (forwarding path) is beta1 and BJ's upstream is beta2. Level 2 splits by UID set: the upstream for UIDs 123, 124, and 125 is beta1, and the upstream for UIDs 567, 568, and 569 is beta2. Level 3 splits by IP range: IPs whose long value is in 1000001~2000000 map to upstream beta1, and IPs whose long value is in 2000001~3000000 map to upstream beta2. In union mode, city information is first extracted from the user information; if SH or BJ can be extracted, the request is forwarded to beta1 or beta2 without considering the UID or IP information. If no city information can be extracted, or the city is neither SH nor BJ, the UID is examined next: if the UID is 567, the request is forwarded to beta2, and so on. As soon as one forwarding condition is met, the request is forwarded and the remaining information need not be obtained. In intersection mode, all three conditions above must be satisfied at the same time: if the extracted city information is SH, whose corresponding upstream is beta1, the UID-set information and IP-range information must also be obtained, and only when the UID-set information also corresponds to beta1 and the IP-range information also corresponds to beta1 is the user request forwarded to beta1. If some information cannot be obtained, or the obtained pieces correspond to inconsistent upstreams (for example, the extracted city information is SH while the extracted UID-set information is 567, which correspond to beta1 and beta2 respectively), the three conditions are not all satisfied and the split forwarding cannot be performed. In practice, intersection is generally not used alone; intersection and union are used in combination.
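The union-mode evaluation of the three-level example can be sketched as follows. The function name and policy encoding are illustrative assumptions; the city, UID, and IP values are the ones from the passage above.

```python
# Union mode: levels are tried in order and the first satisfied condition wins.
LEVELS = [
    ("city", {"SH": "beta1", "BJ": "beta2"}),
    ("uid", {123: "beta1", 124: "beta1", 125: "beta1",
             567: "beta2", 568: "beta2", 569: "beta2"}),
]
IP_RANGES = [((1000001, 2000000), "beta1"), ((2000001, 3000000), "beta2")]

def route_union(user):
    for field, table in LEVELS:
        upstream = table.get(user.get(field))
        if upstream:                      # one condition met: forward immediately
            return upstream
    ip = user.get("ip_long")              # level 3: IP long-value ranges
    if ip is not None:
        for (lo, hi), upstream in IP_RANGES:
            if lo <= ip <= hi:
                return upstream
    return None                           # no forwarding condition satisfied

print(route_union({"city": "SH"}))        # beta1; UID and IP never consulted
print(route_union({"uid": 567}))          # beta2 via the UID level
print(route_union({"ip_long": 2500000}))  # beta2 via the IP-range level
```

Because evaluation stops at the first match, a request carrying city SH never pays the cost of extracting its UID or IP, which is exactly the short-circuit behavior described for union mode.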

As shown in FIG. 4, in one embodiment, the above gated launch method further includes:

Step 402: The Nginx server periodically checks, according to a preset rule, whether a traffic-splitting policy identifier exists in the Cache; if it exists, the process goes directly to step 406; if not, the process goes to step 404.

Step 404: Read the current traffic-splitting policy identifier from a preset location in the Redis server.

Step 406: Check, according to the current traffic-splitting policy identifier, whether a policy parsing file and a user information parsing file corresponding to that identifier exist in memory; if they exist, the process goes directly to step 410; if not, the process goes to step 408.

Step 408: Load the string-form policy parsing file and user information parsing file in the Redis server into memory through Lua by loading strings according to the current traffic-splitting policy identifier.

Step 410: Use the policy parsing file to parse the corresponding traffic-splitting policy, resolve the correspondence between at least one kind of parameter information in the policy and the forwarding paths, and save the correspondence between the at least one kind of parameter information and the forwarding paths to the Cache.

Step 412: Receive a request sent by a client and extract the user information in the request.

In this embodiment, the Nginx server receives the request sent by the client and extracts the user information in the request. The user information includes at least one of city information, IP address information, UID information, remote address information, and the like.

Step 414: Extract at least one piece of parameter information from the user information according to the extraction manner preset in the user information parsing file, and use the extracted at least one kind of parameter information as the user feature.

In this embodiment, the Nginx server parses the user information according to the user information parsing file corresponding to the current traffic-splitting policy identifier in memory, extracts at least one piece of parameter information from the user information according to the preset extraction manner, and uses the extracted at least one kind of parameter information as the user feature. The extraction of the user feature is related to the corresponding traffic-splitting policy. In one embodiment, if the policy splits by city information (City), the user information parsing file corresponding to the policy naturally extracts the city information from the user information, and the extracted city information is the user feature. Of course, a splitting policy may split on one kind of parameter information or on several kinds at once; if the split is performed according to several kinds of parameter information, several pieces of parameter information need to be extracted from the user information, for example the city information and the UID information together, and if neither the city information nor the UID information can be extracted, the IP information may need to be extracted further. Exactly which information is extracted as the user feature is decided by the corresponding traffic-splitting policy.
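Extraction driven by the user information parsing file can be sketched as pulling out whichever field names that file lists. The `extract_features` helper, the spec encoding, and the request dict below are hypothetical.

```python
def extract_features(request, extraction_spec):
    """Pull the parameter kinds named by the parsing file out of the request;
    which kinds are listed is dictated by the splitting policy."""
    features = {}
    for field in extraction_spec:
        value = request.get(field)
        if value is not None:
            features[field] = value
    return features

# A city-based policy's parsing file would name only "city"; a policy that
# splits on several kinds of parameter information names several fields.
request = {"city": "SH", "uid": 123, "ip": "10.0.0.1"}
print(extract_features(request, ["city"]))         # {'city': 'SH'}
print(extract_features(request, ["city", "uid"]))  # {'city': 'SH', 'uid': 123}
```

Keeping the field list in the parsing file rather than in code is what lets a new policy change the extracted features without touching the server.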

Step 416: Determine the forwarding path corresponding to the user feature according to the correspondence between the at least one kind of parameter information and the forwarding paths stored in the Cache, and perform the corresponding forwarding according to the forwarding path.

In this embodiment, after the user feature is extracted, the forwarding path corresponding to the user feature is determined according to the correspondence between the at least one kind of parameter information and the forwarding paths stored in the Cache, and the corresponding forwarding is then performed according to that forwarding path. Take a multi-level split that combines intersection and union. Level 1 contains two policies in an intersection relationship: policy ID 0 is the city policy, where Shanghai (SH) maps to upstream beta1 and Beijing (BJ) maps to upstream beta2; policy ID 1 is the UID-set policy, where the upstream for UIDs 123, 124, and 125 is beta1 and the upstream for UIDs 567, 568, and 569 is beta2. Level 2 is in a union relationship with level 1 and contains only one policy: policy ID 2 is the IP-range policy, where IPs whose long value is in 1000001~2000000 map to upstream beta1 and IPs whose long value is in 2000001~3000000 map to upstream beta2. In this embodiment, if the city information in the user information is SH and the UID is 123, the level-1 split applies and the request is forwarded to beta1. If the city information is SH but the UID is 567, the upstreams they each map to are inconsistent and the level-1 intersection policy is not satisfied, so the level-2 splitting policy is tried next; if the long value of the IP is 1200000, the request is forwarded to beta1.
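The combined intersection-plus-union example above can be sketched as follows; the policy encoding and the `route` helper are assumptions, and the data is the level-1 and level-2 configuration from the passage.

```python
CITY = {"SH": "beta1", "BJ": "beta2"}                   # policy ID 0
UIDS = {123: "beta1", 124: "beta1", 125: "beta1",
        567: "beta2", 568: "beta2", 569: "beta2"}       # policy ID 1
IP_RANGES = [((1000001, 2000000), "beta1"),
             ((2000001, 3000000), "beta2")]             # policy ID 2

def route(user):
    # Level 1: intersection of the city policy and the UID policy,
    # satisfied only when both map to the same upstream.
    city_up = CITY.get(user.get("city"))
    uid_up = UIDS.get(user.get("uid"))
    if city_up is not None and city_up == uid_up:
        return city_up
    # Level 2 (union with level 1): fall back to the IP-range policy.
    ip = user.get("ip_long")
    if ip is not None:
        for (lo, hi), upstream in IP_RANGES:
            if lo <= ip <= hi:
                return upstream
    return None

print(route({"city": "SH", "uid": 123}))                      # beta1 at level 1
print(route({"city": "SH", "uid": 567, "ip_long": 1200000}))  # beta1 at level 2
```

The second call reproduces the inconsistent-upstream case: SH and UID 567 disagree, so level 1 fails and the IP range decides.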

In one embodiment, the step of performing the release according to the traffic-splitting policy parsed out by the policy parsing file includes: using the policy parsing file to parse the corresponding traffic-splitting policy, resolving the corresponding percentage policy, and performing the corresponding release according to the percentage policy.

In this embodiment, the traffic-splitting policy forwards according to a percentage policy. The Nginx server uses the policy parsing file to parse the corresponding splitting policy and resolve the concrete percentage policy, that is, what percentage of requests is forwarded to path A and what percentage to path B. For example, with two upstream groups, beta1 and beta2, 30% of requests can be set to go to beta1 and 70% to beta2, and the corresponding release is then performed according to the percentage policy. In addition, to ensure that the same user is always forwarded to the same path, a customer number needs to be introduced as a parameter; as long as the percentages are unchanged, requests carrying the same customer number are always forwarded to the same upstream, so the 30% and 70% allocations can only be reached in the ideal case of a large number of requests.
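Sticky percentage splitting keyed on a customer number can be sketched with a stable hash. The MD5-based bucketing below is an assumption made for illustration; the passage only requires that the same customer number always reaches the same upstream and that the percentages hold in aggregate.

```python
import hashlib

def pick_upstream(customer_no, split=(("beta1", 30), ("beta2", 70))):
    """Map a customer number deterministically into 100 buckets, then into
    upstreams by the configured percentages, so the same customer always
    lands on the same path while aggregate traffic approaches 30%/70%."""
    digest = hashlib.md5(str(customer_no).encode()).hexdigest()
    bucket = int(digest, 16) % 100
    threshold = 0
    for upstream, percent in split:
        threshold += percent
        if bucket < threshold:
            return upstream
    return split[-1][0]

assert pick_upstream("C001") == pick_upstream("C001")  # same customer, same path
counts = {"beta1": 0, "beta2": 0}
for i in range(10000):
    counts[pick_upstream(i)] += 1
print(counts)  # roughly a 30/70 split over many distinct customers
```

Hashing rather than random choice is what makes the split sticky per customer, and it is also why the 30/70 ratio is only approached over a large number of distinct requests.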

As shown in FIG. 5, in one embodiment, the internal structure of the Nginx server 104 is as illustrated. The server includes a processor, a non-volatile storage medium, an internal memory, and a network interface connected through a system bus. The non-volatile storage medium of the Nginx server stores an operating system and computer-readable instructions that can be executed by the processor to implement a gated launch method suitable for the Nginx server. The processor provides computing and control capabilities and supports the operation of the entire server. The internal memory in the Nginx server 104 provides an environment for running the operating system and the computer-executable instructions in the non-volatile storage medium, and the network interface of the Nginx server 104 is used for network communication. Those skilled in the art will understand that the structure shown in FIG. 5 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the Nginx server 104 to which the solution is applied; a specific Nginx server 104 may include more or fewer components than shown, combine certain components, or have a different arrangement of components.

In one embodiment, when the computer-readable instructions in the Nginx server of FIG. 5 are executed by the processor, the processor performs the following steps: periodically checking, according to a preset rule, whether a traffic-splitting policy identifier exists in the Cache, and, if not, reading the current traffic-splitting policy identifier from a preset location in the Redis server; checking, according to the current traffic-splitting policy identifier, whether a policy parsing file and a user information parsing file corresponding to that identifier exist in memory; if no policy parsing file and user information parsing file corresponding to the current traffic-splitting policy identifier exist in memory, loading the string-form policy parsing file and user information parsing file in the Redis server into memory through Lua by loading strings according to the current traffic-splitting policy identifier; and performing the release according to the traffic-splitting policy parsed out by the policy parsing file.

In one embodiment, the step performed by the processor of, when no policy parsing file and user information parsing file corresponding to the current traffic-splitting policy identifier exist in memory, loading the string-form policy parsing file and user information parsing file in the Redis server into memory through Lua by loading strings according to the current traffic-splitting policy identifier includes: if no policy parsing file and user information parsing file corresponding to the current traffic-splitting policy identifier exist in memory, looking up, according to that identifier, the policy parsing file ID and the user information parsing file ID corresponding to it; and, according to the policy parsing file ID and the user information parsing file ID, loading the string-form policy parsing file and user information parsing file in the Redis server into memory through Lua by loading the strings and converting them into Table form for storage.

In one embodiment, the performing of the release by the processor according to the traffic-splitting policy parsed out by the policy parsing file includes: using the policy parsing file to parse the corresponding traffic-splitting policy, resolving the correspondence between at least one kind of parameter information in the policy and the forwarding paths, and saving the correspondence between the at least one kind of parameter information and the forwarding paths to the Cache; and performing the release according to the correspondence between the at least one kind of parameter information and the forwarding paths in the Cache.

In one embodiment, the processor further performs the following steps: receiving a request sent by a client and extracting the user information in the request; extracting at least one piece of parameter information from the user information according to the extraction manner preset in the user information parsing file, and using the extracted at least one kind of parameter information as the user feature; and determining the forwarding path corresponding to the user feature according to the correspondence between the at least one kind of parameter information and the forwarding paths stored in the Cache, and performing the corresponding forwarding according to the forwarding path.

In one embodiment, the performing of the release by the processor according to the traffic-splitting policy parsed out by the policy parsing file includes: using the policy parsing file to parse the corresponding traffic-splitting policy, resolving the corresponding percentage policy, and performing the corresponding release according to the percentage policy.

A person of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (ROM), or a random access memory (RAM), or the like.

The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.

The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be understood as limiting the scope of the patent. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (20)

1. A gated launch system, comprising: a Redis server configured to receive and store an uploaded traffic-splitting policy together with a policy parsing file and a user information parsing file corresponding to the splitting policy, wherein the policy parsing file and the user information parsing file are stored in the form of strings; and an Nginx server configured to check whether a traffic-splitting policy identifier exists in the Cache and, if no traffic-splitting policy identifier exists in the Cache, to read the current traffic-splitting policy identifier from a preset location in the Redis server; wherein the Nginx server is further configured to check, according to the current traffic-splitting policy identifier, whether a policy parsing file and a user information parsing file corresponding to the current traffic-splitting policy identifier exist in memory and, if not, to load the string-form policy parsing file and user information parsing file in the Redis server into memory through Lua by loading strings according to the current traffic-splitting policy identifier; and wherein the Nginx server is further configured to perform the release according to the traffic-splitting policy parsed out by the policy parsing file.

2. The system according to claim 1, wherein the Nginx server is further configured to look up, according to the current traffic-splitting policy identifier, the policy parsing file ID and the user information parsing file ID corresponding to that identifier, and, according to the policy parsing file ID and the user information parsing file ID, to load the string-form policy parsing file and user information parsing file in the Redis server into memory through Lua by loading the strings and convert them into Table form for storage.

3. The system according to claim 1, wherein the Nginx server is further configured to use the policy parsing file to parse the corresponding traffic-splitting policy, resolve the correspondence between at least one kind of parameter information in the splitting policy and the forwarding paths, and save the correspondence between the at least one kind of parameter information and the forwarding paths to the Cache.

4. The system according to claim 3, wherein the Nginx server is further configured to receive a request sent by a client, extract the user information in the request, extract at least one piece of parameter information from the user information according to the extraction manner preset in the user information parsing file, use the extracted at least one kind of parameter information as the user feature, determine the forwarding path corresponding to the user feature according to the correspondence between the at least one kind of parameter information and the forwarding paths stored in the Cache, and perform the corresponding forwarding according to the forwarding path.

5. The system according to claim 1, wherein the Nginx server is further configured to use the policy parsing file to parse the corresponding traffic-splitting policy, resolve the corresponding percentage policy, and perform the corresponding release according to the percentage policy.

6. A gated launch method, comprising: an Nginx server periodically checking, according to a preset rule, whether a traffic-splitting policy identifier exists in the Cache and, if no traffic-splitting policy identifier exists in the Cache, reading the current traffic-splitting policy identifier from a preset location in a Redis server, wherein the Redis server stores the current traffic-splitting policy identifier and the corresponding string-form policy parsing file and user information parsing file; checking, according to the current traffic-splitting policy identifier, whether a policy parsing file and a user information parsing file corresponding to the current traffic-splitting policy identifier exist in memory; if no policy parsing file and user information parsing file corresponding to the current traffic-splitting policy identifier exist in memory, loading the string-form policy parsing file and user information parsing file in the Redis server into memory through Lua by loading strings according to the current traffic-splitting policy identifier; and performing the release according to the traffic-splitting policy parsed out by the policy parsing file.

7. The method according to claim 6, wherein the loading, when no policy parsing file and user information parsing file corresponding to the current traffic-splitting policy identifier exist in memory, of the string-form policy parsing file and user information parsing file in the Redis server into memory through Lua by loading strings according to the current traffic-splitting policy identifier includes: if no policy parsing file and user information parsing file corresponding to the current traffic-splitting policy identifier exist in memory, looking up, according to that identifier, the policy parsing file ID and the user information parsing file ID corresponding to it; and, according to the policy parsing file ID and the user information parsing file ID, loading the string-form policy parsing file and user information parsing file in the Redis server into memory through Lua by loading the strings and converting them into Table form for storage.

8. The method according to claim 6, wherein the performing of the release according to the traffic-splitting policy parsed out by the policy parsing file includes: using the policy parsing file to parse the corresponding traffic-splitting policy, resolving the correspondence between at least one kind of parameter information in the splitting policy and the forwarding paths, and saving the correspondence between the at least one kind of parameter information and the forwarding paths to the Cache; and performing the release according to the correspondence between the at least one kind of parameter information and the forwarding paths in the Cache.
9. The method according to claim 8, further comprising:
receiving a request sent by a client, and extracting user information from the request;
extracting at least one piece of parameter information from the user information according to an extraction manner preset in the user information parsing file, and taking the extracted at least one piece of parameter information as a user feature; and
determining the forwarding path corresponding to the user feature according to the correspondence between the at least one kind of parameter information and forwarding paths stored in the Cache, and forwarding the request accordingly along that forwarding path.

10. The method according to claim 6, wherein performing the release according to the traffic-splitting policy parsed out by the policy parsing file comprises:
parsing the corresponding traffic-splitting policy using the policy parsing file, parsing out the corresponding percentage policy, and performing the corresponding release according to the percentage policy.

11. A server, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the following steps:
periodically checking, according to a preset rule, whether a traffic-splitting policy identifier exists in the Cache, and, if no traffic-splitting policy identifier exists in the Cache, reading the current traffic-splitting policy identifier from a preset location in the Redis server, wherein the Redis server stores the current traffic-splitting policy identifier and the corresponding policy parsing file and user parsing file in string form;
looking up, according to the current traffic-splitting policy identifier, whether a policy parsing file and a user information parsing file corresponding to the current traffic-splitting policy identifier exist in memory;
if the policy parsing file and the user information parsing file corresponding to the current traffic-splitting policy identifier do not exist in memory, loading the policy parsing file and the user information parsing file stored as strings in the Redis server into memory through lua by way of string loading, according to the current traffic-splitting policy identifier; and
performing the release according to the traffic-splitting policy parsed out by the policy parsing file.

12. The server according to claim 11, wherein the step, performed by the processor, of loading the policy parsing file and the user information parsing file stored as strings in the Redis server into memory through lua by way of string loading according to the current traffic-splitting policy identifier, if they do not exist in memory, comprises:
if the policy parsing file and the user information parsing file corresponding to the current traffic-splitting policy identifier do not exist in memory, looking up, according to the current traffic-splitting policy identifier, the policy parsing file ID and the user information parsing file ID corresponding to the current traffic-splitting policy identifier; and
according to the policy parsing file ID and the user information parsing file ID, loading the policy parsing file and the user information parsing file stored as strings in the Redis server into memory through lua by way of string loading, and converting them into Table form for storage.

13. The server according to claim 11, wherein the step, performed by the processor, of performing the release according to the traffic-splitting policy parsed out by the policy parsing file comprises:
parsing the corresponding traffic-splitting policy using the policy parsing file, parsing out the correspondence between at least one kind of parameter information in the traffic-splitting policy and forwarding paths, and saving the correspondence between the at least one kind of parameter information and the forwarding paths to the Cache; and
performing the release according to the correspondence between the at least one kind of parameter information in the Cache and the forwarding paths.
14. The server according to claim 13, wherein the processor is further configured to perform the following steps:
receiving a request sent by a client, and extracting user information from the request;
extracting at least one piece of parameter information from the user information according to an extraction manner preset in the user information parsing file, and taking the extracted at least one piece of parameter information as a user feature; and
determining the forwarding path corresponding to the user feature according to the correspondence between the at least one kind of parameter information and forwarding paths stored in the Cache, and forwarding the request accordingly along that forwarding path.

15. The server according to claim 11, wherein the step, performed by the processor, of performing the release according to the traffic-splitting policy parsed out by the policy parsing file comprises:
parsing the corresponding traffic-splitting policy using the policy parsing file, parsing out the corresponding percentage policy, and performing the corresponding release according to the percentage policy.

16. One or more computer-readable non-volatile storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
periodically checking, according to a preset rule, whether a traffic-splitting policy identifier exists in the Cache, and, if no traffic-splitting policy identifier exists in the Cache, reading the current traffic-splitting policy identifier from a preset location in the Redis server, wherein the Redis server stores the current traffic-splitting policy identifier and the corresponding policy parsing file and user parsing file in string form;
looking up, according to the current traffic-splitting policy identifier, whether a policy parsing file and a user information parsing file corresponding to the current traffic-splitting policy identifier exist in memory;
if the policy parsing file and the user information parsing file corresponding to the current traffic-splitting policy identifier do not exist in memory, loading the policy parsing file and the user information parsing file stored as strings in the Redis server into memory through lua by way of string loading, according to the current traffic-splitting policy identifier; and
performing the release according to the traffic-splitting policy parsed out by the policy parsing file.

17. The non-volatile storage media according to claim 16, wherein the step of loading the policy parsing file and the user information parsing file stored as strings in the Redis server into memory through lua by way of string loading according to the current traffic-splitting policy identifier, if they do not exist in memory, comprises:
if the policy parsing file and the user information parsing file corresponding to the current traffic-splitting policy identifier do not exist in memory, looking up, according to the current traffic-splitting policy identifier, the policy parsing file ID and the user information parsing file ID corresponding to the current traffic-splitting policy identifier; and
according to the policy parsing file ID and the user information parsing file ID, loading the policy parsing file and the user information parsing file stored as strings in the Redis server into memory through lua by way of string loading, and converting them into Table form for storage.

18. The non-volatile storage media according to claim 16, wherein the step of performing the release according to the traffic-splitting policy parsed out by the policy parsing file comprises:
parsing the corresponding traffic-splitting policy using the policy parsing file, parsing out the correspondence between at least one kind of parameter information in the traffic-splitting policy and forwarding paths, and saving the correspondence between the at least one kind of parameter information and the forwarding paths to the Cache; and
performing the release according to the correspondence between the at least one kind of parameter information in the Cache and the forwarding paths.

19. The non-volatile storage media according to claim 18, wherein the one or more processors are further configured to perform the following steps:
receiving a request sent by a client, and extracting user information from the request;
extracting at least one piece of parameter information from the user information according to an extraction manner preset in the user information parsing file, and taking the extracted at least one piece of parameter information as a user feature; and
determining the forwarding path corresponding to the user feature according to the correspondence between the at least one kind of parameter information and forwarding paths stored in the Cache, and forwarding the request accordingly along that forwarding path.

20. The non-volatile storage media according to claim 16, wherein the step of performing the release according to the traffic-splitting policy parsed out by the policy parsing file comprises:
parsing the corresponding traffic-splitting policy using the policy parsing file, parsing out the corresponding percentage policy, and performing the corresponding release according to the percentage policy.
PCT/CN2017/091179 2016-12-08 2017-06-30 Gated launch method, system, server, and storage medium Ceased WO2018103320A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611123903.4A CN106775859B (en) 2016-12-08 2016-12-08 Gray scale dissemination method and system
CN201611123903.4 2016-12-08

Publications (1)

Publication Number Publication Date
WO2018103320A1

Family

ID=58881671

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/091179 Ceased WO2018103320A1 (en) 2016-12-08 2017-06-30 Gated launch method, system, server, and storage medium

Country Status (2)

Country Link
CN (1) CN106775859B (en)
WO (1) WO2018103320A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106775859B (en) * 2016-12-08 2018-02-02 上海壹账通金融科技有限公司 Gray scale dissemination method and system
CN107451020B (en) * 2017-06-28 2020-12-15 北京五八信息技术有限公司 AB test system and test method
CN107632842B (en) * 2017-09-26 2020-06-30 携程旅游信息技术(上海)有限公司 Rule configuration and release method, system, equipment and storage medium
CN108418764A (en) * 2018-02-07 2018-08-17 深圳壹账通智能科技有限公司 Current-limiting method, device, computer equipment and storage medium
CN108427751A (en) * 2018-03-13 2018-08-21 深圳乐信软件技术有限公司 A kind of short chain connects jump method, device and electronic equipment
CN108965381B (en) * 2018-05-31 2023-03-21 康键信息技术(深圳)有限公司 Nginx-based load balancing implementation method and device, computer equipment and medium
CN108829459B (en) * 2018-05-31 2023-03-21 康键信息技术(深圳)有限公司 Nginx server-based configuration method and device, computer equipment and storage medium
CN110661835B (en) * 2018-06-29 2023-05-02 马上消费金融股份有限公司 Gray release method, processing method, node and system thereof and storage device
CN109189494B (en) * 2018-07-27 2022-01-21 创新先进技术有限公司 Configuration gray level publishing method, device and equipment and computer readable storage medium
CN109597643A * 2018-11-27 2019-04-09 平安科技(深圳)有限公司 Application gray release method and apparatus, electronic device and storage medium
CN109739757A (en) * 2018-12-28 2019-05-10 微梦创科网络科技(中国)有限公司 A kind of AB testing method and device
CN110032699A (en) * 2019-03-11 2019-07-19 北京智游网安科技有限公司 A kind of web data acquisition methods, intelligent terminal and storage medium
CN110647336A (en) * 2019-08-13 2020-01-03 平安普惠企业管理有限公司 Grayscale publishing method, apparatus, computer equipment and storage medium
CN111880831A (en) * 2020-07-27 2020-11-03 平安国际智慧城市科技股份有限公司 Method and device for synchronously updating server, computer equipment and storage medium
CN112632430A (en) * 2020-12-28 2021-04-09 四川新网银行股份有限公司 Method for realizing channel user to access gray level environment API service in H5 page
CN114579205A (en) * 2022-03-09 2022-06-03 平安普惠企业管理有限公司 Resource request processing method and device, electronic equipment and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103176790A (en) * 2011-12-26 2013-06-26 阿里巴巴集团控股有限公司 Application releasing method and application releasing system
CN105591825A * 2016-01-21 2016-05-18 烽火通信科技股份有限公司 Method for modifying configuration during home gateway upgrade
CN105955761A (en) * 2016-06-30 2016-09-21 乐视控股(北京)有限公司 Docker-based gray level issuing device and docker-based gray level issuing method
WO2016179958A1 (en) * 2015-05-12 2016-11-17 百度在线网络技术(北京)有限公司 Method, device and system for performing grey-releasing on mobile application
CN106775859A (en) * 2016-12-08 2017-05-31 上海亿账通互联网科技有限公司 Gray scale dissemination method and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103023939B (en) * 2011-09-26 2017-10-20 中兴通讯股份有限公司 The method and system of the REST interfaces of cloud caching is realized on Nginx
CN103095743A (en) * 2011-10-28 2013-05-08 阿里巴巴集团控股有限公司 Handling method and system of grey release
CN105975270A (en) * 2016-05-04 2016-09-28 北京思特奇信息技术股份有限公司 Gray scale distribution method and system based on HTTP request forwarding
CN106100927A (en) * 2016-06-20 2016-11-09 浪潮电子信息产业股份有限公司 Method for realizing SSR gray scale release

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109669719A * 2018-09-26 2019-04-23 深圳壹账通智能科技有限公司 Application gray release method, apparatus, device and readable storage medium
CN109766270A (en) * 2018-12-19 2019-05-17 北京万维之道信息技术有限公司 Project testing method and device, server, platform
CN110162382A (en) * 2019-04-09 2019-08-23 平安科技(深圳)有限公司 Gray scale dissemination method, device, computer equipment and storage medium based on container
CN110162382B (en) * 2019-04-09 2023-12-15 平安科技(深圳)有限公司 Container-based gray level publishing method, device, computer equipment and storage medium
CN112788103A (en) * 2020-12-25 2021-05-11 江苏省未来网络创新研究院 Method for solving same-application multi-instance web proxy access conflict based on nginx + lua
CN112788103B (en) * 2020-12-25 2022-08-02 江苏省未来网络创新研究院 Method for solving same-application multi-instance web proxy access conflict based on nginx + lua
CN113377770A (en) * 2021-06-07 2021-09-10 北京沃东天骏信息技术有限公司 Data processing method and device

Also Published As

Publication number Publication date
CN106775859A (en) 2017-05-31
CN106775859B (en) 2018-02-02

Similar Documents

Publication Publication Date Title
WO2018103320A1 (en) Gated launch method, system, server, and storage medium
EP3116178B1 (en) Packet processing device, packet processing method, and program
WO2018058959A1 (en) Sql auditing method and apparatus, server and storage device
CN109800207B (en) Log parsing method, apparatus, device, and computer-readable storage medium
WO2018103315A1 (en) Monitoring data processing method, apparatus, server and storage equipment
US10866894B2 (en) Controlling memory usage in a cache
WO2018227771A1 (en) Insurance policy-based region dividing method, system, server, and storage medium
WO2018014580A1 (en) Data interface test method and apparatus, and server and storage medium
WO2020186773A1 (en) Call request monitoring method, device, apparatus, and storage medium
WO2014189190A1 (en) System and method for retrieving information on basis of data member tagging
WO2010123168A1 (en) Database management method and system
WO2020077832A1 (en) Cloud desktop access method, apparatus and device, and storage medium
JPH1021134A (en) Gateway device, client computer and distributed file system connecting them
WO2020186791A1 (en) Data transmission method, apparatus, device, and storage medium
CN113051460A (en) Elasticissearch-based data retrieval method and system, electronic device and storage medium
WO2021107211A1 (en) In-memory database-based time-series data management system
WO2021012490A1 (en) Service relay switching method and apparatus, terminal device, and storage medium
CN110945496A (en) System and method for state object data store
WO2021012487A1 (en) Cross-system information synchronisation method, user device, storage medium, and apparatus
CN107276916A (en) Interchanger flow table management method based on agreement unaware retransmission technique
WO2015068929A1 (en) Operation method of node considering packet characteristic in content-centered network and node
WO2013176431A1 (en) System and method for allocating server to server and for efficient messaging
JPH10240768A (en) Method for retrieving data base system constituted of different program language
US8503442B2 (en) Transmission information transfer apparatus and method thereof
WO2018221998A1 (en) Method for automatically analyzing bottleneck in real time and an apparatus for performing the method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17879634; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17879634; Country of ref document: EP; Kind code of ref document: A1)