WO2018103320A1 - Grayscale publishing method, system, server and storage medium - Google Patents
- Publication number
- WO2018103320A1 (PCT/CN2017/091179)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- policy
- file
- parsing
- user information
- parsing file
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/60—Software deployment
- G06F8/65—Updates
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/70—Software maintenance or management
- G06F8/71—Version control; Configuration management
Definitions
- the present invention relates to the field of computer processing, and in particular, to a grayscale publishing method, system, server, and storage medium.
- Grayscale publishing is a release approach that provides a smooth transition between the old ("black") and new ("white") versions of a system.
- A/B testing (AB Test) is one form of grayscale publishing: some users continue to use version A while others begin to use version B. If users raise no objection to B, its scope is gradually expanded until all users have been moved to B. Grayscale release preserves the stability of the overall system, and problems can be found and corrected at the initial gray stage so that their impact stays limited.
- ABtest is based on the traffic distribution policies configured in the system. During the test phase, when a problem is found, a new traffic distribution policy is added to the Redis server (a cache server). Each traffic distribution policy corresponds to a policy parsing file and a user information parsing file.
- Traditionally, the policy parsing file and user information parsing file corresponding to each traffic distribution policy are stored in the memory of the Nginx server (a performance-oriented HTTP server). Adding a distribution policy therefore requires uploading the corresponding policy parsing file and user information parsing file to the Nginx server, and in this process the Nginx server must be reloaded or restarted. Restarting the Nginx server is not only time-consuming but also very cumbersome.
- a grayscale publishing method and system is provided.
- a grayscale publishing system comprising:
- the Redis server is configured to receive and store the uploaded traffic distribution policy together with the policy parsing file and the user information parsing file corresponding to that policy, where the policy parsing file and the user information parsing file are stored in the form of a string;
- the Nginx server is configured to check whether a traffic distribution policy identifier exists in the Cache; if it does not, the current traffic distribution policy identifier is read from a preset location in the Redis server;
- the Nginx server is further configured to search the memory, according to the current distribution policy identifier, for the policy parsing file and the user information parsing file corresponding to that identifier; if the memory does not contain files corresponding to the current distribution policy identifier, the string-form policy parsing file and user information parsing file in the Redis server are loaded into the memory through Lua, by way of loading the string (loadstring), according to the current distribution policy identifier; and
- the Nginx server is further configured to perform the publishing according to the distribution policy parsed from the policy parsing file.
- a grayscale publishing method comprising:
- the Nginx server periodically searches the Cache for a traffic distribution policy identifier according to a preset rule; if no identifier exists in the Cache, the current traffic distribution policy identifier is read from a preset location in the Redis server, where the Redis server stores the current distribution policy identifier and the corresponding policy parsing file and user information parsing file in the form of a string;
- if the policy parsing file and the user information parsing file corresponding to the current distribution policy identifier are not in the memory, the string-form policy parsing file and user information parsing file in the Redis server are loaded into the memory through Lua, by way of loading the string, according to the current distribution policy identifier; and
- the publishing is performed according to the distribution policy parsed from the policy parsing file.
- a server comprising a memory and a processor, the memory storing computer readable instructions, the computer readable instructions being executed by the processor such that the processor performs the following steps:
- the current traffic distribution policy identifier is read from a preset location in the Redis server, where the Redis server stores the current distribution policy identifier and the corresponding policy parsing file and user information parsing file in the form of a string;
- if the policy parsing file and the user information parsing file corresponding to the current distribution policy identifier are not in the memory, the string-form policy parsing file and user information parsing file in the Redis server are loaded into the memory through Lua, by way of loading the string, according to the current distribution policy identifier; and
- the publishing is performed according to the distribution policy parsed from the policy parsing file.
- One or more non-volatile readable storage media storing computer-executable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps:
- the current traffic distribution policy identifier is read from a preset location in the Redis server, where the Redis server stores the current distribution policy identifier and the corresponding policy parsing file and user information parsing file in the form of a string;
- if the policy parsing file and the user information parsing file corresponding to the current distribution policy identifier are not in the memory, the string-form policy parsing file and user information parsing file in the Redis server are loaded into the memory through Lua, by way of loading the string, according to the current distribution policy identifier; and
- the publishing is performed according to the distribution policy parsed from the policy parsing file.
- FIG. 1 is an architectural diagram of a grayscale publishing system in an embodiment
- FIG. 2 is a flow chart of a grayscale publishing method in an embodiment
- FIG. 3 is a flowchart of a method for loading a policy parsing file and a user information parsing file according to a current shunting policy according to an embodiment
- Figure 5 is a diagram showing the internal structure of an Nginx server in one embodiment.
- a grayscale publishing system comprising: a Redis server 102 and an Nginx server 104;
- the Redis server 102 is configured to receive and store the uploaded traffic distribution policy together with the policy parsing file and the user information parsing file corresponding to that policy, where the policy parsing file and the user information parsing file are stored in the form of a string.
- the ABtest (A/B test) is based on the distribution policies configured in the system. Each traffic distribution policy requires a policy parsing file and a user information parsing file.
- the policy parsing file is used to parse the traffic distribution policy, resolving the correspondence between user features and forwarding paths in the distribution policy.
- the user information parsing file is used to parse the obtained user information and extract the user feature from it; the specific forwarding path corresponding to the user feature is then determined, and the request is forwarded along the determined path.
- the traffic distribution policy, together with its corresponding policy parsing file and user information parsing file, is uploaded directly to the Redis server through the background management page; the policy parsing file and user information parsing file corresponding to the distribution policy are first converted into string form and then uploaded. That is, the policy parsing file and the user information parsing file in the Redis server are stored in the form of a string.
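The upload step above can be sketched as follows. This is a minimal Python illustration, not the patent's implementation: a plain dict stands in for the Redis server, the key names are invented for the example, and JSON is used as one possible string serialization.

```python
import json

# A plain dict stands in for the Redis server; in the real system the strings
# would be written with Redis SET commands. Key names are illustrative.
fake_redis = {}

def upload_policy(policy_id, policy_table, user_info_table):
    # Convert both parsing "files" into string form before storing, mirroring
    # how the document stores them in the Redis server as strings.
    fake_redis["policy_file:" + policy_id] = json.dumps(policy_table)
    fake_redis["userinfo_file:" + policy_id] = json.dumps(user_info_table)

upload_policy("p1", {"SH": "beta1", "BJ": "beta2"}, {"extract": "city"})
```

Because only strings are stored, the Nginx side can later fetch and deserialize them without the files ever living on the Nginx server's disk.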
- the Nginx server 104 is configured to search for a traffic distribution policy identifier in the Cache. If not, the current traffic distribution policy identifier is read from a preset location in the Redis server.
- the traffic distribution policy identifier is used to uniquely identify a traffic distribution policy
- the current traffic distribution policy identifier may be an identifier corresponding to the currently used traffic distribution policy.
- the current traffic distribution policy identifier is stored in the Cache (cache memory) of the Nginx server. When the distribution policy needs to be replaced, the content in the Cache is first cleared; that is, if a new distribution policy is to be used, the distribution policy identifier in the Cache must be cleared. Consequently, when a new distribution policy is used for the first time, there is no distribution policy identifier in the Cache.
- the Cache belongs to internal memory, and its content can be cleared through a dedicated clearing mechanism: an interface for clearing the Cache contents is provided, and the content is cleared through that interface. A time limit can also be set on the Cache contents; for example, if a distribution policy identifier has not been used for more than one minute, it is automatically cleared from the Cache. No restriction is placed on how the Cache contents are emptied.
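Both clearing mechanisms described above (an explicit clearing interface plus time-based expiry) can be sketched with a small TTL cache. This is an illustrative Python sketch; the patent's Cache is Nginx-internal and the class and method names here are invented.

```python
import time

class TtlCache:
    """Minimal cache whose entries expire after ttl seconds (illustrative)."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._store = {}                      # key -> (value, stored_at)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]              # expired: cleared automatically
            return None
        return value

    def clear(self):
        # The explicit clearing interface the text describes.
        self._store.clear()
```

A lookup that returns `None` here corresponds to the "identifier not in Cache" branch, which triggers a read from the Redis server.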
- the Nginx server periodically searches for a traffic distribution policy identifier according to a preset rule. If none is found, the current distribution policy has been replaced, and the new current distribution policy identifier has been set at the preset location in the Redis server.
- the Nginx server needs to read the current traffic distribution policy identifier from the preset location in the Redis server.
- the Nginx server stores the location (i.e., the address) of the current traffic distribution policy identifier, reads the current identifier from the Redis server according to that storage location, and saves the identifier it reads into the Cache, which facilitates subsequent split forwarding.
- the Nginx server 104 is further configured to search the memory, according to the current traffic distribution policy identifier, for the policy parsing file and the user information parsing file corresponding to that identifier; if they are not found, the string-form policy parsing file and user information parsing file in the Redis server are loaded into the memory through Lua according to the current distribution policy identifier.
- after obtaining the current traffic distribution policy identifier, the Nginx server first searches the memory for the policy parsing file and the user information parsing file corresponding to that identifier; if they are not found, the distribution policy corresponding to the current identifier is a new one, and
- the corresponding policy parsing file and user information parsing file exist in the form of a string on the Redis server.
- the Nginx server therefore needs to load the string-form policy parsing file and user information parsing file from the Redis server into the memory by loading the string (loadstring).
- Lua is a dynamic scripting language that can be embedded into Nginx server configuration files.
- the Nginx server first loads the string-form policy parsing file and user information parsing file into Lua, converts them from string form into Lua's Table form, and then stores them in the memory.
- the Table form is a form that the Nginx server can call directly.
- because the Nginx server can load the policy parsing file and the user information parsing file from the Redis server into the memory by loading the string, adding a new file only requires converting it into a string and uploading it to the Redis server through the background management page, and then setting the new traffic distribution policy as the current one. The distribution policy is thus updated without restarting the Nginx server.
- the Nginx server 104 is further configured to perform the publishing according to the distribution policy parsed from the policy parsing file.
- the corresponding grayscale publishing may be performed according to the shunting policy parsed by the policy parsing file.
- the Nginx server parses the corresponding traffic distribution policy using the policy parsing file and can obtain the specific correspondences within the policy, that is, the correspondence between at least one kind of parameter information and the forwarding path (upstream). For example, if the distribution policy splits by city information, the policy parsing file resolves the correspondence between city information and forwarding paths in the policy, and the user feature that the user information parsing file extracts from the user information must likewise be city information. The forwarding path corresponding to a user feature can therefore be determined from the parsed correspondence between city information and forwarding paths.
- the parsed city information and forwarding path are: Shanghai (SH) corresponds to forwarding path 1, Beijing (BJ) corresponds to forwarding path 2, and the remaining cities correspond to forwarding path 3. If the extracted user feature is Shanghai, then the forwarding path corresponding to the user feature is 1.
- the parsed correspondence between the at least one kind of parameter information and the forwarding path is saved to the Cache, so that when the next user request arrives, the parsed distribution policy can be used directly for split forwarding.
- when a new traffic distribution policy is needed, the policy parsing file and the user information parsing file corresponding to the new policy are converted into strings and uploaded to the Redis server.
- when the Nginx server needs to use the newly added distribution policy, it searches the Redis server for the corresponding new policy parsing file and user information parsing file, loads the string-form files into the memory through Lua by loading the string, and then performs the publishing according to the distribution policy parsed from the policy parsing file.
- only the policy parsing file and the user information parsing file corresponding to a distribution policy need to be converted into strings and stored in the Redis server.
- the Nginx server can then dynamically load new policy parsing files and user information parsing files from the Redis server into the memory, with no need to reload or restart the Nginx server. This is easy to operate and saves time, thereby improving forwarding efficiency.
- the Nginx server is further configured to look up, according to the current distribution policy identifier, the policy parsing file ID and the user information parsing file ID corresponding to that identifier, and, according to these file IDs, load the string-form policy parsing file and user information parsing file in the Redis server into the memory and convert them into Table form for storage.
- after obtaining the current traffic distribution policy identifier, the Nginx server searches the Redis server for the policy parsing file ID and the user information parsing file ID corresponding to that identifier.
- the ID is used to uniquely identify a file or content.
- the mapping between distribution policy identifiers and the policy parsing file ID and user information parsing file ID is pre-stored in the Redis server, so the policy parsing file ID and user information parsing file ID corresponding to the current distribution policy identifier can be found in the Redis server according to that identifier.
- the Nginx server can find the corresponding policy resolution file and user information analysis file according to the policy resolution file ID and the user information resolution file ID in the Redis server.
- because the policy parsing file and the user information parsing file exist in the Redis server in the form of a string (String), and string form cannot be called directly, the string-form files must be loaded into the memory through Lua and converted from String form into Table form for storage. The Nginx server can then directly call the policy parsing file and the user information parsing file to parse a user request and determine its corresponding forwarding path, and the files can also be called directly from the memory the next time they are needed.
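The two-step lookup (policy identifier → file IDs → string-form files) can be sketched as follows. All key names, IDs, and the dict-as-Redis stand-in are illustrative, not from the patent:

```python
# Pre-stored mapping: distribution policy id -> parsing file IDs.
file_ids = {"p1": {"policy_file_id": "f101", "userinfo_file_id": "f102"}}

# The string-form files themselves, keyed by file ID (a dict stands in
# for the Redis server here).
file_store = {"f101": "{'SH': 'beta1'}", "f102": "{'extract': 'city'}"}

def fetch_file_strings(policy_id):
    # Step 1: resolve the two file IDs for this policy identifier.
    ids = file_ids[policy_id]
    # Step 2: fetch both string-form files by their IDs.
    return file_store[ids["policy_file_id"]], file_store[ids["userinfo_file_id"]]
```

The strings returned here are what the Lua `loadstring` step subsequently converts into Table form.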
- the Nginx server is further configured to parse the corresponding distribution policy using the policy parsing file, resolve the correspondence between at least one kind of parameter information and the forwarding path in the policy, and save that correspondence to the Cache.
- specifically, the corresponding distribution policy is parsed by calling the policy parsing file, the correspondence between the at least one kind of parameter information and the forwarding path is resolved, and the parsed correspondence is saved to the Cache, so that when the next user request arrives, the parsed distribution policy can be used directly for split forwarding.
- the parameter information comes in several types, distinguished by kind and by the position from which it is extracted, for example:
- UID: user identification
- IP: IP address
- URL: URL information
- UID-based splitting may be divided, according to the extraction position, into UID-suffix splitting, specified-special-UID splitting, and UID-segment splitting.
- splitting strategies are divided into single-level and multi-level splitting. Single-level splitting is relatively simple: only one kind of parameter information needs to be consulted to find the corresponding forwarding path. Multi-level splitting consults several kinds of parameter information, and is further divided into two types: union and intersection.
- for example, the first-level split is by city, with two cases, Shanghai (SH) and Beijing (BJ); the corresponding policy is that the upstream (forwarding path) of SH is beta1 and the upstream of BJ is beta2.
- the second-level split is by UID set: the upstream of UIDs 123, 124, and 125 is beta1, and the upstream of UIDs 567, 568, and 569 is beta2.
- the third-level split is by IP range: IPs whose long value lies in the range 1000001 to 2000000 have upstream beta1, and IPs whose long value lies in the range 2000001 to 3000000 have upstream beta2.
- if the union form is used, the city information is first extracted from the user information; if it matches, the request is forwarded to beta1 or beta2 accordingly, and the UID and IP information are not considered. If no city information is extracted, or the city is neither SH nor BJ, the UID is examined next; if the UID is 567, for example, the request is forwarded to beta2, and so on. As soon as one forwarding condition is met, the remaining information need not be examined.
- if the intersection form is used, all three conditions must be satisfied at the same time. That is, if the acquired city information is SH, whose corresponding upstream is beta1, the UID-set information and IP range must still be obtained, and the acquired UID set and IP range must also correspond to beta1 before the user request is forwarded to beta1. If the corresponding information is not obtained, or the upstream corresponding to the obtained information does not match, the request is not forwarded: for example, if the acquired city information is SH and the acquired UID is 567, the two correspond to beta1 and beta2 respectively, so the three conditions are not met simultaneously and split forwarding cannot be performed. In practice, the intersection form is rarely used alone; intersection and union are used in combination.
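The union and intersection rules above can be sketched in Python. The three levels and their values come from the example in the text; the function names are illustrative, and the actual system would evaluate this logic in Lua inside the Nginx server:

```python
def city_upstream(city):
    # First level: split by city.
    return {"SH": "beta1", "BJ": "beta2"}.get(city)

def uid_upstream(uid):
    # Second level: split by UID set.
    if uid in {123, 124, 125}:
        return "beta1"
    if uid in {567, 568, 569}:
        return "beta2"
    return None

def ip_upstream(ip_long):
    # Third level: split by the IP address's long value range.
    if 1000001 <= ip_long <= 2000000:
        return "beta1"
    if 2000001 <= ip_long <= 3000000:
        return "beta2"
    return None

def route_union(city, uid, ip_long):
    # Union: take the first level that yields an upstream and stop.
    for upstream in (city_upstream(city), uid_upstream(uid), ip_upstream(ip_long)):
        if upstream is not None:
            return upstream
    return None

def route_intersection(city, uid, ip_long):
    # Intersection: all three levels must agree on the same upstream.
    results = {city_upstream(city), uid_upstream(uid), ip_upstream(ip_long)}
    return results.pop() if len(results) == 1 and None not in results else None
```

With these definitions, `route_union("SH", 567, 0)` forwards to beta1 (the city level matches first), while `route_intersection("SH", 567, 1500000)` forwards nowhere, matching the SH-plus-UID-567 counterexample in the text.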
- the Nginx server is further configured to receive a request sent by a client, extract the user information in the request, extract at least one kind of parameter information from the user information according to the preset extraction manner in the user information parsing file, take the extracted parameter information as the user feature, determine the forwarding path corresponding to the user feature according to the correspondence between the at least one kind of parameter information and the forwarding path stored in the Cache, and forward the request accordingly.
- the Nginx server receives the request sent by the client, extracts the user information in the request, parses the user information according to the user information parsing file corresponding to the current distribution policy identifier previously stored in the memory, extracts at least one kind of parameter information from the user information, and takes the extracted parameter information as the user feature.
- which user feature is extracted depends on the corresponding splitting strategy. If the distribution policy splits according to city information (City), the user information parsing file corresponding to that policy naturally extracts the city information in the user information, and the extracted city information is the user feature.
- the splitting strategy may be based on one kind of parameter information or on several kinds.
- when splitting is performed according to several kinds of parameter information, several kinds must be extracted from the user information; for example, city information and UID information may be extracted at the same time, and if neither is extracted, IP information may need to be extracted as well. Exactly which information is extracted as the user feature is determined by the corresponding splitting strategy. After the user feature is extracted, the forwarding path corresponding to it is determined according to the correspondence between the at least one kind of parameter information and the forwarding path stored in the Cache, and the request is then forwarded along that path.
- the Nginx server is further configured to parse the corresponding distribution policy using the policy parsing file, resolve the corresponding percentage policy, and perform the publishing according to the percentage policy.
- here the distribution policy forwards requests according to a percentage policy.
- the Nginx server parses the corresponding distribution policy using the policy parsing file and resolves the specific percentage policy, that is, what percentage of requests is forwarded to path A and what percentage is forwarded to path B.
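The patent does not specify how the percentage split is computed. A common approach, sketched here under that assumption, is to hash a stable user identifier into 100 buckets so that each user is routed deterministically; the function name and path labels are illustrative:

```python
import hashlib

def percent_route(uid, percent_to_a=10):
    """Map a UID deterministically into buckets 0-99 and send the configured
    percentage of users to path A, the rest to path B (labels illustrative)."""
    bucket = int(hashlib.md5(str(uid).encode()).hexdigest(), 16) % 100
    return "A" if bucket < percent_to_a else "B"
```

Hashing rather than random choice keeps each user on the same path across requests, which matters when the gray version holds session state.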
- a grayscale publishing method comprising:
- Step 202: The Nginx server periodically searches the Cache for a traffic distribution policy identifier according to the preset rule. If one is found, the process proceeds directly to step 206; if not, the process proceeds to step 204.
- Step 204: Read the current distribution policy identifier from a preset location in the Redis server.
- the traffic distribution policy identifier is used to uniquely identify a traffic distribution policy
- the current traffic distribution policy identifier may be an identifier corresponding to the currently used traffic distribution policy.
- the Redis server stores the current distribution policy identifier and the corresponding policy parsing file and user information parsing file, which exist in the form of a string.
- the current traffic distribution policy identifier is stored in the Cache (cache memory) of the Nginx server. When the distribution policy needs to be replaced, the content in the Cache is first cleared; that is, if a new distribution policy is to be used, the distribution policy identifier in the Cache must be cleared.
- the Nginx server periodically searches for a traffic distribution policy identifier according to a preset rule. If none is found, the current distribution policy has been replaced, and the new current distribution policy identifier has been set at the preset location in the Redis server.
- the Nginx server needs to read the current traffic distribution policy identifier from the preset location in the Redis server.
- the Nginx server stores the location (i.e., the address) of the current traffic distribution policy identifier, reads the current identifier from the Redis server according to that storage location, and saves the identifier it reads into the Cache, which facilitates subsequent split forwarding.
- Step 206: Search the memory for the policy parsing file and the user information parsing file corresponding to the current distribution policy identifier. If they are found, go directly to step 210; if not, go to step 208.
- after obtaining the current traffic distribution policy identifier, the Nginx server first searches the memory for the policy parsing file and the user information parsing file corresponding to that identifier; if they are not found, the distribution policy corresponding to the current identifier is a new one.
- the corresponding policy resolution file and user information analysis file exist in the form of a string on the Redis server.
- the traffic distribution policy, together with its corresponding policy parsing file and user information parsing file, is uploaded directly to the Redis server through the background management page; the files are first converted into string form and then uploaded. In the Redis server, the policy parsing file and the user information parsing file are stored in the form of strings.
- Step 208: The string-form policy parsing file and user information parsing file in the Redis server are loaded into the memory through Lua according to the current distribution policy identifier.
- the Nginx server needs to load the string-form policy parsing file and user information parsing file from the Redis server into the memory by loading the string (loadstring).
- Lua is a dynamic scripting language that can be embedded into Nginx server configuration files.
- the Nginx server first loads the string-form policy parsing file and user information parsing file into Lua, converts them from string form into Lua's Table form, and then stores them in the memory, where the Table form is directly usable by the Nginx server.
- because the policy parsing file and the user information parsing file are initially stored in the Redis server in the form of a string, rather than directly in the Nginx server, adding a new file only requires converting it into a string, uploading it to the Redis server through the background management page, and then setting the new traffic distribution policy as the current one, so that the distribution policy is updated.
- Step 210: Publish according to the distribution policy parsed from the policy parsing file.
- the corresponding grayscale publishing may be performed according to the shunting policy parsed by the policy parsing file.
- the Nginx server parses the corresponding traffic distribution policy using the policy parsing file and can obtain the specific correspondences within the policy, that is, the correspondence between at least one kind of parameter information and the forwarding path (upstream). For example, if the distribution policy splits by city information, the policy parsing file resolves the correspondence between city information and forwarding paths in the policy, and the user feature that the user information parsing file extracts from the user information must likewise be city information. The forwarding path corresponding to a user feature can therefore be determined from the parsed correspondence between city information and forwarding paths.
- the parsed city information and forwarding path are: Shanghai (SH) corresponds to forwarding path 1, Beijing (BJ) corresponds to forwarding path 2, and the remaining cities correspond to forwarding path 3. If the extracted user feature is Shanghai, then the forwarding path corresponding to the user feature is 1.
- the parsed correspondence between the at least one kind of parameter information and the forwarding path is saved to the Cache, so that when the next user request arrives, the parsed distribution policy can be used directly for split forwarding.
- when a new traffic distribution policy is needed, the policy parsing file and the user information parsing file corresponding to the new policy are converted into strings and uploaded to the Redis server.
- when the Nginx server needs to use the newly added distribution policy, it searches the Redis server for the corresponding new policy parsing file and user information parsing file, loads the string-form files into the memory through Lua by loading the string, and then performs the publishing according to the distribution policy parsed from the policy parsing file.
- the policy resolution file and the user information parsing file corresponding to the traffic splitting policy need to be converted into a string and stored in the Redis server.
- the Nginx server can dynamically load new policy resolution files and user information parsing files from the Redis server into memory. It does not require reloading and restarting the Nginx server. It is easy to operate and saves time, thus improving the forwarding efficiency.
- Step 208 includes:
- Step 208A: Search for the policy parsing file ID and the user information parsing file ID corresponding to the current traffic splitting policy identifier.
- after obtaining the current traffic splitting policy identifier, the Nginx server searches the Redis server for the policy parsing file ID and the user information parsing file ID that correspond to it.
- an ID uniquely identifies a file or a piece of content.
- the mapping between traffic splitting policy identifiers and their policy parsing file IDs and user information parsing file IDs is pre-stored in the Redis server, so the IDs corresponding to the current traffic splitting policy identifier can be found there directly.
- Step 208B: According to the policy parsing file ID and the user information parsing file ID, load the string-form policy parsing file and user information parsing file from the Redis server into memory through lua by way of string loading, converting them from string (String) form into Table form for storage.
- the Nginx server can locate the corresponding policy parsing file and user information parsing file in the Redis server by their IDs. Because both files exist in the Redis server as strings, and strings cannot be called directly, the string-form files are loaded into memory through lua and converted from String form into Table form for storage. The Nginx server can then call the policy parsing file and the user information parsing file directly to parse user requests and determine the corresponding forwarding paths, and can also call them directly from memory the next time they are needed.
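The load-once, string-to-table conversion of step 208B can be sketched like this. The patent loads lua source via string loading; this Python stand-in uses a JSON string for the same idea, and the `redis` and `memory` dicts plus the key names are illustrative assumptions.

```python
import json

# A dict stands in for the Redis server; the policy parsing file is stored
# as a string (here JSON; the patent stores lua loaded by string loading).
redis = {
    "policy_file:0": json.dumps({"type": "city",
                                 "map": {"SH": "beta1", "BJ": "beta2"}})
}

memory = {}  # in-process memory: file ID -> parsed table (dict)

def load_policy_file(file_id):
    """Load a string-form policy file into memory as a table, only once."""
    if file_id not in memory:
        # Convert the String form into Table (dict) form for storage,
        # so later calls can use it directly without re-parsing.
        memory[file_id] = json.loads(redis[file_id])
    return memory[file_id]
```

After the first call, the file is served from `memory`, which mirrors why the Nginx server need not reload or restart to pick up a new policy.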
- the step of publishing according to the splitting policy parsed from the policy parsing file includes: calling the policy parsing file to parse the corresponding splitting policy, obtaining the correspondence between at least one piece of parameter information and a forwarding path in the splitting policy; storing that correspondence in the Cache; and publishing according to the correspondence between the at least one piece of parameter information and the forwarding path in the Cache.
- the parsed correspondence is saved to the Cache so that when the next user request arrives, the already-parsed splitting policy can be used directly for splitting and forwarding.
- parameter information comes in various types, distinguished by the kind of parameter and the position from which it is extracted. For example, UID information may be split by UID suffix, by a specified set of special UIDs, or by UID user segment, depending on the extraction location.
- splitting policies divide into single-level and multi-level splitting. Single-level splitting is relatively simple: only one piece of parameter information is consulted to find the corresponding forwarding path. Multi-level splitting consults several pieces of parameter information, and comes in two forms: union and intersection.
- for example, the first level splits by city into two cases, Shanghai (SH) and Beijing (BJ): the upstream (forwarding path) for SH is beta1 and the upstream for BJ is beta2. The second level splits by UID set: the upstream for UIDs 123, 124 and 125 is beta1, and the upstream for UIDs 567, 568 and 569 is beta2. The third level splits by IP range: the upstream for IPs whose long value lies in 1000001~2000000 is beta1, and the upstream for long values in 2000001~3000000 is beta2.
- if the union form is used, the city information is extracted from the user information first; if it matches, the request is forwarded to beta1 or beta2 and the UID and IP information are not considered. If no city information is extracted, or the city is neither SH nor BJ, the UID is examined next: if the UID is 567, for instance, the request is forwarded to beta2, and so on. As soon as one forwarding condition is met, the remaining information need not be examined.
- if the intersection form is used, all three conditions must be met simultaneously: if the acquired city information is SH, whose corresponding upstream is beta1, then the UID set information and the IP range information must also be acquired and must both correspond to beta1 before the user request is forwarded to beta1. If some information cannot be acquired, or the acquired pieces of information correspond to different upstreams — for example the acquired city information is SH but the acquired UID set information is 567, which correspond to beta1 and beta2 respectively — the three conditions are not met simultaneously and the split forwarding cannot be performed. In practice, however, the intersection form is rarely used alone; intersection and union are used in combination.
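The union and intersection forms described above can be sketched as follows. The three matcher functions encode the example's city, UID-set and IP-range levels; all function and field names are assumptions for illustration, not identifiers from the patent.

```python
# Each level maps an extracted user feature to an upstream, or None on no match.

def match_city(user):    # level 1: city split
    return {"SH": "beta1", "BJ": "beta2"}.get(user.get("city"))

def match_uid(user):     # level 2: UID set split
    uid = user.get("uid")
    if uid in (123, 124, 125):
        return "beta1"
    if uid in (567, 568, 569):
        return "beta2"
    return None

def match_ip(user):      # level 3: IP (long value) range split
    ip = user.get("ip_long")
    if ip is None:
        return None
    if 1000001 <= ip <= 2000000:
        return "beta1"
    if 2000001 <= ip <= 3000000:
        return "beta2"
    return None

LEVELS = [match_city, match_uid, match_ip]

def route_union(user):
    """Union: the first level that matches decides; later levels are skipped."""
    for match in LEVELS:
        upstream = match(user)
        if upstream is not None:
            return upstream
    return None

def route_intersection(user):
    """Intersection: every level must match and agree on the same upstream."""
    upstreams = {match(user) for match in LEVELS}
    if len(upstreams) == 1 and None not in upstreams:
        return upstreams.pop()
    return None  # conditions not met simultaneously: no split forwarding
```

For instance, under union a user with city SH goes to beta1 regardless of UID or IP, while under intersection the same user is only forwarded if the UID and IP levels also resolve to beta1.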
- the grayscale publishing method further includes:
- Step 402: The Nginx server periodically searches the Cache for a traffic splitting policy identifier according to a preset rule. If one is present, the process proceeds directly to step 406; if not, it proceeds to step 404.
- Step 404: Read the current traffic splitting policy identifier from a preset location in the Redis server.
- Step 406: Search the memory, according to the current traffic splitting policy identifier, for the corresponding policy parsing file and user information parsing file. If they are present, the process proceeds directly to step 410; if not, it proceeds to step 408.
- Step 408: Load the string-form policy parsing file and user information parsing file from the Redis server into memory through lua according to the current traffic splitting policy identifier.
- Step 410: Call the policy parsing file to parse the corresponding traffic splitting policy, obtain the correspondence between at least one piece of parameter information and a forwarding path in the policy, and save that correspondence to the Cache.
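The three-tier lookup in steps 402-410 (Cache first, then a preset Redis location, loading files into memory on demand) can be sketched in one function. The key names and `refresh_policy` are illustrative assumptions; JSON again stands in for the string-loaded lua file.

```python
import json

# Stand-ins for the Redis server, Nginx's Cache, and process memory.
redis = {"current_policy_id": "policy-7",
         "files:policy-7": '{"SH": "beta1", "BJ": "beta2"}'}
cache, memory = {}, {}

def refresh_policy():
    # Steps 402/404: find the current policy identifier, Cache first,
    # falling back to the preset location in the Redis server.
    policy_id = cache.get("policy_id") or redis["current_policy_id"]
    cache["policy_id"] = policy_id
    # Steps 406/408: ensure the string-form parsing file is loaded into memory.
    if policy_id not in memory:
        memory[policy_id] = json.loads(redis["files:" + policy_id])
    # Step 410: save the parsed correspondence to the Cache for forwarding.
    cache["mapping"] = memory[policy_id]
    return cache["mapping"]
```

Run periodically, this keeps the Cache and memory warm so that request-time forwarding never has to touch Redis.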
- Step 412: Receive a request sent by the client and extract the user information from the request.
- the Nginx server receives the request sent by the client and extracts the user information carried in it. The user information includes at least one of city information, IP address information, UID information, remote address information, and the like.
- Step 414: Extract at least one piece of parameter information from the user information according to the preset extraction manner in the user information parsing file, and use the extracted parameter information as the user feature.
- the Nginx server parses the user information with the current user information parsing file in memory, extracts at least one piece of parameter information according to the preset extraction manner, and treats the extracted parameter information as the user feature.
- which user feature is extracted depends on the corresponding splitting policy. If the traffic splitting policy splits according to city information (City), the user information parsing file for that policy naturally extracts the city information from the user information, and the extracted city information is the user feature.
- a traffic splitting policy may be based on one piece of parameter information or on several. When splitting is performed according to several pieces of parameter information, several parameters must be extracted from the user information — for example, city information and UID information may be extracted at the same time, and if they cannot be extracted, the IP information may need to be extracted further. Exactly which information is extracted as the user feature is determined by the corresponding splitting policy.
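Policy-driven feature extraction, as just described, amounts to selecting the fields named by the user information parsing file. A minimal sketch (the function name and field names are assumptions, not from the patent):

```python
# The set of fields to extract is dictated by the user information parsing
# file of the active splitting policy; absent fields are simply skipped.

def extract_features(user_info, fields):
    """Extract the parameters named by the parsing file as the user feature."""
    return {f: user_info[f] for f in fields if f in user_info}
```

A city-based policy would pass `["city"]`, while a multi-parameter policy would pass something like `["city", "uid", "ip"]` and fall through to later fields when earlier ones are missing.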
- Step 416: Determine the forwarding path corresponding to the user feature according to the correspondence, stored in the Cache, between the at least one piece of parameter information and the forwarding paths, and forward the request accordingly.
- take multi-level splitting that combines intersection and union as an example. The first level contains two policies that stand in an intersection relationship: the policy with ID 0 splits by city — the upstream for Shanghai (SH) is beta1 and the upstream for Beijing (BJ) is beta2; the policy with ID 1 splits by UID set — the upstream for UIDs 123, 124 and 125 is beta1, and the upstream for UIDs 567, 568 and 569 is beta2. The second level stands in a union relationship with the first and contains a single policy: the policy with ID 2 splits by IP range — the upstream for IPs whose long value lies in 1000001~2000000 is beta1, and the upstream for long values in 2000001~3000000 is beta2.
- if the city information in the user information is SH and the UID is 123, the first-level intersection is satisfied and the request is forwarded to beta1. Otherwise the second-level policy is evaluated: if the long value of the IP is 1200000, the request is forwarded to beta1.
- the step of publishing according to the splitting policy parsed from the policy parsing file may also include: calling the policy parsing file to parse the corresponding traffic splitting policy, obtaining the corresponding percentage policy, and publishing according to that percentage policy.
- here the traffic splitting policy forwards according to a percentage policy: the Nginx server parses the policy parsing file to obtain the specific percentages, that is, what percentage of requests is forwarded to path A and what percentage is forwarded to path B.
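One common way to realise a percentage policy is to hash a stable user feature into a fixed number of buckets; the patent does not specify this mechanism, so the following is an assumed sketch, with `pick_upstream` and the path names as illustrative inventions.

```python
import zlib

# Hypothetical sketch: a stable hash of the user feature maps each request
# into a 0-99 bucket, and bucket ranges realise the configured percentages
# (e.g. 20% of users to path A, the remaining 80% to path B).

def pick_upstream(user_feature, percent_a, path_a="pathA", path_b="pathB"):
    """Deterministically route percent_a% of users to path_a."""
    bucket = zlib.crc32(user_feature.encode("utf-8")) % 100
    return path_a if bucket < percent_a else path_b
```

Hashing (rather than random choice) keeps each user pinned to the same path across requests, which matters when gradually expanding the percentage during a grayscale release.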
- the internal structure of the Nginx server 104 is as shown in FIG. 5: it includes a processor, a non-volatile storage medium, an internal memory, and a network interface connected through a system bus. The non-volatile storage medium of the Nginx server stores an operating system and computer readable instructions that the processor executes to implement a grayscale publishing method suitable for the Nginx server. The processor provides computing and control capabilities and supports the operation of the entire server. The internal memory of the Nginx server 104 provides an environment for running the operating system and the computer readable instructions in the non-volatile storage medium, and the network interface is used for network communication. It will be understood by those skilled in the art that the structure shown in FIG. 5 is only a block diagram of the partial structure related to the solution of the present application and does not limit the Nginx server 104 to which the solution is applied; a specific Nginx server 104 may include more or fewer components than shown, combine some components, or arrange the components differently.
- when the computer readable instructions in the Nginx server of FIG. 5 are executed by the processor, the processor is configured to: periodically search the Cache for a traffic splitting policy identifier according to a preset rule, and if none is present, read the current traffic splitting policy identifier from a preset location in the Redis server; search, according to the current traffic splitting policy identifier, for the corresponding policy parsing file and user information parsing file; if they do not exist in memory, load the string-form policy parsing file and user information parsing file from the Redis server into memory through lua by way of string loading according to the current traffic splitting policy identifier; and publish according to the splitting policy parsed from the policy parsing file.
- if the policy parsing file and the user information parsing file corresponding to the current traffic splitting policy identifier do not exist in memory, the step of loading the string-form files from the Redis server into memory through lua by way of string loading according to the current identifier includes: searching for the policy parsing file ID and the user information parsing file ID corresponding to the current traffic splitting policy identifier; and, according to those IDs, loading the string-form policy parsing file and user information parsing file from the Redis server into memory through lua by way of string loading and converting them into Table form for storage.
- the processor's publishing according to the splitting policy parsed from the policy parsing file includes: calling the policy parsing file to parse the corresponding splitting policy, obtaining the correspondence between at least one piece of parameter information and a forwarding path in the policy, storing that correspondence in the Cache, and publishing according to the correspondence between the at least one piece of parameter information and the forwarding path in the Cache.
- the processor is further configured to: receive a request sent by the client and extract the user information from the request; extract at least one piece of parameter information from the user information according to the preset extraction manner in the user information parsing file and use it as the user feature; and determine, from the correspondence stored in the Cache, the forwarding path corresponding to the user feature and forward the request according to that path.
- the processor's publishing according to the splitting policy parsed from the policy parsing file may also include: calling the policy parsing file to parse the corresponding splitting policy, obtaining the corresponding percentage policy, and publishing according to that percentage policy.
- the foregoing storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, or a read-only memory (ROM), or a random access memory (RAM).
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention concerns a grayscale publishing system comprising a Redis server (102) and an Nginx server (104). The Nginx server (104) searches a Cache to determine whether it contains a traffic splitting policy identifier. If not, the Nginx server reads the current traffic splitting policy identifier from a preset location in the Redis server (102), then searches the memory according to the current traffic splitting policy identifier to determine whether it contains the policy parsing file and the user information parsing file corresponding to that identifier. If not, the Nginx server loads, according to the current traffic splitting policy identifier and by way of string loading, the policy parsing file and the user information parsing file stored in string form on the Redis server (102) into memory through Lua, and then publishes according to the traffic splitting policy obtained by parsing the policy parsing file.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201611123903.4A CN106775859B (zh) | 2016-12-08 | 2016-12-08 | 灰度发布方法和系统 |
| CN201611123903.4 | 2016-12-08 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018103320A1 true WO2018103320A1 (fr) | 2018-06-14 |
Family
ID=58881671
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/091179 Ceased WO2018103320A1 (fr) | 2016-12-08 | 2017-06-30 | Procédé de lancement à déclenchement périodique, système, serveur et support de stockage |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN106775859B (fr) |
| WO (1) | WO2018103320A1 (fr) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109669719A (zh) * | 2018-09-26 | 2019-04-23 | 深圳壹账通智能科技有限公司 | 应用灰度发布方法、装置、设备及可读存储介质 |
| CN109766270A (zh) * | 2018-12-19 | 2019-05-17 | 北京万维之道信息技术有限公司 | 项目测试方法及装置、服务器、平台 |
| CN110162382A (zh) * | 2019-04-09 | 2019-08-23 | 平安科技(深圳)有限公司 | 基于容器的灰度发布方法、装置、计算机设备及存储介质 |
| CN112788103A (zh) * | 2020-12-25 | 2021-05-11 | 江苏省未来网络创新研究院 | 一种基于nginx+lua解决同应用多实例web代理访问冲突的方法 |
| CN113377770A (zh) * | 2021-06-07 | 2021-09-10 | 北京沃东天骏信息技术有限公司 | 一种数据处理方法和装置 |
Families Citing this family (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106775859B (zh) * | 2016-12-08 | 2018-02-02 | 上海壹账通金融科技有限公司 | 灰度发布方法和系统 |
| CN107451020B (zh) * | 2017-06-28 | 2020-12-15 | 北京五八信息技术有限公司 | 一种ab测试系统及测试方法 |
| CN107632842B (zh) * | 2017-09-26 | 2020-06-30 | 携程旅游信息技术(上海)有限公司 | 规则配置和发布方法、系统、设备及存储介质 |
| CN108418764A (zh) * | 2018-02-07 | 2018-08-17 | 深圳壹账通智能科技有限公司 | 限流方法、装置、计算机设备和存储介质 |
| CN108427751A (zh) * | 2018-03-13 | 2018-08-21 | 深圳乐信软件技术有限公司 | 一种短链接跳转方法、装置及电子设备 |
| CN108965381B (zh) * | 2018-05-31 | 2023-03-21 | 康键信息技术(深圳)有限公司 | 基于Nginx的负载均衡实现方法、装置、计算机设备和介质 |
| CN108829459B (zh) * | 2018-05-31 | 2023-03-21 | 康键信息技术(深圳)有限公司 | 基于Nginx服务器的配置方法、装置、计算机设备和存储介质 |
| CN110661835B (zh) * | 2018-06-29 | 2023-05-02 | 马上消费金融股份有限公司 | 一种灰度发布方法及其处理方法、节点及系统和存储装置 |
| CN109189494B (zh) * | 2018-07-27 | 2022-01-21 | 创新先进技术有限公司 | 配置灰度发布方法、装置、设备及计算机可读存储介质 |
| CN109597643A (zh) * | 2018-11-27 | 2019-04-09 | 平安科技(深圳)有限公司 | 应用灰度发布方法、装置、电子设备及存储介质 |
| CN109739757A (zh) * | 2018-12-28 | 2019-05-10 | 微梦创科网络科技(中国)有限公司 | 一种ab测试方法及装置 |
| CN110032699A (zh) * | 2019-03-11 | 2019-07-19 | 北京智游网安科技有限公司 | 一种网页数据获取方法、智能终端及存储介质 |
| CN110647336A (zh) * | 2019-08-13 | 2020-01-03 | 平安普惠企业管理有限公司 | 灰度发布方法、装置、计算机设备和存储介质 |
| CN111880831A (zh) * | 2020-07-27 | 2020-11-03 | 平安国际智慧城市科技股份有限公司 | 服务器同步更新的方法、装置、计算机设备及存储介质 |
| CN112632430A (zh) * | 2020-12-28 | 2021-04-09 | 四川新网银行股份有限公司 | 一种实现渠道用户在h5页面中访问灰度环境api服务的方法 |
| CN114579205A (zh) * | 2022-03-09 | 2022-06-03 | 平安普惠企业管理有限公司 | 资源请求处理方法、装置、电子设备及可读存储介质 |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103176790A (zh) * | 2011-12-26 | 2013-06-26 | 阿里巴巴集团控股有限公司 | 应用发布方法和系统 |
| CN105591825A (zh) * | 2016-01-21 | 2016-05-18 | 烽火通信科技股份有限公司 | 在家庭网关升级时修改配置的方法 |
| CN105955761A (zh) * | 2016-06-30 | 2016-09-21 | 乐视控股(北京)有限公司 | 基于docker的灰度发布装置及方法 |
| WO2016179958A1 (fr) * | 2015-05-12 | 2016-11-17 | 百度在线网络技术(北京)有限公司 | Procédé, dispositif et système pour effectuer un déploiement bêta sur une application mobile |
| CN106775859A (zh) * | 2016-12-08 | 2017-05-31 | 上海亿账通互联网科技有限公司 | 灰度发布方法和系统 |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103023939B (zh) * | 2011-09-26 | 2017-10-20 | 中兴通讯股份有限公司 | 在Nginx上实现云缓存的REST接口的方法和系统 |
| CN103095743A (zh) * | 2011-10-28 | 2013-05-08 | 阿里巴巴集团控股有限公司 | 一种灰度发布的处理方法及系统 |
| CN105975270A (zh) * | 2016-05-04 | 2016-09-28 | 北京思特奇信息技术股份有限公司 | 一种基于http请求转发的灰度发布方法及系统 |
| CN106100927A (zh) * | 2016-06-20 | 2016-11-09 | 浪潮电子信息产业股份有限公司 | 一种实现ssr灰度发布的方法 |
- 2016-12-08: CN CN201611123903.4A patent/CN106775859B/zh active Active
- 2017-06-30: WO PCT/CN2017/091179 patent/WO2018103320A1/fr not_active Ceased
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103176790A (zh) * | 2011-12-26 | 2013-06-26 | 阿里巴巴集团控股有限公司 | 应用发布方法和系统 |
| WO2016179958A1 (fr) * | 2015-05-12 | 2016-11-17 | 百度在线网络技术(北京)有限公司 | Procédé, dispositif et système pour effectuer un déploiement bêta sur une application mobile |
| CN105591825A (zh) * | 2016-01-21 | 2016-05-18 | 烽火通信科技股份有限公司 | 在家庭网关升级时修改配置的方法 |
| CN105955761A (zh) * | 2016-06-30 | 2016-09-21 | 乐视控股(北京)有限公司 | 基于docker的灰度发布装置及方法 |
| CN106775859A (zh) * | 2016-12-08 | 2017-05-31 | 上海亿账通互联网科技有限公司 | 灰度发布方法和系统 |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109669719A (zh) * | 2018-09-26 | 2019-04-23 | 深圳壹账通智能科技有限公司 | 应用灰度发布方法、装置、设备及可读存储介质 |
| CN109766270A (zh) * | 2018-12-19 | 2019-05-17 | 北京万维之道信息技术有限公司 | 项目测试方法及装置、服务器、平台 |
| CN110162382A (zh) * | 2019-04-09 | 2019-08-23 | 平安科技(深圳)有限公司 | 基于容器的灰度发布方法、装置、计算机设备及存储介质 |
| CN110162382B (zh) * | 2019-04-09 | 2023-12-15 | 平安科技(深圳)有限公司 | 基于容器的灰度发布方法、装置、计算机设备及存储介质 |
| CN112788103A (zh) * | 2020-12-25 | 2021-05-11 | 江苏省未来网络创新研究院 | 一种基于nginx+lua解决同应用多实例web代理访问冲突的方法 |
| CN112788103B (zh) * | 2020-12-25 | 2022-08-02 | 江苏省未来网络创新研究院 | 一种基于nginx+lua解决同应用多实例web代理访问冲突的方法 |
| CN113377770A (zh) * | 2021-06-07 | 2021-09-10 | 北京沃东天骏信息技术有限公司 | 一种数据处理方法和装置 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN106775859A (zh) | 2017-05-31 |
| CN106775859B (zh) | 2018-02-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018103320A1 (fr) | Procédé de lancement à déclenchement périodique, système, serveur et support de stockage | |
| EP3116178B1 (fr) | Dispositif de traitement de paquets, procédé de traitement de paquets, et programme | |
| WO2018058959A1 (fr) | Procédé et appareil de vérification de langage sql, serveur et dispositif de stockage | |
| CN109800207B (zh) | 日志解析方法、装置、设备及计算机可读存储介质 | |
| WO2018103315A1 (fr) | Procédé de traitement de données de surveillance, appareil, serveur et équipement de stockage | |
| US10866894B2 (en) | Controlling memory usage in a cache | |
| WO2018227771A1 (fr) | Procédé, système, serveur de division de régions sur la base d'une police d'assurance, et support d'informations | |
| WO2018014580A1 (fr) | Procédé et appareil de test d'interface de données, serveur et support de stockage | |
| WO2020186773A1 (fr) | Procédé, dispositif et appareil de surveillance de demandes d'appel, et support d'informations | |
| WO2014189190A1 (fr) | Système et procédé de récupération d'informations sur la base d'un étiquetage d'éléments de données | |
| WO2010123168A1 (fr) | Procédé et système de gestion de base de données | |
| WO2020077832A1 (fr) | Procédé, appareil et dispositif d'accès à un bureau dans le nuage et support de stockage | |
| JPH1021134A (ja) | ゲートウェイ装置、クライアント計算機およびそれらを接続した分散ファイルシステム | |
| WO2020186791A1 (fr) | Procédé de transmission de données, appareil, dispositif, et support d'enregistrement | |
| CN113051460A (zh) | 基于Elasticsearch的数据检索方法、系统、电子设备及存储介质 | |
| WO2021107211A1 (fr) | Système de gestion de données chronologiques basé sur une base de données en mémoire | |
| WO2021012490A1 (fr) | Procédé et appareil de commutation de relais de service, dispositif terminal, et support d'informations | |
| CN110945496A (zh) | 用于状态对象数据存储区的系统和方法 | |
| WO2021012487A1 (fr) | Procédé de synchronisation d'informations intersystème, dispositif utilisateur, support de stockage, et appareil | |
| CN107276916A (zh) | 基于协议无感知转发技术的交换机流表管理方法 | |
| WO2015068929A1 (fr) | Procédé de fonctionnement d'un nœud considérant les caractéristiques de paquets dans un réseau centré sur le contenu et nœud | |
| WO2013176431A1 (fr) | Système et procédé pour allouer un serveur à un serveur et pour une messagerie efficace | |
| JPH10240768A (ja) | 異プログラム言語で構成されたデータベースシステムの検索方法 | |
| US8503442B2 (en) | Transmission information transfer apparatus and method thereof | |
| WO2018221998A1 (fr) | Procédé d'analyse automatique de goulot d'étranglement en temps réel et appareil permettant d'effectuer le procédé |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17879634 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 17879634 Country of ref document: EP Kind code of ref document: A1 |