
CN118316890A - Content acceleration method, device, equipment and storage medium based on edge cache - Google Patents

Content acceleration method, device, equipment and storage medium based on edge cache

Info

Publication number
CN118316890A
Authority
CN
China
Prior art keywords
data
content
access
node
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410576772.3A
Other languages
Chinese (zh)
Other versions
CN118316890B (en)
Inventor
黄家炽
赵剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zexintong Information Engineering Co ltd
Original Assignee
Shenzhen Zexintong Information Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zexintong Information Engineering Co ltd filed Critical Shenzhen Zexintong Information Engineering Co ltd
Priority to CN202410576772.3A
Publication of CN118316890A
Application granted
Publication of CN118316890B
Legal status: Active
Anticipated expiration


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/70 - Admission control; Resource allocation
    • H04L47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 - Network analysis or design
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 - Packet switching elements
    • H04L49/90 - Buffering arrangements
    • H04L49/9063 - Intermediate storage in different physical parts of a node or terminal
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 - Server selection for load balancing
    • H04L67/101 - Server selection for load balancing based on network conditions
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/50 - Network services
    • H04L67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

本申请涉及数据处理技术领域,公开了一种基于边缘缓存的内容加速方法、装置、设备及存储介质。所述方法包括:采集历史网络流量数据,并对历史网络流量数据进行流量特征提取,得到流量特征数据;通过流量特征数据构建边缘节点资源分配策略,并根据边缘节点资源分配策略生成缓存空间优先级表;采集目标用户的用户数据,并通过边缘节点资源分配策略对用户数据进行边缘节点匹配,得到多个初始边缘节点;基于缓存空间优先级表,对多个初始边缘节点进行节点布局构建,得到目标节点布局;获取目标用户的数据访问请求,并基于目标节点布局对数据访问请求进行访问节点匹配,得到目标访问节点,本申请提升基于边缘缓存的内容加速的加速效率。

The present application relates to the field of data processing technology, and discloses a content acceleration method, device, equipment and storage medium based on edge cache. The method includes: collecting historical network traffic data, and extracting traffic characteristics of the historical network traffic data to obtain traffic characteristic data; constructing an edge node resource allocation strategy through traffic characteristic data, and generating a cache space priority table according to the edge node resource allocation strategy; collecting user data of the target user, and matching the user data with edge nodes through the edge node resource allocation strategy to obtain multiple initial edge nodes; based on the cache space priority table, constructing a node layout for multiple initial edge nodes to obtain a target node layout; obtaining the data access request of the target user, and matching the data access request with access nodes based on the target node layout to obtain the target access node. The present application improves the acceleration efficiency of content acceleration based on edge cache.

Description

基于边缘缓存的内容加速方法、装置、设备及存储介质Content acceleration method, device, equipment and storage medium based on edge cache

技术领域Technical Field

本申请涉及数据处理领域,尤其涉及一种基于边缘缓存的内容加速方法、装置、设备及存储介质。The present application relates to the field of data processing, and in particular to a content acceleration method, device, equipment and storage medium based on edge caching.

背景技术Background technique

在当今的互联网时代,内容传输速度和质量成为衡量用户体验的关键指标。随着在线视频流、社交媒体和云服务的兴起,用户对数据传输速度的要求日益增加。为满足这些需求,边缘计算应运而生,通过将数据处理和存储从中心服务器转移到网络边缘更靠近用户的位置,大大减少了数据传输的延迟,提高了服务的响应速度。然而,尽管边缘计算在理论上能够提供更快的内容访问速度,但在实际应用中,如何高效地管理边缘节点、智能地调度网络资源,以及如何根据网络状况动态优化内容分发策略,仍然是一大挑战。In today's Internet era, content transmission speed and quality have become key indicators for measuring user experience. With the rise of online video streaming, social media, and cloud services, users' requirements for data transmission speed are increasing. To meet these needs, edge computing has emerged. By transferring data processing and storage from central servers to the edge of the network closer to users, it greatly reduces the delay of data transmission and improves the response speed of services. However, although edge computing can theoretically provide faster content access speed, in practical applications, how to efficiently manage edge nodes, intelligently schedule network resources, and how to dynamically optimize content distribution strategies according to network conditions are still major challenges.

现有的边缘计算解决方案在处理动态网络环境和用户需求时存在一定的局限性。首先,大多数解决方案采用静态的数据缓存和分发策略,难以适应网络状态的快速变化和用户访问模式的多样性。此外,现有方法在资源分配和负载均衡方面往往缺乏足够的灵活性和智能性,不能充分利用边缘计算的潜力来优化内容传输。例如,当多个用户同时访问热门内容时,静态的缓存策略可能导致某些边缘节点过载,而其他节点却资源闲置,从而影响整体的服务质量和用户体验。因此,如何根据实时网络状况和用户行为智能调整边缘节点的资源分配和内容分发策略,成为提高边缘计算效率和优化用户体验的关键。Existing edge computing solutions have certain limitations when dealing with dynamic network environments and user needs. First, most solutions adopt static data caching and distribution strategies, which are difficult to adapt to the rapid changes in network status and the diversity of user access patterns. In addition, existing methods often lack sufficient flexibility and intelligence in resource allocation and load balancing, and cannot fully utilize the potential of edge computing to optimize content delivery. For example, when multiple users access popular content at the same time, static caching strategies may cause some edge nodes to be overloaded while other nodes have idle resources, thus affecting the overall service quality and user experience. Therefore, how to intelligently adjust the resource allocation and content distribution strategies of edge nodes according to real-time network conditions and user behavior has become the key to improving edge computing efficiency and optimizing user experience.

发明内容Summary of the invention

本申请提供了一种基于边缘缓存的内容加速方法、装置、设备及存储介质,用于提升基于边缘缓存的内容加速的加速效率。The present application provides a content acceleration method, apparatus, device and storage medium based on edge cache, which are used to improve the acceleration efficiency of content acceleration based on edge cache.

第一方面,本申请提供了一种基于边缘缓存的内容加速方法,所述基于边缘缓存的内容加速方法包括:采集历史网络流量数据,并对所述历史网络流量数据进行流量特征提取,得到流量特征数据;通过所述流量特征数据构建边缘节点资源分配策略,并根据所述边缘节点资源分配策略生成缓存空间优先级表;采集目标用户的用户数据,并通过所述边缘节点资源分配策略对所述用户数据进行边缘节点匹配,得到多个初始边缘节点;基于所述缓存空间优先级表,对多个所述初始边缘节点进行节点布局构建,得到目标节点布局;获取所述目标用户的数据访问请求,并基于所述目标节点布局对所述数据访问请求进行访问节点匹配,得到目标访问节点。In a first aspect, the present application provides a content acceleration method based on edge cache, and the content acceleration method based on edge cache includes: collecting historical network traffic data, and extracting traffic features of the historical network traffic data to obtain traffic feature data; constructing an edge node resource allocation strategy through the traffic feature data, and generating a cache space priority table according to the edge node resource allocation strategy; collecting user data of a target user, and performing edge node matching on the user data through the edge node resource allocation strategy to obtain multiple initial edge nodes; based on the cache space priority table, performing node layout construction on the multiple initial edge nodes to obtain a target node layout; obtaining a data access request of the target user, and performing access node matching on the data access request based on the target node layout to obtain a target access node.

结合第一方面,在本申请第一方面的第一种实现方式中,所述采集历史网络流量数据,并对所述历史网络流量数据进行流量特征提取,得到流量特征数据,包括:采集所述历史网络流量数据,并对所述历史网络流量数据进行信息遍历,得到历史访问频率、历史访问时间以及历史访问内容类型;对所述历史访问频率进行向量转换,得到第一向量,对所述历史访问时间进行向量转换,得到第二向量,同时,对所述历史访问内容类型进行向量转换,得到第三向量;对所述第一向量以及所述第二向量进行向量融合,得到第一融合向量;对所述第一向量以及所述第三向量进行向量融合,得到第二融合向量;对所述第二向量以及所述第三向量进行向量融合,得到第三融合向量;将所述第一融合向量输入预置的卷积长短期记忆网络模型进行高频访问时间特征提取,得到高频访问时间特征;将所述第二融合向量输入所述卷积长短期记忆网络模型进行高频访问内容特征提取,得到高频访问内容特征;将所述第三融合向量输入所述卷积长短期记忆网络模型进行长时访问内容提取,得到长时访问内容特征;对所述高频访问时间特征以及所述高频访问内容特征进行特征融合,得到第一融合特征,同时,对所述高频访问内容特征以及所述长时访问内容特征进行特征融合,得到第二融合特征;将所述第一融合特征输入所述卷积长短期记忆网络模型的特征识别层进行流量模式识别,得到流量模式,同时,将所述第二融合特征输入所述卷积长短期记忆网络模型的特征识别层进行内容趋势识别,得到访问内容趋势;将所述流量模式以及所述访问内容趋势合并为所述流量特征数据。In combination with the first aspect, in a first implementation method of the first aspect of the present application, the historical network traffic data is collected and traffic feature extraction is performed on the historical network traffic data to obtain traffic feature data, including: collecting the historical network traffic data and performing information traversal on the historical network traffic data to obtain historical access frequency, historical access time and historical access content type; performing vector conversion on the historical access frequency to obtain a first vector, performing vector conversion on the historical access time to obtain a second vector, and at the same time, performing vector conversion on the historical access content type to obtain a third vector; performing vector fusion on the first vector and the second vector to obtain a first fused vector; performing vector fusion on the first vector and the third vector to obtain a second fused vector; performing vector fusion on the second vector and the third vector to obtain a third fused vector; inputting the first fused vector into a preset convolutional long short-term memory network model for Extract high-frequency access time features to obtain high-frequency access time features; input the second fusion vector into the convolutional long short-term memory network model to extract high-frequency access content features to obtain high-frequency access content features; input the third fusion vector into the convolutional long short-term memory network model to extract long-term access content to obtain long-term access content features; perform feature fusion on the high-frequency access time features and the high-frequency access content features to obtain a first fusion feature, and at the same time, perform feature fusion on the high-frequency access content features and the long-term access content features to obtain a second fusion feature; input the first fusion feature into the feature recognition layer of the convolutional long short-term memory network model to perform traffic pattern recognition to obtain a traffic pattern, and at the same time, input the second fusion feature into the feature recognition layer of the convolutional long short-term memory network model to perform content trend recognition to obtain an access content trend; merge the traffic pattern and the access content trend into the traffic feature data.

结合第一方面,在本申请第一方面的第二种实现方式中,所述通过所述流量特征数据构建边缘节点资源分配策略,并根据所述边缘节点资源分配策略生成缓存空间优先级表,包括:对所述流量模式进行模式识别,得到模式识别数据,其中,所述模式识别数据包括:周期性流量变化值以及流量峰值;通过时序预测算法对所述周期性流量变化值以及所述流量峰值进行带宽需求量预测,得到多个时段的带宽需求量;对多个时段的带宽需求量进行高峰时段识别,得到流量高峰时段;对多个时段的带宽需求量进行低谷时段识别,得到流量低谷时段;对所述访问内容趋势进行高需求内容类型识别,得到高需求内容类型集;基于所述流量高峰时段以及所述高需求内容类型集生成初始资源分配策略;通过所述流量低谷时段对所述初始资源分配策略进行策略修正,得到所述边缘节点资源分配策略;通过所述边缘节点资源分配策略对所述高需求内容类型集进行优先级排序,得到排序数据,并通过所述排序数据生成缓存空间优先级表。In combination with the first aspect, in a second implementation method of the first aspect of the present application, the edge node resource allocation strategy is constructed through the traffic feature data, and a cache space priority table is generated according to the edge node resource allocation strategy, including: performing pattern recognition on the traffic pattern to obtain pattern recognition data, wherein the pattern recognition data includes: periodic traffic change values and traffic peak values; predicting the bandwidth demand for the periodic traffic change values and the traffic peak values through a timing prediction algorithm to obtain bandwidth demand for multiple time periods; identifying peak time periods for bandwidth demand for multiple time periods to obtain traffic peak time periods; identifying valley time periods for bandwidth demand for multiple time periods to obtain traffic valley time periods; identifying high-demand content types for the access content trend to obtain a high-demand content type set; generating an initial resource allocation strategy based on the traffic peak time period and the high-demand content type set; performing policy modification on the initial resource allocation strategy through the traffic valley time period to obtain the edge node resource allocation strategy; prioritizing the high-demand content type set through the edge node resource allocation strategy to obtain sorting data, and generating a cache space priority table through the sorting data.

结合第一方面,在本申请第一方面的第三种实现方式中,所述采集目标用户的用户数据,并通过所述边缘节点资源分配策略对所述用户数据进行边缘节点匹配,得到多个初始边缘节点,包括:采集目标用户的用户数据,其中,所述用户数据包括用户地理位置数据以及用户预期访问内容数据;对所述用户地理位置数据进行服务器计算区域识别,得到服务器计算区域;对所述服务器计算区域内的边缘节点信息进行数据采集,得到区域节点数据;对所述用户地理位置数据以及所述用户预期访问内容数据进行数据融合,得到用户需求指纹;对所述区域节点数据进行节点资源遍历,得到多个区域节点对应的节点算力资源,同时,采集多个所述区域节点的资源负载数据;通过所述用户需求指纹,对多个所述区域节点的资源负载数据进行节点筛选,得到多个初始边缘节点。In combination with the first aspect, in a third implementation method of the first aspect of the present application, the user data of the target user is collected, and the user data is matched with edge nodes through the edge node resource allocation strategy to obtain multiple initial edge nodes, including: collecting the user data of the target user, wherein the user data includes user geographic location data and user expected access content data; performing server computing area identification on the user geographic location data to obtain a server computing area; performing data collection on edge node information in the server computing area to obtain regional node data; performing data fusion on the user geographic location data and the user expected access content data to obtain a user demand fingerprint; performing node resource traversal on the regional node data to obtain node computing power resources corresponding to multiple regional nodes, and at the same time, collecting resource load data of multiple regional nodes; performing node screening on the resource load data of multiple regional nodes through the user demand fingerprint to obtain multiple initial edge nodes.

结合第一方面,在本申请第一方面的第四种实现方式中,所述获取所述目标用户的数据访问请求,并基于所述目标节点布局对所述数据访问请求进行访问节点匹配,得到目标访问节点,包括:获取所述目标用户的数据访问请求,并对所述数据访问请求进行解析,得到数据请求频率以及当前服务器负载数据;获取所述目标用户的视场角数据,并采集待访问视频内容的流行度数值,将所述视场角数据以及所述流行度数值集输入预置的深度确定性策略梯度算法进行视频内容焦点区域预测,得到用户焦点区域;基于所述用户焦点区域对所述待访问视频内容进行待缓存内容提取,得到待缓存内容;通过所述目标节点布局对所述待缓存内容进行数据缓存,基于所述目标节点布局,将所述数据请求频率以及所述当前服务器负载数据输入预置的双重深度Q网络算法进行访问节点匹配,得到目标访问节点。In combination with the first aspect, in a fourth implementation method of the first aspect of the present application, the data access request of the target user is obtained, and access node matching is performed on the data access request based on the target node layout to obtain the target access node, including: obtaining the data access request of the target user, and parsing the data access request to obtain the data request frequency and the current server load data; obtaining the field of view data of the target user, and collecting the popularity value of the video content to be accessed, and inputting the field of view data and the popularity value set into a preset deep deterministic policy gradient algorithm to predict the focus area of the video content to obtain the user focus area; extracting the content to be cached for the video content to be accessed based on the user focus area to obtain the content to be cached; data caching is performed on the content to be cached through the target node layout, and based on the target node layout, the data request frequency and the current server load data are input into a preset dual deep Q network algorithm to match the access nodes to obtain the target access node.

结合第一方面,在本申请第一方面的第五种实现方式中,所述获取所述目标用户的视场角数据,并采集待访问视频内容的流行度数值,将所述视场角数据以及所述流行度数值集输入预置的深度确定性策略梯度算法进行视频内容焦点区域预测,得到用户焦点区域,包括:获取所述目标用户的视场角数据,并对所述待访问视频内容进行视频浏览量提取,得到视频浏览量;对所述待访问视频内容进行视频点赞量提取,得到视频点赞量;对所述待访问视频内容进行视频完整观看量提取,得到视频完整观看量;通过预置的权重参数集,对所述视频浏览量、所述视频点赞量以及所述视频完整观看量进行视频流行度数值计算,得到所述流行度数值集;对所述视场角数据进行视频帧比例值计算,得到所述视场角数据对应的视频帧比例值;In combination with the first aspect, in a fifth implementation of the first aspect of the present application, the field of view data of the target user is obtained, and the popularity value of the video content to be accessed is collected, and the field of view data and the popularity value set are input into a preset deep deterministic policy gradient algorithm to predict the focus area of the video content to obtain the user focus area, including: obtaining the field of view data of the target user, and extracting the video views of the video content to be accessed to obtain the video views; extracting the video likes of the video content to be accessed to obtain the video likes; extracting the video complete viewing amount of the video content to be accessed to obtain the video complete viewing amount; calculating the video popularity value of the video views, the video likes and the video complete viewing amount through a preset weight parameter set to obtain the popularity value set; calculating the video frame ratio value of the field of view data to obtain the video frame ratio value corresponding to the field of view data;

基于所述视频帧比例值,将所述视场角数据及所述流行度数值集输入所述深度确定性策略梯度算法,通过中心点坐标计算公式进行焦点区域中心点坐标计算,得到焦点区域中心点坐标。所述中心点坐标计算公式的参数包括:所述视场角数据的数据点数量n、焦点区域中心点坐标、第i个视场角数据点的坐标、第i个视场角数据点相对于视频内容中心的角度偏移量、第一指数参数、第二指数参数以及权重因子;所述权重因子由第i个视场角数据点对应的流行度数值、第i个视场角数据点到视频内容中心的欧氏距离以及修正系数计算得到。Based on the video frame ratio value, the field of view data and the popularity value set are input into the deep deterministic policy gradient algorithm, and the center point coordinates of the focus area are calculated through a center point coordinate calculation formula. The parameters of this formula include: the number n of data points in the field of view data, the coordinates of the focus area center point, the coordinates of the i-th field of view data point, the angular offset of the i-th field of view data point relative to the center of the video content, a first exponential parameter, a second exponential parameter, and a weight factor; the weight factor is in turn computed from the popularity value corresponding to the i-th field of view data point, the Euclidean distance from the i-th field of view data point to the center of the video content, and a correction coefficient.

基于所述焦点区域中心点坐标生成多个候选焦点区域,通过预置的奖励函数对每个所述候选焦点区域进行奖励分值计算,得到每个所述候选焦点区域的奖励分值数据;基于每个所述候选焦点区域的奖励分值数据对多个所述候选焦点区域进行区域筛选,得到所述用户焦点区域。Based on the coordinates of the center point of the focus area, multiple candidate focus areas are generated, and a reward score is calculated for each of the candidate focus areas through a preset reward function to obtain reward score data for each candidate focus area; based on the reward score data for each candidate focus area, multiple candidate focus areas are area screened to obtain the user focus area.
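As a hedged illustration of the focus-area computation described above (the original center-point and weight-factor formulas are published as images and are not reproduced in this text), the Python sketch below uses assumed expressions: an exponential decay over the angular offset for the two exponential parameters, and popularity divided by distance plus a correction term for the weight factor. All function names and values are illustrative, not the patented formulas.

```python
import math

def focus_center(points, popularity, alpha=1.0, beta=1.0, eps=1e-3,
                 content_center=(0.5, 0.5)):
    """Estimate a focus-area center from field-of-view data points.

    points      : list of (x, y) field-of-view sample coordinates (normalized)
    popularity  : list of popularity values, one per point
    alpha, beta : assumed first/second exponential parameters
    eps         : assumed correction coefficient added to the distance
    """
    cx, cy = content_center
    num_x = num_y = denom = 0.0
    for (x, y), p in zip(points, popularity):
        d = math.hypot(x - cx, y - cy)       # Euclidean distance to the content center
        theta = math.atan2(y - cy, x - cx)   # angular offset of the sample
        w = p / (d + eps)                    # weight: popularity damped by distance
        num_x += w * math.exp(-alpha * abs(theta)) * x
        num_y += w * math.exp(-beta * abs(theta)) * y
        denom += w
    return (num_x / denom, num_y / denom) if denom else content_center

# Toy usage: three gaze samples with their popularity scores.
samples = [(0.40, 0.55), (0.62, 0.48), (0.51, 0.50)]
pop = [120.0, 300.0, 220.0]
print(focus_center(samples, pop))
```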

结合第一方面,在本申请第一方面的第六种实现方式中,所述通过所述目标节点布局对所述待缓存内容进行数据缓存,基于所述目标节点布局,将所述数据请求频率以及所述当前服务器负载数据输入预置的双重深度Q网络算法进行访问节点匹配,得到目标访问节点,包括:对所述目标节点布局进行网络连接状态分析,得到当前网络连接状态;基于所述网络连接状态匹配数据缓存策略,通过所述数据缓存策略对所述待缓存内容进行数据缓存,其中,所述数据缓存策略包括数据分配子策略以及数据传输子策略;将所述数据请求频率输入所述双重深度Q网络算法进行频率状态向量构建,得到多维频率状态向量;将所述当前服务器负载数据输入所述双重深度Q网络算法进行负载状态向量构建,得到多维负载状态向量;将所述多维频率状态向量以及所述多维负载状态向量输入所述双重深度Q网络算法的主网络进行动作执行,得到动作执行状态数据以及奖励参数;将所述动作执行状态数据以及所述奖励参数输入所述双重深度Q网络算法的目标网络进行最大Q值动态分析,得到最大Q值动作;提取所述最大Q值动作的目标Q值,基于所述目标Q值对所述目标节点布局进行访问节点匹配,得到所述目标访问节点。In combination with the first aspect, in a sixth implementation of the first aspect of the present application, data caching is performed on the content to be cached through the target node layout, and based on the target node layout, the data request frequency and the current server load data are input into a preset dual deep Q network algorithm for access node matching to obtain a target access node, including: performing a network connection status analysis on the target node layout to obtain a current network connection status; matching a data caching strategy based on the network connection status, and caching data on the content to be cached through the data caching strategy, wherein the data caching strategy includes a data allocation sub-strategy and a data transmission sub-strategy; inputting the data request frequency into the dual deep Q network The algorithm constructs a frequency state vector to obtain a multidimensional frequency state vector; the current server load data is input into the dual deep Q network algorithm to construct a load state vector to obtain a multidimensional load state vector; the multidimensional frequency state vector and the multidimensional load state vector are input into the main network of the dual deep Q network algorithm to perform action execution to obtain action execution state data and reward parameters; the action execution state data and the reward parameters are input into the target network of the dual deep Q network algorithm to perform maximum Q value dynamic analysis to obtain a maximum Q value action; the target Q value of the maximum Q value action is extracted, and the target node layout is matched with the access node based on the target Q value to obtain the target access node.
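The implementation above distinguishes a main network, which executes actions over the frequency and load state vectors, and a target network, which performs the maximum-Q analysis. The PyTorch sketch below illustrates only that selection step on toy data; the state layout, layer sizes, and node count are assumptions rather than parameters from the application, and the training loop (experience replay, reward shaping, target updates) is omitted.

```python
import torch
import torch.nn as nn

NUM_NODES = 4          # candidate access nodes (illustrative)
STATE_DIM = 8          # frequency features + load features (illustrative)

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_NODES))
    def forward(self, s):
        return self.net(s)

main_net, target_net = QNet(), QNet()
target_net.load_state_dict(main_net.state_dict())   # target starts as a copy

# State: request-frequency vector concatenated with a per-node load vector.
freq_vec = torch.tensor([0.7, 0.2, 0.1, 0.0])
load_vec = torch.tensor([0.3, 0.9, 0.5, 0.1])
state = torch.cat([freq_vec, load_vec]).unsqueeze(0)

with torch.no_grad():
    # Double DQN flavor: the main network picks the action, the target network scores it.
    best_action = main_net(state).argmax(dim=1)
    target_q = target_net(state).gather(1, best_action.unsqueeze(1))

print("selected access node:", int(best_action), "target Q:", float(target_q))
```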

第二方面,本申请提供了一种基于边缘缓存的内容加速装置,所述基于边缘缓存的内容加速装置包括:In a second aspect, the present application provides a content acceleration device based on edge cache, and the content acceleration device based on edge cache includes:

采集模块,用于采集历史网络流量数据,并对所述历史网络流量数据进行流量特征提取,得到流量特征数据;A collection module is used to collect historical network traffic data and extract traffic characteristics from the historical network traffic data to obtain traffic characteristic data;

生成模块,用于通过所述流量特征数据构建边缘节点资源分配策略,并根据所述边缘节点资源分配策略生成缓存空间优先级表;A generation module, used to construct an edge node resource allocation strategy based on the traffic characteristic data, and generate a cache space priority table according to the edge node resource allocation strategy;

匹配模块,用于采集目标用户的用户数据,并通过所述边缘节点资源分配策略对所述用户数据进行边缘节点匹配,得到多个初始边缘节点;A matching module, used to collect user data of a target user, and perform edge node matching on the user data through the edge node resource allocation strategy to obtain a plurality of initial edge nodes;

构建模块,用于基于所述缓存空间优先级表,对多个所述初始边缘节点进行节点布局构建,得到目标节点布局;A construction module, configured to construct a node layout for the plurality of initial edge nodes based on the cache space priority table to obtain a target node layout;

获取模块,用于获取所述目标用户的数据访问请求,并基于所述目标节点布局对所述数据访问请求进行访问节点匹配,得到目标访问节点。The acquisition module is used to acquire the data access request of the target user, and perform access node matching on the data access request based on the target node layout to obtain the target access node.

本申请第三方面提供了一种基于边缘缓存的内容加速设备,包括:存储器和至少一个处理器,所述存储器中存储有指令;所述至少一个处理器调用所述存储器中的所述指令,以使得所述基于边缘缓存的内容加速设备执行上述的基于边缘缓存的内容加速方法。A third aspect of the present application provides an edge cache-based content acceleration device, comprising: a memory and at least one processor, wherein the memory stores instructions; the at least one processor calls the instructions in the memory so that the edge cache-based content acceleration device executes the above-mentioned edge cache-based content acceleration method.

本申请的第四方面提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行上述的基于边缘缓存的内容加速方法。A fourth aspect of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores instructions, which, when executed on a computer, enable the computer to execute the above-mentioned edge cache-based content acceleration method.

本申请提供的技术方案中,通过资源分配和数据传输策略,显著提升了内容传输的效率和质量。首先,通过采集和分析历史网络流量数据,构建的边缘节点资源分配策略能够动态地调整资源分配,确保在用户需求高峰期,资源得到有效利用,避免了资源的浪费或过载现象,从而在保障服务质量的同时,也提高了系统的整体运行效率。其次,利用深度学习算法,特别是双重深度Q网络算法对数据请求频率和服务器负载进行实时分析,使得数据缓存和分发策略能够根据网络状态的实时变化智能调整,这不仅提升了数据传输的稳定性和可靠性,也极大地减少了因网络拥塞引起的延迟,确保了用户能够获得更流畅的访问体验。此外,通过优化的缓存空间优先级表和针对性的节点布局构建,本方案能够更精准地匹配用户的数据访问需求,无论是在用户地理位置数据的精确识别还是在预期访问内容数据的有效缓存上,都大大提高了边缘计算资源的利用率和内容传输的时效性。In the technical solution provided by this application, the efficiency and quality of content transmission are significantly improved through resource allocation and data transmission strategies. First, by collecting and analyzing historical network traffic data, the constructed edge node resource allocation strategy can dynamically adjust resource allocation to ensure that resources are effectively utilized during the peak period of user demand, avoiding resource waste or overload, thereby ensuring the quality of service while also improving the overall operating efficiency of the system. Secondly, the data request frequency and server load are analyzed in real time using deep learning algorithms, especially the dual-depth Q network algorithm, so that the data cache and distribution strategy can be intelligently adjusted according to the real-time changes in the network status, which not only improves the stability and reliability of data transmission, but also greatly reduces the delay caused by network congestion, ensuring that users can get a smoother access experience. In addition, through the optimized cache space priority table and targeted node layout construction, this solution can more accurately match the user's data access needs, whether in the accurate identification of user geographic location data or in the effective caching of expected access content data, it greatly improves the utilization of edge computing resources and the timeliness of content transmission.

附图说明BRIEF DESCRIPTION OF THE DRAWINGS

为了更清楚地说明本发明实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以基于这些附图获得其他的附图。In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the accompanying drawings required for use in the description of the embodiments will be briefly introduced below. Obviously, the accompanying drawings described below are some embodiments of the present invention. For ordinary technicians in this field, other accompanying drawings can be obtained based on these accompanying drawings without paying any creative work.

图1为本申请实施例中基于边缘缓存的内容加速方法的一个实施例示意图;FIG1 is a schematic diagram of an embodiment of a content acceleration method based on edge caching in an embodiment of the present application;

图2为本申请实施例中基于边缘缓存的内容加速装置的一个实施例示意图。FIG. 2 is a schematic diagram of an embodiment of a content acceleration device based on edge caching in an embodiment of the present application.

具体实施方式Detailed ways

本申请实施例提供了一种基于边缘缓存的内容加速方法、装置、设备及存储介质。本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”、“第四”等(如果存在)是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。此外,术语“包括”或“具有”及其任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。Embodiments of the present application provide a content acceleration method, apparatus, device and storage medium based on edge caching. The terms "first", "second", "third", "fourth", etc. (if any) in the specification and claims of the present application and the above-mentioned drawings are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that the data used in this way can be interchangeable where appropriate, so that the embodiments described herein can be implemented in an order other than that illustrated or described herein. In addition, the terms "including" or "having" and any variations thereof are intended to cover non-exclusive inclusions, for example, a process, method, system, product or device that includes a series of steps or units is not necessarily limited to those steps or units that are clearly listed, but may include other steps or units that are not clearly listed or inherent to these processes, methods, products or devices.

为便于理解,下面对本申请实施例的具体流程进行描述,请参阅图1,本申请实施例中基于边缘缓存的内容加速方法的一个实施例包括:For ease of understanding, the specific process of the embodiment of the present application is described below. Please refer to Figure 1. An embodiment of the content acceleration method based on edge caching in the embodiment of the present application includes:

步骤S101、采集历史网络流量数据,并对历史网络流量数据进行流量特征提取,得到流量特征数据;Step S101, collecting historical network traffic data, and extracting traffic characteristics from the historical network traffic data to obtain traffic characteristic data;

可以理解的是,本申请的执行主体可以为基于边缘缓存的内容加速装置,还可以是终端或者服务器,具体此处不做限定。本申请实施例以服务器为执行主体为例进行说明。It is understandable that the execution subject of the present application may be a content acceleration device based on edge caching, or a terminal or a server, which is not specifically limited here. The present application embodiment is described by taking a server as the execution subject as an example.

具体的,历史网络流量数据包括但不限于网站访问日志、CDN(内容分发网络)的使用统计和网络监控工具的输出。例如,视频分享平台想要优化其视频内容的分发,以减少缓冲时间并提高用户满意度。会收集过去几个月内的网络流量数据,包括视频的观看次数、观看时段、观看持续时间以及用户请求的地理位置。以视频观看次数为例,通过分析不同时间段内视频的观看次数,可以识别出哪些视频在特定时间段内特别受欢迎。比如,一个新发布的电影预告片在晚上8点到10点间的观看次数可能会激增。这个时间段的流量峰值可以转换为一个特征向量,表明了用户对新内容的高度需求。Specifically, historical network traffic data includes but is not limited to website access logs, CDN (content distribution network) usage statistics, and outputs from network monitoring tools. For example, a video sharing platform wants to optimize the distribution of its video content to reduce buffering time and improve user satisfaction. Network traffic data from the past few months will be collected, including the number of video views, viewing time period, viewing duration, and geographic location of user requests. Taking the number of video views as an example, by analyzing the number of video views in different time periods, it is possible to identify which videos are particularly popular during specific time periods. For example, a newly released movie trailer may see a surge in views between 8 and 10 p.m. The traffic peak in this time period can be converted into a feature vector, indicating a high demand for new content among users.

此外,观看持续时间和用户请求的地理位置也是提取流量特征的重要方面。例如,如果发现某个地区的用户倾向于完整观看视频而不是简单地跳过,这可能表明该地区的用户对视频内容的质量有较高的要求。将这些特征融合后,得到描述历史网络流量行为的流量特征数据。例如,基于流量特征数据,可以预测未来某段时间内对某类视频内容的需求量,并据此在靠近需求高峰地区的边缘节点上优先缓存这些内容,从而在用户发起请求时减少数据的传输距离和时间,加速内容的访问速度。In addition, viewing duration and the geographic location of user requests are also important aspects of extracting traffic features. For example, if it is found that users in a certain area tend to watch videos in full rather than simply skipping, this may indicate that users in that area have higher requirements for the quality of video content. After fusing these features, traffic feature data describing historical network traffic behavior is obtained. For example, based on traffic feature data, the demand for a certain type of video content in a certain period of time in the future can be predicted, and based on this, these contents can be cached preferentially on edge nodes close to the peak demand areas, thereby reducing the data transmission distance and time when users initiate requests, and accelerating the access speed of content.
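A minimal sketch of the kind of aggregation this step implies: hypothetical access-log records are bucketed into per-hour, per-content-type, per-region request counts that can later be vectorized. The field names and granularity are assumptions.

```python
from collections import Counter
from datetime import datetime

# Hypothetical access-log records: (timestamp, content_type, region)
logs = [
    ("2024-03-02 20:15:00", "trailer", "north"),
    ("2024-03-02 20:40:00", "trailer", "north"),
    ("2024-03-02 09:05:00", "news", "south"),
]

hourly_counts = Counter()
for ts, ctype, region in logs:
    hour = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").hour
    hourly_counts[(hour, ctype, region)] += 1

# A simple feature representation: request count per (hour, content type, region) bucket.
for key, count in sorted(hourly_counts.items()):
    print(key, count)
```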

步骤S102、通过流量特征数据构建边缘节点资源分配策略,并根据边缘节点资源分配策略生成缓存空间优先级表;Step S102: constructing an edge node resource allocation strategy based on traffic feature data, and generating a cache space priority table according to the edge node resource allocation strategy;

具体的,根据流量特征数据可以识别出哪些内容是“热点”内容,即那些需求量大且访问频率高的内容。进一步分析这些热点内容的需求模式,如某些内容可能在特定的时间段或特定的地理区域内特别受欢迎。例如,对于那些在周末晚间特别受欢迎的电视剧集,可以提前在靠近主要观众群体所在地的边缘节点上增加它们的缓存副本。基于此制定边缘节点资源分配策略,策略考虑到各个边缘节点的存储容量、带宽能力以及它们与用户地理位置的相对关系,以确保热点内容被优先考虑并合理分配到各个边缘节点上。因此,拥有更高存储容量和带宽能力、且靠近用户高密度区域的边缘节点,会被赋予更高的缓存优先级。Specifically, based on the traffic characteristic data, it is possible to identify which content is "hot" content, that is, content that is in high demand and frequently accessed. Further analyze the demand patterns of these hot content, such as some content that may be particularly popular during specific time periods or in specific geographic areas. For example, for those TV series that are particularly popular on weekend evenings, their cache copies can be added in advance on edge nodes close to the locations of the main audience groups. Based on this, an edge node resource allocation strategy is formulated. The strategy takes into account the storage capacity, bandwidth capabilities of each edge node, and their relative relationship with the user's geographic location to ensure that hot content is prioritized and reasonably allocated to each edge node. Therefore, edge nodes with higher storage capacity and bandwidth capabilities and close to areas with high user density will be given higher cache priority.

根据边缘节点资源分配策略,生成缓存空间优先级表,优先级表明确指出了各个内容项应该被缓存到哪些边缘节点上,以及它们在这些节点上的缓存优先级。例如,对于热门电视剧集,缓存空间优先级表会指示将其缓存到所有靠近主要观众群体所在地的边缘节点,并在这些节点上赋予它们较高的缓存优先级。这样,当用户在周末晚间请求观看这些电视剧集时,他们的请求可以被迅速响应,因为相关内容已经被缓存在离他们最近的边缘节点上,从而显著减少了数据传输的延迟,提高了用户体验。基于流量特征数据的边缘节点资源分配策略不仅能够确保热点内容被有效缓存,还能优化整个网络的资源使用和用户的访问速度。According to the edge node resource allocation strategy, a cache space priority table is generated. The priority table clearly indicates which edge nodes each content item should be cached on and their cache priority on these nodes. For example, for popular TV series, the cache space priority table will indicate that they should be cached on all edge nodes close to the main audience groups, and give them a higher cache priority on these nodes. In this way, when users request to watch these TV series on weekend evenings, their requests can be responded to quickly because the relevant content has been cached on the edge nodes closest to them, which significantly reduces the delay of data transmission and improves the user experience. The edge node resource allocation strategy based on traffic feature data can not only ensure that hot content is effectively cached, but also optimize the resource utilization of the entire network and the access speed of users.
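The following toy sketch shows one way such a cache space priority table could be assembled: content items are ranked per edge node by a predicted demand score, and the rank becomes the cache priority. The scoring rule and data are illustrative assumptions, not values from the application.

```python
# Demand score per (content, node): here simply predicted requests from that node's region.
predicted_demand = {
    ("drama_ep12", "node_beijing"): 9500,
    ("movie_trailer", "node_beijing"): 4200,
    ("drama_ep12", "node_shanghai"): 3100,
    ("local_news", "node_shanghai"): 8800,
}

priority_table = {}
for (content, node), demand in predicted_demand.items():
    priority_table.setdefault(node, []).append((demand, content))

for node, items in priority_table.items():
    items.sort(reverse=True)                      # highest demand first
    ranked = [(content, rank + 1) for rank, (_, content) in enumerate(items)]
    print(node, ranked)                           # e.g. drama_ep12 gets priority 1 on node_beijing
```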

步骤S103、采集目标用户的用户数据,并通过边缘节点资源分配策略对用户数据进行边缘节点匹配,得到多个初始边缘节点;Step S103: collecting user data of the target user, and performing edge node matching on the user data through an edge node resource allocation strategy to obtain multiple initial edge nodes;

具体的,收集目标用户的数据,包括用户的地理位置、请求的视频内容ID、访问时间等信息。例如,一个用户位于北京,晚上8点钟通过手机客户端请求观看一部热门电影。在这个过程中,用户的地理位置数据可以通过客户端的GPS功能获取,视频内容ID和访问时间则由用户的请求直接提供。Specifically, data of the target user is collected, including the user's geographic location, requested video content ID, access time, etc. For example, a user in Beijing requests to watch a popular movie through a mobile client at 8 pm. In this process, the user's geographic location data can be obtained through the GPS function of the client, and the video content ID and access time are directly provided by the user's request.

将这些收集到的用户数据与已有的边缘节点资源分配策略相结合,进行边缘节点的匹配。基于对历史网络流量数据,能够反映不同时间段内不同地理区域的用户访问模式和内容流行度变化。如以上的例子所讲的情况,此时会优先考虑位于北京及其周边地区的边缘节点,并进一步基于每个节点的当前服务器负载和与用户地理位置的距离,评估每个节点作为缓存节点的适用性。如果某个节点的当前负载较低,并且距离用户较近,那么这个节点就会被赋予较高的匹配分数。最终得到一个初始边缘节点列表,这些节点是根据用户的地理位置、请求的内容特性以及节点的当前状态综合评估后认为最适合处理当前请求的节点。例如,对于前述位于北京的用户请求观看的热门电影,可能会选择位于北京市中心、当前负载较低的一个边缘节点,以及另外两个位于用户地理位置附近、同样具有较低负载的备用节点,形成一个多节点的缓存和分发策略。Combine these collected user data with the existing edge node resource allocation strategy to match edge nodes. Based on historical network traffic data, it can reflect the user access patterns and content popularity changes in different geographical areas in different time periods. As described in the above example, edge nodes located in Beijing and its surrounding areas will be given priority at this time, and the suitability of each node as a cache node will be further evaluated based on the current server load of each node and the distance from the user's geographic location. If a node has a low current load and is close to the user, then this node will be given a higher matching score. Finally, an initial list of edge nodes is obtained, which are nodes that are considered to be the most suitable for processing the current request after a comprehensive evaluation based on the user's geographic location, the content characteristics of the request, and the current status of the node. For example, for the popular movie requested by the user in Beijing, an edge node located in the center of Beijing with a low current load and two other backup nodes located near the user's geographic location and also with a low load may be selected to form a multi-node cache and distribution strategy.
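A simplified version of the matching score hinted at above, combining great-circle distance to the user with current node load; the weighting of the two terms, the node list, and the coordinates are assumptions.

```python
import math

def geo_distance_km(a, b):
    # Rough great-circle (haversine) distance; inputs are (lat, lon) in degrees.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

user_location = (39.9, 116.4)                     # Beijing (illustrative)
nodes = {                                         # node -> ((lat, lon), load in [0, 1])
    "bj-center": ((39.91, 116.40), 0.35),
    "tianjin":   ((39.08, 117.20), 0.20),
    "shanghai":  ((31.23, 121.47), 0.10),
}

def score(node):
    (pos, load) = nodes[node]
    dist = geo_distance_km(user_location, pos)
    return 1.0 / (1.0 + dist) * (1.0 - load)      # closer and less loaded is better

initial_nodes = sorted(nodes, key=score, reverse=True)[:2]
print(initial_nodes)
```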

步骤S104、基于缓存空间优先级表,对多个初始边缘节点进行节点布局构建,得到目标节点布局;Step S104: constructing a node layout for multiple initial edge nodes based on the cache space priority table to obtain a target node layout;

具体的,需要说明的是,假设视频流服务平台希望通过边缘计算技术提升用户的观看体验,尤其是减少视频加载时间。该平台已经在全球范围内部署了多个边缘节点,这些节点分布在用户密集的地区,包括主要的城市和人口中心。通过对历史网络流量数据的分析,得到了一张详尽的缓存空间优先级表,表中记录了不同类型的内容在各个时间段和地理位置的访问频率,以及每个边缘节点的性能指标。在这个基础上,目标是利用这张优先级表来对现有的边缘节点进行布局优化,使得最受用户欢迎的内容能够被缓存在最接近用户的节点上。例如,如果缓存空间优先级表显示,美剧和韩剧在亚洲特别受欢迎,而欧洲用户更倾向于观看当地的电视节目和电影,那么这个信息就会直接影响到内容在亚洲和欧洲节点上的缓存策略。Specifically, it should be noted that a video streaming service platform wants to improve the user's viewing experience through edge computing technology, especially to reduce video loading time. The platform has deployed multiple edge nodes around the world, which are distributed in areas with dense users, including major cities and population centers. Through the analysis of historical network traffic data, a detailed cache space priority table is obtained, which records the access frequency of different types of content in various time periods and geographical locations, as well as the performance indicators of each edge node. On this basis, the goal is to use this priority table to optimize the layout of existing edge nodes so that the most popular content can be cached on the nodes closest to the users. For example, if the cache space priority table shows that American and Korean dramas are particularly popular in Asia, while European users prefer to watch local TV shows and movies, then this information will directly affect the content caching strategy on Asian and European nodes.

因此,需要对每个初始边缘节点的当前状态进行评估,包括节点的存储容量、带宽、以及与用户的地理距离。然后,结合缓存空间优先级表中的数据,平台可以确定哪些内容应该被优先缓存到哪些节点上。在这个过程中,一个关键的决策是如何平衡节点之间的负载,同时确保高优先级的内容可以被缓存在距离用户最近的节点上。Therefore, it is necessary to evaluate the current status of each initial edge node, including the node's storage capacity, bandwidth, and geographical distance from the user. Then, combined with the data in the cache space priority table, the platform can determine which content should be cached on which nodes first. In this process, a key decision is how to balance the load between nodes while ensuring that high-priority content can be cached on the node closest to the user.

以亚洲市场为例,考虑到美剧和韩剧的高流行度,平台可能会决定在靠近用户密集区域的边缘节点上增加这些内容的缓存量。这可能意味着对某些节点进行存储扩容,或者在高峰时段临时增加带宽配置,以应对突发的流量需求。同时,平台还需要考虑内容的更新频率,确保新发布的热门剧集能够及时被分发到关键节点上。此外,目标节点布局的构建也需要考虑到节点之间的内容冗余问题。为了提高系统的鲁棒性和可靠性,同一内容可能需要在多个节点上进行备份。然而,过度的冗余会占用宝贵的存储资源,因此需要通过智能算法来优化冗余策略,确保内容分布的合理性。Taking the Asian market as an example, considering the high popularity of American and Korean dramas, the platform may decide to increase the cache capacity of these contents on edge nodes close to user-dense areas. This may mean expanding the storage capacity of certain nodes or temporarily increasing bandwidth configuration during peak hours to cope with sudden traffic demands. At the same time, the platform also needs to consider the update frequency of content to ensure that newly released popular dramas can be distributed to key nodes in a timely manner. In addition, the construction of the target node layout also needs to take into account the content redundancy between nodes. In order to improve the robustness and reliability of the system, the same content may need to be backed up on multiple nodes. However, excessive redundancy will take up valuable storage resources, so it is necessary to optimize the redundancy strategy through intelligent algorithms to ensure the rationality of content distribution.
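To make the layout step concrete, the greedy sketch below places each content item on a fixed number of best-scoring nodes that still have spare capacity, which is one simple way to balance redundancy against storage. The replica count, capacities, and demand scores are placeholder values.

```python
# Remaining cache capacity (GB) per initial edge node, and a per-node demand score per item.
capacity = {"asia-1": 200, "asia-2": 120, "eu-1": 150}
demand_score = {
    "kdrama_s2": {"asia-1": 0.9, "asia-2": 0.8, "eu-1": 0.2},
    "eu_series": {"asia-1": 0.1, "asia-2": 0.2, "eu-1": 0.9},
}
content_size = {"kdrama_s2": 40, "eu_series": 30}
REPLICAS = 2                                      # assumed redundancy level

layout = {}
for content, scores in demand_score.items():
    placed = []
    for node in sorted(scores, key=scores.get, reverse=True):
        if len(placed) == REPLICAS:
            break
        if capacity[node] >= content_size[content]:
            capacity[node] -= content_size[content]
            placed.append(node)
    layout[content] = placed

print(layout)   # e.g. {'kdrama_s2': ['asia-1', 'asia-2'], 'eu_series': ['eu-1', 'asia-2']}
```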

步骤S105、获取目标用户的数据访问请求,并基于目标节点布局对数据访问请求进行访问节点匹配,得到目标访问节点。Step S105: Obtain a data access request from a target user, and perform access node matching on the data access request based on the target node layout to obtain a target access node.

具体的,其中,假设一个国际新闻网站希望通过边缘缓存技术加速其内容的全球分发。该网站拥有多个边缘节点分布在世界各地,每个节点都缓存了部分网站内容,以便于用户能够从最近的节点获取数据。当用户发起一个访问请求时,例如一个位于B地区的用户试图访问一篇关于最新国际新闻的文章,首先通过用户的IP地址识别出其大致的地理位置。随后,将这一信息与目标节点布局相比对。在这个例子中,目标节点布局是基于之前通过流量特征数据和边缘节点资源分配策略构建的,其详细指明了每个节点的地理位置、所缓存内容的类型和优先级,以及节点的性能指标等信息。Specifically, assume that an international news website wants to accelerate the global distribution of its content through edge caching technology. The website has multiple edge nodes distributed around the world, and each node caches part of the website content so that users can get data from the nearest node. When a user initiates an access request, for example, a user in region B tries to access an article about the latest international news, the user's approximate geographic location is first identified through the user's IP address. This information is then compared with the target node layout. In this example, the target node layout is based on the traffic feature data and edge node resource allocation strategy previously constructed, which details the geographic location of each node, the type and priority of the cached content, and the performance indicators of the node.

根据目标节点布局,能够迅速识别出位于欧洲的几个边缘节点,这些节点由于地理位置的优势,成为处理这位B地区用户请求的首选。但是,仅仅基于地理位置的匹配还不够,还需要考虑节点的当前负载情况和所请求内容的缓存状态。例如,如果最近的某个节点当前正处于高负载状态,或者所请求的新闻文章尚未在该节点缓存,那么可能需要选择另一个条件更加匹配的节点。According to the target node layout, several edge nodes in Europe can be quickly identified. These nodes become the first choice for processing the request of the user in region B due to their geographical advantages. However, matching based on geographical location alone is not enough. The current load of the node and the cache status of the requested content also need to be considered. For example, if the nearest node is currently under high load, or the requested news article is not yet cached at the node, then another node with a better matching condition may need to be selected.

进一步地,会通过双重深度Q网络(DDQN)算法,对这些潜在的节点进行综合评估和匹配。DDQN算法通过考虑节点的负载、内容的缓存优先级和用户的地理位置等多个因素,计算出每个节点处理该请求的预期效率。通过这种方式,算法最终确定出最佳的目标访问节点。在我们的例子中,假设选择了一个位于距离B地区较近的C地区的边缘节点作为目标访问节点,因为该节点当前负载适中,且已经缓存了用户请求的新闻文章,能够以最快的速度响应用户的请求。Furthermore, these potential nodes are comprehensively evaluated and matched through the Dual Deep Q Network (DDQN) algorithm. The DDQN algorithm calculates the expected efficiency of each node in processing the request by considering multiple factors such as the node load, the cache priority of the content, and the user's geographical location. In this way, the algorithm ultimately determines the best target access node. In our example, assume that an edge node located in area C, which is closer to area B, is selected as the target access node because the node currently has a moderate load and has cached the news article requested by the user, and can respond to the user's request at the fastest speed.
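A full trained DDQN agent is beyond the scope of a short example, so the snippet below stands in for it with a hand-written expected-efficiency score over load, cache state, and distance, mirroring the factors listed above; the weights and node data are illustrative.

```python
candidates = {
    # node: (distance_km to user, load in [0, 1], requested article already cached?)
    "eu-west":    (450, 0.85, True),
    "eu-central": (900, 0.40, True),
    "eu-north":   (1300, 0.20, False),
}

def expected_efficiency(node):
    dist, load, cached = candidates[node]
    cache_bonus = 1.0 if cached else 0.3          # a cache miss means fetching from origin
    return cache_bonus * (1.0 - load) / (1.0 + dist / 1000.0)

target_node = max(candidates, key=expected_efficiency)
print(target_node)                                # picks the moderately loaded node that has the article cached
```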

本申请实施例中,通过资源分配和数据传输策略,显著提升了内容传输的效率和质量。首先,通过采集和分析历史网络流量数据,构建的边缘节点资源分配策略能够动态地调整资源分配,确保在用户需求高峰期,资源得到有效利用,避免了资源的浪费或过载现象,从而在保障服务质量的同时,也提高了系统的整体运行效率。其次,利用深度学习算法,特别是双重深度Q网络算法对数据请求频率和服务器负载进行实时分析,使得数据缓存和分发策略能够根据网络状态的实时变化智能调整,这不仅提升了数据传输的稳定性和可靠性,也极大地减少了因网络拥塞引起的延迟,确保了用户能够获得更流畅的访问体验。此外,通过优化的缓存空间优先级表和针对性的节点布局构建,本方案能够更精准地匹配用户的数据访问需求,无论是在用户地理位置数据的精确识别还是在预期访问内容数据的有效缓存上,都大大提高了边缘计算资源的利用率和内容传输的时效性。In the embodiment of the present application, the efficiency and quality of content transmission are significantly improved through resource allocation and data transmission strategies. First, by collecting and analyzing historical network traffic data, the constructed edge node resource allocation strategy can dynamically adjust resource allocation to ensure that resources are effectively utilized during the peak period of user demand, avoiding resource waste or overload, thereby ensuring the quality of service while also improving the overall operating efficiency of the system. Secondly, a deep learning algorithm, especially a dual-depth Q network algorithm, is used to analyze the data request frequency and server load in real time, so that the data cache and distribution strategy can be intelligently adjusted according to the real-time changes in the network status, which not only improves the stability and reliability of data transmission, but also greatly reduces the delay caused by network congestion, ensuring that users can get a smoother access experience. In addition, through the optimized cache space priority table and targeted node layout construction, this solution can more accurately match the user's data access needs, whether in the accurate identification of user geographic location data or in the effective caching of expected access content data, it greatly improves the utilization of edge computing resources and the timeliness of content transmission.

在一具体实施例中,执行步骤S101的过程可以具体包括如下步骤:In a specific embodiment, the process of executing step S101 may specifically include the following steps:

(1)采集历史网络流量数据,并对历史网络流量数据进行信息遍历,得到历史访问频率、历史访问时间以及历史访问内容类型;(1) Collect historical network traffic data and traverse the historical network traffic data to obtain historical access frequency, historical access time, and historical access content type;

(2)对历史访问频率进行向量转换,得到第一向量,对历史访问时间进行向量转换,得到第二向量,同时,对历史访问内容类型进行向量转换,得到第三向量;(2) Perform vector conversion on the historical access frequency to obtain a first vector, perform vector conversion on the historical access time to obtain a second vector, and at the same time, perform vector conversion on the historical access content type to obtain a third vector;

(3)对第一向量以及第二向量进行向量融合,得到第一融合向量;(3) performing vector fusion on the first vector and the second vector to obtain a first fused vector;

(4)对第一向量以及第三向量进行向量融合,得到第二融合向量;(4) performing vector fusion on the first vector and the third vector to obtain a second fused vector;

(5)对第二向量以及第三向量进行向量融合,得到第三融合向量;(5) performing vector fusion on the second vector and the third vector to obtain a third fused vector;

(6)将第一融合向量输入预置的卷积长短期记忆网络模型进行高频访问时间特征提取,得到高频访问时间特征;(6) Inputting the first fused vector into a preset convolutional long short-term memory network model to extract high-frequency access time features, thereby obtaining high-frequency access time features;

(7)将第二融合向量输入卷积长短期记忆网络模型进行高频访问内容特征提取,得到高频访问内容特征;(7) Inputting the second fused vector into the convolutional long short-term memory network model to extract the high-frequency access content features, thereby obtaining the high-frequency access content features;

(8)将第三融合向量输入卷积长短期记忆网络模型进行长时访问内容提取,得到长时访问内容特征;(8) Inputting the third fused vector into the convolutional long short-term memory network model to extract long-term access content and obtain long-term access content features;

(9)对高频访问时间特征以及高频访问内容特征进行特征融合,得到第一融合特征,同时,对高频访问内容特征以及长时访问内容特征进行特征融合,得到第二融合特征;(9) Fusing the high-frequency access time features and the high-frequency access content features to obtain a first fused feature, and fusing the high-frequency access content features and the long-term access content features to obtain a second fused feature;

(10)将第一融合特征输入卷积长短期记忆网络模型的特征识别层进行流量模式识别,得到流量模式,同时,将第二融合特征输入卷积长短期记忆网络模型的特征识别层进行内容趋势识别,得到访问内容趋势;(10) Inputting the first fused feature into the feature recognition layer of the convolutional long short-term memory network model to perform traffic pattern recognition to obtain the traffic pattern. Meanwhile, inputting the second fused feature into the feature recognition layer of the convolutional long short-term memory network model to perform content trend recognition to obtain the access content trend.

(11)将流量模式以及访问内容趋势合并为流量特征数据。(11) Combine traffic patterns and access content trends into traffic feature data.

具体的,假设正在管理一个大型的视频流平台,该平台拥有遍布全球的边缘节点,目标是为用户提供低延迟、高质量的视频观看体验。为了实现这个目标,首先需要对平台的历史网络流量数据进行采集,这些数据包括了用户访问视频内容的频率、访问的具体时间点以及所访问内容的类型等信息。例如,历史数据可能显示,在周末晚间,来自特定地理区域的用户访问某热门剧集的频率显著高于其他时间段。采集到这些历史网络流量数据后,任务是对这些数据进行深入的信息遍历,将它们转换为可供进一步分析的向量形式。通过对历史访问频率、访问时间以及访问内容类型的深入分析,可以将这些数据转换为三个向量:第一向量代表历史访问频率,第二向量代表历史访问时间,而第三向量则代表历史访问内容类型。进一步地,通过对这三个基础向量的不同组合进行融合,得到了三个融合向量:第一融合向量是访问频率和访问时间的融合,它可以揭示特定内容在不同时间段内的访问模式;第二融合向量是访问频率和内容类型的融合,它帮助识别哪些类型的内容更受用户欢迎;而第三融合向量则是访问时间和内容类型的融合,指示了不同类型的内容在何时受到更多的关注。为了从这些融合向量中提取深层次的流量特征,将它们输入到一个预置的卷积长短期记忆网络(C-LSTM)模型中。这种模型结合了卷积神经网络(CNN)在图像和序列数据上的强大表达能力和长短期记忆网络(LSTM)对时间序列数据长期依赖的处理能力,使其特别适合处理和分析时间序列数据。通过C-LSTM模型的处理,能够从第一融合向量中提取高频访问时间特征,从第二融合向量中提取高频访问内容特征,而从第三融合向量中则提取了长时访问内容特征。Specifically, suppose you are managing a large video streaming platform with edge nodes all over the world, with the goal of providing users with a low-latency, high-quality video viewing experience. To achieve this goal, you first need to collect the platform's historical network traffic data, which includes information such as the frequency of users accessing video content, the specific time of access, and the type of content accessed. For example, historical data may show that users from a specific geographic area access a popular series significantly more frequently on weekend evenings than at other time periods. After collecting these historical network traffic data, the task is to conduct an in-depth information traversal of these data and convert them into vector form for further analysis. Through in-depth analysis of historical access frequency, access time, and access content type, these data can be converted into three vectors: the first vector represents the historical access frequency, the second vector represents the historical access time, and the third vector represents the historical access content type. Furthermore, by fusing different combinations of the three basic vectors, three fusion vectors are obtained: the first fusion vector is a fusion of access frequency and access time, which can reveal the access pattern of specific content in different time periods; the second fusion vector is a fusion of access frequency and content type, which helps identify which types of content are more popular with users; and the third fusion vector is a fusion of access time and content type, indicating when different types of content receive more attention. In order to extract deep traffic features from these fusion vectors, they are input into a pre-set convolutional long short-term memory network (C-LSTM) model. This model combines the powerful expression ability of convolutional neural network (CNN) on image and sequence data and the processing ability of long short-term memory network (LSTM) on long-term dependency of time series data, making it particularly suitable for processing and analyzing time series data. Through the processing of the C-LSTM model, high-frequency access time features can be extracted from the first fusion vector, high-frequency access content features can be extracted from the second fusion vector, and long-term access content features can be extracted from the third fusion vector.

最后,将高频访问时间特征与高频访问内容特征进行进一步的特征融合,得到了第一融合特征,这个特征反映了哪些内容在何时最受欢迎;同时,将高频访问内容特征与长时访问内容特征融合,得到了第二融合特征,这个特征揭示了哪些内容具有持续的受欢迎度。通过再次将这些融合特征输入C-LSTM模型进行分析,最终得到了流量模式和访问内容趋势的综合反映,这些流量特征数据为如何优化边缘节点的内容缓存策略提供了宝贵的指导。Finally, the high-frequency access time feature and the high-frequency access content feature are further fused to obtain the first fusion feature, which reflects which content is most popular at what time; at the same time, the high-frequency access content feature is fused with the long-term access content feature to obtain the second fusion feature, which reveals which content has sustained popularity. By inputting these fusion features into the C-LSTM model for analysis again, we finally get a comprehensive reflection of the traffic pattern and access content trend. These traffic feature data provide valuable guidance on how to optimize the content caching strategy of edge nodes.
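As a rough stand-in for the convolutional long short-term memory network used here, the PyTorch sketch below chains a 1-D convolution (local patterns) with an LSTM (temporal dependencies) and a linear head that emits a feature vector. The layer sizes, sequence length, and three-channel fused input are assumptions, not the model from the application.

```python
import torch
import torch.nn as nn

class ConvLSTMExtractor(nn.Module):
    """Conv1d front-end for local patterns plus an LSTM for temporal dependencies."""
    def __init__(self, in_channels=3, hidden=32, feat_dim=16):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, hidden, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, feat_dim)

    def forward(self, x):                 # x: (batch, channels, time)
        h = torch.relu(self.conv(x))      # (batch, hidden, time)
        h = h.transpose(1, 2)             # (batch, time, hidden) for the LSTM
        out, _ = self.lstm(h)
        return self.head(out[:, -1])      # feature vector from the last time step

# One week of hourly samples, three fused channels (frequency, time, content-type encodings).
fused = torch.randn(1, 3, 24 * 7)
features = ConvLSTMExtractor()(fused)
print(features.shape)                     # torch.Size([1, 16])
```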

例如,如果发现在周末晚间,科幻和奇幻类电影在美国西海岸地区尤为受欢迎,而这一趋势在过去几个月持续存在,那么就可以针对性地在该地区的边缘节点上优先缓存这类电影,确保当用户在周末晚间寻求观看这些内容时,可以从地理位置最近的节点获取数据,从而显著加速内容的加载速度,提升用户体验。这种基于深入分析历史网络流量数据并提取流量特征的方法,让能够更智能、更高效地管理边缘缓存资源,最终达到加速内容访问的目的。For example, if it is found that science fiction and fantasy movies are particularly popular on the West Coast of the United States on weekend nights, and this trend has continued over the past few months, then such movies can be cached preferentially on edge nodes in the region to ensure that when users seek to watch these contents on weekend nights, they can obtain data from the geographically closest node, thereby significantly accelerating the loading speed of content and improving user experience. This method based on in-depth analysis of historical network traffic data and extraction of traffic characteristics enables smarter and more efficient management of edge cache resources, ultimately achieving the goal of accelerating content access.

需要说明的是,第一融合向量通过结合访问频率和时间信息,能够揭示出特定内容在不同时间段的受欢迎程度,第二融合向量通过融合访问频率和内容类型,使得模型不仅能预测哪些类型的内容将会受到高频访问,还能辨识出潜在的流行趋势,从而提前做好内容缓存的准备,第三融合向量通过结合访问时间和内容类型的特征,可以识别在特定时间段需求增长的内容类型,从而优化了针对早高峰或晚高峰等特定时段的缓存策略,确保用户在任何时候都能快速访问到感兴趣的内容。It should be noted that the first fusion vector can reveal the popularity of specific content in different time periods by combining access frequency and time information. The second fusion vector can not only predict which types of content will be frequently accessed, but also identify potential popular trends by fusing access frequency and content type, so as to prepare for content caching in advance. The third fusion vector can identify the content type with increasing demand in a specific time period by combining the characteristics of access time and content type, thereby optimizing the caching strategy for specific time periods such as morning peak or evening peak, ensuring that users can quickly access the content of interest at any time.
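The fusion operator itself is not specified; a minimal interpretation, assumed here, is plain concatenation of the per-dimension vectors, as sketched below with one-hot encodings.

```python
import numpy as np

freq = np.array([0.9, 0.2, 0.4])          # normalized access frequency per content bucket
time_of_day = np.array([0.0, 0.0, 1.0])   # one-hot: morning / afternoon / evening
ctype = np.array([1.0, 0.0])              # one-hot: video / text

fusion_freq_time = np.concatenate([freq, time_of_day])    # first fused vector
fusion_freq_type = np.concatenate([freq, ctype])          # second fused vector
fusion_time_type = np.concatenate([time_of_day, ctype])   # third fused vector
print(fusion_freq_time, fusion_freq_type, fusion_time_type)
```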

In a specific embodiment, the process of executing step S102 may specifically include the following steps:

(1) Perform pattern recognition on the traffic pattern to obtain pattern recognition data, where the pattern recognition data includes periodic traffic variation values and traffic peak values;

(2) Predict bandwidth demand from the periodic traffic variation values and the traffic peak values with a time-series prediction algorithm, obtaining the bandwidth demand for multiple periods;

(3) Identify peak periods from the bandwidth demand of the multiple periods, obtaining the peak traffic periods;

(4) Identify trough periods from the bandwidth demand of the multiple periods, obtaining the trough traffic periods;

(5) Identify high-demand content types from the access content trend, obtaining a high-demand content type set;

(6) Generate an initial resource allocation strategy based on the peak traffic periods and the high-demand content type set;

(7) Correct the initial resource allocation strategy with the trough traffic periods, obtaining the edge node resource allocation strategy;

(8) Prioritise the high-demand content type set with the edge node resource allocation strategy to obtain ranking data, and generate the cache space priority table from the ranking data.

Specifically, pattern recognition is performed on the collected historical network traffic data through the convolutional long short-term memory (C-LSTM) model, extracting two key pieces of information: periodic traffic variation values and traffic peak values. These reflect both the regular rhythm and the irregular spikes of user access and provide the basis for understanding traffic fluctuations. For example, suppose an online education platform sees a traffic peak between 8 pm and 10 pm every Monday, a pattern driven by popular live courses launched in that slot. A long short-term memory network (LSTM) is then used to predict the bandwidth demand of specific future periods from the historical periodic variations and peaks, taking into account not only past traffic patterns but also possible trends and seasonal factors. In this way it can be predicted that the Monday-evening peak will persist over the coming months and may even grow.

Peak and trough traffic periods are then identified from the bandwidth demand of the multiple periods: comparing predicted demand across time periods, periods whose demand is significantly above average are marked as peak traffic periods, while periods significantly below average are marked as trough periods. At the same time, analysing the access content trend identifies the content types with high user demand and forms the high-demand content type set, for example news coverage that surges because of a recent event, or topics that become popular seasonally. From this, an initial resource allocation strategy is generated which, given the peak traffic periods and the high-demand content type set, guides how the cache resources of edge nodes should be allocated first. For example, if the forecast shows that technology news peaks every Monday evening, the relevant edge nodes are designated to cache that content preferentially.

However, to keep the system efficient and robust overall, the influence of trough traffic periods on the initial resource allocation strategy must also be considered, and the strategy corrected accordingly. The correction balances resource utilisation across periods and ensures that valuable cache resources are not wasted during troughs. Finally, the edge node resource allocation strategy is used to prioritise the high-demand content type set and to generate the cache space priority table, which directly dictates which content should be cached first and on which edge nodes, so that users can reach it as quickly as possible.
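By way of illustration, steps (3), (4) and (8) can be sketched as follows; the one-standard-deviation threshold and the demand scores are assumptions, not values taken from the embodiment.

```python
import numpy as np

def classify_periods(demand: np.ndarray, k: float = 1.0):
    """Split periods into peaks and troughs relative to mean demand (k*std threshold is an assumption)."""
    mean, std = demand.mean(), demand.std()
    peaks   = np.flatnonzero(demand > mean + k * std)
    troughs = np.flatnonzero(demand < mean - k * std)
    return peaks, troughs

def build_priority_table(demand_scores: dict) -> dict:
    """Rank high-demand content types: highest demand gets the smallest priority number."""
    ranked = sorted(demand_scores, key=demand_scores.get, reverse=True)
    return {content_type: rank for rank, content_type in enumerate(ranked, start=1)}

# Hypothetical predicted bandwidth demand (Gbps) for twelve two-hour periods.
demand = np.array([3, 4, 5, 9, 12, 11, 4, 3, 2, 2, 3, 4], dtype=float)
peak_ids, trough_ids = classify_periods(demand)
priority_table = build_priority_table({"sci-fi": 0.9, "news": 0.7, "sports": 0.4})
```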

In a specific embodiment, the process of executing step S103 may specifically include the following steps:

(1) Collect the user data of the target user, where the user data includes user geographic location data and data on the content the user is expected to access;

(2) Identify the server computing region from the user geographic location data, obtaining the server computing region;

(3) Collect the edge node information within the server computing region, obtaining regional node data;

(4) Fuse the user geographic location data with the expected access content data, obtaining a user demand fingerprint;

(5) Traverse the node resources in the regional node data to obtain the node computing resources of the multiple regional nodes, and at the same time collect the resource load data of those nodes;

(6) Screen the resource load data of the multiple regional nodes against the user demand fingerprint, obtaining the multiple initial edge nodes.

Specifically, the user data includes the user's geographic location data and the content the user intends to access. For example, when a user in region A tries to watch a newly released Hollywood film, the user's device reports the location as region A, and because the user searched for or clicked the film's link, the intended content is also known.

The server computing region is then identified from the geographic location data. This step determines the best service area for the user so that the most suitable edge nodes can be assigned to the request. In the example above, the user is identified as being in region A, so region A and its surroundings are marked as the server computing region. Detailed data is then collected for every edge node within that region, including each node's location, available computing resources and current resource load, which yields a complete view of the regional node data.

Next, the geographic location data and the expected access content data are fused into a user demand fingerprint. The fingerprint reflects the user's concrete needs and preferences, including the type of content the user wants and where it is being requested from; in the example it records that the user is in region A and intends to watch a new film. The resources of all edge nodes in the region are then traversed, covering each node's computing resources and current load. Finally, guided by the user demand fingerprint, the resource load data of the regional nodes is screened to determine the initial edge nodes that best match the user's needs. The screening considers each node's location, available resources and current load; in the example, a node in region A with a relatively low load that already caches the requested film would be selected.
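By way of illustration, the screening of step (6) can be sketched as follows; the node fields, the distance and load thresholds, and the preference order (content already cached, then load, then distance) are assumptions made for the sketch.

```python
from dataclasses import dataclass, field
from math import radians, sin, cos, asin, sqrt

@dataclass
class EdgeNode:
    name: str
    lat: float
    lon: float
    load: float                                   # 0.0 idle .. 1.0 saturated
    cached_types: set = field(default_factory=set)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def screen_nodes(nodes, user_lat, user_lon, wanted_type, max_load=0.8, max_km=800):
    """Keep nearby, non-overloaded nodes; prefer those already caching the wanted content type."""
    scored = []
    for n in nodes:
        dist = haversine_km(user_lat, user_lon, n.lat, n.lon)
        if n.load <= max_load and dist <= max_km:
            scored.append((wanted_type not in n.cached_types, n.load, dist, n))
    scored.sort(key=lambda item: item[:3])
    return [n for *_, n in scored]

nodes = [
    EdgeNode("edge-a1", 34.05, -118.24, 0.35, {"movie"}),
    EdgeNode("edge-a2", 34.10, -118.30, 0.70, set()),
]
initial_edge_nodes = screen_nodes(nodes, 34.0, -118.2, "movie")
```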

In a specific embodiment, the process of executing step S105 may specifically include the following steps:

(1) Obtain the data access request of the target user and parse it, obtaining the data request frequency and the current server load data;

(2) Obtain the field-of-view data of the target user, collect the popularity values of the video content to be accessed, and input the field-of-view data and the popularity value set into a preset deep deterministic policy gradient algorithm to predict the focus area of the video content, obtaining the user focus area;

(3) Extract the content to be cached from the video content to be accessed based on the user focus area, obtaining the content to be cached;

(4) Cache the content to be cached according to the target node layout and, based on the target node layout, input the data request frequency and the current server load data into a preset double deep Q-network algorithm for access node matching, obtaining the target access node.

Specifically, suppose you operate a popular video streaming service and the challenge is to respond quickly to users' video requests worldwide, especially at peak times with large user numbers. First, the target user's data access requests are captured, including how frequently the user requests videos, together with real-time monitoring of the current server load. For example, a user may repeatedly try to access a newly released film while the load data shows the servers are under heavy traffic. Parsing this information reveals both the user's access intent and the servers' current processing capacity.

Obtaining the user's field-of-view data helps predict the parts of a video the user is most interested in. At the same time, the popularity value of the video content to be accessed is collected; it is derived from the video's view count, sharing frequency and user ratings. Feeding the field-of-view data and the popularity values into the preset deep deterministic policy gradient (DDPG) algorithm predicts the user focus area, that is, the part of the video the user is most likely to watch. For example, if large numbers of users concentrate on the same highlight while watching a film, that focus area can be identified.

The highly watched segments are then extracted from the full video as the content to be cached, as sketched below. Not only does the whole film need caching; the especially popular segments should be prioritised so that they load quickly when users access them.
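By way of illustration, the selection of focus segments to cache can be sketched as follows; the per-segment watch counts, the top-k cut-off and the minimum-share threshold are assumptions.

```python
def pick_segments_to_cache(segment_watch_counts, top_k=3, min_share=0.05):
    """Return indices of the most-watched segments (e.g. 10-second chunks) worth caching."""
    total = sum(segment_watch_counts) or 1
    ranked = sorted(range(len(segment_watch_counts)),
                    key=lambda i: segment_watch_counts[i], reverse=True)
    return [i for i in ranked[:top_k] if segment_watch_counts[i] / total >= min_share]

# Hypothetical watch counts for twelve consecutive segments of one film.
watch_counts = [120, 95, 400, 380, 90, 60, 55, 300, 80, 70, 65, 50]
segments_to_cache = pick_segments_to_cache(watch_counts)   # -> [2, 3, 7]
```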

Finally, according to the target node layout, the data request frequency and the current server load data are combined, and the preset double deep Q-network (DDQN) algorithm matches the request against all available edge nodes to determine the node best suited to caching the content. The choice weighs each node's geographic location, current load and network latency to the user, so that the selected node can both carry the new load and serve the content fastest. For example, if a node in region D currently has a low load and is geographically closest to the user requesting the film, it is selected as the target node for caching the film's focus segments.

In a specific embodiment, the step of inputting the field-of-view data and the popularity value set into the preset deep deterministic policy gradient algorithm to predict the video content focus area may specifically include the following steps:

(1) Obtain the field-of-view data of the target user and extract the view count of the video content to be accessed, obtaining the video view count;

(2) Extract the like count of the video content to be accessed, obtaining the video like count;

(3) Extract the complete-view count of the video content to be accessed, obtaining the video complete-view count;

(4) Compute the video popularity values from the video view count, like count and complete-view count using a preset weight parameter set, obtaining the popularity value set;

(5) Compute the video-frame proportion value of the field-of-view data, obtaining the video-frame proportion value corresponding to the field-of-view data;

(6) Based on the video-frame proportion value, input the field-of-view data and the popularity value set into the deep deterministic policy gradient algorithm and compute the centre-point coordinates of the focus area with the centre-point coordinate formula, obtaining the focus-area centre-point coordinates, where the centre-point coordinate formula is as follows:

[the formula appears only as an image in the source and is not reproduced here]

where n is the number of field-of-view data points and the remaining terms are, in order, the centre-point coordinates of the focus area, the coordinates of the i-th field-of-view data point, the angular offset of the i-th field-of-view data point relative to the centre of the video content, a first exponent parameter, a second exponent parameter and a weight factor; the weight factor is computed by the following formula:

[the formula appears only as an image in the source and is not reproduced here]

whose inputs are the popularity value corresponding to the i-th field-of-view data point, the Euclidean distance from the i-th field-of-view data point to the centre of the video content, and a correction coefficient;

(7) Generate multiple candidate focus areas based on the focus-area centre-point coordinates, and compute a reward score for each candidate focus area with a preset reward function, obtaining the reward score data of each candidate focus area;

(8) Screen the multiple candidate focus areas based on the reward score data of each candidate focus area, obtaining the user focus area.

Specifically, suppose you manage a video-sharing platform and want to accelerate access to video content through edge caching. The platform first obtains the target user's field-of-view data, which can be collected from interaction signals such as mouse movement and scrolling while watching, or touch behaviour on mobile devices. For example, when a user watches a science-fiction film, the moments where the user pauses or rewatches a scene can be recorded, capturing the field-of-view data. A multi-dimensional popularity analysis is then performed on the video content to be accessed: the view count gives the total number of times the video has been watched, the like count reflects another dimension of its appeal, and the complete-view count reveals how fully viewers watch it. Together these indicators form the popularity value set, providing a rounded picture of the video's popularity. For instance, a newly released short film with high views and likes but few complete views suggests strong curiosity that the content does not sustain.

The field-of-view data and the popularity value set are then combined and analysed with the deep deterministic policy gradient (DDPG) algorithm. The centre-point coordinate formula pinpoints the part of the video the user is most interested in, namely the focus-area centre-point coordinates. For example, if the analysis shows that viewers concentrate their field-of-view data on a particular explosion scene in the science-fiction film and that scene's popularity value is high, the scene is identified as the user's focus area. Multiple candidate focus areas are generated around the centre-point coordinates, and a preset reward function scores each candidate; this step evaluates the relevance and appeal of each candidate so that the area best representing the user's interest is chosen. For a video containing several highly popular scenes, a reward score is computed for each and the highest-scoring scene is selected as the focus area. Finally, the candidates are screened by their reward scores to determine the final user focus area. Based on this focus area, the content to be cached is extracted and its placement on edge nodes is optimised according to the target node layout; for example, once several popular explosion scenes are identified as focus areas, they are cached preferentially on the edge nodes closest to the target user so that the content is reachable with minimal latency.
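By way of illustration, since the centre-point and weight-factor formulas are not reproduced in the source text, the sketch below uses an assumed weighted-centroid form that is merely consistent with the quantities described above: each gaze point is weighted by its popularity, damped by its distance to the frame centre and by its angular offset. The weights for views, likes and complete views are likewise assumptions.

```python
import numpy as np

def popularity(views, likes, complete_views, w=(0.5, 0.3, 0.2)):
    """Weighted popularity score from views, likes and complete views (the weights are assumptions)."""
    return w[0] * views + w[1] * likes + w[2] * complete_views

def focus_center(points, pop_vals, theta, alpha=1.0, beta=1.0, lam=1.0):
    """Assumed weighted-centroid form: popularity (exponent alpha) boosts a gaze point,
    distance to the frame centre (exponent beta, offset lam) and angular offset damp it."""
    dist = np.linalg.norm(points, axis=1)                      # distance to frame centre at (0, 0)
    w = (pop_vals ** alpha) * np.cos(theta) / (lam + dist ** beta)
    w = np.clip(w, 1e-9, None)                                 # keep weights positive
    cx = np.sum(w * points[:, 0]) / np.sum(w)
    cy = np.sum(w * points[:, 1]) / np.sum(w)
    return cx, cy

# Hypothetical gaze samples (frame-centred coordinates), popularity inputs and angular offsets.
pts   = np.array([[0.10, 0.20], [-0.30, 0.10], [0.25, -0.15]])
pops  = popularity(np.array([9e5, 4e5, 7e5]), np.array([3e4, 1e4, 2e4]), np.array([2e5, 9e4, 1.5e5]))
theta = np.radians([10.0, 35.0, 20.0])
center = focus_center(pts, pops, theta)
```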

In a specific embodiment, the step of inputting the data request frequency and the current server load data into the preset double deep Q-network algorithm for access node matching may specifically include the following steps:

(1) Analyse the network connection state of the target node layout, obtaining the current network connection state;

(2) Match a data caching strategy based on the network connection state and cache the content to be cached through the data caching strategy, where the data caching strategy includes a data allocation sub-strategy and a data transmission sub-strategy;

(3) Input the data request frequency into the double deep Q-network algorithm to build a frequency state vector, obtaining a multi-dimensional frequency state vector;

(4) Input the current server load data into the double deep Q-network algorithm to build a load state vector, obtaining a multi-dimensional load state vector;

(5) Input the multi-dimensional frequency state vector and the multi-dimensional load state vector into the main network of the double deep Q-network algorithm for action execution, obtaining action execution state data and reward parameters;

(6) Input the action execution state data and the reward parameters into the target network of the double deep Q-network algorithm for dynamic analysis of the maximum Q value, obtaining the maximum-Q-value action;

(7) Extract the target Q value of the maximum-Q-value action and, based on the target Q value, perform access node matching on the target node layout to obtain the target access node.

Specifically, the current network connection state is obtained by analysing the network connection status of the target node layout. This step matters because fluctuations in network state directly affect the efficiency and stability of content delivery. In a typical scenario, when a user tries to access a popular video held in the edge cache, the connection to the target edge node is checked first, including but not limited to bandwidth utilisation, latency and packet loss rate; these parameters reflect congestion and transmission quality in real time and inform the decisions that follow. A data caching strategy is then matched to the observed network state, comprising a data allocation sub-strategy and a data transmission sub-strategy. For example, when the network is healthy the transmission sub-strategy may choose a higher transfer rate to shorten waiting time, whereas under poor conditions a more conservative strategy, such as a lower rate, avoids worsening congestion.
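By way of illustration, that rate selection can be sketched as follows; the thresholds and the returned profile fields are assumptions.

```python
def pick_transfer_profile(bandwidth_util: float, latency_ms: float, loss_rate: float) -> dict:
    """Choose an aggressive or conservative transmission sub-strategy from live link metrics."""
    if bandwidth_util < 0.6 and latency_ms < 50 and loss_rate < 0.01:
        return {"rate_mbps": 50, "parallel_streams": 4}    # healthy link: push harder
    if loss_rate > 0.05 or latency_ms > 200:
        return {"rate_mbps": 5, "parallel_streams": 1}     # degraded link: back off
    return {"rate_mbps": 20, "parallel_streams": 2}        # middle ground

profile = pick_transfer_profile(bandwidth_util=0.4, latency_ms=30, loss_rate=0.002)
```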

The data request frequency and the current server load data are taken as inputs, and the double deep Q-network algorithm builds a frequency state vector and a load state vector from them. These two vectors represent, respectively, the pattern of user requests and the current workload of the edge nodes, and they are the key inputs to the optimisation decision. By learning from past experience, the double deep Q-network can predict the long-term benefit of taking different actions in a given state and therefore choose the best one. With the frequency and load state vectors as input, the algorithm's main network executes actions and outputs action execution state data and reward parameters. This step is the core of finding the optimal caching and transmission strategy: the algorithm tries different strategy combinations and, through continual trial and error, converges on the one that best fits the current network and server state. Finally, the action execution state data and reward parameters are passed to the algorithm's target network for dynamic analysis of the maximum Q value, which determines the best action, namely the caching and transmission strategy that maximises long-term benefit in the current state. For the popular-video request mentioned earlier, the best action might be to distribute the content from the edge node that is both closest to the user and in the best connection state, or to adjust the video bitrate to keep playback smooth when the network deteriorates.
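By way of illustration, the interplay of the main and target networks in a double deep Q-network can be sketched as follows: the main network selects the next action (an edge-node index) and the target network evaluates it. The state dimension, network sizes and reward values are assumptions.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Maps a state (frequency + load features) to one Q value per candidate edge node."""
    def __init__(self, state_dim: int, n_nodes: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_nodes))

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

def ddqn_target(main_net, target_net, reward, next_state, gamma=0.99):
    """Double-DQN target: the main network picks the next action, the target network evaluates it."""
    with torch.no_grad():
        best_action = main_net(next_state).argmax(dim=1, keepdim=True)
        target_q = target_net(next_state).gather(1, best_action)
    return reward + gamma * target_q

# state = [request-frequency features ; server-load features]
state_dim, n_nodes = 8, 5
main_net, target_net = QNet(state_dim, n_nodes), QNet(state_dim, n_nodes)
s_next = torch.randn(1, state_dim)
y = ddqn_target(main_net, target_net, torch.tensor([[1.0]]), s_next)
chosen_node = main_net(s_next).argmax(dim=1).item()   # node index with the largest Q value
```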

Having described the edge-cache-based content acceleration method of the embodiments of the present application, the edge-cache-based content acceleration apparatus of the embodiments is described below. Referring to FIG. 2, one embodiment of the edge-cache-based content acceleration apparatus includes:

a collection module 201, configured to collect historical network traffic data and perform traffic feature extraction on the historical network traffic data to obtain traffic feature data;

a generation module 202, configured to construct an edge node resource allocation strategy from the traffic feature data and to generate a cache space priority table according to the edge node resource allocation strategy;

a matching module 203, configured to collect the user data of a target user and to perform edge node matching on the user data through the edge node resource allocation strategy to obtain multiple initial edge nodes;

a construction module 204, configured to construct a node layout for the multiple initial edge nodes based on the cache space priority table to obtain a target node layout;

an acquisition module 205, configured to obtain the data access request of the target user and to perform access node matching on the data access request based on the target node layout to obtain a target access node.

Through the cooperation of these components, the resource allocation and data transmission strategies markedly improve the efficiency and quality of content delivery. First, by collecting and analysing historical network traffic data, the constructed edge node resource allocation strategy can adjust resource allocation dynamically, ensuring that resources are used effectively at demand peaks and avoiding waste or overload, which safeguards service quality while raising overall system efficiency. Second, deep learning algorithms, in particular the double deep Q-network, analyse data request frequency and server load in real time, so the caching and distribution strategy adapts intelligently to changing network conditions; this improves the stability and reliability of transmission and greatly reduces congestion-induced delay, giving users a smoother access experience. In addition, the optimised cache space priority table and the targeted node layout allow the scheme to match users' data access needs more precisely, from the accurate identification of user geographic location data to the effective caching of expected access content, which substantially improves the utilisation of edge computing resources and the timeliness of content delivery.

The present application also provides an edge-cache-based content acceleration device that includes a memory and a processor. The memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the edge-cache-based content acceleration method of the above embodiments.

The present application also provides a computer-readable storage medium, which may be a non-volatile or a volatile computer-readable storage medium, storing instructions that, when run on a computer, cause the computer to perform the steps of the edge-cache-based content acceleration method.

Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.

If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied as a software product stored in a storage medium and including several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above embodiments are only used to illustrate the technical solutions of the present application and do not limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. The content acceleration method based on the edge cache is characterized by comprising the following steps of:
collecting historical network flow data, and extracting flow characteristics of the historical network flow data to obtain flow characteristic data;
Constructing an edge node resource allocation strategy according to the flow characteristic data, and generating a cache space priority table according to the edge node resource allocation strategy;
collecting user data of a target user, and performing edge node matching on the user data through the edge node resource allocation strategy to obtain a plurality of initial edge nodes;
based on the buffer space priority table, carrying out node layout construction on a plurality of initial edge nodes to obtain a target node layout;
and acquiring the data access request of the target user, and performing access node matching on the data access request based on the target node layout to obtain a target access node.
2. The content acceleration method based on edge cache as set forth in claim 1, wherein the collecting historical network traffic data and extracting traffic features of the historical network traffic data to obtain traffic feature data includes:
collecting the historical network flow data, and performing information traversal on the historical network flow data to obtain historical access frequency, historical access time and historical access content types;
Performing vector conversion on the historical access frequency to obtain a first vector, performing vector conversion on the historical access time to obtain a second vector, and performing vector conversion on the historical access content type to obtain a third vector;
vector fusion is carried out on the first vector and the second vector to obtain a first fusion vector;
vector fusion is carried out on the first vector and the third vector to obtain a second fusion vector;
vector fusion is carried out on the second vector and the third vector, and a third fusion vector is obtained;
Inputting the first fusion vector into a preset convolution long-short-term memory network model to extract high-frequency access time characteristics, so as to obtain the high-frequency access time characteristics;
Inputting the second fusion vector into the convolution long-short-term memory network model to extract high-frequency access content characteristics, so as to obtain the high-frequency access content characteristics;
Inputting the third fusion vector into the convolution long-short-term memory network model to extract long-term access content, so as to obtain long-term access content characteristics;
performing feature fusion on the high-frequency access time feature and the high-frequency access content feature to obtain a first fusion feature, and performing feature fusion on the high-frequency access content feature and the long-time access content feature to obtain a second fusion feature;
inputting the first fusion features into a feature recognition layer of the convolution long-short-term memory network model to perform flow pattern recognition to obtain a flow pattern, and simultaneously inputting the second fusion features into a feature recognition layer of the convolution long-short-term memory network model to perform content trend recognition to obtain an access content trend;
And merging the flow mode and the access content trend into the flow characteristic data.
3. The content acceleration method based on edge cache as set forth in claim 2, wherein the constructing an edge node resource allocation policy from the traffic feature data and generating a cache space priority table according to the edge node resource allocation policy includes:
Performing pattern recognition on the flow pattern to obtain pattern recognition data, wherein the pattern recognition data comprises: periodic flow change values and flow peaks;
Predicting the bandwidth demand quantity of the periodic flow variation value and the flow peak value through a time sequence prediction algorithm to obtain the bandwidth demand quantity of a plurality of time periods;
carrying out peak time period identification on the bandwidth demand of a plurality of time periods to obtain a flow peak time period;
Carrying out low-valley period identification on the bandwidth demand of a plurality of periods to obtain a flow low-valley period;
identifying the high-demand content type of the access content trend to obtain a high-demand content type set;
Generating an initial resource allocation policy based on the traffic rush hour and the set of high demand content types;
Performing policy correction on the initial resource allocation policy through the traffic trough period to obtain the edge node resource allocation policy;
And carrying out priority ranking on the high-demand content type set through the edge node resource allocation strategy to obtain ranking data, and generating a buffer space priority table through the ranking data.
4. The content acceleration method based on edge cache as set forth in claim 1, wherein the collecting the user data of the target user and performing edge node matching on the user data by the edge node resource allocation policy to obtain a plurality of initial edge nodes includes:
Collecting user data of a target user, wherein the user data comprises user geographic position data and content data which the user expects to access;
Carrying out server calculation region identification on the user geographic position data to obtain a server calculation region;
acquiring data of edge node information in the server calculation area to obtain area node data;
Carrying out data fusion on the user geographic position data and the user expected access content data to obtain user demand fingerprints;
Traversing node resources of the regional node data to obtain node computing power resources corresponding to a plurality of regional nodes, and collecting resource load data of the plurality of regional nodes;
and carrying out node screening on the resource load data of a plurality of regional nodes through the user demand fingerprints to obtain a plurality of initial edge nodes.
5. The content acceleration method based on edge cache as set forth in claim 1, wherein the obtaining the data access request of the target user and performing access node matching on the data access request based on the target node layout to obtain a target access node includes:
acquiring a data access request of the target user, and analyzing the data access request to obtain data request frequency and current server load data;
acquiring the field angle data of the target user, collecting popularity values of video contents to be accessed, inputting the field angle data and the popularity value set into a preset depth deterministic strategy gradient algorithm to predict a video content focus area, and obtaining a user focus area;
Extracting the content to be cached from the video content to be accessed based on the user focus area to obtain the content to be cached;
And carrying out data caching on the content to be cached through the target node layout, and inputting the data request frequency and the current server load data into a preset dual-depth Q network algorithm to carry out access node matching based on the target node layout so as to obtain a target access node.
6. The content acceleration method based on edge cache according to claim 5, wherein the obtaining the view angle data of the target user, collecting popularity values of the video content to be accessed, inputting the view angle data and the popularity value set into a preset depth deterministic strategy gradient algorithm to predict a focus area of the video content, and obtaining a focus area of the user includes:
acquiring the field angle data of the target user, and extracting the video browsing amount of the video content to be accessed to obtain the video browsing amount;
Extracting the video praise amount of the video content to be accessed to obtain the video praise amount;
extracting the video complete viewing quantity of the video content to be accessed to obtain the video complete viewing quantity;
Carrying out video popularity value calculation on the video browsing amount, the video praise amount and the video complete watching amount through a preset weight parameter set to obtain a popularity value set;
Calculating a video frame proportion value of the field angle data to obtain a video frame proportion value corresponding to the field angle data;
Inputting the view angle data and the popularity value set into the deep deterministic policy gradient algorithm based on the video frame proportion value, and calculating the center point coordinates of the focus area through a center point coordinate calculation formula to obtain the center point coordinates of the focus area, wherein the center point coordinate calculation formula is as follows:
[the formula appears only as an image in the source and is not reproduced here]
where n is the number of data points of the field angle data, and the remaining terms are, in order, the center point coordinates of the focus area, the coordinates of the i-th field angle data point, the angular offset of the i-th field angle data point relative to the center of the video content, a first exponent parameter, a second exponent parameter, and a weight factor, the weight factor being calculated by the following formula:
[the formula appears only as an image in the source and is not reproduced here]
wherein the inputs to the weight factor formula are the popularity value corresponding to the i-th field angle data point, the Euclidean distance from the i-th field angle data point to the center of the video content, and a correction coefficient;
generating a plurality of candidate focus areas based on the focus area center point coordinates, and calculating the reward score of each candidate focus area through a preset reward function to obtain reward score data of each candidate focus area;
and carrying out region screening on a plurality of candidate focus regions based on the reward score data of each candidate focus region to obtain the user focus region.
7. The content acceleration method based on edge cache according to claim 5, wherein the performing data caching on the content to be cached by the target node layout, performing access node matching on the data request frequency and the current server load data input preset dual-depth Q network algorithm based on the target node layout, to obtain a target access node, includes:
analyzing the network connection state of the target node layout to obtain the current network connection state;
Based on the network connection state matching data caching strategy, carrying out data caching on the content to be cached through the data caching strategy, wherein the data caching strategy comprises a data distribution sub-strategy and a data transmission sub-strategy;
inputting the data request frequency into the dual depth Q network algorithm to construct a frequency state vector, so as to obtain a multidimensional frequency state vector;
inputting the current server load data into the dual-depth Q network algorithm to construct a load state vector, so as to obtain a multi-dimensional load state vector;
Inputting the multidimensional frequency state vector and the multidimensional load state vector into a main network of the dual-depth Q network algorithm for action execution to obtain action execution state data and rewarding parameters;
Inputting the action execution state data and the rewarding parameter into a target network of the dual-depth Q network algorithm to perform maximum Q value dynamic analysis, so as to obtain maximum Q value action;
And extracting a target Q value of the maximum Q value action, and performing access node matching on the target node layout based on the target Q value to obtain the target access node.
8. An edge cache-based content acceleration apparatus, comprising:
The acquisition module is used for acquiring historical network flow data, and extracting flow characteristics of the historical network flow data to obtain flow characteristic data;
The generating module is used for constructing an edge node resource allocation strategy according to the flow characteristic data and generating a buffer space priority table according to the edge node resource allocation strategy;
The matching module is used for collecting user data of a target user, and carrying out edge node matching on the user data through the edge node resource allocation strategy to obtain a plurality of initial edge nodes;
The construction module is used for constructing node layouts of a plurality of initial edge nodes based on the buffer space priority table to obtain target node layouts;
the acquisition module is used for acquiring the data access request of the target user, and carrying out access node matching on the data access request based on the target node layout to obtain a target access node.
9. An edge cache-based content acceleration apparatus, characterized in that the edge cache-based content acceleration apparatus comprises: a memory and at least one processor, the memory having instructions stored therein;
The at least one processor invoking the instructions in the memory to cause the edge cache based content acceleration device to perform the edge cache based content acceleration method of any of claims 1-7.
10. A computer readable storage medium having instructions stored thereon, which when executed by a processor implement the edge cache based content acceleration method of any of claims 1-7.
CN202410576772.3A 2024-05-10 2024-05-10 Content acceleration method, device, equipment and storage medium based on edge cache Active CN118316890B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410576772.3A CN118316890B (en) 2024-05-10 2024-05-10 Content acceleration method, device, equipment and storage medium based on edge cache

Publications (2)

Publication Number Publication Date
CN118316890A true CN118316890A (en) 2024-07-09
CN118316890B CN118316890B (en) 2024-11-22

Family

ID=91725828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410576772.3A Active CN118316890B (en) 2024-05-10 2024-05-10 Content acceleration method, device, equipment and storage medium based on edge cache

Country Status (1)

Country Link
CN (1) CN118316890B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220394745A1 (en) * 2019-10-25 2022-12-08 Nokia Technologies Oy Pdcch monitoring in unlicensed spectrum for a terminal device with a single active panel
US20220368650A1 (en) * 2021-05-11 2022-11-17 Beijing University Of Posts And Telecommunications Method and Device of Network Resource Allocation
CN116743584A (en) * 2023-08-09 2023-09-12 山东科技大学 Dynamic RAN slicing method based on information sensing and joint calculation caching
CN117675917A (en) * 2023-11-13 2024-03-08 北京工业大学 Adaptive cache update method, device and edge server for edge intelligence
CN117835325A (en) * 2023-12-28 2024-04-05 联想未来通信科技(重庆)有限公司 Information processing method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG Dawu; LI Zeping: "A hierarchical collaborative caching system for mobile streaming media" (一种移动流媒体分层协同缓存系统), Computer and Modernization (计算机与现代化), no. 06, 15 June 2020 (2020-06-15), pages 26-31 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118631650A (en) * 2024-08-08 2024-09-10 中国电信股份有限公司 Network resource configuration method, device, equipment, medium and product
CN119276887A (en) * 2024-10-10 2025-01-07 学科网(北京)股份有限公司 File resource management system and method based on CDN
CN120086291A (en) * 2025-04-30 2025-06-03 清枫(北京)科技有限公司 Multi-platform data real-time synchronization method, device, electronic device and storage medium

Also Published As

Publication number Publication date
CN118316890B (en) 2024-11-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant