
MX2008005091A - Caching memory attribute indicators with cached memory data

Caching memory attribute indicators with cached memory data

Info

Publication number
MX2008005091A
MX2008005091A (application MXMX/A/2008/005091A)
Authority
MX
Mexico
Prior art keywords
cache
address
memory
page
tlb
Prior art date
Application number
MXMX/A/2008/005091A
Other languages
Spanish (es)
Inventor
James Norris Dieffenderfer
Thomas Andrew Sartorius
Rodney Wayne Smith
Jeffrey Todd Bridges
Brian Michael Stempel
Original Assignee
Qualcomm Incorporated
Priority date
Filing date
Publication date
Application filed by Qualcomm Incorporated, Jeffrey Todd Bridges, James Norris Dieffenderfer, Thomas Andrew Sartorius, Rodney Wayne Smith, and Brian Michael Stempel
Publication of MX2008005091A

Abstract

A processing system may include a memory configured to store data in a plurality of pages, a TLB, and a memory cache including a plurality of cache lines. Each page in the memory may include a plurality of lines of memory. The memory cache may permit, when a virtual address is presented to the cache, a matching cache line to be identified from the plurality of cache lines, the matching cache line having a matching address that matches the virtual address. The memory cache may be configured to permit one or more page attributes of a page located at the matching address to be retrieved from the memory cache and not from the TLB, by further storing in each one of the cache lines a page attribute of the line of data stored in the cache line.

Description

CACHING MEMORY ATTRIBUTE INDICATORS WITH CACHED MEMORY DATA

FIELD OF THE INVENTION
The present invention relates to cache memories.
BACKGROUND OF THE INVENTION
In a processing system that supports paged virtual memory, data can be specified using virtual addresses (also referred to as "effective" or "linear" addresses) that occupy a virtual address space of the processing system. The virtual address space is typically larger than the amount of actual physical memory in the system. The operating system in the processing system can manage the physical memory in fixed-size blocks called pages. To translate virtual page addresses into physical page addresses, the processing system may search page tables stored in system memory, which contain the necessary address translation information. A page table can be rather large, because it contains a list of the physical page addresses for all of the virtual page addresses generated by the processing system. Also, page table searches (or "page table walks") involve memory accesses, which can be time consuming.

The processing system may therefore perform address translation using one or more translation lookaside buffers (TLBs), which typically contain a subset of the entries in the page table. A TLB is an address translation cache, that is, a small cache that stores recent mappings from virtual addresses to physical addresses. The processing system can cache a physical address in the TLB after performing a page table search and an address translation. A TLB typically contains a plurality of TLB entries, each containing a virtual page address and the corresponding physical page address. When a TLB receives a virtual page address, it can search its entries to see whether any of the virtual page addresses cached in those entries matches the received virtual page address. If the virtual page address presented to the TLB matches a virtual page address stored in one of the TLB entries, a TLB "hit" occurs; otherwise, a TLB "miss" occurs. Because each TLB search consumes power and processor time, it may be desirable to reduce the frequency of TLB accesses.

A TLB may also store information regarding one or more memory attributes, in addition to the virtual-to-physical address translation information. These memory attributes may include, for example, memory protection features such as read/write/execute permissions. Memory attributes stored in a TLB can be accessed before, or in parallel with, the access to the cache memory. Storing these memory attributes in the TLB, in addition to the virtual-to-physical translation information, increases the number of bits that must be cached in each TLB entry. The more bits that have to be accessed, the slower the TLB search becomes, and the more power it consumes.
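For reference, the following is a minimal C sketch of the conventional arrangement described above, in which each TLB entry caches the permission bits alongside the translation, so that every lookup must also read those extra attribute bits. The structure names, field layout and sizes are assumptions for illustration only, not details taken from the patent.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT  12u           /* assume 4 KB pages (2^12 bytes)       */
#define TLB_ENTRIES 64u           /* assume a small, fully associative TLB */

typedef struct {
    bool     valid;
    uint32_t vpn;                 /* virtual page number (tag)            */
    uint32_t ppn;                 /* physical page number (data)          */
    bool     readable, writable, executable;   /* memory attributes       */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Look up a virtual address; on a hit, return the physical page number and
 * the cached permission bits.  Every access pays for reading the extra
 * attribute bits, which is the cost the description seeks to avoid. */
static bool tlb_lookup(uint32_t vaddr, uint32_t *ppn,
                       bool *r_ok, bool *w_ok, bool *x_ok)
{
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    for (unsigned i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {     /* TLB hit  */
            *ppn  = tlb[i].ppn;
            *r_ok = tlb[i].readable;
            *w_ok = tlb[i].writable;
            *x_ok = tlb[i].executable;
            return true;
        }
    }
    return false;                                    /* TLB miss */
}
```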
SUMMARY OF THE INVENTION
A processing system may include a memory configured to store data in a plurality of pages, each page having a plurality of lines. The processing system may further include a translation lookaside buffer (TLB) and a cache memory that includes a plurality of cache lines. Each of the cache lines may be configured to store an address of one of the memory lines, and to store the line of data located at that address. The cache may be configured to permit, when a virtual address is presented to the cache, a matching cache line to be identified from the plurality of cache lines, the matching cache line having a matching address that matches the virtual address presented to the cache. The cache memory may be configured to permit a page attribute of a page located at the matching address to be retrieved from the cache without accessing the TLB in order to retrieve the page attribute, by additionally storing in each of the cache lines a page attribute of the line of data stored in that cache line.

A method may include accessing a cache memory using a virtual address of a line of data. The method may also include retrieving from the cache a page attribute for the line of data, without accessing the TLB to retrieve the page attribute.
BRIEF DESCRIPTION OF THE FIGURES
Figure 1 illustrates a translation lookaside buffer (TLB) in the context of a virtual memory system.
Figure 2 illustrates one embodiment of a processing system.
Figure 3A illustrates a TLB having TLB entries that are configured to store one or more memory attributes.
Figure 3B illustrates a TLB entry configured to store only address translation information, and not memory attributes.
Figure 4A illustrates cache access in a cache memory.
Figure 4B illustrates a cache configured to store memory attributes as extra bits in its cache lines.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description set forth below in connection with the appended figures is intended to describe various embodiments of a processing system, but is not intended to represent the only possible embodiments. The detailed description includes specific details in order to allow a complete understanding of what is described. However, those skilled in the art will appreciate that some embodiments of the processing system may be practiced without these specific details. In some cases, well-known structures and components are shown in block diagram form in order to illustrate more clearly the concepts being explained.

Figure 1 schematically illustrates a translation lookaside buffer (TLB) 10 in the context of a virtual memory system that includes a physical memory 30 and a page table 20. In virtual memory systems, mappings (or translations) are usually performed between a virtual (or "linear") address space of a computer, that is, the set of all virtual addresses generated by the computer, and the physical address space of the computer's memory. The physical address of a piece of data indicates its actual location within the physical memory 30, and can be provided on a memory bus to write to, or read from, the particular location in the physical memory 30.

In a paged virtual memory system, the data can be viewed as grouped into fixed-length memory blocks commonly referred to as pages 31. For example, if the smallest addressable memory unit is a byte, and a set of sequential addresses refers to a set of sequential memory bytes, then a page can be defined as a block of sequential memory bytes comprising a particular number of bytes. Pages are typically composed of a number of bytes that is a power of two (for example, 2^12 = 4096 bytes, or 4 KB). The pages can be placed in memory so that the beginning of each page is "aligned" with the page size, that is, the address of the first byte of the page is evenly divisible by the number of bytes that make up the page. Therefore, if the page size is 2^N bytes, then the N low-order bits of the page address (that is, the address of the first byte of the page) are always zeros. The remaining, most significant, bits of the address may be referred to as the "page number".

Both the virtual address space and the physical address space can be divided into pages, and the mapping of virtual addresses onto physical addresses can be achieved by mapping the virtual page number to the physical page number and concatenating the N low-order bits of the virtual address onto the physical page number. That is to say, corresponding virtual and physical byte addresses always have the same N low-order bits, where N is the base-2 logarithm of the page size in bytes. The virtual address space and the physical address space are thus divided into blocks of contiguous addresses, each virtual address providing a virtual page number, and each corresponding physical page number indicating the location within the memory 30 of a particular page 31 of data. The page table 20 in the physical memory 30 may contain the physical page numbers corresponding to all of the virtual page numbers of the virtual memory system, that is, it may contain the mappings between virtual page addresses and the corresponding physical page addresses for all virtual page addresses in the virtual address space.
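As a purely illustrative example of the address arithmetic just described, the sketch below assumes a 4 KB page size; the constants and the helper function are assumptions for illustration, not limitations of the system.

```c
#include <stdint.h>

#define PAGE_SHIFT  12u                       /* log2(4096)               */
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1u) /* 0x00000FFF               */

/* Replace the virtual page number with the physical page number and keep
 * the N low-order offset bits unchanged. */
static uint32_t translate(uint32_t vaddr, uint32_t phys_page_number)
{
    uint32_t offset = vaddr & OFFSET_MASK;    /* bits 11..0, untranslated */
    return (phys_page_number << PAGE_SHIFT) | offset;
}

/* Example: vaddr 0x00403A10 has virtual page number 0x00403 and offset
 * 0xA10; if the page table maps that page to physical page 0x1F2, the
 * physical address is (0x1F2 << 12) | 0xA10 = 0x001F2A10. */
```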
Typically, the page table 20 may contain a plurality of page table entries (PTEs) 21, each PTE 21 pointing to a page 31 in the physical memory 30 that corresponds to a particular virtual address. Access to the PTEs 21 stored in the page table 20 in the physical memory 30 usually requires memory bus transactions, which can be expensive in terms of processor cycle time and power consumption. The number of memory bus transactions can be reduced by accessing the TLB 10 instead of the physical memory 30. The TLB 10 usually contains a subset of the virtual-to-physical address mappings that are stored in the page table 20, and usually contains a plurality of TLB entries 12. When an instruction has a virtual address 22 that needs to be translated into a corresponding physical address during the execution of a program, the TLB 10 is typically accessed to search for the virtual address 22 among the TLB entries 12 stored in the TLB 10. The virtual address 22 may ordinarily be contained within an address register.

As shown in Figure 1, each TLB entry 12 may have a tag field 14 and a data field 16. The tag field 14 may specify the virtual page number, and the data field 16 may indicate the physical page number corresponding to the tagged virtual page. If the TLB 10 finds, among its TLB entries, the particular physical page number corresponding to the virtual page number contained in the virtual address 22 presented to the TLB, then a TLB "hit" occurs, and the physical page address may be retrieved from the data field 16 of the TLB 10. If the TLB 10 does not contain the particular physical page address corresponding to the virtual page number in the virtual address 22 presented to the TLB, then a TLB "miss" occurs, and a search of the page table 20 in the physical memory 30 may be performed.

Figure 2 is a diagram of a processing system 100 that includes a general purpose register set 105, a TLB 110, a virtually tagged cache 125, and a main physical memory 130. The general purpose registers 105 may be contained within a CPU 117 in the processing system 100. In the illustrated embodiment, the TLB 110 is also shown as located within the CPU 117, although in other embodiments the TLB 110 may be located within a separate memory management unit (MMU) (not shown), which can be located either outside or inside the CPU. The memory 130 includes a page table 120 of the kind described in conjunction with Figure 1. The cache 125 is a small amount of fast memory that can be used to hold the data used most frequently by the processing system 100. Due to locality of reference, which is an attribute of many computer programs, the cache 125 can effectively shorten the latency inherent in most memory accesses.

Caches usually work by selecting a certain number of candidate lines from the cache memory and comparing the address tags stored with each line against the desired memory address. If the candidate lines do not comprise all the lines in the cache, then some selection method is used, usually using some bits of the physical or virtual address. If the selection method uses only bits of the virtual address, the cache memory is said to be "virtually indexed". If the method uses bits of the (translated) physical address, the cache memory is said to be "physically indexed". Likewise, the address tags stored with each cache line can be either virtual addresses or physical addresses.
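The hit/miss behavior described above might be sketched as follows. The helper functions are assumed stand-ins for the TLB 10, the page table 20 and the fill logic; they are not defined by the patent.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT  12u                        /* assume 4 KB pages       */
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1u)

/* Assumed helpers standing in for the hardware described in the text: */
extern bool tlb_lookup_translation(uint32_t vpn, uint32_t *ppn); /* TLB entries 12 */
extern bool page_table_walk(uint32_t vpn, uint32_t *ppn);        /* page table 20  */
extern void tlb_fill(uint32_t vpn, uint32_t ppn);                /* cache new entry */

static bool virtual_to_physical(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn = vaddr >> PAGE_SHIFT, ppn;

    if (!tlb_lookup_translation(vpn, &ppn)) {  /* TLB miss                 */
        if (!page_table_walk(vpn, &ppn))       /* slow walk in memory 30   */
            return false;                      /* no valid mapping: fault  */
        tlb_fill(vpn, ppn);                    /* remember the translation */
    }
    *paddr = (ppn << PAGE_SHIFT) | (vaddr & OFFSET_MASK);
    return true;
}
```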
Caches that use the physical address for indexing or tagging must, of course, translate the virtual address into the physical address before it can be used. Virtually indexed, virtually tagged (VIVT) caches do not need to produce a physical address from a virtual one before accessing the cache and determining whether the desired data are present. The cache 125, in the illustrated embodiment of the processing system 100, is a virtually tagged cache. It should be appreciated that other embodiments of the processing system 100 may use caches that are not virtually tagged or virtually indexed.

The register set 105 usually includes a plurality of address registers, an example of which is shown as the address register 122. As explained above in connection with Figure 1, the address register 122 can present a virtual address to the TLB 110, which can search among its plurality of TLB entries to discover whether any of them has a tag that matches the virtual address presented by the address register 122. If the search results in a TLB match, that is, a TLB entry containing the physical address corresponding to the virtual address in the address register 122 is found, it is then possible to access the cache memory 125 to locate the data having the physical address retrieved from the TLB 110. Most of the time, the data can be recovered from the cache 125, although sometimes the data may not be stored in the cache 125, in which case the main memory 130 is accessed. The TLB 110 can be accessed before, or in parallel with, an access to the cache memory 125. Access to the virtually tagged cache 125 is illustrated functionally in Figure 2 using a dashed arrow that sends a virtual address from the address register 122 to the cache 125.

Figure 3A illustrates a TLB 180 and an exemplary virtual address register 22. The TLB 180 typically contains a number of rows or lines of TLB entries; an example TLB entry is illustrated using the reference number 182. The illustrated virtual address register 22 is a 32-bit register, although address registers in general may have more or fewer than 32 bits. The address register 22 may include page offset information in its lower-order bits and page number information in its higher-order bits. The page number specifies in which of the plurality of pages in the main memory 30 the desired data are found. The page offset specifies the location within the particular page (which is located at the page number specified in the highest-order bits of the address register 22) at which the desired word or byte is located. The address register 22 may be a 32-bit register in which the lowest-order bits (bits 9 to 0 in this example) contain the page offset information, and the highest-order bits, namely bits 31 to 10, contain the page number information. A comparator 190 can compare the tag fields of the TLB entries 182 with the page number bits of the virtual address, to see whether the virtual address indicated by the tag field of any of the TLB entries 182 in the TLB 180 matches the virtual address indicated by the highest-order bits in the address register 22. The page offset information does not need to be translated, because it is the same in the virtual environment as in the physical environment.
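The load path through the components of Figure 2 could be sketched roughly as below. All helper functions are assumed placeholders for the cache 125, the translation path through the TLB 110 (for example, the routine sketched earlier), and the main memory 130.

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed helpers corresponding to the blocks in Figure 2: */
extern bool vivt_cache_lookup(uint32_t vaddr, uint32_t *data);    /* cache 125  */
extern bool virtual_to_physical(uint32_t vaddr, uint32_t *paddr); /* TLB 110    */
extern uint32_t main_memory_read(uint32_t paddr);                 /* memory 130 */
extern void vivt_cache_fill(uint32_t vaddr, uint32_t data);

static bool load_word(uint32_t vaddr, uint32_t *data)
{
    if (vivt_cache_lookup(vaddr, data))       /* hit: no translation needed */
        return true;

    uint32_t paddr;
    if (!virtual_to_physical(vaddr, &paddr))  /* miss: translate first      */
        return false;                         /* translation fault          */

    *data = main_memory_read(paddr);          /* fetch from main memory     */
    vivt_cache_fill(vaddr, *data);            /* install in the cache       */
    return true;
}
```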
Although a TLB is basically a cache of virtual-to-physical address mappings for a processing system, it has become customary for a TLB to also cache one or more memory attributes that are defined for the physical region or page, in addition to the address translation information. These memory attributes may include, for example, read, write and execute permissions. The storage of one or more memory attributes in the TLB entry 182 is shown in Figure 3A using dashed lines.

A TLB may have a multi-level structure (not shown), in which a relatively small TLB is used for most memory accesses, and backup is provided by one or more larger, higher-level TLBs that are used when the first-level TLB misses. If misses occur successively in all of the higher-level TLBs, the page table in the main memory may be accessed; that is, the search for an address translation can continue until a valid translation entry is found.

Reducing the number of bits that must be cached in each TLB entry is desirable, because the larger the number of bits that have to be accessed, the slower the TLB search becomes and the more power it consumes. In fact, if the number of bits that must be accessed from the TLB can be reduced to zero for some configurations, then the TLB could be removed, or at least accessed less frequently, potentially saving power, area and complexity. In a virtually indexed, virtually tagged cache, as shown in the processing system 100 in Figure 2, a cache search may not require an address translation unless the desired data are not in the cache, that is, a cache miss occurs. In such a configuration, the only thing the TLB may conventionally be required to produce, for each cache lookup, is the set of memory attributes required to execute the memory-accessing instruction, that is, the read/write/execute permissions. In one embodiment of a processing system, these memory attributes are not stored in a TLB but rather in an alternate location.

Figure 3B illustrates a TLB entry 112 in the TLB 110 used in the processing system 100 (shown in Figure 2). As can be seen from Figure 3B, the TLB entry 112 is configured to store only address translation information (in the TAG and DATA fields) and not memory attributes.

Figure 4A schematically illustrates cache access in a cache 220. Caches can be divided into smaller segments called lines. Each cache line 228 in a cache memory usually contains an address tag indicating a memory address that specifies a particular location within the main memory, and a copy of the data located in the main memory at the memory address contained in that cache line. The tagging and indexing procedure for the cache is similar to the tagging and indexing procedure for the TLB, because a TLB is basically a cache of address mappings. The cache 220 can be configured to allow cache access using a virtual address. In other words, when a virtual address is presented to the cache 220, the cache memory 220 can be configured to allow a matching cache line to be identified from the plurality of cache lines. The matching cache line may be the cache line whose address tag indicates an address that matches the virtual address presented to the cache. The cache can be configured to allow one or more page attributes of a page located at the matching address to be retrieved from the cache, and not from the TLB.
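A minimal sketch of the cache access just described, under an assumed direct-mapped geometry (32-byte lines, 256 sets), is shown below; the sizes and names are illustrative assumptions rather than details from the patent.

```c
#include <stdint.h>
#include <stdbool.h>

#define LINE_BYTES  32u          /* assumed line size            */
#define NUM_SETS    256u         /* assumed, direct-mapped cache */
#define OFFSET_BITS 5u           /* log2(LINE_BYTES)             */
#define INDEX_BITS  8u           /* log2(NUM_SETS)               */

typedef struct {
    bool     valid;
    uint32_t tag;                /* address tag stored with the line */
    uint8_t  data[LINE_BYTES];   /* copy of the data at that address */
} cache_line_t;

static cache_line_t cache[NUM_SETS];

/* Present an address to the cache: select a candidate line with the index
 * bits, then compare its stored tag with the tag bits of the address. */
static bool cache_lookup(uint32_t addr, uint8_t *byte_out)
{
    uint32_t offset =  addr                 & (LINE_BYTES - 1u);
    uint32_t index  = (addr >> OFFSET_BITS) & (NUM_SETS   - 1u);
    uint32_t tag    =  addr >> (OFFSET_BITS + INDEX_BITS);
    cache_line_t *line = &cache[index];

    if (line->valid && line->tag == tag) {   /* tag check: matching line */
        *byte_out = line->data[offset];
        return true;                         /* cache hit                */
    }
    return false;                            /* cache miss               */
}
```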
This can be done by storing in each cache line a page attribute of the page of data stored in the cache line, in addition to the address tag and the data. As seen in Figure 4A, the lowest-order bits of an address register 210 presented to the cache 220 may contain information related to the offset within a particular cache line 228. The offset information may be used to select a byte within a multi-byte cache line 228; for single-byte cache lines, no offset bits are needed. The following bits provide index information to select a particular cache line (or set of cache lines) from among all the cache lines in the cache. Finally, the address bits in the tag portion can be used to execute a tag check against the tags stored in the cache lines 228 in the cache 220. In a virtually indexed, virtually tagged cache, neither the tag nor the index needs to be translated, so the cache can operate concurrently with the TLB. In a virtually indexed, physically tagged cache, the virtual address in an address register can be used to access the line in the cache, and the physical address can be used for tagging. In the virtually indexed, physically tagged cache, indexing may occur concurrently with the TLB or other memory management unit, but the output of the TLB (or other memory management unit) may be necessary for the tag check.

In one embodiment of the processing system 100, the memory attributes are stored as extra bits in the cache lines of the cache memory. Figure 4B schematically illustrates a cache 125 used in one embodiment of the processing system 100 (shown in Figure 2). As seen in Figure 4B, instead of storing the memory attributes in a TLB, they are stored in the cache, extending the cache lines to contain not only the address and the copy of the data, but also the attributes for each cache line. Each cache line 135 in Figure 4B is configured to store one or more memory attributes as extra bits in the cache line. These memory attributes can include authorization criteria such as whether authorization can be granted to perform an operation on the data, for example, whether the data can be read, whether new data can be written over the existing data, or whether an instruction (for example, ADD or MULTIPLY) can be executed using the existing data. Memory attributes can also indicate whether authorization for an operation can be granted to a particular operating mode (for example, a "supervisor" or privileged mode, in contrast to a "user" or non-privileged mode). In other words, memory attributes can indicate whether a user is allowed access to the data stored in that particular cache line, or whether only the supervisor has access. In addition to read/write/execute and user/supervisor mode permissions, memory attributes can also provide other types of information, including but not limited to information regarding cacheability and write-allocation policy for other cache levels between the aforementioned cache memory and the actual system memory. In a processing system that has a virtually indexed, virtually tagged instruction cache, just by way of example, the CPU would conventionally need to access a TLB only in order to obtain the read/write/execute permission attributes and compare them against the characteristics of the application that requested the instruction fetch.
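A hedged sketch of such an extended cache line, building on the cache_line_t sketch above, follows; the field names, bit widths and single-bit encodings are illustrative assumptions, not the patent's own definitions.

```c
#include <stdint.h>
#include <stdbool.h>

/* Extends the cache_line_t sketch above: each line also carries a copy of
 * its page's attribute bits, so a hit yields data and permissions together. */
typedef struct {
    bool     valid;
    uint32_t tag;
    uint8_t  data[32];           /* LINE_BYTES, as in the sketch above  */
    /* page attributes duplicated into the line as extra bits:          */
    unsigned readable   : 1;
    unsigned writable   : 1;
    unsigned executable : 1;
    unsigned user_ok    : 1;     /* user (vs. supervisor-only) access   */
    unsigned cacheable  : 1;     /* cacheability in outer cache levels  */
} attr_cache_line_t;

typedef enum { ACC_READ, ACC_WRITE, ACC_EXECUTE } access_t;

/* Same indexing and tag check as cache_lookup above, but returns the
 * matching line (or NULL on a miss) so its attribute bits can be read. */
extern attr_cache_line_t *attr_cache_lookup(uint32_t vaddr);   /* assumed */

/* Check a requested operation against the attributes held in the line;
 * no TLB access is needed because the bits live in the cache line itself. */
static bool access_permitted(const attr_cache_line_t *line,
                             access_t kind, bool user_mode)
{
    if (user_mode && !line->user_ok)
        return false;                        /* supervisor-only page     */
    switch (kind) {
    case ACC_READ:    return line->readable;
    case ACC_WRITE:   return line->writable;
    case ACC_EXECUTE: return line->executable;
    }
    return false;
}
```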
By placing a copy of these attributes in each cache line, the need for a TLB lookup can be eliminated for instruction fetches that hit in the cache. A TLB lookup may be necessary only to fill an instruction cache line by accessing the next level of memory, because the attributes would eventually have to be consulted in order to authorize the running application to execute the fetched instructions (and the translated address would be necessary to access the physical memory). It should be appreciated that the cache memory 220 is not limited to a virtually indexed, virtually tagged instruction cache; any cache that allows access through a virtual address can be used. In summary, the memory attributes described above are stored as extra bits in each line of a cache, and are not stored in a TLB, obviating the need to retrieve those attributes from the TLB, at least at a first level. Avoiding the need to store these attributes in a TLB can result in lower power, area and/or system complexity overall.

The above description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the processing system described above. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of what has been described. Therefore, the processing system is not intended to be limited to the embodiments shown here, but is to be accorded the full scope consistent with the claims, wherein reference to an element in the singular does not mean "one and only one" unless specifically so stated, but rather "one or more". All structural and functional equivalents to the elements of the various embodiments described in this description, which are known or will later become known to those skilled in the art, are expressly incorporated herein by reference and are intended to be encompassed by the claims. Furthermore, nothing described herein is intended to be dedicated to the public regardless of whether such description is explicitly recited in the claims. No claim element shall be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "step for".
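Tying the pieces together, the following is a rough sketch of the resulting fetch path, reusing the attr_cache_lookup and access_permitted sketches above; the fill helpers are assumed placeholders. On a hit, the permission check uses only the bits stored in the line; the TLB is consulted only when a line must be filled.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

/* Assumed fill-path helpers; the attr_cache_line_t type and the lookup and
 * permission-check routines come from the sketch above. */
extern bool tlb_translate(uint32_t vaddr, uint32_t *paddr,
                          attr_cache_line_t *attrs_out);
extern attr_cache_line_t *attr_cache_line_fill(uint32_t vaddr, uint32_t paddr,
                                               const attr_cache_line_t *attrs);

static bool fetch_instruction(uint32_t vaddr, bool user_mode, uint32_t *insn)
{
    attr_cache_line_t *line = attr_cache_lookup(vaddr);

    if (line == NULL) {                      /* miss: fill the line, copying  */
        attr_cache_line_t attrs;             /* the page attributes into it   */
        uint32_t paddr;
        if (!tlb_translate(vaddr, &paddr, &attrs))
            return false;                    /* translation fault             */
        line = attr_cache_line_fill(vaddr, paddr, &attrs);
    }

    /* hit path (and post-fill path): permissions come from the line itself */
    if (!access_permitted(line, ACC_EXECUTE, user_mode))
        return false;                        /* permission fault              */

    uint32_t offset = vaddr & 0x1Cu;         /* word-aligned offset, assumed  */
                                             /* 32-byte line                  */
    memcpy(insn, &line->data[offset], sizeof *insn);
    return true;
}
```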

Claims (20)

NOVELTY OF THE INVENTION
Having described the present invention as above, the content of the following claims is considered novel and is therefore claimed as property:
CLAIMS
1. A processing system comprising: a memory configured to store data in a plurality of pages, each page including a plurality of lines; a translation lookaside buffer (TLB); and a cache memory including a plurality of cache lines, each of the cache lines configured to store an address of one of the memory lines and to store a line of data located at said address; wherein the cache memory is configured to allow, when a virtual address is presented to the cache, a matching cache line to be identified from the plurality of cache lines, the matching cache line having a matching address that matches the virtual address presented to the cache; and wherein the cache memory is configured to allow a page attribute of a page located at said matching address to be retrieved from the cache without accessing the TLB in order to retrieve the page attribute, by further storing in each of the cache lines a page attribute of the line of data stored in said cache line.
2. The processing system according to claim 1, characterized in that at least some of the cache lines are configured to store an address tag that is a virtual address tag.
3. The processing system according to claim 1, characterized in that at least some of the cache lines are configured to store an address tag that is a physical address tag.
4. The processing system according to claim 1, characterized in that at least some of the cache lines in the cache are accessed by a virtual index that allows one or more cache lines to be selected from among the plurality of cache lines.
5. The processing system according to claim 1, characterized in that at least some of the cache lines in the cache are accessed by a physical index that allows one or more cache lines to be selected from among the plurality of cache lines.
6. The processing system according to claim 1, characterized in that one of the page attributes comprises an authorization criterion that indicates whether authorization can be granted for an operation to be carried out on the page of data located at an address indicated by the address tag of the cache line that has said page attribute.
7. The processing system according to claim 6, characterized in that the operation comprises at least one of a read operation, a write operation, and an execute operation.
8. The processing system according to claim 6, characterized in that the authorization criterion further indicates whether the authorization for the operation can be granted to an operator.
9. The processing system according to claim 6, characterized in that the operator comprises at least one of a user and a supervisor.
10. The processing system according to claim 1, further comprising a plurality of additional levels of cache memory, wherein one of the page attributes comprises a cacheability criterion that indicates whether the data, located at a physical address corresponding to a virtual address of one of the cache lines having said memory attributes, can be stored in one or more of the additional cache levels.
11. The processing system according to claim 1, characterized in that the data stored in at least some of the plurality of pages comprise one or more instructions.
12. The processing system according to claim 1, characterized in that the memory contains a page table containing a plurality of page table entries, and wherein each page table entry is configured to store a mapping between a virtual address and a physical address of one of the plurality of pages.
13. The processing system according to claim 12, characterized in that the TLB includes a plurality of TLB entries, wherein each of the TLB entries is configured to store address translation information for a translation of a virtual address into a physical address of one of the pages, and wherein the TLB includes a subset of the plurality of page table entries.
14. The processing system according to claim 13, characterized in that each of the TLB entries is configured to store only the address translation information and not any page attribute.
15. A method comprising: accessing a cache memory using a virtual address of a line of data; and retrieving from the cache a page attribute for the line of data, without accessing a TLB to retrieve the page attribute.
16. The method according to claim 15, characterized in that the act of accessing the cache comprises the act of accessing a virtually tagged cache.
17. The method according to claim 15, characterized in that the act of accessing the cache comprises the act of accessing a physically tagged cache.
18. The method according to claim 15, characterized in that the act of accessing the cache comprises the act of accessing a virtually indexed cache memory.
19. The method according to claim 15, characterized in that the act of accessing the cache comprises the act of accessing a physically indexed cache memory.
20. The method according to claim 15, characterized in that the act of storing a memory attribute comprises the act of storing an authorization criterion that indicates whether authorization can be granted for an operation to be carried out on the data located at the physical address corresponding to the virtual address of the cache line.
MXMX/A/2008/005091A 2005-10-20 2008-04-18 Caching memory attribute indicators with cached memory data MX2008005091A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11254873 2005-10-20 2005-10-20 Caching memory attribute indicators with cached memory data

Publications (1)

Publication Number Publication Date
MX2008005091A 2008-09-02
