Method, apparatus and system for reducing pipeline stalls due to address translation misses

Document No.: 789528 · Published: 2021-04-09

Note: This technology, "Method, apparatus and system for reducing pipeline stalls due to address translation misses," was created on 2019-08-26 by P·戈沙尔, N·乔杜里, R·拉贾戈帕兰, P·埃比勒, B·斯坦普尔, D·S·雷, and T·P·施. Abstract: A method, apparatus, and system for reducing pipeline stalls due to address translation misses are presented. An apparatus includes a memory access instruction pipeline, a translation lookaside buffer (TLB) coupled to the memory access instruction pipeline, and a TLB miss queue coupled to both the TLB and the memory access instruction pipeline. The TLB miss queue is configured to selectively store a first memory access instruction, along with information associated with the first memory access instruction, the first memory access instruction having been removed from the memory access instruction pipeline as a result of missing in the TLB. The TLB miss queue is further configured to reintroduce the first memory access instruction into the memory access instruction pipeline in association with the return of the address translation related to the first memory access instruction.

1. An apparatus, comprising:

a memory access instruction pipeline;

a Translation Lookaside Buffer (TLB) coupled to the memory access instruction pipeline; and

a TLB miss queue coupled to the TLB and to the memory access instruction pipeline;

wherein the TLB miss queue is configured to selectively store a first memory access instruction and information associated with the first memory access instruction, the first memory access instruction having been removed from the memory access instruction pipeline due to the first memory access instruction missing in the TLB.

2. The apparatus of claim 1, wherein the TLB miss queue is further configured to reintroduce the first memory access instruction into the memory access instruction pipeline in association with the return of an address translation related to the first memory access instruction.

3. The apparatus of claim 1, wherein the TLB miss queue is further configured to: compare a memory page associated with the first memory access instruction to memory pages associated with all active entries of the TLB miss queue; and

generate a translation request if the memory page associated with the first memory access instruction does not match the memory page associated with any of the active entries of the TLB miss queue; or

suppress a translation request if the memory page associated with the first memory access instruction matches the memory page associated with any of the active entries of the TLB miss queue.

4. The apparatus of claim 1, wherein the TLB miss queue is further configured to: compare a memory page associated with the first memory access instruction to an address translation expected to be received, and if an address translation corresponding to the memory page associated with the first memory access instruction is expected to be received within a particular number of cycles, refrain from storing the first memory access instruction and associated information in the TLB miss queue, and stall the memory access instruction pipeline until the address translation is received.

5. The apparatus of claim 1, wherein the memory access instructions comprise load instructions and store instructions.

6. The apparatus of claim 5, wherein the TLB miss queue is a unified TLB miss queue configured to store both load and store instructions.

7. The apparatus of claim 5, wherein the TLB miss queue comprises: a load TLB miss queue configured to store load instructions that miss in the TLB and their associated information; and a separate store TLB miss queue configured to store store instructions that miss in the TLB and their associated information.

8. The apparatus of claim 7, wherein the load TLB miss queue and the store TLB miss queue are of heterogeneous design.

9. The apparatus of claim 2, wherein the information associated with the first memory access instruction does not include hazard detection information, and wherein the TLB miss queue is further configured to reintroduce the first memory access instruction into the memory access instruction pipeline such that the memory access instruction pipeline will perform hazard detection on the first memory access instruction as if the first memory access instruction were a new instruction.

10. The apparatus of claim 2, wherein the information associated with the first memory access instruction comprises hazard detection information, and wherein the TLB miss queue is further configured to reintroduce the first memory access instruction into the memory access instruction pipeline such that the memory access instruction pipeline does not perform hazard detection on the first memory access instruction as if the first memory access instruction were a new instruction.

11. The apparatus of claim 2, wherein the TLB miss queue is further configured to reintroduce the first memory access instruction into the memory access instruction pipeline a plurality of cycles prior to the return of the address translation related to the first memory access instruction.

12. The apparatus of claim 1, wherein the TLB miss queue is further configured to reintroduce the first memory access instruction into a second memory access instruction pipeline in association with the return of an address translation related to the first memory access instruction.

13. The apparatus of claim 1, integrated into a computing device.

14. The apparatus of claim 13, the computing device further integrated into a device selected from the group consisting of: a mobile phone, a communication device, a computer, a server, a laptop, a tablet, a personal digital assistant, a music player, a video player, an entertainment unit, and a set-top box.

15. A method, comprising:

removing a first memory access instruction that has missed in a Translation Lookaside Buffer (TLB) from a memory access instruction pipeline to make the memory access instruction pipeline available to other memory access instructions; and

selectively storing the first memory access instruction and associated information in a TLB miss queue while waiting for an address translation for the first memory access instruction.

16. The method of claim 15, further comprising reintroducing the first memory access instruction into the memory access instruction pipeline in association with the return of the address translation related to the first memory access instruction.

17. The method of claim 15, further comprising:

comparing a memory page associated with the first memory access instruction to memory pages associated with all active entries of the TLB miss queue; and

generating a translation request for the first memory access instruction if the memory page associated with the first memory access instruction does not match the memory page associated with any of the active entries of the TLB miss queue; or

refraining from generating a translation request for the first memory access instruction if the memory page associated with the first memory access instruction matches the memory page associated with any of the active entries of the TLB miss queue.

18. The method of claim 15, further comprising:

comparing a memory page associated with the first memory access instruction to an address translation expected to be received; and

if the address translation corresponding to the memory page associated with the first memory access instruction is expected to be received within a particular number of cycles, refraining from storing the first memory access instruction and associated information in the TLB miss queue, and stalling the memory access instruction pipeline until the address translation corresponding to the memory page associated with the first memory access instruction is received.

19. The method of claim 16, wherein the information associated with the first memory access instruction does not include hazard detection information, and wherein reintroducing the first memory access instruction into the memory access instruction pipeline is performed at a stage of the memory access instruction pipeline such that, after reintroduction, the memory access instruction pipeline will perform hazard detection on the first memory access instruction as if the first memory access instruction were a new instruction.

20. The method of claim 16, wherein the information associated with the first memory access instruction comprises hazard detection information, and wherein reintroducing the first memory access instruction into the memory access instruction pipeline is performed at a stage of the memory access instruction pipeline such that, after reintroduction, the memory access instruction pipeline will not perform hazard detection on the first memory access instruction as if the first memory access instruction were a new instruction.

21. The method of claim 16, wherein reintroducing the first memory access instruction into the memory access instruction pipeline is performed a plurality of cycles prior to the return of the address translation associated with the first memory access instruction.

22. The method of claim 15, further comprising reintroducing the first memory access instruction into a second memory access instruction pipeline in association with the return of the address translation related to the first memory access instruction.

23. A non-transitory computer readable medium comprising instructions that, when executed by a processor, cause the processor to:

removing a first memory access instruction that has missed in a Translation Lookaside Buffer (TLB) from a memory access instruction pipeline to make the memory access instruction pipeline available to other memory access instructions; and

selectively storing the first memory access instruction and associated information in a TLB miss queue while waiting for an address translation for the first memory access instruction.

24. An apparatus, comprising:

means for executing memory access instructions;

means for caching address translations coupled to the means for executing memory access instructions; and

means for storing an instruction that misses in the means for caching address translations, coupled to the means for caching address translations and to the means for executing memory access instructions;

wherein the means for storing is configured to selectively store a first memory access instruction and information associated with the first memory access instruction, the first memory access instruction having been removed from the means for executing memory access instructions due to the first memory access instruction missing in the means for caching address translations.

Technical Field

Aspects of the present disclosure relate generally to reducing pipeline stalls, and more particularly to reducing pipeline stalls associated with address translation misses.

Background

Modern computing devices may employ virtual memory technology to manage their memory hierarchy. As part of managing virtual memory, such computing devices translate virtual memory addresses used by applications to physical addresses via a memory management unit (MMU). This translation may then be used by a memory queue or similar hardware block to interact with main memory. Since such translations may be needed frequently, mechanisms have been developed to cache known or recently used translations; these are commonly referred to as translation lookaside buffers (TLBs). The TLB acts as a cache for virtual-to-physical translations, which may reduce the latency of memory access operations by avoiding the need to traverse the memory hierarchy to perform a virtual-to-physical address translation each time a memory access operation is encountered, as such traversal may be a relatively long-latency operation.
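The behavior above can be summarized in a small software model. The following C++ sketch is an illustration only, not the disclosed hardware; the 4 KiB page size, map-based storage, and names are assumptions. It shows how a TLB hit translates an address immediately, while a miss signals that a long-latency page table walk is needed:

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

constexpr uint64_t kPageShift = 12;                        // assume 4 KiB pages
constexpr uint64_t kOffsetMask = (1ULL << kPageShift) - 1;

struct Tlb {
    std::unordered_map<uint64_t, uint64_t> entries;        // virtual page -> physical page

    // Returns the physical address on a hit; std::nullopt means a TLB miss,
    // which would require a multi-cycle walk of the page tables by the MMU.
    std::optional<uint64_t> lookup(uint64_t vaddr) const {
        auto it = entries.find(vaddr >> kPageShift);
        if (it == entries.end()) return std::nullopt;
        return (it->second << kPageShift) | (vaddr & kOffsetMask);
    }
};
```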

Further complications arise when a memory access operation's virtual address misses in the TLB and must wait for a translation from the MMU. As noted above, a common way of handling TLB misses is to stall the pipeline of the computing device while waiting for the translation. This means that instructions following the memory access operation are also stalled. However, these subsequent instructions may not cause TLB misses themselves, nor do they necessarily depend on the result of the memory access operation that missed in the TLB. Thus, the cycles during which the processor remains stalled waiting for a translation are effectively wasted: subsequent instructions could have executed during that time, but the memory access operation waiting for its translation blocks the pipeline to which they would be allocated.

Accordingly, it is desirable to provide a mechanism that allows instructions which follow a memory access operation that misses in the TLB, and which do not depend on its result, to execute while the computing device waits for the address translation associated with the TLB miss.

Disclosure of Invention

The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.

In one aspect, an apparatus includes a memory access instruction pipeline, a translation lookaside buffer (TLB) coupled to the memory access instruction pipeline, and a TLB miss queue coupled to both the TLB and the memory access instruction pipeline. The TLB miss queue is configured to selectively store a first memory access instruction and information associated with the first memory access instruction, the first memory access instruction having been removed from the memory access instruction pipeline due to the first memory access instruction missing in the TLB. The TLB miss queue may be further configured to reintroduce the first memory access instruction into the memory access instruction pipeline in association with a return of an address translation related to the first memory access instruction.

In another aspect, a method comprises removing a first memory access instruction that has missed in the TLB from a memory access instruction pipeline, to make the memory access instruction pipeline available to other memory access instructions. The method also includes selectively storing the first memory access instruction and related information in the TLB miss queue while waiting for an address translation for the first memory access instruction. The method may also include reintroducing the first memory access instruction into the memory access instruction pipeline in association with the return of the address translation related to the first memory access instruction.

In yet another aspect, a non-transitory computer-readable medium includes instructions that, when executed by a processor, cause the processor to remove a first memory access instruction that has missed in a TLB from a memory access instruction pipeline, to make the memory access instruction pipeline available to other memory access instructions. The instructions also cause the processor to selectively store the first memory access instruction and associated information in a TLB miss queue while waiting for an address translation for the first memory access instruction.

In yet another aspect, an apparatus comprises: means for executing memory access instructions; means for caching address translations, coupled to the means for executing memory access instructions; and means for storing an instruction that misses in the means for caching address translations, coupled to the means for caching address translations and to the means for executing memory access instructions. The means for storing is configured to selectively store a first memory access instruction and information associated with the first memory access instruction, the first memory access instruction having been removed from the means for executing memory access instructions due to the first memory access instruction missing in the means for caching address translations.

One advantage of one or more of the disclosed aspects is that the disclosed aspects allow for improved throughput of computing devices that implement the TLB miss queue as described above, by removing operations that generate TLB misses from the pipeline and allowing subsequent memory access operations to be performed. In some aspects, this may reduce power consumption and improve overall system performance.

Drawings

FIG. 1 illustrates a block diagram of a computing device configured to reduce pipeline stalls due to address translation misses, in accordance with certain aspects of the present disclosure.

FIG. 2 illustrates a detailed diagram of an example TLB miss queue, in accordance with certain aspects of the present disclosure.

FIG. 3 illustrates a detailed diagram of an implementation of a TLB miss queue associated with a load pipeline and a store pipeline, in accordance with certain aspects of the present disclosure.

FIG. 4 illustrates a block diagram of a method of reducing pipeline stalls due to address translation misses, in accordance with certain aspects of the present disclosure.

FIG. 5 illustrates a system level diagram of a computing device configured to reduce pipeline stalls due to address translation misses, according to certain aspects of the present disclosure.

Detailed Description

Aspects of the inventive teachings herein are disclosed in the following description and related drawings directed to specific aspects. Alternative aspects may be devised without departing from the scope of the inventive concepts herein. Additionally, well-known elements of the environment may not be described in detail or may be omitted so as not to obscure the relevant details of the inventive teachings herein.

The word "exemplary" is used herein to mean "serving as an example, instance, or illustration. Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. Likewise, the term "aspects of the invention" does not require that all aspects of the invention include the discussed feature, advantage or mode of operation.

The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of aspects of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

Further, many aspects are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., Application Specific Integrated Circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Further, the sequences of such acts described herein can be considered to be embodied entirely within any form of computer readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functions described herein. Thus, the various aspects of the invention may be embodied in a number of different forms, all of which are considered to be within the scope of the claimed subject matter. Additionally, for each of the aspects described herein, the corresponding form of any such aspect may be described herein as, for example, "logically configured to" perform the described action.

In this regard, FIG. 1 illustrates a block diagram of a computing device 100 configured to reduce pipeline stalls due to address translation misses, in accordance with certain aspects of the present disclosure. The computing device includes a central processing unit (CPU) 110 coupled to a memory management unit (MMU) 120. The CPU 110 includes a load/store pipeline 112 coupled to both a translation lookaside buffer (TLB) 114 and a TLB miss queue (TMQ) 116. The TLB 114 and TLB miss queue 116 are coupled to each other and to the MMU 120. As will be further described herein, the MMU 120 may be coupled to a main memory system (not shown) and may be configured to perform a page table walk to provide address translations back to the TLB 114 and the TLB miss queue 116.

During operation, the CPU 110 may encounter a memory access instruction (MAI) 111 (i.e., a load or store instruction), which it may dispatch to the load/store pipeline 112. To execute the memory access instruction 111, the load/store pipeline 112 may request an address translation for the memory access instruction 111 from the TLB 114. If the TLB 114 already has an address translation for the memory access instruction 111, it may return the translation to the load/store pipeline 112, and execution of the memory access instruction 111 may continue. However, if the TLB 114 does not have an address translation for the memory access instruction 111, it must request a translation from the MMU 120, which performs a page table walk to determine the translation. The page table walk may take many cycles, and during that time the memory access instruction 111 cannot continue to execute. Meanwhile, other subsequent memory access instructions may be waiting to be dispatched from the CPU 110 to the load/store pipeline 112.

To allow these subsequent memory access instructions to access the load/store pipeline 112 while waiting for the address translation associated with a memory access instruction 111 that has missed in the TLB 114, the memory access instruction 111 is temporarily removed from the load/store pipeline 112 and stored in an entry of the TLB miss queue 116. The TLB miss queue 116 includes a plurality of entries, each of which may store information associated with at least one memory access instruction (e.g., memory access instruction 111). The TLB miss queue 116 may store both load and store instructions in a unified queue, or may maintain separate structures that perform substantially similar miss queue functions for load and store instructions.
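A rough behavioral model of this structure is sketched below in C++, continuing the modeling style above. The entry fields mirror the "demand request" and "with translation" fields described later with respect to FIG. 2; the struct layout, the four-entry size, and the instruction fields are illustrative assumptions, not the patent's hardware:

```cpp
#include <cstdint>

// Minimal stand-in for a load or store instruction removed from the pipeline.
struct MemAccessInstr {
    bool     is_load = true;
    uint64_t vaddr   = 0;
    // ... opcode, registers, age tag, etc. would live here in a fuller model
};

struct TmqEntry {
    bool           valid           = false;
    MemAccessInstr instr{};
    bool           demand_request  = false;  // must this entry issue its own MMU request?
    bool           has_translation = false;  // has the address translation returned yet?
};

// A unified TLB miss queue holding both loads and stores.
struct TlbMissQueue {
    static constexpr int kEntries = 4;
    TmqEntry slots[kEntries];
};
```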

Whether implemented as a unified queue or as separate load and store TLB miss queues, the TLB miss queue 116 may track whether a particular memory access instruction stored in the queue still requires a request for address translation to be submitted to the MMU 120, and may track whether the translation for each entry has been received from the MMU 120. Depending on where and how the TLB miss queue 116 is configured to reintroduce memory access instructions (such as the memory access instruction 111) into the load/store pipeline 112, the TLB miss queue 116 may also store hazard information associated with each stored memory access instruction (for example, if the stored memory access instruction is to be reinserted into the load/store pipeline 112 at a stage after hazard checks have been performed).

Those skilled in the art will recognize that some types of instructions may not be suitable for placement into the TLB miss queue 116, and for such instructions the pipeline may simply be stalled. In particular, instructions that must execute in program order may not be placed in the TLB miss queue 116 (since doing so could allow younger instructions to move ahead of them, which is by definition not permitted). Likewise, instructions that enforce ordering (such as barrier instructions) may not be placed in the TLB miss queue 116, to avoid deadlock situations.
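One way to express that eligibility rule in the same modeling style is the sketch below; the operation categories are hypothetical labels introduced for illustration, not terms from the disclosure:

```cpp
// Kinds of memory operations, as assumed for this sketch.
enum class MemOpKind { Load, Store, OrderedAccess, Barrier };

// Ordinary loads and stores may be parked in the TLB miss queue; operations
// that must execute in program order, and barriers, must stall instead.
bool eligible_for_tmq(MemOpKind kind) {
    switch (kind) {
        case MemOpKind::Load:
        case MemOpKind::Store:
            return true;
        case MemOpKind::OrderedAccess:  // younger ops must not bypass it
        case MemOpKind::Barrier:        // parking it risks deadlock
            return false;
    }
    return false;
}
```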

Furthermore, if a memory access instruction misses in the TLB 114, but it is known that a translation associated with the memory access instruction has already been requested and will be available in a relatively small number of cycles, it may be more beneficial to stall the pipeline for that number of cycles than to use another entry of the TLB miss queue 116. For example, a threshold number of cycles may be programmed, and if a translation is due to be available within that number of cycles, the computing device 100 may stall and wait for the translation instead of storing the memory access instruction in the TLB miss queue 116. Determining the threshold may depend on many factors, such as the latency of translation requests, the architecture of the pipeline and TLB, the size of the TLB miss queue 116, and the pipeline re-entry policy, among other relevant factors. Alternatively, instead of (or in addition to) stalling the load/store pipeline 112 for some number of cycles as described above, the memory access instruction may be recirculated at an appropriate location in the load/store pipeline 112 rather than stalled.
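The stall-versus-enqueue policy can be sketched as follows; the PendingWalks tracker and its cycle estimates are assumptions introduced for illustration (real hardware would derive them from the state of outstanding translation requests):

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

// Hypothetical tracker of in-flight page table walks and when each is due back.
struct PendingWalks {
    std::unordered_map<uint64_t, int> eta_by_vpage;  // virtual page -> cycles remaining

    std::optional<int> cycles_until_ready(uint64_t vpage) const {
        auto it = eta_by_vpage.find(vpage);
        if (it == eta_by_vpage.end()) return std::nullopt;
        return it->second;
    }
};

// If a walk for the same page is expected back within the programmed threshold,
// stalling is cheaper than consuming a TLB miss queue entry.
bool should_stall_instead_of_enqueue(const PendingWalks& walks, uint64_t vpage,
                                     int threshold_cycles) {
    std::optional<int> eta = walks.cycles_until_ready(vpage);
    return eta.has_value() && *eta <= threshold_cycles;
}
```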

In this regard, FIG. 2 illustrates a detailed diagram 200 of an exemplary TLB miss queue 202, in accordance with certain aspects of the present disclosure. The TLB miss queue 202 includes a storage structure 210 comprising a plurality of entries 211a-d. Each of the entries 211a-d includes an instruction field 212, a "demand request" field 214, and a "with translation" field 216. The instruction field 212 may be used to store a particular memory access instruction (such as the memory access instruction 111 described with respect to FIG. 1), and may also be used as an index to determine the memory page for which a translation has been requested. The "demand request" field 214 stores an indicator of whether the memory access instruction associated with the entry still needs to submit a request for address translation, or whether a request for the relevant memory page has already been submitted. The latter may occur, for example, when two memory access instructions both miss in the TLB 114 and both target the same page of memory. The first of the two memory access instructions will be placed in the TLB miss queue 202 and will trigger a page table walk in the MMU 120. The second memory access instruction will also be stored in the TLB miss queue 202, but the TLB miss queue 202 may be configured to compare the second memory access instruction against all memory access instructions currently stored in the TLB miss queue (i.e., the active entries of the TLB miss queue) that have an address translation request pending; if the second memory access instruction targets a page for which an address translation request has already been made, its "demand request" field 214 may be set to indicate that no additional address translation request should be generated. The "with translation" field 216 indicates whether a translation has been received for a particular entry.
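Building on the TlbMissQueue sketch above (and reusing kPageShift and the TMQ types defined earlier), the duplicate-suppression behavior of the "demand request" field 214 might be modeled as follows, again as a sketch under the same assumptions:

```cpp
// Insert a missed instruction into the TMQ. Only the first miss to a given
// page keeps demand_request set, so redundant MMU walks are suppressed.
// Returns false when the queue is full, in which case the pipeline must stall.
bool insert_miss(TlbMissQueue& tmq, const MemAccessInstr& instr) {
    const uint64_t vpage = instr.vaddr >> kPageShift;

    bool walk_already_requested = false;
    for (const TmqEntry& e : tmq.slots)
        if (e.valid && (e.instr.vaddr >> kPageShift) == vpage)
            walk_already_requested = true;  // an active entry covers this page

    for (TmqEntry& e : tmq.slots) {
        if (e.valid) continue;
        e.valid           = true;
        e.instr           = instr;
        e.has_translation = false;
        e.demand_request  = !walk_already_requested;
        return true;
    }
    return false;
}
```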

Those skilled in the art will recognize that whether the TLB miss queue is implemented as a unified structure or as separate load/store structures, how many entries the queue contains, where the queue reintroduces instructions into the pipeline (e.g., into a load, store, or combined load/store pipeline), and therefore how much storage to provide for the data kept with each queued instruction (e.g., the "demand request" and "with translation" fields, hazard information, etc.) are matters of design choice, all within the scope of the teachings of the present disclosure. To this end, FIG. 3 illustrates a detailed diagram of an implementation 300 of a TLB miss queue associated with a load pipeline and a store pipeline, according to one aspect.

The illustrated implementation 300 has separate load and store pipelines 302 and 304, each with four illustrated stages (stages LD1 310, LD2 312, LD3 314, and LD4 316 for the load pipeline 302, and stages ST1 320, ST2 322, ST3 324, and ST4 326 for the store pipeline 304). The load pipeline 302 and store pipeline 304 are coupled to a common TLB 308. The load pipeline 302 is associated with a load TLB miss queue (LD TMQ) 318, which is configured to reinsert instructions at the LD1 stage 310 of the load pipeline 302. The store pipeline 304 is associated with a store TLB miss queue (ST TMQ) 328, which is configured to reinsert instructions at the ST3 stage 324 of the store pipeline 304. As previously discussed with respect to FIG. 2, the load TLB miss queue 318 and store TLB miss queue 328 are coupled together to enable detection and suppression of redundant page translation requests (i.e., a load instruction 370 that requires the same page translation as a store instruction 380 that has already issued a page translation request to a higher-level TLB or the MMU will not issue its own independent translation request, and vice versa).

In the illustrated aspect, the load TLB miss queue 318 may correspond to the TLB miss queue 202 of FIG. 2 and may contain four entries, each storing an instruction, a "demand request" field, and a "with translation" field. Because the load TLB miss queue 318 does not track hazards associated with its entries, it reintroduces instructions into the load pipeline 302 at a stage where they will recheck their hazards as they flow through the pipeline. In the illustrated aspect, hazard checking is performed in the LD1 stage 310 and the LD2 stage 312, so the load TLB miss queue 318 reintroduces instructions into the load pipeline 302 before the LD1 stage 310. In contrast, the store TLB miss queue 328 may contain only a single entry; because it has fewer entries, it may store the same information as the load TLB miss queue 318 and additionally store complete hazard-check information, allowing an instruction held in the store TLB miss queue 328 to participate in hazard checks while waiting for its associated translation (whether from a higher-level TLB or from a page table walk). Because the store TLB miss queue 328 maintains complete hazard-check information for the instruction stored in it, it may reintroduce the instruction into the store pipeline 304 at the same stage from which the instruction was removed (in the illustrated example, the ST3 stage 324).

Those skilled in the art will recognize that how many entries, and how much information per entry, to provide in each of the load TLB miss queue 318 and the store TLB miss queue 328 is a design choice, and may depend on factors such as the area consumed by the physical structures that store the instructions and their information, and the relative frequency of, and latency penalties associated with, load versus store instructions. The selection of a re-entry point for an instruction may depend on similar factors. Furthermore, implementations with multiple load pipelines, multiple store pipelines, or multiple combined load/store pipelines are possible, and a load or store instruction may re-enter any pipeline capable of servicing instructions of its type, as long as the multiple pipelines implement similar policies regarding re-entry points, hazard checking, and information storage.

FIG. 4 illustrates a block diagram of a method 400 of reducing pipeline stalls due to address translation misses, in accordance with certain aspects of the present disclosure. The method 400 begins at block 410 by removing the first memory access instruction that misses in the TLB from the memory access pipeline to make the pipeline available to other memory access instructions. For example, with respect to FIG. 1, after a miss in the TLB 114, the memory access instruction 111 is removed from the load/store pipeline 112 to allow subsequent memory access instructions to use the load/store pipeline 112.

The method 400 continues in block 420 by selectively storing the first memory access instruction and associated information in the TLB miss queue while waiting for an address translation for the first memory access instruction. For example, with respect to FIG. 1, the memory access instruction 111 is stored in an entry of the TLB miss queue 116. In some aspects, the TLB miss queue may correspond to the TLB miss queue 202 of fig. 2, the load TLB miss queue 318 of fig. 3, or the store TLB miss queue 328 of fig. 3.

The method 400 may further continue in block 430 by reintroducing the first memory access instruction into the memory access pipeline. For example, with respect to FIG. 1, the memory access instruction 111 may be reintroduced into the load/store pipeline 112. As discussed with respect to FIGS. 1-3, the reintroduction of the first memory access instruction into the pipeline may be accomplished in a number of ways, all of which are within the scope of the teachings of the present disclosure. Further, the timing of the reintroduction may vary. In one aspect, the system may wait until the associated address translation has returned from the higher-level TLB or page table walk before reintroducing the first memory access instruction into the pipeline. In another aspect, the system may track and anticipate the return of the address translation, and may reintroduce the first memory access instruction into the pipeline ahead of time, so that it reaches the pipeline stage where address translation is performed concurrently with, or just after, the return of the associated address translation.
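Tying the earlier sketches together, the flow of method 400 for a single instruction might look like the following; stall_pipeline, remove_from_pipeline, and reintroduce are hypothetical pipeline hooks declared only to keep the sketch self-contained, and the PendingWalks and TMQ types come from the sketches above:

```cpp
// Hypothetical pipeline hooks (not part of the disclosure).
void stall_pipeline();
void remove_from_pipeline(const MemAccessInstr& instr);
void reintroduce(const MemAccessInstr& instr);

// On a TLB miss: either stall briefly for an imminent translation, or free the
// pipeline (block 410) and park the instruction in the TMQ (block 420).
void on_tlb_miss(TlbMissQueue& tmq, const PendingWalks& walks,
                 const MemAccessInstr& instr, int threshold_cycles) {
    const uint64_t vpage = instr.vaddr >> kPageShift;
    if (should_stall_instead_of_enqueue(walks, vpage, threshold_cycles)) {
        stall_pipeline();                    // translation is due back shortly
        return;
    }
    remove_from_pipeline(instr);
    if (!insert_miss(tmq, instr)) stall_pipeline();  // TMQ full: fall back to stalling
}

// When a translation returns, wake every parked instruction on that page and
// reintroduce it into the pipeline (block 430).
void on_translation_return(TlbMissQueue& tmq, uint64_t vpage) {
    for (TmqEntry& e : tmq.slots) {
        if (!e.valid || (e.instr.vaddr >> kPageShift) != vpage) continue;
        e.has_translation = true;
        reintroduce(e.instr);
        e.valid = false;                     // entry is free for a new miss
    }
}
```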

An example apparatus in which aspects of the present disclosure may be utilized will now be discussed in conjunction with FIG. 5. FIG. 5 shows a diagram of a computing device 500 that includes structures for reducing pipeline stalls due to address translation misses as described with respect to FIGS. 1, 2, and 3, and which may operate in accordance with the method described in FIG. 4. In that regard, the computing device 500 includes a processor 502, which may incorporate the load/store pipeline 112, the TLB 114, and the TLB miss queue 116 of FIG. 1 (which may also correspond to the TLB miss queue 202 of FIG. 2, or to any of the elements of the implementation 300 of FIG. 3). The computing device 500 also includes a main memory system 580 coupled to the processor 502 via a system bus. The main memory system 580 may also store non-transitory computer-readable instructions that, when executed by the processor 502, perform the method 400 of FIG. 4.

FIG. 5 also shows, in dashed lines, optional blocks such as a coder/decoder (CODEC) 534 (e.g., an audio and/or speech CODEC) coupled to the processor 502; a speaker 536 and a microphone 538 coupled to the CODEC 534; and a wireless controller 540, coupled to the processor 502 and to a wireless antenna 542. FIG. 5 further shows a display controller 526 coupled to the processor 502 and to a display 528, and a wired network controller 570 coupled to the processor 502 and to a network 572. Where one or more of these optional blocks are present, in a particular aspect the processor 502, the display controller 526, the memory 580, and the wireless controller 540 may be included in a system-in-package or system-on-chip device 522.

Thus, in a particular aspect, an input device 530 and a power supply 544 are coupled to the system-on-chip device 522. Moreover, in a particular aspect, as shown in FIG. 5, where one or more optional blocks are present, the display 528, the input device 530, the speaker 536, the microphone 538, the wireless antenna 542, and the power supply 544 are external to the system-on-chip device 522. However, each of the display 528, the input device 530, the speaker 536, the microphone 538, the wireless antenna 542, and the power supply 544 can be coupled to a component of the system-on-chip device 522, such as an interface or a controller.

It should be noted that although FIG. 5 generally depicts a computing device, the processor 502 and the memory 580 may also be integrated into a mobile phone, a communication device, a computer, a server, a laptop computer, a tablet, a personal digital assistant, a music player, a video player, an entertainment unit, a set-top box, or another similar device.

Those of skill in the art would recognize that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

Furthermore, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

The methods, sequences and/or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.

While the foregoing disclosure shows illustrative aspects of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the aspects of the invention described herein need not be performed in any particular order. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
