Memory compressed hashing mechanism
Reading note: this technology, "Memory compressed hashing mechanism", was created by A·R·阿普, A·考克, J·雷, N·库雷, P·萨蒂, S·卡玛, and V·兰甘纳坦 on 2020-02-18. Its main content is summarized as follows: The application discloses a memory compressed hashing mechanism. An apparatus for facilitating memory data compression is disclosed. The apparatus comprises: a memory having a plurality of banks for storing main data and metadata associated with the main data; and a memory management unit (MMU), coupled to the plurality of banks, to perform a hash function to compute indices into virtual address locations in the memory for the main data and the metadata, and to adjust the metadata virtual address locations to store each adjusted metadata virtual address location in the bank that stores the associated main data.
1. An apparatus for facilitating memory data compression, comprising:
a memory having a plurality of banks for storing main data and metadata associated with the main data; and
a memory management unit (MMU), coupled to the plurality of banks, to perform a hash function to compute indices into virtual address locations in the memory for the main data and the metadata, and to adjust the metadata virtual address locations to store each adjusted metadata virtual address location in a bank storing the associated main data.
2. The apparatus of claim 1, wherein the MMU to adjust the metadata virtual address locations is to combine the metadata to be stored in a bank to generate a metadata block.
3. The apparatus of claim 2, wherein the MMU to adjust the metadata virtual address locations is further to perform one or more shift operations.
4. The apparatus of claim 3, wherein the MMU comprises a plurality of MMUs, each coupled to one or more of the plurality of banks.
5. The apparatus of claim 4, wherein each of the plurality of MMUs comprises a hash table implemented to perform the hash function.
6. The apparatus of claim 5, wherein each of the plurality of MMUs is further to perform a linear mapping to map main data addresses to metadata addresses.
7. The apparatus of claim 4, further comprising:
a first MMU coupled to a first bank storing a first set of main data and the metadata of a first metadata block associated with the first set of main data; and
a second MMU coupled to a second bank storing a second set of main data and the metadata of a second metadata block associated with the second set of main data.
8. A method for facilitating memory data compression, comprising:
performing a hash function to compute indices into virtual address locations in memory for main data and metadata associated with the main data;
adjusting the metadata virtual address locations; and
storing the metadata at the adjusted metadata virtual address locations, wherein each adjusted metadata virtual address location is located in a bank that stores the associated main data.
9. The method of claim 8, further comprising:
receiving a main data address; and
mapping the main data address to a metadata address.
10. The method of claim 9, wherein adjusting the metadata virtual address locations comprises performing one or more shift operations.
11. The method of claim 10, wherein adjusting the metadata virtual address locations further comprises combining the metadata to be stored in a bank to generate a metadata block.
12. The method of claim 8, further comprising:
storing, at a first bank, a first set of main data and the metadata of a first metadata block associated with the first set of main data; and
storing, at a second bank, a second set of main data and the metadata of a second metadata block associated with the second set of main data.
13. A graphics processing unit (GPU), comprising:
a memory having a plurality of banks for storing main data and metadata associated with the main data; and
a plurality of fabric elements coupled to the plurality of banks, each fabric element comprising a memory management unit (MMU) coupled to one or more of the plurality of banks, the MMU to perform a hash function to compute indices into virtual address locations in the memory for the main data and the metadata, and to adjust the metadata virtual address locations to store each adjusted metadata virtual address location in the bank storing the associated main data.
14. The GPU of claim 13, wherein the MMU to adjust the metadata virtual address locations is to combine the metadata to be stored in a bank to generate a metadata block.
15. The GPU of claim 14, wherein the MMU to adjust the metadata virtual address locations is further to perform one or more shift operations.
16. The GPU of claim 15, wherein the MMU comprises a hash table implemented to perform the hash function.
17. The GPU of claim 16, wherein the MMU is further to perform a linear mapping to map main data addresses to metadata addresses.
18. The GPU of claim 13, further comprising:
a first fabric element having a first MMU coupled to a first bank storing a first set of main data and the metadata of a first metadata block associated with the first set of main data; and
a second fabric element having a second MMU coupled to a second bank storing a second set of main data and the metadata of a second metadata block associated with the second set of main data.
19. The GPU of claim 18, further comprising:
a first set of one or more processing nodes coupled to the first fabric element; and
a second set of one or more processing nodes coupled to the second fabric element.
20. The GPU of claim 19, wherein the first fabric element comprises a first control cache coupled between the first set of one or more processing nodes and the first MMU, the first control cache to perform data compression and decompression, and the second fabric element comprises a second control cache coupled between the second set of one or more processing nodes and the second MMU, the second control cache to perform data compression and decompression.
Technical Field
The present invention relates generally to graphics processing, and more particularly to memory data compression.
Background
A graphics processing unit (GPU) is a highly threaded machine in which hundreds of threads of a program execute in parallel to achieve high throughput. GPU thread groups are used in mesh-shading applications to perform three-dimensional (3D) rendering. As increasingly complex GPUs demand large amounts of computation, keeping up with memory bandwidth requirements is challenging. Bandwidth compression has therefore become critical to ensuring that the hardware/memory subsystem can support the required bandwidth.
Drawings
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
FIG. 1 is a block diagram of a processing system according to an embodiment;
FIG. 2 is a block diagram of a processor according to an embodiment;
FIG. 3 is a block diagram of a graphics processor according to an embodiment;
FIG. 4 is a block diagram of a graphics processing engine of a graphics processor, according to some embodiments;
FIG. 5 is a block diagram of a graphics processor provided by an additional embodiment;
FIGS. 6A and 6B illustrate thread execution logic, including an array of processing elements employed in some embodiments;
FIG. 7 is a block diagram illustrating a graphics processor instruction format, according to some embodiments;
FIG. 8 is a block diagram of a graphics processor according to another embodiment;
FIGS. 9A and 9B illustrate a graphics processor command format and command sequence, according to some embodiments;
FIG. 10 illustrates an exemplary graphics software architecture for a data processing system, in accordance with some embodiments;
FIGS. 11A and 11B are block diagrams illustrating an IP core development system according to an embodiment;
FIG. 12 is a block diagram illustrating an exemplary system-on-chip integrated circuit, according to an embodiment;
FIGS. 13A and 13B are block diagrams illustrating additional exemplary graphics processors;
FIGS. 14A and 14B are block diagrams illustrating additional exemplary graphics processors of a system-on-chip integrated circuit according to embodiments;
FIG. 15 illustrates one embodiment of a computing device;
FIG. 16 illustrates one embodiment of a graphics processing unit;
FIG. 17 illustrates one embodiment of a memory space;
FIG. 18 illustrates one embodiment of a repackaged memory space;
FIG. 19 illustrates one embodiment of a memory management unit; and
FIG. 20 is a flow diagram illustrating one embodiment of a process for performing compressed hashing.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the present invention.
In an embodiment, the memory management unit performs a hash function to compute indices into virtual address locations in memory for the main data and the metadata, and adjusts the metadata virtual address locations to store each adjusted metadata virtual address location in a bank that stores the associated main data.
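For illustration only, the following Python sketch shows one way the pieces described above could fit together: a linear mapping from a main data address to its metadata address, and an adjustment that uses shift and mask operations to force the metadata address into the bank holding the associated main data. The bank hash, bit layout, bank count, and compression ratio are hypothetical placeholders; the patent does not specify them.

```python
# Minimal sketch of the compressed hashing mechanism under assumed
# parameters; none of the constants below are taken from the patent.

NUM_BANKS = 8             # hypothetical number of memory banks
BANK_SHIFT = 12           # hypothetical: bank-select bits at VA bits [14:12]
BANK_MASK = NUM_BANKS - 1
META_BASE = 0x8000_0000   # hypothetical base of the metadata region
COMPRESSION_RATIO = 64    # hypothetical: 1 metadata byte per 64 data bytes

def bank_of(va: int) -> int:
    """Hash a virtual address to a bank index (a stand-in for the MMU's
    hash table)."""
    return (va >> BANK_SHIFT) & BANK_MASK

def metadata_address(main_va: int) -> int:
    """Linear mapping from a main data address to its metadata address."""
    return META_BASE + main_va // COMPRESSION_RATIO

def adjust_metadata_address(main_va: int) -> int:
    """Adjust the metadata virtual address with shift/mask operations so it
    hashes to the same bank as its main data, letting a single bank access
    return both."""
    meta_va = metadata_address(main_va)
    meta_va &= ~(BANK_MASK << BANK_SHIFT)       # clear the old bank bits
    meta_va |= bank_of(main_va) << BANK_SHIFT   # insert the main data's bank
    return meta_va

main_va = 0x0004_3A00
assert bank_of(adjust_metadata_address(main_va)) == bank_of(main_va)
```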
Fig. 1 is a block diagram of a processing system, according to an embodiment.
In one embodiment, the
In some embodiments, the one or
In some embodiments, the
In some embodiments, one or
Memory device 120 may be a Dynamic Random Access Memory (DRAM) device, a Static Random Access Memory (SRAM) device, a flash memory device, a phase change memory device, or some other memory device with the appropriate capabilities to act as a process memory. In one embodiment, memory device 120 may operate as system memory for
In some embodiments,
It will be appreciated that the
FIG. 2 is a block diagram of an embodiment of a processor 200, the processor 200 having one or more processor cores 202A-202N, an integrated memory controller 214, and an integrated graphics processor 208. Those elements of fig. 2 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. Processor 200 may include additional cores up to and including additional core 202N, represented by the dashed boxes. Each of the processor cores 202A-202N includes one or more internal cache units 204A-204N. In some embodiments, each processor core also has access to one or more shared cache units 206.
Internal cache units 204A-204N and shared cache unit 206 represent a cache memory hierarchy within processor 200. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the last level cache (LLC). In some embodiments, cache coherency logic maintains coherency between the various cache units 206 and 204A-204N.
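As a toy illustration of the lookup order such a hierarchy implies, the sketch below walks the levels from the innermost cache toward the LLC and falls back to external memory. The level names and contents are invented for the example; this models only the search order, not any real coherency protocol.

```python
def cache_lookup(levels, addr):
    """Return the name of the first cache level holding the address, or
    fall through to external memory past the LLC."""
    for name, contents in levels:
        if addr in contents:
            return name
    return "external memory"

hierarchy = [("L1", {0x100}), ("L2", {0x100, 0x200}), ("L3/LLC", {0x300})]
print(cache_lookup(hierarchy, 0x200))   # served by the L2 mid-level cache
print(cache_lookup(hierarchy, 0x400))   # miss in all levels -> external memory
```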
In some embodiments, processor 200 may also include a set of one or more bus controller units 216 and a system agent core 210. One or more bus controller units 216 manage a set of peripheral buses, such as one or more PCI buses or PCI Express buses. The system agent core 210 provides management functions for the processor components. In some embodiments, the system agent core 210 includes one or more integrated memory controllers 214 for managing access to various external memory devices (not shown).
In some embodiments, one or more of the processor cores 202A-202N include support for simultaneous multithreading. In such embodiments, the system agent core 210 includes components for coordinating and operating cores 202A-202N during multi-threaded processing. The system agent core 210 may additionally include a Power Control Unit (PCU) that includes logic and components for regulating the power states of the processor cores 202A-202N and the graphics processor 208.
In some embodiments, the processor 200 additionally includes a graphics processor 208 for performing graphics processing operations. In some embodiments, the graphics processor 208 is coupled to the set of shared cache units 206 and the system agent core 210, the system agent core 210 including the one or more integrated memory controllers 214. In some embodiments, the system agent core 210 also includes a display controller 211 for driving graphics processor output to one or more coupled displays. In some embodiments, the display controller 211 may also be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 208.
In some embodiments, ring-based interconnect unit 212 is used to couple internal components of processor 200. However, alternative interconnection elements may be used, such as point-to-point interconnections, switched interconnections, or other techniques, including those known in the art. In some embodiments, the graphics processor 208 is coupled with the ring interconnect 212 via an I/O link 213.
Exemplary I/O link 213 represents at least one of a plurality of various I/O interconnects, including an on-package I/O interconnect that facilitates communication between various processor components and a high-performance embedded memory module 218, such as an eDRAM module. In some embodiments, each of the processor cores 202A-202N and the graphics processor 208 use the embedded memory module 218 as a shared last level cache.
In some embodiments, processor cores 202A-202N are homogeneous cores that execute the same instruction set architecture. In another embodiment, processor cores 202A-202N are heterogeneous in Instruction Set Architecture (ISA), in which one or more of processor cores 202A-202N execute a first instruction set and at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment, processor cores 202A-202N are heterogeneous in micro-architecture, wherein one or more cores having relatively higher power consumption are coupled with one or more cores having lower power consumption. Further, processor 200 may be implemented on one or more chips, or as an SoC integrated circuit having the illustrated components, among other components.
Fig. 3 is a block diagram of a graphics processor 300, which graphics processor 300 may be a discrete graphics processing unit or may be a graphics processor integrated with multiple processing cores. In some embodiments, the graphics processor communicates via a memory mapped I/O interface to registers on the graphics processor and with commands placed into processor memory. In some embodiments, graphics processor 300 includes a memory interface 314 for accessing memory. Memory interface 314 may be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.
In some embodiments, graphics processor 300 also includes a
In some embodiments, graphics processor 300 includes a block image transfer (BLIT)
In some embodiments, GPE310 includes a
In some embodiments,
In some embodiments, 3D/
Graphics processing engine
FIG. 4 is a block diagram of a
In some embodiments, GPE410 is coupled with
In embodiments,
In some embodiments,
Output data generated by threads executing on
In some embodiments, the
Shared functionality is implemented where the need for a given specialized functionality is insufficient to be included in
Figure 5 is a block diagram of hardware logic of
In some embodiments, the fixed
In one embodiment, fixed
In one embodiment,
In one embodiment,
In one embodiment, the additional fixed
Within each graphics sub-core 501A-501F is included a set of execution resources that are available to perform graphics operations, media operations, and compute operations in response to requests made by a graphics pipeline, media pipeline, or shader program. Graphics sub-cores 501A-501F include: a plurality of
Execution unit
FIGS. 6A-6B illustrate thread execution logic 600 according to embodiments described herein, the thread execution logic 600 comprising an array of processing elements employed in a graphics processor core. Those elements of figs. 6A-6B having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. Fig. 6A illustrates an overview of thread execution logic 600, which thread execution logic 600 may include variations of the hardware logic illustrated for each of the sub-cores 501A-501F of fig. 5. FIG. 6B illustrates exemplary internal details of an execution unit.
As illustrated in fig. 6A, in some embodiments, thread execution logic 600 includes shader processor 602, thread dispatcher 604, instruction cache 606, scalable execution unit array including a plurality of execution units 608A-608N, sampler 610, data cache 612, and data port 614. In one embodiment, the scalable array of execution units may be dynamically scaled by enabling or disabling one or more execution units (e.g., execution units 608A, 608B, 608C, 608D, up to any of 608N-1 and 608N) based on the computational requirements of the workload. In one embodiment, the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, the thread execution logic 600 includes one or more connections to memory (such as system memory or cache memory) through the instruction cache 606, the data port 614, the sampler 610, and one or more of the execution units 608A-608N. In some embodiments, each execution unit (e.g., 608A) is a standalone programmable general purpose computing unit capable of executing multiple simultaneous hardware threads while processing multiple data elements for each thread in parallel. In various embodiments, the array of execution units 608A-608N is scalable to include any number of individual execution units.
In some embodiments, the execution units 608A-608N are primarily used to execute shader programs. Shader processor 602 can process various shader programs and can dispatch threads of execution associated with the shader programs via thread dispatcher 604. In one embodiment, the thread dispatcher includes logic to arbitrate thread initiation requests from the graphics pipeline and the media pipeline and to instantiate the requested threads on one or more of the execution units 608A-608N. For example, a geometry pipeline may dispatch a vertex shader, a tessellation shader, or a geometry shader to the thread execution logic for processing. In some embodiments, the thread dispatcher 604 may also process runtime thread generation requests from executing shader programs.
In some embodiments, execution units 608A-608N support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct3D and OpenGL) are executed with minimal translation. These execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders), and general purpose processing (e.g., compute and media shaders). Each of the execution units 608A-608N is capable of multi-issue Single Instruction Multiple Data (SIMD) execution, and multi-threaded operation enables an efficient execution environment in the face of higher latency memory accesses. Each hardware thread within each execution unit has a dedicated high bandwidth register file and associated independent thread state. Execution is multi-issue per clock to pipelines capable of integer, single and double precision floating point operations, with SIMD branch capability, and of logical, transcendental, and other miscellaneous operations. While waiting for data from memory or one of the shared functions, dependency logic within the execution units 608A-608N puts the waiting thread to sleep until the requested data has been returned. While the waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, during a delay associated with vertex shader operations, an execution unit may perform operations for a pixel shader, a fragment shader, or another type of shader program, including a different vertex shader.
Each of the execution units 608A-608N operates on an array of data elements. The number of data elements is the "execution size", or number of lanes for the instruction. An execution channel is a logical unit for execution of data element access, masking, and flow control within an instruction. The number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs) or Floating Point Units (FPUs) for a particular graphics processor. In some embodiments, execution units 608A-608N support both integer and floating point data types.
The execution unit instruction set includes SIMD instructions. Various data elements may be stored as packed data types in registers, and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register, and the execution unit operates on the vector as four separate 64-bit packed data elements (quad-word (QW) sized data elements), eight separate 32-bit packed data elements (double-word (DW) sized data elements), sixteen separate 16-bit packed data elements (word (W) sized data elements), or thirty-two separate 8-bit data elements (byte (B) sized data elements). However, different vector widths and register sizes are possible.
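A short worked example of the packing arithmetic just described, using the element sizes from the text (the helper function is illustrative):

```python
def lane_count(vector_bits: int, element_bits: int) -> int:
    """Number of execution channels when a vector register is treated as
    packed data elements of a given size."""
    return vector_bits // element_bits

# The 256-bit vector case described above:
for name, bits in [("QW", 64), ("DW", 32), ("W", 16), ("B", 8)]:
    print(f"{lane_count(256, bits):2d} x {bits}-bit ({name}) elements")
# -> 4 x 64-bit (QW), 8 x 32-bit (DW), 16 x 16-bit (W), 32 x 8-bit (B)
```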
In one embodiment, one or more execution units may be combined into a fused execution unit 609A-609N having thread control logic (607A-607N) common to the fused EUs. Multiple EUs can be fused into an EU group. Each EU in the fused EU group may be configured to execute a separate SIMD hardware thread. The number of EUs in a fused EU group may vary according to embodiments. Additionally, various SIMD widths may be supported on a per-EU basis, including but not limited to SIMD8, SIMD16, and SIMD32. Each fused graphics execution unit 609A-609N includes at least two execution units. For example, the fused execution unit 609A includes a first EU 608A, a second EU 608B, and thread control logic 607A common to the first EU 608A and the second EU 608B. The thread control logic 607A controls the threads executing on the fused graphics execution unit 609A, allowing each EU within the fused execution units 609A-609N to execute using a common instruction pointer register, as the toy model below sketches.
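The following toy model, with hypothetical class and method names, illustrates only the lockstep behavior this paragraph describes: a shared instruction pointer in the thread control logic drives every EU in the fused group, while each EU applies the instruction to its own SIMD thread's state.

```python
class EU:
    """Toy stand-in for one execution unit's SIMD hardware thread."""
    def __init__(self, name: str):
        self.name = name
        self.trace = []

    def execute(self, instr: str):
        # Each EU applies the same instruction to its own thread's state.
        self.trace.append(instr)

class FusedEU:
    """Toy model of a fused EU group: the shared thread control logic holds
    one common instruction pointer register that drives every EU."""
    def __init__(self, eus):
        self.eus = eus
        self.ip = 0   # common instruction pointer register

    def step(self, program):
        instr = program[self.ip]
        for eu in self.eus:          # EUs in the group advance in lockstep
            eu.execute(instr)
        self.ip += 1

group = FusedEU([EU("608A"), EU("608B")])
group.step(["simd8_mad", "simd8_send"])  # both EUs execute "simd8_mad"
```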
One or more internal instruction caches (e.g., 606) are included in the thread execution logic 600 to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g., 612) are included to cache thread data during thread execution. In some embodiments, sampler 610 is included to provide texture samples for 3D operations and media samples for media operations. In some embodiments, sampler 610 includes specialized texture or media sampling functionality to process texture data or media data during the sampling process prior to providing the sampled data to the execution units.
During execution, the graphics pipeline and the media pipeline send thread initiation requests to the thread execution logic 600 via the thread generation and dispatch logic. Once a set of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within shader processor 602 is invoked to further compute output information and cause the results to be written to an output surface (e.g., a color buffer, a depth buffer, a stencil buffer, etc.). In some embodiments, the pixel shader or fragment shader computes values for each vertex attribute that will be interpolated across the rasterized object. In some embodiments, pixel processor logic within shader processor 602 then executes an Application Programming Interface (API)-supplied pixel shader program or fragment shader program. To execute shader programs, shader processor 602 dispatches threads to execution units (e.g., 608A) via thread dispatcher 604. In some embodiments, shader processor 602 uses texture sampling logic in sampler 610 to access texture data in a texture map stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.
In some embodiments, data port 614 provides a memory access mechanism for thread execution logic 600 to output processed data to memory for further processing on a graphics processor output pipeline. In some embodiments, data port 614 includes or is coupled to one or more cache memories (e.g., data cache 612) to cache data for memory access via the data port.
As illustrated in FIG. 6B, the graphics execution unit 608 may include an instruction fetch
In one embodiment, the graphics execution unit 608 has an architecture that is a combination of Simultaneous Multithreading (SMT) and fine-grained Interleaved Multithreading (IMT). The architecture has a modular configuration that can be fine-tuned at design time based on a target number of synchronous threads and a number of registers per execution unit, where execution unit resources are partitioned across logic for executing multiple synchronous threads.
In one embodiment, the graphics execution unit 608 may cooperatively issue multiple instructions, which may each be different instructions. The
In one embodiment, memory operations, sampler operations, and other longer latency system communications are dispatched via a "send" instruction executed by messaging transmit unit 630. In one embodiment, branch instructions are dispatched to a dedicated branch unit 632 to facilitate SIMD divergence and eventual convergence.
In one embodiment, graphics execution unit 608 includes one or more SIMD Floating Point Units (FPUs) 634 for performing floating point operations. In one embodiment, the FPU(s) 634 also support integer computations. In one embodiment, FPU(s) 634 may perform up to a number M of 32-bit floating point (or integer) operations on SIMD's, or up to 2M of 16-bit integer or 16-bit floating point operations on SIMD's. In one embodiment, at least one of the FPU(s) provides extended mathematical capabilities that support high throughput transcendental mathematical functions and double precision 64-bit floating points. In some embodiments, a
In one embodiment, an array of multiple instances of the graphics execution unit 608 may be instantiated in a graphics sub-core grouping (e.g., a sub-slice). For scalability, the product architect may select the exact number of execution units per sub-core group. In one embodiment, the execution unit 608 may execute instructions across multiple execution lanes. In a further embodiment, each thread executing on the graphics execution unit 608 is executed on a different channel.
Fig. 7 is a block diagram illustrating a graphics
In some embodiments, the graphics processor execution unit natively supports instructions of 128-
For each format,
Some execution unit instructions have up to three operands, including two source operands, src0 720 and src1 722, and one
In some embodiments, 128-
In some embodiments, 128-
In one embodiment, the addressing mode portion of access/addressing mode field 726 determines whether the instruction is to use direct addressing or indirect addressing. When using the direct register addressing mode, bits in the instruction directly provide the register address of one or more operands. When using the indirect register addressing mode, register addresses for one or more operands may be calculated based on address register values and address immediate fields in the instruction.
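As a minimal sketch of the two addressing modes just described (the function and field names are hypothetical; only the selection logic comes from the text):

```python
def resolve_register(direct: bool, reg_field: int,
                     address_reg: int = 0, addr_immediate: int = 0) -> int:
    """Return an operand's register address under direct or indirect
    register addressing."""
    if direct:
        # Direct mode: bits in the instruction provide the register address.
        return reg_field
    # Indirect mode: computed from an address register value plus an
    # address immediate field in the instruction.
    return address_reg + addr_immediate

print(resolve_register(True, reg_field=12))                         # -> 12
print(resolve_register(False, 0, address_reg=8, addr_immediate=4))  # -> 12
```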
In some embodiments, instructions are grouped based on
Graphics pipeline
Fig. 8 is a block diagram of another embodiment of a graphics processor 800. Those elements of fig. 8 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.
In some embodiments, graphics processor 800 includes a graphics pipeline 820, a media pipeline 830, a display engine 840, thread execution logic 850, and a render output pipeline 870. In some embodiments, graphics processor 800 is a graphics processor within a multi-core processing system that includes one or more general purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown), or via commands issued to the graphics processor 800 over the ring interconnect 802. In some embodiments, ring interconnect 802 couples graphics processor 800 to other processing components, such as other graphics processors or general purpose processors. Commands from the ring interconnect 802 are interpreted by a command streamer 803, which supplies instructions to the various components of the geometry pipeline 820 or the media pipeline 830.
In some embodiments, the command streamer 803 directs the operation of a vertex fetcher 805, which vertex fetcher 805 reads vertex data from memory and executes vertex processing commands provided by the command streamer 803. In some embodiments, vertex fetcher 805 provides vertex data to vertex shader 807, which vertex shader 807 performs coordinate space transformations and lighting operations on each vertex. In some embodiments, vertex fetcher 805 and vertex shader 807 execute vertex processing instructions by dispatching execution threads to execution units 852A-852B via thread dispatcher 831.
In some embodiments, execution units 852A-852B are an array of vector processors having sets of instructions for performing graphics operations and media operations. In some embodiments, execution units 852A-852B have an attached L1 cache 851 dedicated to each array or shared between arrays. The cache may be configured as a data cache, an instruction cache, or partitioned into a single cache containing data and instructions in different partitions.
In some embodiments, geometry pipeline 820 includes a tessellation component for performing hardware accelerated tessellation of 3D objects. In some embodiments, the programmable hull shader 811 configures tessellation operations. The programmable domain shader 817 provides back-end evaluation of the tessellation output. The tessellator 813 operates under the direction of the hull shader 811 and includes dedicated logic for generating a detailed set of geometric objects based on a coarse geometric model that is provided as input to the geometry pipeline 820. In some embodiments, if tessellation is not used, tessellation components (e.g., hull shader 811, tessellator 813, and domain shader 817) may be bypassed.
In some embodiments, a complete geometric object may be processed by the geometry shader 819 via one or more threads dispatched to the execution units 852A-852B, or may travel directly to the clipper 829. In some embodiments, the geometry shader operates on the entire geometry object rather than on vertices or patches of vertices as in previous stages of the graphics pipeline. If tessellation is disabled, geometry shader 819 receives input from vertex shader 807. In some embodiments, the geometry shaders 819 are programmable by a geometry shader program to perform geometry tessellation with the tessellation unit disabled.
Prior to rasterization, the clipper 829 processes the vertex data. The clipper 829 may be a fixed-function clipper or a programmable clipper with clipping and geometry shader functions. In some embodiments, the rasterizer and depth test component 873 in the render output pipeline 870 dispatches pixel shaders to convert the geometric objects into per-pixel representations. In some embodiments, pixel shader logic is included in thread execution logic 850. In some embodiments, an application may bypass the rasterizer and depth test component 873 and access the non-rasterized vertex data via the stream out unit 823.
Graphics processor 800 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and messages to pass among the main components of the processor. In some embodiments, the execution units 852A-852B and associated logic units (e.g., L1 cache 851, sampler 854, texture cache 858, etc.) are interconnected via data ports 856 to perform memory accesses and communicate with the rendering output pipeline components of the processor. In some embodiments, sampler 854, caches 851, 858 and execution units 852A-852B each have separate memory access paths. In one embodiment, texture cache 858 may also be configured as a sampler cache.
In some embodiments, the render output pipeline 870 includes a rasterizer and depth test component 873 that converts vertex-based objects into associated pixel-based representations. In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed-function triangle and line rasterization. An associated render cache 878 and depth cache 879 may also be available in some embodiments. The pixel operations component 877 performs pixel-based operations on the data, but in some instances, the pixel operations associated with 2D operations (e.g., bit-block image transfers with blending) are performed by the 2D engine 841 or replaced at display time by the display controller 843 using an overlay display plane. In some embodiments, a shared L3 cache 875 is available to all graphics components, allowing data to be shared without using main system memory.
In some embodiments, graphics processor media pipeline 830 includes a media engine 837 and a video front end 834. In some embodiments, video front end 834 receives pipeline commands from command streamer 803. In some embodiments, media pipeline 830 includes a separate command streamer. In some embodiments, the video front end 834 processes the media command before sending the command to the media engine 837. In some embodiments, media engine 837 includes thread generation functionality to generate threads for dispatch to thread execution logic 850 via thread dispatcher 831.
In some embodiments, graphics processor 800 includes a display engine 840. In some embodiments, the display engine 840 is external to the processor 800 and is coupled with the graphics processor via the ring interconnect 802, or some other interconnect bus or fabric. In some embodiments, display engine 840 includes a 2D engine 841 and a display controller 843. In some embodiments, the display engine 840 contains dedicated logic capable of operating independently of the 3D pipeline. In some embodiments, the display controller 843 is coupled with a display device (not shown), which may be a system integrated display device (as in a laptop computer) or an external display device attached via a display device connector.
In some embodiments, geometry pipeline 820 and media pipeline 830 may be configured to perform operations based on multiple graphics and media programming interfaces and are not specific to any one Application Programming Interface (API). In some embodiments, driver software of the graphics processor translates API calls specific to a particular graphics or media library into commands that can be processed by the graphics processor. In some embodiments, support is provided for an open graphics library (OpenGL), open computing language (OpenCL), and/or Vulkan graphics and computing APIs, all from the Khronos Group. In some embodiments, support may also be provided for the Direct3D library from microsoft corporation. In some embodiments, a combination of these libraries may be supported. Support may also be provided for an open source computer vision library (OpenCV). Future APIs with compatible 3D pipelines will also be supported if a mapping from the pipeline of the future API to the pipeline of the graphics processor can be made.
Graphics pipeline programming
FIG. 9A is a block diagram illustrating a graphics
In some embodiments, the
The flowchart in FIG. 9B illustrates an exemplary graphics processor command sequence 910. In some embodiments, software or firmware of a data processing system featuring an embodiment of a graphics processor uses some version of the illustrated command sequence to create, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for exemplary purposes only, as embodiments are not limited to these particular commands or to this sequence of commands. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands in an at least partially concurrent manner.
In some embodiments, graphics processor command sequence 910 may begin with a pipeline
In some embodiments, the pipeline
In some embodiments, pipeline control commands 914 configure a graphics pipeline for operation and are used to program
In some embodiments, the return buffer status command 916 is used to configure a set of return buffers for a respective pipeline to write data. Some pipelining requires allocating, selecting, or configuring one or more return buffers into which an operation writes intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and perform cross-thread communications. In some embodiments, return buffer status 916 includes the size and number of return buffers selected to be used for the set of pipelined operations.
The remaining commands in the command sequence differ based on the active pipeline for the operation. Based on the
Commands for configuring the 3D pipeline state 930 include 3D state set commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables to be configured before processing the 3D primitive commands. The values of these commands are determined based at least in part on the particular 3D API in use. In some embodiments, the 3D pipeline state 930 commands can also selectively disable or bypass certain pipeline elements if those elements are not to be used.
In some embodiments, the 3D primitive 932 command is used to submit a 3D primitive to be processed by the 3D pipeline. Commands and associated parameters passed to the graphics processor via the 3D primitive 932 commands are forwarded to a vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive 932 command data to generate a plurality of vertex data structures. The vertex data structure is stored in one or more return buffers. In some embodiments, the 3D primitive 932 command is for performing a vertex operation on the 3D primitive via a vertex shader. To process the vertex shader,
In some embodiments, the
In some embodiments, graphics processor command sequence 910 follows the
In some embodiments,
In some embodiments, media object commands 942 supply pointers to media objects for processing by the media pipeline. The media object includes a memory buffer containing video data to be processed. In some embodiments, all of the media pipeline state must be valid before issuing the
Graphics software architecture
FIG. 10 illustrates an exemplary graphics software architecture for data processing system 1000 in accordance with some embodiments. In some embodiments, the software architecture includes a 3D graphics application 1010, an operating system 1020, and at least one processor 1030. In some embodiments, processor 1030 includes a graphics processor 1032 and one or more general purpose processor cores 1034. Graphics application 1010 and operating system 1020 each execute in system memory 1050 of the data processing system.
In some embodiments, 3D graphics application 1010 includes one or more shader programs, including shader instructions 1012. The shader language instructions can be in a high level shader language, such as High Level Shader Language (HLSL) or OpenGL shader language (GLSL). The application also includes executable instructions 1014 in a machine language suitable for execution by the general purpose processor core 1034. The application also includes a graphical object 1016 defined by the vertex data.
In some embodiments, operating system 1020 is a Microsoft® Windows® operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system that uses a variant of the Linux kernel. The operating system 1020 may support a graphics API 1022, such as the Direct3D API, the OpenGL API, or the Vulkan API. When the Direct3D API is in use, operating system 1020 uses a front-end shader compiler 1024 to compile any shader instructions 1012 that employ HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation, or the application may perform shader precompilation. In some embodiments, during compilation of the 3D graphics application 1010, high-level shaders are compiled into low-level shaders. In some embodiments, the shader instructions 1012 are provided in an intermediate form, such as some version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API.
In some embodiments, user mode graphics driver 1026 includes a back-end shader compiler 1027, the back-end shader compiler 1027 to convert shader instructions 1012 into a hardware-specific representation. When the OpenGL API is in use, shader instructions 1012 in the GLSL high-level language are passed to user-mode graphics driver 1026 for compilation. In some embodiments, the user mode graphics driver 1026 uses operating system kernel mode functions 1028 to communicate with the kernel mode graphics driver 1029. In some embodiments, the kernel mode graphics driver 1029 communicates with the graphics processor 1032 to dispatch commands and instructions.
IP core implementation
One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium that represents and/or defines logic within an integrated circuit (such as a processor). For example, a machine-readable medium may include instructions representing various logic within a processor. When read by a machine, the instructions may cause the machine to fabricate logic to perform the techniques described herein. Such representations (referred to as "IP cores") are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities that load the hardware model on the manufacturing machines that manufacture the integrated circuits. The integrated circuit may be fabricated such that the circuit performs the operations described in association with any of the embodiments described herein.
Fig. 11A is a block diagram illustrating an IP core development system 1100, which IP core development system 1100 may be used to fabricate integrated circuits to perform operations, according to an embodiment. The IP core development system 1100 may be used to generate a modular, reusable design that may be incorporated into a larger design or used to build an entire integrated circuit (e.g., an SOC integrated circuit). Design facility 1130 may generate
The RTL design 1115 or equivalent may be further synthesized by the design facility into a hardware model 1120, which hardware model 1120 may employ a Hardware Description Language (HDL) or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. Non-volatile memory 1140 (e.g., a hard disk, flash memory, or any non-volatile storage medium) may be used to store the IP core design for delivery to third
Figure 11B illustrates a cross-sectional side view of an integrated circuit package assembly 1170, according to some embodiments described herein. The integrated circuit package assembly 1170 illustrates an implementation of one or more processor or accelerator devices as described herein. The package assembly 1170 includes a plurality of hardware logic units 1172, 1174 connected to a substrate 1180. The logic 1172, 1174 may be implemented at least partially in configurable logic or fixed function logic hardware, and may include one or more portions of any of the processor core(s), graphics processor(s), or other accelerator devices described herein. Each logic unit 1172, 1174 may be implemented within a semiconductor die and coupled with the substrate 1180 via an interconnect structure 1173. The interconnect structure 1173 may be configured to route electrical signals between the logic 1172, 1174 and the substrate 1180, and may include interconnects such as, but not limited to, bumps or pillars. In some embodiments, the interconnect structure 1173 may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic 1172, 1174. In some embodiments, the substrate 1180 is an epoxy-based laminate substrate. In other embodiments, the package assembly 1170 may include other suitable types of substrates. The package assembly 1170 may be connected to other electrical devices via a package interconnect 1183. The package interconnect 1183 may be coupled to a surface of the substrate 1180 to route electrical signals to other electrical devices, such as a motherboard, other chipset, or a multi-chip module.
In some embodiments, the logic units 1172, 1174 are electrically coupled with a bridge 1182, the bridge 1182 configured to route electrical signals between the logic 1172 and the logic 1174. Bridge 1182 may be a dense interconnect structure that provides routing for electrical signals. The bridge 1182 may include a bridge substrate composed of glass or a suitable semiconductor material. Circuitry may be formed on the bridge substrate to provide chip-to-chip connections between logic 1172 and logic 1174.
Although two logic units 1172, 1174 and a bridge 1182 are illustrated, embodiments described herein may include more or fewer logic units on one or more dies. These one or more dies may be connected by zero or more bridges, as the bridge 1182 may be excluded when logic is included on a single die. Alternatively, multiple dies or logic units may be connected by one or more bridges. Additionally, in other possible configurations (including three-dimensional configurations), multiple logic units, dies, and bridges may be connected together.
Exemplary System-on-chip Integrated Circuit
Fig. 12-14 illustrate an example integrated circuit and associated graphics processor that may be fabricated using one or more IP cores according to various embodiments described herein. Other logic and circuitry may be included in addition to those illustrated, including additional graphics processor/cores, peripheral interface controllers, or general purpose processor cores.
Fig. 12 is a block diagram illustrating an example system-on-chip
Fig. 13A-13B are block diagrams illustrating an exemplary graphics processor for use within a SoC, according to embodiments described herein. Fig. 13A illustrates an
As shown in FIG. 13A,
As shown in FIG. 13B,
FIGS. 14A-14B illustrate additional exemplary graphics processor logic, according to embodiments described herein. FIG. 14A illustrates a graphics core 1400, which graphics core 1400 may be included within
As shown in fig. 14A, graphics core 1400 includes a shared instruction cache 1402, a texture unit 1418, and a cache/shared memory 1420 that are common to the execution resources within graphics core 1400. Graphics core 1400 may include multiple slices 1401A-1401N or partitions for each core, and a graphics processor may include multiple instances of graphics core 1400. The slices 1401A-1401N may include support logic that includes a local instruction cache 1404A-1404N, a thread scheduler 1406A-1406N, a thread dispatcher 1408A-1408N, and a set of registers 1410A-1410N. To perform logic operations, slices 1401A-1401N may include a set of additional function units (AFUs 1412A-1412N), floating point units (FPUs 1414A-1414N), integer arithmetic logic units (ALUs 1416A-1416N), address calculation units (ACUs 1413A-1413N), double precision floating point units (DPFPUs 1415A-1415N), and matrix processing units (MPUs 1417A-1417N).
Some of these compute units operate at a specific precision. For example, FPUs 1414A-1414N may perform single-precision (32-bit) and half-precision (16-bit) floating point operations, while DPFPUs 1415A-1415N perform double-precision (64-bit) floating point operations. The ALUs 1416A-1416N can perform variable precision integer operations at 8-bit, 16-bit, and 32-bit precision, and can be configured for mixed precision operations. The MPUs 1417A-1417N may also be configured for mixed precision matrix operations, including half-precision floating point operations and 8-bit integer operations. The MPUs 1417A-1417N may perform a wide variety of matrix operations to accelerate machine learning application frameworks, including enabling support for accelerated general matrix-matrix multiplication (GEMM). AFUs 1412A-1412N can perform additional logic operations not supported by the floating point units or integer units, including trigonometric operations (e.g., sine, cosine, etc.).
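A rough NumPy emulation of the mixed-precision GEMM pattern the MPUs accelerate (half-precision inputs with a wider accumulator); this sketches only the numerical pattern, not the hardware datapath:

```python
import numpy as np

def mixed_precision_gemm(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Multiply half-precision matrices, accumulating in single precision,
    as in the mixed-precision matrix operations described above."""
    assert a.dtype == np.float16 and b.dtype == np.float16
    return a.astype(np.float32) @ b.astype(np.float32)

a = np.random.rand(4, 8).astype(np.float16)
b = np.random.rand(8, 4).astype(np.float16)
c = mixed_precision_gemm(a, b)   # float32 result with reduced rounding error
```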
As shown in fig. 14B, a general-purpose graphics processing unit (GPGPU) 1430 may be configured to enable highly parallel compute operations to be performed by an array of graphics processing units. Additionally, the GPGPU 1430 may be linked directly to other instances of the GPGPU to create multi-GPU clusters, thereby improving training speed, especially for deep neural networks. The GPGPU 1430 includes a host interface 1432 for enabling a connection to a host processor. In one embodiment, host interface 1432 is a PCI Express interface. However, the host interface may also be a vendor-specific communication interface or communication fabric. The GPGPU 1430 receives commands from the host processor and uses a global scheduler 1434 to distribute the execution threads associated with those commands to a set of compute clusters 1436A-1436H. The compute clusters 1436A-1436H share a cache memory 1438. The cache memory 1438 may act as a higher level cache for cache memories within the compute clusters 1436A-1436H.
The GPGPU 1430 includes memory 1434A-1434B coupled with the compute clusters 1436A-1436H via a set of memory controllers 1442A-1442B. In various embodiments, memory 1434A-1434B may include various types of memory devices, including Dynamic Random Access Memory (DRAM) or graphics random access memory, such as Synchronous Graphics Random Access Memory (SGRAM), including Graphics Double Data Rate (GDDR) memory.
In one embodiment, compute clusters 1436A-1436H each include a set of graphics cores, such as graphics core 1400 of fig. 14A, which may include multiple types of integer and floating point logic units that can perform compute operations at a range of precisions, including those suited for machine learning computations. For example, and in one embodiment, at least a subset of the floating point units in each of the compute clusters 1436A-1436H may be configured to perform 16-bit or 32-bit floating point operations, while a different subset of the floating point units may be configured to perform 64-bit floating point operations.
Multiple instances of the GPGPU 1430 may be configured to operate as a compute cluster. The communication mechanism used by the compute cluster for synchronization and data exchange varies across embodiments. In one embodiment, multiple instances of the GPGPU 1430 communicate through the host interface 1432. In one embodiment, the GPGPU 1430 includes an I/O hub 1439 that couples the GPGPU 1430 with a GPU link 1440, the GPU link 1440 enabling direct connections to other instances of the GPGPU. In one embodiment, the GPU link 1440 is coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of the GPGPU 1430. In one embodiment, the GPU link 1440 couples with a high speed interconnect to transmit data to and receive data from other GPGPUs or parallel processors. In one embodiment, multiple instances of the GPGPU 1430 are located in separate data processing systems and communicate via a network device that is accessible via the host interface 1432. In one embodiment, the GPU link 1440 may be configured to enable a connection to a host processor in addition to, or as an alternative to, the host interface 1432.
While the illustrated configuration of the GPGPU 1430 may be configured to train neural networks, one embodiment provides an alternative configuration of the GPGPU 1430 that may be configured for deployment within a high performance or low power inferencing platform. In an inferencing configuration, the GPGPU 1430 includes fewer of the compute clusters 1436A-1436H relative to the training configuration. Additionally, the memory technology associated with memory 1434A-1434B may differ between inferencing and training configurations, with higher bandwidth memory technologies devoted to training configurations. In one embodiment, the inferencing configuration of the GPGPU 1430 can support inferencing-specific instructions. For example, an inferencing configuration can provide support for one or more 8-bit integer dot product instructions, which are commonly used during inferencing operations for deployed neural networks.
FIG. 15 illustrates one embodiment of a computing device 1500. Computing device 1500 (e.g., a smart wearable device, a Virtual Reality (VR) device, a Head Mounted Display (HMD), a mobile computer, an internet of things (IoT) device, a laptop computer, a desktop computer, a server computer, etc.) may be the same as
Computing device 1500 may include any number and type of communication devices, such as large computing systems (e.g., server computers, desktop computers, etc.), and may further include set-top boxes (e.g., Internet-based cable television set-top boxes, etc.), Global Positioning System (GPS)-based devices, etc. Computing device 1500 may include mobile computing devices serving as communication devices, such as cellular phones including smartphones, personal digital assistants (PDAs), tablet computers, laptop computers, e-readers, smart televisions, television platforms, wearable devices (e.g., glasses, watches, bracelets, smartcards, jewelry, clothing items, etc.), media players, and the like. For example, in one embodiment, computing device 1500 may include a mobile computing device employing a computer platform hosting an integrated circuit ("IC"), such as a system on a chip ("SoC" or "SOC"), that integrates various hardware and/or software components of computing device 1500 on a single chip.
As shown, in one embodiment, computing device 1500 may include any number and type of hardware and/or software components, such as, but not limited to, a GPU 1514, a graphics driver (also referred to as "GPU driver," "graphics driver logic," "driver logic," user-mode driver (UMD), UMD, user-mode driver framework (UMDF), UMDF, or simply "driver") 1516, a CPU 1512, memory 1508, network devices, drivers, and the like, as well as input/output (I/O) sources 1504, such as touchscreens, touch panels, touchpads, virtual or regular keyboards, virtual or regular mice, ports, connectors, and the like.
Computing device 1500 can include an operating system (OS) 1506 that serves as an interface between the hardware and/or physical resources of the computing device 1500 and a user. It is contemplated that CPU 1512 may include one or more processors and GPU 1514 may include one or more graphics processors.
It should be noted that throughout this document, terms such as "node," "computing node," "server device," "cloud computer," "cloud server computer," "machine," "host," "device," "computing device," "computer," "computing system," and the like are used interchangeably. It should be further noted that throughout this document, terms such as "application," "software application," "program," "software program," "package," "software package," and the like may be used interchangeably. Also, terms such as "job," "input," "request," "message," and the like may be used interchangeably throughout this document.
It is contemplated, and as further described with reference to FIGS. 1-14, that some processes of the graphics pipeline described above are implemented in software, while the remaining processes are implemented in hardware. The graphics pipeline may be implemented in a graphics coprocessor design, where CPU 1512 is designed to work with GPU 1514, which may be included in or co-located with CPU 1512. In one embodiment, GPU 1514 may employ any number and type of conventional software and hardware logic for performing conventional functions associated with graphics rendering, as well as novel software and hardware logic for performing any number and type of instructions.
As described above, memory 1508 may include random access memory (RAM) including an application database having object information. A memory controller hub may access the data in the RAM and forward it to GPU 1514 for graphics pipeline processing. The RAM may include double data rate RAM (DDR RAM), extended data output RAM (EDO RAM), and the like. CPU 1512 interacts with the hardware graphics pipeline to share graphics pipeline functionality.
The processed data is stored in buffers in the hardware graphics pipeline, and state information is stored in memory 1508. The resulting image is then passed to an I/O source 1504, such as a display component for displaying the image. It is contemplated that the display device may be various types of display devices for displaying information to a user, such as a Cathode Ray Tube (CRT), a Thin Film Transistor (TFT), a Liquid Crystal Display (LCD), an Organic Light Emitting Diode (OLED) array, and the like.
Memory 1508 may include a pre-allocated region of buffers (e.g., frame buffers); however, one of ordinary skill in the art will appreciate that embodiments are not so limited and that any memory accessible to the lower-level graphics pipeline may be used. Computing device 1500 may further include a platform controller hub (PCH) 130, as referenced in FIG. 1, one or more I/O sources 1504, and the like.
CPU 1512 may include one or more processors for executing instructions to perform whatever software routines the computing system implements. Instructions frequently involve some operation performed on data. Both data and instructions may be stored in system memory 1508 and any associated caches. Caches are typically designed to have shorter latency than system memory 1508; for example, a cache may be integrated onto the same silicon chip(s) as the processor(s) and/or constructed with faster static RAM (SRAM) cells, while system memory 1508 may be constructed with slower dynamic RAM (DRAM) cells. By tending to store more frequently used instructions and data in the cache, as opposed to system memory 1508, the overall performance efficiency of computing device 1500 improves. It is contemplated that in some embodiments, GPU 1514 may exist as part of CPU 1512 (such as part of a physical CPU package), in which case memory 1508 may be shared by, or kept separate from, CPU 1512 and GPU 1514.
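To make the latency point above concrete, the following host-side sketch (plain C++, with arbitrary illustrative sizes) contrasts a traversal that reuses cache lines with one that does not. It demonstrates the general caching principle, not anything specific to computing device 1500.

    // Sketch: cache-friendly vs. cache-unfriendly traversal of a matrix.
    // Row-major order walks sequential addresses and uses each fetched
    // cache line fully; column-major order strides across lines.
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const int N = 4096;
        std::vector<int> m(N * N, 1);
        long long sum = 0;

        auto t0 = std::chrono::steady_clock::now();
        for (int r = 0; r < N; ++r)       // row-major: sequential addresses
            for (int c = 0; c < N; ++c)
                sum += m[r * N + c];
        auto t1 = std::chrono::steady_clock::now();
        for (int c = 0; c < N; ++c)       // column-major: strided addresses
            for (int r = 0; r < N; ++r)
                sum += m[r * N + c];
        auto t2 = std::chrono::steady_clock::now();

        printf("row-major %lld us, column-major %lld us (sum=%lld)\n",
               (long long)std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count(),
               (long long)std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count(),
               sum);
        return 0;
    }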
System memory 1508 may be made available to other components within computing device 1500. For example, any data (e.g., input graphics data) received from various interfaces to computing device 1500 (e.g., keyboard and mouse, printer port, local area network (LAN) port, modem port, etc.), or retrieved from an internal storage element of computing device 1500 (e.g., a hard disk drive) during execution of a software program, is often temporarily queued into system memory 1508 before being operated on by the one or more processors. Similarly, data that a software program determines should be sent from computing device 1500 to an external entity through one of the computing system interfaces, or stored into an internal storage element, is often temporarily queued in system memory 1508 before it is transmitted or stored.
Further, for example, the PCH may be used to ensure that such data is properly passed between system memory 1508 and its appropriate corresponding computing system interface (and internal storage device, if the computing system is so designed), and may have bidirectional point-to-point links between itself and the illustrated I/O sources/devices 1504. Similarly, the MCH may be used to manage the various competing requests for access to system memory 1508 that may arise in close temporal proximity to one another from CPU 1512, GPU 1514, the interfaces, and the internal storage elements.
The I/O sources 1504 may include one or more I/O devices implemented to transfer data to and/or from computing device 1500 (e.g., network adapters), or implemented for large-scale non-volatile storage within computing device 1500 (e.g., hard disk drives). User input devices, including alphanumeric and other keys, may be used to communicate information and command selections to GPU 1514. Another type of user input device is a cursor control, such as a mouse, a trackball, a touchscreen, a touchpad, or cursor direction keys, for communicating direction information and command selections to GPU 1514 and for controlling cursor movement on a display device. The camera and microphone array of computing device 1500 may be employed to observe gestures, record audio and video, and receive and transmit visual and audio commands.
Computing device 1500 may further include network interface(s) to provide access to networks, such as a LAN, a wide area network (WAN), a metropolitan area network (MAN), a personal area network (PAN), Bluetooth, a cloud network, a mobile network (e.g., 3rd Generation (3G), 4th Generation (4G), etc.), an intranet, the Internet, and the like. The network interface(s) may include, for example, a wireless network interface having an antenna, which may represent one or more antennas. The network interface(s) may also include, for example, a wired network interface that communicates with remote devices via a network cable, which may be, for example, an Ethernet cable, a coaxial cable, a fiber optic cable, a serial cable, or a parallel cable.
The network interface(s) may provide access to a LAN, for example, by conforming to the IEEE 802.11b and/or IEEE 802.11g standards, and/or the wireless network interface may provide access to a personal area network, for example, by conforming to Bluetooth standards. Other wireless network interfaces and/or protocols, including previous and subsequent versions of the standards, may also be supported. In addition to, or instead of, communication via the wireless LAN standards, the network interface(s) may provide wireless communication using, for example, Time Division Multiple Access (TDMA) protocols, Global System for Mobile Communications (GSM) protocols, Code Division Multiple Access (CDMA) protocols, and/or any other type of wireless communication protocol.
The network interface(s) may include one or more communication interfaces, such as a modem, a network interface card, or other well-known interface devices, such as those used for coupling to Ethernet, Token Ring, or other types of physical wired or wireless attachments for purposes of providing a communication link to support a LAN or a WAN, for example. In this manner, the computer system may also be coupled to a number of peripheral devices, clients, control surfaces, consoles, or servers via a conventional network infrastructure, including an intranet or the Internet, for example.
It should be appreciated that, for certain implementations, a system that is less or more equipped than the example described above may be preferred. Therefore, the configuration of computing device 1500 may vary from implementation to implementation depending upon numerous factors, such as price constraints, performance requirements, technological improvements, or other circumstances. Examples of the electronic device or computer system 1500 may include (without limitation) a mobile device, a personal digital assistant, a mobile computing device, a smartphone, a cellular telephone, a handset, a one-way pager, a two-way pager, a messaging device, a computer, a personal computer (PC), a desktop computer, a laptop computer, a notebook computer, a handheld computer, a tablet computer, a server, a server array or server farm, a web server, a network server, an Internet server, a workstation, a minicomputer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, a multiprocessor system, a processor-based system, consumer electronics, programmable consumer electronics, a television, a digital television, a set-top box, a wireless access point, a base station, a subscriber station, a mobile subscriber center, a radio network controller, a router, a hub, a gateway, a bridge, a switch, a machine, or a combination thereof.