Processing unit with mixed precision operation


Published 2021-10-22. Invented by Bin He (何斌), Michael Mantor, and Jiasheng Chen (陈佳升); filed 2020-03-10.

A Graphics Processing Unit (GPU) [100] implements operations [105] with associated opcodes to perform mixed-precision mathematical operations. The GPU includes an Arithmetic Logic Unit (ALU) [104] having different execution paths [106, 107], where each execution path performs a different mixed-precision operation. By implementing mixed-precision operations at the ALU in response to opcodes that specify those operations, the GPU efficiently increases the precision of the specified mathematical operations while reducing execution overhead.

1. A method, comprising:

decoding a first instruction [101] at a processing unit [100] to identify a first multi-precision operation [105]; and

the first multi-precision operation is performed at an Arithmetic Logic Unit (ALU) [104] by performing a first mathematical operation using operands of different precisions.

2. The method of claim 1, wherein the first mathematical operation comprises a floating-point multiply-accumulate operation.

3. The method of claim 2, wherein the floating-point multiply-accumulate operation multiplies two sets of N operands of a first precision and adds operands of a second precision different from the first precision.

4. The method of claim 3, wherein N is at least two.

5. The method of claim 4, wherein N is at least four.

6. The method of claim 1, wherein the first mathematical operation comprises an integer multiply-accumulate operation.

7. The method of claim 1, further comprising:

decoding, at the processing unit, a second instruction to identify a second multi-precision operation that is different from the first multi-precision operation; and

performing the second multi-precision operation at the ALU by performing a second mathematical operation using operands of different precision, the second mathematical operation being different from the first mathematical operation.

8. The method of claim 7, wherein:

performing the first multi-precision operation comprises performing the first multi-precision operation at a first execution path [106] of the ALU; and

performing the second multi-precision operation comprises performing the second multi-precision operation at a second execution path [107] of the ALU, the second execution path being different from the first execution path.

9. A processing unit [100], comprising:

a decode stage [102] for decoding a first instruction [101] to identify a first multi-precision operation [105]; and

an Arithmetic Logic Unit (ALU) [104] for performing the first multi-precision operation by performing a first mathematical operation using operands of different precisions.

10. The processing unit of claim 9, wherein the first mathematical operation comprises a floating-point multiply-accumulate operation.

11. The processing unit of claim 10, wherein the floating-point multiply-accumulate operation multiplies two sets of N operands of a first precision and adds operands of a second precision different from the first precision.

12. The processing unit of claim 11, wherein N is at least two.

13. The processing unit of claim 12, wherein N is at least four.

14. The processing unit of claim 9, wherein the first mathematical operation comprises an integer multiply-accumulate operation.

15. The processing unit of claim 9, wherein:

the decode stage is to decode a second instruction to identify a second multi-precision operation different from the first multi-precision operation; and

the ALU is to perform the second multi-precision operation by performing a second mathematical operation using operands of different precision, the second mathematical operation being different from the first mathematical operation.

16. The processing unit of claim 15, wherein the ALU comprises:

a first execution path [106] for performing said first multi-precision operation; and

a second execution path [107] for performing the second multi-precision operation, the second execution path being different from the first execution path.

17. A processing unit [100], comprising:

an Arithmetic Logic Unit (ALU) [104], the Arithmetic Logic Unit (ALU) comprising:

a first execution path [106] for performing a first multi-precision operation [105] by performing a first mathematical operation using operands of different precisions; and

a second execution path [107] for performing a second multi-precision operation by performing a second mathematical operation using operands of different precisions, the second mathematical operation being different from the first mathematical operation.

18. The processing unit of claim 17, wherein the first mathematical operation comprises a first floating-point multiply-accumulate operation and the second mathematical operation comprises a second floating-point multiply-accumulate operation.

19. The processing unit of claim 18, wherein the first floating-point multiply-accumulate operation multiplies two sets of N operands of a first precision and adds operands having a second precision different from the first precision, and wherein the second floating-point multiply-accumulate operation multiplies two sets of M operands of the first precision and adds operands of the second precision.

20. The processing unit of claim 19, wherein N is at least two and M is at least four.

Background

A processor employs one or more processing units that are specially designed and configured to perform specified operations on behalf of the processor. For example, some processors employ Graphics Processing Units (GPUs) and other parallel processing units that typically implement multiple processing elements (also referred to as processor cores or compute units), which execute multiple instances of a single program on multiple data sets simultaneously to perform graphics, vector, and other computational processing operations. A Central Processing Unit (CPU) of the processor provides commands to the GPU, and a Command Processor (CP) of the GPU decodes the commands into one or more operations. Execution units of the GPU, such as one or more Arithmetic Logic Units (ALUs), then execute those operations to carry out the graphics and vector processing operations.

Drawings

The present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items.

Fig. 1 is a block diagram of a portion of a processing unit employing opcodes for mixed-precision operations, according to some embodiments.

Fig. 2 is a block diagram of a mixed-precision floating-point execution path of the processing unit of Fig. 1, according to some embodiments.

Fig. 3 is a block diagram of another mixed-precision floating-point execution path of the processing unit of Fig. 1, according to some embodiments.

Fig. 4 is a block diagram of a mixed-precision integer execution path of the processing unit of Fig. 1, according to some embodiments.

Detailed Description

Figs. 1-4 illustrate techniques in which a parallel processing unit, in this example a Graphics Processing Unit (GPU), implements operations with associated opcodes to perform mixed-precision mathematical operations. The GPU includes an Arithmetic Logic Unit (ALU) having different execution paths, where each execution path performs a different mixed-precision operation. By implementing mixed-precision operations at the ALU in response to opcodes that specify those operations, the GPU efficiently increases the precision of the specified mathematical operations while reducing execution overhead.

For example, in executing an instruction, the GPU performs a mathematical operation specified by an opcode associated with the instruction. An opcode indicates the precision of a mathematical operation, at least in part, by specifying the size of the operands used for the mathematical operation. For example, some opcodes specify 16-bit floating-point operations to be performed with 16-bit operands, while other opcodes specify 32-bit operations to be performed with 32-bit operands. Conventionally, all operands used by an operation are of the same size and, therefore, have the same precision. However, for some operations, such as vector mathematical operations in which every operand is constrained to the same size, this convention results in an overall loss of precision. For example, the result of a dot-product operation in which all operands are limited to 16 bits has relatively low precision for some applications. Using the techniques described herein, a GPU performs mathematical operations with mixed-precision operands in response to corresponding opcodes, thereby efficiently supporting increased mathematical precision.
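As an illustrative sketch (not from the patent text; the values and names below are hypothetical), the following Python/NumPy fragment contrasts a dot product accumulated entirely at 16-bit precision with a mixed-precision version that keeps 16-bit inputs but forms products and sums at 32 bits:

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal(4).astype(np.float16)  # 16-bit inputs
    b = rng.standard_normal(4).astype(np.float16)
    c = np.float32(1.0)

    # All-fp16 accumulation: every product and partial sum rounds to 16 bits.
    low = np.float16(c)
    for x, y in zip(a, b):
        low = np.float16(low + x * y)

    # Mixed precision: fp16 inputs, fp32 products and accumulation.
    mixed = c + a.astype(np.float32) @ b.astype(np.float32)

    print(low, mixed)  # the fp32-accumulated result retains more precision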

Fig. 1 illustrates a GPU 100 supporting mixed-precision operations, according to some embodiments. For purposes of description, it is assumed that the GPU 100 is part of a processor that executes a set of instructions (e.g., a computer program) to perform tasks on behalf of an electronic device. Thus, in various embodiments, the GPU 100 is part of an electronic device such as a desktop computer, a notebook computer, a server, a tablet, a smartphone, a game console, and the like. Further, it is assumed that the processor including the GPU 100 includes a Central Processing Unit (CPU) that executes an instruction set.

The GPU 100 is designed and manufactured to perform specified operations on behalf of the CPU. In particular, the GPU 100 performs graphics and vector processing operations on behalf of the CPU. For example, in some embodiments, during execution of instructions, the CPU generates commands associated with graphics and vector processing operations. The CPU provides the commands to the GPU 100, which employs a command processor (not shown) to decode the commands into a set of instructions for execution at the GPU 100.

To facilitate execution of instructions, the GPU 100 includes a decode stage 102 and an ALU 104. In some embodiments, decode stage 102 is part of an instruction pipeline (not shown) that includes additional stages to support instruction execution, including a fetch stage to fetch instructions from an instruction buffer, additional decode stages, execution units other than the ALU 104, and a retire stage to retire executed instructions. The decode stage 102 includes circuitry to decode an instruction (e.g., instruction 101) received from the fetch stage into one or more operations (e.g., operation 105) and to dispatch these operations to one of the execution units depending on the type of operation. In some embodiments, each operation is identified by a corresponding opcode, and the decode stage identifies an execution unit based on the opcode and provides information indicative of the opcode to the execution unit. The execution unit employs the opcode, or information based on the opcode, to determine the type of operation to be performed and performs the indicated operation.

For example, some operations and associated opcodes indicate arithmetic operations. In response to recognizing that a received instruction indicates an arithmetic operation, the decode stage 102 determines the opcode for the operation and provides the opcode, along with other information such as the operands to be used for the arithmetic operation, to the ALU 104. The ALU 104 uses the indicated operands, stored at the register file 110, to perform the operation indicated by the opcode. In some implementations, the operations provided to the ALU 104 indicate both the precision of the operands and the operation to be performed. For example, in some embodiments, decode stage 102 provides one operation (and corresponding opcode) for a 16-bit multiply operation using 16-bit operands and another operation (and corresponding opcode) for a 32-bit multiply operation using 32-bit operands.

Additionally, decode stage 102 generates operations with corresponding opcodes for mixed-precision mathematical operations that employ operands of different sizes. For example, in some embodiments, the decode stage generates, based on the corresponding instruction, a multiply-accumulate (MACC) operation that multiplies operands of one size (e.g., 16 bits) and adds the result to operands of a different size (e.g., 32 bits). In some embodiments, the operations include: 1) a mixed-precision dot-product operation (named DOT4_F32_F16) that multiplies two sets of four 16-bit floating-point operands and adds the multiplication results to each other and to a 32-bit floating-point operand; 2) a mixed-precision dot-product operation (named DOT2_F32_F16) that multiplies two sets of two 16-bit floating-point operands and adds the multiplication results to each other and to a 32-bit floating-point operand; and 3) a mixed-precision dot-product operation (named DOT4_I32_I16) that multiplies two sets of four 16-bit integer operands and adds the multiplication results to each other and to a 32-bit integer operand.

The ALU 104 includes different execution paths to perform each mixed-precision operation. In some embodiments, different execution paths share electronic components or modules, such as registers, adders, multipliers, and the like. In other embodiments, some or all of the different execution paths are independent and do not share arithmetic circuitry or modules. In the depicted implementation, ALU 104 includes a path 106 to perform DOT4_F32_F16 operations, a path 107 to perform DOT2_F32_F16 operations, and a path 108 to perform DOT4_I32_I16 operations. In response to receiving an opcode or other indicator of a mixed-precision operation, the ALU 104 performs the operation using the corresponding execution path and stores the result at a register of the register file 110. In some embodiments, each mixed-precision operation is specified by a single opcode. That is, the ALU 104 does not require multiple opcodes or operations to perform a mixed-precision operation, thereby reducing processing overhead while supporting increased precision.
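As a conceptual sketch only (the opcode encodings and the dispatch mechanism are not detailed in the text, so the table and functions below are hypothetical), single-opcode dispatch to per-operation execution paths might be pictured as:

    import numpy as np

    def dot2_f32_f16(a, b, c):
        # Behavioral stand-in for path 107: fp16 inputs, fp32 accumulation.
        return np.float32(c) + a.astype(np.float32) @ b.astype(np.float32)

    EXECUTION_PATHS = {
        "DOT2_F32_F16": dot2_f32_f16,  # path 107
        # "DOT4_F32_F16": ...          # path 106
        # "DOT4_I32_I16": ...          # path 108
    }

    def execute(opcode, sources, dest, register_file):
        # One opcode selects one complete mixed-precision operation; no
        # sequence of separate multiply and add opcodes is required.
        register_file[dest] = EXECUTION_PATHS[opcode](*sources)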

Fig. 2 illustrates the DOT4_F32_F16 execution path 106, according to some embodiments. As described above, the DOT4_F32_F16 operation multiplies a set of four 16-bit operands (named, for descriptive purposes, A0, A1, A2, and A3) with another set of four 16-bit operands (named B0, B1, B2, and B3) and adds the results to a third 32-bit operand (named C). Thus, the DOT4_F32_F16 operation is represented by the following formula:

D.f32 = A.f16[0]*B.f16[0] + A.f16[1]*B.f16[1] + A.f16[2]*B.f16[2] + A.f16[3]*B.f16[3] + C.f32

Further, the function of the DOT4_F32_F16 operation can be represented in pseudocode:
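A minimal Python sketch standing in for that pseudocode (behavioral only: it rounds at each 32-bit accumulation step, whereas the execution path described below performs a single wide fused addition with one final rounding):

    import numpy as np

    def dot4_f32_f16(a, b, c):
        # a, b: four fp16 values each; c: one fp32 value.
        acc = np.float32(c)
        for x, y in zip(a, b):
            acc += np.float32(x) * np.float32(y)  # fp16 inputs, fp32 math
        return acc

    d = dot4_f32_f16(np.float16([1, 2, 3, 4]),
                     np.float16([5, 6, 7, 8]),
                     np.float32(0.5))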

To implement this operation, execution path 106 includes: a set of 16-bit registers (e.g., register 212) to store the operands A0, A1, A2, and A3 and the operands B0, B1, B2, and B3; and a 32-bit register to store the operand C. Each of these operands is represented as a floating-point number that includes a mantissa and an exponent. The execution path 106 also includes a set of adders (e.g., adder 216) and multipliers (e.g., multiplier 218), where each adder adds the exponents of a corresponding pair of A and B operands, and each multiplier multiplies the mantissas of a corresponding pair of A and B operands. Thus, for example, adder 216 adds the exponents of operands A0 and B0, and multiplier 218 multiplies the mantissas of operands A0 and B0.
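The exponent-add/mantissa-multiply split can be illustrated with Python's frexp/ldexp (a conceptual model of one multiply, not the 16-bit hardware format):

    import math

    def fp_multiply(x, y):
        # Split each operand into mantissa and exponent; add the exponents
        # (the adders' role) and multiply the mantissas (the multipliers'
        # role); recombine to form the product.
        m_x, e_x = math.frexp(x)  # x == m_x * 2**e_x, with 0.5 <= |m_x| < 1
        m_y, e_y = math.frexp(y)
        return math.ldexp(m_x * m_y, e_x + e_y)

    assert fp_multiply(6.0, 2.0) == 12.0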

Execution path 106 also includes an exponent compare module 220 and a mantissa ordering and alignment module 222. The exponent compare module 220 receives the summed exponents from the adders and compares the sums to determine any mismatch in the exponents and to determine a temporary exponent value for the subsequently normalized result D.f32, as described below. The exponent compare module 220 provides control signaling indicating the identified mismatch to the mantissa ordering and alignment module 222. The mantissa ordering and alignment module 222 receives the mantissa products from the multipliers and, based on the exponent mismatch information provided by the exponent compare module, shifts the mantissa products such that each shifted mantissa product is represented by the same exponent value. The mantissa ordering and alignment module 222 thus aligns the mantissa products for addition.

To add the aligned mantissa products, execution path 106 includes a fused adder 224. In some embodiments, to improve accuracy, the fused adder 224 adds values having a larger bit size than the A, B, and C operands. For example, in some embodiments, the A and B operands are 16-bit values, the C operand is a 32-bit value, and the mantissa ordering and alignment module 222 generates mantissa values that are 82 bits wide. In these embodiments, the fused adder 224 is capable of adding 82-bit (or larger) values, thereby preventing loss of precision during mantissa addition.
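A minimal sketch of why wide addition avoids intermediate rounding, using Python integers as the "wide" datapath (this models the idea, not the 82-bit format; overflow and subnormal corner cases are ignored):

    import math

    def wide_fused_sum(terms):
        # Decompose each addend into an integer mantissa and an exponent,
        # align all mantissas to a common exponent, and add exactly, so
        # rounding happens only once at the end, as in the fused adder.
        parts = [math.frexp(t) for t in terms if t != 0.0]
        if not parts:
            return 0.0
        ints = [(int(m * (1 << 53)), e - 53) for m, e in parts]
        e_min = min(e for _, e in ints)
        total = sum(m << (e - e_min) for m, e in ints)  # exact wide addition
        return math.ldexp(total, e_min)                 # single final rounding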

The fused adder 224 adds the mantissa values to generate a temporary value for the mantissa of D.f32 and provides the temporary mantissa value to a normalization module 226, which normalizes the temporary mantissa value. For example, in some embodiments, normalization module 226 shifts the temporary mantissa value to remove any leading zeros in the mantissa. In some embodiments, normalization module 226 adjusts the temporary mantissa to cause the integer portion of the temporary mantissa to be a specified value (e.g., 1). Based on the adjustments made to the mantissa, the normalization module adjusts the temporary exponent value provided by the exponent compare module 220 to preserve the overall value of the result. In addition, the normalization module sets sticky bits for the mantissa based on mantissa bits 229 received from the mantissa ordering and alignment module 222.

The normalization module 226 provides the adjusted mantissa value and the exponent value of D.f32 to the rounding module 228. The rounding module 228 rounds the mantissa value based on a specified rounding rule, such as rounding D.f32 to the nearest even value, thereby generating the final value of D.f32. The rounding module 228 provides the final D.f32 value to the register file 110 for storage at the register indicated by the received operation.

Fig. 3 illustrates the DOT2_F32_F16 execution path 107, according to some embodiments. As described above, the DOT2_F32_F16 operation multiplies a set of two 16-bit operands (named, for descriptive purposes, A0 and A1) with another set of two 16-bit operands (named B0 and B1) and adds the results to a third 32-bit operand (named C). Thus, the DOT2_F32_F16 operation is represented by the following formula:

D.f32 = A.f16[0]*B.f16[0] + A.f16[1]*B.f16[1] + C.f32

To accomplish this, execution path 107 includes a set of 32-bit registers (registers 320, 321, and 323) to store the operands A0, A1, B0, and B1 and the operand C. In some embodiments, operands are stored in different ones of registers 320, 321, and 323, depending on the particular instruction or operation being performed. For example, for one instance of the DOT2_F32_F16 operation, register 320 stores the A operands, register 321 stores the B operands, and register 323 stores the C operand. For another instance of the DOT2_F32_F16 operation, register 320 stores the C operand, register 321 stores the B operands, and register 323 stores the A operands. Furthermore, for different instances of the DOT2_F32_F16 operation, different portions of a 32-bit register store different ones of the 16-bit operands. For example, for some instances the A0 operand is stored in the upper 16 bits of one of registers 320, 321, and 323, while for other instances the A0 operand is stored in the lower 16 bits of one of those registers. The execution path 107 also includes an operand selection module to select operands from registers 320, 321, and 323 based on control information provided by the received operation.
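To make the operand packing concrete, here is a hypothetical sketch of two 16-bit floating-point values sharing one 32-bit register word (which half holds which operand is an assumption; the text only states that the placement varies by instruction instance):

    import numpy as np

    def pack_two_f16(lo, hi):
        # lo -> bits [15:0], hi -> bits [31:16] (hypothetical layout)
        bits = np.array([lo, hi], dtype=np.float16).view(np.uint16)
        return (int(bits[1]) << 16) | int(bits[0])

    def unpack_two_f16(word):
        halves = np.array([word & 0xFFFF, (word >> 16) & 0xFFFF], dtype=np.uint16)
        lo, hi = halves.view(np.float16)
        return float(lo), float(hi)

    word = pack_two_f16(np.float16(1.5), np.float16(-2.0))
    assert unpack_two_f16(word) == (1.5, -2.0)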

Each of the operands is represented as a floating-point number comprising a mantissa and an exponent. The execution path 107 includes a set of pre-normalization modules (e.g., pre-normalization module 324) to pre-normalize the 16-bit operands A0, A1, B0, and B1 by, for example, converting each 16-bit operand into a 32-bit operand (such as by converting the exponent value of the 16-bit operand). In addition, execution path 107 includes a denormal flush module 326 that flushes the value of C to zero when the C operand is a denormal (non-normalized) value.

To multiply the A and B operands, execution path 107 includes a set of AND gates (e.g., AND gate 328), a set of adders (e.g., adder 330), and a set of multipliers (e.g., multiplier 332). Each AND gate performs a logical AND operation on the sign bits of a corresponding pair of pre-normalized A and B operands to generate a sign bit for the corresponding multiplication operation. Thus, for example, AND gate 328 performs an AND operation on the sign bits of the pre-normalized operands A0 and B0 to generate the sign bit for the A0*B0 operation. Each adder adds the exponents of a corresponding pair of pre-normalized A and B operands, and each multiplier multiplies the mantissas of a corresponding pair of pre-normalized A and B operands. Thus, for example, adder 330 adds the exponents of the pre-normalized operands A0 and B0, and multiplier 332 multiplies the mantissas of the pre-normalized operands A0 and B0.

To add the products generated by the AND gates, multipliers, and adders, execution path 107 includes a fused adder 334. In some embodiments, to improve accuracy, the fused adder 334 adds values having a larger bit size than the A, B, and C operands. For example, in some embodiments, the A and B operands are 16-bit values, the C operand is a 32-bit value, the adders generate 7-bit exponents, and the multipliers generate 22-bit mantissa products. In these embodiments, the fused adder 334 is capable of adding 52-bit values, thereby preventing loss of precision during mantissa addition.

The fused adder 334 adds the mantissa product values to generate a temporary value for the mantissa of D.f32 and provides the temporary mantissa value and the exponent value to the normalization module 336, which normalizes the temporary D.f32 value. For example, in some embodiments, normalization module 336 shifts the temporary mantissa value to remove any leading zeros in the mantissa. In some implementations, the normalization module 336 adjusts the temporary mantissa to cause the integer portion of the temporary mantissa to be a specified value (e.g., 1). Based on the adjustments made to the mantissa, the normalization module adjusts the temporary exponent value to preserve the overall value of D.f32.

The normalization module 336 provides the adjusted mantissa value and the exponent value of D.f32 to the rounding module 338. The rounding module 338 rounds the mantissa value based on a specified rounding rule, such as rounding D.f32 to the nearest even value, thereby generating the final value of D.f32. The rounding module 338 provides the final D.f32 value to the register file 110 for storage.

Fig. 4 illustrates the DOT2_I32_I16 execution path 108, according to some embodiments. The DOT2_I32_I16 operation multiplies a set of two 16-bit integer operands (named, for descriptive purposes, A0 and A1) with another set of two 16-bit integer operands (named B0 and B1) and adds the results to a third 32-bit integer operand (named C). Thus, the DOT2_I32_I16 operation is represented by the following formula:

D.i32 = A.i16[0]*B.i16[0] + A.i16[1]*B.i16[1] + C.i32

To accomplish this, the execution path 108 includes a set of 32-bit registers (registers 440, 441, and 443) to store the operands A0, A1, B0, and B1 and the operand C. In some embodiments, operands are stored in different ones of registers 440, 441, and 443, depending on the particular instruction or operation being executed, similar to execution path 107 described above with respect to Fig. 3. The execution path 108 also includes an operand selection module to select operands from registers 440, 441, and 443 based on control information provided by the received operation.

To multiply the A and B operands, execution path 108 includes multipliers 444 and 446. Each of multipliers 444 and 446 multiplies a corresponding pair of 16-bit operands to generate a 32-bit product. Execution path 108 also includes a 32-bit adder that adds the products generated by multipliers 444 and 446 to each other and to the C operand to generate a temporary value of D.i32. Execution path 108 includes a saturation module 450 that receives the temporary value of D.i32 and a clamp value (named CLMP). The saturation module 450 compares the temporary D.i32 value to the CLMP value. The saturation module 450 sets the final D.i32 value to the CLMP value in response to the temporary D.i32 value exceeding the CLMP value, and otherwise sets the final D.i32 value to the temporary D.i32 value. Saturation module 450 provides the final D.i32 value to register file 110 for storage.
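The saturation step can be sketched as follows (behavioral: Python integers do not wrap at 32 bits, and only the upper clamp described above is modeled; a hardware implementation would typically also bound the negative range):

    I32_MAX = 2**31 - 1  # hypothetical default CLMP value

    def dot2_i32_i16(a, b, c, clmp=I32_MAX):
        # Two 16-bit integer products accumulated with a 32-bit addend,
        # then clamped at CLMP by the saturation step.
        tmp = a[0] * b[0] + a[1] * b[1] + c  # temporary D.i32
        return clmp if tmp > clmp else tmp   # saturate

    assert dot2_i32_i16([32767, 32767], [32767, 32767], 10**6) == I32_MAX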

As described herein, in some embodiments, a method comprises: decoding, at a processing unit, a first instruction to identify a first multi-precision operation; and performing a first multi-precision operation at an Arithmetic Logic Unit (ALU) by performing a first mathematical operation using operands of different precisions. In one aspect, the first mathematical operation comprises a floating-point multiply-accumulate operation. In another aspect, a floating-point multiply-accumulate operation multiplies two sets of N operands of a first precision and adds operands of a second precision different from the first precision. In one aspect, N is at least two. In another aspect, N is at least four.

In one aspect, the first mathematical operation comprises an integer multiply-accumulate operation. In another aspect, the method comprises: decoding, at the processing unit, a second instruction to identify a second multi-precision operation that is different from the first multi-precision operation; and performing the second multi-precision operation at the ALU by performing a second mathematical operation using operands of different precisions, the second mathematical operation being different from the first mathematical operation. In another aspect, performing the first multi-precision operation includes performing the first multi-precision operation at a first execution path of the ALU; and performing the second multi-precision operation comprises performing the second multi-precision operation at a second execution path of the ALU that is different from the first execution path.

In some embodiments, a processing unit comprises: a decode stage to decode a first instruction to identify a first multi-precision operation; and an Arithmetic Logic Unit (ALU) for performing a first multi-precision operation by performing a first mathematical operation using operands of different precisions. In one aspect, the first mathematical operation comprises a floating-point multiply-accumulate operation. In another aspect, a floating-point multiply-accumulate operation multiplies two sets of N operands of a first precision and adds operands of a second precision different from the first precision. In one aspect, N is at least two. In another aspect, N is at least four.

In one aspect, the first mathematical operation comprises an integer multiply-accumulate operation. In another aspect, the decode stage is to decode a second instruction to identify a second multi-precision operation different from the first multi-precision operation; and the ALU is to perform the second multi-precision operation by performing a second mathematical operation using operands of different precisions, the second mathematical operation being different from the first mathematical operation. In another aspect, the ALU comprises: a first execution path for performing the first multi-precision operation; and a second execution path for performing the second multi-precision operation, the second execution path being different from the first execution path.

In some embodiments, the processing unit comprises: an Arithmetic Logic Unit (ALU) comprising: a first execution path to perform a first multi-precision operation by performing a first mathematical operation using operands of different precisions; and a second execution path to perform a second multi-precision operation by performing a second mathematical operation using operands of different precisions, the second mathematical operation being different from the first mathematical operation. In one aspect, the first mathematical operation comprises a first floating-point multiply-accumulate operation and the second mathematical operation comprises a second floating-point multiply-accumulate operation. In another aspect, the first floating-point multiply-accumulate operation multiplies two sets of N operands of a first precision and adds operands having a second precision different from the first precision, and the second floating-point multiply-accumulate operation multiplies two sets of M operands of the first precision and adds operands of the second precision. In one aspect, N is at least two and M is at least four.

In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. Software includes one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer-readable storage medium. The software includes instructions and certain data which, when executed by one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. Non-transitory computer-readable storage media include, for example, magnetic or optical disk storage, solid-state storage such as flash memory, cache, Random Access Memory (RAM), or one or more other non-volatile memory devices, and so forth. Executable instructions stored on a non-transitory computer-readable storage medium may take the form of source code, assembly language code, object code, or other instruction formats that are interpreted or otherwise executed by one or more processors.

It should be noted that not all of the activities or elements described above in the general description are required, that a portion of a particular activity or apparatus may not be required, and that one or more other activities may be performed, or that elements other than those described may be included. Further, the order in which activities are listed is not necessarily the order in which the activities are performed. Moreover, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure.

Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or feature of any or all the claims. Moreover, the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below.
