Neural network processing device, control method, and computing system

Publication No.: 1525399 · Publication date: 2020-02-11

Abstract: The technology "Neural network processing device, control method, and computing system" was created by 杨康, 李鹏, 韩峰, and 谷骞 on 2018-11-28. Its main content is as follows: Provided are a neural network processing apparatus, a control method, and a computing system. The neural network processing apparatus includes: a computing circuit; and a control circuit that, according to one target instruction, controls the computing circuit to perform the computations corresponding to at least two layers of a neural network. Because one target instruction drives the computation of at least two layers, the proportion of control signals is reduced while the flexibility of the neural network processing device is preserved, which saves power consumption and chip area and thereby improves the performance of the device.

1. A neural network processing apparatus, comprising:

a computing circuit;

a control circuit configured to control, according to one target instruction, the computing circuit to perform computations corresponding to at least two layers of a neural network.

2. The neural network processing device of claim 1, further comprising:

an input interface configured to read the target instruction from an external memory.

3. The neural network processing device according to claim 1 or 2, wherein the target instruction contains configuration parameters of each layer of the neural network.

4. The neural network processing device of any one of claims 1-3, wherein the target instruction includes at least one of the following configuration parameters of the neural network:

configuration parameters of the convolutional layer;

configuration parameters of the pooling layer;

configuration parameters of the activation layer;

configuration parameters of the element-wise operation layer;

configuration parameters of the bias unit.

5. The neural network processing device of any one of claims 1-4, wherein the target instruction further comprises configuration parameters of a buffer in the neural network processing device.

6. The neural network processing apparatus of claim 5, wherein the buffer is configured to store input data and/or weight data of the neural network.

7. The neural network processing device of any one of claims 1-6, further comprising:

a parsing circuit configured to parse the target instruction.

8. The neural network processing device of any one of claims 1-7, wherein the input interface is further configured to receive input data and/or weight data of the neural network.

9. The neural network processing device of any one of claims 1-8, further comprising:

a write control circuit configured to write a computation result of the neural network into an external memory.

10. The neural network processing device of any one of claims 1-9, wherein the computing circuit comprises:

a first computing circuit configured to perform a computation corresponding to a first layer of the neural network; and

a second computing circuit configured to perform a computation corresponding to a second layer of the neural network;

wherein the first layer is a convolutional layer and the second layer comprises at least one of a pooling layer, an activation layer, or an element-wise operation layer.

11. The neural network processing device of claim 10, wherein the control circuit is further configured to control data transfer between the first computing circuit and the second computing circuit such that an output result of the first computing circuit is transferred directly to the second computing circuit without being buffered.

12. A computing system for a neural network, comprising:

the neural network processing device of any one of claims 1-11;

a processor configured to allocate computation tasks to the neural network processing device.

13. The computing system of claim 12, further comprising:

a memory connected to the neural network processing device and configured to store at least one of the following data of the neural network: input data, weight data, and output data.

14. A control method of a neural network processing apparatus, comprising:

controlling, according to one target instruction, a computing circuit in the neural network processing apparatus to perform computations corresponding to at least two layers of a neural network.

15. The control method according to claim 14, characterized by further comprising:

the target instruction is read from an external memory.

16. The control method according to claim 14 or 15, wherein the target instruction contains configuration parameters of each layer of the neural network.

17. The control method according to any one of claims 14 to 16, wherein the target instruction contains at least one of the following configuration parameters of the neural network:

configuration parameters of the convolutional layer;

configuration parameters of the pooling layer;

configuration parameters of the activation layer;

configuration parameters of the element-wise operation layer;

configuration parameters of the bias unit.

18. The control method according to any one of claims 14 to 17, wherein the target instruction further includes configuration parameters of a buffer in the neural network processing device.

19. The control method according to claim 18, wherein the buffer is used for storing input data and/or weight data of the neural network.

20. The control method according to any one of claims 14 to 19, characterized by further comprising:

parsing the target instruction.

21. The control method according to any one of claims 14 to 20, characterized by further comprising:

receiving input data and/or weight data of the neural network; and/or

outputting a computation result of the neural network.

22. The control method according to any one of claims 14 to 21, wherein controlling the computing circuit in the neural network processing apparatus to perform the computations corresponding to at least two layers of the neural network according to the target instruction comprises:

controlling a first computing circuit to perform a computation corresponding to a first layer of the neural network; and

controlling a second computing circuit to perform a computation corresponding to a second layer of the neural network;

wherein the first layer is a convolutional layer and the second layer comprises at least one of a pooling layer, an activation layer, or an element-wise operation layer.

23. The control method according to claim 22, characterized by further comprising:

controlling data transfer between the first computing circuit and the second computing circuit so that an output result of the first computing circuit is transferred directly to the second computing circuit without being buffered.

Technical Field

The present application relates to the field of artificial intelligence, and more particularly, to a neural network processing apparatus, a control method, and a computing system.

Background

With the development of neural network technology, neural network processing devices are being applied more and more widely.

Conventional neural network processing devices offer poor flexibility and poor computing performance, and cannot meet users' performance requirements for neural network computation.

Disclosure of Invention

The application provides a neural network processing device, a control method and a computing system, which can improve the performance of the neural network processing device.

In a first aspect, a neural network processing apparatus is provided, including: a computing circuit; and a control circuit that controls, according to one target instruction, the computing circuit to perform computations corresponding to at least two layers of a neural network.

In a second aspect, a computing system for a neural network is provided, comprising: the neural network processing device of the first aspect; a processor for allocating computational tasks to the neural network processing device.

In a third aspect, a control method of a neural network processing apparatus is provided, including: controlling, according to one target instruction, a computing circuit in the neural network processing apparatus to perform computations corresponding to at least two layers of a neural network.

In a fourth aspect, a computer-readable storage medium is provided, having stored thereon instructions for performing the method of the third aspect.

In a fifth aspect, there is provided a computer program product comprising instructions for performing the method of the third aspect.

Because one target instruction is used to implement the computation of at least two layers of the neural network, the proportion of control signals is reduced while the flexibility of the neural network processing device is preserved, which saves the power consumption and area of the neural network processing device and can thereby improve its performance.

Drawings

Fig. 1 is a schematic structural diagram of a neural network processing device according to an embodiment of the present application.

Fig. 2 is a schematic structural diagram of a neural network processing device according to another embodiment of the present application.

Fig. 3 is a schematic flowchart of a control method of a neural network processing apparatus according to an embodiment of the present application.

FIG. 4 is a diagram of an example of a target instruction provided by an embodiment of the present application.

FIG. 5 is a diagram of another example of a target instruction provided by an embodiment of the present application.

Detailed Description

The neural network processing device provided by the application can also be called a neural network processor or a neural network accelerator. The neural network processing device may be a dedicated neural network processing device, such as a hardware circuit or chip dedicated to neural network computations.

The neural network processing device mentioned in the present application can be used for calculating various types of neural networks, such as Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs).

Neural networks typically have a multi-layer structure. For ease of understanding, the multilayer structure of the neural network is illustrated below by taking a convolutional neural network as an example.

The convolutional neural network may include one or more convolutional layers. In addition, it may also include other layers, such as one or more of the following: a pooling layer, an activation layer, and an element-wise operation layer.

The computational load of a neural network is usually large, so how to perform neural network computation with high performance has become a major concern.

To retain a degree of flexibility, a conventional neural network processing apparatus first configures each layer of the neural network with a plurality of instructions before performing the neural network computation. It will be appreciated that the content of the instructions is related to the desired neural network structure (which may be obtained by pre-training), and that different neural network structures implement different functions. In other words, a user can configure different neural network structures through instructions, thereby realizing different neural network functions.

For example, when a user wants the neural network processing device to perform image localization, each layer of the neural network can be configured with instructions so that the configured neural network structure has an image localization function. For another example, when the user wants the neural network processing device to perform image classification, each layer of the neural network can be configured with instructions so that the configured structure has an image classification function.

With conventional neural network processing devices, each layer of the neural network needs to be configured with one or more instructions. Instructions are typically carried by control signals, so more instructions mean a larger proportion of control signals. Too large a proportion of control signals results in poor overall performance of the neural network processing device: the larger the proportion of control signals, the higher the power consumption of the device, the more complicated its structure, and the larger its area.

In order to improve the overall performance of the neural network processing device, the neural network processing device provided in the embodiments of the present application is described below.

As shown in fig. 1, a neural network processing apparatus 1 provided in an embodiment of the present application may include a calculation circuit 2 and a control circuit 4.

The computing circuit 2 may be used to perform computations corresponding to multiple layers of the neural network. The specific form of the computing circuit 2 depends on the type of neural network, to which the embodiments of the present application are not limited.

Taking a convolutional neural network as an example, as shown in fig. 2, the computing circuit 2 may include a first computing circuit 21 and a second computing circuit 22. The first computing circuit 21 is configured to perform computations corresponding to the convolutional layers of the neural network; the second computing circuit 22 may be configured to perform computations corresponding to other layers of the neural network, such as at least one of the pooling layer, the activation layer, or the element-wise operation layer.

The first computing circuit 21 may include, for example, a processing engine array and a Network On Chip (NOC).

The processing engine array may be referred to as a PE array for short. The PE array may include a plurality of PEs, which may be used to perform the matrix multiplication operations in convolution operations. The PE array may therefore also be referred to as a convolution-specific accelerator.
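
For intuition, the following C sketch shows how a convolution can be lowered to the kind of matrix multiplication a PE array parallelizes, using the common im2col transformation. It is an illustrative software model only; the application does not specify the PE array's actual dataflow, and all dimensions and names here are assumptions.

```c
#include <stdio.h>

/* Illustrative im2col lowering: a KxK convolution over an HxW input
 * becomes a product of a 1x(K*K) kernel row with a (K*K)x(OH*OW)
 * patch matrix -- the matrix multiplication a PE array can spread
 * across its processing engines. All dimensions are assumptions. */
#define H  4
#define W  4
#define K  3
#define OH (H - K + 1)
#define OW (W - K + 1)

static void im2col(float in[H][W], float col[K * K][OH * OW]) {
    for (int oy = 0; oy < OH; oy++)
        for (int ox = 0; ox < OW; ox++)
            for (int ky = 0; ky < K; ky++)
                for (int kx = 0; kx < K; kx++)
                    col[ky * K + kx][oy * OW + ox] = in[oy + ky][ox + kx];
}

static void kernel_times_col(const float kernel[K * K],
                             float col[K * K][OH * OW],
                             float out[OH * OW]) {
    for (int c = 0; c < OH * OW; c++) {
        float acc = 0.0f;            /* one multiply-accumulate chain per PE */
        for (int r = 0; r < K * K; r++)
            acc += kernel[r] * col[r][c];
        out[c] = acc;
    }
}

int main(void) {
    float in[H][W], col[K * K][OH * OW], out[OH * OW];
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            in[y][x] = (float)(y * W + x);
    /* "identity" kernel: picks the center of each 3x3 window */
    const float kernel[K * K] = {0, 0, 0, 0, 1, 0, 0, 0, 0};
    im2col(in, col);
    kernel_times_col(kernel, col, out);
    printf("out[0] = %.1f\n", out[0]);   /* prints 5.0, i.e. in[1][1] */
    return 0;
}
```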

The NOC can be used to implement communication and control between the PE array and the outside. For example, the computation objects and the computation timing of each PE in the PE array may be controlled externally through the NOC.

The second computing circuit 22 may be used to implement one or more of a bias operation, an activation operation, and a pooling operation. As a possible implementation, the second computing circuit 22 may include at least one of the following circuits: a bias circuit for implementing bias operations, an activation circuit for implementing activation operations, and a pooling circuit for implementing pooling operations.

It should be understood that the first computing circuit 21 and the second computing circuit 22 shown in fig. 2 are only one possible implementation, and the embodiments of the present application are not limited thereto; other implementations are possible. For example, the first computing circuit 21 may consist only of a plurality of multiply-accumulate units, and the second computing circuit 22 may include circuitry for implementing the computations corresponding to the element-wise operation layer.

It should be noted that the above description takes a convolutional neural network as an example; when the neural network processing apparatus 1 is used to perform computations of other types of neural networks (e.g., a recurrent neural network), the computing circuit 2 may adopt a completely different implementation.

The control circuit 4 may control the computing circuit 2 to perform the neural network computation according to instructions. Unlike the control scheme of conventional neural network processing apparatuses, in the embodiments of the present application the control circuit 4 may control the computing circuit 2 to perform the computations corresponding to "at least two layers" of the neural network according to "one" received target instruction.

The embodiments of the present application use "one" target instruction to implement the computation of at least two layers of the neural network, which reduces the proportion of control signals while preserving the flexibility of the neural network processing device and saves its power consumption and area. The neural network processing device provided by the embodiments of the present application is therefore better suited to high-performance neural network computation.

The target instruction may be used to configure at least two layers of the neural network, and thus may contain configuration parameters of those layers. The configuration parameters in the target instruction may be used to indicate the computation or implementation manner of the at least two layers.

The at least two layers configured by the target instruction may be layers with the same function or a combination of layers with different functions. Taking a convolutional neural network as an example, the at least two layers may be two convolutional layers, or a combination of a convolutional layer and one or more of the other layers (e.g., a pooling layer, an activation layer, an element-wise operation layer). Similarly, taking a recurrent neural network as an example, the at least two layers may be any combination of its input layer, hidden layer, and output layer.

Configuring at least two layers of the neural network through the target instruction is in effect configuring the computing circuits that implement the computations corresponding to those layers. Therefore, the target instruction containing the configuration parameters of the at least two layers can also be understood as containing configuration parameters of the computing circuits that perform the corresponding computations. The configuration parameters may be used to instruct the computing circuits how to perform the computations of the neural network layers, such as indicating the data reading, operation, and output modes.

The content of the target instruction may specifically include one or more of the following parameters: configuration parameters of the convolutional layer; configuration parameters of the pooling layer; configuration parameters of the activation layer; configuration parameters of the element-wise operation layer; and configuration parameters of the bias unit.

The configuration parameters of the convolutional layer may be used to configure the convolution circuit in the computing circuit, for example, to configure the computation objects of the computing units in the convolution circuit and the data transfer mode between them, so that the computing units can cooperate to complete the computation of the convolutional layer.

The configuration parameters of the pooling layer may be used to configure the pooling circuit in the computing circuit, such as the type of pooling (max pooling or average pooling) and the size of the pooling window.

The configuration parameters of the activation layer may be used to configure the activation mode, the type of activation function, and so on.

The configuration parameters of the element-wise operation layer may be used to configure how the data input to that layer is operated on. The operation modes may include, for example, element-wise product, element-wise sum, and keeping the largest element, as sketched below.
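
As a minimal sketch (the enumeration below is an assumption for illustration, not the instruction encoding of this application), the selectable modes could be modeled as:

```c
/* Hypothetical element-wise operation modes selected by the
 * configuration parameter of the element-wise operation layer. */
typedef enum { EW_PRODUCT, EW_SUM, EW_MAX } ew_mode_t;

/* Apply the configured mode to one pair of input elements. */
static float ew_apply(ew_mode_t mode, float a, float b) {
    switch (mode) {
    case EW_PRODUCT: return a * b;         /* element-wise product */
    case EW_SUM:     return a + b;         /* element-wise sum */
    case EW_MAX:     return a > b ? a : b; /* keep the largest element */
    }
    return 0.0f;                           /* unreachable for valid modes */
}
```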

The configuration parameters of the bias unit may be used to configure whether a bias is applied to the data, the magnitude of the bias value, and so on.

The target instruction may include multiple fields (configuration domains), each carrying configuration parameters required for the neural network computation, which amounts to serially integrating multiple instructions into one. For example, as shown in FIG. 4, the target instruction 40 may include a first configuration field 42 and a second configuration field 44, where the first configuration field 42 may include configuration parameters of the convolutional layer and the second configuration field 44 may include configuration parameters of the pooling layer. In other words, the convolutional-layer and pooling-layer configuration parameters that would otherwise need to be configured by multiple instructions are integrated into one target instruction. The target instruction thus effectively contains a group of instructions arranged in series, each carried in a different configuration field.

FIG. 5 illustrates another example of a target instruction. As shown in fig. 5, the target instruction 50 may include a first configuration field 51 through a fifth configuration field 55. The first configuration field 51 may contain configuration parameters of the convolutional layer; the second configuration field 52, configuration parameters of the pooling layer; the third configuration field 53, configuration parameters of the activation layer; the fourth configuration field 54, configuration parameters of the element-wise operation layer; and the fifth configuration field 55, configuration parameters of the bias unit. Configuration parameters that would otherwise need to be configured by multiple separate instructions are thus integrated into one target instruction, whose constituent instructions are arranged in series and carried in different configuration fields.
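
One way to picture such a serially packed instruction is as a struct whose members mirror the five configuration fields of FIG. 5. This is a sketch under assumed field names and widths; the application does not define a bit-level encoding.

```c
#include <stdint.h>

/* Illustrative layout of one target instruction in the spirit of
 * FIG. 5: five configuration fields carried serially in a single
 * instruction. Every field name and width here is an assumption. */
typedef struct {
    struct {              /* first field: convolutional layer */
        uint16_t in_channels, out_channels;
        uint8_t  kernel_h, kernel_w;
        uint8_t  stride, padding;
    } conv;
    struct {              /* second field: pooling layer */
        uint8_t  type;    /* 0 = max pooling, 1 = average pooling */
        uint8_t  window_h, window_w;
    } pool;
    struct {              /* third field: activation layer */
        uint8_t  func;    /* activation function selector */
    } act;
    struct {              /* fourth field: element-wise operation layer */
        uint8_t  mode;    /* e.g. product / sum / max, cf. ew_mode_t above */
    } eltwise;
    struct {              /* fifth field: bias unit */
        uint8_t  enable;  /* whether to apply a bias */
        int32_t  value;   /* bias magnitude (fixed-point, assumed) */
    } bias;
} target_instruction_t;
```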

Optionally, as an embodiment, the target instruction may include configuration parameters of every layer of the neural network. In other words, the configuration of the whole neural network can be completed with one target instruction, which greatly reduces the proportion of control signals in the neural network processing device and improves its performance.

The embodiments of the present application do not specifically limit how the target instruction is obtained. As an example, the target instruction may be generated internally by the neural network processing device 1. As another example, the target instruction may be read from an external memory. The external memory may be a memory device (such as a double data rate (DDR) memory) in the same system as the neural network processing device 1.

The neural network processing device 1 may include an input interface and use it to read the target instruction from the external memory 9. In some embodiments, the input interface may be a bus interface. Taking fig. 2 as an example (the input interface is not shown in fig. 2), the input interface may be the connection interface between the neural network processing device 1 and the memory interconnect module 7. The memory interconnect module 7 may serve to connect the neural network processing device 1 with the external memory 9, and in some embodiments may also be integrated inside (i.e., on-chip with) the neural network processing device 1.

Optionally, as shown in fig. 2, in some embodiments the neural network processing device 1 may further include a parsing circuit 5. The parsing circuit 5 may be used to parse the target instruction. For example, assume the target instruction includes a plurality of fields, each configuring a portion of the functionality of the neural network. The parsing circuit 5 can parse each field of the target instruction to obtain the configuration parameters of the neural network, and then distribute the parsed configuration parameters to the corresponding circuits.
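
A behavioral sketch of this parse-and-distribute step is given below, reusing the illustrative target_instruction_t from the earlier sketch; the configure_* hooks are hypothetical stand-ins for writes to each circuit's configuration registers.

```c
/* Hypothetical hooks standing in for configuration-register writes. */
void configure_conv_circuit(const void *field);
void configure_pool_circuit(const void *field);
void configure_act_circuit(const void *field);
void configure_eltwise_circuit(const void *field);
void configure_bias_circuit(const void *field);

/* The parsing circuit walks the configuration fields of one target
 * instruction and distributes each field to the circuit it configures. */
void parse_target_instruction(const target_instruction_t *insn) {
    configure_conv_circuit(&insn->conv);       /* first computing circuit */
    configure_pool_circuit(&insn->pool);       /* pooling circuit */
    configure_act_circuit(&insn->act);         /* activation circuit */
    configure_eltwise_circuit(&insn->eltwise); /* element-wise circuit */
    configure_bias_circuit(&insn->bias);       /* bias circuit */
}
```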

Alternatively, in some implementations, if the configuration parameters in the target instruction are designed as parameters that the functional circuits of the neural network processing device can directly recognize, the parsing circuit may be omitted from the neural network processing device 1. After receiving the target instruction, the neural network processing device 1 may directly distribute the configuration parameters in the target instruction to each functional circuit.

Optionally, in some embodiments, buffers may be provided inside the neural network processing device 1. As shown in fig. 2, the buffers of the neural network processing device 1 may include, for example, a first buffer 61 for input data and a second buffer 62 for weight data.

The input data of the neural network may also be referred to as an input feature map, so the first buffer 61 may also be referred to as an input feature map buffer (IF_BUF for short).

The weight data (sometimes simply called weights) of the neural network may be used to filter the input feature map, so the second buffer 62 may also be referred to as a filter buffer (FILT_BUF for short).

In the above embodiments, the target instruction may further include configuration parameters of the first buffer 61 and the second buffer 62.

The configuration parameters of the first buffer 61 may be used to configure how the first buffer 61 reads the input feature map from the external memory 9. For example, they may be used to configure at least one of the following: the position of the input feature map in the memory 9, the number of input feature maps, the height and width of the input feature maps, and the division (tiling) manner of the input feature map.

The configuration parameters of the second buffer 62 may be used to configure how the second buffer 62 reads the weight data from the external memory 9. For example, they may be used to configure at least one of the following: the location of the weight data in the memory 9, the size of the convolution kernel, and so on.
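
The kinds of information these buffer configuration parameters carry could be grouped into descriptors like the following. All names and field widths are assumptions for illustration; the application lists only the categories of information.

```c
#include <stdint.h>

/* Illustrative IF_BUF descriptor: how the input-feature-map buffer
 * reads from the external memory 9. Field names are assumptions. */
typedef struct {
    uint64_t addr;          /* position of the input feature map in memory */
    uint16_t num_maps;      /* number of input feature maps */
    uint16_t height, width; /* height and width of each feature map */
    uint8_t  tiling;        /* division (tiling) mode of the feature map */
} if_buf_cfg_t;

/* Illustrative FILT_BUF descriptor: how the weight buffer reads
 * weight data from the external memory 9. */
typedef struct {
    uint64_t addr;                /* location of the weight data in memory */
    uint8_t  kernel_h, kernel_w;  /* size of the convolution kernel */
} filt_buf_cfg_t;
```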

Optionally, in some embodiments, the input interface mentioned above may also be used to receive input data (or feature maps) and/or weight data (or weights) of the neural network.

Optionally, in some embodiments, as shown in fig. 2, the neural network processing device 1 may further include a write control circuit 3. The computation result of the neural network can be written to the external memory 9 under the control of the write control circuit 3.

As noted above, the convolutional neural network may include an element-wise operation layer; its corresponding circuitry is not shown in fig. 2. As one possible implementation, the circuitry corresponding to the element-wise operation layer may be integrated in the second computing circuit 22; as another, it may be integrated in the write control circuit 3.

Alternatively, as one embodiment, in the neural network processing apparatus 1, the computing circuits corresponding to the respective layers may transfer intermediate results through a temporary on-chip buffer (such as a random access memory (RAM)).

Alternatively, as another embodiment, as shown in fig. 2, no temporary buffer may be provided between the two computing circuits corresponding to two adjacent layers (such as the first computing circuit 21 and the second computing circuit 22 in fig. 2). In this case, the control circuit 4 may control the data transfer between the first computing circuit 21 and the second computing circuit 22 so that the output result of the first computing circuit 21 is transferred directly to the second computing circuit 22 without being buffered. This control scheme can further reduce the power consumption and area of the neural network processing device 1.
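
In software terms, this unbuffered forwarding is operator fusion: each convolution result is consumed by the next layer as soon as it is produced, so no full intermediate feature map is materialized. The sketch below illustrates the idea for convolution followed by 2x2 max pooling; the dimensions, the pooling choice, and conv_at are assumptions, and this models behavior, not the hardware dataflow.

```c
#include <float.h>

#define CH 6   /* conv output height (assumed) */
#define CW 6   /* conv output width  (assumed) */

/* Placeholder for the first computing circuit: the convolution
 * result at output position (y, x). */
extern float conv_at(int y, int x);

/* Fused conv + 2x2 max pooling: every conv value flows straight into
 * the pooling computation, with no intermediate array playing the
 * role of a temporary buffer between the two circuits. */
void fused_conv_maxpool(float out[CH / 2][CW / 2]) {
    for (int py = 0; py < CH / 2; py++) {
        for (int px = 0; px < CW / 2; px++) {
            float m = -FLT_MAX;
            for (int dy = 0; dy < 2; dy++)
                for (int dx = 0; dx < 2; dx++) {
                    float v = conv_at(2 * py + dy, 2 * px + dx);
                    if (v > m) m = v;   /* pool as soon as produced */
                }
            out[py][px] = m;
        }
    }
}
```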

The embodiment of the application also provides a computing system of the neural network. As shown in fig. 2, the computing system comprises a neural network processing device 1 as mentioned in any of the previous embodiments and a processor 8. The processor 8 may be used to assign computational tasks to the neural network processing device 1. The neural network processing device 1 and the processor 8 can be connected through a bus.

Optionally, the computing system may also include a memory 9. The memory 9 may be connected to the neural network processing device 1. The memory 9 may be used to store at least one of the following data for the neural network: input data, weight data, and output data.

The apparatus embodiment of the present application is described in detail above with reference to fig. 1 to 2, and the method embodiment of the present application is described in detail below with reference to fig. 3. It is to be understood that the description of the method embodiments corresponds to the description of the apparatus embodiments, and therefore reference may be made to the preceding apparatus embodiments for parts which are not described in detail.

The embodiment of the application also provides a control method of the neural network processing device. This control method may be executed by the neural network processing device 1 mentioned above. As shown in fig. 3, the control method may include step S34.

In step S34, according to one target instruction, the computing circuit in the neural network processing device is controlled to perform the computations corresponding to at least two layers of the neural network.

The target instruction may contain configuration parameters of the at least two layers.

Optionally, the method of fig. 3 may further include step S32. In step S32, the target instruction is read from the external memory.

Optionally, the target instruction contains configuration parameters for each layer of the neural network.

Optionally, the target instruction may contain at least one of the following configuration parameters of the neural network: configuration parameters of the convolutional layer; configuration parameters of the pooling layer; configuration parameters of the activation layer; configuration parameters of the element-wise operation layer; and configuration parameters of the bias unit.

Optionally, the target instruction may further include configuration parameters of a buffer in the neural network processing device.

Optionally, the buffer may be used to store input data and/or weight data of the neural network.

Optionally, the method of fig. 3 may further include: parsing the target instruction.

Optionally, the method of fig. 3 may further include: receiving input data and/or weight data of the neural network; and/or outputting a computation result of the neural network.

Optionally, step S34 may include: controlling a first computing circuit to perform a computation corresponding to a first layer of the neural network; and controlling a second computing circuit to perform a computation corresponding to a second layer of the neural network; wherein the first layer is a convolutional layer and the second layer comprises at least one of a pooling layer, an activation layer, or an element-wise operation layer.

Optionally, the method of fig. 3 may further include: controlling the data transfer between the first computing circuit and the second computing circuit so that the output result of the first computing circuit is transferred directly to the second computing circuit without being buffered.

In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid-state drive (SSD)), among others.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.

The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
