DPDK-based data packet processing method and device

Document No.: 1558854 | Publication date: 2020-01-21

Note: This technology, "DPDK-based data packet processing method and device" (一种基于DPDK的数据包处理方法及装置), was designed and created by 许小奎 and 马奥 on 2019-10-17. Abstract: The application provides a DPDK-based data packet processing method and device. The method includes: in a DPDK operating environment, reading data packets from at least one network card and storing each data packet in an mbuf; and adding the mbuf to each of a plurality of pre-created lock-free queues, where each of the lock-free queues is bound to one data packet processing service, so that each service reads the mbuf from its bound queue and processes the data packet it carries. Using the Data Plane Development Kit (DPDK), the application receives data packets from one or more network cards and places them into multiple lock-free shared queues; application-layer services read the packets from their bound queues and process them, so that data packets are shared among multiple services.

1. A DPDK-based data packet processing method is characterized by comprising the following steps:

in a DPDK operating environment, reading a data packet from at least one network card, and storing the data packet in an mbuf;

adding the mbuf to each of a plurality of pre-created lock-free queues, wherein each of the lock-free queues is bound to a data packet processing service, so that the data packet processing service reads the mbuf from its bound lock-free queue and processes the data packet in the mbuf.

2. The method of claim 1, wherein before adding the mbuf to the plurality of pre-created lock-free queues, the method further comprises:

setting a reference count for the data packet, and storing the value of the reference count in the mbuf.

3. The method of claim 2, further comprising:

releasing the mbuf when the value of the reference count in the mbuf is 0.

4. The method of claim 1, wherein prior to reading the data packet from the at least one network card, the method further comprises:

initializing the DPDK to set the working mode of the network card, allocate a memory pool, and create the plurality of lock-free queues.

5. The method of claim 1, wherein the lock-free queue is a ring queue, the method further comprising:

when redundant ring queues exist and there are data packet processing services to be added, selecting, from the redundant ring queues, as many ring queues as there are data packet processing services to be added as target ring queues, and binding each target ring queue to one of the data packet processing services to be added.

6. A DPDK-based packet processing apparatus, comprising:

a reading module, configured to read a data packet from at least one network card in a DPDK operating environment and store the data packet in an mbuf;

a sharing module, configured to add the mbuf to each of a plurality of pre-created lock-free queues, wherein each of the lock-free queues is bound to a data packet processing service, so that the data packet processing service reads the mbuf from its bound lock-free queue and processes the data packet in the mbuf.

7. The apparatus of claim 6, wherein the sharing module is further configured to: set a reference count for the data packet and store the value of the reference count in the mbuf.

8. The apparatus of claim 6, wherein the lock-free queue is a ring queue, and the apparatus further comprises: a service expansion module, configured to: when redundant ring queues exist and there are data packet processing services to be added, select, from the redundant ring queues, as many ring queues as there are services to be added as target ring queues, and bind each target ring queue to one data packet processing service to be added.

9. A storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, performs the method according to any one of claims 1-5.

10. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the method of any of claims 1-5.

Technical Field

The present application relates to the field of computer technologies, and in particular, to a DPDK-based data packet processing method and apparatus.

Background

In network data processing, there are scenarios in which collected network data packets must be handed over to multiple services for processing on the same server. A common approach is to configure a virtual switch with Linux Bridge or Open vSwitch (OVS): several virtual ports are created, data received on a physical port is forwarded to the virtual ports, each service is bound to a different virtual port, and each service reads and processes the data packets from its own virtual port. However, this approach requires creating and configuring a virtual switch for the application, and the repeated copying of data packets degrades transmit/receive performance.

Disclosure of Invention

An object of the embodiments of the present application is to provide a DPDK-based data packet processing method and apparatus that use the DPDK to implement zero-copy sharing of data packets among multiple services without deploying a virtual switch, thereby effectively improving performance.

In a first aspect, an embodiment of the present invention provides a DPDK-based data packet processing method, including: in a DPDK operating environment, reading a data packet from at least one network card and storing the data packet in an mbuf; and adding the mbuf to each of a plurality of pre-created lock-free queues, where each of the lock-free queues is bound to a data packet processing service, so that the data packet processing service reads the mbuf from its bound lock-free queue and processes the data packet in the mbuf.

In the above scheme, after a data packet is read from a network card, it is stored in an mbuf, and the mbuf is enqueued into a plurality of lock-free queues, each bound to a data packet processing service; an upper-layer application service reads the packet through its bound queue. Because the mbuf pointer stored in each lock-free queue points to the same mbuf, the mbuf is shared across all of the queues, so multiple data packet processing services can share the data packet in the mbuf, achieving zero-copy packet sharing among multiple services.

In an optional embodiment, before the mbuf is added to the plurality of pre-created lock-free queues, the method further includes: setting a reference count for the data packet and storing the value of the reference count in the mbuf.

In an optional embodiment, the method further includes: releasing the mbuf when the value of the reference count in the mbuf is 0.

In the above two embodiments, the reference count is set so that the mbuf can be used by a plurality of packet processing services and released after the plurality of packet processing services have finished processing.

In an optional embodiment, before reading the data packet from the at least one network card, the method further includes: initializing the DPDK to set the working mode of the network card, allocate a memory pool, and create the plurality of lock-free queues.

During initialization, the memory pool mechanism lets the DPDK manage memory more easily; in addition, once the plurality of lock-free queues have been created, data packets can be shared by binding the queues to the data packet processing services.

In an optional embodiment, the lock-free queue is a ring queue, and the method further includes: when redundant ring queues exist and there are data packet processing services to be added, selecting from the redundant ring queues as many ring queues as there are services to be added as target ring queues, and binding each target ring queue to one data packet processing service to be added.

In this scheme, the number of data packet processing services can be expanded dynamically: when a new service is to be added, it only needs to be bound to an unused ring queue among the pre-created ring queues, which makes dynamic expansion of services very convenient.

In a second aspect, an embodiment of the present invention provides a DPDK-based data packet processing apparatus, including: a reading module, configured to read a data packet from at least one network card in a DPDK operating environment and store the data packet in an mbuf; and a sharing module, configured to add the mbuf to each of a plurality of pre-created lock-free queues, where each of the lock-free queues is bound to a data packet processing service, so that the data packet processing service reads the mbuf from its bound lock-free queue and processes the data packet in the mbuf.

In an optional embodiment, the sharing module is further configured to: set a reference count for the data packet and store the value of the reference count in the mbuf.

In an optional embodiment, the lock-free queue is a ring queue, and the apparatus further includes: a service expansion module, configured to: when redundant ring queues exist and there are data packet processing services to be added, select from the redundant ring queues as many ring queues as there are services to be added as target ring queues, and bind each target ring queue to one data packet processing service to be added.

In a third aspect, an embodiment of the present invention provides a storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the method according to the foregoing implementation manner of the first aspect.

In a fourth aspect, an embodiment of the present invention provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the method according to embodiments of the first aspect.

Drawings

In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required for the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.

Fig. 1 is a flowchart of a data packet processing method according to an embodiment of the present application;

fig. 2 is a schematic diagram of a data packet processing method according to an embodiment of the present application;

fig. 3 is a schematic diagram of a packet processing apparatus according to an embodiment of the present application;

fig. 4 is a schematic view of an electronic device provided in an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.

The Data Plane Development Kit (DPDK) is a set of libraries and drivers for fast packet processing, developed by Intel together with a number of other companies. It can greatly improve data processing performance and throughput and thereby the efficiency of data plane applications. In the embodiments of the present application, the DPDK is used to provide zero-copy packet sharing for multiple packet processing services in the application layer. The packet processing method provided in this embodiment is described in detail below with reference to the flowchart shown in Fig. 1.

The method comprises the following steps:

s101: and pre-deploying the DPDK environment.

The DPDK environment deployment includes setting up the operating environment and system variables, loading driver modules, binding the one or more network cards from which data packets are to be collected, allocating hugepage memory, and so on.

First, the DPDK target environment must be installed. The created target environment contains all the libraries needed to build client applications for the DPDK runtime, including the DPDK poll-mode drivers and the DPDK environment header files. After the target environment is created, the user may also modify the DPDK configuration by editing the config file in its directory. Before a DPDK program runs, all network cards it will use must be bound to the uio_pci_generic, igb_uio or vfio-pci module. If a port is currently controlled by a Linux kernel driver, it must first be unbound from Linux and then bound to uio_pci_generic, igb_uio or vfio-pci for the DPDK to use. The DPDK continuously reads data packets from the bound network cards in polling mode; network cards that are not bound are ignored by the DPDK poll-mode driver and cannot be read by the DPDK program.

Linux memory pages default to 4 KB per page; the smaller the page and the larger the memory, the more pages are needed and the larger the page tables become. By allocating hugepage memory, the program needs far fewer pages, which effectively improves performance. Hugepages should be allocated at, or soon after, system boot to avoid fragmentation of physical memory. When requesting hugepage memory, the number of pages and the default page size can be specified. For example, to allocate four 1 GB pages (page count 4, page size 1 GB), the kernel boot parameters are: default_hugepagesz=1G hugepages=4.

S102: the DPDK is initialized in advance.

Before the DPDK is run, it is initialized in advance to set the working mode of the network card, allocate a memory pool, create a plurality of lock-free queues, and so on.

Generally, the network cards of a device work in non-promiscuous mode, that is, a network card only accepts data packets whose destination address points to that card. During DPDK initialization, the working mode of the network card is set to promiscuous mode, so that when the DPDK runs, all data packets passing through the network card can be captured by the poll-mode driver. If the network card worked in non-promiscuous mode, it would filter packets by destination address at the port, so most of the packets passing through the card would not enter it and some would be discarded. The network card may be switched to promiscuous mode in any manner, which is not limited in this embodiment.

In order to access data efficiently, the DPDK encapsulates received data in an mbuf (memory buffer) structure, that is, received data packets are encapsulated and stored in mbufs. By calling different interfaces, the DPDK can obtain the data in an mbuf from the memory pool and release the mbuf, which makes memory management simpler. The mbufs are stored in the created memory pool, which avoids the performance overhead of allocating mbuf memory for every packet sent or received. When the memory pool is created, a large contiguous buffer is reserved in the pre-allocated hugepage memory as the pool, and a number of contiguous object elements, namely mbufs, are created in it; these mbufs store the data packets that the DPDK receives from the network cards. Together with the memory pool, a plurality of lock-free queues are created; these lock-free queues are ring queues, i.e. lock-free queues whose entries are linked end to end into a ring.
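By way of illustration only, the following C sketch outlines the initialization described above: EAL start-up, creation of a memory pool and of a set of ring queues, and configuration of one port in promiscuous mode. All names and sizes (init_dpdk, pkt_pool, svc_ring_N, NUM_MBUFS, MBUF_CACHE, NUM_RINGS, RING_SIZE, the use of port 0 with a single RX queue) are assumptions of this sketch, not part of the claimed method.

```c
/* Illustrative initialization sketch; names and sizes are assumptions. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_ring.h>

#define NUM_MBUFS  8191   /* mbufs in the memory pool (assumed)          */
#define MBUF_CACHE 250    /* per-lcore mempool cache size (assumed)      */
#define NUM_RINGS  8      /* pre-created lock-free ring queues (assumed) */
#define RING_SIZE  1024   /* entries per ring, power of two (assumed)    */

static struct rte_mempool *mbuf_pool;
static struct rte_ring *rings[NUM_RINGS];

static int init_dpdk(int argc, char **argv, uint16_t port_id)
{
    if (rte_eal_init(argc, argv) < 0)          /* set up the DPDK runtime */
        return -1;

    /* Memory pool that will hold the mbufs carrying received packets. */
    mbuf_pool = rte_pktmbuf_pool_create("pkt_pool", NUM_MBUFS, MBUF_CACHE,
                                        0, RTE_MBUF_DEFAULT_BUF_SIZE,
                                        rte_socket_id());
    if (mbuf_pool == NULL)
        return -1;

    /* One single-producer/single-consumer ring per (future) service. */
    for (int i = 0; i < NUM_RINGS; i++) {
        char name[32];
        snprintf(name, sizeof(name), "svc_ring_%d", i);
        rings[i] = rte_ring_create(name, RING_SIZE, rte_socket_id(),
                                   RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (rings[i] == NULL)
            return -1;
    }

    /* One RX queue backed by the pool; a TX queue is also set up,
     * following the usual port-initialization skeleton, and the port
     * is switched to promiscuous mode. */
    struct rte_eth_conf port_conf = {0};
    if (rte_eth_dev_configure(port_id, 1, 1, &port_conf) < 0 ||
        rte_eth_rx_queue_setup(port_id, 0, 512,
                               rte_eth_dev_socket_id(port_id), NULL,
                               mbuf_pool) < 0 ||
        rte_eth_tx_queue_setup(port_id, 0, 512,
                               rte_eth_dev_socket_id(port_id), NULL) < 0 ||
        rte_eth_dev_start(port_id) < 0)
        return -1;
    rte_eth_promiscuous_enable(port_id);
    return 0;
}
```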

The above steps S101-S102 are performed in advance, before the packet processing method itself. If the device has already deployed the DPDK operating environment and initialized the DPDK, these steps may be omitted and the following step S103 is performed directly.

S103: the DPDK reads the data packet from at least one network card and stores the data packet in the mbuf.

The DPDK reads data packets from the bound network cards in polling mode and encapsulates each packet in an mbuf structure in the memory pool. All mbufs in the memory pool have the same structure; each consists of a header, a data area and a tail, and the DPDK encapsulates the data packet in the data area of the mbuf.
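Continuing the illustrative sketch above (same includes and globals), the RX step can be sketched as a simple polling loop. Here distribute_pkt() is a hypothetical name for the enqueue step of S104; a sketch of it is given further below.

```c
/* RX polling sketch; distribute_pkt() (defined in a later sketch) is a
 * hypothetical stand-in for the enqueue step of S104. */
static void distribute_pkt(struct rte_mbuf *m);

static void rx_loop(uint16_t port_id)
{
    struct rte_mbuf *bufs[32];

    for (;;) {
        /* Poll up to 32 packets from RX queue 0 of the bound port; each
         * packet returned is already encapsulated in an mbuf that the
         * poll-mode driver took from the memory pool. */
        uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, 32);

        for (uint16_t i = 0; i < nb_rx; i++)
            distribute_pkt(bufs[i]);   /* hand over to the ring queues */
    }
}
```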

S104: The mbuf is added to each of a plurality of pre-created lock-free queues, where each of the lock-free queues is bound to one data packet processing service.

Each service in the application layer that needs to process data packets selects one of the pre-created lock-free queues and binds to it. In practical applications, the DPDK may enqueue the mbuf only into the lock-free queues that are bound to a data packet processing service and skip the idle, unbound queues, or it may enqueue the mbuf into all created lock-free queues; in the latter case there is no need to check at run time whether a queue is bound, which keeps the logic simpler and more general without affecting performance.

What is stored in a lock-free ring queue is a pointer to the mbuf, not a copy of the mbuf's data. The pointers stored in all the ring queues into which the mbuf was enqueued point to the same mbuf; in other words, the mbuf held by all the ring queues is shared and is the same mbuf, and each packet processing service can read the packet stored in the mbuf by accessing its bound ring queue. Because there are multiple ring queues and the packet processing services correspond to them one to one, even if one or more services have read the mbuf and taken its pointer out of their queues, the mbuf pointers stored in the other ring queues are unaffected and the other services can still read the mbuf from their own queues. In this way, multiple packet processing services in the application layer share the packet in the mbuf.
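A sketch of this enqueue step, reusing rings[] and NUM_RINGS from the assumed initialization sketch: only the mbuf pointer is pushed into each ring, so the packet payload is never copied. This version enqueues into every pre-created ring (one of the two options mentioned above); the reference-count adjustment it performs first corresponds to the mechanism described in the following paragraphs.

```c
/* Enqueue the same mbuf pointer into every pre-created ring; only the
 * pointer is stored per ring, the packet data itself is never copied. */
static void distribute_pkt(struct rte_mbuf *m)
{
    /* Raise the reference count to the number of rings before the
     * pointer becomes visible to any consumer (rte_eth_rx_burst()
     * hands over mbufs with a reference count of 1). */
    rte_mbuf_refcnt_update(m, (int16_t)(NUM_RINGS - 1));

    for (int i = 0; i < NUM_RINGS; i++) {
        if (rte_ring_enqueue(rings[i], m) != 0) {
            /* Ring full: this consumer misses the packet, so give back
             * the reference that was reserved for it. */
            rte_pktmbuf_free(m);
        }
    }
}
```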

S105: A plurality of data packet processing services in the application layer read the mbuf from their bound lock-free queues and process the data packets in the mbuf.

The packet processing service mentioned above may be a thread or a process of an application; each packet processing service reads the mbuf through its corresponding lock-free queue, parses it, and processes the packet it carries.
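A sketch of one such service, assumed here to run as a thread whose argument is the ring it is bound to; handle_packet() is a hypothetical placeholder for the application-specific processing. The service only reads the shared packet data and then drops its reference with rte_pktmbuf_free(), which realizes the per-service decrement of the reference count discussed below.

```c
/* One packet-processing service: dequeue mbuf pointers from its bound
 * ring, process the packet data in place, then drop the reference. */
static void handle_packet(const uint8_t *pkt, uint32_t len); /* hypothetical */

static int service_loop(void *arg)
{
    struct rte_ring *my_ring = arg;          /* ring bound to this service */
    void *obj;

    for (;;) {
        if (rte_ring_dequeue(my_ring, &obj) != 0)
            continue;                        /* ring empty: keep polling */

        struct rte_mbuf *m = obj;
        /* Read-only access to the shared packet data in the mbuf. */
        handle_packet(rte_pktmbuf_mtod(m, const uint8_t *),
                      rte_pktmbuf_pkt_len(m));

        /* Decrement the reference count; the mbuf goes back to the
         * memory pool only when the last service has finished with it. */
        rte_pktmbuf_free(m);
    }
    return 0;
}
```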

Before the DPDK enqueues the mbuf into the ring queues, the method further includes: setting, by the DPDK, a reference count for the data packet and storing the value of the reference count in the mbuf. Besides the data packet read from the network card, the mbuf also stores some other information, including the value of this reference count. Each time the mbuf is added to a ring queue, the value of its reference count is increased by 1; each time a packet processing service has read the mbuf from its ring queue and finished processing the data in it, the value is decreased by 1. Thus, if M ring queues are bound to packet processing services, the stored reference count is M once the mbuf has been enqueued to all of them; the mbuf then waits to be read, and after every packet processing service has read and processed the packet in the mbuf, the reference count gradually drops to 0.

When the mbuf is no longer in use, that is, after all the packet processing services have read it, the memory it occupies can be released so that it can store new packets received from the network card, avoiding permanent occupation of mbuf resources. Therefore, when the value of the reference count reaches 0, i.e. after every packet processing service bound to a ring queue has read and processed the packet in the mbuf, the mbuf is released. The purpose of the reference count is to let the mbuf be used by multiple packet processing services and be released only after all of them have finished processing.

Fig. 2 is a schematic diagram of the data packet processing method in this embodiment. As shown in Fig. 2, the DPDK reads data packets from J network cards in the device and stores them in mbufs; K ring queues are bound one to one to K data packet processing services; each mbuf is enqueued into the K ring queues; and each data packet processing service reads the packets in the mbufs by accessing its corresponding ring queue.

It should be noted that, in a practical application scenario, for the packets continuously received by the network card, the enqueueing of mbufs and the reading by the packet processing services are executed asynchronously. The DPDK continuously reads packets from the network card through the poll-mode driver and stores each packet in an mbuf, while at the same time the packet processing services continuously read packets from the mbufs through the ring queues; strictly speaking, there is no fixed ordering between the two. For ease of understanding, the figure shows the complete sequence for a single packet: reception, storage in an mbuf, enqueueing into the ring queues, and reading by the packet processing services. This should not be taken to mean that the method provided in this embodiment must follow that order.
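To make this asynchronous arrangement concrete, the sketch below (again reusing the hypothetical functions and names from the earlier sketches, and assuming at least NUM_RINGS worker lcores are available) launches one service per worker lcore with rte_eal_remote_launch() and then runs the RX polling loop on the main lcore; the two sides interact only through the ring queues.

```c
/* Wiring sketch: consumer services on worker lcores, RX polling on the
 * main lcore; producer and consumers communicate only via the rings. */
#include <rte_launch.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
    uint16_t port_id = 0;                  /* assumption: capture on port 0 */

    if (init_dpdk(argc, argv, port_id) < 0)
        return -1;

    unsigned int lcore_id;
    int i = 0;
    /* One service per worker core (RTE_LCORE_FOREACH_SLAVE in older DPDK). */
    RTE_LCORE_FOREACH_WORKER(lcore_id) {
        if (i >= NUM_RINGS)
            break;
        rte_eal_remote_launch(service_loop, rings[i++], lcore_id);
    }

    rx_loop(port_id);                      /* poll the NIC indefinitely */
    return 0;
}
```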

Further, in this embodiment, the number of packet processing services can be expanded dynamically and may be increased or decreased at will. The method further includes: when redundant ring queues exist and there are data packet processing services to be added, selecting from the redundant ring queues as many ring queues as there are services to be added, as target ring queues, and binding each target ring queue to one of the services to be added; and, when there is a data packet processing service to be unbound, unbinding that service from its bound ring queue.

For example, suppose the DPDK pre-creates N ring queues in step S102 and M of them are bound to M packet processing services, where M is less than or equal to N. When there are redundant ring queues among the N and X new data packet processing services are to be added to the device, X ring queues are selected from the N-M redundant ones and bound, one each, to the X new services. From the next moment on, after the DPDK stores a data packet into an mbuf, the mbuf is enqueued into the expanded set of M+X ring queues instead of the original M, and the reference count after enqueueing becomes M+X instead of M, so that the X new services share the data packet with the original M services; this realizes dynamic expansion of the services. Hence, when a service is to be added, it only needs to be bound to an unused ring queue among the pre-created ones; when the number of services is to be reduced, the service to be removed only needs to be unbound from its ring queue, and the freed queue then waits to be bound to a new service.

Specifically, the corresponding configuration can be prepared in advance in the start parameters or configuration script of the packet processing service: when a packet processing service starts, it automatically selects one of the redundant ring queues and binds to it, and when the service exits, it automatically unbinds from that ring queue so that the mbuf resources are released. In particular, an application programming interface (API) can be provided for the packet processing services, which they call on start-up or exit to perform the lookup, binding and unbinding.
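As a purely hypothetical illustration of such an API, the helper below uses rte_ring_lookup(), which finds an existing ring by name, to let a newly started service bind itself to one of the pre-created rings whose name is passed in its start parameters; it also drains any stale entries left over from a previous binding. The helper name and the idea of passing the ring name as a start parameter are assumptions of this sketch, not part of the described method.

```c
/* Hypothetical bind-at-startup helper for a newly added service: look
 * up a pre-created ring by name and use it as the bound queue. */
#include <rte_mbuf.h>
#include <rte_ring.h>

static struct rte_ring *bind_to_ring(const char *ring_name)
{
    struct rte_ring *r = rte_ring_lookup(ring_name);   /* find by name */
    if (r == NULL)
        return NULL;                /* no such ring: nothing to bind to */

    /* Release any packets left over from a previous binding so the new
     * service starts from fresh traffic and stale mbufs are freed. */
    void *obj;
    while (rte_ring_dequeue(r, &obj) == 0)
        rte_pktmbuf_free((struct rte_mbuf *)obj);

    return r;
}

/* Usage (hypothetical): a service started with "svc_ring_5" in its
 * start parameters could call bind_to_ring("svc_ring_5") and then run
 * service_loop() on the returned ring; "unbinding" on exit amounts to
 * no longer dequeuing from that ring. */
```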

The data packet processing method provided by this embodiment has the following characteristics. (1) After the DPDK reads a data packet from a bound network card, the packet is stored in an mbuf in the memory pool and the mbuf is enqueued into the bound ring queues; each ring queue plays a role similar to a virtual port of a virtual switch, and an upper-layer application service reads packets through its bound ring queue. Since the mbuf referenced by every ring queue is the same, shared mbuf, multiple data packet processing services can share the data packet, achieving packet sharing among multiple services. (2) The method receives network-card packets based on the DPDK and therefore has high-performance data acquisition; the acquired packets are not forwarded but shared through the mbuf and the multiple ring queues. The DPDK runs in user space and bypasses the Linux kernel protocol stack, which reduces interrupts and improves data processing efficiency. Unlike a conventional virtual switch, the method achieves zero copy of the data packets and effectively improves performance.

Through shared hugepage memory, lock-free ring queues, reference counts and the like, the data packet processing method of this embodiment meets the need for multiple services to share data packets in network data processing scenarios and realizes multi-thread or multi-process sharing of data packets. No virtual switch needs to be deployed, which reduces environment configuration, and using the DPDK for packet acquisition yields excellent performance. The method can be deployed on a general-purpose server or on any electronic device with network communication capability, greatly reducing the dependence on dedicated hardware.

Based on the same inventive concept, referring to fig. 3, an embodiment of the present application further provides a DPDK-based data packet processing apparatus 200, where the apparatus 200 includes:

a reading module 201, configured to read a data packet from at least one network card in a DPDK operating environment, and store the data packet in mbuf;

the sharing module 202 is configured to add the mbuf to a plurality of lock-free queues created in advance, where each lock-free queue in the plurality of lock-free queues is bound to a data packet processing service, so that the data packet processing service reads the mbuf from the bound lock-free queue and processes the data packet in the mbuf.

Optionally, the sharing module 202 is further configured to: set a reference count for the data packet and store the value of the reference count in the mbuf.

Optionally, the sharing module 202 is further configured to: release the mbuf when the value of the reference count in the mbuf is 0.

Optionally, the apparatus 200 further includes: an initialization module, configured to initialize the DPDK so as to set the working mode of the network card, allocate a memory pool, and create the plurality of lock-free queues.

Optionally, the lock-free queue is a ring queue, and the apparatus 200 further includes: a service expansion module, configured to: when redundant ring queues exist and there are data packet processing services to be added, select from the redundant ring queues as many ring queues as there are services to be added as target ring queues, and bind each target ring queue to one data packet processing service to be added.

The implementation principle and technical effects of the DPDK-based data packet processing apparatus provided in this embodiment are the same as those of the foregoing method embodiment. For brevity, where the apparatus embodiment does not mention a detail, reference may be made to the corresponding content of the method embodiment, which is not repeated here.

The embodiment of the present application further provides a storage medium, where the storage medium stores a computer program, and when the computer program is executed by a processor, the storage medium executes the data packet processing method provided in the above embodiment of the present application.

Referring to fig. 4, this embodiment provides an electronic device 300 that includes a processor 301 and a memory 302. The memory 302 stores at least one instruction, program, code set, or instruction set, which is loaded and executed by the processor 301 to implement the packet processing method provided in the foregoing embodiments. The electronic device 300 may further include a communication bus 303, through which the processor 301 and the memory 302 communicate with each other. The memory 302 may include high-speed random access memory (as a cache) and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The communication bus 303 is a circuit that connects the described elements and transfers data between them. For example, the processor 301 receives commands from the other elements through the communication bus 303, decodes them, and performs calculations or data processing according to the decoded commands.

The electronic device 300 is provided with at least one network card and runs a DPDK program; the DPDK acquires the data packets received by the network card(s) through the poll-mode driver and stores them in mbufs, and multiple services in the application layer read the packets in the mbufs through their bound ring queues, so that the services running on the electronic device 300 share the data packets. The electronic device 300 may be, but is not limited to, a desktop computer, a personal computer, a server, or another computing device with network communication capability.

In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.

In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.

It should be noted that the functions, if implemented in the form of software functional modules and sold or used as independent products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.

The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
