Data center flow adaptive scheduling method under condition of unknown information

Document No. 1569762, published 2020-01-24

Note: this technique, "a data center flow adaptive scheduling method under the condition of unknown information" (Data center flow adaptive scheduling method under condition of unknown information), was created by 吴宣够, 王士帅, 赵伟 and 高云全 on 2019-10-14. Its main content is as follows: the invention provides a data center flow adaptive scheduling method under the condition of unknown information, namely a flow scheduling method based on a time sliding window, which treats small flows and large flows differently without needing to know flow information in advance, and at the same time mitigates the mismatch between the degradation thresholds and the flow distribution. Compared with schemes based on a priori knowledge of data flow information, the method does not need to know the size, lifetime or other information of a data flow in advance, removing the obstacle that makes such schemes difficult to realize in practice; it can therefore rapidly transmit the huge number of data flows in an actual network and is convenient and fast to use. It combines a time sliding window with ECN to address the mismatch between the multi-level feedback queue degradation thresholds and the flow distribution, effectively reducing the flow completion time of small flows and improving network performance.

1. A data center flow adaptive scheduling method under the condition of unknown information is characterized by comprising the following steps:

step 1: the sending end application sends data to the buffer area;

step 2: the sending end host compares the byte number of the sent data packet of each data stream with a multi-level feedback queue degradation threshold value, and carries out priority marking on the data packet to be sent of each data stream according to the comparison result;

step 3: the switch end receives the priority-marked data packets sent by the sending end application and buffers them; at set time intervals, the switch end counts the flows that finished in each priority queue within the time sliding window, calculates the average size of the finished flows in each priority queue, and readjusts the priority of the data packets in the buffer according to this average value;

step 4: the switch end compares the number of data packets in the buffer with the ECN marking threshold of the switch port; if the number exceeds the threshold, the data packets beyond the threshold are marked with the CE mark;

step 5: the switch end forwards the data packets in each priority queue in order of priority from high to low, and the data packets in the same priority queue are forwarded according to the first-in first-out rule;

step 6: the receiving end receives the data packet forwarded by the switch end and informs the sending end to continue sending.

2. The method according to claim 1, wherein the method for marking the priority of each data stream by the sending end host in step 2 comprises the following steps:

step 2.1: suppose that the sending end-host maintains k priority queues PiAnd k-1 degradation thresholds ajWherein 1 ═<i<=k,1=<j<K-1, and P1>P2>....>Pk,a1<a2<a3<...<ak-1The data packet initially sent by each data stream is marked as the highest priority, and the sending end host sends the total sent byte number byte _ sent of the data stream and the degradation threshold ajMatch is made if aj-1<byte_sent<=ajIf so, marking the priority of a data packet sent by the data stream as j;

step 2.2: following step 2.1, the sending end records the byte count of each data packet it sends and updates the total sent byte count of the data stream to which the packet belongs.

3. The adaptive data center flow scheduling method under the condition of unknown information as claimed in claim 2, wherein the method for calculating the priority adjustment by the switch end through the time sliding window in step 3 comprises the following steps:

step 3.1: at set time intervals, counting the data flows that finished in each priority queue within the time sliding window;

step 3.2: for the finished data flows, calculating the average of the total sent bytes of the flows that finished in each priority queue within the time sliding window, according to the following formula:

Queue_mean(i)=finish_bytes(i)/finish_flows(i)

where i is the switch priority queue number, Queue_mean(i) is the average size of the flows completed in priority queue i, finish_bytes(i) is the total number of bytes sent by the flows completed within the time sliding window of priority queue i, and finish_flows(i) is the total number of flows completed within the time sliding window of priority queue i.

Step 3.3: when a data packet of the next data flow enters the priority queue buffer, if the queue to which it belongs is not the lowest priority queue, the number of bytes already sent by the packet's flow is first compared with the average total bytes of the flows completed in that queue; if the bytes sent exceed the average, the packet is degraded to the next lower priority; otherwise, it is sent in its current priority queue.

Step 3.4: the time sliding window is moved.

4. The method according to claim 3, wherein if the data packet in the queue buffer at the switch end exceeds the ECN mark threshold at the switch port in step 4, the process of marking the data packet exceeding the ECN mark threshold with CE mark is as follows:

the switch end calculates the current number of data packets to be sent in all priority queue buffers; if this number exceeds the configured ECN marking threshold, the subsequently received data packets beyond the threshold are marked with the CE mark. After the receiving end receives a data packet carrying the CE mark, it carries an ECN notification in the acknowledgment packet for that data; after the sending end receives this acknowledgment, it reduces its sending rate accordingly.

5. The method for adaptively scheduling data center flows under the condition of unknown information according to claim 4, wherein the forwarding process of the marked priority by the switch end in the step 5 is as follows:

for the data packets in different priority queues, the switch firstly forwards the data packet in the highest priority queue, the data packet in the second highest priority queue is forwarded after the data packet in the highest priority queue is sent out, and the data packet in the same priority queue is forwarded in a first-in first-out mode.

Technical Field

The invention relates to the field of data center network flow scheduling, in particular to a data center flow self-adaptive scheduling method under the condition of unknown information.

Background

With the rapid development of the internet, reducing network delay has become an increasingly pressing problem. With the development of multi-tenant data centers and virtual machine technology, many companies can seamlessly migrate applications such as web search services, social networks and recommendation systems to the cloud. These real-time interactive applications generate a large number of small requests and responses to resources within the data center that are very sensitive to network latency. Therefore, reducing the flow completion time of these small flows to improve the user experience has become a research hotspot.

In a data center network, flow completion time is one of the important metrics for measuring transmission performance, and much research on traffic transmission revolves around reducing the delay of small flows. Conventional ways of reducing the flow completion time of small flows can be divided into information-aware schemes, which assume flow prior information is known, and information-agnostic schemes, which do not. Information-aware schemes try to approximate ideal shortest-remaining-processing-time scheduling, for example PDQ and pFabric; traditional information-agnostic schemes keep queue occupancy low to reduce the flow completion time of small flows, for example DCTCP and L2DCT, while PIAS uses multi-level feedback queues that degrade a flow according to the number of bytes it has sent, distinguishing large flows from small flows so as to emulate shortest-job-first scheduling.

However, these solutions mainly have the following problems: 1. information-aware schemes such as PDQ and pFabric require the flow size, deadline and similar information to be known in advance, or require modifications to switch hardware. In an actual network, because the number of flows is extremely large and the transmission speed is high, it is difficult to acquire flow information, and many commercial switches do not support the hardware modifications these schemes need, so they are difficult to deploy in practice; 2. information-agnostic schemes such as DCTCP and L2DCT reduce flow completion time mainly by performing rate control at the host side to keep queue occupancy low, but they are inefficient in terms of flow scheduling. For algorithms such as PIAS, which use multi-level feedback queues to degrade flows so as to emulate shortest-job-first scheduling, the flow distribution in the data center network changes dynamically in space and time, so the degradation thresholds of the multi-level feedback queues become mismatched with the flow distribution.

Disclosure of Invention

The invention aims to provide a data center flow adaptive scheduling method under the condition of unknown information, which solves the problem of inefficient data flow scheduling, emulates shortest-job-first scheduling for flows whose prior information is unknown in a data center network, and reduces the flow completion time of small flows.

In order to achieve the above purpose, the invention provides the following technical scheme:

a data center flow adaptive scheduling method under the condition of unknown information comprises the following steps:

step 1: the sending end application sends data to the buffer area;

step 2: the sending end host compares the number of bytes already sent by each data stream with the multi-level feedback queue degradation thresholds and marks the priority of each stream's outgoing data packets according to the comparison result; data packets with the same priority wait in the same priority queue;

step 3: the switch end receives the priority-marked data packets sent by the sending end application and buffers them; at set time intervals, the switch end counts the flows that finished in each priority queue within the time sliding window, calculates the average size of the finished flows in each priority queue, and readjusts the priority of the data packets in the buffer according to this average value;

step 4: the switch end compares the number of data packets in the buffer with the ECN marking threshold of the switch port; if the number exceeds the threshold, the data packets beyond the threshold are marked with the CE mark;

step 5: the switch end forwards the data packets in each priority queue in order of priority from high to low, and the data packets in the same priority queue are forwarded according to the first-in first-out rule;

step 6: the receiving end receives the data packet forwarded by the switch end and informs the sending end to continue sending.

Preferably, the method for marking the priority of each data stream by the sending end host in step 2 includes the following steps:

step 2.1: suppose that the sending end-host maintains k priority queues PiAnd k-1 degradation thresholds ajWherein 1 ═<i<=k,1=<j<K-1, and P1>P2>....>Pk,a1<a2<a3<...<ak-1The data packet initially sent by each data stream is marked as the highest priority, and the sending end host sends the total sent byte number byte _ sent of the data stream and the degradation threshold ajMatch is made if aj-1<byte_sent<=ajIf so, marking the priority of a data packet sent by the data stream as j;

step 2.2: following step 2.1, the sending end records the byte count of each data packet it sends and updates the total sent byte count of the data stream to which the packet belongs.

Preferably, the method for calculating the priority adjustment by the switch end through the time sliding window in step 3 includes the following steps:

step 3.1: at set time intervals, counting the data flows that finished in each priority queue within the time sliding window;

step 3.2: for the finished data flows, calculating the average of the total sent bytes of the flows that finished in each priority queue within the time sliding window, according to the following formula:

Queue_mean(i)=finish_bytes(i)/finish_flows(i)

where i is the switch priority queue number, Queue_mean(i) is the average size of the flows completed in priority queue i, finish_bytes(i) is the total number of bytes sent by the flows completed within the time sliding window of priority queue i, and finish_flows(i) is the total number of flows completed within the time sliding window of priority queue i.

Step 3.3: when a data packet of the next data flow enters the priority queue buffer, if the queue to which it belongs is not the lowest priority queue, the number of bytes already sent by the packet's flow is first compared with the average total bytes of the flows completed in that queue; if the bytes sent exceed the average, the packet is degraded to the next lower priority; otherwise, it is sent in its current priority queue.

Step 3.4: the time sliding window is moved.
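The per-queue averaging in steps 3.1-3.4 can be sketched as follows. This is a minimal Python sketch: the window length, the per-queue flow records, and the class and method names are illustrative assumptions rather than part of the invention.

```python
import time
from collections import deque

class SlidingWindowStats:
    """Tracks flows that finished in each priority queue within a time window
    and computes Queue_mean(i) = finish_bytes(i) / finish_flows(i)."""

    def __init__(self, num_queues, window_seconds=1.0):
        self.window = window_seconds
        # One deque of (finish_time, bytes_sent) records per priority queue.
        self.finished = [deque() for _ in range(num_queues)]

    def record_finish(self, queue_id, bytes_sent, now=None):
        """Step 3.1: record a flow that finished in the given queue."""
        now = time.monotonic() if now is None else now
        self.finished[queue_id].append((now, bytes_sent))

    def queue_mean(self, queue_id, now=None):
        """Step 3.2: average total bytes of flows finished in the window."""
        now = time.monotonic() if now is None else now
        records = self.finished[queue_id]
        # Step 3.4: slide the window by evicting records that fell out of it.
        while records and records[0][0] < now - self.window:
            records.popleft()
        if not records:
            return None  # no finished flows in this window
        finish_bytes = sum(b for _, b in records)
        finish_flows = len(records)
        return finish_bytes / finish_flows
```

For example, with a 1-second window, two flows of 1000 and 3000 bytes finishing at t=0.0 and t=0.5 give a mean of 2000.0 at t=0.6; by t=1.2 the first record has slid out of the window and the mean becomes 3000.0.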

Preferably, in the step 4, if the data packet in the queue buffer at the switch end exceeds the ECN marking threshold of the switch port, the process of marking the data packet exceeding the ECN marking threshold with the CE mark is as follows:

the switch end calculates the current number of data packets to be sent in all priority queue buffers; if this number exceeds the configured ECN marking threshold, the subsequently received data packets beyond the threshold are marked with the CE mark. After the receiving end receives a data packet carrying the CE mark, it carries an ECN notification in the acknowledgment packet for that data; after the sending end receives this acknowledgment, it reduces its sending rate accordingly.

Preferably, the process of forwarding the marked priority by the switch end in step 5 is as follows:

for the data packets in different priority queues, the switch firstly forwards the data packet in the highest priority queue, the data packet in the second highest priority queue is forwarded after the data packet in the highest priority queue is sent out, and the data packet in the same priority queue is forwarded in a first-in first-out mode.

According to the technical scheme, the data center flow adaptive scheduling method under the condition of unknown information, provided by the technical scheme of the invention, has the following beneficial effects:

1. Compared with schemes based on a priori knowledge of data stream information, the invention does not need to know the size, lifetime or other information of a data stream in advance, removing the obstacle that makes such schemes difficult to realize in practice; it can therefore rapidly transmit the huge number of data streams in an actual network and is convenient and fast to use.

2. The invention uses a time sliding window combined with ECN to solve the mismatch between the multi-level feedback queue degradation thresholds and the flow distribution, effectively reducing the flow completion time of small flows and improving network performance.

It should be understood that all combinations of the foregoing concepts and additional concepts described in greater detail below can be considered as part of the inventive subject matter of this disclosure unless such concepts are mutually inconsistent.

The foregoing and other aspects, embodiments and features of the present teachings can be more fully understood from the following description taken in conjunction with the accompanying drawings. Additional aspects of the present invention, such as features and/or advantages of exemplary embodiments, will be apparent from the description which follows, or may be learned by practice of specific embodiments in accordance with the teachings of the present invention.

Drawings

The drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. Embodiments of various aspects of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:

FIG. 1 is a flow chart of the system of the present invention;

FIG. 2 is a global view of the present invention;

FIG. 3 is a graph of the mismatch between the data flow distribution and the thresholds;

FIG. 4 is a flow chart of the switch priority re-optimization of the present invention.

Detailed Description

In order to better understand the technical content of the present invention, specific embodiments are described below with reference to the accompanying drawings.

In this disclosure, aspects of the present invention are described with reference to the accompanying drawings, in which a number of illustrative embodiments are shown. Embodiments of the present disclosure are not intended to include all aspects of the present invention. It should be appreciated that the various concepts and embodiments described above, as well as those described in greater detail below, may be implemented in any of numerous ways, as the disclosed concepts and embodiments are not limited to any one implementation. In addition, some aspects of the present disclosure may be used alone, or in any suitable combination with other aspects of the present disclosure.

In the prior art, traditional data center network flow scheduling techniques do not effectively distinguish small flows from large flows but schedule them uniformly, mainly aiming at fairness between large and small flows. Although some proposed schemes do distinguish large flows from small flows, they either need to know flow information in advance, which is difficult to obtain in a real environment because the number of data flows is huge and the transmission speed is extremely fast, making such schemes impractical; or, like PIAS, they need no flow information and use multi-level feedback queues to distinguish large flows from small flows, but because the flow distribution in the network changes dynamically in space and time, the degradation thresholds become mismatched with the traffic distribution. This scheme aims to provide a data center flow adaptive scheduling method under the condition of unknown information, which treats small flows and large flows differently without knowing flow information in advance and mitigates the mismatch between the degradation thresholds and the flow distribution.

The invention discloses a data center flow self-adaptive scheduling method under the condition of unknown information, which comprises the following steps:

step 1: the sending end transmits a data packet of the data stream to a buffer area;

step 2: the sending end host compares the byte number of the sent data packet of each data stream with a multi-level feedback queue degradation threshold value, and carries out priority marking on the data packet to be sent of each data stream according to the comparison result;

step 3: the switch end receives the priority-marked data packets from the sending end host; at set time intervals, the switch end uses the time sliding window to calculate, for each priority queue, the average number of bytes sent by the data flows that finished within the window, and adjusts the priority queue of the data packets received by each priority queue accordingly;

and 4, step 4: according to the average value calculated by the time sliding window, if the data packet of the queue buffer area at the switch end exceeds the ECN marking threshold value of the switch port, marking the CE mark on the received data packet exceeding the ECN marking threshold value through the switch end;

and 5: the data packets in different priority queues are forwarded by the switch end according to the marked priority of the data packets, and the data packets in the same priority queue are forwarded in a first-in first-out mode;

step 6: the receiving end receives the data packet and informs the sending end to continue sending.

More specifically:

in this embodiment, the method for marking the priority of each data stream by the sending end host in step 2 includes the following steps:

step 2.1: suppose that the sending end-host maintains k priority queues PiAnd k-1 degradation thresholds ajWherein 1 ═<i<=k,1=<j<K-1, and P1>P2>....>Pk,a1<a2<a3<...<ak-1The data packet initially sent by each data stream is marked as the highest priority, and the sending end host sends the total sent byte number byte _ sent of the data stream and the degradation threshold ajMatch is made if aj-1<byte_sent<=ajIf so, marking the priority of a data packet sent by the data stream as j;

step 2.2: following step 2.1, the sending end records the byte count of each data packet it sends and updates the total sent byte count of the data stream to which the packet belongs.

In this embodiment, the method for the switch end to calculate the priority adjustment through the time sliding window in step 3 includes the following steps:

step 3.1: at set time intervals, counting the data flows that finished in each priority queue within the time sliding window;

step 3.2: for the finished data flows, calculating the average of the total sent bytes of the flows that finished in each priority queue within the time sliding window, according to the following formula:

Queue_mean(i)=finish_bytes(i)/finish_flows(i)

where i is the switch priority queue number, Queue_mean(i) is the average size of the flows completed in priority queue i, finish_bytes(i) is the total number of bytes sent by the flows completed within the time sliding window of priority queue i, and finish_flows(i) is the total number of flows completed within the time sliding window of priority queue i.

Step 3.3: when a data packet of the next data flow enters the priority queue buffer, if the queue to which it belongs is not the lowest priority queue, the number of bytes already sent by the packet's flow is first compared with the average total bytes of the flows completed in that queue; if the bytes sent exceed the average, the packet is degraded to the next lower priority; otherwise, it is sent in its current priority queue.

Step 3.4: the time sliding window is moved.

In this embodiment, in the step 4, if the data packet of the queue buffer at the switch end exceeds the ECN marking threshold of the switch port, the process of marking the data packet exceeding the ECN marking threshold with the CE mark is as follows:

the switch end calculates the current number of data packets to be sent in all priority queue buffers; if this number exceeds the configured ECN marking threshold, the subsequently received data packets beyond the threshold are marked with the CE mark. After the receiving end receives a data packet carrying the CE mark, it carries an ECN notification in the acknowledgment packet for that data; after the sending end receives this acknowledgment, it reduces its sending rate accordingly.

In this embodiment, the process of forwarding the marked priority by the switch end in step 5 is as follows:

for data packets in different priority queues, the switch first forwards the data packets in the highest priority queue; after the highest priority queue has been sent out, it forwards the data packets in the second highest priority queue; data packets in the same priority queue are forwarded in first-in first-out order.

Using the statistics that the switch gathers within the time sliding window about the data flows that finished in each switch queue, the invention degrades the larger data flows in the same priority queue, so that the smaller data flows in that queue are sent first, reducing the flow completion time of small data flows.

The following describes the data center flow adaptive scheduling method under the information agnostic condition in further detail with reference to the embodiments shown in the drawings.

With reference to fig. 1, the data center flow adaptive scheduling method under the condition of unknown information works as follows. First, the sending end and the receiving end establish a connection via the TCP three-way handshake. If the connection succeeds, the sending end sends a data stream to the receiving end. During sending, the sending end host matches the number of bytes sent by each data stream against the degradation thresholds and marks the data packets of each stream with the corresponding priority. When the switch end forwards a data packet, it records the information of each data stream and uses the time sliding window to compute, for each switch queue, the average number of bytes of the data flows that finished within the window. Using this average as the standard, it adjusts the priority of the data packets to be sent, thereby mitigating the mismatch between the network flow distribution and the degradation thresholds while reducing the occurrence of packet loss, which further reduces the flow completion time of small flows. After all data streams have been sent, the TCP four-way handshake releases the connection.

With reference to fig. 2, fig. 3 and fig. 4, the method for adaptively scheduling data center flows under the condition of unknown information according to the present invention mainly includes a priority marking module and a priority adjusting module.

(1) Priority marking module

A priority tagging module is deployed at the host end to tag the priority of a flow's packets. The host maintains k priority queues P1, P2, ..., Pk with P1 > P2 > ... > Pk, and k-1 degradation thresholds a1, a2, ..., a(k-1) with a1 < a2 < ... < a(k-1). Each flow's packets are initially marked with the highest priority P1. The host then records the number of bytes sent, send_byte, for each flow; once a flow marked P1 has sent more than a1 bytes (send_byte >= a1), the priority of the flow's next packet is marked P2, and so on, until the flow's priority is reduced to the lowest priority Pk. Through this multi-level feedback queue degradation, large and small data streams can be distinguished without knowing the stream size in advance, thereby emulating shortest-job-first scheduling and reducing the flow completion time of small streams.
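The marking rule above can be sketched as a small function. This is a minimal sketch: the function name and the threshold list are illustrative assumptions, and it follows the claim rule that a flow with a(j-1) < byte_sent <= aj is marked with priority j.

```python
def mark_priority(byte_sent, thresholds):
    """Return the priority (1 = highest, P1) for the next packet of a flow,
    given byte_sent, the total bytes the flow has already sent, and the
    k-1 degradation thresholds [a1, a2, ..., a(k-1)] with a1 < a2 < ...

    A flow that has sent more than the last threshold falls to the
    lowest priority k."""
    for j, a_j in enumerate(thresholds, start=1):
        if byte_sent <= a_j:
            return j
    return len(thresholds) + 1  # lowest priority Pk
```

For example, with thresholds [100, 1000] (k = 3 queues), a new flow starts at priority 1, drops to priority 2 after sending more than 100 bytes, and to priority 3 after more than 1000 bytes.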

However, because the data flow distribution in the network changes dynamically over time, the degradation thresholds inevitably become mismatched with the flow distribution, which mainly causes two phenomena. The first is that the degradation thresholds are set too large, so that many large data flows remain in the high-priority queues; the small data flows queued behind them in the same priority queue are delayed, increasing the flow completion time of the small flows. The second is that the degradation thresholds are set too small, so that many small flows drop into the low-priority queues, where large flows still adversely affect them. This scheme adjusts priorities at the switch, degrading the large flows in the same priority queue to a lower priority, thereby reducing the extent to which the mismatch problem prolongs the flow completion time of small flows.

(2) Priority adjustment module

The priority adjustment module is deployed at the switch end and degrades the large data streams within the same priority queue, thereby protecting the transmission delay of small data streams and mitigating the mismatch between the sending end host's degradation thresholds and the data flow distribution. A time sliding window is maintained for each priority queue at the switch end; at set time intervals, the number of flows finish_flows that finished within the window and the number of bytes finish_bytes sent by those finished flows are recorded, and the average size Queue_mean of the data flows that finished within the window is calculated as follows:

Queue_mean(i)=finish_bytes(i)/finish_flows(i)

where i is the priority queue number, finish_bytes(i) is the number of bytes sent by the flows that finished in queue i within the time sliding window, finish_flows(i) is the number of flows that finished in queue i within the time sliding window, and Queue_mean(i) is the resulting average flow size for queue i.

Based on this average value, the switch end identifies the larger data flows in the same priority queue and degrades them. Specifically, the switch end records the number of bytes sent by each data stream; when a stream's packets, carrying the priority marked by the sending end, pass through the switch end, the switch matches the stream against the average value calculated for that priority. If the number of bytes the stream has sent does not exceed the average of its current priority queue, the stream is sent at its marked priority; if it exceeds the average, the stream is degraded to the next lower priority before sending.
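The degrade-or-keep decision can be sketched as follows. This is a minimal sketch: the parameter names are illustrative assumptions, and priorities are numbered so that a larger number means a lower priority.

```python
def adjust_priority(marked_priority, flow_bytes_sent, queue_mean, lowest_priority):
    """Switch-side degrade-or-keep decision (step 3.3).

    marked_priority: priority carried by the packet (1 = highest).
    flow_bytes_sent: bytes the packet's flow has sent so far.
    queue_mean: average size of flows that finished in this priority queue
                within the sliding window, or None if no flow finished.
    lowest_priority: the number of the lowest priority queue, k."""
    if marked_priority >= lowest_priority:
        return marked_priority            # already in the lowest queue
    if queue_mean is not None and flow_bytes_sent > queue_mean:
        return marked_priority + 1        # degrade to the next lower priority
    return marked_priority                # keep the marked priority
```

For example, with a window average of 2000 bytes in queue 1, a flow that has already sent 5000 bytes is degraded to queue 2, while a flow that has sent 1000 bytes keeps its marked priority.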

The specific implementation is as follows: the switch end sets an ECN marking threshold at the port; when the number of data packets buffered by the switch exceeds this threshold, the data packets beyond the threshold are marked with the CE mark. After the receiving end receives a data packet carrying the CE mark, it carries an ECN notification in the acknowledgment packet for that data; after the sending end receives this acknowledgment, it reduces its sending rate accordingly.
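The CE-marking and feedback loop can be sketched as three small functions, one per role. This is a minimal sketch: halving the sending window on an ECN echo is an assumed policy in the style of ECN-reacting senders, since the text only says the rate is "correspondingly reduced".

```python
def switch_mark(queue_depth, ecn_threshold):
    """Switch port: decide whether an arriving packet gets the CE mark,
    based on the current buffered packet count versus the ECN threshold."""
    return queue_depth > ecn_threshold

def receiver_ack(packet_has_ce):
    """Receiving end: echo the CE mark as an ECN notification in the ACK."""
    return {"ecn_echo": packet_has_ce}

def sender_react(cwnd, ack):
    """Sending end: reduce the sending rate on an ECN-echo ACK.
    Halving is an assumption, not specified by the text."""
    return max(1, cwnd // 2) if ack["ecn_echo"] else cwnd
```

A packet arriving at a port with 30 buffered packets and a threshold of 20 is CE-marked; the receiver's ACK then carries the echo, and the sender cuts its window.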

The forwarding rule of the switch when forwarding data packets is as follows: data packets in queues with different priorities are forwarded in order of priority, and data packets within the same priority queue are forwarded in first-in first-out order.
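This forwarding rule can be sketched with per-priority FIFO queues served in strict priority order. This is a minimal sketch: the class name and queue representation are illustrative assumptions.

```python
from collections import deque

class StrictPriorityScheduler:
    """Strict-priority forwarding: always serve the highest non-empty
    priority queue; within a queue, packets leave in FIFO order."""

    def __init__(self, num_queues):
        # queues[0] is the highest priority queue.
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, priority, packet):
        self.queues[priority].append(packet)

    def dequeue(self):
        for queue in self.queues:
            if queue:
                return queue.popleft()
        return None  # nothing buffered
```

With packets "b1" and "b2" in queue 1 and "a1" in queue 0, the scheduler emits "a1" first, then "b1" and "b2" in arrival order.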

Although the present invention has been described with reference to the preferred embodiments, it is not intended to be limited thereto. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention. Therefore, the protection scope of the present invention should be determined by the appended claims.
