Message storage method, message in-out queue method and storage scheduling device

Document No. 571905 · Published 2021-05-18 · Chinese

Reading note: this technology, 报文存储方法、报文出入队列方法及存储调度装置 (message storage method, message enqueue/dequeue method and storage scheduling device), was designed and created by 徐子轩 and 夏杰 on 2020-12-30. Its main content is as follows: the application provides a message storage method, a message enqueue/dequeue method and a storage scheduling device. The message storage method includes: dividing a message in sequence into N message fragments L_0, L_1, …, L_(N-1), where N = int(L/W) and int is the round-up (ceiling) function; the message fragment L_0 carries the sop flag bit and the message fragment L_(N-1) carries the eop flag bit; applying for a free address ptr_X for each message fragment to be stored in the data memory. The information linked list of the linked list control module stores the information linked list head address, the data linked list head address and the data eop flag bit of the next message; the data linked list stores the data fragment address and the data eop flag bit of the next message fragment. In the dequeuing method provided by the invention, through the fused arrangement of the information linked list and the data linked list, the fragment address of a message is obtained directly in a single scheduling pass during a dequeue operation, which effectively reduces the message forwarding delay; at the same time, the cached information between message scheduling and data scheduling is reduced, optimizing the physical area of the chip.

1. A message storage method, applied to storing messages in a storage scheduling device, the storage scheduling device comprising a linked list control module for controlling linked list dequeuing and enqueuing, a data memory for caching messages, and a head address memory and a tail address memory for the source channel where a message is located, the linked list control module comprising a linked list memory and a free address memory, the bit width of the data memory being W and the actual length of the message being L, characterized in that the method comprises the following steps:

the message is divided in sequence into N message fragments L_0, L_1, …, L_(N-1), where N = int(L/W) and int is the round-up (ceiling) function; the message fragment L_0 carries the sop flag bit and the message fragment L_(N-1) carries the eop flag bit;

applying for a free address ptr_X from the free address memory for each message fragment to be stored in the data memory;

if the currently stored message fragment carries the sop flag bit, updating the head address memory and the tail address memory of the source channel where the message fragment is located with ptr_X; and

and if the currently stored message fragment does not carry the sop flag bit, writing ptr_X as the value, with the value in the tail address memory as the address, into the data linked list of the linked list memory, and simultaneously writing ptr_X into the tail address memory.

2. The message storage method of claim 1, wherein the method further comprises: applying for a free address for the message fragment and generating an information linked list.

3. The message storage method according to claim 2, wherein the information linked list stores: the information linked list head address, the data linked list head address and the data eop flag bit of the next message;

the data linked list stores: the data fragment address and the data eop flag bit of the next message fragment.

4. The message storage method according to claim 3, wherein the head address memory stores the information linked list head address, the data linked list head address and the data eop flag bit of the first message in the current linked list; and the tail address memory stores the address of the last entry in the information linked list.

5. A message enqueuing method, applied to a scenario of multicast forwarding of messages in store-and-forward mode, for performing enqueue management on messages stored by applying the message storage method according to any one of claims 2 to 4, wherein the method comprises:

s1, setting the head address and the tail address of the queue and the reading state of the queue;

s2, storing the message fragments;

S3, in response to the enqueue request of the message, applying for an address ptr_Y matching the information linked list;

s4, acquiring the queue reading state of the destination queue of the message, if the queue reading state is 0, executing a step S5, otherwise executing a step S6;

S5, writing the address of the message fragment carrying the sop flag bit into the data linked list head address field in the queue head address, writing ptr_Y into the information linked list head address field in the queue head address, writing the eop flag bit of the message fragment carrying the sop flag bit into the data eop flag bit field in the queue head address, writing ptr_Y into the queue tail address, setting the queue read state of the queue to 1, and executing step S7;

S6, reading the head address and the tail address of the queue; using the tail address as the address and ptr_Y, the address of the message fragment carrying the sop flag bit, and the eop flag bit of that fragment as the value, writing into the information linked list; writing ptr_Y into the queue tail address; and executing step S7; and

and S7, finishing the enqueue operation.

6. A message dequeuing method, applied to a scenario of multicast forwarding of messages in store-and-forward mode, wherein dequeue management is performed on a queue processed by the message enqueuing method according to claim 5, and the method comprises:

S01, accessing the head address and the tail address of the queue using the queue number to obtain the data linked list head address field in the head address; accessing the data memory with the data linked list head address as the address to obtain the message content and send it to the downstream module; if the data eop flag bit in the head address is 1, performing step S02, otherwise performing step S05;

s02, judging whether the head address and the tail address of the information linked list are the same, if so, executing a step S03, otherwise, executing a step S04;

s03, setting the queue reading state of the queue to 0, triggering a queue reselection mechanism of a destination port, and executing the step S06;

S04, using the information linked list head address field as the address, reading the information linked list to obtain the information linked list head address, the data linked list head address and the data eop flag bit of the next message; writing these into the information linked list head address, data linked list head address and data eop flag bit fields of the queue head address; triggering the queue reselection mechanism of the destination port; and executing step S06;

S05, accessing the data linked list using the data linked list head address to obtain the data linked list address and the data eop flag bit of the next message fragment of the message; writing these fields into the data linked list head address and data eop flag bit fields of the queue head address correspondingly; and executing step S06; and

and S06, finishing the dequeue operation.

7. The message dequeuing method according to claim 6, wherein if the message fragment participating in scheduling does not carry the eop flag bit, the QoS state is updated based on the bit width W of the data memory; and if the message fragment participating in scheduling carries the eop flag bit, the QoS state is updated based on the length of the message fragment.

8. A storage scheduling apparatus, characterized in that the message storage method according to any one of claims 1 to 4 is applied to store messages.

9. A storage scheduling apparatus, characterized in that the message enqueuing method according to claim 5 is applied to manage queues.

10. A storage scheduling apparatus, characterized in that the message dequeuing method according to claim 6 or 7 is applied to manage queues.

Technical Field

The present invention relates to the field of network technologies, and in particular, to a message storage method, a message enqueue and dequeue method, and a storage scheduling device.

Background

When a network chip forwards a message in store-and-forward mode, the message must first be completely cached in the on-chip memory in the form of a linked list (referred to as the message data linked list), after which the destination port of the message is determined according to the forwarding logic. An enqueue request is then generated, the key information of the message (the message start address, the message length, etc.) is written into the linked list of the queue (referred to as the information linked list), and the message waits for queue scheduling.

When the store-and-forward mode is applied to multicast forwarding of messages, multicast means that multiple copies of a message are made in the network chip and the message may be sent to different destination ports. As shown in fig. 2, in the prior art an independent QoS module is first provided, and the key information of different messages is concatenated in the manner of a linked list. Through a certain policy, a queue is selected, the key information corresponding to the head address of the information linked list is read, and the "message data linked list head address" in the key information is sent to a "message reading module" for the message reading operation. This implementation has the following drawbacks:

the above scheme needs two independent linked lists of the data linked list and the information linked list, and the forwarding delay of the message is large because the two linked list reading operations are respectively stored in different modules. The two linked list reading and writing operations inevitably introduce other memories for caching, and further increase the physical area of the chip.

Disclosure of Invention

In view of the above technical problems in the prior art, the present invention provides a message storage method, applied to storing messages in a storage scheduling device, where the storage scheduling device includes a linked list control module for controlling linked list dequeuing and enqueuing, a data memory for caching messages, and a head address memory and a tail address memory for the source channel where a message is located; the linked list control module includes a linked list memory and a free address memory, the bit width of the data memory is W, and the actual length of the message is L. The method includes:

the message is divided in sequence into N message fragments L_0, L_1, …, L_(N-1), where N = int(L/W) and int is the round-up (ceiling) function; the message fragment L_0 carries the sop flag bit and the message fragment L_(N-1) carries the eop flag bit; a free address ptr_X is applied for from the free address memory for each message fragment to be stored in the data memory; if the currently stored message fragment carries the sop flag bit, the head address memory and the tail address memory of the source channel where the message fragment is located are updated with ptr_X; and if the currently stored message fragment does not carry the sop flag bit, ptr_X is written as the value, with the value in the tail address memory as the address, into the data linked list of the linked list memory, and ptr_X is simultaneously written into the tail address memory.

Optionally, the method further comprises: applying for a free address for the message fragment and generating an information linked list.

Optionally, the information linked list stores: the information linked list head address, the data linked list head address and the data eop flag bit of the next message; the data linked list stores: the data fragment address and the data eop flag bit of the next message fragment.

Optionally, the head address memory stores the information linked list head address, the data linked list head address and the data eop flag bit of the first message in the current linked list; the tail address memory stores the address of the last entry in the information linked list.

In order to achieve the above object, the present invention provides a message enqueuing method, which performs enqueue management on messages stored by applying the message storage method described above, and the method includes:

s1, setting the head address and the tail address of the queue and the reading state of the queue;

s2, storing the message fragments;

S3, in response to the enqueue request of the message, applying for an address ptr_Y matching the information linked list;

s4, acquiring the queue reading state of the destination queue of the message, if the queue reading state is 0, executing a step S5, otherwise executing a step S6;

S5, writing the address of the message fragment carrying the sop flag bit into the data linked list head address field in the queue head address, writing ptr_Y into the information linked list head address field in the queue head address, writing the eop flag bit of the message fragment carrying the sop flag bit into the data eop flag bit field in the queue head address, writing ptr_Y into the queue tail address, setting the queue read state of the queue to 1, and executing step S7;

S6, reading the head address and the tail address of the queue; using the tail address as the address and ptr_Y, the address of the message fragment carrying the sop flag bit, and the eop flag bit of that fragment as the value, writing into the information linked list; writing ptr_Y into the queue tail address; and executing step S7; and

and S7, finishing the enqueue operation.

In order to achieve the above object, the present invention provides a message dequeuing method, which is applied to dequeue management of a queue processed by the message enqueuing method, and the method includes:

S01, accessing the head address and the tail address of the queue using the queue number to obtain the data linked list head address field in the head address; accessing the data memory with the data linked list head address as the address to obtain the message content and send it to the downstream module; if the data eop flag bit in the head address is 1, performing step S02, otherwise performing step S05;

s02, judging whether the head address and the tail address of the information linked list are the same, if so, executing a step S03, otherwise, executing a step S04;

s03, setting the queue reading state of the queue to 0, triggering a queue reselection mechanism of a destination port, and executing the step S06;

S04, using the information linked list head address field as the address, reading the information linked list to obtain the information linked list head address, the data linked list head address and the data eop flag bit of the next message; writing these into the information linked list head address, data linked list head address and data eop flag bit fields of the queue head address; triggering the queue reselection mechanism of the destination port; and executing step S06;

S05, accessing the data linked list using the data linked list head address to obtain the data linked list address and the data eop flag bit of the next message fragment of the message; writing these fields into the data linked list head address and data eop flag bit fields of the queue head address correspondingly; and executing step S06; and

and S06, finishing the dequeue operation.

Optionally, if the message fragment participating in scheduling does not carry the eop flag bit, the QoS state is updated based on the bit width W of the data memory; and if the message fragment participating in scheduling carries the eop flag bit, the QoS state is updated based on the length of the message fragment.

In order to achieve the above object, the present invention provides a storage scheduling apparatus, which stores messages by applying the message storage method described above.

In order to achieve the above object, the present invention provides a storage scheduling apparatus, which manages queues by applying the message enqueuing method described above.

In order to achieve the above object, the present invention provides a storage scheduling apparatus, which manages queues by applying the above message dequeuing method.

The invention has the technical effects that: during the dequeue operation, the message fragment address can be directly obtained through one-time scheduling through the fusion setting of the information linked list and the data linked list, and the message forwarding delay is effectively reduced. Meanwhile, cache information from message scheduling to data scheduling is reduced, and the physical area of a chip is optimized.

Drawings

FIG. 1 is a schematic diagram of a storage scheduling device according to an embodiment of the present invention;

fig. 2 is a schematic diagram illustrating a flow of multicast forwarding a packet in a store-and-forward mode in the prior art;

fig. 3 illustrates a message storage method according to an embodiment of the present invention;

FIG. 4 is a schematic structural diagram of an information linked list, a data linked list, a head address memory and a tail address memory in an embodiment of the present invention;

fig. 5 illustrates a message enqueuing method according to an embodiment of the present invention;

fig. 6 illustrates a message dequeuing method according to an embodiment of the present invention.

Detailed Description

The present invention will be described in detail below with reference to specific embodiments shown in the drawings. These embodiments are not intended to limit the present invention, and structural, methodological, or functional changes made by those skilled in the art according to these embodiments are included in the scope of the present invention.

Examples

This embodiment is implemented on the basis of a basic storage scheduling device; for ease of understanding, the operating principle of the storage scheduling device is briefly described below with reference to fig. 1:

the storage scheduling device is applied to the caching and scheduling of data, the input signal of the storage scheduling device mainly comprises a queue number, data and a linked list address, and the storage scheduling device mainly comprises the following modules:

the data memory 1 can buffer data according to a linked list address of an input signal.

And the linked list control module 2 is used for controlling the conventional linked list enqueue and dequeue operations. The control of the linked list belongs to the general technical category and is not described in detail here.

The linked list control module 2 mainly comprises: a head pointer memory 21, a tail pointer memory 22, a linked list memory 23, and a queue read status memory 24.

When writing data, if the queue read state corresponding to the queue number is 0, which indicates that no other data in the queue is waiting for scheduling at this time, then:

the queue number is used as an address, and the linked list address of the data is used as a value, and written into the head pointer memory 21 and the tail pointer memory 22, respectively.

The queue read state is changed to 1.

If the queue reading state corresponding to the queue number is not 0, indicating that other data waiting for scheduling exist in the queue, writing a linked list address of the data into a linked list memory 23 as a value and a tail pointer as an address to complete the linking operation of the data; the queue number is used as an address, and the linked list address of the data is used as a value, and is written into the tail pointer memory 22, thereby completing the tail pointer update operation.

When reading data, the head pointer memory 21 and the tail pointer memory 22 are first accessed according to the queue number. Judging whether the head pointer is equal to the tail pointer, if so, setting the queue reading state of the channel to be 0, otherwise: the linked list memory 23 is accessed according to the head pointer to obtain the next jump pointer, the next jump pointer is used as a value, and the queue number is written into the head pointer memory 21 as an address.
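The write and read flows of the linked list control module described above can be summarized in a small behavioural sketch. This is an illustration only, not the hardware implementation: the class name, the dict-based memories and the queue/address values are all assumptions made for the example.

```python
class LinkedListControl:
    """Behavioural sketch of the linked list control module of fig. 1.

    head / tail / state / links model the head pointer memory 21,
    tail pointer memory 22, queue read state memory 24 and linked
    list memory 23. All names are illustrative."""

    def __init__(self):
        self.head, self.tail, self.state, self.links = {}, {}, {}, {}

    def write(self, q, addr):
        if self.state.get(q, 0) == 0:      # queue empty: set both pointers
            self.head[q] = self.tail[q] = addr
            self.state[q] = 1
        else:                              # link behind the current tail
            self.links[self.tail[q]] = addr
            self.tail[q] = addr

    def read(self, q):
        addr = self.head[q]
        if self.head[q] == self.tail[q]:   # last element: queue goes idle
            self.state[q] = 0
        else:
            self.head[q] = self.links[addr]  # follow the next-hop pointer
        return addr
```

Writing three addresses to one queue and reading them back returns them in FIFO order, with the read state returning to 0 after the last element, as described above.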

The scheduler 3: if the read state of a queue is 1, the queue participates in queue scheduling. The scheduler 3 sends the scheduled queue number to the linked list control module 2 to obtain the linked list address of the queue, and triggers the linked list control module 2 to update the queue read state information.

The information reading module 4 accesses the data memory 1 according to the linked list address acquired by the scheduler 3 to obtain the data and generate the output.


Optionally, the queue may be selected by a QoS policy, the key information in the queue is scheduled and sent to a message data reading module (not shown), and the message is read from the memory according to a message start address in the key information and sent to the destination port.


In order to solve the above technical problem, this embodiment first provides a message storage method, applied to storing messages in a storage scheduling device. Unless otherwise specified, the storage scheduling device is consistent with the device shown in fig. 1, and further includes: a head address memory (not shown), a tail address memory (not shown) and a free address memory (not shown) for the source channel where the message is located; the bit width of the data memory 1 is W, and the actual length of the message is L. As shown in fig. 3, the message storage method includes:

dividing the message in sequence into N message fragments L_0, L_1, …, L_(N-1), where N = int(L/W) and int is the round-up (ceiling) function; the message fragment L_0 carries the sop flag bit and the message fragment L_(N-1) carries the eop flag bit;

applying for a free address ptr_X from the free address memory for each message fragment to be stored in the data memory 1;

if the currently stored message fragment carries the sop flag bit, updating the head address memory and the tail address memory of the source channel where the message fragment is located with ptr_X; and

if the currently stored message fragment does not carry the sop flag bit, writing ptr_X as the value, with the value in the tail address memory as the address, into the data linked list of the linked list memory 23, and simultaneously writing ptr_X into the tail address memory.

Therefore, the generation of the data linked list of the message is realized.
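As an illustration only, the fragmentation rule (N = int(L/W), rounded up) and the data-linked-list generation steps above can be sketched as follows. The function names `fragment` and `store_fragment`, and the dict-based model of the free address pool, per-channel head/tail address memories and data linked list, are assumptions made for the example, not part of the patent.

```python
def fragment(L, W):
    """Split a message of actual length L into N = ceil(L/W) fragments.

    Returns (index, sop, eop) per fragment: sop on fragment 0, eop on
    fragment N-1, as in the storage method above."""
    N = -(-L // W)  # ceiling division
    return [(i, i == 0, i == N - 1) for i in range(N)]

def store_fragment(ch, sop, mem):
    """Store one fragment for source channel ch and return its address.

    mem models the free address pool, the per-channel head/tail address
    memories and the data linked list as plain dicts/lists."""
    ptr_x = mem['free'].pop(0)             # apply for a free address ptr_X
    if sop:                                # sop fragment: reset both pointers
        mem['head'][ch] = mem['tail'][ch] = ptr_x
    else:                                  # chain behind the previous fragment
        mem['data_list'][mem['tail'][ch]] = ptr_x
        mem['tail'][ch] = ptr_x
    return ptr_x
```

For example, a 100-byte message with W = 32 yields four fragments, and storing them links the allocated addresses head → … → tail in the data linked list.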

Further, the method further comprises: applying for a free address for the message fragment and generating an information linked list, where the information linked list includes: the information linked list head address, the data linked list head address and the data eop flag bit of the next message.

A data eop flag bit of 1 indicates that the corresponding message fragment carries the eop flag bit.

Therefore, the generation of the information linked list of the message is realized.

Specifically, the queue read state memory 24 stores the state of each queue linked list; a corresponding bit of 1 indicates that a message in the queue is waiting for dequeuing.

The tail address memory stores the information linked list address of the last piece of information. Hereinafter, the data stored at the address corresponding to the queue in this memory is referred to as the tail address.

The head address memory stores three important pieces of information about the first message of the current linked list: the information linked list head address, the data linked list head address and the data eop flag bit. Hereinafter, the data stored at the address corresponding to the queue in this memory is referred to as the head address.

The information linked list stores the information linked list head address, the data linked list head address and the data eop flag bit of the next message.

The data linked list stores the data fragment address and the data eop flag bit of the next message fragment.
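The two linked-list entry formats described above can be sketched as plain records. The field names below are illustrative assumptions; only the three-field/two-field layouts come from the text.

```python
from dataclasses import dataclass

@dataclass
class InfoEntry:
    """One information linked list entry: fields describe the NEXT message."""
    next_info_addr: int   # information linked list head address of the next message
    next_data_head: int   # data linked list head address of the next message
    next_data_eop: int    # data eop flag bit of the next message

@dataclass
class DataEntry:
    """One data linked list entry: fields describe the NEXT fragment."""
    next_frag_addr: int   # data fragment address of the next message fragment
    next_frag_eop: int    # data eop flag bit of the next message fragment
```

Because each entry carries the pointers for the next element rather than for itself, reading one entry is enough to advance both the message chain and the fragment chain, which is what enables the single-scheduling dequeue described later.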

Specifically, as shown in fig. 4, in the store-and-forward mode, assume that four pieces of key information (information 1, 2, 3 and 4) are waiting for scheduling in the current queue, corresponding to message one, message three, message four and message one respectively. The corresponding messages consist of three, one, two and three message fragments respectively.

The addresses of information 1-4 in the information linked list are a0, B0, C0 and D0 respectively, and their stored values are {B0, B0}, {C0, C0}, {D0, D0} and {e0, B0} respectively.

In the data linked list, the address of message fragment 0 of message one is B0, the address of message fragment 0 of message three is C0, and the address of message fragment 0 of message four is D0. Taking message one as an example, message fragment 0 carries the sop flag bit and message fragment 2 carries the eop flag bit.

Taking information 1 as an example, the head address memory stores: the information linked list head address B0, the data linked list head address B0, and the eop flag bit of message one.

The tail address memory stores: the information linked list address e0 of the information following information 4, and the data linked list head address B0 corresponding to information 4.

By applying the message storage method provided by this embodiment, the data linked list can be obtained directly by accessing the head address and the tail address of the queue, without first reading the information linked list and then reading the data linked list; this reduces the number of scheduling stages, reduces the memories introduced by multi-stage scheduling, reduces the physical area of the chip and lowers the message forwarding delay.

This embodiment provides a message enqueuing method applicable to multicast store-and-forward, as shown in fig. 5, where the method includes:

s1, setting the head address and the tail address of the queue and the reading state of the queue;

s2, storing the message fragments;

S3, in response to the enqueue request of the message, applying for an address ptr_Y matching the information linked list;

s4, acquiring a queue reading state of a target queue of the message, if the queue reading state is 0, executing a step S5, otherwise executing a step S6;

S5, writing the address of the message fragment carrying the sop flag bit into the data linked list head address field in the queue head address, writing ptr_Y into the information linked list head address field in the queue head address, writing the eop flag bit of the message fragment carrying the sop flag bit into the data eop flag bit field in the queue head address, writing ptr_Y into the queue tail address, setting the queue read state of the queue to 1, and executing step S7;

S6, reading the head address and the tail address of the queue; using the tail address as the address and ptr_Y, the address of the message fragment carrying the sop flag bit, and the eop flag bit of that fragment as the value, writing into the information linked list; writing ptr_Y into the queue tail address; and executing step S7; and

and S7, finishing the enqueue operation.

If the number of message fragments is greater than 1, the message length flag bit is 1; otherwise, the message length flag bit is 0.

In the initialization stage, or when the current message is the first message of the queue, the queue read state is 0; when other messages exist in the queue, the queue read state is 1.

If the message has only one fragment, the data eop flag bit field in the queue head address is set to 1; otherwise, this field is set to 0.
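The enqueue steps S3-S7 above can be sketched as follows (S1-S2, initialization and fragment storage, are taken as given). This is a dict-based illustration: the parameter names, the layout of `queues[q]` and the `free_info` pool are assumptions for the example, not the device's actual structures.

```python
def enqueue(q, frag0_addr, frag0_eop, queues, free_info, info_list):
    """Link one message into queue q (sketch of steps S3-S7).

    queues[q] holds 'head' (data_head, info_head, data_eop fields),
    'tail' and 'read_state'; free_info models the free-address pool."""
    ptr_y = free_info.pop(0)                   # S3: info-list address ptr_Y
    st = queues[q]
    if st['read_state'] == 0:                  # S5: queue was empty
        st['head'] = {'data_head': frag0_addr,
                      'info_head': ptr_y,
                      'data_eop': frag0_eop}
        st['tail'] = ptr_y
        st['read_state'] = 1
    else:                                      # S6: append behind the tail
        info_list[st['tail']] = (ptr_y, frag0_addr, frag0_eop)
        st['tail'] = ptr_y
```

Note that in S6 the new record is written at the previous tail address, so the entry stored at each message's info address describes the next message, matching the information-linked-list layout above.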

This embodiment also provides a message dequeuing method applicable to multicast store-and-forward, which performs dequeue management on a queue processed by applying the message enqueuing method for multicast store-and-forward described above; as shown in fig. 6, the method includes:

S01, accessing the head address and the tail address of the queue using the queue number to obtain the data linked list head address field in the head address; accessing the data memory 1 with the data linked list head address as the address to obtain the message content and send it to the downstream module; if the data eop flag bit in the head address is 1, performing step S02, otherwise performing step S05;

s02, judging whether the head address and the tail address of the information linked list are the same, if so, executing a step S03, otherwise, executing a step S04;

s03, setting the queue reading state of the queue to 0, triggering a queue reselection mechanism of a destination port, and executing the step S06;

S04, using the information linked list head address field as the address, reading the information linked list to obtain the information linked list head address, the data linked list head address and the data eop flag bit of the next message; writing these into the information linked list head address, data linked list head address and data eop flag bit fields of the queue head address; triggering the queue reselection mechanism of the destination port; and executing step S06;

S05, accessing the data linked list using the data linked list head address to obtain the data linked list address and the data eop flag bit of the next message fragment of the message; writing these fields into the data linked list head address and data eop flag bit fields of the queue head address correspondingly; and executing step S06; and

and S06, finishing the dequeue operation.

A data eop flag bit of 1 in the head address indicates that the fragment at the current data linked list head address is the last message fragment of the message.

If the head address and the tail address of the information linked list are the same, the fragment currently dequeued is the last fragment of the last message of the queue; if they are different, the fragment currently dequeued is the last fragment of its message, but that message is not the last message of the queue.
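The dequeue flow S01-S06 above can be sketched in the same illustrative style, assuming a dict-based model of the queue head/tail record, the two linked lists and the data memory (all names below are assumptions). Each call emits one fragment per scheduling pass, which is the single-scheduling property the method claims.

```python
def dequeue(q, queues, info_list, data_list, data_mem):
    """Emit one fragment of the head message of queue q (steps S01-S06).

    queues[q]['head'] holds data_head / info_head / data_eop fields;
    info_list entries are (next_info, next_data_head, next_data_eop);
    data_list entries are (next_frag_addr, next_frag_eop)."""
    st = queues[q]
    head = st['head']
    content = data_mem[head['data_head']]      # S01: read the fragment
    if head['data_eop'] == 1:                  # last fragment of the message
        if head['info_head'] == st['tail']:    # S02/S03: queue drained
            st['read_state'] = 0
        else:                                  # S04: load the next message
            nxt_info, nxt_data, nxt_eop = info_list[head['info_head']]
            st['head'] = {'info_head': nxt_info,
                          'data_head': nxt_data,
                          'data_eop': nxt_eop}
    else:                                      # S05: advance the fragment pointer
        nxt_frag, nxt_eop = data_list[head['data_head']]
        head['data_head'], head['data_eop'] = nxt_frag, nxt_eop
    return content
```

A two-fragment message followed by a single-fragment message dequeues as three fragments, after which the queue read state returns to 0.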

Optionally, when performing QoS queue management, the dequeuing method further includes: if the message fragment participating in scheduling does not carry the eop flag bit, performing the QoS state update based on the bit width W of the data memory 1; and if the message fragment participating in scheduling carries the eop flag bit, performing the QoS state update based on the length of the message fragment.

The length of the message or message fragment can be obtained by managing the message length flag bit: if the message has only one fragment, the message length flag bit field is the actual message length; otherwise, it is the bit width W of the data memory 1.
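Under the assumption that the final fragment's residual length is msg_len − (N − 1)·W, this QoS accounting rule can be sketched as follows; the function name and the residual-length formula are illustrative, not stated in the text.

```python
def qos_length(frag_eop, W, msg_len):
    """Length charged to QoS for one scheduled fragment.

    A non-final fragment is charged the full memory width W; the final
    (eop) fragment is charged the residual length of the message,
    msg_len - (N - 1) * W with N = ceil(msg_len / W). The residual
    formula is an assumption made for this sketch."""
    if not frag_eop:
        return W
    n = -(-msg_len // W)  # ceiling division
    return msg_len - (n - 1) * W
```

For example, with W = 32 and a 100-byte message, the first three fragments are each charged 32 and the final fragment is charged 4.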

According to the message dequeuing method provided by the embodiment, when dequeuing operation is performed, the fragment address of the message can be directly obtained through one-time scheduling through the fusion setting of the information linked list and the data linked list, so that the message forwarding delay is effectively reduced. Meanwhile, the cache information from message scheduling to data scheduling is reduced, and the physical area of a chip is reduced.

Optionally, this embodiment further provides a storage scheduling apparatus, which stores messages by applying the message storage method described above.

Optionally, this embodiment further provides a storage scheduling apparatus, which manages queues by applying the message enqueuing method applicable to multicast store-and-forward described above.

Optionally, this embodiment further provides a storage scheduling apparatus, which manages queues by applying the message dequeuing method applicable to multicast store-and-forward described above.

Since the technical contents and features of the present invention have been disclosed above, those skilled in the art can make various substitutions and modifications without departing from the spirit of the present invention based on the teaching and disclosure of the present invention, and therefore, the scope of the present invention is not limited to the disclosure of the embodiments, but includes various substitutions and modifications without departing from the present invention, and is covered by the claims of the present patent application.
