Data processing method, system, electronic equipment and storage medium

Document No.: 1627686  Publication date: 2020-01-14

Reading note: This technology, "A data processing method, system, electronic device and storage medium" (Data processing method, system, electronic equipment and storage medium), was designed and created by Liu Zhiyong (刘志勇) on 2019-09-06. Its main content is as follows: The application discloses a data processing method, comprising: acquiring quantity-change information of data to be flushed in a cache within a target time period, wherein the target time period runs from a historical time to the current time, and the historical time precedes the current time by a preset duration; determining a thread variation according to the quantity-change information, wherein the quantity-change information comprises a data variation that is positively correlated with the thread variation; and determining a target thread number according to the thread variation, and processing the data to be flushed in the cache by starting that number of threads. The application can flush data efficiently while guaranteeing concurrency. The application also discloses a data processing system, a storage medium and an electronic device, which have the same advantageous effects.

1. A data processing method, comprising:

acquiring quantity-change information of data to be flushed in a cache within a target time period, wherein the target time period runs from a historical time to the current time, and the historical time precedes the current time by a preset duration;

determining a thread variation according to the quantity-change information, wherein the quantity-change information comprises a data variation, and the data variation is positively correlated with the thread variation; and

determining a target thread number according to the thread variation, and processing the data to be flushed in the cache by starting that number of threads.

2. The data processing method of claim 1, further comprising:

when new data to be flushed sent by an upper-layer host is received, storing the new data to be flushed into the cache.

3. The data processing method of claim 1, wherein determining a thread variation according to the quantity-change information comprises:

determining a data change trend according to the quantity-change information, and determining the thread variation corresponding to the data change trend by looking up a relation table, wherein the relation table stores correspondences between data change trends and thread variations.

4. The data processing method of claim 3, wherein the data change trend comprises a steady trend, a linear upward trend, an exponential upward trend, a linear downward trend, or an exponential downward trend.

5. The data processing method of claim 1, wherein acquiring the quantity-change information of the data to be flushed in the cache within the target time period comprises:

acquiring, at a preset period, the load state of the data to be flushed in the cache within the target time period; and

determining the quantity-change information according to the load state corresponding to each period.

6. The data processing method of claim 1, further comprising:

when detecting that the amount of data to be flushed in the cache is greater than or equal to a preset maximum, generating feedback information so as to prevent new data to be flushed from being stored into the cache.

7. The data processing method of claim 1, wherein determining a target thread number according to the thread variation comprises:

determining an initial thread number, and correcting the initial thread number according to the thread variation to obtain the target thread number.

8. A data processing system, comprising:

a load detection module, configured to acquire quantity-change information of data to be flushed in a cache within a target time period, wherein the target time period runs from a historical time to the current time, and the historical time precedes the current time by a preset duration;

a trend prediction module, configured to determine a thread variation according to the quantity-change information, wherein the quantity-change information comprises a data variation, and the data variation is positively correlated with the thread variation; and

a thread scheduling module, configured to determine a target thread number according to the thread variation, and to process the data to be flushed in the cache by starting that number of threads.

9. An electronic device, comprising a memory in which a computer program is stored and a processor which, when calling the computer program in the memory, implements the steps of the data processing method according to any one of claims 1 to 7.

10. A storage medium having stored thereon computer-executable instructions which, when loaded and executed by a processor, carry out the steps of a data processing method as claimed in any one of claims 1 to 7.

Technical Field

The present disclosure relates to the field of computer technologies, and in particular, to a data processing method and system, a storage medium, and an electronic device.

Background

In all-flash storage, to support an out-of-place data layout, addresses must be allocated late: when an IO is issued to the bottom-layer module, no specific write position is attached to it. Instead, when the bottom-layer module receives the write IO, it allocates an address in append-write fashion, records the address in metadata, and establishes a mapping between the logical address and the actual address, so that the IO's address can later be queried accurately through the metadata. To guarantee concurrency, the physical space is sliced into blocks and different blocks are attached to different threads; data is append-written within a single block, and the multi-threaded arrangement provides write concurrency. A sliced space block is called a stripe.

To improve system performance, IO must be handled by multiple concurrent threads. In the related art, a fixed number of threads processes the flush tasks: if there are too few threads, IO processing is inefficient and system performance suffers; if there are too many, IO processing becomes too dispersed, making it difficult to fill a stripe quickly and slowing the disk-write speed.

Therefore, how to flush data efficiently while guaranteeing concurrency is a technical problem that those skilled in the art currently need to solve.

Disclosure of Invention

The application aims to provide a data processing method, a data processing system, a storage medium and an electronic device that can flush data efficiently while guaranteeing concurrency.

In order to solve the above technical problem, the present application provides a data processing method, comprising:

acquiring quantity-change information of data to be flushed in a cache within a target time period, wherein the target time period runs from a historical time to the current time, and the historical time precedes the current time by a preset duration;

determining a thread variation according to the quantity-change information, wherein the quantity-change information comprises a data variation, and the data variation is positively correlated with the thread variation; and

determining a target thread number according to the thread variation, and processing the data to be flushed in the cache by starting that number of threads.

Optionally, the method further comprises:

when new data to be flushed sent by an upper-layer host is received, storing the new data to be flushed into the cache.

Optionally, determining the thread variation according to the quantity-change information comprises:

determining a data change trend according to the quantity-change information, and determining the thread variation corresponding to the data change trend by looking up a relation table, wherein the relation table stores correspondences between data change trends and thread variations.

Optionally, the data change trend includes a steady trend, a linear upward trend, an exponential upward trend, a linear downward trend, or an exponential downward trend.

Optionally, acquiring the quantity-change information of the data to be flushed in the cache within the target time period comprises:

acquiring, at a preset period, the load state of the data to be flushed in the cache within the target time period; and

determining the quantity-change information according to the load state corresponding to each period.

Optionally, the method further comprises:

when detecting that the amount of data to be flushed in the cache is greater than or equal to a preset maximum, generating feedback information so as to prevent new data to be flushed from being stored into the cache.

Optionally, determining the target thread number according to the thread variation comprises:

determining an initial thread number, and correcting the initial thread number according to the thread variation to obtain the target thread number.

The present application also provides a data processing system, comprising:

a load detection module, configured to acquire quantity-change information of data to be flushed in a cache within a target time period, wherein the target time period runs from a historical time to the current time, and the historical time precedes the current time by a preset duration;

a trend prediction module, configured to determine a thread variation according to the quantity-change information, wherein the quantity-change information comprises a data variation, and the data variation is positively correlated with the thread variation; and

a thread scheduling module, configured to determine a target thread number according to the thread variation, and to process the data to be flushed in the cache by starting that number of threads.

The present application also provides a storage medium having stored thereon a computer program that, when executed, performs the steps of the above data processing method.

The application also provides an electronic device comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor implements the steps of the above data processing method when calling the computer program in the memory.

The application provides a data processing method comprising: acquiring quantity-change information of data to be flushed in a cache within a target time period, wherein the target time period runs from a historical time to the current time, and the historical time precedes the current time by a preset duration; determining a thread variation according to the quantity-change information, wherein the quantity-change information comprises a data variation, and the data variation is positively correlated with the thread variation; and determining a target thread number according to the thread variation, and processing the data to be flushed in the cache by starting that number of threads.

The application first acquires the quantity-change information of the data to be flushed within the target time period; from this information the device's water level can be determined, enabling trend prediction of the load pressure, and the thread variation is then determined from the quantity-change information. Because the data variation is positively correlated with the thread variation, the target thread number determined from the thread variation adjusts itself according to the water-level changes of the cache module. Since the thread number varies with the amount of data to be flushed, data can be flushed efficiently while concurrency is guaranteed. The application also provides a data processing system, a storage medium and an electronic device, which have the same advantageous effects and are not repeated here.

Drawings

In order to more clearly illustrate the embodiments of the present application, the drawings needed for the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.

Fig. 1 is a flowchart of a data processing method according to an embodiment of the present application;

FIG. 2 is a flowchart of a method for determining quantity-change information according to an embodiment of the present disclosure;

FIG. 3 is a schematic diagram of an optimal number of working threads adjusting system applied to a storage system according to an embodiment of the present disclosure;

fig. 4 is a schematic structural diagram of a data processing system according to an embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

Referring to fig. 1, fig. 1 is a flowchart of a data processing method according to an embodiment of the present disclosure.

The specific steps may include:

S101: acquiring quantity-change information of the data to be flushed in the cache within a target time period;

the embodiment can be applied to a server using full flash storage, and the data caching module enables a thread to flush the data to be flushed to an underlying module (such as a data organization module) after receiving the data to be flushed. It should be noted that the data to be flashed received by the data cache module may be an IO issued by the upper layer host, so as to reduce the write latency. That is to say, after the data to be flushed reaches the data cache module, the data to be flushed is temporarily stored in the data cache module, and meanwhile, the data to be flushed in the data cache module is issued to the data organization module in the lower layer in batch. Meanwhile, a certain amount of data to be refreshed is always stored in the data cache module, so that the amount of the data cache module occupying the total cache capacity can be called as a water level, and the higher the water level is, the higher the load pressure of the current cache is.

It should be noted that the target time period mentioned in this step runs from a historical time to the current time, the historical time being earlier than the current time by a preset duration; the specific length of this period is not limited here. As a possible implementation, the target time period may be a continuous time period.

The quantity-change information obtained in this step describes how the data to be flushed stored in the data cache module changes. It may include the data variation between the historical time and the current time, the fluctuation of the data amount from the historical time to the current time, and the amount of data to be flushed at each time point within the target time period; the specific content of the quantity-change information is not limited here.

S102: determining a thread variation according to the quantity-change information;

The thread variation is determined according to the quantity-change information of the data to be flushed in the data cache module, and describes how the number of threads needs to change: for example, the thread variation may be an increase of N threads over the original thread number, a decrease of M threads, or no change at all.

As a possible implementation, when the quantity-change information includes the data variation between the two time points (the historical time and the current time), the thread variation corresponding to that data variation may be determined from a preset correspondence. It should be noted that the preset correspondence may be configured before S102 is executed, and in it the data variation is positively correlated with the thread variation. This way of determining the thread variation achieves at least the following three technical effects:

(1) If the amount of data to be flushed at the current time exceeds that at the historical time by a first preset amount, the load pressure on the data cache module has increased, and leaving the thread number unchanged would reduce flush efficiency. With this determination method, the thread variation is an increase of a first preset number of threads.

(2) If the amount of data to be flushed at the current time is lower than that at the historical time by a second preset amount, the load pressure on the data cache module has decreased, and leaving the thread number unchanged would make stripes take longer to fill. With this determination method, the thread variation is a decrease of a second preset number of threads.

(3) If the amount of data to be flushed at the current time equals that at the historical time, or the difference between the two is within a preset range, the load pressure on the data cache module is unchanged, and the current thread number can be left as it is.

S103: determining a target thread number according to the thread variation, and processing the data to be flushed in the cache by starting that number of threads.

Having determined the thread variation, this embodiment determines the target thread number by combining the thread variation with the initial thread number. The specific process may include: determining the initial thread number, and correcting the initial thread number according to the thread variation to obtain the target thread number.

Further, this embodiment may determine the target thread number Y according to a first formula Y = Z + Δ, where Z is the initial thread number and Δ is the thread variation; Δ may be a positive value, 0, or a negative value. For example, Δ = A − B, where A is the amount of data to be flushed at the current time and B is the amount of data to be flushed at the historical time.
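As an illustrative sketch of the first formula (the function and parameter names are hypothetical, and the min/max clamp is an added assumption not stated in this embodiment):

```python
def target_thread_count(initial_threads: int,
                        current_pending: int,
                        historical_pending: int,
                        min_threads: int = 1,
                        max_threads: int = 64) -> int:
    """Compute the target thread number Y = Z + delta, with delta = A - B.

    initial_threads    -- Z, the initial thread number
    current_pending    -- A, amount of data to be flushed at the current time
    historical_pending -- B, amount of data to be flushed at the historical time
    The clamp to [min_threads, max_threads] is an added safety assumption.
    """
    delta = current_pending - historical_pending  # may be positive, zero, or negative
    return max(min_threads, min(max_threads, initial_threads + delta))
```

For example, with Z = 4, A = 10 and B = 8, the target thread number is 6; when A = B the thread number is unchanged.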

Since determining the target thread number from the thread variation merely fixes the number, without actually changing the number of threads executing the flush task, the setting takes effect by starting the target number of threads to process the data to be flushed in the cache.

It will be understood that, while the target number of threads is processing the data to be flushed in the cache, the data cache module may receive new data to be flushed from the upper-layer host and store it into the cache. Further, when detecting that the amount of data to be flushed in the cache is greater than or equal to a preset maximum, this embodiment may generate feedback information so as to prevent new data to be flushed from being stored into the cache, thereby avoiding an overload condition.
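A minimal sketch of this backpressure behavior, with hypothetical class and method names:

```python
import threading

class FlushCache:
    """Cache that refuses new data once a preset maximum is reached.

    put() returns False as the "feedback information" when the cache is
    full, so the caller can stop issuing new data to be flushed.
    """
    def __init__(self, max_pending: int):
        self.max_pending = max_pending
        self.pending = []
        self.lock = threading.Lock()

    def put(self, item) -> bool:
        with self.lock:
            if len(self.pending) >= self.max_pending:
                return False  # feedback: cache at maximum, reject new data
            self.pending.append(item)
            return True
```

A host-side caller would check the return value and retry or stall when `put` reports the cache is full.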

This embodiment first acquires the quantity-change information of the data to be flushed within the target time period; from this information the device's water level can be determined, enabling trend prediction of the load pressure, and the thread variation is then determined from the quantity-change information. Because the data variation is positively correlated with the thread variation, the target thread number determined from the thread variation adjusts itself according to the water-level changes of the cache module. Since the thread number varies with the amount of data to be flushed, data can be flushed efficiently while concurrency is guaranteed.

As a supplementary explanation to the embodiment corresponding to fig. 1, the process of determining the thread variation amount according to the amount variation information in S102 may specifically be:

determining a data change trend according to the quantity-change information, and determining the thread variation corresponding to the data change trend by looking up a relation table; the relation table stores correspondences between data change trends and thread variations.

The data change trend includes a steady trend, a linear upward trend, an exponential upward trend, a linear downward trend, or an exponential downward trend; it is not limited to these five cases, and finer-grained divisions may be made according to actual needs. Each data change trend may correspond to one thread variation. For example, the thread variation corresponding to the steady trend may be 0, that of the linear upward trend +Q, that of the exponential upward trend +4Q, that of the linear downward trend −Q, and that of the exponential downward trend −4Q, where +Q means adding Q new threads and −Q means removing Q existing threads, and so on.
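The example relation table above can be sketched as a simple lookup; the trend labels and the value Q = 1 are illustrative assumptions:

```python
Q = 1  # configurable step size; Q = 1 is an assumed example value

# Relation table: data change trend -> thread variation,
# using the example correspondences from the text.
TREND_TO_DELTA = {
    "steady":            0,
    "linear_up":        +Q,
    "exponential_up":   +4 * Q,
    "linear_down":      -Q,
    "exponential_down": -4 * Q,
}

def thread_variation(trend: str) -> int:
    """Look up the thread variation for a given data change trend."""
    return TREND_TO_DELTA[trend]
```

Finer-grained trends would simply add more rows to the table without changing the lookup.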

Referring to fig. 2, fig. 2 is a flowchart of a method for determining quantity-change information according to an embodiment of the present disclosure. This embodiment is a further description of S101 in the embodiment corresponding to fig. 1, and combining the two yields a more preferable embodiment. This embodiment may include the following steps:

S201: acquiring, at a preset period, the load state of the data to be flushed in the cache within the target time period;

S202: determining the quantity-change information according to the load state corresponding to each period.

The above embodiment provides a way of determining the quantity-change information: first acquire the load state of the data to be flushed at a preset period, then generate the quantity-change information by combining the load states of the periods within the target time period. This approach makes determining the quantity-change information efficient and reduces the system resources consumed in the process.
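The sampling side of S201 can be sketched as follows; the function name and the callable-based design are assumptions for illustration, not details from this embodiment:

```python
import time

def sample_water_levels(read_level, period_s: float, num_samples: int) -> list:
    """Sample the cache load state (water level) at a fixed period.

    read_level  -- callable returning the current cache water level
    period_s    -- the preset sampling period, in seconds
    num_samples -- how many periods make up the target time period
    The returned series is the raw input from which the
    quantity-change information is determined (S202).
    """
    levels = []
    for _ in range(num_samples):
        levels.append(read_level())
        time.sleep(period_s)
    return levels
```

In a real system this loop would run in the load detection sub-module's own timer thread rather than blocking a caller.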

The flow described in the above embodiments is explained below through an embodiment in practical use. Referring to fig. 3, fig. 3 is a schematic diagram of a system for adjusting the optimal number of working threads, applied to a storage system, according to an embodiment of the present application. The specific working process is as follows:

The data cache module mainly caches IO issued by the upper-layer host, primarily to reduce write latency. When data IO (i.e. the data to be flushed in the above embodiments) reaches the cache, it is stored there temporarily while the data in the cache is issued to the data organization module in batches. Because a certain amount of data to be flushed is always present in the cache, the fraction of the total cache capacity occupied by the data can be called the water level; the higher the water level, the greater the current load pressure on the cache. When the water level keeps rising, the lower-layer data organization module is processing more slowly than the upper-layer host is issuing IO, so the cache load pressure keeps growing; at that point the processing capacity of the lower-layer data organization module needs to be increased, that is, the number of concurrent threads needs to be raised.

In this embodiment, a load detection sub-module is added to the data cache module. It continuously monitors the current water level in the cache by timed sampling and uses the collected series of water-level changes as the basis for notifying a change in the thread number.

The trend prediction sub-module calculates the water-level change trend from the series of load water-level results it receives. The trend falls mainly into several types: steady, linear rise, exponential rise, linear fall, and exponential fall. The evaluation method fits the water-level trend mathematically from the latest several sample values (for example, 5). The calculation depends on the number of samples: more samples give a more accurate result but higher complexity, so an appropriate sample count can be chosen according to device performance, real-time requirements, and so on. For example, with 5 sample values (A/B/C/D/E), the trend can be determined by calculating the rate of change between each pair of adjacent values in turn: if B increases by 2% relative to A, C by 3% relative to B, D decreases by 2% relative to C, and E increases by 1% relative to D, the trend can be considered steady; if B increases by 2% relative to A, C by 4% relative to B, D by 7% relative to C, and E by 14% relative to D, the trend can be considered an exponential rise, and so on. The calculation listed above is only a simple method; more complex and precise methods can be adopted according to actual needs.
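The adjacent-rate calculation described above can be sketched as follows. The steady band (2.5%) and the acceleration factor (1.5×) are illustrative thresholds chosen so that the text's two examples classify as stated; they are not values given in this embodiment:

```python
def classify_trend(samples, steady_band: float = 0.025) -> str:
    """Classify the water-level trend from recent positive samples.

    Computes the rate of change between each pair of adjacent samples,
    then labels the series steady / linear / exponential, up or down.
    """
    rates = [(b - a) / a for a, b in zip(samples, samples[1:])]
    if all(abs(r) <= steady_band for r in rates):
        return "steady"
    rising = all(r > 0 for r in rates)
    falling = all(r < 0 for r in rates)
    # Treat roughly geometric growth in the rates (each at least
    # 1.5x the previous) as exponential; otherwise linear.
    accelerating = all(abs(rates[i + 1]) >= 1.5 * abs(rates[i])
                       for i in range(len(rates) - 1))
    if rising:
        return "exponential_up" if accelerating else "linear_up"
    if falling:
        return "exponential_down" if accelerating else "linear_down"
    return "steady"  # mixed small movements default to steady
```

With the text's first example (+2%, +3%, −2%, +1%) this returns "steady"; with the second (+2%, +4%, +7%, +14%) it returns "exponential_up".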

In this embodiment, after the change trend is obtained, the thread-number adjusting sub-module of the lower-layer data organization module is notified and adjusts the thread number according to the trend: for example, when the trend is steady the thread number is left unchanged; on a linear rise one thread is added; on an exponential rise two or more threads are added according to the amplitude of the change, and so on. The concurrent processing capacity of the data organization module is thereby adjusted in advance, better coping with changes in the load pressure.

After the data organization module is adjusted, its capacity for concurrent IO processing changes, which in turn affects the amount of data cached in the data cache module. In general, the stronger the concurrency, the faster IO is processed, the faster the cached data is flushed down, and the lower the cache water level becomes. The adjustment of the data organization module therefore feeds back into the cache water level, and by continuously and dynamically adjusting the thread number, the cache water level and the concurrency of the data organization module are kept in relative balance, so that the system always runs with an optimal number of concurrent threads. After the change trend is obtained, the number of working threads in the lower-layer data organization module is adjusted accordingly: a rising trend requires more threads to improve concurrency, and a falling trend requires fewer threads to prevent the IO from becoming too dispersed and hurting the disk-write speed.

In the above process, the upper-layer cache module predicts the load-pressure trend from its own water level and notifies the lower-layer data organization module of the expected result in advance; on receiving the notification, the data organization module increases or decreases the number of working threads ahead of time, and the changed thread number in turn feeds back into the pressure on the upper-layer cache module, finally achieving load balance between the two layers. This embodiment provides a scheme for automatically adjusting a storage system to its optimal number of working threads: by periodically sampling the cache water level and predicting the load-pressure change, the number of threads concurrently processing data in the data organization module can be adjusted in advance, so that the number of working threads is always maintained at an optimal level.
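Putting the pieces together, one iteration of the sample-predict-adjust feedback loop described above might be sketched as follows (all names hypothetical; the sampler, classifier, and relation table are passed in as callables so any of the sketches above could be plugged in):

```python
def adjust_worker_threads(read_samples, classify, trend_to_delta,
                          current_threads: int,
                          min_threads: int = 1,
                          max_threads: int = 64) -> int:
    """One iteration of the feedback loop: sample -> predict -> adjust.

    read_samples   -- returns the list of recent water-level samples
    classify       -- maps the sample series to a trend label
    trend_to_delta -- relation table: trend label -> thread variation
    Returns the new worker-thread count for the data organization module.
    """
    samples = read_samples()
    trend = classify(samples)
    delta = trend_to_delta.get(trend, 0)  # unknown trends leave the count as-is
    return max(min_threads, min(max_threads, current_threads + delta))
```

Run periodically, this keeps the cache water level and the data organization module's concurrency in the relative balance the embodiment describes.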

Referring to fig. 4, fig. 4 is a schematic structural diagram of a data processing system according to an embodiment of the present disclosure;

the system may include:

the load detection module 100, configured to acquire quantity-change information of data to be flushed in a cache within a target time period, wherein the target time period runs from a historical time to the current time, and the historical time precedes the current time by a preset duration;

the trend prediction module 200, configured to determine a thread variation according to the quantity-change information, wherein the quantity-change information comprises a data variation, and the data variation is positively correlated with the thread variation; and

the thread scheduling module 300, configured to determine a target thread number according to the thread variation, and to process the data to be flushed in the cache by starting that number of threads.

In this embodiment, the quantity change information of the data to be flushed within the target time period is first acquired; the cache's water level can be determined from this information, which enables trend prediction of the load pressure, and the thread variation is then determined from the quantity change information. Since the data variation is positively correlated with the thread variation, the target thread number determined from the thread variation adapts to the water level change of the cache module itself. Because the number of threads varies with the amount of data to be flushed, the data can be flushed efficiently while the concurrency capability is ensured.

Further, the system further comprises:

a cache module, configured to store new data to be flushed into the cache when the new data to be flushed sent by the upper-layer host is received.

Further, the trend prediction module 200 is specifically configured to determine a data change trend according to the quantity change information, and to determine the thread variation corresponding to that trend by querying a relation comparison table, where the relation comparison table stores the correspondence between data change trends and thread variations.

Further, the data change trend includes a steady trend, a linear upward trend, an exponential upward trend, a linear downward trend, or an exponential downward trend.
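One possible realization of this trend classification and of the relation comparison table is sketched below; the five trend labels follow the embodiment, while the thresholds and the thread variations in the table are illustrative assumptions:

```python
# Illustrative relation comparison table: data change trend -> thread variation.
# The five trend labels come from the embodiment; the deltas are assumptions.
RELATION_TABLE = {
    "exponential_up":   4,
    "linear_up":        1,
    "steady":           0,
    "linear_down":     -1,
    "exponential_down": -4,
}

def classify_trend(samples, eps=1.0):
    """Label the trend of successive pending-data counts."""
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    if all(abs(d) < eps for d in deltas):
        return "steady"
    rising = deltas[-1] > 0
    # Multiplicatively growing or shrinking step sizes -> exponential trend;
    # roughly constant step sizes -> linear trend.
    ratio = abs(deltas[-1]) / max(abs(deltas[0]), eps)
    exponential = ratio > 2 or ratio < 0.5
    if rising:
        return "exponential_up" if exponential else "linear_up"
    return "exponential_down" if exponential else "linear_down"

def thread_delta(samples):
    return RELATION_TABLE[classify_trend(samples)]

print(thread_delta([100, 200, 400, 800]))  # 4: exponentially rising load
```

Storing the trend-to-variation mapping in a table keeps the policy adjustable without touching the classification logic, which matches the comparison-table design above.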

Further, the load detection module 100 includes:

a period detection unit, configured to acquire, at a preset period, the load state of the data to be flushed in the cache within the target time period;

and a quantity change information determining unit, configured to determine the quantity change information according to the load state corresponding to each period.
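These two units can be read as a periodic sampler: each cycle records the load state (pending-entry count), and the quantity change information is derived from the samples retained in the window. A minimal sketch, with the window length as an assumption:

```python
from collections import deque

class LoadDetector:
    """Keeps the last `window` per-period samples of the pending count."""
    def __init__(self, window=5):
        self.samples = deque(maxlen=window)   # old samples fall off

    def sample(self, pending_count):
        # Called once per preset period with the current load state.
        self.samples.append(pending_count)

    def quantity_change(self):
        # Quantity change information over the target time period:
        # net change from the oldest retained sample to the newest.
        if len(self.samples) < 2:
            return 0
        return self.samples[-1] - self.samples[0]

det = LoadDetector(window=3)
for count in (10, 40, 90, 160):      # one sample per period
    det.sample(count)
print(det.quantity_change())  # 120: only the last 3 samples are retained
```

The bounded deque realizes the "historical time to current time" window: the preset duration is simply the window length times the sampling period.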

Further, the system further comprises:

and a feedback module, configured to generate feedback information when it is detected that the amount of data to be flushed in the cache is greater than or equal to a preset maximum, so as to prevent new data to be flushed from being stored into the cache.
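The feedback module can be pictured as simple backpressure on the write path: once the pending count reaches the preset maximum, writes from the upper-layer host are rejected until the flush threads drain the cache. A sketch under assumed names:

```python
class BoundedCache:
    """Cache that feeds back pressure once the pending count hits a maximum."""
    def __init__(self, max_pending):
        self.max_pending = max_pending
        self.pending = []

    def write(self, entry):
        # Feedback: refuse new to-be-flushed data while the cache is full.
        if len(self.pending) >= self.max_pending:
            return False            # caller must retry after a flush
        self.pending.append(entry)
        return True

    def flush_one(self):
        # One flush frees one slot, releasing the backpressure.
        return self.pending.pop(0) if self.pending else None

cache = BoundedCache(max_pending=2)
print(cache.write("a"), cache.write("b"), cache.write("c"))  # True True False
cache.flush_one()
print(cache.write("c"))  # True
```

In a real system the rejection would typically surface as a blocking write or a retryable error to the upper-layer host rather than a boolean.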

Further, the trend prediction module 200 is specifically configured to determine an initial thread number and to correct the initial thread number according to the thread variation, thereby obtaining the target thread number.
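Correcting the initial thread number by the thread variation can be as simple as an addition followed by clamping to the system's thread limits; the bounds below are illustrative assumptions:

```python
def target_threads(initial, variation, min_threads=1, max_threads=32):
    """Target thread number = initial number corrected by the predicted
    variation, clamped so the pool never empties or over-subscribes."""
    return max(min_threads, min(max_threads, initial + variation))

print(target_threads(8, 4))    # 12
print(target_threads(2, -4))   # 1: clamped to the lower bound
print(target_threads(30, 8))   # 32: clamped to the upper bound
```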

Since the system embodiment corresponds to the method embodiment, its description may refer to that of the method embodiment and is not repeated here.

The present application also provides a storage medium having a computer program stored thereon which, when executed, can implement the steps provided by the above embodiments. The storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

The application further provides an electronic device, which may include a memory and a processor, where the memory stores a computer program, and the processor may implement the steps provided by the foregoing embodiments when calling the computer program in the memory. Of course, the electronic device may also include various network interfaces, power supplies, and the like.

The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.

It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
