Nuclear power station control room multi-channel fusion man-machine interaction method and system

Document No.: 1736855  Publication date: 2019-12-20  Views: 4  Original language: Chinese

Reading note: this technology, "Nuclear power station control room multi-channel fusion man-machine interaction method and system", was designed and created by 程波, 程俊, 宋呈群, 王鹏, 周毅超, 栾语, 吴一谦, 张学刚, 于正龙, 刘至垚 and 张建波 on 2019-08-08. Its main content is as follows: the invention provides a nuclear power station control room multi-channel fusion man-machine interaction method and system, wherein the method comprises the steps of: S1, selecting typical interaction tasks of the nuclear power station control room to construct a plurality of single-mode interaction modes; S2, selecting an applicable interaction mode according to the current state of the user and collecting the interactive operation of the user to obtain multi-channel interaction parameters; S3, receiving the multi-channel interaction parameters to perform data fusion processing, and reasoning out the user expression intention from the fused multi-channel interaction parameters; and S4, outputting the user expression intention to an application program to execute corresponding control. The system comprises an interaction device module, a data acquisition module, an interaction control module and an output module. The nuclear power station control room multi-channel fusion man-machine interaction method and system provided by the invention can provide a safe, natural and efficient interactive operation experience for users, reduce the probability of human-factor failure and improve the reliability of man-machine interaction.

1. A nuclear power station control room multi-channel fusion man-machine interaction method is characterized by comprising the following steps:

S1, selecting typical interaction tasks of the nuclear power station control room to construct a plurality of single-mode interaction modes;

S2, selecting an applicable interaction mode according to the current state of the user and collecting the interactive operation of the user to obtain multi-channel interaction parameters;

S3, receiving the multi-channel interaction parameters to perform data fusion processing, and reasoning out the user expression intention from the fused multi-channel interaction parameters;

and S4, outputting the user expression intention to an application program to execute corresponding control.

2. The multi-channel fusion man-machine interaction method according to claim 1, wherein the step S3 specifically comprises:

S31, receiving the multi-channel interaction parameters, performing dynamic and joint decoding analysis on them, and performing data fusion processing on the decoded and analyzed multi-channel interaction parameters by adopting a redundant fusion, complementary fusion and/or mixed fusion mode;

S32, establishing a database, performing small-sample online learning on the user expression intention in combination with the database, and reasoning out the user expression intention from the fused multi-channel interaction parameters.

3. The multi-channel fusion man-machine interaction method according to claim 1, wherein the step S2 specifically comprises:

detecting the psychophysical situation of the user during the interactive operation to determine the current state of the user;

and selecting the applicable interactive mode according to a preset multi-mode interactive applicable principle to collect the interactive operation of the user and obtain the multi-channel interaction parameters.

4. The multi-channel fusion human-computer interaction method according to claim 2, wherein the user expression intention comprises an operation intention and an interactive control action intention; the database comprises an application database and an interaction database, a common data set comprising human factor data and environmental knowledge is prestored in the interaction database, the interaction database is used for calculating the interaction control action intention from the fused multi-channel interaction parameters, and the application database is used for calculating the operation intention from the fused multi-channel interaction parameters.

5. The multi-channel fusion man-machine interaction method according to claim 1, wherein the step S2 further comprises:

judging whether the man-machine interaction load of the interaction operation reaches a set threshold value; and if so, prompting the user to stop the interactive operation.

6. A nuclear power station control room multi-channel fusion man-machine interaction system, characterized by comprising:

the interaction equipment module is used for selecting typical interaction tasks of the nuclear power station control room to construct a plurality of single-mode interaction modes;

the data acquisition module is used for selecting an applicable interactive mode according to the current state of the user and collecting the interactive operation of the user to obtain the multi-channel interaction parameters;

the interaction control module is used for receiving the multi-channel interaction parameters to perform data fusion processing and reasoning out user expression intentions from the fused multi-channel interaction parameters;

and the output module is used for outputting the user expression intention to an application program to execute corresponding control.

7. The multi-channel fusion human-computer interaction system of claim 6, wherein the interaction control module comprises:

the interactive information processing module is used for receiving the multichannel interactive parameters, performing dynamic and joint decoding analysis on the multichannel interactive parameters, and performing data fusion processing on the decoded and analyzed multichannel interactive parameters by adopting a redundant fusion, complementary fusion and/or mixed fusion mode;

and the database is used for performing small sample online learning on the user expression intention and reasoning out the user expression intention from the fused multi-channel interaction parameters.

8. The multi-channel fusion human-computer interaction system of claim 6, wherein the data acquisition module is specifically configured to: detect the psychophysical situation of the user during the interactive operation to determine the current state of the user, and select an applicable interactive mode according to a preset multi-mode interactive applicable principle to collect the interactive operation of the user and obtain the multi-channel interaction parameters.

9. The multi-channel fusion human-computer interaction system of claim 7, wherein the user expression intents include an operational intention and an interactive control action intention; the database comprises an application database and an interaction database, a common data set comprising human factor data and environmental knowledge is prestored in the interaction database, the interaction database is used for calculating the interaction control action intention from the fused multi-channel interaction parameters, and the application database is used for calculating the operation intention from the fused multi-channel interaction parameters.

10. The multi-channel fusion human-computer interaction system of claim 6, wherein the data acquisition module further comprises:

the interactive load evaluation module is used for judging whether the human-computer interactive load of the interactive operation reaches a set threshold value; and if so, prompting the user to stop the interactive operation.

Technical Field

The invention relates to the field of nuclear power station operation control, in particular to a nuclear power station control room multi-channel fusion man-machine interaction method and system.

Background

Safe operation of a nuclear power plant is critical to the well-being of the plant itself, the country, and even all of humanity, and is accomplished through man-machine interaction between control room operators and human-machine interface equipment. The existing interaction modes in a nuclear power plant control room are mainly traditional ones such as mouse, keyboard and touch screen operation. These traditional modes are single in form and inconvenient to operate: they consume a large amount of operating time, hinder quick and natural communication between the operator and the machine, and often give rise to misoperation in use, such as pressing a wrong key, which greatly increases the probability of human-factor failure. Further improvement is therefore urgently needed.

Disclosure of Invention

Aiming at the problems that the traditional interactive operation mode of a nuclear power station is single in form and prone to misoperation, the invention provides a nuclear power station control room multi-channel fusion man-machine interaction method and system, which can provide a safe, natural and efficient interactive operation experience, effectively improve the operation efficiency of the operator, and guarantee the operation safety of the nuclear power station.

The technical scheme adopted by the invention to solve the technical problems is as follows. On the one hand, a nuclear power station control room multi-channel fusion man-machine interaction method is provided, comprising the following steps:

S1, selecting typical interaction tasks of the nuclear power station control room to construct a plurality of single-mode interaction modes;

S2, selecting an applicable interaction mode according to the current state of the user and collecting the interactive operation of the user to obtain multi-channel interaction parameters;

S3, receiving the multi-channel interaction parameters to perform data fusion processing, and reasoning out the user expression intention from the fused multi-channel interaction parameters;

and S4, outputting the user expression intention to an application program to execute corresponding control.

In the above-mentioned multi-channel fusion human-computer interaction method of the present invention, step S3 specifically includes:

S31, receiving the multi-channel interaction parameters, performing dynamic and joint decoding analysis on them, and performing data fusion processing on the decoded and analyzed multi-channel interaction parameters by adopting a redundant fusion, complementary fusion and/or mixed fusion mode;

S32, establishing a database, performing small-sample online learning on the user expression intention in combination with the database, and reasoning out the user expression intention from the fused multi-channel interaction parameters.

In the above-mentioned multi-channel fusion human-computer interaction method of the present invention, step S2 specifically includes:

detecting the psychophysical situation of the user during the interactive operation to determine the current state of the user;

and selecting the applicable interactive mode according to a preset multi-mode interactive applicable principle to collect the interactive operation of the user and obtain the multi-channel interaction parameters.

In the above-mentioned multi-channel fusion human-computer interaction method of the present invention, the user expression intention includes an operation intention and an interaction control action intention; the database comprises an application database and an interaction database, a common data set comprising human factor data and environmental knowledge is prestored in the interaction database, the interaction database is used for calculating the interaction control action intention from the fused multi-channel interaction parameters, and the application database is used for calculating the operation intention from the fused multi-channel interaction parameters.

In the above-mentioned multi-channel fusion human-computer interaction method of the present invention, step S2 further includes:

judging whether the man-machine interaction load of the interaction operation reaches a set threshold value; and if so, prompting the user to stop the interactive operation.

On the other hand, a nuclear power station control room multi-channel fusion man-machine interaction system is also provided, comprising:

the interaction equipment module is used for selecting typical interaction tasks of the nuclear power station control room to construct a plurality of single-mode interaction modes;

the data acquisition module is used for selecting an applicable interactive mode according to the current state of the user and collecting the interactive operation of the user to obtain the multi-channel interaction parameters;

the interaction control module is used for receiving the multi-channel interaction parameters to perform data fusion processing and reasoning out user expression intentions from the fused multi-channel interaction parameters;

and the output module is used for outputting the user expression intention to an application program to execute corresponding control.

In the above multi-channel fusion human-computer interaction system of the present invention, the interaction control module comprises:

the interactive information processing module is used for receiving the multichannel interactive parameters, performing dynamic and joint decoding analysis on the multichannel interactive parameters, and performing data fusion processing on the decoded and analyzed multichannel interactive parameters by adopting a redundant fusion, complementary fusion and/or mixed fusion mode;

and the database is used for performing small sample online learning on the user expression intention and reasoning out the user expression intention from the fused multi-channel interaction parameters.

In the above multi-channel fusion human-computer interaction system of the present invention, the data acquisition module is specifically configured to: detect the psychophysical situation of the user during the interactive operation to determine the current state of the user, and select an applicable interactive mode according to a preset multi-mode interactive applicable principle to collect the interactive operation of the user and obtain the multi-channel interaction parameters.

In the above multi-channel fusion human-computer interaction system of the invention, the user expression intention comprises an operation intention and an interaction control action intention; the database comprises an application database and an interaction database, a common data set comprising human factor data and environmental knowledge is prestored in the interaction database, the interaction database is used for calculating the interaction control action intention from the fused multi-channel interaction parameters, and the application database is used for calculating the operation intention from the fused multi-channel interaction parameters.

In the above multi-channel fusion human-computer interaction system of the present invention, the data acquisition module further comprises:

the interactive load evaluation module is used for judging whether the human-computer interactive load of the interactive operation reaches a set threshold value; and if so, prompting the user to stop the interactive operation.

The nuclear power station control room multi-channel fusion man-machine interaction method and system have the following beneficial effects:

According to the multi-channel fusion man-machine interaction method and system for the nuclear power station control room provided by the invention, an applicable interaction mode can be selected according to the current state of the user to collect the multi-channel interaction parameters, which reduces the probability of human-factor failure and improves the reliability of man-machine interaction. By performing data fusion processing on the multi-channel interaction parameters, the expression intention of the user can be accurately inferred, ensuring a safe, natural and efficient interactive operation experience and improving the operation safety of the nuclear power station. The operator can thus monitor the nuclear power station more naturally and conveniently, and the operation efficiency of the operator is effectively improved.

Drawings

The invention will be further described with reference to the accompanying drawings and examples, in which:

FIG. 1 is a flowchart of a multi-channel fusion human-computer interaction method provided by an embodiment of the invention;

fig. 2 is a block diagram of a multi-channel human-computer interaction system according to an embodiment of the present invention.

Detailed Description

In order that those skilled in the art will more clearly understand the present invention, the present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.

Aiming at the problems of the traditional interaction modes, such as single form, low naturalness, long time consumption and mistaken touches, the invention provides a nuclear power station control room multi-channel fusion man-machine interaction method and system. The core idea is as follows: select typical interaction tasks of the nuclear power plant control room to construct a plurality of single-mode interaction modes; select an applicable interaction mode according to the current state of the user to collect multi-channel interaction parameters; perform data fusion processing on the multi-channel interaction parameters and reason out the user expression intention from the fused parameters; and output the user expression intention to an application program to execute corresponding control. In this way, the naturalness and efficiency of interaction between the operator and the human-computer interface of the nuclear power plant control room are improved, the cognitive load of the operator is reduced, the probability of human-factor failure is lowered, and the operation safety of the nuclear power plant is improved.

Fig. 1 is a flowchart of a nuclear power plant control room multichannel fusion human-computer interaction method shown in this embodiment, and as shown in fig. 1, the method includes the steps of:

S1, selecting typical interaction tasks of the nuclear power station control room to construct a plurality of single-mode interaction modes;

S2, selecting an applicable interaction mode according to the current state of the user and collecting the interactive operation of the user to obtain multi-channel interaction parameters;

S3, receiving the multi-channel interaction parameters to perform data fusion processing, and reasoning out the user expression intention from the fused multi-channel interaction parameters;

and S4, outputting the user expression intention to an application program to execute corresponding control.

According to the embodiment, on the basis of task analysis of nuclear power station control, a typical interactive task of a nuclear power station control room is selected to construct an interactive mode, so that the usability of the interactive mode is ensured; specifically, typical interaction tasks include screen layout, mouse positioning, parameter and equipment retrieval, equipment switch operation, equipment fixed value input, three-way valve selection operation and local trend graph scaling, and further include parameter monitoring, screen calling, equipment operation and the like involved in executing a nuclear power plant accident procedure under an accident condition; the interaction modes of the plurality of single modalities comprise eye control interaction, gesture interaction, voice interaction and keyboard and mouse interaction.

The eye control interaction can be constructed through interaction equipment such as an eye tracker. The attention of the operator is tracked by the eye tracker so that the mouse can be replaced: cursor positioning is performed by line of sight, realizing a function similar to controlling the cursor with a mouse or touch pad while sparing the operator the process of finding the cursor;

the gesture interaction can be constructed through interaction equipment such as cameras and sensors. All actions performed by the operator at the control room console are recognized through gesture interaction, including but not limited to moving, clicking, sliding, grabbing, releasing, zooming in, zooming out, partial magnification and the like; a virtual keyboard can also be called up, so that the operator can input and control on the virtual keyboard without changing position. While gesture control is performed, an action diagram of the hand skeleton can be displayed on the screen in real time, so that the user can intuitively perceive and confirm the accuracy of the gesture interaction before relying on it;

the voice interaction can be realized through interaction equipment such as voice sensors. Voice interactive operation realizes functions such as speech recognition, speech understanding and speech synthesis, supports interaction through voice and system prompt tones, establishes a unified system suitable for both isolated-word and continuous speech recognition, and realizes short command control tasks as well as continuous text input tasks.

In the above embodiment of the invention, the novel interaction modes can cooperate with the input devices of the existing nuclear power plant control room console, such as the keyboard, mouse and trackball, so that the original user experience is not affected.
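The construction of single-modality interaction modes described above can be sketched as a common channel abstraction; the class and field names below are illustrative assumptions, not data structures specified by the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChannelSample:
    channel: str      # e.g. "eye", "gesture", "voice", "keyboard_mouse"
    timestamp: float  # seconds since the interaction session started
    payload: dict     # raw modality-specific measurement

@dataclass
class InteractionChannel:
    """One single-modality interaction mode built for a typical control-room task."""
    name: str
    samples: List[ChannelSample] = field(default_factory=list)

    def capture(self, timestamp: float, payload: dict) -> ChannelSample:
        # Record one interactive operation on this channel.
        sample = ChannelSample(self.name, timestamp, payload)
        self.samples.append(sample)
        return sample

# Constructing the single-modality modes named in the text:
channels = {m: InteractionChannel(m)
            for m in ("eye", "gesture", "voice", "keyboard_mouse")}
channels["eye"].capture(0.10, {"gaze_x": 640, "gaze_y": 360})
channels["voice"].capture(0.12, {"text": "open valve screen"})
```

The per-channel sample lists are what the data acquisition module would hand to the fusion stage as multi-channel interaction parameters.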

Further, the step S3 specifically includes:

S31, receiving the multi-channel interaction parameters, performing dynamic and joint decoding analysis on them, and performing data fusion processing on the decoded and analyzed multi-channel interaction parameters by adopting a redundant fusion, complementary fusion and/or mixed fusion mode;

In implementation, the multi-channel interaction parameters are multi-source heterogeneous information with different data structures acquired by different sensors. Dynamic and joint decoding analysis means inputting the multi-source heterogeneous information as time sequences, analyzing the internal relations that arise over time both within a homologous information sequence and among the heterogeneous sources, and outputting corresponding structured information; for example, various human body postures can be accurately identified by jointly decoding an image time sequence and an acceleration sensor signal time sequence;
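As a minimal illustration of the time-sequenced input that joint decoding relies on, the sketch below pairs camera-frame timestamps with the nearest accelerometer timestamps; the function name and tolerance value are assumptions made for the example.

```python
from bisect import bisect_left

def align_streams(ts_a, ts_b, tolerance=0.05):
    """Pair each timestamp in stream A with the nearest timestamp in stream B.

    Joint decoding of multi-source heterogeneous information needs
    time-aligned samples; ts_a and ts_b must be sorted ascending.
    """
    pairs = []
    for t in ts_a:
        i = bisect_left(ts_b, t)
        # Candidates: the neighbor just below and just above t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(ts_b)]
        if not candidates:
            continue
        j = min(candidates, key=lambda j: abs(ts_b[j] - t))
        if abs(ts_b[j] - t) <= tolerance:
            pairs.append((t, ts_b[j]))
    return pairs

# Camera frames at ~30 Hz vs. accelerometer samples at 100 Hz:
frames = [0.000, 0.033, 0.066]
accel = [i * 0.01 for i in range(10)]
print(align_streams(frames, accel))
```

Each aligned pair can then feed a posture classifier that looks at both sources jointly rather than at either alone.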

the redundancy fusion mode means that one path of information of the multi-source heterogeneous information can completely confirm the output category, but the other path of information is still fused to further confirm the output category; the complementary fusion mode means that one path of information of the multi-source heterogeneous information can confirm the output parent category of the multi-source heterogeneous information, and the other path of information needs to be fused to confirm the sub-category of the parent category; for example, the image information can confirm that a person is moving, but cannot confirm specific movement, and the inertial sensing signal can further confirm the fine movement of the movement; and the hybrid fusion mode is to combine the redundant fusion mode and the complementary fusion mode to output deconstruction information together.

In the above embodiment of the present invention, since the multi-channel interaction parameters are multi-source heterogeneous information with different data structures acquired by different sensors, the received parameters are dynamically and jointly decoded and analyzed, and data fusion processing is performed with multiple fusion modes, in order to reduce data redundancy and increase the processing speed of the interaction process. This effectively reduces the processing burden on the database when reasoning out the user expression intention, and indirectly increases the accuracy with which that intention is identified.

Further, the step S3 includes the steps of:

S32, establishing a database, performing small-sample online learning on the user expression intention in combination with the database, and reasoning out the user expression intention from the fused multi-channel interaction parameters.

Wherein the user expression intent comprises an operational intent and an interactive control action intent; the database comprises an application database and an interaction database, a common data set comprising human factor data and environmental knowledge is prestored in the interaction database, the interaction database is used for calculating the interaction control action intention from the fused multi-channel interaction parameters, and the application database is used for calculating the operation intention from the fused multi-channel interaction parameters.

In this embodiment, the application database is used for computing and understanding the specific operation intention of the user in the actual application, for example the intention to drag a display from one large screen to another; the interaction database is used for computing and understanding specific interaction control action intentions, such as waving a hand to the left or moving the eyeballs. To better realize this computation and understanding, some common human factor data and environmental knowledge are prestored in the interaction database as a common data set to support various interaction applications.
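The division of labor between the two databases might be sketched as a two-stage lookup, where the interaction database resolves the control action and the application database resolves the operation intent; all table entries below are invented examples, not the patent's data.

```python
# Interaction database: common data set mapping fused channel features
# to interaction control action intents.
INTERACTION_DB = {
    ("gesture", "wave_left"): "swipe_left",
    ("eye", "fixate"): "cursor_position",
}

# Application database: maps a control action plus application context
# to a concrete operation intent.
APPLICATION_DB = {
    ("swipe_left", "large_screen_1"): "drag_display_to_large_screen_2",
    ("cursor_position", "valve_icon"): "select_valve",
}

def infer_intent(fused, context):
    """Two-stage inference: control action first, then operation intent."""
    action = INTERACTION_DB.get(fused)                  # interaction control action intent
    operation = APPLICATION_DB.get((action, context))   # operation intent
    return action, operation

print(infer_intent(("gesture", "wave_left"), "large_screen_1"))
```

In the patent's scheme these tables would be learned and refined by small-sample online learning rather than hard-coded.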

According to the embodiment of the invention, dividing the database into an application database for application-specific computation and an interaction database supporting interaction control computation effectively improves the accuracy and efficiency of data understanding. Meanwhile, small-sample online learning of the user expression intention is performed in advance in combination with the database, so that the sample-trained database has better adaptability and higher accuracy, ensuring the accuracy of the user expression intention inferred from the multi-channel interaction parameters.

Further, the step S2 specifically includes:

S21, detecting the psychophysical situation of the user during the interactive operation to determine the current state of the user;

and S22, selecting an applicable interaction mode according to a preset multi-mode interaction applicable principle, and collecting the interactive operation of the user to obtain the multi-channel interaction parameters.

In this embodiment, the psychophysical situations during interactive operation, and the multi-modal interaction applicable principles corresponding to them, include:

the psychophysical situation when handling an emergency: for example, an operator may be under extreme psychological tension during a pipeline rupture, so some easily confused interaction modes need to be locked out; a person's hands may shake involuntarily in an emergency, making gesture interaction unsuitable at that moment;

the psychophysical situation when handling a common event: for example, during routine daily operation the operator is in a relaxed mental state, so all interaction modes are applicable;

the psychophysical situation when handling repetitive events: for example, during repetitive page-turning retrieval operations the operator is in a bored psychological state, so a performance-enhancing mode is applicable, such as voice prompts on how to operate more quickly when the system detects that the person is performing repetitive actions.
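The applicable principles above can be sketched as a simple state-to-modality mapping; the state labels and modality sets are illustrative assumptions mirroring the three situations in the text.

```python
ALL_MODES = {"eye", "gesture", "voice", "keyboard_mouse"}

def applicable_modes(state):
    """Return the interaction modes applicable to the operator's current state."""
    if state == "emergency":
        # Hands may shake involuntarily: lock out gesture interaction.
        return ALL_MODES - {"gesture"}
    if state == "repetitive":
        # Bored operator repeating actions: all modes remain available,
        # plus voice prompts as a performance-enhancing aid (flag only).
        return ALL_MODES | {"voice_prompt"}
    # Routine operation in a relaxed state: every mode applies.
    return set(ALL_MODES)

print(sorted(applicable_modes("emergency")))
```

A real implementation would derive the state from the detected psychophysical situation rather than take it as a string.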

According to the embodiment of the invention, the cognitive load and the human factor failure probability of the user can be reduced, and the reliability of man-machine interaction is improved, so that the safety of the nuclear power station operation is ensured.

Further, the step S2 includes the steps of:

S23, judging whether the man-machine interaction load of the interactive operation reaches a set threshold value, and if so, prompting the user to stop the interactive operation.

In this embodiment, the man-machine interaction load of the interactive operation can be judged in time; for example, when the user has been operating for a long time, the user can be reminded to take a proper rest, preventing fatigue and mistakes. The load can also be judged from the operation intensity, for example by counting the number of operations within a certain period and checking whether the set threshold is reached, thereby guaranteeing the operator's working performance and reducing the workload. At the same time, the user can feel that the computer is under his control, or be allowed to exercise appropriate control, avoiding effects on his mood and the discomfort they would bring.
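Counting operations within a time period, as suggested above, might look like the following sliding-window monitor; the threshold and window values are invented examples, not figures from the patent.

```python
from collections import deque

class InteractionLoadMonitor:
    """Counts interactive operations in a sliding time window and flags
    when the count reaches a set threshold."""

    def __init__(self, threshold=100, window_s=60.0):
        self.threshold = threshold
        self.window_s = window_s
        self.events = deque()  # timestamps of recent operations

    def record(self, timestamp):
        """Record one operation; return True if the load threshold is reached."""
        self.events.append(timestamp)
        # Drop operations that fell out of the window.
        while self.events and timestamp - self.events[0] > self.window_s:
            self.events.popleft()
        return self.overloaded()

    def overloaded(self):
        return len(self.events) >= self.threshold

monitor = InteractionLoadMonitor(threshold=3, window_s=10.0)
for t in (1.0, 2.0, 3.0):
    overload = monitor.record(t)
print(overload)  # True -> prompt the user to stop and rest
```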

Fig. 2 shows the multi-channel fusion human-computer interaction system for the nuclear power plant control room provided in this embodiment. As shown in fig. 2, the system includes an interaction device module 10, a data acquisition module 20, an interaction control module 30, and an output module 40. The interaction device module 10 is used for selecting typical interactive tasks of the nuclear power plant control room to construct various single-mode interaction modes; the data acquisition module 20 is used for collecting the interactive operation of the user via an applicable interaction mode selected according to the current state of the user, to obtain multi-channel interaction parameters; the interaction control module 30 is configured to receive the multi-channel interaction parameters, perform data fusion processing, and infer the user expression intention from the fused parameters; the output module 40 is used for outputting the user expression intention to an application program to execute corresponding control.

The interactive control module 30 comprises an interactive information processing module 31 and a database 32, wherein the interactive information processing module 31 is used for receiving the multi-channel interactive parameters, performing dynamic and joint decoding analysis on the multi-channel interactive parameters, and performing data fusion processing on the multi-channel interactive parameters after decoding analysis by adopting a redundant fusion, complementary fusion and/or mixed fusion mode; the database 32 is used for performing small sample online learning on the user expression intention and reasoning the user expression intention from the fused multi-channel interaction parameters.

In the embodiment, the user expression intention comprises an operation intention and an interactive control action intention; the database comprises an application database and an interaction database, a common data set comprising human factor data and environmental knowledge is prestored in the interaction database, the interaction database is used for calculating the interaction control action intention from the fused multi-channel interaction parameters, and the application database is used for calculating the operation intention from the fused multi-channel interaction parameters.

Further, the data acquisition module 20 is specifically configured to detect the user's psychophysical state during the interactive operation to determine the user's current state, and, according to a preset multi-modal interaction applicability principle, to select an applicable interaction mode for collecting the user's interactive operations and obtaining the multi-channel interaction parameters.
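The "applicability principle" can be pictured as a rule table mapping a detected user state to the channels suited to it. The patent only states that a preset principle exists; the concrete states and rules below are illustrative assumptions.

```python
# Hypothetical applicability rules: detected user state -> usable channels.
APPLICABILITY_RULES = {
    "hands_busy": ["voice", "gaze"],            # hands occupied: no touch/gesture
    "noisy":      ["gesture", "touch"],         # loud environment: no voice
    "normal":     ["voice", "gesture", "touch"],
}

def select_modes(user_state):
    """Return the interaction modes applicable to the user's current state,
    falling back to the normal-state rules for unrecognised states."""
    return APPLICABILITY_RULES.get(user_state, APPLICABILITY_RULES["normal"])
```

A detected "hands busy" state would thus route the interaction through voice and gaze, while a noisy environment would disable the voice channel.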

The data acquisition module 20 further comprises an interactive load evaluation module 21, which is used to judge whether the human-computer interaction load of the interactive operation reaches a set threshold; if it does, the user is prompted to stop the interactive operation.
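A minimal sketch of that threshold check follows. The load formula, its weights, and the threshold value are all assumptions; the patent specifies only that a set threshold triggers a stop prompt.

```python
def interaction_load(active_channels, ops_per_minute):
    """Toy load score: more simultaneous channels and a higher operation
    rate raise the load (the weights 0.1 and 0.02 are assumed)."""
    return 0.1 * active_channels + 0.02 * ops_per_minute

def check_interaction_load(load_score, threshold=0.8):
    """Prompt the user to stop once the load reaches the set threshold."""
    if load_score >= threshold:
        return "please stop the interactive operation"
    return None
```

Under these assumed weights, an operator using three channels at twenty operations per minute stays below the threshold, while four channels at twenty-five operations per minute triggers the stop prompt.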

The following describes a processing flow of the human-computer interaction system through a specific application scenario:

1) The camera interaction device collects the user's gesture information and the microphone interaction device collects the user's voice information; together these form the multi-channel interaction parameters, which are transmitted to the interaction control module 30;

2) Through the interaction information processing module 31, the interaction control module 30 performs dynamic, joint decoding analysis on the multi-source heterogeneous information of the camera channel and the microphone channel, and processes the data of the two channels in a complementary fusion mode: a voice command from the microphone activates an option on the interactive interface, the user's gesture is then recognized from the camera's image data, and the option is operated according to that gesture;
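The complementary fusion of the two channels in this scenario can be sketched as a small function. The gesture labels, their action mapping, and the option name are hypothetical; the structure (voice selects the option, gesture supplies the action) follows the description above.

```python
def complementary_fusion(voice_text, gesture_label):
    """Fuse a voice channel and a gesture channel into one command:
    the recognized speech activates an interface option, and the
    recognized gesture determines the operation applied to it."""
    # Assumed gesture-to-action vocabulary for illustration.
    actions = {"swipe_up": "increase", "swipe_down": "decrease", "fist": "confirm"}
    return {
        "option": voice_text,                          # e.g. "coolant flow panel"
        "action": actions.get(gesture_label, "none"),  # unknown gesture -> no-op
    }
```

For instance, saying a panel name while performing an upward swipe yields a single fused command targeting that panel with an "increase" action.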

3) The interaction information processing module 31 sends the complementarily fused data to the database 32, performs small-sample online learning of the user expression intention in combination with the database 32, and accurately infers the user expression intention from multiple parallel/serial, precise/imprecise, independent/cooperative input information streams;

4) The output module 40 transmits the user expression intention to the application program to execute the corresponding operation control, thereby providing the user with humanized, natural interaction on the natural interaction interface.
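The small-sample online learning in step 3) can be pictured as an intent store that accepts new labelled examples one at a time and infers by similarity to the few examples it holds. The nearest-neighbour rule, the feature vectors, and the class name are stand-ins introduced here, not the patent's actual learning method.

```python
class IntentDatabase:
    """Toy nearest-neighbour store illustrating small-sample online
    learning of user intentions (a stand-in for database 32)."""

    def __init__(self):
        self.examples = []  # list of (feature_vector, intent) pairs

    def learn(self, features, intent):
        """Online learning: add one labelled example at a time."""
        self.examples.append((features, intent))

    def infer(self, features):
        """Return the intent of the nearest stored example
        (squared Euclidean distance; adequate for a few samples)."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(self.examples, key=lambda e: dist(e[0], features))[1]
```

After learning only two examples, the store can already map a new fused parameter vector to the closer of the two known intentions, which is the essence of small-sample operation.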

It can be clearly understood by those skilled in the art that, for convenience and brevity of description, other implementation details of the multi-channel fusion human-computer interaction system can be implemented by referring to the corresponding implementation process provided in the human-computer interaction method, and details are not described herein again.

In summary, the invention provides a nuclear power station control room multi-channel fusion man-machine interaction method and system. By selecting an applicable interaction mode according to the user's current state to acquire multi-channel interaction parameters, the probability of human failure is reduced and the reliability of man-machine interaction is improved; by performing data fusion processing on the multi-channel interaction parameters, the user expression intention can be accurately inferred, ensuring a safe, natural, and efficient interactive operation experience. This improves the operational safety of the nuclear power station, enables operators to monitor the plant more naturally and conveniently, and effectively improves their working efficiency.

It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.
