Interaction method and device based on an intelligent robot

Document No.: 1719989 · Publication date: 2019-12-17

Reading note: this technique, "An interaction method and device based on an intelligent robot", was designed and created by 刘源 on 2019-08-21. Its main content is as follows: The invention relates to the field of human-computer technology, and in particular to an interaction method based on an intelligent robot, which comprises the following steps: acquiring interaction data in real time, the interaction data comprising character data and scene data; identifying the character data and the scene data to obtain an identification tag set; performing a matching calculation between the identification tag set and a knowledge graph to obtain the scene relationship corresponding to the identification tag set, and further deriving, based on the knowledge graph, the conceptual phenomenon corresponding to the scene relationship; determining interactive output content based on the conceptual phenomenon; and acquiring a corresponding computer interaction instruction according to the interactive output content and executing it. This realizes automatic scene judgment and feedback by the intelligent robot and reduces human effort. In addition, the invention provides an intelligent robot device and a computer-readable storage medium.

1. An interaction method based on an intelligent robot, which is characterized by comprising the following steps:

acquiring interactive data in real time, wherein the interactive data comprises character data and scene data;

Identifying the character data and the scene data to obtain an identification tag set;

Based on the identification tag set, performing matching calculation with a knowledge graph to obtain a scene relation corresponding to the identification tag set, and further deducing and obtaining a concept phenomenon corresponding to the scene relation based on the knowledge graph;

Determining interactive output content based on the conceptual phenomena;

And acquiring a corresponding computer interaction instruction according to the interactive output content, and executing it.

2. The method of claim 1, wherein the interaction data comprises multimodal data acquired via audio/video capture devices and GPS.

3. The method of claim 1, wherein the identifying the character data and the scene data to obtain an identification tag set comprises:

determining a person identity label according to face recognition and voiceprint recognition;

Outputting character emotion labels according to the computer vision, the first deep learning model and the semantic recognition model;

Outputting a character action label according to the computer vision and the second deep learning model;

Deriving character dialogue content in a text form according to voice recognition, and performing character intention recognition based on the character dialogue content to obtain an intention label;

Outputting object labels according to the computer vision and the third deep learning model;

Outputting an event label according to the computer vision and the fourth deep learning model;

Determining the time of occurrence of an event based on the robot's own system time;

Estimating the location where the event occurs according to GPS positioning and map information;

And simultaneously determining the confidence level of each of the identification tags.

4. The method according to claim 3, wherein performing matching calculation with a knowledge graph based on the identification tag set to obtain the scene relationship corresponding to the identification tag set specifically comprises:

Respectively calculating the weight of each identification tag in the knowledge graph;

Calculating the matching degree of the scene relation between the identification tag set and the knowledge graph based on the weight and the confidence coefficient;

and determining the scene relation corresponding to the identification tag set based on the matching degree.

5. The method of claim 4, wherein separately calculating the weight of each of the identification tags in the knowledge graph specifically comprises:

obtaining the weight of each identification tag in the knowledge graph based on the following formula:

w_ij = tf_ij × log(N / df_i)

wherein w_ij represents the weight of identification tag i in scene relationship j; tf_ij represents the number of occurrences of identification tag i in scene relationship j; df_i represents the number of scene relationships containing identification tag i; and N represents the total number of scene relationships.

6. The method according to claim 4 or 5, wherein calculating the matching degree between the identification tag set and a scene relationship in the knowledge graph based on the weight and the confidence specifically comprises:

Obtaining the matching degree between the identification tag set and a scene relationship in the knowledge graph based on the following expression:

wherein P_s represents the matching degree of the identification tag set with scene relationship s; N represents the total number of tag slots in scene relationship s; and X_n represents the confidence corresponding, in the identification tag set, to label n of the knowledge graph.

7. The method of claim 1, wherein after determining the scene relationship corresponding to the set of identification tags, further comprising the steps of:

And determining a vacant tag slot position in the scene relation based on the identification tag set, and supplementing the vacant tag slot position based on the identification tag set.

8. The method of claim 1, wherein after determining the scene relationship corresponding to the set of identification tags, further comprising the steps of:

and constructing a character model based on the identification tag set and the mapping relation of the identification tag set in the knowledge graph, and storing the character model in a graph database mode.

9. An intelligent robot device, characterized by comprising a data acquisition module, an interactive output module, and a data processing module communicatively connected to the data acquisition module and the interactive output module respectively;

The data acquisition module is used for acquiring interactive data in real time and sending the interactive data to the data processing module;

The data processing module receives the interactive data and identifies the interactive data to obtain an identification tag set; based on the identification tag set, performing matching calculation with a knowledge graph to obtain a scene relation corresponding to the identification tag set, further deducing and obtaining a concept phenomenon corresponding to the scene relation based on the knowledge graph, determining interactive output content, and sending the interactive output content to the interactive output module;

And the interactive output module acquires a corresponding computer interactive instruction based on the interactive output content and executes the computer interactive instruction.

10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the intelligent robot-based interaction method of any one of claims 1 to 8.

Technical Field

The present invention relates to the field of computer information technology, and in particular, to an intelligent robot-based interaction method, an intelligent robot apparatus, and a computer-readable storage medium.

Background

With the rapid development of computer technology, intelligent robots have been widely applied in various industries, especially the field of interpersonal interaction.

Most robots currently on the market interact by voice, mainly using preset instruction-style question-answer pairs (query/answer) and some PDA-style entertainment functions (video playback, video games, etc.) to produce interactive output from input information.

Each scenario defined by the robot's product positioning requires many experts and programmers to write preset interactive content, yet this fixed, hand-written content is stiff and poorly extensible; compared with other industries (such as computers and mobile phones), this is an unscientific design with poor user experience.

Disclosure of Invention

In view of the above problems, an embodiment of the present invention provides an interaction method based on an intelligent robot, comprising the steps of: acquiring interaction data in real time, the interaction data comprising character data and scene data; identifying the character data and the scene data to obtain an identification tag set; performing a matching calculation between the identification tag set and a knowledge graph to obtain the scene relationship corresponding to the identification tag set, and further deriving, based on the knowledge graph, the conceptual phenomenon corresponding to the scene relationship; determining interactive output content based on the conceptual phenomenon; and acquiring a corresponding computer interaction instruction according to the interactive output content and executing it.

The intelligent-robot-based interaction method provided by the invention collects and identifies data in the interaction scene in real time to obtain a multi-dimensional identification tag set, then performs a matching calculation between the identification tag set and a knowledge graph to determine the scene relationship the tag set maps to; this both automates the identification of the scene relationship and makes the identification result more accurate. Furthermore, after the scene relationship corresponding to the identification tag set is determined, the conceptual phenomenon hidden behind the scene relationship is further deduced, yielding more accurate interactive output content. The intelligent robot can thus judge people's psychological needs automatically according to the scene and provide reasonable, intelligent services, reducing manual intervention and labor cost.

Meanwhile, the invention also provides an intelligent robot device comprising a data acquisition module, an interactive output module, and a data processing module communicatively connected to each of them. The data acquisition module collects interaction data in real time and sends it to the data processing module. The data processing module receives and identifies the interaction data to obtain an identification tag set; performs a matching calculation between the identification tag set and a knowledge graph to obtain the corresponding scene relationship; further derives, based on the knowledge graph, the conceptual phenomenon corresponding to that scene relationship; determines the interactive output content; and sends it to the interactive output module. The interactive output module obtains a corresponding computer interaction instruction based on the interactive output content and executes it.

The invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the intelligent-robot-based interaction method described above.

In one implementation, the interactive data includes multi-modal data acquired by an audio/video acquisition device and a GPS.

In one implementation, identifying the character data and the scene data to obtain an identification tag set specifically comprises: determining a person identity label according to face recognition and voiceprint recognition; outputting character emotion labels according to computer vision, the first deep learning model, and the semantic recognition model; outputting character action labels according to computer vision and the second deep learning model; deriving character dialogue content in text form according to speech recognition, and performing intention recognition based on the dialogue content to obtain an intention label; outputting object labels according to computer vision and the third deep learning model; outputting event labels according to computer vision and the fourth deep learning model; determining the time of occurrence of an event based on the robot's own system time; estimating the location where the event occurs according to GPS positioning and map information; and simultaneously determining the confidence of each identification tag.

In one implementation, the performing matching calculation with a knowledge graph based on the identification tag set to obtain a scene relationship corresponding to the identification tag set specifically includes:

Respectively calculating the weight of each identification tag in the knowledge graph;

calculating the matching degree of the scene relation between the identification tag set and the knowledge graph based on the weight and the confidence coefficient;

And determining the scene relation corresponding to the identification tag set based on the matching degree.

In one implementation, the method for separately calculating the weight of each identification tag in the knowledge graph specifically includes:

obtaining the weight of each identification tag in the knowledge graph based on the following formula:

w_ij = tf_ij × log(N / df_i)

wherein w_ij represents the weight of identification tag i in scene relationship j; tf_ij represents the number of occurrences of identification tag i in scene relationship j; df_i represents the number of scene relationships containing identification tag i; and N represents the total number of scene relationships.

In one implementation, the method for calculating the matching degree between the identification tag set and the scene relationship in the knowledge-graph based on the weight and the confidence specifically includes:

Obtaining the matching degree between the identification tag set and a scene relationship in the knowledge graph based on the following expression:

wherein P_s represents the matching degree of the identification tag set with scene relationship s; N represents the total number of tag slots in scene relationship s; and X_n represents the confidence corresponding, in the identification tag set, to label n of the knowledge graph.

In one implementation, after determining the scene relationship corresponding to the identification tag set, the method further includes the steps of: and determining a vacant tag slot position in the scene relation based on the identification tag set, and supplementing the vacant tag slot position based on the identification tag set.

In one implementation, after determining the scene relationship corresponding to the identification tag set, the method further includes the steps of: and constructing a character model based on the identification tag set and the mapping relation of the identification tag set in the knowledge graph, and storing the character model in a graph database mode.

Drawings

One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not to scale unless otherwise specified.

Fig. 1 is a flowchart illustrating an interaction method based on an intelligent robot according to a first embodiment of the present invention;

FIG. 2 is a diagram illustrating a knowledge-graph fragment structure according to a first embodiment of the present invention;

Fig. 3 is a schematic structural diagram of an intelligent robot apparatus according to a second embodiment of the invention.

Detailed Description

In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth to provide a better understanding of the present application; the technical solutions claimed herein can nevertheless be implemented without these technical details, and with various changes and modifications based on the following embodiments.

The first embodiment of the invention provides an interaction method based on an intelligent robot. An intelligent reasoning "brain" for the robot can be realized on a technical architecture that deeply fuses deep learning algorithms with knowledge graph technology, so that large teams of experts and programmers are no longer needed to preset interactive content for every scene; instead, the robot gains richer scene-aware interactive content directly from the knowledge graph, giving the service robot professional knowledge and professional interactive content.

Referring to Fig. 1, which is a flowchart of the interaction method based on an intelligent robot according to the first embodiment of the present invention, the method specifically comprises the following steps:

Step 101, collecting interactive data in real time.

In this embodiment, multimodal data of the interactive scene may be acquired in real time through an audio/video acquisition device and GPS. The collected data may specifically include images, voice, audio, video, geographic location, system time, and other data occurring in the interactive scene. The interactive data can be broadly divided into character data and scene data; the particular data acquisition device is not specifically limited in the embodiments of the present invention.

Step 102, identifying the interactive data to obtain identification tags.

To extract more accurate information from the acquired interactive data, the data can first be classified into character data and scene data. Identifying the interactive data then means identifying the character data and the scene data.

Specifically, in a given interaction scenario, the character data may include identity information, emotion, action, and intention; the intelligent robot can identify this information and obtain the corresponding tags. This specifically comprises the following steps:

(1) Determining a person identity label according to face recognition and voiceprint recognition;

(2) outputting character emotion labels according to the computer vision, the first deep learning model and the semantic recognition model;

(3) Outputting a character action label according to the computer vision and the second deep learning model;

(4) And deriving character conversation content in a text form according to the voice recognition, and performing character intention recognition based on the character conversation content to obtain an intention label.

In this way, the corresponding person identification tag can be obtained based on the person data.

Similarly, the scene data may be identified to obtain a corresponding tag, which specifically includes:

(1) Outputting object labels according to computer vision and a third deep learning model. Objects are physical items appearing in the interactive scene, such as a football, a car, or a musical instrument; they can help confirm the interactive scene and so improve recognition accuracy.

(2) Outputting event labels according to computer vision and a fourth deep learning model. An event is a description of what is happening in the interactive scene, such as crossing a road, waiting for a vehicle, picking up an object, or pulling luggage; events can likewise help confirm the interactive scene.

(3) Determining the time of occurrence of an event based on the robot's own system time.

(4) Estimating the location where the event occurs according to GPS positioning and map information.

The label recognition based on deep learning models described above can be implemented with a convolutional neural network (CNN) algorithm, and the confidence of each label can be output together with the corresponding recognition label, as illustrated in Table 1:

Tag type        | Label 1 (confidence) | Label 2 (confidence) | Label 3 (confidence)
Emotion label   | Cry (44%)            | Anger (23%)          | Happy (2%)
Action label    | Push-pull (74%)      | Shaking (15%)        | Running (4%)
Intention label | Coming home (59%)    | Find mother (28%)    | Have a meal (5%)

Table 1

The confidence corresponding to each identification tag can be used for the subsequent matching calculation. It should be noted that the data used for matching may include all recognition results for each tag, or the higher-confidence results may first be screened in for subsequent calculation, reducing the computation load.
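A minimal sketch of this preliminary screening, assuming a simple tag structure and an illustrative 0.2 confidence threshold (neither is specified in the text):

```python
# Hypothetical sketch: keep only recognition results whose confidence
# clears a screening threshold, reducing the later matching workload.
def screen_tags(recognition_results, threshold=0.2):
    """recognition_results: {tag_type: [(label, confidence), ...]}"""
    screened = {}
    for tag_type, candidates in recognition_results.items():
        kept = [(label, conf) for label, conf in candidates if conf >= threshold]
        if kept:
            screened[tag_type] = kept
    return screened

# Values taken from Table 1 above.
results = {
    "emotion": [("cry", 0.44), ("anger", 0.23), ("happy", 0.02)],
    "action": [("push-pull", 0.74), ("shaking", 0.15), ("running", 0.04)],
}
screened = screen_tags(results)
```

With the 0.2 threshold, only "cry", "anger", and "push-pull" survive for the matching step.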

Step 103, performing a matching calculation with the knowledge graph based on the identification tag set to obtain the scene relationship corresponding to the identification tag set.

Specifically, the knowledge graph includes a plurality of scene relationships, each with corresponding tag slots. For example:

Different scene relationships in the knowledge graph carry different labels:

Scene 1: S2(D), S3(G), S4(I)

Scene 2: S1(B), S3(F), S5(K)

The content of a tag slot (Sn) corresponds to a class of identification tags, for example all character emotion labels. To improve compatibility and applicability, the initial knowledge-graph scene definitions generally do not fill every slot (Sn), so some slot content is empty.
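The slot structure can be sketched as plain dictionaries; the slot names and values follow the Scene 1 / Scene 2 example above, while `filled_slots` is an assumed helper, not part of the patent:

```python
# Illustrative sketch of scene relationships as tag slots (Sn); slot values
# are entity tags, and unfilled slots are left as None for compatibility.
scene_relations = {
    "scene_1": {"S1": None, "S2": "D", "S3": "G", "S4": "I", "S5": None},
    "scene_2": {"S1": "B", "S2": None, "S3": "F", "S4": None, "S5": "K"},
}

def filled_slots(relation):
    """Return only the slots that currently hold an entity tag."""
    return {slot: tag for slot, tag in relation.items() if tag is not None}
```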

The knowledge-graph matching calculation comprises two processes: first, the weight of each identification tag in the knowledge graph is calculated; then the matching degree between the identification tag set and each scene relationship in the knowledge graph is calculated from the weights and confidences, and the scene relationship corresponding to the identification tag set is determined from the matching degree.

It can be understood that, since the surface forms of the recognized tags cannot be guaranteed to match the knowledge-graph entity names, coreference resolution is applied before the matching calculation to improve matching accuracy.

The two processes are detailed below.

First, calculating the weight of each identification tag in the knowledge graph specifically comprises:

Obtaining the weight of each identification tag in the knowledge graph based on the following formula:

w_ij = tf_ij × log(N / df_i)

wherein w_ij represents the weight of identification tag i in scene relationship j; tf_ij represents the number of occurrences of identification tag i in scene relationship j; df_i represents the number of scene relationships containing identification tag i; and N represents the total number of scene relationships.

It can be seen that, in this embodiment, each identification tag is assigned a weight (Wn) according to its term frequency in the knowledge graph. If the corresponding knowledge-graph tag slot is empty, the weight is 0.

Term frequency is a statistical measure used to assess how important a word (here, a label) is to one document (a scene) in a corpus (the knowledge graph). The importance of an identification tag increases in proportion to its number of occurrences in the scene but decreases in inverse proportion to its frequency of occurrence across the entire knowledge graph. That is, a tag that appears in the knowledge graph has a weight, and the weight may grow with its occurrence count; at the same time, the weight must account for how often the tag appears throughout the graph, because the more scenes a tag appears in, the less it can uniquely determine a scene relationship. Quantifying a label's importance to a scene through term frequency in this way yields more reasonable results.
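Under the TF-IDF reading suggested by this description (the exact formula and logarithm base are not reproduced in the text, so this is one plausible form), the weight computation might look like:

```python
import math

# TF-IDF-style weight matching the description above: the weight of tag i
# in scene relationship j grows with its occurrence count tf_ij and shrinks
# as the tag appears in more of the N scene relationships (df_i of them).
def tag_weight(tf_ij, df_i, n_relations):
    if df_i == 0:        # tag absent from the knowledge graph: weight 0
        return 0.0
    return tf_ij * math.log(n_relations / df_i)
```

A tag occurring in every scene relationship gets weight 0, since log(N/N) = 0: such a tag cannot discriminate between scenes.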

Then, the matching degree between the identification tag set and each scene relationship in the knowledge graph can be obtained based on the following expression:

wherein P_s represents the matching degree of the identification tag set with scene relationship s; N represents the total number of tag slots in scene relationship s; and X_n represents the confidence corresponding, in the identification tag set, to label n of the knowledge graph.

By this method, the matching degree between the identification tag set and each scene relationship in the knowledge graph can be calculated.
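A hedged sketch of one plausible matching-degree computation combining slot weights and tag confidences; the patent's exact expression is not reproduced in the text, so the averaging form and all names here are assumptions:

```python
# Hypothetical sketch of the matching degree P_s: average, over the N tag
# slots of scene relationship s, of the slot weight times the confidence X_n
# of the corresponding tag in the identification tag set (0 when unmatched).
def matching_degree(slot_weights, confidences):
    """slot_weights: {slot_tag: weight}; confidences: {tag: confidence}."""
    n = len(slot_weights)
    if n == 0:
        return 0.0
    total = sum(w * confidences.get(tag, 0.0) for tag, w in slot_weights.items())
    return total / n
```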

Generally, the identification tag set must contain at least two tags of a scene relationship for interaction to proceed; with fewer than two matching tags, no scene relationship can be matched and no interaction is output.

After the matching degree with each scene relationship is obtained, the scene relationship with the highest matching degree can be selected as the one corresponding to the identification tag set.

It should be noted that, to ensure the accuracy of the matching result, a threshold may be set to prevent mismatches.

Specifically, after the matching degrees are obtained, the scene relationships whose matching degree exceeds a preset threshold are determined, and the one with the highest matching degree among them is selected as the scene relationship corresponding to the identification tag set. If all matching degrees fall below the threshold, there is no matching result: the knowledge graph contains no scene relationship corresponding to the tag set. In this case, default interactive output content, preset and stored locally, can be selected as the current interactive output content.
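The threshold-and-maximum selection with a default fallback can be sketched as follows; the 0.3 threshold and all names are illustrative, not taken from the text:

```python
# Sketch of selecting the best scene relationship: keep only relationships
# whose matching degree exceeds a preset threshold, take the highest, and
# fall back to default interactive output when nothing qualifies.
def select_scene(match_degrees, threshold=0.3, default="default_interaction"):
    """match_degrees: {scene_relationship: matching_degree P_s}."""
    qualified = {s: p for s, p in match_degrees.items() if p > threshold}
    if not qualified:
        return default
    return max(qualified, key=qualified.get)
```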

And 104, deducing and obtaining a conceptual phenomenon corresponding to the scene relation based on the knowledge graph.

Step 105, determining interactive output content based on the conceptual phenomena.

The knowledge graph in this embodiment may include a general knowledge graph and a professional knowledge graph, where the professional knowledge graph may be chosen based on the application scenario of the intelligent robot. In this embodiment, the prior knowledge and inference function of the knowledge graph may be used to perform knowledge reasoning and determine the conceptual phenomenon corresponding to the scene relationship. Referring to Fig. 2, a schematic diagram of a knowledge-graph fragment structure according to the first embodiment of the present invention: after the identification tag set is matched against the entity tags of the knowledge graph and the scene relationship [crying in the garden] is confirmed, knowledge reasoning using the prior knowledge and inference function of the graph derives the underlying conceptual phenomenon [separation anxiety] from the scene relationship, and the preset knowledge content on the graph then yields the corresponding interactive output content [playing games].

Step 106, acquiring a corresponding computer interaction instruction according to the interactive output content, and executing it.

After the interactive output content is determined, a preset computer interaction instruction can be obtained by lookup according to the interactive output content and executed to realize the interactive output. For example, when the interactive output content is [play game], an interactive game program preset in the system can be called and run.
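The lookup of a preset instruction from interactive output content might be sketched as a simple table; only the [play game] entry follows the text, the other entry and all names are assumptions:

```python
# Illustrative dispatch table from interactive output content to a preset
# computer interaction instruction; entries beyond "play game" are invented.
INSTRUCTION_TABLE = {
    "play game": "run_interactive_game_program",
    "play music": "run_music_player",
}

def lookup_instruction(output_content, table=INSTRUCTION_TABLE):
    """Return the preset instruction for the content, or None if unknown."""
    return table.get(output_content)
```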

Thus, the method collects and identifies data in the interactive scene in real time to obtain a multi-dimensional identification tag set, then performs a matching calculation between the identification tag set and the knowledge graph to determine the scene relationship the tag set maps to; this both automates the identification of the scene relationship and makes the identification result more accurate. Furthermore, after the scene relationship corresponding to the identification tag set is determined, the conceptual phenomenon implied by the scene relationship is further deduced, yielding more accurate interactive output content, so that the intelligent robot can automatically judge people's psychological needs according to the scene and provide reasonable, intelligent services.

In one implementation, after the scene relationship corresponding to the identification tag set is determined in step 103, vacant tag slots in that scene relationship may be identified and supplemented from the identification tag set. Matching scene relationships during robot interaction thus continuously improves the knowledge-graph content: each successful match adds content to vacant slots, steadily completing the information in the graph and improving matching accuracy.
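A sketch of supplementing vacant slots from the identification tag set, under the assumption (not stated in the text) that slots are keyed by tag type:

```python
# Sketch of supplementing vacant tag slots after a successful match: any
# slot still empty in the matched scene relationship is filled from the
# current identification tag set, gradually completing the knowledge graph.
def supplement_slots(relation, tag_set):
    """relation: {slot_type: tag or None}; tag_set: {slot_type: tag}."""
    for slot_type, tag in relation.items():
        if tag is None and slot_type in tag_set:
            relation[slot_type] = tag_set[slot_type]
    return relation
```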

Further, after the intelligent robot outputs an interaction, other tags fed back by its sensors can be used to verify the result, and the knowledge graph can be supplemented with the complete matched flow (for example, the emotion label originally associated with separation anxiety is sad, while the emotion label sensed after the output interaction is happy).

Meanwhile, a character model can be constructed based on the identification tag set and its mapping relationships in the knowledge graph, and stored in a graph database. For example, the content stored for a person may include:

For each occurring event whose scene relationship is successfully identified, recording the time, place, action tag, emotion tag, intention tag, scene tag, and event (knowledge-graph entity).

The character model records can be used for building a cognitive-domain artificial-intelligence Internet of Things with the education service robot as a terminal, where the robot, as a terminal collection unit, triggers and calls specific data records of professional knowledge for the persons it profiles. They can also serve scholars and researchers: having robot terminals collect study data greatly reduces manual effort and standardizes the data. Such an AI Internet of Things (including intelligent terminals) supports valuable subject research, such as child behavior observation and analysis, and provides a data basis for robot designers to optimize product content.
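A hypothetical shape for one character-model event record, with field names taken from the recorded items listed above and all values invented for illustration:

```python
# Hypothetical event record for a successfully identified scene relationship,
# keyed per person as the text suggests; every value below is illustrative.
def make_event_record(person, time, place, action, emotion, intention, scene, event):
    return {
        "person": person, "time": time, "place": place,
        "action": action, "emotion": emotion,
        "intention": intention, "scene": scene, "event": event,
    }

record = make_event_record("child_01", "2019-08-21 09:00", "kindergarten",
                           "push-pull", "cry", "find mother",
                           "crying in the garden", "separation anxiety")
```

In a graph database, such a record would typically become an event node linked to the person node and the knowledge-graph entity it references.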

Based on the same inventive concept, an embodiment of the invention further provides an intelligent robot apparatus. Specifically, referring to Fig. 3, Fig. 3 is a schematic structural diagram of the intelligent robot apparatus according to the second embodiment of the invention.

As shown in Fig. 3, the intelligent robot apparatus 300 comprises a data acquisition module 301, an interaction output module 303, and a data processing module 302 communicatively connected to the data acquisition module 301 and the interaction output module 303 respectively.

The data acquisition module 301 is configured to acquire interactive data in real time and send the interactive data to the data processing module 302;

The data processing module 302 receives the interactive data and identifies the interactive data to obtain an identification tag set; based on the identification tag set, matching calculation is performed with a knowledge graph to obtain a scene relationship corresponding to the identification tag set, and further, based on the knowledge graph, a concept phenomenon corresponding to the scene relationship is derived, interactive output content is determined, and the interactive output content is sent to the interactive output module 303. The method for identifying and calculating matching of the interactive data by the data processing module 302 may specifically refer to the description in the embodiment of fig. 1, and is not described again.

The interaction output module 303 obtains a corresponding computer interaction instruction based on the interaction output content, and executes the computer interaction instruction.

It should be noted that the intelligent robot apparatus provided in the above embodiment may be implemented as a computer program, and the division into the above functional modules is only an example; in practical applications, the functions may be assigned to different functional modules as needed, i.e., the internal structure of the device may be divided into different modules to complete all or part of the functions described above.

The intelligent robot device provided by this embodiment has a primary, basic cognitive-thinking capability: it can actively react to a scene based on multimodal interaction data and knowledge-graph inference, rather than merely performing instruction-style input and output. In addition, the human-machine interaction content can be designed modularly, reducing a large amount of design work.

Still another embodiment of the present invention relates to a computer-readable storage medium storing a computer program which, when executed by a processor, implements the embodiments of the intelligent-robot-based interaction method above.

those skilled in the art can understand that all or part of the steps in the method according to the above embodiments may be implemented by a program to instruct related hardware, where the program is stored in a storage medium and includes several instructions to enable a device (which may be a single chip, a chip, etc.) or a processor (processor) to execute all or part of the steps in the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
