Fingertip tracking method based on deep learning and K-curvature method

Document No. 1798137 · Published 2021-11-05

Reading note: This technology, "A fingertip tracking method based on deep learning and K-curvature method" (一种基于深度学习和K-曲率法的指尖跟踪方法), was designed and created by Meng Hao, Wang Yue, Tian Yang, and Deng Yanqin on 2021-07-12. Its main content is as follows: the invention discloses a fingertip tracking method based on deep learning and the K-curvature method. First, a preprocessed data set is trained with the YOLOv3 network model to obtain a fingertip detection model. A video stream is then captured by camera and fed into the detection model to detect detection-box information, and a Kalman filter is initialized. Next, a prediction box is obtained from the Kalman filter, the IOU between the current frame's detection box and the prediction box is computed, an IOU threshold is set, and the IOU is compared against it: if the IOU exceeds the threshold, the Kalman filter is updated to obtain the fingertip tracking box; otherwise, the fingertip position is corrected with the K-curvature method and the Kalman filter is updated. Finally, a time threshold T-max is set, and tracking is terminated if no tracking information is detected within that many frames. The invention weakens the influence of complex environments on detection accuracy, improves detection speed, and increases accuracy and robustness.

1. A fingertip tracking method based on deep learning and the K-curvature method, characterized by comprising the following steps:

S1, acquiring a hand data set and preprocessing it;

S2, training the data set with the deep learning neural network model YOLOv3 to obtain a fingertip detection model;

S3, acquiring a video stream, inputting the current frame into the fingertip detection model, performing multi-scale feature extraction on the current frame image with the Darknet53 network, detecting the target category and the position of the detection box, and initializing a Kalman filter with this information;

S4, reading the next frame of the image, obtaining a prediction box with the Kalman filter, computing the IOU between this frame's detection box and the prediction box, setting an IOU threshold, and judging whether the IOU is larger than the IOU threshold; if so, proceeding to step S5, otherwise to step S6;

S5, updating the Kalman filter with the matched prediction box, outputting the state update value as this frame's tracking box, completing tracking for this frame, and returning to step S4;

S6, obtaining the position of the fingertip point with the K-curvature method;

S7, computing the Euclidean distance between the fingertip point and the center of the detection box and setting a distance threshold; when the distance is smaller than the threshold, initializing the Kalman filter to obtain a new prediction box and restarting matching; otherwise, deleting this frame's tracking information and reading the next frame;

S8, setting a time threshold T-max and terminating tracking if no tracking information is detected within T-max frames.

2. The fingertip tracking method based on deep learning and the K-curvature method according to claim 1, characterized in that step S1 comprises the following steps:

S11, collecting a large number of color images of hands with exposed fingertips under varying scenes, illumination, subject and fingertip angles, and numbers of hands;

S12, expanding the hand color images with data enhancement methods;

S13, marking the region box of the target fingertip, adding label information, and generating a data label file.

3. The fingertip tracking method based on deep learning and the K-curvature method according to claim 1, characterized in that step S2 comprises the following steps:

S21, resizing each input picture of the hand data set to 416 × 416 and adding gray letterbox bars to prevent distortion;

S22, downsampling the processed picture 5 times through the Darknet53 feature-extraction network to generate multi-scale feature maps;

S23, performing convolutional detection on the feature maps at the three scales 13 × 13, 26 × 26, and 52 × 52 in a multi-scale fusion manner to obtain 3 detection results, then applying non-maximum suppression to obtain the final result;

S24, generating the trained fingertip detection model.

4. The fingertip tracking method based on deep learning and the K-curvature method according to claim 1, characterized in that step S4 comprises the following steps:

S41, reading the next frame of the image and obtaining all predicted fingertip prediction boxes with the Kalman filter;

S42, computing the IOU between this frame's detection box and all prediction boxes;

S43, obtaining the unique match with the largest IOU with the Hungarian algorithm;

S44, setting an IOU threshold and judging whether the IOU is larger than the IOU threshold;

S45, proceeding to step S5 if the IOU is larger than the IOU threshold, otherwise to step S6.

5. The fingertip tracking method based on deep learning and the K-curvature method according to claim 4, characterized in that reading the next frame of the image and obtaining all predicted fingertip prediction boxes with the Kalman filter in step S41 specifically comprises:

The Kalman filter used contains seven state variables and four observation inputs:

$$\mathbf{x} = [\,u,\ v,\ x,\ y,\ \dot{u},\ \dot{v},\ \dot{x}\,]^{T},\qquad \mathbf{z} = [\,u,\ v,\ x,\ y\,]^{T}$$

Time update (prediction):

$$\hat{x}_{k}^{-} = A\,\hat{x}_{k-1} + B\,u_{k-1},\qquad P_{k}^{-} = A\,P_{k-1}\,A^{T} + Q$$

Measurement update (outputting the target state):

$$K_{k} = P_{k}^{-}H^{T}\big(H P_{k}^{-} H^{T} + R\big)^{-1},\qquad \hat{x}_{k} = \hat{x}_{k}^{-} + K_{k}\big(Z_{k} - H\hat{x}_{k}^{-}\big),\qquad P_{k} = (I - K_{k}H)\,P_{k}^{-}$$

where $\hat{x}_{k}^{-}$ denotes the a priori state estimate at time k; $\hat{x}_{k}$ and $\hat{x}_{k-1}$ denote the a posteriori state estimates at times k and k−1, respectively; A denotes the state transition matrix; B denotes the gain of the optional control input $u \in R^{l}$, and $u_{k-1}$ the control input at time k−1; $P_{k}^{-}$ denotes the a priori estimate covariance at time k; $P_{k}$ and $P_{k-1}$ denote the a posteriori estimate covariances at times k and k−1, respectively; Q denotes the covariance of the process excitation noise and R the covariance of the measurement noise; $K_{k}$ denotes the Kalman gain; $Z_{k}$ denotes the measurement; H denotes the measurement matrix; u and v denote the horizontal and vertical pixel positions of the target center in the current frame; x denotes the area of the target region; y denotes the aspect ratio of the target region; and $\hat{u}$, $\hat{v}$, $\hat{x}$, $\hat{y}$ denote the predicted horizontal position, vertical position, region area, and aspect ratio of the target in the next frame image.

6. The fingertip tracking method based on deep learning and the K-curvature method according to claim 1, characterized in that step S6 comprises the following steps:

S61, selecting the YCbCr color space to extract the hand skin color of the current frame image:

$$\begin{aligned} Y &= 0.299R + 0.587G + 0.114B\\ Cb &= 0.564\,(B - Y) + 128\\ Cr &= 0.713\,(R - Y) + 128 \end{aligned}$$

where R denotes the red channel, G the green channel, and B the blue channel; Y is the luminance; Cb is the difference between the blue component of the RGB input signal and the luminance value of the RGB signal; Cr is the difference between the red component of the RGB input signal and the luminance value of the RGB signal.

S62, computing the Mahalanobis distance under the Gaussian model:

$$\lambda_{s}(X) = (X - m_{s})^{T} C_{s}^{-1} (X - m_{s})$$

where $m_{s}$ denotes the mean vector of the skin-color single Gaussian model, $C_{s}$ denotes the covariance matrix of the skin-color single Gaussian model, and X is a pixel point.

S63, establishing single Gaussian models for the skin-color and non-skin-color regions respectively, then evaluating the Mahalanobis distance of a pixel under both models to judge whether it is a skin-color point, and segmenting the hand image: a pixel X is judged to be skin when

$$\lambda_{ns}(X) - \lambda_{s}(X) > \tau_{diff},\qquad \lambda_{ns}(X) = (X - m_{ns})^{T} C_{ns}^{-1} (X - m_{ns})$$

where $\tau_{diff}$ is the set threshold, $m_{ns}$ denotes the mean of the non-skin-color single Gaussian model, and $C_{ns}$ is its covariance matrix.

S64, performing binarization and bilateral filtering on the image;

S65, taking a contour point $k_{i}$, the m-th point $k_{i-m}$ before it, and the m-th point $k_{i+m}$ after it, and using the cosine of the angle between the vectors $\vec{v}_{1} = k_{i-m} - k_{i}$ and $\vec{v}_{2} = k_{i+m} - k_{i}$ as the curvature at $k_{i}$:

$$\cos\theta_{i} = \frac{\vec{v}_{1}\cdot\vec{v}_{2}}{\lVert\vec{v}_{1}\rVert\,\lVert\vec{v}_{2}\rVert}$$

S66, detecting the convex contour points with locally maximal K-cosine as the fingertip points, subject to

$$d_{i} > (d_{i-s} + d_{i+s})/2$$

where $d_{i}$ denotes the distance from the center of the palm's maximum inscribed circle to the contour point $k_{i}$.

Technical Field

The invention belongs to the field of target detection and tracking, and particularly relates to a fingertip tracking method based on deep learning and the K-curvature method.

Background

Hand gesture detection and tracking is currently a hot direction in human-computer interaction and computer vision. Its sub-direction, fingertip detection and tracking, is an important component of that technology: detecting and tracking fingertips provides a good foundation for human-computer interaction behaviors such as writing in the air, clicking a virtual screen in the air, gesture recognition, and intelligent teaching.

In fingertip detection and tracking, both the accuracy of detection and the speed and accuracy of tracking matter. Current target detection algorithms divide mainly into traditional algorithms and deep-learning-based algorithms. Traditional algorithms, such as DPM (Deformable Parts Model) and selective search, suffer in practice from high time complexity and from poor robustness and generalization in complex environments, and they struggle to effectively detect and track fingertips that change rapidly or are temporarily occluded.

Deep-learning-based fingertip detection and tracking can markedly increase accuracy and robustness. Mainstream deep-learning detectors divide into two-stage and single-stage algorithms; among single-stage algorithms, the YOLO series balances speed and accuracy well. For tracking, filtering-theoretic methods such as Kalman filtering and particle filtering are widely applied, but in practice a purely filtering-based tracker misses temporarily occluded fingertips at a high rate and leaves much room for improvement in real-time performance.

Disclosure of Invention

Aiming at the problems in the prior art, the technical problem to be solved by the invention is to provide a fingertip tracking method based on deep learning and the K-curvature method that weakens the influence of complex environments on detection accuracy and improves both detection speed and detection accuracy.

In order to solve the above technical problem, the fingertip tracking method based on deep learning and the K-curvature method comprises the following steps:

S1, acquiring a hand data set and preprocessing it;

S2, training the data set with the deep learning neural network model YOLOv3 to obtain a fingertip detection model;

S3, acquiring a video stream, inputting the current frame into the fingertip detection model, performing multi-scale feature extraction on the current frame image with the Darknet53 network, detecting the target category and the position of the detection box, and initializing a Kalman filter with this information;

S4, reading the next frame of the image, obtaining a prediction box with the Kalman filter, computing the IOU between this frame's detection box and the prediction box, setting an IOU threshold, and judging whether the IOU is larger than the IOU threshold; if so, proceeding to step S5, otherwise to step S6;

S5, updating the Kalman filter with the matched prediction box, outputting the state update value as this frame's tracking box, completing tracking for this frame, and returning to step S4;

S6, obtaining the position of the fingertip point with the K-curvature method;

S7, computing the Euclidean distance between the fingertip point and the center of the detection box and setting a distance threshold; when the distance is smaller than the threshold, initializing the Kalman filter to obtain a new prediction box and restarting matching; otherwise, deleting this frame's tracking information and reading the next frame;

S8, setting a time threshold T-max and terminating tracking if no tracking information is detected within T-max frames.

The invention also includes:

1. Step S1 comprises the following steps:

S11, collecting a large number of color images of hands with exposed fingertips under varying scenes, illumination, subject and fingertip angles, and numbers of hands;

S12, expanding the hand color images with data enhancement methods;

S13, marking the region box of the target fingertip, adding label information, and generating a data label file.

2. Step S2 comprises the following steps:

S21, resizing each input picture of the hand data set to 416 × 416 and adding gray letterbox bars to prevent distortion;

S22, downsampling the processed picture 5 times through the Darknet53 feature-extraction network to generate multi-scale feature maps;

S23, performing convolutional detection on the feature maps at the three scales 13 × 13, 26 × 26, and 52 × 52 in a multi-scale fusion manner to obtain 3 detection results, then applying non-maximum suppression to obtain the final result;

S24, generating the trained fingertip detection model.

3. Step S4 comprises the following steps:

S41, reading the next frame of the image and obtaining all predicted fingertip prediction boxes with the Kalman filter;

S42, computing the IOU between this frame's detection box and all prediction boxes;

S43, obtaining the unique match with the largest IOU with the Hungarian algorithm;

S44, setting an IOU threshold and judging whether the IOU is larger than the IOU threshold;

S45, proceeding to step S5 if the IOU is larger than the IOU threshold, otherwise to step S6.

4. Reading the next frame of the image and obtaining all predicted fingertip prediction boxes with the Kalman filter in step S41 specifically comprises:

The Kalman filter used contains seven state variables and four observation inputs:

$$\mathbf{x} = [\,u,\ v,\ x,\ y,\ \dot{u},\ \dot{v},\ \dot{x}\,]^{T},\qquad \mathbf{z} = [\,u,\ v,\ x,\ y\,]^{T}$$

Time update (prediction):

$$\hat{x}_{k}^{-} = A\,\hat{x}_{k-1} + B\,u_{k-1},\qquad P_{k}^{-} = A\,P_{k-1}\,A^{T} + Q$$

Measurement update (outputting the target state):

$$K_{k} = P_{k}^{-}H^{T}\big(H P_{k}^{-} H^{T} + R\big)^{-1},\qquad \hat{x}_{k} = \hat{x}_{k}^{-} + K_{k}\big(Z_{k} - H\hat{x}_{k}^{-}\big),\qquad P_{k} = (I - K_{k}H)\,P_{k}^{-}$$

where $\hat{x}_{k}^{-}$ denotes the a priori state estimate at time k; $\hat{x}_{k}$ and $\hat{x}_{k-1}$ denote the a posteriori state estimates at times k and k−1, respectively; A denotes the state transition matrix; B denotes the gain of the optional control input $u \in R^{l}$, and $u_{k-1}$ the control input at time k−1; $P_{k}^{-}$ denotes the a priori estimate covariance at time k; $P_{k}$ and $P_{k-1}$ denote the a posteriori estimate covariances at times k and k−1, respectively; Q denotes the covariance of the process excitation noise and R the covariance of the measurement noise; $K_{k}$ denotes the Kalman gain; $Z_{k}$ denotes the measurement; H denotes the measurement matrix; u and v denote the horizontal and vertical pixel positions of the target center in the current frame; x denotes the area of the target region; y denotes the aspect ratio of the target region; and $\hat{u}$, $\hat{v}$, $\hat{x}$, $\hat{y}$ denote the predicted horizontal position, vertical position, region area, and aspect ratio of the target in the next frame image.

5. Step S6 comprises the following steps:

S61, selecting the YCbCr color space to extract the hand skin color of the current frame image:

$$\begin{aligned} Y &= 0.299R + 0.587G + 0.114B\\ Cb &= 0.564\,(B - Y) + 128\\ Cr &= 0.713\,(R - Y) + 128 \end{aligned}$$

where R denotes the red channel, G the green channel, and B the blue channel; Y is the luminance; Cb is the difference between the blue component of the RGB input signal and the luminance value of the RGB signal; Cr is the difference between the red component of the RGB input signal and the luminance value of the RGB signal.

S62, computing the Mahalanobis distance under the Gaussian model:

$$\lambda_{s}(X) = (X - m_{s})^{T} C_{s}^{-1} (X - m_{s})$$

where $m_{s}$ denotes the mean vector of the skin-color single Gaussian model, $C_{s}$ denotes the covariance matrix of the skin-color single Gaussian model, and X is a pixel point.

S63, establishing single Gaussian models for the skin-color and non-skin-color regions respectively, then evaluating the Mahalanobis distance of a pixel under both models to judge whether it is a skin-color point, and segmenting the hand image: a pixel X is judged to be skin when

$$\lambda_{ns}(X) - \lambda_{s}(X) > \tau_{diff},\qquad \lambda_{ns}(X) = (X - m_{ns})^{T} C_{ns}^{-1} (X - m_{ns})$$

where $\tau_{diff}$ is the set threshold, $m_{ns}$ denotes the mean of the non-skin-color single Gaussian model, and $C_{ns}$ is its covariance matrix.

S64, performing binarization and bilateral filtering on the image;

S65, taking a contour point $k_{i}$, the m-th point $k_{i-m}$ before it, and the m-th point $k_{i+m}$ after it, and using the cosine of the angle between the vectors $\vec{v}_{1} = k_{i-m} - k_{i}$ and $\vec{v}_{2} = k_{i+m} - k_{i}$ as the curvature at $k_{i}$:

$$\cos\theta_{i} = \frac{\vec{v}_{1}\cdot\vec{v}_{2}}{\lVert\vec{v}_{1}\rVert\,\lVert\vec{v}_{2}\rVert}$$

S66, detecting the convex contour points with locally maximal K-cosine as the fingertip points, subject to

$$d_{i} > (d_{i-s} + d_{i+s})/2$$

where $d_{i}$ denotes the distance from the center of the palm's maximum inscribed circle to the contour point $k_{i}$.

The invention has the following beneficial effects. It can be applied in fields such as: 1. writing in the air; 2. intelligent teaching; 3. gesture detection and tracking; 4. hand-gesture-based human-computer interaction. The detection part trains a fingertip data set with the deep neural network YOLOv3 algorithm to obtain a detection model, which weakens the influence of complex environments on detection accuracy, improves detection speed, and increases accuracy and robustness. The tracking part tracks fingertips with Kalman filtering and the Hungarian algorithm, and corrects untracked fingertips by adding the K-curvature method, which improves the real-time performance and accuracy of tracking and reduces the loss of tracking caused by fingertips moving too fast or being occluded. The proposed fingertip tracking method based on deep learning and the K-curvature method effectively lowers the requirements on camera equipment, improves the accuracy and effectiveness of fingertip tracking, and has good value in practical applications.

Drawings

FIG. 1 is a diagram of the overall network architecture of the present invention;

FIG. 2 is a network architecture diagram of YOLOv 3;

FIG. 3 is an illustration of the IOU with example diagrams of different IOU cases;

FIG. 4 is a schematic view of the K-curvature method.

Detailed Description

The invention is further described with reference to the drawings and the detailed description.

Referring to FIG. 1, the overall network structure of the invention is shown. First, the preprocessed data set is trained with the YOLOv3 network model to obtain a fingertip detection model. Then a video stream is captured by camera and fed into the detection model to detect detection-box information, and a Kalman filter is initialized. Next, a prediction box is obtained from the Kalman filter, the IOU between this frame's detection box and the prediction box is computed, an IOU threshold is set, and the IOU is compared against it: if the IOU is larger than the threshold, the Kalman filter is updated to obtain the fingertip tracking box; otherwise, the fingertip position is corrected with the K-curvature method and the Kalman filter is updated. Finally, a time threshold T-max is set, and tracking terminates if no tracking information is detected within T-max frames.

The invention discloses a fingertip tracking method based on deep learning and the K-curvature method, which comprises the following steps:

S1, acquiring a hand data set and preprocessing it;

The hand data set needs to contain a large number of hand color images under different states and conditions, together with unique label information corresponding to each image;

step S1 includes the following substeps:

S11, collecting a large number of color images of hands with exposed fingertips under varying scenes, illumination, subject and fingertip angles, numbers of hands, and occlusion conditions;

S12, expanding the hand color images by applying rotation, deformation, translation, and added noise, each with 25% probability (see the sketch after step S13);

S13, marking the region box of the target fingertip, adding label information, and generating a data label file.
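As an illustration of step S12, the sketch below applies each enhancement independently with 25% probability. It is a minimal sketch only: the rotation angles, shift amounts, and noise magnitudes are assumed values, not parameters specified by the invention.

```python
import random
import numpy as np
import cv2

def augment(img: np.ndarray) -> np.ndarray:
    """Randomly expand one hand image; each transform fires with probability 0.25."""
    h, w = img.shape[:2]
    if random.random() < 0.25:  # rotation (angle range is an assumed choice)
        M = cv2.getRotationMatrix2D((w / 2, h / 2), random.uniform(-15, 15), 1.0)
        img = cv2.warpAffine(img, M, (w, h))
    if random.random() < 0.25:  # translation by up to 10% of the image size
        M = np.float32([[1, 0, random.uniform(-0.1, 0.1) * w],
                        [0, 1, random.uniform(-0.1, 0.1) * h]])
        img = cv2.warpAffine(img, M, (w, h))
    if random.random() < 0.25:  # mild deformation via horizontal rescale and back
        img = cv2.resize(img, (int(w * random.uniform(0.9, 1.1)), h))
        img = cv2.resize(img, (w, h))
    if random.random() < 0.25:  # additive Gaussian noise
        noise = np.random.normal(0, 8, img.shape).astype(np.float32)
        img = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    return img
```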

S2, training the data set with the deep learning neural network model YOLOv3 to obtain a fingertip detection model;

In target detection and tracking, the accuracy and speed of detection are crucial, and the robustness and accuracy of the fingertip detection model largely determine the accuracy of the subsequent tracking stage. YOLOv3 improves on the previous two versions, YOLOv1 and YOLOv2: its accuracy is comparable to two-stage algorithms such as R-CNN while its detection speed is greatly higher. The network structure of YOLOv3 is shown in FIG. 2;

step S2 includes the following substeps:

S21, resizing each input picture of the hand data set to 416 × 416 and adding gray letterbox bars to prevent distortion (see the letterbox sketch after step S24);

S22, downsampling the processed picture 5 times through the Darknet53 feature-extraction network to generate multi-scale feature maps; the Darknet53 network framework parameters are shown in Table 1:

TABLE 1 Darknet53 network framework parameters

S23, performing convolutional detection on the feature maps at the three scales 13 × 13, 26 × 26, and 52 × 52 in a multi-scale fusion manner to obtain 3 detection results, then applying non-maximum suppression to obtain the final result;

S24, generating the trained fingertip detection model.
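The following is a minimal sketch of the letterboxing in step S21: the picture is scaled while keeping its aspect ratio, then padded to 416 × 416 with gray bars. The gray value 128 is an assumption; the patent only states that gray bars are added.

```python
import numpy as np
import cv2

def letterbox(img: np.ndarray, size: int = 416) -> np.ndarray:
    """Resize while keeping aspect ratio, padding with gray bars to avoid distortion."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    resized = cv2.resize(img, (nw, nh))
    canvas = np.full((size, size, 3), 128, dtype=np.uint8)  # gray padding (assumed value)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas
```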

S3, acquiring a real-time video stream with the camera, inputting the current frame into the fingertip detection model, performing multi-scale feature extraction on the current frame image with the Darknet53 network, detecting the target category and the position of the detection box, and initializing a Kalman filter with this information;

S4, obtaining a prediction box with the Kalman filter, computing the IOU between this frame's detection box and the prediction box, setting an IOU threshold, and judging whether the IOU is larger than the threshold; if so, proceeding to step S5, otherwise to step S6;

When the IOU between a detection box of the current frame and the prediction boxes of all existing targets is smaller than the set threshold, two situations are considered: either the detection model failed to detect the fingertip, or a new fingertip appeared or a false detection occurred. Judging by the IOU alone therefore risks false and missed detections, so the K-curvature method is added to supervise and correct the fingertip detection box, effectively improving tracking accuracy.

FIG. 3 is a schematic diagram of the IOU. The intersection-over-union (IOU) between each detection box and all fingertip prediction boxes of this frame drives the matching, and the IOU threshold determines tracking accuracy; in this embodiment the IOU threshold is set to 0.7 (a minimal IOU sketch follows);
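A minimal IOU sketch, assuming boxes are given in corner form [x1, y1, x2, y2] (the coordinate convention is not fixed by the patent):

```python
def iou(a, b):
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)  # small epsilon avoids division by zero

IOU_THRESHOLD = 0.7  # value used in this embodiment
```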

step S4 includes the following substeps:

S41, reading the next frame of the image and obtaining all predicted fingertip prediction boxes with the Kalman filter;

The Kalman filter used contains seven state variables and four observation inputs:

$$\mathbf{x} = [\,u,\ v,\ x,\ y,\ \dot{u},\ \dot{v},\ \dot{x}\,]^{T},\qquad \mathbf{z} = [\,u,\ v,\ x,\ y\,]^{T}$$

Time update (prediction):

$$\hat{x}_{k}^{-} = A\,\hat{x}_{k-1} + B\,u_{k-1},\qquad P_{k}^{-} = A\,P_{k-1}\,A^{T} + Q$$

Measurement update (outputting the target state):

$$K_{k} = P_{k}^{-}H^{T}\big(H P_{k}^{-} H^{T} + R\big)^{-1},\qquad \hat{x}_{k} = \hat{x}_{k}^{-} + K_{k}\big(Z_{k} - H\hat{x}_{k}^{-}\big),\qquad P_{k} = (I - K_{k}H)\,P_{k}^{-}$$

where $\hat{x}_{k}^{-}$ denotes the a priori state estimate at time k; $\hat{x}_{k}$ and $\hat{x}_{k-1}$ denote the a posteriori state estimates at times k and k−1, respectively; A denotes the state transition matrix; B denotes the gain of the optional control input $u \in R^{l}$, and $u_{k-1}$ the control input at time k−1; $P_{k}^{-}$ denotes the a priori estimate covariance at time k; $P_{k}$ and $P_{k-1}$ denote the a posteriori estimate covariances at times k and k−1, respectively; Q denotes the covariance of the process excitation noise and R the covariance of the measurement noise; $K_{k}$ denotes the Kalman gain; $Z_{k}$ denotes the measurement; H denotes the measurement matrix; u and v denote the horizontal and vertical pixel positions of the target center in the current frame; x denotes the area of the target region; y denotes the aspect ratio of the target region; and $\hat{u}$, $\hat{v}$, $\hat{x}$, $\hat{y}$ denote the predicted horizontal position, vertical position, region area, and aspect ratio of the target in the next frame image.
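A numpy sketch of this filter under a constant-velocity reading of the seven-state model above; the process and measurement noise magnitudes are assumed values, not parameters from the patent:

```python
import numpy as np

class FingertipKalman:
    """Constant-velocity Kalman filter over the state [u, v, x, y, du, dv, dx]."""
    def __init__(self, z0: np.ndarray):
        self.A = np.eye(7)                   # state transition: positions += velocities
        self.A[0, 4] = self.A[1, 5] = self.A[2, 6] = 1.0
        self.H = np.eye(4, 7)                # measurement matrix: observe u, v, x, y
        self.Q = np.eye(7) * 1e-2            # process excitation noise covariance (assumed)
        self.R = np.eye(4) * 1e-1            # measurement noise covariance (assumed)
        self.P = np.eye(7)
        self.P[4:, 4:] *= 1000.0             # large initial velocity uncertainty (step S7)
        self.x = np.zeros(7)
        self.x[:4] = z0                      # initialize from the detection box, velocity 0

    def predict(self) -> np.ndarray:
        self.x = self.A @ self.x             # a priori state (no control input here)
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x[:4]                    # predicted box [u, v, area, aspect]

    def update(self, z: np.ndarray) -> None:
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ (z - self.H @ self.x)   # a posteriori state
        self.P = (np.eye(7) - K @ self.H) @ self.P
```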

S42, computing the IOU between this frame's detection box and all prediction boxes;

S43, obtaining the unique match with the largest IOU with the Hungarian algorithm (a matching sketch follows after step S45);

S44, setting an IOU threshold and judging whether the IOU is larger than the IOU threshold;

S45, proceeding to step S5 if the IOU is larger than the IOU threshold, otherwise to step S6.

S5, updating the Kalman filter with the matched prediction box, outputting the state update value as this frame's tracking box, completing tracking for this frame, and returning to step S4;

S6, obtaining the position of the fingertip point with the K-curvature method;

For segmenting the skin-color region, the YCbCr color space clusters skin color well and the conversion from RGB to YCbCr is easy to realize, so YCbCr is selected to extract the gesture region. The resulting binarized gesture image carries a large amount of noise, so bilateral filtering is selected for smoothing. Because the hand contour has pronounced height differences, fingertips can be detected by high curvature values, and the K-curvature method effectively measures the angle at curve points, so it is selected to detect the fingertip points. FIG. 4 is a schematic diagram of the K-curvature method;

step S6 includes the following substeps:

S61, selecting the YCbCr color space to extract the hand skin color of the current frame image:

$$\begin{aligned} Y &= 0.299R + 0.587G + 0.114B\\ Cb &= 0.564\,(B - Y) + 128\\ Cr &= 0.713\,(R - Y) + 128 \end{aligned}$$

where R denotes the red channel, G the green channel, and B the blue channel; Y is the luminance; Cb is the difference between the blue component of the RGB input signal and the luminance value of the RGB signal, taken in the range 77 < Cb < 127; Cr is the difference between the red component of the RGB input signal and the luminance value of the RGB signal, taken in the range 133 < Cr < 173. A segmentation sketch follows.
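A minimal OpenCV sketch of step S61 using the stated Cb/Cr ranges; note that OpenCV orders the channels as Y, Cr, Cb, and the bilateral-filter parameters are assumed values:

```python
import cv2
import numpy as np

def skin_mask(bgr: np.ndarray) -> np.ndarray:
    """Threshold the Cb/Cr channels to 77 < Cb < 127 and 133 < Cr < 173."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)   # OpenCV channel order: Y, Cr, Cb
    lower = np.array([0, 133, 77], dtype=np.uint8)   # Y ignored; Cr >= 133, Cb >= 77
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)          # binarized gesture mask
    return cv2.bilateralFilter(mask, 9, 75, 75)      # smoothing as in step S64
```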

S62, computing the Mahalanobis distance under the Gaussian model:

$$\lambda_{s}(X) = (X - m_{s})^{T} C_{s}^{-1} (X - m_{s})$$

where $m_{s}$ denotes the mean vector of the skin-color single Gaussian model, $C_{s}$ denotes the covariance matrix of the skin-color single Gaussian model, and X is a pixel point.

S63, establishing single Gaussian models for the skin-color and non-skin-color regions respectively, then evaluating the Mahalanobis distance of a pixel under both the skin-color and non-skin-color models to judge whether it is a skin-color point, and segmenting the hand image: a pixel X is judged to be skin when

$$\lambda_{ns}(X) - \lambda_{s}(X) > \tau_{diff},\qquad \lambda_{ns}(X) = (X - m_{ns})^{T} C_{ns}^{-1} (X - m_{ns})$$

where $\tau_{diff}$ is the set threshold, $m_{ns}$ denotes the mean of the non-skin-color single Gaussian model, and $C_{ns}$ is its covariance matrix. A classification sketch follows.
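A sketch of steps S62 and S63, assuming the means and covariances of the skin and non-skin single Gaussian models have been fitted offline from labeled pixels (the function names are illustrative):

```python
import numpy as np

def mahalanobis(x: np.ndarray, mean: np.ndarray, cov: np.ndarray) -> float:
    """lambda(X) = (X - m)^T C^{-1} (X - m)."""
    d = x - mean
    return float(d @ np.linalg.inv(cov) @ d)

def is_skin_pixel(x, m_s, C_s, m_ns, C_ns, tau_diff):
    """Skin if the pixel is closer (in the Mahalanobis sense) to the skin model
    than to the non-skin model by more than tau_diff."""
    return mahalanobis(x, m_ns, C_ns) - mahalanobis(x, m_s, C_s) > tau_diff
```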

S64, carrying out binarization processing and bilateral filtering processing on the image;

S65, taking a contour point $k_{i}$, the m-th point $k_{i-m}$ before it, and the m-th point $k_{i+m}$ after it, and using the cosine of the angle between the vectors $\vec{v}_{1} = k_{i-m} - k_{i}$ and $\vec{v}_{2} = k_{i+m} - k_{i}$ as the curvature at $k_{i}$:

$$\cos\theta_{i} = \frac{\vec{v}_{1}\cdot\vec{v}_{2}}{\lVert\vec{v}_{1}\rVert\,\lVert\vec{v}_{2}\rVert}$$

where the value of m determines the accuracy of the fingertip-point calculation; in this embodiment, m = 5.

S66, detecting the convex contour points with locally maximal K-cosine as the fingertip points, subject to

$$d_{i} > (d_{i-s} + d_{i+s})/2$$

where $d_{i}$ denotes the distance from the center of the palm's maximum inscribed circle to the contour point $k_{i}$. A detection sketch follows.
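A sketch of steps S65 and S66 over a hand contour; m = 5 follows this embodiment, while the local-maximum test, the offset s, and the cosine cutoff are assumed details:

```python
import numpy as np

def k_cosine(contour: np.ndarray, i: int, m: int = 5) -> float:
    """Cosine of the angle at contour[i] spanned by the m-th points before/after it."""
    n = len(contour)
    v1 = contour[(i - m) % n] - contour[i]
    v2 = contour[(i + m) % n] - contour[i]
    denom = np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9
    return float(np.dot(v1, v2) / denom)

def fingertip_candidates(contour, palm_center, m=5, s=5, cos_min=0.5):
    """Convex points with locally maximal K-cosine that also stick out from the palm,
    i.e. satisfy d_i > (d_{i-s} + d_{i+s}) / 2."""
    contour = np.asarray(contour, dtype=np.int64).reshape(-1, 2)  # e.g. an OpenCV contour
    n = len(contour)
    d = np.linalg.norm(contour - np.asarray(palm_center), axis=1)  # distance to palm center
    tips = []
    for i in range(n):
        c = k_cosine(contour, i, m)
        if c < cos_min:                       # angle too wide to be a fingertip
            continue
        if c >= k_cosine(contour, (i - 1) % n, m) and \
           c >= k_cosine(contour, (i + 1) % n, m) and \
           d[i] > (d[(i - s) % n] + d[(i + s) % n]) / 2:
            tips.append(tuple(contour[i]))
    return tips
```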

S7, computing the Euclidean distance between each fingertip point and the center of the detection box and setting a distance threshold; when the distance is smaller than the threshold, initializing the Kalman filter to obtain a new prediction box and restarting matching; otherwise, deleting this frame's tracking information and reading the next frame;

for initializing the position information of a new target using the detection frame information, the velocity is set to 0, and since the velocity cannot be observed at this time, the covariance of the velocity component is set to a large initial value, reflecting uncertainty. The new tracking target needs to be associated with the detection result for a preset time to accumulate the confidence of the new target, so that the false creation of the new tracking target caused by the false alarm of target detection can be effectively prevented.

S8, setting a time threshold T-max and terminating tracking if no tracking information is detected within T-max frames;

If the predicted position of an existing fingertip fails to match any detection box by IOU for T-max consecutive frames, the fingertip is considered to have disappeared and its track is terminated. This prevents the number of trackers from growing without bound and avoids positioning errors caused by long-term prediction. In this embodiment, T-max is set to 1 (a track-lifecycle sketch follows).
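A sketch of the track lifecycle implied by steps S7 and S8; T-max = 1 follows this embodiment, while `min_hits` (the confirmation count) and the attribute names are assumptions:

```python
class Track:
    """Bookkeeping around one FingertipKalman instance (see the filter sketch above)."""
    def __init__(self, kf, min_hits: int = 3, t_max: int = 1):
        self.kf = kf
        self.hits = 0                 # consecutive frames with a matched detection
        self.misses = 0               # consecutive frames without a match
        self.min_hits = min_hits      # associations needed before the track is trusted (assumed)
        self.t_max = t_max            # T-max; set to 1 in this embodiment

    def mark_matched(self, z):
        self.kf.update(z)             # measurement update with the matched detection box
        self.hits += 1
        self.misses = 0

    def mark_missed(self):
        self.hits = 0
        self.misses += 1

    @property
    def confirmed(self) -> bool:      # enough associations to rule out a false alarm
        return self.hits >= self.min_hits

    @property
    def dead(self) -> bool:           # unmatched for more than T-max frames: terminate
        return self.misses > self.t_max
```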

The above embodiments describe the best mode for carrying out the invention, but embodiments of the invention are not limited to them; any other substitution, simplification, change, or combination made without departing from the principle of the invention is included in the protection scope of the invention.
