Method, device and terminal equipment for performing stereo measurement on video picture

Document No.: 1832824    Publication date: 2021-11-12

Note: This technology, "Method, device and terminal equipment for performing stereo measurement on a video picture", was designed and created by 郑佳栋, 劳健斌, 邵铭, 王新波, 张蕾, 罗平, 李苗苗, 张静 and 周先明 on 2021-07-21. Its main content is as follows: the embodiments of the disclosure disclose a method, a device and a terminal device for performing stereo measurement on a video picture. One embodiment of the method comprises: acquiring a target image to be processed, wherein the target image is a video picture of a camera; acquiring a target pixel point coordinate set determined by a user, wherein the target pixel point coordinate set comprises a target bottom contour pixel point coordinate set, a target vertex coordinate and a target bottom center point coordinate; determining a physical height based on the target vertex coordinates and the target bottom center point coordinates; determining a target cross-sectional area based on the target bottom contour pixel point coordinate set; determining the product of the physical height and the target cross-sectional area as a target volume; and sending the target volume to a target terminal device for signal output. The method measures the volume of the object corresponding to the target pixel point coordinate set directly on the video picture of the camera, which avoids spatial correction processing, simplifies the processing flow and improves the accuracy of the signal output.

1. A method of performing stereo measurements on a video frame, comprising:

acquiring a target image to be processed, wherein the target image is a video picture of a camera;

acquiring a target pixel point coordinate set determined by a user, wherein the target pixel point coordinate set comprises a target bottom outline pixel point coordinate set, a target vertex coordinate and a target bottom center point coordinate;

determining a physical height based on the target vertex coordinates and the target bottom center point coordinates;

determining a target sectional area based on the target bottom contour pixel point coordinate set;

determining a product of the physical height and the target cross-sectional area as a target volume;

and sending the target volume to a target terminal device for signal output.

2. The method of claim 1, wherein the target pixel point coordinate set is used for representing a position of a target object in the target image, and a target pixel point coordinate in the target pixel point coordinate set is a three-dimensional coordinate.

3. The method of claim 2, wherein the determining a physical height based on the target vertex coordinates and the target bottom center point coordinates comprises:

determining a physical distance according to the coordinates of the target bottom center point and the coordinates of the predetermined camera bottom center;

determining a view angle parameter according to the physical distance and a predetermined camera height;

and determining the physical height according to the view angle parameter, the target vertex coordinate and the target bottom center point coordinate.

4. The method of claim 3, wherein the determining a target cross-sectional area based on the target bottom contour pixel point coordinate set comprises:

for each target bottom contour pixel point in the target bottom contour pixel point coordinate set, determining a three-dimensional intersection point coordinate of the target bottom contour pixel point according to the target bottom contour pixel point and a predetermined earth surface parameter set to obtain a three-dimensional intersection point coordinate set, wherein the earth surface parameter set comprises an earth spherical center coordinate and an earth radius;

for each three-dimensional intersection point coordinate in the three-dimensional intersection point coordinate set, converting the three-dimensional intersection point coordinate into a geographical three-dimensional intersection point coordinate to obtain a geographical three-dimensional intersection point coordinate set;

and determining the target sectional area based on the geographical three-dimensional intersection point coordinate set.

5. The method of claim 4, wherein the converting the three-dimensional intersection coordinate into a geographic three-dimensional intersection coordinate comprises:

generating the geographic three-dimensional intersection coordinates based on the three-dimensional intersection coordinates using the following equation:

wherein (u, v, w) is the three-dimensional intersection coordinate, (X, Y, Z) is the geographic three-dimensional intersection coordinate, s is a predetermined camera depth value, R and t are a predetermined rotation matrix and a predetermined translation matrix of the camera, respectively, and M is a predetermined camera parameter matrix.

6. The method according to any one of claims 1 to 5, wherein the shooting angle of the target image comprises one of the following: a horizontal shooting angle, an upward shooting angle and a downward shooting angle.

7. An apparatus for performing stereo measurements on video frames, comprising:

a first acquisition unit configured to acquire a target image to be processed, wherein the target image is a camera video picture;

a second acquisition unit configured to acquire a target pixel point coordinate set determined by a user, wherein the target pixel point coordinate set comprises a target bottom contour pixel point coordinate set, a target vertex coordinate and a target bottom center point coordinate;

a first determination unit configured to determine a physical height based on the target vertex coordinates and the target bottom center point coordinates;

a second determination unit configured to determine a target cross-sectional area based on the target bottom contour pixel point coordinate set;

a third determination unit configured to determine a product of the physical height and the target sectional area as a target volume;

an output unit configured to transmit the target volume to a target terminal device for signal output.

8. A terminal device, comprising:

one or more processors;

a storage device having one or more programs stored thereon;

wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.

9. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.

Technical Field

The embodiment of the disclosure relates to the technical field of computers, in particular to a method, a device and a terminal device for performing stereo measurement on a video picture.

Background

Spatial measurement and calculation refers to the measurement and analysis of basic parameters of various spatial targets in a geographic information system, such as their position, distance, perimeter, area, volume, curvature, spatial form and spatial distribution. Spatial measurement and calculation is a basic means of acquiring geospatial information in a geographic information system, and the acquired basic spatial parameters are the basis for performing complex spatial analysis, simulation and decision making. In the prior art, distance and area measurement operations are performed only in a two-dimensional map model, and height measurement operations are performed only in a three-dimensional map model; there is no operation for directly measuring a height in a real-time video, nor for directly measuring and calculating a target volume in the real-time video.

However, when acquiring the volume of a spatial target in a video, the following technical problem often arises: in the prior art, the height of a target cannot be measured on the video. When a height line is drawn on the video, the calculation can only be performed according to a distance, and factors such as spatial projection are not considered. Because a video picture has spatial stereo and picture distortion properties, a height is not the same as a length, even though a height is expressed by a length value, so the finally generated target volume usually has a large error.

Disclosure of Invention

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.

Some embodiments of the present disclosure propose a method, an apparatus, and a terminal device for performing stereo measurement on a video frame to solve one or more of the technical problems mentioned in the above background.

In a first aspect, some embodiments of the present disclosure provide a method for performing stereo measurements on a video frame, the method comprising: acquiring a target image to be processed, wherein the target image is a video picture of a camera; acquiring a target pixel point coordinate set determined by a user, wherein the target pixel point coordinate set comprises a target bottom contour pixel point coordinate set, a target vertex coordinate and a target bottom center point coordinate; determining a physical height based on the target vertex coordinates and the target bottom center point coordinates; determining a target cross-sectional area based on the target bottom contour pixel point coordinate set; determining the product of the physical height and the target cross-sectional area as a target volume; and sending the target volume to a target terminal device for signal output.

In a second aspect, some embodiments of the present disclosure provide an apparatus for performing stereo measurements on video frames, the apparatus comprising: a first acquisition unit configured to acquire a target image to be processed, wherein the target image is a camera video picture; a second acquisition unit configured to acquire a target pixel point coordinate set determined by a user, wherein the target pixel point coordinate set comprises a target bottom contour pixel point coordinate set, a target vertex coordinate and a target bottom center point coordinate; a first determination unit configured to determine a physical height based on the target vertex coordinates and the target bottom center point coordinates; a second determination unit configured to determine a target cross-sectional area based on the target bottom contour pixel point coordinate set; a third determination unit configured to determine a product of the physical height and the target cross-sectional area as a target volume; and an output unit configured to send the target volume to a target terminal device for signal output.

In a third aspect, some embodiments of the present disclosure provide a terminal device, including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as in any one of the first aspects.

In a fourth aspect, some embodiments of the disclosure provide a computer readable storage medium having a computer program stored thereon, wherein the program when executed by a processor implements the method as in any one of the first aspect.

The above embodiments of the present disclosure have the following advantages: the method for performing stereo measurement on a video picture measures the volume of the object corresponding to the target pixel point coordinate set directly on the video picture of the camera, which avoids spatial correction processing, simplifies the processing flow and improves the accuracy of the signal output. Specifically, the inventors found that the reason for the poor accuracy of current object stereo measurement is as follows: in the prior art, the height of a target cannot be measured on the video; when a height line is drawn on the video, the calculation can only be performed according to a distance, and factors such as spatial projection are not considered. Because a video picture has spatial stereo and picture distortion properties, a height is not the same as a length, even though a height is expressed by a length value, so the finally generated target volume usually has a large error. Based on this, first, some embodiments of the present disclosure acquire a target image to be processed, where the target image is a camera video picture. Specifically, the target image is a picture captured directly from the video. Second, a target pixel point coordinate set determined by a user is acquired. The target pixel point coordinate set comprises a target bottom contour pixel point coordinate set, a target vertex coordinate and a target bottom center point coordinate. Specifically, the target pixel point coordinate set corresponds to the target object in the video picture. Third, the physical height is determined based on the target vertex coordinates and the target bottom center point coordinates. Then, the target cross-sectional area is determined based on the target bottom contour pixel point coordinate set. Finally, the product of the physical height and the target cross-sectional area is determined as the target volume, and the target volume is sent to the target terminal device for signal output. The method can distinguish the height value in the spatial coordinate system from the area in the planar coordinate system within the video region, so as to obtain the volume value of a regular target object. It avoids the correction problem caused by calculating directly with the physical distance between objects, improves the accuracy of the volume value, raises the level of stereo measurement in video, and improves the user experience.

Drawings

The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.

FIG. 1 is an architectural diagram of an exemplary system in which some embodiments of the present disclosure may be applied;

FIG. 2 is a flow diagram of some embodiments of a method of making stereo measurements on video frames according to the present disclosure;

FIG. 3 is a schematic structural diagram of some embodiments of an apparatus for making stereo measurements on video frames according to the present disclosure;

fig. 4 is a schematic block diagram of a terminal device suitable for use in implementing some embodiments of the present disclosure.

Detailed Description

Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.

It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.

It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.

It should be noted that the modifiers "a", "an" and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.

The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.

Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the method of stereo measurement on video frames of the present disclosure may be applied.

As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.

The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as a data processing application, an information generation application, an object ranging application, and the like.

The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various terminal devices having a display screen, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the above-listed terminal devices, and may be implemented as a plurality of pieces of software or software modules (for example, for providing a target image to be processed) or as a single piece of software or software module. No specific limitation is made here.

The server 105 may be a server that provides various services, such as a server that stores target images input by the terminal apparatuses 101, 102, 103, and the like. The server may process the received target image and feed back the processing result (e.g., target volume) to the terminal device.

It should be noted that the method for performing stereo measurement on a video picture provided in the embodiments of the present disclosure may be executed by the server 105, or may be executed by the terminal devices.

It should be noted that the server 105 may also store the target image to be processed locally and extract it directly for processing to obtain the target volume; in this case, the exemplary system architecture 100 may not include the terminal devices 101, 102, 103 and the network 104.

It should be noted that the terminal devices 101, 102, 103 may also have applications installed for performing stereo measurement on video pictures, in which case the processing method may also be executed by the terminal devices 101, 102, 103. In this case, the exemplary system architecture 100 may also not include the server 105 and the network 104.

The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules (for example, for providing a stereo measurement service on a video frame), or may be implemented as a single software or software module. And is not particularly limited herein.

It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.

With continued reference to fig. 2, a flow 200 of some embodiments of a method of making stereo measurements on video frames in accordance with the present disclosure is shown. The method for carrying out stereo measurement on the video picture comprises the following steps:

Step 201, a target image to be processed is acquired.

In some embodiments, the execution subject of the method of performing stereo measurements on video frames (e.g., the server shown in fig. 1) acquires the target image to be processed. Wherein the target image is a video picture of a camera. Specifically, the target image may be a frame image captured in real time from a video played by the camera.
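As an illustration only, this acquisition step could be realized as in the following minimal sketch, assuming the camera exposes its video over RTSP and that OpenCV is available; the stream URL and the helper name are hypothetical and not part of the disclosure.

import cv2

def grab_target_image(stream_url: str = "rtsp://camera.example/stream"):
    # Hypothetical helper: open the camera video stream and capture one frame
    # to serve as the target image to be processed.
    cap = cv2.VideoCapture(stream_url)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("failed to capture a frame from the video stream")
    return frame  # H x W x 3 BGR image as a numpy array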

Step 202, obtaining a target pixel point coordinate set determined by a user.

In some embodiments, the execution subject obtains a set of coordinates of a target pixel point determined by a user. The target pixel point coordinate set comprises a target bottom contour pixel point coordinate set, a target vertex coordinate and a target bottom center point coordinate. The target pixel point coordinate set is used for representing the position of a target object in a target image, and the target pixel point coordinates in the target pixel point coordinate set are three-dimensional coordinates. Optionally, the shooting angle of the target image includes one of the following: horizontal shooting angle, upward shooting angle and downward shooting angle.
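For illustration, the user-determined coordinate set described above might be held in a structure like the following sketch; the field names are assumptions chosen for readability, not identifiers from the disclosure.

from dataclasses import dataclass
from typing import List, Tuple

Coord3D = Tuple[float, float, float]  # a three-dimensional coordinate

@dataclass
class TargetPixelCoordinateSet:
    bottom_contour: List[Coord3D]  # target bottom contour pixel point coordinates
    vertex: Coord3D                # target vertex coordinate (top of the object)
    bottom_center: Coord3D         # target bottom center point coordinate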

Step 203, determining the physical height based on the target vertex coordinates and the target bottom center point coordinates.

In some embodiments, the execution subject determines the physical height based on the target vertex coordinates and the target bottom center point coordinates.

Optionally, the physical distance is first determined according to the target bottom center point coordinate and a predetermined camera bottom center coordinate. Specifically, the geographic distance between the two coordinate points on the earth may be used as the physical distance between the target bottom center coordinate and the predetermined camera bottom center coordinate. A view angle parameter is then determined according to the physical distance and a predetermined camera height. Finally, the physical height is determined according to the view angle parameter, the target vertex coordinate and the target bottom center point coordinate. Specifically, the pitch angle between the camera and the top of the target object can be calculated from the target vertex coordinate and the vertical view angle parameter of the camera lens; using trigonometric principles, the physical height may then be derived from the view angle parameter, the target vertex coordinate and the target bottom center point coordinate.
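A minimal sketch of one such trigonometric computation is given below. It assumes the camera height, the ground distance to the target bottom center, the camera pitch and the vertical field of view are known, and that the object's top lies roughly above its base; the formula shown is one plausible instance, not necessarily the exact computation of the disclosure.

import math

def physical_height(camera_height_m: float,
                    ground_distance_m: float,
                    vertex_row_px: float,
                    image_height_px: int,
                    vertical_fov_deg: float,
                    camera_pitch_deg: float) -> float:
    # Angular offset of the target vertex from the optical axis
    # (positive when the vertex lies below the image center).
    deg_per_px = vertical_fov_deg / image_height_px
    vertex_offset_deg = (vertex_row_px - image_height_px / 2.0) * deg_per_px
    # Depression angle from the camera towards the object's top.
    depression_top = math.radians(camera_pitch_deg + vertex_offset_deg)
    # By trigonometry, the top sits (camera height - d * tan(depression)) above the ground.
    return camera_height_m - ground_distance_m * math.tan(depression_top)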

Step 204, determining the target cross-sectional area based on the target bottom contour pixel point coordinate set.

In some embodiments, the execution subject determines the target cross-sectional area based on the target bottom contour pixel point coordinate set.

Optionally, for each target bottom contour pixel point in the target bottom contour pixel point coordinate set, a three-dimensional intersection point coordinate of the target bottom contour pixel point is determined according to the target bottom contour pixel point and a predetermined earth surface parameter set, so as to obtain a three-dimensional intersection point coordinate set. The earth surface parameter set includes the earth sphere center coordinate and the earth radius.
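As an illustrative sketch, the intersection of a viewing ray with the earth surface modelled as a sphere can be computed as below; the camera position and the per-pixel viewing direction are assumed to already be available from the camera's calibration, which the disclosure does not spell out here.

import numpy as np

def ray_earth_intersection(ray_origin: np.ndarray,   # camera position, shape (3,)
                           ray_dir: np.ndarray,      # unit viewing direction, shape (3,)
                           earth_center: np.ndarray, # earth sphere center coordinate, shape (3,)
                           earth_radius: float) -> np.ndarray:
    # Solve |o + t*d - c|^2 = R^2 for the nearest non-negative t (d is a unit vector).
    oc = ray_origin - earth_center
    b = 2.0 * float(np.dot(ray_dir, oc))
    c = float(np.dot(oc, oc)) - earth_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        raise ValueError("the viewing ray does not intersect the earth surface")
    t = (-b - np.sqrt(disc)) / 2.0
    if t < 0:
        raise ValueError("the intersection lies behind the camera")
    return ray_origin + t * ray_dir  # three-dimensional intersection point coordinate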

Then, for each three-dimensional intersection coordinate in the three-dimensional intersection coordinate set, the three-dimensional intersection coordinate is converted into a geographic three-dimensional intersection coordinate to obtain a geographic three-dimensional intersection coordinate set. Optionally, for each three-dimensional intersection coordinate, the geographic three-dimensional intersection coordinate is generated based on the three-dimensional intersection coordinate by using the following formula:

where (u, v, w) is the three-dimensional intersection coordinate, (X, Y, Z) is the geographic three-dimensional intersection coordinate, s is a predetermined camera depth value, R and t are a predetermined rotation matrix and a predetermined translation matrix of the camera, respectively, and M is a predetermined camera parameter matrix.
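The formula itself is referenced above but not reproduced in this text. A plausible reconstruction consistent with these variable definitions, assuming the standard pinhole projection model, is:

s \begin{pmatrix} u \\ v \\ w \end{pmatrix} = M \left[\, R \;\; t \,\right] \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
\qquad
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = R^{-1} \left( s\, M^{-1} \begin{pmatrix} u \\ v \\ w \end{pmatrix} - t \right),

that is, the geographic coordinate is recovered by inverting the projection; this reconstruction is an assumption, not the published formula.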

Finally, the target cross-sectional area is determined based on the geographic three-dimensional intersection coordinate set. Specifically, the target cross-sectional area may be determined from the geographic three-dimensional intersection coordinate set by using a polygon area solving method, and may be the polygonal area of the target object.
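One common way to realize the polygon area solving method mentioned above is a cross-product (shoelace-style) formula over the ordered vertices, sketched below under the assumption that the geographic intersection points are approximately coplanar; this is an illustration, not necessarily the exact method of the disclosure.

import numpy as np

def polygon_area_3d(vertices: np.ndarray) -> float:
    # vertices: (N, 3) array of geographic three-dimensional intersection
    # coordinates listed in order around the target bottom contour.
    v0 = vertices[0]
    total = np.zeros(3)
    # Fan triangulation around the first vertex; each cross product contributes
    # twice the signed area vector of one triangle.
    for a, b in zip(vertices[1:-1], vertices[2:]):
        total += np.cross(a - v0, b - v0)
    return 0.5 * float(np.linalg.norm(total))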

Step 205, determining the product of the physical height and the target cross-sectional area as the target volume.

Optionally, the execution subject determines the product of the physical height and the target cross-sectional area as the target volume.

Step 206, the target volume is sent to the target terminal device for signal output.

Optionally, the execution subject sends the target volume to a target terminal device for signal output. The target terminal device may be a computer or a mobile phone. The signal output may be that the target terminal device displays the target volume value.

One embodiment presented in fig. 2 has the following beneficial effects: acquiring a target image to be processed, wherein the target image is a video picture of a camera; acquiring a target pixel point coordinate set determined by a user, wherein the target pixel point coordinate set comprises a target bottom outline pixel point coordinate set, a target vertex coordinate and a target bottom center point coordinate; determining a physical height based on the target vertex coordinates and the target bottom center point coordinates; determining a target sectional area based on the coordinate set of the target bottom contour pixel points; determining the product of the physical height and the target sectional area as a target volume; the target volume is sent to the target terminal device for signal output. The method directly measures the volume of the object corresponding to the coordinate set of the target pixel point in the video picture of the camera, avoids spatial correction processing, simplifies the processing flow and improves the accuracy of signal output.

With further reference to fig. 3, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of an apparatus for performing stereo measurement on a video picture. These apparatus embodiments correspond to the method embodiments described above with reference to fig. 2, and the apparatus may be applied to various terminal devices.

As shown in fig. 3, the apparatus 300 for performing stereo measurement on a video picture according to some embodiments includes: a first acquisition unit 301, a second acquisition unit 302, a first determination unit 303, a second determination unit 304, a third determination unit 305, and an output unit 306. The first acquisition unit 301 is configured to acquire a target image to be processed, where the target image is a camera video picture. The second acquisition unit 302 is configured to acquire a target pixel point coordinate set determined by a user, where the target pixel point coordinate set comprises a target bottom contour pixel point coordinate set, a target vertex coordinate and a target bottom center point coordinate. The first determination unit 303 is configured to determine the physical height based on the target vertex coordinates and the target bottom center point coordinates. The second determination unit 304 is configured to determine the target cross-sectional area based on the target bottom contour pixel point coordinate set. The third determination unit 305 is configured to determine the product of the physical height and the target cross-sectional area as the target volume. The output unit 306 is configured to send the target volume to a target terminal device for signal output.

It will be understood that the units described in the apparatus 300 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 300 and the units included therein, and are not described herein again.

Referring now to FIG. 4, shown is a block diagram of a computer system 400 suitable for use in implementing a terminal device of an embodiment of the present disclosure. The terminal device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.

As shown in fig. 4, the computer system 400 includes a Central Processing Unit (CPU) 401 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 402 or a program loaded from a storage section 406 into a Random Access Memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the system 400. The CPU 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An Input/Output (I/O) interface 405 is also connected to the bus 404.

The following components are connected to the I/O interface 405: a storage section 406 including a hard disk and the like; and a communication section 407 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 407 performs communication processing via a network such as the internet. A drive 408 is also connected to the I/O interface 405 as needed. A removable medium 409 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted as necessary on the drive 408, so that a computer program read out therefrom is mounted as necessary in the storage section 406.

In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 407 and/or installed from the removable medium 409. The above-described functions defined in the method of the present disclosure are performed when the computer program is executed by a Central Processing Unit (CPU) 401. It should be noted that the computer readable medium in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.

Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the C language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept defined above, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.
