Method and apparatus for signaling syntax for immersive video coding

Document No.: 1327993; Publication date: 2020-07-14

Description: This technique, "Method and apparatus for signaling syntax for immersive video coding" (用于沉浸式视频编解码的信令语法的方法以及装置), was created by Wang Peng, Lin Hongzhi, Lin Jianliang, and Zhang Shengkai on 2018-08-17. Its main content is summarized as follows. According to one method of the present invention, at the source side or encoder side, a selected viewport associated with a 360 ° virtual reality image is determined. One or more parameters associated with the selected pyramid projection format are then determined. According to the invention, one or more syntax elements of the one or more parameters are included in the encoded data of the 360 ° virtual reality image, and the encoded data of the 360 ° virtual reality image is provided as output data. At the receiver side or decoder side, one or more syntax elements of one or more parameters are parsed from the encoded data of the 360 ° virtual reality image. A selected pyramid projection format associated with the 360 ° virtual reality image is determined based on information including the one or more parameters. The 360 ° virtual reality image is then restored according to the selected viewport.

1. A method of processing a 360 ° virtual reality image, the method comprising:

receiving input data of the 360 ° virtual reality image;

determining a selected viewport associated with the 360 ° virtual reality image;

determining one or more parameters associated with a selected pyramid projection format corresponding to the selected viewport, wherein the pyramid projection format includes a primary viewport plane and four secondary viewport planes;

including one or more syntax elements of the one or more parameters in the encoded data of the 360 ° virtual reality image; and

providing the encoded data of the 360 ° virtual reality image as output data.

2. The method of processing a 360 ° virtual reality image of claim 1, wherein the one or more parameters comprise a yaw of a center of the primary viewport, a pitch of the center of the primary viewport, a width of the primary viewport face, a FOV (field of view) angle of the primary viewport face, non-uniformity factors for the four secondary viewport faces, or a combination thereof.

3. The method of processing a 360 ° virtual reality image of claim 1, wherein the one or more parameters include a packing type, a displacement indicator as to whether secondary viewport center displacement is allowed, a horizontal plane center displacement, a vertical plane center displacement, or a combination thereof.

4. The method of processing a 360 ° virtual reality image of claim 3, wherein when the displacement indicator indicates that the secondary viewport center displacement is allowed, one or more syntax elements of the horizontal plane center displacement, the vertical plane center displacement, or both are included in the encoded data of the 360 ° virtual reality image.

5. An apparatus for processing a 360 ° virtual reality image, the apparatus comprising one or more electronic devices or processors configured to:

receive input data of the 360 ° virtual reality image;

determine a selected viewport associated with the 360 ° virtual reality image;

determine one or more parameters associated with a selected pyramid projection format corresponding to the selected viewport, wherein the pyramid projection format includes a primary viewport plane and four secondary viewport planes;

include one or more syntax elements of the one or more parameters in the encoded data of the 360 ° virtual reality image; and

provide the encoded data of the 360 ° virtual reality image as output data.

6. The apparatus for processing a 360 ° virtual reality image of claim 5, wherein the one or more parameters include a yaw of a center of the primary viewport, a pitch of the center of the primary viewport, a width of the primary viewport, a FOV (field of view) angle of the primary viewport, non-uniformity factors for the four secondary viewport planes, or a combination thereof.

7. The apparatus for processing a 360 ° virtual reality image of claim 5, wherein the one or more parameters include a packing type, a displacement indicator as to whether secondary viewport center displacement is allowed, a horizontal plane center displacement, a vertical plane center displacement, or a combination thereof.

8. The apparatus for processing a 360 ° virtual reality image as recited in claim 7, wherein when the displacement indicator indicates that the secondary viewport center displacement is allowed, one or more syntax elements of the horizontal plane center displacement, the vertical plane center displacement, or both are included in the encoded data of the 360 ° virtual reality image.

9. A method of processing a 360 ° virtual reality image, the method comprising:

receiving encoded data of the 360 ° virtual reality image;

parsing one or more syntax elements of one or more parameters from the encoded data of the 360 ° virtual reality image;

determining a selected pyramid projection format associated with the 360 ° virtual reality image based on information comprising the one or more parameters, wherein the pyramid projection format comprises a primary viewport plane and four secondary viewport planes; and

restoring the 360 ° virtual reality image according to a selected viewport corresponding to the selected pyramid projection format.

10. The method of processing a 360 ° virtual reality image of claim 9, wherein the one or more parameters include a yaw of a center of the primary viewport, a pitch of the center of the primary viewport, a width of the primary viewport face, a FOV (field of view) angle of the primary viewport face, non-uniformity factors for the four secondary viewport faces, or a combination thereof.

11. The method of processing a 360 ° virtual reality image of claim 9, wherein the one or more parameters include a packing type, a displacement indicator as to whether secondary viewport center displacement is allowed, a horizontal plane center displacement, a vertical plane center displacement, or a combination thereof.

12. The method of processing a 360 ° virtual reality image of claim 11, wherein when the displacement indicator indicates that the secondary viewport center displacement is allowed, one or more syntax elements of the horizontal plane center displacement, the vertical plane center displacement, or both, are included in the encoded data of the 360 ° virtual reality image.

13. An apparatus for processing a 360 ° virtual reality image, the apparatus comprising one or more electronic devices or processors configured to:

receive encoded data of the 360 ° virtual reality image;

parse one or more syntax elements of one or more parameters from the encoded data of the 360 ° virtual reality image;

determine a selected pyramid projection format associated with the 360 ° virtual reality image based on information comprising the one or more parameters, wherein the pyramid projection format comprises a primary viewport plane and four secondary viewport planes; and

restore the 360 ° virtual reality image according to a selected viewport corresponding to the selected pyramid projection format.

14. The apparatus for processing a 360 ° virtual reality image of claim 13, wherein the one or more parameters include a yaw of a center of the primary viewport, a pitch of the center of the primary viewport, a width of the primary viewport, a FOV (field of view) angle of the primary viewport, non-uniformity factors for the four secondary viewport planes, or a combination thereof.

15. The apparatus for processing a 360 ° virtual reality image of claim 13, wherein the one or more parameters include a packing type, a displacement indicator as to whether secondary viewport center displacement is allowed, a horizontal plane center displacement, a vertical plane center displacement, or a combination thereof.

16. The apparatus for processing a 360 ° virtual reality image of claim 15, wherein when the displacement indicator indicates that the secondary viewport center displacement is allowed, one or more syntax elements of the horizontal plane center displacement, the vertical plane center displacement, or both are included in the encoded data for the 360 ° virtual reality image.

Technical Field

The present invention relates to image/video processing or coding for 360 ° Virtual Reality (VR) images/sequences, and in particular to syntax signaling for immersive video coding in a pyramid projection format.

Background

360 ° video, also known as immersive video, is an emerging technology that can provide a "live-like" experience. The immersive experience is achieved by surrounding the user with a scene that covers a panoramic view, in particular a 360 ° field of view. This "live-like" experience can be further enhanced by stereoscopic rendering. Accordingly, panoramic video is being widely used in Virtual Reality (VR) applications.

Immersive video involves capturing a scene with multiple cameras to cover a panoramic field of view, e.g., a 360 ° field of view. An immersive camera typically uses a panoramic camera or a set of cameras to capture a 360 ° field of view; typically, two or more cameras are used. All cameras must capture and record the separate segments (also called separate views) of the scene simultaneously. Furthermore, the set of cameras is typically arranged to capture views horizontally, although other camera arrangements are possible.

A 360 ° virtual reality image or images may be captured using a 360 ° spherical panoramic camera to cover the entire 360 ° field of view. Because it is difficult to process or store a three-dimensional (3D) spherical image using conventional video/image processing apparatus, a 360 ° VR image is generally converted into a two-dimensional (2D) format using a 3D-to-2D projection method. For example, equirectangular projection (ERP) and cubemap projection (CMP) are commonly used projection methods. Thus, a 360 ° image may be stored in an equirectangular projection format that maps the entire surface of the sphere to a planar image, with latitude on the vertical axis and longitude on the horizontal axis. In ERP, the areas at the north and south poles of the sphere (where a single point becomes a line) are stretched much more strongly than the area near the equator. Moreover, due to the distortion introduced by this stretching, especially near the two poles, predictive coding tools are often unable to make good predictions, resulting in reduced coding efficiency.
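To make the ERP mapping described above concrete, the following Python sketch converts a sphere point (longitude, latitude, in degrees) into pixel coordinates of an equirectangular image. The function name and the pixel rounding convention are illustrative assumptions, not part of the described method.

```python
def sphere_to_erp(lon_deg, lat_deg, width, height):
    """Map a sphere point (longitude in [-180, 180], latitude in
    [-90, 90]) to ERP pixel coordinates: longitude maps to the
    horizontal axis, latitude to the vertical axis."""
    # Normalize longitude to [0, 1]: 0 at -180 deg, 1 at +180 deg.
    u = (lon_deg + 180.0) / 360.0
    # Normalize latitude to [0, 1]: 0 at +90 deg (north pole).
    v = (90.0 - lat_deg) / 180.0
    # Clamp so the +180 deg / -90 deg edge stays inside the image.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return x, y

# Example: the intersection of the equator and the prime meridian
# lands at the image center; the poles collapse into full rows.
print(sphere_to_erp(0.0, 0.0, 4096, 2048))  # -> (2048, 1024)
```

Note how every pixel of the top row corresponds to the single north-pole point, which is exactly the polar stretching described above.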

In the present invention, syntax signaling related to a new projection format is disclosed.

Disclosure of Invention

The invention discloses a method and an apparatus for processing a 360 ° virtual reality image. According to one method of the present invention, input data of a 360 ° virtual reality image is received at a source side or encoder side. A selected viewport associated with the 360 ° virtual reality image is determined. One or more parameters associated with a selected pyramid projection format corresponding to the selected viewport are then determined, wherein the pyramid projection format includes a primary viewport plane and four secondary viewport planes. According to the invention, one or more syntax elements of said one or more parameters are included in the encoded data of the 360 ° virtual reality image, and the encoded data of the 360 ° virtual reality image is provided as output data.

On the receiver side or decoder side, encoded data of a 360 ° virtual reality image is received. One or more syntax elements of one or more parameters are parsed from the encoded data of the 360 ° virtual reality image. A selected pyramid projection format associated with the 360 ° virtual reality image is determined based on information including the one or more parameters, wherein the pyramid projection format includes a primary viewport plane and four secondary viewport planes. The 360 ° virtual reality image is then restored according to a selected viewport corresponding to the selected pyramid projection format.

In one embodiment, the one or more parameters include a yaw of a center of the primary viewport, a pitch of the center of the primary viewport, a width of the primary viewport face, a FOV (field of view) angle of the primary viewport face, non-uniformity factors for the four secondary viewport faces, or a combination thereof. In another embodiment, the one or more parameters include a packing type, a displacement indicator as to whether secondary viewport center displacement is allowed, a horizontal plane center displacement, a vertical plane center displacement, or a combination thereof. When the displacement indicator indicates that the secondary viewport center displacement is allowed, one or more syntax elements of the horizontal plane center displacement, the vertical plane center displacement, or both are included in the encoded data of the 360 ° virtual reality image.

Drawings

Fig. 1A shows an example of a viewport (viewport) represented as a pyramid. The 360VR video content on the sphere is projected onto an inscribed pyramid that includes one vertex, one rectangular base, and four triangular sides.

Fig. 1B shows an example of a pyramid comprising five faces: the front or main face and four side faces, labeled R (right), L (left), T (top), and B (bottom).

FIG. 1C shows an example of a compact pyramid projection layout, in which the triangular projection faces, with contracted heights, are rearranged together with the front face to form a compact layout.

FIG. 1D illustrates an example of an exponential non-uniform mapping function.

FIG. 2 shows an example of a viewport-based pyramid projection with the viewport center indicated, together with the yaw (i.e., longitude) θ value and the pitch (i.e., latitude) φ value.

Fig. 3 shows an example of a viewport image with the proposed layout, with (yaw, pitch) = (0, 0) and a non-uniformity factor n of 2.5.

Fig. 4 shows an example of a viewport represented as a pyramid enclosed in a sphere, with the relevant parameters indicated.

Fig. 5 shows an example of an asymmetric pyramid projection layout with two packing types (first type, second type), where the main view and the auxiliary view are stacked horizontally.

Fig. 6 shows an example of an asymmetric pyramid projection layout with two packing types (third type, fourth type), where the main view and the auxiliary view are stacked vertically.

FIG. 7 shows an example of an asymmetric pyramid projection layout in which the vertex positions are offset from the default center in both the horizontal and vertical directions.

Fig. 8 shows an exemplary flowchart of a system for processing a 360 ° virtual reality image at a source side or an encoder side according to an embodiment of the present invention.

Fig. 9 shows an exemplary flowchart of a system for processing a 360 ° virtual reality image at a receiving side or a decoder side according to an embodiment of the present invention.

Detailed Description

The following description is of the best mode contemplated for carrying out the present invention. The description is made for the purpose of illustrating the general principles of the present invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.

In JVET-E0058 ("AHG8: A viewport-based pyramid projection for VR360 video streaming", Peng Wang et al., Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 5th Meeting: Geneva, CH, 12-20 January 2017), a viewport-based pyramid projection format for 360VR video streaming was disclosed.

In FIG. 1A, the viewport is represented as a pyramid, and the 360VR video content on the sphere is projected onto an inscribed pyramid, which includes one vertex, one rectangular base, and four triangular sides. In the particular embodiment shown in FIG. 1A, the base of the pyramid corresponds to a square. As shown in FIG. 1B, the pyramid includes five faces (i.e., the square base 110 and four triangular sides), where the base 110 is referred to as the front or primary face and the four triangular sides are labeled R (right), L (left), T (top), and B (bottom). As shown in FIG. 1B, the height of each unfolded side is referred to as H. The primary face is the main viewport face, which covers a 90 ° × 90 ° region; the other four isosceles triangular sides are referred to as secondary faces. As shown in FIG. 1C, rearranging these triangular faces, with their heights contracted (the contracted height is labeled h), together with the front face 120 forms a very compact layout, referred to as the compact image 130. Furthermore, there is no discontinuous boundary between any two connected faces in the compact layout.

In order to preserve more detail near the primary projection face, the resampling process may use a non-uniform mapping function to generate denser sampling of the corresponding side projection faces T/B/L/R near the primary projection face. As shown in FIG. 1D, the non-uniform mapping function is an exponential function and may be represented by the following equation (with both coordinates normalized to [0, 1] and measured from the vertex side of the face):

Y' = (n^Y − 1) / (n − 1)

In the above equation, n is a positive number and n ≠ 1, Y is the vertical coordinate of the initial side projection face, and Y' is the vertical coordinate of the vertically contracted side projection face. The parameter n is referred to in the present invention as the non-uniformity factor. Compared with areas far from the main face, the non-uniform downsampling process achieves a smaller degradation in image quality for projected areas close to the main face.

To accommodate various viewport-based pyramid projection formats, parameters associated with the selected pyramid projection format need to be signaled so that the decoder can properly reconstruct the VR video. Accordingly, the present invention discloses a plurality of syntax elements for parameters related to a selected pyramid projection format. For example, the plurality of syntax elements may include one or more of the following syntax elements:

signed int(16) main_viewpoint_center_yaw;

signed int(16) main_viewpoint_center_pitch;

unsigned int(16) main_viewpoint_face_width;

unsigned int(8) main_viewpoint_fov_angle;

unsigned int(32) non_uniform_factor;

in the above list, main _ viewpoint _ center _ yaw specifies the value of yaw (i.e., longitude) θ, the direction of rotation is clockwise and the range of values is [ -180 °,180 °]. In the above list, main _ viewpoint _ center _ pitch specifies the pitch (i.e., latitude)Is clockwise and the range of values is [ -90 °,90 ° ]]。

FIG. 2 shows an example of a viewport-based pyramid projection, where the viewport center is indicated by a black dot, together with the yaw (i.e., longitude) θ value and the pitch (i.e., latitude) φ value.

Fig. 3 shows an example of a viewport image with the proposed layout, with (yaw, pitch) = (0, 0) and a non-uniformity factor n of 2.5. In Fig. 3, the center of the main face is indicated by the white dot, and the boundaries of the four triangular faces are indicated by the white lines. As shown in Fig. 3, the image content crossing these boundaries is continuous. main_viewpoint_face_width specifies the width and height, in pixels, of the (square) main viewport; the main_viewpoint_face_width of an exemplary image in the pyramid projection format is indicated in Fig. 3.

In FIG. 4, the viewport is shown as a pyramid enclosed in a sphere, as shown in diagram 410. main_viewpoint_fov_angle specifies the angle, in degrees, that defines the field of view (FOV) size of the square main viewport area. d is the distance between the center of the sphere and the plane of the main viewport, and a cross section of the pyramid through the vertex and a line that splits the main view into two equally sized triangles is shown in FIG. 4. With the midpoints of the main-face edges lying on the sphere of radius R and the pyramid apex touching the sphere on the side opposite the face, the width (w) of the main viewport and the height (h') of the pyramid follow from the FOV angle θ as:

w = 2R·sin(θ/2), d = R·cos(θ/2), h' = R + d = R·(1 + cos(θ/2)).

fig. 4 also shows the relevant parameters for deriving the width (w) of the main viewport and the height (h') of the pyramid. In FIG. 4, diagram 410 shows a perspective view of a pyramid enclosed in a sphere with a major face in front, where the width (w) of the major viewport is indicated. Diagram 420 represents a side view of a pyramid enclosed in a sphere, where values for the radius of the sphere (R), the height of the pyramid (h'), the center of the sphere 422, the distance between the center of the sphere and the primary viewport (d), and the field of view (FOV) angle (θ) are indicated. Drawing 430 shows a three-dimensional view of a pyramid enclosed in a sphere, indicating the radius of the sphere (R), the height of the pyramid (h'), the center of the sphere 422, and the distance between the center of the sphere and the viewport (d).

In Figs. 1A-1C, the four triangular sides are symmetric around the main face and have the same shape and size. When the four triangular sides are folded into the square on the right side of the layout in FIG. 1C, they are symmetric and their vertices meet at the center of that square. In the present invention, a pyramid projection layout with asymmetric sides is also disclosed. Fig. 5 shows an example of an asymmetric pyramid projection layout with two packing types (first type 510, second type 520), where the main view and the auxiliary view are stacked horizontally. The image width and height of the asymmetric pyramid projection layout are referred to as W_a and H_a, respectively. In other words, the resolution of the asymmetric pyramid projection layout is W_a × H_a, the resolution of the main view is H_a × H_a, and the resolution of the auxiliary view is (W_a − H_a) × H_a. The vertex positions (512, 522) are offset from the default centers (514, 524) of the auxiliary views, and the respective horizontal offset distances (516 and 526) are indicated in Fig. 5. Furthermore, there is no discontinuous boundary between any two connected faces in the compact layout.

Fig. 6 shows an example of an asymmetric pyramid projection layout with two packing types (third type 610, fourth type 620), where the main view and the auxiliary view are stacked vertically. The image width and height of the asymmetric pyramid projection layout are referred to as W_a and H_a, respectively. In other words, the frame resolution of the asymmetric pyramid projection layout is W_a × H_a, the resolution of the main view is W_a × W_a, and the resolution of the auxiliary view is W_a × (H_a − W_a). The vertex positions (612, 622) are offset from the default centers (614, 624) of the auxiliary views, and the respective vertical offset distances (616 and 626) are indicated in Fig. 6. Furthermore, there is no discontinuous boundary between any two connected faces in the compact layout.

Fig. 7 shows an example of an asymmetric pyramid projection layout 710, with the main view and auxiliary view stacked horizontally (i.e., the first packing type). The vertex position 712 is offset from the default center 714 in both the horizontal and the vertical direction. The horizontal offset distance (716) and the vertical offset distance (718) from the auxiliary-view default center are indicated in Fig. 7. Furthermore, there is no discontinuous boundary between any two connected faces in the compact layout.

To support the asymmetric pyramid projection layout, the present invention also discloses additional syntax elements to be signaled in the video bitstream, so that the decoder can recover the selected asymmetric pyramid projection layout accordingly.

Additional syntax elements include:

packing_type;

disable_center_displacement;

center_displacement_x;

center_displacement_y;

as mentioned previously, there are four types of asymmetric pyramid projection arrangements, as shown in fig. 5 and 6. The syntax element packing _ type defines which of the four types is selected, the syntax element disable _ center _ displacement defines whether center displacement is disabled, and if disable _ center _ displacement is equal to 1, center _ displacement _ x and center _ displacement _ y are inferred to be 0; otherwise, center displacement is signaled using the syntax elements center _ displacement _ x and center _ displacement _ y (in pixel units). The vertex center of the asymmetric pyramid projection layout can be determined from the default center of the auxiliary view and the offset value, and the x-coordinate x _ c (measured from the left boundary of the frame) and the y-coordinate y _ c (measured from the upper boundary of the frame) for the default center of the four types of auxiliary views can be determined as follows:

Type 1: x_c = (W_a + H_a)/2, y_c = H_a/2;

Type 2: x_c = (W_a − H_a)/2, y_c = H_a/2;

Type 3: x_c = W_a/2, y_c = (H_a − W_a)/2;

Type 4: x_c = W_a/2, y_c = (H_a + W_a)/2.

When the vertex is to the right of the default center, center_displacement_x > 0. When the vertex is to the left of the default center, center_displacement_x < 0. When the vertex is above the default center, center_displacement_y > 0. When the vertex is below the default center, center_displacement_y < 0. When disable_center_displacement is equal to 0 (i.e., the asymmetric projection format is allowed), the center coordinates of the vertex for the four types are calculated as follows:

Types 1 and 2: (x_c + center_displacement_x, y_c);

Types 3 and 4: (x_c, y_c + center_displacement_y).

For the asymmetric pyramid projection layout of FIG. 7, where the vertex position (712) is offset from the default center (714) in both the horizontal and vertical directions, and disable_center_displacement is equal to 0 (i.e., the asymmetric projection format is allowed), the center coordinates of the vertex are calculated as follows:

(x_c+center_displacement_x,y_c+center_displacement_y)
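The default-center and vertex-center rules for the four packing types can be collected into a single helper. The following Python sketch mirrors the calculations above; the function and argument names are illustrative.

```python
def vertex_center(packing_type, Wa, Ha,
                  disable_center_displacement=1,
                  center_displacement_x=0, center_displacement_y=0):
    """Return the (x_c, y_c) vertex center of the auxiliary view.

    x is measured from the left frame boundary, y from the upper
    boundary.  When disable_center_displacement == 1 both
    displacements are inferred to be 0.
    """
    defaults = {
        1: ((Wa + Ha) / 2, Ha / 2),
        2: ((Wa - Ha) / 2, Ha / 2),
        3: (Wa / 2, (Ha - Wa) / 2),
        4: (Wa / 2, (Ha + Wa) / 2),
    }
    x_c, y_c = defaults[packing_type]
    if disable_center_displacement == 1:
        return x_c, y_c  # displacement syntax elements are not signaled
    if packing_type in (1, 2):  # horizontal stacking: x offset applies
        return x_c + center_displacement_x, y_c
    return x_c, y_c + center_displacement_y  # vertical stacking

# Type 1 layout, 1536x1024 frame, vertex shifted 100 pixels.
print(vertex_center(1, 1536, 1024, 0, 100, 0))  # -> (1380.0, 512.0)
```

For the combined horizontal-and-vertical offset of Fig. 7, both displacements would be applied; the helper above covers only the four single-axis types for clarity.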

Fig. 8 shows an exemplary flowchart of a system for processing a 360 ° virtual reality image at a source side or encoder side according to an embodiment of the present invention. The steps shown in the flowchart may be implemented as program code executable on one or more processors (e.g., one or more CPUs) at the encoder side, or implemented in hardware, such as one or more electronic devices or processors arranged to perform the steps in the flowchart. According to this method, in step 810, input data of a 360 ° virtual reality image is received. In step 820, a selected viewport associated with the 360 ° virtual reality image is determined. In step 830, one or more parameters associated with a selected pyramid projection format corresponding to the selected viewport are determined, wherein the pyramid projection format includes a primary viewport plane and four secondary viewport planes. In step 840, one or more syntax elements of said one or more parameters are included in the encoded data of the 360 ° virtual reality image. In step 850, the encoded data of the 360 ° virtual reality image is provided as output data.

Fig. 9 shows an exemplary flowchart of a system for processing a 360 ° virtual reality image at a receiver side or decoder side according to an embodiment of the present invention. In step 910, encoded data for a 360 ° virtual reality image is received. In step 920, one or more syntax elements of one or more parameters are parsed from the encoded data of the 360 ° virtual reality image. In step 930, a selected pyramid projection format associated with the 360 ° virtual reality image is determined based on information including the one or more parameters, wherein the pyramid projection format includes a primary viewport plane and four secondary viewport planes. In step 940, the 360 ° virtual reality image is restored according to the selected viewport corresponding to the selected pyramid projection format.

The above-described flow charts are intended to illustrate embodiments of the present invention by way of example. Those skilled in the art may practice the invention by modifying individual steps, splitting or combining steps without departing from the spirit of the invention.

The previous description is provided to enable any person skilled in the art to practice the invention in the context of a particular application and its requirements. Various modifications to the described embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments. Thus, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. In the previous detailed description, numerous specific details were set forth in order to provide a thorough understanding of the present invention; however, it will be apparent to those skilled in the art that the present invention may be practiced without these specific details.

The embodiments of the invention described above may be implemented in various hardware, software code, or combinations thereof. For example, an embodiment of the invention may be one or more electronic circuits integrated into a video compression chip, or program code integrated into video compression software, to perform the processing described herein. An embodiment of the invention may also be program code executed on a digital signal processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, digital signal processor, microprocessor, or field programmable gate array (FPGA). These processors may be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and in different formats or styles, and the software code may also be compiled for different target platforms. However, different code formats, styles, and languages of software code, as well as other ways of configuring code to perform the tasks, do not depart from the spirit and scope of the present invention.

The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
