Image display method, device and system

Document No.: 1408361 | Publication date: 2020-03-06

Note: This invention, "Image display method, device and system" (一种图像的显示方法、装置及系统), was created by 包孝东, 马翠娟, 黄茵, 刘建滨 and 陈�光 on 2018-08-28. Abstract: An image display method, device and system, used to solve the prior-art problem of excessively long display latency in Cloud VR. In this application, a terminal device sends first information to a cloud device, where the first information indicates the pose and position of the terminal device at a first time. The terminal device then receives information of a first perspective image from the cloud device, where the first perspective image is the perspective image corresponding to the pose and position of the terminal device at the first time. Based on the information of the first perspective image and on the pose change and position change of the terminal device from the first time to a second time, the terminal device displays the image within the field of view of the terminal device at the second time. This effectively shortens the time from when the pose and position of the terminal device change at the second time to when the terminal device displays the image within its field of view at the second time, reduces the display latency of the image, and thereby improves user experience.

1. An image display method, comprising:

sending, by a terminal device, first information to a cloud device, wherein the first information indicates a pose and a position of the terminal device at a first time;

receiving, by the terminal device, information of a first perspective image from the cloud device, wherein the first perspective image is a perspective image corresponding to the pose and the position of the terminal device at the first time; and

displaying, by the terminal device, an image within a field of view of the terminal device at a second time based on the information of the first perspective image and on a pose change and a position change of the terminal device from the first time to the second time, wherein the second time is later than the first time.

2. The method according to claim 1, wherein the information of the first perspective image comprises the first perspective image, and displaying, by the terminal device, the image within the field of view of the terminal device at the second time comprises:

converting, by the terminal device, the first perspective image into a second perspective image based on the position change of the terminal device from the first time to the second time, wherein the second perspective image is a perspective image corresponding to the position of the terminal device at the second time; and

displaying, by the terminal device, an image within the field of view of the terminal device in the second perspective image based on the pose change of the terminal device from the first time to the second time.

3. The method according to claim 2, wherein converting, by the terminal device, the first perspective image into the second perspective image based on the position change of the terminal device from the first time to the second time comprises:

determining, by the terminal device, depth information of the second perspective image and pixel change information for converting the first perspective image into the second perspective image according to the position change of the terminal device from the first time to the second time; and

converting, by the terminal device, the first perspective image into the second perspective image according to the depth information of the second perspective image and the pixel change information.

4. The method according to claim 3, wherein the information of the first perspective image further comprises depth information and a motion vector, the motion vector characterizes a change trend of pixels in the first perspective image, and determining, by the terminal device, the depth information of the second perspective image and the pixel change information for converting the first perspective image into the second perspective image according to the position change of the terminal device from the first time to the second time comprises:

determining, by the terminal device, the depth information of the second perspective image according to the position change of the terminal device from the first time to the second time, based on the depth information of the first perspective image; and

determining, by the terminal device, the pixel change information according to the position change of the terminal device from the first time to the second time, based on the motion vector.

5. The method according to any one of claims 1 to 4, wherein before the terminal device displays the image within the field of view of the terminal device at the second time, the method further comprises:

determining, by the terminal device, the field of view of the terminal device at the second time according to the pose of the terminal device at the second time.

6. The method according to any one of claims 1 to 5, wherein the first perspective image comprises an image within the field of view of the terminal device at the first time, and the field of view of the first perspective image is larger than the field of view of the terminal device at the first time.

7. An image display method, comprising:

receiving, by a cloud device, first information from a terminal device, wherein the first information indicates a pose and a position of the terminal device at a first time;

rendering, by the cloud device, a pre-stored environment image of the terminal device according to the first information to obtain a first perspective image, wherein the first perspective image is a perspective image corresponding to the pose and the position of the terminal device at the first time; and

sending, by the cloud device, information of the first perspective image to the terminal device.

8. The method according to claim 7, wherein the information of the first perspective image comprises the first perspective image, depth information and a motion vector.

9. The method according to claim 7 or 8, wherein the first perspective image comprises an image within the field of view of the terminal device at the first time, and the field of view of the first perspective image is larger than the field of view of the terminal device at the first time.

10. An apparatus, comprising a sending unit, a receiving unit and a display unit, wherein:

the sending unit is configured to send first information to a cloud device, wherein the first information indicates a pose and a position of a terminal device at a first time;

the receiving unit is configured to receive information of a first perspective image from the cloud device, wherein the first perspective image is a perspective image corresponding to the pose and the position of the terminal device at the first time; and

the display unit is configured to display an image within a field of view of the terminal device at a second time based on the information of the first perspective image and on a pose change and a position change of the terminal device from the first time to the second time, wherein the second time is later than the first time.

11. The apparatus according to claim 10, wherein the information of the first perspective image comprises the first perspective image, and when displaying the image within the field of view of the terminal device at the second time, the display unit is specifically configured to:

convert the first perspective image into a second perspective image based on the position change of the terminal device from the first time to the second time, wherein the second perspective image is a perspective image corresponding to the position of the terminal device at the second time; and

display an image within the field of view of the terminal device in the second perspective image based on the pose change of the terminal device from the first time to the second time.

12. The apparatus according to claim 11, wherein when converting the first perspective image into the second perspective image based on the position change of the terminal device from the first time to the second time, the display unit is specifically configured to:

determine depth information of the second perspective image and pixel change information for converting the first perspective image into the second perspective image according to the position change of the terminal device from the first time to the second time; and

convert the first perspective image into the second perspective image according to the depth information of the second perspective image and the pixel change information.

13. The apparatus according to claim 12, wherein the information of the first perspective image further comprises depth information and a motion vector, the motion vector characterizes a change trend of pixels in the first perspective image, and when determining the depth information of the second perspective image and the pixel change information for converting the first perspective image into the second perspective image according to the position change of the terminal device from the first time to the second time, the display unit is specifically configured to:

determine the depth information of the second perspective image according to the position change of the terminal device from the first time to the second time, based on the depth information of the first perspective image; and

determine the pixel change information according to the position change of the terminal device from the first time to the second time, based on the motion vector.

14. The apparatus according to any one of claims 10 to 13, wherein before displaying the image within the field of view of the terminal device at the second time, the display unit is further configured to:

determine the field of view of the terminal device at the second time according to the pose of the terminal device at the second time.

15. The apparatus according to any one of claims 10 to 14, wherein the first perspective image comprises an image within the field of view of the terminal device at the first time, and the field of view of the first perspective image is larger than the field of view of the terminal device at the first time.

16. An apparatus, comprising a receiving unit, a processing unit and a sending unit, wherein:

the receiving unit is configured to receive first information from a terminal device, wherein the first information indicates a pose and a position of the terminal device at a first time;

the processing unit is configured to render a pre-stored environment image of the terminal device according to the first information to obtain a first perspective image, wherein the first perspective image is a perspective image corresponding to the pose and the position of the terminal device at the first time; and

the sending unit is configured to send information of the first perspective image to the terminal device.

17. The apparatus according to claim 16, wherein the information of the first perspective image comprises the first perspective image, depth information and a motion vector.

18. The apparatus according to claim 16 or 17, wherein the first perspective image comprises an image within the field of view of the terminal device at the first time, and the field of view of the first perspective image is larger than the field of view of the terminal device at the first time.

19. A system, comprising a terminal device and a cloud device, wherein:

the terminal device is configured to send first information to the cloud device, wherein the first information indicates a pose and a position of the terminal device at a first time;

the cloud device is configured to: receive the first information from the terminal device; render a pre-stored environment image of the terminal device according to the first information to obtain a first perspective image, wherein the first perspective image is a perspective image corresponding to the pose and the position of the terminal device at the first time; and send information of the first perspective image to the terminal device; and

the terminal device is further configured to: receive the information of the first perspective image from the cloud device; and display an image within a field of view of the terminal device at a second time based on the information of the first perspective image and on a pose change and a position change of the terminal device from the first time to the second time, wherein the second time is later than the first time.

20. The system according to claim 19, wherein the information of the first perspective image comprises the first perspective image, and when displaying the image within the field of view of the terminal device at the second time, the terminal device is configured to:

convert the first perspective image into a second perspective image based on the position change of the terminal device from the first time to the second time, wherein the second perspective image is a perspective image corresponding to the position of the terminal device at the second time; and

display an image within the field of view of the terminal device in the second perspective image based on the pose change of the terminal device from the first time to the second time.

21. The system according to claim 20, wherein when converting the first perspective image into the second perspective image based on the position change of the terminal device from the first time to the second time, the terminal device is specifically configured to:

determine depth information of the second perspective image and pixel change information for converting the first perspective image into the second perspective image according to the position change of the terminal device from the first time to the second time; and

convert the first perspective image into the second perspective image according to the depth information of the second perspective image and the pixel change information.

22. The system according to claim 21, wherein the information of the first perspective image further comprises depth information and a motion vector, the motion vector characterizes a change trend of pixels in the first perspective image, and when determining the depth information of the second perspective image and the pixel change information for converting the first perspective image into the second perspective image according to the position change of the terminal device from the first time to the second time, the terminal device is specifically configured to:

determine the depth information of the second perspective image according to the position change of the terminal device from the first time to the second time, based on the depth information of the first perspective image; and

determine the pixel change information according to the position change of the terminal device from the first time to the second time, based on the motion vector.

23. The system according to any one of claims 19 to 22, wherein before displaying the image within the field of view of the terminal device at the second time, the terminal device is further configured to:

determine the field of view of the terminal device at the second time according to the pose of the terminal device at the second time.

24. The system according to any one of claims 19 to 23, wherein the first perspective image comprises an image within the field of view of the terminal device at the first time, and the field of view of the first perspective image is larger than the field of view of the terminal device at the first time.

Technical Field

The present application relates to the field of image display technologies, and in particular, to a method, an apparatus, and a system for displaying an image.

Background

Virtual reality (VR) technology is a computer simulation technology for creating and experiencing a virtual world: a computer generates a simulated environment that fuses multi-source information into interactive three-dimensional dynamic views and physical behaviors, immersing the user in that environment.

At present, to achieve a better user experience, VR technology generally combines a VR device with a local high-performance host: the host performs the application's functions such as logical operations and picture rendering, and finally outputs the display picture to the VR device. This approach, however, raises the cost to the user, which keeps VR technology from being widely adopted and limits the user population. The Cloud VR (cloud virtual reality) scheme was therefore proposed, combining the concept of cloud computing with VR technology. In the Cloud VR scheme, VR content such as application data and video data is deployed on a cloud device, and application functions such as logical operations and picture rendering are implemented on the cloud device; a user needs only a VR device to experience various VR applications such as VR games and VR movies. The Cloud VR scheme can effectively reduce the user's cost and improve user experience.

However, because Cloud VR hands the functions of logical operations and image rendering to the cloud device, the VR device can only display field-of-view (FOV) images that have already been processed by the cloud device, which inevitably increases the display latency; an excessively long display latency degrades the user experience.

Disclosure of Invention

This application provides an image display method, device and system, to solve the prior-art problem of excessively long display latency in Cloud VR.

In a first aspect, an embodiment of this application provides an image display method. In the method, a terminal device first sends first information to a cloud device, where the first information indicates the pose and position of the terminal device at a first time. The terminal device then receives information of a first perspective image from the cloud device, where the first perspective image is the perspective image corresponding to the pose and position of the terminal device at the first time. After receiving the information of the first perspective image, the terminal device may display an image within the field of view of the terminal device at a second time based on the information of the first perspective image and on the pose change and position change of the terminal device from the first time to the second time, where the second time is later than the first time.

With this method, after receiving the information of the first perspective image, the terminal device does not display the image within its field of view at the first time; instead, using the information of the first perspective image together with its pose change and position change from the first time to the second time, it displays the image within its field of view at the second time, which follows the first time. This effectively shortens the time from when the pose and position of the terminal device change at the second time to when the terminal device displays the image within its field of view at the second time; that is, it shortens the display latency of the image, which in turn improves user experience.

In a possible design, the first perspective image needs to be converted so that an image within the field of view of the terminal device at the second time can be displayed. The information of the first perspective image includes the first perspective image itself, and the terminal device may convert the first perspective image into a second perspective image based on the position change of the terminal device from the first time to the second time, where the second perspective image is the perspective image corresponding to the position of the terminal device at the second time; that is, the field of view of the second perspective image is the same size as that of the first perspective image and can well cover all possible fields of view of the terminal device at the second time. The terminal device then displays, in the second perspective image, the image within its field of view based on its pose change from the first time to the second time.

With this method, converting the first perspective image into the second perspective image according to the position change from the first time to the second time makes it more convenient to determine, from the second perspective image, the image within the field of view of the terminal device at the second time. This further shortens the time from the pose and position change of the terminal device at the second time to the display of the image within its field of view at the second time, and so improves user experience.

In a possible design, when converting the first perspective image into the second perspective image based on the position change of the terminal device from the first time to the second time, the terminal device first determines, according to that position change, the depth information of the second perspective image and the pixel change information for converting the first perspective image into the second perspective image; the terminal device then converts the first perspective image into the second perspective image according to the depth information of the second perspective image and the pixel change information.

With this method, the depth information of the second perspective image and the pixel change information, both obtained from the position change of the terminal device from the first time to the second time, ensure that the first perspective image is accurately converted into the second perspective image; the terminal device can then display the image within its field of view at the second time based on the second perspective image, reducing the display latency.

In a possible design, the depth information of the second perspective image and the pixel change information are obtained as follows:

the information of the first perspective image further includes depth information, and the terminal device determines the depth information of the second perspective image according to the position change of the terminal device from the first time to the second time, based on the depth information of the first perspective image;

the information of the first perspective image further includes a motion vector, the motion vector characterizes the change trend of pixels in the first perspective image, and the terminal device determines the pixel change information according to the position change of the terminal device from the first time to the second time, based on the motion vector.

With this method, the depth information of the second perspective image and the pixel change information can be determined simply and conveniently from the information related to the first perspective image, such as its depth information and motion vector. This ensures that the first perspective image can subsequently be converted into the second perspective image, and thus that the display latency can be effectively shortened.
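As a rough sketch of such a conversion (a depth-aware point reprojection under pure translation; the pinhole camera model, the intrinsic matrix K and all names are our assumptions, not prescribed by this application), the position change alone determines where each pixel of the first perspective image lands in the second perspective image:

```python
import numpy as np

def reproject(depth1: np.ndarray, K: np.ndarray, dpos: np.ndarray):
    """Map each pixel of the first perspective image into the second one.

    depth1: per-pixel depth of the first perspective image (H x W)
    K:      3x3 camera intrinsic matrix (pinhole model)
    dpos:   position change of the terminal device from the first time
            to the second time, in camera coordinates, shape (3,)
    Returns the (u, v) coordinates of every pixel in the second perspective image.
    """
    h, w = depth1.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    pts = np.linalg.inv(K) @ pix * depth1.reshape(-1)   # back-project using depth
    pts2 = pts - dpos.reshape(3, 1)   # camera moved by dpos, so points shift oppositely
    pix2 = K @ pts2                   # re-project into the new camera position
    return (pix2[:2] / pix2[2]).T.reshape(h, w, 2)
```

In this reading, the returned per-pixel offsets play the role of the "pixel change information", and the z-component of `pts2` plays the role of the depth information of the second perspective image.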

In a possible design, before displaying the image within the field of view of the terminal device at the second time, the terminal device may determine its field of view at the second time according to its pose at the second time.

With this method, the terminal device is guaranteed to display the image within its field of view at the second time accurately, which improves user experience.

In a possible design, the first perspective image includes the image within the field of view of the terminal device at the first time, and the field of view of the first perspective image is larger than the field of view of the terminal device at the first time.

With this method, because the field of view of the first perspective image is larger, the second perspective image obtained by converting the first perspective image can cover a larger field of view and can include the images within all possible fields of view of the terminal device at the second time, so that the image within the field of view of the terminal device at the second time can ultimately be displayed well.

In a second aspect, an embodiment of this application provides an image display method. In the method, a cloud device first receives first information from a terminal device, where the first information indicates the pose and position of the terminal device at a first time. The cloud device then renders a pre-stored environment image of the terminal device according to the first information to obtain a first perspective image, where the first perspective image is the perspective image corresponding to the pose and position of the terminal device at the first time. After the rendering is finished, the cloud device sends information of the first perspective image to the terminal device.

With this method, after receiving the first information, the cloud device can output the first perspective image and, when sending it to the terminal device, carry other information of the first perspective image, so that the terminal device can display the image within its field of view at the second time based on the information of the first perspective image and on its pose change and position change from the first time to the second time.

In a possible design, the information of the first perspective image includes the first perspective image, depth information and a motion vector.

Sending this information of the first perspective image to the terminal device makes it convenient for the terminal device to convert the first perspective image, so that the terminal device can finally display the image within its field of view at the second time well, reducing the display latency and improving user experience.

In a possible design, the first perspective image includes the image within the field of view of the terminal device at the first time, and the field of view of the first perspective image is larger than the field of view of the terminal device at the first time.

Because the field of view of the first perspective image is large, the second perspective image converted from the first perspective image covers a large field of view and can cover the images within all possible fields of view of the terminal device at the second time, so that the image within the field of view of the terminal device at the second time can ultimately be displayed well.

In a third aspect, an embodiment of this application further provides an apparatus, applied to the terminal device; for beneficial effects, refer to the description of the first aspect, which is not repeated here. The apparatus has functions that implement the actions in the method examples of the first aspect. The functions may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the functions. In a possible design, the structure of the apparatus includes a sending unit, a receiving unit and a display unit, and these units may perform the corresponding functions in the method examples of the first aspect; for details, refer to the method examples, which are not repeated here.

In a fourth aspect, an embodiment of this application further provides an apparatus, applied to the cloud device; for beneficial effects, refer to the description of the second aspect, which is not repeated here. The apparatus has functions that implement the actions in the method examples of the second aspect. The functions may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the functions. In a possible design, the structure of the apparatus includes a receiving unit, a processing unit and a sending unit, and these units may perform the corresponding functions in the method examples of the second aspect; for details, refer to the method examples, which are not repeated here.

In a fifth aspect, an embodiment of this application further provides an apparatus, applied to the terminal device; for beneficial effects, refer to the description of the first aspect, which is not repeated here. The apparatus includes a processor and a transceiver, and may further include a memory. The processor is configured to support the terminal device in performing the corresponding functions in the method of the first aspect. The memory is coupled to the processor and stores the program instructions and data necessary for the apparatus. The transceiver is configured to communicate with other devices. The apparatus further includes a display, configured to receive instructions from the processor and display images.

In a sixth aspect, an embodiment of this application further provides an apparatus, applied to the cloud device; for beneficial effects, refer to the description of the second aspect, which is not repeated here. The apparatus includes a processor, and may further include a transceiver and a memory. The processor is configured to support the cloud device in performing the corresponding functions in the method of the second aspect. The memory is coupled to the processor and stores the program instructions and data necessary for the apparatus. The transceiver is configured to communicate with other devices.

In a seventh aspect, this application further provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to perform the methods of the above aspects.

In an eighth aspect, this application further provides a computer program product including instructions that, when run on a computer, cause the computer to perform the methods of the above aspects.

In a ninth aspect, this application further provides a computer chip, where the chip is connected to a memory, and the chip is configured to read and execute a software program stored in the memory to perform the methods of the above aspects.

Drawings

FIG. 1 is a flowchart of a prior-art Cloud VR scheme;

FIG. 2 is a schematic architecture diagram of a network system provided in this application;

FIG. 3 is a schematic flowchart of an image display method provided in this application;

FIG. 4 is a schematic diagram of the field of view of a first perspective image and the field of view of a terminal device at a first time, as provided in this application;

FIG. 5 is a flowchart of an image display method provided in this application;

FIGS. 6 to 9 are schematic structural diagrams of apparatuses provided in this application.

Detailed Description

This application provides an image display method, device and system, to solve the prior-art problem that the display latency of the Cloud VR scheme is too long, which degrades user experience.

In the existing Cloud VR scheme, the cloud device implements the application's functions such as logical operations and picture rendering, while the VR device only receives FOV images and displays them.

As shown in FIG. 1, a tracker in the VR device first captures the pose and position of the VR device and uploads the captured pose and position information to the cloud device.

The cloud device obtains the pose and position information uploaded by the VR device at a fixed frequency and refreshes accordingly; once the cloud device obtains the pose and position information uploaded by the VR device, the logic engine in the cloud device is triggered to start logical operations.

After the logic engine in the cloud device performs the logical operations, it outputs the resulting logic information to the rendering engine in the cloud device. The rendering engine renders the FOV image according to the logic information, outputs a rendered frame, and sends the rendered frame to the encoder in the cloud device for processing such as encoding and compression; the processed rendered frame is then sent to the encapsulation and stream-pushing module in the cloud device.

The encapsulation and stream-pushing module in the cloud device encapsulates the processed rendered frame and pushes the stream to the VR device.

A decapsulation module in the VR device receives the data sent by the cloud device, decapsulates it, and sends the decapsulated data to the decoder in the VR device.

The decoder in the VR device decodes the decapsulated data to obtain the rendered frame, which is sent as a display frame to the display module in the VR device.

The display module in the VR device refreshes the display with the display frame.

As can be seen from the above, the Cloud VR scheme hands logical computation and picture rendering to the cloud device and adds a series of processing operations on the cloud device, such as encoding and compression, encapsulation, stream pushing and sending. Moreover, this series of cloud-side operations and the display on the VR device run in series: the VR device can display a new image only after the cloud device completes all of its processing (logical operations, FOV image rendering, encoding and compression, encapsulation, and so on). This lengthens the time from when the pose and position of the VR device change to when the VR device displays the image corresponding to that pose and position; that is, it increases the display latency: for any given moment, the time from the pose and position change at that moment to the display of the corresponding image grows. For VR technology, display latency is an important factor affecting user experience, and an excessively large display latency seriously degrades it.

It should be noted that the display latency in the embodiments of this application refers to the time from when the pose and position of the terminal device (such as a VR device) change to when the terminal device displays the corresponding image.
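As a rough illustration of why this matters (the decomposition and the symbols below are ours, not part of this application), the serial pipeline of FIG. 1 puts every stage on the path from motion to display:

$$T_{\text{display}} \approx T_{\text{capture}} + T_{\text{uplink}} + T_{\text{logic}} + T_{\text{render}} + T_{\text{encode}} + T_{\text{downlink}} + T_{\text{decode}} + T_{\text{refresh}}$$

The method described below shortens this by letting the terminal device reproject an already-received perspective image locally, so the network and cloud-processing terms no longer sit between a pose change and the corresponding displayed image.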

Therefore, this application provides an image display method, device and system that can effectively shorten the display latency of the Cloud VR scheme and thereby improve user experience. The details are described below with reference to the accompanying drawings.

FIG. 2 is a schematic architecture diagram of a network system according to an embodiment of this application. The network system includes a cloud device and a terminal device.

The cloud device is a remote server deployed in the cloud, and it needs comparatively strong image processing and data computing capabilities, such as rendering and logical operation functions. The cloud device may be a many-core server, a computer with a deployed graphics processing unit (GPU) cluster, a large distributed computer, or a cluster computer with pooled hardware resources, among others. In the embodiments of this application, the cloud device can output a perspective image according to the pose and position of a terminal device and transmit information of the perspective image (such as the perspective image itself, depth information and a motion vector) to the terminal device.

The cloud device may further store the application data of the terminal device, such as data of environment images in an application. Storing the application data of the terminal device on the cloud device relieves the storage pressure on the terminal device and also keeps the application data secure and hard to steal.

The terminal device can capture its own pose and position and can display images of a VR application to the user through a display. The terminal device may store the VR application data locally, or it may store the data on the cloud device instead and load it through the cloud device when the VR application needs to run.

The terminal device includes devices worn on the user's head, such as VR glasses and VR helmets, and may further include devices worn on other parts of the user, such as on the hands, elbows, feet or knees (for example, a gamepad). In the embodiments of this application, the terminal device also needs certain image processing and image display capabilities, for example re-projecting the perspective image obtained from the cloud device to obtain a displayable image and presenting it to the user. The terminal device also needs to capture its own pose and position and have certain motion capture capabilities, such as tactile feedback, gesture recognition and eye tracking.

A device worn on the user's head can capture the changes of the user's head, convert them into the pose and position of the terminal device, and display to the user an image within the field of view of the terminal device; a device worn on another part of the user's body can capture the motion changes of that part and convert them into the pose and position of the terminal device.

In the embodiments of this application, the pose and position of the terminal device at the first time or the second time are the pose and position produced by the motion of the user wearing the terminal device, and they should reflect the user's direction of motion and motion end point.

Based on the network system shown in FIG. 2, an embodiment of this application provides an image display method. As shown in FIG. 3, the method includes the following steps.

Step 301: The terminal device sends first information to the cloud device, where the first information indicates the pose and position of the terminal device at a first time.

The pose and position of the terminal device comprise the pose of the terminal device and the position of the terminal device. The pose of the terminal device refers to its rotation state in space, which can be characterized in many ways, for example by the position of the terminal device's central axis in space and the angle through which that axis rotates, or by the projected areas in space of three mutually perpendicular faces of the terminal device as it rotates. The position of the terminal device refers to its location in space, which may likewise be represented in many ways, for example as a coordinate point in a three-dimensional coordinate system, or in other ways.
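As a minimal illustration of one such representation (the field names, the quaternion choice and the millisecond timestamp are our assumptions, not prescribed by this application), the first information could be modeled as:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Pose:
    # Rotation state of the terminal device in space, as a unit quaternion.
    qw: float
    qx: float
    qy: float
    qz: float

@dataclass
class FirstInformation:
    timestamp_ms: int                     # the "first time"
    pose: Pose                            # pose of the terminal device
    position: Tuple[float, float, float]  # (x, y, z) in a 3-D coordinate system
```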

Before sending the first information to the cloud device, the terminal device first obtains its pose and position at the first time.

The terminal device may obtain its pose and position at the first time through a built-in apparatus, or through other devices; for example, when a user wears the terminal device to play a VR game, a device that can capture the user's motion, such as a sensor, may be placed near the user.

In one possible implementation, a tracker may be deployed in the terminal device to capture the pose and position of the terminal device. The capture scheme can be set according to the specific scenario: the tracker may capture the pose and position in real time, or periodically, or at characteristic time points determined by the application the terminal device is running. For example, if the application currently running is a rhythm-based dance game in which the pose and position of the terminal device can change only at specific beat points, the tracker may start capturing the pose and position just before each beat point.

In another possible implementation, sensors deployed near, or on the body of, the user wearing the terminal device, such as infrared sensors and temperature sensors, may identify the motion state of the user in real time. When a change in the user's motion state is detected at the first time, the user's motion state at the first time can be transmitted to the terminal device immediately, and the terminal device then converts it into the pose and position of the terminal device at the first time.

In practice, the pose and position of the terminal device at the first time can be obtained in many ways; the embodiments of this application are not limited in this respect, and any way of obtaining them is applicable.

After obtaining its pose and position at the first time, the terminal device may send the first information to the cloud device directly. Alternatively, the terminal device may keep its pose and position at historical times, which at least include the pose and position at the time immediately before the first time, and send the pose and position at the first time to the cloud device only when they differ from those at the preceding time.
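A minimal sketch of that send-on-change behavior (the names and the callback shape are illustrative assumptions):

```python
def maybe_send(prev, cur, send) -> None:
    """Upload the sample only when it differs from the previous one.

    prev/cur: (pose, position) tuples captured at consecutive times;
    send:     callback that transmits the first information to the cloud device.
    """
    if prev is None or cur != prev:
        send(cur)
```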

Step 302: After receiving the first information from the terminal device, the cloud device renders a pre-stored environment image of the terminal device according to the first information to obtain a first perspective image, where the first perspective image is the perspective image corresponding to the pose and position of the terminal device at the first time.

In one possible implementation, after receiving the first information, the cloud device may refresh its saved pose and position of the terminal device periodically, that is, at a certain frequency. In that case the cloud device does not necessarily extract the pose and position at the first time from the first information immediately on receipt; it does so at a specific time point and then refreshes the locally saved pose and position. Alternatively, the cloud device may refresh its saved pose and position continuously, refreshing whenever first information is received, so that it always holds the latest pose and position of the terminal device.

After receiving the first information, the cloud device may perform a logical operation on the pose and position of the terminal device at the first time.

The logical operation means making logical judgments according to the pose and position of the terminal device at the first time and determining the image state information that the terminal device needs to display at the first time. This image state information represents the state changes of the image to be displayed that are caused by the pose and position of the terminal device at the first time, including the state information of the various materials, such as people, objects and backgrounds, contained in the image the terminal device needs to display at the first time.

For example, if the terminal device is currently running a shooting game, the pose and position of the terminal device at the first time express a shot, and the logical operation obtains information such as whether the current shooting action hits the target and where it hits.

Specifically, a logic engine may be deployed in the cloud device; the logic engine performs the logical operation based on the pose and position of the terminal device at the first time to obtain the image state information that the terminal device needs to display at the first time.

The logic engine is the logical operation module deployed in the cloud device. In the embodiments of this application, "logic engine" denotes the logical operation module in the cloud device; the module may also be an independent processor. The embodiments of this application do not limit the concrete form or name of the logical operation module: any module capable of logical operations is applicable.

The cloud device renders the pre-stored environment image of the terminal device based on the image state information that the terminal device needs to display at the first time.

The cloud device may pre-store the application data of the terminal device, such as all the environment images of the terminal device's application. For example, if the application of the terminal device is a game application, the cloud device may pre-store the basic environment images of that game, including images of its people, objects, backgrounds and so on.

Through the image state information that the terminal device needs to display at the first time, the cloud device can determine the image data to be displayed by the terminal device at the first time, including the people, objects, backgrounds and so on that the image should show; it can then determine the environment image to be rendered, and render it to obtain the first perspective image.

Rendering can be understood simply as the process of projecting objects in a three-dimensional virtual space onto a plane according to the principle of perspective, forming a visual image for the two eyes.
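A minimal sketch of that idea, a pinhole-style perspective projection (the focal length f and the camera-space convention are our assumptions, not taken from this application):

```python
def project(point, f=1.0):
    """Project a camera-space 3-D point (x, y, z), with z > 0, onto the image plane."""
    x, y, z = point
    # Similar triangles of the perspective model: farther points shrink.
    return (f * x / z, f * y / z)
```

For instance, `project((2.0, 1.0, 4.0))` yields `(0.5, 0.25)`: the same object projected from twice the distance occupies half the extent on the plane.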

Specifically, a rendering engine is deployed in the cloud device; it can render the environment image of the terminal device according to the data produced by the logical operation, to obtain the first perspective image.

The rendering engine is the image rendering module deployed in the cloud device. In the embodiments of this application, "rendering engine" denotes the image rendering module in the cloud device; the module may also be an independent image processor. The embodiments of this application do not limit the concrete form or name of the image rendering module: any module capable of image rendering is applicable.

The first perspective image is the perspective image corresponding to the pose and position of the terminal device at the first time. Specifically, the first perspective image includes the image within the field of view of the terminal device at the first time, and the field of view of the first perspective image should be larger than the field of view of the terminal device at the first time.

The field of view of the first perspective image is the maximum field of view that the first perspective image can display. For example, it can be regarded as the spatial extent, in three-dimensional space, spanned from the position of the terminal device at the first time to the materials displayed at the edges of the first perspective image; in other words, the size of the space formed by connecting the materials at the edges of the first perspective image with the position of the terminal device.

The field of view of the terminal device at the first time refers to the spatial range the terminal device can observe at the first time. The size of the terminal device's field of view may be preset; for example, when the terminal device leaves the factory, the field of view of the human eye, or a viewing angle of some fixed size, may be set as the field of view of the terminal device.

As a possible implementation, the field of view of the terminal device at the first time may also be set by the user as needed; the embodiments of this application do not limit how it is set.

To facilitate understanding of the relationship between the field of view of the first perspective image and the field of view of the terminal device at the first time, FIG. 4 illustrates both schematically. As shown, the first perspective image needs to cover the image within the field of view of the terminal device at the first time and additionally include part of the image outside that field of view.

The part of the image outside the field of view of the terminal device may extend uniformly around that field of view, or may extend in a particular direction from it; this can be determined according to the pose and position of the terminal device at the first time.

It should be understood that the field of view of the first perspective image and the field of view of the terminal device at the first time are generally solid ranges; each can usually be abstracted as a cone whose apex is the terminal device and whose base is the extent of the displayable image. The apex angle of the cone corresponding to the field of view of the first perspective image needs to be larger than that of the cone corresponding to the field of view of the terminal device at the first time.

Because the terminal device needs to display the image within its field of view at the second time according to the first perspective image, the field of view of the first perspective image needs to be larger than the field of view of the terminal device at the first time; this ensures that the terminal device can output the image within its field of view at the second time more accurately.

For example, to enlarge the field of view of the first perspective image, it may be made larger than the field of view of the terminal device in every direction. The angle by which the field of view of the first perspective image is expanded in each direction relative to the field of view of the terminal device can be set according to the actual scenario.
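For instance (the margin value and angle convention are illustrative only), if the terminal device's field of view is given as horizontal and vertical angles, the rendered image's field of view could simply add a per-direction margin:

```python
def expanded_fov(h_deg: float, v_deg: float, margin_deg: float = 10.0):
    """Field of view of the first perspective image: the device FOV plus a margin
    on every side, clamped below a full sphere."""
    return (min(h_deg + 2 * margin_deg, 360.0),
            min(v_deg + 2 * margin_deg, 180.0))
```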

The cloud device can also generate depth information of the first perspective image in the process of rendering the pre-stored environment image of the terminal device, and the depth information is used for indicating the distance from each material (such as people, objects, background images and the like) displayed by the first perspective image to the terminal device. If the first perspective image comprises the person A, the depth information comprises the distance from the person A to the terminal equipment.

In fact, the terminal device can be abstracted as a virtual camera: the image displayed by the terminal device is the image that the camera can capture, and the position of the camera is the position of the user in the VR application. The depth information may therefore be understood as the distance from each material (such as a person, an object, a background image, etc.) displayed in the first perspective image to the virtual camera.

Optionally, the depth information of the first perspective image may be a depth map of the first perspective image.

The cloud device may further obtain a motion vector of the first perspective image. The motion vector represents the change trend of pixels in the first perspective image: each pixel in the first perspective image has a certain movement trend, and the cloud device predicts the movement trend of each pixel and converts it into the motion vector.

In a specific implementation, each pixel in the first perspective image is processed in a block, that is, a plurality of pixels form a pixel block, and the plurality of pixels in the pixel block can be considered to have the same moving trend.

In order to indicate the variation trend of the pixels on the first view image more conveniently, the motion vector is used for representing the variation trend of the pixel block on the first view image.

The motion vector of the first view image can be obtained while encoding and compressing the first view image, since the encoding process involves multiple frames: the first view image and the frames adjacent to it. The specific process of acquiring the motion vector of the first perspective image is as follows:

the cloud device acquires an image of a frame adjacent to the first perspective image (e.g., the previous frame or several previous frames), divides the first perspective image into a plurality of pixel blocks, and searches for the position of each pixel block in the adjacent frame; the relative offset of each pixel block between its spatial position in the adjacent frame and its spatial position in the first perspective image then forms the motion vector of the first perspective image.
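
The block search described above is essentially the block-matching motion estimation used by video encoders. The following is a minimal sketch of that idea, assuming grayscale numpy frames and a sum-of-absolute-differences matching criterion; the block size, search window, and function name are illustrative, not part of the method.

```python
import numpy as np

def block_motion_vectors(prev_frame, cur_frame, block=16, search=8):
    """Estimate one motion vector per block of cur_frame by searching for the
    best-matching block in prev_frame (sum of absolute differences)."""
    h, w = cur_frame.shape
    mv = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            cur = cur_frame[y:y + block, x:x + block].astype(np.int32)
            best, best_off = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = prev_frame[yy:yy + block, xx:xx + block].astype(np.int32)
                        sad = np.abs(cur - cand).sum()  # matching cost
                        if sad < best:
                            best, best_off = sad, (dx, dy)
            mv[by, bx] = best_off  # block offset relative to prev_frame
    return mv
```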

After the cloud device obtains the first perspective image, the depth information of the first perspective image, and the motion vector, the cloud device may use part or all of the information as information of the first perspective image, and then send the information of the first perspective image to the terminal device.

Optionally, in order to efficiently send the information of the first view image to the terminal device, the cloud device may encode and compress the first view image to ensure that less resources are occupied during data transmission, and further, the efficiency of data transmission may be improved.

The cloud device may be deployed with an encoder, and configured to implement encoding compression on the first view image.

Because the first perspective image, its depth information, and the motion vector correspond to one another, the cloud device can encapsulate the information of the first perspective image together and then send the encapsulated information of the first perspective image to the terminal device.

Step 303: the cloud device sends information of the first visual angle image to the terminal device, and the first visual angle image is a visual angle image corresponding to the posture and the position of the terminal device at the first moment.

The cloud device may be provided with an encapsulation and stream-pushing module for encapsulating the information of the first view angle image; this module can also push the encapsulated information of the first view angle image to the terminal device.

As a possible implementation manner, the information of the first perspective image may further include information of a posture and a position of the terminal device at the first time, so that after the terminal device receives the information of the first perspective image, it can be definitely determined that the first perspective image is a perspective image corresponding to the posture and the position of the terminal device at the first time.

Alternatively, the information of the posture and position of the terminal device at the first time may not be carried in the information of the first perspective image: when the terminal device sends the first information to the cloud device, it may store the posture and position information of the terminal device at the first time, and when it receives the information of the first perspective image, it may obtain that information locally.
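
As an illustration only, the encapsulated information of the first view image could be laid out as below, with the posture and position fields optional as just described; every field name here is an assumption made for the sketch, not a format defined by the method.

```python
from dataclasses import dataclass
from typing import Optional, Tuple
import numpy as np

@dataclass
class FirstViewImageInfo:
    """Hypothetical container for the information of the first view image."""
    frame: bytes            # encoded and compressed first view image bitstream
    depth: np.ndarray       # depth map of the first view image
    motion: np.ndarray      # motion vectors (per pixel block) of the image
    pose: Optional[Tuple[float, float, float, float]] = None  # (rX, rY, rZ, w) at the first time, if carried
    position: Optional[Tuple[float, float, float]] = None     # (pX, pY, pZ) at the first time, if carried
```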

It should be noted that, when the cloud device receives information from the terminal device at a certain frequency, the first information sent by the terminal device is not necessarily received: the time point at which the terminal device sends the first information may not coincide with a time point at which the cloud device receives information. To allow the terminal device to be sure that the first perspective image corresponds to its posture and position at the first time without carrying that information in the information of the first perspective image, two approaches are possible. First, when the cloud device successfully receives the first information, it may send a response message to the terminal device notifying it of the successful reception; after receiving the response message, the terminal device stores the posture and position information of the terminal device at the first time. Second, the terminal device may send the first information according to the frequency at which the cloud device receives information, or according to the frequency at which the locally stored posture and position of the terminal device are refreshed, so that the first information is certain to be received by the cloud device, which can then render the pre-stored environment image of the terminal device according to the posture and position at the first time to obtain the first perspective image; in this way, when the cloud device delivers the information of the first perspective image, the terminal device can determine the posture and position at the first time corresponding to the first perspective image.

The manner in which the terminal device acquires the posture and the position of the terminal device at the first time is only an example, and any manner that the terminal device determines that the information of the first perspective image is the perspective image corresponding to the posture and the position of the terminal device at the first time when receiving the information of the first perspective image is applicable to the embodiment of the present application.
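
As one concrete illustration of the local-storage option described above, the terminal device could keep a short history of uploaded postures and positions keyed by a sequence identifier, and look the entry up when the corresponding view image arrives. The class below is a hypothetical sketch, not part of the claimed method.

```python
from collections import OrderedDict

class PoseHistory:
    """Keep the last few uploaded poses keyed by a sequence id, so the
    terminal can recover the first-time posture and position when the
    matching view image comes back from the cloud device."""

    def __init__(self, capacity=16):
        self._poses = OrderedDict()
        self._capacity = capacity

    def record(self, seq_id, pose, position):
        self._poses[seq_id] = (pose, position)
        while len(self._poses) > self._capacity:
            self._poses.popitem(last=False)  # drop the oldest entry

    def lookup(self, seq_id):
        # Returns (pose, position) at the first time, or None if evicted.
        return self._poses.get(seq_id)
```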

Step 304: after the terminal device receives the information of the first view angle image from the cloud device, the terminal device displays the image within the view angle range of the terminal device at the second time based on the information of the first view angle image and the posture change and position change of the terminal device from the first time to the second time, where the second time is later than the first time.

If the cloud device encapsulates the information of the first view image, the terminal device needs to decapsulate the encapsulated information of the first view image.

That is to say, a decapsulation module may be deployed in the terminal device, and configured to decapsulate the received information of the first perspective image after encapsulation, so as to obtain the information of the first perspective image.

If the cloud device performs encoding compression on the first view image, the terminal device further needs to decode the encoded and compressed first view image to obtain the first view image.

Between the time the terminal device sends the first information to the cloud device and the time it receives the information of the first perspective image, the posture and position of the terminal device may have changed; that is, they have changed from the posture and position at the first time to the posture and position at the second time.

In order to ensure that the terminal device displays the image correctly and to reduce the display delay, the terminal device should display the image within its view angle range at the second time; to do so, the terminal device needs to use the posture and position of the terminal device at the second time together with the received information of the first perspective image.

First, the terminal device needs to acquire its posture and position at the second time, for example through the tracker or the sensor; the manner of acquiring the posture and position at the second time is the same as the manner of acquiring them at the first time.

In addition to obtaining the posture and position at the second time, the terminal device also needs to determine its posture and position at the first time; for the manner of determining them, refer to the description in step 303, which is not repeated here.

When the attitude and the position of the terminal device at the second moment and the attitude and the position of the terminal device at the first moment are determined, the attitude change and the position change of the terminal device from the first moment to the second moment can be determined.

The terminal device may obtain the posture change and the position change of the terminal device from the first time to the second time, that is, may determine the change of the rotation state of the terminal device from the first time to the second time and the position change in space.

Then, the terminal device may convert the first perspective image into a second perspective image based on the position change of the terminal device from the first time to the second time, where the second perspective image is a perspective image corresponding to the position of the terminal device at the second time.

After learning the spatial position change of the terminal device from the first time to the second time, the terminal device can shift the material displayed by the first perspective image accordingly to obtain the second perspective image. Because the view angle range of the first perspective image is larger than that of the terminal device, the second perspective image obtained after this conversion corresponds to the position of the terminal device at the second time, and its view angle range is consistent with that of the first perspective image, i.e., larger than the view angle range of the terminal device at the second time.

It should be understood that, since the view angle range of the first view image is an extension of the view angle range of the terminal device, the extension may take into account the maximum range within which the terminal device could move between the first time and the second time; this ensures that, after the first view image is converted into the second view image, the second view image covers all images within any possible view angle range of the terminal device at the second time.

The view angle range of the first view image can be preset: the cloud device estimates in advance the time from receiving information from the terminal device to outputting the information of the view image, determines the possible movement range of the terminal device within that time, and then determines the view angle range of the first view image.
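
As an illustration of such presetting, the sketch below derives an extended rendering field of view from the estimated cloud round-trip delay and an assumed worst-case head rotation speed. The linear model and both constants are assumptions made for the example, not values given by the method.

```python
def rendered_fov(base_fov_deg, est_delay_s, max_head_speed_deg_s=300.0,
                 hard_cap_deg=180.0):
    """Widen the rendered field of view by the largest rotation the head
    could plausibly make while the cloud renders and delivers the frame."""
    margin = max_head_speed_deg_s * est_delay_s          # worst-case rotation
    return min(base_fov_deg + 2 * margin, hard_cap_deg)  # widen both sides

# e.g. a 90-degree device FOV with a 50 ms round trip -> 90 + 2*15 = 120 degrees
print(rendered_fov(90.0, 0.05))
```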

Optionally, if the view angle range of the first view angle image is greater than the view angle range of the terminal device at the first time, because the view angle range of the first view angle image is larger, the second view angle image can better cover the images of the terminal device in all possible view angle ranges at the second time, and then the terminal device can more accurately display the images of the terminal device in the view angle range at the second time, so that the user experience is better.

After the terminal device obtains the second perspective image through conversion, the terminal device may determine an image within the perspective range of the terminal device from the second perspective image, and then may display the image within the perspective range of the terminal device in the second perspective image.

Specifically, to convert the first perspective image into the second perspective image, the terminal device first needs to acquire the depth information of the second perspective image and the pixel change information of converting the first perspective image into the second perspective image.

The pixel change information of converting the first perspective image into the second perspective image refers to the change in the relative position of each pixel of the first perspective image before and after the conversion; since pixel changes usually occur in blocks, the pixel change information may be the change in the relative position of each pixel block of the first perspective image before and after the conversion.

And then, the terminal equipment adjusts the first visual angle image according to the depth information and the pixel change information of the second visual angle image, and converts the first visual angle image into the second visual angle image.

Specifically, the following two operations are performed:

First, the terminal device adjusts the depth information of the first perspective image according to the depth information of the second perspective image, i.e., adjusts the distance from each material displayed by the first perspective image to the terminal device.

Second, the terminal device adjusts the position of each pixel of the first perspective image according to the pixel change information.

The two operations may be performed simultaneously or sequentially, and the execution sequence is not limited in this embodiment of the application.

The following describes a manner in which the terminal device acquires the depth information of the second perspective image and the pixel change information:

First, the depth information of the second perspective image.

The terminal device may determine the depth information of the second perspective image from the depth information of the first perspective image, according to the position change of the terminal device from the first time to the second time.

Specifically, because the depth information of a perspective image is related to the position of the terminal device, the terminal device may determine its position change from the first time to the second time and then determine the depth information of the second perspective image based on the depth information of the first perspective image, so as to determine the front-back occlusion relationship of each material to be displayed in the second perspective image.

The depth information of the second perspective image is used to indicate the distance from each material (such as a person, an object, a background image, etc.) displayed by the second perspective image to the camera (the camera is an abstraction of the terminal device; for details, refer to the related description of the camera under the depth information of the first perspective image, which is not repeated here). For example, if the second perspective image includes a person B, the depth information includes the distance from person B to the terminal device.

Optionally, the depth information of the second perspective image may be a depth map of the second perspective image.

Second, the pixel change information.

The terminal device may determine the pixel change information according to a change in position of the terminal device from the first time to the second time based on the motion vector.

Specifically, since the pixel change is caused by the change in the position of the terminal device, the terminal device may determine its position change from the first time to the second time, and then determine the pixel change information based on the position of each pixel of the first perspective image and the motion vector.

Some materials displayed in the first view image occlude one another because of their different front-back positions. Based on the depth information of the second view image and the pixel change information, it can be judged whether an occluded area becomes visible after the first view image is converted into the second view image; occluded parts that become visible after the conversion need to be restored, that is, pixels need to be interpolated so that the image can be displayed.
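
The following is a minimal sketch of the interpolation idea: pixels that received no source pixel during the conversion (newly visible, previously occluded areas) are filled from nearby valid pixels. The row-wise nearest-valid fill is a deliberately crude stand-in for a real interpolation or inpainting step; the names and the strategy are illustrative.

```python
import numpy as np

def fill_disocclusions(image, valid_mask):
    """Fill pixels that received no source pixel during re-projection
    (valid_mask == False) by copying, per row, the first valid pixel at or
    to the right of each hole (clipped to the last valid column)."""
    out = image.copy()
    for y in range(out.shape[0]):
        row, mask = out[y], valid_mask[y]
        valid_x = np.flatnonzero(mask)
        if valid_x.size == 0:
            continue                      # nothing to copy from on this row
        hole_x = np.flatnonzero(~mask)
        idx = np.searchsorted(valid_x, hole_x).clip(max=valid_x.size - 1)
        row[hole_x] = row[valid_x[idx]]   # copy from the chosen valid column
    return out
```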

After determining the second perspective image according to the depth information of the second perspective image and the pixel change information, the terminal device displays the image within its view angle range in the second perspective image based on the posture change of the terminal device from the first time to the second time.

The terminal device can determine the change of the view angle range of the terminal device from the first time to the second time according to the posture change of the terminal device from the first time to the second time, and then an image in the view angle range of the terminal device in a second view angle image can be displayed according to the change of the view angle range of the terminal device.

Specifically, since the view angle range of the second view angle image is greater than the view angle range of the terminal device at the second time, in order to be able to display an image within the view angle range of the terminal device at the second time, the terminal device needs to determine the view angle range of the terminal device at the second time.

Specifically, the terminal device determines the view angle range of the terminal device at the second time according to the posture of the terminal device at the second time.

Since the posture of the terminal device may affect the view angle range of the terminal device, the terminal device may determine the posture of the terminal device at the second time, and then may determine the view angle range of the terminal device at the second time according to the posture of the terminal device at the second time.

Only the position change of the terminal device is considered in converting the first perspective image into the second perspective image. To finally display the image within the view angle range of the terminal device at the second time, the posture change of the terminal device from the first time to the second time is also required: from it, the change of the view angle range from the first time to the second time can be determined, and hence the view angle range of the terminal device at the second time. Based on the second view angle image, the image within the view angle range of the terminal device at the second time is then determined and displayed.

In the embodiment of the application, after receiving the visual angle image corresponding to the posture and the position of the terminal device at the first moment, the terminal device can display the image within the visual angle range of the terminal device at the second moment according to the posture change and the position change from the first moment to the second moment.

The following further describes the process of determining, by the terminal device, an image within the viewing angle range of the terminal device at the second time, with reference to specific data:

The attitude of the terminal device is represented by a quaternion (rX, rY, rZ, w), where rX, rY, and rZ represent the components of the rotation axis on the x, y, and z axes respectively, and w represents the angle of rotation about that axis; (pX, pY, pZ) represents the position information of the terminal device, where pX, pY, and pZ represent the components of the terminal device on the x, y, and z axes respectively.

Assume that the image matrix of the first perspective image is S; the attitude of the terminal device at the first time is R_s = (rX_s, rY_s, rZ_s, w_s) and its position is P_s = (pX_s, pY_s, pZ_s); the depth information of the first perspective image is D_s and the motion vector is M_s, where D_s and M_s are both matrices; the attitude of the terminal device at the second time is R_t = (rX_t, rY_t, rZ_t, w_t) and its position is P_t = (pX_t, pY_t, pZ_t); and the image matrix of the image within the view angle range of the terminal device at the second time is T.

First, the second view angle image is obtained according to the position change ΔP of the terminal device, the first view angle image, its depth information D_s, and the motion vector M_s.

The method comprises the following steps:

the method comprises a first step of calculating a position change Δ P of the terminal device from the first time to the second time, wherein Δ P is Pt-Ps

In the second step, from the depth information D_s of the first perspective image, the three-dimensional coordinate P_pixel = (pX_pixel, pY_pixel, pZ_pixel) of each pixel point in the first perspective image can be determined. According to the position change of the terminal device, the depth value of each pixel point in the second perspective image can then be determined as d_t = ‖P_t - P_pixel‖; that is, the depth information D_t of the second perspective image can be determined.

In the third step, according to the motion vector M_s and the position change ΔP of the terminal device, the pixel change information M_s·(ΔP/ΔP') can be determined, where ΔP' is the change from the position of the terminal device corresponding to the frame adjacent to the first view image to the position of the terminal device at the first time.

In the fourth step, each pixel of the first perspective image is adjusted to obtain the pixel-adjusted image matrix T_0; in effect, each pixel of S is shifted by its offset in the pixel change information M_s·(ΔP/ΔP'). (The formula for this step is published as an image in the original document and is not reproduced here.)

In the fifth step, the pixel-adjusted image matrix T_0 is adjusted according to the depth information D_t of the second perspective image, finally yielding the image matrix of the second perspective image, still denoted T_0.

And after the second visual angle image is obtained through conversion, determining an image within the visual angle range of the terminal equipment at the second moment.

Specifically, the attitude change ΔR of the terminal device from the first time to the second time is calculated, where ΔR = R_t - R_s, and ΔR is converted into a rotation matrix ΔR_r.

According to the rotation matrix ΔR_r, the image matrix T of the image within the view angle range of the terminal device at the second time is obtained: T_0·ΔR_r → T.
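
To make the walkthrough concrete, the following is a minimal numpy sketch of one secondary-projection pass. It assumes the per-pixel three-dimensional points have already been recovered from D_s and that the motion vectors have been expanded to one (dx, dy) per pixel; the fifth step's occlusion resolution and hole filling are omitted, the pixel adjustment is a nearest-neighbour scatter rather than a proper warp, and the delta rotation is formed directly in matrix space instead of via the component-wise quaternion difference used above. All names are illustrative.

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix from a quaternion q = (rX, rY, rZ, w)."""
    x, y, z, w = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

def reproject(S, P_pix, M_s, P_s, P_t, R_s, R_t, dP_prev):
    """S: (h, w[, c]) image matrix; P_pix: (h, w, 3) 3-D point per pixel,
    recovered from D_s; M_s: (h, w, 2) motion vector per pixel;
    dP_prev: the position change written as ΔP' in the text."""
    # First step: ΔP = P_t - P_s.
    dP = np.asarray(P_t, float) - np.asarray(P_s, float)

    # Second step: new per-pixel depth d_t = ||P_t - P_pixel||, i.e. D_t.
    D_t = np.linalg.norm(P_pix - np.asarray(P_t, float), axis=-1)

    # Third step: pixel change information M_s * (ΔP / ΔP'), reduced here
    # to a scalar ratio of magnitudes for simplicity.
    ratio = np.linalg.norm(dP) / max(np.linalg.norm(dP_prev), 1e-9)
    shift = M_s * ratio

    # Fourth step: move each source pixel by its shift to build T_0.
    h, w = S.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(np.rint(xs + shift[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.rint(ys + shift[..., 1]).astype(int), 0, h - 1)
    T0 = np.zeros_like(S)
    T0[yt, xt] = S[ys, xs]
    # Fifth step (omitted): use D_t to keep the nearest pixel where several
    # sources land on one target, and interpolate the disoccluded holes.

    # Finally, the delta rotation ΔR_r, built as R_t * R_s^-1 in matrix space.
    dR_r = quat_to_matrix(R_t) @ quat_to_matrix(R_s).T
    return T0, D_t, dR_r  # apply dR_r, then crop T0 to the device FOV
```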

Fig. 5 is a schematic flow chart of an image display method according to an embodiment of the present application.

Step 501, first, after capturing the posture and the position of the terminal device at the first time, a tracker in the terminal device uploads information of the posture and the position of the terminal device at the first time to the cloud device (which may be regarded as sending the first information).

Step 502, the cloud device periodically receives information from the terminal device, receives information of the posture and the position of the terminal device at the first time at a receiving time point, and triggers a logic engine in the cloud device to perform logic operation.

Step 503, the logic engine then sends the data after the logic operation to a rendering engine in the cloud device, and sends the information of the posture and position of the terminal device at the first time to the encapsulation and stream-pushing module in the cloud device.

Step 504, a rendering engine in the cloud device performs rendering operation to obtain the first perspective image, where the first perspective image includes an image within a perspective range of the terminal device at the first time, and the perspective range of the first perspective image is greater than the perspective range of the terminal device at the first time.

Step 505, during the rendering process, the rendering engine in the cloud device generates the depth information of the first perspective image; the rendering engine sends the first perspective image to an encoder in the cloud device, and sends the depth information of the first perspective image to the encapsulation and stream-pushing module in the cloud device.

Step 506, the encoder in the cloud device encodes and compresses the first view image and outputs the motion vector, and sends the motion vector and the encoded and compressed first view image to the encapsulation and stream-pushing module in the cloud device.

Step 507, after receiving the information of the posture and position of the terminal device at the first time, the encoded and compressed first perspective image, the depth information of the first perspective image, and the motion vector, the encapsulation and stream-pushing module in the cloud device takes the above as the information of the first perspective image, encapsulates it, and sends the encapsulated information of the first perspective image to the terminal device.

Step 508, the terminal device receives the encapsulated information of the first view image, which reaches the decapsulation module in the terminal device; the decapsulation module decapsulates it to obtain the information of the first view image, and sends the encoded and compressed first view image to a decoder in the terminal device.

Step 509, the decapsulation module in the terminal device sends the information of the pose and the position of the terminal device at the first time and the depth information of the first perspective image to an image processing system in the terminal device.

Step 510, the decoder decodes the encoded and compressed first view image to obtain the first view image, outputs the motion vector, and sends the first view image and the motion vector to the image processing system in the terminal device.

Step 511, the current moment is now the second time; after the tracker in the terminal device captures the posture and position of the terminal device at the second time, it sends the information of the posture and position of the terminal device at the second time to the image processing system in the terminal device.

Step 512, the image processing system in the terminal device determines the posture change and position change of the terminal device from the first time to the second time according to the postures and positions at the two times; determines, according to the position change from the first time to the second time, the depth information of the first perspective image, and the motion vector, the depth information of the second perspective image and the pixel change information of converting the first perspective image into the second perspective image; converts the first perspective image into the second perspective image according to the depth information of the second perspective image and the pixel change information; and determines the view angle range of the terminal device at the second time according to the posture change from the first time to the second time, thereby determining the image within the view angle range of the terminal device in the second perspective image.

Step 513, the image processing system in the terminal device sends the image within the view angle range of the terminal device in the second view angle image to the display module in the terminal device.

Step 514, the display module in the terminal device displays the image within the view angle range of the terminal device in the second view angle image.

It should be noted that the image processing system in the terminal device determines the image within the view angle range of the terminal device at the second time based on the information of the first view angle image and the posture change and position change of the terminal device from the first time to the second time; that is, it performs one re-projection operation on the first view angle image to obtain that image. For ease of understanding, this operation of the image processing system is referred to as secondary projection.

Motion-to-photon (MTP) delay is the concept introduced in VR technology to measure display delay: it refers to the time difference from the movement of the terminal device worn by the user (which involves the posture change and position change of the terminal device) to the corresponding change in the image seen by the user. When the MTP delay is no more than 20 ms, the motion sickness caused by a mismatch between motion and perception can be largely avoided.

By adopting the image display method provided in this embodiment of the application, the display delay, i.e., the MTP delay of the Cloud VR solution, can be effectively reduced, which better alleviates the motion sickness the Cloud VR solution easily causes.

Assume that the sum of the delay for the terminal device to capture its posture and position, the delay to upload them to the cloud device, and the delay for the cloud device to render and send the image data to the terminal device is estimated at 30-50 ms, and that the display refresh delay of the terminal device is estimated at 6-18 ms. With the existing serial mode of cloud rendering followed by terminal display, the delay of the existing Cloud VR solution is estimated at 36-68 ms: that is, for a given moment, the delay from the terminal device changing its posture and position at that moment to the terminal device receiving and displaying the image within its view angle range at that moment is estimated at 36-68 ms, comprising the capture delay, the upload delay, the cloud rendering and delivery delay, and the 6-18 ms display refresh delay.

With the image display method provided in this embodiment of the application, the cloud device's rendering and delivery of the information of the first view angle image and the terminal device's display of the image become parallel, cooperative processes. The MTP delay of a Cloud VR solution applying this method is therefore independent of the capture delay, the upload delay, and the cloud rendering and delivery delay, and depends only on the terminal device: after the posture and position of the terminal device change at the first time and are uploaded to the cloud device, the terminal device uses the view angle image of the first time obtained from the cloud device to display not the image within its view angle range at the first time, but the image within its view angle range at a second time later than the first time. In this way, the time from the posture and position change of the terminal device at the second time to the display of the image within its view angle range at the second time is reduced; that is, for a given moment, the delay from the posture and position change of the terminal device to the display of the image within its view angle range at that moment is shortened. The MTP delay in the Cloud VR solution is thus effectively reduced, and is estimated at 10-25 ms: about 1 ms to capture the posture and position of the terminal device, an estimated 3-5 ms for the secondary projection, and an estimated 6-18 ms for the display refresh.
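
The delay arithmetic above can be restated compactly; the figures below are the estimates quoted in this embodiment, and the 1 ms difference at the parallel upper bound comes from the text's rounding.

```python
# Delay figures quoted in the text (all values in milliseconds).
serial_cloud_path = (30, 50)  # capture + upload + cloud render + image downlink
display_refresh = (6, 18)     # terminal image refresh/display

# Existing serial scheme: the cloud path and the display sit end to end.
serial_mtp = tuple(a + b for a, b in zip(serial_cloud_path, display_refresh))
print(serial_mtp)             # (36, 68) -> the text's 36-68 ms estimate

# This method: cloud rendering runs in parallel, so only terminal-side
# stages remain on the MTP path.
capture = (1, 1)              # capture of posture and position
reprojection = (3, 5)         # secondary projection
parallel_mtp = tuple(sum(t) for t in zip(capture, reprojection, display_refresh))
print(parallel_mtp)           # (10, 24) -> the text's 10-25 ms estimate
```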

Based on the same inventive concept as the method embodiments, an embodiment of the present application provides an apparatus 600, specifically configured to implement the method executed by the terminal device in the above method embodiments. The structure of the apparatus is shown in fig. 6, and includes a sending unit 601, a receiving unit 602, and a display unit 603, where:

the sending unit 601 is configured to send first information to the cloud device, where the first information is used to indicate a posture and a position of the terminal device at a first time.

The receiving unit 602 is configured to receive information of a first perspective image from the cloud device, where the first perspective image is a perspective image corresponding to the posture and the position of the terminal device at the first time.

The display unit 603 is configured to display an image within a viewing angle range of the terminal device at a second time based on the information of the first viewing angle image and a posture change and a position change of the terminal device from the first time to the second time, where the second time is later than the first time.

In a possible implementation manner, the receiving unit 602 may include a decapsulating unit (the decapsulating unit may also be referred to as a decapsulating module), where if the cloud device encapsulates the information of the first view image, the decapsulating unit needs to decapsulate the encapsulated information of the first view image.

The display unit 603 needs to convert the first view image in order to display the image within the view range of the terminal device at the second moment, and the specific process is as follows:

the information of the first perspective image needs to include the first perspective image, and the display unit 603 may convert the first perspective image into a second perspective image based on a position change of the terminal device from the first time to the second time, where the second perspective image is a perspective image corresponding to a posture and a position of the terminal device at the second time.

Then, the display unit 603 displays an image within the view angle range of the terminal device in the second view angle image based on the posture change of the terminal device from the first time to the second time.

When the display unit 603 converts the first perspective image into the second perspective image, two parameters need to be determined based on the position change of the terminal device from the first time to the second time: the depth information of the second perspective image, and the pixel change information of converting the first perspective image into the second perspective image.

The display unit 603 may determine depth information of the second perspective image and pixel change information of the conversion of the first perspective image into the second perspective image according to a change in position of the terminal device from the first time to the second time. And then, converting the first perspective image into the second perspective image according to the depth information and the pixel change information of the second perspective image.

In one possible implementation, the display unit 603 determines the depth information of the second perspective image and the pixel variation information of the first perspective image converted into the second perspective image as follows:

the information of the first perspective image further comprises depth information, and the depth information of the second perspective image information is determined according to the position change of the terminal equipment from the first time to the second time based on the depth information of the first perspective image information.

Second, the information of the first perspective image further includes a motion vector, where the motion vector is used to represent the change trend of pixels on the first perspective image; the display unit 603 determines the pixel change information based on the motion vector according to the position change of the terminal device from the first time to the second time.

Specifically, the first perspective image may include an image within a perspective range of the terminal device at the first time, and the perspective range of the first perspective image needs to be larger than the perspective range of the terminal device at the first time, so as to ensure that the first perspective image can cover all possible perspective ranges of the terminal device at the second time as much as possible.

Before displaying the image within the viewing angle range of the terminal device at the second time, the display unit 603 needs to determine the viewing angle range of the terminal device at the second time according to the posture of the terminal device at the second time, so that the image within the viewing angle range of the terminal device at the second time can be determined from the second viewing angle image.

Optionally, the display unit 603 may include a decoding unit, and for example, the decoding unit may be the decoder, and the decoding unit is configured to decode the first view image after encoding and compressing.

The display unit 603 may further include an image processing unit, configured to perform the secondary projection according to the information of the first perspective image to obtain the image within the view angle range of the terminal device at the second time. The image processing unit is configured to implement the method executed by the image processing system in the method embodiments; for details, refer to the foregoing description, which is not repeated here.

Based on the same inventive concept as that of the method embodiment, an embodiment of the present invention provides an apparatus 700, which is specifically configured to implement the method executed by the cloud device in the method embodiment, and the apparatus has a structure as shown in fig. 7, and includes a receiving unit 701, a processing unit 702, and a sending unit 703, where:

the receiving unit 701 is configured to receive first information from a terminal device, where the first information is used to indicate a posture and a position of the terminal device at a first time.

The processing unit 702 is configured to render a pre-stored environment image of the terminal device according to the first information to obtain a first perspective image, where the first perspective image is a perspective image corresponding to the posture and the position of the terminal device at the first time.

The sending unit 703 is configured to send information of the first perspective image to the terminal device.

In a possible implementation manner, the sending unit 703 may include an encapsulation and stream-pushing unit (which may be the encapsulation and stream-pushing module described above), configured to encapsulate the information of the first perspective image and to send the encapsulated information of the first perspective image to the terminal device.

Optionally, the information of the first perspective image includes the first perspective image; the information of the first view image may further include depth information and a motion vector.

The first perspective image comprises an image within the perspective range of the terminal equipment at the first moment, and the perspective range of the first perspective image is larger than the perspective range of the terminal equipment at the first moment.

Optionally, the processing unit 702 may include a logic operation unit, for example, the logic operation unit may be a logic engine, and the logic operation unit is configured to perform logic operation according to the first information to obtain image state information that needs to be displayed by the terminal device at the first time.

The processing unit 702 may further include an image rendering unit, for example a rendering engine, configured to render the pre-stored environment image of the terminal device to obtain the first perspective image; specifically, the rendering may be performed based on the image state information, output by the logic operation unit, that the terminal device needs to display at the first time.

In a possible implementation manner, the processing unit 702 may further include an encoding unit, such as an encoder, where the encoding unit is configured to implement encoding compression on the first view image.

The division of units in the embodiments of the present application is schematic and is merely a division by logical function; there may be other division manners in actual implementation. In addition, each functional unit in the embodiments of the present application may be integrated in one processor, may exist alone physically, or two or more units may be integrated in one module. The integrated unit can be implemented in the form of hardware or in the form of a software functional module.

The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a terminal device (which may be a personal computer, a mobile phone, a network device, etc.) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

In the embodiment of the application, the cloud device and the terminal device can be presented in a form of dividing each functional module in an integrated manner. A "module" herein may refer to a particular ASIC, a circuit, a processor and memory that execute one or more software or firmware programs, an integrated logic circuit, and/or other device that provides the described functionality.

In a simple embodiment, as those skilled in the art will appreciate, the terminal device may take the form shown in fig. 8.

The apparatus 800 shown in fig. 8 includes at least one processor 801, a transceiver 802, and optionally a memory 803.

In one possible implementation, the apparatus 800 may further include a display 804; the apparatus may also include a sensor 805 for capturing the pose and position of the terminal device.

The memory 803 may be a volatile memory, such as a random access memory; the memory may also be a non-volatile memory such as, but not limited to, a read-only memory, a flash memory, a Hard Disk Drive (HDD) or a solid-state drive (SSD), or the memory 803 is any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 803 may be a combination of the above.

The specific connection medium between the processor 801 and the memory 803 is not limited in the embodiment of the present application. In the embodiment of the present application, the memory 803 and the processor 801 are connected by a bus 806, the bus 806 is represented by a thick line in the figure, and the connection manner between other components is merely illustrative and is not limited. The bus 806 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 8, but this is not intended to represent only one bus or type of bus.

The processor 801 may have a data transceiving function and be capable of communicating with other devices; for example, in this embodiment of the application, the processor 801 may send the first information to the cloud device and may also receive the information of the first perspective image from the cloud device. In the apparatus shown in fig. 8, an independent data transceiving module, such as the transceiver 802, may also be provided for transceiving data; when communicating with other devices, the processor 801 may transmit data through the transceiver 802, for example sending the first information to the cloud device through the transceiver 802 and receiving the information of the first perspective image from the cloud device through the transceiver 802.

When the terminal device takes the form shown in fig. 8, the processor 801 in fig. 8 may cause the terminal device to execute the method executed by the terminal device in any of the above method embodiments by calling the computer-executable instructions stored in the memory 803.

Specifically, the memory 803 stores therein computer-executable instructions for implementing the functions of the transmitting unit, the receiving unit, and the display unit in fig. 6, and the functions/implementation procedures of the transmitting unit, the receiving unit, and the display unit in fig. 6 can be implemented by the processor 801 in fig. 8 calling the computer-executable instructions stored in the memory 803.

Alternatively, a computer-executable instruction for implementing the function of the display unit in fig. 6 is stored in the memory 803, the function/implementation procedure of the display unit in fig. 6 may be implemented by the processor 801 in fig. 8 calling the computer-executable instruction stored in the memory 803, and the function/implementation procedure of the transmitting unit and the receiving unit in fig. 6 may be implemented by the transceiver 802 in fig. 8.

Wherein, when the processor 801 executes the function of the display unit, such as the operation related to displaying the image, such as displaying the image in the view angle range of the terminal device at the second moment, the processor 801 may display the image through the display 804 in the apparatus 800; that is, the processor 801 may display an image within the viewing angle range of the terminal device at the second time through the display 804 based on the information of the first viewing angle image and the posture change and the position change of the terminal device from the first time to the second time.

Optionally, when the processor 801 executes the function of the display unit, the processor may also display an image through a display in another device, for example, send a display instruction to the other device to instruct to display the image; that is, the processor 801 may display an image within the viewing angle range of the terminal device at the second time through the display in the other device based on the information of the first viewing angle image and the posture change and the position change of the terminal device from the first time to the second time.

In a simple embodiment, those skilled in the art will appreciate that the cloud device may take the form shown in fig. 9.

The communication apparatus 900 shown in fig. 9 includes at least one processor 901, and optionally, may further include a memory 902 and a transceiver 903.

Memory 902 may be a volatile memory, such as a random access memory; the memory may also be a non-volatile memory such as, but not limited to, a read-only memory, flash memory, hard disk or solid state disk, or the memory 902 is any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 902 may be a combination of the above memories.

The embodiment of the present application does not limit the specific connection medium between the processor 901 and the memory 902. In the embodiment of the present application, the memory 902 and the processor 901 are connected by a bus 904, the bus 904 is represented by a thick line in the figure, and the connection manner between other components is merely illustrative and is not limited. The bus 904 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.

The processor 901 may have a data transceiving function, and may be capable of communicating with other devices, and in the apparatus as shown in fig. 9, a separate data transceiving module, such as the transceiver 903, may also be provided for transceiving data; the processor 901 may transmit data via the transceiver 903 when communicating with other devices.

When the cloud device takes the form shown in fig. 9, the processor 901 in fig. 9 may call the computer-executable instructions stored in the memory 902, so that the cloud device can execute the method executed by the cloud device in any of the above method embodiments.

Specifically, the memory 902 stores computer-executable instructions for implementing the functions of the transmitting unit, the receiving unit, and the processing unit in fig. 7, and the functions/implementation processes of these units can be implemented by the processor 901 in fig. 9 calling the computer-executable instructions stored in the memory 902. Alternatively, the memory 902 stores computer-executable instructions for implementing the function of the processing unit in fig. 7 only; the function/implementation process of the processing unit may then be implemented by the processor 901 calling those instructions, and the functions/implementation processes of the transmitting unit and the receiving unit in fig. 7 may be implemented by the transceiver 903 in fig. 9.

Based on the same inventive concept as the method embodiment, the embodiment of the present application further provides a system, which can be seen in fig. 2 and includes a cloud device and a terminal device.

The terminal device is used for sending first information to the cloud device, and the first information is used for indicating the posture and the position of the terminal device at a first moment.

The cloud device is used for receiving first information from the terminal device, wherein the first information is used for indicating the posture and the position of the terminal device at a first moment; rendering a prestored environment image of the terminal equipment according to the first information to obtain a first visual angle image, wherein the first visual angle image is a visual angle image corresponding to the posture and the position of the terminal equipment at the first moment; and sending the information of the first perspective image to the terminal equipment.

Then, the terminal device is further configured to receive information of a first perspective image from the cloud device, where the first perspective image is a perspective image corresponding to the posture and position of the terminal device at the first time; and displaying the image within the visual angle range of the terminal equipment at a second moment based on the information of the first visual angle image and the posture change and the position change of the terminal equipment from the first moment to the second moment, wherein the second moment is later than the first moment.

Specifically, when the terminal device displays the image within the view angle range of the terminal device at the second moment, the first view angle image needs to be converted, and the specific process is as follows:

the information of the first perspective image comprises the first perspective image, the terminal equipment firstly converts the first perspective image into a second perspective image based on the position change of the terminal equipment from the first moment to the second moment, and the second perspective image is a perspective image corresponding to the position of the terminal equipment at the second moment; and then displaying an image in the view angle range of the terminal equipment in the second view angle image based on the posture change of the terminal equipment from the first time to the second time.

In order to convert the first perspective image into the second perspective image, the terminal device needs to determine depth information of the second perspective image and pixel change information of the second perspective image converted from the first perspective image according to a position change of the terminal device from the first time to the second time; and then converting the first perspective image into the second perspective image according to the depth information and the pixel change information of the second perspective image.

The following describes how the depth information of the second perspective image and the pixel change information are determined:

First, the information of the first perspective image further comprises the depth information of the first perspective image, and the terminal device determines the depth information of the second perspective image from the depth information of the first perspective image according to the position change of the terminal device from the first moment to the second moment.

Second, the information of the first perspective image further comprises a motion vector, wherein the motion vector is used for representing the change trend of pixels in the first perspective image, and the terminal device determines the pixel change information from the motion vector according to the position change of the terminal device from the first moment to the second moment.
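
The application does not fix formulas for these two derivations, so the sketch below only illustrates one plausible reading: the depth of the second perspective image is obtained by shifting the depth of the first perspective image along the direction of travel, and the pixel change is the motion vector scaled by the magnitude of the position change. Both rules, and the +z viewing-direction convention, are assumptions for illustration.

import numpy as np

def derive_conversion_inputs(first_depth, motion_vectors, position_change):
    # first_depth:     H x W, depth of the first perspective image
    # motion_vectors:  H x W x 2, change trend of the pixels
    # position_change: (dx, dy, dz) of the terminal device in metres,
    #                  with +z taken here as the viewing direction
    second_depth = first_depth - position_change[2]   # moved toward/away from the scene
    magnitude = np.linalg.norm(position_change)       # how far the device moved
    pixel_change = motion_vectors * magnitude         # scale the per-pixel trend
    return second_depth, pixel_change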

In a possible implementation manner, before displaying the image within the view angle range of the terminal device at the second moment, the terminal device needs to determine the view angle range of the terminal device at the second moment according to the posture of the terminal device at the second moment.
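
For instance, the view angle range can be read off the posture directly: rotate the camera's forward axis by the posture quaternion and pad the resulting yaw and pitch with half the field of view on each side. The forward-axis convention and the field-of-view value below are assumptions of this sketch.

import numpy as np

def view_angle_range(q, fov_deg=90.0):
    # q = (w, x, y, z), unit quaternion for the posture at the second moment;
    # the camera is assumed to look along -Z in its own frame.
    w, x, y, z = q
    fwd = np.array([-(2*x*z + 2*w*y),        # the rotated (0, 0, -1) axis
                    -(2*y*z - 2*w*x),
                    -(1 - 2*x*x - 2*y*y)])
    yaw = np.degrees(np.arctan2(fwd[0], -fwd[2]))
    pitch = np.degrees(np.arcsin(np.clip(fwd[1], -1.0, 1.0)))
    half = fov_deg / 2
    return (yaw - half, yaw + half), (pitch - half, pitch + half)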

To convert the first perspective image into the second perspective image and to cover, as far as possible, every view angle range the terminal device may have at the second moment, the view angle range of the first perspective image should be relatively large; specifically, the first perspective image includes the image within the view angle range of the terminal device at the first moment, and the view angle range of the first perspective image is larger than the view angle range of the terminal device at the first moment.
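
A simple sizing rule makes the required margin concrete: render at least the display field of view plus the largest head rotation that can occur during one request/response round trip. The formula and the numbers below are only one illustrative way to satisfy the larger-view-angle requirement; the application itself does not prescribe them.

def required_render_fov(display_fov_deg, max_head_rate_deg_s, round_trip_s):
    # Pad both sides of the display FOV by the worst-case rotation
    # accumulated between the first moment and the second moment.
    return display_fov_deg + 2 * max_head_rate_deg_s * round_trip_s

# e.g. a 90 degree display FOV, 120 deg/s head turns, 50 ms round trip:
# required_render_fov(90, 120, 0.05) -> 102 degrees rendered on the cloud side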

As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
