Adaptive in-loop filtering for video coding
Reading note: This technology, "Adaptive in-loop filtering for video coding," was created by Ximin Zhang, Sang-Hee Lee, and Keith W. Rowe on 2019-06-28. Summary: The present disclosure relates to adaptive in-loop filtering for video coding. Techniques related to encoding video using adaptive in-loop filtering enablement are discussed. The techniques may include determining whether to perform in-loop filtering based on evaluating a maximum coding bit limit for a picture of the video, a quantization parameter for the picture, and an encoding structure of the video.
1. A video encoding system, comprising:
a memory for storing a single picture of a group of pictures of video for encoding; and
a processor coupled to the memory, the processor to:
determine a maximum coding bit limit and a quantization parameter for the single picture;
set an in-loop filter indicator for the single picture based at least in part on a comparison of the maximum coding bit limit to a first threshold and a comparison of the quantization parameter to a second threshold, wherein the in-loop filter indicator for the single picture is set to off when the maximum coding bit limit compares unfavorably with the first threshold and the quantization parameter compares favorably with the second threshold; and
encode the single picture based at least in part on the in-loop filter indicator to generate a bitstream.
2. The video encoding system of claim 1, wherein the bitstream comprises an Alliance for Open Media (AOM) compliant bitstream, and wherein the in-loop filter indicator indicates whether at least one of a constrained direction enhancement filter or a loop recovery filter is implemented for all coding units of the single picture.
3. The video encoding system of claim 1 or 2, wherein when the maximum coding bit limit compares favorably with the first threshold or the quantization parameter compares unfavorably with the second threshold, the processor is further to:
determine whether the single picture is a scene change picture; and
set the in-loop filter indicator to on when the single picture is determined to be a scene change picture.
4. The video encoding system of claim 1 or 2, wherein when the maximum coding bit limit compares favorably with the first threshold or the quantization parameter compares unfavorably with the second threshold, the processor is further to:
set the in-loop filter indicator for the single picture based on a coding structure associated with coding the group of pictures.
5. The video encoding system of claim 4, wherein the coding structure comprises a hierarchical B structure, the single picture is a non-reference B picture, and, to set the in-loop filter indicator for the single picture, the processor is to set the in-loop filter indicator to off in response to the single picture being a non-reference B picture.
6. The video encoding system of claim 4, wherein the coding structure comprises a hierarchical B structure, the single picture comprises at least one of a non-reference B picture or a reference B picture that can only be referenced by non-reference B pictures, and, to set the in-loop filter indicator for the single picture, the processor is to set the in-loop filter indicator to off in response to the single picture being a non-reference B picture or a reference B picture that can only be referenced by non-reference B pictures.
7. The video encoding system of claim 4, wherein the coding structure comprises a low-delay coding structure having a constant maximum coding bit limit for pictures other than a first temporal picture of the group of pictures, and, to set the in-loop filter indicator for the single picture, the processor is to set the in-loop filter indicator to off for the group of pictures at a fixed picture interval.
8. The video encoding system of claim 4, wherein the coding structure comprises an adaptive quantization parameter low delay coding structure, and, to set the in-loop filter indicator for the single picture, the processor is to set the in-loop filter indicator to off when a high quantization parameter is associated with the single picture and to set the in-loop filter indicator to on when a low quantization parameter is associated with the single picture.
9. The video encoding system of claim 4, wherein the coding structure comprises at least one I picture, the single picture is an I picture, and, to set the in-loop filter indicator, the processor is to set the in-loop filter indicator to on in response to the single picture being an I picture.
10. The video encoding system of claim 1 or 2, wherein, when the in-loop filter indicator is set to on for the single picture, the processor is further to:
determine that the single picture matches a reference picture associated with the single picture; and
in response to the single picture matching the reference picture, set the in-loop filter indicator of the single picture to off prior to encoding the single picture.
11. The video encoding system of claim 1, wherein when the in-loop filter indicator is set to on for the single picture, the processor is further to:
determine, for a coding unit of the single picture, at least one of: that a motion vector associated with the coding unit is a zero motion vector or that a prediction residual associated with the coding unit compares unfavorably with a third threshold;
in response to the coding unit having a zero motion vector or the prediction residual comparing unfavorably with the third threshold, set a coding unit level in-loop filter indicator of the coding unit to off; and
in response to the coding unit level in-loop filter indicator being off for the coding unit, skip in-loop filtering for the coding unit.
12. The video encoding system of claim 1, wherein when the in-loop filter indicator is set to on for the single picture, the processor is further to:
determine, for a coding unit of the single picture, that a motion vector associated with the coding unit is a zero motion vector and that a reference coding unit corresponding to the coding unit has a quantization parameter that compares favorably with a third threshold;
in response to the motion vector being a zero motion vector and the quantization parameter of the reference coding unit comparing favorably with the third threshold, set a coding unit level in-loop filter indicator of the coding unit to skip;
in response to the coding unit level in-loop filter indicator being set to skip for the coding unit, skip in-loop filter selection for the coding unit; and
perform in-loop filtering on the coding unit using in-loop filter parameters from the reference coding unit.
13. A video encoding method, comprising:
determining a maximum coding bit limit and a quantization parameter for a single picture in a group of pictures;
setting an in-loop filter indicator for the single picture based at least in part on a comparison of the maximum coding bit limit to a first threshold and a comparison of the quantization parameter to a second threshold, wherein the in-loop filter indicator for the single picture is set to off when the maximum coding bit limit compares unfavorably with the first threshold and the quantization parameter compares favorably with the second threshold; and
encoding the single picture based at least in part on the in-loop filter indicator to generate a bitstream.
14. The method of claim 13, wherein the bitstream comprises an Alliance for Open Media (AOM) compliant bitstream, and wherein the in-loop filter indicator indicates whether at least one of a constrained direction enhancement filter or a loop recovery filter is implemented for all coding units of the single picture.
15. The method of claim 13 or 14, wherein when the maximum coding bit limit compares favorably with the first threshold or the quantization parameter compares unfavorably with the second threshold, the method further comprises:
determining whether the single picture is a scene change picture; and
setting the in-loop filter indicator to on when the single picture is determined to be a scene change picture.
16. The method of claim 13 or 14, wherein when the maximum coding bit limit compares favorably with the first threshold or the quantization parameter compares unfavorably with the second threshold, the method further comprises:
setting an in-loop filter indicator for the single picture based on a coding structure associated with coding the group of pictures.
17. The method of claim 16, wherein the coding structure comprises a hierarchical B structure, the single picture is a non-reference B picture, and setting the in-loop filter indicator for the single picture comprises: setting the in-loop filter indicator to off in response to the single picture being a non-reference B picture.
18. The method of claim 16, wherein the coding structure comprises a hierarchical B structure, the single picture comprises at least one of a non-reference B picture or a reference B picture that can only be referenced by non-reference B pictures, and setting the in-loop filter indicator for the single picture comprises: setting the in-loop filter indicator to off in response to the single picture being a non-reference B picture or a reference B picture that can only be referenced by non-reference B pictures.
19. The method of claim 16, wherein the coding structure comprises a low-delay coding structure having a constant maximum coding bit limit for pictures other than a first temporal picture of the group of pictures, and setting the in-loop filter indicator for the single picture comprises: setting the in-loop filter indicator to off for the group of pictures at a fixed picture interval.
20. The method of claim 16, wherein the coding structure comprises an adaptive quantization parameter low delay coding structure, and setting the in-loop filter indicator for the single picture comprises: setting the in-loop filter indicator to off when a high quantization parameter is associated with the single picture and to on when a low quantization parameter is associated with the single picture.
21. The method of claim 16, wherein the coding structure comprises at least one I picture, the single picture is an I picture, and setting the in-loop filter indicator comprises: setting the in-loop filter indicator to on in response to the single picture being an I picture.
22. The method of claim 13 or 14, wherein when the in-loop filter indicator is set to on for the single picture, the method further comprises:
determining that the single picture matches a reference picture associated with the single picture; and
in response to the single picture matching the reference picture, setting the in-loop filter indicator of the single picture to off prior to encoding the single picture.
23. The method of claim 13, wherein when the in-loop filter indicator is set to on for the single picture, the method further comprises:
determining, for a coding unit of the single picture, at least one of: that a motion vector associated with the coding unit is a zero motion vector or that a prediction residual associated with the coding unit compares unfavorably with a third threshold;
in response to the coding unit having a zero motion vector or the prediction residual comparing unfavorably with the third threshold, setting a coding unit level in-loop filter indicator of the coding unit to off; and
in response to the coding unit level in-loop filter indicator being off for the coding unit, skipping in-loop filtering for the coding unit.
24. At least one machine readable medium comprising:
a plurality of instructions that, in response to being executed on a computing device, cause the computing device to carry out the method according to any one of claims 13-23.
25. An apparatus, comprising:
means for performing the method of any of claims 13-23.
Technical Field
The present disclosure relates to adaptive in-loop filtering for video coding.
Background
In compression/decompression (codec) systems, compression efficiency and video quality are important performance criteria. For example, visual quality is an important aspect of the user experience in many video applications, and compression efficiency affects the amount of memory storage required to store video files and/or affects the amount of bandwidth required to transmit and/or stream video content. A video encoder compresses video information so that more information can be sent over a given bandwidth or stored in a given memory space or the like. The compressed signal or data is then decoded by a decoder, which decodes or decompresses the signal or data for display to a user. In most implementations, a higher visual quality with higher compression is desirable.
In-loop filtering (including deblocking filtering and other enhancement filtering) is an important feature in modern video coding standards. Such filtering improves objective and subjective video quality and compression efficiency. The standards define parameters to adjust such filtering operations. However, the standards may not define techniques for selecting those parameters, leaving that choice to each implementation.
It may be advantageous to improve in-loop filter selection to provide improved compression efficiency and/or video quality. With respect to these and other considerations, current improvements are needed. As the need to compress and transmit video data becomes more prevalent, such improvements may become critical.
Disclosure of Invention
According to an aspect of the present disclosure, there is provided a video encoding system including: a memory for storing a single picture of a group of pictures of video for encoding; and a processor coupled to the memory, the processor to: determine a maximum coding bit limit and a quantization parameter for the single picture; set an in-loop filter indicator for the single picture based at least in part on a comparison of the maximum coding bit limit to a first threshold and a comparison of the quantization parameter to a second threshold, wherein the in-loop filter indicator for the single picture is set to off when the maximum coding bit limit compares unfavorably with the first threshold and the quantization parameter compares favorably with the second threshold; and encode the single picture based at least in part on the in-loop filter indicator to generate a bitstream.
According to another aspect of the present disclosure, there is provided a video encoding method including: determining a maximum coding bit limit and a quantization parameter for a single picture in a group of pictures; setting an in-loop filter indicator for the single picture based at least in part on a comparison of the maximum coding bit limit to a first threshold and a comparison of the quantization parameter to a second threshold, wherein the in-loop filter indicator for the single picture is set to off when the maximum coding bit limit compares unfavorably with the first threshold and the quantization parameter compares favorably with the second threshold; and encoding the single picture based at least in part on the in-loop filter indicator to generate a bitstream.
According to yet another aspect of the disclosure, there is provided at least one machine readable medium comprising: a plurality of instructions that, in response to being executed on a computing device, cause the computing device to carry out the video encoding method described above.
According to yet another aspect of the present disclosure, there is provided an apparatus comprising: means for performing the above-described video encoding method.
Drawings
The materials described herein are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings. For simplicity and clarity of illustration, elements illustrated in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals have been repeated among the figures to indicate corresponding or analogous elements. In the drawings:
FIG. 1 is an illustration of an example system for providing video encoding;
FIG. 2 illustrates an example video image;
FIG. 3 illustrates example Constrained Direction Enhancement Filter (CDEF) directions;
FIG. 4 shows example pixel values for a pixel to be filtered and neighboring pixels;
FIG. 5 shows example filter taps of an example Constrained Direction Enhancement Filter (CDEF) combination;
FIG. 6 shows example filter taps of an example Loop Recovery Filter (LRF);
FIG. 7 illustrates an example group of pictures having a hierarchical B structure;
FIG. 8 shows an example group of pictures with a low-latency coding structure;
FIG. 9 shows an example group of pictures with an adaptive quantization parameter low delay coding structure;
FIG. 10 is a flow diagram illustrating an example process of video encoding including picture level and coding unit level in-loop filter skip decisions;
FIG. 11 shows an example bitstream;
FIG. 12 shows a block diagram of an example encoder integrated with adaptive in-loop filtering;
FIG. 13 is a flow diagram illustrating an example process for video encoding including adaptively enabling and disabling in-loop filtering;
FIG. 14 is an illustration of an example system for video encoding including adaptively enabling and disabling in-loop filtering;
FIG. 15 is an illustrative diagram of an example system; and
FIG. 16 illustrates an example device, all arranged in accordance with at least some embodiments of the present disclosure.
Detailed Description
One or more embodiments or implementations are now described with reference to the drawings. While specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. A person skilled in the relevant art will recognize that other configurations and arrangements can be used without departing from the spirit and scope of the description. It will be apparent to one skilled in the relevant art that the techniques and/or arrangements described herein may be employed in a variety of other systems and applications, in addition to those described herein.
Although the following description sets forth various implementations that may be presented in an architecture such as a system-on-a-chip (SoC) architecture, implementations of the techniques and/or arrangements described herein are not limited to a particular architecture and/or computing system and may be implemented by any architecture and/or computing system for a similar purpose. For example, various architectures employing, for example, multiple Integrated Circuit (IC) chips and/or packages, and/or various computing devices and/or Consumer Electronics (CE) devices (such as set-top boxes, smart phones, etc.) may implement the techniques and/or arrangements described herein. In addition, although the following description may set forth numerous specific details (such as logical implementations, types and interrelationships of system components, logical partitioning/integration choices, etc.), claimed subject matter may be practiced without these specific details. In other instances, certain material (e.g., control structures and complete software instruction sequences) may not be shown in detail in order not to obscure the disclosure with material.
The material disclosed herein may be implemented in hardware, firmware, software, or any combination thereof. The materials disclosed herein may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any medium and/or mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include Read Only Memory (ROM); random Access Memory (RAM); a magnetic disk storage medium; an optical storage medium; a flash memory device; electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others.
References in the specification to "one embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other implementations (whether or not explicitly described herein). The terms "substantially", "close", "about", "close" and "approximately" generally refer to being within +/-10% of a target value. The term "compares favorably" when used with reference to a threshold indicates that the value in question is greater than or equal to the threshold. Similarly, the term "compares unfavorably" when used with reference to a threshold indicates that the value in question is less than or equal to the threshold.
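For illustration only, the comparison convention defined above may be sketched as follows; the function names are hypothetical and not part of any coding standard.

```python
# Illustrative sketch of the comparison convention defined above
# (function names are hypothetical, not from any coding standard).

def compares_favorably(value, threshold):
    """The value in question is greater than or equal to the threshold."""
    return value >= threshold

def compares_unfavorably(value, threshold):
    """The value in question is less than or equal to the threshold."""
    return value <= threshold
```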
Methods, apparatuses, devices, computing platforms, and articles of manufacture related to video coding, and in particular to adaptive enablement of in-loop filters including constrained direction enhancement filters and loop recovery filters, are described herein.
As noted above, in-loop filtering is an important feature in modern video coding standards that may provide improved efficiency and/or video quality. As discussed herein, a technique includes receiving a single picture of a group of pictures of video to be encoded, a maximum coding bit limit for the single picture, and a picture level Quantization Parameter (QP) for the single picture. As discussed further herein, the group of pictures has a coding structure that defines how each of its pictures is to be coded. The in-loop filter indicator is set to on (enabled) or off (disabled) for the single picture in response to the maximum coding bit limit, the QP, and the picture type of the single picture according to the coding structure, among other factors. When the in-loop filter indicator is set to on, one or more particular types of in-loop filtering may be performed on the coding units of the single picture. For example, some coding units may still be set to off at the coding unit level, as discussed further herein. When the in-loop filter indicator is set to off, such one or more types of in-loop filtering are skipped for all coding units of the single picture. Herein, the in-loop filtering that is enabled or disabled is any in-loop filtering other than deblocking filtering and Sample Adaptive Offset (SAO) filtering, such as filtering using a Constrained Direction Enhancement Filter (CDEF) and/or filtering using a Loop Recovery Filter (LRF) as further described herein. Such CDEF and/or LRF filtering may be described as enhancement filtering, selective filtering, optional filtering, CDEF filtering, LRF filtering, CDEF and LRF filtering, etc., to distinguish it from deblocking and/or SAO filtering. For example, deblocking filtering and/or SAO filtering, which are also in-loop, may be performed regardless of the results of setting the picture level and coding unit level in-loop filter indicators.
In one embodiment, when the maximum coding bit limit compares unfavorably with the first threshold and the quantization parameter compares favorably with the second threshold, the in-loop filter indicator for the single picture is set to off and such in-loop filtering (CDEF and/or LRF) is skipped for the single picture. Encoding of the single picture is then performed based at least in part on the in-loop filter indicator to generate a bitstream such that selective in-loop filtering (e.g., one or both of CDEF and/or LRF) is performed when the in-loop filter indicator is on, noting that certain coding units may still be skipped. The bitstream may conform to any suitable standard. For example, the bitstream may be an Alliance for Open Media (AOM) compliant bitstream.
When the maximum coding bit limit compares favorably with the first threshold or the quantization parameter compares unfavorably with the second threshold, the coding structure of the group of pictures and/or other factors are evaluated to determine whether to set the in-loop filter indicator of the single picture to on or off. In one embodiment, when the single picture is a scene change picture, the in-loop filter indicator for the single picture is set to on. In one embodiment, the in-loop filter indicator for the single picture is set to on when the single picture is an I picture within the coding structure. In one embodiment, when the coding structure is a hierarchical B structure, the in-loop filter indicator is set to off for non-reference B pictures and on for all other pictures (i.e., I pictures and reference B pictures), as further described herein. In another embodiment, when the coding structure is a hierarchical B structure, the in-loop filter indicator is set to off for non-reference B pictures and for reference B pictures that can only be referenced by non-reference B pictures, and is set to on for all other pictures (i.e., I pictures and reference B pictures that can be referenced by other reference B pictures). In one embodiment, the in-loop filter indicator is set to off at a fixed picture interval (e.g., every other picture, every third picture, or every fourth picture) when the coding structure is a low-delay coding structure having a constant maximum coding bit limit for pictures other than the first temporal picture of the group of pictures. In one embodiment, when the coding structure is an adaptive quantization parameter low delay coding structure, as further described herein, the in-loop filter indicator is set to off when a high quantization parameter is associated with the single picture and is set to on when a low quantization parameter is associated with the single picture.
Other embodiments are further discussed herein, including further evaluation of the image and coding units when the in-loop filter indicator is set to on at the image level.
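The picture level decision flow described above may be sketched as follows. This is a simplified illustration under the comparison semantics defined earlier (unfavorably means less than or equal, favorably means greater than or equal); the function and parameter names are hypothetical, and the sketch covers only a subset of the coding structure rules.

```python
def picture_in_loop_filter_indicator(max_bit_limit, qp,
                                     bit_threshold, qp_threshold,
                                     is_scene_change=False,
                                     is_i_picture=False,
                                     is_non_reference_b=False):
    """Return True (on) or False (off) for the picture level indicator."""
    # Off when the bit budget compares unfavorably (<=) with the first
    # threshold and QP compares favorably (>=) with the second threshold.
    if max_bit_limit <= bit_threshold and qp >= qp_threshold:
        return False
    # Scene change pictures and I pictures keep filtering on.
    if is_scene_change or is_i_picture:
        return True
    # Hierarchical B structure: off for non-reference B pictures.
    if is_non_reference_b:
        return False
    return True
```

For example, under a 0-255 QP range with an assumed QP threshold of 240, a picture with a tight bit budget and QP of 250 would have the indicator set to off.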
As discussed, CDEF and/or LRF may be enabled or disabled at the picture and/or coding unit level. As used herein, the term coding unit indicates any segment of a picture, such as a block, super block, loop recovery unit, etc. In particular, within the Alliance for Open Media (AOM), AOMedia Video 1 (AV1) is the next generation video codec. In addition to deblocking filtering, AV1 includes two additional in-loop filters: CDEF and LRF. These additional in-loop filters remove coding artifacts and improve objective quality. The CDEF includes, in a first stage, a directional de-ringing filter that detects the direction of a coding unit, such as a super block, and then adaptively filters along the identified direction, and, in a second stage, a constrained low-pass filter. The LRF includes mutually exclusive Wiener and self-guided filters, such that only one is used for each coding unit (e.g., loop recovery unit) at a time. In AV1, the in-loop filters are cascaded such that deblocking filtering is performed first, CDEF second, and LRF third. In some encoding contexts, such processing can be a bottleneck in a hardware encoding pipeline. Although the additional bits used by CDEF and LRF are a small part of the overall bitstream, such processing can be burdensome, especially in very low bit rate coding.
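The cascade ordering described above may be sketched as follows, with placeholder filter functions standing in for actual deblocking, CDEF, and LRF implementations; all names are illustrative.

```python
def in_loop_filter_cascade(picture, cdef_on, lrf_on, deblock, cdef, lrf):
    # Deblocking always runs first; CDEF and LRF run only when enabled,
    # in that order.
    picture = deblock(picture)
    if cdef_on:
        picture = cdef(picture)
    if lrf_on:
        picture = lrf(picture)
    return picture
```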
The techniques discussed herein provide picture (or frame) level adaptive in-loop filter (i.e., CDEF, LRF, or both CDEF and LRF) on/off techniques to adaptively enable and disable in-loop filters for an entire picture based on picture type, group of pictures (GOP) structure, spacing between in-loop filters on the picture, and other factors. In addition, rate and QP adaptive in-loop filter on/off decision techniques are developed to avoid underflow and unnecessary in-loop filter processing. The techniques discussed herein achieve similar objective and subjective quality with less than half the complexity as compared to a full filter implementation. Thus, implementations of the discussed techniques provide improved device performance in terms of speed, processing efficiency, and power consumption.
Fig. 1 is an illustrative diagram of an example system 100 for providing video encoding arranged in accordance with at least some embodiments of the present disclosure. As shown in fig. 1, the system 100 includes a rate control module 101, a
Rate control module 101 and
System 100 may include other modules that are not shown for clarity of presentation. For example, the system 100 may include a transform module, an intra-prediction module, a motion estimation module, a motion compensation module, a reference picture buffer, a scanning module, and so forth, some of which are discussed herein with respect to fig. 12. In some embodiments, system 100 includes a local decoding loop for generating reference pictures or frames for use in the encoding process. These modules are known to those skilled in the art and are not discussed further herein for clarity in presenting the described techniques.
As discussed, rate control module 101 and
As shown in fig. 1, the
Rate control module 101 receives
In addition, the quantization parameter may be a parameter used in compressing a range of values of the picture to a single value. The quantization parameter may be determined using any suitable technique or techniques, such as rate control techniques known in the art.
The loop
As used herein, an in-loop filter indicator or loop filter indicator indicates that one or more in-loop or loop filters are to be applied at the level associated with the indicator (i.e., picture level, slice level, coding unit level, etc.). The in-loop filter indicator or loop filter indicator may indicate whether one or both of CDEF and LRF are to be used at a particular level. In one embodiment, the in-loop filter indicator or loop filter indicator indicates whether both CDEF and LRF are to be used at a particular level. In one embodiment, a CDEF filter indicator is provided to indicate whether CDEF is to be used at a particular level. In one embodiment, an LRF filter indicator is provided to indicate whether LRF is to be used at a particular level. Additionally, as used herein, when an indicator for a particular level indicates that CDEF, LRF, or both are not to be used (e.g., the indicator is off), then such processing is skipped for the entire sub-level corresponding thereto. For example, if the picture level indicator is off, processing is skipped for every slice and coding unit of the picture. If the slice level indicator is off, processing is skipped for every coding unit of the slice. However, when a particular level's indicator indicates that CDEF, LRF, or both are to be used (e.g., the indicator is on), such processing may still be skipped at a lower level if the lower level indicator is off. For example, if the picture level indicator is on, processing may still be skipped (e.g., turned off) for a particular slice or coding unit of the picture.
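The hierarchical skip behavior described above may be sketched as follows; the nested data layout is a hypothetical stand-in for real picture, slice, and coding unit structures.

```python
def units_to_filter(picture):
    """Collect IDs of coding units that actually receive CDEF/LRF.

    `picture` is a hypothetical nested structure:
    {"on": bool, "slices": [{"on": bool, "units": [{"on": bool, "id": int}]}]}
    """
    if not picture["on"]:
        return []  # picture level off: skip every slice and coding unit
    selected = []
    for slice_ in picture["slices"]:
        if not slice_["on"]:
            continue  # slice level off: skip all of its coding units
        # a coding unit is filtered only when its own indicator is also on
        selected.extend(u["id"] for u in slice_["units"] if u["on"])
    return selected
```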
With continued reference to fig. 1, the loop filter setting module 108 provides an in-loop filter indicator for the current image to the loop
As discussed, the maximum coded bit limit of the current picture is compared to a first threshold via the comparator module 105. The first threshold may be any suitable value and may be characterized as a maximum coded bit limit threshold. In one embodiment, the first threshold is an adaptive threshold based on the image resolution of the current image, such that the higher the resolution of the current image, the higher the value of the first threshold. Any one or more suitable techniques may be used to adjust the first threshold based on the resolution of the current image. In some examples, the first threshold is a product of a total number of largest coding units in the current image and a constant value.
Also as discussed, the quantization parameter of the current image is compared to a second threshold via the comparator module 105. The second threshold may be any suitable value and may be characterized as a quantization parameter threshold or the like. In some examples, the second threshold may be a constant value. For example, the second threshold may be a value of about 46 to 51. For example, in some coding contexts, the available quantization parameter may range from 1 to 51, such that the second threshold is a relatively high quantization parameter threshold. In other coding contexts, the available quantization parameter may range from 0 to 255 and the second threshold is in the range of 235-245, where 240 is particularly advantageous. In one embodiment, the second threshold is a particular percentage of the maximum available quantization parameter (e.g., the maximum available quantization parameter allowed by the encoding standard, encoding profile, etc.). In one embodiment, the second threshold is not less than 90% of the maximum available quantization parameter. In one embodiment, the second threshold is not less than 94% of the maximum available quantization parameter.
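A minimal sketch of how the two thresholds described above might be computed; the per-coding-unit bit constant and the QP fraction used here are illustrative assumptions, not values taken from the disclosure:

```python
def max_bits_threshold(num_largest_coding_units: int, bits_per_lcu: int = 1000) -> int:
    # First threshold: the LCU count times a constant, so higher-resolution
    # images (more largest coding units) get a proportionally higher threshold.
    return num_largest_coding_units * bits_per_lcu


def qp_threshold(max_available_qp: int, fraction: float = 0.94) -> int:
    # Second threshold: a high percentage of the maximum available QP,
    # e.g. roughly 240 when the QP range tops out at 255.
    return int(max_available_qp * fraction)
```

With a 0-255 QP range, `qp_threshold(255)` falls within the 235-245 band noted above.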
As discussed, if the maximum coded bit limit compares unfavorably with the first threshold and the quantization parameter compares favorably with the second threshold, the comparator module 105 provides a signal to the loop filter setting module 108, which sets the in-loop filter indicator of the current picture to off based on the received signal. Also as discussed, the in-loop filter indicator governs the use of CDEF, LRF, or both, but does not affect other filtering such as deblock filtering.
If the maximum coded bit limit compares favorably with the first threshold (e.g., is greater than or not less than it) or the quantization parameter compares unfavorably with the second threshold (e.g., is less than or not greater than it), the comparator module 105 provides a signal to the scene change determination module 106. Based on the received signal, the scene change determination module 106 determines whether the current image is a scene change image (e.g., whether the current image is associated with a scene change in the content represented by the video 121). The scene change determination module 106 may use any suitable technique or techniques to determine whether the current image is a scene change image. In some examples, the scene change determination module 106 determines whether the current image is a scene change image based on a comparison of the temporal complexity of the current image (e.g., as determined via the video analysis module 102) to an average temporal complexity of any number of previous images. For example, the current image is considered a scene change image if its temporal complexity exceeds the average temporal complexity by a threshold amount (e.g., the difference between the current image's temporal complexity and the average temporal complexity is greater than a threshold) or by a certain multiple (e.g., the ratio of the current image's temporal complexity to the average temporal complexity is greater than a threshold).
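The scene-change test above can be sketched as follows; the function name and the two threshold values are assumptions for illustration:

```python
def is_scene_change(current_tc: float, previous_tcs: list,
                    diff_threshold: float = 4.0,
                    ratio_threshold: float = 2.0) -> bool:
    # With no history, conservatively treat the picture as a scene change.
    if not previous_tcs:
        return True
    avg_tc = sum(previous_tcs) / len(previous_tcs)
    # Difference test: temporal complexity exceeds the average by a threshold amount.
    if current_tc - avg_tc > diff_threshold:
        return True
    # Ratio test: temporal complexity exceeds the average by a certain multiple.
    if avg_tc > 0 and current_tc / avg_tc > ratio_threshold:
        return True
    return False
```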
If it is determined that the current image is a scene change image, the scene change determination module 106 provides a signal to the loop filter setting module 109 to set the in-loop filter indicator of the current image to on based on the image being the scene change image. In addition, the loop filter setting module 109 provides the in-loop filter indicator for the current picture to the loop
If it is determined that the current picture is not a scene change picture, the scene change determination module 106 provides a signal to the coding structure adaptive loop filter decision module 104. Based on the received signal, the coding structure adaptive loop filter decision module 104 determines whether to skip in-loop filtering (i.e., CDEF, LRF, or both) at the picture level or to perform such filtering for the current picture, based on the coding structure of the group of pictures that includes the current picture, as discussed further below.
Based on the determination as to whether in-loop filtering is to be performed on the current picture, the coding structure adaptive loop filter decision module 104 provides a signal to the loop filter picture level evaluation module 107. If the signal indicates that no in-loop filtering is to be performed (e.g., in-loop filtering is off) such that the current picture is an in-loop filtered skipped picture, then loop filter picture level evaluation module 107 provides a signal to loop filter setting module 110 to set the in-loop filter indicator for the current picture to off. In addition, the loop filter setting module 110 provides an in-loop filter indicator for the current picture to the loop
As discussed, the loop
Fig. 2 illustrates an example video image 201 arranged in accordance with at least some embodiments of the present disclosure. Video image 201 may comprise any image of a video sequence or segment, such as video frames of VGA, HD, full HD, 4K, 5K, etc. As shown, the video image 201 may be segmented into one or more slices as shown with respect to slice 202 of the video image 201. In addition, video image 201 may be partitioned into one or more superblocks as shown with respect to superblock 203, which in turn may be partitioned into one or more blocks 205. In the illustrated embodiment, the video image 201 is partitioned into superblocks, which are partitioned into blocks. However, any frame or image structure that divides a frame into macroblocks, blocks, units, sub-units, etc. may be used. As used herein, the term coding unit or simply unit may refer to any partition or sub-partition of a video image located at the sub-image and sub-slice level. For example, a coding unit may refer to a largest coding unit, a prediction unit, a transform unit, a macroblock, a coding block, a prediction block, a transform block, and so on.
In addition, as shown in fig. 2, the video picture 201 has a maximum coding bit limit (MaxBits) 212, a Quantization Parameter (QP) 213, a picture type 214, and a loop filter indicator (LF ON/OFF) 215 corresponding thereto. QP 213 is any suitable value or parameter that determines a step size for associating transformed coefficients with a finite set of steps during quantization. For example, the residual of the video image 201 may be transformed from the spatial domain to the frequency domain using an integer transform that approximates a transform such as a Discrete Cosine Transform (DCT). The QP 213 determines the step size for associating the transformed coefficients with a limited set of steps, such that lower QP values retain more information, while higher QP values lose more information in the inherently lossy process of quantization. The picture type 214 may be any picture type, such as intra (I), predicted (P), bi-directional (B), non-reference B picture (B2), reference B picture (B1 or B), etc. Also as shown, the video image 201 has a Picture Level (PL) CDEF combination 216 corresponding thereto. PL CDEF combination 216 indicates the combination of CDEFs that can be used for a particular superblock, such as superblock 203, from the available combinations of CDEFs. That is, the PL CDEF combination 216 is selected from available CDEF combinations, and only the PL CDEF combination 216 may be used for the video image 201. In addition, superblock 203 has Superblock (SB) CDEF combination 217 corresponding thereto. SB CDEF combination 217 may be determined using any one or more suitable techniques. It is worth noting that any block 205 of superblock 203 can only perform its CDEF filtering using SB CDEF combination 217. Specifically, during CDEF filtering, for each block 205, a block direction 218 is determined, then SB CDEF combination 217 is applied according to the block direction 218, and such processing is repeated for each block 205 of the superblock 203.
For the next superblock, the process is repeated for each of its blocks using only the SB CDEF combination of that next superblock.
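The role of QP 213 described above, mapping transform coefficients to a finite set of steps, can be illustrated with a minimal sketch; the direct division by a single step size is a simplification for illustration, not an actual codec's QP-to-step-size mapping:

```python
def quantize(coeff: float, step: float) -> int:
    # Map a transformed coefficient to its nearest quantization level.
    return round(coeff / step)


def dequantize(level: int, step: float) -> float:
    # Reconstruct an approximate coefficient; a larger step (higher QP)
    # discards more information in this inherently lossy round trip.
    return level * step
```

For example, a coefficient of 10.2 survives a step of 2.0 with small error, while a step of 8.0 reconstructs it as 8.0, illustrating why higher QP values lose more information.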
Fig. 3 illustrates an example Constrained Direction Enhancement Filter (CDEF)
Fig. 4 illustrates example pixel values 400 of a pixel to be filtered 401 and neighboring
Fig. 5 illustrates example filter taps 500 of an example Constrained Direction Enhancement Filter (CDEF) combination 501 arranged in accordance with at least some embodiments of the present disclosure. As shown in fig. 5, a deringing filter tap 502 (labeled DR) is applied along a detected direction 504, while a low-pass filter tap 503 (labeled LP) is applied off the detected direction 504 (such as at about 45° and 135° relative to the detected direction 504). For example, the low-pass filter tap 503 is applied in the shape of a cross, where one line of the cross is about 45° relative to the detected direction of the block. In fig. 5, blank pixel locations (e.g., positions with no filter taps) are not used in filtering the pixel to be filtered 401.
When applying CDEF combination 501 to
Returning to fig. 2, video image 201 is also partitioned into a Loop Recovery Unit (LRU) as shown with respect to LRU 221. For example, video image 201 is fully partitioned into LRUs, which are not shown for clarity of presentation. As shown with respect to LRUs 221, each LRU includes a pixel array 222 (e.g., having pixel values) that extends in both a horizontal dimension and a vertical dimension. In addition, each LRU 221 has corresponding horizontal and vertical filter coefficients for application by
As shown with respect to the
Fig. 6 illustrates example filter taps 600 of an example Loop Recovery Filter (LRF) 223 arranged in accordance with at least some embodiments of the present disclosure. As shown, horizontal filter taps with corresponding horizontal filter coefficients (labeled fH0-6) are applied along the horizontal dimension, and vertical filter taps 602 with corresponding vertical filter coefficients (labeled fV0-6) are applied along the vertical dimension. When the
Returning now to fig. 1, as discussed, the coding structure adaptive loop filter decision module 104 determines whether in-loop filtering is to be performed on the current picture such that the current picture is an in-loop filtered skipped picture or an in-loop filtered non-skipped picture. Coding structure adaptive loop filter decision module 104 may use any of the techniques discussed herein to determine whether the current picture is an in-loop filtered skipped picture or an in-loop filtered non-skipped picture based on the coding structure of
In some embodiments, system 100 implements video coding with a hierarchical B structure in use. In such embodiments, the coding structure adaptive loop filter decision module 104 determines the picture type of the current picture and if the current picture is a non-reference B picture (e.g., a B picture that is not used to encode any other picture in the group of pictures), the coding structure adaptive loop filter decision module 104 sets the current picture as an in-loop filter skipped picture (e.g., the in-loop filter indicator is set to off). For example, such a non-reference B picture may be characterized as a B2 picture. In addition, in such embodiments, all other hierarchical B structure picture types (e.g., I-pictures, B0 pictures, B-pictures, and B1 pictures) are set to in-loop filter non-skipped pictures (e.g., the in-loop filter indicator is set to on). That is, when the coding structure is a hierarchical B structure and the current picture or single picture is a non-reference B picture, the in-loop filter indicator is set to off in response to the single picture being a non-reference B picture. As used herein, the term reference image (of any type) indicates an image used for motion estimation and compensation of another image. In addition, the term non-reference picture (of any type) indicates a picture that cannot be used for motion estimation and compensation of another picture.
In other embodiments with a hierarchical B structure in use, the coding structure adaptive loop filter decision module 104 may determine the picture type of the current picture and if the current picture is a low level B picture (e.g., a non-reference B picture or a reference B picture that can only be referenced by non-reference B pictures; i.e., a B1 picture or a B2 picture), the coding structure adaptive loop filter decision module 104 sets the current picture as an in-loop filter skipped picture (e.g., the in-loop filter indicator is set to off). In addition, in such embodiments, all other hierarchical B structure picture types (e.g., I-pictures, B0 pictures, and B-pictures) are set to in-loop filter non-skipped pictures (e.g., the in-loop filter indicator is set to on). That is, when the encoding structure is a hierarchical B structure and the current picture or the single picture is one of a non-reference B picture or a reference B picture that can only be referenced by the non-reference B picture, the in-loop filter indicator is set to off in response to the single picture being a non-reference B picture or a reference B picture that can only be referenced by the non-reference B picture.
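The two hierarchical-B variants above can be sketched as one decision function; the string picture-type labels and the flag name are assumptions for illustration ("B2" denotes a non-reference B picture, "B1" a reference B picture that only non-reference B pictures may reference):

```python
def in_loop_filter_on(picture_type: str, skip_low_level_b: bool = False) -> bool:
    # Variant 1: skip in-loop filtering only for non-reference B pictures (B2).
    skipped = {"B2"}
    if skip_low_level_b:
        # Variant 2: also skip low-level reference B pictures (B1).
        skipped = {"B1", "B2"}
    # All other picture types (I, B0, B, ...) keep the indicator on.
    return picture_type not in skipped
```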
Fig. 7 illustrates an example image set 700 having a hierarchical B structure, arranged in accordance with at least some embodiments of the present disclosure. As shown in fig. 7, the image set 700 includes
Each image of
As discussed, the group of
As discussed with respect to fig. 1, in some embodiments, the coding structure adaptive loop filter decision module 104 may determine the picture type of the current picture, and when the current picture is a non-reference B picture (e.g., a B picture that is not used to code any other picture in the group of pictures), the coding structure adaptive loop filter decision module 104 sets the in-loop filter indicator of the current picture to off. In addition, in such embodiments, all other hierarchical B structure picture types (e.g., I-pictures, B0 pictures, B-pictures, and B1 pictures) are set to in-loop non-skipped pictures (e.g., the in-loop filter indicator is set to on).
As shown in fig. 7, an implementation of this technique provides an in-loop filter coding decision 740. For example, setting the in-loop filter indicator to on for
Also as discussed with respect to fig. 1, in other embodiments, the coding structure adaptive loop filter decision module 104 may determine the picture type of the current picture and set the in-loop filter indicator of the current picture to off if the current picture is a low level B picture such that the low level B picture is a non-reference B picture or a reference B picture that can only be referenced by non-reference B pictures (e.g., a B1 picture or a B2 picture). In addition, in such embodiments, all other hierarchical B structure picture types (e.g., I-pictures, B0 pictures, and B-pictures) are set to in-loop non-skipped pictures (e.g., the in-loop filter indicator is set to on).
As shown in fig. 7, an implementation of this technique provides in-loop
Returning to fig. 1, as discussed, the coding structure adaptive loop filter decision module 104 may determine whether to perform in-loop filtering on the current picture based on a coding structure associated with coding a group of pictures including the current picture. In some embodiments, system 100 enables video encoding with a low-delay encoding structure. For example, such a low-latency encoding structure may restrict the display order and encoding order of groups of images to be the same and/or provide other encoding restrictions. In such embodiments, coding structure adaptive loop filter decision module 104 determines whether to perform in-loop filtering on the current picture by applying in-loop filter skip and non-skip indicators at fixed picture intervals. The fixed picture interval may provide in-loop filter skipped pictures at any suitable frequency, such as every other picture, every third picture, or even more frequently, where in-loop filter non-skipped pictures occur every third picture, every fourth picture, etc.
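The fixed-interval rule for low-delay structures can be sketched as follows, assuming pictures are indexed in coding order with the reference picture as picture 0; the default interval is an illustrative choice:

```python
def in_loop_filter_on_fixed_interval(picture_index: int, interval: int = 2) -> bool:
    # Picture 0 is an in-loop filter non-skipped picture; thereafter the
    # indicator is on once every `interval` pictures and off otherwise.
    return picture_index % interval == 0
```

With `interval=2` this yields the every-other-picture pattern; `interval=3` yields non-skipped pictures every third picture.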
Fig. 8 illustrates an example group of
As discussed with respect to fig. 1, in some embodiments, coding structure adaptive loop filter decision module 104 determines whether to perform in-loop filtering on the current picture by applying in-loop filter skip and non-skip indicators at fixed picture intervals. Implementing this technique at a fixed interval for every other image provides an in-loop filter decision 820, as shown in fig. 8. In such an embodiment, in response to
Also as discussed, the fixed picture interval may provide in-loop filtered skipped pictures at any suitable frequency, such as every other picture (e.g., as shown with respect to in-loop filter decision 820), every third picture, or even more frequently, where in-loop filtering non-skipped pictures occur every third picture or every fourth picture, etc. For example, in-loop filter decision 830 shows an example in which the in-loop filter indicator is set to off every three pictures (e.g., such that the in-loop filter indicator is set to off for
Returning to fig. 1, as discussed, the coding structure adaptive loop filter decision module 104 may use any suitable technique or techniques to determine whether to perform in-loop filtering. In some embodiments, system 100 implements video coding with an adaptive quantization parameter low-delay coding structure. For example, such an adaptive quantization parameter low-delay coding structure may restrict the display order and the coding order of a group of pictures to be the same but may provide adaptive quantization parameters (e.g., lower QP for pictures requiring higher quality and higher QP for pictures allowing lower quality). Such an adaptive quantization parameter low-delay coding structure may allow rate control module 101 and/or other modules of system 100 to provide different quantization parameters for different pictures in a group of pictures. In such embodiments, the coding structure adaptive loop filter decision module 104 determines whether to apply in-loop filtering to the current or single picture in response to the quantization parameters provided for the picture. In one embodiment, coding structure adaptive loop filter decision module 104 sets the picture as an in-loop filter skipped picture when a high quantization parameter is determined for the picture (e.g., via rate control module 101), and coding structure adaptive loop filter decision module 104 sets the picture as an in-loop filter non-skipped picture when a low quantization parameter is determined for the picture (e.g., via rate control module 101).
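A minimal sketch of the adaptive-QP decision above; the cutoff value is an assumed illustration of where "high" versus "low" quantization parameter might fall:

```python
def in_loop_filter_on_adaptive_qp(picture_qp: int, qp_cutoff: int = 40) -> bool:
    # Low QP -> higher-quality picture -> in-loop filter non-skipped (on).
    # High QP -> lower-quality picture -> in-loop filter skipped (off).
    return picture_qp < qp_cutoff
```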
Fig. 9 illustrates an example group of
As discussed with respect to fig. 1, in some examples, the coding structure adaptive loop filter decision module 104 determines whether to apply in-loop filtering to a current or single picture in response to a quantization parameter provided for the picture, such that in-loop filtering is skipped when the quantization parameter associated with the picture is a high-level (e.g., high) quantization parameter and in-loop filtering is applied when the quantization parameter associated with the picture is a low-level (e.g., low) quantization parameter. In this manner, in-loop filter skipped pictures and in-loop filter non-skipped pictures track the quantization parameters assigned to the pictures, such that high quality pictures are in-loop filter non-skipped pictures and low quality pictures are in-loop filter skipped pictures. In one embodiment, the coding structure adaptive loop filter decision module 104 may determine the in-loop filter skipped and non-skipped images as shown for the in-
Such settings may be provided using any one or more suitable techniques. In one embodiment, the coding structure adaptive loop filter decision module 104 compares the image quantization parameter to a threshold. The in-loop filter indicator is set to off when the quantization parameter compares favorably to the threshold, and the in-loop filter indicator is set to on when the quantization parameter compares unfavorably to the threshold. In one embodiment, the coding structure adaptive loop filter decision module 104 compares the image quantization parameter to the quantization parameter of the immediately preceding picture in the
Returning to fig. 1, as discussed, the loop filter setting modules 108 and 111 provide the loop
If the loop
If the in-loop non-skipped image is changed to an in-loop filter skipped image based on image and reference image matching, such indicator or flag is provided to the
In addition, loop
In some embodiments, for in-loop filtered non-skipped pictures, loop
In other embodiments, for in-loop filtered non-skipped pictures, loop
Any one or more suitable techniques may be used to determine whether a coding unit of a current in-loop filtered non-skipped picture is the same as its reference coding unit. In some embodiments, a coding unit of a current in-loop filtered non-skipped picture is determined to be the same as its reference coding unit when a residual (e.g., difference) between the coding unit of the current in-loop filtered non-skipped picture and the reference coding unit is less than a threshold. For example, the residual may be the sum of the squares of the pixel-by-pixel residuals of the coding unit.
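The coding-unit match test above can be sketched as follows; the flattened-pixel-list representation and the threshold value are assumptions for illustration:

```python
def coding_units_match(cu: list, ref_cu: list, threshold: int = 16) -> bool:
    # Sum of squares of the pixel-by-pixel residuals between the coding unit
    # and its reference coding unit; the units match when it is below threshold.
    ssd = sum((a - b) ** 2 for a, b in zip(cu, ref_cu))
    return ssd < threshold
```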
The techniques discussed herein may provide for efficient and effective compression of video data. Such techniques may provide computational efficiency and may be advantageously implemented via hardware. In addition, such techniques may provide in-loop filtered picture level and/or coding unit level skip decisions that reduce the in-loop filtering process and achieve similar subjective quality relative to full in-loop filtered coding. Such techniques, when implemented via a device, provide improved computation time, power usage, and improved objective and subjective video quality.
Fig. 10 is a flow diagram illustrating an
Processing continues at
For example, if the maximum coded bit limit is less than a threshold T1 and the quantization parameter is greater than another threshold T2 for the current picture, the in-loop filter is set to off at the picture level. For example, for rate control based in-loop filter on/off decisions, in some cases it may be advantageous to turn off the in-loop filter to prevent buffer underflow during encoding. As discussed, two thresholds are used: a maximum coded bit limit threshold T1 and a quantization parameter threshold T2. In one embodiment, the maximum coded bit limit threshold is an image resolution adaptive value such that the higher the resolution, the higher the value of the threshold. In one embodiment, the maximum coded bit limit threshold is determined as a constant multiplied by the total number of blocks (e.g., coding units) in the current picture. In one embodiment, the quantization parameter threshold T2 is a constant; for example, T2 may be in the range of 46 to 51 when the maximum quantization parameter is 55, or in the range of 235 to 245 when the maximum quantization parameter is 255.
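The picture-level decision combining T1 and T2 can be sketched as a single predicate; note that the indicator goes off only when both conditions hold:

```python
def picture_level_filter_off(max_coded_bits: int, qp: int, t1: int, t2: int) -> bool:
    # Off only when the bit budget is tight (below T1) AND the QP is very high
    # (above T2); otherwise the decision falls through to the later checks.
    return max_coded_bits < t1 and qp > t2
```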
If the maximum coded bit limit compares favorably to the first threshold or the image-level quantization parameter compares unfavorably to the second threshold, processing continues at
If the current image is not a scene change image, processing continues at
In one embodiment, the in-loop filter indicator is set to on when the current picture is an I picture. In one embodiment, when a hierarchical B coding structure is implemented, the in-loop filter is set to off for all non-reference B pictures (e.g., B2 pictures), and otherwise set to on. In another embodiment, when a hierarchical B coding structure is implemented, the in-loop filter is set to off for all non-reference B pictures (e.g., B2 pictures) and reference B pictures that can only be referenced by non-reference B pictures (e.g., B1 pictures), and otherwise set to on. In one embodiment, the in-loop filter is set to on with a fixed picture interval when a low-delay coding structure is used (e.g., no B-frames). In one embodiment, the in-loop filter is set to on for pictures with even indices (e.g., picture numbers in coding order starting with the reference picture as picture 0), and the in-loop filter is set to off for pictures with odd indices. In one embodiment, when adaptive quantization parameter allocation is used with low-delay coding structures, the in-loop filter is set on for good quality pictures (e.g., low QP pictures) and is set off for other frames (e.g., high QP pictures).
As shown, processing continues at
If the current picture and the reference picture are not considered a match at
As shown, when processing all coding units (as indicated by the "last CU") is complete at
Fig. 11 illustrates an
Fig. 12 illustrates a block diagram of an
As shown in fig. 12, an
In some embodiments, as discussed with respect to fig. 1,
As shown, the mode selection module 1213 (e.g., via a switch) may select between the optimal intra prediction mode and the optimal inter prediction mode for a coding unit or block, etc., based on a minimum coding cost, etc. Based on the mode selection, the predicted portion of the video frame may be distinguished from the original portion of the video frame (e.g., of the input video 121) via the
In addition, the quantized transform coefficients may be inverse quantized and inverse transformed via an inverse quantization and transform
Fig. 13 is a flow diagram illustrating an
Fig. 14 is an illustration of an example system 1400 for video encoding including adaptively enabling and disabling loop filtering, arranged in accordance with at least some embodiments of the present disclosure. As shown in fig. 14, the system 1400 may include a
As shown, in some embodiments, the rate control module 101, the
Video processor 1402 may include any number and type of video, image, or graphics processing units that may provide operations as discussed herein. Such operations may be implemented via software or hardware or a combination thereof. For example, the video processor 1402 may include circuitry dedicated to manipulating images, image data, and the like obtained from the memory 1403.
In one embodiment, one or more or part of rate control module 101,
Returning to the discussion of fig. 13, the
Processing continues at
In one embodiment, when the maximum coded bit limit compares favorably to the first threshold or the quantization parameter compares unfavorably to the second threshold, the
In one embodiment, when the maximum coding bit limit compares favorably to the first threshold or the quantization parameter compares unfavorably to the second threshold, the
In one embodiment, when the in-loop filter indicator is set to on for a single picture, the
In one embodiment, when the in-loop filter indicator is set to on for a single image,
In one embodiment, when the in-loop filter indicator is set to on for a single image,
Processing continues at
The various components of the systems described herein may be implemented in software, firmware, and/or hardware and/or any combination thereof. For example, various components of the systems or devices discussed herein may be provided, at least in part, by hardware such as a computing system on a chip (SoC) as may be found in a computing system (e.g., a smartphone). Those skilled in the art will recognize that the system described herein may include additional components that have not been depicted in the respective figures. For example, the systems discussed herein may include additional components, such as a bitstream multiplexer or demultiplexer module, etc., that have not been depicted for clarity.
While the embodiments of the example processes discussed herein may include all of the operations shown performed in the order shown, the present disclosure is not limited in this respect and, in various examples, the embodiments of the example processes herein may include only a subset of the operations shown, operations performed in an order different than the order shown, or additional operations.
Further, any one or more of the operations discussed herein may be performed in response to instructions provided by one or more computer program products. Such a program product may include a signal bearing medium that provides instructions, which when executed by, for example, a processor, may provide the functionality described herein. The computer program product may be provided in any form of one or more machine-readable media. Thus, for example, a processor comprising one or more graphics processing units or processor cores may perform one or more blocks of the example processes herein in response to program code and/or instructions or a set of instructions conveyed to the processor by one or more machine-readable media. In general, a machine-readable medium may convey software in the form of program code and/or instructions or a set of instructions that may cause any device and/or system described herein to perform at least part of the operations discussed herein and/or any part of a device, system, or any module or component as discussed herein.
As used in any implementation described herein, the term "module" refers to any combination of software logic, firmware logic, hardware logic, and/or circuitry configured to provide the functionality described herein. Software may be embodied as a software package, code and/or instruction set or instructions, and "hardware", as used in any implementation described herein, may include hardwired circuitry, programmable circuitry, state machine circuitry, fixed-function circuitry, execution unit circuitry, and/or firmware that stores instructions executed by programmable circuitry (e.g., alone or in any combination). Modules may be collectively or individually embodied as circuitry forming part of a larger system, such as an Integrated Circuit (IC), a system on a chip (SoC), or the like.
Fig. 15 is an illustrative diagram of an
In various implementations,
In various embodiments, platform 1502 may include any combination of chipset 1505,
The
The graphics subsystem 1515 may perform processing of images, such as still images or video, for display. For example, graphics subsystem 1515 may be a Graphics Processing Unit (GPU) or a Visual Processing Unit (VPU). An analog or digital interface may be used to communicatively couple the
The graphics and/or video processing techniques described herein may be implemented in various hardware architectures. For example, graphics and/or video functionality may be integrated within a chipset. Alternatively, a separate graphics and/or video processor may be used. As yet another embodiment, graphics and/or video functions may be provided by a general purpose processor, including a multicore processor. In further embodiments, these functions may be implemented in a consumer electronics device.
Radio 1518 may include one or more radios capable of transmitting and receiving signals using various suitable wireless communication techniques. The techniques may involve communication across one or more wireless networks. Example wireless networks include, but are not limited to, Wireless Local Area Networks (WLANs), Wireless Personal Area Networks (WPANs), Wireless Metropolitan Area Networks (WMANs), cellular networks, and satellite networks. In communicating across such a network, radio 1518 may operate according to any version of one or more applicable standards.
In various embodiments, display 1520 may comprise any television-type monitor or display. Display 1520 may include, for example, a computer display screen, touch screen display, video monitor, television-like device, and/or a television. Display 1520 may be digital and/or analog. In various implementations, display 1520 may be a holographic display. Also, display 1520 may be a transparent surface that may receive visual projections. Such projections may convey various forms of information, images, and/or objects. Such a projection may be, for example, a visual overlay for a Mobile Augmented Reality (MAR) application. Under the control of one or
In various implementations, one or more
In various embodiments, one or more
One or more
In various implementations, platform 1502 may receive control signals from
The movement of the navigation features may be replicated on a display (e.g., display 1520) by movement of a pointer, cursor, focus ring, or other visual indicator displayed on the display. For example, under the control of the
In various implementations, for example, when enabled, a driver (not shown) may include technology that enables a user to turn on and off the television-like platform 1502 by touching a button immediately after initial startup. Even when the platform is "off," the program logic may allow the platform 1502 to stream content to a media adapter or other content services device or
In various embodiments, any one or more of the components shown in
In various embodiments,
Platform 1502 may establish one or more logical channels or physical channels to communicate information. The information may include media information and control information. Media information may refer to any data representing content intended for a user. Examples of content may include data from a voice conversation, videoconference, streaming video, electronic mail ("email") message, voice mail message, alphanumeric symbols, graphics, image, video, text, and so forth. Data from a voice conversation may be, for example, speech information, silence periods, background noise, comfort noise, tones, and so forth. Control information may refer to any data representing commands, instructions, or control words meant for an automated system. For example, control information may be used to route media information through a system or to instruct a node to process media information in a predetermined manner. However, embodiments are not limited to the elements or context shown or described in fig. 15.
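The distinction between media information and control information on a logical channel can be sketched as follows. This is a minimal illustration only; the `Message` and `Channel` names, the `kind` field, and the command format are assumptions for the sketch and are not defined by the patent:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Message:
    # Hypothetical message envelope: "media" carries content intended for a
    # user; "control" carries commands meant for the automated system itself.
    kind: str
    payload: object

@dataclass
class Channel:
    # media_sink is the user-facing consumer (e.g., a decoder or display path).
    media_sink: Callable[[object], None]
    # routes holds configuration set by control information.
    routes: dict = field(default_factory=dict)

    def send(self, msg: Message) -> None:
        if msg.kind == "control":
            # Control information reconfigures how media is processed or
            # routed; it is never delivered to the user-facing sink.
            cmd, value = msg.payload
            self.routes[cmd] = value
        else:
            # Media information is delivered to the user-facing sink.
            self.media_sink(msg.payload)

received = []
ch = Channel(media_sink=received.append)
ch.send(Message("control", ("codec", "AV1")))   # instructs the node
ch.send(Message("media", b"frame-0"))           # content for the user
```

After the two sends, only the media payload reaches `received`, while the control command is reflected in `ch.routes` — mirroring the text's point that control information directs processing rather than representing content.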
As described above,
Examples of a mobile computing device may include a Personal Computer (PC), a laptop computer, an ultra-portable computer, a tablet computer, a touch pad, a portable computer, a handheld computer, a palmtop computer, a Personal Digital Assistant (PDA), a cellular telephone, a combination cellular telephone/PDA, a smart device (e.g., a smart phone, a smart tablet computer, or a smart mobile television), a Mobile Internet Device (MID), a messaging device, a data communication device, a camera, and so forth.
Examples of mobile computing devices may also include computers arranged to be worn by a person, such as wrist computers, finger computers, ring computers, eyeglass computers, belt-clip computers, armband computers, shoe computers, clothing computers, and other wearable computers. In various embodiments, for example, the mobile computing device may be implemented as a smartphone capable of executing computer applications as well as voice communications and/or data communications. Although some embodiments may be described with a mobile computing device implemented, for example, as a smartphone, it may be appreciated that other embodiments may be implemented using other wireless mobile computing devices. The embodiments are not limited in this context.
As shown in fig. 16, the
Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, Application Specific Integrated Circuits (ASIC), Programmable Logic Devices (PLD), Digital Signal Processors (DSP), Field Programmable Gate Arrays (FPGA), logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, Application Program Interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints.
One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within a processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and provided to various customers or manufacturing facilities to load into the fabrication machines that actually manufacture the logic or processor.
While certain features set forth herein have been described with reference to various embodiments, this description is not intended to be construed in a limiting sense. Accordingly, various modifications of the embodiments described herein, as well as other embodiments, which are apparent to persons skilled in the art to which the disclosure pertains are deemed to lie within the spirit and scope of the disclosure.
It will be recognized that the embodiments are not limited to the embodiments so described, but may be practiced with modification and alteration without departing from the scope of the appended claims. For example, the embodiments described above may include particular combinations of features. However, the above-described embodiments are not limited in this respect, and in various implementations, the above-described embodiments may include undertaking only a subset of such features, undertaking a different order of such features, undertaking a different combination of such features, and/or undertaking additional features than those features explicitly listed. The scope of the embodiments should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.