Sample adaptive compensation method and apparatus for AVS3, electronic device, and storage medium

Document No.: 89989    Publication date: 2021-10-08

Reading note: This technology, "Sample adaptive compensation method and apparatus for AVS3, electronic device and storage medium", was created by 向国庆, 赵飞, 张鹏, 房善华, 宋磊 and 贾惠柱 on 2021-05-25. Abstract: The present application discloses a sample adaptive compensation method and apparatus for AVS3, an electronic device, and a storage medium. The method comprises: in the AVS3 standard, performing a pixel offset operation on each largest coding unit of a video image to obtain a processing unit for filtering calculation; collecting pixel statistics of the processing unit; making a sample adaptive compensation mode decision based on the collected pixel statistics to determine sample adaptive compensation parameters; and performing filtering calculation on the processing unit based on the sample adaptive compensation parameters. By performing a pixel offset operation on each largest coding unit of the video image, the method removes the data dependency between largest coding units, which facilitates parallel processing and improves parallel processing efficiency.

1. A sample adaptive compensation method for AVS3, comprising:

performing, under the AVS3 standard, a pixel offset operation on each largest coding unit of a video image to obtain a processing unit for filtering calculation;

collecting pixel statistics of the processing unit;

making a sample adaptive compensation mode decision based on the collected pixel statistics to determine sample adaptive compensation parameters; and

performing filtering calculation on the processing unit based on the sample adaptive compensation parameters.

2. The method according to claim 1, wherein performing the pixel offset operation on each largest coding unit of the video image comprises:

moving the starting-point coordinates of each largest coding unit of the video image four pixels to the left and four pixels upward to obtain a first-offset largest coding unit.

3. The method according to claim 2, wherein performing the pixel offset operation on each largest coding unit of the video image further comprises:

moving the boundary of the first-offset largest coding unit two pixels to the left and two pixels upward to obtain a second-offset largest coding unit.

4. The method according to claim 3, wherein performing the pixel offset operation on each largest coding unit of the video image further comprises:

expanding the right boundary and the lower boundary of the second-offset largest coding unit to obtain an expanded largest coding unit.

5. The method according to claim 1, wherein collecting the pixel statistics of the processing unit comprises:

collecting pixel statistics of the largest coding unit before the pixel offset operation as the pixel statistics of the processing unit, wherein the pixel information comprises a luminance component and a chrominance component of each pixel.

6. The method according to claim 1, wherein the sample adaptive compensation parameters comprise a plurality of sample adaptive compensation types, and making the sample adaptive compensation mode decision based on the collected pixel statistics to determine the sample adaptive compensation parameters comprises:

approximating the candidate bit rate by a bit count and calculating the cost of each sample adaptive compensation type;

incrementing the bit count by 1 whenever a syntax element is required to represent the current mode; and

taking the sample adaptive compensation type with the smallest cost as the determined sample adaptive compensation parameter.

7. The method according to claim 1, wherein the processing unit comprises a luminance block and a chrominance block; the sample adaptive compensation parameters comprise a sample adaptive compensation type and a merge mode; the sample adaptive compensation types comprise a skip mode, an edge offset mode and a band offset mode; and making the sample adaptive compensation mode decision based on the collected pixel statistics to determine the sample adaptive compensation parameters comprises:

performing pixel compensation on the luminance block and the chrominance block in the skip mode, the edge offset mode and the band offset mode, respectively, to obtain a minimum rate-distortion cost of the processing unit over the skip mode, the edge offset mode and the band offset mode;

if the minimum rate-distortion cost is less than the optimal coding cost, updating the optimal coding cost with the minimum rate-distortion cost and generating sample adaptive compensation parameters of the processing unit; otherwise, leaving the optimal coding cost unchanged;

performing pixel compensation on the luminance block and the chrominance block in the merge mode, respectively, to obtain a first rate-distortion cost of the processing unit;

if the first rate-distortion cost is less than the optimal coding cost, updating the optimal coding cost with the first rate-distortion cost; otherwise, leaving the optimal coding cost unchanged; and

taking the sample adaptive compensation parameters corresponding to the finally determined optimal coding cost as the determined sample adaptive compensation parameters.

8. A sample adaptive compensation apparatus for AVS3, comprising:

an offset operation module, configured to perform, under the AVS3 standard, a pixel offset operation on each largest coding unit of a video image to obtain a processing unit for filtering calculation;

a statistics module, configured to collect pixel statistics of the processing unit;

a decision module, configured to make a sample adaptive compensation mode decision based on the collected pixel statistics and determine sample adaptive compensation parameters; and

a filtering module, configured to perform filtering calculation on the processing unit based on the sample adaptive compensation parameters.

9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the sample adaptive compensation method for AVS3 according to any one of claims 1-7.

10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the sample adaptive compensation method for AVS3 according to any one of claims 1-7.

Technical Field

The present application relates to the technical field of video processing, and in particular to a sample adaptive compensation method and apparatus for AVS3, an electronic device, and a storage medium.

Background

The AVS3 video coding standard is the third generation of the AVS standards. The Audio Video coding Standard (AVS) is China's source coding standard with independent intellectual property rights. AVS3 retains the hybrid coding framework and adopts coding and decoding techniques such as intra prediction, inter prediction, transform and quantization, in-loop filtering, and entropy coding. In AVS3 application scenarios, quantization distortion of high-frequency AC coefficients causes ripple-like artifacts to appear around strong edges of the decoded image; this distortion is known as the ringing effect and severely degrades video quality.

Disclosure of Invention

An object of the present application is to provide a sample adaptive compensation method and apparatus for AVS3, an electronic device, and a storage medium. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended neither to identify key or critical elements nor to delineate the scope of the embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that follows.

According to one aspect of the embodiments of the present application, a sample adaptive compensation method for AVS3 is provided, comprising:

performing, under the AVS3 standard, a pixel offset operation on each largest coding unit of a video image to obtain a processing unit for filtering calculation;

collecting pixel statistics of the processing unit;

making a sample adaptive compensation mode decision based on the collected pixel statistics to determine sample adaptive compensation parameters; and

performing filtering calculation on the processing unit based on the sample adaptive compensation parameters.

Further, performing the pixel offset operation on each largest coding unit of the video image comprises:

moving the starting-point coordinates of each largest coding unit of the video image four pixels to the left and four pixels upward to obtain a first-offset largest coding unit.

Further, performing the pixel offset operation on each largest coding unit of the video image further comprises:

moving the boundary of the first-offset largest coding unit two pixels to the left and two pixels upward to obtain a second-offset largest coding unit.

Further, performing the pixel offset operation on each largest coding unit of the video image further comprises:

expanding the right boundary and the lower boundary of the second-offset largest coding unit to obtain an expanded largest coding unit.

Further, collecting the pixel statistics of the processing unit comprises:

collecting pixel statistics of the largest coding unit before the pixel offset operation as the pixel statistics of the processing unit, wherein the pixel information comprises a luminance component and a chrominance component of each pixel.

Further, the sample adaptive compensation parameters comprise a plurality of sample adaptive compensation types, and making the sample adaptive compensation mode decision based on the collected pixel statistics to determine the sample adaptive compensation parameters comprises:

approximating the candidate bit rate by a bit count and calculating the cost of each sample adaptive compensation type;

incrementing the bit count by 1 whenever a syntax element is required to represent the current mode; and

taking the sample adaptive compensation type with the smallest cost as the determined sample adaptive compensation parameter.

Further, the processing unit comprises a luminance block and a chrominance block; the sample adaptive compensation parameters comprise a sample adaptive compensation type and a merge mode; the sample adaptive compensation types comprise a skip mode, an edge offset mode and a band offset mode; and making the sample adaptive compensation mode decision based on the collected pixel statistics to determine the sample adaptive compensation parameters comprises:

performing pixel compensation on the luminance block and the chrominance block in the skip mode, the edge offset mode and the band offset mode, respectively, to obtain a minimum rate-distortion cost of the processing unit over the skip mode, the edge offset mode and the band offset mode;

if the minimum rate-distortion cost is less than the optimal coding cost, updating the optimal coding cost with the minimum rate-distortion cost and generating sample adaptive compensation parameters of the processing unit; otherwise, leaving the optimal coding cost unchanged;

performing pixel compensation on the luminance block and the chrominance block in the merge mode, respectively, to obtain a first rate-distortion cost of the processing unit;

if the first rate-distortion cost is less than the optimal coding cost, updating the optimal coding cost with the first rate-distortion cost; otherwise, leaving the optimal coding cost unchanged; and

taking the sample adaptive compensation parameters corresponding to the finally determined optimal coding cost as the determined sample adaptive compensation parameters.

According to another aspect of the embodiments of the present application, a sample adaptive compensation apparatus for AVS3 is provided, comprising:

an offset operation module, configured to perform, under the AVS3 standard, a pixel offset operation on each largest coding unit of a video image to obtain a processing unit for filtering calculation;

a statistics module, configured to collect pixel statistics of the processing unit;

a decision module, configured to make a sample adaptive compensation mode decision based on the collected pixel statistics and determine sample adaptive compensation parameters; and

a filtering module, configured to perform filtering calculation on the processing unit based on the sample adaptive compensation parameters.

According to another aspect of the embodiments of the present application, an electronic device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the sample adaptive compensation method for AVS3 described above.

According to another aspect of the embodiments of the present application, a computer-readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements the sample adaptive compensation method for AVS3 described above.

The technical solutions provided by the embodiments of the present application can have the following beneficial effects:

The sample adaptive compensation method for AVS3 provided by the embodiments of the present application performs a pixel offset operation on each largest coding unit of the video image, thereby removing the data dependency between largest coding units, which facilitates parallel processing and improves parallel processing efficiency.

Additional features and advantages of the application will be set forth in the description that follows, and in part will be apparent from the description or may be learned by practice of the embodiments of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description, the claims, and the appended drawings.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.

FIG. 1 illustrates a flowchart of a sample adaptive compensation method of AVS3 according to an embodiment of the present application;

FIG. 2 shows a flow chart of S10 in the embodiment of FIG. 1;

FIG. 3 is a diagram showing the starting-point coordinates of each largest coding unit before and after the coordinates are shifted four pixels to the left and four pixels upward;

FIG. 4 shows a flowchart of S30 in the embodiment of FIG. 1;

FIG. 5 is a block diagram of a sample adaptive compensation apparatus for AVS3 according to an embodiment of the present application;

FIG. 6 is a block diagram of one embodiment of the decision module 30 in the embodiment of FIG. 5;

FIG. 7 is a block diagram of another implementation of the decision module 30 in the embodiment of FIG. 5;

FIG. 8 is a block diagram of a sample adaptive compensation apparatus for AVS3 according to another embodiment of the present application;

FIG. 9 shows a block diagram of an electronic device of an embodiment of the present application;

FIG. 10 shows a computer-readable storage medium schematic of an embodiment of the present application.

Detailed Description

In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit it. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present application.

It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

Sample Adaptive Offset (SAO) is an in-loop filtering technique applied in the High Efficiency Video Coding (HEVC) standard.

As shown in fig. 1, an embodiment of the present application provides a sample adaptive compensation method for AVS3, comprising:

S10: under the AVS3 standard, a pixel offset operation is performed on each largest coding unit of a video image to obtain a processing unit for filtering calculation.

As shown in fig. 2, in certain embodiments, S10 includes:

s101, taking each maximum coding unit (LCU) of the video image as a basic processing unit, and performing operations of moving coordinates of a starting point of each maximum coding unit (LCU) by four pixels to the left and moving coordinates of a starting point of each maximum coding unit (LCU) by four pixels to the upper so that the boundary of the maximum coding unit (LCU) generates offset, and the maximum coding unit after the first offset is obtained.

As shown in fig. 3, the solid-line box represents the original largest coding unit, and the dotted-line box represents the largest coding unit after S101.

Since hardware designs benefit from minimizing the data dependency between processing units, this offset-based operation is used in this embodiment to organize the data structure and break the data dependency.

In some embodiments, the largest coding unit after the first offset is used as the processing unit for filtering calculation, which reduces the data dependency between processing units.

S102: the boundary of the offset largest coding unit obtained in S101 is shifted two pixels to the left and two pixels upward to obtain the largest coding unit after the second offset.

To keep the data organization regular and simple, this embodiment shifts the boundary of the processing unit a further two pixels to the left and two pixels upward on top of the first offset.

In some embodiments, the largest coding unit after the second offset is used as the processing unit for filtering calculation, which makes the data organization more regular and concise.

S103: the right boundary and the lower boundary of the largest coding unit after the second offset are expanded to obtain an expanded largest coding unit.

To cover the remaining pixels at the right or lower boundary of the image, the right and lower boundaries of the processing unit are then expanded so that the luminance block reaches 71 × 71 pixels and the chrominance block reaches 38 × 38 pixels.

In some embodiments, the largest coding unit after the expansion operation is used as the processing unit for filtering calculation, which ensures that the remaining pixels at the right or lower boundary of the image are covered.

By performing the pixel offset operation on each largest coding unit of the video image, a staggered LCU-level data organization is obtained, the data dependency between largest coding units is removed, parallel processing is facilitated, and parallel processing efficiency is improved.
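
To make the offsets described above concrete, the following C++ sketch computes the luma window of one processing unit. It assumes a 64 × 64 luma LCU and reads the 4-pixel start offset, the additional 2-pixel boundary offset, and the 71 × 71 expanded luma size directly from the description above; the structure, function names, and the picture-boundary clipping are illustrative assumptions, not part of the AVS3 specification.

```cpp
#include <algorithm>
#include <cstdio>

// Illustrative sketch only: computes the luma window of one processing unit
// under the offsets described above (4-pixel start offset, 2-pixel boundary
// offset, expansion to 71x71 luma). LCU size and clipping policy are assumed.
struct Window { int x, y, w, h; };

Window processing_unit_luma(int lcu_col, int lcu_row,
                            int pic_w, int pic_h,
                            int lcu_size = 64) {
    // First offset: move the LCU start 4 pixels left and 4 pixels up.
    // Second offset: move the boundary a further 2 pixels left and up.
    int x = lcu_col * lcu_size - 4 - 2;
    int y = lcu_row * lcu_size - 4 - 2;
    // Expansion of the right and lower boundaries gives a 71x71 luma block.
    int w = 71, h = 71;
    // Clip to the picture so border LCUs stay inside the frame.
    int x0 = std::max(x, 0), y0 = std::max(y, 0);
    int x1 = std::min(x + w, pic_w), y1 = std::min(y + h, pic_h);
    return { x0, y0, x1 - x0, y1 - y0 };
}

int main() {
    // Example: the LCU at column 1, row 1 of a 1920x1080 frame.
    Window win = processing_unit_luma(1, 1, 1920, 1080);
    std::printf("window: x=%d y=%d w=%d h=%d\n", win.x, win.y, win.w, win.h);
    return 0;
}
```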

S20: pixel statistics of the processing unit are collected.

In some embodiments, the pixel information of the processing unit includes the difference between the original pixel data and the reconstructed pixel data. In an AVS3 standard implementation, the pixel information may be collected with a fine-grained strategy in which the statistics are calculated separately. The data of a pixel comprise a luminance component (Y) and chrominance components (U, V).

In some embodiments, to speed up and simplify S20, the pixel statistics of the processing unit are collected as follows: the pixel information of the largest coding unit (LCU) before the pixel offset operation is counted as the pixel information of the processing unit. In other words, the statistics are not collected over the offset data organization, which reduces the complexity of information gathering and greatly improves its efficiency. The pixel information comprises the luminance component (Y) and the chrominance components (U, V) of each pixel. The luminance component and the chrominance components are processed in parallel; when the luminance component is processed, it is divided into four equal parts that are processed in parallel.

Specifically, each largest coding unit (LCU) uses its original boundary (the solid-line box in fig. 3) for information statistics. Experimental results show that this makes the processing simpler and reduces the computational load with only a small performance loss.
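
The following C++ sketch shows one way such statistics can be accumulated over the original LCU area, gathering the per-category sum of (original − reconstructed) differences and the sample count for one edge-offset direction. It follows the generic SAO statistics scheme; the category index assignment and all names are illustrative assumptions, not the exact AVS3 classification.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Illustrative sketch only: SAO-style statistics for the horizontal (EO 0°)
// edge-offset direction over the original LCU area.
struct EoStats {
    std::array<int64_t, 5> diff{};   // per-category difference sum
    std::array<int64_t, 5> count{};  // per-category sample count
};

EoStats collect_eo0_stats(const std::vector<uint8_t>& org,
                          const std::vector<uint8_t>& rec,
                          int stride, int x0, int y0, int w, int h) {
    EoStats s;
    for (int y = y0; y < y0 + h; ++y) {
        for (int x = x0 + 1; x < x0 + w - 1; ++x) {   // skip the two border columns
            int c = rec[y * stride + x];
            int left = rec[y * stride + x - 1];
            int right = rec[y * stride + x + 1];
            // Compare the sample with its two horizontal neighbours.
            int sign = (c < left) + (c < right) - (c > left) - (c > right);
            int cat = sign + 2;                        // maps {-2..2} to {0..4}; index assignment is illustrative
            s.diff[cat] += org[y * stride + x] - c;
            ++s.count[cat];
        }
    }
    return s;
}
```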

S30: a sample adaptive compensation mode decision is made based on the collected pixel statistics, and the sample adaptive compensation parameters are determined.

Making the sample adaptive compensation mode decision amounts to selecting the sample adaptive compensation parameters. The sample adaptive compensation parameters mainly comprise the sample adaptive compensation type, the offset value set, and the merge mode; the sample adaptive compensation types include a skip mode, an edge offset (EO) mode, and a band offset (BO) mode.

In some embodiments of an AVS3 standard implementation, S30 includes: calculating the rate-distortion cost (RD cost) from the difference between the original and reconstructed pixels, and selecting the sample adaptive compensation mode based on the rate-distortion cost. This approach requires operations such as entropy coding and consumes considerable computing resources.

In other embodiments, S30 includes: estimating the approximate bit count from the syntax elements that need to be encoded, which greatly simplifies the mode-selection calculation. Specifically, when computing the cost of each mode in the mode decision, a bit count R' is used to approximate the candidate bit rate R; the value of R' is incremented by 1 whenever one syntax element is required to represent the current mode decision. Practical results show that this implementation saves substantial computing resources with negligible performance loss.
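
A minimal sketch of this simplified cost follows, assuming the usual Lagrangian form in which the rate term is replaced by the syntax-element count R' described above; the distortion definition and the Lagrange multiplier handling are assumptions, not taken from the AVS3 text.

```cpp
#include <cstdint>

// Illustrative sketch only: approximate rate-distortion cost for mode selection,
// with the rate approximated by a syntax-element count R'.
struct ModeCandidate {
    int64_t distortion;    // e.g. SSD between original and compensated pixels
    int     syntax_count;  // R': number of syntax elements needed for this mode
};

double approx_rd_cost(const ModeCandidate& m, double lambda) {
    // Each syntax element contributes 1 to the approximate bit count.
    return static_cast<double>(m.distortion) + lambda * m.syntax_count;
}
```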

The simplified information statistics and the simplified rate-distortion cost calculation reduce the computational complexity and improve processing efficiency.

In some embodiments, S30 includes: making the sample adaptive compensation mode decision separately for the luminance block and the chrominance block of the processing unit.

As shown in fig. 4, specifically, S30 includes:

S301: pixel compensation is performed on the luminance block and the chrominance block in the skip mode, the edge offset mode and the band offset mode, respectively, and the minimum rate-distortion cost of the processing unit over these modes is obtained and denoted C1;

S302: if C1 is less than the optimal coding cost C0, C0 is updated with C1 and the sample adaptive compensation parameters of the processing unit are generated; otherwise, C0 is not updated;

S303: pixel compensation is performed on the luminance block and the chrominance block in the merge mode, respectively, and the rate-distortion cost of the processing unit is obtained and denoted C2;

S304: if C2 is less than C0, C0 is updated with C2; otherwise, C0 is not updated;

S305: the sample adaptive compensation parameters corresponding to the finally determined C0 are taken as the determined sample adaptive compensation parameters.

For example, if, after the above steps, the finally determined sample adaptive compensation parameter corresponding to C0 is the skip mode, the skip mode is taken as the determined sample adaptive compensation parameter.
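
The cost-comparison flow of S301-S305 can be sketched as follows. The cost values C1 and C2 are assumed to come from the pixel-compensation passes described above; the types and function names are illustrative, not part of the AVS3 specification.

```cpp
#include <limits>
#include <string>

// Illustrative sketch only: the cost-comparison flow of S301-S305.
struct SaoDecision {
    double      best_cost = std::numeric_limits<double>::max();  // C0
    std::string best_mode = "skip";
};

SaoDecision decide_sao(double cost_new_mode /* C1 */, const std::string& new_mode,
                       double cost_merge    /* C2 */) {
    SaoDecision d;
    // S301/S302: minimum cost over the skip, edge offset and band offset modes.
    if (cost_new_mode < d.best_cost) {
        d.best_cost = cost_new_mode;
        d.best_mode = new_mode;
    }
    // S303/S304: cost of the merge mode.
    if (cost_merge < d.best_cost) {
        d.best_cost = cost_merge;
        d.best_mode = "merge";
    }
    // S305: the parameters tied to the final best cost are the decision result.
    return d;
}
```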

S40: filtering calculation is performed on the processing unit based on the determined sample adaptive compensation parameters.

In some embodiments, step S40 includes: performing parallel filtering calculation on the luminance component (Y) and the chrominance components (U, V) of the pixels of the processing unit based on the determined sample adaptive compensation type.

Processing the luminance component (Y) and the chrominance components (U, V) of the processing unit in parallel accelerates the filtering calculation and improves processing efficiency, achieving a throughput of 60 frames per second at 1920 × 1080 resolution on the xcu 250.
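
As an illustration of the filtering step itself, the following C++ sketch applies band-offset compensation to one component of a processing unit. The 32-band split of 8-bit samples and the clipping follow the generic SAO scheme and are assumptions, not the exact AVS3 filtering rules; in a design like the one described above, the same loop would run independently for Y, U, and V.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

// Illustrative sketch only: band-offset (BO) compensation for one component.
void apply_band_offset(std::vector<uint8_t>& rec, int stride,
                       int x0, int y0, int w, int h,
                       const std::array<int, 32>& band_offset) {
    for (int y = y0; y < y0 + h; ++y) {
        for (int x = x0; x < x0 + w; ++x) {
            uint8_t& p = rec[y * stride + x];
            int band = p >> 3;                      // 256 levels -> 32 bands
            int v = p + band_offset[band];          // add the band's offset
            p = static_cast<uint8_t>(std::clamp(v, 0, 255));
        }
    }
}
```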

As shown in fig. 5, another embodiment of the present application provides a sample adaptive compensation apparatus for AVS3, comprising:

an offset operation module 10, configured to perform, under the AVS3 standard, a pixel offset operation on each largest coding unit of a video image to obtain a processing unit for filtering calculation;

a statistics module 20, configured to collect pixel statistics of the processing unit;

a decision module 30, configured to make a sample adaptive compensation mode decision based on the collected pixel statistics and determine sample adaptive compensation parameters; and

a filtering module 40, configured to perform filtering calculation on the processing unit based on the sample adaptive compensation parameters.

In some embodiments, the offset operation module 10 comprises a sub-module for performing the pixel offset operation on each largest coding unit of the video image, the sub-module being specifically configured to:

move the starting-point coordinates of each largest coding unit of the video image four pixels to the left and four pixels upward to obtain a first-offset largest coding unit.

In some embodiments, the sub-module is further configured to:

move the boundary of the first-offset largest coding unit two pixels to the left and two pixels upward to obtain a second-offset largest coding unit.

In some embodiments, the sub-module is further configured to:

expand the right boundary and the lower boundary of the second-offset largest coding unit to obtain an expanded largest coding unit.

In some embodiments, the statistics module 20 is specifically configured to:

collect pixel statistics of the largest coding unit before the pixel offset operation as the pixel statistics of the processing unit, where the pixel information comprises the luminance component and the chrominance components of each pixel.

The sample adaptive compensation parameters comprise a plurality of sample adaptive compensation types; as shown in fig. 6, in some embodiments, the decision module 30 comprises:

a first calculation unit 301, configured to approximate the candidate bit rate by a bit count and calculate the cost of each sample adaptive compensation type;

a second calculation unit 302, configured to increment the bit count by 1 whenever one syntax element is required to represent the current mode; and

a determination unit 303, configured to take the sample adaptive compensation type with the smallest cost as the determined sample adaptive compensation parameter.

The processing unit comprises a luminance block and a chrominance block; the sample adaptive compensation parameters comprise a sample adaptive compensation type and a merge mode; the sample adaptive compensation types comprise a skip mode, an edge offset mode and a band offset mode; as shown in fig. 7, in some embodiments, the decision module 30 comprises:

a first compensation unit 30-1, configured to perform pixel compensation on the luminance block and the chrominance block in the skip mode, the edge offset mode and the band offset mode, respectively, to obtain the minimum rate-distortion cost of the processing unit over these modes;

a first updating unit 30-2, configured to update the optimal coding cost with the minimum rate-distortion cost if the minimum rate-distortion cost is less than the optimal coding cost, and to generate the sample adaptive compensation parameters of the processing unit; otherwise, to leave the optimal coding cost unchanged;

a second compensation unit 30-3, configured to perform pixel compensation on the luminance block and the chrominance block in the merge mode, respectively, to obtain a first rate-distortion cost of the processing unit;

a second updating unit 30-4, configured to update the optimal coding cost with the first rate-distortion cost if the first rate-distortion cost is less than the optimal coding cost; otherwise, to leave the optimal coding cost unchanged; and

a determination unit 30-5, configured to take the sample adaptive compensation parameters corresponding to the finally determined optimal coding cost as the determined sample adaptive compensation parameters.

As shown in fig. 8, another embodiment of the present application provides a sample adaptive compensation apparatus for AVS3, which comprises a statistics module, a mode decision module, and a filtering calculation module.

In some embodiments, the statistics module includes a duplicate Y array, a Y component unit, a duplicate U array, a duplicate V array, a U component unit, and a V component unit. The Y component unit includes EO 0° units, EO 90° units, EO 135° units, EO 45° units, and BO units.

The mode decision module comprises a merging unit, a compensation unit, a new-mode unit, and a determination unit. The new-mode unit comprises a Y rate-distortion optimization unit (RDO Y), a U rate-distortion optimization unit (RDO U), and a V rate-distortion optimization unit (RDO V); the Y rate-distortion optimization unit comprises an RDO EO 0° sub-unit, an RDO EO 90° sub-unit, an RDO EO 135° sub-unit, an RDO EO 45° sub-unit, an RDO BO sub-unit, and a comparison-and-generation sub-unit. The filtering calculation module comprises a Y boundary check unit, a Y component unit, a U component unit, and a V component unit.
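
One possible data layout mirroring this module decomposition is sketched below: per-type statistics for each EO direction and for BO, plus per-component RDO results. All field names and sizes are illustrative assumptions, not taken from the described apparatus.

```cpp
#include <array>
#include <cstdint>

// Illustrative sketch only: a per-LCU state mirroring the module decomposition
// described above (statistics per EO direction and BO, per-component RDO results).
struct ComponentStats {
    // One statistics slot per candidate type: EO 0°, EO 90°, EO 135°, EO 45°, BO.
    struct TypeStats { std::array<int64_t, 32> diff; std::array<int64_t, 32> count; };
    std::array<TypeStats, 5> per_type;
};

struct RdoResult {
    int    best_type;   // index of the winning compensation type
    double best_cost;   // its (approximate) rate-distortion cost
};

struct LcuSaoState {
    ComponentStats y_stats, u_stats, v_stats;  // statistics module outputs
    RdoResult      rdo_y, rdo_u, rdo_v;        // new-mode unit outputs
};
```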

Another embodiment of the present application provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the sample adaptive compensation method for AVS3 of any of the above embodiments. As shown in fig. 9, the electronic device 50 may include a processor 500, a memory 501, a bus 502, and a communication interface 503, with the processor 500, the communication interface 503, and the memory 501 connected through the bus 502. The memory 501 stores a computer program executable on the processor 500, and the processor 500 executes the computer program to perform the method provided by any of the foregoing embodiments of the present application.

The memory 501 may include a high-speed random access memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between this system's network elements and at least one other network element is realized through at least one communication interface 503 (wired or wireless), over the Internet, a wide area network, a local area network, a metropolitan area network, or the like.

The bus 502 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. The memory 501 is used to store a program; the processor 500 executes the program after receiving an execution instruction, and the method disclosed in any of the foregoing embodiments of the present application may be applied to, or implemented by, the processor 500.

The processor 500 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 500 or by instructions in the form of software. The processor 500 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or executed by such a processor. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of the present application may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may reside in RAM, flash memory, ROM, PROM, EPROM, registers, or other storage media well known in the art. The storage medium is located in the memory 501; the processor 500 reads the information in the memory 501 and completes the steps of the method in combination with its hardware.

The electronic device provided by the embodiments of the present application shares the same inventive concept as the method provided by the embodiments of the present application, and has the same beneficial effects as the method it adopts, runs, or implements.

Another embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the sample adaptive compensation method for AVS3 of any of the above embodiments. Referring to fig. 10, the computer-readable storage medium is shown as an optical disc 60 on which a computer program (i.e., a program product) is stored; when executed by a processor, the computer program performs the method provided by any of the foregoing embodiments.

It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, and other optical and magnetic storage media, which are not described in detail herein.

The computer-readable storage medium provided by the above embodiments of the present application shares the same inventive concept as the method provided by the embodiments of the present application, and has the same beneficial effects as the method adopted, executed, or implemented by the application program stored on it.

It should be noted that:

the term "module" is not intended to be limited to a particular physical form. Depending on the particular application, a module may be implemented as hardware, firmware, software, and/or combinations thereof. Furthermore, different modules may share common components or even be implemented by the same component. There may or may not be clear boundaries between the various modules.

The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may also be used with the examples based on this disclosure. The required structure for constructing such a device will be apparent from the description above. In addition, this application is not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present application as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the present application.

It should be understood that, although the steps in the flowcharts of the figures are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict ordering restriction, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.

The above embodiments merely express several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that several variations and modifications can be made by a person of ordinary skill in the art without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.
