Real-time video dynamic range analysis

Document No.: 475010 · Publication date: 2021-12-31

Abstract: This invention, Real-time video dynamic range analysis, was created by D. G. Baker on 2020-04-03. The video analyzer measures and outputs a visual indication of the dynamic range of a video signal. The video analyzer includes a video input that receives the video signal, and a cumulative distribution function generator generates a cumulative distribution function curve from components of the video signal. A feature detector generates one or more feature vectors from the cumulative distribution function curve, and a video dynamic range generator produces a visual output indicative of the brightness of one or more portions of the video signal.

1. A video analyzer for measuring the dynamic range of a video signal, comprising:

a video input configured to receive a video signal;

a cumulative distribution function generator configured to generate a cumulative distribution function curve from components of the video signal;

a feature detector configured to generate one or more feature vectors from the cumulative distribution function curve; and

a video dynamic range generator configured to generate a visual output indicative of the brightness of one or more portions of the video signal.

2. The video analyzer of claim 1, wherein the visual output comprises a false-color image indicative of a brightness of one or more portions of the video signal.

3. The video analyzer of claim 1, wherein the visual output comprises a waveform indicating a brightness versus screen area percentage of a single frame of a video signal.

4. The video analyzer of claim 3, wherein the visual output comprises a power mask for a particular display to be displayed simultaneously with a waveform.

5. The video analyzer of claim 1, wherein the visual output comprises an average luminance of each frame of at least a portion of a video signal.

6. The video analyzer of claim 5, wherein the visual output further comprises a maximum average brightness of a particular display.

7. The video analyzer of claim 6, wherein the visual output further comprises a minimum average brightness for a particular display.

8. The video analyzer of claim 7, wherein the visual output further comprises an optimal maximum brightness and an optimal minimum brightness for a particular display.

9. The video analyzer of claim 1, wherein the visual output is generated and displayed in real-time or near real-time.

10. A method for measuring the dynamic range of a video signal, comprising:

generating a cumulative distribution function curve from components of the video signal;

generating one or more feature vectors from the cumulative distribution function curve; and

generating a visual output indicative of the brightness of one or more portions of the video signal.

11. The method of claim 10, wherein the visual output comprises a false-color image indicative of a brightness of one or more portions of a video signal.

12. The method of claim 10, wherein the visual output comprises a waveform indicating a brightness versus screen area percentage for a single frame of a video signal.

13. The method of claim 12, wherein the visual output comprises a power mask for a particular display displayed concurrently with a waveform.

14. The method of claim 10, wherein the visual output comprises an average luminance for each frame of at least a portion of a video signal.

15. The method of claim 14, wherein the visual output further comprises a maximum average brightness and a minimum average brightness for a particular display.

16. The method of claim 10, wherein generating a visual output indicative of the brightness of one or more portions of the video signal comprises generating the visual output in real time or near real time.

17. One or more computer-readable storage media comprising instructions that, when executed by one or more processors of a video analyzer, cause the video analyzer to:

generate a cumulative distribution function curve from components of the video signal;

generate one or more feature vectors from the cumulative distribution function curve; and

generate a visual output indicative of the brightness of one or more portions of the video signal.

18. The one or more computer-readable storage media of claim 17, further comprising instructions that cause the video analyzer to generate a visual output comprising a pseudo-color image indicative of the brightness of one or more portions of the video signal.

19. The one or more computer-readable storage media of claim 17, further comprising instructions that cause the video analyzer to generate a visual output comprising a waveform indicating a brightness versus screen area percentage for a single frame of the video signal.

20. The one or more computer-readable storage media of claim 17, further comprising instructions that cause a video analyzer to generate a visual output comprising a power mask for a particular display to be displayed concurrently with the waveform.

Technical Field

The present disclosure is directed to systems and methods for measuring video signals, and in particular to automatically analyzing the dynamic range of video signals.

Background

With the advent of 4K and 8K consumer displays, television (TV) has seen rapid improvement in display size and resolution over the original 1920 × 1080 High Definition (HD) format, and streaming media services can now deliver content at 4K resolution. However, at typical viewing distances and typical living-room screen sizes, these resolution improvements may be difficult to perceive and fully appreciate, making further increases in image resolution of limited practical value.

As such, advances in video technology have focused on developing Wider Color Gamut (WCG) formats and, in particular, High Dynamic Range (HDR) formats with much greater contrast and peak luminance for modern displays, as these create very significant improvements in viewer experience that can be readily appreciated at typical living-room viewing distances and lighting.

HDR video content providers are rapidly converting classical film archives, which have even greater dynamic range than Standard Dynamic Range (SDR) video, into new HDR formats for video DVD and streaming media services. In addition, today's cameras have a very large dynamic range, allowing both live and recorded production of HDR content, as well as simulcasting SDR content for viewers without HDR TVs.

Currently, there are three popular display gamma formats and two WCG formats for HDR/WCG content. A high-end studio reference display monitor, also referred to herein as a pixel (pix) monitor, may directly display input in one or more of these formats.

HDR/WCG content is sourced from a wide range of camera, film, and digital archive formats other than the HDR/SDR pixel monitor input format. As such, adjustments in red, green, and blue (RGB) brightness, contrast, color saturation, and gamma are required to compress or expand the dynamic range and color space to fit one of the pixel monitor formats.

Embodiments of the present disclosure address these and other deficiencies of the prior art.

Drawings

Aspects, features, and advantages of embodiments of the present disclosure will become apparent from the following description of the embodiments with reference to the accompanying drawings, in which:

fig. 1 is a block diagram of a real-time video dynamic range analysis system according to an embodiment of the present disclosure.

Fig. 2A and 2B are example outputs of the real-time video dynamic range analysis system of fig. 1 for a particular video frame.

Fig. 3 is another example output of the real-time video dynamic range analysis system of fig. 1 for another particular video frame.

Fig. 4 is an output of a luminance graph of a video output by the real-time video dynamic range analysis system of fig. 1.

Fig. 5 is an example of a computer device or system for implementing the real-time video dynamic range analysis system of fig. 1.

Detailed Description

As mentioned above, the high-end studio may display input in one or more formats with reference to a display monitor, also referred to herein as a pixel monitor. Formats include, for example, the HDR pixel monitor display "gamma" format or the WCG pixel monitor primary color format.

The HDR pixel monitor display "gamma" formats may include Perceptual Quantization (PQ), graded to 1000 to 4000 nit peak brightness; Hybrid Log-Gamma (HLG), typically graded to 1000 nit peak brightness; and SR Live, typically graded to 1200 nit peak brightness. WCG pixel display primary color formats may include BT.2020, with fully saturated "laser" light-purity display primaries, which must be limited because the primaries of current picture displays cannot achieve pure RGB light, and P3-D65, whose wide-gamut but achievable display primaries lack laser purity.

HDR/WCG content is sourced from a wide range of camera, film, and digital archive formats other than the HDR/SDR pixel display input format. Thus, as mentioned above, adjustments in RGB brightness, contrast, color saturation, and gamma are required to compress or expand the dynamic range and color space to fit one of the pixel monitor gamma formats.

Additionally, the luminance in the image should be quantified as a function of display area to ensure that the HDR/WCG content does not exceed the limits of the target display technology. However, it can be difficult to measure the brightness of a video image, which may have very bright highlights that occupy only a small pixel area, or scenes that appear too dark on a TV in a room that is brighter than a movie theater.

Modern TV pixel displays are limited in the display power, or luminous intensity, they can provide on their very large screens. For example, a seventy inch diagonal television or pixel monitor promoted as capable of displaying a peak luminance of 2000 nits (cd/m²) may only be capable of delivering that peak luminance over less than two percent of the screen area. In other words, the television or pixel display cannot display a 2000 nit white field on the entire screen, as this would consume too much power given the display technology.

Embodiments of the present disclosure provide systems and methods of determining scene luminance as a function of image area to allow for adjustment of highlight content as well as average luminance when grading or downconverting target HDR content in HDR display gamma format to SDR. Preferably, the system and method operates in real-time for quality control inspection and mapping of content across the entire video sequence.

Other aspects of embodiments of the present disclosure provide real-time luminance tagging associated with particular regions of interest in an image to quantify the luminance of desired mid-tones, specular highlights, and other regions of interest (such as skin tone exposure). In some embodiments, the markers may be selected based on pixel area and have pixel area display indications, such as pseudo-color interpolation.

Other aspects of the embodiments may provide a method of determining whether a limited luminous intensity of a target pixel display monitor has been exceeded.

Fig. 1 is a block diagram of an example system 100 for real-time video dynamic range analysis in accordance with some embodiments of the disclosed technology. The system 100 can analyze video content in real-time and determine scene brightness as a function of image area to allow a user to quickly identify whether highlight content and average brightness should be adjusted while grading or downconverting HDR content in HDR display gamma format to SDR. The system may receive a Cb ' Y ' Cr ' signal or an R ' G ' B ' signal in the converter 102, which may be converted to a luminance Y ' signal. The multiplexer 104 may select the luminance or Y ' component from the Cb ' Y ' Cr ' signal or the converted R ' G ' B ' signal.
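The luma selection performed by the converter 102 and multiplexer 104 can be sketched in a few lines. The function name below is illustrative, and the coefficient sets are the standard BT.709 and BT.2020 luma weights, an assumption about which standards the signal might use:

```python
import numpy as np

# Luma coefficients for two common gamma-encoded RGB formats
# (BT.709 for HD, BT.2020 for UHD); which applies depends on the
# signal's metadata.
LUMA_COEFFS = {
    "bt709":  (0.2126, 0.7152, 0.0722),
    "bt2020": (0.2627, 0.6780, 0.0593),
}

def extract_luma(rgb, standard="bt709"):
    """Compute the Y' component from a gamma-encoded R'G'B' frame.

    rgb: float array of shape (H, W, 3), values in [0, 1].
    Returns an (H, W) array of Y' values.
    """
    kr, kg, kb = LUMA_COEFFS[standard]
    return kr * rgb[..., 0] + kg * rgb[..., 1] + kb * rgb[..., 2]

# A pure white frame has Y' = 1.0 in either standard, because each
# coefficient set sums to 1.
white = np.ones((2, 2, 3))
print(extract_luma(white, "bt709").max())  # 1.0
```

For a Cb'Y'Cr' input, the analogous step is simply selecting the Y' plane directly, as the multiplexer 104 does.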

Optional pre-processing 106 may be performed on the Y' component to resize it to a different format or to perform letterbox detection and cropping to active pixels. The pixel count for subsequent processing need only be sufficient to resolve the smallest screen area of interest, e.g., as low as 0.01 percent. Areas smaller than this will generally not be of interest in terms of peak brightness, even on a typical large screen. For example, resizing a 4K or 2K image to 720 × 1280 still provides approximately 92 pixels for an area resolution of 0.01%.

Additionally, in some embodiments, the system 100 may only be interested in moving-image pixels, so any letterbox (side panels or top/bottom black bars) should be detected and gated out of subsequent pixel processing. After pre-processing 106, the SDR or HDR luma component Y' may be sent to the SDR confidence monitor 108 through a multiplexer 110 and, if the Y' component is an HDR Y' component, a lookup table 112. The lookup table 112 may downconvert the HDR Y' component to SDR, and the SDR confidence monitor 108 provides a video picture confidence monitoring function that may be displayed alongside a real-time Cumulative Distribution Function (CDF) or Complementary Cumulative Distribution Function (CCDF), discussed in more detail below. This may be a full-color image or a monochrome image with pseudo color marking the luminance areas.

In some embodiments, a one-dimensional histogram is generated 114 over a range of input data code values. The one-dimensional histogram is similar to a Probability Density Function (PDF) and is a pixel count vector of each input code value over a single frame, and is then normalized to a fractional value with a unit cumulative sum.
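A minimal sketch of the histogram generation 114, assuming normalized code values in [0, 1] and an illustrative bin count:

```python
import numpy as np

def normalized_histogram(y_frame, n_bins=1024):
    """One-dimensional histogram of code values over a single frame,
    normalized to a unit cumulative sum so that it behaves like a
    discrete probability density function (PDF)."""
    counts, _ = np.histogram(y_frame, bins=n_bins, range=(0.0, 1.0))
    return counts / counts.sum()

rng = np.random.default_rng(0)
frame = rng.random((720, 1280))   # stand-in for a resized Y' component
pdf = normalized_histogram(frame)
print(pdf.sum())                  # 1.0 (unit cumulative sum)
```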

In some embodiments, an optional recursive histogram filter 116 is applied to the one-dimensional histogram by providing a time average of the histogram vector with itself. For example, a recursive Low Pass Filter (LPF) is shown whereby each bin of the histogram is averaged (autoregressive), where its values from previous frames create a first order exponential step response. A moving ensemble average time LPF with a finite impulse response may provide the same benefits.
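The recursive histogram filter 116 might be sketched as a first-order autoregressive average of the histogram vector; the smoothing constant alpha below is a hypothetical parameter, not a value from the patent:

```python
import numpy as np

def recursive_histogram_filter(prev_avg, new_hist, alpha=0.1):
    """First-order recursive (autoregressive) low-pass filter: each bin
    is averaged with its own value from previous frames, giving a
    first-order exponential step response."""
    if prev_avg is None:          # first frame: no history yet
        return new_hist.copy()
    return (1.0 - alpha) * prev_avg + alpha * new_hist

# Feeding the same histogram repeatedly converges toward that histogram.
target = np.array([0.25, 0.5, 0.25])
avg = None
for _ in range(100):
    avg = recursive_histogram_filter(avg, target)
print(np.allclose(avg, target))   # True
```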

A Cumulative Distribution Function (CDF) 118, also referred to as a cumulative histogram, is generated as a cumulative sum. Because the averaged histogram has unit sum, the CDF will be a monotonically increasing function from zero to one, as in a conventional statistical distribution function.
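The CDF generation 118 is then a plain cumulative sum over the unit-sum histogram, e.g.:

```python
import numpy as np

def cdf_from_histogram(pdf):
    """Cumulative sum of a unit-sum histogram: a monotonically
    non-decreasing curve ending at 1."""
    return np.cumsum(pdf)

pdf = np.array([0.1, 0.2, 0.3, 0.4])
cdf = cdf_from_histogram(pdf)
print(cdf)                         # [0.1 0.3 0.6 1. ]
print(bool(np.all(np.diff(cdf) >= 0)))  # True (monotonic)
```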

From the output of the CDF 118, the feature detection 120 generates a set of real-time dynamic labeling values, such as luminance, stop, and input Code Values (CVs), at a predetermined pixel region of each frame by searching the CDF for the closest CV bin to each of a predetermined set of pixel probabilities, which is in effect a set of screen area fractions or percentages. The input CV value may serve as a pseudo-color threshold to indicate in real time a predetermined pixel area of each marker on the pixel display.
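A sketch of the search performed by feature detection 120; the marker area fractions follow the 0.1%, 1%, 10%, and 25% examples used in the figures below, and the implementation details are assumptions:

```python
import numpy as np

def detect_features(cdf, area_fractions=(0.001, 0.01, 0.10, 0.25)):
    """For each screen-area fraction a, find the code-value bin at which
    the CDF reaches 1 - a, i.e. the brightness exceeded by exactly that
    fraction of the frame's pixels (equivalently, where CCDF = a)."""
    return {a: int(np.searchsorted(cdf, 1.0 - a)) for a in area_fractions}

# Uniform histogram over 1000 bins: 1% of pixels lie above roughly bin 990.
cdf = np.cumsum(np.full(1000, 1e-3))
marks = detect_features(cdf)
print(marks[0.01])  # approximately 990
```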

Based on the set of real-time dynamic marker values, a CDF waveform can be drawn from the feature detection 120. The CDF waveform plots the input CV against the pixel probability determined by the feature detection 120, or alternatively, the input CV may be converted to a full scale percentage and the pixel probability may be converted to nits or stops using the known gamma format of the video signal.

In some embodiments, a complementary CDF (CCDF) 122 may also be used to associate image area with input CV. The CCDF 122 is determined by subtracting the CDF from one, i.e., CCDF = 1 − CDF. The CCDF 122 and feature detection 120 outputs are then received at the video dynamic range generator 124, which produces a visual output indicative of the brightness of one or more portions of the video signal in a form viewable by a user of system 100.

The video dynamic range generator 124 may produce a visual output for a single frame 126 of the video input, which may include a rendering of nits, stops, or input CV values against percentage of screen area or log probability, as determined by the feature detection 120. Because the input encoding gamma format is generally known, the CV scale output by the CDF 118 or CCDF 122 can be converted to display light (nits) or scene-referred units (stops or reflectance) for reference at points of use in the video production, quality control, and distribution workflow. The gamma format may be obtained from metadata of the input signal or may be entered by a user. The visual output for a single frame may allow a user to easily identify image areas containing HDR highlights that exceed typical display peak brightness, or content that may not downconvert easily to SDR.
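For the PQ gamma format, for instance, converting normalized code values to display light in nits uses the SMPTE ST 2084 EOTF. A sketch, with an illustrative function name:

```python
import numpy as np

# SMPTE ST 2084 (PQ) EOTF constants.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_nits(e):
    """Convert a normalized PQ code value in [0, 1] to display light in
    nits (cd/m²), so CDF/CCDF bin positions can be labeled in nits."""
    e = np.asarray(e, dtype=float)
    p = np.power(e, 1.0 / M2)
    num = np.maximum(p - C1, 0.0)
    den = C2 - C3 * p
    return 10000.0 * np.power(num / den, 1.0 / M1)

print(round(float(pq_to_nits(1.0))))  # 10000 (full-scale PQ is 10,000 nits)
print(round(float(pq_to_nits(0.0))))  # 0
```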

The video dynamic range generator 124 generates visual output in real time or near real time, meaning that the visual output is generated as the video signal is being input to the system 100, or as soon as possible after the signal is received, to provide time for creation of the visual output. In other embodiments, the output of the video dynamic range generator 124 may be temporarily stored for later analysis, or even stored indefinitely.

The video dynamic range generator 124 may also generate a visual output for a plurality of frames 128 of the video input, which may plot nits against frame number, such as an average luminance per frame, a maximum peak luminance per frame, and a minimum luminance per frame. The multi-frame output 128 may also allow a user to identify scenes whose average (large-area) brightness is bright or dark enough to be uncomfortable under normal viewing conditions.

The video dynamic range generator 124 may additionally or alternatively produce a visual output of a pseudo-color (false-color) image for a frame of the video input by color coding the image based on the amount of luminance present in different portions of the image. The pseudo-color image output 130 may visually correlate HDR highlights with real-time luminance values in nits.
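A sketch of the pseudo-color banding step, assuming the thresholds are input CV marker values produced by feature detection (the specific threshold values shown are arbitrary examples):

```python
import numpy as np

def false_color_bands(y_frame, thresholds):
    """Assign each pixel a band index according to which pair of
    ascending brightness thresholds it falls between; a display step
    would then paint each band with its own color, as in a color-coding
    key alongside the image."""
    return np.digitize(y_frame, thresholds)  # indices 0..len(thresholds)

frame = np.array([[0.05, 0.30], [0.55, 0.95]])  # toy 2x2 Y' frame
bands = false_color_bands(frame, thresholds=[0.1, 0.5, 0.9])
print(bands)  # [[0 1]
              #  [2 3]]
```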

In some embodiments, a power limit mask may also be determined for a particular display and simultaneously displayed on the visual output to allow a user to quickly and easily ascertain whether the input video has portions that are too bright or too dark for the particular display.

Any known operation or method may be used to determine the display power limit. When manufacturers specify a maximum brightness in nits for a television or monitor, it is typically specified for only a certain fraction of the screen area.

In equation (1) below, the maximum luminance Lmax may be 2500, 1500, or 1000 nits, each specified for less than 2% of the screen area on three different displays. Equation (2) states that 1 nit is equal to 1 candela per square meter.

Lmax ∈ {2500, 1500, 1000} nit, specified for less than 2% of screen area (1)

1 nit = 1 cd/m² (2)

For a 16:9 aspect ratio television or monitor with a diagonal length of 70 inches, equation (3) converts the diagonal length D to meters.

D = 70 in × 0.0254 m/in ≈ 1.78 m (3)

Equation (4) shown below then determines the area of the television or monitor based on the diagonal length.

Area = D² × (16 × 9)/(16² + 9²) = D² × 144/337 (4)

Using equation (4) above, the area of a 70 inch television or monitor is approximately 1.341 m².

In equation (5), the peak luminous intensity is determined by multiplying the maximum luminance by 2% of the screen area.

Ipk = Lmax × 0.02 × Area (5)

For the three displays above, the peak intensities are shown in equation (6) below:

Ipk ≈ 67, 40, and 27 cd (6)

The peak intensity in candelas can be converted back to nits by dividing by the full screen area, as shown in equation (7).

Lfull = Ipk / Area (7)

For the three displays discussed above, the full-screen results are shown in equation (8):

Lfull ≈ 50, 30, and 20 nit (8)

In equation (9), the maximum number of nits is denoted I, which in this example is 10,000; the index i runs from 0 to I − 1; and j is the specified screen-area fraction, which is 2% above. The maximum screen-area fraction ai allowed at each luminance level i is determined by equation (9).

ai = Lmax × j / i (9)

Given the peak intensity in equation (5) above, equations (10) and (11) determine the maximum luminance allowed at each screen-area fraction a, clamped to the display's rated maximum.

L(a) = Lmax × j / a (10)

L(a) ≤ Lmax (11)

Equations (9) through (11) are used to plot the power-limit mask on the generated CDF or CCDF.
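A minimal sketch of the power-limit mask described by equations (9) through (11), assuming a 1500 nit display rated over 2% of screen area (parameter and function names are illustrative):

```python
import numpy as np

def power_limit_mask(l_max=1500.0, spec_area=0.02, n_points=10000):
    """Luminance limit versus screen-area fraction for a display rated
    at l_max nits over spec_area of the screen.  Below spec_area the
    panel can reach l_max; above it, total light output is held
    constant, so the allowed luminance falls off as spec_area / area."""
    area = np.linspace(1e-4, 1.0, n_points)   # screen-area fraction axis
    limit = np.minimum(l_max, l_max * spec_area / area)
    return area, limit

area, limit = power_limit_mask()
print(limit.max())         # 1500.0 at small areas
print(round(limit[-1], 1)) # 30.0 at full screen (1500 * 0.02)
```

Plotting this curve over the CDF or CCDF axes yields the mask 202 shown in the figures below.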

Fig. 2A depicts an exemplary CCDF waveform generated by system 100 operating on an example frame of HDR PQ-format input video, while fig. 2B illustrates an example pseudo-color image generated by system 100. Although the pseudo-color image of fig. 2B is shown in grayscale, in practice it may be generated in color, with particular colors indicating various brightness levels of the original image. Fig. 2A illustrates the PQ HDR CCDF of a video frame as a graph 200 of light level versus pixel-area percentage, which may be an example of the single-frame view 126 of fig. 1. A power-limit mask 202 for a 1500 nit display, such as the mask determined above using equations (9)-(11), is shown on the graph. The CCDF waveform 204 is plotted on the graph 200.

For ease of discussion, a number of markers are illustrated on the graph 200, but all of these markers need not be present in the CCDF waveform 204 when displayed to the user. Marker 206 illustrates that the 0.1% display-area point is at 600 nits. Marker 208 illustrates that the 1% display-area point is at 400 nits. Marker 210 illustrates 10% of the display area at 20 nits, and marker 212 illustrates 25% of the display area at 8 nits. As can be seen by marker 214, 3% of the display area is greater than 200 nits. Marker 216 illustrates the mean marker at 18 nits.

As can be seen in graph 200, the image 218 of fig. 2B does not violate the 1500 nit power-limit mask. A color-coding key 220 may be provided to allow the user to quickly identify which region of the false-color image 218 corresponds to which portion of the CCDF waveform 204. Arrow 222 indicates a 0.1% area of greater than 600 nits, arrow 224 indicates a 1% area of greater than 400 nits, and arrow 226 indicates a 3% HDR area of greater than 200 nits.

Fig. 3 illustrates another example of a CCDF waveform 302 for another single frame of video. A power limiting mask 202 is also shown on the graph 300. Markers 304, 306, 308, and 310 illustrate nits on the CCDF waveform in area percentages of 0.1%, 1%, 10%, and 25%, respectively.

The mean marker 312 is at 112 nits. As can be seen at marker 308, more than 10% of the image area is greater than 800 nits, which exceeds the display power limit, as illustrated by power-limit mask 202. In addition, an average brightness of over 100 nits (112 nits) may indicate an image uncomfortably bright for most viewers. Although not shown in fig. 3, the pseudo-color image 130 may also be displayed simultaneously with the graph 300 and a color-coding key, similar to figs. 2A and 2B above.

Fig. 4 is a graph 400 that may be displayed to a user utilizing the system 100 to test whether video content will be adequately displayed on a particular display. Graph 400 is an example of a multi-frame view 128 generated by video dynamic range generator 124 of fig. 1. Graph 400 shows an average luminance waveform 402 for video, a peak luminance of less than 1% image area waveform 404, and a minimum luminance (peak black) of greater than 1% image area waveform 406 measured by system 100 over the duration of the video input. The difference between the peaks is the dynamic range spanning the darkest 1% and brightest 1% of each frame of the video clip. Lines 408 and 410 indicate the maximum and minimum brightness of the display, respectively. Lines 412 and 414 indicate the optimal region of brightness for the display.

As can be seen in fig. 4, the average luminance 402 is plotted over the entire 600-frame video. The average brightness 402 remains within the maximum and minimum brightness of the display, as illustrated by lines 408 and 410, except for two too-dark violations 416 and 418 at approximately frames 100 and 170. A user of the system 100 will be able to see these violations and adjust the video as needed so that the video displays correctly.
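The three waveforms of the multi-frame view might be sketched with per-frame percentiles; treating "brightest 1%" and "darkest 1%" of the image area as the 99th and 1st luminance percentiles is an assumption for illustration:

```python
import numpy as np

def frame_statistics(frames_nits):
    """Per-frame average, bright-peak (brightest 1% of pixels), and
    dark-peak (darkest 1% of pixels) luminance, as plotted in the
    multi-frame view against frame number."""
    avg = np.array([f.mean() for f in frames_nits])
    peak = np.array([np.percentile(f, 99) for f in frames_nits])
    black = np.array([np.percentile(f, 1) for f in frames_nits])
    return avg, peak, black

rng = np.random.default_rng(1)
frames = [rng.uniform(0, 100, (90, 160)) for _ in range(5)]  # toy clip
avg, peak, black = frame_statistics(frames)
print(len(avg), bool((black < avg).all()), bool((avg < peak).all()))
```

Comparing these per-frame curves against the display's maximum and minimum brightness lines flags violations like those at frames 100 and 170 in fig. 4.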

Fig. 5 is a diagram illustrating elements or components that may be present in a computer device or system configured to implement methods, processes, functions or operations in accordance with embodiments of the present disclosure. As noted, in some embodiments, the systems and methods described herein may be implemented in the form of an apparatus comprising a processing element and a set of executable instructions. The executable instructions may be part of a software application and arranged into a software architecture. In general, embodiments of the disclosure may be implemented using a set of software instructions designed for execution by a suitably programmed processing element (such as a CPU, microprocessor, processor, controller, computing device, or the like). In complex applications or systems, such instructions are typically arranged into "modules," where each such module typically performs a particular task, procedure, function, or operation. An Operating System (OS) or other form of organizational platform may control or coordinate the entire set of modules in terms of their operation.

Each application module or sub-module may correspond to a particular function, method, process, or operation implemented by the module or sub-module. Such functions, methods, procedures, or operations may include functions, methods, procedures, or operations for implementing one or more aspects of the systems and methods described herein.

The application modules and/or sub-modules may comprise any suitable computer-executable code or set of instructions (e.g., as would be executed by a suitably programmed processor, microprocessor or CPU), such as computer-executable code corresponding to a programming language. For example, programming language source code may be compiled into computer executable code. Alternatively or additionally, the programming language may be an interpreted programming language, such as a scripting language. The computer-executable code or set of instructions may be stored in (or on) any suitable non-transitory computer-readable medium. In general, with respect to the embodiments described herein, a non-transitory computer-readable medium may include virtually any structure, technique, or method other than a transitory waveform or similar medium.

As described, the systems, apparatuses, methods, procedures, functions, and/or operations used to implement embodiments of the present disclosure may be implemented, in whole or in part, in the form of a set of instructions executed by one or more programmed computer processors, such as a Central Processing Unit (CPU) or microprocessor. Such processors may be incorporated in an apparatus, server, client, or other computing or data processing device operated by or in communication with other components of the system. By way of example, fig. 5 is a diagram illustrating elements or components that may be present in a computer device or system 500 configured to implement methods, processes, functions or operations in accordance with embodiments of the present disclosure. The subsystems shown in fig. 5 are interconnected via a system bus 502. The subsystems may include a display 504 and peripherals, and input/output (I/O) devices coupled to an I/O controller 506 may be connected to the computer system through any number of components known in the art, such as a serial port 508. For example, serial port 508 or external interface 510 may be used to connect computer device 500 to additional devices and/or systems not shown in fig. 5, including a wide area network such as the internet, a mouse input device, and/or a scanner. The interconnection via system bus 502 allows one or more processors 512 to communicate with each subsystem and to control the execution of instructions, which may be stored in system memory 514 and/or fixed disk 516, and the exchange of information between subsystems. The system memory 514 and/or the fixed disk 516 may embody tangible computer-readable media. Any or all of the views 126, 128, 130 generated by the video dynamic range generator 124 of fig. 1 may be shown to a user of the system 500 on the display 504.

Any of the software components, processes, or functions described herein may be implemented as software code executed by a processor using any suitable computer language, such as, for example, Java, JavaScript, C++, or Perl, using, for example, conventional or object-oriented techniques. The software code may be stored as a series of instructions or commands in (or on) a non-transitory computer readable medium such as a Random Access Memory (RAM), a Read Only Memory (ROM), a magnetic medium such as a hard drive or floppy disk, or an optical medium such as a CD-ROM. In this context, a non-transitory computer-readable medium is virtually any medium suitable for storing data or a set of instructions, except for transitory waveforms. Any such computer-readable media may reside on or within a single computing device and may exist on or within different computing devices within a system or network.

According to one example implementation, the term processing element or processor as used herein may be a Central Processing Unit (CPU), or conceptualized as a CPU, such as a virtual machine. In this example implementation, the CPU or a device in which the CPU is incorporated may be coupled with, connected to, and/or communicate with one or more peripheral devices (such as a display).

The non-transitory computer readable storage medium referred to herein may include a plurality of physical drive units such as a Redundant Array of Independent Disks (RAID), a floppy disk drive, flash memory, USB flash drive, external hard drive, thumb drive, pen drive, key drive, high-density digital versatile disk (HD-DVD) optical disk drive, internal hard drive, blu-ray disk drive, or Holographic Digital Data Storage (HDDS) optical disk drive, Synchronous Dynamic Random Access Memory (SDRAM), or similar devices or other forms of memory based on similar technology. As mentioned, with respect to embodiments described herein, a non-transitory computer-readable medium may include virtually any structure, technique, or method other than a transitory waveform or similar medium.

Certain implementations of the disclosed technology are described herein with reference to block diagrams of systems and/or flowcharts or flow charts or flow diagrams of functions, operations, processes or methods. It will be understood that one or more blocks of the block diagrams, or one or more stages or steps of the flowcharts or flow diagrams, and combinations of blocks in the block diagrams, or stages or steps of the flowcharts or flow diagrams, respectively, can be implemented by computer-executable program instructions. Note that in some embodiments, one or more blocks, stages or steps may not necessarily need to be performed in the order presented, or may not necessarily need to be performed at all.

These computer-executable program instructions may be loaded onto a general purpose computer, special purpose computer, processor, or other programmable data processing apparatus to produce a particular example of a machine, such that the instructions which execute on the computer, processor, or other programmable data processing apparatus create means for implementing one or more of the functions, operations, processes, or methods described herein. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement one or more of the functions, operations, procedures, or methods described herein.

Aspects of the disclosure may operate on specially constructed hardware, firmware, digital signal processors, or specially programmed computers including processors operating according to programmed instructions. The term controller or processor as used herein is intended to include microprocessors, microcomputers, Application Specific Integrated Circuits (ASICs), and dedicated hardware controllers. One or more aspects of the disclosure may be embodied in computer-usable data and computer-executable instructions, such as one or more program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The computer executable instructions may be stored on a computer readable storage medium such as a hard disk, optical disk, removable storage media, solid state memory, Random Access Memory (RAM), and the like. As will be appreciated by one skilled in the art, the functionality of the program modules may be combined or distributed as desired in various aspects. Further, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, FPGAs, and the like. Particular data structures may be used to more effectively implement one or more aspects of the present disclosure, and such data structures are contemplated to be within the scope of computer-executable instructions and computer-usable data described herein.

In some cases, the disclosed aspects may be implemented in hardware, firmware, software, or any combination thereof. The disclosed aspects may also be implemented as instructions carried by or stored on one or more computer-readable storage media, which may be read and executed by one or more processors. Such instructions may be referred to as a computer program product. As discussed herein, computer-readable media means any media that can be accessed by a computing device. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media.

Computer storage media means any media that can be used to store computer-readable information. By way of example, and not limitation, computer storage media may include RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), Digital Video Disc (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, and any other volatile or non-volatile, removable or non-removable media implemented in any technology. Computer storage media excludes signals per se and transitory forms of signal transmission.

Communication media means any media that can be used for the communication of computer-readable information. By way of example, and not limitation, communication media may include coaxial cables, fiber optic cables, air or any other medium suitable for communication of electrical, optical, Radio Frequency (RF), infrared, acoustic or other types of signals.

Examples of the invention

Illustrative examples of the techniques disclosed herein are provided below. Embodiments of the technology may include any one or more and any combination of the examples described below.

Example 1 is a video analyzer for measuring a dynamic range of a video signal, comprising a video input configured to receive the video signal; a cumulative distribution function generator configured to generate a cumulative distribution function curve from components of the video signal; a feature detector configured to generate one or more feature vectors from the cumulative distribution function curve; and a video dynamic range generator configured to generate a visual output indicative of the brightness of one or more portions of the video signal.
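
As an illustrative sketch of how the cumulative distribution function generator of Example 1 might operate on the luma (brightness) component of a single frame, the following assumes 8-bit luma code values quantized to 256 levels; the function name `luma_cdf` and that quantization are assumptions made for illustration, not details taken from the disclosure.

```python
def luma_cdf(luma_values, levels=256):
    """Build a cumulative distribution function curve from the luma
    component of one video frame.

    luma_values: iterable of integer luma code values in [0, levels).
    Returns a list cdf where cdf[v] is the fraction of pixels whose
    luma is at or below level v.
    """
    # Histogram the luma samples, then accumulate into a CDF.
    hist = [0] * levels
    for v in luma_values:
        hist[v] += 1
    total = len(luma_values)
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running / total)
    return cdf
```

Such a curve is non-decreasing and reaches 1.0 at the top code value, which is what allows downstream feature detection and waveform displays to be derived from it.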

Example 2 is the video analyzer of example 1, wherein the visual output comprises a false-color image indicative of a brightness of one or more portions of the video signal.

Example 3 is the video analyzer of any of examples 1 or 2, wherein the visual output comprises a waveform indicating a brightness of a single frame of the video signal relative to a percentage of screen area.
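
The brightness-versus-screen-area waveform of Example 3 can be read directly off a cumulative distribution function: the CDF value at a brightness level is the fraction of the frame at or below that level, so a binary search over the curve recovers the brightness reached by a given percentage of screen area. The sketch below rests on that reading; `brightness_at_area` is a hypothetical helper, not a name from the disclosure.

```python
import bisect

def brightness_at_area(cdf, area_percent):
    """Return the lowest brightness level v such that at least
    area_percent of the frame's pixels are at or below v.

    cdf: non-decreasing list where cdf[v] is the fraction of pixels
    with brightness <= v (e.g., the output of a CDF generator).
    """
    target = area_percent / 100.0
    # The CDF is sorted, so bisect finds the first level whose
    # cumulative fraction reaches the requested screen-area target.
    return bisect.bisect_left(cdf, target)
```

Sweeping `area_percent` from 0 to 100 then traces out the single-frame waveform of brightness against percentage of screen area.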

Example 4 is the video analyzer of example 3, wherein the visual output includes a power mask for a particular display to be displayed concurrently with the waveform.

Example 5 is the video analyzer of any of examples 1 to 4, wherein the visual output comprises an average luminance of each frame of at least a portion of the video signal.
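
The per-frame average-luminance output of Example 5 amounts to a running computation over decoded frames, which might be sketched as follows; representing each frame as a flat sequence of luma samples is an assumption made for illustration.

```python
def frame_average_luminance(frames):
    """Yield the average luma of each frame, in display order.

    frames: iterable of frames, each given as a flat sequence of
    luma sample values for that frame.
    """
    for frame in frames:
        # One average per frame; a display can plot these over time.
        yield sum(frame) / len(frame)
```

Plotting this sequence over time, alongside a display's maximum and minimum average brightness as in Examples 6 and 7, gives the operator a view of whether content stays within a target display's capabilities.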

Example 6 is the video analyzer of example 5, wherein the visual output further comprises a maximum average brightness of the particular display.

Example 7 is the video analyzer of example 6, wherein the visual output further comprises a minimum average brightness for the particular display.

Example 8 is the video analyzer of example 7, wherein the visual output further comprises an optimal maximum brightness and an optimal minimum brightness for the particular display.

Example 9 is the video analyzer of any of examples 1 to 8, wherein the visual output is generated and displayed in real-time or near real-time.

Example 10 is a method for measuring dynamic range of a video signal, comprising generating a cumulative distribution function curve from components of the video signal; generating one or more feature vectors from the cumulative distribution function curve; and generating a visual output indicative of the brightness of one or more portions of the video signal.

Example 11 is the method of example 10, wherein the visual output comprises a false-color image indicative of a brightness of one or more portions of the video signal.

Example 12 is the method of any one of examples 10 or 11, wherein the visual output comprises a waveform indicating a brightness of a single frame of the video signal relative to a percentage of screen area.

Example 13 is the method of example 12, wherein the visual output comprises a power mask for a particular display displayed concurrently with the waveform.

Example 14 is the method of any of examples 10 to 13, wherein the visual output comprises an average luminance of each frame of at least a portion of the video signal.

Example 15 is the method of example 14, wherein the visual output further comprises a maximum average brightness and a minimum average brightness for the particular display.

Example 16 is the method of any one of examples 10 to 15, wherein generating the visual output indicative of the brightness of the one or more portions of the video signal comprises generating the visual output in real time or near real time.

Example 17 is one or more computer-readable storage media comprising instructions that, when executed by one or more processors of a video analyzer, cause the video analyzer to generate a cumulative distribution function curve from components of a video signal; generate one or more feature vectors from the cumulative distribution function curve; and generate a visual output indicative of the brightness of one or more portions of the video signal.

Example 18 is the one or more computer-readable storage media of example 17, further comprising instructions to cause the video analyzer to generate a visual output comprising a false-color image indicative of a brightness of one or more portions of the video signal.

Example 19 is the one or more computer-readable storage media of any of examples 17 or 18, further comprising instructions that cause the video analyzer to generate a visual output comprising a waveform indicating a brightness of a single frame of the video signal relative to a percentage of screen area.

Example 20 is the one or more computer-readable storage media of any of examples 17 to 19, further comprising instructions that cause the video analyzer to generate the visual output comprising a power mask for a particular display to be displayed concurrently with the waveform.

Example 21 is a video analyzer for measuring a dynamic range of a video signal, comprising a video input configured to receive the video signal; a cumulative distribution function generator configured to generate a cumulative distribution function curve from components of the video signal; a feature detector configured to generate one or more feature vectors from the cumulative distribution function curve; and a video dynamic range generator configured to generate a visual output indicative of the brightness of one or more portions of the video signal from the cumulative distribution function curve and from the one or more feature vectors.

Example 22 is a method for measuring dynamic range of a video signal, comprising generating a cumulative distribution function curve from components of the video signal; generating one or more feature vectors from the cumulative distribution function curve; and generating a visual output indicative of the luminance of one or more portions of the video signal from the cumulative distribution function curve and from the one or more feature vectors.

Example 23 is one or more computer-readable storage media comprising instructions that, when executed by one or more processors of a video analyzer, cause the video analyzer to generate a cumulative distribution function curve from components of a video signal; generate one or more feature vectors from the cumulative distribution function curve; and generate a visual output indicative of the luminance of one or more portions of the video signal from the cumulative distribution function curve and from the one or more feature vectors.

The previously described versions of the disclosed subject matter have many advantages that are either described herein or will be apparent to one of ordinary skill. Even so, these advantages or features are not required in all versions of the disclosed apparatus, systems, or methods.

Additionally, this written description makes reference to specific features. It will be understood that the disclosure in this specification includes all possible combinations of those specific features. Where a particular feature is disclosed in the context of a particular aspect or example, that feature may also be used, to the extent possible, in the context of other aspects and examples.

Further, when reference is made in this application to a method having two or more defined steps or operations, the defined steps or operations may be carried out in any order or simultaneously, unless the context excludes those possibilities.

While specific examples of the invention have been illustrated and described for purposes of illustration, it will be understood that various modifications may be made without deviating from the spirit and scope of the invention. Accordingly, the invention is not to be restricted except as by the appended claims.
