Camera color image processing

Document number: 1804622 Publication date: 2021-11-05

Note: This technology, Camera color image processing, was designed and created by V·J·尊贾拉奥 on 2020-03-11. Abstract: An image processing system receives image data acquired by an imaging device and separates one or more achromatic colors from one or more chromatic colors in the received image data. The achromatic colors and chromatic colors are processed based at least on color temperature and illuminance to generate an adjusted color appearance by applying one or more defined weight values associated with the achromatic colors or the chromatic colors, respectively. The one or more defined weight values associated with the one or more achromatic colors are different from the one or more defined weight values associated with the one or more chromatic colors. The adjusted color appearance of the one or more achromatic colors and the adjusted color appearance of the one or more chromatic colors are used to generate a final image that is more consistent with the human visual imaging and cognitive system.

1. An image processing system comprising:

a memory associated with a computing device, the memory comprising a camera color image processing component; and

a processor executing an image color processing system that uses the camera color image processing component to:

receive image data acquired by an imaging device;

separate one or more achromatic colors from one or more chromatic colors in the received image data;

process the one or more achromatic colors based at least on color temperature and illuminance to generate an adjusted color appearance for the one or more achromatic colors by applying one or more defined weight values associated with the one or more achromatic colors;

process the one or more chromatic colors based at least on the color temperature and the illuminance to generate an adjusted color appearance for the one or more chromatic colors by applying one or more defined weight values associated with the one or more chromatic colors, wherein the one or more defined weight values associated with the one or more achromatic colors are different from the one or more defined weight values associated with the one or more chromatic colors; and

generate a final image using the adjusted color appearance of the one or more achromatic colors and the adjusted color appearance of the one or more chromatic colors.

2. The image processing system of claim 1, wherein the defined weight values associated with the one or more chromatic colors and associated with the one or more achromatic colors comprise at least one of a luminance weight or a chrominance weight.

3. The image processing system of any of claims 1 or 2, wherein the color temperature comprises a Correlated Color Temperature (CCT), and the processor executes the image color processing system to access an adjustment table having defined values of one or more defined weight values associated with the one or more achromatic colors and one or more defined weight values associated with the one or more chromatic colors, the defined values corresponding to each of a plurality of CCT values.

4. The image processing system of claim 3, wherein the defined values are adjustable to change color appearance, color preference, and color reproduction of the received image data, and wherein the defined values are set to generate a final image having a color appearance based on human visual and cognitive processing.

5. The image processing system of any of claims 1-4, wherein the received image data is in an RGB color space and the processor executes a color space conversion component to convert the received image data into a YUV color space and the image color processing system to perform saturation/hue enhancement as part of the processing of the one or more achromatic colors and the one or more chromatic colors.

6. The image processing system of any of claims 1-5, wherein the processor executes the image color processing system to generate the final image using a weighted combination of a chroma color weight value associated with the one or more achromatic colors and a chroma color weight value associated with the one or more chromatic colors along with a luminance weight.

7. The image processing system of any of claims 1-6, wherein the processor executes the image color processing system to adjust at least one of one or more defined weight values associated with the one or more chromatic colors and one or more defined weight values associated with the one or more achromatic colors using interpolation.

8. A computerized method for image processing, the computerized method comprising:

receiving image data acquired by an imaging device;

separating one or more achromatic colors from one or more chromatic colors in the received image data;

processing the one or more achromatic colors based at least on color temperature and illuminance to generate an adjusted color appearance for the one or more achromatic colors by applying one or more defined weight values associated with the one or more achromatic colors;

processing the one or more chromatic colors based at least on the color temperature and the illuminance to generate an adjusted color appearance for the one or more chromatic colors by applying one or more defined weight values associated with the one or more chromatic colors, wherein the one or more defined weight values associated with the one or more achromatic colors are different from the one or more defined weight values associated with the one or more chromatic colors; and

generating a final image using the adjusted color appearance of the one or more achromatic colors and the adjusted color appearance of the one or more chromatic colors.

9. The computerized method of claim 8, wherein the defined weight values associated with the one or more chromatic colors and associated with the one or more achromatic colors comprise at least one of a luminance weight or a chrominance weight.

10. The computerized method of any of claims 8 and 9, wherein the color temperature comprises a Correlated Color Temperature (CCT), and further comprising accessing a tuning table having defined values of one or more defined weight values associated with the one or more achromatic colors and one or more defined weight values associated with the one or more chromatic colors, the defined values corresponding to each of a plurality of CCT values.

11. The computerized method of claim 10, wherein the defined values are adjustable to change a color appearance, a color preference, and a color reproduction of the received image data.

12. The computerized method of any of claims 10 and 11, wherein the defined values are set to generate a final image having a color appearance based on human visual and cognitive processing.

13. The computerized method of any of claims 8-12, wherein the received image data is in an RGB color space, and further comprising converting the received image data into a YUV color space and performing saturation/hue enhancement as part of the processing of the one or more achromatic colors and the one or more chromatic colors, and further comprising generating the final image using a weighted combination of a chrominance color weight value associated with the one or more achromatic colors and a chrominance color weight value associated with the one or more chromatic colors, along with a luminance weight.

14. The computerized method of any one of claims 8-13, further comprising adjusting at least one of one or more defined weight values associated with the one or more chromatic colors and one or more defined weight values associated with the one or more achromatic colors using interpolation.

15. One or more computer storage media having computer-executable instructions for image processing, which when executed by a processor, cause the processor to at least:

receive image data acquired by an imaging device;

separate one or more achromatic colors from one or more chromatic colors in the received image data;

process the one or more achromatic colors based at least on color temperature and illuminance to generate an adjusted color appearance for the one or more achromatic colors by applying one or more defined weight values associated with the one or more achromatic colors;

process the one or more chromatic colors based at least on the color temperature and the illuminance to generate an adjusted color appearance for the one or more chromatic colors by applying one or more defined weight values associated with the one or more chromatic colors, wherein the one or more defined weight values associated with the one or more achromatic colors are different from the one or more defined weight values associated with the one or more chromatic colors; and

generate a final image using the adjusted color appearance of the one or more achromatic colors and the adjusted color appearance of the one or more chromatic colors.

Background

Camera color image processing is used to generate color images based on images captured by a camera. Image processing converts the sensed visual data into image data to generate a color image. Conventional processing techniques treat all colors in the same way or in the same direction. However, the human visual imaging and cognitive system processes a color based on that color's properties and on the memory of the color. The simple mathematical models and/or matrices used in conventional processing techniques fail to convert acquired visual data into an image that truly represents human visual imagery. For example, not all lighting conditions are neutral white, and most lighting conditions have some color shift associated with them that cannot be accurately represented using conventional processing techniques.

Disclosure of Invention

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.

A computerized method for image processing includes receiving image data acquired by an imaging device; and separating one or more achromatic colors from one or more chromatic colors in the received image data. The computerized method further comprises: processing the one or more achromatic colors based at least on color temperature and illuminance (lux) to generate an adjusted color appearance for the one or more achromatic colors by applying one or more defined weight values associated with the one or more achromatic colors. The computerized method further comprises: processing the one or more chromatic colors based at least on the color temperature and the illuminance to generate an adjusted color appearance for the one or more chromatic colors by applying one or more defined weight values associated with the one or more chromatic colors. The one or more defined weight values associated with the one or more achromatic colors are different from the one or more defined weight values associated with the one or more chromatic colors. The computerized method further comprises: generating a final image using the adjusted color appearance of the one or more achromatic colors and the adjusted color appearance of the one or more chromatic colors.

Many of the attendant features will be more readily appreciated as the same becomes better understood by reference to the following detailed description considered in connection with the accompanying drawings.

Brief Description of Drawings

The specification will be better understood from a reading of the following detailed description in light of the accompanying drawings, in which:

FIG. 1 is a block diagram illustrating the use of a system configured for camera color image processing according to an embodiment;

FIG. 2 is a block diagram illustrating automatic white balance control according to one embodiment;

FIG. 3 is a block diagram of a system having a camera color image processing pipeline according to an embodiment;

FIG. 4 illustrates an adjustment table according to one embodiment;

FIG. 5 illustrates a color image processing component according to an embodiment;

FIG. 6 is a flow diagram of a process of camera color image processing according to an embodiment;

FIG. 7 is a flow diagram of a process of generating color image processing gain values according to an embodiment; and

fig. 8 is a block diagram of an example computing environment suitable for implementing some of the various examples disclosed herein.

Corresponding reference characters indicate corresponding parts throughout the drawings. In the figures, the systems are illustrated as schematic diagrams. The figures may not be drawn to scale.

Detailed Description

The computing devices and methods described herein are configured to process image data using camera color imaging processes (e.g., processing algorithms) based on complex human visual and cognitive nonlinear processing. The camera color image processing pipeline defines a processing algorithm based on human visual and cognitive processing to provide an improved camera experience (for photos and videos) to users, such as during video teleconferencing.

Not all lighting conditions are neutral white, and most lighting conditions have some color shift associated with them. For example, tungsten lamps (CIE A) have a strong yellow color shift, horizon light (2400K) has an orange color shift, and overcast daylight (CIE D65) has a more blue color shift. The human visual system adapts only in part to these lighting conditions and, as a result, white paper (an achromatic object) appears more yellow under tungsten lamps and more blue under overcast daylight. However, due to cognitive processing, colored objects are less affected by color shift; for example, a red apple appears to be almost the same color regardless of lighting conditions. The present disclosure adapts this complex human visual phenomenon to camera color processing by creating separately customized color appearance models for achromatic and chromatic colors, which leads to an improved user experience.

In some examples, the camera color image processing pipeline enhances the visual experience during video teleconferences, including providing skin tone reproduction using face-based (object-based) information, as well as improving overall camera image quality and photographic experience under different lighting conditions. In this manner, the generated image is more consistent with the human visual system.

The present disclosure thus provides a configured image processing pipeline with the ability to generate images that better mimic human visual and cognitive processing. In this manner, when the processor is programmed to perform the operations described herein, the processor is used in an unconventional manner and allows images corresponding to human visual and cognitive processing to be generated more accurately, providing an improved user experience.

Fig. 1 is an exemplary block diagram illustrating the generation of an image (e.g., a video image), particularly an image with improved quality that mimics human visual and cognitive processing, using system 100. A device 102 (e.g., a mobile phone, tablet, laptop, camera, etc.) acquires a plurality of images of a scene 104 (illustrated as a current scene, such as an image including one or more objects therein) using a lens 106 and a sensor 108 (e.g., a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) active pixel sensor) of the device 102. It should be appreciated that the sensor 108 (which in one example is a camera sensor formed of silicon) "sees" the scene 104 differently than the human eye does. Although optics and filters may modify the acquired images of the scene 104 (illustrated as the image 114), these images still deviate from what is seen by the human eye with human visual and cognitive processing. The present disclosure processes acquired image data to generate images that better simulate human visual and cognitive processing. That is, the illustrative image 116 better represents what a person's eyes would see when looking directly at the scene 104 rather than through the device 102. It should be noted that in some examples, the device 102 is located at a remote location and generates images that are seen by people at different locations (e.g., in a video conferencing setting). Thus, the current scene 104 in the field of view 112 of the device 102 is imaged with a better representation of the image generated by human visual and cognitive processing.

In the illustrated example, a camera color imaging processor 118 (which may form part of the device 102 or be separate from the device 102) is configured to process image data acquired by the device 102 to generate an image with improved visual quality. That is, the camera color imaging processor 118 is configured in various examples with a camera color image processing pipeline implemented based on human visual and cognitive processing to provide an improved camera experience (for photos and videos) for different applications (e.g., video teleconferencing). In one example, the camera color imaging processor 118 is configured to implement a processing pipeline that performs achromatic-chromatic separation on acquired image data to process achromatic and chromatic colors, respectively, to provide Automatic White Balance (AWB) control. As a result, the original "radiation" image is converted into an adapted "perception" image.

More specifically, the camera color imaging processor 118 is operable to perform AWB control to simulate the white balance of the human eye. In one example, the camera color imaging processor 118 implements camera color imaging algorithms based on complex human visual and cognitive nonlinear processing rather than traditional simple mathematical models and/or matrices. The camera color imaging processor 118 performs AWB control, which allows different gains to be applied to achromatic colors and chromatic colors. The processing performed by the camera color imaging processor 118 takes into account lighting variations, such as the phase of daylight, variations in artificial light sources (e.g., different types of light, sample-by-sample variations, etc.), mixed lighting conditions (e.g., multiple light sources, shadows, etc.), and so forth.

Various examples obtain WB gains from the white point as follows:

white point: R/G = 0.682 and B/G = 0.714;

gain R = 1/0.682 ≈ 1.466;

gain Gr = 1;

gain Gb = 1; and

gain B = 1/0.714 ≈ 1.400.

Using these WB gains, a color adaptation model output is provided that generates an adapted "perceived" image. In some examples, the adjusted white point R/G is based on image processing by the camera color imaging processor 118, which includes separately defined achromatic and chromatic color weights. As described in more detail herein, the camera color imaging processor 118 uses the adjustment table to define various weights and parameters for a new R/G (or a new B/G), where R represents red, G represents green, and B represents blue.
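The gain computation above can be sketched as a small helper; the function name is hypothetical, and the gains simply invert the measured white-point ratios so that the white point maps back to neutral gray (R = G = B):

```python
def wb_gains_from_white_point(r_over_g: float, b_over_g: float) -> dict:
    """Compute per-channel white-balance gains from a measured white point,
    with the green channels as the reference (gain 1)."""
    return {
        "R": 1.0 / r_over_g,   # e.g. 1 / 0.682 ≈ 1.466
        "Gr": 1.0,             # green channels are the reference
        "Gb": 1.0,
        "B": 1.0 / b_over_g,   # e.g. 1 / 0.714 ≈ 1.400
    }

gains = wb_gains_from_white_point(0.682, 0.714)
```

Multiplying each raw channel by its gain neutralizes the estimated illuminant before the adaptation model is applied.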

In one example, as illustrated in fig. 2, the color processor 200 is configured to process an image to generate an adapted "perceived" image based at least in part on color temperature and illuminance. The color temperature of a light source generally refers to the temperature at which an ideal black-body radiator emits light of a hue comparable to that of the light source. Illuminance (lux) is a measure of the luminous flux incident per unit area of a surface, regardless of how the light is distributed in direction; one lux equals one lumen per square meter. Various examples use color temperature and illuminance as part of a color appearance/color preference model. As illustrated in fig. 2, the AWB control 202 is output to a model 204 that is based on color appearance, color preference, and color reproduction, whereas traditional methods are based only on color appearance and color preference. The present disclosure allows different gains to be applied across achromatic/chromatic color bands based on color temperature and illuminance. However, it should be understood that other image characteristics may be used to perform the processing described herein.

Fig. 3 illustrates an image color processing system 300 having a camera color image processing component 340 with a camera color image processing pipeline 302 according to one example. It should be noted that the image color processing system 300 may include additional, fewer, or alternative components; the components shown are provided for ease of illustration. The image color processing system 300 receives image data from a sensor, such as the sensor 108 (shown in FIG. 1). The acquired image data is processed by a demosaicing processing component 304. The demosaicing processing component 304 performs demosaicing, which is digital image processing that reconstructs a full-color image from the incomplete color samples output from an image sensor overlaid with a color filter array (CFA). Accordingly, the demosaicing processing component 304 performs color reconstruction.

The camera color matrix 306 is then used for color correction from the device RGB color space to the standard RGB (sRGB) color space. The camera color matrix 306 may be any suitable matrix used in camera color technology. That is, the raw sensor data is converted from the camera color space to sRGB. The tone curve 308 is then applied to the sRGB image data to adjust the exposure, lightness, and tone of the data. Any suitable tone curve known in image processing may be used.

Color space conversion 310 is performed on the tone-adjusted sRGB image data. The color space conversion 310 converts image data between color spaces using conversion parameters. In the illustrated example, the conversion parameters are configured to convert sRGB image data to YUV image data. That is, the color space conversion 310 converts the red, green, and blue color system to the YUV color space for use in the camera color image processing pipeline 302. Using the YUV color space for image data, rather than a direct RGB representation, allows a reduction in the bandwidth of data transferred through the camera color image processing pipeline 302 (e.g., a reduction in bandwidth for the color components). It should be noted that Y represents the luminance component (lightness), and U and V represent the chrominance (color) components. It should also be noted that other color models or color encodings may be used and are contemplated by the present disclosure. Thus, the various examples may be used in any color space. For example, the image data may be converted into the YCbCr, CIELAB, LCh, HSV, or HSL color space, among others. The present disclosure may be implemented with any linear or non-linear image color space.

Color space conversion 310 thus converts image data from the color space of the camera that captured and encoded the image data (e.g., a three-color model such as RGB) to a color space for color image processing, such as YUV. That is, in some examples, pixel data is converted from an image space, in which pixels are considered according to three color channels (red, blue, green) and each channel has a respective intensity, to a color space, in which pixels are considered according to two colors (e.g., chromatic or chrominance channels) and one achromatic channel (which represents the overall luminance of the pixel in terms of brightness or luminance). For example, the two color channels may be red and blue channels. The achromatic channel may be referred to as a luminance or luminosity (luma) channel. Thus, in the YUV color space, Y represents the luminance channel, U represents the blue color channel, and V represents the red color channel.
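As a rough illustration of the conversion into the luma/chroma form described above, a per-pixel RGB-to-YUV sketch might look as follows. The BT.601 coefficients used here are an assumption for illustration; the text only specifies that a YUV (or similar) color space is used:

```python
def rgb_to_yuv(r: float, g: float, b: float):
    """Convert one RGB pixel (components in 0..1) to YUV.
    Coefficients follow the common BT.601 convention (an assumption)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: the achromatic channel
    u = 0.492 * (b - y)                     # blue-difference chroma channel
    v = 0.877 * (r - y)                     # red-difference chroma channel
    return y, u, v
```

Note that for a neutral gray pixel (R = G = B), U and V are zero, which is what makes the achromatic/chromatic separation in the pipeline possible.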

The YUV image data is then processed by the camera color image processing pipeline 302. This processing includes separating the components by a separator 312, the separator 312 being configured to separate the luminance component (content) from the color component (content). That is, the separator 312 is configured to separate the luma content from the color content of the YUV image data to subsequently allow separate processing of the achromatic colors and the chromatic colors, including adjusting the achromatic colors and the chromatic colors, respectively, based on color temperature and illuminance. It should be noted that the U and V components correspond to the color differences (saturation) of the image pixels (e.g., they shift the color of a pixel without changing its brightness). In one example, the signal corresponding to the UV content 316 is separated from the signal corresponding to the Y content 314 using a signal separation technique from signal processing. That is, achromatic and chromatic separation 318 is performed such that the signals corresponding to the YUV data are separated, in one example by a demultiplexer, to produce Y content 314 separated from the UV content 316, where the UV content 316 is further separated into an achromatic color component and a chromatic color component.

The camera color image processing pipeline 302 thus separates the achromatic colors 322 from the chromatic colors 320. This separation allows separate processing of the achromatic colors 322 and chromatic colors 320 (e.g., achromatic image components and color image components), which are individually adjusted based on color temperature and illuminance level. In one example, the achromatic colors 322 and chromatic colors 320 are adjusted by thresholding based on correlated color temperature (CCT) and illuminance level. The separation of the achromatic colors 322 and chromatic colors 320 is performed, for example, by a grid-based technique, a pixel-based technique, an object and scene detection technique, a machine learning separation technique, or the like. In one example, a detection threshold technique is used, which may be sensor-response based, YUV based, or empirical (laboratory based). However, it should be understood that any suitable method may be used to separate the achromatic colors 322 and the chromatic colors 320.
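A minimal sketch of the detection-threshold approach above might classify each pixel by its chroma magnitude in UV space. The function name is hypothetical, and the two color levels correspond to the C1/C2 levels described for the adjustment table; the threshold values passed in are purely illustrative:

```python
import math

def classify_pixel(u: float, v: float, c1: float, c2: float) -> str:
    """Classify one pixel as achromatic, chromatic, or mixed based on
    its distance from the neutral (U = V = 0) axis, using two color
    levels c1 < c2."""
    chroma = math.hypot(u, v)   # chroma magnitude in the UV plane
    if chroma < c1:
        return "achromatic"     # near-neutral: achromatic path
    if chroma > c2:
        return "chromatic"      # strongly colored: chromatic path
    return "mixed"              # in between: blended between both paths
```

A "mixed" result would be handled by blending the achromatic and chromatic adjustments, consistent with the description of measurements falling between the two color levels.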

In the illustrated example, the UV content 316 undergoes camera color image processing in the camera color image processing pipeline 302, while the Y content 314 does not; this luminance component is instead used as an input to the achromatic and chromatic separation 318. That is, it is the color content that is camera-color-image processed by the camera color image processing pipeline 302. It should be noted that the parameters, characteristics, attributes, etc. used to perform the separation may be adjusted or tuned. Furthermore, the separation may be performed on an image dataset (such as a set of pixels), on a per-pixel basis, or on other image datasets.

In some examples, achromatic color 322 and chromatic color 320 are processed separately to better approximate a human visual imaging and cognitive system (e.g., a video will have colors that a human would see based on different lighting conditions). In one example, an achromatic color 322 is processed based on a color appearance value 324, and a chromatic color 320 is processed based on a defined preferred color reproduction value 326. That is, different values (e.g., weight values) are applied to the image data corresponding to achromatic color 322 and chromatic color 320, respectively, so that saturation/hue enhancement 328 for achromatic color 322 and saturation/hue enhancement 330 for chromatic color 320 are provided.

More specifically, various examples use the adjustment table 400 as shown in fig. 4 to adjust the achromatic colors 322 and chromatic colors 320, i.e., to perform saturation/hue enhancement 328 and saturation/hue enhancement 330 based on the color appearance value 324 and the defined preferred color reproduction value 326. In the illustrated example, the adjustment table 400 defines respective values for each of a plurality of color temperature levels corresponding to the CCT values in column 402 of the table 400. It should be noted that in one example, the CCT values in the adjustment table 400 correspond to values of different types of light sources (e.g., incandescent lamps, LEDs, etc.). However, it should be understood that the CCT values may be adjusted to include fewer or more values. That is, different values (or ranges of values) may be defined by adjusting the CCT values or the number of CCT values in column 402. In some examples, the AWB control 202 is configured to detect the CCT using a color temperature sensing technique from image processing. That is, the AWB control 202 acquires the color temperature information used to select the weights defined within the adjustment table 400.

For each CCT value, the corresponding values of color appearance, luminance level, luminance weight, color level, and chromaticity weight are defined in columns 404a and 404b, 406a and 406b, 408a and 408b, 410a and 410b, and 412a and 412b, respectively. In various examples, the values in each column are determined empirically, such as based on testing or experimentation. In some examples, modeling is used to determine values. Thus, the values in the adjustment table 400 may be adjusted to produce different color image processing results.

More specifically, the color appearance values in columns 404a and 404b define the desired appearance of the color corresponding to the CCT level. That is, these values define how the color should look (the perceptual aspect of human color vision based on lighting conditions). The illuminance level values in columns 406a and 406b define a luminance or brightness threshold. That is, in some examples, these values distinguish low-light from non-low-light (bright) conditions. The luminance weight values in columns 408a and 408b define brightness weight values corresponding to the illuminance level values. That is, in some examples, a weight is applied depending on whether a low-light condition or a non-low-light (bright or high) condition exists. The color level values in columns 410a and 410b define whether a color belongs to the achromatic colors 322 or the chromatic colors 320. That is, these values define the levels below and above which (i.e., below C1 and above C2) colors are considered to be within the achromatic colors 322 and the chromatic colors 320, respectively. Measurements falling between these levels are considered to have colors that are part of both the achromatic colors 322 and the chromatic colors 320. The chroma weight values in columns 412a and 412b define weight values corresponding to the achromatic colors 322 or the chromatic colors 320. That is, the chroma weight is a value that is applied depending on whether a color is within the achromatic colors 322 or the chromatic colors 320.
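The row structure described above might be sketched as follows. The field names mirror the described columns (illuminance threshold, luminance weights, chroma weights), but the table keys, all numeric values, and the nearest-CCT lookup are invented assumptions for illustration:

```python
# Hypothetical adjustment-table rows, keyed by CCT in kelvin.
ADJUSTMENT_TABLE = {
    2800: {"lux_threshold": 100, "lum_weight_low": 0.8, "lum_weight_high": 1.0,
           "chroma_weight_achromatic": 0.6, "chroma_weight_chromatic": 0.9},
    6500: {"lux_threshold": 100, "lum_weight_low": 0.9, "lum_weight_high": 1.0,
           "chroma_weight_achromatic": 0.8, "chroma_weight_chromatic": 1.0},
}

def select_weights(cct: float, lux: float, is_achromatic: bool):
    """Pick luminance and chroma weights from the row whose CCT is
    closest to the measured CCT, choosing the low- or non-low-light
    luminance weight and the achromatic or chromatic chroma weight."""
    row = ADJUSTMENT_TABLE[min(ADJUSTMENT_TABLE, key=lambda k: abs(k - cct))]
    lum_w = row["lum_weight_low"] if lux < row["lux_threshold"] else row["lum_weight_high"]
    chroma_w = row["chroma_weight_achromatic"] if is_achromatic else row["chroma_weight_chromatic"]
    return lum_w, chroma_w
```

For example, a measurement near 2800K in low light with a near-neutral pixel would select the low-light luminance weight and the achromatic chroma weight from the 2800K row.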

It should be understood that the values in each row of the adjustment table 400 corresponding to each different CCT value are used to process both the achromatic colors 322 and the chromatic colors 320. However, some values are used only to process one of the achromatic colors 322 or the chromatic colors 320, as discussed in more detail below. In various examples, the values in each row correspond to desired characteristics that better simulate how each color temperature (CCT) is handled by the human visual and cognitive system. That is, the colors generated and displayed in accordance with the present disclosure provide an improved user experience when viewing video images (e.g., in video conferencing), because they better represent what a user would see when looking directly at the scene, rather than at the displayed video image acquired by the camera device.

The various values in the adjustment table 400 may be changed or adjusted as needed or desired. For example, the color appearance defined in columns 404a, 404b may be adjusted by changing any of the R, G, or B values, which changes the ratio values in these columns. That is, the color appearance defined in columns 404a, 404b defines the way in which one wishes to see the image. In some examples, the color appearance values defined in columns 404a, 404b are based on survey data, experiments capturing data under various lighting conditions, and the like. However, the color appearance values defined in columns 404a, 404b may instead be defined based on other factors or criteria in order to produce an adjusted final image that appears more "real" to the viewer. In some examples, a camera engineer can adjust the final image by adjusting the values in the adjustment table 400.

Accordingly, it should be understood that the values in the adjustment table 400 are exemplary only and do not limit the present disclosure. Further, for measured CCT values that fall between the values in column 402, color image processing may, in one example, be performed using the row whose values are closest to the measured CCT value. However, other methods may be used. For example, in some examples, the adjustment values are determined using interpolation between two CCT values. That is, if the measured CCT value (such as determined by the AWB 202) is between two CCT values in column 402 of table 400, interpolation may be performed to determine, for example, the corresponding weight values to be applied (e.g., Cw1, Cw2, L1, L2).
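The interpolation step described above can be sketched as ordinary linear interpolation of any table weight (e.g., Cw1, Cw2, W1, W2) between the two bracketing CCT rows. The clamping behavior at the table edges is an assumption, since the text only describes the in-between case.

```python
def interp_weight(cct, cct_lo, w_lo, cct_hi, w_hi):
    """Linearly interpolate a table weight for a measured CCT that
    falls between two adjustment-table rows (cct_lo < cct_hi).

    A sketch of the interpolation step the text describes; values
    outside the bracketing rows are clamped (an assumption).
    """
    if cct <= cct_lo:
        return w_lo
    if cct >= cct_hi:
        return w_hi
    t = (cct - cct_lo) / (cct_hi - cct_lo)
    return w_lo + t * (w_hi - w_lo)
```

For example, a measured CCT of 3500 K halfway between 3000 K and 4000 K rows yields the midpoint of the two weights.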

An example for processing received image data to perform saturation/hue enhancement 328 and saturation/hue enhancement 330 based on color appearance value 324 and defined preferred color rendering value 326 will now be described. It should be understood that these examples are merely illustrative of calculations performed based on possible light levels and color levels. As an example of processing achromatic colors 322:

If C ≤ C1 and if L ≤ L1, the color adjustment (the new R/G value) is calculated using Equation 1.

In this equation, the new color appearance (the new R/G value) is determined by weighting the current color appearance value using the low-illumination weight W1 and the color level C2 for achromatic colors. It should be understood that L1 corresponds to the low light level in adjustment table 400 and L2 corresponds to the high light level in adjustment table 400. The new color appearance defines the desired response. That is, the new color appearance defines a more desirable or preferred color appearance, such as would be expected if the image were captured by the human eye and subjected to human visual and cognitive processing. In various examples, the new color appearance is output to the image processor to generate the final image 336 (shown in fig. 3).

It should be understood that the new B/G value is similarly calculated using Equation 1. Additionally, in a similar manner, when processing chromatic colors 320, chromatic color values are used, including C1. That is, for the different values, such as the color levels, different weight values are provided in the adjustment table 400 for each of achromatic color 322 and chromatic color 320. Thus, the color image processing of the present disclosure is performed differently (e.g., weighted differently) for achromatic colors 322 and chromatic colors 320, such that saturation/hue enhancement 328 and saturation/hue enhancement 330 better simulate human visual imaging, including the nonlinear processing of the human visual and cognitive system.

As another example:

If L ≤ L1 and if C < C2, the color adjustment (the new R/G value) is calculated using Equation 2.

In this equation, the new color appearance is determined by weighting the current color appearance value using the low-illumination weight W1 in combination with the achromatic and chromatic weights (Cw1 and Cw2, respectively). It should be noted that the low light level weight W1 is used because L ≤ L1; if L > L1, the high light illumination weight is used instead. In Equation 2, in this example, Cw1 and Cw2 are adjusted because the color is between achromatic color 322 and chromatic color 320, i.e., between the defined thresholds for each of achromatic color 322 and chromatic color 320. Thus, in this example, the weights Cw1 and Cw2 are themselves further weighted because the color is between the defined thresholds of achromatic color 322 and chromatic color 320. The various thresholds and values may be set and adjusted, for example, by a camera tuner (e.g., a person experienced in camera tuning).

It should be noted that values below C1 are defined as achromatic colors and values above C2 as chromatic colors, where values between C1 and C2 are partly achromatic and partly chromatic ("intermediate" colors). Thus, based on the color temperature and the illuminance level, the defined weights are applied depending on whether the color is one of the achromatic colors 322 and/or one of the chromatic colors 320. It should be noted that the new B/G color appearance is calculated using the same Equation 2.
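The C1/C2 classification just described can be sketched as follows. The linear blend factor for intermediate colors is an illustrative assumption, since the text says only that such colors are partly achromatic and partly chromatic.

```python
def classify_chroma(c, c1, c2):
    """Classify a measured chroma value against the C1/C2 levels:
    below C1 achromatic, above C2 chromatic, in between an
    'intermediate' color that is partly both.

    Returns the class name and a 0..1 blend factor toward the
    chromatic side (the linear blend is an assumption).
    """
    if c <= c1:
        return "achromatic", 0.0
    if c >= c2:
        return "chromatic", 1.0
    return "intermediate", (c - c1) / (c2 - c1)
```

An intermediate color would then combine the achromatic and chromatic weights in proportion to the blend factor.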

The camera color image processing pipeline 302 is thus configured to use the weight values from the adjustment table 400 that define the color appearance models of achromatic color 322 and chromatic color 320. For example, lighting conditions are typically not a neutral white (such as in a conference room with video conferencing capabilities), but instead have some associated color shift. The camera color image processing pipeline 302 adapts the image data to different lighting conditions in a manner that mimics the human visual system such that, for example, the image data is weighted (e.g., based on the weight values in the adjustment table 400) to cause white paper (an achromatic object) to appear more yellow under tungsten light and more blue under the daylight of a cloudy day. That is, the camera color image processing pipeline 302 uses the color appearance values 324, such as the preferred color reproduction values 326 for achromatic color 322 and chromatic color 320 from the adjustment table 400, to determine a new color appearance as defined by weight values that adjust the gains of the image data (e.g., different weights applied to achromatic color 322 and chromatic color 320). Saturation/hue enhancements 328, 330 are applied to the separately processed image data for achromatic color 322 and chromatic color 320 to produce the adjusted colors 332. In one example, adjusted color 332 is a combination of blending and compositing of colors. The adjustment table 400 generally defines a desired color appearance (R/G and B/G) for each color temperature, which is adjustable.

The output from the camera color image processing pipeline 302 is converted by color space conversion 334. In one example, color space conversion 334 includes JPEG encoding performed using a color mapping technique as part of the color space conversion. In some examples, JPEG compression is performed as part of the color space conversion. The final image 336 is output with an adjusted color appearance based on the separate color image processing of achromatic color 322 and chromatic color 320. In some examples, the present disclosure achieves complex human visual phenomena in camera color processing by creating separately customized color appearance models for achromatic colors 322 and chromatic colors 320. Thus, image quality and visual camera appearance are improved across the full spectrum of lighting conditions and light levels for photographs and video (including, for example, video teleconferencing). The color image processing of the present disclosure thus does not process all colors in the same way or in the same direction, but instead processes achromatic color 322 and chromatic color 320 at least in part separately, providing separate, different weights for each of achromatic color 322 and chromatic color 320.

Thus, various examples perform camera color image processing by differently weighting achromatic color 322 and chromatic color 320 based on color temperature (e.g., CCT) and illuminance. As described herein, different sets of weights are provided for each of achromatic color 322 and chromatic color 320 at the different color temperature levels. Thus, in some examples, the weights are based on the image content.

As illustrated in fig. 5, the final image 336 is generated using the gains calculated in accordance with the present disclosure (the new R/G and B/G gain values). That is, the gains are calculated using the different defined thresholds and weights by the color image processing component 340. More specifically, and as described herein, some examples use the defined values in the adjustment table 400 to calculate the gains used to generate the final image 336. It should be understood that the gains may be applied at different levels of granularity, such as the pixel level or the object level. For example, the calculated gains are applied per pixel in some examples, and per region in other examples.
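A per-pixel application of the calculated gains might look like the following sketch, where the gains are treated as multipliers on the R and B channels relative to G. The channel mapping and the clipping to an 8-bit range are assumptions, since the text does not specify how the R/G and B/G gains are realized.

```python
def apply_gains(pixel_rgb, rg_gain, bg_gain):
    """Apply new R/G and B/G gains to one RGB pixel.

    The text says gains may be applied per pixel or per region;
    this per-pixel form and the clipping to [0, 255] are assumptions.
    """
    r, g, b = pixel_rgb
    new_r = min(255.0, r * rg_gain)  # scale red relative to green
    new_b = min(255.0, b * bg_gain)  # scale blue relative to green
    return (new_r, g, new_b)
```

Applying the same function over a region instead of a pixel would correspond to the coarser granularity the text mentions.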

The gains calculated by color image processing component 340 are based on thresholds and weights for each of a plurality of defined color temperature 502 levels, and these thresholds and weights are different for achromatic color 322 and chromatic color 320. That is, different thresholds and weights are used for different lighting conditions, depending on whether the color is within achromatic color 322 or chromatic color 320. The lighting conditions may be defined, for example, based on the different light sources used to illuminate an area (e.g., one or more light sources illuminating a conference room that is imaged with a camera for a video conference).

For each level of color temperature 502, the color appearance 504 defines the desired color characteristics of the final image 336. For example, color appearance 504 defines the gains (R/G and B/G) that are adjusted to calculate the new gains to be applied (the new R/G and B/G values). That is, the tuned gain values are adjusted to new gain values based on the camera color image processing described herein, which processes achromatic color 322 and chromatic color 320 separately.

A threshold is also defined to further determine the weight to be applied to each color temperature 502. For example, different light illumination levels 506 define thresholds for different lighting conditions (e.g., low and high light conditions). Additionally, different chromatic levels 508 define thresholds to identify colors as being within achromatic color 322 or chromatic color 320.

Using the measured color temperature 502 (e.g., a CCT value) discussed above, and based on the lighting conditions (light illumination level) and whether the color is within achromatic color 322 and/or chromatic color 320, weights are applied in the camera color image processing to generate new gains based on the gains defined by the values of color appearance 504. That is, luminance weights 510 and/or chrominance weights 512 are applied based on (i) the lighting conditions and (ii) whether a color is within achromatic color 322 and/or chromatic color 320, respectively. As a result, the final image 336 is more pleasing when viewed, having colors that more closely match those produced by human visual and cognitive processing.
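The weight-selection logic just described (luminance weight chosen by lighting condition, chroma weight chosen by color class) can be sketched as follows. The dictionary keys and the blend for intermediate colors are illustrative assumptions layered on the quantities the text names (W1/W2, L1, C1/C2, Cw1/Cw2).

```python
def select_weights(lux, chroma, row):
    """Pick the luminance and chroma weights for one measurement:
    W1 under low light (L <= L1), W2 otherwise; Cw1 for achromatic
    colors (C <= C1), Cw2 for chromatic colors (C >= C2).

    'row' is a hypothetical adjustment-table record; intermediate
    colors blend both chroma weights (the blend is an assumption).
    """
    lum_w = row["w1"] if lux <= row["l1"] else row["w2"]
    if chroma <= row["c1"]:
        chroma_w = row["cw1"]
    elif chroma >= row["c2"]:
        chroma_w = row["cw2"]
    else:
        t = (chroma - row["c1"]) / (row["c2"] - row["c1"])
        chroma_w = (1 - t) * row["cw1"] + t * row["cw2"]
    return lum_w, chroma_w
```

The selected pair would then feed the gain calculation for the pixel or region being processed.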

It should be noted that the thresholds and weights in one example are defined as illustrated in the adjustment table 400 shown in fig. 4. For example, depending on camera characteristics and/or use cases, one or more weight values may be optimized to render the best visual experience on the camera.

It should also be understood that the output from the various examples is also useful for further processing, such as in image processing operations performed by the computing device 800, which will be described in more detail with reference to fig. 8.

Fig. 6 is a flowchart 600 illustrating exemplary operations involved in camera color image processing. In some examples, the operations described with respect to flowchart 600 are performed by computing device 800 of fig. 8. Flowchart 600 begins with operation 602, receiving image data acquired by an imaging device. For example, a camera acquires video conference images during an online video conference session. In some examples, the camera is located in a conference room. However, in other examples, the camera forms part of a portable device (e.g., a laptop or tablet) and is operated to acquire images for a video conference. It should be noted that the image data may be any type of image, such as a still image or video.

At operation 604, one or more achromatic colors are separated from one or more chromatic colors in the received image data. As described herein, defined thresholds are used in some examples to identify achromatic color 322 and chromatic color 320. This separation allows for separate processing of achromatic colors 322 and chromatic colors 320.

At operation 606, the one or more achromatic colors are then processed based at least on the color temperature and the illuminance to generate an adjusted color appearance for the one or more achromatic colors by applying one or more defined weight values associated with the one or more achromatic colors. Additionally, at operation 608, the one or more chromatic colors are processed based at least on the color temperature and the illuminance to generate an adjusted color appearance for the one or more chromatic colors by applying one or more defined weight values associated with the one or more chromatic colors. For example, as described herein, based at least on one or more defined thresholds (such as a low light level or a high light level), the respective one or more achromatic colors or one or more chromatic colors are processed by adjusting their gain settings. As described herein, new gain settings may be calculated using a plurality of weight values based on a plurality of thresholds. Further, in some examples, the one or more defined weight values associated with the one or more achromatic colors are different from the one or more defined weight values associated with the one or more chromatic colors. As such, the resulting new gain values are different for the processed one or more achromatic colors and the processed one or more chromatic colors.

It should be noted that the order of processing may be changed, such that some portions of the separated data are processed before or after other portions of the data are separated.

At operation 610, a final image is generated using the adjusted color appearance of the one or more achromatic colors and the adjusted color appearance of the one or more chromatic colors. For example, the generated image has improved characteristics under different lighting conditions, including an appearance closer to what the human eye would perceive under those lighting conditions.

Thus, by creating separately customized color appearance models for achromatic colors and chromatic colors, various examples provide improved images when displaying digital images (e.g., still images or video).

Fig. 7 is a flowchart 700 illustrating exemplary operations involved in generating color image processing gain values. In some examples, the operations described with respect to flowchart 700 are performed by computing device 800 of fig. 8. Flowchart 700 begins with operation 702, receiving image data acquired by an imaging device. For example, a camera acquires video conference images during an online video conference session. In some examples, the camera is located in a conference room. However, in other examples, the camera forms part of a portable device (e.g., a laptop or tablet) and is operated to acquire images for a video conference. It should be noted that the image data may be any type of image, such as a still image or video.

At operation 704, in some examples, the received image data is converted from the camera color space to a pipeline processing color space to allow for more efficient camera color image processing. The conversion in one example is from an RGB color space to a YUV color space. However, as described herein, the present disclosure contemplates other color conversions.
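One common form of the RGB-to-YUV conversion mentioned above is the BT.601 full-range transform sketched below. The text does not specify which matrix the pipeline uses, so this particular choice is an assumption.

```python
def rgb_to_yuv(r, g, b):
    """Convert full-range RGB to YUV using BT.601 luma coefficients.

    One common form of the color-space conversion the text mentions;
    the exact matrix used by the pipeline is not specified, so this
    is an illustrative choice.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma
    u = 0.492 * (b - y)                    # blue-difference chroma
    v = 0.877 * (r - y)                    # red-difference chroma
    return y, u, v
```

Note that for a neutral (achromatic) input such as white, both chroma components are zero, which is what makes the achromatic/chromatic separation convenient in this space.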

At operation 706, it is determined whether the image data (e.g., pixel data) is below the achromatic threshold. That is, it is determined whether the color level of the portion or component of the image data to be processed is below the achromatic threshold (e.g., C1). If the value is below the achromatic threshold, then at operation 708, color image processing is performed using the achromatic color weight value. If the value is not below the achromatic threshold, it is determined at operation 710 whether the value is above the chromatic threshold (e.g., C2). If the value is above the chromatic threshold, color image processing is performed using the chromatic color weight value at operation 712. If the value is not above the chromatic threshold (which means that the value is above the achromatic threshold but below the chromatic threshold), the color is an intermediate color, and color image processing is then performed at operation 714 using a combination of the achromatic color weight value and the chromatic color weight value.

After any of operations 708, 712, or 714, it is determined at operation 716 whether the measured color temperature value is equal to a defined color temperature value (such as one in an adjustment table). For example, it is determined whether the determined CCT value (e.g., as determined by the AWB control 202) is equal to one of the CCT values in an adjustment table (e.g., adjustment table 400). If the values are equal (i.e., the determined CCT value matches a CCT value in the adjustment table), then at 718, the weight values (and other thresholds, such as the light illumination level values) associated with that CCT value in the adjustment table are used in the camera color image processing (e.g., the camera color image processing pipeline 302 uses the table values). If the determined CCT value does not equal any defined CCT value in the adjustment table, extrapolated values are used at operation 720. That is, an extrapolation operation is performed using a mathematical extrapolation technique to determine extrapolated weight values (and other extrapolated thresholds, such as light illumination level values) from the values at two CCT levels.

New gain values for processing the image data are then generated at operation 722. That is, as described herein, the new gains to be applied to the image data (the new R/G and B/G values) are calculated and used to generate images having improved color appearance for different lighting conditions.

Accordingly, various examples perform camera color image processing with improved display characteristics. For example, in a video conferencing application, the displayed image from a remote camera is more consistent with human visual and cognitive processing.

Additional examples

Some aspects and examples disclosed herein relate to an image processing system, comprising: a memory associated with the computing device, the memory including a camera color image processing component; and a processor executing an image color processing system that uses the camera color image processing component to: receiving image data acquired by an imaging device; separating one or more achromatic colors from one or more chromatic colors in the received image data; processing the one or more achromatic colors based at least on the color temperature and the illuminance to generate an adjusted color appearance for the one or more achromatic colors by applying one or more defined weight values associated with the one or more achromatic colors; processing the one or more chromatic colors based at least on the color temperature and the illuminance to generate an adjusted color appearance for the one or more chromatic colors by applying one or more defined weight values associated with the one or more chromatic colors; wherein one or more defined weight values associated with the one or more achromatic colors are different from one or more defined weight values associated with the one or more chromatic colors; and generating a final image using the adjusted color appearance of the one or more achromatic colors and the adjusted color appearance of the one or more chromatic colors.

Additional aspects and examples disclosed herein relate to a computerized method for image processing, the computerized method comprising: receiving image data acquired by an imaging device; separating one or more achromatic colors from one or more chromatic colors in the received image data; processing the one or more achromatic colors based at least on the color temperature and the illuminance to generate an adjusted color appearance for the one or more achromatic colors by applying one or more defined weight values associated with the one or more achromatic colors; processing the one or more chromatic colors based at least on the color temperature and the illuminance to generate an adjusted color appearance for the one or more chromatic colors by applying one or more defined weight values associated with the one or more chromatic colors; wherein one or more defined weight values associated with the one or more achromatic colors are different from one or more defined weight values associated with the one or more chromatic colors; and generating a final image using the adjusted color appearance of the one or more achromatic colors and the adjusted color appearance of the one or more chromatic colors.

Additional aspects and examples disclosed herein are directed to the following: one or more computer storage media having computer-executable instructions for image processing, the computer-executable instructions, when executed by a processor, cause the processor to at least: receive image data acquired by an imaging device; separate one or more achromatic colors from one or more chromatic colors in the received image data; process the one or more achromatic colors based at least on the color temperature and the illuminance to generate an adjusted color appearance for the one or more achromatic colors by applying one or more defined weight values associated with the one or more achromatic colors; process the one or more chromatic colors based at least on the color temperature and the illuminance to generate an adjusted color appearance for the one or more chromatic colors by applying one or more defined weight values associated with the one or more chromatic colors; wherein one or more defined weight values associated with the one or more achromatic colors are different from one or more defined weight values associated with the one or more chromatic colors; and generate a final image using the adjusted color appearance of the one or more achromatic colors and the adjusted color appearance of the one or more chromatic colors.

Alternatively or additionally to other examples described herein, examples include any combination of:

wherein the defined weight values associated with the one or more chromatic colors and the defined weight values associated with the one or more achromatic colors comprise at least one of a luminance weight or a chrominance weight;

wherein the color temperature comprises a correlated color temperature CCT, and further comprising accessing a tuning table having defined values for one or more defined weight values associated with the one or more achromatic colors and one or more defined weight values associated with the one or more chromatic colors, the defined values corresponding to each of a plurality of CCT values;

wherein the defined values are adjustable to change the color appearance, color preference, and color reproduction of the received image data;

wherein the defined values are set to generate a final image having a color appearance based on human visual and cognitive processing;

wherein the received image data is in an RGB color space and further comprising converting the received image data into a YUV color space and performing saturation/hue enhancement as part of the processing of the one or more achromatic colors and the one or more chromatic colors.

wherein the processor executes the image color processing system to generate the final image using a weighted combination of a chrominance weight value associated with the one or more achromatic colors, a chrominance weight value associated with the one or more chromatic colors, and a luminance weight; and

interpolation is used to adjust at least one of one or more defined weight values associated with the one or more chromatic colors and one or more defined weight values associated with the one or more achromatic colors.

While aspects of the disclosure have been described in terms of various examples and their associated operations, those skilled in the art will appreciate that combinations of operations from any number of different examples are also within the scope of aspects of the disclosure.

Example operating Environment

Fig. 8 is a block diagram of an example computing device 800 for implementing various aspects disclosed herein, and is generally designated as computing device 800. Computing device 800 is but one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the examples disclosed herein. Neither should the computing device 800 be interpreted as having any dependency or requirement relating to any one or combination of components/modules illustrated. Examples disclosed herein may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program components, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program components including routines, programs, objects, components, data structures, etc., refer to code that performs particular tasks or implements particular abstract data types. The disclosed examples may be implemented in a variety of system configurations, including personal computers, laptop computers, smart phones, mobile tablets, handheld devices, consumer electronics, professional computing devices, and so forth. The disclosed examples may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.

Computing device 800 includes a bus 810 that directly or indirectly couples the following devices: computer storage memory 812, one or more processors 814, one or more presentation components 816, input/output (I/O) ports 818, I/O components 820, power supplies 822, and network components 824. Although the computing device 800 is depicted as appearing to be a single device, multiple computing devices 800 may work together and share the depicted device resources. For example, computer storage memory 812 may be distributed across multiple devices, processors 814 may be mounted on different devices, and so on.

Bus 810 represents what may be one or more busses (such as an address bus, a data bus, or a combination thereof). Although the various blocks of FIG. 8 are shown with lines for the sake of clarity, in reality, delineating various components is not so clear, and metaphorically, the lines would more accurately be grey and fuzzy. For example, a presentation component such as a display device may be considered an I/O component. Also, the processor has a memory. This is the nature of the art, and it is reiterated that the diagram of FIG. 8 is merely illustrative of an exemplary computing device that can be used in connection with one or more disclosed examples. No distinction is made between categories such as "workstation," "server," "laptop," "handheld device," and the like, as all are considered to be within the scope of FIG. 8 and are referred to herein as "computing devices." Computer storage memory 812 may take the form of the computer storage media referenced below and is operable to provide storage of computer readable instructions, data structures, program modules, and other data for computing device 800. For example, computer storage memory 812 may store an operating system, a general application platform, or other program modules and program data. Computer storage memory 812 may be used to store and access instructions configured to perform the various operations disclosed herein.

As mentioned below, the computer storage memory 812 may include computer storage media in the form of volatile and/or nonvolatile memory, removable or non-removable memory, a data disk in a virtual environment, or a combination thereof. Computer storage memory 812 may include any number of memories associated with computing device 800 or accessible by computing device 800. Memory 812 may be internal to computing device 800 (as shown in fig. 8), external to computing device 800 (not shown), or both (not shown). Examples of memory 812 include, but are not limited to, Random Access Memory (RAM); Read Only Memory (ROM); Electrically Erasable Programmable Read Only Memory (EEPROM); flash memory or other memory technology; CD-ROM, Digital Versatile Disks (DVD), or other optical or holographic media; magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices; memory wired to an analog computing device; or any other medium that can be used to encode desired information and be accessed by computing device 800. Additionally or alternatively, computer storage memory 812 may be distributed across multiple computing devices 800, for example, in a virtualized environment where instruction processing is performed across multiple devices 800. For purposes of this disclosure, "computer storage medium," "computer storage memory," "memory," and "memory device" are synonymous terms for computer storage memory 812, and none of these terms includes a carrier wave or propagated signaling.

Processor 814 may include any number of processing units that read data from various entities, such as memory 812 or I/O components 820. In particular, processor 814 is programmed to execute computer-executable instructions for implementing aspects of the present disclosure. The instructions may be executed by a processor, by multiple processors within computing device 800, or by a processor external to client computing device 800. In some examples, processor 814 is programmed to execute instructions such as those illustrated in the flowcharts and depicted in the figures discussed below. Also, in some examples, processor 814 represents one implementation of an analog technique to perform the operations described herein. For example, the operations may be performed by the analog client computing device 800 and/or the digital client computing device 800. Presentation component 816 presents the data indications to a user or other device. Exemplary presentation components include a display device, speaker, printing component, vibrating component, and the like. Those skilled in the art will understand and appreciate that computer data may be presented in a variety of ways, such as visually in a Graphical User Interface (GUI), audibly through speakers, wirelessly between computing devices 800, through a wired connection, or otherwise. The ports 818 allow the computing device 800 to be logically coupled to other devices including I/O components 820, some of which may be built in. Example I/O components 820 include, for example and without limitation, a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, and the like.

The computing device 800 may operate in a networked environment using logical connections to one or more remote computers via a network component 824. In some examples, the network component 824 includes a network interface card and/or computer-executable instructions (e.g., a driver) for operating the network interface card. Communications between computing device 800 and other devices may occur over any wired or wireless connection using any protocol or mechanism. In some examples, network component 824 may be used to wirelessly communicate data between public, private, or hybrid (public and private) devices using a transmission protocol using a short-range communication technology (e.g., Near Field Communication (NFC), bluetooth (tm) brand communication, etc.), or a combination thereof. For example, network component 824 communicates with network 828 via communication link 826.

Although described in connection with an example computing device 800, the examples of the present disclosure are capable of being implemented with numerous other general purpose or special purpose computing system environments, configurations, or devices. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with aspects of the disclosure include, but are not limited to: smart phones, mobile tablets, mobile computing devices, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, gaming consoles, microprocessor-based systems, set top boxes, programmable consumer electronics, mobile phones, mobile computing and/or communication devices with wearable or accessory form factors (e.g., watches, glasses, headphones, or ear buds), network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, VR devices, holographic devices, and the like. Such a system or device may accept input from a user in any manner, including from an input device such as a keyboard or pointing device, by gesture input, proximity input (such as by hovering), and/or by voice input.

Examples of the disclosure may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices in software, firmware, hardware, or a combination thereof. The computer-executable instructions may be organized into one or more computer-executable components or modules. Generally, program modules include, but are not limited to, routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types. Aspects of the disclosure may be implemented with any number and organization of such components or modules. For example, aspects of the disclosure are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other examples of the disclosure may include different computer-executable instructions or components having more or less functionality than illustrated and described herein. In examples involving a general-purpose computer, aspects of the present disclosure transform the general-purpose computer into a special-purpose computing device when configured to execute the instructions described herein.

By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable memory implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or the like. Computer storage media is tangible and mutually exclusive from communication media. Computer storage media is implemented in hardware and excludes carrier waves and propagated signals. Computer storage media for purposes of this disclosure are not signals per se. Exemplary computer storage media include hard disks, flash drives, solid state memory, phase change random access memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device. In contrast, communication media typically embodies computer readable instructions, data structures, program modules or the like in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.

As will be apparent to those skilled in the art, any of the ranges or device values given herein may be extended or altered without losing the effect sought.

Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

It will be appreciated that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those embodiments that solve any or all of the problems set forth or to those embodiments having any or all of the benefits and advantages set forth. It will be further understood that reference to "an" item refers to one or more of those items.

The embodiments illustrated and described herein, as well as embodiments not specifically described herein but within the scope of aspects of the claims, constitute exemplary means for camera color image processing. The illustrated one or more processors 814, together with computer program code stored in memory 812, constitute an exemplary processing device for camera color image processing.

The term "comprising" is used in this specification to mean including the feature(s) or act(s) that follow it, without precluding the presence or addition of one or more additional features or acts.

In some examples, the operations illustrated in the figures may be implemented as software instructions encoded on a computer-readable medium, in hardware programmed or designed to perform the operations, or both. For example, aspects of the present disclosure may be implemented as a system on a chip or other circuitry that includes a plurality of interconnected conductive elements.

The order of execution or completion of the operations in the examples of the disclosure illustrated and described herein is not essential, unless otherwise specified. That is, the operations may be performed in any order, unless otherwise specified, and examples of the disclosure may include additional or fewer operations than those disclosed herein. For example, it is contemplated that executing or performing an operation before, contemporaneously with, or after another operation is within the scope of aspects of the disclosure.

When introducing elements of aspects of the present disclosure or examples thereof, the articles "a," "an," "the," and "said" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements. The term "exemplary" is intended to mean "an example of." The phrase "one or more of A, B, and C" means "at least one of A and/or at least one of B and/or at least one of C."

Having described aspects of the present disclosure in detail, it will be apparent that modifications and variations are possible without departing from the scope of aspects of the disclosure as defined in the appended claims. As various changes could be made in the above constructions, products, and methods without departing from the scope of aspects of the disclosure, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
