Image processing apparatus and method, imaging element, and imaging apparatus
Note: This technology, "Image processing apparatus and method, imaging element, and imaging apparatus," was created by 井原利昇 and 名云武文 on 2019-02-15. The present disclosure relates to an image processing apparatus and method, an imaging element, and an imaging apparatus capable of suppressing deviation of signal values caused by encoding/decoding of an amplified signal group. In the present disclosure, processing is adaptively performed on an image whose signal has been amplified, and the image is then encoded. For example, encoding is performed after adding, to each pixel value of the image, an offset value randomly set within a value range that depends on the gain value of the signal amplification performed on the image. The present disclosure is applicable to, for example, an image processing apparatus, an image encoding apparatus, an image decoding apparatus, an imaging element, and an imaging apparatus.
1. An image processing apparatus comprising:
an adaptive processing section that performs adaptive image processing on the image on which the signal amplification has been performed; and
an encoding section that simply encodes the image on which the adaptive image processing has been performed by the adaptive processing section.
2. The image processing apparatus according to claim 1, wherein
the adaptive processing section performs image processing of adding an offset value randomly set within a value range to each pixel value of the image, the value range depending on a gain value of the signal amplification performed on the image, and
the encoding section performs simple encoding on the image to which the offset value has been added for each pixel value by the adaptive processing section.
3. The image processing apparatus according to claim 2, wherein
the adaptive processing section adds, to each pixel value of the image as the offset value, a pseudo random number corrected to fall within a value range depending on the gain value.
4. The image processing apparatus according to claim 1, wherein
the adaptive processing section performs image processing of subtracting, from each pixel value of the image, an offset value based on an average pixel value of the image and a quantization value of the simple encoding performed by the encoding section, and
the encoding section simply encodes the image obtained by subtracting the offset value from each pixel value by the adaptive processing section.
5. The image processing apparatus according to claim 4, wherein
the average pixel value includes an average pixel value of images of frames preceding a current frame as a processing target.
6. The image processing apparatus according to claim 5, wherein
the quantization value includes a value depending on a compression rate of the simple encoding.
7. The image processing apparatus according to claim 5, wherein
the quantization value is an average value of quantization values of respective pixels in the simple encoding of an image of a frame preceding the current frame as the processing target.
8. The image processing apparatus according to claim 4, wherein
the adaptive processing section subtracts the offset value from each pixel value of the image for each color.
9. The image processing apparatus according to claim 4, further comprising:
a decoding section that performs simple decoding on the encoded data generated by the encoding section; and
an offset adding section that adds an offset value based on the average pixel value of the image and the quantization value of the simple encoding to each pixel value of the decoded image generated by the decoding section.
10. The image processing apparatus according to claim 1, wherein
the adaptive processing section performs image processing of setting a range of quantization values for the simple encoding performed by the encoding section, and
the encoding section simply encodes the image on the basis of the range of quantization values set by the adaptive processing section, and generates encoded data including information on the range of quantization values.
11. The image processing apparatus according to claim 10, wherein
the adaptive processing section sets the range of quantization values according to a gain value of the signal amplification performed on the image.
12. The image processing apparatus according to claim 10, further comprising:
a decoding section that performs simple decoding on the encoded data on the basis of the information on the range of quantization values included in the encoded data generated by the encoding section.
13. The image processing apparatus according to claim 1, wherein
the adaptive processing section performs image processing of dividing each pixel value of the image by a gain value of the signal amplification performed on the image, and
the encoding section simply encodes the image in which each pixel value has been divided by the gain value by the adaptive processing section.
14. The image processing apparatus according to claim 13, further comprising:
a decoding section that performs simple decoding on the encoded data generated by the encoding section; and
a gain value multiplying section that multiplies each pixel value of the decoded image generated by the decoding section by the gain value.
15. The image processing apparatus according to claim 1, further comprising:
an amplification section that performs signal amplification on the image, wherein,
the adaptive processing section performs adaptive image processing on the image on which the signal amplification has been performed by the amplification section.
16. The image processing apparatus according to claim 1, further comprising:
a gain value setting section that sets a gain value of the signal amplification performed on the image.
17. The image processing apparatus according to claim 1, further comprising:
a recording section that records the encoded data generated by the encoding section.
18. An image processing method comprising:
performing adaptive image processing on the image on which the signal amplification has been performed; and
performing simple encoding on the image on which the adaptive image processing has been performed.
19. An imaging element comprising:
an imaging section that captures an image of a subject;
an adaptive processing section that performs adaptive image processing on the captured image generated by the imaging section and on which signal amplification has been performed; and
an encoding section that performs simple encoding on the captured image on which the adaptive image processing has been performed by the adaptive processing section.
20. An imaging apparatus comprising:
an imaging section that captures an image of a subject;
an adaptive processing section that performs adaptive image processing on a captured image that is generated by the imaging section and on which signal amplification has been performed;
an encoding section that simply encodes the captured image on which the adaptive image processing has been performed by the adaptive processing section to generate encoded data; and
a decoding section that performs simple decoding on the encoded data generated by the encoding section.
Technical Field
The present disclosure relates to an image processing apparatus and method, an imaging element, and an imaging apparatus, and particularly relates to an image processing apparatus and method, an imaging element, and an imaging apparatus, which can suppress signal value deviation generated by encoding and decoding an amplified signal group.
Background
Various methods have been proposed for encoding (compressing) and decoding (decompressing) an image. For example, a method has been proposed that encodes (compresses) image data to a fixed length by performing DPCM (differential pulse code modulation) on sets of image data and adding thinning data (see, for example, patent document 1).
[ list of references ]
[ patent document ]
[ patent document 1]
Japanese patent laid-open publication No. 2014-103543.
Disclosure of Invention
[ problem ] to
However, when a captured image obtained by high digital gain imaging, in which a pixel signal is amplified by using an imaging element or the like, is encoded and decoded by this method, a pixel value deviation may occur in the decoded image.
The present disclosure has been achieved in view of these circumstances, and makes it possible to suppress signal value deviation generated by encoding and decoding an amplified signal group.
[ solution of problem ]
An image processing apparatus according to an aspect of the present technology includes: an adaptive processing section that performs adaptive image processing on the image on which the signal amplification has been performed; and an encoding section that simply encodes the image on which the adaptive image processing has been performed by the adaptive processing section.
An image processing method according to an aspect of the present technology includes: performing adaptive image processing on the image on which the signal amplification has been performed; and performing simple encoding on the image that has undergone the adaptive image processing.
An imaging element according to another aspect of the present technology includes: an imaging section that captures an image of a subject; an adaptive processing section that performs adaptive image processing on the captured image generated by the imaging section and on which signal amplification has been performed; and an encoding section that performs simple encoding on the captured image on which the adaptive image processing has been performed by the adaptive processing section.
An imaging device according to still another aspect of the present technology includes an imaging element including: an imaging section that captures an image of a subject; an adaptive processing section that performs adaptive image processing on the captured image that has been generated by the imaging section and on which signal amplification has been performed; and an encoding section that generates encoded data by performing simple encoding on the captured image on which the adaptive image processing has been performed by the adaptive processing section; and a decoding section that performs simple decoding on the encoded data generated by the encoding section.
In an image processing apparatus according to an aspect of the present technology, adaptive image processing is performed on an image on which signal amplification has been performed; and performs simple encoding on the image that has undergone the adaptive image processing.
In an imaging element according to another aspect of the present technology, adaptive image processing is performed on a captured image that is generated by capturing an image of a subject and on which signal amplification has been performed; and performs simple encoding on the image on which the adaptive image processing has been performed.
In an imaging apparatus according to still another aspect of the present technology, adaptive image processing is performed on a captured image that is generated by capturing an image of a subject and on which signal amplification has been performed, simple encoding is performed on the captured image on which the adaptive image processing has been performed, and simple decoding is performed on encoded data thus generated.
[ advantageous effects of the invention ]
According to the present disclosure, images may be processed. In particular, it is possible to suppress signal value deviation generated by encoding and decoding the amplified signal group.
Drawings
[ FIG. 1]
Fig. 1 is a diagram showing a histogram of a captured image.
[ FIG. 2]
Fig. 2 is a diagram showing an example of fixed length coding.
[ FIG. 3]
Fig. 3 is a diagram showing an example of the DC offset.
[ FIG. 4]
Fig. 4 is a diagram showing a list of processing methods employing the present technology.
[ FIG. 5]
Fig. 5 is a block diagram showing a main configuration example of an image processing system that executes method #1.
[ FIG. 6]
Fig. 6 is a diagram showing an example of a change in the histogram generated as a result of the processing.
[ FIG. 7]
Fig. 7 is a diagram showing a main configuration example of the random offset addition section.
[ FIG. 8]
Fig. 8 is a diagram showing an example of syntax for imposing a limit on the value range of the offset.
[ FIG. 9]
Fig. 9 is a diagram showing an example of imposing a limit on the value range of the offset.
[ FIG. 10]
Fig. 10 is a flowchart for explaining an example of the flow of the encoding process based on method #1.
[ FIG. 11]
Fig. 11 is a flowchart for explaining an example of the flow of the offset addition processing.
[ FIG. 12]
Fig. 12 is a flowchart for explaining an example of the flow of the decoding process based on method #1.
[ FIG. 13]
Fig. 13 is a block diagram showing another configuration example of an image processing system that executes method #1.
[ FIG. 14]
Fig. 14 is a block diagram showing a main configuration example of an image processing system that executes method #2.
[ FIG. 15]
Fig. 15 is a block diagram showing a main configuration example of the subtraction offset setting section.
[ FIG. 16]
Fig. 16 is a diagram showing an example of a table for selecting an offset.
[ FIG. 17]
Fig. 17 is a flowchart for explaining an example of the flow of the encoding process based on method #2.
[ FIG. 18]
Fig. 18 is a flowchart for explaining an example of the flow of the offset value setting process.
[ FIG. 19]
Fig. 19 is a flowchart for explaining an example of the flow of the decoding process based on method #2.
[ FIG. 20]
Fig. 20 is a block diagram showing another configuration example of the subtraction offset setting section.
[ FIG. 21]
Fig. 21 is a flowchart for explaining an example of the flow of the offset value setting process.
[ FIG. 22]
Fig. 22 is a block diagram showing another configuration example of an image processing system that executes method #2.
[ FIG. 23]
Fig. 23 is a block diagram showing a configuration example of an image processing system that executes method #3.
[ FIG. 24]
Fig. 24 is a diagram showing an example of a table for selecting a range of quantization values.
[ FIG. 25]
Fig. 25 is a diagram showing an example of setting the range of quantization values.
[ FIG. 26]
Fig. 26 is a diagram showing a configuration example of encoded data.
[ FIG. 27]
Fig. 27 is a flowchart for explaining an example of the flow of the encoding process based on method #3.
[ FIG. 28]
Fig. 28 is a flowchart for explaining an example of the flow of the decoding process based on method #3.
[ FIG. 29]
Fig. 29 is a block diagram showing another configuration example of an image processing system that executes method #3.
[ FIG. 30]
Fig. 30 is a block diagram showing a main configuration example of an image processing system that executes method #4.
[ FIG. 31]
Fig. 31 is a flowchart for explaining an example of the flow of the encoding process based on method #4.
[ FIG. 32]
Fig. 32 is a flowchart for explaining an example of the flow of the decoding process based on method #4.
[ FIG. 33]
Fig. 33 is a block diagram showing another configuration example of an image processing system that executes method #4.
[ FIG. 34]
Fig. 34 is a diagram showing a main configuration example of an imaging element to which the present technology is applied.
[ FIG. 35]
Fig. 35 is a diagram showing a main configuration example of an imaging element to which the present technology is applied.
[ FIG. 36]
Fig. 36 is a flowchart for explaining an example of the flow of the imaging process.
[ FIG. 37]
Fig. 37 is a diagram showing a main configuration example of an imaging apparatus to which the present technology is applied.
[ FIG. 38]
Fig. 38 is a flowchart for explaining an example of the flow of the imaging process.
Detailed Description
Hereinafter, an embodiment for carrying out the present disclosure (hereinafter referred to as an embodiment) will be explained. Note that description will be made in the following order.
1. Fixed length coding
2. General concept (overview of the method)
3. First embodiment (details of method # 1)
4. Second embodiment (details of method # 2)
5. Third embodiment (details of method # 3)
6. Fourth embodiment (details of method # 4)
7. Fifth embodiment (application example: imaging element)
8. Sixth embodiment (application example: image forming apparatus)
9. Supplementary notes
<1. fixed length coding >
< supporting documents supporting technical contents and technical terms, etc. >
The scope of the present technical disclosure encompasses not only the disclosure in the embodiments but also the disclosure in the following documents that are well known at the time of filing this application.
Patent document 1: (see above)
Patent document 2: japanese laid-open patent publication No. 2006-303689
Patent document 3: US 2011/0292247
Patent document 4: US 2012/0219231
That is, the disclosures in the above documents also constitute a basis for determining the support requirement.
< high digital gain imaging >
For example, there is an imaging method called high digital gain imaging, which multiplies a captured image by a specified gain value to enable imaging in a dark place. For example, assume a case where the histogram of A of fig. 1 is obtained from a captured image of a black scene (for example, a captured image obtained with the lens cap left on). Note that, in the histogram shown in A of fig. 1, the horizontal axis indicates the pixel value, and the vertical axis indicates the frequency (number of pixels).
When the captured image is multiplied by a digital gain of eight to enhance the sensitivity, the difference between the pixel values of the respective pixels is also increased eightfold. Therefore, the histogram of the image is expanded as shown in B of fig. 1. That is, the histogram that is dense in A of fig. 1 becomes sparse in B of fig. 1, with the values scattered to multiples of 8, such as 48, 56, 64, 72, and 80.
< Generation of DC offset by encoding and decoding >
Meanwhile, various methods have been proposed for encoding (compressing) and decoding (decompressing) an image. For example, as disclosed in patent document 1, there is a method of encoding (compressing) image data to a fixed length by performing DPCM on sets of image data and adding thinning data.
However, if a captured image obtained by the above-described high digital gain imaging is encoded and decoded by this method, a histogram such as that shown in C of fig. 1 is obtained for the decoded image. That is, errors in pixel values occur only on the + direction side. Therefore, a deviation of the average pixel value (also referred to as a DC deviation) may occur in the decoded image.
< principle of generating DC offset >
The generation of the DC offset will be explained more specifically. First, the above-described fixed length coding will be described. Fig. 2 is a schematic diagram showing image data of a pixel block including 16 pixels.
The above-described fixed length coding is performed for each block. First, each pixel value in the block is quantized, and a specified number of lower bits are deleted from the LSB side. That is, only the bits represented by the white squares in fig. 2 remain. Next, the difference between the quantized pixel value and that of the next pixel is calculated (DPCM is performed). The obtained differential value (DPCM residual) becomes the encoded data.
More specifically, for example, the pixel data in the block of fig. 2 are processed in order from the left side to the right side. PCM (pulse code modulation) encoding is performed on the upper 7 bits (7 bits from the MSB) of the pixel data processed first (the leftmost column in fig. 2). That is, the upper 7 bits of the first pixel data are output, uncompressed, as encoded data. Then, DPCM (differential pulse code modulation) encoding is performed on the second and subsequent pixel data. That is, from the upper 7 bits of each of the second and subsequent pixel data from the left side of fig. 2, the upper 7 bits of the previous (left-neighboring) pixel data are subtracted, and the differential value is output as encoded data.
Then, in order to adjust the length of the encoded data to the fixed length, the difference between the specified data amount and the data amount of the encoded data (i.e., the data shortage) is calculated, and the shortage is filled with bits from the deleted lower bits (thinning is performed). In fig. 2, light gray squares represent bits added by thinning.
To decode the encoded data, the bits added by thinning are first extracted, and the DPCM differential values are added in order to reconstruct the upper bits of each pixel. The extracted bits are then appended to the upper bits, and inverse quantization is further performed. That is, the bits lost by encoding are replaced with specified values.
In other words, as a result of this encoding, the information in the bits represented by the dark gray squares in fig. 2 is lost. That is, this fixed length encoding/decoding is performed in an irreversible (lossy) manner.
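The block-based scheme described above can be summarized in a short sketch. This is an illustrative reconstruction rather than the exact format of patent document 1: the 10-bit pixels and 7-bit quantization follow the fig. 2 example, and the thinning/refill step is omitted for brevity.

```python
def simple_encode_block(pixels, keep_bits=7, pixel_bits=10):
    """Fixed length encoding of one block: quantize each pixel to its upper
    `keep_bits` bits, PCM-encode the first pixel, DPCM-encode the rest."""
    shift = pixel_bits - keep_bits          # lower bits deleted by quantization
    q = [p >> shift for p in pixels]        # keep only the upper bits
    codes = [q[0]]                          # first pixel: PCM (uncompressed)
    codes += [q[i] - q[i - 1] for i in range(1, len(q))]  # DPCM residuals
    return codes

def simple_decode_block(codes, keep_bits=7, pixel_bits=10):
    """Decode the DPCM residuals, then inverse-quantize by filling the lost
    lower bits with the intermediate value (e.g. '100' when 3 bits are lost)."""
    shift = pixel_bits - keep_bits
    q = [codes[0]]
    for c in codes[1:]:
        q.append(q[-1] + c)                          # undo DPCM
    mid = (1 << (shift - 1)) if shift > 0 else 0     # intermediate value
    return [(v << shift) | mid for v in q]
```

For instance, encoding and decoding the adjacent input pixel values 63 and 64 with 3 lower bits lost yields the decoded values 60 and 68, the same quantization errors (-3 and +4) worked through for fig. 3 below.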
In such fixed-length encoding and decoding, image data is encoded and decoded in a simpler manner than encoding and decoding methods such as AVC (advanced video coding) or HEVC (high efficiency video coding). Therefore, the fixed length encoding and decoding involves a lower load than AVC, HEVC, or the like, so that encoding and decoding can be performed at higher speed. In addition, miniaturization can be easily achieved, so that encoding and decoding can be performed at lower cost.
Such encoding is sometimes referred to as simple encoding (or simple compression). Also, decoding corresponding to such simple encoding is sometimes referred to as simple decoding (or simple decompression). Simple coding is an image coding technique for reducing the data transmission rate and the storage bandwidth. In simple encoding, data is encoded (compressed) to keep subjective image quality at the same level. In order to maintain subjective image quality at the same level, the compression rate of simple encoding is generally lower than that of general encoding such as AVC (e.g., approximately 50%).
In such simple encoding (simple compression) and simple decoding (simple decompression), the code amount is a fixed length. Therefore, management of encoded data becomes easier than in the case where the code amount is variable. Therefore, for example, management of encoded data in a DRAM in which encoded data is recorded is also easy, so that reading and writing can be performed at higher speed, and cost can be further reduced.
Also, in such simple encoding (simple compression) and simple decoding (simple decompression), blocks of image data are independently encoded and decoded. Accordingly, the entire picture may be encoded and decoded, and only a portion of the picture may also be encoded and decoded. That is, in the case of encoding and decoding only a part of a picture, encoding and decoding of unnecessary data can be suppressed, so that more efficient encoding and decoding can be performed. That is, unnecessary increase in load of encoding and decoding can be suppressed, so that the processing speed can be increased and the cost can be reduced.
As previously described, the information (bits) lost through simple encoding and simple decoding (at quantization and inverse quantization) is decompressed with intermediate values during decoding (fig. 3). For example, as shown in fig. 3, in the case where the lower 1 bit is lost due to quantization, "1" is set at the lower 1 bit during decoding. In the case where the lower 2 bits are lost due to quantization, "10" (= 2) is set at the lower 2 bits during decoding. Also, in the case where the lower 3 bits are lost due to quantization, "100" (= 4) is set at the lower 3 bits during decoding.
When the uncoded bits are decompressed with a specified value (e.g., an intermediate value) in the manner described above, an input-output error is generated. Such an error between the input pixel value and the output pixel value generated by quantization is also referred to as a quantization error. For example, as shown in the upper side of fig. 3, assume that a pixel value "63" (0000111111), also referred to as the input pixel value, is input. In the case where the lower 1 bit is lost due to quantization, "1" is set at the lower 1 bit in the above-described manner, and the decompressed pixel value (also referred to as the output pixel value) is "63" (0000111111). That is, in this case, the quantization error is "0".
In addition, in the case where the lower 2 bits are lost due to quantization, "10" is set at the lower 2 bits in the above-described manner, and therefore, the output pixel value is "62" (0000111110). The quantization error is therefore "-1". In the case where the lower 3 bits are lost due to quantization, "100" is set at the lower 3 bits, and the output pixel value is "60" (0000111100). The quantization error is therefore "-3".
Meanwhile, assume that the input pixel value is "64" (0001000000), as shown in the lower side of fig. 3. In the case where the lower 1 bit is lost due to quantization, "1" is set at the lower 1 bit in the above-described manner, and therefore, the output pixel value is "65" (0001000001). The quantization error is therefore "+1".
In addition, in the case where the lower 2 bits are lost due to quantization, "10" is set at the lower 2 bits, and the output pixel value is "66" (0001000010). The quantization error is therefore "+2". In the case where the lower 3 bits are lost, "100" is set at the lower 3 bits, and the output pixel value is "68" (0001000100). The quantization error is therefore "+4".
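The two worked examples above can be verified with a short script; `mid_reconstruct` is a sketch of the intermediate-value rule of fig. 3:

```python
def mid_reconstruct(value, lost_bits):
    """Quantize `value` by dropping `lost_bits` LSBs, then decompress the
    lost bits with the intermediate value (1 -> '1', 2 -> '10', 3 -> '100')."""
    if lost_bits == 0:
        return value
    mid = 1 << (lost_bits - 1)
    return ((value >> lost_bits) << lost_bits) | mid

# Quantization errors for the input pixel values 63 and 64 of fig. 3:
for v in (63, 64):
    errors = [mid_reconstruct(v, n) - v for n in (1, 2, 3)]
    print(v, errors)   # 63 -> [0, -1, -3], 64 -> [1, 2, 4]
```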
That is, the direction of the quantization error depends on the input pixel value. Meanwhile, in the case where the captured image is multiplied by a digital gain as described above, the dense histogram shown in A of fig. 1 becomes sparse, spread out according to the gain value, as shown in B of fig. 1. As a result of this expansion, many pixel values are converted into pixel values having quantization errors in the same direction. Therefore, the direction of the quantization error may be shifted to one side. For example, in the case where many pixel values fall on multiples of 8 as shown in B of fig. 1, the direction of the quantization error is shifted toward the + direction as shown in C of fig. 1.
When the direction in which the quantization error is generated is shifted to one side, the average pixel value of the image obtained by encoding and decoding the input image (captured image), also referred to as the decoded image, may deviate from the average pixel value of the input image (a DC deviation is generated).
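This one-sided shift can be confirmed numerically: after a gain of 8, every pixel value is a multiple of 8, so its lower 3 bits are zero and the intermediate-value reconstruction "100" pushes every pixel up by exactly +4. A sketch under the fig. 3 reconstruction rule:

```python
def mid_reconstruct(value, lost_bits):
    """Drop `lost_bits` LSBs, then fill them with the intermediate value."""
    mid = 1 << (lost_bits - 1)
    return ((value >> lost_bits) << lost_bits) | mid

raw = [6, 7, 8, 9, 10]             # dense histogram before amplification
amplified = [p * 8 for p in raw]   # gain of 8: 48, 56, 64, 72, 80
errors = [mid_reconstruct(p, 3) - p for p in amplified]
print(errors)                      # every error is +4, i.e., a DC deviation of +4
```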
When the average pixel value deviation (DC deviation) is generated, the subjective image quality of the decoded image is reduced (degraded) (i.e., the visual difference between the decompressed image and the input image is increased). For example, if the average pixel value shifts in the + direction as described above, the decompressed image may be brighter than the input image.
Further, for example, in a case where an input image (captured image) is regarded as a measurement result (sensor data), there is a possibility that data accuracy is lowered (data with lower accuracy is obtained). When the data accuracy is lowered, there is a possibility that the influence on subsequent processing (control, calculation, and the like) using the decoded image (sensor data) increases. For example, in the case where black level setting is performed on a captured image (sensor data) obtained by imaging a black image as in the example of fig. 1, the pixel value to be set to the black level may deviate due to DC offset.
Note that when the captured image is multiplied by a digital gain as described above, the pixel value difference increases according to the gain value. Therefore, the DPCM residual increases, so that the coding efficiency may be reduced. As described above, since such encoding is irreversible fixed-length encoding, there is a possibility that a reduction in encoding efficiency leads to a reduction (degradation) in subjective image quality of a decoded image.
<2. general concept >
< adaptive processing of digital gain >
For this reason, adaptive image processing is performed on an image on which signal amplification has been performed, and simple encoding is performed on an image subjected to the adaptive image processing.
For example, an image processing apparatus includes: an adaptive processing section that performs adaptive image processing on the image on which the signal amplification has been performed; and an encoding section that performs simple encoding on the image that has undergone the adaptive image processing performed by the adaptive processing section.
As a result of this configuration, it is possible to suppress signal value deviation (e.g., DC deviation) generated by encoding and decoding the signal group amplified with the digital gain.
More specifically, as the adaptive image processing, for example, any one of the processes described in the table of fig. 4 (any one of methods #1 to #4) is performed.
For example, in method #1, an offset value randomly set within a value range depending on the gain value of the signal amplification is added to each pixel value of the image, and the image is then simply encoded.
Accordingly, a sparse histogram in which pixel values are concentrated on several values as in B of fig. 1 can be prevented, so that the direction of quantization errors of the respective pixel values generated by simple encoding and simple decoding can be suppressed from being shifted toward one side. That is, the DC offset can be suppressed.
Therefore, when method #1 is applied, a reduction in the subjective image quality of the decoded image and a reduction in data accuracy can be suppressed.
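Method #1 can be sketched as follows. The per-pixel pseudo random number and the gain-dependent range follow claims 2 and 3; the assumption that the range is [0, gain) and the fixed seed are illustrative choices, not the patent's specification.

```python
import random

def add_random_offsets(pixels, gain, seed=0):
    """Method #1 sketch: add to each pixel value a pseudo random offset
    corrected to fall within a value range depending on the gain value
    (assumed here to be [0, gain)), re-densifying the sparse histogram."""
    rng = random.Random(seed)   # fixed seed only for reproducibility
    return [p + rng.randrange(gain) for p in pixels]

amplified = [p * 8 for p in (6, 7, 8, 9, 10)]   # sparse: multiples of 8
dithered = add_random_offsets(amplified, gain=8)
print(dithered)   # no longer confined to multiples of 8
```

Because the dithered values are spread over all residues modulo 8, their quantization errors point in both directions instead of uniformly toward +.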
Also, in method #2, an offset value based on an average pixel value of the image and the quantization value of the simple encoding is subtracted from each pixel value of the image, and the image is then simply encoded.
As explained above with reference to fig. 1, when an image is multiplied by a digital gain, the histogram is expanded with intervals according to the gain value (it becomes sparse). Further, in many pixels, quantization errors of the pixel values are generated in the same direction. That is, the direction of the quantization error is shifted to one side. However, when the offset is subtracted from each pixel value in the above-described manner, the quantization error becomes small. As a result, the shift of the quantization error to one side is reduced. That is, it is possible to suppress a deviation in the direction of the quantization errors of pixel values generated by simple encoding and simple decoding.
It is noted that the pixel value that produces a smaller quantization error (e.g., one whose lost lower bits equal the intermediate value) depends on the number of bits lost by quantization. Therefore, it is sufficient to set the offset value according to the number of bits lost. That is, in this method, an offset depending on the number of bits lost by quantization is given to the image. In addition, since it is sufficient to shift the pixel values toward such a value, the offset may be subtracted from the pixel value or may be added to the pixel value.
Further, as a result of multiplication with the digital gain in the above-described manner, many pixel values are converted into values for generating respective quantization errors oriented in the same direction. Thus, each pixel value is given (e.g., subtracted from) an offset such as previously described, so that quantization error can be reduced for many pixel values. That is, the quantization error as a whole can be suppressed from shifting to one side. Thus, the offset value need only be set according to the average pixel value (and the number of missing bits) of the image. As a result, the offset value can be easily obtained as compared with the case where the offset value is obtained for each pixel.
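A sketch of method #2: the offset is derived from the average pixel value and the quantization value (number of bits lost), and the same offset is added back after decoding, as in claims 4 and 9. The exact derivation rule below is an assumption: it shifts the average onto a value whose lost lower bits equal the intermediate value, where the reconstruction error is zero.

```python
def mid_reconstruct(value, lost_bits):
    """Drop `lost_bits` LSBs, then fill them with the intermediate value."""
    mid = 1 << (lost_bits - 1)
    return ((value >> lost_bits) << lost_bits) | mid

def subtraction_offset(avg_pixel_value, lost_bits):
    """Offset based on the average pixel value and the quantization value:
    move the average onto a zero-error reconstruction point (assumed rule)."""
    mid = 1 << (lost_bits - 1)
    return (avg_pixel_value % (1 << lost_bits)) - mid

avg = 64                                    # typical value after a gain of 8
off = subtraction_offset(avg, lost_bits=3)  # 0 - 4 = -4
shifted = avg - off                         # 68: lower 3 bits are '100'
decoded = mid_reconstruct(shifted, 3) + off # add the offset back after decoding
print(decoded)                              # 64: quantization error removed
```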
In addition, for example, in method #3, the range of quantization values (qf) used for the simple encoding is set according to the gain value of the signal amplification performed on the image, and the image is simply encoded on the basis of the set range.
In general, when the quantization value (qf) increases, the number of bits lost becomes large; this enhances coding efficiency but degrades the subjective image quality of the decoded image. Therefore, for example, in conventional fixed length coding such as that disclosed in patent document 1, the encoding results for the candidate quantization values (qf) are verified, and a quantization value is selected on the basis of the results.
However, when the image is multiplied by the digital gain in the above manner, the lower bits of each pixel value are degraded (contain error) according to the gain value. In other words, even if these degraded lower bits are lost by quantization, the influence of the quantization on the subjective image quality of the decoded image is small (the degree of reduction in image quality is substantially the same as when quantization is not performed). Therefore, there is no need to verify quantization values (qf) smaller than the number of bits corresponding to the gain value (obviously, the quantization value (qf) is preferably set to be equal to or larger than the number of bits corresponding to the gain value). That is, it is sufficient to verify only the encoding results for quantization values (qf) equal to or larger than that number of bits.
That is, a limit according to the gain value of the digital gain is imposed on the value range of the quantization value (qf). As a result, an increase in the load of verifying the aforementioned encoding results can be suppressed. That is, an increase in the load of the encoding process can be suppressed.
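The restricted search over quantization values might look like this (a sketch; `qf_max = 9` matches the 0-to-9 range used in the gain-value-of-8 example of the third embodiment, and the function name is illustrative):

```python
import math

def qf_search_range(gain, qf_max=9):
    """Only verify quantization values (qf) at or above the number of
    lower bits already degraded by the digital gain."""
    degraded_bits = int(math.log2(gain))   # e.g. gain 8 -> lower 3 bits degraded
    return range(degraded_bits, qf_max + 1)

print(list(qf_search_range(8)))   # -> [3, 4, 5, 6, 7, 8, 9]
```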
In addition, information indicating the quantization value (qf) thus selected is contained in the encoded data and transmitted to the decoding side. As a result of the above-described limitation on the value range of the quantized value (qf), the quantized value (qf) can be expressed with fewer bits (word length). That is, since the code amount can be reduced, a decrease in encoding efficiency can be suppressed.
Also, for example, in
Therefore, the DC offset can be suppressed. In addition, an increase in pixel value difference can be suppressed, so that a decrease in encoding efficiency can be suppressed.
<3 > first embodiment
< image processing System >
Next, the method in fig. 4 will be explained more specifically. In the present embodiment,
As shown in fig. 5, the
The encoding-
The amplifying
Under the control of the
Under the control of the
The decoding-
Under the control of the
< random offset adding section >
Fig. 7 is a block diagram showing a main configuration example of the random offset adding
The pseudo random
The value
When performing processing according to the syntax of fig. 8, the value
In addition, for example, in the case where the gain value is an even number (for example, gain = 8), the value
The value
The
The
As described previously, the simple encoding is performed after the random offset value is added to the image data, so that the simple encoding and the simple decoding can be performed when the histogram is in the dense state. Therefore, it is possible to prevent quantization errors of the respective pixel values from being shifted to one side by simple encoding and simple decoding. That is, when the
It is to be noted that the influence on the subjective image quality of the decoded image is small because, even if an offset value is added in the above manner, only the lower bits, which mainly contain the error caused by the digital gain, are changed. That is, the DC offset generated by encoding and decoding can be suppressed while the influence on the subjective image quality of the decoded image is kept small.
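A minimal sketch of this random-offset scheme (assumptions: Python's `random` module stands in for the embodiment's pseudo-random generation and value correction, and the seed is fixed only to keep the example reproducible):

```python
import random

def add_random_offsets(amplified, gain, seed=1):
    """Add to each pixel a pseudo-random offset corrected to fall in [0, gain).

    The amplified pixels are multiples of `gain`; filling the gaps between
    them re-establishes a dense histogram before simple encoding, so the
    truncation errors no longer all point the same way.
    """
    rng = random.Random(seed)
    return [p + rng.randrange(gain) for p in amplified]

amplified = [v * 8 for v in range(10, 14)]      # 80, 88, 96, 104 (gain = 8)
dense = add_random_offsets(amplified, gain=8)
# Each output stays within its original gap: 0 <= dense[i] - amplified[i] < 8.
```

Since only the lower bits (which already carry the gain-induced error) are touched, the subjective quality of the decoded image is largely unaffected.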
< flow of encoding processing >
Next, the flow of processing performed in the
When the encoding process is started, at step S101, the amplifying
At step S102, the random offset adding
At step S103, the
At step S104, the
When step S104 is completed, the encoding process ends.
< flow of offset addition processing >
Next, an example of the flow of offset addition processing of adding a random offset to a pixel value will be described with reference to the flowchart in fig. 11.
When the offset addition process is started, the pseudo random
At step S122, the value
At step S123, the
When step S123 is completed, the offset addition processing ends. Then, the process returns to fig. 10.
< flow of decoding processing >
Next, an example of the flow of the decoding process performed in the decoding-
When the decoding process is started, the
At step S142, the
When step S142 is completed, the decoding process ends.
By performing the above-described processing, the
Therefore, for example, the
< Another configuration of image processing System >
It is to be noted that the configuration of the
In this case, as shown in fig. 13, the
In the encoding-
The
In the above manner, the encoded data (bit stream) subjected to simple encoding can be transmitted from the encoding side to the decoding side by a scheme conforming to a specified communication standard. Therefore, in this case, for example, an existing communication standard can be adopted as the communication standard, and development of the communication standard can be facilitated.
<4 > second embodiment
< image processing System >
In the present embodiment,
In fig. 14, the encoding-
The subtraction offset setting
The
The
The
Further, in fig. 14, the decoding-
The addition offset setting
The
The clipping section 223 performs clipping on the supplied addition result (image data that has been decompressed and to which the addition offset has been added), clipping it at its upper limit (maximum value). The clipping section 223 outputs the clipped image data to the outside of the
In this case, as described previously with reference to fig. 4, as a result of subtracting the subtraction offset from each pixel value of the image data multiplied by the digital gain in the encoding-
Then, as a result of adding an additive offset to each pixel value of the decompressed image data in the
Through the above-described processing, it is possible to perform simple encoding and simple decoding while reducing quantization errors. As a result, the bias of the quantization error direction to one side can be reduced. That is, it is possible to suppress the direction of the quantization error of pixel values from being biased to one side by encoding and decoding.
< subtraction offset setting section >
Fig. 15 is a block diagram showing a main configuration example of the subtraction offset setting
The average
The offset
As described above, the offset value that makes the quantization error smaller depends on the average pixel value of the image data multiplied by the digital gain and the maximum bit loss amount of quantization. For example, in the case where the image data corresponds to the histogram shown in B of fig. 1, as shown in the table of fig. 16, a value of the subtraction offset that makes the quantization error smaller may be obtained based on the average pixel value and the maximum bit loss amount of the image.
That is, the offset
The offset
It is to be noted that the average pixel value may be calculated by using two or more frames prior to the current frame. That is, the subtraction offset may be calculated by using two or more frames prior to the current frame. However, when the average pixel value is obtained using a frame closer to the current frame, a subtraction offset of a more accurate value (a value that makes a quantization error smaller) may be obtained.
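Setting the subtraction offset from a previous frame could be sketched as follows (the mapping itself is hypothetical: the table of fig. 16 is modeled here simply as the average's remainder modulo the quantization step; the embodiment may also set one offset per color):

```python
def subtraction_offset(prev_avg, max_lost_bits):
    """Offset derived from a previous frame's average pixel value (assumed model
    of fig. 16: the remainder of the average modulo the quantization step)."""
    step = 1 << max_lost_bits
    return prev_avg % step

def per_color_offsets(prev_avgs, max_lost_bits):
    """One subtraction offset per color plane, e.g. {'R': ..., 'G': ..., 'B': ...}."""
    return {c: subtraction_offset(a, max_lost_bits) for c, a in prev_avgs.items()}

print(subtraction_offset(136, 4))   # -> 8
print(per_color_offsets({'R': 136, 'G': 140, 'B': 132}, 4))   # -> {'R': 8, 'G': 12, 'B': 4}
```

Using the frame immediately before the current one keeps the statistics close to the frame actually being encoded, as noted above.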
In addition, a subtraction offset may be set for each color in the image data (e.g., for each of R, G and B). In this case, the average
It should be noted that the addition offset setting
As described above, when the
< flow of encoding processing >
Next, the flow of processing performed in the
When the encoding process is started, at step S201, the amplifying
At step S202, the subtraction offset setting
At step S203, the
At step S204, the
At step S205, the
At step S206, the
When step S206 is completed, the encoding process ends.
< flow of offset value setting processing >
Next, the flow of the offset value setting process of setting the subtraction offset performed at step S202 in fig. 17 will be described with reference to the flowchart in fig. 18.
When the offset value setting process is started, at step S221, the offset
At step S222, the average
At step S223, the offset
When step S223 is completed, the offset value setting process ends. Then, the process returns to fig. 17.
< decoding processing flow >
Next, an example of the flow of the decoding process performed in the decoding-
When the decoding process is started, the addition offset setting
At step S242, the
At step S243, the
At step S244, the
At step S245, the clipping section 223 clips the upper limit of the decoded image to which the addition offset has been added at step S244.
When step S245 is completed, the decoding process ends.
By performing the above-described processing, the
Therefore, the
< Another configuration example of a subtraction offset setting section >
Note that the bit loss amount may be calculated from the image data, and the subtraction offset may be set by using the calculated bit loss amount.
Fig. 20 is a block diagram showing another example of the subtraction offset setting
The
The average
The offset
< flow of offset value setting processing >
An example of the flow of the offset value setting process in this case will be described with reference to the flowchart in fig. 21.
When the offset value setting process starts, the offset
At step S262, the average
At step S263, the
At step S264, the average
At step S265, the offset
When step S265 is completed, the offset value setting process ends. Then, the process returns to fig. 17.
As described above, also in this case, the
Note that, in the above description, the image data of the current frame is processed and the subtraction offset of the next frame is set, but the subtraction offset setting
< Another configuration of image processing System >
It should be noted that the configuration of the
In this case, as shown in fig. 22, the
That is, for example, the transmitting
As a result, encoded data (bit stream) that has been simply encoded can be transmitted from the encoding side to the decoding side by a scheme conforming to a specified communication standard. Therefore, in this case, for example, an existing communication standard can be adopted as the communication standard, and development of the communication standard can be facilitated.
<5 > third embodiment
< image processing System >
In the present embodiment,
In fig. 23, the encoding-
The quantized value
For example, the quantization value
For example, in the case of multiplying the image data by a digital gain with a gain value of 8, as shown in A of fig. 25, the information in the lower 3 bits is degraded by the digital gain, as shown in B of fig. 25. Therefore, these lower 3 bits can be allowed to be lost by quantization while degradation of the subjective image quality of the decoded image is suppressed. That is, a restriction is imposed such that the value range of the quantization value (qf) is changed from the range 0 to 9 to the range 3 to 9 (even if such a restriction is imposed, deterioration of the subjective image quality of the decoded image can be suppressed).
Such a restriction is imposed on the value range of the quantized value (qf), so that verification of the encoding result of a portion to which the restriction has been imposed on the value range of the quantized value (qf) can be omitted. Therefore, an increase in load of simple encoding can be suppressed.
In addition, fig. 26 is a diagram showing a main configuration example of encoded data. The encoded data 341 shown in fig. 26 contains information (value of qf) indicating the quantized value (qf) (shaded portion in fig. 26). As described previously, when a limit is imposed on the value range of the quantized value (qf), the quantized value (qf) can be expressed with fewer bits (word length). Therefore, the code amount of information representing the quantized value (qf) in the encoded data can be suppressed. That is, deterioration in encoding efficiency can be suppressed, and deterioration in subjective image quality of a decoded image can be suppressed.
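The word-length saving can be checked with a quick calculation (a sketch; the 0-to-9 and 3-to-9 ranges are those of the gain-value-of-8 example above):

```python
import math

def qf_word_length(qf_min, qf_max):
    """Bits needed to signal a quantization value (qf) in [qf_min, qf_max]."""
    return math.ceil(math.log2(qf_max - qf_min + 1))

print(qf_word_length(0, 9))   # unrestricted 0..9: 10 values -> 4 bits
print(qf_word_length(3, 9))   # restricted 3..9: 7 values -> 3 bits
```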
After the value range of the quantization value (qf) is set, the quantization value
The
The generated fixed-length encoded data is recorded in a recording medium or transmitted via a transmission medium by the
The decoding-
Under the control of the
During simple decoding, the
As described above, when the
< flow of encoding processing >
An example of the flow of the encoding process executed in the encoding-
When the encoding process is started, at step S301, the amplifying
At step S302, the quantized value
At step S303, the
At step S304, the
When step S304 is completed, the encoding process ends.
< flow of decoding processing >
Next, an example of the flow of the decoding process performed in the decoding-
When the decoding process is started, the
At step S322, the
Here, the
When step S322 is completed, the decoding process ends.
By performing the above-described processing, the
< Another configuration of image processing System >
Note that the configuration of the
In this case, as shown in fig. 29, the
That is, for example, the transmitting
In the manner described so far, encoded data (bit stream) generated by simple encoding can be transmitted from the encoding side to the decoding side by a scheme conforming to a specified communication standard. Therefore, in this case, for example, an existing communication standard can be adopted as the communication standard, and development of the communication standard can be facilitated.
<6 > fourth embodiment
< image processing System >
In the present embodiment,
The
That is, for example, simple encoding of the image data is performed with a dense state such as that shown in A of fig. 1 established. Therefore, the DC offset generated by simple encoding can be suppressed. In addition, an increase in the pixel value differences generated by multiplying by the digital gain is also suppressed. Therefore, an increase in the DPCM residual can be suppressed, and a decrease in coding efficiency can be suppressed.
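This division/re-multiplication pairing might be sketched as follows (an assumed integer model; for an integer gain the division is exact because the amplified values are multiples of the gain):

```python
def compress_range(amplified, gain):
    """Encode side: divide out the digital gain so that simple (e.g. DPCM)
    encoding sees a dense signal with small pixel-to-pixel differences."""
    return [p // gain for p in amplified]

def restore_range(decoded, gain):
    """Decode side: multiply the gain back in after simple decoding."""
    return [p * gain for p in decoded]

amplified = [80, 96, 88, 104]                 # multiples of gain = 8
dense = compress_range(amplified, 8)          # [10, 12, 11, 13]
# DPCM residuals shrink from (16, -8, 16) to (2, -1, 2).
assert restore_range(dense, 8) == amplified   # lossless round trip in this case
```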
The generated fixed-length encoded data is recorded on a recording medium or transmitted via a transmission medium by the
In addition, in fig. 30, the decoding-
As a result, the
< flow of encoding processing >
Next, an example of the encoding process performed in the encoding-
When the encoding process is started, at step S401, the
At step S402, the
At step S403, the
At step S404, the
When step S404 is completed, the encoding process ends.
< flow of decoding processing >
Next, an example of the flow of the decoding process performed in the decoding-
When the decoding process is started, the
At step S422, the
At step S423, the
When step S423 is completed, the decoding process ends.
By performing the processing in the above-described manner, the
Therefore, for example, the
In addition, the
< Another configuration of image processing System >
It should be noted that the configuration of the
In this case, as shown in fig. 33, the
That is, for example, the transmitting
In the manner described so far, encoded data (bit stream) that has been simply encoded can be transmitted from the encoding side to the decoding side by a scheme that conforms to a specified communication standard. Therefore, in this case, for example, an existing communication standard can be adopted as the communication standard, and development of the communication standard can be facilitated.
<7 > fifth embodiment
< application example: imaging element >
Next, an example in which the present technology described so far is applied to a specific apparatus will be explained. Fig. 34 is a block diagram showing a main configuration example of a
As shown in fig. 34, the
Further, an
As described above, in the
Fig. 35 shows an example of the configuration of circuits formed on the respective semiconductor substrates. For convenience of explanation, the
A
The a/
An
A DRAM (dynamic random access memory) 561 is formed on the
By using the DRAM561 thus configured, the
When the above-described encoding-side member 102 (e.g., fig. 5, 14, 23, or 30) of the
As a result, even in the case where high digital gain imaging in which image data is multiplied by a digital gain is performed by means of the
Further, an
That is, the
When the encoded data (compressed data) is transmitted via the
When the above-described encoding-side member 102 (e.g., fig. 13, 22, 29, or 33) of the
As a result, even in the case where high digital gain imaging in which image data is multiplied by a digital gain is performed by means of the
An example of the flow of the imaging process of capturing an image by using the stacked
When the imaging process is started, at step S501, the light-receiving
At step S502, the a/
At step S503, the
At step S504, the
At step S505, the DRAM561 acquires the encoded data generated at step S504 via the
At step S506, the DRAM561 reads out the encoded data corresponding to the request from the encoded data recorded therein, and supplies the read-out data to the
At step S507, the
At step S508, the
At step S509, the
The
When step S509 is completed, the imaging process ends.
By performing the imaging process in the manner described so far, the
Note that the configuration of the
Further, the number of semiconductor substrates in the
<8 > sixth embodiment
< application example: image forming apparatus >
Fig. 37 is a block diagram showing a main configuration example of an imaging apparatus to which the present technology is applied. An
As shown in fig. 37, the
The
Light from a subject (incident light) enters the
The
The
The
For example, the
In addition, for example, the
Further, the
For example, under the control of the
The display section 615 includes an arbitrary display device such as an LCD (liquid crystal display), and is driven under the control of the
The
The
The
The
Under the control of the
As the
As a result, the
An example of the flow of imaging processing performed by the
When the imaging process is started, at step S601, the
The
At step S602, the
At step S603, the
At step S604, the
At step S605, the display section 615 acquires image data via the
At step S606, the
At step S607, the
At step S608, the
When step S609 is completed, the imaging process ends.
By performing the imaging process in the above-described manner, the
Note that the configuration of the
As an example of applying the present technology, the imaging element and the imaging apparatus have been described above. However, the present technology can be applied to any apparatus or any system as long as the apparatus or system performs fixed-length encoding and decoding of an amplified signal group while involving quantization as disclosed in any one of
For example, the present technology is also applicable to an image processing apparatus that acquires image data from the outside without performing imaging and performs image processing on the data. In addition, since the target to be encoded is arbitrary, it is not necessary to be image data. For example, arbitrary detection signals of sound, temperature, humidity, acceleration, and the like, which are not related to light, may be targeted for encoding. In addition, the present technology can also be applied to, for example, an apparatus or a system that processes image data in consideration of the image data being a set of light (luminance) detection results (detection signals). For example, the present technology is also applicable to an apparatus or system that sets a black level based on a set of detection signals.
<9. supplementary notes >
< computer >
The series of processes described above may be executed by hardware, or may be executed by software. In the case where a series of processes is executed by software, a program constituting the software is installed into a computer. Here, examples of the computer include a computer incorporated in dedicated hardware and a general-purpose personal computer capable of executing various functions by installing various programs.
In the case where a series of processes is performed by software, only a device or a system (for example, the
In the case where the series of processes described above is executed by software, for example, a program or the like constituting the software may be installed from a recording medium. For example, in the
In addition, the program may be provided via a wired/wireless transmission medium such as a local area network, the internet, or digital satellite broadcasting. For example, in the
Alternatively, the program may be installed in advance. For example, in the
< objects of application of the present technology >
The present technology is applicable to any image encoding and decoding method. That is, the specification of the processing regarding image encoding and decoding may be arbitrarily defined as long as no contradiction with the present technology described so far is caused. The specification is not limited to any of the above examples.
In addition, the case where the present technology is applied to an imaging apparatus has been described above, but the present technology is applicable not only to an imaging apparatus but also to an arbitrary apparatus (electronic apparatus). For example, the present technology is also applied to an image processing apparatus or the like for performing image processing on a captured image obtained by high digital gain imaging performed by another apparatus.
In addition, for example, the present technology can be realized by any of the following structures installed in any device or devices constituting a system: a processor (e.g., a video processor) or the like serving as a system LSI (large scale integration), a module (e.g., a video module) using a plurality of processors or the like, a unit (e.g., a video unit) using a plurality of modules, a set (e.g., a video set) obtained by adding other functions to the unit (i.e., the structure represents a part of the apparatus).
Further, the present technology is also applicable to a network system including a plurality of devices. For example, the present technology is applicable to a cloud service for providing an image (video) -related service to any terminal such as a computer, an AV (audio visual) device, a mobile information processing terminal, or an IoT (internet of things) device.
It should be noted that the systems, devices, processing sections, etc. to which the present technology is applied may be used in any field related to traffic, medicine, safety, agriculture, animal husbandry, mining, cosmetics, industry, household appliances, weather or nature monitoring. Further, the application thereof can be arbitrarily defined.
For example, the present technology is applicable to a system or apparatus for providing viewing content or the like. Further, for example, the present technology is applicable to systems and devices for traffic use (e.g., monitoring of traffic conditions or control of autonomous driving). Further, for example, the present technology is applicable to a system or an apparatus for security use. Further, for example, the present technology is applicable to systems and devices for automatic control of machines and the like. Furthermore, the present technology is applicable to systems and devices for use in agriculture or animal husbandry, for example. Further, the present technology is applicable to systems and devices for monitoring the state of nature, such as volcanoes, forests, or oceans, and wildlife, for example. Further, for example, the present technology is applicable to systems and devices for sports use.
< others >
In this specification, a "flag" refers to information for distinguishing a plurality of states from each other. This covers not only information for distinguishing two states, true (1) and false (0), from each other, but also information for distinguishing three or more states from each other. Thus, the value that the "flag" can take may be one of two values, such as 1/0, or one of three or more values. That is, the number of bits constituting the "flag" is arbitrarily defined and may thus be 1 bit or more. In addition, identification information (including the flag) may be contained in the bitstream not only as the identification information itself but also as difference information of the identification information relative to some reference information. Therefore, in this specification, the terms "flag" and "identification information" cover not only the information itself but also difference information relative to reference information.
In addition, various types of information (metadata, etc.) about encoded data (a bit stream) may be transmitted or recorded in any form as long as the information is associated with the encoded data. Herein, the term "associated" means that one data set is made usable (linkable) when the other data set is processed. That is, data sets associated with each other may be integrated into a single data set, or may be kept as separate data sets. For example, information associated with the encoded data (image) may be transmitted on a transmission path different from that of the encoded data (image). Further, for example, information associated with the encoded data (image) may be recorded in a recording medium different from the one in which the encoded data (image) is recorded (or in a different recording area of the same recording medium). It is to be noted that "associating" may be performed not on the entire data but on a portion of the data. For example, an image and information corresponding to the image may be associated with each other in arbitrarily defined units (e.g., a plurality of frames, one frame, or a part of a frame).
It is to be noted that, in this specification, the terms "synthesizing", "multiplexing", "adding", "integrating", "including", "storing", "placing", "putting", "inserting", and the like all mean that a plurality of things are put together into one (for example, encoded data and metadata are put together into one data set), and each thus refers to one method for the above-described "associating".
Further, the embodiments of the present technology are not limited to the above-described embodiments, and various modifications may be made within the gist of the present technology.
In addition, for example, the present technology may be implemented as any of the following components constituting an apparatus or system: a processor serving as a system LSI (large scale integrated circuit), a module using a plurality of processors or the like, a unit using a plurality of modules, or a set obtained by adding other functions to the unit (i.e., a configuration representing a part of an apparatus).
It is to be noted that, in this specification, a system refers to a set of a plurality of constituent elements (devices, modules (components), and the like). It is not important whether or not all the constituent elements are included in the same housing. Therefore, both a group of a plurality of devices accommodated in different housings and connected to each other through a network and a single device having a plurality of modules accommodated in a single housing are referred to as a system.
In addition, for example, the configuration explained above as a single device (or processing section) may be divided into a plurality of devices (or processing sections). In contrast, a configuration explained as a plurality of devices (or processing sections) may be formed as a single device (or processing section). In addition, a configuration not described above may be added to the configuration of the apparatus (or the processing section). Also, a part of a certain apparatus (or processing section) may be included in another apparatus (or processing section) as long as the configuration or operation in the entire system is substantially the same.
In addition, for example, the present technology may have a configuration of cloud computing in which a function is shared by a plurality of devices on a network and collectively processed.
In addition, for example, the above-described program may be executed by any device. In this case, it is sufficient that the apparatus has necessary functions (function blocks and the like) and can acquire necessary information.
Also, for example, the steps of the above-described flowcharts may be performed by one apparatus, or may be performed by a plurality of apparatuses in combination. Further, in the case where a plurality of processes are included in one step, one step may be performed by one apparatus, or may be performed by a plurality of apparatuses in combination. In other words, a plurality of processes included in one step can be performed as a plurality of steps. In contrast, a plurality of steps in the above description may be performed together as one step.
Note that the program executed by the computer may be a program in which the processes are performed in the time sequence described in this specification, or may be a program in which the processes are performed in parallel or individually at necessary timing, for example, when called. That is, the steps may be performed in an order different from the above-described order as long as no inconsistency arises. Further, steps written in a program may be executed in parallel with the processing of another program, or may be executed in combination with the processing of another program.
It is to be noted that a plurality of examples of the present technology described in this specification can be independently implemented as long as no contradiction is produced. A number of arbitrarily defined examples of the present technology may be implemented in combination. For example, a part or all of the present technology described in any one of the embodiments may be implemented in combination with a part or all of the present technology described in another embodiment. Additionally, any defined portion or all of the present technology can be implemented in conjunction with another technology not described above.
Note that the present technology may also have the following configuration.
(1)
An image processing apparatus comprising:
an adaptive processing section that performs adaptive image processing on the image on which the signal amplification has been performed; and
an encoding section that simply encodes the image on which the adaptive image processing has been performed by the adaptive processing section.
(2)
The image processing apparatus according to (1), wherein,
the adaptive processing section performs an imaging process of adding an offset value randomly set within a value range to each pixel value of the image, the value range depending on a gain value of the signal amplification performed on the image, and
the encoding section performs simple encoding on the image to which the offset value has been added for each pixel value by the adaptive processing section.
(3)
The image processing apparatus according to (2), wherein,
the adaptive processing section adds, as the offset value, a pseudo random number corrected to fall within a value range depending on the gain value to each pixel value of the image.
(4)
The image processing apparatus according to any one of (1) to (3),
the adaptive processing section performs image processing of subtracting an offset value based on an average pixel value of the image and a quantization value simply encoded by the encoding section from each pixel value of the image, and
the encoding section simply encodes an image that has been obtained by subtracting the offset value from each pixel value by the adaptive processing section.
(5)
The image processing apparatus according to (4), wherein,
the average pixel value includes an average pixel value of images of frames preceding a current frame as a processing target.
(6)
The image processing apparatus according to (5), wherein,
the quantization value includes a value depending on a compression rate of the simple encoding.
(7)
The image processing apparatus according to (5) or (6), wherein,
the quantization value is an average value of quantization values of respective pixels for simple encoding of an image of a frame preceding a current frame as a processing target.
(8)
The image processing apparatus according to any one of (4) to (7), wherein,
the adaptive processing section subtracts the offset value from each pixel value of the image for each color.
(9)
The image processing apparatus according to (4), further comprising:
a decoding section that simply decodes the encoded data generated by the encoding section; and
an offset addition section that adds the offset value based on the average pixel value of the image and the quantization value of the simple encoding to each pixel value of the decoded image generated by the decoding section.
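The encode-side subtraction of (4) and the decode-side addition of (9) form a matched pair, which can be sketched as below. The specific derivation of the offset (average pixel value modulo the quantization value) is only an assumed example of a rule based on those two inputs; the configurations do not specify the formula.

```python
import numpy as np

def subtract_offset(image, avg_pixel, quant_value, bit_depth=10):
    """Encoder side: subtract an offset based on the average pixel value of
    preceding frames and the quantization value of the simple encoding.
    The modulo rule below is an assumption for illustration."""
    offset = int(avg_pixel) % max(int(quant_value), 1)
    shifted = np.clip(image.astype(np.int64) - offset, 0, (1 << bit_depth) - 1)
    return shifted, offset

def add_offset(decoded, offset, bit_depth=10):
    """Decoder side: add the same offset back to each decoded pixel value."""
    return np.clip(decoded.astype(np.int64) + offset, 0, (1 << bit_depth) - 1)
```

When no pixel is clipped, the addition on the decoding side exactly restores the values shifted on the encoding side, so the pair is transparent apart from the quantization performed in between.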
(10)
The image processing apparatus according to any one of (1) to (9), wherein,
the adaptive processing section performs image processing of setting a range of quantization values of the simple encoding performed by the encoding section, and
the encoding section simply encodes the image based on the range of the quantization value set by the adaptive processing section, and generates encoded data including information on the range of the quantization value.
(11)
The image processing apparatus according to (10), wherein,
the adaptive processing section sets a range of quantization values according to a gain value of the signal amplification performed on the image.
(12)
The image processing apparatus according to (10), further comprising:
a decoding section that simply decodes the encoded data on the basis of the information on the range of the quantization value included in the encoded data generated by the encoding section.
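Configurations (10) to (12) can be sketched as follows: the quantization-value range is set from the gain value, the range information travels inside the encoded data, and the decoder reads it back. The specific mapping from gain to range, and modelling the simple encoding as uniform quantization, are assumptions made for illustration.

```python
import numpy as np

def quantization_range(gain):
    """Assumed mapping from the gain value to a range of quantization
    values: higher gain amplifies sensor noise, so a coarser minimum
    quantization step is permitted."""
    min_q = max(1, int(gain))
    return (min_q, min_q * 8)

def simple_encode(image, gain):
    lo, hi = quantization_range(gain)
    step = lo  # a real encoder would choose per-block steps within [lo, hi]
    return {"q_range": (lo, hi), "step": step,
            "data": image.astype(np.int64) // step}

def simple_decode(encoded):
    lo, hi = encoded["q_range"]  # range information carried in the encoded data
    assert lo <= encoded["step"] <= hi
    return encoded["data"] * encoded["step"]
```

Because the range information is embedded in the encoded data, the decoding section needs no side channel to know which quantization values were permissible, which is the point of configuration (12).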
(13)
The image processing apparatus according to any one of (1) to (12), wherein,
the adaptive processing section performs image processing of dividing each pixel value of the image by a gain value of the signal amplification performed on the image, and
the encoding section simply encodes an image in which each pixel value has been divided by the gain value by the adaptive processing section.
(14)
The image processing apparatus according to (13), further comprising:
a decoding section that simply decodes the encoded data generated by the encoding section; and
a gain value multiplication section that multiplies each pixel value of the decoded image generated by the decoding section by the gain value.
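The divide-before-encoding and multiply-after-decoding pair of (13) and (14) can be sketched as below. The rounding choice is an assumption; the configurations only require division by the gain value before simple encoding and multiplication by it after decoding.

```python
import numpy as np

def divide_by_gain(image, gain):
    """Before simple encoding: divide each pixel value by the gain value,
    so the encoder operates on the un-amplified signal range."""
    return np.round(image / gain).astype(np.int64)

def multiply_by_gain(decoded, gain):
    """After simple decoding: multiply each pixel value by the gain value
    to restore the amplified signal level."""
    return decoded * gain
```

When the amplified pixel values are exact multiples of the gain value, the pair is lossless; otherwise the division quantizes the signal to gain-sized steps, trading precision that the amplification had not added in the first place.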
(15)
The image processing apparatus according to any one of (1) to (14), further comprising:
an amplification section that performs signal amplification on the image, wherein,
the adaptive processing section performs adaptive image processing on the image on which the signal amplification has been performed by the amplification section.
(16)
The image processing apparatus according to any one of (1) to (15), further comprising:
a gain value setting section that sets a gain value of the signal amplification performed on the image.
(17)
The image processing apparatus according to any one of (1) to (16), further comprising:
a recording section that records the encoded data generated by the encoding section.
(18)
An image processing method comprising:
performing adaptive image processing on the image on which the signal amplification has been performed; and
the image on which the adaptive image processing has been performed is simply encoded.
(19)
An imaging element comprising:
an imaging section that captures an image of a subject;
an adaptive processing section that performs adaptive image processing on the captured image generated by the imaging section and on which signal amplification has been performed; and
an encoding section that performs simple encoding on the captured image on which the adaptive image processing has been performed by the adaptive processing section.
(20)
An imaging apparatus comprising:
an imaging section that captures an image of a subject;
an adaptive processing section that performs adaptive image processing on a captured image that is generated by the imaging section and on which signal amplification has been performed;
an encoding section that simply encodes the captured image on which the adaptive image processing has been performed by the adaptive processing section to generate encoded data; and
a decoding section that simply decodes the encoded data generated by the encoding section.
[ list of reference numerals ]
100 image processing system, 101 control unit, 102 encoding side member, 103 decoding side member, 111 amplification unit, 112 random offset addition unit, 113 encoding unit, 121 decoding unit, 141 pseudo random number generation unit, 142 value range limitation unit, 143 calculation unit, 144 clipping unit, 171 transmission unit, 172 reception unit, 211 subtraction offset setting unit, 212 calculation unit, 213 clipping unit, 221 addition offset setting unit, 222 calculation unit, 223 clipping unit, 231 average value measurement unit, 232 offset value selection unit, 233 offset value supply unit, 251 compression unit, 252 average value measurement unit, 311 quantized value range setting unit, 411 calculation unit, 421 calculation unit, 510 stacked image sensor, 511 to 513 semiconductor substrate, 521 DRAM bus, 523 interface, 530 circuit substrate, 541, 542 A/D conversion unit, 551 image processing unit, 561, 571 image processing unit, 600 imaging device, 601 control unit, 610 bus, 611 optical part, 612 image sensor, 613 image processing part, 614 codec processing part, 615 display part, 616 recording part, 617 communication part, 621 input part, 622 output part, 625 driver