Apparatus and method for hierarchical wireless video and graphics transmission based on video pre-processing

Document No.: 621628    Publication date: 2021-05-07

Reading note: This technology, "Apparatus and method for hierarchical wireless video and graphics transmission based on video pre-processing" (基于视频预处理来进行分级无线视频和图形传输的设备和方法), was designed and created by 朱磊 (Zhu Lei) on 2018-09-30. Its main content: Methods and systems are provided for preprocessing content; controlling one or more state parameters associated with a movable object; and controlling the preprocessing operation. For example, an image processing method includes: receiving, by a processor, one or more images from an imaging device carried on a movable object; and adjusting one or more imaging parameters of at least one of the one or more images to obtain an adjusted image. The method further comprises: encoding the adjusted image to generate encoded image data; and transmitting the encoded image data from the movable object to a remote terminal.

1. An image processing method comprising:

receiving, by a processor, one or more images from an imaging device carried on a movable object;

adjusting one or more imaging parameters of at least one of the one or more images to obtain an adjusted image;

encoding the adjusted image to generate encoded image data; and

transmitting the encoded image data from the movable object to a remote terminal.

2. The method of claim 1, wherein adjusting one or more imaging parameters comprises:

reducing one or more imaging parameters of the at least one of the one or more images.

3. The method of claim 1 or 2, wherein the one or more imaging parameters comprise at least one of the following parameters of the at least one of the one or more images: spatial frequency, a dimension of a color space, or a dimension of a luminance space.

4. The method of any of claims 1-3, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to one or more state parameters associated with the movable object not being within a preset range.

5. The method of claim 4, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to the quantization parameter used for encoding being equal to or greater than a first quantization parameter threshold.

6. The method of claim 5, wherein the quantization parameter is adjusted according to a bandwidth of a communication channel between the movable object and the remote terminal.

7. The method of claim 4, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to a peak signal-to-noise ratio, PSNR, value of one or more previous images received from the imaging device being equal to or less than a PSNR threshold.

8. The method of claim 4, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to an occupied storage space in a buffer being equal to or greater than a threshold, wherein the buffer is configured to buffer encoded image data of one or more previous images received from the imaging device.

9. The method of any of claims 1-8, wherein adjusting one or more imaging parameters comprises:

adjusting a spatial frequency of the at least one of the one or more images.

10. The method of claim 9, wherein adjusting the spatial frequency comprises:

adjusting a spatial frequency of the at least one of the one or more images using a filter.

11. The method of claim 10, wherein the filter comprises a bilateral filter.

12. The method of claim 11, further comprising:

prior to adjusting the spatial frequency, adjusting one or more configuration parameters of the bilateral filter, wherein the one or more configuration parameters of the bilateral filter include at least one of a spatial scale parameter or a value scale parameter.

13. The method of claim 12, wherein adjusting one or more configuration parameters of the bilateral filter comprises:

adjusting the spatial scale parameter and the value scale parameter based on a preset order.

14. The method of any of claims 1-8, wherein adjusting one or more imaging parameters comprises:

adjusting a dimension of a color space of the at least one of the one or more images.

15. The method of claim 14, wherein adjusting the dimension of the color space comprises:

adjusting a dimension of a color space of the at least one of the one or more images by a preset value.

16. The method of any of claims 1-8, wherein adjusting one or more imaging parameters comprises:

adjusting a dimension of a luminance space of the at least one of the one or more images.

17. The method of claim 16, wherein adjusting the dimension of the luminance space comprises:

adjusting a dimension of a luminance space of the at least one of the one or more images by a preset value.

18. The method of any of claims 1-17, wherein the one or more imaging parameters include at least two imaging parameters, and adjusting one or more imaging parameters includes:

adjusting the at least two imaging parameters of the at least one of the one or more images.

19. The method of claim 18, wherein adjusting the at least two imaging parameters comprises:

adjusting the at least two imaging parameters based on a preset priority.

20. The method of claim 19, wherein the at least two imaging parameters include at least a spatial frequency of the at least one of the one or more images, and the spatial frequency has the highest priority among the preset priorities.

21. The method of claim 1, wherein the movable object is an unmanned aerial vehicle.

22. An image processing method comprising:

receiving, by a processor, one or more images from an imaging device of a movable object; and

determining, prior to encoding the one or more images, whether to adjust one or more imaging parameters of at least one of the one or more images based on one or more state parameters associated with the movable object to obtain an adjusted image.

23. The method of claim 22, wherein determining whether to adjust one or more imaging parameters comprises:

determining whether the one or more state parameters associated with the movable object are within a preset range; and

adjusting the one or more imaging parameters in response to the one or more state parameters associated with the movable object not being within a preset range.

24. The method of claim 23, further comprising:

encoding the adjusted image to generate encoded image data; and

transmitting the encoded image data from the movable object to a remote terminal.

25. The method of any of claims 23-24, wherein adjusting one or more imaging parameters comprises:

reducing one or more imaging parameters of the at least one of the one or more images.

26. The method of any of claims 23-25, wherein the one or more imaging parameters include at least one of the following parameters of the at least one of the one or more images: spatial frequency, a dimension of a color space, or a dimension of a luminance space.

27. The method of any of claims 23-26, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to the quantization parameter used for encoding being equal to or greater than a first quantization parameter threshold.

28. The method of claim 27, wherein the quantization parameter used for encoding is adjusted according to a bandwidth of a communication channel between the movable object and a remote terminal.

29. The method of any of claims 23-26, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to a peak signal-to-noise ratio, PSNR, value of one or more previous images received from the imaging device being equal to or less than a PSNR threshold.

30. The method of any of claims 23-26, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to an occupied storage space in a buffer being equal to or greater than a threshold, wherein the buffer is configured to buffer encoded image data of one or more previous images received from the imaging device.

31. The method of any of claims 23-30, wherein adjusting one or more imaging parameters comprises:

adjusting a spatial frequency of the at least one of the one or more images.

32. The method of claim 31, wherein adjusting the spatial frequency comprises:

adjusting a spatial frequency of the at least one of the one or more images using a filter.

33. The method of claim 32, wherein the filter comprises a bilateral filter.

34. The method of claim 33, further comprising:

prior to adjusting the spatial frequency, adjusting one or more configuration parameters of the bilateral filter, wherein the one or more configuration parameters of the bilateral filter include at least one of a spatial scale parameter or a value scale parameter.

35. The method of claim 34, wherein adjusting one or more configuration parameters of the bilateral filter comprises:

adjusting the spatial scale parameter and the value scale parameter based on a preset order.

36. The method of any of claims 23-30, wherein adjusting one or more imaging parameters comprises:

adjusting a dimension of a color space of the at least one of the one or more images.

37. The method of claim 36, wherein adjusting the dimension of the color space comprises:

adjusting a dimension of a color space of the at least one of the one or more images by a preset value.

38. The method of any of claims 23-30, wherein adjusting one or more imaging parameters comprises:

adjusting a dimension of a luminance space of the at least one of the one or more images.

39. The method of claim 38, wherein adjusting the dimension of the luminance space comprises:

adjusting a dimension of a luminance space of the at least one of the one or more images by a preset value.

40. The method of any of claims 23-39, wherein the one or more imaging parameters include at least two imaging parameters, and adjusting one or more imaging parameters includes:

adjusting the at least two imaging parameters of the at least one of the one or more images.

41. The method of claim 40, wherein adjusting the at least two imaging parameters comprises:

adjusting the at least two imaging parameters of the at least one of the one or more images based on a preset priority.

42. The method of claim 41, wherein the at least two imaging parameters include at least a spatial frequency of the at least one of the one or more images, and the spatial frequency has the highest priority among the preset priorities.

43. The method of claim 22, wherein the movable object is an unmanned aerial vehicle.

44. The method of claim 22, further comprising:

in response to the one or more state parameters associated with the movable object being within a preset range, encoding the at least one of the one or more images to generate encoded image data; and

transmitting the encoded image data from the movable object to a remote terminal.

45. An image processing method comprising:

receiving, by a processor, one or more images from an imaging device of a movable object;

determining whether one or more state parameters associated with the movable object are within a preset range;

in response to the one or more state parameters not being within a preset range:

adjusting one or more imaging parameters of at least one of the one or more images to obtain an adjusted image; and

encoding the adjusted image to generate encoded image data;

in response to the one or more state parameters being within the preset range, encoding the at least one of the one or more images to generate encoded image data; and

transmitting the encoded image data from the movable object to a remote terminal.

46. The method of claim 45, wherein adjusting one or more imaging parameters comprises:

reducing one or more imaging parameters of the at least one of the one or more images.

47. The method of any of claims 45-46, wherein the one or more imaging parameters include at least one of the following parameters of the at least one of the one or more images: spatial frequency, a dimension of a color space, or a dimension of a luminance space.

48. The method of any of claims 45-47, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to the quantization parameter used for encoding being equal to or greater than a first quantization parameter threshold.

49. The method of claim 48, wherein the quantization parameter used for encoding is adjusted according to a bandwidth of a communication channel between the movable object and the remote terminal.

50. The method of any of claims 45-47, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to a peak signal-to-noise ratio, PSNR, value of one or more previous images received from the imaging device being equal to or less than a PSNR threshold.

51. The method of any of claims 45-47, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to an occupied storage space in a buffer being equal to or greater than a threshold, wherein the buffer is configured to buffer encoded image data of one or more previous images received from the imaging device.

52. The method of any of claims 45-51, wherein adjusting one or more imaging parameters comprises:

adjusting a spatial frequency of the at least one of the one or more images.

53. The method of claim 52, wherein adjusting the spatial frequency comprises:

adjusting a spatial frequency of the at least one of the one or more images using a filter.

54. The method of claim 53, wherein the filter comprises a bilateral filter.

55. The method of claim 54, further comprising:

prior to adjusting the spatial frequency, adjusting one or more configuration parameters of the bilateral filter, wherein the one or more configuration parameters of the bilateral filter include at least one of a spatial scale parameter or a value scale parameter.

56. The method of claim 55, wherein adjusting one or more configuration parameters of the bilateral filter comprises:

adjusting the spatial scale parameter and the value scale parameter based on a preset order.

57. The method of any of claims 45-51, wherein adjusting one or more imaging parameters comprises:

adjusting a dimension of a color space of the at least one of the one or more images.

58. The method of claim 57, wherein adjusting the dimension of the color space comprises:

adjusting a dimension of a color space of the at least one of the one or more images by a preset value.

59. The method of any of claims 45-51, wherein adjusting one or more imaging parameters comprises:

adjusting a dimension of a luminance space of the at least one of the one or more images.

60. The method of claim 59, wherein adjusting the dimension of the luminance space comprises:

adjusting a dimension of a luminance space of the at least one of the one or more images by a preset value.

61. The method of any of claims 45-60, wherein the one or more imaging parameters include at least two imaging parameters, and adjusting one or more imaging parameters includes:

adjusting the at least two imaging parameters of the at least one of the one or more images.

62. The method of claim 61, wherein adjusting the at least two imaging parameters comprises:

adjusting the at least two imaging parameters based on a preset priority.

63. The method of claim 62, wherein the at least two imaging parameters include at least a spatial frequency of the at least one of the one or more images, and the spatial frequency has the highest priority among the preset priorities.

64. The method of claim 45, wherein the movable object is an unmanned aerial vehicle.

65. An imaging system, comprising:

an imaging device carried on a movable object and configured to capture one or more images; and

one or more processors, which when executing instructions are individually or collectively configured to:

adjusting one or more imaging parameters of at least one of the one or more images to obtain an adjusted image;

encoding the adjusted image to generate encoded image data; and

transmitting the encoded image data from the movable object to a remote terminal using a transceiver.

66. The imaging system of claim 65, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

reducing the one or more imaging parameters of the at least one of the one or more images.

67. The imaging system of claim 65 or 66, wherein the one or more imaging parameters include at least one of the following parameters of the at least one of the one or more images: spatial frequency, a dimension of a color space, or a dimension of a luminance space.

68. The imaging system of any of claims 65-67, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting the one or more imaging parameters in response to one or more state parameters associated with the movable object not being within a preset range.

69. The imaging system of claim 68, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting the one or more imaging parameters in response to the quantization parameter used for encoding being equal to or greater than a first quantization parameter threshold.

70. The imaging system of claim 69, wherein the quantization parameter is adjusted according to a bandwidth of a communication channel between the movable object and the remote terminal.

71. The imaging system of claim 68, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting the one or more imaging parameters in response to a peak signal-to-noise ratio, PSNR, value of one or more previous images received from the imaging device being equal to or less than a PSNR threshold.

72. The imaging system of claim 68, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting the one or more imaging parameters in response to an occupied storage space in a buffer being equal to or greater than a threshold, wherein the buffer is configured to buffer encoded image data of one or more previous images received from the imaging device.

73. The imaging system of any of claims 65-72, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting a spatial frequency of the at least one of the one or more images.

74. The imaging system of claim 73, wherein to adjust spatial frequency, the one or more processors are configured to:

adjusting a spatial frequency of the at least one of the one or more images using a filter.

75. The imaging system of claim 74, wherein the filter comprises a bilateral filter.

76. The imaging system of claim 75, wherein the one or more processors are further configured to:

prior to adjusting the spatial frequency, adjusting one or more configuration parameters of the bilateral filter, wherein the one or more configuration parameters of the bilateral filter include at least one of a spatial scale parameter or a value scale parameter.

77. The imaging system of claim 76, wherein to adjust one or more configuration parameters of the bilateral filter, the one or more processors are configured to:

adjusting the spatial scale parameter and the value scale parameter based on a preset order.

78. The imaging system of any of claims 65-72, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting a dimension of a color space of the at least one of the one or more images.

79. The imaging system of claim 78, wherein to adjust the dimension of the color space, the one or more processors are configured to:

adjusting a dimension of a color space of the at least one of the one or more images by a preset value.

80. The imaging system of any of claims 65-72, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting a dimension of a luminance space of the at least one of the one or more images.

81. The imaging system of claim 80, wherein to adjust the dimension of the luminance space, the one or more processors are configured to:

adjusting a dimension of a luminance space of the at least one of the one or more images by a preset value.

82. The imaging system of any of claims 65-81, wherein the one or more imaging parameters include at least two imaging parameters, and to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting the at least two imaging parameters of the at least one of the one or more images.

83. The imaging system of claim 82, wherein to adjust the at least two imaging parameters, the one or more processors are configured to:

adjusting the at least two imaging parameters based on a preset priority.

84. The imaging system of claim 83, wherein the at least two imaging parameters include at least a spatial frequency of the at least one of the one or more images, and the spatial frequency has the highest priority among the preset priorities.

85. The imaging system of claim 65, wherein the movable object is an unmanned aerial vehicle.

86. An imaging system, comprising:

an imaging device carried on a movable object and configured to capture one or more images; and

one or more processors, which when executing instructions are individually or collectively configured to:

receiving the one or more images from the imaging device; and

determining, prior to encoding the one or more images, whether to adjust one or more imaging parameters of at least one of the one or more images based on one or more state parameters associated with the movable object to obtain an adjusted image.

87. The imaging system of claim 86, wherein to determine whether to adjust one or more imaging parameters, the one or more processors are configured to:

determining whether the one or more state parameters associated with the movable object are within a preset range; and

adjusting the one or more imaging parameters in response to the one or more state parameters associated with the movable object not being within a preset range.

88. The imaging system of claim 87, wherein the one or more processors are further configured to:

encoding the adjusted image to generate encoded image data; and

transmitting the encoded image data from the movable object to a remote terminal using a transceiver.

89. The imaging system of any of claims 87-88, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

reducing the one or more imaging parameters of the at least one of the one or more images.

90. The imaging system of any of claims 87-89, wherein the one or more imaging parameters include at least one of the following parameters of the at least one of the one or more images: spatial frequency, a dimension of a color space, or a dimension of a luminance space.

91. The imaging system of any of claims 87-90, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting the one or more imaging parameters in response to the quantization parameter used for encoding being equal to or greater than a first quantization parameter threshold.

92. The imaging system of claim 91, wherein the quantization parameter used for encoding is adjusted according to a bandwidth of a communication channel between the movable object and a remote terminal.

93. The imaging system of any of claims 87-90, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting the one or more imaging parameters in response to a peak signal-to-noise ratio, PSNR, value of one or more previous images received from the imaging device being equal to or less than a PSNR threshold.

94. The imaging system of any of claims 87-90, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting the one or more imaging parameters in response to an occupied storage space in a buffer being equal to or greater than a threshold, wherein the buffer is configured to buffer encoded image data of one or more previous images received from the imaging device.

95. The imaging system of any of claims 87-94, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting a spatial frequency of the at least one of the one or more images.

96. The imaging system of claim 95, wherein to adjust spatial frequency, the one or more processors are configured to:

adjusting a spatial frequency of the at least one of the one or more images using a filter.

97. The imaging system of claim 96, wherein the filter comprises a bilateral filter.

98. The imaging system of claim 97, wherein the one or more processors are further configured to:

prior to adjusting the spatial frequency, adjusting one or more configuration parameters of the bilateral filter, wherein the one or more configuration parameters of the bilateral filter include at least one of a spatial scale parameter or a value scale parameter.

99. The imaging system of claim 98, wherein to adjust one or more configuration parameters of the bilateral filter, the one or more processors are configured to:

adjusting the spatial scale parameter and the value scale parameter based on a preset order.

100. The imaging system of any of claims 87-94, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting a dimension of a color space of the at least one of the one or more images.

101. The imaging system of claim 100, wherein to adjust the dimension of the color space, the one or more processors are configured to:

adjusting a dimension of a color space of the at least one of the one or more images by a preset value.

102. The imaging system of any of claims 87-94, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting a dimension of a luminance space of the at least one of the one or more images.

103. The imaging system of claim 102, wherein to adjust the dimension of the luminance space, the one or more processors are configured to:

adjusting a dimension of a luminance space of the at least one of the one or more images by a preset value.

104. The imaging system of any of claims 87-103, wherein the one or more imaging parameters include at least two imaging parameters, and to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting the at least two imaging parameters of the at least one of the one or more images.

105. The imaging system of claim 104, wherein to adjust the at least two imaging parameters, the one or more processors are configured to:

adjusting the at least two imaging parameters of the at least one of the one or more images based on a preset priority.

106. The imaging system of claim 105, wherein the at least two imaging parameters include at least a spatial frequency of the at least one of the one or more images, and the spatial frequency has the highest priority among the preset priorities.

107. The imaging system of claim 86, wherein the movable object is an unmanned aerial vehicle.

108. The imaging system of claim 86, wherein the one or more processors are further configured to:

in response to the one or more state parameters associated with the movable object being within a preset range, encoding the at least one of the one or more images to generate encoded image data; and

transmitting the encoded image data from the movable object to a remote terminal.

109. An imaging system, comprising:

an imaging device carried on a movable object and configured to capture one or more images; and

one or more processors, which when executing instructions are individually or collectively configured to:

determining whether one or more state parameters associated with the movable object are within a preset range;

in response to the one or more state parameters not being within a preset range:

adjusting one or more imaging parameters of at least one of the one or more images to obtain an adjusted image; and

encoding the adjusted image to generate encoded image data;

in response to the one or more state parameters being within the preset range, encoding the at least one of the one or more images to generate encoded image data; and

transmitting the encoded image data from the movable object to a remote terminal using a transceiver.

110. The imaging system of claim 109, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

reducing the one or more imaging parameters of the at least one of the one or more images.

111. The imaging system of any of claims 109-110, wherein the one or more imaging parameters include at least one of the following parameters of the at least one of the one or more images: spatial frequency, a dimension of a color space, or a dimension of a luminance space.

112. The imaging system of any of claims 109-111, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting the one or more imaging parameters in response to the quantization parameter used for encoding being equal to or greater than a first quantization parameter threshold.

113. The imaging system of claim 112, wherein the quantization parameter used for encoding is adjusted according to a bandwidth of a communication channel between the movable object and the remote terminal.

114. The imaging system of any of claims 109-111, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting the one or more imaging parameters in response to a peak signal-to-noise ratio, PSNR, value of one or more previous images received from the imaging device being equal to or less than a PSNR threshold.

115. The imaging system of any of claims 109-111, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting the one or more imaging parameters in response to an occupied storage space in a buffer being equal to or greater than a threshold, wherein the buffer is configured to buffer encoded image data of one or more previous images received from the imaging device.

116. The imaging system of any of claims 109-115, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting a spatial frequency of the at least one of the one or more images.

117. The imaging system of claim 116, wherein to adjust spatial frequency, the one or more processors are configured to:

adjusting a spatial frequency of the at least one of the one or more images using a filter.

118. The imaging system of claim 117, wherein the filter comprises a bilateral filter.

119. The imaging system of claim 118, wherein the one or more processors are further configured to:

prior to adjusting the spatial frequency, adjusting one or more configuration parameters of the bilateral filter, wherein the one or more configuration parameters of the bilateral filter include at least one of a spatial scale parameter or a value scale parameter.

120. The imaging system of claim 119, wherein to adjust one or more configuration parameters of the bilateral filter, the one or more processors are configured to:

adjusting the spatial scale parameter and the value scale parameter based on a preset order.

121. The imaging system of any of claims 109-115, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting a dimension of a color space of the at least one of the one or more images.

122. The imaging system of claim 121, wherein to adjust the dimension of the color space, the one or more processors are configured to:

adjusting a dimension of a color space of the at least one of the one or more images by a preset value.

123. The imaging system of any of claims 109-115, wherein to adjust one or more imaging parameters, the one or more processors are configured to:

adjusting a dimension of a luminance space of the at least one of the one or more images.

124. The imaging system of claim 123, wherein to adjust the dimension of the luminance space, the one or more processors are configured to:

adjusting a dimension of a luminance space of the at least one of the one or more images by a preset value.

125. The imaging system of any of claims 109-124, wherein the one or more imaging parameters include at least two imaging parameters, and to adjust the one or more imaging parameters, the one or more processors are configured to:

adjusting the at least two imaging parameters of the at least one of the one or more images.

126. The imaging system of claim 125, wherein to adjust the at least two imaging parameters, the one or more processors are configured to:

adjusting the at least two imaging parameters based on a preset priority.

127. The imaging system of claim 126, wherein the at least two imaging parameters include at least a spatial frequency of the at least one of the one or more images, and the spatial frequency has the highest priority among the preset priorities.

128. The imaging system of claim 109, wherein the movable object is an unmanned aerial vehicle.

129. A non-transitory computer program product comprising machine-readable instructions for causing a programmable processing apparatus to perform operations comprising:

receiving one or more images from an imaging device carried on a movable object;

adjusting one or more imaging parameters of at least one of the one or more images to obtain an adjusted image;

encoding the adjusted image to generate encoded image data; and

transmitting the encoded image data from the movable object to a remote terminal.

130. The non-transitory computer program product of claim 129, wherein adjusting one or more imaging parameters comprises:

reducing one or more imaging parameters of the at least one of the one or more images.

131. The non-transitory computer program product of claim 129 or 130, wherein the one or more imaging parameters include at least one of the following parameters of the at least one of the one or more images: spatial frequency, a dimension of a color space, or a dimension of a luminance space.

132. The non-transitory computer program product of any one of claims 129-131, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to one or more state parameters associated with the movable object not being within a preset range.

133. The non-transitory computer program product of claim 132, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to the quantization parameter used for encoding being equal to or greater than a first quantization parameter threshold.

134. The non-transitory computer program product of claim 133, wherein the quantization parameter is adjusted according to a bandwidth of a communication channel between the movable object and the remote terminal.

135. The non-transitory computer program product of claim 132, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to a peak signal-to-noise ratio, PSNR, value of one or more previous images received from the imaging device being equal to or less than a PSNR threshold.

136. The non-transitory computer program product of claim 132, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to an occupied storage space in a buffer being equal to or greater than a threshold, wherein the buffer is configured to buffer encoded image data of one or more previous images received from the imaging device.

137. The non-transitory computer program product of any one of claims 129-136, wherein adjusting one or more imaging parameters comprises:

adjusting a spatial frequency of the at least one of the one or more images.

138. The non-transitory computer program product of claim 137, wherein adjusting the spatial frequency comprises:

adjusting a spatial frequency of the at least one of the one or more images using a filter.

139. The non-transitory computer program product of claim 138, wherein the filter comprises a bilateral filter.

140. The non-transitory computer program product of claim 139, the operations further comprising:

prior to adjusting the spatial frequency, adjusting one or more configuration parameters of the bilateral filter, wherein the one or more configuration parameters of the bilateral filter include at least one of a spatial scale parameter or a value scale parameter.

141. The non-transitory computer program product of claim 140, wherein adjusting one or more configuration parameters of the bilateral filter comprises:

adjusting the spatial scale parameter and the value scale parameter based on a preset order.

142. The non-transitory computer program product of any one of claims 129-136, wherein adjusting one or more imaging parameters comprises:

adjusting a dimension of a color space of the at least one of the one or more images.

143. The non-transitory computer program product of claim 142, wherein adjusting the dimension of the color space comprises:

adjusting a dimension of a color space of the at least one of the one or more images by a preset value.

144. The non-transitory computer program product of any one of claims 129-136, wherein adjusting one or more imaging parameters comprises:

adjusting a dimension of a luminance space of the at least one of the one or more images.

145. The non-transitory computer program product of claim 144, wherein adjusting the dimension of the luminance space comprises:

adjusting a dimension of a luminance space of the at least one of the one or more images by a preset value.

146. The non-transitory computer program product of any one of claims 129-145, wherein the one or more imaging parameters include at least two imaging parameters and adjusting one or more imaging parameters includes:

adjusting the at least two imaging parameters of the at least one of the one or more images.

147. The non-transitory computer program product of claim 146, wherein adjusting the at least two imaging parameters comprises:

adjusting the at least two imaging parameters based on a preset priority.

148. The non-transitory computer program product of claim 147, wherein the at least two imaging parameters include at least a spatial frequency of the at least one of the one or more images, and the spatial frequency has the highest priority among the preset priorities.

149. The non-transitory computer program product of claim 129, wherein the movable object is an unmanned aerial vehicle.

150. A non-transitory computer program product comprising machine-readable instructions for causing a programmable processing apparatus to perform operations comprising:

receiving one or more images from an imaging device of a movable object; and

determining, prior to encoding the one or more images, whether to adjust one or more imaging parameters of at least one of the one or more images based on one or more state parameters associated with the movable object to obtain an adjusted image.

151. The non-transitory computer program product of claim 150, wherein determining whether to adjust one or more imaging parameters comprises:

determining whether the one or more state parameters associated with the movable object are within a preset range; and

adjusting the one or more imaging parameters in response to the one or more state parameters associated with the movable object not being within a preset range.

152. The non-transitory computer program product of claim 151, the operations further comprising:

encoding the adjusted image to generate encoded image data; and

transmitting the encoded image data from the movable object to a remote terminal.

153. The non-transitory computer program product of any one of claims 151-152, wherein adjusting one or more imaging parameters comprises:

reducing one or more imaging parameters of the at least one of the one or more images.

154. The non-transitory computer program product of any one of claims 151-153, wherein the one or more imaging parameters include at least one of the following parameters of the at least one of the one or more images: spatial frequency, a dimension of a color space, or a dimension of a luminance space.

155. The non-transitory computer program product of any one of claims 151-154, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to the quantization parameter used for encoding being equal to or greater than a first quantization parameter threshold.

156. The non-transitory computer program product of claim 155, wherein the quantization parameter used for encoding is adjusted according to a bandwidth of a communication channel between the movable object and a remote terminal.

157. The non-transitory computer program product of any one of claims 151-154, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to a peak signal-to-noise ratio, PSNR, value of one or more previous images received from the imaging device being equal to or less than a PSNR threshold.

158. The non-transitory computer program product of any one of claims 151-154, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to an occupied storage space in a buffer being equal to or greater than a threshold, wherein the buffer is configured to buffer encoded image data of one or more previous images received from the imaging device.

159. The non-transitory computer program product of any one of claims 151-158, wherein adjusting one or more imaging parameters comprises:

adjusting a spatial frequency of the at least one of the one or more images.

160. The non-transitory computer program product of claim 159, wherein adjusting the spatial frequency comprises:

adjusting a spatial frequency of the at least one of the one or more images using a filter.

161. The non-transitory computer program product of claim 160, wherein the filter comprises a bilateral filter.

162. The non-transitory computer program product of claim 161, the operations further comprising:

prior to adjusting the spatial frequency, adjusting one or more configuration parameters of the bilateral filter, wherein the one or more configuration parameters of the bilateral filter include at least one of a spatial scale parameter or a value scale parameter.

163. The non-transitory computer program product of claim 162, wherein adjusting one or more configuration parameters of the bilateral filter comprises:

adjusting the spatial scale parameter and the value scale parameter based on a preset order.

164. The non-transitory computer program product of any one of claims 151-158, wherein adjusting one or more imaging parameters comprises:

adjusting a dimension of a color space of the at least one of the one or more images.

165. The non-transitory computer program product of claim 164, wherein adjusting the dimension of the color space comprises:

adjusting a dimension of a color space of the at least one of the one or more images by a preset value.

166. The non-transitory computer program product of any one of claims 151-158, wherein adjusting one or more imaging parameters comprises:

adjusting a dimension of a luminance space of the at least one of the one or more images.

167. The non-transitory computer program product of claim 166, wherein adjusting the dimension of the luminance space comprises:

adjusting a dimension of a luminance space of the at least one of the one or more images by a preset value.

168. The non-transitory computer program product of any one of claims 151-167, wherein the one or more imaging parameters include at least two imaging parameters and adjusting one or more imaging parameters includes:

adjusting the at least two imaging parameters of the at least one of the one or more images.

169. The non-transitory computer program product of claim 168, wherein adjusting the at least two imaging parameters comprises:

adjusting the at least two imaging parameters of the at least one of the one or more images based on a preset priority.

170. The non-transitory computer program product of claim 169, wherein the at least two imaging parameters include at least a spatial frequency of the at least one of the one or more images, and the spatial frequency has the highest priority among the preset priorities.

171. The non-transitory computer program product of claim 150, wherein the movable object is an unmanned aerial vehicle.

172. The non-transitory computer program product of claim 150, the operations further comprising:

in response to the one or more state parameters associated with the movable object being within a preset range, encoding the at least one of the one or more images to generate encoded image data; and

transmitting the encoded image data from the movable object to a remote terminal.

173. A non-transitory computer program product comprising machine-readable instructions for causing a programmable processing apparatus to perform operations comprising:

receiving, by a processor, one or more images from an imaging device of a movable object;

determining whether one or more state parameters associated with the movable object are within a preset range;

in response to the one or more state parameters not being within a preset range:

adjusting one or more imaging parameters of at least one of the one or more images to obtain an adjusted image; and

encoding the adjusted image to generate encoded image data;

in response to the one or more state parameters being within the preset range, encoding the at least one of the one or more images to generate encoded image data; and

transmitting the encoded image data from the movable object to a remote terminal.

174. The non-transitory computer program product of claim 173, wherein adjusting one or more imaging parameters comprises:

reducing one or more imaging parameters of the at least one of the one or more images.

175. The non-transitory computer program product of any one of claims 173-174, wherein the one or more imaging parameters include at least one of the following parameters of the at least one of the one or more images: spatial frequency, a dimension of a color space, or a dimension of a luminance space.

176. The non-transitory computer program product of any one of claims 173-175, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to the quantization parameter used for encoding being equal to or greater than a first quantization parameter threshold.

177. The non-transitory computer program product of claim 176, wherein the quantization parameter used for encoding is adjusted according to a bandwidth of a communication channel between the movable object and the remote terminal.

178. The non-transitory computer program product of any one of claims 173-175, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to a peak signal-to-noise ratio, PSNR, value of one or more previous images received from the imaging device being equal to or less than a PSNR threshold.

179. The non-transitory computer program product of any one of claims 173-175, wherein adjusting one or more imaging parameters comprises:

adjusting the one or more imaging parameters in response to an occupied storage space in a buffer being equal to or greater than a threshold, wherein the buffer is configured to buffer encoded image data of one or more previous images received from the imaging device.

180. The non-transitory computer program product of any one of claims 173-179, wherein adjusting one or more imaging parameters comprises:

adjusting a spatial frequency of the at least one of the one or more images.

181. The non-transitory computer program product of claim 180, wherein adjusting the spatial frequency comprises:

adjusting a spatial frequency of the at least one of the one or more images using a filter.

182. The non-transitory computer program product of claim 181, wherein the filter comprises a bilateral filter.

183. The non-transitory computer program product of claim 182, the operations further comprising:

prior to adjusting the spatial frequency, adjusting one or more configuration parameters of the bilateral filter, wherein the one or more configuration parameters of the bilateral filter include at least one of a spatial scale parameter or a value scale parameter.

184. The non-transitory computer program product of claim 183, wherein adjusting one or more configuration parameters of the bilateral filter comprises:

adjusting the spatial scale parameter and the value scale parameter based on a preset order.

185. The non-transitory computer program product of any one of claims 173-179, wherein adjusting one or more imaging parameters comprises:

adjusting a dimension of a color space of the at least one of the one or more images.

186. The non-transitory computer program product of claim 185, wherein adjusting the dimension of the color space comprises:

adjusting a dimension of a color space of the at least one of the one or more images by a preset value.

187. The non-transitory computer program product of any one of claims 173-179, wherein adjusting one or more imaging parameters comprises:

adjusting a dimension of a luminance space of the at least one of the one or more images.

188. The non-transitory computer program product of claim 187, wherein adjusting the dimension of the luminance space comprises:

adjusting a dimension of a luminance space of the at least one of the one or more images by a preset value.

189. The non-transitory computer program product of any one of claims 173-188, wherein the one or more imaging parameters include at least two imaging parameters and adjusting one or more imaging parameters includes:

adjusting the at least two imaging parameters of the at least one of the one or more images.

190. The non-transitory computer program product of claim 189, wherein adjusting the at least two imaging parameters comprises:

adjusting the at least two imaging parameters based on a preset priority.

191. The non-transitory computer program product of claim 190, wherein the at least two imaging parameters include at least a spatial frequency of the at least one of the one or more images, and the spatial frequency has the highest priority among the preset priorities.

192. The non-transitory computer program product of claim 173, wherein the movable object is an unmanned aerial vehicle.

Technical Field

The present disclosure relates generally to pre-processing of content, such as images, video or graphics, and bit rate control of wireless transmission of the content.

Background

In conventional video transmission systems, video is first encoded using video encoding (e.g., compression) techniques. The encoded video is then transmitted to a receiver device over a communication channel. The encoding techniques and parameters used for video encoding may affect, for example, a bit rate associated with the encoded video, a peak signal-to-noise ratio (PSNR) of the encoded video, and/or the space occupied in a buffer, all of which may affect the quality of the encoded video at playback. Additionally, a transmitter device transmitting the encoded video, a receiver device receiving the encoded video, or a communication channel used to transmit the encoded video may have constraints that affect the quality of the encoded video at playback.
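For reference, the PSNR mentioned above is the standard peak signal-to-noise ratio between an original frame and its encoded-and-decoded copy. The short sketch below is not part of the original disclosure; it is a minimal illustration, assuming 8-bit frames, of how PSNR is commonly computed.

```python
# Minimal PSNR sketch for 8-bit frames; illustrative only, not from the disclosure.
import numpy as np

def psnr(original: np.ndarray, decoded: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio (in dB) between an original frame and its decoded copy."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((max_value ** 2) / mse)
```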

Disclosure of Invention

The described embodiments relate to methods and systems for pre-processing content prior to encoding the content. For example, a system may include: a pre-processing circuit for processing input data and controlling one or more parameters associated with the pre-processing circuit; an encoder for encoding the processed data to generate encoded data; a rate controller for controlling a bit rate associated with the encoded data; and a transmitter for transmitting the encoded data.
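As a rough, hypothetical illustration of how the four components listed above could be wired together in software, consider the sketch below. The class and method names and the placeholder bodies are assumptions for illustration only and do not come from the disclosure.

```python
# Hypothetical sketch of the pre-processing / encoding / rate-control / transmit chain.
from dataclasses import dataclass

@dataclass
class PreProcessor:
    smoothing_strength: float = 0.0           # example of a controllable pre-processing parameter

    def process(self, frame):
        return frame                           # placeholder: apply the configured pre-processing

@dataclass
class Encoder:
    quantization_parameter: int = 28           # example encoder parameter steered by rate control

    def encode(self, frame) -> bytes:
        return bytes()                         # placeholder: compress the pre-processed frame

class RateController:
    def update(self, encoder: Encoder, bandwidth_bps: float) -> None:
        pass                                   # placeholder: adapt the encoder to the channel bandwidth

class Transmitter:
    def send(self, payload: bytes) -> None:
        pass                                   # placeholder: hand encoded data to the wireless link

def transmit_frame(frame, pre: PreProcessor, enc: Encoder,
                   rc: RateController, tx: Transmitter, bandwidth_bps: float) -> None:
    rc.update(enc, bandwidth_bps)              # rate controller tunes the encoder bit rate
    tx.send(enc.encode(pre.process(frame)))    # pre-process, encode, transmit
```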

Some embodiments relate to an image processing method. The image processing method comprises: receiving, by a processor, one or more images from an imaging device carried on a movable object; and adjusting one or more imaging parameters of at least one of the one or more images to obtain an adjusted image. The method further comprises: encoding the adjusted image to generate encoded image data; and transmitting the encoded image data from the movable object to a remote terminal.
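One of the claimed ways to adjust an imaging parameter (see, e.g., claims 9-13) is to reduce the spatial frequency of an image with a bilateral filter configured by a spatial scale parameter and a value scale parameter. The sketch below is a minimal illustration of that single step using OpenCV's bilateral filter; the function name and the specific sigma values are assumptions, not values from the disclosure.

```python
# Illustrative spatial-frequency reduction with a bilateral filter (OpenCV).
# The sigma values are arbitrary example settings, not taken from the disclosure.
import cv2
import numpy as np

def reduce_spatial_frequency(frame: np.ndarray,
                             value_scale: float = 25.0,   # "value scale" maps to sigmaColor
                             spatial_scale: float = 7.0   # "spatial scale" maps to sigmaSpace
                             ) -> np.ndarray:
    # A non-positive diameter (0) lets OpenCV derive the neighborhood size from sigmaSpace.
    return cv2.bilateralFilter(frame, 0, value_scale, spatial_scale)
```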

Some embodiments relate to an image processing method. The image processing method comprises: receiving, by a processor, one or more images from an imaging device of a movable object; and determining, prior to encoding the one or more images and based on one or more state parameters associated with the movable object, whether to adjust one or more imaging parameters of at least one of the one or more images to obtain an adjusted image.

Some embodiments relate to an image processing method. The image processing method comprises: receiving, by a processor, one or more images from an imaging device of a movable object; and determining whether one or more state parameters associated with the movable object are within a preset range. In response to the one or more state parameters not being within the preset range, the method comprises: adjusting one or more imaging parameters of at least one of the one or more images to obtain an adjusted image; and encoding the adjusted image to generate encoded image data. In response to the one or more state parameters being within the preset range, the method comprises encoding the at least one of the one or more images to generate encoded image data. The method further comprises transmitting the encoded image data from the movable object to a remote terminal.
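A minimal sketch of this branch is shown below, assuming a single scalar state parameter and placeholder adjust/encode steps; all names and the example range are hypothetical.

```python
# Illustrative decision: encode directly when the state parameter is in range,
# otherwise pre-process (adjust imaging parameters) first. Names are placeholders.
from typing import Tuple
import numpy as np

def adjust_imaging_parameters(frame: np.ndarray) -> np.ndarray:
    return frame                                      # placeholder: e.g. reduce spatial frequency

def encode(frame: np.ndarray) -> bytes:
    return frame.tobytes()                            # placeholder: stand-in for the video encoder

def handle_frame(frame: np.ndarray, state_value: float,
                 preset_range: Tuple[float, float] = (0.0, 1.0)) -> bytes:
    low, high = preset_range
    if low <= state_value <= high:
        return encode(frame)                          # state nominal: encode as captured
    return encode(adjust_imaging_parameters(frame))   # out of range: adjust, then encode
```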

Some embodiments relate to an imaging system. The imaging system includes: an imaging device carried on a movable object and configured to capture one or more images; and one or more processors. The one or more processors, when executing instructions, individually or collectively: adjust one or more imaging parameters of at least one of the one or more images to obtain an adjusted image; and encode the adjusted image to generate encoded image data. The one or more processors also transmit the encoded image data from the movable object to a remote terminal using a transceiver.

Some embodiments relate to an imaging system. The imaging system includes: an imaging device carried on a movable object and configured to capture one or more images; and one or more processors. The one or more processors, when executing instructions, individually or collectively: receive the one or more images from the imaging device; and determine, prior to encoding the one or more images and based on one or more state parameters associated with the movable object, whether to adjust one or more imaging parameters of at least one of the one or more images to obtain an adjusted image.

Some embodiments relate to an imaging system. The imaging system includes: an imaging device carried on a movable object and configured to capture one or more images; and one or more processors. The one or more processors, when executing instructions, individually or collectively determine whether one or more state parameters associated with the movable object are within a preset range. In response to the one or more state parameters not being within the preset range, the one or more processors: adjust one or more imaging parameters of at least one of the one or more images to obtain an adjusted image; and encode the adjusted image to generate encoded image data. In response to the one or more state parameters being within the preset range, the one or more processors encode at least one of the one or more images to generate encoded image data. The one or more processors also transmit the encoded image data from the movable object to a remote terminal using a transceiver.

Some embodiments relate to a non-transitory computer program product comprising machine-readable instructions. The machine-readable instructions cause a programmable processing apparatus to perform operations comprising: receiving one or more images from an imaging device carried on a movable object; and adjusting one or more imaging parameters of at least one of the one or more images to obtain an adjusted image. The operations further comprise: encoding the adjusted image to generate encoded image data; and transmitting the encoded image data from the movable object to a remote terminal.

Some embodiments relate to a non-transitory computer program product comprising machine-readable instructions. The machine-readable instructions cause a programmable processing apparatus to perform operations comprising: receiving one or more images from an imaging device of a movable object; and determining whether to adjust one or more imaging parameters of at least one of the one or more images to obtain an adjusted image based on one or more state parameters associated with the movable object prior to encoding the one or more images.

Some embodiments relate to a non-transitory computer program product comprising machine-readable instructions. The machine-readable instructions cause a programmable processing apparatus to perform operations comprising: receiving one or more images from an imaging device of a movable object; and determining whether one or more state parameters associated with the movable object are within a preset range. In response to the one or more state parameters not being within the preset range, the operations further comprise: adjusting one or more imaging parameters of at least one of the one or more images to obtain an adjusted image; and encoding the adjusted image to generate encoded image data. In response to the one or more state parameters being within the preset range, the operations further comprise: encoding at least one of the one or more images to generate encoded image data. The operations further comprise: transmitting the encoded image data from the movable object to a remote terminal.

This summary is provided merely for purposes of illustrating some embodiments to provide an understanding of the subject matter described herein. Accordingly, the above-described features are merely examples and should not be construed to narrow the scope or spirit of the subject matter in the present disclosure. Other features, aspects, and advantages of the disclosure will become apparent from the following detailed description, the drawings, and the claims.

Drawings

The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate the present disclosure and, together with the description, further serve to explain the principles of the disclosure and to enable a person skilled in the pertinent art to make and use the disclosure.

Fig. 1 is a block diagram depicting an example of a system for performing preprocessing and parameter control, in accordance with some embodiments.

Fig. 2A is a block diagram depicting an example of a system for performing preprocessing and parameter control, in accordance with some embodiments.

Fig. 2B is a block diagram depicting an example of a pre-processing circuit, in accordance with some embodiments.

Fig. 3A is a flow diagram depicting an example method for preprocessing, in accordance with some embodiments.

Figs. 3B-3D are flow diagrams depicting example methods for implementing step 303 of method 300 of fig. 3A, according to some embodiments.

Figs. 4A-4D are flow diagrams depicting example methods for pre-processing and parameter control, according to some embodiments.

FIG. 5 is an example computer system that may be used to implement some embodiments, or portions thereof.

The present disclosure is described with reference to the accompanying drawings. In the drawings, like reference numbers generally indicate identical or functionally similar elements. Additionally, the left-most digit(s) of a reference number generally identifies the figure in which the reference number first appears.

Detailed Description

According to some embodiments, content, including but not limited to video data, image data, graphics data, etc., is encoded (e.g., compressed) into encoded data before the encoded data is transmitted or stored. According to some examples, the number of bits used to encode a unit of data (e.g., a frame of video) per unit of time (e.g., a second) is referred to as a bit rate. Due to some constraints associated with, for example, a transmitter that transmits encoded data, a receiver that receives encoded data, a storage medium for storing encoded data, or a communication channel for transmitting encoded data, bit rate control techniques may be used to control one or more parameters associated with the encoding technique used to generate the encoded data. For example, due to transmission bandwidth constraints of the communication channel, bit rate control techniques may be used to control the quantization parameters of the encoding techniques used to generate the encoded data. For example, these bit rate control techniques may be used to achieve matching between the bit rate associated with the encoded data and the bandwidth of the communication channel.

However, the parameters of the encoding technique (e.g., quantization parameters) may have upper or lower value limits, such that the bit rate control technique is unable to further change these parameters to control the bit rate. For example, the quantization parameter has a maximum limit defined by the encoding technique. In some circumstances, it may not be possible to achieve the target bit rate even if the quantization parameter reaches this maximum value. According to some examples, video lag will be experienced at the receiver device side because the target bit rate may not be achieved.

Encoding techniques used to encode content into encoded data may affect the quality of the encoded data when played back at the receiver device. For example, a peak signal-to-noise ratio (PSNR) value (e.g., determined by decoding the encoded data and comparing it to the content) and/or the storage space of one or more buffers used to store the encoded data may affect the quality of the encoded data when played back at the receiver device. In addition, bit rate control techniques used to control one or more parameters of the encoding technique may also affect the quality of the encoded data. In some examples, video quality degradation and loss (particularly of subjective video quality) are determined by the encoding techniques and algorithms being used, rather than being under user control.

Embodiments of the present disclosure relate to controlling one or more parameters associated with an encoding technique (e.g., without limitation, bit rate, PSNR, buffer size, etc.), for example, by actively pre-processing content and actively controlling one or more parameters of the pre-processing prior to applying the encoding technique. Thus, the quality of the encoded data can be actively controlled. Furthermore, according to some embodiments, even if the upper or lower value limit of the parameters of the encoding technique is reached, the target bit rate may still be obtained by pre-processing the content before encoding. This may provide for smooth transmission of the encoded data, for example. Embodiments of the present disclosure may achieve a target bit rate, a target PSNR, or a target buffer size, and prevent interruptions in the encoded data, for example, during transmission over long distances.

The pre-processing and parameter control of embodiments of the present disclosure may improve the quality of the encoding technique and may keep parameters (e.g., quantization parameters) of the encoding technique within a high quality range. For example, if the parameters of the encoding technique fall outside the high quality range during bit rate control, the pre-processing techniques of the present disclosure are used to bring the parameters of the encoding technique back within the high quality range and keep them there. The preprocessing and parameter control of embodiments of the present disclosure may result in higher quality encoded data, better transmission quality (e.g., less lag time) for transmitting the encoded data, achieving a target bit rate, and the like.

In addition, by using the pre-processing and parameter control of embodiments of the present disclosure, the quality of the encoded data (e.g., at playback) conforms to a predefined quality classification. In other words, important information of the content (e.g., edge information, high frequency information, etc.) can be preserved by using the embodiments of the present disclosure, and the image quality is not controlled only by the bit rate control technique.

Fig. 1 depicts a block diagram of an example system 100 for performing preprocessing and parameter control, in accordance with some embodiments. As shown in fig. 1, system 100 may include a movable object 101 (such as, but not limited to, an Unmanned Aerial Vehicle (UAV)) and a remote terminal 103 (e.g., a receiver device) in communication with each other over a communication channel 105.

According to some embodiments, UAV 101 may be configured to: collecting data; processing the collected data; and transmit the processed data to the receiver device 103 over the communication channel 105. For example, UAV 101 may be configured to collect data that may include, but is not limited to: video data, image data, graphics data, audio data, text data, and the like. For example, UAV 101 collects data generated by one or more sensors, such as, but not limited to: visual sensors (e.g., cameras, infrared sensors), microphones, proximity sensors (e.g., ultrasound, lidar), position sensors, temperature sensors, touch sensors, and the like. In some examples, the data collected by UAV 101 may include data from the user such as biometric information, including but not limited to: facial features, fingerprint scans, retinal scans, voice recordings, DNA samples, and the like.

According to some embodiments, the receiver device 103 may include, but is not limited to: remote control devices, laptop computers, desktop computers, tablet computers, television receivers, display devices, mobile phones, in-vehicle devices, aircraft-carried devices, and the like. The receiver device 103 is configured to receive data transmitted from the UAV 101 over the communication channel 105. The receiver apparatus 103 is further configured to: processing the received data; and displaying the data, for example, on a display device. In some embodiments, the receiver device 103 is also configured to transmit information related to the received data or communication channel 105 back to the UAV 101 over the communication channel 105.

According to some examples, the communication channel 105 may include or be associated with a wired or wireless network, such as the internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Storage Area Network (SAN), a peer-to-peer (P2P) network, a WiFi network, Bluetooth Low Energy, a radio network, a Long Term Evolution (LTE) network, a 3G, 4G, or 5G network, or other networks.

According to some embodiments, and as discussed in more detail below, the UAV 101 is configured to encode the collected data to generate encoded data prior to transmitting the encoded data to the receiver device 103. UAV 101 is also configured to control one or more state parameters associated with UAV 101. The one or more state parameters associated with UAV 101 may include, but are not limited to: one or more parameters of an encoder of UAV 101 (e.g., bit rate, quantization parameters, PSNR, storage space of a buffer, etc.). In some embodiments, UAV 101 is configured to pre-process the collected data prior to encoding the collected data. UAV 101 is configured to pre-process the collected data if one or more state parameters associated with UAV 101 are not within a preset range. UAV 101 is also configured to adjust one or more parameters of the pre-processing if the one or more state parameters associated with UAV 101 are not within the preset range. Thus, the quality of the encoded data can be actively controlled. This may provide for smooth transmission of the encoded data with less delay in the transmission, for example.

By applying the pre-processing and parameter control of embodiments of the present disclosure, UAV 101 may improve the quality of the encoding techniques it uses and may keep the parameters of its encoding techniques within a high quality range. If the parameters of its encoding technique fall outside the high quality range, UAV 101 is configured to use the pre-processing techniques of the present disclosure to bring the parameters of the encoding technique back within the high quality range and keep them there. Thus, by using the pre-processing and parameter control of embodiments of the present disclosure, UAV 101 may be configured to transmit higher quality encoded data with better transmission quality (e.g., less lag time), obtain a target bit rate, and so on.

It should be noted that although one UAV and one receiver device are depicted in fig. 1, embodiments of the present disclosure may include one or more UAVs communicating with one or more receiver devices over one or more communication channels. Also, the system 100 of FIG. 1 is provided as an exemplary environment. Embodiments of the present disclosure are not limited to this system, UAV 101 may comprise any movable object, and receiver device 103 may comprise any remote terminal. These embodiments of the present disclosure may be applied to systems including other devices, such as but not limited to: unmanned Aerial Systems (UAS), bicycles, cars, trucks, ships, yachts, trains, helicopters, aircraft, robots, and the like.

Fig. 2A is a block diagram depicting an example of a system 200 for performing preprocessing and parameter control, in accordance with some embodiments. For example, system 200 may be part of or associated with UAV 101 of fig. 1.

As shown in fig. 2A, system 200 may include: a preprocessing circuit 201, an imaging device 202, an encoder 203, a transceiver 205, and a storage device 231. As discussed in more detail below, the system 200 is configured to control the bit rate associated with the encoded data by using, for example, the pre-processing circuit 201, while keeping the encoding parameters of the encoder 203 within a high quality range. According to some examples, the high quality range of the encoding parameters of the encoder 203 may comprise a predetermined range within which the data encoded by the encoder 203 has a predefined quality. According to some embodiments, the rate controller 207 is configured to control the bit rate associated with the encoded data by adjusting the encoding parameters of encoder 203. If the encoding parameters of the encoder 203 fall outside the high quality range during bit rate control, the rate controller 207 and the pre-processing circuit 201 are configured to bring the encoding parameters back within the high quality range and keep them there.

According to some embodiments, imaging device 202 may include one or more sensors, such as, but not limited to: visual sensors (e.g., cameras, infrared sensors), microphones, proximity sensors (e.g., ultrasound, lidar), position sensors, temperature sensors, touch sensors, and the like. Data captured by the imaging device 202 is input to the preprocessing circuit 201. Although the present disclosure discusses images and image data as the data captured by imaging device 202 and as input data 211, embodiments of the present disclosure are not limited to image data. The input data 211 may include, but is not limited to: video data, image data, graphics data, audio data, text data, or any other data to be encoded. In some examples, the input data 211 may be data from a user such as biometric information, including but not limited to: facial features, fingerprint scans, retinal scans, voice recordings, DNA samples, and the like.

The preprocessing circuit 201 receives or retrieves input data 211. As discussed in more detail below, the pre-processing circuit 201 is configured to process the input data 211 before the input data 211 is encoded by the encoder 203. Encoder input data 213, which is an output of the preprocessing circuit 201, is input to the encoder 203. The encoder 203 encodes encoder input data 213 to generate encoded data 215. The rate controller 207 is configured to control the bit rate associated with the encoded data 215 while controlling the pre-processing circuit 201 so that the encoding parameters of the encoder 203 are within an acceptable range. In some examples, the encoded data 215 is transmitted (217) to a remote terminal over a communication channel using, for example, the transceiver 205. Additionally or alternatively, the encoded data 215 is stored in a storage device.

According to some examples, when the pre-processing circuitry 201 receives input data 211 (e.g., one or more images) from the imaging device 202, the pre-processing circuitry 201 (alone or in conjunction with the rate controller 207) is configured to determine whether to pre-process the input data 211. Preprocessing the input data 211 may include adjusting one or more imaging parameters of the input data 211. For example, preprocessing the input data 211 may include reducing one or more imaging parameters of the input data 211. According to some examples, the one or more imaging parameters may include, but are not limited to: a spatial frequency of the image, a dimension of a color space of the image, and/or a dimension of a luminance space of the image. According to some examples, the pre-processing circuit 201 is configured to adjust one of the imaging parameters. Additionally or alternatively, the pre-processing circuit 201 may adjust two or more of the imaging parameters. For example, the pre-processing circuit 201 may adjust two or more of the imaging parameters based on a preset priority. In some examples, adjusting the spatial frequency may have a higher priority than adjusting the dimension of the color space and/or adjusting the dimension of the luminance space.

The preprocessing circuit 201 is configured to determine whether to preprocess the input data 211 based on one or more state parameters associated with the system 200. As discussed in more detail below, the pre-processing circuit 201 (alone or in conjunction with the rate controller 207) is configured to: determining one or more state parameters associated with system 200; comparing the one or more status parameters to one or more preset values (e.g., one or more preset ranges); and determining whether to preprocess the input data 211 based on the comparison.

According to some embodiments, the one or more state parameters associated with system 200 may include quantization parameters of encoder 203. The quantization parameter may comprise a quantization parameter used to encode previous input data (e.g., one or more images received prior to input data 211). Additionally or alternatively, the quantization parameter may comprise an adjusted quantization parameter obtained by adjusting a quantization parameter used to encode one or more images received prior to the input data 211, wherein the adjustment may be made based at least on a bandwidth of a communication channel (e.g., the communication channel 105 of fig. 1). For example, the pre-processing circuit 201 may determine to pre-process the input data 211 if the quantization parameter of the encoder 203 used to encode the previous input data is equal to or greater than a first quantization parameter threshold.

Additionally or alternatively, the one or more state parameters associated with system 200 may include PSNR values of previous input data (e.g., one or more images received prior to input data 211). For example, the preprocessing circuit 201 may determine to preprocess the input data 211 if the PSNR value of the previous input data is equal to or less than the PSNR threshold. According to some embodiments, the PSNR value associated with an image received before the input data 211 is obtained by encoding and decoding the image. For example, an image is first encoded using, for example, encoder 203, then the encoded image is decoded (using, for example, a decoder (not shown)), and the decoded version of the image is compared to the original version to determine the PSNR value for the image.

In addition to or instead of the quantization parameter and/or PSNR value, one or more state parameters associated with system 200 may include: storage space occupied in one or more buffers used to store encoded data associated with previous input data (e.g., one or more images received prior to input data 211). For example, the pre-processing circuitry 201 may determine to pre-process the input data 211 if the storage space occupied in one or more buffers used to store encoded data associated with previous input data is equal to or greater than a threshold.
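
By way of illustration only, the decision described above might be expressed as in the following sketch; the function name, threshold names, and threshold values are assumptions chosen for illustration and are not the disclosed implementation.

```python
# Hypothetical thresholds (illustrative values only).
QP_THRESHOLD = 45          # first quantization parameter threshold
PSNR_THRESHOLD_DB = 32.0   # PSNR threshold in dB
BUFFER_THRESHOLD = 0.8     # fraction of buffer capacity considered "full"

def should_preprocess(last_qp, last_psnr_db, buffer_occupancy):
    """Return True if any state parameter is outside its preset range."""
    if last_qp >= QP_THRESHOLD:
        return True                      # quantization parameter too high
    if last_psnr_db <= PSNR_THRESHOLD_DB:
        return True                      # reconstructed quality too low
    if buffer_occupancy >= BUFFER_THRESHOLD:
        return True                      # encoder/transmit buffer too full
    return False

# Example: the previous image was encoded with a quantization parameter of 47,
# so pre-processing of the next input data would be triggered.
print(should_preprocess(last_qp=47, last_psnr_db=35.2, buffer_occupancy=0.4))  # True
```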

It is noted that although quantization parameters, PSNR values, and/or storage space in a buffer are discussed as examples of one or more state parameters associated with system 200, embodiments of the present disclosure are not limited to these examples, and other parameters of system 200 may be used by preprocessing circuit 201 to determine whether to preprocess input data 211.

Some examples of using the quantization parameter of the encoder 203 as one state parameter for determining whether to pre-process the input data 211 will now be discussed in more detail. Encoding the encoder input data 213 (input to the encoder 203) may include data compression, encryption, error coding, format conversion, and the like. For example, the encoder input data 213 may be compressed to reduce the number of bits sent over the communication channel. In another example, the encoder input data 213 may be encrypted to protect the encoder input data 213 during transmission and/or storage. Different types of encoding techniques may be used to encode the encoder input data 213. The type of encoding technique may be determined based on, for example: the type of encoder input data 213, the requirements of the device used to encode the encoder input data 213, the type of storage device used to store the encoded data and/or the type of communication channel used to transmit the encoded data, security requirements, etc. Encoding techniques may include, but are not limited to: video compression, audio compression, lossy compression, lossless compression, huffman coding, Lempel-Ziv-welch (lzw) compression, and the like.

According to some examples, the encoding may include a transforming step, a quantizing step, and an entropy encoding step. For example, during the transforming step, the original encoder input data 213 is transformed from a first domain to a different domain (e.g., a spatial frequency domain) that is appropriate for the data content (e.g., video data) of the encoder input data 213. Any suitable transform coding technique may be used, including but not limited to Fourier-type transforms such as Discrete Cosine Transforms (DCTs) or modified DCTs. According to some examples using DCT, the DCT matrix is determined based on, for example, the size of the data unit. The data units may include blocks of 4x4 or 8x8 pixels, macroblocks of 16x16 pixels, or any other suitable set of data. The DCT matrix is then applied to the data units using matrix multiplication to create a transformed matrix comprising transform coefficients.
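
For illustration, the following sketch builds an orthonormal DCT-II matrix and applies it to an 8x8 data unit by matrix multiplication; the function name and the random block are illustrative assumptions, not part of the disclosed system.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix of size n x n.
    k = np.arange(n).reshape(-1, 1)   # frequency index (rows)
    x = np.arange(n).reshape(1, -1)   # sample index (columns)
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] /= np.sqrt(2.0)           # scale the DC row so the matrix is orthonormal
    return m

# Apply the DCT matrix to an 8x8 data unit using matrix multiplication:
# transformed = D @ block @ D.T yields the matrix of transform coefficients.
block = np.random.randint(0, 256, size=(8, 8)).astype(float)
D = dct_matrix(8)
transformed = D @ block @ D.T
print(transformed[0, 0])   # DC coefficient (proportional to the block mean)
```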

In the quantization step, the coefficients in the transformed matrix may be quantized, for example, by dividing each transform coefficient by a corresponding element in a quantization matrix and then rounding to the nearest integer value. The quantization matrix may be derived using a quantization parameter (also referred to as a quantization index). According to some examples, the quantization parameter may be the value of each element of the quantization matrix. As another example, some or all of the elements in the quantization matrix may be multiplied by a quantization parameter (e.g., scaled by the quantization parameter), and the scaled quantization matrix may be used to quantize the transformed matrix. According to some embodiments, the quantization parameter may be a value (e.g., an integer) within a particular range (e.g., between a lower threshold Q_L and an upper threshold Q_H, inclusive of Q_L and Q_H). In a non-limiting example, the quantization parameter may be between 0 and 50, inclusive. According to some examples, the higher the value of the quantization parameter, the larger the quantization step size and the larger the elements in the quantization matrix. This allows more transform coefficients to be quantized to 0 or close to 0, so that fewer bits are used to encode the quantized coefficients. The more coefficients that are zero or close to zero, the fewer bits are used to encode the coefficients, resulting in a smaller bit size (and hence a lower bit rate) for the data unit represented by the coefficients. Vice versa: a lower value of the quantization parameter corresponds to a smaller quantization step size, more bits are used to encode the quantized coefficients, and the bit size (and thus the bit rate) of the data unit encoded using that quantization parameter is larger.
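
A minimal sketch of the quantization step described above, assuming a flat base quantization matrix scaled by the quantization parameter (the matrix and parameter values are illustrative assumptions):

```python
import numpy as np

coeffs = np.random.randn(8, 8) * 100.0        # stand-in for DCT transform coefficients

def quantize(coeffs, quant_matrix, qp):
    # Scale the base quantization matrix by the quantization parameter,
    # divide each coefficient element-wise, and round to the nearest integer.
    return np.rint(coeffs / (quant_matrix * qp)).astype(int)

base_matrix = np.ones((8, 8))                  # flat base matrix, illustrative only
fine = quantize(coeffs, base_matrix, qp=4)     # small step size: more non-zero values
coarse = quantize(coeffs, base_matrix, qp=32)  # large step size: more zeros, fewer bits
print(np.count_nonzero(fine), np.count_nonzero(coarse))
```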

Some embodiments of the present disclosure relate to methods and systems for: the encoder input data 213 is pre-processed and the bit rate of the encoded data 215 is controlled such that parameters such as quantization parameters are kept within a high quality range.

According to some embodiments, in the entropy encoding step, the quantized coefficients in the quantized matrix are scanned in a predetermined order and encoded using any suitable encoding technique. In some examples, a zigzag scan pattern from top left to bottom right is typical because most of the nonzero DCT coefficients may be concentrated in the top left corner of the matrix. Alternatively, other scanning orders such as raster scanning may be used. The scanning order can be used to maximize the probability of obtaining a long run of consecutive 0 coefficients. The scanned coefficients may be encoded using run-length encoding, variable length encoding, or any other entropy encoding technique to generate encoded data 215.
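
The following sketch illustrates a top-left-to-bottom-right zigzag scan followed by run-length encoding of the scanned coefficients; the helper names and sample coefficient values are assumptions chosen for illustration.

```python
import numpy as np

def zigzag_indices(n):
    # Top-left to bottom-right zig-zag order over an n x n matrix: traverse
    # anti-diagonals, alternating the traversal direction on each diagonal.
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def run_length_encode(values):
    # Collapse runs of identical values into (value, run length) pairs.
    encoded, i = [], 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        encoded.append((int(values[i]), j - i))
        i = j
    return encoded

quantized = np.zeros((8, 8), dtype=int)
quantized[0, 0], quantized[0, 1], quantized[1, 0] = 35, -3, 2   # low-frequency corner
scanned = [quantized[r, c] for r, c in zigzag_indices(8)]
print(run_length_encode(scanned))   # [(35, 1), (-3, 1), (2, 1), (0, 61)]
```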

According to some examples, rate controller 207 is configured to control the bit rate of encoded data 215. For example, the rate controller 207 is configured to control the bit rate within a certain range (e.g., less than the maximum bit rate and greater than the minimum bit rate). Additionally or alternatively, the rate controller 207 may control the bit rate to be close (or substantially close) to a target bit rate (e.g., an average target bit rate). In some examples, the rate controller 207 is configured to control the bit rate to vary according to the encoder input data 213.

According to some examples, to control the bit rate of the encoded data 215, the rate controller 207 is configured to determine or adjust (e.g., update) encoding parameters associated with the encoder 203. In some embodiments, the encoding parameters may include one or more quantization parameters used to control the quantization step of the encoding process of encoder 203 and/or to control the bit rate of the resulting encoded data 215 accordingly. According to some embodiments, the quantization parameter may include a quantization step size, a value related to the quantization step size (e.g., a Quantization Parameter (QP) used in an H.264 encoder or similar), a quantization matrix or one or more parameters related to the quantization matrix, or other related parameters.

It should be noted that although in some embodiments encoding parameters are discussed as quantization parameters, embodiments of the present disclosure are not limited to these examples and other encoding parameters may be used by rate controller 207 to control the bit rate. For example, the encoding parameters may include parameters for controlling other aspects of the encoding process, such as, but not limited to: a prediction step, a transformation step, or an entropy coding step. For example, the encoding parameters may include a cutoff index that is used to remove a particular high frequency coefficient before the coefficient is entropy encoded. As another example, the encoding parameters may include bit allocation information (e.g., maximum, minimum, or target bits allocated for encoding a data unit), frame rate, size of the data unit to be transformed and quantized, motion detection thresholds for determining whether to encode or skip encode the data unit (e.g., a macroblock), lagrangian multipliers for rate distortion optimization, algorithms and parameters for prediction, transform, or entropy encoding steps, or other similar parameters.

In accordance with some embodiments, to adjust encoding parameters (e.g., quantization parameters), rate controller 207 receives or obtains transmission information 219, input information 223, output information 225, or encoder information 227. Based on the received or obtained information, the rate controller 207 is configured to adjust encoding parameters associated with the encoder 203. The rate controller 207 sends the adjusted encoding parameters 229 to the encoder 203. In some examples, the encoder 203 is configured to obtain the adjusted encoding parameters 229 from the rate controller 207. Additionally or alternatively, the rate controller 207 may store the adjusted encoding parameters 229 in the storage 231 and the encoder 203 may retrieve the stored adjusted encoding parameters from the storage 231.

According to some examples, the input information 223 may include information associated with the encoder input data 213. For example, input information 223 may include any characteristic of encoder input data 213 that may be used for rate control, such as, but not limited to: resolution, size, image complexity, texture, luminance, chrominance, motion information, or other similar characteristics. For example, higher complexity input data may be encoded using a higher bit rate than lower complexity input data.

According to some examples, the output information 225 may include information associated with the encoded data 215. For example, the output information 225 may include any characteristic of the encoded data 215 that may be used for rate control, such as, but not limited to: size, PSNR value associated with the encoded data 215, error rate, or other similar characteristics.

According to some examples, encoder information 227 may include information associated with encoded data 215. For example, encoder information 227 may include, but is not limited to: the number of bits used to encode a unit of data (e.g., a frame, slot, macroblock), the bit rate associated with the encoded data 215, parameters used to encode a unit of data, encoder resource information (e.g., CPU/memory usage, buffer usage), or other similar characteristics. It should be noted that although output information 225 and encoder information 227 are shown as different inputs to rate controller 207, they may have overlapping information.

Additionally or alternatively, rate controller 207 may also receive transmission information 219 for adjusting encoding parameters from, for example, transceiver 205. According to some embodiments, the transmission information 219 may include any characteristic of the transceiver 205 or communication channel (used to transmit the encoded data 215) that may be used for rate control, such as, but not limited to: bandwidth associated with the communication channel, feedback information received by transceiver 205 (e.g., SNR associated with the channel, channel error, distance to a receiver device associated with system 200, parameters associated with the receiver device, playback quality at the receiver device, etc.), parameters associated with transceiver 205 used to transmit encoded data 215, or other similar characteristics.

In some embodiments, rate controller 207 may use one or more thresholds to adjust encoding parameters in addition to information 219, 223, 225, or 227 to control the bit rate of encoded data 215. In some examples, the threshold may be stored, for example, in storage 231, and may be retrieved by rate controller 207. According to some embodiments, the value of the threshold may be predetermined or dynamically updated by a user, a system administrator, rate controller 207, or other component/device. The threshold values stored in the storage 231 may include, but are not limited to: a threshold or range associated with a bit rate, a threshold or range associated with an encoding parameter, a threshold or range associated with the pre-processing circuit 201, or the like.

According to some embodiments, the rate controller 207 adjusts the encoding parameters of the encoder 203 based on the received input information such that the bit rate associated with the encoded data 215 is within a predetermined range or close to (or substantially close to) the target bit rate. In some examples, the predetermined range or target bit rate is (or is determined based on) a bandwidth associated with the communication channel. After adjusting the encoding parameters of the encoder 203, the rate controller 207 is configured to determine whether the adjusted encoding parameters are within an acceptable range of the encoding parameters. If the adjusted encoding parameters are within the acceptable range, the system 200 continues the process of encoding the next input data and rate control. However, if the adjusted encoding parameter is not within the acceptable range, the rate controller 207 is configured to instruct the preprocessing circuit 201 to preprocess the next input data 211. In addition, the rate controller 207 is configured to adjust one or more parameters of the pre-processing circuit 201. Instructions for preprocessing the next input data and/or the adjusted parameters 221 of the preprocessing circuit 201 are sent to the preprocessing circuit 201.

In addition to or instead of using the quantization parameter as one or more state parameters of the system 200, the PSNR value of the previous input data (e.g., one or more images received prior to the input data 211) may be used to trigger the preprocessing and/or adjust one or more parameters of the preprocessing circuit 201. In this example, rate controller 207 may determine a PSNR value associated with the previously input data and/or encoded data 215 and compare the determined PSNR value to a PSNR threshold. In response to the determined PSNR value being equal to or less than the PSNR threshold, the rate controller 207 may instruct the preprocessing circuit 201 to preprocess the next input data and/or adjust one or more parameters associated with the preprocessing circuit 201. For example, the rate controller 207 may be configured to compare data obtained by decoding the encoded data 215 with the encoder input data 213 to determine the PSNR value. Additionally or alternatively, the rate controller 207 may receive the PSNR value associated with the encoder 203 in the encoder information 227. In some examples, the PSNR value may be an average PSNR determined over a time period in which the encoder 203 encodes data. In some examples, the PSNR value associated with the encoder 203 may be a PSNR value determined for the encoded data 215. Rate controller 207 may compare the determined PSNR value to a PSNR threshold. In some examples, the PSNR threshold is stored in storage 231. In response to the determined PSNR value being equal to or less than the PSNR threshold, the rate controller 207 is configured to instruct the preprocessing circuit 201 to preprocess the next input data 211. In addition, the rate controller 207 is configured to adjust one or more parameters of the pre-processing circuit 201. Instructions for preprocessing the next input data and/or the adjusted parameters 221 of the preprocessing circuit 201 are sent to the preprocessing circuit 201.
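
A minimal sketch of how a PSNR value might be computed and compared against a PSNR threshold, assuming 8-bit samples and an illustrative threshold value:

```python
import numpy as np

def psnr_db(original, decoded, max_value=255.0):
    # Peak signal-to-noise ratio between an original image and its decoded version.
    mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

PSNR_THRESHOLD_DB = 32.0   # hypothetical threshold
original = np.random.randint(0, 256, size=(64, 64))
decoded = np.clip(original + np.random.randint(-6, 7, size=(64, 64)), 0, 255)
if psnr_db(original, decoded) <= PSNR_THRESHOLD_DB:
    print("instruct the pre-processing circuit to pre-process the next input data")
```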

In addition to or instead of using the quantization parameter and/or the PSNR as one or more state parameters of the system 200, the storage space occupied in the one or more buffers for storing the encoded data 215 may be used to trigger the preprocessing and/or to adjust one or more parameters of the preprocessing circuit 201. In this example, rate controller 207 may determine the storage space occupied in one or more buffers associated with encoder 203 and/or transceiver 205 and compare the determined storage space in the one or more buffers to a threshold. In response to the determined storage space in the one or more buffers being equal to or greater than the threshold, the rate controller 207 may instruct the preprocessing circuit 201 to preprocess the next input data and/or adjust one or more parameters associated with the preprocessing circuit 201. For example, the rate controller 207 may be configured to compare the encoded data 215 with the encoder input data 213 to determine the storage space in one or more buffers associated with the encoder 203. Additionally or alternatively, rate controller 207 may receive the storage space in one or more buffers associated with encoder 203 in encoder information 227. In some examples, the storage space in the one or more buffers associated with the encoder 203 may be an average storage space determined over a period of time and/or over multiple buffers. In some examples, rate controller 207 may receive the storage space in one or more buffers associated with transceiver 205 in transmission information 219. Rate controller 207 may compare the determined storage space in the one or more buffers to a threshold. In some examples, the threshold is stored in storage 231. In response to the determined storage space in the one or more buffers being equal to or greater than the threshold, the rate controller 207 is configured to instruct the preprocessing circuit 201 to preprocess the next input data 211. In addition, the rate controller 207 is configured to adjust one or more parameters of the pre-processing circuit 201. Instructions for preprocessing the next input data and/or the adjusted parameters 221 of the preprocessing circuit 201 are sent to the preprocessing circuit 201.

Fig. 2B is a block diagram depicting an example of the pre-processing circuit 201, in accordance with some embodiments. As shown in fig. 2B, according to some examples, the pre-processing circuitry 201 may include spatial frequency control 241, color control 243, and brightness control 245. Although the spatial frequency control 241, color control 243, and brightness control 245 are shown as separate circuits, they may be combined into one or more circuits. In addition, the preprocessing circuit 201 may include other circuits.

As discussed with respect to fig. 2A, the pre-processing circuit 201 may receive input data 211 and generate encoder input data 213. According to some examples, the pre-processing circuit 201 may receive instructions from rate controller 207 for pre-processing input data to generate encoder input data 213. In some embodiments, if the pre-processing circuit 201 does not receive any instructions for pre-processing, the pre-processing circuit 201 may pass input data 211 through as encoder input data 213 without pre-processing input data 211.

In addition, the preprocessing circuit 201 receives adjusted parameters 221 from the rate controller 207. According to some embodiments, the adjusted parameters 221 are parameters associated with one or more of spatial frequency control 241, color control 243, and brightness control 245.

The input data 211 is input to one or more of spatial frequency control 241, color control 243, and brightness control 245, such that the input data 211 is pre-processed before being encoded by the encoder 203. As discussed above, for example, preprocessing the input data 211 may include adjusting one or more imaging parameters of the input data 211. According to some examples, the one or more imaging parameters may include, but are not limited to: a spatial frequency of the image, a dimension of a color space of the image, and/or a dimension of a luminance space of the image. According to some embodiments, adjusting one or more imaging parameters of the input data 211 may include reducing one or more imaging parameters of the input data 211, which may result in the encoder input data 213 having lower quality than the input data 211.

According to some examples, the spatial frequency control 241 may include a filter configured to control the spatial frequency of the input data 211. For example, spatial frequency control 241 may include a bilateral filter configured to control the spatial frequency of input data 211 by, for example, controlling one or more Gaussian kernel sigma parameters, i.e., a spatial scale parameter and a value scale parameter. In this example, the bilateral filter is configured to reduce noise associated with the input data 211. The bilateral filter may be a non-linear filter configured to smooth the input data 211.

By using a bilateral filter, the input data 211 may be smoothed while preserving edges associated with the input data 211. The bilateral filter replaces the intensity of each pixel within a frame of the input data 211 with a weighted average of the intensity values from neighboring pixels of the frame. In some examples, the weights may be based on a Gaussian distribution. In some examples, the weights may depend on a distance (e.g., a Euclidean distance) between pixels of the frames in the input data 211. Additionally or alternatively, the weights of the bilateral filter may depend on radiometric differences (e.g., range differences such as color intensity, depth distance, etc.) between pixels of the frame of input data 211.

As an example of the bilateral filter, pixel (i, j) of a frame of the input data 211 may be filtered using the spatial distance between the pixel and its neighboring pixels and the intensity difference between the pixel and its neighboring pixels. For example, considering a pixel (i, j) of a frame of the input data 211 and one of its neighboring pixels (k, l), the weight assigned to the pixel (k, l) for filtering (e.g., denoising) the pixel (i, j) is given as follows:

w(i, j, k, l) = exp( -((i - k)^2 + (j - l)^2) / (2σ_d^2) - (f(i, j) - f(k, l))^2 / (2σ_r^2) )

Here, f is the intensity of the pixels in the original frame of the input data 211. Likewise, σ_d (the spatial scale parameter) and σ_r (the value scale parameter) are the smoothing parameters of the bilateral filter.

After applying the bilateral filter to pixel (i, j) using its neighboring pixels, the intensity of the filtered (e.g., denoised) pixel (i, j) is determined as follows:

g(i, j) = Σ_(k,l) f(k, l) · w(i, j, k, l) / Σ_(k,l) w(i, j, k, l)

Here, g(i, j) is the intensity of the pixel (i, j) of the frame of the input data 211 after filtering (e.g., after denoising) using the neighboring pixels (k, l) of the frame of the input data 211, and the sums are taken over those neighboring pixels (k, l).

The bilateral filter of the pre-processing circuit 201, as one example of the spatial frequency control 241, is configured to pre-process the input data 211. As discussed in more detail below, the rate controller 207 is configured to control the parameters of the spatial frequency control 241 (e.g., σ_d (the spatial scale parameter) and σ_r (the value scale parameter) of the bilateral filter) based on one or more state parameters (e.g., quantization parameters, PSNR, storage space in a buffer, etc.) associated with the system 200. Although a bilateral filter is discussed as one example of the spatial frequency control 241, embodiments of the present disclosure are not limited to this example, and other filters may be used as the spatial frequency control 241.
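
For illustration, a naive (unoptimized) sketch of such a bilateral filter is shown below; the window radius and the sigma values are illustrative assumptions.

```python
import numpy as np

def bilateral_filter(frame, sigma_d, sigma_r, radius=2):
    # Weights combine the spatial distance between pixels (controlled by sigma_d)
    # and the intensity difference between pixels (controlled by sigma_r).
    f = frame.astype(float)
    out = np.zeros_like(f)
    h, w = f.shape
    for i in range(h):
        for j in range(w):
            k0, k1 = max(i - radius, 0), min(i + radius + 1, h)
            l0, l1 = max(j - radius, 0), min(j + radius + 1, w)
            patch = f[k0:k1, l0:l1]
            kk, ll = np.meshgrid(np.arange(k0, k1), np.arange(l0, l1), indexing="ij")
            weights = np.exp(-((i - kk) ** 2 + (j - ll) ** 2) / (2 * sigma_d ** 2)
                             - (f[i, j] - patch) ** 2 / (2 * sigma_r ** 2))
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out

frame = np.random.randint(0, 256, size=(32, 32))
smoothed = bilateral_filter(frame, sigma_d=3.0, sigma_r=30.0)   # larger sigmas smooth more
```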

According to some embodiments, in addition to spatial frequency control 241, pre-processing circuitry 201 may include color control 243 and brightness control 245. In some examples, the color control 243 may be configured to control a dimension of a color space associated with the input data 211, and the brightness control may be configured to control a dimension of a brightness space associated with the input data 211.

According to some examples, color control 243 may be configured to adjust a dimension of a color space associated with input data 211 by a preset value. This preset value may be stored in a storage device in the preprocessing circuit 201 (and/or accessible by the preprocessing circuit 201) and/or in the storage device 231. As a non-limiting example, the input data 211 may include a color space containing 256 levels. Color control 243 may be configured to reduce the number of levels of the color space based on a control signal (e.g., adjusted parameter 221) from the rate controller 207. In one example, the color control 243 may reduce the dimension of the color space by a factor of 2 in each iteration (e.g., 256 to 128 to 64 to 32 to 16 to 8 to 4 to 2 to 1). In some examples, by reducing the dimension of the color space using the pre-processing circuitry 201 and the rate controller 207, the number of bits used to encode the encoder input data 213 may be reduced, and thus the bit rate associated with the encoded data 215 may be controlled.

According to some embodiments, the brightness control 245 may be configured to adjust a dimension of a luminance space associated with the input data 211 by a preset value. This preset value may be stored in a storage device in the preprocessing circuit 201 (and/or accessible by the preprocessing circuit 201) and/or in the storage device 231. As a non-limiting example, the input data 211 may include a luminance space having 256 levels. Brightness control 245 may be configured to reduce the dimension of the luminance space based on a control signal (e.g., adjusted parameter 221) from the rate controller 207. In one example, brightness control 245 may reduce the dimension of the luminance space by a factor of 2 in each iteration (e.g., 256 to 128 to 64 to 32 to 16 to 8 to 4 to 2 to 1). In some examples, by reducing the dimension of the luminance space using the pre-processing circuitry 201 and the rate controller 207, the number of bits used to encode the encoder input data 213 may be reduced, and thus the bit rate associated with the encoded data 215 may be controlled.

The dimensions of the color space and the luminance space are provided as examples, and embodiments of the present disclosure are not limited to these examples. Other numbers of dimensions and other schemes for controlling dimensions may be used.
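
As a sketch of one simple scheme consistent with the description above (the requantization rule shown is an assumption, not the only possibility), the number of levels of a color or luminance channel can be reduced by a factor of 2 per iteration as follows; the same helper could serve both the color control 243 and the brightness control 245.

```python
import numpy as np

def reduce_levels(channel, levels):
    # Requantize an 8-bit channel (256 levels) to a smaller number of levels.
    step = 256 // levels
    return (channel // step) * step

channel = np.random.randint(0, 256, size=(16, 16))
for levels in (256, 128, 64, 32, 16):   # halve the dimension of the space each iteration
    reduced = reduce_levels(channel, levels)
    print(levels, len(np.unique(reduced)))   # number of distinct values never exceeds `levels`
```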

According to some examples, one or more of spatial frequency control 241, color control 243, and brightness control 245 may be applied to input data 211 to generate encoder input data 213. One or more of spatial frequency control 241, color control 243, and brightness control 245 are used to adjust one or more imaging parameters of input data 211. According to some examples, the application of spatial frequency control 241, color control 243, and brightness control 245 to input data 211 may be done hierarchically and based on preset priorities. In one example, spatial frequency control 241 may have the highest priority. In this example, the spatial frequency control 241 is first applied to the input data 211, and then, if necessary, the color control 243 and the brightness control 245 are applied. However, this is one example, and other orders for applying spatial frequency control 241, color control 243, and brightness control 245 (and/or other control mechanisms) to adjust one or more imaging parameters of the input data 211 may be applied.

Similarly, control (e.g., adjustment) of the parameters of spatial frequency control 241, color control 243, and brightness control 245 may be accomplished in stages. For example, the rate controller 207 is configured to first control the parameters of the spatial frequency control 241 based on one or more state parameters of system 200. After the parameters of the spatial frequency control 241 reach one or more thresholds, the rate controller 207 may then control the parameters of the color control 243 based on one or more state parameters of the system 200. After the parameters of color control 243 reach one or more thresholds, the rate controller 207 may then control the parameters of brightness control 245 based on one or more state parameters of system 200. The order of control of spatial frequency control 241, color control 243, and brightness control 245 is provided as an example, and other orders may be used to control these circuits.
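
Purely for illustration, the staged control order described above could be expressed as follows; the parameter names, limits, and step sizes are hypothetical assumptions.

```python
def adjust_preprocessing(params):
    # Tighten one pre-processing parameter per call, in priority order:
    # spatial frequency first, then color-space dimension, then luminance-space dimension.
    if params["sigma_d"] < 5.0:            # spatial frequency control not yet at its limit
        params["sigma_d"] += 0.5
    elif params["color_levels"] > 16:      # then reduce the color-space dimension
        params["color_levels"] //= 2
    elif params["luma_levels"] > 16:       # finally reduce the luminance-space dimension
        params["luma_levels"] //= 2
    return params

params = {"sigma_d": 1.0, "color_levels": 256, "luma_levels": 256}
params = adjust_preprocessing(params)   # first call raises the spatial-scale parameter
print(params)
```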

Fig. 3A is a flow diagram depicting an example method 300 for preprocessing, in accordance with some embodiments. For convenience, fig. 3A will be described with reference to fig. 1, 2A, and 2B, but, as will be understood by those skilled in the art, the method 300 is not limited to the specific embodiments depicted in these figures, and other systems may be used to perform the method. It should be understood that not all of the steps may be required, and that the steps may not be performed in the same order as shown in fig. 3A.

According to some examples, the method 300 begins at 301 when the preprocessing circuit 201 receives input data 211. The input data 211 may include one or more images (e.g., one or more frames of video data) captured by the imaging device 202 of the system 200. At 303, it is determined whether one or more state parameters associated with the system 200 are within a preset range. According to some examples, this determination may be made by the preprocessing circuitry 201. Additionally or alternatively, this determination may be made by the rate controller 207.

According to some embodiments, the one or more state parameters associated with system 200 may include quantization parameters of encoder 203. The quantization parameter may comprise a quantization parameter used to encode previous input data (e.g., one or more images received prior to input data 211). Additionally or alternatively, the quantization parameter may comprise an adjusted quantization parameter obtained by adjusting a quantization parameter used to encode one or more images received prior to the input data 211, wherein the adjustment may be made based at least on a bandwidth of a communication channel (e.g., the communication channel 105 of fig. 1). For example, the pre-processing circuit 201 (alone or in conjunction with the rate controller 207) may determine whether a quantization parameter of the encoder 203 used to encode previous input data is equal to or greater than a first quantization parameter threshold. Additionally or alternatively, the one or more state parameters associated with system 200 may include PSNR values of previous input data (e.g., one or more images received prior to input data 211). For example, the preprocessing circuit 201 can determine whether the PSNR value of the previous input data is equal to or less than the PSNR threshold. In addition to or instead of the quantization parameter and/or PSNR value, one or more state parameters associated with system 200 may include storage space occupied in one or more buffers used to store encoded data associated with previous input data (e.g., one or more images received prior to input data 211). For example, the pre-processing circuitry 201 may determine whether the storage space occupied in one or more buffers used to store encoded data associated with previously input data is equal to or greater than a threshold.

If the pre-processing circuitry 201 (alone or in conjunction with the rate controller 207) determines that one or more state parameters associated with the system 200 are within a preset range, then at 311, the encoder 203 encodes at least one of the one or more images in the input data 211 to generate encoded image data (encoded data 215). According to some embodiments, the preprocessing circuit 201 does not preprocess at least one of the one or more images in the input data 211 because the one or more state parameters associated with the system 200 are within a preset range. At 309, the transceiver 205 transmits the encoded image data.

However, if the pre-processing circuitry 201 (alone or in conjunction with the rate controller 207) determines that one or more state parameters associated with the system 200 are not within a preset range, then at 305 the pre-processing circuitry 201 pre-processes at least one of the one or more images in the input data 211. According to some examples, the preprocessing at 305 may include adjusting one or more imaging parameters of at least one of the one or more images to obtain an adjusted image (encoder input data 213 of fig. 2A).

According to some embodiments, adjusting one or more imaging parameters at 305 may include: one or more imaging parameters of at least one of the one or more images in input data 211 are reduced to generate an adjusted image (encoder input data 213 of fig. 2A). Reducing the one or more imaging parameters may result in a reduction in quality of at least one of the one or more images in the input data 211. According to some examples, the one or more imaging parameters may include, but are not limited to, a spatial frequency, a dimension of a color space, and/or a dimension of a brightness space of at least one of the one or more images.

Adjusting one or more imaging parameters at 305 may include: a filter (e.g., a bilateral filter) is used to adjust the spatial frequency of at least one of the one or more images. Further, the adjustment at 305 may include adjusting one or more configuration parameters (e.g., spatial scale parameters, value scale parameters, etc.) of the filter. In some examples, adjusting one or more configuration parameters may be based on a preset order.

Additionally or alternatively, adjusting one or more imaging parameters at 305 may include: the dimension of the color space of at least one of the one or more images is adjusted using, for example, a preset value. Likewise, adjusting one or more imaging parameters at 305 may include: the dimension of the luminance space of at least one of the one or more images is adjusted using, for example, a preset value.

It is noted that adjusting one or more imaging parameters at 305 may include: adjusting one imaging parameter, adjusting two imaging parameters, adjusting three imaging parameters, or adjusting any number of imaging parameters. Further, adjusting one or more imaging parameters at 305 may include: the imaging parameters are adjusted based on the preset priority. As a non-limiting example, adjusting the spatial frequency may have the highest priority, followed by adjusting the dimensions of the color space, and followed by adjusting the dimensions of the luminance space. However, any other order and pre-set priority may be used.

After preprocessing at least one of the one or more images of input data 211 to generate an adjusted image (encoder input data 213) at 305, the encoder 203 encodes, at 307, the adjusted image (encoder input data 213) to generate encoded image data (encoded data 215). At 309, the transceiver 205 transmits the encoded image data. The encoded image data may come from 307 or 311.

Figs. 3B-3D are flow diagrams depicting example methods for implementing step 303 of method 300 of fig. 3A, according to some embodiments. For convenience, figs. 3B-3D will be described with reference to figs. 1, 2A, and 2B, but, as will be understood by those skilled in the art, the method 303-1 of fig. 3B, the method 303-2 of fig. 3C, and the method 303-3 of fig. 3D are not limited to the specific embodiments depicted in these figures, and other systems may be used to perform the methods. It should be understood that not all steps may be required, and that the steps may not be performed in the same order as shown in figs. 3B-3D.

According to some embodiments, the one or more state parameters associated with system 200 used in the determination of 303 of fig. 3A may include a quantization parameter of encoder 203. The quantization parameter may include a quantization parameter used to encode previous input data (e.g., one or more images received prior to the one or more images received at 301 of fig. 3A). Additionally or alternatively, the quantization parameter may comprise an adjusted quantization parameter obtained by adjusting a quantization parameter used to encode one or more images received prior to the one or more images received at 301 of fig. 3A, wherein the adjustment may be made based at least on a bandwidth of a communication channel (e.g., communication channel 105 of fig. 1). For example, as shown in method 303-1 of fig. 3B, the pre-processing circuitry 201 (alone or in conjunction with the rate controller 207) may determine to pre-process the input data 211 if the quantization parameter of the encoder 203 used to encode the previous input data is equal to or greater than a first quantization parameter threshold.

At 321, the rate controller 207 is configured to receive and/or determine a bit rate associated with one or more images that were encoded prior to the one or more images received at 301 of fig. 3A. As discussed above with respect to fig. 2A, rate controller 207 is configured to receive one or more of transport information 219, input information 223, output information 225, and encoder information 229. In some examples, the received information may include a bit rate associated with one or more previously encoded images. Additionally or alternatively, the rate controller 207 may use the received information to determine a bit rate associated with one or more images that were previously encoded.

In an exemplary embodiment, the rate controller 207 is configured to use the transmission information 219 to determine the bit rate associated with previously transmitted encoded data. For example, the transmission information 219 may include a feedback signal received from the receiver device 103 of fig. 1. According to some examples, the feedback signal is a response to previously encoded data received at the receiver device 103. The rate controller 207 may receive the feedback signal and feedback information from the receiver device 103 via the transceiver 205. Using the received feedback signal, the rate controller 207 may determine the quality of the previously transmitted encoded data 215 received by the receiver device 103, or may determine the approximate distance between the system 200 and the receiver device 103. In this example, rate controller 207 may determine a bit rate associated with previously transmitted encoded data based on the determined quality and/or the determined approximate distance.

At 323, rate controller 207 is configured to compare the bit rate to one or more thresholds. For example, the rate controller 207 is configured to control the encoder 203 such that the bit rate is within a certain range (e.g., less than a maximum value and greater than a minimum value). In another example, the rate controller 207 is configured to control the encoder 203 such that the bit rate approaches a target value (e.g., an average target bit rate). In this example, the rate controller 207 is configured to control the encoder 203 such that the bit rate is less than a channel bandwidth associated with a communication channel used to transmit the encoded data 215. Rate controller 207 may receive the channel bandwidth or determine the channel bandwidth based on, for example, output information 225 or transmission information 219.

At 325, rate controller 207 determines whether a bit rate associated with one or more previously encoded data is within a predetermined range or less than a target bit rate. For example, the rate controller 207 determines whether the bit rate is greater than the channel bandwidth. If the bit rate is not greater than (i.e., less than or equal to) the channel bandwidth, method 303-1 of FIG. 3B returns to 311 of method 300 of FIG. 3A. However, if the bit rate is greater than the channel bandwidth, the method continues to 327.

At 327, the rate controller 207 adjusts one or more encoding parameters (parameters associated with the encoder 203) using a bit rate control algorithm based on the determined bit rate and a predetermined bit rate range or target bit rate. In some examples, the one or more encoding parameters include one or more quantization parameters of the encoder 203. The bit rate control algorithm may include any algorithm for adjusting one or more encoding parameters based on the determined bit rate and one or more thresholds associated with the bit rate. For example, the bit rate control algorithm may include a model in which the bit rate is a function of one or more encoding parameters (e.g., quantization parameters); one or more encoding parameters (e.g., quantization parameters) may then be adjusted based on a comparison between the bit rate and one or more thresholds associated with the bit rate. However, embodiments of the present disclosure are not limited to this example, and other models and algorithms for bit rate control may be used.

At 329, the rate controller 207 compares the adjusted quantization parameter to the first quantization parameter threshold to determine whether the adjusted quantization parameter is equal to or greater than the first quantization parameter threshold. If the quantization parameter of encoder 203 used to encode the previous input data is equal to or greater than the first quantization parameter threshold, method 303-1 of FIG. 3B continues at 305 of method 300 of FIG. 3A to pre-process at least one of the one or more images received at 301 of FIG. 3A. However, if the quantization parameter of the encoder 203 used to encode the previous input data is less than the first quantization parameter threshold, the method 303-1 of fig. 3B continues at 311 of method 300 of FIG. 3A.
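
The decision logic of method 303-1 can be summarized by the hedged Python sketch below. The proportional quantization parameter update merely stands in for the unspecified bit rate control algorithm, and the argument names are illustrative assumptions.

```python
def decide_preprocessing_by_qp(bit_rate, channel_bandwidth, qp,
                               qp_min, qp_max, first_qp_threshold):
    """Sketch of method 303-1: returns (preprocess, adjusted_qp)."""
    if bit_rate <= channel_bandwidth:
        # Bit rate acceptable: no adjustment, no preprocessing (step 311 path).
        return False, qp

    # Bit rate too high: raise the quantization parameter in proportion to
    # the overshoot (an illustrative stand-in for the rate control model).
    overshoot = (bit_rate - channel_bandwidth) / channel_bandwidth
    adjusted_qp = min(qp_max, max(qp_min, round(qp * (1.0 + overshoot))))

    # Step 329: preprocess when the adjusted quantization parameter is at or
    # above the first quantization parameter threshold.
    return adjusted_qp >= first_qp_threshold, adjusted_qp
```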

According to some embodiments, the one or more state parameters associated with system 200 used in the determination at 303 of fig. 3A may include PSNR values of previous input data (e.g., one or more images received prior to the one or more images received at 301 of fig. 3A). For example, as shown in method 303-2 of FIG. 3C, if the PSNR value of the previous input data is equal to or less than the PSNR threshold, then the pre-processing circuit 201 (alone or in conjunction with the rate controller 207) may determine to pre-process the input data 211.

At 341, the rate controller 207 (alone or in conjunction with the pre-processing circuit 201) is configured to receive and/or determine PSNR values associated with one or more images encoded prior to the one or more images received at 301 of fig. 3A. In some examples, the rate controller 207 may determine the PSNR value associated with the encoder 203 and/or associated with one or more images that were encoded prior to the one or more images received at 301. For example, the rate controller 207 may be configured to compare data obtained by decoding the previously encoded one or more images with the corresponding previously received one or more images to determine the PSNR value. Additionally or alternatively, the rate controller 207 may receive the PSNR value in the encoder information 227. In some examples, the PSNR value may be an average PSNR determined over a time period in which the encoder 203 encodes data.

At 343, rate controller 207 (alone or in conjunction with pre-processing circuit 201) compares the PSNR value to the PSNR threshold to determine whether the PSNR value is equal to or less than the PSNR threshold. If the PSNR value is equal to or less than the PSNR threshold, the method 303-2 of FIG. 3C continues at 305 of method 300 of FIG. 3A to pre-process at least one of the one or more images received at 301 of FIG. 3A. However, if the PSNR value is greater than the PSNR threshold, method 303-2 of FIG. 3C continues at 311 of method 300 of FIG. 3A.
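
A minimal sketch of the PSNR computation and the decision of method 303-2 follows, assuming NumPy arrays for the original and decoded images; averaging over several previous images is one possible reading of the text above.

```python
import numpy as np

def psnr(original, decoded, max_value=255.0):
    """Peak signal-to-noise ratio between an original image and its decoded copy."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10((max_value ** 2) / mse)

def decide_preprocessing_by_psnr(prev_originals, prev_decoded, psnr_threshold):
    """Sketch of method 303-2: preprocess when the average PSNR of previously
    encoded images is equal to or less than the PSNR threshold."""
    values = [psnr(o, d) for o, d in zip(prev_originals, prev_decoded)]
    return sum(values) / len(values) <= psnr_threshold
```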

According to some embodiments, the one or more state parameters associated with system 200 used in the determination at 303 of fig. 3A may include the storage space occupied in one or more buffers used to store encoded data associated with previous input data (e.g., one or more images received prior to the one or more images received at 301 of fig. 3A). For example, as shown in method 303-3 of fig. 3D, the preprocessing circuit 201 (alone or in conjunction with the rate controller 207) may determine to preprocess the input data 211 if the storage space occupied in the one or more buffers used to store encoded data associated with the previous input data is equal to or greater than a threshold.

At 351, the rate controller 207 (alone or in conjunction with the pre-processing circuitry 201) is configured to receive and/or determine the storage space occupied in one or more buffers used to store encoded data associated with previous input data encoded prior to the one or more images received at 301 of fig. 3A. For example, the rate controller 207 may be configured to compare the previously encoded data with the corresponding previously received input data to determine the storage space occupied in one or more buffers associated with encoder 203. Additionally or alternatively, the rate controller 207 may receive the storage space occupied in one or more buffers associated with encoder 203 in the encoder information 227. In some examples, the storage space in the one or more buffers associated with the encoder 203 may be an average storage space determined over a period of time and/or over multiple buffers. In some examples, rate controller 207 may receive the storage space occupied in one or more buffers associated with transceiver 205 in the transmission information 219.

At 353, rate controller 207 (alone or in conjunction with preprocessing circuit 201) compares the storage space occupied in the one or more buffers to a threshold to determine whether the storage space occupied in the one or more buffers is equal to or greater than the threshold. If the occupied storage space in the one or more buffers is equal to or greater than the threshold, the method 303-3 of FIG. 3D continues at 305 of method 300 of FIG. 3A to pre-process at least one of the one or more images received at 301 of FIG. 3A. However, if the storage space occupied in the one or more buffers is less than the threshold, method 303-3 of FIG. 3D continues at 311 of method 300 of FIG. 3A.
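
The check of method 303-3 reduces to an occupancy comparison; the sketch below averages over multiple buffers, which is one reading of the text, and the byte-level bookkeeping is an assumption.

```python
def decide_preprocessing_by_buffer(occupied_bytes_per_buffer, occupancy_threshold):
    """Sketch of method 303-3: preprocess when the (average) occupied storage
    space in the buffers is equal to or greater than the threshold."""
    average_occupancy = sum(occupied_bytes_per_buffer) / len(occupied_bytes_per_buffer)
    return average_occupancy >= occupancy_threshold
```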

Fig. 4A-4D are flow diagrams depicting example methods according to some embodiments. For convenience, fig. 4A-4D will be described with reference to fig. 1, 2A, and 2B, but, as will be understood by those skilled in the art, the method 400 of fig. 4A-4D is not limited to the specific embodiments depicted in these figures, and other systems may be used to perform the method. It should be understood that not all steps may be required, and that the steps may not be performed in the same order as shown in fig. 4A-4D.

It is noted that while fig. 4A-4D are discussed with respect to bit rate control and quantization parameters as one of the state parameters of system 200, the method of fig. 4A-4D may be performed using other state parameters of system 200 as previously discussed. Also, while fig. 4A-4D are discussed as first adjusting the spatial frequency of an image, then adjusting the dimensions of the color space of the image, and then adjusting the dimensions of the brightness space of the image, the method of fig. 4A-4D may use different parameters and/or different orders of parameters to adjust one or more imaging parameters of the image.

According to some examples, method 400 begins at 401 when encoder 203 encodes one or more images or one or more adjusted images in encoder input data 213 to generate encoded image data (encoded data 215). In some examples, the one or more pictures or the one or more adjusted pictures in the encoder input data 213 include one or more frames of video data. One or more images or one or more adjusted images are received from, for example, imaging device 202 or pre-processing circuitry 201 of system 200 of fig. 2.

At 403, the encoded image data (encoded data 215) is transmitted using, for example, transceiver 205. According to some examples, the encoded data 215 is transmitted over a communication channel, for example, to the receiver device 103 of fig. 1.

At 405, the rate controller 207 (alone or in conjunction with the pre-processing circuit 201; collectively referred to herein as the rate controller 207) is configured to receive or determine a bit rate associated with the encoded image data (encoded data 215).

At 407, the rate controller 207 is configured to compare the bit rate to one or more thresholds. For example, the rate controller 207 is configured to control the encoder 203 such that the bit rate is within a certain range (e.g., less than a maximum value and greater than a minimum value). In another example, the rate controller 207 is configured to control the encoder 203 such that the bit rate approaches a target value (e.g., an average target bit rate). In this example, the rate controller 207 is configured to control the encoder 203 such that the bit rate is less than a channel bandwidth associated with a communication channel used to transmit the encoded data 215. Rate controller 207 may receive the channel bandwidth or determine the channel bandwidth based on, for example, output information 225 or transmission information 219. At 407, rate controller 207 also determines whether the bit rate associated with the encoded image data (encoded data 215) is within a predetermined range or less than the target bit rate. For example, the rate controller 207 determines whether the bit rate is greater than the channel bandwidth. If the bit rate associated with the encoded data 215 is not greater than (i.e., less than or equal to) the channel bandwidth, the method 400 continues to 409.

At 409, the preprocessing circuit 201 receives the next image (e.g., the next frame) within the input data 211 and the method 400 continues at 401. In some examples, the pre-processing circuit 201 does not pre-process the next image at 409.

However, if the bit rate associated with the encoded image data (encoded data 215) is greater than the channel bandwidth, the method continues to 411. At 411, rate controller 207 adjusts one or more encoding parameters (parameters associated with encoder 203) using a bit rate control algorithm based on the determined bit rate and a predetermined bit rate range or target bit rate. In some examples, the one or more encoding parameters include one or more quantization parameters of the encoder 203.

At 413, the rate controller 207 compares the adjusted one or more encoding parameters to one or more thresholds. In some examples, the adjusted encoding parameters include an adjusted quantization parameter. For example, the adjusted quantization parameter may be a value within a particular range (e.g., between a lower threshold Q_L and an upper threshold Q_H, inclusive of Q_L and Q_H). At 413, the rate controller 207 compares the adjusted quantization parameter to, for example, the lower threshold Q_L and the upper threshold Q_H. At 413, the rate controller 207 also determines whether the adjusted encoding parameters satisfy the one or more thresholds. For example, the rate controller 207 determines whether the adjusted quantization parameter is within a predetermined range (e.g., between the lower threshold Q_L and the upper threshold Q_H).

If the rate controller 207 determines that the adjusted encoding parameters satisfy the one or more thresholds, the method continues at 409, where the preprocessing circuit 201 receives the next image (e.g., the next frame) within the input data 211, and the method 400 continues at 401. In some examples, the pre-processing circuit 201 does not pre-process the next image at 409.

However, if, at 413, rate controller 207 determines that the one or more encoding parameters do not satisfy the one or more predetermined thresholds, rate controller 207 may adjust the spatial frequency of the image and/or adjust one or more configuration parameters of a spatial frequency control 241 (e.g., a filter) used to adjust the spatial frequency of the image. For example, at 415, the rate controller 207 is configured to adjust a first parameter of the filter 241 of the pre-processing circuit 201. In one example, the first parameter of the filter 241 is σ_d (the spatial scale parameter). For example, the rate controller 207 is configured to adjust the spatial scale parameter σ_d by a preset value (e.g., by adding a spatial scale step size to the spatial scale parameter σ_d or by subtracting the spatial scale step size from the spatial scale parameter σ_d). According to some examples, if the adjusted quantization parameter determined at 411 is greater than the upper threshold Q_H, the rate controller 207 is configured to adjust the spatial scale parameter σ_d by adding the spatial scale step size to the spatial scale parameter σ_d. In some examples, if the adjusted quantization parameter determined at 411 is less than the lower threshold Q_L, the rate controller 207 is configured to adjust the spatial scale parameter σ_d by subtracting the spatial scale step size from the spatial scale parameter σ_d. In these examples, the spatial scale step size may be stored in storage 231 accessible to rate controller 207. It should be noted that other methods can be used to adjust the spatial scale parameter σ_d. As an example, values of the spatial scale parameter σ_d may be stored in, for example, the storage device 231, and when the spatial scale parameter σ_d is to be adjusted, the rate controller 207 may select a value from the storage device 231.

After adjusting the spatial scale parameter σ_d, at 417 and 419 the rate controller 207 may compare the adjusted spatial scale parameter σ_d with one or more thresholds. In one example, the rate controller 207 may compare the adjusted spatial scale parameter σ_d with a lower threshold and/or an upper threshold for the spatial scale parameter. At 419, if the adjusted spatial scale parameter σ_d is still within the range defined by the lower threshold and the upper threshold, the rate controller 207 may send the adjusted spatial scale parameter σ_d to the spatial frequency control 241 (e.g., the filter) of the preprocessing circuit 201. The method 400 may then continue at 421 and 423. In this example, at 421, the preprocessing circuit 201 receives a next image (e.g., a next video frame) within the input data 211 from the imaging device 202. At 423, the pre-processing circuit 201 adjusts the configuration parameters of the spatial frequency control 241 (e.g., the filter) and uses the updated configuration parameters (e.g., the adjusted spatial scale parameter σ_d) to adjust the spatial frequency of the received next image. The method 400 then returns to 401.

At 419, if the adjusted spatial scale parameter σ_d is less than or equal to the upper threshold and is also less than the lower threshold, the rate controller 207 may adjust the spatial scale parameter σ_d to be equal to the lower threshold, and may send the adjusted spatial scale parameter σ_d to the filter 241 of the preprocessing circuit 201. The method 400 may then continue at 421 and 423.

At 417, if the rate controller 207 determines that the adjusted spatial scale parameter σ_d is outside the range defined by the lower threshold and the upper threshold (e.g., the adjusted spatial scale parameter σ_d is greater than the upper threshold), then, at 427, the rate controller 207 adjusts the spatial scale parameter σ_d to be equal to the upper threshold and adjusts a second parameter of the filter 241 of the preprocessing circuit 201. In one example, the second parameter of the filter 241 is σ_r (the value scale parameter). For example, the rate controller 207 is configured to adjust the value scale parameter σ_r by a preset value (e.g., by adding a value scale step size to the value scale parameter σ_r or by subtracting the value scale step size from the value scale parameter σ_r). According to some examples, if the adjusted quantization parameter determined at 411 is greater than the upper threshold Q_H, the rate controller 207 is configured to adjust the value scale parameter σ_r by adding the value scale step size to the value scale parameter σ_r. In some examples, if the adjusted quantization parameter determined at 411 is less than the lower threshold Q_L, the rate controller 207 is configured to adjust the value scale parameter σ_r by subtracting the value scale step size from the value scale parameter σ_r. In this example, the value scale step size may be stored in the storage 231 accessible to the rate controller 207. It should be noted that other methods may be used to adjust the value scale parameter σ_r. As one example, values of the value scale parameter σ_r may be stored in, for example, the storage device 231, and when the value scale parameter σ_r is to be adjusted, the rate controller 207 may select a value from the storage device 231.

After adjusting the value scale parameter σ_r, at 429 and at 431 the rate controller 207 may compare the adjusted value scale parameter σ_r with one or more thresholds. In one example, the rate controller 207 may compare the adjusted value scale parameter σ_r with a lower threshold and/or an upper threshold for the value scale parameter. At 429 and at 431, if the adjusted value scale parameter σ_r is still within the range defined by the lower threshold and the upper threshold, the rate controller 207 may send the adjusted value scale parameter σ_r to the spatial frequency control 241 (e.g., the filter) of the preprocessing circuit 201. The method 400 may continue at 421 and 423. In this example, at 421, the preprocessing circuit 201 receives a next image (e.g., a next video frame) within the input data 211 from the imaging device 202. At 423, the pre-processing circuit 201 adjusts the configuration parameters of the spatial frequency control 241 (e.g., the filter) and uses the updated configuration parameters (e.g., the adjusted spatial scale parameter σ_d and/or the adjusted value scale parameter σ_r) to adjust the spatial frequency of the received next image. The method 400 then returns to 401.

At 431, if the adjusted value scale parameter σ_r is less than or equal to the upper threshold and is also less than the lower threshold, the rate controller 207 may adjust the value scale parameter σ_r to be equal to the lower threshold, and may send the adjusted value scale parameter σ_r to the filter 241 of the preprocessing circuit 201. The method 400 may then continue at 421 and 423.

At 429, if the rate controller 207 determines that the adjusted value scale parameter σ_r is outside the range defined by the lower threshold and the upper threshold (e.g., the adjusted value scale parameter σ_r is greater than the upper threshold), then, at 435, the rate controller 207 adjusts the value scale parameter σ_r to be equal to the upper threshold.
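
Steps 415 through 435 can be read as an adjust-then-clamp cascade over the two filter parameters: the spatial scale parameter is stepped first, and only when it saturates at its upper threshold is the value scale parameter stepped. The Python sketch below captures that reading; the bound and step names are placeholders rather than values from the disclosure.

```python
def adjust_filter_parameters(adjusted_qp, q_low, q_high,
                             sigma_d, sigma_r,
                             sigma_d_bounds, sigma_r_bounds,
                             sigma_d_step, sigma_r_step):
    """Sketch of steps 415-435. Returns (sigma_d, sigma_r, escalate_to_color)."""
    # Direction of adjustment follows the quantization parameter check at 413.
    direction = 1 if adjusted_qp > q_high else -1 if adjusted_qp < q_low else 0
    if direction == 0:
        return sigma_d, sigma_r, False      # QP within [q_low, q_high]: no change

    d_min, d_max = sigma_d_bounds
    r_min, r_max = sigma_r_bounds

    sigma_d += direction * sigma_d_step     # step 415
    if sigma_d < d_min:
        return d_min, sigma_r, False        # clamp low (step 419 path), keep filtering
    if sigma_d <= d_max:
        return sigma_d, sigma_r, False      # still in range, keep filtering

    # Spatial scale saturated high (step 427): clamp it, step the value scale.
    sigma_d = d_max
    sigma_r += direction * sigma_r_step
    if sigma_r < r_min:
        return sigma_d, r_min, False        # clamp low (step 431 path)
    if sigma_r <= r_max:
        return sigma_d, sigma_r, False      # still in range

    # Both filter parameters saturated (step 435): escalate to the color control.
    return sigma_d, r_max, True
```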

At 440, the pre-processing circuit 201 adjusts the configuration parameters of the spatial frequency control 241 (e.g., the filter) based on, for example, the adjusted spatial scale parameter σ_d and/or the adjusted value scale parameter σ_r. At 440, the pre-processing circuit 201 may receive the next image from the imaging device 202 and may adjust the spatial frequency of the received next image using the updated configuration parameters of the spatial frequency control 241.

The method 400 also continues with adjusting configuration parameters of the color control 243 and/or the brightness control 245. In one example, at 441, the rate controller 207 adjusts the dimension of the color space of the received next image by a preset value to generate an adjusted image. According to some non-limiting examples, adjusting the dimension of the color space may include reducing the dimension of the color space by a preset value (e.g., 2^(K_c)). In this example, K_c is a parameter associated with the color control 243 of the pre-processing circuit 201. The parameter K_c may be an integer and may be initialized to a value of 1 at the beginning of the control procedure. In some examples, the parameter K_c may be stored in the storage device 231. According to some embodiments, reducing the dimension of the color space may include dividing the color value associated with each pixel of the image by the preset value (e.g., 2^(K_c)).
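
Assuming the preset value is 2^(K_c), as read above, the color-space reduction at 441 might be sketched as follows; operating on separate chroma planes, and the brightness control 245 applying the same rule with K_b, are assumptions of this sketch.

```python
import numpy as np

def reduce_color_dimension(chroma_plane, k_c):
    """Sketch of step 441 under the 2**k_c reading: integer-divide each chroma
    sample, shrinking the number of representable color levels."""
    reduced = chroma_plane.astype(np.int32) // (1 << k_c)
    return reduced.astype(chroma_plane.dtype)
```

With K_c stepping from 1 toward its assumed upper threshold of 7, an 8-bit chroma plane would shrink from 128 down to 2 representable levels, matching the intuition that a larger K_c trades color fidelity for bit rate.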

At 443, the encoder 203 encodes the adjusted image to generate encoded image data. At 445, the transceiver 205 transmits the encoded image data. At 447, rate controller 207 determines and/or receives a bit rate associated with the encoded data, and at 449, rate controller compares the bit rate to, for example, a determined bandwidth of the communication channel. If the determined bit rate is less than or equal to the bandwidth, the method 400 continues at 440.

However, if at 449, the rate controller 207 determines that the determined bit rate is greater than the bandwidth, the rate controller 207 adjusts one or more encoding parameters (e.g., one or more quantization parameters) of the encoder 203 at 451. At 453, the rate controller 207 compares the adjusted one or more parameters to one or more thresholds. If the one or more encoding parameters satisfy the one or more thresholds, the method 400 continues at 440. Steps 443-453 are similar to steps 401-413 discussed above.

If the one or more encoding parameters do not satisfy the one or more thresholds, the method 400 continues at 455, where the rate controller 207 adjusts the preset value (e.g., the parameter K_c) associated with the color control 243 of the pre-processing circuit 201. According to some embodiments, the rate controller 207 adjusts the parameter K_c by adding a step size to the parameter K_c or by subtracting the step size from the parameter K_c. According to some examples, if the adjusted quantization parameter determined at 451 is greater than the upper threshold Q_H, the rate controller 207 is configured to adjust the parameter K_c by adding the step size to the parameter K_c. In some examples, if the adjusted quantization parameter determined at 451 is less than the lower threshold Q_L, the rate controller 207 is configured to adjust the parameter K_c by subtracting the step size from the parameter K_c. In some examples, the step size for adjusting the parameter K_c may be 1. It should be noted that other methods and/or other values of the step size may be used to adjust the parameter K_c. According to some embodiments, the step size for adjusting the parameter K_c may be stored in the storage 231.

After adjusting the parameter K_c, the rate controller 207 determines at 457 whether the adjusted parameter K_c has reached a threshold. In some examples, the rate controller 207 compares the adjusted parameter K_c with an upper threshold. In some embodiments, the upper threshold may be 7, but other upper threshold values may be used. The upper threshold may be stored in the storage 231. If the rate controller 207 determines at 457 that the adjusted parameter K_c has not reached the upper threshold, the rate controller 207 may send the adjusted parameter K_c to the color control 243 of the preprocessing circuit 201, and the method 400 may continue at 440. In this example, the adjusted parameter K_c associated with the color control 243 of the pre-processing circuit 201 is applied at 441 to the next image of the input data 211. If at 457 the rate controller 207 determines that the adjusted parameter K_c has reached the upper threshold, the rate controller 207 sets the parameter K_c equal to the upper threshold, and the method 400 continues at 460.

In some examples, the rate controller 207 compares the adjusted parameter K_c with a lower threshold. In some embodiments, the lower threshold may be 0, but other lower thresholds may be used. The lower threshold may be stored in the storage device 231. If the rate controller 207 determines at 457 that the adjusted parameter K_c is greater than the lower threshold, the rate controller 207 may send the adjusted parameter K_c to the color control 243 of the preprocessing circuit 201, and the method 400 may continue at 440. In this example, the adjusted parameter K_c associated with the color control 243 of the pre-processing circuit 201 is applied at 441 to the next image of the input data 211. According to some examples, if at 457 the rate controller 207 determines that the adjusted parameter K_c is less than the lower threshold, the rate controller 207 sets the parameter K_c equal to the lower threshold, and the method 400 may continue at 427.

At 460, the pre-processing circuit 201 adjusts the configuration parameters of the spatial frequency control 241 (e.g., the filter) based on, for example, the adjusted spatial scale parameter σ_d and/or the adjusted value scale parameter σ_r. At 460, the preprocessing circuit 201 may receive the next image from the imaging device 202 and may adjust the spatial frequency of the received next image using the updated configuration parameters of the spatial frequency control 241. At 460, the pre-processing circuit 201 may also use the adjusted preset value from 455 to adjust the dimension of the color space of the next image.

The method 400 also continues with adjusting the configuration parameters of the brightness control 245. In one example, at 461, the rate controller 207 adjusts the dimension of the luminance space of the received next image by a preset value to generate an adjusted image. According to some non-limiting examples, adjusting the dimension of the luminance space may include reducing the dimension of the luminance space by a preset value (e.g., 2^(K_b)). In this example, K_b is a parameter associated with the brightness control 245 of the pre-processing circuit 201. The parameter K_b may be an integer and may be initialized to a value of 1 at the beginning of the control procedure. In some examples, the parameter K_b may be stored in the storage device 231. According to some embodiments, reducing the dimension of the luminance space may include dividing the luminance value associated with each pixel of the image by the preset value (e.g., 2^(K_b)).

At 463, the encoder 203 encodes the adjusted image to generate encoded image data. At 465, the transceiver 205 transmits the encoded image data. At 467, the rate controller 207 determines and/or receives a bit rate associated with the encoded data, and at 469, the rate controller compares the bit rate to, for example, a determined bandwidth of the communication channel. If the determined bit rate is less than or equal to the bandwidth, the method 400 continues at 460.

However, if at 469, rate controller 207 determines that the determined bit rate is greater than the bandwidth, rate controller 207 adjusts one or more encoding parameters (e.g., one or more quantization parameters) of encoder 203 at 471. At 473, the rate controller 207 compares the adjusted one or more parameters to one or more thresholds. If the one or more encoding parameters satisfy the one or more thresholds, the method 400 continues at 460. Steps 463-473 are similar to steps 401-413 discussed above.

If the one or more encoding parameters do not satisfy the one or more thresholds, the method 400 continues at 475, where the rate controller 207 adjusts the preset value (e.g., the parameter K_b) associated with the brightness control 245 of the pre-processing circuit 201. According to some embodiments, the rate controller 207 adjusts the parameter K_b by adding a step size to the parameter K_b or by subtracting the step size from the parameter K_b. According to some examples, if the adjusted quantization parameter determined at 471 is greater than the upper threshold Q_H, the rate controller 207 is configured to adjust the parameter K_b by adding the step size to the parameter K_b. In some examples, if the adjusted quantization parameter determined at 471 is less than the lower threshold Q_L, the rate controller 207 is configured to adjust the parameter K_b by subtracting the step size from the parameter K_b. In some examples, the step size for adjusting the parameter K_b may be 1. It should be noted that other methods and/or other values of the step size may be used to adjust the parameter K_b. According to some embodiments, the step size for adjusting the parameter K_b may be stored in the storage 231.

After adjusting the parameter K_b, the rate controller 207 determines at 477 whether the adjusted parameter K_b has reached a threshold. In some examples, the rate controller 207 compares the adjusted parameter K_b with an upper threshold. In some embodiments, the upper threshold may be 7, but other thresholds may be used. The upper threshold may be stored in the storage 231. If the rate controller 207 determines at 477 that the adjusted parameter K_b has not reached the upper threshold, the rate controller 207 may send the adjusted parameter K_b to the brightness control 245 of the pre-processing circuit 201, and the method 400 may continue at 460. In this example, the adjusted parameter K_b associated with the brightness control 245 of the pre-processing circuit 201 is applied at 461 to the next image of the input data 211.

According to some embodiments, if at 477 the rate controller 207 determines that the adjusted parameter K_b has reached the upper threshold, the rate controller 207 sets the parameter K_b equal to the upper threshold, and the method 400 may continue at 479 by issuing an error message. The error message may indicate that the state parameters of the system 200 (e.g., the bit rate associated with the encoded image data, the encoding parameters (e.g., quantization parameters), the PSNR, and/or the storage space of one or more buffers, etc.) are not within a preset range and that the parameters of the pre-processing circuit 201 are also outside a predetermined range. Additionally or alternatively, the method 400 may continue at 460, where the next frame within the input data 211 may be preprocessed using the pre-processing circuit 201 with its parameters currently at their maximum thresholds.

In some examples, the rate controller 207 compares the adjusted parameter K_b with a lower threshold. In some embodiments, the lower threshold may be 0, but other lower thresholds may be used. The lower threshold may be stored in the storage 231. If the rate controller 207 determines at 477 that the adjusted parameter K_b is greater than the lower threshold, the rate controller 207 may send the adjusted parameter K_b to the brightness control 245 of the pre-processing circuit 201, and the method 400 may continue at 460. In this example, the adjusted parameter K_b associated with the brightness control 245 of the pre-processing circuit 201 is applied at 461 to the next image of the input data 211. According to some examples, if the rate controller 207 determines at 477 that the adjusted parameter K_b is less than the lower threshold, the rate controller 207 sets the parameter K_b equal to the lower threshold, and the method 400 may continue at 427.
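
Taken together, figs. 4A-4D describe an escalation ladder over σ_d, σ_r, K_c, and K_b, ending in an error report once every knob is saturated. The condensed Python sketch below illustrates that ladder under assumed bounds and step sizes; it is a schematic reading, not the claimed control procedure (for instance, it does not model the return to step 427 when a lower threshold is reached).

```python
def escalate_preprocessing(state, direction):
    """Walk the assumed ladder sigma_d -> sigma_r -> k_c -> k_b.

    direction is +1 to strengthen preprocessing (QP above its upper threshold)
    or -1 to relax it (QP below its lower threshold). Returns True only when
    every parameter is saturated at its upper threshold (the step 479 error path).
    """
    ladder = [
        ("sigma_d", state["sigma_d_step"], state["sigma_d_bounds"]),
        ("sigma_r", state["sigma_r_step"], state["sigma_r_bounds"]),
        ("k_c", 1, state["k_c_bounds"]),      # e.g., (0, 7) assumed
        ("k_b", 1, state["k_b_bounds"]),      # e.g., (0, 7) assumed
    ]
    for name, step, (low, high) in ladder:
        candidate = state[name] + direction * step
        if low <= candidate <= high:
            state[name] = candidate
            return False                       # adjusted without saturating
        if candidate < low:
            state[name] = low                  # clamp low; relaxing never errors
            return False
        state[name] = high                     # clamp high; try the next rung
    return True                                # all knobs at their upper thresholds

# Example (assumed values): repeated calls with direction=+1 tighten sigma_d,
# then sigma_r, then k_c, then k_b, and finally report an error.
state = {"sigma_d": 3.0, "sigma_d_step": 1.0, "sigma_d_bounds": (1.0, 9.0),
         "sigma_r": 30.0, "sigma_r_step": 10.0, "sigma_r_bounds": (10.0, 80.0),
         "k_c": 1, "k_c_bounds": (0, 7),
         "k_b": 1, "k_b_bounds": (0, 7)}
```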

Various embodiments may be implemented, for example, using one or more well-known computer systems (e.g., computer system 500 shown in fig. 5). For example, each of the components and/or operations described with reference to figs. 1, 2A, 2B, 3A-3D, and 4A-4D may be implemented using one or more computer systems 500 or portions thereof. The computer system 500 may be used, for example, to implement the method 300 of figs. 3A-3D or the method 400 of figs. 4A-4D. For example, according to some embodiments, the computer system 500 may be used for pre-processing and parameter control. Computer system 500 may be any computer capable of performing the functions described herein.

Computer system 500 includes one or more processors (also referred to as central processing units, or CPUs), such as processor 504. The processor 504 is connected to a communication infrastructure or bus 506.

Processor 504 may be, for example, a Graphics Processing Unit (GPU). In some embodiments, the GPU is a processor implemented as a specialized electronic circuit designed to handle mathematically intensive applications. GPUs can have a parallel structure that is effective for parallel processing of large blocks of data (e.g., mathematically intensive data common to computer graphics applications, images, video, etc.).

Computer system 500 also includes user input/output/display devices 522, such as monitors, keyboards, pointing devices, etc., that communicate with communication infrastructure 506.

Computer system 500 also includes a main or primary memory 508, such as Random Access Memory (RAM). Main memory 508 may include one or more levels of cache. The main memory 508 stores control logic 528A (e.g., computer software) and/or data.

The computer system 500 may also include one or more secondary storage devices or secondary memories 510. The secondary memory 510 may include: such as a hard disk drive 512 and/or a removable storage device or a removable storage drive 514. Removable storage drive 514 may be a floppy disk drive, a magnetic tape drive, an optical disk drive, an optical storage device, a tape backup device, and/or any other storage device/drive.

Removable storage drive 514 may interact with a removable storage unit 516. Removable storage unit 516 includes a computer usable or readable storage device that stores control logic 528B (e.g., computer software) and/or data. Removable storage unit 516 may be a floppy disk, magnetic tape, optical disk, DVD, optical storage disk, and/or any other computer data storage device. Removable storage drive 514 reads from and/or writes to removable storage unit 516.

Computer system 500 may also include a communications or network interface 518. Communication interface 518 enables computer system 500 to communicate and interact with any combination of remote devices, remote networks, remote entities, and the like, singly or collectively referenced by reference numeral 530. For example, communication interface 518 may allow computer system 500 to communicate with remote device 530 via communication path 526, which communication path 526 may be wired and/or wireless and may include any combination of a LAN, WAN, the Internet, or the like. Control logic and/or data can be sent to computer system 500 and from computer system 500 via communications path 526.

In some embodiments, a tangible device or article of manufacture including a tangible computer usable or readable medium storing control logic (software) is also referred to herein as a "computer program product" or "program storage device." This includes, but is not limited to: computer system 500, main memory 508, secondary memory 510, and removable storage unit 516, as well as tangible articles of manufacture embodying any combination of the foregoing. The control logic, when executed by one or more data processing devices (e.g., computer system 500), causes the data processing devices to operate as described herein.

It will be apparent to one skilled in the relevant art(s) how to make and use embodiments of the present disclosure using data processing apparatus, computer systems, and/or computer architectures other than the one shown in fig. 5 based on the teachings contained in the present disclosure. In particular, embodiments may operate using software implementations, hardware implementations, and/or operating system implementations other than those described herein.

It should be understood that the detailed description section, and not the summary and abstract sections, is intended to be used to interpret the claims. The summary and abstract sections may set forth one or more, but not all exemplary embodiments of the present disclosure as contemplated by the inventors and are therefore not intended to limit the present disclosure and the appended claims in any way.

The present disclosure has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. Boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.

The foregoing description of the specific embodiments will so fully reveal the general nature of the disclosure that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.

References herein to "one embodiment," "an example embodiment," or similar phrases, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is within the knowledge of one skilled in the relevant art to incorporate such feature, structure, or characteristic in other embodiments, whether or not explicitly mentioned or described herein.

The breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims appended hereto and their equivalents.

The claims in this application are distinct from those of any parent or other related application. Applicant hereby rescinds any disclaimer of claim scope made in the parent application or any predecessor or related application with respect to the present application. The Examiner is therefore advised that any such previous disclaimer, and the references cited to avoid it, may need to be revisited. The Examiner is also reminded that any disclaimer made in the present application should not be read into or against the parent application or any related application.
