Video transmitting apparatus and video receiving apparatus

Document No.: 1549842  Publication date: 2020-01-17

Note: The present technology, "Video transmitting apparatus and video receiving apparatus", was devised by 吉持直树 and 杉冈达也 on 2018-05-11. Its main content is as follows: a video transmission apparatus according to one embodiment of the present disclosure is provided with a transmission unit for transmitting image data of an ROI in an image through payload data of a long packet, and transmitting information related to the ROI through embedded data.

1. A video transmitting apparatus comprising:

a transmitting section that transmits image data of an ROI (region of interest) in an image as payload data of a long packet and transmits information on the ROI as embedded data.

2. The video transmission apparatus according to claim 1, wherein the transmission section transmits the image data of the respective ROIs through virtual channels different from each other.

3. The video transmission apparatus according to claim 1, wherein the transmission section transmits the image data of the respective ROIs through a common virtual channel.

4. The video transmission apparatus according to claim 3, wherein the transmission section transmits the data type of each ROI in a packet header of the payload data.

5. The video transmission apparatus according to claim 3, wherein the transmission section transmits at least one of the number of the ROIs included in the image, the area numbers of the ROIs, the data lengths of the ROIs, and the image formats of the ROIs in the payload data.

6. The video transmission apparatus according to claim 3, wherein the transmission section transmits at least one of the number of the ROIs included in the image, the area numbers of the ROIs, the data lengths of the ROIs, and the image formats of the ROIs in short packets.

7. The video transmission apparatus according to claim 1, wherein the transmission section transmits the image data of the ROI by an image data frame, and transmits the information on the ROI by a header or a trailer of the image data frame.

8. The video transmission apparatus according to claim 1, wherein the transmission section transmits the signal in the MIPI (Mobile Industry Processor Interface) CSI (Camera Serial Interface)-2 specification, the MIPI CSI-3 specification, or the MIPI DSI (Display Serial Interface) specification.

9. A video transmitting apparatus comprising:

a detector that detects an overlapping region where two or more ROIs (regions of interest) overlap each other based on information on the ROIs in an image; and

a transmitting section that transmits a plurality of pieces of third image data in payload data of a long packet and transmits information on the respective ROIs in the image as embedded data, the plurality of pieces of third image data being obtained by omitting second image data of the overlapping region from a plurality of pieces of first image data of the ROIs in the image so as to avoid the second image data from being redundantly included in the plurality of pieces of first image data.

10. The video transmission apparatus according to claim 9, wherein the transmission section transmits each of the ROIs through virtual channels different from each other.

11. The video transmission apparatus according to claim 9, wherein the transmission section transmits each ROI through a common virtual channel.

12. The video transmission apparatus according to claim 11, wherein the transmission section transmits the data type of each ROI in a packet header of the payload data.

13. The video transmission apparatus according to claim 11, wherein the transmission section transmits at least one of the number of the ROIs included in the image, the area numbers of the ROIs, the data lengths of the ROIs, and the image formats of the ROIs in the payload data.

14. The video transmission apparatus according to claim 11, wherein the transmission section transmits at least one of the number of the ROIs included in the image, the area numbers of the ROIs, the data lengths of the ROIs, and the image formats of the ROIs in a data field of a short packet.

15. The video transmission apparatus according to claim 9, wherein the transmission section transmits the image data of the ROI by an image data frame, and transmits the information on the ROI by a header or a trailer of the image data frame.

16. The video transmission apparatus according to claim 9, wherein the transmission section transmits the signal in the MIPI (Mobile Industry Processor Interface) CSI (Camera Serial Interface)-2 specification, the MIPI CSI-3 specification, or the MIPI DSI (Display Serial Interface) specification.

17. A video receiving apparatus comprising:

a receiving section that receives a transmission signal in which image data of an ROI (region of interest) in an image is included in payload data of a long packet and information on the ROI is included in embedded data; and

an information processor that extracts information about the ROI from the embedded data included in the transmission signal received by the receiving section, and extracts image data of the ROI from the payload data included in the transmission signal received by the receiving section based on the extracted information.

18. The video receiving apparatus according to claim 17, wherein the information processor detects an overlapping region where two or more of the ROIs overlap with each other based on the extracted information, and extracts image data of each of the ROIs from the payload data included in the transmission signal received by the receiving section based on the extracted information and the detected information of the overlapping region.

19. The video receiving apparatus according to claim 17, wherein the receiving section receives a signal in the MIPI (Mobile Industry Processor Interface) CSI (Camera Serial Interface)-2 specification, the MIPI CSI-3 specification, or the MIPI DSI (Display Serial Interface) specification.

Technical Field

The present disclosure relates to a video transmitting apparatus and a video receiving apparatus.

Background

In recent years, applications that transmit large volumes of data have been increasing. Such applications are likely to place a heavy load on the transmission system; in the worst case, the transmission system may go down and data transmission may become impossible.

In order to prevent the transmission system from going down, instead of transmitting the entire captured image, for example, only a partial image obtained by designating an object to be captured and cutting out the recognized object is transmitted. It should be noted that cutting out a partial image from a captured image is described, for example, in the following patent documents.

Reference list

Patent document

PTL 1: Japanese Unexamined Patent Application Publication No. 2016-201756

PTL 2: Japanese Unexamined Patent Application Publication No. 2014-39219

PTL 3: Japanese Unexamined Patent Application Publication No. 2013-164834

PTL 4: Japanese Unexamined Patent Application Publication No. 2012-209831

Disclosure of Invention

Incidentally, MIPI (Mobile Industry Processor Interface) CSI (Camera Serial Interface)-2, MIPI CSI-3, and the like are used in some cases as systems for transmission from an image sensor to an application processor. Further, MIPI DSI (Display Serial Interface) and the like are used in some cases as systems for transmission from an application processor to a display. In the case of transmitting a partial region (ROI (region of interest)) cut out from a captured image using these systems, transmission of the ROI may not be easy due to various limitations. Therefore, it is desirable to provide a video transmitting apparatus and a video receiving apparatus capable of transmitting an ROI even under various restrictions.

The first video transmission apparatus according to an embodiment of the present disclosure includes a transmission section that transmits image data of an ROI in an image in payload data of a long packet and transmits information on the ROI in embedded data. The payload data of the long packet refers to main data (application data) to be transmitted between devices. Embedded data refers to other information that may be embedded in the header or trailer of a frame of image data.

In the first video transmission apparatus according to the embodiment of the present disclosure, image data of an ROI in an image is transmitted in payload data of a long packet, and information on the ROI is transmitted in embedded data. This makes it possible to easily extract image data of the ROI from the transmission signal in the apparatus that has received the transmission signal transmitted from the video transmission apparatus.

The second video transmission apparatus according to one embodiment of the present disclosure includes a detector that detects an overlapping region where two or more ROIs overlap with each other based on information about the respective ROIs in an image. The second video transmitting apparatus further includes a transmitting section that transmits a plurality of pieces of third image data, which are obtained by omitting the second image data of the overlapping region from the plurality of pieces of first image data of the ROIs in the image, in the payload data of a long packet so as to avoid the second image data from being redundantly included in the plurality of pieces of first image data. The transmitting section also transmits information on the respective ROIs in the image as embedded data.

In the second video transmission apparatus according to the embodiment of the present disclosure, the plurality of pieces of third image data are transmitted in payload data of a long packet, and information on the corresponding ROIs in the image is transmitted in embedded data. This makes it possible to easily extract image data of the ROI from the transmission signal in the apparatus that has received the transmission signal transmitted from the video transmission apparatus.

The video receiving apparatus according to an embodiment of the present disclosure includes a receiving section that receives a transmission signal in which image data of an ROI (region of interest) in an image is included in payload data of a long packet and information on the ROI is included in embedded data. The video receiving apparatus further includes an information processor that extracts the information on the ROI from the embedded data included in the transmission signal received by the receiving section, and extracts the image data of the ROI from the payload data included in the transmission signal received by the receiving section based on the extracted information.

In the video receiving apparatus according to the embodiment of the present disclosure, the information on the ROI is extracted from the embedded data included in the transmission signal received by the receiving section, and the image data of the ROI is extracted from the payload data of the long packet included in the transmission signal received by the receiving section based on the extracted information on the ROI. This makes it possible to easily extract image data of the ROI from the transmission signal.
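This receiver-side flow can be roughed out in a few lines. In the sketch below (not from the patent: the dict layout and the field names `embedded`, `payload`, `region`, and `data_length` are all hypothetical stand-ins for the embedded data and long-packet payload), the ROI descriptors carried in the embedded data drive how the payload bytes are sliced apart:

```python
def extract_rois(transmission):
    """Sketch of receiver-side extraction: read ROI descriptors from the
    embedded data, then slice each ROI's image data out of the payload."""
    offset, images = 0, {}
    for roi in transmission["embedded"]:  # ROI info extracted from embedded data
        length = roi["data_length"]       # data length of this ROI's image data
        images[roi["region"]] = transmission["payload"][offset:offset + length]
        offset += length
    return images
```

The point of the sketch is only that the embedded data makes the payload self-describing: without the per-ROI data lengths, the receiver could not know where one ROI's image data ends and the next begins.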

According to the first and second video transmitting apparatuses and the video receiving apparatus of the embodiments of the present disclosure, it is possible to easily extract image data of an ROI from a transmission signal in an apparatus that has received a transmission signal transmitted from the video transmitting apparatus, which makes it possible to transmit the ROI even under various restrictions. It should be noted that the effect of the present disclosure is not necessarily limited to the effect described here, and may be any effect described in the present specification.

Drawings

Fig. 1 is a diagram showing a schematic configuration example of a video transmission system;

fig. 2 is a diagram showing a schematic configuration example of the video transmitting apparatus in fig. 1;

fig. 3 is a diagram showing an example of a transmission data generation process in a case where two ROIs are included in a captured image;

fig. 4 is a diagram showing a configuration example of a packet header;

fig. 5 is a diagram showing a configuration example of transmission data;

fig. 6 is a diagram showing a configuration example of transmission data;

fig. 7 is a diagram showing a configuration example of payload data of a long packet;

fig. 8 is a diagram showing a schematic configuration example of the video receiving apparatus in fig. 1;

fig. 9 is a diagram showing an example of a process for generating two ROI images included in a captured image in the case where two images are included in transmission data;

fig. 10 is a diagram showing a modification of the schematic configuration of the video receiving apparatus in fig. 1;

fig. 11 is a diagram showing a modification of the configuration of one row;

fig. 12 is a diagram showing another modification of the configuration of one row;

fig. 13 is a diagram showing a modification of the schematic configuration of the video transmitting apparatus in fig. 1;

fig. 14 is a diagram showing a configuration example of transmission data.

Detailed Description

Some embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. The following description is given of specific examples of the present disclosure, and the present disclosure is not limited to the following embodiments.

< embodiment >

[ arrangement ]

In recent years, in portable devices such as smartphones, camera devices, and the like, the amount of image data to be handled has increased, and higher speed and lower power consumption are demanded for data transmission within a device or between different devices. To meet these requirements, standardization of high-speed interface specifications (e.g., the C-PHY specification and the D-PHY specification) defined by the MIPI Alliance has been promoted as coupling interfaces for portable devices and camera devices. The C-PHY specification and the D-PHY specification are physical layer (PHY) interface specifications for communication protocols. Further, the DSI for displays of portable devices and the CSI for camera devices exist as upper protocol layers of the C-PHY specification and the D-PHY specification.

The video transmission system 1 according to an embodiment of the present disclosure is a system that transmits and receives signals according to the MIPI CSI-2 specification, the MIPI CSI-3 specification, or the MIPI DSI specification. Fig. 1 shows an overview of the video transmission system 1 according to the present embodiment. The video transmission system 1 is applied to the transmission of data signals, clock signals, and control signals, and includes a video transmitting apparatus 100 and a video receiving apparatus 200. The video transmission system 1 includes, for example, a data channel D1, a clock channel C1, and a camera control interface CCI between the video transmitting apparatus 100 and the video receiving apparatus 200. The data channel D1 transmits data signals, e.g., image data. The clock channel C1 transmits a clock signal. The camera control interface CCI transmits control signals. Although Fig. 1 shows an example in which one data channel D1 is provided, a plurality of data channels D1 may be provided. The camera control interface CCI is a bidirectional control interface compliant with the I2C (Inter-Integrated Circuit) specification.

The video transmission apparatus 100 is an apparatus that transmits signals according to the MIPI CSI-2 specification, the MIPI CSI-3 specification, or the MIPI DSI specification, and includes a CSI transmitter 100A and a CCI slave 100B. The video receiving apparatus 200 includes a CSI receiver 200A and a CCI master 200B. In the clock channel C1, the CSI transmitter 100A and the CSI receiver 200A are coupled to each other by a clock signal line. In the data channel D1, the CSI transmitter 100A and the CSI receiver 200A are coupled to each other by a data signal line. In the camera control interface CCI, the CCI slave 100B and the CCI master 200B are coupled to each other by a control signal line.

The CSI transmitter 100A functions as a differential signal transmission circuit that generates a differential clock signal as a clock signal and outputs the differential clock signal to the clock signal line. The CSI transmitter 100A also functions as a differential signal transmission circuit that generates a differential data signal as a data signal and outputs the differential data signal to the data signal line. The CSI receiver 200A functions as a differential signal receiving circuit that receives a differential clock signal as a clock signal through the clock signal line and performs predetermined processing on the received differential clock signal. The CSI receiver 200A also functions as a differential signal receiving circuit that receives a differential data signal as a data signal through the data signal line and performs predetermined processing on the received differential data signal.

(video transmission apparatus 100)

Fig. 2 shows a configuration example of the video transmission apparatus 100. The video transmission device 100 corresponds to a specific example of the CSI transmitter 100A. The video transmission apparatus 100 includes, for example, an imaging section 110, image processors 120 and 130, and a transmission section 140. The video transmission apparatus 100 transmits the transmission data 147A to the video reception apparatus 200 through the data channel D1. The transmission data 147A is generated by performing predetermined processing on the captured image 111 obtained by the imaging section 110. Fig. 3 shows an example of a process for generating the transmission data 147A.

The imaging section 110 converts an optical image signal obtained through, for example, an optical lens into image data. The imaging section 110 includes, for example, a CCD (charge coupled device) image sensor or a CMOS (complementary metal oxide semiconductor) image sensor. The imaging section 110 includes an analog-to-digital conversion circuit, and converts analog image data into digital image data. The format of the converted data may be the YCbCr format, in which the color of a pixel is represented by a luminance component Y and color difference components Cb and Cr, or may be the RGB format. The imaging section 110 outputs the captured image 111 (digital image data) obtained by imaging to the image processor 120.
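As an aside on the YCbCr format mentioned above: the document does not specify conversion coefficients, so the sketch below uses the widely used ITU-R BT.601 full-range matrix (an assumption on our part, not part of the patent):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to YCbCr using the ITU-R BT.601
    full-range matrix (one common choice; coefficients are assumed here)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b            # luminance component Y
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128  # color difference Cb
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128  # color difference Cr
    return y, cb, cr
```

For a neutral white pixel (255, 255, 255) this yields Y at full scale with both color difference components at the 128 midpoint, which is a quick sanity check on the matrix.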

The image processor 120 is a circuit that performs predetermined processing on the captured image 111 input from the imaging section 110 in the case where a control signal instructing the cutting out of an ROI is input from the video receiving apparatus 200 through the camera control interface CCI. As a result, the image processor 120 generates various data (120A, 120B, and 120C) and outputs the data to the transmission section 140. The image processor 130 is a circuit that performs predetermined processing on the captured image 111 input from the imaging section 110 in the case where a control signal instructing the output of a normal image is input from the video receiving apparatus 200 through the camera control interface CCI. As a result, the image processor 130 generates image data 130A and outputs the image data 130A to the transmission section 140.

The image processor 130 includes, for example, an encoder 131. The encoder 131 encodes the captured image 111 to generate compressed image data 130A. As the format of the compressed image data 130A, the image processor 130 compresses the captured image 111 in a compression format conforming to, for example, the JPEG (Joint Photographic Experts Group) specification or the like.

The image processor 120 includes, for example, an ROI cutting section 121, an ROI interpreter 122, an overlap detector 123, a priority setting section 124, an encoder 125, and an image processing controller 126.

The ROI cutting section 121 identifies one or more objects to be photographed included in the captured image 111 input from the imaging section 110, and sets a region of interest ROI for each identified object. The region of interest ROI is, for example, a square region including the identified object. The ROI cutting section 121 cuts an image of each region of interest ROI (e.g., the ROI image 112 in Fig. 3) from the captured image 111. The ROI cutting section 121 also assigns a region number as an identifier to each set region of interest ROI. For example, in the case where two regions of interest ROI are set in the captured image 111, the ROI cutting section 121 assigns region number 1 to one region of interest ROI (e.g., the region of interest ROI1 in Fig. 3) and assigns region number 2 to the other region of interest ROI (e.g., the region of interest ROI2 in Fig. 3). The ROI cutting section 121 stores, for example, the assigned identifiers (region numbers) in the storage section. The ROI cutting section 121 also stores, for example, each ROI image 112 cut from the captured image 111 in the storage section. The ROI cutting section 121 further stores, for example, the identifier (region number) assigned to each region of interest ROI in the storage section in association with the ROI image 112.

The ROI interpreter 122 derives, for each region of interest ROI, position information 113 of the region of interest ROI in the captured image 111. The position information 113 includes, for example, the upper-left coordinates (Xa, Ya) of the region of interest ROI, the length of the region of interest ROI in the X-axis direction, and the length of the region of interest ROI in the Y-axis direction. The length of the region of interest ROI in the X-axis direction is, for example, the physical region length XLa of the region of interest ROI in the X-axis direction. The length of the region of interest ROI in the Y-axis direction is, for example, the physical region length YLa of the region of interest ROI in the Y-axis direction. The physical region length refers to the physical length (data length) of the region of interest ROI. The position information 113 may include the coordinates of a position other than the upper-left corner of the region of interest ROI. The ROI interpreter 122 stores, for example, the derived position information 113 in the storage section, in association with, for example, the identifier (region number) assigned to the region of interest ROI.
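The position information described above can be captured in a small structure. The type and helper below are illustrative names, not from the patent; they simply pair the upper-left coordinates with the physical region lengths:

```python
from dataclasses import dataclass

@dataclass
class PositionInfo:
    xa: int   # upper-left X coordinate (Xa) of the region of interest ROI
    ya: int   # upper-left Y coordinate (Ya) of the region of interest ROI
    xla: int  # physical region length XLa in the X-axis direction
    yla: int  # physical region length YLa in the Y-axis direction

def derive_position_info(left, top, right, bottom):
    """Derive position info 113 from an ROI bounding box
    (right/bottom exclusive, a convention assumed here)."""
    return PositionInfo(xa=left, ya=top, xla=right - left, yla=bottom - top)
```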

The ROI interpreter 122 may further derive, for each region of interest ROI, an output region length XLc of the region of interest ROI in the X-axis direction and an output region length YLc of the region of interest ROI in the Y-axis direction as the position information 113. The output region length refers to a physical length (data length) after resolution change is performed on the region of interest ROI by, for example, thinning processing, pixel addition, or the like. In addition to the position information 113, the ROI interpreter 122 may derive, for example, sensing information, exposure information, gain information, AD (analog-to-digital) word length, image format, and the like for each region of interest ROI, and may store them in a storage section.

The sensing information refers to the content of arithmetic operations on objects included in the region of interest ROI and supplementary information to subsequent-stage signal processing of the ROI image 112. The exposure information refers to the exposure time of the region of interest ROI. The gain information refers to gain information of the region of interest ROI. The AD word length refers to a data word length of each pixel that has been AD-converted in the region of interest ROI. The image format refers to the image format of the region of interest ROI. For example, the ROI interpreter 122 may derive the number of regions of interest ROI (the number of ROIs) included in the captured image 111 and store the number of regions of interest ROI in the storage section.

In the case where a plurality of objects to be photographed are identified in the captured image 111, the overlap detector 123 detects an overlapping region (ROO (overlap region)) in which two or more regions of interest ROI overlap each other, based on the position information 113 of the plurality of regions of interest ROI in the captured image 111. That is, the overlap detector 123 derives position information 114 of the overlapping region ROO in the captured image 111 for each overlapping region ROO. The overlap detector 123 stores, for example, the derived position information 114 in the storage section in association with the overlapping region ROO. The overlapping region ROO is, for example, a square region having a size equal to or smaller than the smallest region of interest ROI among the two or more regions of interest ROI that overlap each other. The position information 114 includes, for example, the upper-left coordinates (Xb, Yb) of the overlapping region ROO, the length of the overlapping region ROO in the X-axis direction, and the length of the overlapping region ROO in the Y-axis direction. The length of the overlapping region ROO in the X-axis direction is, for example, the physical region length XLb. The length of the overlapping region ROO in the Y-axis direction is, for example, the physical region length YLb. The position information 114 may include the coordinates of a position other than the upper-left corner of the overlapping region ROO.
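Detecting the overlapping region ROO of two rectangular ROIs reduces to intersecting their bounding boxes. A sketch under that reading (the 4-tuples mirror the position information above; the function name is hypothetical):

```python
def detect_overlap(roi_a, roi_b):
    """Return the overlapping region ROO of two ROIs, or None.

    Each ROI is (upper-left X, upper-left Y, length in X, length in Y),
    mirroring position info 113; the result mirrors position info 114.
    """
    xa1, ya1, xl1, yl1 = roi_a
    xa2, ya2, xl2, yl2 = roi_b
    xb = max(xa1, xa2)                    # upper-left X of the intersection
    yb = max(ya1, ya2)                    # upper-left Y of the intersection
    x_end = min(xa1 + xl1, xa2 + xl2)     # right edge of the intersection
    y_end = min(ya1 + yl1, ya2 + yl2)     # bottom edge of the intersection
    if x_end <= xb or y_end <= yb:
        return None                       # the two ROIs do not overlap
    return (xb, yb, x_end - xb, y_end - yb)
```

Note that the resulting region is always no larger than the smaller of the two ROIs, consistent with the size bound stated above.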

The priority setting section 124 assigns a priority 115 to each region of interest ROI in the captured image 111. For example, the priority setting section 124 stores the assigned priorities 115 in the storage section. For example, the priority setting section 124 stores the assigned priorities 115 in the storage section in association with the region of interest ROI. The priority setting section 124 may assign the priorities 115 to the respective regions of interest ROI separately from the region numbers assigned to the respective regions of interest ROI, or may assign the region numbers assigned to the respective regions of interest ROI instead of the priorities 115. For example, the priority setting section 124 may store the priorities 115 in the storage section in association with the regions of interest ROI, or may store the area numbers assigned to the respective regions of interest ROI in the storage section in association with the regions of interest ROI.

The priority 115 is an identifier of each region of interest ROI, and is identification information for determining from which of the plurality of regions of interest ROI in the captured image 111 the overlapping region ROO has been omitted. For example, the priority setting section 124 assigns 1 as the priority 115 to one of two regions of interest ROI each including the overlapping region ROO, and assigns 2 as the priority 115 to the other. In this case, when the transmission image 116 to be described later is created, the overlapping region ROO is omitted from the region of interest ROI having the larger value of the priority 115. It should be noted that the priority setting section 124 may assign, as the priority 115, the same number as the region number assigned to each region of interest ROI. For example, the priority setting section 124 stores, in the storage section, the priority 115 assigned to each region of interest ROI in association with the ROI image 112.

The encoder 125 encodes each transmission image 116 to generate compressed image data 120A. The encoder 125 compresses each transmission image 116 in a compression format (as the format of the compressed image data 120A) conforming to, for example, the JPEG specification. Before performing the compression processing described above, the encoder 125 generates the transmission images 116. The encoder 125 generates the plurality of transmission images 116 obtained by omitting the image 118 of the overlapping region ROO from the plurality of ROI images 112 obtained from the captured image 111, so as to avoid the image 118 being redundantly included among the plurality of ROI images 112.

The encoder 125 determines from which of the plurality of ROI images 112 the image 118 is omitted based on, for example, the priority 115 assigned to each region of interest ROI. It should be noted that the encoder 125 may determine from which of the plurality of ROI images 112 the image 118 is omitted by using, for example, the region number assigned to each region of interest ROI as the priority 115. The encoder 125 regards the image obtained by omitting the image 118 from the ROI image 112 determined as described above as a transmission image 116 (e.g., the transmission image 116a2 in Fig. 3). For an ROI image 112 that does not include the overlapping region ROO, and from which the image 118 is therefore not omitted, the encoder 125 regards the ROI image 112 itself as the transmission image 116 (e.g., the transmission image 116a1 in Fig. 3).
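The omission step can be illustrated with a toy model that represents each ROI image as a set of pixel coordinates (a deliberate simplification; the patent operates on actual image data, and the function name is hypothetical). The ROI with the smallest priority 115 value keeps the overlap, and the overlap is omitted from the others:

```python
def make_transmission_images(rois, priorities, roo):
    """Sketch of generating transmission images 116: the ROI with the
    smallest priority value keeps the overlapping-region pixels; those
    pixels (the image 118) are omitted from every other ROI.

    rois: dict mapping region number -> set of (x, y) pixel coordinates.
    priorities: dict mapping region number -> priority 115 value.
    roo: set of (x, y) pixel coordinates of the overlapping region ROO.
    """
    keeper = min(priorities, key=priorities.get)  # smallest priority keeps ROO
    out = {}
    for region, pixels in rois.items():
        if region == keeper:
            out[region] = set(pixels)       # ROI image 112 used as-is
        else:
            out[region] = set(pixels) - roo  # omit the overlap (image 118)
    return out
```

Summing the sizes of the resulting pixel sets shows that each overlap pixel is transmitted exactly once, which is the whole point of the omission.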

The image processing controller 126 generates ROI information 120B and frame information 120C, and transmits them to the transmission section 140. The ROI information 120B includes, for example, each piece of position information 113. The ROI information 120B further includes, for example, at least one of the data type of each region of interest ROI, the number of regions of interest ROI included in the captured image 111, the region number (or priority 115) of each region of interest ROI, the data length of each region of interest ROI, or the image format of each region of interest ROI. The frame information 120C includes, for example, a virtual channel number assigned to each frame, the data type of each region of interest ROI, the payload length of each line, and the like. The data types include, for example, YUV data, RGB data, RAW data, and the like. The data types also include, for example, ROI-format data, normal-format data, and the like. The payload length is, for example, the number of pixels included in the payload of a long packet, e.g., the number of pixels per region of interest ROI. Here, the payload refers to the main data (application data) to be transmitted between the video transmitting apparatus 100 and the video receiving apparatus 200. A long packet refers to a packet provided between a packet header PH and a packet trailer PF.

The transmission section 140 is a circuit that generates and transmits the transmission data 147A based on the various data (120A, 120B, 120C, and 130A) input from the image processors 120 and 130. The transmission section 140 transmits the ROI information 120B of each region of interest ROI in the captured image 111 as embedded data. In the case where a control signal instructing the cutting out of an ROI is input from the video receiving apparatus 200 through the camera control interface CCI, the transmission section 140 also transmits the image data (compressed image data 120A) of each region of interest ROI in the payload data of a long packet. At this time, the transmission section 140 transmits the image data (compressed image data 120A) of each region of interest ROI through a common virtual channel. Further, the transmission section 140 transmits the image data (compressed image data 120A) of each region of interest ROI in an image data frame, and transmits the ROI information 120B on each region of interest ROI in the header of the image data frame. In the case where a control signal instructing the output of a normal image is input from the video receiving apparatus 200 through the camera control interface CCI, the transmission section 140 transmits normal image data (compressed image data 130A) in the payload data of a long packet.

The transmission section 140 includes, for example, a LINK controller 141, an ECC generator 142, a PH generator 143, an EBD buffer 144, an ROI data buffer 145, a normal image data buffer 146, and a synthesizer 147. In the case where a control signal providing an instruction for cutting out the ROI is input from the video receiving apparatus 200 through the camera control interface CCI, the LINK controller 141, the ECC generator 142, the PH generator 143, the EBD buffer 144, and the ROI data buffer 145 perform output to the synthesizer 147. The normal image data buffer 146 performs output to the synthesizer 147 in a case where a control signal providing an instruction for outputting a normal image is input from the video receiving apparatus 200 through the camera control interface CCI.

It should be noted that the ROI data buffer 145 may also be used as the normal image data buffer 146. In this case, the transmission section 140 may include a selector between the outputs of the ROI data buffer 145 and the normal image data buffer 146 and the input of the synthesizer 147, the selector selecting one of the output of the ROI data buffer 145 and the output of the normal image data buffer 146.

For example, the LINK controller 141 outputs the frame information 120C to the ECC generator 142 and the PH generator 143 line by line. Based on, for example, the data of a line in the frame information 120C (e.g., the virtual channel number, the data type of each region of interest ROI, the payload length of each line, etc.), the ECC generator 142 generates an error correction code for the line. For example, the ECC generator 142 outputs the generated error correction code to the PH generator 143. The PH generator 143 generates a packet header PH for each line using, for example, the frame information 120C and the error correction code generated by the ECC generator 142. At this time, the packet header PH is, for example, a packet header of payload data of a long packet, as shown in fig. 4. The packet header PH includes, for example, DI, WC, and ECC. The WC denotes an area for indicating the end of the packet to the video receiving apparatus 200 by a word count. The WC includes, for example, the payload length, e.g., the number of pixels per region of interest ROI. The ECC denotes an area storing a value for correcting a bit error. The ECC includes an error correction code. The DI denotes an area storing a data identifier. The DI includes a VC (virtual channel) number and a data type (the data type of each region of interest ROI). VC (virtual channel) is a concept introduced for packet flow control, and is a mechanism to support a plurality of independent data flows sharing the same link. The PH generator 143 outputs the generated packet header PH to the synthesizer 147.
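The DI/WC/ECC layout described above can be sketched as follows. This is a simplified illustration: the 2-bit VC / 6-bit data type split in the DI byte follows the general MIPI CSI-2 convention, but the ECC byte here is a bare XOR checksum standing in for the real Hamming-based ECC, and the function name is hypothetical:

```python
import struct

def pack_packet_header(vc: int, data_type: int, word_count: int) -> bytes:
    """Pack a 4-byte long-packet header: DI (VC + data type), WC, ECC.

    The ECC is a placeholder XOR checksum, NOT the actual MIPI CSI-2
    error correction code; it only illustrates the field layout.
    """
    di = ((vc & 0x3) << 6) | (data_type & 0x3F)   # DI: 2-bit VC, 6-bit data type
    header = struct.pack("<BH", di, word_count & 0xFFFF)  # WC is little-endian 16-bit
    ecc = header[0] ^ header[1] ^ header[2]        # stand-in error-check byte
    return header + bytes([ecc])
```

For example, a header for VC 0, data type 0x2A, and a 640-pixel line occupies 4 bytes, with the word count split across bytes 1 and 2.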

The EBD buffer 144 temporarily stores the ROI information 120B, and outputs the ROI information 120B to the synthesizer 147 as embedded data at a predetermined time. Embedded data refers to other information that may be embedded in the header or trailer of a frame of image data (see fig. 5 below). The embedded data includes, for example, ROI information 120B.

The ROI data buffer 145 temporarily holds the compressed image data 120A, and outputs the compressed image data 120A to the synthesizer 147 at a predetermined time as payload data of a long packet. In the case where a control signal providing an instruction for cutting out the ROI is input from the video receiving apparatus 200 through the camera control interface CCI, the ROI data buffer 145 outputs the compressed image data 120A to the synthesizer 147 as payload data of a long packet. The normal image data buffer 146 temporarily holds the compressed image data 130A, and outputs the compressed image data 130A to the synthesizer 147 at a predetermined time as payload data of a long packet. In the case where a control signal providing an instruction for outputting a normal image is input from the video receiving apparatus 200 through the camera control interface CCI, the normal image data buffer 146 outputs the compressed image data 130A to the synthesizer 147 as payload data of a long packet.

In the case where a control signal providing an instruction for outputting a normal image is input from the video receiving apparatus 200 through the camera control interface CCI, the synthesizer 147 generates the transmission data 147A based on the input data (the compressed image data 130A). The synthesizer 147 outputs the generated transmission data 147A to the video receiving apparatus 200 through the data path D1. In contrast, in the case where a control signal providing an instruction for cutting out the ROI is input from the video receiving apparatus 200 through the camera control interface CCI, the synthesizer 147 generates the transmission data 147A based on various input data (the packet header PH, the ROI information 120B, and the compressed image data 120A). The synthesizer 147 outputs the generated transmission data 147A to the video receiving apparatus 200 through the data path D1. That is, the synthesizer 147 puts the data type (the data type of each region of interest ROI) in the packet header PH of the payload data of the long packet and transmits the data type. Further, the synthesizer 147 transmits the image data (compressed image data 120A) of the corresponding region of interest ROI through a virtual channel common to each other.

The transmission data 147A includes, for example, the image data frame shown in fig. 5. The image data frame generally includes a header region, a packet region, and a trailer region. In fig. 5, the trailer region is not shown for convenience. The frame header region R1 of the transmission data 147A includes embedded data. At this time, the embedded data includes the ROI information 120B. In fig. 5, the packet region R2 of the transmission data 147A includes the payload data of the long packet for each line, and also includes a packet header PH and a packet trailer PF at positions sandwiching the payload data of the long packet. Further, the low power mode LP is included at positions sandwiching the packet header PH and the packet trailer PF.

The packet header PH includes, for example, DI, WC, and ECC. The WC includes, for example, the payload length, e.g., the number of pixels per region of interest ROI. The ECC includes an error correction code. The DI includes a VC (virtual channel) number and a data type (the data type of each region of interest ROI). In the present embodiment, a common virtual channel number is assigned to the VC of each line. Further, in fig. 5, the packet region R2 of the transmission data 147A includes compressed image data 147B. The compressed image data 147B includes one piece of compressed image data 120A or a plurality of pieces of compressed image data 120A. Here, in fig. 5, the packet group closer to the packet header PH includes the compressed image data 120A (120A1) of, for example, the transmission image 116a1 in fig. 3, and the packet group farther from the packet header PH includes the compressed image data 120A (120A2) of, for example, the transmission image 116a2 in fig. 3. The two pieces of compressed image data 120A1 and 120A2 configure the compressed image data 147B. The payload data of the long packet of each line includes pixel data of one line in the compressed image data 147B.

Fig. 6 shows a configuration example of the transmission data 147A. The transmission data 147A includes, for example, a frame header region R1 and a packet region R2. It should be noted that fig. 6 shows an example of the contents of the frame header region R1 in detail. In fig. 6, the low power mode LP is not shown.

The frame header region R1 includes, for example, a frame number F1 as an identifier of the transmission data 147A. The frame header region R1 includes information on the compressed image data 147B included in the packet region R2. The frame header region R1 includes, for example, the number of pieces of compressed image data 120A (the number of ROIs) included in the compressed image data 147B and information on the ROI image 112 (ROI information 120B) corresponding to each piece of compressed image data 120A included in the compressed image data 147B.

For example, the synthesizer 147 divides and disposes the compressed image data 147B for each pixel row of the compressed image data 120A in the packet region R2 of the transmission data 147A. Therefore, the packet region R2 of the transmission data 147A does not redundantly include the compressed image data of the image 118 corresponding to the overlap region ROO. Further, the synthesizer 147 omits, for example, a pixel row that does not correspond to each transmission image 116 of the captured image 111 in the packet region R2 of the transmission data 147A. Therefore, the packet region R2 of the transmission data 147A does not include a pixel row that does not correspond to each transmission image 116 of the captured image 111. It should be noted that, in the packet region R2 in fig. 6, the portion enclosed by the broken line corresponds to the compressed image data of the image 118 of the overlap region ROO.

The boundary between the packet group closer to the packet header PH (e.g., 1(n) in fig. 6) and the packet group farther from the packet header PH (e.g., 2(1) in fig. 6) is specified by the physical region length XLa1 of the ROI image 112 corresponding to the compressed image data of the packet group closer to the packet header PH (e.g., 1(n) in fig. 6). In the compressed image data of the image 118 corresponding to the overlap region ROO included in the packet group closer to the packet header PH (e.g., 1(n) in fig. 6), the start position of the packet is specified by the physical region length XLa2 of the ROI image 112 corresponding to the packet group farther from the packet header PH (e.g., 2(1) in fig. 6).
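The boundary arithmetic can be illustrated with a toy calculation: for a line crossing both ROIs, the nearer packet group carries the full XLa1 pixels of its ROI, and the farther ROI's row is shortened by the overlap so its pixels are not sent twice. This is a sketch under assumed row lengths; the helper name is hypothetical:

```python
def split_shared_row(xla1: int, xla2: int, overlap_len: int):
    """Return the pixel counts actually placed in the payload for a line
    that crosses two overlapping ROIs: the full row of the ROI closer to
    the packet header (length XLa1), followed by the farther ROI's row
    (length XLa2) with the overlap region ROO omitted."""
    return xla1, xla2 - overlap_len
```

For instance, with XLa1 = 100, XLa2 = 80, and a 30-pixel overlap, the transmitted line contains 100 + 50 pixels rather than 100 + 80.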

For example, in the case where the payload data of a long packet is generated line by line in the packet region R2 of the transmission data 147A, the synthesizer 147 may put, for example, the ROI information 120B into the payload data of the long packet in addition to the pixel data of one line in the compressed image data 147B, as shown in fig. 7. That is, the synthesizer 147 may put the ROI information 120B in the payload data of the long packet and transmit the ROI information 120B. At this time, the ROI information 120B includes, for example, at least one of the number of the regions of interest ROI (the number of ROIs) included in the captured image 111, the region number (or priority 115) of each region of interest ROI, the data length of each region of interest ROI, or the image format of each region of interest ROI, as shown in figs. 7(A) to 7(K). The ROI information 120B is preferably disposed at the end on the packet header PH side in the payload data of the long packet (i.e., at the head of the payload data of the long packet).
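Placing the ROI information 120B at the head of the payload, ahead of one line of pixel data, can be sketched as below. The field widths and their order are illustrative assumptions, not the layout of fig. 7; the function name is hypothetical:

```python
import struct

def payload_with_roi_info(num_rois: int, region_number: int,
                          data_length: int, image_format_code: int,
                          pixel_row: bytes) -> bytes:
    """Prepend ROI information 120B (illustrative field layout) to the
    pixel data of one line, so a receiver can read it without touching
    the embedded data."""
    info = struct.pack("<BBHB", num_rois, region_number,
                       data_length, image_format_code)
    return info + pixel_row
```

A receiver would then slice off the fixed-size info block before handing the remainder to the ROI decoder.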

(video receiving apparatus 200)

Next, a description is given of the video receiving apparatus 200. Fig. 8 shows an example of the configuration of the video receiving apparatus 200. Fig. 9 shows an example of a process for generating the ROI image 223A in the video receiving apparatus 200. The video receiving apparatus 200 is an apparatus that receives signals according to a specification common to the video transmitting apparatus 100 (e.g., the MIPI CSI-2 specification, the MIPI CSI-3 specification, or the MIPI DSI specification). The video receiving apparatus 200 includes, for example, a receiving section 210 and an information processor 220. The receiving section 210 is a circuit that receives the transmission data 147A output from the video transmitting apparatus 100 through the data channel D1, performs predetermined processing on the received transmission data 147A to generate various data (214A, 215A, and 215B), and outputs the data to the information processor 220. The information processor 220 is a circuit that generates an ROI image 223A based on various data (214A and 215A) received from the receiving section 210 and generates a normal image 224A based on data (215B) received from the receiving section 210.

The receiving section 210 includes, for example, a header separator 211, a header interpreter 212, a payload separator 213, an EBD interpreter 214, and an ROI data separator 215.

The header separator 211 receives the transmission data 147A from the video transmission apparatus 100 through the data channel D1. That is, the header separator 211 receives the transmission data 147A in which the ROI information 120B about the corresponding region of interest ROI in the captured image 111 is included in the embedded data, and the image data (compressed image data 120A) of the corresponding region of interest ROI is included in the payload data of the long packet. The header separator 211 separates the received transmission data 147A into a frame header region R1 and a packet region R2. The header interpreter 212 specifies the position of the payload data of the long packet included in the packet region R2 based on the data (specifically, embedded data) included in the frame header region R1. The payload separator 213 separates the payload data of the long packet included in the packet region R2 from the packet region R2 based on the position of the payload data of the long packet designated by the header interpreter 212.

The EBD interpreter 214 outputs the embedded data as EBD data 214A to the information processor 220. The EBD interpreter 214 also determines whether the image data included in the payload data of the long packet is the compressed image data 120A of the image data 116 of the ROI or the compressed image data 130A of the normal image data according to the type of data included in the embedded data. The EBD interpreter 214 outputs the result of this determination to the ROI data separator 215.

In the case where the image data included in the payload data of the long packet is the compressed image data 120A of the image data 116 of the ROI, the ROI data separator 215 outputs the payload data of the long packet to the information processor 220 (specifically, the ROI decoder 222) as the payload data 215A. In the case where the image data included in the payload data is the compressed image data 130A of the normal image data, the ROI data separator 215 outputs the payload data of the long packet to the information processor 220 (specifically, the normal image decoder 224) as the payload data 215B. In the case where the ROI information 120B is included in the payload data of the long packet, the payload data 215A includes the ROI information 120B and pixel data of one line in the compressed image data 147B.

The information processor 220 extracts the ROI information 120B from the embedded data contained in the EBD data 214A. The information processor 220 extracts an image (ROI image 112) of each region of interest ROI in the captured image 111 from the payload data of the long packet included in the transmission data 147A received by the receiving section 210 based on the ROI information 120B extracted by the information extractor 221. The information processor 220 includes, for example, an information extractor 221, an ROI decoder 222, an ROI image generator 223, and a normal image decoder 224.
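The receiver-side recovery of the ROI information 120B from the embedded data can be sketched as follows. The excerpt does not specify the on-wire encoding of the embedded data, so JSON stands in for it here purely for illustration; the function name is hypothetical:

```python
import json

def extract_roi_info(embedded_data: bytes) -> dict:
    """Recover ROI information 120B from the embedded data carried in the
    frame header region R1 (EBD data 214A). JSON is a stand-in for the
    unspecified real encoding."""
    return json.loads(embedded_data.decode("utf-8"))
```

With the parsed fields (ROI count, region numbers, lengths, formats), the information processor can then locate each ROI's pixel rows inside the long-packet payload data.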

The normal image decoder 224 decodes the payload data 215B to generate a normal image 224A. The ROI decoder 222 decodes the compressed image data 147B included in the payload data 215A to generate image data 222A. Image data 222A includes one or more transmitted images 116.

The information extractor 221 extracts the ROI information 120B from the embedded data contained in the EBD data 214A. The information extractor 221 extracts, from the embedded data included in the EBD data 214A, for example, the number of the regions of interest ROI included in the captured image 111, the region number (or priority 115) of each region of interest ROI, the data length of each region of interest ROI, and the image format of each region of interest ROI. That is, the transmission data 147A includes, as discrimination information, the region number (or priority 115) of the region of interest ROI corresponding to each transmission image 116, which makes it possible to determine, for each of the plurality of transmission images 116 obtained from the transmission data 147A, whether the image 118 of the overlap region ROO has been omitted from that transmission image 116.

The ROI image generator 223 detects an overlapping region ROO where two or more ROIs overlap with each other based on the ROI information 120B obtained by the information extractor 221.

The information extractor 221 extracts, for example, the coordinates (e.g., the upper left coordinates (Xa1, Ya1)), the lengths (e.g., the physical region lengths XLa1 and YLa1), and the region number 1 (or priority 115 (= 1)) of the region of interest ROI corresponding to the ROI image 112a1 from the embedded data contained in the EBD data 214A. The information extractor 221 also extracts, for example, the coordinates (e.g., the upper left coordinates (Xa2, Ya2)), the lengths (e.g., the physical region lengths XLa2 and YLa2), and the region number 2 (or priority 115 (= 2)) of the region of interest ROI corresponding to the ROI image 112a2 from the embedded data contained in the EBD data 214A.

At this time, the ROI image generator 223 derives the position information 114 of the overlap region ROO based on the information thus extracted (hereinafter referred to as "extraction information 221A"). The ROI image generator 223 derives, for example, the coordinates (e.g., the upper left coordinates (Xb1, Yb1)) and the lengths (e.g., the physical region lengths XLb1 and YLb1) of the overlap region ROO as the position information 114 of the overlap region ROO.
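The derivation of the overlap region ROO is, in effect, a rectangle intersection of the two ROIs' position information. A minimal sketch, with ROIs given as (x, y, width, height) tuples and a hypothetical function name:

```python
def overlap_region(roi1, roi2):
    """Derive position information 114 of the overlap region ROO from two
    ROIs given as (x, y, width, height). Returns (Xb1, Yb1, XLb1, YLb1),
    or None when the ROIs do not overlap."""
    x1, y1, w1, h1 = roi1
    x2, y2, w2, h2 = roi2
    xb = max(x1, x2)                 # left edge of the intersection
    yb = max(y1, y2)                 # top edge of the intersection
    xe = min(x1 + w1, x2 + w2)       # right edge
    ye = min(y1 + h1, y2 + h2)       # bottom edge
    if xe <= xb or ye <= yb:
        return None                  # empty intersection: no overlap region
    return (xb, yb, xe - xb, ye - yb)
```

For example, ROIs at (0, 0, 100, 100) and (50, 50, 100, 100) yield an overlap region of (50, 50, 50, 50).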

Instead of obtaining the ROI information 120B from the embedded data contained in the EBD data 214A, the ROI image generator 223 may obtain the ROI information 120B from the payload data 215A. In this case, the ROI image generator 223 may detect an overlap region ROO where two or more regions of interest ROI overlap each other based on the ROI information 120B included in the payload data 215A. Further, the ROI image generator 223 may extract the extraction information 221A from the ROI information 120B included in the payload data 215A, and may derive the position information 114 of the overlap region ROO based on the extraction information 221A thus extracted.

The ROI image generator 223 also generates images (ROI images 112A1 and 112A2) of the corresponding region of interest ROI in the captured image 111 based on the image data 222A, the extraction information 221A, and the position information 114 of the overlap region ROO. The ROI image generator 223 outputs the generated image as an ROI image 223A.

[ Process ]

Next, a description is given of an example of a data transmission process in the video transmission system 1 with reference to fig. 3 and 9.

First, the imaging section 110 outputs a captured image 111 (digital image data) obtained by imaging to the image processor 120. The ROI cutting section 121 specifies the two regions of interest ROI1 and ROI2 included in the captured image 111 input from the imaging section 110. The ROI cutting section 121 cuts out the images of the regions of interest ROI1 and ROI2 (the ROI images 112a1 and 112a2) from the captured image 111. The ROI cutting section 121 assigns the region number 1 as an identifier of the region of interest ROI1, and assigns the region number 2 as an identifier of the region of interest ROI2.

The ROI interpreter 122 derives, for each region of interest ROI, the position information 113 of the region of interest ROI in the captured image 111. Based on the region of interest ROI1, the ROI interpreter 122 derives the upper left coordinates (Xa1, Ya1) of the region of interest ROI1, the length (XLa1) of the region of interest ROI1 in the X-axis direction, and the length (YLa1) of the region of interest ROI1 in the Y-axis direction. Based on the region of interest ROI2, the ROI interpreter 122 derives the upper left coordinates (Xa2, Ya2) of the region of interest ROI2, the length (XLa2) of the region of interest ROI2 in the X-axis direction, and the length (YLa2) of the region of interest ROI2 in the Y-axis direction.

The overlap detector 123 detects an overlap region ROO where the two regions of interest ROI1 and ROI2 overlap each other based on the position information 113 of the two regions of interest ROI1 and ROI2 in the captured image 111. That is, the overlap detector 123 derives the position information 114 of the overlap area ROO in the captured image 111. The overlap detector 123 derives the upper left coordinate (Xb1, Yb1) of the overlap region ROO, the length (XLb1) of the overlap region ROO in the X-axis direction, and the length (YLb1) of the overlap region ROO in the Y-axis direction as the position information 114 of the overlap region ROO in the captured image 111.

The priority setting section 124 assigns 1 as the priority 115 to one of the regions of interest ROI1 and ROI2, i.e., the region of interest ROI1, and assigns 2 as the priority 115 to the other, i.e., the region of interest ROI2.

The encoder 125 generates the two transmission images 116a1 and 116a2 by omitting the image 118 of the overlap region ROO from the two ROI images 112a1 and 112a2 obtained from the captured image 111, so as to avoid the image 118 being redundantly included in the two regions of interest ROI1 and ROI2.

The encoder 125 determines from which of the two ROI images 112a1 and 112a2 the image 118 is omitted based on the region numbers (or priorities 115) of the two regions of interest ROI1 and ROI2. The encoder 125 omits the image 118 from the ROI image 112a2 corresponding to the region of interest ROI2 having the larger region number (or priority 115) of the two regions of interest ROI1 and ROI2, thereby generating the transmission image 116a2. For the ROI image 112a1 corresponding to the region of interest ROI1 having the smaller region number (or priority 115) of the two regions of interest ROI1 and ROI2, the encoder 125 regards the ROI image 112a1 itself as the transmission image 116a1.
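Omitting the overlap from the lower-priority ROI can be sketched as dropping the overlapping pixels from its rows before transmission. The row-of-lists image representation and the function name are illustrative assumptions:

```python
def omit_overlap_from_rows(rows, ox: int, oy: int, ow: int, oh: int):
    """Remove the pixels of the overlap region ROO (given in the ROI's
    local coordinates: ox, oy, ow, oh) from the rows of the ROI image
    with the larger region number, yielding its transmission image so
    the overlapping pixels are not sent twice."""
    out = []
    for r, row in enumerate(rows):
        if oy <= r < oy + oh:
            out.append(row[:ox] + row[ox + ow:])  # drop the overlapping span
        else:
            out.append(row[:])                    # row untouched by the overlap
    return out
```

The higher-priority ROI's rows are transmitted unmodified, matching the treatment of the ROI image 112a1 above.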

The image processing controller 126 generates the ROI information 120B and the frame information 120C, and transmits the ROI information 120B and the frame information 120C to the transmission section 140. The transmission section 140 generates the transmission data 147A based on various data (120A, 120B, 120C, and 130A) input from the image processors 120 and 130. The transmission section 140 transmits the generated transmission data 147A to the video receiving apparatus 200 through the data channel D1.

The receiving section 210 receives the transmission data 147A output from the video transmitting apparatus 100 through the data channel D1. The receiving section 210 performs predetermined processing on the received transmission data 147A to generate the EBD data 214A and the payload data 215A, and outputs the EBD data 214A and the payload data 215A to the information processor 220.

The information extractor 221 extracts the ROI information 120B from the embedded data contained in the EBD data 214A. The information extractor 221 extracts the coordinates (e.g., the upper left coordinates (Xa1, Ya1)), the lengths (e.g., the physical region lengths XLa1 and YLa1), and the region number 1 (or priority 115 (= 1)) of the region of interest ROI corresponding to the ROI image 112a1 from the embedded data contained in the EBD data 214A. The information extractor 221 also extracts the coordinates (e.g., the upper left coordinates (Xa2, Ya2)), the lengths (e.g., the physical region lengths XLa2 and YLa2), and the region number 2 (or priority 115 (= 2)) of the region of interest ROI corresponding to the ROI image 112a2. The ROI decoder 222 decodes the compressed image data 147B included in the payload data 215A to generate the image data 222A.

The ROI image generator 223 derives the position information 114 of the overlap region ROO based on the information thus extracted (extraction information 221A). The ROI image generator 223 derives, for example, the coordinates (e.g., the upper left coordinates (Xb1, Yb1)) and the lengths (e.g., the physical region lengths XLb1 and YLb1) of the overlap region ROO as the position information 114 of the overlap region ROO described above. The ROI image generator 223 also generates the images (the ROI images 112a1 and 112a2) of the respective regions of interest ROI in the captured image 111 based on the image data 222A, the extraction information 221A, and the position information 114 of the overlap region ROO.

[ Effect ]

Next, effects of the video transmission system 1 according to the present embodiment are described.

In recent years, applications that transmit large volumes of data have been increasing. Such applications are likely to place a heavy load on the transmission system; in the worst case, the transmission system may go down and data transmission may become impossible.

In order to prevent the transmission system from going down, for example, instead of transmitting an entire captured image, only a partial image obtained by specifying an object to be imaged and cutting out the specified object is transmitted.

Incidentally, as a system for transmission from the image sensor to the application processor, MIPI CSI-2 is used in some cases. In the case of transmitting the ROI using the system, transmission of the ROI may not be easy due to various restrictions.

In contrast, in the present embodiment, the ROI information 120B about each region of interest ROI in the captured image 111 is transmitted as embedded data, and the image data of each region of interest ROI is transmitted in the payload data of a long packet. This makes it possible to easily extract the image data (ROI image 112) of each region of interest ROI from the transmission data 147A in the apparatus (video receiving apparatus 200) that has received the transmission data 147A transmitted from the video transmitting apparatus 100. As a result, the region of interest ROI can be transmitted even under various restrictions.

In addition, in the present embodiment, the image data (compressed image data 120A) of each region of interest ROI is transmitted through a virtual channel common to each other. This makes it possible to transmit a plurality of ROI images 112 in the same packet without interposing the LP mode between them, which makes it possible to achieve high transmission efficiency when transmitting the plurality of ROI images 112.

Further, in the present embodiment, the data type of each region of interest ROI is put in the packet header PH of the payload data of the long packet and transmitted. Thus, the data type of each region of interest ROI can be obtained merely by accessing the packet header PH of the payload data of the long packet, without accessing the embedded data. This makes it possible to increase the processing speed in the video receiving apparatus 200, which makes it possible to achieve high transmission efficiency.

Further, in the present embodiment, in the case where the ROI information 120B is put in the payload data of the long packet and transmitted, the ROI information 120B can be obtained merely by accessing the payload data of the long packet, without accessing the embedded data. This makes it possible to increase the processing speed in the video receiving apparatus 200, which makes it possible to achieve high transmission efficiency.

Further, in the present embodiment, the ROI information 120B about the corresponding region of interest ROI is extracted from the embedded data included in the transmission data 147A, and an image (ROI image 112) of each region of interest ROI is extracted from the payload data of the long packet included in the transmission data 147A based on the extracted ROI information 120B. This makes it possible to easily extract an image (ROI image 112) of each region of interest ROI from the transmission data 147A. As a result, the region of interest ROI can be transmitted even under various restrictions.

<2. modification >

[ modification A ]

Fig. 10 shows a modification of the configuration of the video receiving apparatus 200 included in the communication system 1 according to the foregoing embodiment. In the video receiving apparatus 200 according to the present modification, the ROI data separator 215 is omitted, and the payload separator 213 outputs the payload data 215A or the payload data 215B.

In the present modification, the payload separator 213 determines whether the image data included in the payload data of the long packet is the compressed image data 120A of the image data 116 of the ROI or the compressed image data 130A of the normal image data, according to the data type (the data type of each region of interest ROI) included in the packet header PH of the payload data of the long packet. As a result, in the case where the image data included in the payload data of the long packet is the compressed image data 120A of the image data 116 of the ROI, the payload separator 213 outputs the payload data of the long packet to the information processor 220 (specifically, the ROI decoder 222) as the payload data 215A. In the case where the image data included in the payload data of the long packet is the compressed image data 130A of the normal image data, the payload separator 213 outputs the payload data of the long packet to the information processor 220 (specifically, the normal image decoder 224) as the payload data 215B.
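The payload separator's branching on the packet-header data type can be sketched as a simple dispatch. The data-type codes and return labels here are hypothetical placeholders, not values defined by the excerpt:

```python
ROI_DATA_TYPE = 0x30     # hypothetical code for ROI format data
NORMAL_DATA_TYPE = 0x2A  # hypothetical code for normal format data

def route_payload(data_type: int, payload: bytes):
    """Route a long-packet payload by the data type carried in the packet
    header PH: ROI data goes to the ROI decoder path (payload data 215A),
    normal image data to the normal image decoder path (payload data 215B)."""
    if data_type == ROI_DATA_TYPE:
        return ("payload_215A", payload)
    return ("payload_215B", payload)
```

Because the decision uses only the 4-byte header, the separator never needs to parse the embedded data, which is the point of this modification.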

In the present modification, the data type of each region of interest ROI can be determined merely by accessing the packet header PH of the payload data of the long packet, without accessing the embedded data. This makes it possible to increase the processing speed in the video receiving apparatus 200, which makes it possible to achieve high transmission efficiency.

[ modification B ]

In the communication system 1 according to the foregoing embodiment, the packet header PH of the payload data of the long packet may not include the data type (data type of each region of interest ROI). Even in this case, it can be determined from the data type included in the embedded data whether the image data included in the payload data of the long packet is the compressed image data 120A of the image data 116 of the region of interest ROI or the compressed image data 130A of the normal image data. This causes a reduction in the data size of the packet header PH, which makes it possible to reduce the transmission capacity.

[ modification C ]

In the communication system 1 according to the foregoing embodiment, the synthesizer 147 transmits the image data (compressed image data 120A) of each region of interest ROI through a virtual channel common to each other. However, the synthesizer 147 may transmit the image data (compressed image data 120A) of each region of interest ROI through virtual channels different from each other. In this case, however, for example, as shown in fig. 11, the low power mode LP is included between the payload data of two long packets corresponding to different regions of interest ROI.

Incidentally, including the low power mode LP between the payload data of the two long packets corresponding to the different regions of interest ROI means that a process of separating the payload data of the two long packets corresponding to the different regions of interest ROI is unnecessary. In the present modification, this makes it possible to eliminate the processing time required for such a separation process.

[ modification D ]

In the communication system 1 according to the foregoing embodiment, in some cases, the synthesizer 147 puts the ROI information 120B in the payload data of the long packet in addition to the pixel data of one line in the compressed image data 147B. However, in the communication system 1 according to the foregoing embodiment, for example, the synthesizer 147 may put the ROI information 120B in the data field SP of one or more short packets disposed before the payload data of the long packet, and transmit the ROI information 120B. For example, the synthesizer 147 may put at least one of the number of the regions of interest ROI (the number of ROIs), the region number (or priority 115) of each region of interest ROI, the data length of each region of interest ROI, or the image format of each region of interest ROI in the data field SP of one or more short packets disposed before the payload data of the long packet, and transmit at least one thereof. For example, as shown in fig. 12, the synthesizer 147 may put the number of the regions of interest ROI (the number of ROIs), the region number (or priority 115) of each region of interest ROI, the data length of each region of interest ROI, and the image format of each region of interest ROI into the data field SP of one or more short packets disposed before the payload data of the long packet, and transmit them.
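Distributing the ROI information over short-packet data fields can be sketched as packing each value into a 16-bit field, which matches the size of a short packet's data field in MIPI CSI-2. The assignment of values to fields and the function name are illustrative assumptions:

```python
import struct

def pack_short_packet_data_fields(num_rois: int, rois):
    """Pack the ROI count plus, per ROI, (region number, data length,
    image format code) into a list of 16-bit short-packet data fields SP.
    The field assignment is illustrative, not the layout of fig. 12."""
    fields = [num_rois & 0xFFFF]
    for region_number, data_length, fmt in rois:
        fields += [region_number & 0xFFFF, data_length & 0xFFFF, fmt & 0xFFFF]
    return [struct.pack("<H", f) for f in fields]  # one 2-byte field per value
```

Each returned element would occupy the data field of one short packet disposed before the long-packet payload data.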

In the present modification, the ROI information 120B can be obtained only by accessing the data field SP of one or more short packets disposed before the payload data of the long packet, without accessing the embedded data. This makes it possible to increase the processing speed in the video receiving apparatus 200, which makes it possible to achieve high transmission efficiency.
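A minimal sketch of how the ROI information listed above might be serialized into 16-bit short-packet data fields. This is an illustration under assumed field widths, not the patent's actual encoding; `pack_roi_short_packets` and the dictionary keys are hypothetical names:

```python
import struct

def pack_roi_short_packets(rois):
    """Encode the number of ROIs, followed by the region number, data
    length, and image format of each ROI, as one 16-bit data field per
    short packet (field layout assumed for illustration)."""
    fields = [len(rois)]                       # number of ROIs first
    for r in rois:
        fields += [r["region"], r["length"], r["fmt"]]
    return [struct.pack("<H", f) for f in fields]

sp = pack_roi_short_packets([
    {"region": 0, "length": 1280, "fmt": 0x2A},
    {"region": 1, "length": 640,  "fmt": 0x2A},
])
assert struct.unpack("<H", sp[0])[0] == 2      # number of ROIs
assert struct.unpack("<H", sp[2])[0] == 1280   # data length of the first ROI
```

Placing these fields before the long-packet payload is what allows the receiver to read the ROI information without parsing the embedded data.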

[ modification E ]

In the communication system 1 according to the foregoing embodiment and its modifications (modifications A to D), the transmission data 147A is generated using the compressed image data 147B corresponding to the plurality of transmission images 116 obtained by omitting the images 118 from the plurality of ROI images 112. However, in the communication system 1 according to the foregoing embodiment and its modifications (modifications A to D), the transmission data 147A may be generated using the compressed image data 120A corresponding to the respective ROI images 112, regardless of whether an image 118 of the overlap region ROO exists among the images of the plurality of regions of interest ROI (ROI images 112). That is, in the communication system 1 according to the foregoing embodiment and its modifications (modifications A to D), the compressed image data 147B includes the compressed image data 120A corresponding to the respective ROI images 112.
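The overlap-omission step that this modification makes optional can be sketched as follows. This is a simplified illustration of the idea (not the patent's implementation); `omit_overlap` is a hypothetical name, and ROIs are given as `(x, y, w, h)` rectangles:

```python
def omit_overlap(rois):
    """For ROI rectangles (x, y, w, h), keep each pixel only in the
    first ROI that contains it, so that pixels of an overlap region
    are not transmitted twice."""
    sent = set()
    out = []
    for x, y, w, h in rois:
        keep = [(px, py)
                for py in range(y, y + h)
                for px in range(x, x + w)
                if (px, py) not in sent]
        sent.update(keep)
        out.append(keep)
    return out

# Two 2x2 ROIs overlapping in one column: the shared pixels are kept
# only in the first ROI's transmitted data.
a, b = omit_overlap([(0, 0, 2, 2), (1, 0, 2, 2)])
assert len(a) == 4 and len(b) == 2
```

In the present modification this step is skipped and each ROI image 112 is transmitted in full, which is why the ROI interpreter, overlap detector, and priority setting section can be omitted as described next.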

In this case, for example, as shown in fig. 13, the ROI interpreter 122, the overlap detector 123, and the priority setting section 124 may be omitted from the image processor 120. Even in the case where the ROI interpreter 122, the overlap detector 123, and the priority setting section 124 are omitted from the image processor 120 as described above, the transmitting section 140 transmits the ROI information 120B about the respective regions of interest ROI in the captured image 111 as embedded data, similarly to the communication system 1 according to the foregoing embodiment and its modifications (modifications A to D). The transmitting section 140 also transmits the image data (compressed image data 120A) of the respective regions of interest ROI as the payload data of the long packets. Further, the transmitting section 140 transmits the image data (compressed image data 120A) of the respective regions of interest ROI in an image data frame, and transmits the ROI information 120B about the respective regions of interest ROI in the header of the image data frame. This makes it possible to easily extract the image data (ROI image 211) of each region of interest ROI from the transmission data 147A in the apparatus (video receiving apparatus 200) that has received the transmission data 147A transmitted from the video transmitting apparatus 100. As a result, the region of interest ROI can be transmitted even under various restrictions.
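On the receiving side, the extraction described above can be sketched as follows. This is a minimal illustration under the assumption that the embedded data lists each ROI's identifier and line count, and that payload lines arrive in ROI order; `extract_rois` and the dictionary keys are hypothetical names:

```python
def extract_rois(embedded, payload_lines):
    """Use the ROI information carried in the embedded data to slice
    each ROI's lines back out of the long-packet payload data."""
    images, idx = {}, 0
    for roi in embedded:                      # e.g. [{'id': 0, 'height': 2}, ...]
        images[roi["id"]] = payload_lines[idx: idx + roi["height"]]
        idx += roi["height"]
    return images

embedded = [{"id": 0, "height": 2}, {"id": 1, "height": 1}]
lines = [b"r0-line0", b"r0-line1", b"r1-line0"]
out = extract_rois(embedded, lines)
assert out[0] == [b"r0-line0", b"r0-line1"]
assert out[1] == [b"r1-line0"]
```

Because the embedded data fully describes each ROI, the receiver can reconstruct every ROI image without any overlap bookkeeping on the transmitting side.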

Further, in the present modification, in the case where the image data (compressed image data 120A) of the respective regions of interest ROI are transmitted through a virtual channel common to each other, the transmitting section 140 may transmit the plurality of ROI images 112 in the same group. This eliminates the need to include the low power mode LP when transmitting the plurality of ROI images 112, which makes it possible to achieve high transmission efficiency.

Further, in the present modification, in the case where the transmission section 140 transmits the image data (compressed image data 120A) of the respective regions of interest ROI through the virtual channels different from each other, the process of separating the payload data of the two long packets corresponding to the different regions of interest ROI becomes unnecessary. This makes it possible to eliminate the processing time required for such a separation process in the present modification.

Further, in the present modification, in the case where the transmitting section 140 puts the data type of each region of interest ROI in the packet header PH of the payload data of the long packet and transmits it, the data type of each region of interest ROI can be obtained only by accessing the packet header PH of the payload data of the long packet, without accessing the embedded data. This makes it possible to increase the processing speed in the video receiving apparatus 200, which makes it possible to achieve high transmission efficiency.
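A sketch of carrying the data type in the packet header PH. The bit layout below is simplified for illustration and does not reproduce the exact MIPI CSI-2 header format; `make_header` and `parse_packet_header` are hypothetical names:

```python
def make_header(vc, dt, wc):
    """Pack virtual channel (2 bits), data type (6 bits), and word
    count (16 bits) into one 32-bit header word (layout assumed)."""
    return (vc << 30) | (dt << 24) | (wc << 8)

def parse_packet_header(header):
    """Recover virtual channel, data type, and word count from the
    32-bit header word, without touching the embedded data."""
    vc = (header >> 30) & 0x3
    dt = (header >> 24) & 0x3F
    wc = (header >> 8) & 0xFFFF
    return vc, dt, wc

h = make_header(1, 0x2A, 1280)
assert parse_packet_header(h) == (1, 0x2A, 1280)
```

Reading the data type from the header alone is what lets the receiver route each long packet's payload to the right ROI quickly.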

Further, in the present modification, in the case where the transmitting section 140 puts the ROI information 120B in the payload data of the long packet and transmits the ROI information 120B, the ROI information 120B can be obtained only by accessing the payload data of the long packet, without accessing the embedded data. This makes it possible to increase the processing speed in the video receiving apparatus 200, which makes it possible to achieve high transmission efficiency.

Further, in the present modification, in the case where the transmitting section 140 puts the ROI information 120B in the data field SP of one or more short packets disposed before the payload data of the long packet and transmits the ROI information 120B, the ROI information 120B can be obtained only by accessing the data field SP of one or more short packets disposed before the payload data of the long packet, without accessing the embedded data. This makes it possible to increase the processing speed in the video receiving apparatus 200, which makes it possible to achieve high transmission efficiency.

Further, in the present modification, in the case where the ROI information 120B on the respective regions of interest ROI is extracted from the embedded data included in the transmission data 147A, and the image (ROI image 112) of each region of interest ROI is extracted from the payload data of the long packets included in the transmission data 147A on the basis of the extracted ROI information 120B, the image (ROI image 112) of each region of interest ROI can be easily extracted from the transmission data 147A. As a result, the region of interest ROI can be transmitted even under various restrictions.

[ modification F ]

In the communication system 1 according to the foregoing embodiment and its modifications (modifications A to E), the transmitting section 140 transmits the ROI information 120B about the respective regions of interest ROI in the frame header (frame header region R1) of the image data frame. However, in the communication system 1 according to the foregoing embodiment and its modifications (modifications A to E), the transmitting section 140 may transmit the ROI information 120B about the respective regions of interest ROI in the frame trailer (frame trailer region R3) of the image data frame. For example, as shown in fig. 14, in the case where the ROI information 120B about the respective regions of interest ROI in the captured image 111 is transmitted as embedded data, the transmitting section 140 may transmit the ROI information 120B in the frame trailer region R3. It should be noted that in fig. 14, the frame header region R1 is not shown for convenience. Even in the case of adopting the present modification, the region of interest ROI can be transmitted even under various restrictions.
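The choice between carrying the ROI information in the frame header region R1 and the frame trailer region R3 can be sketched as follows. This is a schematic illustration only; `build_frame` and its arguments are hypothetical names:

```python
def build_frame(payload_packets, roi_info, in_trailer=True):
    """Assemble an image data frame: the packet region R2 carries the
    long-packet payload data, while the ROI information is placed as
    embedded data in either the frame header region R1 or the frame
    trailer region R3."""
    frame = {"R1": [], "R2": payload_packets, "R3": []}
    (frame["R3"] if in_trailer else frame["R1"]).append(roi_info)
    return frame

# Modification F: ROI information in the frame trailer region R3.
f = build_frame([b"line0", b"line1"], {"rois": 2}, in_trailer=True)
assert f["R3"] == [{"rois": 2}] and f["R1"] == []
```

Either placement carries the same ROI information; as modification G notes next, whichever region is unused may then be omitted from the frame.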

[ modification G ]

In the communication system 1 according to the foregoing embodiment and its modifications (modifications A to F), the image data frame includes the frame header region R1, the packet region R2, and the frame trailer region R3. However, in the communication system 1 according to the foregoing embodiment and its modifications (modifications A to F), the image data frame need not include the frame trailer region R3. In addition, in the foregoing modification F, the image data frame need not include the frame header region R1.

Although the present disclosure has been described with reference to the embodiments and the modifications thereof, the present disclosure is not limited to the foregoing embodiments and the like, and may be modified in various ways. It should be noted that the effects described in this specification are merely illustrative. The effects of the present disclosure are not limited to the effects described in the present specification. The present disclosure may have effects different from those described in the present specification.

Further, for example, the present disclosure may have the following configuration.

(1) A video transmitting apparatus comprising:

a transmitting section that transmits image data of an ROI (region of interest) in an image as payload data of a long packet and transmits information on the ROI as embedded data.

(2) The video transmission apparatus according to (1), wherein the transmission section transmits the image data of the respective ROIs through virtual channels different from each other.

(3) The video transmission apparatus according to (1), wherein the transmission section transmits the image data of the respective ROIs through a virtual channel common to each other.

(4) The video transmission apparatus according to (3), wherein the transmission section puts the data type of the corresponding ROI in a packet header of the payload data for transmission.

(5) The video transmission apparatus according to (3), wherein the transmission section transmits at least one of the number of ROIs included in the image, the region number of each ROI, the data length of each ROI, or the image format of each ROI in the payload data.

(6) The video transmission apparatus according to (3), wherein the transmission section transmits at least one of the number of ROIs included in the image, the region number of each ROI, the data length of each ROI, or the image format of each ROI in a short packet.

(7) The video transmission apparatus according to any one of (1) to (6), wherein the transmission section transmits the image data of the ROI in the image data frame, and transmits the information on the ROI in a frame header or a frame trailer of the image data frame.

(8) The video transmission apparatus according to any one of (1) to (7), wherein the transmission section transmits a signal conforming to the MIPI (Mobile Industry Processor Interface) CSI (Camera Serial Interface)-2 specification, the MIPI CSI-3 specification, or the MIPI DSI (Display Serial Interface) specification.

(9) A video transmitting apparatus comprising:

a detector that detects an overlapping region where two or more ROIs overlap with each other based on information on corresponding ROIs (regions of interest) in the image; and

a transmitting section that transmits a plurality of pieces of third image data as payload data of long packets and transmits information on the respective ROIs in the image as embedded data, the plurality of pieces of third image data being obtained by omitting second image data of the overlapping region from a plurality of pieces of first image data of the ROIs in the image so that the second image data is not redundantly included in the plurality of pieces of first image data.

(10) The video transmission apparatus according to (9), wherein the transmission section transmits the respective ROIs through virtual channels different from each other.

(11) The video transmission apparatus according to (9), wherein the transmission section transmits the respective ROIs through a virtual channel common to each other.

(12) The video transmission apparatus according to (11), wherein the transmission section puts the data type of the corresponding ROI in a packet header of the payload data for transmission.

(13) The video transmission apparatus according to (11), wherein the transmission section transmits at least one of the number of ROIs included in the image, the region number of each ROI, the data length of each ROI, or the image format of each ROI in the payload data.

(14) The video transmission apparatus according to (9), wherein the transmission section transmits at least one of the number of ROIs included in the image, the region number of each ROI, the data length of each ROI, or the image format of each ROI in the data field of the short packet.

(15) The video transmission apparatus according to any one of (9) to (14), wherein the transmission section transmits the image data of the ROI by the image data frame, and transmits the information on the ROI by a frame header or a frame trailer of the image data frame.

(16) The video transmission apparatus according to any one of (9) to (15), wherein the transmission section transmits a signal conforming to the MIPI (Mobile Industry Processor Interface) CSI (Camera Serial Interface)-2 specification, the MIPI CSI-3 specification, or the MIPI DSI (Display Serial Interface) specification.

(17) A video receiving apparatus comprising:

a receiving section that receives a transmission signal including image data of an ROI (region of interest) in an image and information on the ROI, the image data of the ROI being included in payload data of a long packet, the information on the ROI being included in embedded data; and

an information processor which extracts information about the ROI from the embedded data included in the transmission signal received by the receiving section, and extracts image data of the ROI from payload data included in the transmission signal received by the receiving section based on the extracted information.

(18) The video receiving apparatus according to (17), wherein the information processor detects an overlapping region where two or more ROIs overlap with each other based on the extracted information, and extracts image data of the respective ROIs from payload data included in the transmission signal received by the receiving section based on the extracted information and the information of the detected overlapping region.

(19) The video receiving apparatus according to (17) or (18), wherein the receiving section receives a signal conforming to the MIPI (Mobile Industry Processor Interface) CSI (Camera Serial Interface)-2 specification, the MIPI CSI-3 specification, or the MIPI DSI (Display Serial Interface) specification.

This application claims the benefit of Japanese Priority Patent Application JP2017-114690 filed with the Japan Patent Office on June 9, 2017, the entire contents of which are incorporated herein by reference.

It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may be made according to design requirements and other factors insofar as they come within the scope of the appended claims or the equivalents thereof.
