Sensor device, electronic device, sensor system, and control method


Description: This technology, "Sensor device, electronic device, sensor system, and control method", was designed and created by Seta Shoji (瀬田涉二) on 2019-10-23. Abstract: According to the present invention, the sensor device obtains accurate information even when there is deterioration in the sensor. The sensor device includes: a sensor (11) that obtains sensor information; a Field Programmable Gate Array (FPGA) (12) that performs predetermined processing on sensor information obtained by the sensor; and a memory (15) for storing data for causing the FPGA to perform the predetermined processing.

1. A sensor device, comprising:

a sensor that obtains sensor information;

an FPGA (field programmable gate array) that performs predetermined processing on the sensor information obtained by the sensor; and

a memory for storing data for causing the FPGA to perform the predetermined processing.

2. The sensor device according to claim 1, wherein the data stored in the memory is updated according to the analysis result of the sensor information.

3. The sensor device of claim 1, further comprising:

a transmission unit that transmits the sensor information that has been subjected to the predetermined processing to a predetermined network; and

a receiving unit that receives update data for updating the FPGA, the update data being generated according to an analysis result of the sensor information transmitted to the predetermined network, wherein

the data stored in the memory is updated with the update data.

4. The sensor device of claim 3, wherein

the transmitting unit transmits the sensor information to the predetermined network using wireless communication, and

the receiving unit receives the update data from the predetermined network using wireless communication.

5. The sensor device of claim 4, further comprising:

an encoding unit that encodes the sensor information; and

a decoding unit that decodes the update data.

6. The sensor device of claim 1, further comprising a processor that

analyzes the sensor information,

generates update data for updating the FPGA according to an analysis result, and

updates the data stored in the memory with the generated update data.

7. The sensor device of claim 6, further comprising a DNN (deep neural network) circuit that analyzes the sensor information by performing machine learning, wherein

the processor analyzes the sensor information based on results of the machine learning by the DNN circuit.

8. The sensor device of claim 1, further comprising:

a transmission unit that transmits the sensor information that has been subjected to the predetermined processing to a predetermined network;

a receiving unit that receives update data for updating the FPGA, the update data being generated according to an analysis result of the sensor information transmitted to the predetermined network;

a processor that analyzes the sensor information and generates update data for updating the FPGA according to the analysis result; and

a switching unit that switches between transmitting the sensor information to the predetermined network using the transmitting unit and inputting the sensor information to the processor, wherein

the data stored in the memory is updated with the update data received by the receiving unit or with the update data generated by the processor.

9. The sensor device of claim 1, wherein

the sensor information represents image data, and

the sensor comprises

a light receiving unit having a plurality of photoelectric conversion elements, and

a signal processing circuit that reads the image data from the light receiving unit.

10. The sensor device according to claim 9, wherein the predetermined process includes at least one of black level processing, defect correction, shading correction, distortion correction, auto exposure, auto focus, auto white balance adjustment, synchronization processing, linear matrix processing, gamma correction, luminance color separation, edge enhancement processing, chromatic aberration matrix processing, and resizing/scaling.

11. The sensor device of claim 1, wherein the data comprises

circuit data for embedding, in the FPGA, a circuit configuration that performs the predetermined processing, and

setting data containing parameters to be set in the circuit configuration.

12. The sensor device of claim 1, further comprising a processor that cooperates with the FPGA to perform the predetermined processing.

13. The sensor device of claim 1, further comprising:

a first chip on which the sensor is mounted;

a second chip on which the FPGA is mounted; and

a third chip on which the memory is mounted, wherein

the sensor device has a stacked structure formed by stacking the first chip to the third chip.

14. The sensor device of claim 13, wherein the third chip is positioned between the first chip and the second chip.

15. The sensor device according to claim 13, further comprising a fourth chip on which a processor that performs the predetermined processing in cooperation with the FPGA is mounted, wherein

the stacked structure is formed by stacking the first chip to the fourth chip.

16. The sensor device of claim 15, wherein

the first chip is positioned as the topmost layer of the stacked structure, and

the fourth chip is positioned as the lowermost layer of the stacked structure.

17. The sensor device of claim 13, wherein

the sensor information represents image data,

the sensor comprises

a light receiving unit having a plurality of photoelectric conversion elements, and

a signal processing circuit that reads the image data from the light receiving unit, and

the first chip comprises

a fifth chip on which the light receiving unit is mounted, and

a sixth chip on which the signal processing circuit is mounted.

18. An electronic device, comprising:

a sensor that obtains sensor information;

an FPGA that performs predetermined processing on the sensor information obtained by the sensor; and

a memory for storing data for causing the FPGA to perform the predetermined processing.

19. A sensor system in which an electronic device and a server are connected via a predetermined network, wherein

the electronic device includes:

a sensor that obtains sensor information,

an FPGA that performs predetermined processing on the sensor information obtained by the sensor,

a memory for storing data for causing the FPGA to perform the predetermined processing,

a transmission unit that transmits the sensor information that has undergone the predetermined processing to the predetermined network, and

a receiving unit that receives update data for updating the FPGA, the update data being generated according to an analysis result of the sensor information transmitted to the predetermined network,

the server

analyzes the sensor information received from the electronic device via the predetermined network,

generates update data for updating the FPGA according to the analysis result, and

transmits the generated update data to the predetermined network, and

the data stored in the memory is updated with the update data received by the receiving unit via the predetermined network.

20. A control method, comprising:

a step for analyzing sensor information obtained by a sensor; and

a step for modifying, according to an analysis result of the sensor information, at least one of a circuit configuration of an FPGA that performs predetermined processing on the sensor information and a set value of the circuit configuration.

Technical Field

The present disclosure relates to a sensor device, an electronic device, a sensor system, and a control method.

Background

In recent years, as the IoT (Internet of Things) has spread through society, "things" such as sensors and devices exchange information by connecting to clouds, fog, and servers via the Internet, and developing systems in which those "things" control one another has become increasingly common. In addition, systems that provide various services to users by using big data collected from the IoT have been actively developed.

Reference list

Patent document

Patent document 1: JP 2000-235644A

Patent document 2: JP 2018-26682A

Disclosure of Invention

Technical problem

However, not only in IoT applications but also whenever information is obtained using a sensor such as a camera, a problem arises in that accurate information cannot be collected due to deterioration of the sensor caused by use and aging.

In this regard, the present disclosure proposes a sensor device, an electronic device, a sensor system, and a control method capable of obtaining accurate information even when there is sensor deterioration.

Solution to the problem

In order to solve the above problem, a sensor device according to the present disclosure includes: a sensor that obtains sensor information; an FPGA (Field-Programmable Gate Array) that performs predetermined processing on the sensor information obtained by the sensor; and a memory for storing data for causing the FPGA to perform the predetermined processing.

Drawings

Fig. 1 is a schematic diagram showing an exemplary schematic configuration of a sensor system 1 according to a first embodiment.

Fig. 2 is a block diagram showing an exemplary schematic configuration of a communication device representing an electronic device according to the first embodiment.

Fig. 3 is a schematic diagram showing a chip configuration of an image sensor according to the first embodiment.

Fig. 4 is a diagram showing another configuration example of the flexible logic chip according to the first embodiment.

Fig. 5 is a diagram showing still another configuration example of the flexible logic chip according to the first embodiment.

Fig. 6 is a diagram for explaining an operation performed by the image sensor according to the first embodiment.

Fig. 7 is a flowchart for explaining an example of an overall operation performed in the communication apparatus according to the first embodiment.

Fig. 8 is a flowchart for explaining an example of an overall operation performed in the server according to the first embodiment.

Fig. 9 is a diagram showing an example of items to be modified according to the first embodiment.

Fig. 10 is a block diagram showing a conventional device configuration.

Fig. 11 is a diagram for explaining a flow of data processing performed in the device configuration shown in fig. 10.

Fig. 12 is a diagram showing clock cycle counts required when processing 1000 sets of data in the device configuration shown in fig. 10.

Fig. 13 is a block diagram showing the device configuration of the image sensor according to the first embodiment.

Fig. 14 is a diagram for explaining a flow of data processing performed in the image sensor according to the first embodiment.

Fig. 15 is a diagram showing clock cycle counts required when processing 1000 sets of data in the image sensor according to the first embodiment.

Fig. 16 is a block diagram showing an exemplary schematic configuration of a communication apparatus according to the second embodiment.

Fig. 17 is a schematic diagram showing a chip configuration of an image sensor according to a second embodiment.

Fig. 18 is a block diagram showing an exemplary schematic configuration of a communication apparatus according to the third embodiment.

Fig. 19 is a schematic diagram showing a chip configuration of an image sensor according to a third embodiment.

Fig. 20 is a block diagram showing an exemplary schematic configuration of a communication apparatus according to the fourth embodiment.

Fig. 21 is a schematic diagram showing a chip configuration of an image sensor according to a fourth embodiment.

Fig. 22 is a block diagram showing an exemplary schematic configuration of a communication apparatus according to the fifth embodiment.

Fig. 23 is a schematic diagram showing a chip configuration of an image sensor according to a fifth embodiment.

Fig. 24 is a block diagram showing an exemplary schematic configuration of a communication apparatus according to the sixth embodiment.

Fig. 25 is a block diagram showing another exemplary schematic configuration of a communication apparatus according to the sixth embodiment.

Fig. 26 is a block diagram showing still another exemplary schematic configuration of a communication apparatus according to the sixth embodiment.

Fig. 27 is a block diagram showing still another exemplary schematic configuration of a communication apparatus according to the sixth embodiment.

Fig. 28 is a block diagram showing still another exemplary schematic configuration of a communication apparatus according to the sixth embodiment.

Fig. 29 is a block diagram showing an exemplary schematic configuration of a communication apparatus according to the seventh embodiment.

Fig. 30 is a diagram for explaining an example of DNN analysis operation (machine learning operation) according to the seventh embodiment.

Fig. 31 is a flowchart for explaining an example of the overall operation performed according to the seventh embodiment.

Fig. 32 is a block diagram showing an exemplary schematic configuration of a communication apparatus according to the eighth embodiment.

Fig. 33 is a block diagram showing an exemplary schematic configuration of a vehicle control system.

Fig. 34 is an explanatory diagram showing an example of the mounting positions of the vehicle exterior information detector and the imaging unit.

Detailed Description

Preferred embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. In the embodiments described below, the same constituent elements are denoted by the same reference numerals, and redundant description thereof is omitted.

The present disclosure is described in the following sequence of items.

1. Overview

2. First embodiment

2.1 System configuration

2.2 Device configuration

2.3 Stacked structure

2.4 Operation of the image sensor

2.5 Relationship between operations and chips

2.6 Degradation correction of the image sensor

2.7 Degradation correction sequence

2.8 Analysis of image data (machine learning)

2.9 Flow of operation

2.9.1 Operation in the communication device

2.9.2 Operation in the server

2.10 Modification of setting data/circuit data

2.11 High-speed processing method

2.12 Action/effect

3. Second embodiment

3.1 Device configuration

3.2 Chip configuration

3.3 Action/effect

4. Third embodiment

4.1 Device configuration

4.2 Chip configuration

4.3 Action/effect

5. Fourth embodiment

5.1 Device configuration

5.2 Chip configuration

5.3 Action/effect

6. Fifth embodiment

6.1 Device configuration

6.2 Chip configuration

6.3 Action/effect

7. Sixth embodiment

7.1 Device configuration

7.2 Variations of the device configuration

7.3 Action/effect

8. Seventh embodiment

8.1 Device configuration

8.2 DNN analysis procedure

8.3 Flow of operation

8.4 Action/effect

9. Eighth embodiment

10. Application examples

1. Overview

Currently, devices in which a sensor such as a camera module is mounted include, for example: portable terminals such as smartphones and cellular phones; stationary imaging devices such as fixed-point cameras and surveillance cameras; mobile devices such as drones, automobiles, home robots, Factory Automation (FA) robots, monitoring robots, and autonomous robots; and medical devices. In such devices, the camera deteriorates over time as the frequency and duration of use increase. For example, due to such aged deterioration of the camera, the following problems may occur.

First, as the camera deteriorates, it becomes necessary to adjust image quality and control by modifying the setting values using a personal computer. For this reason, time and effort are required to send the camera to the manufacturer for maintenance.

Second, although image quality and control can be altered by updating the software installed in the device, covering the adjustment of each individual device in a software update is difficult because devices differ individually in how their image quality and control change.

Third, in devices such as automobiles, drones, and various types of robots that analyze image data in real time and move autonomously based on the analysis result, the setting values must be corrected in real time when the image quality deteriorates or when a failure occurs in control adjustment.

Fourth, in the medical field, for example, when the camera of a capsule endoscope deteriorates, re-examination becomes necessary, which increases the burden on the patient in terms of both physical strain and cost.

Fifth, if image quality and control need to be adjusted in the apparatus itself, an adjustment system needs to be installed in the apparatus, or the apparatus needs to be connected to an external system. This enlarges the overall system configuration, thereby increasing cost, footprint, and weight.

Sixth, in an apparatus requiring real-time image analysis, the operations of each stage are performed sequentially. In that case, because the memory is shared across stages, the processing time becomes very long when, for example, interrupt processing is instructed.

In this regard, in the embodiments described below, explanations are given with reference to an example regarding a sensor device, an electronic device, a sensor system, and a control method capable of obtaining accurate information even when there is degradation of a sensor (such as a camera) due to use and aging.

2. First embodiment

The first embodiment is described in detail below with reference to the drawings. In the first embodiment, it is assumed that the image sensor is a target sensor for degradation correction, and the communication apparatus has the image sensor mounted therein. However, the sensor is not limited to the image sensor, and various types of sensors such as a temperature sensor, a humidity sensor, or a radiation dosimeter may be used.

2.1 System configuration

Fig. 1 is a schematic diagram showing an exemplary schematic configuration of a sensor system 1 according to a first embodiment. As shown in fig. 1, in a sensor system 1, one or more communication devices 2 having a communication function are connected to a server 3 via a network 4.

The communication device 2 is equipped with an imaging function in addition to the above-described communication function of communicating with the server 3 via the network 4. As the communication device 2, various types of devices having a sensing function and a communication function can be used, such as: portable terminals such as smartphones and cellular phones; stationary imaging devices such as fixed-point cameras and surveillance cameras; mobile devices such as drones, automobiles, home robots, Factory Automation (FA) robots, monitoring robots, and autonomous robots; and medical devices.

The server 3 may be, for example, a cloud server or a fog server connected to a network. As the network 4, for example, various types of networks such as the Internet, a LAN (local area network), a mobile communication network, and a public line network can be used.

2.2 Device configuration

Fig. 2 is a block diagram showing an exemplary schematic configuration of a communication device representing an electronic device according to the first embodiment. As shown in fig. 2, the communication device 2 includes an image sensor 10, representing for example a solid-state imaging device, and a transceiver unit 18. The image sensor 10 includes, for example, a light receiving unit 11, a high-speed signal processing circuit 12, a flexible logic circuit 13, a main processor 14, a memory 15, a driver 16, and a nonvolatile memory 17.

For example, the light receiving unit 11 includes: an optical sensor array (also referred to as a pixel array) in which photoelectric conversion elements such as photodiodes are arranged in a two-dimensional matrix in a row direction and a column direction; an optical system such as a lens mounted on a light receiving surface of the optical sensor array; and an actuator that drives the optical system.

The high-speed signal processing circuit 12 includes: an analog circuit such as an ADC for converting an analog pixel signal read from the photoelectric conversion element of the light receiving unit 11 into a digital pixel signal; and a logic circuit for performing digital processing such as CDS (correlated double sampling) based on the pixel signal converted into the digital value by the ADC.

The memory 15 is used to store digital image data output from the high-speed signal processing circuit 12. Further, the memory 15 is used to store image data that has been subjected to predetermined processing in the flexible logic circuit 13 and a main processor 14 (described later). In addition, the memory 15 is used to store various data for realizing a predetermined circuit configuration in an FPGA (Field-Programmable Gate Array) included in the flexible logic circuit 13. In the following description, data used when a circuit configuration is realized by connecting logic components of an FPGA is referred to as circuit data, and parameters supplied to the circuit configuration realized using the circuit data are referred to as setting data.
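As a loose illustration of this distinction, the two kinds of data could be represented as follows; this is a minimal Python sketch, and all names and example values are illustrative assumptions, not taken from this disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class FpgaConfiguration:
    """Container for the two kinds of data kept in memory 15."""
    circuit_data: bytes = b""                          # binary data that wires up the FPGA's logic components
    setting_data: dict = field(default_factory=dict)   # parameters supplied to the realized circuits

# Example: an optical-black level and a gamma value as setting data.
cfg = FpgaConfiguration(
    circuit_data=b"\x00\x01\x02",                      # opaque binary, generated externally
    setting_data={"ob_level": 64, "gamma": 2.2},
)
```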

The flexible logic circuit 13 includes the FPGA as described above, and cooperates with the main processor 14 (described later) to perform processing such as black level processing, defect correction, shading correction, distortion correction, linear matrix processing, gamma correction, luminance color separation, edge enhancement, color difference matrix processing, and resizing/scaling on the image data stored in the memory 15. In addition, the flexible logic circuit 13 also performs various other processes, such as control system correction, auto exposure (AE), auto focus (AF), auto white balance adjustment (AWB), synchronization processing, and output interface (IF) processing.

The main processor 14 controls the constituent elements of the communication device 2. Further, the main processor 14 operates in cooperation with the flexible logic circuit 13, and performs various operations listed above as pipeline processing.

The driver 16 includes, for example, a vertical drive circuit, a horizontal transfer circuit, and a timing control circuit, and drives a pixel circuit (described later) mounted in the high-speed signal processing circuit 12 so as to cause the high-speed signal processing circuit 12 to read a pixel signal from the light receiving unit 11. Further, the driver 16 also controls an actuator in the light receiving unit 11 that drives an optical system including a lens and a shutter.

The nonvolatile memory 17 is configured using, for example, an EEPROM (Electrically Erasable Programmable Read-Only Memory), and is used to store parameters to be used when the driver 16 controls the circuits in the high-speed signal processing circuit 12 and the actuators in the light receiving unit 11.

The transceiver unit 18 is a communication unit that communicates with the server 3 via the network 4, and includes, for example, a DAC 181 that performs DA (Digital to Analog) conversion of transmission data, a transmission antenna 182 that transmits the DA-converted data to the network 4, a reception antenna 184 that receives data from the network 4, and an ADC 183 that performs AD (Analog to Digital) conversion of data received by the reception antenna 184. Here, the transceiver unit 18 is not limited to a wireless unit, and may instead be a wired unit.

2.3 Stacked structure

Fig. 3 is a schematic diagram showing a chip configuration of an image sensor according to the first embodiment. In fig. 3, the driver 16 and the nonvolatile memory 17 are not shown for simplicity.

As shown in fig. 3, in the image sensor 10, for example, the light receiving unit 11, the high-speed signal processing circuit 12, the flexible logic circuit 13, the main processor 14, and the memory 15 are configured on an individual chip.

In the light receiving unit 11, an optical sensor array 111 is formed on a light receiving chip 110 made of a semiconductor substrate. In the high-speed signal processing circuit 12, the pixel circuit 121, the ADC 122, the CDS circuit 123, and the gain adjustment circuit 124 are formed on an analog/logic chip 120 made of a semiconductor substrate. The gain adjustment circuit 124 may be, for example, a circuit for adjusting the gain of the CDS-processed digital pixel signal in each of the RGB colors.

In the flexible logic circuit 13, the FPGA 131 is formed on a flexible logic chip 130 made of a semiconductor substrate. In the main processor 14, an MPU 141 is formed on a processor chip 140 made of a semiconductor substrate. Here, the processor chip 140 need not carry only a single MPU 141; alternatively, a plurality of MPUs 141 may be formed on the processor chip 140.

In the memory 15, a memory area 151 of SRAM (static RAM) or DRAM (dynamic RAM) is formed on a memory chip 150 made of a semiconductor substrate. A portion of the memory area 151 serves as a memory area (hereinafter referred to as the programmable memory area) 152 for storing the circuit data used to set the circuit configuration in the FPGA 131 and the related setting data.

Chips 110, 120, 130, 140, and 150 are stacked from the top in the order shown in fig. 3. Thus, the image sensor 10 has a stacked structure in which the light receiving chip 110, the analog/logic chip 120, the memory chip 150, the flexible logic chip 130, and the processor chip 140 are stacked in this order.

The remaining constituent elements of the image sensor 10, such as the driver 16 and the nonvolatile memory 17, may be formed on separate chips, or on a common chip, or on any one of the chips 110, 120, 130, 140, and 150. In the same manner, the transceiver unit 18 may be formed on a separate chip or any one of the chips described above.

Further, on the flexible logic chip 130, a logic circuit 132 may be formed in addition to the FPGA 131, as shown in fig. 4 or fig. 5. A case where the FPGA 131 and the logic circuit 132 are formed in separate areas is shown in fig. 4, and a case where the FPGA 131 is formed in some parts of the logic circuit 132 is shown in fig. 5.

Meanwhile, in the first embodiment, an explanation is given about a stacked structure in which the light receiving unit 11, the high-speed signal processing circuit 12, the flexible logic circuit 13, the main processor 14, and the memory 15 are stacked after being formed on the individual chips 110, 120, 130, 140, and 150, respectively. The stacked structure may be modified in various ways, as explained in the embodiments described below. For example, if a portable terminal (such as a smartphone or a cellular phone) or a stationary imaging device (such as a fixed-point camera) that does not require high-speed image processing is used as the communication device 2, the main processor 14, the flexible logic circuit 13, and the memory 15 may be formed on a single chip, so that the manufacturing cost can be reduced. Alternatively, if a mobile device such as a drone, an automobile, a home robot, a Factory Automation (FA) robot, a monitoring robot, or an autonomous robot is used as the communication device 2, a stacked structure as explained in the first embodiment or the following embodiments can be adopted, so that the processing speed can be increased and real-time performance can be enhanced.

2.4 Operation of the image sensor

The operation performed by the image sensor 10 in the communication apparatus 2 shown in fig. 2 is explained below with reference to fig. 6.

As shown in fig. 6, the operation performed by the image sensor 10 can be roughly divided into the following five steps: a photoelectric conversion step S100; a signal processing step S200; a basic step S300; a control system step S400; and a picture forming step S500.

In the photoelectric conversion step S100, the photoelectric conversion element in the light receiving unit 11 performs photoelectric conversion 101 in which incident light is converted into electric charges according to the amount of the incident light.

In the signal processing step S200, the electric charges accumulated in the photoelectric conversion elements are read as analog pixel signals by the pixel circuits in the high-speed signal processing circuit 12. Here, the high-speed signal processing circuit 12 may be, for example, a signal processing circuit in which a method of reading pixel signals simultaneously from a plurality of pixels, such as a method for reading pixel signals in units of rows, is implemented.

The read analog pixel signals are converted into digital pixel signals by the ADC in the high-speed signal processing circuit 12, and the CDS circuit in the high-speed signal processing circuit 12 then performs CDS processing on the AD-converted pixel signals (201). As a result, image data from which noise has been removed is generated, and the generated image data is temporarily stored in the memory 15.
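As a rough illustration of the CDS step, each pixel's reset-level sample is subtracted from its signal-level sample, cancelling reset (kTC) noise and fixed-pattern offsets; the frame names below are assumptions:

```python
import numpy as np

def cds(reset_frame: np.ndarray, signal_frame: np.ndarray) -> np.ndarray:
    """Correlated double sampling: per-pixel difference between the
    signal-level sample and the reset-level sample of the same pixel."""
    return signal_frame.astype(np.int32) - reset_frame.astype(np.int32)
```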

In the basic step S300, the image data stored in the memory 15 is processed in the order of black level processing 301, defect correction 302, shading correction 303, and distortion correction 304.

Here, the black level processing 301 is, for example, an operation for correcting the black level of the image data by removing noise due to a dark current generated in the light receiving unit 11.

The defect correction 302 is, for example, an operation of interpolating a pixel value of a defective pixel due to a defective element based on pixel values of corresponding adjacent pixels.
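A minimal sketch of such interpolation for a single-channel frame, using the median of the non-defective 3x3 neighbors (the text only states that adjacent pixel values are used, so the median rule is an assumption):

```python
import numpy as np

def correct_defects(img: np.ndarray, defect_mask: np.ndarray) -> np.ndarray:
    """Replace each known defective pixel with the median of the
    non-defective pixels in its 3x3 neighborhood."""
    out = img.copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(defect_mask)):
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        patch = img[y0:y1, x0:x1]
        good = patch[~defect_mask[y0:y1, x0:x1]]
        if good.size:                      # leave the pixel untouched if no good neighbor exists
            out[y, x] = np.median(good)
    return out
```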

The shading correction 303 includes correcting for non-uniformity in brightness due to the effects of the orientation of the light source and lens aberrations.
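One common realization is flat-field correction with a per-pixel gain map; a minimal sketch (the gain-map source and the 10-bit clip are assumptions):

```python
import numpy as np

def shading_correction(img: np.ndarray, gain_map: np.ndarray) -> np.ndarray:
    """Flat-field correction: boost each pixel in proportion to how dark a
    reference image of a uniformly lit target was at that position."""
    return np.clip(img.astype(np.float32) * gain_map, 0, 1023).astype(np.uint16)

# gain_map could be flat.mean() / flat for a reference flat-field image `flat`.
```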

Distortion correction 304 includes correcting lens-based image distortion in regions with high image height.

In the control system step S400, control system correction 401 for correcting various parameters used by the driver 16 in driving the actuator and the pixel circuit of the light receiving unit 11 is performed. Meanwhile, the control system step S400 need not always be performed after the signal processing step S200, and may be performed at an arbitrary timing.

In the picture forming step S500, the following operations are performed in order: auto Exposure (AE)/Auto Focus (AF)/auto white balance Adjustment (AWB) (501); synchronization 502; linear matrix processing 503; gamma correction 504; luminance color separation 505; edge enhancement 506; color difference matrix processing 507; and resizing/scaling 508.

The Automatic Exposure (AE) includes automatically adjusting the charge accumulation period of the photoelectric conversion element of the light receiving unit 11. The Auto Focusing (AF) includes automatically adjusting the focus of the optical system of the light receiving unit 11. The automatic white balance Adjustment (AWB) includes automatically adjusting the white level by adjusting the gain of the RGB values read by the pixel circuit of the high-speed signal processing circuit 12.
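Of these, AWB can be illustrated concretely. A minimal sketch under the common gray-world assumption, for 8-bit RGB data (the patent does not specify the algorithm):

```python
import numpy as np

def gray_world_awb(rgb: np.ndarray) -> np.ndarray:
    """Scale the R and B channels so their means match the G mean,
    neutralizing a global color cast under the gray-world assumption."""
    out = rgb.astype(np.float32)
    r_mean, g_mean, b_mean = out[..., 0].mean(), out[..., 1].mean(), out[..., 2].mean()
    out[..., 0] *= g_mean / r_mean
    out[..., 2] *= g_mean / b_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```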

Synchronization 502 is an operation for synchronizing the results of asynchronous processing.

The linear matrix processing 503 includes matrix-converting the RGB values read by the pixel circuits of the high-speed signal processing circuit 12, and adjusting the input spectral characteristics of the light receiving unit 11 to ideal values.

The gamma correction 504 includes correcting a luminance level of image data according to gamma characteristics of a display device, for example, a CRT (cathode ray tube).
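The correction itself is a power law. A minimal sketch, assuming input normalized to [0, 1] and the conventional CRT gamma of 2.2:

```python
import numpy as np

def gamma_correct(img: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Compensate display gamma: out = in ** (1 / gamma)."""
    return np.power(np.clip(img, 0.0, 1.0), 1.0 / gamma)
```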

The luminance color separation 505 is also called YC conversion, in which the color space of RGB image data is converted to generate Y/B-Y/R-Y image data.
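A minimal sketch of this conversion, assuming the BT.601 luma weights (a standard choice; the text does not fix the matrix):

```python
import numpy as np

def yc_convert(rgb: np.ndarray) -> np.ndarray:
    """RGB -> Y / B-Y / R-Y: compute luminance Y, then the two
    color-difference channels."""
    r, g, b = (rgb[..., i].astype(np.float32) for i in range(3))
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return np.stack([y, b - y, r - y], axis=-1)
```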

Edge enhancement 506 compensates for degradation of the spatial frequency characteristics of the image data and reinforces contours, thereby reducing the blurring of contours caused by that degradation.

The color difference matrix processing 507 includes matrix-converting the B-Y/R-Y components in the YC-converted image data, and adjusting the hue and saturation without changing the luminance.

Resizing/scaling 508 includes changing the size of the image data according to the display size of the display device, and includes zooming in on a particular area.

Subsequently, output I/F processing 509 is performed to output the image data that has been processed in the above-described manner (hereinafter referred to as processed image data) to an external ISP (image signal processor) or an external AP (application processor) via an interface (not shown). In the case of NTSC, for example, the output I/F processing 509 may include YUV conversion, encoding, and camera ringing. Here, the processed image data may be temporarily stored in the memory 15 again before being output to the outside, or may be directly output from the flexible logic circuit 13 or the main processor 14 via an interface.

2.5 Relationship between operations and chips

In the above-described flow, for example, the photoelectric conversion 101 is performed in the optical sensor array 111 of the light receiving unit 11. Further, for example, AD/CDS processing 201 is performed in the ADC 122 and CDS circuit 123 of the high-speed signal processing circuit 12.

For example, the black level processing 301, defect correction 302, shading correction 303, distortion correction 304, control system correction 401, AE/AF/AWB 501, synchronization 502, linear matrix processing 503, gamma correction 504, luminance color separation 505, edge enhancement 506, color difference matrix processing 507, resizing/scaling 508, and output I/F processing 509 are performed as a result of reading, from the programmable memory area 152, the circuit data for realizing the respective circuit configurations into the FPGA 131 of the flexible logic circuit 13, and registering the setting data of each circuit configuration in the corresponding register. Therefore, by changing the setting data and the circuit data, the output produced for a given input of each operation can be adjusted.

Meanwhile, as shown in fig. 4 or 5, when some parts of the flexible logic circuit 13 are regarded as the FPGA 131 and the remaining parts are regarded as the logic circuit 132, the FPGA 131 may be configured to perform the defect correction 302, the shading correction 303, the distortion correction 304, the control system correction 401, the AE/AF/AWB 501, the synchronization 502, the linear matrix processing 503, and the edge enhancement 506, and the logic circuit 132 may be configured to perform the black level processing 301, the gamma correction 504, the luminance color separation 505, the color difference matrix processing 507, the resizing/scaling 508, and the output I/F processing 509.

The main processor 14 operates in cooperation with the flexible logic circuit 13 so that the operation of the flexible logic circuit 13 is performed as a pipeline process.

2.6 Degradation correction of the image sensor

In the configuration explained above, for example, the optical sensor array 111 of the image sensor 10 deteriorates over time as the frequency and duration of use increase. Such deterioration of the image sensor 10 can be corrected by changing the circuit configuration of the FPGA 131 and its parameters, for example.

In this regard, in the first embodiment, the deterioration state of the image sensor 10 is detected constantly, periodically, or at an arbitrary timing, and the circuit configuration and/or parameters of the FPGA 131 are changed depending on the detected deterioration state. As a result, the image sensor 10 can be customized according to its degradation state, so that accurate information (e.g., image data) can be obtained even when the image sensor 10 is subject to degradation due to use and aging.

To correct the deterioration of the image sensor 10, for example, image data obtained by the image sensor 10 is transmitted to the server 3 via the network 4.

The server 3 analyzes, for example, image data received from the communication device 2 via the network 4, and identifies a deterioration position and a deterioration cause in the image sensor 10. Then, in order to correct the identified deterioration position and the identified cause of deterioration, the server 3 generates setting data and/or circuit data to be set in the FPGA 131 in the flexible logic circuit 13 of the image sensor 10, and transmits (feeds back) the setting data and/or the circuit data to the communication device 2 via the network 4.

Upon receiving the setting data and/or the circuit data from the server 3, the communication device 2 stores the received data in the memory 15 of the image sensor 10. Then, the image sensor 10 sets the setting data and/or the circuit data stored in the memory 15 in the FPGA 131, and thus corrects the deterioration position and the deterioration cause.

Meanwhile, the setting data and/or the circuit data for correcting the deterioration position and the cause of deterioration may be generated using, for example, a learning model obtained by machine learning of past data.

2.7 Degradation correction sequence

As for the sequence in which the image data is analyzed in the server 3 and the settings and/or circuit configuration of the flexible logic circuit 13 in the communication device 2 are changed, the following method can be cited as an example.

First, image data is transmitted from the communication device 2 to the server 3.

Second, the image data is analyzed (machine learning is performed) in the server 3.

Third, based on the result of the analysis, setting data and/or circuit data are generated in the server 3.

Fourth, setting data and/or circuit data (binary data transfer) are fed back from the server 3 to the communication apparatus 2.

Fifth, in the communication device 2, the received setting data and/or the received circuit data are written at a predetermined address in the programmable memory area 152 of the memory 15.

Sixth, the setting data and/or circuit data in the programmable memory area 152 is loaded to configure new circuits in the FPGA 131 or to change the parameters of circuit configurations implemented in the FPGA 131.

As a result of the above operations, the FPGA 131 may be updated constantly, for example on a frame-by-frame basis.
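Seen from the device side, the fifth and sixth steps could be sketched as follows; all names are hypothetical, and the FPGA reload is reduced to a plain function call:

```python
def apply_server_feedback(programmable_area: dict, received: dict, reload_fpga) -> None:
    """Write the setting/circuit data received from the server into the
    programmable memory area at the given addresses (fifth step), then
    reload the FPGA from that area (sixth step)."""
    for address, binary in received.items():   # data arrives as a binary transfer
        programmable_area[address] = binary
    reload_fpga(programmable_area)             # reconfigure circuits / update registers

# Usage: apply_server_feedback(area, {0x0100: b"\x2a"}, fpga.load)
```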

Meanwhile, the layer configuration of the FPGA 131 may be appropriately changed depending on the intended end use, such as a configuration including only the FPGA 131 (see fig. 3), a configuration including the FPGA 131 and the logic circuit 132 (see fig. 4), or a configuration combining the FPGA 131 and the logic circuit 132 (i.e., a configuration in which circuits are modified in the FPGA 131 on top of the logic circuit 132 serving as a base, see fig. 5).

Further, it is also possible to add a new circuit to the FPGA 131 or to modify the circuit configuration of the FPGA 131 (for example, by eliminating a part of the functions) based on the result of the machine learning performed in the server 3, for the purpose of increasing the speed. For example, by modifying the circuit configuration of the FPGA 131, the image data output from the high-speed signal processing circuit 12 can be changed from 10-bit image data to 14-bit image data or 8-bit image data.

2.8 Analysis of image data (machine learning)

The degradation state of the image sensor 10 may be determined, for example, by analyzing image data obtained by the image sensor 10. For the analysis, for example, image data obtained by the image sensor 10 is stored in the server 3 in advance; at analysis time, the stored image data may be compared with newly obtained image data to determine whether there is degradation of the image sensor 10.
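One conceivable comparison is a simple per-pixel difference against the stored reference, thresholded to flag degradation; the metric and the threshold below are assumptions, since the text does not fix them:

```python
import numpy as np

def degradation_score(reference: np.ndarray, current: np.ndarray) -> float:
    """Mean absolute per-pixel difference between a stored reference
    frame and a newly captured frame of the same scene."""
    return float(np.mean(np.abs(reference.astype(np.float64)
                                - current.astype(np.float64))))

# degraded = degradation_score(ref, img) > THRESHOLD  # threshold chosen per device
```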

At this time, as the image data to be stored in the server 3, image data obtained before shipment of the communication device 2, or image data obtained when initial setting of the communication device 2 is performed after being handed over to the user, or image data obtained at a stage where the aging degradation of the image sensor 10 is small may be used.

Alternatively, the image data transmitted from the communication device 2 to the server 3 for the purpose of degradation determination may be either image data obtained at an arbitrary timing or image data obtained when a predetermined condition is satisfied. The predetermined condition may be, for example, that the captured area is the same as that captured in the image data stored in the server 3, or that the image data is obtained under the same illumination conditions as those used to obtain the image data stored in the server 3.

Alternatively, for example, if the image sensor 10 includes a mechanical shutter, an image obtained in a closed state of the mechanical shutter may be stored in the server 3. Subsequently, at the time of the deterioration determination, the image data may be obtained with the mechanical shutter kept closed, and the image data may be transmitted to the server 3. In this case, the deterioration state of the image sensor 10 can be confirmed by referring to the black level, noise, and defective pixels.
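A minimal sketch of what such a dark frame can expose (the defect threshold is an arbitrary assumption):

```python
import numpy as np

def dark_frame_stats(dark: np.ndarray, defect_thresh: int = 32) -> dict:
    """With the mechanical shutter closed, every pixel should sit near the
    black level, so the frame directly exposes offset, noise, and hot pixels."""
    black = float(np.median(dark))
    return {
        "black_level": black,
        "noise_rms": float(np.std(dark)),
        "defective_pixels": int(np.count_nonzero(dark > black + defect_thresh)),
    }
```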

Further, during analysis of the image data, for example, the deterioration state and the cause of deterioration of the image data are learned by performing machine learning, and a learning model is constructed. As a result, in subsequent analyses, the accuracy and immediacy of finding the cause can be improved. Meanwhile, as far as the machine learning method is concerned, various techniques such as an RNN (Recurrent Neural Network) or a CNN (Convolutional Neural Network) may be used.

2.9 Flow of operation

Operations performed when detecting and correcting deterioration of the image sensor 10 are described below with reference to flowcharts. Fig. 7 is a flowchart for explaining an example of an overall operation performed in the communication apparatus according to the first embodiment. Fig. 8 is a flowchart for explaining an example of an overall operation performed in the server according to the first embodiment.

2.9.1 Operation in the communication device

As shown in fig. 7, first, the communication device 2 constantly or periodically requests the server 3 to analyze the image data obtained by the image sensor 10 (step S101), and waits for an analysis permission response from the server 3 (No at step S102). Upon receiving the analysis permission response from the server 3 (Yes at step S102), the communication device 2 sets the repetition count N, which is used for monitoring the analysis, to "1" (step S103). Then, the communication device 2 drives the image sensor 10 and obtains image data (step S104). The image data obtained at this time may be processed image data on which the operations of the respective steps shown in fig. 6 have been performed.

Then, the communication device 2 performs DA conversion for converting the processed image data into analog image data, and performs encoding (step S105). Here, the encoding may alternatively be performed in, for example, the main processor 14 or an application processor (encoding unit) (not shown). Subsequently, the communication device 2 transmits the encoded image data to the server 3 via the network 4 (step S106), and waits for a response from the server 3 (No at step S107). On the other hand, as explained later with reference to fig. 8, the server 3 analyzes the image data received from the communication device 2, and if image degradation is recognized, generates setting data, to be set in the FPGA 131, for resolving the image degradation.

If an analysis result indicating no image degradation is received from the server 3 (Yes at step S107), the communication device 2 ends the current operation. On the other hand, if an analysis result indicating image deterioration is received (No at step S107), the communication device 2 receives the encoded setting data from the server 3 via the network 4 (step S108), and decodes the encoded setting data (step S109). Here, the decoding may alternatively be performed in, for example, the main processor 14 or an application processor (decoding unit) (not shown). Then, the communication device 2 updates the setting data of the FPGA 131 stored in the programmable memory area 152 of the memory area 151 with the decoded setting data (step S110), and sets the updated setting data in the FPGA 131 (step S111). Meanwhile, if the received setting data contains setting data for the actuator that drives the optical system of the light receiving unit 11 or for a constituent element of the high-speed signal processing circuit 12, the communication device 2 updates the corresponding parameters in the nonvolatile memory 17 with that setting data. As a result, the driving of the constituent elements by the driver 16 is adjusted.

Subsequently, the communication device 2 increments the repetition count N by 1 (step S112), and determines whether the incremented repetition count N is larger than a predetermined upper limit value of the repetition count (3 in this example) (step S113). If the repetition count N is equal to or smaller than the upper limit value (No at step S113), the system control returns to step S104, and the communication device 2 performs the subsequent operations again. On the other hand, if the repetition count N is larger than the upper limit value (Yes at step S113), the system control proceeds to step S114.

In step S114, the communication device 2 resets the repetition count N to "1". Then, in the same manner as in steps S104 to S107 described previously, the communication device 2 performs DA conversion and encoding on the image data obtained from the image sensor 10, transmits the encoded image data to the server 3, and waits for a response from the server 3 (steps S115 to S118). On the other hand, as will be described later with reference to fig. 8, the server 3 analyzes the image data received from the communication device 2, and if image degradation is recognized, generates circuit data, to be incorporated in the FPGA 131, for resolving the image degradation.

If an analysis result indicating no image degradation is received from the server 3 (Yes at step S118), the communication device 2 ends the current operation. On the other hand, if an analysis result indicating image degradation is received (No at step S118), the communication device 2 receives the encoded circuit data from the server 3 via the network 4 (step S119), and decodes the encoded circuit data (step S120). Then, the communication device 2 updates the circuit data of the FPGA 131 stored in the programmable memory area 152 of the memory area 151 with the decoded circuit data (step S121), and incorporates the updated circuit data in the FPGA 131, thereby modifying the circuit configuration of the FPGA 131 (step S122).

Subsequently, the communication device 2 increments the repetition count N by 1 (step S123), and determines whether the incremented repetition count N is larger than the predetermined upper limit value of the repetition count (3 in this example) (step S124). If the repetition count N is equal to or smaller than the upper limit value (No at step S124), the system control returns to step S115, and the communication device 2 performs the subsequent operations again. On the other hand, if the repetition count N is larger than the upper limit value (Yes at step S124), the communication device 2 ends the current operation.

2.9.2 Operation in the server

As shown in fig. 8, after the current operation is started, the server 3 waits for an analysis request from the communication device 2 (No at step S131). Upon receiving an analysis request (Yes at step S131), the server 3 first identifies the communication device 2 that issued the analysis request (step S132).

Upon successfully identifying the communication device 2 that issued the analysis request, the server 3 reads, from a predetermined memory device, the circuit data and the setting data stored in the programmable memory area 152 of the identified communication device 2 (step S133), and transmits an analysis permission response to the communication device 2 that issued the analysis request (step S134). In the memory device of the server 3, the circuit data and the setting data stored in the programmable memory area 152 of each registered communication device 2 are stored in association with that communication device 2. That is, the circuit data and the setting data of each communication device 2 are shared between the communication device 2 and the server 3.

Then, the server 3 sets the repetition count N to "1" (step S135), and waits until encoded image data is received from the communication device 2 (No at step S136). Upon receiving the encoded image data (Yes at step S136), the server 3 decodes the encoded image data (step S137), analyzes the decoded image data (step S138), and thus determines whether there is image degradation (step S139).

If there is no image degradation (No at step S139), the server 3 notifies the communication device 2 that there is no image degradation (step S157), and the system control proceeds to step S158. On the other hand, if there is image degradation (Yes at step S139), the server 3 identifies the position in the image sensor 10 causing the image degradation based on the analysis result obtained at step S138, and generates new setting data for the identified position (step S140). Subsequently, the server 3 stores the generated setting data in the predetermined memory device in association with the communication device 2 concerned (step S141), encodes the generated setting data (step S142), and transmits the encoded setting data to the communication device 2 concerned via the network 4 (step S143). Meanwhile, as described earlier, the new setting data may be generated using a learning model obtained by machine learning of past data.

Then, the server 3 increments the repetition count N by 1 (step S144), and determines whether the incremented repetition count N is larger than the predetermined upper limit value of the repetition count (3 in this example) (step S145). If the repetition count N is equal to or smaller than the upper limit value (No at step S145), the system control returns to step S136, and the server 3 performs the subsequent operations again. On the other hand, if the repetition count N is larger than the upper limit value (Yes at step S145), the system control proceeds to step S146.

In step S146, the server 3 resets the repetition count N to "1". Then, the server 3 waits until encoded image data is received from the communication device 2 (No at step S147). Upon receiving the encoded image data (Yes at step S147), the server 3 decodes the encoded image data (step S148), analyzes the decoded image data (step S149), and thus determines whether there is image degradation (step S150).

If there is no image degradation (No at step S150), the server 3 notifies the communication device 2 that there is no image degradation (step S157), and the system control proceeds to step S158. On the other hand, if there is image degradation (Yes at step S150), the server 3 identifies the position in the image sensor 10 causing the image degradation based on the analysis result of step S149, and generates new circuit data for the identified position (step S151). Subsequently, the server 3 stores the generated circuit data in the predetermined memory device in association with the communication device 2 (step S152), encodes the generated circuit data (step S153), and transmits the encoded circuit data to the communication device 2 via the network 4 (step S154). Meanwhile, as described above, the new circuit data may be generated using a learning model obtained by machine learning of past data.

Then, the server 3 increments the repetition count N by 1 (step S155), and determines whether the incremented repetition count N is larger than the predetermined upper limit value of the repetition count (3 in this example) (step S156). If the repetition count N is equal to or smaller than the upper limit value (No at step S156), the system control returns to step S147, and the server 3 performs the subsequent operations again. On the other hand, if the repetition count N is larger than the upper limit value (Yes at step S156), the system control proceeds to step S158.

In step S158, the server 3 determines whether to end the current operation. If it is determined to end the operation (Yes at step S158), the server 3 ends the operation. However, if it is determined that the operation is not to be ended (No at step S158), the system control returns to step S131, and the server 3 performs the subsequent operations.

As a result of performing such operations, the circuit configuration and/or parameters of the FPGA 131 in the flexible logic circuit 13 of the communication device 2 are customized, and the deterioration of the image sensor 10 is corrected. As a result, the communication device 2 can obtain image data in a good state.

Meanwhile, the frequency of uploading image data from the communication device 2 to the server 3 can be changed as appropriate. Further, for example, in a drone, an automobile, or a robot in which real-time performance is important, it is desirable that image data transmitted from the communication device 2 to the server 3 have a small data amount. In this case, in order to reduce the data amount of the image data, the target image data for transmission may be compressed to a VGA level or a QVGA level, or may be compressed by binning.
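As an illustration of the binning option, 2x2 binning averages each block of four pixels and quarters the data volume; a minimal sketch for a single-channel image:

```python
import numpy as np

def bin2x2(img: np.ndarray) -> np.ndarray:
    """Average each non-overlapping 2x2 block, quartering the data volume."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2   # drop any odd edge row/column
    v = img[:h, :w].astype(np.float32)
    return (v[0::2, 0::2] + v[0::2, 1::2] + v[1::2, 0::2] + v[1::2, 1::2]) / 4
```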

2.10 Modification of setting data/circuit data

A table of exemplary items (setting data and circuit data) modified according to the degradation state of the image sensor 10 identified by analyzing the image data is shown in fig. 9.

As shown in fig. 9, the causes of deterioration include causes attributable to the photoelectric conversion elements in the optical sensor array 111 of the image sensor 10 and causes attributable to the optical system module, such as the lens and the actuator. For example, if the cause of the deterioration is identified as being in the black level processing 301, AE/AF/AWB 501, synchronization 502, linear matrix processing 503, gamma correction 504, luminance color separation 505, edge enhancement 506, or color difference matrix processing 507, the cause of the deterioration is considered to be the photoelectric conversion element (sensor). On the other hand, if the cause of the degradation is identified as being in the shading correction 303, the distortion correction 304, the control system correction 401, or the resizing/scaling 508, the cause of the degradation is considered to be the optical system module. Further, if the cause of the deterioration is identified as being in the defect correction 302, the cause of the deterioration is considered to be at least either the photoelectric conversion element or the optical system module.
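The classification described above (and tabulated in fig. 9) amounts to a lookup from the pipeline stage where degradation appears to the suspected source; a condensed sketch, with stage keys paraphrased from the text:

```python
# Degradation-source lookup, condensed from the Fig. 9 classification.
DEGRADATION_SOURCE = {
    "black_level": "sensor", "ae_af_awb": "sensor", "synchronization": "sensor",
    "linear_matrix": "sensor", "gamma": "sensor", "yc_separation": "sensor",
    "edge_enhancement": "sensor", "color_difference_matrix": "sensor",
    "shading": "optics", "distortion": "optics",
    "control_system": "optics", "resize_scaling": "optics",
    "defect_correction": "sensor_or_optics",
}
```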

In this regard, in the first embodiment, as shown in the table in fig. 9, when the cause of degradation is identified as in the black level processing 301, for example, the Optical Black (OB) value representing a part of the setting data is modified.

When the cause of deterioration is identified as being in the defect correction 302, for example, a defect correction value is added to the setting data, or a correction circuit for defects at or above the cluster level is modified in the circuit data, or a shimmy improvement circuit in the circuit data is modified.

When the cause of the deterioration is identified as being in the shading correction 303, for example, a shading correction value representing a part of the setting data is modified, or a shading algorithm in the circuit data is modified (for example, a shading curve function is modified).

When the cause of the deterioration is identified as being in the distortion correction 304, for example, a distortion correction circuit (for correcting pincushion distortion or barrel distortion) is added to the circuit data.

When the cause of degradation is identified as being in the control system correction 401, for example, the AF, OIS (i.e., anti-shake function), and/or calibration data is modified, or a feed-forward circuit for driver control adjustment is added to the circuit data for the purpose of servo adjustment, full-gain adjustment, and/or optical axis adjustment.

When the cause of the deterioration is identified as being in the AE/AF/AWB 501, for example, an algorithm of a circuit for executing the AE and the AWB is modified in the circuit data.

When the cause of the deterioration is identified as being in the synchronization 502, for example, an interpolation correction value representing a part of the setting data is modified, or an interpolation algorithm in the circuit data is modified.

When the cause of the deterioration is identified in the linear matrix processing 503, for example, a dispersion correction value representing a part of the setting data is modified, or an equivalent function algorithm in the circuit data is modified.

When the cause of the deterioration is identified as being in the gamma correction 504, for example, a gamma correction value (contrast) representing a part of the setting data is modified.

When the cause of the degradation is identified as being in the edge enhancement 506, for example, a contour (aperture) correction/noise correction value representing a part of the setting data is modified, or a contour (aperture) correction/noise correction algorithm in the circuit data is modified.

When the cause of the deterioration is identified as in the color difference matrix processing 507, for example, a color matrix value representing a part of the setting data is modified.

When the cause of the deterioration is identified as being in the resizing/scaling 508, for example, a scaling value representing a part of the setting data is modified.

When the cause of the deterioration is identified as being in the output I/F process 509, for example, a mode-based output channel representing a part of the setting data is modified.

As a result of modifying the setting data and/or the circuit data, image degradation in a region where the image height is equal to or greater than 80% can be suppressed, for example. Meanwhile, the configuration may be designed so that a memory can be added when necessary.
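The table in fig. 9 can be read as a dispatch from the processing stage in which degradation was identified to the presumed cause and the class of update to generate. The sketch below encodes that mapping; the stage keys and helper are hypothetical names for illustration, and the update descriptions paraphrase the table rather than specify an actual circuit data format.

```python
# Hypothetical mapping: stage in which degradation was identified
# -> (presumed cause, update to generate), following the table in fig. 9.
DEGRADATION_ACTIONS = {
    "black_level_301":      ("sensor", "setting: Optical Black (OB) value"),
    "defect_correction_302": ("sensor or optics",
                              "setting: defect correction value / circuit: cluster correction, shimmy improvement"),
    "shading_correction_303": ("optics", "setting: shading correction value / circuit: shading curve function"),
    "distortion_304":       ("optics", "circuit: add pincushion/barrel distortion correction"),
    "control_system_401":   ("optics", "setting: AF/OIS calibration / circuit: driver control adjustment"),
    "ae_af_awb_501":        ("sensor", "circuit: AE/AWB algorithm"),
    "synchronization_502":  ("sensor", "setting: interpolation correction value / circuit: interpolation algorithm"),
    "linear_matrix_503":    ("sensor", "setting: dispersion correction value / circuit: equivalent-function algorithm"),
    "gamma_correction_504": ("sensor", "setting: gamma correction (contrast) value"),
    "edge_enhancement_506": ("sensor", "setting/circuit: contour (aperture) and noise correction"),
    "color_matrix_507":     ("sensor", "setting: color matrix value"),
    "resize_scaling_508":   ("optics", "setting: scaling value"),
    "output_if_509":        ("unspecified", "setting: mode-based output channel"),
}

def plan_update(stage: str):
    """Return (presumed cause, update to generate) for an identified stage."""
    return DEGRADATION_ACTIONS.get(stage, ("unknown", "no update"))

print(plan_update("shading_correction_303"))
```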

2.11 High-speed processing method

The high-speed processing performed in the communication apparatus 2 according to the first embodiment is described below in comparison with a conventional configuration.

Fig. 10 is a block diagram showing a conventional device configuration. Fig. 11 is a diagram for explaining a flow of data processing performed in the device configuration shown in fig. 10. Fig. 12 is a diagram showing clock cycle counts required when processing 1000 sets of data in the device configuration shown in fig. 10. Fig. 13 is a block diagram showing the device configuration of the image sensor according to the first embodiment. Fig. 14 is a diagram for explaining a flow of data processing performed in the image sensor according to the first embodiment. Fig. 15 is a diagram showing clock cycle counts required when processing 1000 sets of data in the image sensor according to the first embodiment.

As shown in fig. 10, in a conventional device configuration in which the logic circuit 913 is connected to the main processor 914 and the memory 915 via the bus 919, the logic circuit 913, the main processor 914, and the memory 915 are incorporated in a single layer. Sequential processing therefore enables complex programs to be executed in a flexible manner.

However, since the memory 915 is shared between circuits (also referred to as arithmetic units) that perform various operations, there are disadvantages such as performance degradation accompanying an increase in the number of processor cores, and parallel processing requires a long period of time. For example, in performing the various operations shown in fig. 6, the main processor 914 needs to retrieve the target data sets one by one from the memory 915 via the bus 919 and perform the operations by sequentially inputting the data sets to the logic circuit 913. Therefore, in the conventional apparatus configuration, as shown in fig. 11, the processing follows a sequential flow in which each data set D is processed in turn.

For this reason, for example, in the case of processing 1000 instructions of the same level, since only a single instruction can be executed per clock, at least 1000 clock cycles are required to process all the instructions, as shown in fig. 12.

In contrast, the image sensor 10 according to the first embodiment has a stacked structure formed by stacking the chips 110, 120, 130, 140, and 150 that perform various operations. Therefore, as shown in fig. 13, in the image sensor 10, the flexible logic circuit 13 can directly retrieve data from the memory 15 and process it.

Thus, this stacked structure offers advantages: for example, a complicated program can be executed in a flexible manner, and by generating registers and arithmetic circuits in the flexible logic circuit 13, data waiting in the main processor 14 can be retrieved and processed in parallel in the flexible logic circuit 13. For example, as shown in fig. 14, the image sensor 10 can perform pipeline processing in which a plurality of data sets D are processed in parallel.

In this way, as a result of enabling parallel processing, real-time performance can be enhanced. Further, even when the next process is of a different type, the circuit configuration of the FPGA 131 can be modified, so that a complicated program can also be executed in a flexible manner.

For example, if two instructions can be executed per clock, that is, if the degree of parallelism in pipeline processing is equal to 2, the number of clock cycles required to process 1000 instructions of the same level is reduced to 500, as shown in fig. 15, which is half the number required in the conventional device configuration shown in fig. 12. That is, by further increasing the parallelism in pipeline processing, the number of clock cycles required to process instructions of the same level can be reduced to about 1/N, where N is the degree of parallelism.
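The cycle-count comparison in figs. 12 and 15 reduces to simple arithmetic; the sketch below computes it, with ceiling division as the only added assumption.

```python
from math import ceil

def cycles_needed(instructions: int, parallelism: int) -> int:
    """Clock cycles to process the given instructions when `parallelism`
    instructions complete per clock (1 = sequential, as in fig. 12)."""
    return ceil(instructions / parallelism)

print(cycles_needed(1000, 1))  # conventional sequential flow: 1000 cycles
print(cycles_needed(1000, 2))  # pipeline parallelism of 2:     500 cycles
print(cycles_needed(1000, 4))  # pipeline parallelism of 4:     250 cycles
```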

Further, as shown in fig. 14, if machine learning is applied to the first instance of the process S1, the number of data sets D to be processed at and after the second instance of the process S2 can be reduced, thereby enabling even faster processing.

Meanwhile, machine learning based on the analysis of image data in the server 3 may also be used to add a new circuit to the FPGA 131 or to modify the circuit configuration so as to enhance the processing speed (for example, by increasing parallelism or eliminating some functions).

2.12 Action/effect

As described above, according to the first embodiment, the parameters and circuit configuration of the FPGA 131 can be modified based on the image data obtained by the image sensor 10 so as to correct image degradation. As a result, accurate image data can be obtained even when the image sensor 10 has deteriorated.

Further, the image sensor 10 according to the first embodiment has a stacked structure formed by stacking the light receiving chip 110, the analog/logic chip 120, the memory chip 150, the flexible logic chip 130, and the processor chip 140. As a result, the light receiving unit 11, the high-speed signal processing circuit 12, the memory 15, the flexible logic circuit 13, and the main processor 14 can be connected without involving a bus. Therefore, it is possible to execute a complicated program in a flexible manner, retrieve waiting data from the main processor 14, generate registers and arithmetic circuits in the FPGA 131, and perform parallel pipeline processing. This enhances real-time performance and enables complex programs to be processed in a flexible manner.

3. Second embodiment

The second embodiment is described in detail below with reference to the drawings. In the first embodiment described above, the light receiving unit 11 and the high-speed signal processing circuit 12 are formed on different chips (the light receiving chip 110 and the analog/logic chip 120, respectively, see fig. 3). In contrast, in the second embodiment, a description is given of a case where the light receiving unit 11 and the high-speed signal processing circuit 12 are formed on the same chip with reference to an example. In the following description, the same configuration, operation, and effect as those of the first embodiment will not be described.

3.1 Device configuration

Fig. 16 is a block diagram showing an exemplary schematic configuration of a communication apparatus according to the second embodiment. As shown in fig. 16, for example, the image sensor 20 according to the second embodiment includes a light receiving unit/high-speed signal processing circuit 21, a flexible logic circuit 13, a main processor 14, a memory 15, a driver 16, and a nonvolatile memory 17. Here, the flexible logic circuit 13, the main processor 14, the memory 15, the driver 16 and the nonvolatile memory 17, and the transceiver unit 18 may be the same as the constituent elements described with reference to fig. 2 according to the first embodiment.

3.2 Chip configuration

Fig. 17 is a schematic diagram showing the chip configuration of the image sensor according to the second embodiment. In fig. 17, as in fig. 3, the driver 16 and the nonvolatile memory 17 are omitted for the sake of simplicity.

As shown in fig. 17, in the image sensor 20, the optical sensor array 111 constituting the light receiving unit 11, and the pixel circuit 121, the ADC 122, the CDS circuit 123, and the gain adjustment circuit 124 constituting the high-speed signal processing circuit 12, are formed on a single light receiving chip 110.

3.3 Action/effect

According to the configuration described above, in the same manner as in the first embodiment, the parameters and circuit configuration of the FPGA 131 can be modified based on the image data obtained by the image sensor 20 so as to correct image degradation. As a result, accurate image data can be obtained even when the image sensor 20 has deteriorated.

The remaining configuration, the remaining operation, and the remaining effects may be the same as those of the first embodiment. Therefore, detailed description thereof will not be repeated.

4. Third embodiment

In the foregoing first embodiment, the memory 15 is arranged immediately below the high-speed signal processing circuit 12. However, the location of the memory 15 is not limited to being immediately below the high-speed signal processing circuit 12. In this regard, in the third embodiment, a description is given of a case where the memory 15 is arranged as the lowermost layer in the stacked structure with reference to an example. In the following description, the same configuration, operation, and effects as those of the above-described embodiment will not be described.

4.1 Device configuration

Fig. 18 is a block diagram showing an exemplary schematic configuration of a communication apparatus according to the third embodiment. As shown in fig. 18, the image sensor 30 according to the third embodiment has, for example, the same configuration as the image sensor 10 according to the first embodiment described with reference to fig. 2, except that the memory 15 is disposed below the main processor 14 as the lowermost layer. Further, in the image sensor 30, a signal processing circuit 32 replaces the high-speed signal processing circuit 12 of the image sensor 10. The signal processing circuit 32 may be a signal processing circuit capable of high-speed reading in the same manner as the high-speed signal processing circuit 12, or may be a signal processing circuit that reads at a lower speed than the high-speed signal processing circuit 12. The remaining configuration may be the same as the configuration described with reference to fig. 2 according to the first embodiment.

4.2 Chip configuration

Fig. 19 is a schematic diagram showing the chip configuration of the image sensor according to the third embodiment. In fig. 19, as in fig. 3, the driver 16 and the nonvolatile memory 17 are omitted for the sake of simplicity.

As shown in fig. 19, in the image sensor 30, a through wiring 142 is laid out on the processor chip 140, which lies between the flexible logic chip 130 and the memory chip 150, to directly connect the FPGA 131 on the flexible logic chip 130 and the programmable memory region 152 of the memory region 151 on the memory chip 150. Therefore, the FPGA 131 and the programmable memory region 152 are directly connected to each other via the through wiring 142.

Here, the through wiring 142 may be, for example, a TSV (through-silicon via) passing through the processor chip 140.

4.3 Action/effect

According to such a configuration as well, in the same manner as in the above-described embodiments, the parameters and circuit configuration of the FPGA 131 can be modified based on the image data obtained by the image sensor 30 so as to correct image degradation. As a result, accurate image data can be obtained even when the image sensor 30 has deteriorated.

The remaining configuration, the remaining operation, and the remaining effects may be the same as those of the first embodiment. Therefore, detailed description thereof will not be repeated.

5. Fourth embodiment

In the foregoing third embodiment, the light receiving unit 11 and the signal processing circuit 32 are formed on different chips (the light receiving chip 110 and the analog/logic chip 120, respectively, see fig. 19). In contrast, in the fourth embodiment, a description is given of a case where the light receiving unit 11 and the signal processing circuit 32 are formed on the same chip with reference to an example. In the following description, the same configuration, operation, and effects as those of the above-described embodiment will not be described.

5.1 Device configuration

Fig. 20 is a block diagram showing an exemplary schematic configuration of a communication apparatus according to the fourth embodiment. As shown in fig. 20, for example, the image sensor 40 according to the fourth embodiment includes a light receiving unit/signal processing circuit 41, the flexible logic circuit 13, the main processor 14, the memory 15, the driver 16, and the nonvolatile memory 17. Here, the flexible logic circuit 13, the main processor 14, the memory 15, the driver 16, the nonvolatile memory 17, and the transceiver unit 18 may be the same as the constituent elements described with reference to fig. 2 according to the first embodiment.

5.2 Chip configuration

Fig. 21 is a schematic diagram showing the chip configuration of the image sensor according to the fourth embodiment. In fig. 21, as in fig. 3, the driver 16 and the nonvolatile memory 17 are omitted for the sake of simplicity.

As shown in fig. 21, in the image sensor 40, the optical sensor array 111 constituting the light receiving unit 11, and the pixel circuit 121, the ADC 122, the CDS circuit 123, and the gain adjustment circuit 124 constituting the signal processing circuit 32, are formed on a single light receiving chip 110.

5.3 Action/effect

According to such a configuration as well, in the same manner as in the above-described embodiments, the parameters and circuit configuration of the FPGA 131 can be modified based on the image data obtained by the image sensor 40 so as to correct image degradation. As a result, accurate image data can be obtained even when the image sensor 40 has deteriorated.

The remaining configuration, the remaining operation, and the remaining effects may be the same as those of the above-described embodiment. Therefore, detailed description thereof will not be repeated.

6. Fifth embodiment

In the fourth embodiment described above, the light receiving unit 11 and the signal processing circuit 32 are formed on a single light receiving chip 110 (see fig. 21). In contrast, in the fifth embodiment, a description is given of a case where the signal processing circuit 32 and the flexible logic circuit 13 are formed on the same chip with reference to an example. In the following description, the same configuration, operation, and effects as those of the above-described embodiment will not be described.

6.1 Device configuration

Fig. 22 is a block diagram showing an exemplary schematic configuration of a communication apparatus according to the fifth embodiment. As shown in fig. 22, for example, the image sensor 50 according to the fifth embodiment includes a light receiving unit 11, a signal processing circuit/flexible logic circuit 53, a main processor 14, a memory 15, a driver 16, and a nonvolatile memory 17. Here, the light receiving unit 11, the main processor 14, the memory 15, the driver 16 and the nonvolatile memory 17, and the transceiver unit 18 may be the same as the constituent elements described with reference to fig. 2 according to the first embodiment.

6.2 Chip configuration

Fig. 23 is a schematic diagram showing the chip configuration of the image sensor according to the fifth embodiment. In fig. 23, as in fig. 3, the driver 16 and the nonvolatile memory 17 are omitted for the sake of simplicity.

As shown in fig. 23, in the image sensor 50, the pixel circuit 121, the ADC 122, the CDS circuit 123, and the gain adjustment circuit 124 constituting the signal processing circuit 32, together with the FPGA 131 (and the logic circuit 132) constituting the flexible logic circuit 13, are formed on a single flexible logic chip 130.

6.3 Action/effect

According to such a configuration as well, in the same manner as in the above-described embodiments, the parameters and circuit configuration of the FPGA 131 can be modified based on the image data obtained by the image sensor 50 so as to correct image degradation. As a result, accurate image data can be obtained even when the image sensor 50 has deteriorated.

The remaining configuration, the remaining operation, and the remaining effects may be the same as those of the above-described embodiment. Therefore, detailed description thereof will not be repeated.

7. Sixth embodiment

In the above-described first to fifth embodiments, the images respectively obtained by the image sensors 10, 20, 30, 40, and 50 are analyzed in the server 3 to identify the cause of degradation, and based on the identified cause, update data for the setting data and/or circuit data of the FPGA 131 is generated in the server 3. In contrast, in the sixth embodiment, a description is given of a case where the analysis of the image data and the generation of the update data are performed in the communication apparatus.

7.1 Device configuration

Fig. 24 is a block diagram showing an exemplary schematic configuration of a communication apparatus according to the sixth embodiment. As shown in fig. 24, the image sensor 10 in the communication apparatus according to the sixth embodiment has the same configuration as, for example, the image sensor 10 according to the first embodiment described with reference to fig. 2. However, in the sixth embodiment, for example, the main processor 14 analyzes the processed image data stored in the memory 15, identifies the cause of degradation, and generates update data for the setting data and/or circuit data of the FPGA 131 accordingly. In the same manner as in the above-described embodiments, the update data for the setting data and/or the circuit data is stored in the predetermined programmable memory area 152 of the memory 15 and set in the FPGA 131.

Meanwhile, in the same manner as the server 3 according to the above-described embodiments, the main processor 14 may be configured to generate a learning model by performing machine learning on past data, and to generate the update data for the setting data and/or the circuit data using the learning model.
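A minimal sketch of this on-device path follows, assuming duck-typed `memory` and `fpga` objects and placeholder `analyze`/`generate_update` stubs in place of the main processor 14's actual routines, which the present description does not specify.

```python
from typing import Optional, Tuple

def analyze(image_data) -> Optional[str]:
    """Placeholder analysis: return the processing stage in which degradation
    was identified (e.g. "shading_correction_303"), or None if none is found."""
    return None  # stub; the real analysis runs on the main processor 14

def generate_update(stage: str) -> Tuple[dict, bytes]:
    """Placeholder generation of new setting data and circuit data."""
    return {"stage": stage}, b""  # stub values

def update_fpga_on_device(memory, fpga) -> bool:
    """Sixth-embodiment flow: analyze processed image data in the memory 15,
    generate update data, store it in the programmable memory area 152,
    and set/embed it in the FPGA 131."""
    image_data = memory.read_processed_image()
    stage = analyze(image_data)
    if stage is None:
        return False  # no degradation; nothing to update
    setting_data, circuit_data = generate_update(stage)
    memory.programmable_region.write(setting_data, circuit_data)
    fpga.reconfigure(setting_data, circuit_data)
    return True
```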

7.2 Variations of the device configuration

A configuration based on the first embodiment is shown in fig. 24. However, the configuration of the image sensor need not be based on the first embodiment. Alternatively, for example, the configuration may be based on the image sensor 20 according to the second embodiment as shown in fig. 25; or the configuration may be based on the image sensor 30 according to the third embodiment as shown in fig. 26; or the configuration may be based on the image sensor 40 according to the fourth embodiment as shown in fig. 27; or the configuration may be based on the image sensor 50 according to the fifth embodiment as shown in fig. 28.

7.3 Action/effect

In this way, even when the communication device is configured to perform the operations from the analysis of the image data to the generation of the update data, the parameters and circuit configuration of the FPGA 131 can be modified, in the same manner as in the above-described embodiments, based on the image data obtained by any one of the image sensors 10, 20, 30, 40, and 50, so as to correct image degradation. As a result, accurate image data can be obtained even when the image sensor concerned among the image sensors 10, 20, 30, 40, and 50 has deteriorated.

The remaining configuration, the remaining operation, and the remaining effects may be the same as those of the above-described embodiment. Therefore, detailed description thereof will not be repeated.

8. Seventh embodiment

In the sixth embodiment described above, the main processor 14 generates a learning model by performing machine learning, and generates update data for the setting data and/or the circuit data using the learning model. However, in the case where operations from the analysis of the image data to the generation of the update data are performed in the communication apparatus, a dedicated chip for performing machine learning may be mounted in the communication apparatus.

8.1 Device configuration

Fig. 29 is a block diagram showing an exemplary schematic configuration of a communication apparatus according to the seventh embodiment. As shown in fig. 29, the image sensor 60 in the communication device according to the seventh embodiment is configured by, for example, adding a DNN (Deep Neural Network) circuit 61 for performing machine learning to the configuration of the image sensor 10 described with reference to fig. 24. The DNN circuit 61 may be arranged, for example, between the flexible logic circuit 13 and the main processor 14.

8.2 DNN analysis procedure

Fig. 30 is a diagram for explaining an example of the DNN analysis operation (machine learning operation) according to the seventh embodiment. As shown in fig. 30, among the five steps described with reference to fig. 6 according to the first embodiment (i.e., the photoelectric conversion step S100, the signal processing step S200, the base step S300, the control system step S400, and the picture formation step S500), the results of the operations in the signal processing step S200, the base step S300, the control system step S400, and the picture formation step S500 are input to the input layer of the DNN analysis step S600. In the DNN analysis step S600, weights are obtained for the edges between the nodes (also referred to as neurons) of successive layers, from the input layer through the hidden layers to the output layer, and a learning model is created to ensure that the most appropriate setting data and/or circuit data for reducing degradation in the image data appear at the output layer.

Using the learning model created in the above-described manner, the DNN circuit 61 and the main processor 14 generate update data containing the most appropriate setting data and/or circuit data for reducing degradation in the image data, and store the update data in the programmable memory area 152 of the memory 15.
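As a loose illustration of the DNN analysis step S600, the sketch below runs a single forward pass through a tiny fully connected network whose inputs stand in for the per-step operation results and whose outputs stand in for candidate setting values. The layer sizes, random weights, and use of NumPy are assumptions for illustration; obtaining the weights (the learning itself) is outside this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 4 inputs (results of steps S200, S300, S400, S500),
# 8 hidden nodes, 3 outputs (candidate setting values).
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def dnn_forward(step_results: np.ndarray) -> np.ndarray:
    """One forward pass: weighted edges between layers, ReLU hidden layer."""
    hidden = np.maximum(0.0, step_results @ W1)  # hidden-layer activations
    return hidden @ W2                           # output-layer values

print(dnn_forward(np.array([0.2, -0.1, 0.5, 0.3])))  # dummy step results
```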

8.3 Flow of operation

Operations performed when detecting and correcting deterioration of the image sensor 60 are described below with reference to a flowchart. Fig. 31 is a flowchart for explaining an example of the overall operation performed according to the seventh embodiment.

As shown in fig. 31, in the present operation, first, the main processor 14 initializes the repetition count N for monitoring analysis to "1" (step S201). Then, the main processor 14 controls the high-speed signal processing circuit 12 so that image data is read from the light receiving unit 11 (step S202).

Then, with respect to the obtained image data, the main processor 14 and the flexible logic circuit 13 perform the operations of the steps shown in fig. 6, and analyze the image data by inputting the result of the operation of each step to the DNN circuit 61 (step S203). Then, based on the analysis result, the main processor 14 determines whether there is image degradation (step S204).

If there is no image degradation (No in step S204), the main processor 14 ends the current operation. On the other hand, if there is image degradation (Yes in step S204), based on the analysis result obtained in step S203, the main processor 14 and the DNN circuit 61 analyze the position in the image sensor 60 at which the image degradation arises (step S205), and generate new setting data and/or new circuit data accordingly (step S206).

Subsequently, the main processor 14 updates the setting data and/or the circuit data of the FPGA 131 stored in the programmable memory area 152 of the memory area 151 with the generated setting data and/or circuit data (step S207), and modifies the circuit configuration of the FPGA 131 by setting the updated setting data in the FPGA 131 and embedding the updated circuit data in the FPGA 131 (step S208). Meanwhile, when the setting data of the actuator that drives the optical system of the light receiving unit 11, or the setting data of the constituent elements of the high-speed signal processing circuit 12, is updated, the corresponding predetermined parameters in the nonvolatile memory 17 are updated with that setting data, so that the driving of those constituent elements by the driver 16 is adjusted accordingly.

Then, the main processor 14 increments the repetition count N by 1 (step S209), and determines whether the incremented repetition count N is larger than the upper limit value of the repetition count (3 in this example) (step S210). If the repetition count N is equal to or smaller than the upper limit value (No in step S210), the system control returns to step S202, and the main processor 14 performs the subsequent operation. On the other hand, if the repetition count N is larger than the upper limit value (Yes in step S210), the main processor 14 ends the current operation.
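The flowchart in fig. 31 maps onto a short control loop. The sketch below mirrors steps S201 to S210 with hypothetical callables standing in for the hardware operations; only the upper limit of 3 repetitions comes from the text.

```python
from dataclasses import dataclass

MAX_REPETITIONS = 3  # upper limit of the repetition count in this example

@dataclass
class AnalysisResult:
    degraded: bool
    stage: str = ""  # stage in which degradation was identified

def monitor_and_correct(read_image, analyze_with_dnn, apply_update) -> bool:
    """Steps S201-S210: repeat read -> analyze -> update until the image is
    free of degradation or the repetition count exceeds the upper limit."""
    n = 1                                 # S201: repetition count N = 1
    while True:
        image = read_image()              # S202: read image data
        result = analyze_with_dnn(image)  # S203: analyze via DNN circuit 61
        if not result.degraded:           # S204: no degradation -> done
            return True
        apply_update(result)              # S205-S208: locate cause, update FPGA
        n += 1                            # S209: increment N
        if n > MAX_REPETITIONS:           # S210: stop after the upper limit
            return False
```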

8.4 Action/effect

In this way, by incorporating the DNN circuit 61 into the communication apparatus, the operations from analyzing the image data to generating the update data can be performed in the communication apparatus based on machine learning. As a result, accurate image data can be obtained even when the image sensor concerned has deteriorated.

Meanwhile, in the seventh embodiment, the description is given with reference to the image sensor 10 according to the sixth embodiment, described with reference to fig. 24. However, this is not the only possible case. Alternatively, the description may be applied to any one of the image sensors 20, 30, 40, and 50 described with reference to figs. 25 to 28, respectively.

The remaining configuration, the remaining operation, and the remaining effects may be the same as those of the above-described embodiment. Therefore, detailed description thereof will not be repeated.

9. Eighth embodiment

In the first to fifth embodiments described above, an explanation is given of a case where the operations from analyzing the image data to generating the update data are performed in the server 3. On the other hand, in the sixth embodiment, a description is given of a case where those operations are performed in the communication apparatus. However, the configuration is not limited to one in which only one of the server 3 and the communication device performs the operations from analyzing the image data to generating the update data.

For example, as shown in fig. 32, either the server 3 or the communication device may be selected to perform the operations from analyzing the image data to generating the update data.

For example, when the communication device is a mobile device such as a drone, an automobile, or an autonomous robot, the communication device may be configured to perform the operations from analyzing the image data to generating the update data while in motion; when the communication device is not in motion, the operations may be performed in the server 3.

Alternatively, among the operations from analyzing the image data to generating the update data, some operations may be performed in the server 3 and the others in the communication device.

Whether the operations from analyzing the image data to generating the update data are performed in the server 3 or in the communication device 2 may be switched by the main processor 14 or by an application processor (switching unit) (not shown), as sketched below.
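One way to picture that switching decision is the sketch below: on-device processing while in motion, server-side otherwise. The predicate names and the policy details are assumptions for illustration, not the switching unit's actual interface.

```python
def choose_update_path(in_motion: bool, server_reachable: bool) -> str:
    """Hypothetical switching policy: analyze image data and generate update
    data locally while moving; defer to the server 3 when stationary."""
    if in_motion or not server_reachable:
        return "local"   # main processor 14 (and DNN circuit 61, if present)
    return "server"      # upload image data to the server 3 for analysis

print(choose_update_path(in_motion=True, server_reachable=True))   # local
print(choose_update_path(in_motion=False, server_reachable=True))  # server
```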

Meanwhile, in the eighth embodiment, the description is made with reference to the image sensor according to the first embodiment described with reference to fig. 2 and the image sensor 10 according to the sixth embodiment described with reference to fig. 24. However, this is not the only possible case, and the description may also be applied to the other embodiments.

The remaining configuration, the remaining operation, and the remaining effects may be the same as those of the above-described embodiment. Therefore, detailed description thereof will not be repeated.

10. Application examples

The technology according to the present disclosure may be applied to a variety of products. For example, the technology may be implemented as a device installed in any type of moving object, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility vehicle, an airplane, a drone, a ship, a robot, construction equipment, or an agricultural machine (tractor).

Fig. 33 is a block diagram showing an exemplary schematic configuration of a vehicle control system 7000, which represents an example of a mobile object control system to which the technology according to the present disclosure can be applied. The vehicle control system 7000 includes a plurality of electronic control units connected to each other via a communication network 7010. In the example shown in fig. 33, the vehicle control system 7000 includes a power train control unit 7100, a vehicle body control unit 7200, a battery control unit 7300, a vehicle external information detection unit 7400, a vehicle internal information detection unit 7500, and an integrated control unit 7600. The communication network 7010 connecting the plurality of control units may be an in-vehicle communication network compatible with any standard, such as CAN (Controller Area Network), LIN (Local Interconnect Network), LAN (Local Area Network), or FlexRay (registered trademark).

Each control unit includes: a microcomputer for performing arithmetic processing according to various programs; a memory unit for storing the programs to be executed by the microcomputer and the parameters to be used in various arithmetic processes; and a driving circuit that drives the various devices targeted for control. Further, each control unit includes a network I/F for communicating with other control units via the communication network 7010, and a communication I/F for wired or wireless communication with devices or sensors installed inside or outside the vehicle concerned. Referring to fig. 33, the functional configuration of the integrated control unit 7600 includes a microcomputer 7610, a general communication I/F 7620, a dedicated communication I/F 7630, a positioning unit 7640, a beacon receiving unit 7650, a vehicle interior device I/F 7660, an audio-video output unit 7670, an in-vehicle network I/F 7680, and a memory unit 7690. In the same manner, the other control units also include a microcomputer, a communication I/F, and a memory unit.

The power train control unit 7100 controls the operation of devices related to the power train of the vehicle according to various programs. For example, the power train control unit 7100 functions as a control device for: a driving force generation device that generates a driving force of the vehicle, such as an internal combustion engine or a drive motor; a driving force transmission mechanism that transmits the driving force to the wheels; a steering mechanism that adjusts the steering angle of the vehicle; and a brake device that generates a braking force of the vehicle. The power train control unit 7100 may also function as a control device for an ABS (Antilock Brake System) or ESC (Electronic Stability Control).

The vehicle state detection unit 7110 is connected to the power train control unit 7100. For example, the vehicle state detection unit 7110 includes at least one of: a gyro sensor for detecting the angular velocity of the axial rotational motion of the vehicle body; an acceleration sensor for detecting the acceleration of the vehicle; or sensors for detecting the operation amount of the accelerator pedal, the operation amount of the brake pedal, the steering angle of the steering wheel, or the engine speed or wheel rotation speed. The power train control unit 7100 performs arithmetic processing using signals input from the vehicle state detection unit 7110, and controls the internal combustion engine, the drive motor, the electric power steering apparatus, and the brake apparatus.

The vehicle body control unit 7200 controls the operation of various devices mounted in the vehicle body according to various programs. For example, the vehicle body control unit 7200 functions as a control device for a keyless entry system, a smart key system, a power window device, and various lamps such as head lamps, tail lamps, brake lamps, turn signal lamps, and fog lamps. In this case, the vehicle body control unit 7200 may receive input of radio waves transmitted from a portable device that substitutes for a key, or input of signals from various switches. Upon receiving such input of radio waves or signals, the vehicle body control unit 7200 controls the door lock device, the power window device, and the lamps of the vehicle.

The battery control unit 7300 controls the secondary battery 7310, which is the power source for the drive motor, according to various programs. For example, information such as the battery temperature, the battery output voltage, and the remaining battery capacity is input to the battery control unit 7300 from a battery device including the secondary battery 7310. The battery control unit 7300 performs arithmetic processing using such signals, and controls the temperature adjustment of the secondary battery 7310 and the cooling apparatus mounted in the battery apparatus.

The vehicle external information detection unit 7400 detects information about the outside of the vehicle in which the vehicle control system 7000 is installed. For example, at least one of the imaging unit 7410 or the vehicle external information detector 7420 is connected to the vehicle external information detection unit 7400. The imaging unit 7410 includes at least one of a ToF (Time of Flight) camera, a stereo camera, a monocular camera, an infrared camera, or some other camera. The vehicle external information detector 7420 includes, for example, at least one of an environmental sensor for detecting the current weather or meteorological phenomenon, or a surrounding information detection sensor for detecting surrounding vehicles, obstacles, and pedestrians around the vehicle in which the vehicle control system 7000 is installed.

For example, the environmental sensor may be at least one of a raindrop sensor for detecting rainy weather, a fog sensor for detecting fog, a sunlight sensor for detecting the amount of sunlight, or a snowfall sensor for detecting snowfall. The surrounding information detection sensor may be at least one of an ultrasonic sensor, a radar device, or a LIDAR (Light Detection and Ranging) device. The imaging unit 7410 and the vehicle external information detector 7420 may be provided as separate sensors or devices, or may be provided as a device formed by integrating a plurality of sensors and devices.

An example of the mounting positions of the imaging unit 7410 and the vehicle external information detector 7420 is shown in fig. 34. Here, for example, the imaging units 7910, 7912, 7914, 7916, and 7918 are mounted at least at the front nose, the side mirrors, the rear bumper, the back door, and the upper portion of the windshield inside the vehicle. The imaging unit 7910 mounted at the front nose and the imaging unit 7918 mounted at the upper portion of the windshield inside the vehicle mainly obtain images of the front of the vehicle 7900. The imaging units 7912 and 7914 mounted on the side mirrors mainly obtain images of the sides of the vehicle 7900. The imaging unit 7916 mounted on the rear bumper or the back door mainly obtains images of the rear of the vehicle 7900. The imaging unit 7918 mounted at the upper portion of the windshield inside the vehicle is mainly used to detect preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, and lanes.

Meanwhile, an example of the imaging range of each of the imaging units 7910, 7912, 7914, and 7916 is shown in fig. 34. The imaging range "a" indicates the imaging range of the imaging unit 7910 mounted at the front nose; the imaging ranges "b" and "c" indicate the imaging ranges of the imaging units 7912 and 7914 mounted on the side mirrors, respectively; and the imaging range "d" indicates the imaging range of the imaging unit 7916 mounted on the rear bumper or the back door. For example, by superimposing the image data obtained by the imaging units 7910, 7912, 7914, and 7916, an overhead image of the vehicle 7900 viewed from above is obtained.

In the vehicle 7900, the vehicle external information detectors 7920, 7922, 7924, 7926, 7928, and 7930 mounted at the front, rear, sides, and corners of the vehicle and at the upper portion of the windshield inside the vehicle may be, for example, ultrasonic sensors or radar devices. The vehicle external information detectors 7920, 7926, and 7930 mounted at the front nose, the rear bumper, the back door, and the upper portion of the windshield inside the vehicle may be, for example, LIDAR devices. These vehicle external information detectors 7920 to 7930 are mainly used to detect preceding vehicles, pedestrians, and obstacles.

Referring back to the description of fig. 33, the vehicle external information detection unit 7400 causes the imaging unit 7410 to capture an image of the outside of the vehicle, and receives the captured image data. Further, the vehicle external information detection unit 7400 receives detection information from the vehicle external information detector 7420 connected thereto. If the vehicle external information detector 7420 is an ultrasonic sensor, a radar device, or a LIDAR device, the vehicle external information detection unit 7400 transmits ultrasonic waves or electromagnetic waves and receives information on the reflected waves. Then, based on the received information, the vehicle external information detection unit 7400 may perform an object detection operation or a distance detection operation for detecting people, vehicles, obstacles, traffic signs, and characters on the road surface. Further, based on the received information, the vehicle external information detection unit 7400 may perform an environment recognition operation for recognizing rainfall, fog, or road surface conditions. In addition, the vehicle external information detection unit 7400 may calculate the distance to an object outside the vehicle based on the received information.

Further, based on the received image data, the vehicle external information detection unit 7400 may perform an image recognition operation or a distance detection operation for recognizing people, vehicles, obstacles, traffic signs, and characters on the road surface. Further, the vehicle external information detection unit 7400 may perform operations such as distortion correction or position adjustment on the received image data, and may synthesize image data captured by different imaging units 7410 to generate an overhead image or a panoramic image. In addition, the vehicle external information detection unit 7400 may perform a viewpoint conversion operation using image data captured by different imaging units 7410.

The vehicle interior information detection unit 7500 detects information about the vehicle interior. For example, the vehicle interior information detection unit 7500 is connected to a driver state detection unit 7510 that detects the state of the driver. The driver state detection unit 7510 may include a camera for capturing images of the driver, a biosensor for detecting biological information of the driver, and a microphone for collecting sound inside the vehicle. The biosensor is arranged, for example, in a seat or the steering wheel, and detects biological information of a person sitting in the seat or of the driver holding the steering wheel. The vehicle interior information detection unit 7500 may calculate the degree of fatigue or concentration of the driver based on the detection information input from the driver state detection unit 7510, or may determine whether the driver is dozing off. Further, the vehicle interior information detection unit 7500 may perform an operation such as noise cancellation on the collected sound signal.

The integrated control unit 7600 controls the overall operations performed in the vehicle control system 7000 according to various programs. The input unit 7800 is connected to the integrated control unit 7600. For example, the input unit 7800 is implemented using a device operable by a passenger, such as a touch panel, buttons, a microphone, switches, or levers. Data obtained by performing voice recognition on voice input from the microphone may be input to the integrated control unit 7600. The input unit 7800 may be, for example, a remote control device using infrared rays or some other radio waves, or an externally connected device such as a cellular phone or a PDA (Personal Digital Assistant) compatible with the operation of the vehicle control system 7000. Alternatively, for example, the input unit 7800 may be a camera, in which case a passenger may input information using gestures. Still alternatively, data may be obtained by detecting the motion of a wearable device worn by a passenger. Further, the input unit 7800 may include, for example, an input control circuit that generates an input signal based on the information input by the passenger via the input unit 7800 and outputs the input signal to the integrated control unit 7600. By operating the input unit 7800, a passenger inputs various data and instructs processing operations to the vehicle control system 7000.

The memory unit 7690 may include a ROM (Read Only Memory) for storing various programs to be executed by the microcomputer, and a RAM (Random Access Memory) for storing various parameters, calculation results, and sensor values. The memory unit 7690 may be implemented using a magnetic memory device such as an HDD (Hard Disk Drive), a semiconductor memory device, an optical memory device, or a magneto-optical memory device.

The general communication I/F 7620 is a general-purpose communication I/F for relaying communication with various devices present in the external environment 7750. The general communication I/F 7620 may implement a cellular communication protocol such as GSM (registered trademark) (Global System for Mobile Communications), WiMAX (registered trademark), LTE (registered trademark) (Long Term Evolution), or LTE-A (LTE-Advanced), or some other wireless communication protocol such as wireless LAN (also referred to as Wi-Fi (registered trademark)) or Bluetooth (registered trademark). Further, the general communication I/F 7620 may establish a connection with devices (for example, an application server or a control server) present in an external network (for example, the Internet, a cloud network, or a network dedicated to a commercial operator) via, for example, a base station or an access point. Further, the general communication I/F 7620 may establish a connection with a terminal present in the vicinity of the vehicle (for example, a terminal of a driver, a pedestrian, or a shop) or with an MTC (Machine Type Communication) terminal using, for example, P2P (Peer to Peer) technology.

The dedicated communication I/F 7630 is a communication I/F supporting a communication protocol developed for use in vehicles. For example, the dedicated communication I/F 7630 may implement WAVE (Wireless Access in Vehicle Environment), which is a combination of IEEE 802.11p for the lower layer and IEEE 1609 for the upper layer, DSRC (Dedicated Short Range Communications), or a standard protocol such as a cellular communication protocol. Typically, the dedicated communication I/F 7630 performs V2X communication, a concept that includes one or more of vehicle-to-vehicle communication, vehicle-to-infrastructure communication, vehicle-to-home communication, and vehicle-to-pedestrian communication.

The positioning unit 7640 performs positioning by receiving GNSS (Global Navigation Satellite System) signals from GNSS satellites (for example, GPS (Global Positioning System) signals from GPS satellites), and generates position information containing the latitude, longitude, and altitude of the vehicle. The positioning unit 7640 may identify the current position by exchanging signals with a wireless access point, or may obtain position information from a terminal having a positioning function, such as a cellular phone, a PHS, or a smartphone.

The beacon receiving unit 7650 receives radio waves or electromagnetic waves transmitted from wireless stations installed on roads, and obtains information such as the current position, congestion, road closures, and required travel time. Meanwhile, the function of the beacon receiving unit 7650 may alternatively be included in the dedicated communication I/F 7630.

The vehicle interior device I/F 7660 is a communication interface for relaying connections between the microcomputer 7610 and various in-vehicle devices 7760 present in the vehicle. The vehicle interior device I/F 7660 may establish a wireless connection using a wireless communication protocol such as wireless LAN, Bluetooth (registered trademark), NFC (Near Field Communication), or WUSB (Wireless USB). Alternatively, the vehicle interior device I/F 7660 may establish a wired connection such as USB (Universal Serial Bus), HDMI (registered trademark) (High-Definition Multimedia Interface), or MHL (Mobile High-definition Link) via a connection terminal (not shown) (and a cable, if necessary). For example, the in-vehicle device 7760 may include at least one of a mobile device or wearable device owned by a passenger, or an information device carried into or attached to the vehicle. Further, the in-vehicle device 7760 may include a navigation device for searching for a route to an arbitrary destination. The vehicle interior device I/F 7660 exchanges control signals or data signals with the in-vehicle devices 7760.

The in-vehicle network I/F 7680 is an interface for relaying communication between the microcomputer 7610 and the communication network 7010. The in-vehicle network I/F 7680 transmits and receives signals according to a predetermined protocol supported by the communication network 7010.

The microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 according to various programs and based on information obtained via at least one of the general communication I/F 7620, the dedicated communication I/F 7630, the positioning unit 7640, the beacon receiving unit 7650, the vehicle interior device I/F 7660, and the in-vehicle network I/F 7680. For example, the microcomputer 7610 may calculate control target values for the driving force generation apparatus, the steering mechanism, and the brake apparatus based on the obtained information about the inside and outside of the vehicle, and may output control instructions to the power train control unit 7100. For example, the microcomputer 7610 may perform cooperative control to realize the functions of an ADAS (Advanced Driver Assistance System), including collision avoidance and impact mitigation for the vehicle, following travel based on the inter-vehicle distance, constant-speed travel, vehicle collision warning, and lane departure warning. Further, the microcomputer 7610 may control the driving force generation apparatus, the steering mechanism, and the brake apparatus based on information about the surroundings of the vehicle, thereby performing cooperative control for the purpose of automated driving in which the vehicle travels autonomously without depending on the driver's operation.

The microcomputer 7610 may generate three-dimensional distance information between the vehicle and surrounding objects such as structures and people based on information obtained via at least one of the general communication I/F 7620, the dedicated communication I/F 7630, the positioning unit 7640, the beacon receiving unit 7650, the vehicle interior device I/F 7660, and the in-vehicle network I/F 7680, and may create local map information containing information about the surroundings of the vehicle's current position. Further, based on the obtained information, the microcomputer 7610 may predict dangers such as a vehicle collision, the proximity of a pedestrian, or entry onto a closed road, and may generate a warning signal. The warning signal may be, for example, a signal for generating a warning sound or activating a warning lamp.

The audio-video output unit 7670 transmits an output signal of at least one of audio or video to an output device capable of visually or audibly notifying information to an occupant of the vehicle or to the outside of the vehicle. In the example shown in fig. 33, an audio speaker 7710, a display unit 7720, and an instrument panel 7730 are shown as output devices. The display unit 7720 may include, for example, at least one of an on-board display or a head-up display. The display unit 7720 may also have an AR (Augmented Reality) display function. In addition to these, the output device may be a wearable device such as a headphone or an eyeglass-type display worn by a passenger, or some other device such as a projector or a lamp. When the output device is a display device, it displays the results of various operations performed by the microcomputer 7610 and information received from other control units in various visual forms, such as text, images, tables, and graphs. When the output device is an audio output device, it converts an audio signal composed of reproduced audio data or acoustic data into an analog signal and outputs it audibly.

In the example shown in fig. 33, at least two control units connected via the communication network 7010 may be integrated into a single control unit. Conversely, an individual control unit may be configured as a plurality of control units. Furthermore, the vehicle control system 7000 may include other control units (not shown). Further, in the description given above, some or all of the functions of any control unit may be provided in another control unit. That is, as long as information can be transmitted and received via the communication network 7010, predetermined arithmetic processing may be performed in any control unit. Similarly, a sensor or device connected to one control unit may be connected to another control unit, and a plurality of control units may mutually transmit and receive detection information via the communication network 7010.

Meanwhile, a computer program for implementing the functions of the sensor system 1 described with reference to fig. 1 according to the embodiments may be installed in any of the control units. A computer-readable recording medium storing such a computer program may also be provided. Examples of the recording medium include a magnetic disk, an optical disk, a magneto-optical disk, and a flash memory. Alternatively, instead of using a recording medium, the computer program may be distributed via a network, for example.

In the vehicle control system 7000 described above, the communication apparatus 2 explained with reference to fig. 2 according to the embodiments can be implemented in the integrated control unit 7600 of the application example shown in fig. 33. For example, the main processor 14, the memory 15, and the transceiver unit 18 of the communication device 2 correspond to the microcomputer 7610, the memory unit 7690, and the in-vehicle network I/F 7680 of the integrated control unit 7600, respectively.

Meanwhile, at least some constituent elements of the communication device 2 explained with reference to fig. 2 may be implemented in a module for the integrated control unit 7600 shown in fig. 33 (for example, an integrated circuit module configured on a single die). Alternatively, the sensor system 1 explained with reference to fig. 1 may be implemented using a plurality of control units of the vehicle control system 7000 shown in fig. 33.

Although the present disclosure is described in detail above in the form of embodiments with reference to the drawings, the technical scope of the present disclosure is not limited to those embodiments. That is, the present disclosure should be construed as embodying all modifications, such as other embodiments, additions, alternative constructions, and deletions, that may occur to one skilled in the art and that fall within the basic teachings set forth herein.

The effects described in the embodiments in this written description are merely illustrative and exemplary, and are not limiting. That is, the technology according to the present disclosure can achieve other effects that may occur to those skilled in the art.

Meanwhile, the configurations described below also fall within the technical scope of the present disclosure.

(1) A sensor apparatus, comprising:

a sensor that obtains sensor information;

an FPGA (field programmable gate array) that performs predetermined processing on sensor information obtained by the sensor; and

a memory that stores data for causing the FPGA to perform the predetermined processing.

(2) The sensor device according to (1), wherein the data stored in the memory is updated according to the analysis result of the sensor information.

(3) The sensor device according to (1) or (2), further comprising:

a transmission unit that transmits the sensor information that has been subjected to the predetermined processing to a predetermined network; and

a receiving unit that receives update data for updating the FPGA, the update data being generated according to an analysis result of the sensor information transmitted to the predetermined network, wherein

the data stored in the memory is updated with the update data.

(4) The sensor device according to (3), wherein,

the transmitting unit transmits the sensor information to a predetermined network using wireless communication, and

the receiving unit receives the update data from the predetermined network using wireless communication.

(5) The sensor device according to (4), further comprising:

an encoding unit that encodes the sensor information; and

a decoding unit that releases encoding of the update data.

(6) The sensor device according to (1) or (2), further comprising a processor that:

analyzes the sensor information,

generates update data for updating the FPGA according to the analysis result, and

updates the data stored in the memory with the generated update data.

(7) The sensor device according to (6), further comprising a DNN (Deep Neural Network) circuit that analyzes the sensor information by performing machine learning, wherein the processor analyzes the sensor information based on a result of the machine learning performed by the DNN circuit.
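
A minimal sketch of how the processor of (6) and (7) might act on the DNN circuit's analysis result follows; dnn_analyze, processor_update, and the 0.3 threshold are hypothetical stand-ins, not values from the specification.

```python
# Sketch of (6) and (7): the processor consults the DNN circuit's analysis
# result and regenerates the FPGA data when degradation is detected.

def dnn_analyze(sensor_info: list[float]) -> float:
    # Placeholder for the DNN circuit: returns a degradation score in [0, 1].
    # A toy heuristic stands in for actual machine-learned inference.
    mean = sum(sensor_info) / len(sensor_info)
    return min(1.0, abs(mean - 0.5) * 2)


def processor_update(sensor_info: list[float], memory: dict) -> None:
    score = dnn_analyze(sensor_info)
    if score > 0.3:  # assumed degradation threshold
        # Generate update data (here, a corrective gain parameter) and store
        # it so that the FPGA is reconfigured accordingly.
        memory["setting_data"] = {"gain": 1.0 + score}
```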

(8) The sensor device according to (1) or (2), further comprising:

a transmission unit that transmits the sensor information that has been subjected to the predetermined processing to a predetermined network;

a receiving unit that receives update data for updating the FPGA, the update data being generated according to an analysis result of the sensor information transmitted to the predetermined network;

a processor that analyzes the sensor information and generates update data for updating the FPGA according to the analysis result; and

a switching unit that switches between transmitting the sensor information to the predetermined network using the transmission unit and inputting the sensor information to the processor, wherein

the data stored in the memory is updated with the update data received by the receiving unit or with the update data generated by the processor.
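
The switching unit of (8) can be pictured as routing logic, as in the following sketch; Route, dispatch, and the callable parameters are illustrative assumptions.

```python
# Sketch of the switching unit in (8): route the processed sensor information
# either to the predetermined network or to the on-board processor, and
# update the memory from whichever path produced update data.
from enum import Enum
from typing import Callable, Optional


class Route(Enum):
    NETWORK = "network"  # send via the transmission unit
    LOCAL = "local"      # analyze with the on-board processor


def dispatch(route: Route,
             sensor_info: bytes,
             memory: dict,
             send_to_network: Callable[[bytes], Optional[dict]],
             local_processor: Callable[[bytes], Optional[dict]]) -> None:
    if route is Route.NETWORK:
        update = send_to_network(sensor_info)  # server-generated update data
    else:
        update = local_processor(sensor_info)  # locally generated update data
    if update is not None:
        memory["fpga_data"] = update  # reconfigures the FPGA on next load
```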

(9) The sensor device according to any one of (1) to (8), wherein

the sensor information represents image data, and

the sensor includes

a light receiving unit having a plurality of photoelectric conversion elements, and

a signal processing circuit that reads the image data from the light receiving unit.

(10) The sensor device according to (9), wherein the predetermined processing includes at least one of black level processing, defect correction, shading correction, distortion correction, auto exposure/auto focus/auto white balance adjustment, synchronization, linear matrix processing, gamma correction, luminance color separation, edge enhancement, color difference matrix processing, and resizing/scaling.
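
To make the stage ordering concrete, the predetermined processing of (10) can be modeled as a pipeline of callables, as sketched below with two representative stages; the offset of 64 and the gamma of 2.2 are assumed example values, and in the actual device each stage would be embedded in the FPGA as a circuit configuration.

```python
# Sketch of the predetermined processing in (10) as an ordered pipeline of
# stages. Only two representative stages are shown.

def black_level(frame: list[int]) -> list[int]:
    # Subtract an assumed black-level offset of 64 and clamp at zero.
    return [max(0, px - 64) for px in frame]


def gamma_correction(frame: list[int], gamma: float = 2.2) -> list[int]:
    # Apply an assumed display gamma of 2.2 to 8-bit pixel values.
    return [int(255 * (px / 255) ** (1 / gamma)) for px in frame]


# Defect correction, shading correction, etc. would slot in the same way.
PIPELINE = [black_level, gamma_correction]


def process(frame: list[int]) -> list[int]:
    for stage in PIPELINE:
        frame = stage(frame)
    return frame


print(process([64, 128, 255]))  # -> [0, 136, 223]
```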

(11) The sensor device according to any one of (1) to (10), wherein the data includes

circuit data for embedding a circuit configuration for the predetermined processing in the FPGA, and

setting data containing parameters to be set in the circuit configuration.

(12) The sensor device according to (1) or (2), further comprising a processor that performs the predetermined processing in cooperation with the FPGA.

(13) The sensor device according to any one of (1) to (12), further comprising:

a first chip on which the sensor is mounted;

a second chip on which the FPGA is mounted; and

a third chip on which the memory is mounted, wherein

the sensor device has a stacked structure formed by stacking the first chip to the third chip.

(14) The sensor device of (13), wherein the third chip is positioned between the first chip and the second chip.

(15) The sensor device according to (13) or (14), further comprising a fourth chip on which a processor that performs the predetermined processing in cooperation with the FPGA is mounted, wherein

the stacked structure is formed by stacking the first chip to the fourth chip.

(16) The sensor device according to (15), wherein

the first chip is positioned as the topmost layer of the stacked structure, and

the fourth chip is positioned as the lowermost layer of the stacked structure.

(17) The sensor device according to any one of (13) to (16), wherein

the sensor information represents image data,

the sensor includes

a light receiving unit having a plurality of photoelectric conversion elements, and

a signal processing circuit that reads the image data from the light receiving unit, and

the first chip includes

a fifth chip on which the light receiving unit is mounted, and

a sixth chip on which the signal processing circuit is mounted.

(18) An electronic device, comprising:

a sensor that obtains sensor information;

an FPGA that performs predetermined processing on the sensor information obtained by the sensor; and

a memory that stores data for causing the FPGA to perform the predetermined processing.

(19) A sensor system in which an electronic device and a server are connected via a predetermined network, wherein,

the electronic device includes:

a sensor that obtains sensor information,

an FPGA that performs predetermined processing on the sensor information obtained by the sensor,

a memory that stores data for causing the FPGA to perform the predetermined processing,

a transmission unit that transmits the sensor information that has been subjected to the predetermined processing to the predetermined network, and

a receiving unit that receives update data for updating the FPGA, the update data being generated based on an analysis result of the sensor information transmitted to the predetermined network,

the server

analyzes the sensor information received from the electronic device via the predetermined network,

generates update data for updating the FPGA according to the analysis result, and

transmits the generated update data to the predetermined network, and

the data stored in the memory is updated with the update data received by the receiving unit via the predetermined network.
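
The server-side half of (19) can be sketched as an analyze-generate-transmit loop, as below; analyze, generate_update, and the 0.05 threshold are hypothetical placeholders rather than the specification's actual analysis.

```python
# Sketch of the server's role in (19): analyze the received sensor
# information, generate update data when deterioration is detected, and send
# it back toward the device.
from typing import Callable, Optional


def analyze(sensor_info: bytes) -> float:
    # Placeholder analysis: the fraction of saturated bytes serves as a
    # crude deterioration indicator.
    return sum(1 for b in sensor_info if b == 255) / max(1, len(sensor_info))


def generate_update(deterioration: float) -> Optional[dict]:
    if deterioration < 0.05:  # assumed "healthy" threshold
        return None           # no update data needed
    return {"setting_data": {"exposure_scale": 1.0 - deterioration}}


def serve(sensor_info: bytes, send_downlink: Callable[[dict], None]) -> None:
    update = generate_update(analyze(sensor_info))
    if update is not None:
        send_downlink(update)  # travels the predetermined network back
```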

(20) A control method, comprising:

a step for analyzing sensor information obtained by the sensor; and

a step for modifying, according to an analysis result of the sensor information, at least one of a circuit configuration of an FPGA that performs predetermined processing on the sensor information and a set value of the circuit configuration.
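
Putting the two steps of (20) together, a minimal end-to-end sketch might look as follows; the brightness criteria and the contents of memory are illustrative assumptions only.

```python
# End-to-end sketch of the control method in (20): analyze the sensor
# information, then modify the FPGA circuit configuration and/or its set
# values according to the result.

def control_step(sensor_info: list[int], memory: dict) -> None:
    # Step 1: analyze the sensor information (here, mean brightness).
    mean = sum(sensor_info) / len(sensor_info)

    # Step 2: modify a set value or the circuit configuration itself.
    if mean > 200:   # assumed over-exposure criterion
        memory["setting_data"]["gamma"] = 2.4            # retune a set value
    elif mean < 20:  # assumed darkness suggesting pixel defects
        memory["circuit_data"] = b"<defect-correction>"  # swap circuit config


memory = {"circuit_data": b"<baseline>", "setting_data": {"gamma": 2.2}}
control_step([210, 230, 250], memory)
print(memory["setting_data"]["gamma"])  # -> 2.4
```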

REFERENCE SIGNS LIST

1 sensor system

2 communication device

3 server

10, 20, 30, 40, 50, 60 image sensor

11 light receiving unit

12 high-speed signal processing circuit

13 flexible logic circuit

14 main processor

15 memory

16 driver

17 non-volatile memory

18 transceiver unit

21 light receiving unit/high-speed signal processing circuit

32 signal processing circuit

41 light receiving unit/signal processing circuit

53 signal processing circuit/flexible logic circuit

61 DNN circuit

110 light receiving chip

111 optical sensor array

120 analog/logic chip

121 pixel circuit

122 ADC

123 CDS circuit

124 gain adjustment circuit

130 flexible logic chip

131 FPGA

132 logic circuit

140 processor chip

141 MPU

150 memory chip

151 memory area

152 programmable memory region

181 DAC

182 transmission antenna

183 ADC

184 receiving antenna

101 photoelectric conversion

201 AD/CDS processing

301 black level processing

302 defect correction

303 shading correction

304 distortion correction

401 control system correction

501 AE/AF/AWB

502 synchronization

503 linear matrix processing

504 gamma correction

505 luminance color separation

506 edge enhancement

507 color difference matrix processing

508 resizing/scaling

509 output I/F processing
