Stereo camera depth determination using hardware accelerators

Document No.: 1713844  Publication Date: 2019-12-13

Note: The present technology, "Stereo camera depth determination using hardware accelerators," was designed and created by 亢乐, 李宇鹏, 漆维, and 包英泽 on 2017-12-08. Abstract: Described herein are systems and methods that allow dense depth map estimation given an input image. In one or more embodiments, a neural network model is developed that is significantly different from previous approaches. Embodiments of the deep neural network model include more computationally efficient structures and fewer layers, but still produce good quality results. Further, in one or more embodiments, the deep neural network model may be specifically configured and trained to operate using one or more hardware accelerator components that may accelerate computations and produce good results, even if lower precision bit representations are used during the computations of the hardware accelerator components.

1. An image processing system comprising:

a processor unit; and

a non-transitory computer readable medium or media comprising one or more sequences of instructions which, when executed by the processor unit, cause performance of steps comprising:

receiving a pair of images of a scene, wherein the pair of images includes a first image and a second image;

performing depth map inference using the pair of images and a trained neural network model comprising a plurality of operations, wherein at least some operations of the plurality of operations of the trained neural network model are performed by a hardware accelerator component communicatively coupled to the processor unit; and

outputting a depth map comprising distance information to surfaces in the scene; and

a hardware accelerator component configured to perform at least some operations of the trained neural network model using a bit representation that is different from a bit representation used by the processor unit.

2. The image processing system of claim 1, wherein the processor unit operates using a floating-point bit representation and the hardware accelerator component uses fewer bits and uses a fixed bit representation.

3. The image processing system of claim 1, wherein the trained neural network model is obtained at least in part by training a neural network model using bit representation conversion to simulate operations to be performed by the hardware accelerator component when deploying the trained neural network.

4. The image processing system of claim 3, wherein the step of using bit representation conversion to simulate operations to be performed by the hardware accelerator component when deploying the trained neural network comprises:

converting input data for the operation from a floating-point bit representation used by the processor unit to a fixed-bit representation used by the hardware accelerator unit;

converting operating parameter data for the operation from a floating-point bit representation used by the processor unit to a fixed-bit representation used by the hardware accelerator unit;

dequantizing the fixed-bit representation of the input data and the fixed-bit representation of the operating parameter data to a floating-point representation;

performing the operation using the dequantized floating-point representations of the input data and the operating parameter data; and

outputting a set of result data from the operation in the dequantized floating-point representation.

5. The image processing system of claim 4, wherein the dequantized floating point representation is a floating point bit representation used by the processor unit.

6. The image processing system of claim 3, wherein the neural network model used for training includes one or more data augmentation layers and one or more sampling layers that are removed when creating the trained neural network model to improve processing speed during deployment.

7. The image processing system of claim 1, wherein the trained neural network model reduces computations by including two early convolution operations in the trained neural network model, each early convolution operation operating on image-related data corresponding to the first and second images separately rather than on a set of data representing a combination of the image-related data corresponding to the first and second images, wherein the two early convolution operations share parameters.

8. The image processing system of claim 1, wherein the image processing system performs at least some operations of the trained neural network model using a bit representation different from a bit representation used by the processor unit by performing steps comprising:

converting input data for the operation from a floating-point bit representation used by the processor unit to a fixed-bit representation used by the hardware accelerator unit;

converting operating parameter data for the operation from a floating-point bit representation used by the processor unit to a fixed-bit representation used by the hardware accelerator unit;

performing the operation at the hardware accelerator unit using the fixed-bit representation of the input data and the fixed-bit representation of the operating parameter data to obtain result data; and

dequantizing the fixed-bit representation of the result data to a floating-point representation.

9. The image processing system of claim 8, wherein the result data is interim result data, and the image processing system further performs steps comprising:

performing one or more operations on the dequantized representation of the interim result data using the hardware accelerator unit prior to submitting the result data to the processor unit.

10. An image processing system comprising:

a processor unit; and

a non-transitory computer readable medium or media comprising one or more sequences of instructions which, when executed by the processor unit, cause performance of steps comprising:

receiving a pair of images of a scene, wherein the pair of images includes a first image and a second image;

performing depth map inference using the pair of images and a trained neural network model comprising a plurality of operations, the plurality of operations comprising a plurality of convolutions and deconvolutions, and the trained neural network model configured to reduce computational requirements by:

comprising at least two early convolution operations, each convolution operation operating on image-related data corresponding to the first and second images separately, rather than on a combination of the image-related data corresponding to the first and second images, and wherein the two early convolution operations share parameters; and

excluding a set of data augmentation operations and a set of one or more sampling operations, wherein the set of data augmentation operations and the set of one or more sampling operations are included in a neural network model from which the trained neural network is obtained; and

outputting a depth map comprising distance information to surfaces in the scene; and

a hardware accelerator component communicatively coupled to the processor unit configured to perform at least some operations of the trained neural network model.

11. The image processing system of claim 10, wherein the processor unit operates using a floating-point bit representation and the hardware accelerator component uses fewer bits and uses a fixed bit representation.

12. The image processing system of claim 10, wherein the trained neural network model is obtained at least in part by training a neural network model using bit representation conversion to simulate operations to be performed by the hardware accelerator component when deploying the trained neural network.

13. The image processing system of claim 12, wherein the step of using bit representation conversion to simulate operations to be performed by the hardware accelerator component when deploying the trained neural network comprises:

converting input data for the operation from a floating-point bit representation used by the processor unit to a fixed-bit representation used by the hardware accelerator unit;

converting operating parameter data for the operation from a floating-point bit representation used by the processor unit to a fixed-bit representation used by the hardware accelerator unit;

dequantizing the fixed-bit representation of the input data and the fixed-bit representation of the operating parameter data to a floating-point representation;

performing the operation using the dequantized floating-point representations of the input data and the operating parameter data; and

outputting a set of result data from the operation in the dequantized floating-point representation.

14. The image processing system of claim 10, wherein the image processing system performs at least some operations of the trained neural network model using a bit representation different from a bit representation used by the processor unit by performing steps comprising:

converting input data for the operation from a floating-point bit representation used by the processor unit to a fixed-bit representation used by the hardware accelerator unit;

converting operating parameter data for the operation from a floating-point bit representation used by the processor unit to a fixed-bit representation used by the hardware accelerator unit;

performing the operation at the hardware accelerator unit using the fixed-bit representation of the input data and the fixed-bit representation of the operating parameter data to obtain result data; and

dequantizing the fixed-bit representation of the result data to a floating-point representation.

15. The image processing system of claim 14, wherein the result data is interim result data, and the image processing system further performs steps comprising:

performing one or more operations on the dequantized representation of the interim result data using the hardware accelerator unit prior to submitting the result data to the processor unit.

16. A method of processing image data to obtain depth information relating to a scene captured by a pair of images, the method comprising:

receiving the pair of images of the scene at an image processing system, the pair of images including a first image and a second image, the image processing system comprising:

a processor unit configured to coordinate a workflow of a trained neural network model by allocating at least some computational tasks of the trained neural network model to a hardware accelerator component;

a non-transitory computer readable memory communicatively coupled with the processor unit for storing data related to the pair of images and data comprising one or more sequences of instructions related to the trained neural network; and

a hardware accelerator component communicatively coupled with the processor unit configured to perform at least some operations of the trained neural network model using a bit representation different from a bit representation used by the processor unit;

performing, using the image processing system, depth map inference using the pair of images and the trained neural network model comprising a plurality of operations, wherein at least some operations of the plurality of operations of the trained neural network model are performed by the hardware accelerator component communicatively coupled to the processor unit; and

outputting a depth map comprising distance information to surfaces in the scene.

17. The method of claim 16, wherein the processor unit operates using a floating point bit representation and the hardware accelerator component uses fewer bits and uses a fixed bit representation.

18. The method of claim 16, wherein the neural network model used to train the trained neural network model includes one or more data augmentation layers and one or more sampling layers that are removed when creating the trained neural network model to improve processing speed during deployment.

19. The method of claim 16, wherein the trained neural network model reduces computations by including two early convolution operations in the trained neural network model, each early convolution operation operating on image-related data corresponding to the first and second images separately rather than on a set of data representing a combination of the image-related data corresponding to the first and second images, wherein the two early convolution operations share parameters.

20. The method of claim 16, wherein the image processing system performs at least some operations of the trained neural network model using a bit representation different from a bit representation used by the processor unit by performing steps comprising:

converting input data for the operation from a floating-point bit representation used by the processor unit to a fixed-bit representation used by the hardware accelerator component;

converting operating parameter data for the operation from a floating-point bit representation used by the processor unit to a fixed-bit representation used by the hardware accelerator component;

performing the operation at the hardware accelerator component using the fixed-bit representation of the input data and the fixed-bit representation of the operating parameter data to obtain result data.

Technical Field

The present disclosure relates generally to systems and methods for image processing, and more particularly, to systems and methods for efficiently generating depth information from stereoscopic images.

Background

Depth images and regular images are useful inputs for addressing many computer vision tasks, such as three-dimensional reconstruction, structure from motion, visual simultaneous localization and mapping (SLAM), face recognition, security monitoring, autonomous driving of vehicles, scene understanding, etc. A typical camera acquires color information (red, green, and blue (RGB)) for each pixel of an image. A depth camera or depth camera system attempts to acquire the spatial coordinates of each pixel in the image. Traditionally, depth images and regular images have been captured by two different physical cameras or two different sets of sensors.

Existing depth cameras generally fall into two categories: active depth cameras and passive depth cameras. Active depth cameras emit energy (typically in the form of infrared light or laser light) into the environment, capture reflections of the energy, and calculate depth information based on the reflections. Examples of active cameras include the Kinect system from Microsoft Corporation of Redmond, Washington. However, such systems are expensive, especially compared to passive depth cameras. Furthermore, because such systems typically use infrared emitters and collectors, they do not work well in outdoor environments where sunlight is too strong. Other active depth cameras use lasers, but these systems are very expensive, costing tens of thousands of dollars or more, and tend to consume large amounts of energy.

Passive depth cameras typically measure natural light to estimate depth. Most passive depth cameras are equipped with two cameras, also known as stereo cameras. Depth information is estimated by comparing the disparity of the same scene element as captured in the two camera images. Stereoscopic depth cameras using a naive method simply extract textures or features from the images and measure their disparity in the stereoscopic (e.g., left and right) images. For regions lacking features or textures, such as white walls, bright floors, or areas of uniform color, disparity may not be successfully extracted, and thus depth information cannot be estimated. Unfortunately, non-textured or featureless areas are common in natural scenes. Thus, the depth images produced by stereoscopic depth cameras using naive algorithms often miss many pixels, which severely and negatively impacts the application.
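To make the disparity cue concrete: for a rectified stereo pair, depth follows the standard pinhole-stereo relation depth = f · B / d, where f is the focal length in pixels, B is the baseline between the cameras, and d is the disparity in pixels. The sketch below uses illustrative values, not values taken from this document:

```python
def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Convert a stereo disparity (in pixels) to metric depth.

    Uses the standard pinhole-stereo relation depth = f * B / d, where
    f is the focal length in pixels and B is the baseline (distance
    between the two cameras) in meters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 10 cm baseline,
# 35 px measured disparity -> roughly 2.0 m depth.
depth_m = disparity_to_depth(35.0, 700.0, 0.10)
```

Note that where disparity cannot be measured at all (the featureless regions discussed above), this relation yields no depth value, which is exactly the hole problem the naive methods suffer from.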

Some stereoscopic depth camera systems use complex algorithms to address some of these problems. However, these complex methods generally require high computational power. Thus, their respective products typically require expensive graphics processing units, high-end central processing units, or both. In addition to energy and computational cost, another problem with using complex methods to determine depth is the time required to determine depth by comparing stereo images. Even with increasing processor speeds, this time delay is significant enough that such an approach is impractical for applications that benefit from receiving depth information in real-time or near real-time, such as robotics or autonomous cars. For example, if the delay in determining the depth information is too great, an autonomous vehicle may collide or otherwise cause serious injury.

Therefore, what is needed is a system and method that can provide high quality dense depth maps in real-time or near real-time.

Disclosure of the Invention

Embodiments of the present disclosure provide an image processing system and method for processing image data to obtain depth information relating to a scene captured by a pair of images.

In one aspect of the disclosure, an image processing system includes: a processor unit; and a non-transitory computer readable medium or media comprising one or more sequences of instructions which, when executed by the processor unit, cause performance of steps comprising: receiving a pair of images of a scene, wherein the pair of images includes a first image and a second image; performing depth map inference using the pair of images and a trained neural network model comprising a plurality of operations, wherein at least some operations of the plurality of operations of the trained neural network model are performed by a hardware accelerator component communicatively coupled to the processor unit; and outputting a depth map comprising distance information to surfaces in the scene; and a hardware accelerator component configured to perform at least some operations of the trained neural network model using a bit representation that is different from a bit representation used by the processor unit.

In another aspect of the present disclosure, an image processing system includes: a processor unit; and a non-transitory computer-readable medium or media comprising one or more sequences of instructions which, when executed by the processor unit, cause performance of steps comprising: receiving a pair of images of a scene, wherein the pair of images includes a first image and a second image; performing depth map inference using the pair of images and a trained neural network model comprising a plurality of operations, the plurality of operations comprising a plurality of convolutions and deconvolutions, and the trained neural network model configured to reduce computational requirements by: comprising at least two early convolution operations, each convolution operation operating on image-related data corresponding to the first and second images separately, rather than on a combination of the image-related data corresponding to the first and second images, and wherein the two early convolution operations share parameters; and excluding a set of data augmentation operations and a set of one or more sampling operations, wherein the set of data augmentation operations and the set of one or more sampling operations are included in a neural network model from which the trained neural network is obtained; and outputting a depth map comprising distance information to surfaces in the scene; and a hardware accelerator component, communicatively coupled to the processor unit, configured to perform at least some operations of the trained neural network model.
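The shared-parameter early convolutions described above can be illustrated with a minimal sketch: one kernel is applied to the left and right images separately, instead of a single wider convolution over the concatenated pair, roughly halving the parameters and compute of these early layers. The kernel and image values below are purely illustrative:

```python
def conv2d(image, kernel):
    """Minimal 'valid' 2-D convolution (no padding, stride 1) on nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    acc += image[i + u][j + v] * kernel[u][v]
            row.append(acc)
        out.append(row)
    return out

# One set of weights is applied to BOTH images (a "siamese" arrangement)
# rather than a wider convolution over the concatenated pair.
shared_kernel = [[1.0, 0.0, -1.0]] * 3          # e.g., a vertical-edge filter
left  = [[float((i * 7 + j * 3) % 5) for j in range(8)] for i in range(8)]
right = [[float((i * 2 + j * 5) % 5) for j in range(8)] for i in range(8)]
left_features  = conv2d(left,  shared_kernel)   # 6x6 feature map
right_features = conv2d(right, shared_kernel)   # 6x6 feature map, same weights
```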

In yet another aspect of the disclosure, a method of processing image data to obtain depth information related to a scene captured by a pair of images includes: receiving the pair of images of the scene at an image processing system, the pair of images including a first image and a second image, the image processing system comprising: a processor unit configured to coordinate a workflow of a trained neural network model by allocating at least some computational tasks of the trained neural network model to hardware accelerator components; a non-transitory computer readable memory communicatively coupled with the processor unit for storing data related to the pair of images and data comprising one or more sequences of instructions related to the trained neural network; and a hardware accelerator component communicatively coupled with the processor unit configured to perform at least some operations of the trained neural network model using a bit representation that is different from a bit representation used by the processor unit; performing, using the image processing system, depth map inference using the pair of images and the trained neural network model comprising a plurality of operations, wherein at least some operations of the plurality of operations of the trained neural network model are performed by the hardware accelerator component communicatively coupled to the processor unit; and outputting a depth map comprising distance information to surfaces in the scene.
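The training-time simulation recited in the claims (convert data to the accelerator's fixed-bit representation, dequantize it back to floating point, then perform the operation in floating point) amounts to what is often called fake quantization. A minimal sketch, assuming a simple symmetric signed 8-bit scheme with a hypothetical step size:

```python
def fake_quantize(values, scale):
    """Quantize floats to signed int8 and immediately dequantize back.

    This mimics the accelerator's reduced precision while keeping the
    actual computation in floating point, so the network being trained
    learns weights that tolerate the precision loss at deployment.
    """
    out = []
    for v in values:
        q = max(-128, min(127, round(v / scale)))  # quantize and clamp to int8
        out.append(q * scale)                      # dequantize back to float
    return out

# During training, both a layer's inputs and its weights would be passed
# through fake-quantization before the (float) convolution runs.
weights = [0.123, -0.047, 0.512]
simulated = fake_quantize(weights, scale=0.004)    # step size is hypothetical
```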

Drawings

Reference will now be made to embodiments of the invention, examples of which are illustrated in the accompanying drawings. The drawings are intended to be illustrative, not restrictive. While the invention is generally described in the context of these embodiments, it should be understood that the scope of the invention is not intended to be limited to these particular embodiments. The items in the drawings are not to scale.

Figure ("FIG.") 1 depicts a depth map generator system according to an embodiment of the present disclosure.

FIG. 2 depicts a simplified block diagram of a computing device/information handling system according to embodiments of this document.

FIGS. 3A-3M illustrate an exemplary deep neural network model that has been trained and may be deployed to infer depth information from stereo images, in accordance with embodiments of the present disclosure.

FIGS. 4A-4N illustrate exemplary deep neural network models that may be used in a training phase according to embodiments of the present disclosure.

FIG. 5 depicts a general overall method of training and using a neural network model for depth map estimation, according to an embodiment of the present invention.

FIG. 6 depicts an exemplary method of training a deep neural network model for depth estimation in accordance with an embodiment of the present disclosure.

FIG. 7 depicts a method of fine-tuning (as part of training) a floating point neural network model by simulating different bit representations to produce a neural network for use on a hardware accelerator component that uses the bit representations, according to an embodiment of the present disclosure.

FIG. 8 illustrates a method of fine-tuning (as part of training) a floating point neural network model by simulating a bit representation to produce a neural network for use on a hardware accelerator component that uses the bit representation, according to an embodiment of the disclosure.

FIG. 9 illustrates a method of quantizing values represented in one bit representation scheme to a different bit representation scheme according to an embodiment of the present disclosure.

FIG. 10 depicts a method of providing dense depth map information in real-time (or near real-time) using a trained neural network model with hardware acceleration units, according to an embodiment of the present disclosure.

FIG. 11 illustrates a method of converting between a processor-dependent bit representation and a hardware accelerator component bit representation, performing an integer calculation, and converting the integer back to a floating point number for the next layer, according to an embodiment of the disclosure.
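The deployment-time flow of FIG. 11 (quantize the floating-point inputs and weights to the accelerator's integer representation, accumulate in integers, then dequantize the result for the next layer) can be sketched as follows; the per-tensor scales below are hypothetical:

```python
def quantize(x, scale):
    """Map a float to a signed 8-bit integer: q = round(x / scale), clamped."""
    q = round(x / scale)
    return max(-128, min(127, q))

def dequantize(q, scale):
    """Map an integer back to a float given its quantization scale."""
    return q * scale

# Hypothetical per-tensor scales for activations and weights.
x_scale, w_scale = 0.05, 0.01

x = [0.50, -0.25, 1.00]                  # float input activations (CPU side)
w = [0.02, 0.04, -0.01]                  # float weights (CPU side)

xq = [quantize(v, x_scale) for v in x]   # int8 activations for the accelerator
wq = [quantize(v, w_scale) for v in w]   # int8 weights for the accelerator

# Integer multiply-accumulate, as the accelerator would perform it.
acc = sum(a * b for a, b in zip(xq, wq))

# Dequantize the integer result back to float for the next layer;
# the scale of a product is the product of the two scales.
result = dequantize(acc, x_scale * w_scale)
```

Per claim 9, several consecutive operations could be run on the accelerator before the final result is dequantized and handed back to the processor unit, avoiding round trips.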

Detailed Description

In the following description, for purposes of explanation, specific details are set forth in order to provide an understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details. Furthermore, those skilled in the art will recognize that the embodiments of the invention described below can be implemented in various ways (e.g., as a process, an apparatus, a system, a device, or a method) on a tangible computer-readable medium.

The components or modules illustrated in the drawings are exemplary of embodiments of the invention and are intended to avoid obscuring the invention. It should also be understood that throughout this discussion, components may be described as separate functional units (which may include sub-units), but those skilled in the art will recognize that various components or portions thereof may be divided into separate components or may be integrated together (including being integrated within a single system or component). It should be noted that the functions or operations discussed herein may be implemented as components. The components may be implemented in software, hardware, or a combination thereof.

Furthermore, connections between components or systems within the figures are not intended to be limited to direct connections. Rather, data between these components may be modified, reformatted, or otherwise changed by the intermediate components. Additionally, additional or fewer connections may be used. It should also be noted that the terms "coupled," "connected," or "communicatively coupled" should be understood to include direct connections, indirect connections through one or more intermediate devices, and wireless connections.

Reference in the specification to "one embodiment," "one or more embodiments," "a preferred embodiment," "an embodiment," or "embodiments" means that a particular feature, structure, characteristic, or function described in connection with the embodiment is included in at least one embodiment of the invention, and may be included in more than one embodiment. Moreover, the appearances of the above-described phrases in various places in the specification are not necessarily all referring to the same embodiment or a plurality of the same embodiments.

Certain terminology is used in various places throughout this specification for the purpose of description and should not be construed as limiting. A service, function, or resource is not limited to a single service, single function, or single resource; the use of these terms may refer to a distributable or aggregatable grouping of related services, functions, or resources.

The terms "comprising," "including," and "containing" are to be construed as open-ended terms, and any listing thereafter is exemplary and not intended to be limiting on the listed items. Any headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. Each reference mentioned in this patent document is incorporated herein by reference in its entirety.

Further, one skilled in the art will recognize that: (1) certain steps may optionally be performed; (2) the steps may not be limited to the specific order set forth herein; (3) certain steps may be performed in a different order; and (4) certain steps may be performed simultaneously.

A. General overview

Aspects of the present invention include systems and methods that enable the generation of dense depth map images even if the scene includes regions that are non-textured or featureless. In one or more embodiments, the depth map may be generated in real-time (or near real-time) using certain modeling techniques and by using one or more hardware accelerators, such as Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), and the like.

In one or more embodiments, the depth map model may share some conceptual similarities with DispNet, as described by Mayer et al. in "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation," IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2016 (also available as arXiv preprint arXiv:1512.02134, 2015), which is incorporated herein by reference in its entirety. However, as will be explained in more detail below, embodiments herein include, among other things, simplified neural network layers and other modifications. Furthermore, implementations may be designed, at training time, during deployment, or both, for efficient 8-bit processing using hardware accelerators. Due to the limited computing capability and power consumption of FPGAs, traditional FPGA-based stereo camera systems could only adopt simple stereo depth algorithms. However, embodiments herein provide a new architecture for running convolutional neural network (CNN) models using hardware accelerators such as FPGAs. It should also be noted that implementing aspects of the invention using hardware accelerators may help reduce cost and power consumption in addition to reducing processing time.

B. Camera system embodiment

FIG. 1 depicts a depth map generator system according to an embodiment of the present disclosure. The exemplary embodiment shown in FIG. 1 includes two cameras, camera A 105A and camera B 105B, which may be mounted on a rigid physical structure, such as a camera mount, and point in substantially the same direction. For convenience, the cameras may be referred to herein as right (camera A 105A) and left (camera B 105B), although it should be noted that they may be oriented differently (such as up and down). In one or more embodiments, the distance between the left 105B and right 105A cameras is typically between 5 and 50 centimeters, although other distances may be used. In one or more embodiments, the two cameras capture images (which should be understood to mean still images, video images, or both) of the same scene but from different locations. The disparity of the same elements in the two images provides a cue for estimating depth.

Also depicted in FIG. 1 is a microcontroller 110 communicatively coupled to each camera. In one or more embodiments, the microcontroller sends one or more control signals to the cameras, receives image data from the cameras, and transmits the image data to a processing unit (e.g., CPU 115) that is also communicatively coupled to the microcontroller 110. The microcontroller may send exposure and gain parameters to the cameras and may send one or more exposure signals to both cameras to ensure simultaneous exposure, so that both cameras capture their respective images at the same point in time. Synchronized exposure is important for depth estimation if the scene contains dynamic objects. An exemplary microcontroller is the EZ-USB FX3™ SuperSpeed USB 3.0 peripheral controller from Cypress Semiconductor Corporation of San Jose, California, USA, but other microcontrollers may be used.

As mentioned above, also depicted in the exemplary system embodiment shown in FIG. 1 is a CPU 115, which may be an Advanced RISC Machine (ARM) CPU or an x86 CPU. The ARM Cortex-A53, by ARM Holdings of Cambridge, UK, is an exemplary CPU that may be used, and any x86 processor may work, such as the Core™ i3-2310M designed by Intel Corporation of Santa Clara, California. In one or more embodiments, the CPU 115 receives image data from the microcontroller 110, performs overall depth map generation, and utilizes a hardware accelerator 120 communicatively coupled to the CPU to complete portions of the depth map generation process. In one or more embodiments, the hardware accelerator 120 may be an FPGA, ASIC, or DSP configured to compute the results of portions of a neural network. In one or more embodiments, the microcontroller 110 may be eliminated from the system 100 if the CPU 115 is used as a microcontroller for camera control.

In one or more implementations, the system 100 outputs a depth image 125, such as a 16-bit image with a resolution of 640 x 480, where each pixel value represents a depth value. In one or more embodiments, the output 125 may also include the original camera images (e.g., two 640 x 480 grayscale or color images) from the left and right cameras 105. The output rate depends at least in part on the CPU processing rate (e.g., 10 Hz). It should be noted that other bit sizes, resolutions, and output rates may be used.

It should also be noted that system 100 may include other computing system elements, such as power supplies, power management, memory, interfaces, etc., which are not shown in FIG. 1 to avoid obscuring aspects of the invention. Some examples of such elements and computing systems are provided with reference to FIG. 2.

In one or more embodiments, aspects of this patent document may relate to, may include, or be implemented on one or more information handling systems/computing systems. A computing system may include any instrumentality or combination of instrumentalities operable to compute, determine, classify, process, transmit, receive, retrieve, originate, route, switch, store, display, communicate, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data. For example, a computing system may be or include a personal computer (e.g., a laptop), a tablet, a Personal Digital Assistant (PDA), a smartphone, a smartwatch, a smart package, a server (e.g., a blade server or a rack server), a network storage device, a camera, or any other suitable device and may vary in size, shape, performance, functionality, and price. The computing system may include Random Access Memory (RAM), one or more processing resources (e.g., a Central Processing Unit (CPU) or hardware or software control logic), ROM, and/or other types of memory. Additional components of the computing system may include one or more disk drives, one or more ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, a touch screen, and/or a video display. The computing system may also include one or more buses operable to transmit communications between the various hardware components.

FIG. 2 depicts a simplified block diagram of a computing device/information handling system (or computing system) according to an embodiment of the present disclosure. It should be understood that the computing system may be configured differently and include different components, including fewer or more components than shown in FIG. 2, but it should be understood that the functionality shown for system 200 may be operable to support various embodiments of a computing system.

As shown in FIG. 2, computing system 200 includes one or more Central Processing Units (CPUs) 201, CPU 201 providing computing resources and controlling the computer. The CPU 201 may be implemented with a microprocessor or the like, and may also include one or more Graphics Processing Units (GPUs) 219 and/or floating point coprocessors for mathematical computations. The system 200 may also include a system memory 202, which system memory 202 may be in the form of Random Access Memory (RAM), Read Only Memory (ROM), or both.

As shown in fig. 2, a plurality of controllers and peripherals may also be provided. The input controller 203 represents an interface to various input devices 204, such as a keyboard, a mouse, a touch screen, and/or a stylus. The computing system 200 may also include a storage controller 207, the storage controller 207 for interfacing with one or more storage devices 208, each of which includes a storage medium (such as tape or disk) or an optical medium (which may be used to record programs of instructions for operating systems, utilities and applications, which may include embodiments of programs that implement aspects of the present invention). The storage device 208 may also be used to store processed data or data to be processed in accordance with the present invention. The system 200 may also include a display controller 209, the display controller 209 configured to provide an interface to a display device 211, the display device 211 may be a Cathode Ray Tube (CRT), a Thin Film Transistor (TFT) display, an organic light emitting diode, an electroluminescent panel, a plasma panel, or other type of display. Computing system 200 may also include one or more peripheral controllers or interfaces 205 for one or more peripheral devices 206. Examples of peripheral devices may include one or more printers, scanners, input devices, output devices, sensors, and so forth. The communication controller 214 may interface with one or more communication devices 215, which enables the system 200 to connect to remote devices over any of a variety of networks, including the internet, cloud resources (e.g., ethernet cloud, fibre channel over ethernet (FCoE)/Data Center Bridge (DCB) cloud, etc.), Local Area Networks (LANs), Wide Area Networks (WANs), Storage Area Networks (SANs), or by any suitable electromagnetic carrier signal, including infrared signals.

In the system shown, all major system components may be connected to bus 216, and bus 216 may represent more than one physical bus. However, the various system components may or may not be physically proximate to each other. For example, input data and/or output data may be remotely transmitted from one physical location to another. In addition, programs embodying aspects of the present invention may be accessed from a remote location (e.g., a server) via a network. Such data and/or programs may be conveyed by any of a variety of machine-readable media, including but not limited to: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and holographic devices; a magneto-optical medium; and hardware devices that are specially configured to store or store and execute program code, such as Application Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), flash memory devices, and ROM and RAM devices.

Aspects of the invention may be encoded on one or more non-transitory computer readable media with instructions for one or more processors or processing units to cause steps to be performed. It should be noted that the one or more non-transitory computer-readable media should include both volatile and non-volatile memory. It should be noted that alternative implementations are possible, including hardware implementations or software/hardware implementations. The hardware-implemented functions may be implemented using ASICs, FPGAs, programmable arrays, digital signal processing circuits, and the like. Thus, the term "means" in any claim is intended to encompass both software implementations and hardware implementations. Similarly, the term "computer-readable medium or media" as used herein includes software and/or hardware or a combination thereof having a program of instructions embodied thereon. With these alternative implementations contemplated, it should be understood that the figures and accompanying description provide those skilled in the art with the functional information required to write program code (i.e., software) and/or fabricate circuits (i.e., hardware) to perform the required processing.

It should be noted that embodiments of the present invention may also relate to computer products having a non-transitory tangible computer-readable medium with computer code thereon for performing various computer-implemented operations. The media and computer code may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well known and available to those having skill in the relevant arts. Examples of tangible computer-readable media include, but are not limited to: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and holographic devices; a magneto-optical medium; and hardware devices that are specially configured to store or store and execute program code, such as ASICs, FPGAs, Programmable Logic Devices (PLDs), flash memory devices, and ROM and RAM devices. Examples of computer code include machine code, such as code produced by a compiler, and files containing higher level code that may be executed by a computer using an interpreter. Embodiments of the invention may be implemented in whole or in part as machine-executable instructions in program modules that are executed by a processing device. Examples of program modules include libraries, programs, routines, objects, components, and data structures. In a distributed computing environment, program modules may be physically located in local, remote, or both settings.

those skilled in the art will recognize that neither the computing system nor the programming language is important to the practice of the invention. Those skilled in the art will also recognize that a number of the above elements may be physically and/or functionally divided into sub-modules or combined together.

C. Exemplary Network Architecture Implementation for Inference

FIGS. 3A-3M illustrate an exemplary deep neural network model that has been trained and can be used to infer depth information from stereo images, in accordance with embodiments of the present disclosure. In one or more embodiments, each box 310-x represents a convolution or deconvolution layer; of all the layer types in the network 300, these account for most of the computation. In one or more embodiments, each rectangle 315-x represents a rectified linear unit (ReLU) layer that follows a convolution or deconvolution layer. In one or more embodiments, each rectangle 320-x represents one of several other types of layers, including data input, slicing, element-wise operation, concatenation, and output. In one or more embodiments, each octagon 325-x represents a block of data or an intermediate result that is passed between layers.

1. Main Branch Embodiment

The main structure of the depicted network implementation 300 is described first. Then, the additional branches and skip connections between non-adjacent layers will be described.

A general overview of the main structure of the depicted network implementation follows. In one or more implementations, the network 300 receives as input a pair of images (e.g., a left image and a right image) represented by the dual image layer 305 and reduces their pixel values by the element-wise operation layer depicted as slice_pair 320-1. In one or more embodiments, each image passes through the same two convolutional layers, conv1s 310-1 and conv2s 310-2. In one or more embodiments, these two early convolutional layers share parameters. Such a configuration has at least two significant benefits. First, having each convolutional layer operate on each image separately, rather than on a stack containing both images, means that the convolutional layers are smaller and therefore require less memory. Second, the convolutions at this early stage filter low-level features; thus, they may share parameters. Attempting to share parameters at later layers, which filter higher-level features, would result in reduced model performance.
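
As an illustrative sketch of this parameter sharing (not the actual network code; the 2×2 kernel below is a hypothetical stand-in for the learned conv1s/conv2s filters), the following applies one shared convolution kernel separately to each image of a stereo pair and then stacks the resulting feature maps:

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Valid-mode 2D correlation of a single-channel image with one kernel.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# One shared kernel (hypothetical values) applied to each image separately,
# instead of one larger kernel applied to a stack of both images.
shared_kernel = np.array([[1.0, 0.0], [0.0, -1.0]])
left = np.arange(16.0).reshape(4, 4)
right = left + 1.0  # a toy "right" view

feat_left = conv2d_valid(left, shared_kernel)
feat_right = conv2d_valid(right, shared_kernel)

# Downstream, the two feature maps are combined along the channel axis
# (the role played by a concatenation layer in the network).
stacked = np.stack([feat_left, feat_right])
```

Because the same kernel serves both images, only one set of weights is stored, roughly halving the parameter memory for these layers relative to a stacked-input convolution.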

In one or more embodiments, the resulting feature maps (i.e., intermediate data chunks) are concatenated by the concatenation layer cc2 320-5, meaning that from that layer on, feature maps from the two images are combined and processed together. The next eight convolutional layers, including conv3 310-3, conv3_1 310-4, conv4 310-5, conv4_1 310-6, conv5 310-7, conv5_1 310-8, conv6 310-9, and conv6_1m 310-10, may be arranged in a typical manner as shown in FIGS. 3C-E. In one or more embodiments, these layers represent a compression phase in which the spatial resolution (i.e., width and height) of the feature maps in the network is reduced while the number of channels is increased.

In one or more embodiments, the next stage expands the spatial resolution using deconvolution. After conv6_1m 310-10, deconvolution, convolution, and concatenation layers are interleaved nearly all the way to the output; these include deconv5m 310-11, concat2 320-6, Convolution2m 310-14, deconv4 310-15, concat3 320-7, Convolution4 310-18, deconv3 310-20, concat4 320-8, Convolution6 310-22, deconv2 310-24, concat5 320-9, Convolution8 310-26, deconv1 310-27, concat6 320-10, Convolution10 310-30, and Convolution11 310-31.

In one or more embodiments, the convolutional layer Convolution11 310-31 predicts a disparity map. In a typical disparity map, the depth of a point in the scene is inversely proportional to its disparity, i.e., the difference in position of the corresponding image points in the two camera images. From the disparity map, depth information for the pixels in the image can be derived. The final layer, depth output 330, converts the disparity map to a depth map and resizes it to the required resolution.
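
The disparity-to-depth conversion performed by such a final layer can be sketched as follows; the focal length and baseline values here are hypothetical placeholders for the calibrated camera parameters, not values from this disclosure:

```python
import numpy as np

FOCAL_LENGTH_PX = 700.0  # hypothetical focal length, in pixels
BASELINE_M = 0.12        # hypothetical camera baseline, in meters

def disparity_to_depth(disparity, focal_px=FOCAL_LENGTH_PX, baseline_m=BASELINE_M):
    """Convert a disparity map (pixels) into a depth map (meters): Z = f * B / d.

    Pixels with zero disparity (no match / infinitely far) are left at depth 0.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

disp = np.array([[84.0, 42.0], [0.0, 21.0]])
depth = disparity_to_depth(disp)  # → [[1.0, 2.0], [0.0, 4.0]] meters
```

Doubling the disparity halves the depth, which is the inverse relationship described above.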

2. Additional Branch Implementations

As shown in FIGS. 3A-3M, the depicted model includes additional branches in addition to the main branch. In one or more embodiments, Convolution1m (FIG. 3E, 310-12) branches at conv6_1 (FIG. 3E, 325-19), followed by upsampling_disp6-5 (FIG. 3F, 310-13), which eventually reconnects to the primary branch at Concat2 (FIG. 3F, 320-6). In the depicted embodiment, Convolution3 (FIG. 3G, 310-16) branches at concat5 (FIG. 3G, 325-24); this branch includes the deconvolution upsampling_disps5-4 (FIG. 3G, 310-17) and reconnects at Concat3 (FIG. 3H, 320-7). In one or more embodiments, Convolution5 (FIG. 3H, 310-19) branches after concat4 (FIG. 3H, 325-29); this branch includes the deconvolution upsampling_disp4-3 (FIG. 3I, 310-21) and reconnects at Concat4 (FIG. 3I, 320-8). In one or more embodiments, the model also includes a convolution, Convolution7 (FIG. 3J, 310-23), which branches at concat3 (FIG. 3J, 325-34); this branch includes the deconvolution upsampled_disps3-2 (FIG. 3J, 310-25) and reconnects to the main branch at Concat5 (FIG. 3K, 320-9). In the depicted embodiment, the model also branches after concat2 (FIG. 3K, 325-39); this branch includes Convolution9 (FIG. 3K, 310-28) and upsampling_disp2-1 (FIG. 3L, 310-29) and reconnects at Concat6 (FIG. 3L, 320-10). It should be noted that in the depicted embodiment, the octagon entries indicate blocks of data, which may also be referred to as BLOBs (binary large objects), and the "concatX" octagons (325-x) are not concatenation layers.

3. Skip Branch Implementations

As shown in FIGS. 3A-3L, the depicted model includes skip branches in addition to the main branch and the additional branches. For example, in the depicted embodiment, the outputs of the convolution conv1s (FIG. 3A, 310-1) for the left and right images are concatenated by layer cc1 (FIG. 3B, 320-4), which is in turn connected to Concat6 (FIG. 3L, 320-10). In the depicted embodiment, the output of the concatenation layer cc2 (FIG. 3B, 320-5) is connected to Concat5 (FIG. 3K, 320-9). As shown, at conv3_1 (FIG. 3C, 325-13), a skip branch is formed that connects to concat4 (FIG. 3I, 320-8). In one or more embodiments, at conv4_1 (FIG. 3D, 325-15), another skip branch is formed and connected to Concat3 (FIG. 3H, 320-7). Finally, at conv5_1 (FIG. 3E, 325-17), a skip branch is formed that connects to Concat2 (FIG. 3F, 320-6).

D. Exemplary Network Architecture Implementation at Training Time

FIGS. 4A-4N depict an exemplary network model that may be used in training according to embodiments of the present disclosure. It should be noted that the training model has many similarities to the deployment model shown in FIGS. 3A-3M and described in the previous section. Thus, to avoid unnecessary repetition, this section describes the differences of the training network implementation depicted in FIGS. 4A-4N as compared to the inference network implementation shown in FIGS. 3A-3M.

As shown in FIGS. 4A-4N, the first layer of the network, Image pair and GT (FIG. 4A, 405), takes as input a pair of training images and a corresponding ground truth (GT) disparity map at each training iteration. In one or more embodiments, data augmentation is performed on the image pair by the layers img0s_aug (FIG. 4A, 420-3), GenAugParam (FIG. 4B, 420-4), and img1s_aug (FIG. 4B, 420-6), while corresponding augmentation is performed on the disparity ground truth by the disparity augmentation layer (FIG. 4B, 420-5). In one or more embodiments, these data augmentation layers randomly generate and apply image transformations to the image pair, including translation, rotation, and color changes. In one or more embodiments, the augmented images are input separately to the convolution conv1s (FIG. 4B, 410-1), as in the inference network.

In one or more embodiments, the augmented ground truth disparity map from the disparity augmentation layer (FIG. 4B, 420-5) is passed through a plurality of downsampling layers, including downsampling1 (FIG. 4H, 420-11), downsampling2 (FIG. 4H, 420-10), downsampling3 (FIG. 4J, 420-15), downsampling4 (FIG. 4K, 420-18), downsampling5 (FIG. 4M, 420-21), and downsampling6 (FIG. 4N, 420-24). In one or more embodiments, each is directly or indirectly connected to a loss layer, such as disp_loss6 (FIG. 4I, 420-12), disp_loss5 (FIG. 4I, 420-13), disp_loss4 (FIG. 4J, 420-16), disp_loss3 (FIG. 4L, 420-19), disp_loss2 (FIG. 4M, 420-22), and disp_loss1 (FIG. 4N, 420-25), along with an auxiliary prediction layer (including Convolution1m (FIG. 4F, 410-11), Convolution3 (FIG. 4H, 410-15), Convolution5 (FIG. 4I, 410-19), Convolution7 (FIG. 4K, 410-23), and Convolution9 (FIG. 4L, 410-27)) or the final disparity prediction layer (Convolution11 (FIG. 4N, 410-31)), to compute a loss from the auxiliary prediction of a branch or from the final disparity prediction. These branches are described with reference to the inference network structure in FIGS. 3A-3M. The auxiliary prediction layers are so called because they predict disparities in the middle of the network to help propagate losses back to the early layers during training, which helps speed convergence.
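
The multi-scale loss idea above can be sketched as follows (a simplified illustration, not the actual training code): the ground truth disparity is downsampled to each auxiliary prediction's resolution, with values divided by the same factor since disparity is measured in pixels (a simplifying assumption; the actual layers may resample differently), and a loss is computed at every scale:

```python
import numpy as np

def downsample_disparity(disp, factor):
    # Shrink the GT disparity map by an integer factor; disparity values
    # scale with resolution, so they are divided by the same factor.
    return disp[::factor, ::factor] / factor

def l1_loss(pred, target):
    return float(np.mean(np.abs(pred - target)))

gt = np.full((8, 8), 16.0)            # toy full-resolution GT disparity
preds = {1: np.full((8, 8), 15.0),    # final prediction
         2: np.full((4, 4), 9.0),     # auxiliary prediction at 1/2 resolution
         4: np.full((2, 2), 5.0)}     # auxiliary prediction at 1/4 resolution

# Total training loss: one term per scale, so gradients reach early layers.
total = sum(l1_loss(pred, downsample_disparity(gt, f)) for f, pred in preds.items())
```

Each scale contributes its own term, so even a poor early-layer prediction produces a gradient signal without waiting for it to propagate back from the output.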

It should be noted that the network at training time includes more layers, including data augmentation layers and sampling layers, which may be intentionally removed from the deployed network implementation. It was found that removing these layers has little effect on the final performance of the network's inferred depth but substantially reduces processing requirements. These processing reductions are, at least in part, one of the reasons that a hardware acceleration unit (such as an FPGA) can be used to implement the deployed network. Furthermore, by reducing computational requirements, depth inference can be performed in real time (or near real time).

E. Exemplary Layer Configuration

It should be noted that the diagrams of the deployed trained network model implementation (FIGS. 3A-3M) and of the network model implementation during training (FIGS. 4A-4N) provide graphical and textual descriptions of the model components/layers and their associated parameters. However, for convenience, Table 1 depicts information related to some layers of the illustrated deep neural network implementations.

Table 1: exemplary parameters for certain layers in a network implementation

Those skilled in the art will recognize that these parameters are provided as examples, and that one or more of these parameters may be varied without departing from the spirit and scope of the present disclosure.

F. Method Embodiments

1. General Method Embodiments

FIG. 5 depicts a general overall method for training and using neural network models for depth map estimation, according to an embodiment of the present invention.

a) Initialization

As shown in the depicted embodiment, it may be desirable to initialize (505) a camera system, such as cameras 105A and 105B in FIG. 1. Initializing the cameras helps to set the correct exposure and gain parameters for the cameras and may also involve calibration of the cameras. At least two methods can be used. The first method involves the use of known parameters. Alternatively, a second method may be used that includes collecting a few sample images with a fixed set of parameters and calculating camera parameters based on the sample images. Camera initialization/calibration is well known in the art, and no particular method is critical to the present disclosure.

b) Training the Deep Neural Network Model

Next, a neural network model may be trained (510). It should be noted that the model may be trained using real data (i.e., captured images with corresponding ground truth depth information/disparity maps), using synthetic data (i.e., computer-generated images with corresponding ground truth depth information/disparity maps), or both.

In one or more embodiments, a neural network (e.g., the neural network model shown in FIGS. 4A-4N) is trained (515) using both synthetic and real data. Synthetic training data may be generated from synthetic scenes of three-dimensional (3D) object models. To generate the training data, the 3D object models may be placed in a virtual space and binocular cameras of arbitrary pose may be simulated to obtain pairs of images and corresponding disparity maps. Real data may be collected from a depth sensing device, such as a red, green, blue depth (RGBD) camera or a light detection and ranging (LIDAR) system.

FIG. 6 depicts an exemplary method of training a deep neural network model for depth estimation in accordance with an embodiment of the present disclosure. It should be noted that the embodiment shown in FIG. 6 contemplates training the neural network using a workstation with substantial computing capabilities, but deploying the trained neural network using a hardware accelerator component which, while it may not have computing capabilities like the workstation's, is efficient and inexpensive.

In the depicted embodiment, an initial training data set may be used (605) to train a neural network in floating point mode using one or more workstations, preferably with one or more Graphics Processor Units (GPUs), to help meet the heavy computational requirements of the training. In one or more implementations, the initial training data set may be synthetic training data (i.e., computer-generated images with corresponding disparity maps).

After the floating point mode network converges on the synthetic training data, additional training may be performed (610) using a second set of training data. In one or more embodiments, the second set of training data may be real images along with their corresponding disparity maps as ground truth, to fine-tune the network on the real data to improve performance in real environments.

In an embodiment, if a hardware accelerator component (which computes using a different bit representation than the training workstation) is used for deployment, the model may be fine-tuned using that different bit representation to better prepare it for deployment. In one or more embodiments, assuming the hardware accelerator unit uses an 8-bit fixed-point value representation, 8-bit fine-tuning may be performed (615) on the floating point network described above in an 8-bit mode to produce an 8-bit network in which the network parameters are quantized to an 8-bit representation.

FIG. 7 depicts a method of fine-tuning (as part of training) a floating point neural network model by simulating a fixed-point representation, to produce a neural network for use on a hardware accelerator component that uses the fixed-point representation, in accordance with an embodiment of the present disclosure. For purposes of illustration, it is assumed that the workstation uses a 32-bit floating point representation of values and the hardware accelerator is an FPGA that performs operation computations using an 8-bit fixed-point representation, although other representations and implementations may be used. As shown in FIG. 7, during the computation of each layer, image-related input data for a network operation (e.g., data that is input image data or data derived from the input image data, for example by performing one or more previous operations) may be converted (705) from 32-bit floating point values to an 8-bit fixed-point value representation. Similarly, operation parameter data (e.g., layer weights) may be converted (710) from 32-bit floating point values to 8-bit fixed-point values. In one or more embodiments, the values of these 8-bit fixed-point representations of the input data and the operation parameter data are dequantized (715) to 32-bit floating point values. Then, in one or more embodiments, one or more neural network operations (e.g., layer operations such as convolution, deconvolution, etc.) are performed (720) using the dequantized 32-bit floating point values, and the result data of the one or more operations may be output (725) as 32-bit floating point values. It should be noted that the converting and dequantizing may involve converting to one or more intermediate bit representations.
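
A minimal sketch of this quantize/dequantize round trip (assuming an ABSMAX-style scale, which this document describes further below; this is not the actual training code) is:

```python
import numpy as np

def quantize_int8(x):
    # Quantize an FP32 array to int8, scaling by the array's absolute maximum.
    absmax = float(np.max(np.abs(x)))
    q = np.round(x / absmax * 127.0).astype(np.int8)
    return q, absmax

def dequantize_int8(q, absmax):
    return q.astype(np.float32) / 127.0 * absmax

def fake_quantized_op(inputs, weights, op):
    """Simulate the accelerator's reduced precision during FP32 training:
    quantize, immediately dequantize, then run the op in floating point."""
    qi, si = quantize_int8(inputs)
    qw, sw = quantize_int8(weights)
    return op(dequantize_int8(qi, si), dequantize_int8(qw, sw))

x = np.array([0.5, -1.0, 0.25], dtype=np.float32)
w = np.array([2.0, 0.5, -4.0], dtype=np.float32)
y = fake_quantized_op(x, w, lambda a, b: np.dot(a, b))  # close to the exact -0.5
```

During fine-tuning, running every layer through such a round trip exposes the network to the quantization error it will see on the accelerator, so the weights can adapt to it.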

FIG. 8 graphically shows an alternative embodiment of the method. As shown in FIG. 8, the image-related input data 802 may be converted (805) from values in a 32-bit floating point representation to values in an 18-bit floating point representation. In an embodiment, the hardware accelerator may handle this process automatically when the CPU initiates a request/command to write data to a memory of the hardware accelerator component (e.g., a double data rate random access memory (DDR RAM)). It should be noted that the layer parameter data is fixed or relatively fixed and may be stored in memory as 8-bit integers. In contrast, the input data for each layer changes and has a different range; thus, the input data is not represented directly in 8 bits in memory. In an embodiment, floating point values are used for this data, and to save space and time, shorter floating point values may be used in memory. In the depicted embodiment, 18-bit floating point values are used, but other sizes, such as 16-bit floating point, may also be used. As shown, to perform a layer's one or more operation computations, the 18-bit floating point values may be dynamically converted (810) to 8-bit integers each time, using a conversion such as the ABSMAX method (described below). The rest of the embodiment depicted in FIG. 8 proceeds in the same manner as depicted in FIG. 7. It should be noted that alternative methods may use fewer or more bits for the converted representations.

With respect to converting from a floating point bit representation used by a processor unit to a fixed-point bit representation used by a hardware acceleration unit, FIG. 9 illustrates a method of quantizing values represented in one bit representation to a different bit representation in accordance with an embodiment of the present disclosure. In the graphical depiction, the top row 905 represents a first bit representation scheme, in this example a 32-bit floating point representation, although it may be a different representation, while the bottom row 910 represents a second bit representation scheme, in this example an 8-bit fixed-point representation, although it too may be a different representation. In one or more embodiments, the conversion between the different representations of image-related input data, layer parameters, or other data (each referred to generically below as "Blob" data) may be as follows:

BlobFix8=BlobFP32/ABSMAX*127 (1)

BlobFP32=BlobFix8/127*ABSMAX (2)

where ABSMAX is the maximum of the absolute values of the data array (e.g., in the data associated with the image ("image") or in the filter array ("filter")).

As an example, an operation (e.g., convolution) using quantized data may be performed as follows:

ConvFP32≈Conv(ImageFix8,FilterFix8)/(127*127)*ABSMAXImage*ABSMAXFilter (3)

where ABSMAXImage is the absolute maximum in the image-related data and ABSMAXFilter is the absolute maximum in the operation filter parameters. Those skilled in the art will recognize that other conversion operations may be performed.
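
A sketch of an operation carried out in fixed point in this manner, with a dot product standing in for a convolution's multiply-accumulate and the integer accumulator rescaled back to FP32 (illustrative only, not the accelerator's actual implementation):

```python
import numpy as np

def quantized_dot(image_fp32, filter_fp32):
    """Dot product computed in int8 with an int32 accumulator, then
    rescaled to FP32 using the ABSMAX scales, as in Eq. (3)."""
    absmax_img = float(np.max(np.abs(image_fp32)))
    absmax_flt = float(np.max(np.abs(filter_fp32)))
    img_q = np.round(image_fp32 / absmax_img * 127.0).astype(np.int32)
    flt_q = np.round(filter_fp32 / absmax_flt * 127.0).astype(np.int32)
    acc = int(np.dot(img_q, flt_q))  # integer multiply-accumulate
    # Rescale: divide out the two 127 factors, multiply the two ABSMAX scales.
    return acc / (127.0 * 127.0) * absmax_img * absmax_flt

x = np.array([0.1, 0.2, -0.4])
w = np.array([1.0, -2.0, 3.0])
approx = quantized_dot(x, w)  # close to the exact FP32 result of -1.5
```

The multiplications and additions all happen on 8-bit operands with an integer accumulator, which is what the accelerator's arithmetic units handle efficiently; only the final rescale touches floating point.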

c) Modification of Neural Network Models

Returning to FIGS. 5 and 6, the trained neural network model may be modified (515/620) for deployment to help reduce computational costs. In one or more embodiments, it was found that with the data augmentation layers, the sampling layers, or both removed, the trained model still performed well but required less computational time and expense. Thus, for a deployed trained network, one or more of these layers may be removed; examples of this may be seen by comparing the neural network model embodiment for training of FIGS. 4A-4N to the deployed neural network model embodiment of FIGS. 3A-3M.

It is noted that the neural network model may have other modifications relative to typical models. In one or more embodiments, these changes can be built into the neural network model from the outset.

In one or more embodiments, at least two early convolution operations in the neural network model may be configured to operate on image-related data corresponding to the first image and the second image, respectively, rather than on a set of data representing a combination of the image-related data. For example, as shown in FIGS. 3 and 4, the two convolutions conv1s (310-1 in FIG. 3A/410-1 in FIG. 4C) and conv2s (310-2 in FIG. 3B/410-2 in FIG. 4C) each operate on the data corresponding to the two input images separately, and the results are then concatenated, rather than operating on a data stack associated with the two input images. Operating on a double-sized stack requires much larger convolutions, which increases memory and computational requirements. Further, in one or more embodiments, these convolution operations may share parameters. Since these are early convolutions, they operate on low-level features of the images, which makes parameter sharing more appropriate. Later operations operate on higher-level features that differ more significantly, so parameter sharing is less appropriate for them. Furthermore, in an embodiment, at the higher layers, features from both the left and right sides are mixed to find correspondences and disparity; thus, features from the left and right images may not be separable throughout the model. Therefore, the parameters of the higher layers are not shared, because those parameters apply to the two images combined.

In other embodiments, certain operations may reduce the number of channels to help reduce computations. For example, in one or more embodiments, the deconvolution operation deconv5m (310-11 in fig. 3E/410-12 in fig. 4F) and the convolution operation Conv6_1m (310-10 in fig. 3E/410-10 in fig. 4F) may each reduce the number of channels by half. This reduction greatly reduces the computational and memory burden on the hardware acceleration unit, but has little negative impact on the output performance of the training model.

Unlike typical deep neural networks, embodiments of the neural network model may employ simple rectified linear units (ReLUs) rather than more complex units, such as leaky ReLUs or noisy ReLUs. An exemplary ReLU function that may be used is:

f(x) = max(0, x)

A simple ReLU reduces computational cost but does not seriously impact the overall performance of the neural network.
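
The function above is a single element-wise maximum, which is cheap on any hardware; a minimal sketch:

```python
import numpy as np

def relu(x):
    # Rectified linear unit: f(x) = max(0, x), applied element-wise.
    return np.maximum(0.0, x)

y = relu(np.array([-2.0, -0.5, 0.0, 1.5]))  # → [0.0, 0.0, 0.0, 1.5]
```

By contrast, a leaky ReLU would need an extra multiply on the negative side, and a noisy ReLU a random draw per element, which is part of the computational saving mentioned above.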

Returning to FIG. 5, once the network has been trained and modified, it may be deployed (520) to obtain depth information, such as a disparity map or depth map, given stereo images as input.

2. Depth Estimation Using a Deployed Trained Neural Network Model

a) Method Embodiments for General Deployment

FIG. 10 depicts a method of providing dense depth map information in real time (or near real time) using a trained neural network model with a hardware acceleration unit, according to an embodiment of the present disclosure. In one or more embodiments, a depth map estimation system, such as the system shown in FIG. 1, is used to capture (1005) a pair of stereo images of a scene. These images represent two views of the scene. In one or more embodiments, the images may be captured by having the CPU send a signal to the microcontroller that a new pair of stereoscopic images is desired. The microcontroller may then cause both cameras to capture images simultaneously. After exposure is complete, the image data may be sent from the cameras to the CPU via the microcontroller. For example, if the resolution of the cameras is 640 × 480, the CPU receives 640 × 480 × 2 × 8 bits of data if the cameras are grayscale and 640 × 480 × 2 × 3 × 8 bits of data if the cameras are color. As previously mentioned, the system may not include a microcontroller, and its functions may be performed by the CPU. It should be noted that the depicted method embodiment does not include an initialization/calibration phase; however, if initialization/calibration is required, it may be performed in a similar manner as described above.
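
The per-pair data volumes quoted above follow from a little arithmetic (assuming 8-bit channels; divide by 8 for bytes):

```python
WIDTH, HEIGHT, CAMERAS, BITS_PER_CHANNEL = 640, 480, 2, 8

# One channel per camera for grayscale, three channels (e.g., RGB) for color.
gray_bits = WIDTH * HEIGHT * CAMERAS * BITS_PER_CHANNEL
color_bits = WIDTH * HEIGHT * CAMERAS * 3 * BITS_PER_CHANNEL
```

At a 10 Hz output rate, the grayscale case corresponds to roughly 6 MB/s of raw image traffic into the CPU, comfortably within the microcontroller's USB 3.0 bandwidth.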

In one or more embodiments, the input image data may then be processed (1010) according to a deployed neural network model, such as the neural network model shown in FIGS. 3A-3M. In one or more embodiments, for each pair of captured images, the CPU and hardware accelerator components cooperate to run the deployed deep neural network. For example, in one or more embodiments, the CPU may control the general workflow and sequentially allocate the computing tasks of one or more layers to the hardware accelerator components. In one or more embodiments, for each layer allocated to a hardware accelerator component, the component retrieves data and layer parameters from the CPU and/or memory, performs the calculations (e.g., convolution, deconvolution, concatenation, etc.) for that layer, and returns (1015) the processed data to the CPU and/or memory.
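A hedged sketch of this control flow, with plain Python callables standing in for the hardware-accelerated layer kernels (all names here are illustrative, not the actual implementation):

```python
def run_network(layers, image_pair):
    """CPU-side control loop: walk the layer list in order, dispatching each layer.

    In the real system, each call would hand `data` and `params` to the
    accelerator and read back the processed result.
    """
    data = image_pair
    for name, op, params in layers:
        data = op(data, params)
    return data

# Toy "layers": a scale step standing in for a convolution, then a ReLU.
layers = [
    ("conv1", lambda d, p: [x * p for x in d], 2),
    ("relu1", lambda d, p: [max(0, x) for x in d], None),
]
out = run_network(layers, [1, -3, 2])
print(out)  # [2, 0, 4]
```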

In one or more embodiments, once the final result (which may be the depth map image or the depth image data along with the original input image data) is obtained, it is output (1020). This final output data may be stored for later use, transmitted via a communication protocol (e.g., Universal Serial Bus (USB), Ethernet, serial, parallel, etc.), and/or used by the same or another system for subsequent tasks. For example, the depth map information may be used for obstacle detection by an autonomous vehicle.

In one or more embodiments, after the results have been output and/or used, the system may return (1025) to the step of capturing the next pair of stereoscopic images to begin the next cycle. This process may be repeated until a stop condition is reached. The stop condition depends on the application of the depth map information. In the case of an autonomous vehicle, it may continue as long as the vehicle is in motion. Other stop conditions may include obtaining a set number of depth maps, operating for a certain amount of time, operating until a stop instruction is received, etc.
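The capture/process/output cycle with a stop condition can be sketched as follows (stubs stand in for the cameras and the network; the frame-budget stop condition is just one of the options listed above):

```python
def depth_map_loop(capture, infer, emit, max_frames):
    """Repeat capture -> inference -> output until the stop condition is reached."""
    produced = 0
    while produced < max_frames:   # stop condition: fixed number of depth maps
        pair = capture()           # next stereo pair
        emit(infer(pair))          # run the deployed network, output the depth map
        produced += 1
    return produced

frames = depth_map_loop(
    capture=lambda: ("left", "right"),   # stub for the stereo cameras
    infer=lambda pair: {"depth": 1.0},   # stub for the deployed network
    emit=lambda result: None,            # stub for storage/transmission
    max_frames=3,
)
print(frames)  # 3
```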

G.Implementation of hardware accelerator component quantization

As previously described, the hardware accelerator component may not use the same bit representation scheme as the processor unit (or units). Thus, in one or more embodiments, for each operation handled by a hardware accelerator component, the required data (e.g., input data and layer parameters) are converted to an appropriate bit representation. For example, the CPU and/or hardware accelerator components convert the numbers for each of the convolutional layers, deconvolution layers, concatenations, etc. processed by the hardware accelerator components. FIG. 11 illustrates a method for converting from a processor bit representation to a hardware accelerator component bit representation, performing an integer calculation, and converting the result back to a floating point number for use by the next layer, according to an embodiment of the disclosure. In the example depicted in FIG. 11, the hardware accelerator is an FPGA, but other components may be used as well, as previously described. Additionally, it should be noted that the depicted embodiment shows steps associated with the components involved (i.e., CPU 1160, FPGA memory 1165, and FPGA chip 1170); however, these steps may be distributed differently and still fall within the scope of the present disclosure.

To illustrate the depicted method, assume that the CPU uses a 32-bit floating point representation of values, while the FPGA hardware accelerator uses an 8-bit fixed-point representation for operation computations. As shown in FIG. 11, during the computation of a layer, the image-dependent input data 1102 for the operation may be converted (1105) from high-precision 32-bit floating point values to 18-bit floating point values. The hardware accelerator may handle this process automatically when the CPU initiates a request/command to write data to the DDR memory 1165 of the FPGA. The values in the 18-bit floating point representation may then be quantized (1110) by FPGA 1170 into 8-bit fixed-point values. In the depicted embodiment, the operating parameter data (e.g., layer weights) 1104 is converted (1115) directly from 32-bit floating point values to 8-bit fixed-point values and stored in the FPGA's memory 1165. In an embodiment, since the layer weights do not change and have a fixed range at deployment, they can be represented directly in memory in 8 bits.
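One common way to realize such a float-to-8-bit fixed-point conversion is scaling, rounding, and clamping to the signed 8-bit range. The sketch below assumes a single power-of-two scale factor, which the text does not specify, so treat it as illustrative rather than as the patented scheme:

```python
import numpy as np

def quantize_fixed8(x, scale):
    """Map floats to signed 8-bit integers: round(x * scale), clamped to [-128, 127]."""
    q = np.rint(x * scale)
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale):
    """Recover approximate float values from the 8-bit fixed-point representation."""
    return q.astype(np.float32) / scale

w = np.array([0.5, -1.25, 0.031], dtype=np.float32)
scale = 64.0  # hypothetical fixed-point scale (6 fractional bits)
q = quantize_fixed8(w, scale)
print(q.tolist())                  # [32, -80, 2]
print(dequantize(q, scale).tolist())  # [0.5, -1.25, 0.03125]
```

Note the small round-trip error on the last value (0.031 becomes 0.03125): this is the precision cost of the 8-bit representation that, per the text, has little impact on output quality.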

In one or more embodiments, when the FPGA performs a layer's operation computations, the FPGA accesses and quantizes (1110) the input data in its memory and also accesses the parameters, which are already in the 8-bit fixed-point representation. The two sets of data may then be subjected to an operation such as a fixed-point multiply-accumulate operation (1120) to produce result data, which may be in a 64-bit fixed-point representation. In one or more embodiments, the result data may be dequantized (1125) to a 32-bit floating point representation.
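The fixed-point multiply-accumulate with a wide accumulator, followed by dequantization, might look like the following (scales and values are hypothetical; a wide accumulator is used so that sums of 8-bit products cannot overflow):

```python
import numpy as np

x = np.array([12, -7, 100], dtype=np.int8)  # quantized inputs
w = np.array([3, 25, -2], dtype=np.int8)    # quantized weights

# Accumulate products in 64 bits, mirroring the 64-bit fixed-point result data.
acc = np.sum(x.astype(np.int64) * w.astype(np.int64))
print(acc)  # 12*3 + (-7)*25 + 100*(-2) = -339

# Dequantize: divide by the product of the input and weight scale factors.
x_scale, w_scale = 64.0, 64.0  # hypothetical scales
result = np.float32(acc) / (x_scale * w_scale)
print(float(result))  # -339 / 4096
```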

In one or more embodiments, the result data may be temporary or intermediate result data that may undergo one or more additional operations. For example, the data may undergo one or more additional operations (e.g., 1130 and 1135), such as scaling, shifting (offset), batch normalization, ReLU operations, max pooling, and the like.
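As an example of such post-operations, scaling, shifting, and ReLU applied to a dequantized layer output can be sketched as (parameter values are illustrative):

```python
import numpy as np

def post_ops(x, scale, offset):
    """Apply scaling and shifting (offset), then a ReLU, to intermediate results."""
    y = x * scale + offset
    return np.maximum(0.0, y)

out = post_ops(np.array([-1.0, 0.5, 2.0]), scale=2.0, offset=-0.5)
print(out.tolist())  # [0.0, 0.5, 3.5]
```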

Once all operations to be performed on the layer by the FPGA are complete, the result data is converted (1140) to an 18-bit floating point representation and stored in memory. It should be noted that if the FPGA memory supports 32-bit floating point storage, the 18-bit conversions from the CPU to the FPGA memory and from the FPGA core to the FPGA memory (1105 and 1140) can be skipped. Thus, it should be noted that the method may involve fewer or more bit representation conversions.

Finally, the CPU may access the stored values, where the 18-bit floating point representation of each value may be converted (1145) to a 32-bit floating point value. Depending on the layer stage, the output results 1150 may be the final results of the neural network (e.g., a depth map) or may be intermediate results of the neural network, in which case these results may be used for subsequent layers. For example, in an embodiment, the result after stage 1140 may serve as the "image" input into block 1110 for the next layer.

Those skilled in the art will appreciate that the foregoing examples and embodiments are illustrative and do not limit the scope of the disclosure. It is intended that all substitutions, enhancements, equivalents, combinations, or improvements of the present invention which would become apparent to those skilled in the art upon a reading of the specification and a study of the drawings are included within the true spirit and scope of the present disclosure. It should also be noted that the elements of any claim may be arranged differently, including having multiple dependencies, configurations, and combinations.
