DNNDK: AI Edge Platform on Zynq FPGA
DNNDK boosts the productivity of deploying AI inference on Xilinx platforms by providing a unified solution for deep neural network applications. In this article we present the DNNDK toolchain for the FPGA integration of deep neural networks (DNNs) and convolutional neural networks (CNNs).
Artificial Neural Networks
Artificial neural networks (ANNs), or connectionist systems, are computing systems inspired by the biological neural networks found in animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with task-specific rules. In this way, a network can learn things, recognize patterns, and make decisions in a "human-like" way.
How does a neural network learn?
Information flows through a neural network in two situations: when the model is learning (being trained) and when it operates normally after training (being used or tested). In both cases, patterns of information from the dataset are fed into the network via the input neurons, which trigger the layers of hidden neurons, and these, in turn, arrive at the output neurons; this is called a feedforward network. Not all neurons "fire" all the time. Each neuron receives inputs from the neurons to its left, and each input is multiplied by the weight of the connection it travels along. Every neuron adds up all the inputs it receives in this way and, in the simplest neural network, if the sum exceeds a certain threshold value, the neuron "fires" and triggers the neurons it is connected to (the neurons on its right).
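The weighted-sum-and-threshold behaviour of a single neuron described above can be sketched in a few lines of Python (a toy illustration only, not DNNDK code):

```python
def neuron_fires(inputs, weights, threshold):
    """A neuron 'fires' (outputs 1) when the weighted sum of its
    inputs exceeds a threshold -- the simplest possible model."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Two inputs arriving from neurons "on the left", each multiplied by
# the weight of the connection it travels along:
fired = neuron_fires([1.0, 0.5], [0.6, 0.8], threshold=0.9)
# 1.0*0.6 + 0.5*0.8 = 1.0 > 0.9, so the neuron fires (fired == 1)
```

Stacking many such neurons into layers, and replacing the hard threshold with smooth activation functions whose weights are adjusted during training, gives the feedforward networks discussed here.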
In other words, a possible definition of Deep Learning is:
Deep Learning refers to neural networks with multiple hidden layers that can learn increasingly abstract representations of the input data.
DNNDK™ (Deep Neural Network Development Kit)
DNNDK improves the productivity and efficiency of deploying AI inference on Xilinx Edge AI platforms.
In particular, it provides a unified solution for deep neural network inference applications, covering pruning, quantization, compilation, optimization, and runtime support.
DNNDK consists of:
- DEep ComprEssioN Tool (DECENT)
- Deep Neural Network Compiler (DNNC)
- Neural Network Runtime (N2Cube)
The most important feature of DNNDK v3.0 is that the popular deep learning framework TensorFlow is officially supported by the toolchain: TensorFlow models are accepted by DECENT, and both Caffe and TensorFlow are handled by a single DNNC binary tool.
The evaluation boards supported for this framework (at version 3.1) are:
- Xilinx® ZCU102
- Xilinx ZCU104
- Avnet Ultra96
In particular, the performance on these boards is very impressive: the Xilinx AI Model Zoo (https://github.com/Xilinx/AI-Model-Zoo) reports, for each model running on the ZCU104, the end-to-end (E2E) latency in ms and the E2E throughput in fps for both single-thread and multi-thread execution.
The key features of DNNDK are:
- a complete set of toolchains covering compression, compilation, deployment, and profiling;
- support for mainstream frameworks and the latest models, capable of diverse deep learning tasks;
- a lightweight standard C/C++ programming API (no RTL programming knowledge required);
- scalable board support, from cost-optimized to performance-driven platforms;
- system integration with both SDSoC and Vivado.
DNNDK is composed of several parts.
Firstly, the process of inference is computation-intensive and requires high memory bandwidth to satisfy the low-latency and high-throughput requirements of edge applications. To address these issues, the Deep Compression Tool (DECENT) employs coarse-grained pruning, trained quantization, and weight sharing, achieving high performance and high energy efficiency with very small accuracy degradation.
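To give an intuition for the quantization step, the sketch below maps float weights to 8-bit integers with a single shared scale factor. This is a generic symmetric int8 scheme for illustration only; DECENT's actual quantization and calibration procedure is more sophisticated:

```python
def quantize_int8(weights):
    """Symmetric quantization: map float weights into [-127, 127]
    integers sharing one float scale. Storage drops from 32 to 8 bits
    per weight, and inference can use integer arithmetic."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 values."""
    return [q * scale for q in quantized]

weights = [0.51, -1.27, 0.08]
q, scale = quantize_int8(weights)   # q == [51, -127, 8]
approx = dequantize(q, scale)       # close to the original weights
```

The small gap between `weights` and `approx` is the accuracy degradation mentioned above, which pruning-aware training and calibration keep very small in practice.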
Secondly, DNNC™ (Deep Neural Network Compiler) is the dedicated proprietary compiler designed for the DPU. It maps the neural network algorithm to DPU instructions so as to achieve maximum utilization of DPU resources by balancing computing workload and memory access.
Thirdly, the Deep Neural Network Assembler (DNNAS) is responsible for assembling DPU instructions into ELF binary code. It is the code-generating backend of DNNC and cannot be invoked alone.
Moreover, the DPU profiler is composed of two components: the DPU tracer and DSight. The DPU tracer is implemented in the DNNDK runtime N2Cube and is responsible for gathering the raw profiling data while running neural networks on the DPU. With this raw profiling data, DSight generates visual charts for performance analysis.
Finally, the Cube of Neural Networks (N2Cube) is the DPU runtime engine. It acts as the loader for DNNDK applications and handles resource allocation and DPU scheduling. Its core components include the DPU driver, DPU loader, tracer, and programming APIs for application development.
Network Deployment Overview
There are two stages for developing deep learning applications: training and inference.
Specifically, the DNNDK toolchain provides an innovative workflow to efficiently deploy deep learning inference applications on the DPU with five simple steps:
- Compress the neural network model;
- Compile the neural network model;
- Program with DNNDK APIs;
- Compile the hybrid DPU application;
- Run the hybrid DPU executable.

Programming with DNNDK
To develop deep learning applications on the DPU, three types of work must be done:
- Use DNNDK APIs to manage DPU kernels:
  - DPU kernel creation and destruction;
  - DPU task creation;
  - managing input and output tensors;
- Implement kernels not supported by the DPU on the CPU;
- Add pre-processing and post-processing routines to read in data or calculate results.
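As an example of the last point, a typical post-processing routine turns the raw output scores of a classification network into class probabilities with a softmax. The sketch below is plain Python for illustration; in a real DNNDK application the input list would be read from the DPU output tensor through the N2Cube API:

```python
import math

def softmax(logits):
    """Convert raw network outputs (logits) into probabilities
    that sum to 1. Subtracting the max before exponentiating is a
    standard numerical-stability trick."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
top_class = probs.index(max(probs))  # index of the most likely class: 0
```

Such CPU-side routines (image resizing and mean subtraction on input, softmax or non-maximum suppression on output) complement the DPU, which accelerates the network layers themselves.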
To conclude, we have presented the DNNDK toolchain for the FPGA integration of deep neural networks and convolutional neural networks. If you need more information about this tool, or if you need support or consultancy, please send us a mail at