AI Edge Platform on Zynq FPGA
Artificial Neural Networks
Artificial neural networks (ANNs), or connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with task-specific rules; in this way they can learn, recognize patterns, and make decisions in a "human-like" way. The patterns recognized are numerical and contained in tensors, into which all real-world data, be it images, sound, text, or time series, must be translated.
How does a neural network learn?
Information flows through a neural network in two different ways. Whether the model is learning (being trained) or operating normally (being used or tested after training), patterns of information from the dataset are fed into the network via the input neurons; these trigger the layers of hidden neurons, which in turn reach the output neurons. This is called a feedforward network. Not all neurons “fire” all the time. Each neuron receives inputs from the neurons to its left, and each input is multiplied by the weight of the connection it travels along. Every neuron adds up all the inputs it receives in this way and, in the simplest neural network, “fires” and triggers the neurons it is connected to (the neurons on its right) only if the sum exceeds a certain threshold value.
A practical definition of Deep Learning is:
Deep Learning refers to neural networks with multiple hidden layers that can learn increasingly abstract representations of the input data.
DNNDK™ (Deep Neural Network Development Kit)
DNNDK unleashes the productivity and efficiency of deploying AI inference on Xilinx Edge AI platforms.
The Deep Neural Network Development Kit (DNNDK) is a full-stack deep learning SDK for the Deep Learning Processor Unit (DPU). It provides a unified solution for deep neural network inference applications by providing pruning, quantization, compilation, optimization, and run-time support.
DNNDK consists of:
- DEep ComprEssioN Tool (DECENT)
- Deep Neural Network Compiler (DNNC)
- Neural Network Runtime (N2Cube)
The most important feature of DNNDK v3.0 is that the popular deep learning framework TensorFlow is officially supported by the DNNDK toolchain.
The TensorFlow framework is supported by DECENT, and since DNNDK v3.0 both a CPU version and a GPU version of the DECENT binary tool are released for user convenience. Caffe and TensorFlow are supported by a single DNNC binary tool.
The evaluation boards supported for this framework (at version 3.1) are:
- Xilinx® ZCU102
- Xilinx ZCU104
- Avnet Ultra96
The performance on these boards is very impressive; take the ZCU104 as an example (figures based on the Xilinx AI Model Zoo, https://github.com/Xilinx/AI-Model-Zoo ):
(For each model, the AI Model Zoo tables report the end-to-end (E2E) latency in ms and the E2E throughput in fps; the original table did not survive extraction, so refer to the repository for the figures.)
DNNDK key features:
- Provides a complete set of toolchains covering compression, compilation, deployment, and profiling.
- Supports mainstream frameworks and the latest models for diverse deep learning tasks.
- Provides a lightweight standard C/C++ programming API (no RTL programming knowledge required).
- Offers scalable board support, from cost-optimized to performance-driven platforms.
- Supports system integration with both SDSoC and Vivado.
DNNDK is composed of Deep Compression Tool (DECENT), Deep Neural Network Compiler (DNNC), Deep Neural Network Assembler (DNNAS), Neural Network Runtime (N2Cube), DPU Simulator, and Profiler.
The process of inference is computation-intensive and requires high memory bandwidth to satisfy the low-latency and high-throughput requirements of edge applications. The Deep Compression Tool, DECENT, employs coarse-grained pruning, trained quantization, and weight sharing to address these issues, achieving high performance and high energy efficiency with very small accuracy degradation.
DNNC™ (Deep Neural Network Compiler) is the dedicated proprietary compiler designed for the DPU. It maps the neural network algorithm to DPU instructions, achieving maximum utilization of DPU resources by balancing computing workload and memory access.
The Deep Neural Network Assembler (DNNAS) is responsible for assembling DPU instructions into ELF binary code. It is a part of the DNNC code generating back end, and cannot be invoked alone.
The DPU profiler is composed of two components: the DPU tracer and DSight. The DPU tracer is implemented in the DNNDK runtime N2Cube and is responsible for gathering the raw profiling data while running neural networks on the DPU. From this raw profiling data, DSight generates visualized charts for performance analysis.
The Cube of Neural Networks (N2Cube) is the DPU runtime engine. It acts as the loader for DNNDK applications and handles resource allocation and DPU scheduling. Its core components include the DPU driver, DPU loader, tracer, and programming APIs for application development.
Network Deployment Overview
There are two stages for developing deep learning applications: training and inference.
The DNNDK toolchain provides an innovative workflow to efficiently deploy deep learning inference applications on the DPU with five simple steps:
1. Compress the neural network model.
2. Compile the neural network model.
3. Program with DNNDK APIs.
4. Compile the hybrid DPU application.
5. Run the hybrid DPU executable.
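Assuming a Caffe model we will call mynet, the five steps above might look roughly like the following shell sketch. The tool options, file names, and library names here are illustrative assumptions; consult the DNNDK user guide for the exact invocations in your release.

```shell
# 1. Compress: quantize the float model with DECENT
decent quantize -model float.prototxt -weights float.caffemodel

# 2. Compile: turn the quantized model into a DPU kernel ELF with DNNC
dnnc --prototxt=deploy.prototxt --caffemodel=deploy.caffemodel \
     --output_dir=build --net_name=mynet \
     --dpu=4096FA --cpu_arch=arm64

# 3. Write the application code (main.c) against the DNNDK C/C++ APIs

# 4. Hybrid compile: link the CPU code with the generated DPU kernel ELF
aarch64-linux-gnu-gcc -o mynet main.c build/dpu_mynet.elf -ln2cube -lpthread

# 5. Copy the hybrid executable to the board and run it
./mynet
```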
Programming with DNNDK
To develop deep learning applications on the DPU, three types of work must be done:
- Use DNNDK APIs to manage DPU kernels:
  - DPU kernel creation and destruction;
  - DPU task creation;
  - Managing input and output tensors.
- Implement kernels not supported by the DPU on the CPU.
- Add pre-processing and post-processing routines to read in data or calculate results.
We have presented the DNNDK tool for the FPGA integration of Deep Neural Networks and Convolutional Neural Networks. If you need more information about this tool, or if you need support or consultancy, please send us a mail at