Project Start Date:
Project Objectives and Scope
This project aims to investigate the acceleration of deep neural networks (DNNs) with Field Programmable Gate Arrays (FPGAs) for network anomaly detection at the edge. Specifically, we will evaluate temporal, intelligence-based anomaly detection algorithms and implement them on FPGAs. The scalability of these algorithms across various FPGA architectures will be investigated.
Current state-of-the-art DNNs achieve high performance but require substantial computing power. They are normally run on Graphics Processing Units (GPUs), which are hard to deploy at the edge. Compared with GPU acceleration, accelerating DNNs on FPGAs offers great potential because of their energy efficiency. However, FPGAs have relatively limited computing resources, memory, and I/O bandwidth, which makes implementing complex DNNs on them challenging. Temporal DNNs such as Long Short-Term Memory (LSTM) networks rely on large models to achieve high prediction accuracy, making them both computation- and memory-intensive. Therefore, high-performance and energy-efficient FPGA architecture designs are needed to tackle these challenges.
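To make the memory and computation pressure concrete, the following back-of-the-envelope sketch counts the parameters and per-timestep multiply-accumulate (MAC) operations of a standard LSTM layer; the layer sizes used in the example are illustrative, not from this project:

```python
def lstm_params(input_size, hidden_size):
    """Parameter count of a standard LSTM layer.

    Each of the four gates (input, forget, cell, output) has an
    input weight matrix, a recurrent weight matrix, and a bias.
    """
    return 4 * (hidden_size * (input_size + hidden_size) + hidden_size)


def lstm_macs_per_step(input_size, hidden_size):
    """MACs for the gate matrix-vector products at every time step
    (biases and element-wise gate operations excluded)."""
    return 4 * hidden_size * (input_size + hidden_size)


# Hypothetical edge-scale layer: 64 input features, 256 hidden units.
p = lstm_params(64, 256)           # 328,704 parameters
m = lstm_macs_per_step(64, 256)    # 327,680 MACs per time step
print(p, m)
```

At 32-bit precision this hypothetical layer already needs over 1 MB for weights alone, and the MAC count is repeated at every time step of the sequence, which illustrates why on-chip memory and DSP resources become the bottleneck on an FPGA.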
In this project, we will investigate and develop FPGA design techniques at both the software and hardware levels for accelerating time-series neural networks for network anomaly detection at the edge. We will explore various model compression methods, such as data quantization and weight reduction, to achieve efficient acceleration with high model accuracy and high throughput. Compressed neural models with minimal accuracy loss will significantly reduce the workload of the FPGA implementation. Furthermore, we will design efficient FPGA architectures that maximally utilize memory and logic resources, applying techniques including hardware parallelism, reuse of computing units, minimization of data transfer operations, and local memory promotion. The scalability of these algorithms across various FPGA architectures will also be investigated.
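As a minimal illustration of one of the compression methods mentioned above, the sketch below performs symmetric per-tensor post-training quantization of a weight vector to 8-bit integers. This is a generic textbook scheme, not the project's actual compression pipeline, and the example weights are made up:

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: map floats to int8 so the
    largest magnitude lands on +/-127, returning codes and the scale."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [max(-128, min(127, round(w / scale))) for w in weights]
    return codes, scale


def dequantize(codes, scale):
    """Recover approximate float weights from int8 codes."""
    return [c * scale for c in codes]


# Hypothetical weights; int8 storage is 4x smaller than float32.
w = [0.5, -1.27, 0.02]
q, s = quantize_int8(w)
print(q, dequantize(q, s))
```

Storing weights as int8 cuts memory traffic by 4x versus float32 and lets the multiplications map onto narrow integer arithmetic, which suits FPGA DSP blocks; the accuracy cost of the rounding step is what the project's evaluation would need to quantify per model.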