https://doi.org/10.5573/JSTS.2025.25.4.451
Juntae Park; Dahun Choi; Hyun Kim
The deployment of compact convolutional neural network (CNN) models with skip connections on edge devices through dedicated hardware accelerators is increasingly prevalent. However, optimizing the use of limited on-chip memory (OCM) across multiple CNN layers, especially those with skip connections, remains a challenge.
In this paper, we propose a novel CNN accelerator technique that reorders the computation sequence of each layer to maximize data reuse within the OCM, thereby minimizing DRAM accesses and improving the utilization of both the OCM and the convolution processor. Additionally, we introduce a shared buffer design that efficiently manages OCM usage across different layers, particularly those involving skip connections. Finally, we present a ResNet-18 accelerator IP, RADAR, implemented with the proposed technique on a Xilinx ZCU102 FPGA. RADAR achieves 446.9 GOPS and 64.9 GOPS/W while maintaining high accuracy, demonstrating significant improvements over prior works in the trade-off among throughput, hardware resource efficiency, and model accuracy.
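The shared-buffer idea in the abstract is realized in hardware in the paper itself; purely as a conceptual illustration under assumed parameters, the Python sketch below models how keeping the skip-connection branch of a ResNet basic block in an on-chip buffer lets the skip-add complete without extra DRAM traffic. The `SharedBuffer` class, the OCM capacity, and the feature-map sizes are hypothetical stand-ins, not RADAR's actual design.

```python
import numpy as np

OCM_WORDS = 128 * 1024  # assumed on-chip memory capacity (illustrative only)


class SharedBuffer:
    """Toy model of an OCM region shared between a layer's intermediate
    activations and the skip-connection data of a ResNet basic block."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}          # name -> array currently kept on-chip
        self.dram_accesses = 0  # words that had to move to/from external DRAM

    def used(self):
        return sum(a.size for a in self.data.values())

    def store(self, name, array):
        # Spill to DRAM only when the tile no longer fits in the shared buffer.
        if self.used() + array.size > self.capacity:
            self.dram_accesses += array.size
        else:
            self.data[name] = array

    def load(self, name, array):
        # A hit in the shared buffer avoids a DRAM read; a miss is counted.
        if name in self.data:
            return self.data[name]
        self.dram_accesses += array.size
        return array


def conv3x3(x):
    """Stand-in for the convolution processor (identity for brevity)."""
    return x


def basic_block(x, buf):
    """Residual block computed with the skip branch held in the shared buffer."""
    buf.store("skip", x)               # keep the identity branch on-chip
    y = conv3x3(x)
    buf.store("act1", y)               # intermediate activation stays in OCM
    y = conv3x3(buf.load("act1", y))
    return y + buf.load("skip", x)     # skip-add without re-fetching from DRAM


if __name__ == "__main__":
    fmap = np.ones((16, 56, 56), dtype=np.float32)  # example feature-map tile
    buf = SharedBuffer(OCM_WORDS)
    out = basic_block(fmap, buf)
    print("output shape:", out.shape)
    print("words spilled to DRAM:", buf.dram_accesses)
```

Running the sketch reports zero DRAM spills because both the skip branch and the intermediate activation fit in the assumed buffer; shrinking `OCM_WORDS` or enlarging the tile shows how a schedule that cannot keep the skip data on-chip pays for it in DRAM traffic, which is the cost the proposed reordering and shared-buffer design aim to avoid.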