Progressive Approximation-Aware Training with Regularization and
Transfer Learning for DNN Acceleration
Abstract
This study introduces a novel progressive approximation-aware training
(AAT) framework that efficiently integrates regularization and transfer
learning techniques. The primary objective is to capture the inherent
characteristics of approximate hardware designs. By accounting for the
accuracy requirements and computational constraints of the target
application, AAT strives to achieve an optimal balance between
accuracy and power consumption. Starting from a quantized deep neural
network (DNN) model, AAT employs a range of approximation strategies to
pinpoint the optimal region of the model space and minimize energy cost.
Compared with state-of-the-art techniques, our approach delivers
remarkable energy savings and enhanced resilience against adversarial
attacks while maintaining consistent accuracy.
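
To make the idea concrete, the following minimal sketch illustrates one
plausible reading of progressive approximation-aware training in Python:
multiplicative noise emulating approximate-hardware error is injected into
activations during the forward pass and ramped up over training, while L2
weight decay stands in for the regularization component. The ApproxNoise
module, the error level eps, the schedule, and the toy data are
illustrative assumptions rather than the paper's actual error model, and
the transfer-learning step (fine-tuning a pretrained model under this
noise) is omitted for brevity.

import torch
import torch.nn as nn

class ApproxNoise(nn.Module):
    # Hypothetical stand-in for a characterized approximate-hardware
    # error model: each activation receives a small relative perturbation.
    def __init__(self, eps: float = 0.02):
        super().__init__()
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Per-element relative error, emulating how an approximate
            # multiplier perturbs individual products.
            return x * (1.0 + self.eps * torch.randn_like(x))
        return x  # exact arithmetic assumed at evaluation time

# Toy model: one linear layer followed by simulated approximation error.
model = nn.Sequential(nn.Linear(16, 10), ApproxNoise())
# weight_decay acts as a simple stand-in for the regularization component.
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 16)          # random features (illustrative only)
y = torch.randint(0, 10, (32,))  # random labels (illustrative only)
for epoch in range(5):
    # Progressive schedule (assumed): ramp the injected error level up
    # so the model adapts to the approximation gradually.
    model[1].eps = 0.02 * (epoch + 1) / 5
    opt.zero_grad()
    loss = loss_fn(model(x), y)  # forward pass sees the injected noise
    loss.backward()
    opt.step()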