Progressive Approximation-Aware Training with Regularization and Transfer Learning for DNN Acceleration
  • Pengfei Huang,
  • Ke Chen,
  • Chenghua Wang,
  • Weiqiang Liu
Nanjing University of Aeronautics and Astronautics

Corresponding Author: [email protected]
Abstract

This study introduces a novel progressive approximation-aware training (AAT) method that efficiently integrates regularization and transfer learning techniques. The primary objective is to capture the inherent characteristics of approximate hardware designs. By accounting for the accuracy requirements and computational constraints of the target application, AAT strives to achieve an optimal balance between accuracy and power consumption. Starting from a quantized deep neural network (DNN) model, AAT employs a range of approximation strategies to pinpoint the optimal model space and minimize energy cost. Compared with state-of-the-art techniques, our approach provides substantial energy savings and enhanced resilience against adversarial attacks while maintaining consistent accuracy.
Submitted to Electronics Letters
Submission Checks Completed
Assigned to Editor
Reviewer(s) Assigned
03 Jul 2024: Review(s) Completed, Editorial Evaluation Pending
21 Aug 2024: Reviewer(s) Assigned