Hardware-Software Co-design to Accelerate Neural Network Applications
[Abstract] Many applications, such as machine learning and data sensing, are statistical in nature and can tolerate some level of inaccuracy in their computation. A variety of designs have been put forward that exploit this statistical nature through approximate computing, with approximate multipliers being the main focus due to their heavy usage in machine-learning designs. In this article, we propose a novel approximate floating point multiplier, called CMUL, which significantly reduces energy and improves the performance of multiplication while allowing for a controllable amount of error. Our design approximates multiplication by replacing the most costly step of the operation with a lower-energy alternative. To tune the level of approximation, CMUL dynamically identifies the inputs that produce the largest approximation error and processes them in precise mode. To use CMUL for deep neural network (DNN) acceleration, we propose a framework that modifies the trained DNN model to make it suitable for approximate hardware. Our framework adjusts the DNN weights to a set of "potential weights" that are suitable for approximate hardware. Then, it compensates for the possible quality loss by iteratively retraining the network. Our evaluation with four DNN applications shows that CMUL achieves a 60.3% energy efficiency improvement and a 3.2x energy-delay product (EDP) improvement over a baseline GPU, while ensuring less than 0.2% quality loss. These results are 38.7% and 2.0x higher, respectively, than the energy efficiency and EDP improvements of CMUL without the proposed framework.
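As a concrete illustration of the mechanism the abstract describes, the sketch below implements an approximate IEEE-754 float32 multiply that XORs the signs, adds the exponents, and reuses one operand's mantissa unchanged, skipping the costly mantissa multiplication, then falls back to precise mode when the other operand's mantissa indicates a large error. The bit layout is standard float32, but the fallback criterion, the err_bits knob, and all function names are illustrative assumptions rather than the paper's actual CMUL design.

```python
import struct

def _bits(f):
    """float -> raw IEEE-754 float32 bit pattern (as an int)."""
    return struct.unpack("<I", struct.pack("<f", f))[0]

def _float(u):
    """Raw float32 bit pattern -> float."""
    return struct.unpack("<f", struct.pack("<I", u))[0]

def approx_mul(a, b):
    """Approximate multiply: XOR the signs, add the exponents, and reuse
    a's mantissa unchanged, i.e. treat b's significand 1.m_b as ~1.0.
    This skips the mantissa multiplication entirely."""
    ua, ub = _bits(a), _bits(b)
    ea, eb = (ua >> 23) & 0xFF, (ub >> 23) & 0xFF
    if ea in (0, 255) or eb in (0, 255):   # zero/denormal/inf/NaN
        return a * b                       # precise fallback
    e = ea + eb - 127                      # remove one bias from the sum
    if not 0 < e < 255:                    # exponent over-/underflow
        return a * b
    sign = (ua ^ ub) & 0x80000000
    return _float(sign | (e << 23) | (ua & 0x007FFFFF))

def cmul_like(a, b, err_bits=4):
    """Tunable approximation: if the top err_bits of b's mantissa are
    nonzero, 1.m_b is far from 1.0 and the error would be large, so the
    input pair is processed in precise mode (illustrative criterion)."""
    if (_bits(b) & 0x007FFFFF) >> (23 - err_bits):
        return a * b
    return approx_mul(a, b)

print(cmul_like(3.5, 1.01), 3.5 * 1.01)   # approximate vs exact
```

The weight-adjustment idea can be sketched in the same spirit: snap each trained weight to a nearby "potential weight" whose mantissa keeps only its top bits, so the approximate multiplier's error on it stays within the tuned bound. The snapping rule below is an assumed criterion, not the paper's; in the proposed framework, such an adjustment alternates with retraining so the network compensates for the induced quantization error.

```python
import numpy as np

def snap_to_potential(w, err_bits=4):
    """Round each weight to the nearest value whose mantissa keeps only
    its top err_bits (hypothetical 'potential weight' criterion)."""
    u = np.asarray(w, dtype=np.float32).view(np.uint32)
    shift = 23 - err_bits
    half = np.uint32(1 << (shift - 1))           # for round-to-nearest
    mask = np.uint32((~((1 << shift) - 1)) & 0xFFFFFFFF)
    return ((u + half) & mask).view(np.float32)  # mantissa carry into the
                                                 # exponent rounds correctly

w = np.random.randn(1000).astype(np.float32)
ws = snap_to_potential(w)
print("max relative snap error:", float(np.max(np.abs(ws - w) / np.abs(w))))
```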
[Publication Date] 2019-06-01
[Document Type] Proceedings Paper
[Keywords] Approximate computing; neural network; floating point unit; energy efficiency