Keywords:
Spiking Neural Networks (SNNs)
SNNs for deep learning
Hardware cost evaluation
Hardware acceleration
Abstract:
Artificial Neural Networks (ANNs) achieve high accuracy in various cognitive tasks (i.e., inferences), but often fail to meet power and latency budgets due to intensive computational overheads. To address this challenge, Spiking Neural Networks (SNNs) have emerged as high-performance and power-efficient alternatives thanks to their theoretically efficient spike-driven computations. Spike-based computations hold great potential for cost-effective inference with their low-precision data representations, simple neuron operations, and new parallelization opportunities. To determine which network (i.e., ANN or SNN) is suitable for which purposes, it is essential to accurately evaluate the cost-effectiveness of an SNN and compare it to the corresponding ANN. However, existing studies overestimate or underestimate the cost-effectiveness of SNNs over ANNs, as they consider only a limited set of design points and compare SNNs against naive ANN hardware baselines. In this study, we propose a full-stack SNN evaluation methodology to accurately evaluate the cost-effectiveness of SNNs. Quantifying the potential of SNNs is highly challenging, as the evaluation requires full-stack knowledge of SNNs' unique computational features and of how each affects the mechanisms of the underlying hardware. For this purpose, we identify five representative SNN-specific design points that affect hardware performance and demonstrate the impact of each design point with quantified experimental results. Next, we modify an existing ANN accelerator to support the identified SNN-specific design points and implement a cycle-accurate simulator to evaluate how each point affects the overall cost-effectiveness. As a case study, we evaluate SNNs converted from pretrained ANNs and compare them against their ANN counterparts using our simulator. Our study is the first work to accurately evaluate the cost-effectiveness of SNNs and make fair comparisons against ANNs. In addition, our methodology provides