Modeling ADC Energy and Area Trade-offs for Compute-In-Memory Accelerators

Authors

  • Ankit Rai, Department of Electrical Engineering, University of Middlesex, UK
  • Priya Kaur, Department of Computer Science, University of East London, UK

Keywords

Compute-in-Memory (CIM), Analog-to-Digital Converter (ADC), Energy Efficiency, Area Efficiency, Accelerator Design, Memory Computing, Hardware Architecture, Low-Power Design

Abstract

The demand for energy-efficient, high-performance computation has driven the exploration of specialized architectures such as Compute-in-Memory (CIM). CIM accelerators process data directly within memory arrays, sharply reducing data movement and improving energy efficiency. A critical component of analog CIM systems is the Analog-to-Digital Converter (ADC), which converts the analog outputs of the memory array into digital values for downstream processing; in practice, the ADCs often dominate the energy and area budget of the accelerator. This paper investigates the energy and area trade-offs of ADCs used in CIM accelerators. By examining various ADC architectures and their integration with memory-based computing, we highlight the challenges and strategies for optimizing energy consumption and area efficiency in these systems.
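The energy and area trade-off described above can be illustrated with a first-order scaling model. This is a minimal sketch, not the paper's own model: it assumes the common Walden figure-of-merit relation, in which energy per conversion grows exponentially with effective resolution (E = FOM x 2^ENOB), and it treats the per-array cost as one conversion per column readout. The function names and example parameter values (a 10 fJ/conversion-step SAR ADC at 8-bit effective resolution, a 256-column array) are illustrative assumptions, not figures from the paper.

```python
def adc_energy_per_conversion(fom_j_per_step: float, enob: float) -> float:
    """Energy per conversion under the Walden figure of merit:
    E = FOM * 2^ENOB (joules). Each extra effective bit doubles the energy,
    which is why high-resolution ADCs dominate CIM energy budgets."""
    return fom_j_per_step * (2.0 ** enob)


def cim_readout_adc_energy(n_cols: int, reads_per_col: int,
                           fom_j_per_step: float, enob: float) -> float:
    """Total ADC energy for one CIM array readout, assuming every column
    output requires one analog-to-digital conversion (a simplification;
    sparsity-aware or shared-ADC schemes reduce this)."""
    return n_cols * reads_per_col * adc_energy_per_conversion(
        fom_j_per_step, enob)


# Illustrative numbers (assumed, not from the paper):
# a 10 fJ/conv-step ADC at 8-bit ENOB costs 2.56 pJ per conversion,
# so reading all 256 columns of an array once costs ~0.66 nJ.
e_conv = adc_energy_per_conversion(10e-15, 8)
e_array = cim_readout_adc_energy(n_cols=256, reads_per_col=1,
                                 fom_j_per_step=10e-15, enob=8)
print(f"per conversion: {e_conv * 1e12:.2f} pJ")
print(f"per array read: {e_array * 1e9:.3f} nJ")
```

The exponential term makes the trade-off concrete: dropping one effective bit of ADC resolution halves the conversion energy (and, for capacitive-DAC SAR designs, roughly halves the capacitor area), which is why resolution reduction and hardware-aware quantization are central optimization levers in CIM design.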

Published

2024-12-23