Energy-Efficient Architecture for Neural Spikes Acquisition

Cited by: 0
Authors
Osipov, Dmitry [1 ]
Paul, Steffen [1 ]
Stemmann, Heiko [2 ]
Kreiter, Andreas K. [2 ]
Affiliations
[1] Univ Bremen, Inst Electrodynam & Microelect, Bremen, Germany
[2] Univ Bremen, Dept Theoret Neurobiol, Brain Res Inst, Bremen, Germany
Keywords
DOI
Not available
Chinese Library Classification
TP [Automation and computer technology]
Discipline code
0812
Abstract
A new energy-efficient architecture for neural spike acquisition implements pre-detection of neural spikes before fine conversion. The pre-detection is based on a low-bit coarse conversion of the input data, followed by spike detection with the Teager energy operator (TEO). Example data constructed from real neuronal recordings from the visual cortex of an anesthetized rat show that the system's architecture is more energy efficient than both binary detection with subsequent digitization and full conversion of all data without pre-detection. The proposed system achieves 34% energy savings compared with full digitization using a 10-bit ADC. The proposed analog memory dramatically reduces the ADC's input capacitance and thus lowers power consumption. Furthermore, it allows the combination of highly linear bottom-plate sampling with an arbitrary power-efficient switching scheme. The proposed system was simulated at transistor level in 65 nm CMOS technology.
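The pre-detection step described in the abstract can be sketched in software: apply a coarse low-bit quantization to the input, compute the Teager energy psi[n] = x[n]^2 - x[n-1]*x[n+1], and flag samples whose energy crosses a threshold as spike candidates worth fine conversion. This is a minimal illustrative sketch only; the bit width, full-scale range, and threshold below are assumptions for demonstration, not values from the paper.

```python
# Illustrative sketch of TEO-based spike pre-detection on coarsely
# quantized samples. Bit width, full scale, and threshold are assumed
# demonstration values, not parameters from the paper.

def coarse_quantize(x, bits=3, full_scale=1.0):
    """Mimic a low-bit coarse ADC: snap each sample to 2**bits levels
    spanning +/- full_scale."""
    step = 2.0 * full_scale / (2 ** bits)
    return [round(v / step) * step for v in x]

def teager_energy(x):
    """Teager energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]
    (undefined at the first and last sample)."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def detect_spikes(x, bits=3, threshold=0.05):
    """Return indices of samples whose TEO on the coarse-quantized
    signal exceeds the (assumed) threshold."""
    psi = teager_energy(coarse_quantize(x, bits))
    return [n + 1 for n, e in enumerate(psi) if e > threshold]

# Usage: a flat baseline with a single spike-like excursion at index 8.
signal = [0.0] * 8 + [0.8] + [0.0] * 8
print(detect_spikes(signal))  # → [8]
```

In the paper's hardware context this thresholding gates the fine 10-bit conversion, so quiescent samples never pay the full conversion energy; the sketch only mirrors that decision logic numerically.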
Pages: 439-442 (4 pages)
Related papers
50 items total
  • [1] A Reconfigurable Spatial Architecture for Energy-Efficient Inception Neural Networks
    Luo, Lichuan
    Kang, Wang
    Liu, Junzhan
    Zhang, He
    Zhang, Youguang
    Liu, Dijun
    Ouyang, Peng
    [J]. IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2023, 13 (01) : 7 - 20
  • [2] An Energy-Efficient Architecture for Binary Weight Convolutional Neural Networks
    Wang, Yizhi
    Lin, Jun
    Wang, Zhongfeng
    [J]. IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2018, 26 (02) : 280 - 293
  • [3] An Energy-Efficient Systolic Pipeline Architecture for Binary Convolutional Neural Network
    Liu, Baicheng
    Chen, Song
    Kang, Yi
    Wu, Feng
    [J]. 2019 IEEE 13TH INTERNATIONAL CONFERENCE ON ASIC (ASICON), 2019,
  • [4] Hybrid Convolution Architecture for Energy-Efficient Deep Neural Network Processing
    Kim, Suchang
    Jo, Jihyuck
    Park, In-Cheol
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2021, 68 (05) : 2017 - 2029
  • [5] Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks
    Chen, Yu-Hsin
    Emer, Joel
    Sze, Vivienne
    [J]. 2016 ACM/IEEE 43RD ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE (ISCA), 2016, : 367 - 379
  • [6] Energy-efficient encoding by shifting spikes in neocortical neurons
    Malyshev, Aleksey
    Tchumatchenko, Tatjana
    Volgushev, Stanislav
    Volgushev, Maxim
    [J]. EUROPEAN JOURNAL OF NEUROSCIENCE, 2013, 38 (08) : 3181 - 3188
  • [7] An energy-efficient architecture for DTN throwboxes
    Banerjee, Nilanjan
    Comer, Mark D.
    Levine, Brian Neil
    [J]. INFOCOM 2007, VOLS 1-5, 2007, : 776 - +
  • [8] Energy-efficient and Sustainable Architecture in Masonry
    Mueller, Helmut F. O.
    [J]. MAUERWERK, 2009, 13 (05) : 316 - 322
  • [9] Neural architecture search for energy-efficient always-on audio machine learning
    Speckhard, Daniel T.
    Misiunas, Karolis
    Perel, Sagi
    Zhu, Tenghui
    Carlile, Simon
    Slaney, Malcolm
    [J]. Neural Computing and Applications, 2023, 35 : 12133 - 12144
  • [10] PANTHER: A Programmable Architecture for Neural Network Training Harnessing Energy-Efficient ReRAM
    Ankit, Aayush
    El Hajj, Izzat
    Chalamalasetti, Sai Rahul
    Agarwal, Sapan
    Marinella, Matthew
    Foltin, Martin
    Strachan, John Paul
    Milojicic, Dejan
    Hwu, Wen-Mei
    Roy, Kaushik
    [J]. IEEE TRANSACTIONS ON COMPUTERS, 2020, 69 (08) : 1128 - 1142