Towards energy-efficient hardware acceleration of memory-intensive event-driven kernels on a synchronous neuromorphic substrate

dc.contributor.advisor Joseph A. Zambreno
dc.contributor.advisor Henry J. Duwe
dc.contributor.author Saha, Saunak
dc.contributor.department Department of Electrical and Computer Engineering
dc.date 2019-11-04T21:57:20.000
dc.date.accessioned 2020-06-30T03:19:16Z
dc.date.available 2020-06-30T03:19:16Z
dc.date.copyright Thu Aug 01 00:00:00 UTC 2019
dc.date.embargo 2001-01-01
dc.date.issued 2019-01-01
dc.description.abstract <p>Spiking neural networks are increasingly popular as low-power alternatives to deep learning architectures. Enabling edge processing on resource-constrained embedded devices requires reconfigurable neuromorphic accelerators that can cater to the various topologies and neural dynamics typical of these networks. These accelerators must also minimize the energy consumed in emulating those dynamics. Since spike processing is inherently memory-intensive, a significant fraction of the system's power consumption can be eliminated by removing redundant memory traffic to the off-chip storage that holds the network's large synaptic data. In this work, we present CyNAPSE, a digital neuromorphic acceleration fabric that can emulate different types of spiking neurons and network topologies for efficient inference. The accelerator is functionally verified on a set of benchmarks that vary significantly in topology and activity while solving the same underlying task. By studying the memory access patterns, data locality, and spiking activity, we establish the core factors that prevent conventional cache replacement policies from performing well. Accordingly, we propose a domain-specific memory management scheme that exploits the event-driven simulation framework of this use case to gain visibility of future data accesses. To make the scheme more robust to variations in network topology and benchmark activity, we further propose static and dynamic network-specific enhancements that adaptively equip it with more insight. The strategy is explored and evaluated on the benchmark set using a software simulation of the accelerator and an in-house cache simulator. Compared to conventional policies, we observe up to 23% greater reduction in net power consumption.</p>
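The abstract's key idea, that an event-driven simulator can expose future data accesses to the replacement policy, resembles Belady's MIN algorithm. The following is a minimal illustrative sketch (not the thesis's actual scheme), assuming a fully associative cache and a precomputed access trace standing in for the event queue; it contrasts miss counts under LRU and under farthest-next-use eviction.

```python
from collections import OrderedDict

def lru_misses(trace, capacity):
    """Count misses under conventional LRU replacement."""
    cache = OrderedDict()
    misses = 0
    for addr in trace:
        if addr in cache:
            cache.move_to_end(addr)          # refresh recency
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)    # evict least recently used
            cache[addr] = True
    return misses

def lookahead_misses(trace, capacity):
    """Count misses under Belady-style MIN: evict the line whose next
    reference lies farthest in the future (or never recurs)."""
    # Precompute, for each position, the index of that address's next use.
    next_use = [0] * len(trace)
    last_seen = {}
    for i in range(len(trace) - 1, -1, -1):
        next_use[i] = last_seen.get(trace[i], float('inf'))
        last_seen[trace[i]] = i
    cache = {}   # addr -> index of its next reference
    misses = 0
    for i, addr in enumerate(trace):
        if addr in cache:
            cache[addr] = next_use[i]
        else:
            misses += 1
            if len(cache) >= capacity:
                victim = max(cache, key=cache.get)  # farthest next use
                del cache[victim]
            cache[addr] = next_use[i]
    return misses
```

With perfect knowledge of upcoming accesses, the look-ahead policy can never miss more often than LRU on the same trace, which is the property the abstract's scheme exploits within the event-driven framework.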
dc.format.mimetype application/pdf
dc.identifier archive/lib.dr.iastate.edu/etd/17556/
dc.identifier.articleid 8563
dc.identifier.contextkey 15681598
dc.identifier.s3bucket isulib-bepress-aws-west
dc.identifier.submissionpath etd/17556
dc.identifier.uri https://dr.lib.iastate.edu/handle/20.500.12876/31739
dc.language.iso en
dc.source.bitstream archive/lib.dr.iastate.edu/etd/17556/Saha_iastate_0097M_18256.pdf|||Fri Jan 14 21:25:23 UTC 2022
dc.subject.disciplines Artificial Intelligence and Robotics
dc.subject.disciplines Computer Engineering
dc.subject.disciplines Electrical and Electronics
dc.subject.keywords Accelerator
dc.subject.keywords Caches
dc.subject.keywords Computational Neuroscience
dc.subject.keywords Energy Efficiency
dc.subject.keywords Neuromorphic
dc.subject.keywords Spiking Neural Network
dc.title Towards energy-efficient hardware acceleration of memory-intensive event-driven kernels on a synchronous neuromorphic substrate
dc.type thesis en_US
dc.type.genre thesis en_US
dspace.entity.type Publication
relation.isOrgUnitOfPublication a75a044c-d11e-44cd-af4f-dab1d83339ff
thesis.degree.discipline Electrical Engineering
thesis.degree.level thesis
thesis.degree.name Master of Science
File: Saha_iastate_0097M_18256.pdf (11.05 MB, Adobe Portable Document Format)