Abstract

When deploying neural networks for inference on specialized hardware such as field-programmable gate arrays (FPGAs), designers often face challenges due to limited on-chip memory. Techniques like pruning and quantization reduce network size but often lead to suboptimal designs when transitioning from software frameworks to hardware implementations. A co-design approach using Logic Neural Networks (LNNs) offers an alternative that takes hardware characteristics into account from the outset, replacing conventional neurons with logic gates. However, this approach introduces scalability issues for k-input Boolean functions, with parameter complexity of O(2^(2^k)) and high routing overhead, making FPGA implementation both complex and time-consuming. To address these limitations, we propose a novel look-up table (LUT) logic neural network architecture (LLNN), in which each neuron is implemented as a LUT encoding a logic function. This architecture leverages the LUT-rich structure of FPGAs, reducing parameter complexity to O(2^k) and improving routing efficiency. Our LLNN architectures achieve ultra-low latency (around 2 nanoseconds) on both the MNIST and JSC datasets, while maintaining accuracy comparable to state-of-the-art co-designed models. Code is available at https://github.com/capo-urjc/llnn
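To make the complexity reduction concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a k-input LUT neuron: instead of parameterizing one of the 2^(2^k) possible k-ary Boolean functions, it stores a single table entry per input combination, i.e. only 2^k parameters, mirroring the k-input LUT primitives found in FPGA fabric. The class and function names here are hypothetical.

```python
def lut_index(bits):
    """Pack a tuple of k binary inputs into an integer LUT address."""
    idx = 0
    for b in bits:
        idx = (idx << 1) | (b & 1)
    return idx

class LUTNeuron:
    """Toy k-input neuron realized as a truth table of 2^k entries."""

    def __init__(self, k, table=None):
        self.k = k
        # One entry per input combination: 2^k parameters in total,
        # matching an FPGA k-input LUT (e.g. a 6-input LUT holds 64 bits).
        self.table = table if table is not None else [0] * (1 << k)

    def __call__(self, bits):
        assert len(bits) == self.k
        return self.table[lut_index(bits)]

# Example: a 2-input XOR encoded as a LUT with 2^2 = 4 entries.
xor = LUTNeuron(2, table=[0, 1, 1, 0])
```

In a trainable version, the table entries would be learned (e.g. via a relaxed, differentiable surrogate) and then frozen into LUT configurations at synthesis time; the sketch above only shows the inference-side indexing.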
Publisher

IEEE

Citation

Ramirez I; Garcia-Espinosa FJ; Concha D; Aranda LA; Schiavi E (2025). LLNN: A Scalable LUT-Based Logic Neural Network Architecture for FPGAs. IEEE Transactions on Circuits and Systems I: Regular Papers. DOI: 10.1109/TCSI.2025.3606054
