Implementation of a Two-Input Discrete Perceptron with Shifted Synaptic Signals on FPGA Using AlteraHDL
DOI: https://doi.org/10.31649/1997-9266-2025-181-4-186-194

Keywords: perceptron, binary signals, Boolean functions, signal processing, field-programmable gate arrays (FPGA), neural networks

Abstract
The paper proposes and experimentally investigates a hardware implementation of a discrete two-input probabilistic perceptron on a field-programmable gate array (FPGA). The perceptron is constructed from three elementary modules: shift2b (synaptic signal shifting via simple addition), cnts (aggregation based on counting the number of unique values), and cmp2b (a two-bit activation comparator). The implementation shifts the input signals with a plain integer addition operation, which significantly reduces the hardware required.
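To make the datapath concrete, the following is a minimal behavioral sketch in Python of how the three stages could compose, under the simplest reading of the abstract: shift2b adds an integer weight to a binary input, cnts counts distinct values among the shifted signals, and cmp2b thresholds that count. The module names come from the paper; their internal behavior, signal widths, and the configuration shown are assumptions for illustration, not the authors' AHDL code.

```python
# Hypothetical behavioral model of the three-stage datapath described in the
# abstract. Module names (shift2b, cnts, cmp2b) are from the paper; their
# internals and the ">=" activation rule are assumptions.

def shift2b(x: int, w: int) -> int:
    """Shift a binary input by an integer weight using plain addition."""
    return x + w

def cnts(values) -> int:
    """Aggregate shifted signals by counting the number of unique values."""
    return len(set(values))

def cmp2b(count: int, threshold: int) -> int:
    """Two-bit comparator: output 1 when the count reaches the threshold."""
    return int(count >= threshold)

def perceptron2(x1: int, x2: int, w1: int, w2: int, threshold: int) -> int:
    """Two-input discrete perceptron: shift, aggregate, compare."""
    return cmp2b(cnts([shift2b(x1, w1), shift2b(x2, w2)]), threshold)

# With zero shifts and threshold 2, the unit fires exactly when the two
# inputs differ, i.e. it behaves as XOR (an illustrative configuration,
# not one documented in the paper):
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron2(a, b, w1=0, w2=0, threshold=2))
```

Note how the unique-value aggregation makes XOR reachable at all: the count of distinct shifted values detects (in)equality of the inputs directly, without any multiplication.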
Additionally, the proposed perceptron architecture keeps component complexity minimal (2–3 logic elements per block) and, by merely altering the weights and threshold, emulates six basic Boolean operations: OR, AND, XOR, NOR, NAND, and XNOR. This makes it possible to build monostructural hardware components from a single unified block, implementing different logic functions depending on application requirements.
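Covering all six operations is stronger than it may first appear: a conventional two-input linear-threshold unit, y = [w1*x1 + w2*x2 >= t], can realize OR, AND, NOR, and NAND by weight and threshold choice alone, but not XOR or XNOR, which are not linearly separable. The sketch below verifies the four linearly separable cases with standard textbook configurations (a reference point, not taken from the paper); XOR and XNOR fall to the unique-value aggregation modeled earlier.

```python
# Standard result for comparison: a plain linear-threshold unit covers only
# the four linearly separable two-input functions. Configurations below are
# textbook choices, not the paper's weight/threshold settings.

def linear_threshold(x1: int, x2: int, w1: int, w2: int, t: int) -> int:
    return int(w1 * x1 + w2 * x2 >= t)

CONFIGS = {
    "OR":   (1, 1, 1),     # fires when at least one input is 1
    "AND":  (1, 1, 2),     # fires only when both inputs are 1
    "NOR":  (-1, -1, 0),   # fires only when both inputs are 0
    "NAND": (-1, -1, -1),  # fires unless both inputs are 1
}

for name, (w1, w2, t) in CONFIGS.items():
    row = [linear_threshold(a, b, w1, w2, t) for a in (0, 1) for b in (0, 1)]
    print(f"{name:>4}: {row}")  # truth table over (0,0), (0,1), (1,0), (1,1)
```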
Functional simulation confirmed the correctness of all implemented truth tables, and timing analysis indicated a critical path delay of 16.7 ns, corresponding to an operating frequency of approximately 60 MHz without pipelining. The derived analytical relations demonstrate the potential for reducing hardware resource usage compared to traditional linear adders when synthesizing first- and second-order logic functions.
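For reference, the quoted frequency follows directly as the reciprocal of the critical-path delay: f_max = 1 / 16.7 ns ≈ 59.9 MHz, consistent with the approximately 60 MHz figure above.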
The proposed approach paves the way for scaling to a larger number of inputs, integrating statistical (probabilistic) aggregation criteria, and developing embedded on-chip learning procedures. The results confirm the viability of discrete perceptron structures as lightweight, energy-efficient classifiers in real-time systems and specialized neural network accelerators.
License

This work is licensed under a Creative Commons Attribution 4.0 International License.