Strength Reduction Techniques in Compilers for Optimizing Inference on Edge Devices
2025 (English) Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits
Student thesis
Abstract [en]
Neural networks are now widely used and are increasingly deployed on resource-constrained devices. This thesis addresses the challenge of optimizing artificial intelligence inference through compiler techniques, focusing on strength reduction and input data pre-processing. In particular, it explores replacing traditional multiply-accumulate operations with cheaper alternatives such as shift and add operations (illustrated in the sketch after the abstract).
Although strength reduction is a well-known technique in the field of compiler optimization, its application to neural network inference on embedded systems has not been extensively studied. The evaluation is conducted across multiple embedded architectures, including ARM Cortex-M0, AVR, MSP430, and RISC-V, using hardware emulation through QEMU to benchmark performance. Experimental results show that strength reduction can significantly reduce the computational overhead of neural network inference at the cost of increased memory requirements. The findings highlight the potential of compiler-assisted optimizations for enabling efficient AI inference on edge devices, paving the way for improved TinyML applications.
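For illustration, the following minimal C sketch contrasts a conventional multiply-accumulate (MAC) inner loop with a strength-reduced variant in which the weights are assumed to be quantized to signed powers of two, so that each multiply becomes a shift plus a conditional negation. The function names, the power-of-two weight encoding, and the example values are assumptions made for this sketch and are not taken from the thesis itself.

/*
 * Minimal sketch of the strength-reduction idea described in the
 * abstract: a conventional MAC inner loop versus a variant whose
 * weights are assumed to be quantized to signed powers of two.
 * All names, the weight encoding, and the test values are
 * illustrative assumptions, not code from the thesis.
 */
#include <stdint.h>
#include <stdio.h>

#define SIZE 4

/* Baseline: one hardware multiply per element. */
static int32_t dot_mac(const int8_t *x, const int8_t *w, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; i++) {
        acc += (int32_t)x[i] * (int32_t)w[i];
    }
    return acc;
}

/*
 * Strength-reduced variant: weight i is represented as
 * sign[i] * 2^shift[i]. The multiply is replaced by a shift of the
 * activation's magnitude and a conditional negation, which is cheaper
 * on cores without a fast multiplier but needs extra storage for the
 * shift and sign arrays.
 */
static int32_t dot_shift_add(const int8_t *x, const uint8_t *shift,
                             const int8_t *sign, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; i++) {
        int32_t mag = (int32_t)(x[i] < 0 ? -x[i] : x[i]) << shift[i];
        int negate = (x[i] < 0) ^ (sign[i] < 0);
        acc += negate ? -mag : mag;
    }
    return acc;
}

int main(void) {
    int8_t x[SIZE] = { 3, -1, 7, 2 };

    /* Weights {4, -2, 1, -8} and their power-of-two encoding. */
    int8_t  w[SIZE]     = { 4, -2, 1, -8 };
    uint8_t shift[SIZE] = { 2,  1, 0,  3 };
    int8_t  sign[SIZE]  = { 1, -1, 1, -1 };

    /* Both loops compute the same dot product (5 for this input). */
    printf("MAC:       %ld\n", (long)dot_mac(x, w, SIZE));
    printf("shift/add: %ld\n", (long)dot_shift_add(x, shift, sign, SIZE));
    return 0;
}

In a scheme like this, the extra per-weight shift and sign tables are one plausible source of the increased memory footprint mentioned in the abstract.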
Place, publisher, year, edition, pages
2025, p. 61
Series
IT ; mINS 25 001
Keywords [en]
compilers, optimization, microcontrollers, neural networks, emulation, QEMU
National Category
Computer Engineering
Identifiers
URN: urn:nbn:se:uu:diva-551565
OAI: oai:DiVA.org:uu-551565
DiVA, id: diva2:1940306
Available from: 2025-02-26 Created: 2025-02-26 Last updated: 2025-02-26 Bibliographically approved