Hardware acceleration of convolutional neural networks on FPGA
Myrén, Adam
2020 (English). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

As machine learning algorithms evolve, they are seeing wider use in traditional signal processing applications. One such area is radio, where they enable improved signal identification algorithms. Given the large computational complexity of convolutional neural networks, it is important to use platforms that are as fast and energy efficient as possible. This thesis investigates hardware acceleration of convolutional neural networks on field programmable gate arrays (FPGAs), a type of reconfigurable integrated circuit. An existing toolflow, Haddoc2, is used and evaluated. This tool automates the mapping of a convolutional neural network from a high-level description in Caffe to a synthesisable hardware description in VHDL. Four models of different sizes are trained on the MNIST dataset, and accelerators for these are generated at different bitwidths and then simulated in a VHDL testbench. The resulting accuracies are tolerable for the target problem, and Haddoc2 can produce fast accelerators that work well for smaller networks. Large networks were found to consume large amounts of FPGA resources and are not feasible for a practical application. The treatment of weights as constants makes the accelerators fast, since there is no memory bottleneck, but also less flexible, since a new set of weights requires re-synthesising the design and reprogramming the FPGA.
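
To make the last point concrete, the following minimal VHDL sketch (not taken from the thesis or from the Haddoc2 generator; the entity name, port widths and example kernel values are assumptions) shows a 3x3 convolution multiply-accumulate stage whose weights are declared as constants. Because the weights are folded into the logic at synthesis time, no weight memory is read at run time, but changing them means re-synthesising the design and reprogramming the device.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity conv3x3_const is
  generic (
    DATA_W : integer := 8  -- assumed input/weight bitwidth
  );
  port (
    clk    : in  std_logic;
    window : in  std_logic_vector(9*DATA_W-1 downto 0);  -- 3x3 input window, flattened
    result : out signed(2*DATA_W+3 downto 0)             -- widened accumulator output
  );
end entity conv3x3_const;

architecture rtl of conv3x3_const is
  type kernel_t is array (0 to 8) of signed(DATA_W-1 downto 0);
  -- Example weights only (a Sobel-like kernel); a generated accelerator would
  -- hold the trained, quantised weights of the network here.
  constant KERNEL : kernel_t := (
    to_signed( 1, DATA_W), to_signed( 2, DATA_W), to_signed( 1, DATA_W),
    to_signed( 0, DATA_W), to_signed( 0, DATA_W), to_signed( 0, DATA_W),
    to_signed(-1, DATA_W), to_signed(-2, DATA_W), to_signed(-1, DATA_W));
begin
  process (clk)
    variable acc : signed(2*DATA_W+3 downto 0);
  begin
    if rising_edge(clk) then
      acc := (others => '0');
      for i in 0 to 8 loop
        -- Multiply each input sample by its constant weight and accumulate;
        -- the constants become fixed logic, so no weight fetch is needed.
        acc := acc + resize(
          signed(window((i+1)*DATA_W-1 downto i*DATA_W)) * KERNEL(i),
          acc'length);
      end loop;
      result <= acc;
    end if;
  end process;
end architecture rtl;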

Place, publisher, year, edition, pages
2020, p. 47
Series
UPTEC E, ISSN 1654-7616 ; 20 001
Keywords [en]
Hardware acceleration, Convolutional neural network, Machine learning, FPGA
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:uu:diva-402937
OAI: oai:DiVA.org:uu-402937
DiVA id: diva2:1387462
External cooperation
Saab
Educational program
Master Programme in Electrical Engineering
Available from: 2020-01-22. Created: 2020-01-21. Last updated: 2020-01-22. Bibliographically approved.

Open Access in DiVA

fulltext (2045 kB)
File information
File name: FULLTEXT01.pdf. File size: 2045 kB. Checksum (SHA-512):
22f41e6f35a7b98c1ec14c780ca1d161335d016aacc4b5b4bfcf52dc1aa688a3f66fea1fb1b5412da8b801ef3290da5bba45adfdb4f2b1dfc7c6cb35f806ae17
Type: fulltext. Mimetype: application/pdf
