Digitala Vetenskapliga Arkivet

The Preimage of Rectifier Network Activities
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
KTH, School of Computer Science and Communication (CSC), Computational Science and Technology (CST). ORCID iD: 0000-0001-5211-6388
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-2784-7300
2017 (English). In: International Conference on Learning Representations (ICLR), 2017. Conference paper, Published paper (Refereed).
Abstract [en]

The preimage of the activity at a certain level of a deep network is the set of inputs that result in the same node activity. For fully connected multi-layer rectifier networks, we demonstrate how to compute the preimages of activities at arbitrary levels from knowledge of the parameters of the network. If the preimage set of a certain activity contains elements from more than one class, those classes are irreversibly mixed. This implies that preimage sets, which are piecewise linear manifolds, are building blocks for describing the input manifolds of specific classes; ideally, all elements of a preimage should belong to the same class. We believe that knowing how to compute preimages will be valuable in understanding the efficiency displayed by deep learning networks and could potentially be used in designing more efficient training algorithms.
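To make the notion of a preimage concrete, here is a minimal sketch (not from the paper itself) for a single fully connected ReLU layer. It rests on the observation that for an activity a = relu(W x + b), every active unit (a_i > 0) pins x to the hyperplane W_i x + b_i = a_i, while every inactive unit (a_i = 0) only constrains x to the half-space W_i x + b_i ≤ 0 — so the preimage is an intersection of hyperplanes and half-spaces, i.e. a piecewise linear set. The function name `in_preimage` and the example layer are illustrative choices, not from the original work:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def in_preimage(x, W, b, a, tol=1e-9):
    """Check whether input x lies in the preimage of activity a
    under the single ReLU layer x -> relu(W @ x + b).

    Active units (a_i > 0) require W_i @ x + b_i == a_i (up to tol);
    inactive units (a_i == 0) require W_i @ x + b_i <= 0.
    """
    z = W @ x + b
    active = a > 0
    return bool(np.all(np.abs(z[active] - a[active]) <= tol)
                and np.all(z[~active] <= tol))

# Toy 2-unit layer on a 2D input (identity weights, zero bias).
W = np.eye(2)
b = np.zeros(2)
x = np.array([0.5, -1.0])
a = relu(W @ x + b)  # activity of x itself: [0.5, 0.0]

assert in_preimage(x, W, b, a)
# The second unit is inactive, so any x' = (0.5, t) with t <= 0
# produces the same activity -- the preimage is a half-line:
assert in_preimage(np.array([0.5, -3.0]), W, b, a)
# Changing the active coordinate leaves the preimage:
assert not in_preimage(np.array([0.6, -1.0]), W, b, a)
```

The example shows the mixing issue described in the abstract: if inputs from two different classes both fall on that half-line, the layer maps them to the same activity and no later layer can separate them.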

Place, publisher, year, edition, pages
International Conference on Learning Representations, ICLR, 2017.
Keywords [en]
Heuristic algorithms, Piecewise linear techniques, General structures, Input space, Network activities, Optimisations, Piecewise linear, Preimages, Regularization algorithms, Rectifying circuits
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-259164
Scopus ID: 2-s2.0-85093029593
OAI: oai:DiVA.org:kth-259164
DiVA, id: diva2:1350663
Conference
5th International Conference on Learning Representations, ICLR 2017, 24-26 April 2017, Toulon, France
Note

QC 20230609

Available from: 2019-09-11. Created: 2019-09-11. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

fulltext (359 kB), 210 downloads
File information
File name: FULLTEXT01.pdf
File size: 359 kB
Checksum: SHA-512
e0989b5d434118f78b07f7fbef35e0a400aba8657efadcc1eb26950039f7e0b786313e2d48b79774a9df6bdad3fb76e6bd11ca80101cf0013f20e2a6a3dafbd6
Type: fulltext. Mimetype: application/pdf.


By author/editor

Carlsson, Stefan; Azizpour, Hossein; Razavian, Ali Sharif; Sullivan, Josephine; Smith, Kevin

By organisation

Robotics, Perception and Learning, RPL; Computational Science and Technology (CST)

Total: 210 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 622 hits