Learning and Storing the Parts of Objects: IMF
2014 (English). In: IEEE International Workshop on Machine Learning for Signal Processing / [ed] IEEE, IEEE, 2014, 1-6 p. Conference paper (Refereed)
A central concern for many learning algorithms is how to efficiently store what the algorithm has learned. An algorithm for the compression of Nonnegative Matrix Factorizations is presented. Compression is achieved by embedding the factorization in an encoding routine. Its performance is investigated using two standard test images, Peppers and Barbara. The compression ratio (18:1) achieved by the proposed Matrix Factorization improves the storability of Nonnegative Matrix Factorizations without significantly degrading accuracy (≈1-3 dB of degradation is introduced). We learn as before, but storage is cheaper.
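The paper's IMF encoder itself is not reproduced in this record. As a rough illustration of the general idea only (a sketch, not the authors' method), the snippet below factors a nonnegative matrix with plain multiplicative-update NMF, quantizes the factors to 8-bit integers so they are cheap to store, and reports the resulting compression ratio and PSNR. The helper names (nmf, quantize, psnr), the rank r=32, the uniform 8-bit quantizer, and the random stand-in for the Peppers/Barbara test images are all illustrative assumptions; the paper's 18:1 ratio comes from its own encoding routine, not from this quantizer.

# Minimal sketch (assumption: NOT the paper's exact IMF algorithm) of
# compressing an NMF by storing quantized factors instead of the image.
import numpy as np

def nmf(V, r, iters=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative-update NMF: V (m x n, nonnegative) ~ W @ H."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def quantize(A, bits=8):
    """Uniform scalar quantization of a nonnegative matrix to `bits` bits."""
    levels = 2**bits - 1
    scale = A.max() / levels if A.max() > 0 else 1.0
    q = np.round(A / scale).astype(np.uint8)
    return q, scale

def psnr(V, Vhat, peak=255.0):
    """Peak signal-to-noise ratio in dB between original and reconstruction."""
    mse = np.mean((V - Vhat) ** 2)
    return 10 * np.log10(peak**2 / mse)

# Illustrative run on a random "image" standing in for Peppers/Barbara.
V = np.random.default_rng(1).random((256, 256)) * 255
W, H = nmf(V, r=32)
(Wq, sw), (Hq, sh) = quantize(W), quantize(H)
Vhat = (Wq.astype(float) * sw) @ (Hq.astype(float) * sh)
ratio = V.size / (Wq.size + Hq.size)  # 8-bit pixels vs. 8-bit factor entries
print(f"compression ratio ~{ratio:.1f}:1, PSNR {psnr(V, Vhat):.1f} dB")

The trade-off shown here mirrors the abstract's claim: the factorization is learned as usual, and only the storage format changes, at the cost of a few dB of reconstruction accuracy.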
Place, publisher, year, edition, pages
IEEE, 2014. 1-6 p.
Keywords
matrix decomposition, signal processing, Barbara, IMF, Peppers, compression ratio, learning algorithm, nonnegative matrix factorization, Approximation methods, Dictionaries, Encoding, Quantization (signal), Signal processing algorithms, Signal to noise ratio, Vectors, compression, matrix factorization
Research subject
Applied and Computational Mathematics
Identifiers
URN: urn:nbn:se:kth:diva-173808
DOI: 10.1109/MLSP.2014.6958926
ScopusID: 2-s2.0-84912570608
ISBN: 978-147993694-6
OAI: oai:DiVA.org:kth-173808
DiVA: diva2:854967
Conference
2014 24th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2014; Reims; France