Accelerating Text Mining Workloads in a MapReduce-based Distributed GPU Environment
2013 (English). In: Journal of Parallel and Distributed Computing, ISSN 0743-7315, E-ISSN 1096-0848, Vol. 73, no. 2, pp. 198-206. Article in journal (Refereed). Published.
Scientific computations have successfully used GPU-enabled computers, often relying on distributed nodes to overcome the limitations of device memory. Only a handful of text mining applications benefit from such infrastructure. The initial steps of text mining are typically data intensive, and ease of deployment is an important factor in developing advanced applications. We therefore introduce a flexible, distributed, MapReduce-based text mining workflow that performs I/O-bound operations on CPUs with industry-standard tools and runs compute-bound operations on GPUs, optimized to ensure coalesced memory access and effective use of shared memory. We have performed extensive tests of our algorithms on a cluster of eight nodes, each with two NVIDIA Tesla M2050 GPUs attached, and we achieve considerable speedups for random projection and self-organizing maps.
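One of the two compute-bound kernels named in the abstract is random projection, a dimensionality-reduction technique that multiplies high-dimensional term vectors by a random matrix while approximately preserving pairwise distances (the Johnson-Lindenstrauss lemma). The following is a minimal NumPy sketch of the general technique, not the authors' GPU implementation; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Project rows of X (n_docs x n_terms) down to k dimensions.

    Hypothetical CPU sketch of random projection; the paper's version
    runs this matrix multiplication on GPUs within a MapReduce workflow.
    """
    rng = np.random.default_rng(seed)
    n_terms = X.shape[1]
    # Gaussian random matrix, scaled by 1/sqrt(k) so that expected
    # squared distances between projected rows are preserved.
    R = rng.standard_normal((n_terms, k)) / np.sqrt(k)
    return X @ R

# Example: 100 documents in a 10,000-term space reduced to 64 dimensions.
X = np.random.default_rng(1).random((100, 10_000))
Y = random_projection(X, 64)
print(Y.shape)  # (100, 64)
```

In a distributed setting, each mapper can project its own shard of documents independently as long as all nodes use the same seed for the projection matrix, which is one reason the method suits a MapReduce workflow.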
Place, publisher, year, edition, pages
Elsevier Inc., 2013. Vol. 73, no. 2, pp. 198-206
Keywords
GPU computing, MapReduce, Text mining, Self-organizing maps, Random projection, Library and Information Science
National Category
Computer and Information Science
Research subject
Library and Information Science
Identifiers
URN: urn:nbn:se:hb:diva-1419
DOI: 10.1016/j.jpdc.2012.10.001
ISI: 000314139800008
Local ID: 2320/11734
OAI: oai:DiVA.org:hb-1419
DiVA: diva2:869474