Fault tolerant parallel data-intensive algorithms

Mucahid Kutlu, Gagan Agrawal, Oguz Kurt

Research output: Contribution to conference › Paper › peer-review

2 Scopus citations

Abstract

Fault tolerance is rapidly becoming a crucial issue in high-end and distributed computing, as increasing numbers of cores are decreasing the mean time to failure of these systems. While checkpointing, including checkpointing of parallel programs such as MPI applications, provides a general solution, the overhead of this approach is becoming increasingly unacceptable. Thus, algorithm-based fault tolerance provides a practical alternative, though it is less general. Although this approach has been studied for many applications, there is no existing work on algorithm-based fault tolerance for the growing class of data-intensive parallel applications. In this paper, we present an algorithm-based fault tolerance solution that handles fail-stop failures for a class of data-intensive algorithms. We divide the dataset into smaller data blocks and, in the replication step, distribute the replicated blocks so as to minimize the maximum data intersection between any two processors. This minimizes data loss when multiple failures occur. In addition, our approach enables better load balance after a failure and decreases the amount of re-processing of the lost data. We have evaluated our approach using two popular parallel data mining algorithms, k-means and apriori. We show that our approach has negligible overhead when there are no failures, and allows us to gracefully handle different numbers of failures, as well as failures at different points of processing. We also compare our approach with a MapReduce-based solution for fault tolerance, and show that we outperform Hadoop both in the absence and presence of failures.
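The placement idea described in the abstract, replicating data blocks so that any two processors share as few blocks as possible, can be illustrated with a small sketch. The Python snippet below is an assumption-laden illustration, not the authors' implementation: the rotation-based replica placement, the function names, and the parameters (num_procs, blocks_per_proc, replication) are hypothetical, chosen only to show how bounding the pairwise block overlap bounds the data lost when two processors fail together.

# Hypothetical sketch of block replication with small pairwise overlap.
# Each processor holds its primary blocks plus replicas placed on distinct
# peers with a per-block rotation; this is an illustrative scheme, not the
# exact placement used in the paper.

from collections import defaultdict

def assign_blocks(num_procs, blocks_per_proc, replication=2):
    """Return {processor: set of block ids} with primary and replica copies."""
    assignment = defaultdict(set)
    num_blocks = num_procs * blocks_per_proc

    # Primary placement: contiguous blocks on each processor.
    for p in range(num_procs):
        for b in range(p * blocks_per_proc, (p + 1) * blocks_per_proc):
            assignment[p].add(b)

    # Replica placement: spread each processor's blocks across distinct peers
    # using a per-block offset, keeping the overlap between any two processors small.
    for r in range(1, replication):
        for b in range(num_blocks):
            owner = b // blocks_per_proc
            peer = (owner + r * (b % blocks_per_proc) + r) % num_procs
            if peer == owner:  # never replicate a block onto its own owner
                peer = (peer + 1) % num_procs
            assignment[peer].add(b)
    return assignment

def max_pairwise_overlap(assignment):
    """Largest number of blocks shared by any two processors."""
    procs = list(assignment)
    return max(
        len(assignment[a] & assignment[b])
        for i, a in enumerate(procs) for b in procs[i + 1:]
    )

if __name__ == "__main__":
    a = assign_blocks(num_procs=8, blocks_per_proc=4)
    print("max overlap between any two processors:", max_pairwise_overlap(a))

In this toy configuration (8 processors, 4 primary blocks each, one replica per block), no two processors hold more than two blocks in common, so a simultaneous failure of two processors loses at most two blocks, which is the kind of bounded re-processing cost the abstract refers to.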

Original language: English (US)
Pages: 133
DOIs
State: Published - 2012
Externally published: Yes
Event: 2012 19th International Conference on High Performance Computing, HiPC 2012 - Pune, India
Duration: Dec 18, 2012 - Dec 21, 2012

Conference

Conference: 2012 19th International Conference on High Performance Computing, HiPC 2012
Country/Territory: India
City: Pune
Period: 12/18/12 - 12/21/12

ASJC Scopus subject areas

  • Software
