TY - GEN
T1 - Robust Network Supercomputing with Malicious Processes
AU - Konwar, Kishori M.
AU - Rajasekaran, Sanguthevar
AU - Shvartsman, Alexander A.
PY - 2006
Y1 - 2006
N2 - Internet supercomputing is becoming a powerful tool for harnessing massive amounts of computational resources. However, in typical master-worker settings the reliability of the computation crucially depends on the ability of the master to rely on the results returned by the workers. Fernandez, Georgiou, Lopez, and Santos [12,13] considered a system consisting of a master process and a collection of worker processes that can execute tasks on behalf of the master and that may act maliciously by deliberately returning fallacious results. The master decides on the correctness of the results by assigning the same task to several workers. The master is charged one work unit for each task performed by a worker. The goal is to design an algorithm that enables the master to determine the correct result with high probability, and at the least possible cost. Fernandez et al. assume that the number of faulty processes or the probability of a process acting maliciously is known to the master. In this paper this assumption is removed. In the setting with n processes and n tasks we consider two failure models, viz., model Fa, where an f-fraction, 0 < f < 1/2, of the workers provide faulty results with probability p, 0 < p < 1/2, and the master has no a priori knowledge of the values of p and f; and model Fb, where at most an f-fraction, 0 < f < 1/2, of the workers can reply with arbitrary results and the rest reply with incorrect results with probability p, 0 < p < 1/2, but the master knows the values of f and p. For model Fa, we provide an algorithm, based on the Stopping Rule Algorithm of Dagum, Karp, Luby, and Ross [10], that can estimate f and p with an (ε, δ)-approximation, for any 0 < δ < 1 and ε > 0. This algorithm runs in O(log n) time, with O(log^2 n) message complexity, O(log^2 n) task-oriented work, and O(n log n) total-work complexity. We also provide a randomized algorithm for detecting the faulty processes, i.e., identifying the processes that have a non-zero probability of failure in model Fa, with task-oriented work O(n) and time O(log n). A lower bound on the total-work complexity of performing n tasks correctly with high probability is shown. Finally, two randomized algorithms to perform n tasks correctly with high probability are given for both failure models, with closely matching upper bounds on total-work and task-oriented work complexities, and time O(log n).
KW - Distributed algorithms
KW - Fault-tolerance
KW - Internet supercomputing
KW - Randomized algorithms
KW - Reliability
UR - http://www.scopus.com/inward/record.url?scp=33845248674&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=33845248674&partnerID=8YFLogxK
U2 - 10.1007/11864219_33
DO - 10.1007/11864219_33
M3 - Conference contribution
AN - SCOPUS:33845248674
SN - 3540446249
SN - 9783540446248
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 474
EP - 488
BT - Distributed Computing - 20th International Symposium, DISC 2006, Proceedings
PB - Springer Verlag
T2 - 20th International Symposium on Distributed Computing, DISC 2006
Y2 - 18 September 2006 through 20 September 2006
ER -
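
Illustrative note (not part of the bibliographic record above): the abstract's estimation result for model Fa builds on the Stopping Rule Algorithm of Dagum, Karp, Luby, and Ross [10], which returns an (ε, δ)-approximation of the mean of a random variable taking values in [0, 1]. Below is a minimal Python sketch of that generic estimator only; the function name stopping_rule_estimate, the sampling oracle, and the probe-based usage are illustrative assumptions, not details of the paper's master-worker protocol.

import math
import random

def stopping_rule_estimate(sample, eps, delta):
    """(eps, delta)-approximate the mean mu of a [0, 1]-valued random variable:
    with probability at least 1 - delta, the estimate is within eps * mu of mu.
    Follows the Stopping Rule Algorithm of Dagum, Karp, Luby, and Ross.
    `sample` is a zero-argument callable returning one draw in [0, 1];
    termination requires mu > 0."""
    gamma = 4 * (math.e - 2) * math.log(2 / delta) / eps ** 2
    threshold = 1 + (1 + eps) * gamma
    total, n = 0.0, 0
    while total < threshold:  # draw until the running sum reaches the threshold
        total += sample()
        n += 1
    return threshold / n

# Hypothetical usage: a sample is 1 if a randomly chosen worker answers a probe
# task incorrectly, 0 otherwise; how the master obtains such probes is an
# assumption made here for illustration, not a detail taken from the paper.
TRUE_FAULT_RATE = 0.1  # unknown to the estimator
estimate = stopping_rule_estimate(
    lambda: 1.0 if random.random() < TRUE_FAULT_RATE else 0.0,
    eps=0.2, delta=0.05)
print(f"estimated fault rate: {estimate:.3f}")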