The problem of cooperatively performing a collection of tasks in a decentralized setting, where the computing medium is subject to adversarial perturbations, is one of the fundamental problems in distributed computing. Such perturbations can be caused by processor failures, unpredictable delays, and communication breakdowns. To develop efficient distributed solutions for computational problems ranging from distributed search (such as SETI) to parallel simulation and multi-agent collaboration, it is important to understand the efficiency trade-offs characterizing the ability of p processors to cooperate on t tasks in the presence of adversity. This paper surveys recent results grouped by the following topics: (i) failure-sensitive bounds for distributed cooperation problems for synchronous processors subject to crash failures, (ii) bounds on redundant work for distributed cooperation when individual asynchronous processors may experience prolonged absences of communication, and (iii) competitive analysis of cooperative work performed by groups of asynchronous processors, when the groups may be fragmented and merged during the computation. These research results are motivated by the earlier work of the third author with Paris C. Kanellakis at Brown University.