TY - GEN
T1 - Performance Testing for Cloud Computing with Dependent Data Bootstrapping
AU - He, Sen
AU - Liu, Tianyi
AU - Lama, Palden
AU - Lee, Jaewoo
AU - Kim, In Kee
AU - Wang, Wei
N1 - Funding Information:
This work was supported by the National Science Foundation under grants CCF-1617390, CCF-1618310, and CNS-1911012. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the NSF. The authors would like to thank the anonymous reviewers for their insightful comments.
Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - To effectively utilize cloud computing, cloud practice and research require accurate knowledge of the performance of cloud applications. However, due to random performance fluctuations, obtaining accurate performance results in the cloud is extremely difficult. To handle this random fluctuation, prior research on cloud performance testing relied on a non-parametric statistical tool called bootstrapping to design their stop criteria. However, in this paper, we show that the basic bootstrapping employed by prior work overlooks the internal dependency within cloud performance test data, which leads to inaccurate performance results. We then present Metior, a novel automated cloud performance testing methodology, which is designed based on the statistical tools of block bootstrapping, the law of large numbers, and autocorrelation. These statistical tools allow Metior to properly consider the internal dependency within cloud performance test data. They also provide better coverage of cloud performance fluctuation and reduce the testing cost. Experimental evaluation on two public clouds showed that 98% of Metior's tests could provide performance results with less than 3% error. Metior also significantly outperformed existing cloud performance testing methodologies in terms of accuracy and cost, with up to a 14% increase in the accurate test count and up to a 3.1 times reduction in testing cost.
AB - To effectively utilize cloud computing, cloud practice and research require accurate knowledge of the performance of cloud applications. However, due to random performance fluctuations, obtaining accurate performance results in the cloud is extremely difficult. To handle this random fluctuation, prior research on cloud performance testing relied on a non-parametric statistical tool called bootstrapping to design their stop criteria. However, in this paper, we show that the basic bootstrapping employed by prior work overlooks the internal dependency within cloud performance test data, which leads to inaccurate performance results. We then present Metior, a novel automated cloud performance testing methodology, which is designed based on the statistical tools of block bootstrapping, the law of large numbers, and autocorrelation. These statistical tools allow Metior to properly consider the internal dependency within cloud performance test data. They also provide better coverage of cloud performance fluctuation and reduce the testing cost. Experimental evaluation on two public clouds showed that 98% of Metior's tests could provide performance results with less than 3% error. Metior also significantly outperformed existing cloud performance testing methodologies in terms of accuracy and cost, with up to a 14% increase in the accurate test count and up to a 3.1 times reduction in testing cost.
KW - Cloud computing
KW - Non-parametric statistics
KW - Performance testing
KW - Resource contention
KW - Single point of estimations
UR - http://www.scopus.com/inward/record.url?scp=85125436663&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85125436663&partnerID=8YFLogxK
U2 - 10.1109/ASE51524.2021.9678687
DO - 10.1109/ASE51524.2021.9678687
M3 - Conference contribution
AN - SCOPUS:85125436663
T3 - Proceedings - 2021 36th IEEE/ACM International Conference on Automated Software Engineering, ASE 2021
SP - 666
EP - 678
BT - Proceedings - 2021 36th IEEE/ACM International Conference on Automated Software Engineering, ASE 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 36th IEEE/ACM International Conference on Automated Software Engineering, ASE 2021
Y2 - 15 November 2021 through 19 November 2021
ER -