The Joint Council on Thoracic Surgery Education coronary artery assessment tool has high interrater reliability

Richard Lee, Daniel Enter, Xiaoying Lou, Richard H. Feins, George L. Hicks, Mario Gasparri, Hiroo Takayama, J. Nilas Young, John H. Calhoon, Fred A. Crawford, Nahush A. Mokadam, James I. Fann

Research output: Contribution to journal › Article

11 Citations (Scopus)

Abstract

Background: Barriers to the incorporation of simulation in cardiothoracic surgery training include the lack of standardized, validated objective assessment tools. Our aim was to measure the interrater reliability and internal consistency reliability of a coronary anastomosis assessment tool created by the Joint Council on Thoracic Surgery Education. Methods: Ten attending surgeons from different cardiothoracic residency programs evaluated nine video recordings of five individuals (one medical student, one resident, one fellow, two attending surgeons) performing coronary anastomoses on two simulation models, a synthetic graft task station (low fidelity) and a porcine explant (high fidelity), as well as in the operative setting. All raters, blinded to operator identity, scored 13 assessment items on a 1 to 5 (low to high) scale. Each performance also received an overall pass/fail determination. Interrater reliability and internal consistency were assessed with intraclass correlation coefficients and Cronbach's α, respectively. Results: Both interrater reliability and internal consistency were high for all three models (intraclass correlation coefficients = 0.98, 0.99, and 0.94, and Cronbach's α = 0.99, 0.98, and 0.97 for the low-fidelity, high-fidelity, and operative settings, respectively). Interrater reliability for the overall pass/fail determination, measured with κ, was 0.54, 0.86, and 0.15 for the low-fidelity, high-fidelity, and operative settings, respectively. Conclusions: Even without instruction on the assessment tool, experienced surgeons achieved high interrater reliability. Future resident training and evaluation may benefit from use of this tool for formative feedback in the simulated and operative environments. However, summative assessment in the operative setting will require further standardization and anchoring.
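
The reliability statistics named in the abstract (Cronbach's α for internal consistency, intraclass correlation coefficients for interrater agreement on the rating items, and κ for the pass/fail determination) can each be computed from a ratings matrix in a few lines. The Python/NumPy sketch below uses synthetic data shaped like the study's design (9 performances, 13 items, 10 raters) purely for illustration; because the abstract does not state which ICC model or κ variant the authors used, the two-way random-effects ICC for average ratings, ICC(2,k), and Fleiss' multirater κ shown here are assumptions, not the study's exact computations.

import numpy as np

def cronbach_alpha(scores):
    # scores: rows = performances, columns = assessment items
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed score
    return k / (k - 1) * (1 - item_var / total_var)

def icc_2k(ratings):
    # ICC(2,k): two-way random effects, absolute agreement, average of k raters
    # ratings: rows = performances, columns = raters
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between-performance SS
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between-rater SS
    ss_total = ((x - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

def fleiss_kappa(counts):
    # counts: rows = performances, columns = categories (e.g. pass, fail),
    # entries = number of raters choosing that category; every row sums to m raters
    c = np.asarray(counts, dtype=float)
    n = c.shape[0]
    m = c[0].sum()
    p_i = (np.square(c).sum(axis=1) - m) / (m * (m - 1))   # per-performance agreement
    p_bar = p_i.mean()
    p_j = c.sum(axis=0) / (n * m)                          # category prevalences
    p_e = np.square(p_j).sum()                             # chance agreement
    return (p_bar - p_e) / (1 - p_e)

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Hypothetical 13-item scores for 9 performances: a common skill level per
    # performance plus item-level noise, clipped to the 1-5 scale.
    skill = rng.uniform(1, 5, size=(9, 1))
    items = np.clip(np.rint(skill + rng.normal(0, 0.5, size=(9, 13))), 1, 5)
    print("Cronbach's alpha:", round(cronbach_alpha(items), 3))

    # Hypothetical total scores from 10 raters for the same 9 performances.
    true_totals = rng.uniform(20, 60, size=(9, 1))
    totals = true_totals + rng.normal(0, 2, size=(9, 10))
    print("ICC(2,k):", round(icc_2k(totals), 3))

    # Hypothetical pass/fail tallies: how many of the 10 raters passed each performance.
    passes = rng.integers(0, 11, size=9)
    counts = np.column_stack([passes, 10 - passes])
    print("Fleiss' kappa:", round(fleiss_kappa(counts), 3))

With tightly clustered synthetic ratings like these, α and the ICC land near 1, while κ is far more sensitive to how often raters split on the dichotomous call, which mirrors the gap the authors report between the item-level scores and the pass/fail determination.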

Original language: English (US)
Pages (from-to): 2064-2070
Number of pages: 7
Journal: Annals of Thoracic Surgery
Volume: 95
Issue number: 6
DOIs: https://doi.org/10.1016/j.athoracsur.2012.10.090
State: Published - Jun 1 2013

ASJC Scopus subject areas

  • Surgery
  • Pulmonary and Respiratory Medicine
  • Cardiology and Cardiovascular Medicine

Cite this

Lee, R., Enter, D., Lou, X., Feins, R. H., Hicks, G. L., Gasparri, M., Takayama, H., Young, J. N., Calhoon, J. H., Crawford, F. A., Mokadam, N. A., & Fann, J. I. (2013). The Joint Council on Thoracic Surgery Education coronary artery assessment tool has high interrater reliability. Annals of Thoracic Surgery, 95(6), 2064-2070. https://doi.org/10.1016/j.athoracsur.2012.10.090