Can Residents Assess Other Providersʼ Infant Lumbar Puncture Skills? Validity Evidence for a Global Rating Scale and Subcomponent Skills Checklist

Colleen Braun, David O. Kessler, Marc Auerbach, Renuka Mehta, Anthony J. Scalzo, James M. Gerard

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

OBJECTIVES: The aims of this study were to provide validity evidence for infant lumbar puncture (ILP) checklist and global rating scale (GRS) instruments when used by residents to assess simulated ILP performances and to compare these metrics with previously obtained attending rater data.

METHODS: In 2009, the International Network for Simulation-based Pediatric Innovation, Research, and Education (INSPIRE) developed checklist and GRS scoring instruments, which were previously validated among attending raters assessing simulated ILP performances. Video recordings of 60 subjects performing an LP on an infant simulator were collected, 20 from each of 3 experience categories (beginner, intermediate, and expert). Six blinded pediatric residents independently scored each performance (3 via the GRS, 3 via the checklist). Four of the 5 domains of validity evidence were collected: content, response process, internal structure (reliability and discriminant validity), and relations to other variables.

RESULTS: Evidence for content and response process validity is presented. When used by residents, the checklist performed similarly to what was previously found for attending raters, demonstrating good internal consistency (Cronbach α = 0.77) and moderate interrater agreement (intraclass correlation coefficient = 0.47). Residents successfully discerned beginners (P < 0.01, effect size = 2.1) but failed to discriminate between expert and intermediate subjects (P = 0.68, effect size = 0.34). Residents, however, gave significantly higher GRS scores than attending raters across all subject groups (P < 0.001). Moderate correlation was found between GRS and total checklist scores (ρ = 0.49, P < 0.01).

CONCLUSIONS: This study provides validity evidence for the checklist instrument when used by pediatric residents to assess ILP performances. Compared with attending raters, residents appeared to over-score subjects on the GRS instrument.
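For readers who want to see how the reliability statistics reported above are conventionally computed (Cronbach α, intraclass correlation, effect size, Spearman ρ), the Python sketch below works through each one. It is not the authors' analysis code: the score matrices are synthetic placeholders, and the article does not state which ICC form or effect-size index was used, so the one-way ICC(1,1) and pooled-SD Cohen's d here are assumptions.

import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def icc_oneway(scores: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) for an (n_subjects, n_raters) matrix.
    The article does not specify the ICC form; this is one common choice."""
    n, k = scores.shape
    grand = scores.mean()
    ms_between = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Effect size between two groups, using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

# Synthetic stand-ins for the study's data: 60 performances, a hypothetical
# 15-item pass/fail checklist, 3 raters per performance, and one GRS score.
rng = np.random.default_rng(0)
checklist = rng.integers(0, 2, size=(60, 15))
rater_totals = rng.normal(10, 2, size=(60, 3))
grs = rng.normal(3, 1, size=60)

print(f"Cronbach alpha: {cronbach_alpha(checklist):.2f}")
print(f"ICC(1,1):       {icc_oneway(rater_totals):.2f}")
rho, p = spearmanr(grs, checklist.sum(axis=1))
print(f"Spearman rho:   {rho:.2f} (P = {p:.3f})")
# Group contrast in the style of the beginner-vs-intermediate comparison:
beginners, intermediates = rater_totals[:20].mean(axis=1), rater_totals[20:40].mean(axis=1)
print(f"Cohen d:        {cohens_d(intermediates, beginners):.2f}")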

Original language: English (US)
Journal: Pediatric Emergency Care
DOI: 10.1097/PEC.0000000000000890
ISSN: 0749-5161
Publisher: Lippincott Williams and Wilkins
PMID: 27763954
State: Accepted/In press - Oct 18 2016

Fingerprint

Spinal Puncture
Checklist
Pediatrics
Video Recording
Reproducibility of Results
Education
Research

ASJC Scopus subject areas

  • Pediatrics, Perinatology, and Child Health
  • Emergency Medicine

Cite this

Can Residents Assess Other Providersʼ Infant Lumbar Puncture Skills? Validity Evidence for a Global Rating Scale and Subcomponent Skills Checklist. / Braun, Colleen; Kessler, David O.; Auerbach, Marc; Mehta, Renuka; Scalzo, Anthony J.; Gerard, James M.

In: Pediatric Emergency Care, 18.10.2016.

Research output: Contribution to journal › Article

