The Surgical Procedure Feedback Rubric for Assessing Resident Performance in the Operating Room: Does it Work?
Competency-based training models in surgical education challenge programs to adequately document and assess trainees' clinical decision-making, problem-solving, and procedural skills in the operating room. While a variety of tools for the assessment of procedural skills exist, validated intraoperative assessment tools are scarce. Those that have been validated often employ checklists or numerical rating scales, which are prone to bias and provide limited feedback to residents. The Surgical Procedure Feedback Rubric (SPR) was developed by the Department of Surgery at Queen's University to document the quality of resident performance during a single, directly observed operative encounter and to provide targeted feedback to support learning. It differs from other assessment tools in that it defines performance criteria of increasing complexity through behavioural anchors, thereby embedding standards of performance in the tool itself.

This study begins the process of building a validity argument for the SPR. Validity in educational assessment is an evaluation of the appropriateness of the inferences made from an assessment tool's results; the validity frameworks of Kane and Stobart guided this study. The purpose of this study was to examine the inference that the SPR can distinguish between the intraoperative performances of learners at different levels of training. A 14-month observational study was conducted in the General Surgery, Orthopaedic Surgery, and Obstetrics and Gynecology training programs. Document analysis of completed SPRs provided evidence that the SPR was being used as intended. Exploratory factor analysis identified a three-factor structure of the SPR consisting of Operating Room Preparation, Technical Skills, and Intrinsic Competencies.
A three-way (postgraduate training year [PGY] × Program × Role) ANOVA corroborated the results of the factor analysis, demonstrated the utility of the SPR in discriminating between residents by PGY, and provided evidence for partial transferability of the SPR across programs and across different roles in surgery. These results contribute evidence supporting the scoring, generalizability, and extrapolation inferences in the validity argument for the SPR. Beyond supporting the validity argument for the SPR itself, the results of this study provide evidence for the use of rubric-based assessment tools to support competency-based assessment systems in medical education.
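The kind of analysis described above can be sketched as follows. This is a minimal, hypothetical illustration, not the study's data or code: it uses synthetic SPR-style scores (an assumed 1–5 scale, with means that increase by training year by construction) and tests a one-way slice of the PGY × Program × Role design, namely whether mean scores differ across PGY cohorts.

```python
# Hypothetical sketch: does PGY level explain variance in (synthetic) rubric
# scores? A one-way slice of the three-way design described in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic composite scores on an assumed 1-5 rubric scale for three PGY
# cohorts (n = 30 each), with means rising by training year -- an assumed
# effect chosen for illustration, not observed study data.
pgy1 = np.clip(rng.normal(2.8, 0.5, 30), 1, 5)
pgy3 = np.clip(rng.normal(3.5, 0.5, 30), 1, 5)
pgy5 = np.clip(rng.normal(4.2, 0.5, 30), 1, 5)

# One-way ANOVA across the three cohorts: a significant F statistic would
# indicate that the scores discriminate between training levels.
f_stat, p_value = stats.f_oneway(pgy1, pgy3, pgy5)
print(f"F = {f_stat:.1f}, p = {p_value:.2g}")
```

A full analysis of the three-way design would instead fit a factorial model (for example with `statsmodels`' `anova_lm`) so that interactions between PGY, program, and role can be examined alongside the main effects.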