Confidence intervals and measures of significant change for KINARM Standard Tests

Date
Authors
Early, Spencer
Keyword
KINARM
Abstract
Neurological assessment plays a pivotal role in the overall diagnostic and prognostic process of patient care. The effectiveness of the various examinations and methods of neurological assessment is based on the accuracy and precision of the measure. In other words, a good assessment tool should properly reflect the participant’s abilities in any tested domain, while also providing the same score of those abilities over repeat assessments. Traditional neurological assessment tools face many challenges in meeting these criteria, primarily because many tools rely on subjective decisions and coarse rating scales so that clinicians can maintain similar scores. A potential alternative is robotic technology for neurological assessment. Overall, these robotic tools can provide a more objective and reproducible measure of participant performance than traditional tools. The purpose of this thesis was to assess participants on the KINARM standard tests with the goal of quantifying performance variability across repeat evaluations. Using the KINARM exoskeleton robot, control participants were assessed twice, with the maximum time between evaluations set at one week. The tasks are divided into four categories: motor tasks, motor-cognitive tasks, cognitive tasks, and one sensory task. The results of this thesis revealed confidence intervals smaller than the population interval in 99% of task parameters. Significant learning effects were more prevalent in cognitive and motor-cognitive tasks than in sensory and motor tasks. Confidence intervals for TaskScores, the global performance measure, ranged from 0.74 to 1.68 across all KINARM standard tests. Moreover, significant-change cut-off values ranged from 1.03 to 2.36, with 55% of tasks showing significant learning effects over repeat assessment. These metrics will improve the interpretation of participant performance with respect to individual parameter scores. They also provide a solid framework for many future KINARM studies, most notably the emerging problem of reducing overall assessment time.
External DOI
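For context, the confidence intervals and significant-change cut-offs described in the abstract are the kind of quantities typically derived from paired test-retest scores. The sketch below is a minimal, hypothetical illustration of one common approach (a paired confidence interval for the mean difference plus a standard-error-of-measurement-based change cutoff); it is not the specific procedure used in the thesis, and the function name, inputs, and formulas are assumptions for illustration only.

```python
import numpy as np
from scipy import stats


def retest_change_metrics(session1, session2, confidence=0.95):
    """Illustrative test-retest summary for one task parameter.

    session1, session2: paired scores for each control participant on the
    first and second assessment. Returns a confidence interval for the mean
    difference (a group-level learning effect) and an individual-level
    significant-change cutoff. Hypothetical sketch, not the thesis method.
    """
    s1 = np.asarray(session1, dtype=float)
    s2 = np.asarray(session2, dtype=float)
    diff = s2 - s1                      # retest minus first assessment
    n = diff.size

    # Confidence interval for the mean difference (paired design).
    mean_diff = diff.mean()
    sem_of_mean = diff.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(0.5 + confidence / 2, df=n - 1)
    ci = (mean_diff - t_crit * sem_of_mean, mean_diff + t_crit * sem_of_mean)

    # Individual significant-change cutoff via the standard error of
    # measurement (SEM = SD of differences / sqrt(2)); the conventional
    # minimal detectable change is z * sqrt(2) * SEM.
    sem_measure = diff.std(ddof=1) / np.sqrt(2)
    z_crit = stats.norm.ppf(0.5 + confidence / 2)
    significant_change = z_crit * np.sqrt(2) * sem_measure

    # A learning effect is suggested when the CI excludes zero.
    learning_effect = not (ci[0] <= 0 <= ci[1])
    return {
        "mean_difference": mean_diff,
        "confidence_interval": ci,
        "significant_change_cutoff": significant_change,
        "learning_effect": learning_effect,
    }
```

Under these assumptions, an individual participant's change between assessments would be flagged as meaningful only if it exceeds the returned cutoff, while the confidence interval addresses whether the group as a whole improved on retest.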