AI-Mediated Feedback for Medical Training

Transferring expert skills to computational models

There are two key challenges in training for medical procedures:

1. Access to medical experts for feedback
2. Access to patients presenting a variety of pathologies

Fortunately, challenge #2 is beginning to be addressed through advances in medical simulators, from diverse manikins (e.g., motorized soft-tissue manikins that can simulate different stages of a heart attack) to virtual reality experiences.

In contrast, challenge #1 remains virtually untouched. Expert time is scarce and valuable, so experts cannot provide 1:1 time to individual learners, and expert attending physicians and nurses do not scale geographically. Moreover, studies show that students tend to learn better when they are given a degree of independence alongside the right guidance [1].

Our RetroActivity platform can track the state of a complex medical procedure for assessment from just a handful of expert demonstrations, and we are working with NASA's Exploration Medical Capability (ExMC) team and CAE Healthcare to commercialize this capability. It enables building computational models of expert performance on medical tasks and comparing learner performances against those benchmarks.
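To make the idea of an expert benchmark concrete, here is a minimal sketch, not the RetroActivity implementation, which is not described here. It assumes a procedure can be represented as an ordered list of named steps with durations, and that a few expert demonstrations define a per-step timing benchmark; the step names, `StepRecord`, `build_benchmark`, and `score_learner` are all hypothetical.

```python
# Illustrative sketch only: assumes a procedure is a sequence of named steps
# with durations, and that expert demonstrations define per-step benchmarks.
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class StepRecord:
    name: str          # procedure step, e.g. "prep" (hypothetical labels)
    duration_s: float  # time spent on this step, in seconds


def build_benchmark(demos: list[list[StepRecord]]) -> dict[str, tuple[float, float]]:
    """Aggregate a handful of expert demonstrations into per-step (mean, std) timings."""
    by_step: dict[str, list[float]] = {}
    for demo in demos:
        for step in demo:
            by_step.setdefault(step.name, []).append(step.duration_s)
    return {
        name: (mean(ts), stdev(ts) if len(ts) > 1 else 0.0)
        for name, ts in by_step.items()
    }


def score_learner(run: list[StepRecord], benchmark: dict[str, tuple[float, float]]) -> dict[str, float]:
    """Report how far (in standard deviations) each learner step is from the expert mean."""
    scores = {}
    for step in run:
        mu, sigma = benchmark.get(step.name, (step.duration_s, 0.0))
        scores[step.name] = abs(step.duration_s - mu) / sigma if sigma > 0 else 0.0
    return scores


# Example: two expert demonstrations of a three-step procedure, one learner attempt.
experts = [
    [StepRecord("prep", 30), StepRecord("incision", 45), StepRecord("suture", 120)],
    [StepRecord("prep", 34), StepRecord("incision", 50), StepRecord("suture", 110)],
]
learner = [StepRecord("prep", 70), StepRecord("incision", 48), StepRecord("suture", 200)]

benchmark = build_benchmark(experts)
for name, z in score_learner(learner, benchmark).items():
    print(f"{name}: {z:.1f} standard deviations from the expert benchmark")
```

In practice a platform like this would track far richer signals than step timing (instrument handling, ordering, outcomes), but the same pattern applies: distill a small set of expert demonstrations into a benchmark, then score learner performance against it.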

[1] Canty, D., et al. Comparison of learning outcomes for teaching focused cardiac ultrasound to physicians: A supervised human model course versus an eLearning guided self-directed simulator course. Journal of Critical Care 49 (2019) 38-44.