We are kicking off 2022 by building on our “Pilot Implementations on Clinical Reasoning”, which ran from June 2021 until December 2021. This post adds practical detail from our recently published “Evaluation and analysis of learner activities of the pilot implementations of the train-the-trainer course”.
Evaluation for Quality Control
To ensure the quality of our curriculum’s development, each pilot was accompanied by a questionnaire for participants and facilitators. We are using this feedback to add emphasis where needed and to create a clearer final product for our learners. These responses were coupled with monitoring of our chosen learning management system (LMS), Moodle, and our virtual patient system, CASUS. DID-ACT’s six institutional partners took part in the evaluation by facilitating nine pilot courses across Europe. In brief, approximately 100 teachers participated in the five clinical reasoning teaching topics of the train-the-trainer course pilots. Approximately half of the participants returned their evaluation questionnaires, alongside 12 responses from facilitators. For the results, discussed further here, the coding was double-checked and disagreements were resolved by consensus.
Survey tools for clinical reasoning curriculum assessment
In our pilots, we decided to use survey-based tools for evaluating the train-the-trainer (TTT) and student courses. Our goal was to capture responses using fewer questions, in a way that allowed comparison between piloted units. In the end, we used the Evaluation of Technology-Enhanced Learning Materials (ETELM). This tool, developed by David Cook and Rachel Ellaway, gave us a launch pad for making questionnaires our standard evaluation tool. It was attractive for many reasons, including how easily it could be implemented in our learning unit template within our LMS.
Lessons learned around evaluations for our pilot implementation
Through iteration and collaboration with a psychologist, experienced educators, and researchers, we found the following points pertinent to our project and beyond:
- Ensure you use consistent language; e.g., use either ‘course’ or ‘learning unit’, as appropriate for your project
- Be mindful of the word ‘practice’, as it can be interpreted in many ways; e.g., in DID-ACT’s case, we changed “This learning unit will change my practice” to “This learning unit will improve my clinical reasoning”
- Provide participants the option to write free-text responses in their native language, where the project allows
- Avoid asking too many questions, which may overload participants
- Ask about years in a profession rather than age; this gave us more succinct answers for what we needed.
TTT Pilot Implementation Survey Results
We set up our questionnaires using a scale of 1 (definitely disagree) to 7 (definitely agree). The average response was 5.8 for the question of whether the course would improve participants’ teaching of clinical reasoning. The pilots excelled in the selection of topics, the small group discussions, the facilitators, and the inter-professional learning opportunities. Growth was suggested in technical navigation of the LMS, assessment of and feedback on the clinical reasoning process, and content tailored more to professions other than physicians.
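To make that kind of summary concrete, here is a minimal sketch of how a 7-point Likert item can be summarized. We computed our statistics in spreadsheet software rather than code; this Python equivalent uses invented responses, so the numbers are illustrative and not DID-ACT data:

```python
from statistics import mean, median, stdev

# Hypothetical responses on the 7-point scale
# (1 = definitely disagree, 7 = definitely agree) to the item
# "This course will improve my teaching of clinical reasoning".
# These values are invented for illustration only.
responses = [6, 7, 5, 6, 5, 7, 6, 4, 6, 6]

print(f"n      = {len(responses)}")
print(f"mean   = {mean(responses):.1f}")   # 5.8 for this sample
print(f"median = {median(responses)}")
print(f"sd     = {stdev(responses):.2f}")
```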
Analysis of Pilot Implementations
The survey questionnaires were analyzed in Microsoft Excel, where, using quantitative methodology, we calculated descriptive statistics. For the open-ended responses, we performed a content analysis. Participant utterances were coded with the categories proposed in D3.2 (Didactical, Content, Technical, Interaction/Collaboration, Implementation/Time, Implementation/Facilitators), which we extended with three more categories (Content/Assessment, Overall, and Others). All data were processed anonymously, with each statement rated as positive, negative, or neutral.
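As a rough illustration of the tallying step that follows the manual, consensus-checked coding, the Python sketch below counts coded utterances by category and valence. The sample statements and counts are invented; only the category labels come from our scheme:

```python
from collections import Counter

# Each manually coded utterance becomes a (category, valence) pair.
# Categories follow D3.2 plus our three additions; the data are invented.
coded = [
    ("Didactical", "positive"),
    ("Technical", "negative"),
    ("Content/Assessment", "neutral"),
    ("Didactical", "positive"),
    ("Implementation/Facilitators", "positive"),
]

# Count how often each (category, valence) combination occurs.
for (category, valence), n in sorted(Counter(coded).items()):
    print(f"{category:30s} {valence:8s} {n}")
```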
Overall, the TTT pilot implementations were a success, as were our efforts to evaluate them. We will apply constructive feedback to other learning units as we continue to develop them. Alongside this, we will return to the original pilot implementations and amend what needs to be improved. You can read a more detailed overview of D5.2 Evaluation and analysis of learner activities during these TTT pilot implementations here.