After we had tested the train-the-trainer courses of the DID-ACT curriculum, it was time to evaluate the quality of the learning units for students. We conducted a series of pilot studies that validated five different learning units in eight evaluation events across all partner institutions, including associate partners. We also recorded student activities in the virtual patient collection connected with the DID-ACT curriculum, which is available for deliberate practice. In addition, we evaluated the usability of the project’s learning management system in several test scenarios.
Overview of student activities in piloted learning units
Overall, students agreed to a large extent that the piloted DID-ACT learning units improved their clinical reasoning skills (average 5.75 on a 7-point Likert scale). As a special strength of the curriculum, students frequently named the benefit of the virtual patients integrated with the learning units. Another highlight was the small-group discussions, often conducted in multinational teams, which broadened their views on clinical reasoning. However, a challenge in the tested version of the curriculum implementation was navigation in the learning management system (Moodle). As a consequence, we analyzed these data further and conducted a series of usability tests, which led to a process for addressing the issues wherever possible. We have also received several requests for modifications of the developed learning material, which we will address in the next deliverable, in which we refine the courses based on the pilot implementations.
To ensure the quality of our curriculum’s development, our pilots were accompanied by a questionnaire for participants and facilitators. We are using this feedback to set the right emphasis and create a clearer final product for our learners. These responses were coupled with monitoring of our chosen learning management system (LMS), Moodle, and virtual patient system, CASUS. DID-ACT’s six institutional partners took part in the evaluation by facilitating 9 pilot courses across Europe. In brief, approximately 100 teachers participated in the 5 clinical reasoning teaching topics in the train-the-trainer course pilots. Approximately half of the participants returned their evaluation questionnaires, alongside 12 responses from facilitators. For the results discussed here, coding was double-checked and disagreements were resolved by consensus.
Survey tools for clinical reasoning curriculum assessment
In our pilots we decided to use survey-based tools for evaluating the train-the-trainer (TTT) and student courses. Our goal was to capture responses using fewer questions, in a way that allowed comparison between piloted units. In the end, we used the Evaluation of Technology-Enhanced Learning Materials (ETELM). This tool, developed by David Cook and Rachel Ellaway, served as the launch pad for making questionnaires our standard evaluation tool. It was attractive for many reasons, including ease of implementation into our learning unit template within our LMS.
Lessons learned around evaluations for our pilot implementation
Through iteration and collaboration with a psychologist, experienced educators, and researchers, we found the following pertinent for our project and beyond:
Ensure you are using consistent language; e.g., use either ‘course’ or ‘learning unit’, as pertinent to your project
Be mindful of using the word ‘practice’, as it can be interpreted in many ways; in DID-ACT’s case, we changed “This learning unit will change my practice” to “This learning unit will improve my clinical reasoning”
Provide participants the option to write free text in their native language, where the project allows
Avoid too many questions, which may overload participants
Ask about years in a profession rather than age; this provided more useful answers for what we needed.
TTT Pilot Implementation Survey Results
We set up our questionnaires using a scale of 1 (definitely disagree) to 7 (definitely agree). The average response was 5.8 when participants were asked whether the courses would improve their teaching of clinical reasoning. The pilots excelled in the areas of topic selection, small group discussions, the facilitators, and inter-professional learning opportunities. Room for growth was identified in the areas of technical navigation of the LMS, assessment and feedback on the clinical reasoning process, and content tailored to professions other than physicians.
Analysis of Pilot Implementations
The survey questionnaires were analyzed in Microsoft Excel, where we calculated descriptive statistics for the quantitative items. For the open-ended responses, we performed a content analysis. Participant utterances were coded with the categories proposed in D3.2 (Didactical, Content, Technical, Interaction/Collaboration, Implementation/Time, Implementation/Facilitators), which we extended with three more categories (Content/Assessment, Overall, and Others). All data were processed anonymously, with each statement classified as positive, negative, or neutral.
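As a sketch, the two analysis steps described above (descriptive statistics on the 7-point Likert items, plus a tally of coded free-text statements by category and sentiment) can be reproduced in a few lines of Python. The response values and coded statements below are illustrative placeholders, not actual pilot data:

```python
from statistics import mean, median, stdev
from collections import Counter

# Illustrative Likert responses on a 7-point scale; not actual pilot data.
responses = [6, 7, 5, 6, 6, 7, 4, 6, 5, 6]

# Descriptive statistics for the quantitative items
# (mean of this illustrative sample is 5.80).
print(f"n={len(responses)}, mean={mean(responses):.2f}, "
      f"median={median(responses)}, sd={stdev(responses):.2f}")

# Content-analysis codes: one (category, sentiment) pair per statement,
# using the D3.2 categories extended as described above.
coded = [
    ("Didactical", "positive"),
    ("Technical", "negative"),
    ("Content", "positive"),
    ("Technical", "negative"),
    ("Overall", "positive"),
]
for (category, sentiment), count in sorted(Counter(coded).items()):
    print(f"{category:12s} {sentiment:8s} {count}")
```

The same tallies can of course be produced with a spreadsheet pivot table, which is essentially what the Excel workflow above did.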
Overall, the TTT pilot implementations were a success, as were our efforts in evaluating them. We will implement constructive feedback applicable to other learning units as we continue to develop them. Alongside this, we will return to the original pilot implementations and amend what needs to be improved. You can read a more detailed overview in D5.2 Evaluation and analysis of learner activities during these TTT pilot implementations.
Looking back at our last chunk of time in the DID-ACT project, we have a lot to be proud of and a lot to look forward to. One of the most exciting and hands-on aspects of this clinical reasoning curriculum launch is where we are right now: Pilot Implementations.
Train the trainer course on clinical reasoning
In our latest report, Pilot implementations of the train-the-trainer course, we focused on the train-the-trainer learning units of our curriculum. These pilots were valuable and insightful in helping the team iron out kinks in content, strategy, communications, and more. Overall, we ran 7 courses covering 4 different clinical reasoning topics up until the end of October. We are pleased to share that this made for a total of 69 participants from professions such as medicine, nursing, paramedics, basic sciences, and physiotherapy, from various partner, associate-partner, and external institutions. We also had student participants. Overall, the feedback was very positive. Next up is to take this feedback and implement it into the curriculum. The pilot phase was designed to meet the following targets:
More than 50 participants from partner and associate partner institutions, as well as external participants
Covering a wide range of train-the-trainer course topics that fit the partner faculty development programs
Piloting of at least two of the same learning units by 2–3 partners
Thoroughly evaluated based on questionnaires for participants and instructors and learning analytics (in alignment with WP5).
Methods for a train the trainer implementation
We used our chosen learning platform Moodle to host our blended learning curriculum. There were several steps taken to ensure implementations were as smooth and consistent as possible. It began with a succinct planning phase.
Most of the train-the-trainer courses were chosen in tandem with their student curriculum counterparts. This was done intentionally so that trainers would be adequately prepared to teach the students. Each institution chose its learning unit based on its individual needs and requirements. During this time, the consortium met on a regular basis to plan and ensure that quality criteria would be met. Alongside the pilots within our consortium, we also elected to include external participants for external applicability.
The implementation happened differently at each institution, and recruitment ranged from emails to specific cohorts to a full public university call. During this time, each member was supported by Instruct to ensure that course access, the structure of the pilots, and the required facilitator resources were accessible and clear. This included a roadmap on how to use the Moodle platform, as this had previously been highlighted as an area in need of support. Differences were also due to the use of course forums and the analysis of feedback within the learning platform.
Analysis and feedback phase
One of the deliverables for work package 5 was an evaluation questionnaire, as well as an analysis of the usage data. The former was given to participants at the end of the learning unit. Alongside this evaluation, each facilitator was given a short template to fill in for more qualitative reporting on their experience. Each of the responses was categorized and discussed together.
In the end, we piloted 4 interprofessional sessions and 3 with external participants. Feedback was generally positive, and anything that could be termed less than ideal is being used as constructive feedback for further refinement. Our biggest wins were that the interactive aspects were found to be highly valuable and that facilitators from varied professions were appreciated. Our constructive feedback concerned Moodle and CASUS being unclear as tools, too little time, the crunch between teaching the topic and teaching how to teach it, as well as how conversations veered toward medicine due to unbalanced participant professions (e.g., too many physicians versus physiotherapists in one group).
Pilot implementation conclusions
Overall, the consortium deems this round of clinical reasoning pilot implementations a success. There are points we need to work on, such as Moodle clarification (an additional tutorial video has already been produced) and time constraints, which will all be addressed in the coming review period for the learning units. What’s more, the consortium will be delving into the conclusions on the didactical and content level for the learning units via the evaluation results reported in D5.2. These will all be brought forward during the overall revisions and improvements slated for D3.3, which begins in January 2022.
How to teach synchronously in a virtual setting
You need a reliable camera, microphone, and virtual platform, and you should be familiar with the platform’s features, such as whiteboard, chat, polling, breakout rooms, etc.
At the beginning, establish communication rules, e.g. whether participants should raise their (virtual) hand, use the chat, and/or just speak. We also recommend asking participants to turn on their cameras.
For small group work, breakout rooms work very well; just be clear about the tasks the groups should work on prior to dividing participants into the groups.
For collaboration, integrated virtual whiteboards or other platforms such as Padlet are very useful. Just make sure prior to the session that you have everything set up and the links at hand, e.g. to post them in the chat.
Allow a bit more time for starting the session and for the group work, as there might be participants who are not familiar with the platform, or technical problems might occur.
How to motivate unprepared participants
Make clear that the asynchronous assignments are a core part of the course and that its content will not be repeated. Even if it is difficult, stick to that when starting the synchronous teaching session.
If you expect unprepared participants, you can start the session with a student-centered group exercise mixing prepared and unprepared students, to increase peer pressure and make them realize that being unprepared does not feel good.
Use the introductory or closing quizzes/tests so that participants can self-assess whether they have the required knowledge, and you as a facilitator can see the level of knowledge and preparation of your participants.
Further recommended reading:
Hege I, Tolks D, Adler M, Härtl A. Blended learning: ten tips on how to implement it into a curriculum in healthcare education. GMS J Med Educ. 2020;37(5):Doc45. (Article)
How to involve participants with different levels of experience
To account for such different levels, we recommend making use of the asynchronous preparatory phases which also include introductory quizzes in which participants can self-assess their prior knowledge and you as a facilitator can assess the differences within your group. Participants with less prior experience can also be guided to additional preparatory resources.
Encourage participants to work in pairs or small groups when preparing so that they can help and learn from each other. You could even facilitate this by dividing them into groups with different levels of experience.
Similarly, during the synchronous phases, we recommend forming groups with participants of different levels of experience and emphasizing the peer support aspects of such group activities.
We also recommend starting with smaller groups and allowing more time than stated in the course outlines if you expect a heterogeneous level of experience. This way you can better manage the challenge.
Encourage your participants to ask questions, emphasizing that nobody knows everything and that it is important for learning to ask questions.
Especially in the train-the-trainer course you might have to deal with over-confident participants, who especially in an interprofessional setting can dominate the group. This is a complex cultural challenge, but you could try to establish (and follow) communication rules at the beginning of a session.
How to address potential overlaps or redundancies
Identify what is already included and what is missing in your curriculum related to clinical reasoning outcomes and compare it to the DID-ACT blueprint. Prioritize learning outcomes that are not yet covered but regarded as important.
Identify activities, resources, or teaching sessions with similar learning outcomes that might be in need of change anyway because of low evaluation results or because teachers or students struggle with them. These could be suitable for adding or replacing parts with DID-ACT activities.
Ask teachers and students about overlaps and gaps they see in their teaching/learning of clinical reasoning and where they struggle. This could also be done via a reflection round after related teaching activities in the curriculum.
Although a longitudinal integration is ideally the aim, we recommend starting small with a pilot implementation to gain experience and develop a showcase.
How to teach in an interprofessional setting
Allow for enough time prior to the teaching for the organization and motivation / encouragement of stakeholders and participants
Allow for enough time and guidance during the course so that the participants from the different professions can get to know each other and their professions and discuss their different perspectives. This might mean that you need to calculate some extra time in addition to the suggested duration of the learning unit.
There may be a different understanding of clinical reasoning in the different health professions, so we recommend making participants aware of this. You could for example use and adapt activities from the learning units on the health profession roles to facilitate this.
Courses in an interprofessional setting should not come too early in the curriculum (not before professions have formed their own professional identity - however, this also depends on the aim of the course).
Make sure you have enough participants from different professions. If possible, the facilitator could divide the participants in smaller groups with an equal distribution of professions.
Similarly, you need an equal distribution of facilitators from the different professions.
Develop customized learning materials considering the different professions. If needed you can adapt the material and activities provided in the DID-ACT curriculum.
Further recommended reading:
van Diggele, C., Roberts, C., Burgess, A. et al. Interprofessional education: tips for design and implementation. BMC Med Educ 20, 455 (2020). (Link)