On May 30th the DID-ACT team met for our third hybrid project meeting in the beautiful city of Maribor in Slovenia.
We started the day by celebrating our achievements so far, which include the completion, piloting, and evaluation of our learning units, a wide range of dissemination activities, and the expansion of our network.
We then discussed the remaining deliverables for the upcoming seven months, including the refinement of the student courses, the development of a certificate for the train-the-trainer courses, the finalization of the integration guideline, and the development of a sustainability model and long-term integration plans at our institutions.
To further work on the integration guideline, we split into three small groups and discussed recommendations for three main challenges we came across during our pilot implementations:
How to teach in an interprofessional setting
How to address potential overlaps of the DID-ACT courses with the local curriculum
How to deal with different levels of experience of participants
We summarized our results on Padlet and will integrate these recommendations into the guideline. Similarly, we worked in smaller groups on the development of a business canvas as part of the DID-ACT sustainability model. We will continue our work on this during our next planned project meeting in Kraków, Poland.
We concluded the day with a wonderful dinner at a local restaurant in the center of Maribor.
Elisa is a sustainable design student studying in Cologne and has been active in the DID-ACT Project since May 2021! Even though this is her first experience in medical education, Elisa has easily managed to help various members of our team by creating storyboards, videos, audio recordings, and animations based on summaries given to her in the curriculum development process. She works on a piece approximately every month and has completed over 12 pieces in total, each of which can take her days to create. Needless to say, Elisa's contribution to the project has been essential. A big thank you to Elisa for her contribution to the DID-ACT Project and to @Daniel Donath from EDU-A Degree Smarter for the blog post!
After we had tested the train-the-trainer courses of the DID-ACT curriculum, it was time to evaluate the quality of the learning units for students. We conducted a series of pilot studies that validated five different learning units in eight evaluation events across all partner institutions, including associate partners. We also recorded student activities in the virtual patient collection connected with the DID-ACT curriculum, which is available for deliberate practice. In addition, we evaluated the usability of the project's learning management system in several test scenarios.
Overview of student activities in piloted learning units
Overall, students agreed to a large extent that the piloted DID-ACT learning units improved their clinical reasoning skills (average 5.75 on a 7-point Likert scale). As a particular strength of the curriculum, students frequently named the benefit of the virtual patients integrated with the learning units. Another highlight were the small-group discussions, often conducted in multinational teams, which broadened their views on clinical reasoning. A challenge in the tested version of the curriculum implementation, however, was navigation in the learning management system (Moodle). As a consequence, we further analyzed these data and conducted a series of usability tests, which led to a process for addressing the issues wherever possible. We have also received several requests for modifications of the developed learning material, which we will address in the next deliverable, in which we refine the courses based on the pilot implementations.
With Q1 rapidly coming to a close, we are racing through the final year of the DID-ACT Project with excitement and fervor. We have created a great deal over the past two years and needed a place to clearly consolidate the knowledge we have amassed. We therefore created the DID-ACT Integration Guide, which we will continue to update as we develop our integration guideline.
DID-ACT’s Integration Guide
The DID-ACT Integration Guide can be found under our Curriculum heading and is broken into five categories. The goal of this resource is to provide an overview of the curriculum in its entirety, a getting-started guide for both educators and students, as well as additional resources and an FAQ about the curriculum.
The DID-ACT curriculum provides content for both educators and students. To make it easy to navigate the learning units, we put together the curricular blueprint earlier in the project. This page provides the blueprint in the form of an interactive table where you can view all of the learning units, broken down by level, audience, and theme. Using this table will help you organize your learning and find specific learning opportunities, whether as a student or an educator.
Clinical reasoning learning outcomes
Building from the blueprint, learners and facilitators are able to explore the themes and overarching learning objectives we have defined as a basis for our curriculum. There are 14 themes in our curriculum, ranging from topics like decision-making and attitudes toward clinical reasoning, to the basic theories around clinical reasoning.
How to use the DID-ACT curriculum
For students, there are 25 learning units spanning novice to advanced-level students in the health professions. This page serves several functions, one of which is explaining the who, why, what, and how of getting the most out of the curriculum.
For educators, there are 8 train-the-trainer learning units. The content includes multiple aspects of how to teach clinical reasoning to students including models and theories, cognitive errors and biases, as well as differences and similarities within different health professions. To support all of this learning are virtual patients, group work, and facilitator guides.
Clinical reasoning integration guideline
Our Integration Guide is still growing. The integration guideline provides insight into how institutions and educators can easily integrate content from the curriculum into their pre-existing learning structures. Coupled with examples from our pilots, as well as from interactions with associate partners and external stakeholders that highlight real-world experience and application, this guideline is key to the successful adoption of the DID-ACT clinical reasoning curriculum.
If you have used our resources successfully, we would love to hear from you as your input and experience would be very valuable. Just leave a comment!
An important part of our project is the testing of the developed learning units with students of the healthcare professions. Through such pilot implementations we receive valuable feedback from course participants, which helps us further improve the learning units. Thus, each medical school in our consortium planned a pilot implementation for fall/winter 2021.
Planning Pilot Implementations
Planning for the student units began in parallel with the train-the-trainer (TTT) units. This was intentional, as it allowed us to train the trainers during the TTT pilots so that they were ready for the student pilots later in the year. The consortium coordinated in a shared document focused on the following data:
Topic of the course
Dates of asynchronous phases and synchronous meetings
Anticipated number of participants
Professions of participants, i.e. whether the pilot was conducted in a mono-, multi-, or interprofessional setting
We continued to meet regularly, if not more often than usual, to work out open questions and support each other while we prepared. We also sought advice from our associate partners in Sweden, Switzerland, and the US in order to gather a broad variety of opinions and input. Some of the pilots were also open to external participants so that we could test beyond our immediate consortium.
Implementing the pilots
Overall, we conducted nine courses covering five different clinical reasoning topics by the end of January 2022.
While course registration and invitations were handled locally, all technical support was handled in collaboration with Instruct. Having a central partner leading this was very helpful as they explained course access, structure of pilots, and any facilitator resources needed for each pilot. Instruct also created a specific roadmap document for each partner alongside a more general short manual for how to register and access the DID-ACT Moodle platform.
We managed to recruit 188 medical and nursing students from partner-, associate partner-, and external institutions. In addition, we piloted our virtual patient units with a total of 618 students.
Pilot Implementation Results
Overall, the feedback from the facilitators and the interest of the students participating in our courses, especially in international and interprofessional settings, was very positive. The facilitators highlighted the discussions among participants, although some units required more time than anticipated. As with the train-the-trainer courses, participants reported technical challenges due to their unfamiliarity with our learning platform. The collected feedback, together with the participant questionnaires, will provide the basis for improving the courses.
It is hard to believe our clinical reasoning project is already ⅔ complete. Being the final year of the project, we look back at our learning units, pilots, and evaluations with appreciation for how they bring us into the project’s next phase.
The DID-ACT project team is currently moving at full speed toward our next round of project deliverables. Building on last year's pilot implementations, we are taking our curricula into their refinement stages. We will be working on the train-the-trainer (TTT) course refinements (D3.3), which we aim to have ready by May. Alongside this, we are finalising the student curriculum (D4.1), which we will have ready by the end of March.
Upcoming Student Clinical Reasoning Curriculum
Happening simultaneously is the evaluation and analysis of learner activities for the student curriculum (D5.3). Our results, retrieved from the pilot implementations of the student curriculum, are being sorted and reported on under the leadership of Dr A. Kononowicz (Jagiellonian University). We look forward to these results as they will be implemented as refinements for the student course, which is planned to start in April 2022.
Curriculum Integration Guideline
Apart from the curriculum itself, the curriculum integration guideline is being prepared. This document will serve as a guide for institutions aiming to integrate the curriculum into their own programs. While it is currently being drafted as part of WP7, led by EDU and supported by Instruct, it will need to be refined after the curriculum is completed. This will go hand in hand with the sustainability model, which is due at the end of the project.
Dissemination of the results of the project and research done by partners surrounding the project will also continue; results will be shown at medical education conferences, including the AMEE. Time speeds forward as we are working to bring this project to life and help support educators and students develop their clinical reasoning skills. Here’s to the DID-ACT project starting off 2022!
To ensure the quality of our curriculum's development, our pilots were accompanied by a questionnaire for participants and facilitators. We are using this feedback to set the right emphasis and create a clearer final product for our learners. These responses were coupled with monitoring of our chosen learning management system (LMS), Moodle, and virtual patient system, CASUS. DID-ACT's six institutional partners took part in the evaluation by facilitating 9 pilot courses across Europe. In brief, approximately 100 teachers participated in the 5 clinical reasoning teaching topics in the train-the-trainer course pilots. Approximately half of the participants returned their evaluation questionnaires, alongside 12 responses from facilitators. For the results, discussed further here, coding was double-checked and disagreements were resolved by consensus.
Survey tools for clinical reasoning curriculum assessment
In our pilots we decided to use survey-based tools for evaluating the train-the-trainer (TTT) and student courses. Our goal was to capture responses using fewer questions in a way that allowed for comparison between piloted units. In the end, we used the Evaluation of Technology-Enhanced Learning Materials (ETELM) instrument. This tool, developed by David Cook and Rachel Ellaway, gave us a launch pad for making questionnaires our standard evaluation tool. This was attractive for many reasons, including ease of implementation in our learning unit template within our LMS.
Lessons learned around evaluations for our pilot implementation
Through iteration and collaboration with a psychologist, experienced educators, and researchers, we found the following pertinent for our project and beyond:
Ensure you are using consistent language; e.g. use either 'course' or 'learning unit', as pertinent to your project
Be mindful of using the word 'practice', as it can be interpreted in many ways; in DID-ACT's case, we changed "This learning unit will change my practice" to "This learning unit will improve my clinical reasoning"
Provide participants the option to write free-text responses in their native language, as the project allows
Avoid too many questions, which may overload participants
Ask about years in a profession rather than age; this provided more useful answers for what we needed.
TTT Pilot Implementation Survey Results
We set up our questionnaires using a scale of 1 (definitely disagree) to 7 (definitely agree). The average response was 5.8 for the question of whether the courses would improve the teaching of clinical reasoning. The pilots excelled in the areas of topic selection, small-group discussions, the facilitators, and interprofessional learning opportunities. Room for growth was identified in the technical navigation of the LMS, assessment and feedback on the reasoning process, and content tailored to professions other than physicians.
Analysis of Pilot Implementations
The survey questionnaires were analyzed in Microsoft Excel, where we calculated descriptive statistics using quantitative methodology. For the open-ended responses, in contrast, we performed a content analysis. Participant utterances were coded with the categories proposed in D3.2 (Didactical, Content, Technical, Interaction/Collaboration, Implementation/Time, Implementation/Facilitators), which we extended with three more categories (Content/Assessment, Overall, and Others). All data was processed anonymously, with each statement rated as positive, negative, or neutral.
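As a minimal illustration of the analysis steps above (our actual analysis was done in Excel, so this is only a sketch), the descriptive statistics for the Likert items and the category/valence tally for the coded utterances could look like this in Python; all sample data below is invented, and only the category labels come from the D3.2 scheme:

```python
# Sketch of the pilot-evaluation analysis: descriptive statistics for
# 7-point Likert responses plus a tally of coded free-text statements.
# The data values are invented for illustration.
from statistics import mean, median, stdev
from collections import Counter

# Example Likert responses (1 = definitely disagree ... 7 = definitely agree)
likert = [6, 7, 5, 6, 7, 4, 6, 5, 7, 6]
print(f"n={len(likert)} mean={mean(likert):.2f} "
      f"median={median(likert)} sd={stdev(likert):.2f}")

# Each coded utterance: (category, valence), using the D3.2 categories
# plus the three added ones (Content/Assessment, Overall, Others).
coded = [
    ("Didactical", "positive"),
    ("Technical", "negative"),
    ("Interaction/Collaboration", "positive"),
    ("Content/Assessment", "neutral"),
    ("Technical", "negative"),
]
tally = Counter(coded)
for (category, valence), count in sorted(tally.items()):
    print(f"{category:28s} {valence:8s} {count}")
```

Tallying (category, valence) pairs this way makes it easy to see at a glance where the negative feedback clusters, e.g. in the Technical category.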
Overall, the TTT pilot implementations were a success as well as were our efforts in evaluating them. We will implement constructive feedback applicable to other learning units as we continue to develop them. Alongside this, we will return to the original pilot implementations and amend what needs to be improved. You can read a more detailed overview of D5.2 Evaluation and analysis of learner activities during these TTT pilot implementations here.
While developing the DID-ACT learning unit about cognitive errors and biases, we came across this great video on YouTube by Dr. Joanna Kempner about gender biases and the underrepresentation of, especially, women of color in healthcare. Stereotypes are still deeply embedded in our culture and, as Dr. Kempner illustrates, they are still visible in (medical) advertising, healthcare institutions, and workplaces. Although women were finally included in clinical trials in the US in the 1990s, funding for diseases that are more prevalent in women is still very low, which also results in less or lower-quality treatment. For more information and details, we highly recommend the video by Dr. Kempner:
Just before the holiday season, we finalized a couple of interesting deliverables, reports, and updates.
Most importantly, we completed the development of the DID-ACT train-the-trainer courses on clinical reasoning. Overall, eight learning units are available on our learning platform Moodle, including comprehensive information and documents for future course facilitators. The course development was accompanied by pilot testing of the learning units with participants from partner, associate, and external institutions. On our website we provide a summary of these pilots and the extensive results of the evaluation activities. These results will inform the refinements of the train-the-trainer courses, which we will start implementing in January 2022.
We also repeated our Social Network Analysis and published the results, including our website and learning management platform hits, in this updated summary.
We wish you all peaceful holidays and a happy New Year!
Looking back at our last chunk of time in the DID-ACT project, we have a lot to be proud of and a lot to look forward to. One of the most exciting and hands-on aspects of this clinical reasoning curriculum launch is where we are right now: Pilot Implementations.
Train the trainer course on clinical reasoning
In our latest published report, Pilot implementations of the train-the-trainer course, we focused on the train-the-trainer learning units of our curriculum. These pilots were valuable and insightful in helping the team iron out kinks in content, strategy, communications, and more. Overall, we ran 7 courses covering 4 different clinical reasoning topics by the end of October, with a total of 69 participants from professions including medicine, nursing, paramedics, the basic sciences, and physiotherapy, drawn from various partner, associate-partner, and external institutions. We also had student participants. Overall the feedback was very positive; the next step is to implement this feedback into the curriculum. Our goals for the pilot phase were:
More than 50 participants from partner and associate partner institutions, as well as external participants
Covering a wide range of topics of the train-the-trainer courses that fit to the partner faculty development programs
At least two of the same learning units piloted by 2-3 partners each
Thoroughly evaluated based on questionnaires for participants and instructors and learning analytics (in alignment with WP5).
Methods for a train the trainer implementation
We used our chosen learning platform, Moodle, to host our blended learning curriculum. Several steps were taken to ensure the implementations were as smooth and consistent as possible, beginning with a succinct planning phase.
Most of the train-the-trainer courses were chosen in tandem with their student curriculum counterparts. This was intentional, so that trainers would themselves be adequately prepared to teach the students. Each institution chose its learning unit based on its individual needs and requirements. During this time, the consortium met regularly to plan and to ensure that the quality criteria would be met. Alongside the pilots within our consortium, we also elected to include external participants to test external applicability.
The implementation happened differently at each institution, with recruitment ranging from emails to specific cohorts to a full public university call. During this time, each member was supported by Instruct to ensure that course access, the structure of the pilots, and the required facilitator resources were accessible and clear. This included a roadmap on how to use the Moodle platform, as that had previously been highlighted as an area needing support. Differences were also due to the use of course forums and the analysis of feedback within the learning platform.
Analysis and feedback phase
One of the deliverables for work package 5 was an evaluation questionnaire, as well as an analysis of the usage data. The former was given to participants at the end of the learning unit. Alongside this evaluation, each facilitator was given a short template to fill in for more qualitative reporting on their experience. Each of the responses was categorized and discussed together.
In the end, we piloted 4 interprofessional sessions and 3 with external participants. Feedback was generally positive, and anything less than ideal is being used as constructive feedback for further refinement. Our biggest wins were that the interactive aspects were found to be highly valuable and that facilitators from varied professions were appreciated. The constructive feedback centered on Moodle and CASUS being unclear as tools, too little time, the tension between teaching the topic and teaching how to teach, and conversations veering toward medicine due to unbalanced participant professions (e.g. too many physicians versus physiotherapists in one group).
Pilot implementation conclusions
Overall, the consortium deems this round of clinical reasoning pilot implementations a success, as well as our efforts in evaluating them. There are points we need to work on, such as clarifying Moodle (an additional tutorial video has already been produced) and addressing time constraints, all of which will be handled in the coming review period for the learning units. What's more, the consortium will delve into the conclusions on the didactical and content level for the learning units via the evaluation results reported in D5.2. These will all be brought forward during the overall revisions and improvements slated for D3.3, which begins in January 2022.
How to teach synchronously in a virtual setting
You need a reliable camera, microphone, and virtual platform, and you should be familiar with the platform's features, such as whiteboard, chat, polling, breakout rooms, etc.
At the beginning, establish communication rules, e.g. whether participants should raise their (virtual) hand, use the chat, and/or just speak. We also recommend asking participants to turn on their cameras.
For small group work, breakout rooms work very well; just be clear about the tasks the groups should work on before dividing participants into groups.
For collaboration, integrated virtual whiteboards or other platforms such as Padlet are very useful. Just make sure prior to the session that you have everything set up and the links at hand, e.g. to post them in the chat.
Allow a bit more time for starting the session and for group work, as there might be participants who are not familiar with the platform, or technical problems might occur.
How to motivate unprepared participants
Make clear that the asynchronous assignments are a core part of the course and that its content will not be repeated. Even if it is difficult, stick to that when starting the synchronous teaching session.
If you expect unprepared participants, you can start the session with a student-centered group exercise mixing prepared and unprepared students to increase peer-pressure and make them realize that being unprepared does not feel good.
Use the introductory or closing quizzes/tests so that participants can self-assess whether they have the required knowledge and you as a facilitator can gauge the level of knowledge and preparation of your participants.
Further recommended reading:
Hege I, Tolks D, Adler M, Härtl A. Blended learning: ten tips on how to implement it into a curriculum in healthcare education. GMS J Med Educ. 2020;37(5):Doc45. (Article)
How to involve participants with different levels of experience
To account for such different levels, we recommend making use of the asynchronous preparatory phases which also include introductory quizzes in which participants can self-assess their prior knowledge and you as a facilitator can assess the differences within your group. Participants with less prior experience can also be guided to additional preparatory resources.
Encourage participants to work in pairs or small groups when preparing so that they can help and learn from each other. You could even facilitate this by dividing them into groups with different levels of experience.
Similarly, during the synchronous phases, we recommend forming groups of participants with different levels of experience and emphasizing the peer-support aspects of such group activities.
We also recommend starting with rather small groups and allowing more time than stated in the course outlines if you expect a heterogeneous level of experience. This way you can better manage the challenge.
Encourage your participants to ask questions, emphasizing that nobody knows everything and that it is important for learning to ask questions.
Especially in the train-the-trainer course, you might have to deal with over-confident participants who, especially in an interprofessional setting, can dominate the group. This is a complex cultural challenge, but you can try to establish (and enforce) communication rules at the beginning of a session.
How to address potential overlaps or redundancies
Identify what is already included and what is missing in your curriculum related to clinical reasoning outcomes and compare it to the DID-ACT blueprint. Prioritize learning outcomes that are not yet covered but regarded as important.
Identify activities, resources, or teaching sessions with similar learning outcomes that might need change anyway, e.g. because of low evaluation results or because teachers or students struggle with them. These could be suitable candidates for adding or replacing parts with DID-ACT activities.
Ask teachers and students about the overlaps and gaps they see in their teaching/learning of clinical reasoning and where they struggle. This could also be done in a reflection round after related teaching activities in the curriculum.
Although a longitudinal integration is ideally the aim, we recommend starting small with a pilot implementation to gain experience and develop a showcase.
How to teach in an interprofessional setting
Allow for enough time prior to the teaching for the organization and motivation / encouragement of stakeholders and participants
Allow for enough time and guidance during the course so that the participants from the different professions can get to know each other and their professions and discuss their different perspectives. This might mean that you need to calculate some extra time in addition to the suggested duration of the learning unit.
There may be a different understanding of clinical reasoning in the different health professions, so we recommend making participants aware of this. You could for example use and adapt activities from the learning units on the health profession roles to facilitate this.
Courses in an interprofessional setting should not come too early in the curriculum (not before professions have formed their own professional identity - however, this also depends on the aim of the course).
Make sure you have enough participants from different professions. If possible, the facilitator could divide participants into smaller groups with an equal distribution of professions.
Similarly, you need an equal distribution of facilitators from different professions.
Develop customized learning materials considering the different professions. If needed you can adapt the material and activities provided in the DID-ACT curriculum.
Further recommended reading:
van Diggele, C., Roberts, C., Burgess, A. et al. Interprofessional education: tips for design and implementation. BMC Med Educ 20, 455 (2020). (Link)