DID-ACT’s evaluation process for pilot implementations of the train-the-trainer courses

We are kicking off 2022 by building on our “Pilot Implementations on Clinical Reasoning” held from June 2021 until December 2021. We will draw on the insights from our recently published “Evaluation and analysis of learner activities of the pilot implementations of the train-the-trainer course”.

Evaluation for Quality Control

To ensure the quality of our curriculum development, our pilots were accompanied by a questionnaire for participants and facilitators. We are using this feedback to adjust emphasis where needed and to create a clearer final product for our learners. These responses were coupled with monitoring of our chosen learning management system (LMS), Moodle, and our virtual patient system, CASUS. DID-ACT’s six institutional partners took part in the evaluation by facilitating 9 pilot courses across Europe. In brief, approximately 100 teachers participated in the 5 clinical reasoning teaching topics covered in the train-the-trainer course pilots. Approximately half of the participants returned their evaluation questionnaires, alongside 12 responses from facilitators. For the results, discussed further below, the coding was double-checked and disagreements were resolved by consensus.

Survey tools for clinical reasoning curriculum assessment

In our pilots we decided to use survey-based tools for evaluating the train-the-trainer (TTT) and student courses. Our goal was to capture responses using fewer questions in a way that allowed for comparison between piloted units. In the end, we used the Evaluation of Technology-Enhanced Learning Materials (ETELM) instrument. This tool, developed by David Cook and Rachel Ellaway, gave us a launch pad for making questionnaires our standard evaluation tool. It was attractive for many reasons, including how easily it could be implemented in our learning unit template within our LMS.

Lessons learned around evaluations for our pilot implementation

Through iteration and collaboration with a psychologist, experienced educators, and researchers, we found the following pertinent for our project and beyond: 

  • Use consistent language; e.g. use either ‘course’ or ‘learning unit’, as pertinent to your project
  • Be mindful of using the word ‘practice’, as it can be interpreted in many ways; in DID-ACT’s case, we changed “This learning unit will change my practice” to “This learning unit will improve my clinical reasoning”
  • Provide participants with an option to write free text in their native language, as far as the project allows
  • Avoid too many questions, which may overload participants
  • Ask about years in a profession rather than age; this gave us more succinct answers for what we needed.

TTT Pilot Implementation Survey Results

We set up our questionnaires using a scale of 1 (definitely disagree) to 7 (definitely agree). When asked whether the courses would improve their teaching of clinical reasoning, participants gave an average score of 5.8. The pilots excelled in the areas of selection of topics, small group discussions, the facilitators, and inter-professional learning opportunities. Growth was suggested in the areas of technical navigation of the LMS, assessment and feedback on the process, and content tailored more to professions other than physicians.

Analysis of Pilot Implementations

The survey questionnaires were analyzed in Microsoft Excel, where we calculated descriptive statistics using quantitative methodology. For the open-ended responses, we performed a content analysis. Participant utterances were coded with the categories proposed in D3.2 (Didactical, Content, Technical, Interaction/Collaboration, Implementation/Time, Implementation/Facilitators), and we extended this scheme with three more categories (Content/Assessment, Overall and Others). All data was processed anonymously, with each statement coded as positive, negative, or neutral.
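For readers who want a concrete picture of these two steps, here is a minimal sketch in Python rather than Excel; the file name, column names and labels are illustrative assumptions, not the project’s actual data layout.

    # Minimal sketch (not the project's actual Excel workflow) of the two analysis
    # steps: descriptive statistics for a 7-point scale item and a tally of coded
    # free-text statements.
    import csv
    import statistics
    from collections import Counter

    with open("ttt_pilot_responses.csv", newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    # Quantitative part: mean and spread of one Likert item (1-7 scale).
    scores = [int(r["improves_teaching"]) for r in rows if r["improves_teaching"]]
    print(f"n={len(scores)}  mean={statistics.mean(scores):.1f}  "
          f"sd={statistics.stdev(scores):.1f}")

    # Qualitative part: each coded utterance carries a category (e.g. 'Didactical',
    # 'Technical', 'Content/Assessment') and a valence (positive/negative/neutral).
    tally = Counter((r["category"], r["valence"]) for r in rows if r["category"])
    for (category, valence), count in sorted(tally.items()):
        print(f"{category:<25} {valence:<10} {count}")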

Overall, the TTT pilot implementations were a success, as were our efforts in evaluating them. We will apply constructive feedback to other learning units as we continue to develop them. Alongside this, we will return to the original pilot implementations and amend what needs to be improved. You can read a more detailed overview of D5.2, Evaluation and analysis of learner activities during these TTT pilot implementations, here.

Gender Disparities and Biases in Healthcare – still not a topic from the past

While developing the DID-ACT learning unit about cognitive errors and biases, we came across this great video on YouTube by Dr. Joanna Kempner about gender biases and the underrepresentation of women, especially women of color, in healthcare. Stereotypes are still deeply embedded in our culture and, as Dr. Kempner illustrates, they remain visible in (medical) advertising, healthcare institutions, and workplaces. Although women were finally included in clinical trials in the US in the 1990s, funding for diseases that are more prevalent in women is still very low, which also results in less or lower-quality treatment. For more information and details, we highly recommend the video by Dr. Kempner:

The aforementioned learning unit for students is publicly available in our learning management system: Biases and cognitive errors – an introduction.

Wrap-up of 2021

Just before the holiday season, we finalized a couple of interesting deliverables, reports, and updates.

Most importantly, we completed the development of the DID-ACT train-the-trainer courses on clinical reasoning. Overall, eight learning units are available on our learning platform Moodle, including comprehensive information and documents for future course facilitators. The course development was accompanied by pilot testing of learning units with participants from partner, associate, and external institutions. On our website we provide a summary of these pilots and the extensive results of the evaluation activities. These results will inform the refinements of the train-the-trainer courses, which we will start implementing in January 2022.

We also repeated our Social Network Analysis and published the results, including our website and learning management platform hits, in this updated summary.

We wish you all peaceful holidays and a happy New Year!

Pilot Implementations on Clinical Reasoning

Looking back at our last chunk of time in the DID-ACT project, we have a lot to be proud of and a lot to look forward to. One of the most exciting and hands-on aspects of this clinical reasoning curriculum launch is where we are right now: Pilot Implementations.

Train the trainer course on clinical reasoning 

In our latest published report, Pilot implementations of the train-the-trainer course, we focused on the train-the-trainer learning units for our curriculum. These pilots were valuable and insightful in helping the team iron out kinks in content, strategy, communications, and more. Overall, we ran 7 courses covering 4 different clinical reasoning topics up until the end of October. We are pleased to share that this made for a total of 69 participants from professions such as medicine, nursing, paramedicine, basic sciences and physiotherapy, drawn from various partner, associate-partner, and external institutions. We also had student participants. Overall the feedback was very positive. Next up is to take this feedback and implement it in the curriculum.

Quality criteria for pilot implementations

Our quality criteria, which we were successful in achieving, were the following: 

  • More than 50 participants from partner and associate partner institutions, as well as external participants
  • Covering a wide range of topics of the train-the-trainer courses that fit to the partner faculty development programs
  • Piloting of at least two of the same learning units by 2-3 partners
  • Thorough evaluation based on questionnaires for participants and instructors and on learning analytics (in alignment with WP5).

Methods for a train the trainer implementation

We used our chosen learning platform Moodle to host our blended learning curriculum. There were several steps taken to ensure implementations were as smooth and consistent as possible. It began with a succinct planning phase. 

Planning phase 

Most of the train-the-trainer courses were chosen in tandem with their student curriculum counterparts. This was done intentionally so that trainers would be adequately prepared themselves to teach the students. Each institution chose their learning unit based on their individual needs and requirements. 
During this time, the consortium met on a regular basis to plan and ensure that the quality criteria would be met. In addition to running pilots within our consortium, we also invited external participants to test external applicability.

Implementation phase 

The implementation happened differently at each institution, and recruitment ranged from emails to specific cohorts to a full public university call. During this time, each member was supported by Instruct to ensure that course access, the structure of the pilots, and the required facilitator resources were accessible and clear. This included a roadmap on how to use the Moodle platform, as that had previously been highlighted as an area needing support. Differences were also due to the use of course forums and the analysis of feedback within the learning platform.

Analysis and feedback phase

One of the deliverables for work package 5 was an evaluation questionnaire, as well as an analysis of the usage data. The former was given to participants at the end of the learning unit. Alongside this evaluation, each facilitator was given a short template to fill in for more qualitative reporting on their experience. Each of the responses was categorized and discussed together.

Results 

In the end, we piloted 4 interprofessional sessions and 3 sessions with external participants. Feedback was generally positive, and anything that could be termed less than ideal is being used as constructive feedback for further refinement. Our biggest wins were that the interactive aspects were found to be highly valuable and that facilitators from varied professions were appreciated. The constructive feedback centred on Moodle and CASUS being unclear as tools, too little time, the tension between teaching the topic itself and teaching how to teach it, and conversations veering toward medicine due to unbalanced participant professions (e.g. too many physicians versus physiotherapists in one group).

Pilot implementation conclusions

Overall, the consortium deems this round of clinical reasoning pilot implementations a success. There are points we need to work on, such as clarifying Moodle (an additional tutorial video has already been produced) and time constraints, which will all be addressed in the coming review period for the learning units. What’s more, the consortium will delve into the conclusions on the didactical and content level for the learning units via the evaluation results reported in D5.2. These will all be brought forward during the overall revisions and improvements slated for D3.3, which begins in January 2022.

A Review of Reviewing Itself: Improvements on DID-ACT’s Learning Unit Review Process

The DID-ACT project’s in-person September meeting in Bern, Switzerland, brought forward many interesting insights and opportunities for streamlining tasks. Aspects of effective project management in our development of a clinical reasoning curriculum came up a few times. Some key takeaways were small, like how to manage our folders more clearly using the feedback from the interim report. Larger topics, like tools for effectively writing blog posts and reports, were also raised; these tools help ensure the language of posts is at an appropriate level for the audience. One of our largest takeaways was how to streamline the review process for our learning units (LU) in a way that is more time-efficient and thorough.

Streamlining the curriculum review process

Our previous process for reviewing the learning units was to set up a small working group. Groups would be given a week or two for review, then come together to discuss their thoughts using a standardized review template. The team who developed the learning unit would then implement the necessary changes based on the feedback. Following this, a final review was opened to the group. We noticed a few downfalls to this method:

  1. It took many weeks to complete a review because the entire team had to be synchronized;
  2. Things slipped through the cracks that only surfaced under the closer scrutiny that the Moodle implementation required;
  3. Reviewers who had less at stake than, for example, someone testing the learning unit themselves were not as engaged as proper scrutiny requires.

This third point was the experience of one of the EDU teammates when preparing to implement the Person-Centred Care learning unit for trainers.

New review process for clinical reasoning learning units

Something many of us know about preparing anything is that running through it in detail, as close to how it will be used in real life, is a key part of ensuring you are producing a quality item. This is exactly the circumstance Jennifer and Daniel found themselves in when preparing their PCC learning unit pilot. Despite having made it through the previously described learning unit review process, tiny errors slipped through. Ideas around how to use time more effectively, adequate preparation for an activity, and the Moodle implementation itself were all aspects of the curriculum we could streamline before the actual pilot. The EDU team brought this experience forward to the consortium at the Bern meeting, and from this fruitful discussion came the following modified review process:

  1. LUs are to be completed in batches
  2. When a working group has their LU prepared for review, they email the consortium and a review group of 2-3 people, including 2 of the authors, is formed using a Google Doc sign-up sheet
  3. A review date is agreed on
  4. At this point, the team reviews the learning unit asynchronously
  5. Upon meeting synchronously, one of the authors runs through the learning unit as if they were piloting it, while the second takes notes of their own as well as recording the reflections of the ‘learners’
  6. Amendments are made and, when completed, go through a final review before the unit is added to Moodle.

This modified review process helps to ensure that there is a stronger stakeholder in the review: the person who will actually have to teach the unit to their teammates. We will keep you posted on how it goes!

For any questions regarding this process, please contact us!

DID-ACT meets in Bern: Interim report, sustainability and dissemination

After a long wait due to the pandemic, the DID-ACT project team with partners and associate partners had the opportunity to once again meet face-to-face. From the 22nd to the 23rd of September, teammates from Slovenia, Malta, Germany, and Poland travelled to Bern, Switzerland. Regrettably, due to travel restrictions, the Örebro team members, as well as Steve Durning from the USA, could not attend physically. Despite this limitation, they were fully present virtually alongside other associate partners. Thanks to the fantastic technical support from Bern University, all partners joining from home could be connected to the meeting and were shown on a separate screen in the room. The audio and video quality were very good and synchronous discussion was possible.

Our virtual participants from Örebro University

Objectives of Meeting in Bern

The main objectives of the meeting went well beyond catching up on the status of the project. We spent significant time discussing the evaluation and feedback results from the interim report and the immediate and longer-term next steps, as well as initiating the sustainability and integration guideline deliverables.

Interim Report for the DID-ACT Project

The interim report feedback was quite positive; however, there is also some room for improvement. Improvements highlighted include the documentation and visibility of project outcomes concerning quality indicators, the document structure, and a better connection between the related work packages (WP) 5, 6 and 8, as well as their connection to our central work packages, WP3 and WP4, in which the learning units (LUs) are created.

Our next challenge is the upcoming pilot implementations, starting in September 2021, to be held at the various institutions. We still have some clinical reasoning learning units left to develop, so the curriculum development workload continues at high speed until the end of the year. Our processes, including the process for reviewing learning units, will be fine-tuned for a more practical and effective approach. These were discussed during the meeting in Bern and will be further highlighted in a coming blog post.

Sustainability & Dissemination in a Clinical Reasoning Curriculum

While dissemination and sustainability have been ongoing topics throughout the project, we took our face-to-face meeting as an opportunity to cement the next steps. We feel that the sustainability concepts resulting from the pilots will be very valuable, and external feedback will also be included. Based on the many ideas that surfaced in the meeting, we will create a minimal plan for covering costs in the first years after the project ends. Additionally, we will focus on integrating project results into partner curricula and including associate partners, to also recruit people and keep the project content alive.

In addition to the very fruitful and motivating discussions held during the day, the evening was equally well-spent. We had a team lunch followed up by some ice cream, as well as dinner and a walk around the ‘old town’.

Group picture (from left to right: Martin Adler (Instruct), Christian Fässler (ETH Zürich), Živa Ledinek (University of Maribor), Alice Bienvenu (University of Augsburg), Jennifer Vrouvides (EDU), Inga Hege (University of Augsburg), Melina Körner (University of Augsburg), Sören Huwendiek (University of Bern), Claudia Schlegel (Berner Bildungszentrum Pflege), Monika Sobočan (University of Maribor), Małgorzata Sudacka (Jagiellonian University), Andrzej Kononowicz (Jagiellonian University) and virtual participants Desiree Wiegleb Edström (Örebro University), Samuel Edelbring (Örebro University), Marie Lidskog (Örebro University), Daniel Donath (EDU), Steve Durning (Uniformed Services University)).

It was a great pleasure to meet at least the vast majority of the team in a face-to-face environment. We plan to have our next face-to-face meeting in Maribor early next year, and we hope that our rescheduled meeting in Krakow can then be held in May 2022. We are hopeful that the COVID-19 situation will allow these meetings. This face-to-face time is a great experience for the development of the project as well as for our development as colleagues.

Thanks to our host Sören Huwendiek for organizing the meeting, and to all partners and associate partners for contributing to this project meeting.

Project presence at AMEE – the largest European conference on Health Professions Education

The AMEE 2021 Conference was held as a virtual conference from 27-30th August 2021. The conference attracted thousands of participants from around the world.

The DID-ACT project was represented by two oral presentations and active participation from several project members.

Samuel Edelbring and colleagues presented and discussed our curriculum framework in a presentation called “Development of our framework for a structured curriculum: Outcomes from a multi professional European project”.

Key points from the presentation were:

  • An overarching model for curriculum development (Kern 2016)
  • Presentation of our 35 learning objectives in 11 themes and 4 levels
  • Characteristics on the what and the how of our clinical reasoning curriculum
  • Some practical examples of learning activity designs

Małgorzata Sudacka from Jagiellonian University presented an e-poster about the complexity and diversity of barriers hindering the introduction of clinical reasoning into health professions curricula – results of the interprofessional European DID-ACT project.

Inga Hege and colleagues presented and discussed “Differences in clinical reasoning between female and male medical students in a virtual patient environment”. They found that female students created more elaborate concept maps than male students and were also more likely to complete the VPs. However, no differences were found in diagnostic accuracy.

Project Half Time – Our Clinical Reasoning Curriculum Development

Time has flown by quickly in the DID-ACT project, which began in January 2020. The project kicked off with an analysis of specific learner and educator needs for the development of a curriculum. From this structured needs analysis we derived a set of learning goals and objectives describing what a clinical reasoning curriculum should cover. Previous group work had demonstrated that explicit clinical reasoning curricula are needed in medical education, but that not many healthcare institutions currently teach clinical reasoning explicitly. The project was therefore a welcome stepping stone towards developing the needed curricula. A big effort of our project is to incorporate both the needs identified through the survey carried out prior to the project and the in-depth needs for a clinical reasoning curriculum identified through the needs analysis within DID-ACT.

The year 2021 marked an important step in the development of our clinical reasoning curriculum: initiating the development of our first learning units. The learning units are the building blocks for our curriculum. The project intends to build 40 learning units in total for students and educators, which educators can use, according to their needs, to implement either the whole curriculum or parts of it at their home institutions. The learning units focus on aspects such as theories of clinical reasoning, collaboration and interprofessional learning, or errors and biases in clinical reasoning (see Deliverable 2.1).

Our development groups have spent, and continue to spend, time developing and refining the learning units for both applicability and adaptability so that educators can use the teaching content to its fullest potential. The learning units also include specific teaching methods and can thus be adapted to a particular institution’s framework. Reviewing the learning units is an integral part of this process. Upon initial completion, all learning units are further refined through a collaborative peer review by multiple health professionals, learning designers, and other educational experts, carried out asynchronously and followed by a synchronous session during a DID-ACT team meeting. Once the review process and the revision following feedback are done, the learning units are implemented in our chosen learning management system.

The learning units are publicly available; you can log in with your institutional credentials or after registering with any email address:

Reflections from the Medical Education Forum 2021

The 2nd Medical Education Forum (MEF), hosted from 4 to 6 May 2021 as a virtual meeting, was an opportunity to review and summarise current research outcomes in medical education. It was organised by Jagiellonian University Medical College, McMaster University and the Polish Institute for Evidence-Based Medicine. The live event featured five speakers from the DID-ACT project (Samuel Edelbring, Inga Hege, Sören Huwendiek, Małgorzata Sudacka & me) and had 110 participants from 24 countries, most of them from Canada, Poland and Ukraine.

During the MEF conference, I took on the task of reviewing the most recent systematic reviews of the effectiveness of virtual patients; such a review of reviews is called an umbrella review. The effectiveness of virtual patients is an important topic for the DID-ACT project because we use this type of educational resource as a vehicle to deliver interactive exercises for practicing clinical reasoning in the DID-ACT curriculum. Seeing how the effectiveness of clinical reasoning outcomes is measured is also important for informing the DID-ACT pilot evaluations.

I identified five systematic reviews of virtual patient effectiveness published in the last three years, including a systematic review I completed with my colleagues from the Digital Health Education collaboration in 2019. For me personally, preparing the MEF presentation was an interesting exercise that gave me an opportunity to see how well the results obtained in our earlier review align with the outcomes reported in reviews published afterwards. Such a check makes sense because systematic reviews often have unique scopes defined by the selected inclusion criteria, data extraction and synthesis methods, and their conclusions may therefore differ.

The reviews published after 2019 were carried out by international teams from France, New Zealand, South Korea, the UK and the USA. Only one, like ours, included all health professions; the remaining reviews focused on particular health professions: nursing, medicine, pharmacy. The studies either included all possible outcomes or selected a particular skill. It was interesting to see that the skill most often in scope in recent syntheses was communication skills. The conclusions of the studies were consistent across the different professions and topics: with hardly any exceptions, they reported benefits of applying virtual patients in education. As Lee and colleagues (Med Educ, 54(9), 2020) concluded in their systematic review, the effectiveness of virtual patients can be improved even further when their use is preceded or followed by reflection exercises and feedback provided by human teachers. The technological features of virtual patient platforms were less important.

You can learn more about the results of my umbrella review, the presentations of the other DID-ACT project speakers and the follow-up question & answer sessions in the video recordings.

More about virtual patients ….

In this blog post we would like to point to another Erasmus+ funded project, “iCoViP” – International Collection of Virtual Patients. This strategic partnership with participants from Poland, Germany, France, Spain, and Portugal aims to create a well-designed, high-quality collection of virtual patients to train clinical reasoning. Unlike DID-ACT, iCoViP focuses specifically on the training of medical students by providing opportunities to identify symptoms and findings, develop differential diagnoses, document tests and treatment options, and decide on the final diagnosis.

Screenshot of a virtual patient in CASUS

The project started in April 2021 and continues until March 2023. As a first step, the consortium is developing a blueprint that describes the virtual patients based on key symptoms, final diagnoses, and (virtual) patient-related data, such as age, sexual orientation, disability, profession, etc. This approach ensures that the collection is a realistic representation of a real-world patient collective.
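To make the idea of such a blueprint more tangible, here is a small, purely illustrative sketch of what one blueprint entry could look like in code; the field names and example values are our own assumptions, not the iCoViP consortium’s actual schema.

    # Hypothetical sketch of a single blueprint entry; the real iCoViP blueprint
    # fields and values may differ.
    from dataclasses import dataclass, field

    @dataclass
    class BlueprintEntry:
        key_symptom: str        # presenting complaint the case is built around
        final_diagnosis: str    # intended final diagnosis of the case
        age: int
        sex: str
        profession: str = ""    # patient's occupation, if relevant to the case
        disability: str = ""    # e.g. a mobility or sensory impairment
        notes: list[str] = field(default_factory=list)  # other patient-related data

    # Illustrative entry only; spreading entries across ages, professions, and
    # key symptoms is what keeps the collection realistic.
    example = BlueprintEntry(
        key_symptom="chest pain",
        final_diagnosis="pulmonary embolism",
        age=67,
        sex="female",
        profession="retired teacher",
    )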

More information about the project can be found at icovip.eu