Evaluation results from piloting the student learning units

After testing the train-the-trainer courses of the DID-ACT curriculum, it was time to evaluate the quality of the learning units for students. We conducted a series of pilot studies that validated five different learning units in eight evaluation events across all partner institutions, including associate partners. We also recorded student activities in the virtual patient collection connected with the DID-ACT curriculum and available for deliberate practice. In addition, we evaluated the usability of the project’s learning management system in several test scenarios.

Overview of student activities in the piloted learning units


Overall, students agreed to a large extent that the piloted DID-ACT learning units improved their clinical reasoning skills (average of 5.75 on a 7-point Likert scale). As a special strength of the curriculum, students frequently named the virtual patients integrated with the learning units. Another highlight was the small-group discussions, often conducted in multinational teams, which broadened students’ views on clinical reasoning. However, a challenge in the tested version of the curriculum implementation was navigation in the learning management system (Moodle). As a consequence, we further analyzed these data and conducted a series of usability tests. These analyses and tests led to a process for addressing the issues wherever possible. We have also received several requests for modifications to the developed learning material, which we will address in the next deliverable, in which we refine the courses based on the pilot implementation.

You can find the full report, published at the end of March 2022, here: Evaluation and analysis of the pilot implementations of the student curriculum

Reflections from the Medical Education Forum 2021

The 2nd Medical Education Forum (MEF), hosted from 4 to 6 May 2021 as a virtual meeting, was an opportunity to review and summarise current research outcomes in medical education. It was organised by Jagiellonian University Medical College, McMaster University and the Polish Institute for Evidence-Based Medicine. The live event featured five speakers from the DID-ACT project (Samuel Edelbring, Inga Hege, Sören Huwendiek, Małgorzata Sudacka and me) and attracted 110 participants from 24 countries, most of them from Canada, Poland and Ukraine.

During the MEF conference, I took on the task of reviewing the most recent systematic reviews of virtual patient effectiveness. Such a review of reviews is called an umbrella review. The effectiveness of virtual patients is an important topic for the DID-ACT project because we use this type of educational resource as a vehicle to deliver interactive exercises for practising clinical reasoning in the designed DID-ACT curriculum. Seeing how the effectiveness of clinical reasoning outcomes is measured is also important for informing the DID-ACT pilot evaluations.

I identified five systematic reviews of virtual patient effectiveness published in the past three years. These included a systematic review I completed with my colleagues from the Digital Health Education collaboration in 2019. For me personally, preparing the MEF presentation was an interesting exercise that gave me the opportunity to see how well the results of our earlier review align with the outcomes reported in reviews published afterwards. Such a check makes sense because systematic reviews often have unique scopes defined by their inclusion criteria, data extraction and synthesis methods, and may therefore differ in their conclusions.

The reviews published after 2019 were carried out by international teams from France, New Zealand, South Korea, the UK and the USA. Only one, like ours, included all health professions; the remaining reviews focused on particular professions: nursing, medicine and pharmacy. The studies either included all possible outcomes or selected a particular skill. It was interesting to see that the skill most often singled out in recent syntheses was communication skills. The conclusions of the studies were consistent across the different professions and topics: with hardly any exceptions, they reported benefits from applying virtual patients in education. As Lee and colleagues (Med Educ, 54(9), 2020) concluded in their systematic review, the effectiveness of virtual patients can be improved further when their use is preceded or followed by reflection exercises and feedback provided by human teachers. The technological features of virtual patient platforms were less important.

You can learn more about the results of my umbrella review, the presentations of the other DID-ACT project speakers and the follow-up Question & Answer sessions in the video recording.

Ocean waves, footprints and dashboards: the selection of DID-ACT evaluation and learning analytics tools

Every project needs evaluation. Even though it might sometimes be considered cumbersome or stressful for those whose work is evaluated, it is important that the merits and limitations of any given project are clearly laid out. A well-conducted evaluation ideally goes beyond highlighting the particular achievements of a program by delivering ideas for improvement. Furthermore, it justifies the need to continue the efforts surrounding the project and its aims. It is commonplace for evaluation and feedback to be employed during the last stage of the curriculum development cycle. However, there are good reasons to start evaluation as early as possible in program development. The benefits are many, the central one being that evaluating early on ensures that the chosen tools align with the planned outcome(s).

In terms of evaluation for the DID-ACT project, the Evaluation Work Package is a shared effort of the consortium partners. Jagiellonian University in Kraków, Poland, is responsible for its coordination. Its first year of activities finished in December 2020 with a report published on the project’s website. During the first half of the year, the activities focused on gauging the needs of potential users through a web survey that collected their specific expectations. From the data gathered, the DID-ACT project’s set of learning objectives and curricular framework were developed by another working group of the project. The goal of the second half of the year for the evaluation work package was to propose a set of evaluation and learning analytics tools. Combined, these measure the planned outcomes of the DID-ACT student curriculum and train-the-trainer course.

At the time of commencing our evaluation work, the specific set of learning objectives had not yet been set. We therefore first reviewed the literature in search of existing tools that measure participant satisfaction and the perceived effectiveness of clinical reasoning training. This gave us the opportunity to reuse the outcomes of former projects, which we see as an important point demonstrating the continuity and sustainability of research in this area. Our literature review identified a number of studies in which evaluation questionnaires for clinical reasoning learning activities were presented. Based on an analysis of the questions that aimed to measure student satisfaction, we were able to identify seven common themes of interest: course organisation, clear expectations, relevance, quality of group work, feedback, teaching competencies, and support for self-directed learning. We collected plenty of exemplar questions for each of the themes. Additionally, for the self-assessment questions, we assigned the gathered items to the DID-ACT learning goals and objectives.

Surprisingly, our literature review did not yield any evaluation questions specific to clinical reasoning that could be used for our train-the-trainer courses. We resolved this challenge by broadening our scope: we adapted our search to include faculty development evaluation questionnaires that focus on honing teaching skills in general (not necessarily exclusively clinical reasoning). One evaluation tool from this group caught our attention in particular: the Stanford Faculty Development Program Model (SFDP-26). We value its wide dissemination in many domains and its clearly formulated set of 26 questions grouped into seven dimensions. An additional strength is that it has already been translated and validated in languages other than English, for example, in German.

An interesting discovery for us was a tool that measures the impact of curricular innovation following the Concerns-Based Adoption Model (CBAM). This tool, developed at the University of Texas, proposes an imaginative way of measuring the progress of curriculum innovation. It does so by identifying the types of concerns teachers voice regarding new topics. These concerns can range from disinterest, through concerns about the efficiency of teaching the new element, to ideas for extending the innovation.

The CBAM model is based on the assumption that certain types of statements are characteristic of particular developmental stages when introducing an innovation into a curriculum. The developmental stage reached is captured effectively by the Stages of Concern (SoC) questionnaire. When data are collected from a particular school, the outcome is a curve that displays the intensity of concerns across the seven consecutive stages of innovation. The value this brings is that comparing the curves across several institutions can help us visualise the progress the curriculum implementation is making. We find this visualisation akin to following how waves traverse the ocean.
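To make the idea of comparing such curves more concrete, below is a minimal plotting sketch in Python. The stage names follow the standard CBAM labels, while the intensity scores and institution names are purely illustrative assumptions, not project data.

```python
# Minimal sketch (hypothetical data): comparing Stages of Concern (SoC)
# profiles across institutions, as suggested by the CBAM model.
import matplotlib.pyplot as plt

# The seven consecutive SoC stages used in CBAM.
stages = ["Unconcerned", "Informational", "Personal", "Management",
          "Consequence", "Collaboration", "Refocusing"]

# Illustrative relative intensity scores (0-100) per stage; not real DID-ACT data.
profiles = {
    "Institution A (early adoption)": [85, 75, 70, 40, 25, 20, 15],
    "Institution B (established)":    [20, 30, 35, 45, 70, 75, 60],
}

# One curve per institution; comparing the curves visualises implementation progress.
for label, scores in profiles.items():
    plt.plot(stages, scores, marker="o", label=label)

plt.ylabel("Relative intensity of concern")
plt.title("Hypothetical SoC profiles (CBAM)")
plt.xticks(rotation=30, ha="right")
plt.legend()
plt.tight_layout()
plt.show()
```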

As the DID-ACT curriculum is planned to be a blended model of face-to-face and e-learning activities, we intend to use learning analytics in our curriculum evaluation. More specifically, we will capture, process and interpret the digital footprints learners leave while using electronic learning environments. It is of course pivotal to be transparent about the purpose and to obtain consent regarding the collection of educational data. Once consent is received, computational power can be harnessed to optimise educational processes to the benefit of both learners and teachers. From the perspective of the curriculum developer, it is particularly important to know which activities attracted the most versus the least engagement from students.

This information, when triangulated with other evaluation data sources, e.g. questionnaires or interviews, allows us to identify elements of the curriculum that are particularly challenging, attractive, or in need of promotion or better alignment. For our purposes, learning analytics dashboards work a bit like a car’s dashboard, where the fuel gauge, odometer and speedometer display key information; for DID-ACT, analytics present a clear range of visualised progress indicators in one place.

We selected and then analysed two electronic tools that will be used to implement the technical side of the DID-ACT curriculum: “Moodle” (a learning management system) and “Casus” (a virtual patient platform). Our goal was to identify the relevant learner data that could be collected and to determine how these data are visualised when following learner progress and trajectories. To systematise the process, we produced a table we dubbed the ‘Learning Analytic Matrix’, which shows how engagement in attaining individual DID-ACT learning goals and objectives is captured by these electronic tools. Logs of activities such as the opening of learning resources, time spent on activities, the number and quality of posts in discussion boards, or success rates in formative questions will enable us to map what is happening in the learning units developed by the DID-ACT consortium.
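As a rough illustration of how such log entries could feed the matrix, here is a minimal Python sketch that counts interactions per learning objective from an exported activity log. The file name, column names and the activity-to-objective mapping are assumptions made for the example, not the actual DID-ACT data model or Moodle export format.

```python
# Minimal sketch (assumed inputs): a CSV export of activity logs with the columns
# user, activity, timestamp, and a hand-made mapping from logged activities to
# DID-ACT learning objectives. Both are illustrative, not the project's real files.
import csv
from collections import defaultdict

# Hypothetical mapping: which logged activity contributes to which objective.
ACTIVITY_TO_OBJECTIVE = {
    "VP case: chest pain": "Gather and interpret key findings",
    "Forum: summary statements": "Create a concise summary statement",
    "Quiz: differential diagnosis": "Generate a differential diagnosis",
}

def engagement_per_objective(log_path: str) -> dict:
    """Count logged interactions per learning objective."""
    counts = defaultdict(int)
    with open(log_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # expected columns: user, activity, timestamp
            objective = ACTIVITY_TO_OBJECTIVE.get(row["activity"])
            if objective:
                counts[objective] += 1
    return dict(counts)

if __name__ == "__main__":
    # Hypothetical export file name for illustration only.
    for objective, n in engagement_per_objective("moodle_log_export.csv").items():
        print(f"{objective}: {n} interactions")
```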

This is augmented by recording traces of learning events which are characteristic of the clinical reasoning process. Examples of such events include the success rate in making the right diagnosis in virtual patient cases, student use of formal medical terminology in summary statements, or making reasonable connections in clinical reasoning concept maps. We also inspected the ways the captured data are presented graphically, identifying for the moment a predominance of tabular views. We foresee the possibility of extending the learning analytics functionality available in the electronic learning environment by introducing more diverse ways of visualising evaluation results in learning clinical reasoning.

The collection and interpretation of all that data related to the enactment of the DID-ACT curriculum using the described tools is something we are looking forward to pursuing in the two upcoming years of the DID-ACT project. 

DID-ACT at AMEE 2020

The Association for Medical Education in Europe (AMEE) is one of the biggest organisations focused on excellence and research in health professions education. It has been organising annual conferences for scholars engaged in this field for close to 50 years. Interest in these meetings is rising and reached around 4000 participants last year. The DID-ACT consortium decided to disseminate its outcomes at AMEE by submitting an abstract reporting the results of the project’s needs analysis.

This year’s conference was originally planned to be held in Glasgow, United Kingdom; however, changes had to be made due to the ongoing COVID-19 pandemic. AMEE’s traditionally face-to-face format was adapted into a virtual conference that rose to the challenge and exceeded expectations. Rather than relying primarily on traditional audio-video teleconferencing tools, AMEE took on the challenge of hosting the conference in a virtual world. The virtual venue encompassed a group of interconnected locations with different purposes: a palm-tree-lined lobby with information booths led to several lecture theatres, exhibition halls, networking areas and poster rooms. Prior to joining the conference, participants designed their own avatars and then navigated them through the locations, meeting the avatars of other participants on the way. These meetings enabled interaction either by typing in a chat window or through audio conversation. Participation in the parallel conference sessions could be interactive as well, enabling the audience to applaud, raise hands, and talk to their seated neighbours.

The DID-ACT submission was accepted for AMEE 2020 as a virtual poster. This presentation format involves constructing a digital “stack” of multimedia resources that can be viewed either in a smartphone app or in a web browser. The content is organised in nested sections depicted as rectangular tiles, each containing resources such as text entries, images and web links. Each conference presenter was encouraged to incorporate into the poster a short video, a voice-over PowerPoint presentation giving an overview of the most important content. In addition, presenters were required to prepare a one-page digital print-out of the poster including a QR code for easy access by smartphones from the real world. The DID-ACT poster was prepared by Andrzej Kononowicz, Małgorzata Sudacka, Felicitas L. Wagner, Samuel Edelbring, Inga Hege and Sören Huwendiek on behalf of the consortium. In the image below we present the poster print-out. The content is available via this link: https://api.ltb.io/show/BWPMF.

The virtual conference was held from 7 to 9 September. Several DID-ACT members participated in the conference events and networked with fellow researchers. In particular, there were several conference presentations around the topic of clinical reasoning. At the end of the conference, the participants from the DID-ACT project decided to gather virtually in one of the exhibition halls for a virtual group selfie:

Standing from the left are: Desiree Wiegleb Edstöm, Živa Ledinek, Małgorzata Sudacka, Maria Elvén, Andrzej Kononowicz and Inga Hege

The contributions presented at the virtual AMEE conference will remain available at least throughout the next year, enabling playback of the presentations and sustainable project dissemination. Participation in the conference was a memorable event, impressive in its innovation and showing how far the virtualisation of education and research can now go. Despite the many benefits of the virtual conference, and thankful that it could be held in these troubled pandemic times, we hope we will be able to meet at the face-to-face AMEE 2021 conference next year in real-world Glasgow to present the community with more news about the DID-ACT project.

How to teach synchronously in a virtual setting

  • You need a reliable camera, microphone, and virtual platform, and you should be familiar with its features, such as the whiteboard, chat, polling, breakout rooms, etc.
  • At the beginning, establish communication rules, e.g. whether participants should raise their (virtual) hand, use the chat, and/or just speak. We also recommend asking participants to turn on their cameras.
  • For small group work, breakout rooms work very well; just be clear about the tasks the groups should work on before dividing participants into groups.
  • For collaboration, integrated virtual whiteboards or other platforms such as Padlet are very useful. Just make sure prior to the session that you have everything set up and the links at hand, e.g. to post them in the chat.
  • Allow a bit more time for starting the session and for group work, as there might be participants who are not familiar with the platform, or technical problems might occur.

How to motivate unprepared participants

  • Make clear that the asynchronous assignments are a core part of the course and that their content will not be repeated. Even if it is difficult, stick to this when starting the synchronous teaching session.
  • If you expect unprepared participants, you can start the session with a student-centered group exercise mixing prepared and unprepared students to increase peer pressure and make them realize that being unprepared does not feel good.
  • Use the introductory or closing quizzes / tests so that participants can self-assess whether they have the required knowledge and you as a facilitator can see the level of knowledge and preparation of your participants.


How to involve participants with different levels of experience

  • To account for such different levels, we recommend making use of the asynchronous preparatory phases which also include introductory quizzes in which participants can self-assess their prior knowledge and you as a facilitator can assess the differences within your group. Participants with less prior experience can also be guided to additional preparatory resources.
  • Encourage participants to work in pairs or small groups when preparing so that they can help and learn from each other. You could even facilitate this by dividing them into groups with different levels of experience.
  • Similarly, during the synchronous phases, we recommend forming groups with participants of different levels of experience and emphasizing the peer support aspects of such group activities.
  • We also recommend starting with rather small groups and allowing more time than stated in the course outlines if you expect a heterogeneous level of experience. This way you can better manage this challenge.
  • Encourage your participants to ask questions, emphasizing that nobody knows everything and that it is important for learning to ask questions.  
  • Especially in the train-the-trainer course you might have to deal with over-confident participants who, especially in an interprofessional setting, can dominate the group. This is a complex cultural challenge, but you could try to establish (and follow) communication rules at the beginning of a session.

How to address potential overlaps or redundancies

  • Identify what is already included and what is missing in your curriculum related to clinical reasoning outcomes and compare it to the DID-ACT blueprint. Prioritize learning outcomes that are not yet covered but regarded as important.
  • Identify activities, resources, or teaching sessions with similar learning outcomes that might be in need of change anyway, for example because of low evaluation results or because teachers or students struggle with them. These could be suitable places to add DID-ACT activities or to replace existing parts with them.
  • Ask teachers and students about overlaps and gaps they see in their teaching / learning of clinical reasoning and where they struggle. This could also be done in a reflection round after related teaching activities in the curriculum.
  • Although a longitudinal integration is ideally the aim, we recommend starting small with a pilot implementation to gain experience and develop a showcase.

How to teach in an interprofessional setting

  • Allow for enough time prior to the teaching for the organization and motivation / encouragement of stakeholders and participants.
  • Allow for enough time and guidance during the course so that the participants from the different professions can get to know each other and their professions and discuss their different perspectives. This might mean that you need to calculate some extra time in addition to the suggested duration of the learning unit.
  • There may be a different understanding of clinical reasoning in the different health professions, so we recommend making participants aware of this. You could for example use and adapt activities from the learning units on the health profession roles to facilitate this.
  • Courses in an interprofessional setting should not come too early in the curriculum (not before professions have formed their own professional identity - however, this also depends on the aim of the course). 
  • Make sure you have enough participants from different professions. If possible, the facilitator could divide the participants into smaller groups with an equal distribution of professions.
  • Similarly, you need an equal distribution of facilitators from the different professions.
  • Develop customized learning materials that take the different professions into account. If needed, you can adapt the material and activities provided in the DID-ACT curriculum.

Further recommended reading:

van Diggele, C., Roberts, C., Burgess, A. et al. Interprofessional education: tips for design and implementation. BMC Med Educ 20, 455 (2020). (Link)
