After a long wait due to the pandemic, the DID-ACT project team with partners and associate partners had the opportunity to once again meet face-to-face. From 22 to 23 September, teammates from Slovenia, Malta, Germany, and Poland travelled to Bern, Switzerland. Regrettably, due to travel restrictions, the Örebro team members, as well as Steve Durning from the USA, could not attend physically. Despite this limitation, they were fully present virtually alongside other associate partners. Thanks to the fantastic technical support from Bern University, all remote partners could be connected to the meeting and were present on a separate screen in the room. The audio and video quality was very good, and synchronous discussion was possible.
Objectives of Meeting in Bern
The main objectives of the meeting were many and went beyond catching up on the status of the project. We spent significant time discussing the evaluation and feedback results from the interim report, immediate and longer-term next steps, and initiating the sustainability and integration guideline deliverables.
Interim Report for the DID-ACT Project
The interim report feedback was quite positive. However, there is also some room for improvement. The improvements highlighted include the documentation and visibility of project outcomes concerning quality indicators, the document structure, better connections between the related work packages (WP) 5, 6 and 8, and a stronger connection to our central work packages, WP3 and WP4, in which the learning units (LUs) are created.
Our next challenge is the upcoming pilot implementations at the various institutions. With the pilots starting in September 2021, we still have some learning units in the realm of clinical reasoning left to develop. The curriculum development workload continues at high speed until the end of the year. Our previous processes, including our process for reviewing learning units, will be fine-tuned for a more practical and effective approach. These changes were discussed during the meeting in Bern and will be further highlighted in an upcoming blog post.
Sustainability & Dissemination in a Clinical Reasoning Curriculum
While the topics of dissemination and sustainability have been ongoing throughout the project, we took our face-to-face meeting as an opportunity to cement next steps. We feel that the sustainability concepts resulting from the pilots will be very valuable. There will also be external feedback included. We will create a minimal plan for cost-covering in the first years after the project ends based on the many ideas that surfaced in the meeting. Additionally, we will focus on integration of project results into partner curricula and inclusion of associate partners to also recruit people and keep the project content alive.
In addition to the very fruitful and motivating discussions held during the day, the evening was equally well-spent. We had a team lunch followed up by some ice cream, as well as dinner and a walk around the ‘old town’.
It was a great pleasure to meet at least the vast majority of the team face-to-face. We plan to have our next face-to-face meeting in Maribor early next year. Following that, we hope that our rescheduled meeting in Krakow can be held in May 2022. We are hopeful that the COVID-19 situation will allow these meetings. This face-to-face time is a great experience for the development of the project as well as for our development as colleagues.
Thanks to our host Sören Huwendiek for organizing the meeting, and to all partners and associate partners who contributed to this project meeting.
The AMEE 2021 Conference was held as a virtual conference from 27-30th August 2021. The conference attracted thousands of participants from around the world.
The DID-ACT project was represented by two oral presentations and active participation from several project members.
Samuel Edelbring and colleagues presented and discussed our curriculum framework in a presentation called “Development of our framework for a structured curriculum: Outcomes from a multi professional European project”.
Key points from the presentation were:
An overarching model for curriculum development (Kern 2016)
Presentation of our 35 learning objectives in 11 themes and 4 levels
Characteristics of the what and the how of our clinical reasoning curriculum
Some practical examples of learning activity designs
Małgorzata Sudacka from Jagiellonian University presented an e-poster about the complexity and diversity of barriers hindering the introduction of clinical reasoning into health professions curricula – results of the interprofessional European DID-ACT project.
Inga Hege and colleagues presented and discussed “Differences in clinical reasoning between female and male medical students in a virtual patient environment”. They found that female students created more elaborate concept maps than the male students. They were also more likely to complete the VPs. However, no differences were found in diagnostic accuracy.
Time has flown by quickly since the DID-ACT project began in January 2020. The project kicked off with an analysis of specific learner and educator needs for the development of a curriculum. From this structured needs analysis we derived a set of learning goals and objectives for what a clinical reasoning curriculum should cover. Previous group work demonstrated that explicit clinical reasoning curricula are needed in medical education, but few health care institutions currently teach clinical reasoning explicitly. The project was therefore a welcome stepping stone towards developing the needed curricula. A major effort of our project is to incorporate both the needs identified through the survey conducted prior to the project and the in-depth needs for a clinical reasoning curriculum identified through the needs analysis within the DID-ACT project.
The year 2021 marked an important step in the development of our clinical reasoning curriculum: Initiating the development of our first learning units. The learning units are the building blocks for our curriculum. The project intends to build 40 learning units for students and educators in total that educators can use, according to their needs, to implement either the whole curriculum or parts of it in their home institutions. The learning units focus on aspects such as theories of clinical reasoning, collaboration and interprofessional learning, or errors and biases in clinical reasoning (see Deliverable 2.1).
Our development groups spent and continue to spend time on developing and refining the learning units for both applicability and adaptability so that educators can use the teaching content to its fullest potential. The learning units also include specific teaching methods and thus can be adapted to a particular institution’s framework. Reviewing the learning units is an integral part of this process. Upon initial completion, each learning unit is further refined through a collaborative peer review by multiple health professionals, learning designers, and other educational experts, conducted asynchronously and followed by a synchronous session during a DID-ACT team meeting. Once the review and the subsequent revision are done, the learning units are implemented on our chosen learning management system.
The learning units are publicly available with your institutional credentials or after registering with any email address:
The 2nd Medical Education Forum (MEF) hosted from 4 to 6 May 2021 as a virtual meeting was an opportunity to review and summarise current research outcomes in medical education. It was organised by Jagiellonian University Medical College, McMaster University and Polish Institute for Evidence-Based Medicine. The live event had five speakers from the DID-ACT project (Samuel Edelbring, Inga Hege, Sören Huwendiek, Małgorzata Sudacka & me) and had 110 participants from 24 countries, most of them from Canada, Poland and Ukraine.
During the MEF conference, I took on the task of reviewing the most recent systematic reviews of virtual patient effectiveness. A review of reviews is called an umbrella review. The effectiveness of virtual patients is an important topic for the DID-ACT project because we use this type of educational resource as a vehicle to deliver interactive exercises for practicing clinical reasoning in the designed DID-ACT curriculum. Seeing how the effectiveness of clinical reasoning outcomes is measured is also important for informing the DID-ACT project pilot evaluations.
I identified five systematic reviews of virtual patient effectiveness published in the last three years. These included a systematic review I completed with my colleagues from the Digital Health Education collaboration in 2019. For me personally, preparing the MEF presentation was an interesting exercise that gave me the opportunity to see how well the results of our former review align with the outcomes reported in reviews published afterwards. Such a check makes sense because systematic reviews often have unique scopes, defined by their inclusion criteria and their data extraction and synthesis methods, and may therefore differ.
The reviews published after 2019 were carried out by international teams from France, New Zealand, South Korea, the UK and the USA. Only one, like ours, included all health professions; the remaining reviews focused on particular professions: nursing, medicine, pharmacy. The studies either included all possible outcomes or selected a particular skill. It was interesting to see that the skill most often singled out in recent syntheses was communication. The conclusions of the studies were consistent across the different professions and topics: with hardly any exceptions, they reported benefits of using virtual patients in education. As Lee and colleagues (Med Educ, 54(9), 2020) concluded in their systematic review, the effectiveness of virtual patients can be further improved when their use is preceded or followed by reflection exercises and feedback provided by human teachers. The technological features of virtual patient platforms were less important.
You may learn more about the results of my umbrella review, the presentations of the other DID-ACT project speakers, and the follow-up question-and-answer sessions in the video recordings.
In this blog post we would like to point to another Erasmus+ funded project, “iCoViP” – the International Collection of Virtual Patients. This strategic partnership, with participants from Poland, Germany, France, Spain, and Portugal, aims to create a well-designed, high-quality collection of virtual patients for training clinical reasoning. Unlike DID-ACT, iCoViP focuses specifically on training medical students by providing opportunities to identify symptoms and findings, develop differential diagnoses, document tests and treatment options, and decide on the final diagnosis.
The project started in April 2021 and continues until March 2023. As a first step, the consortium is developing a blueprint that describes the virtual patients based on key symptoms, final diagnoses, and (virtual) patient-related data such as age, sexual orientation, disability, and profession. This approach ensures that the collection is a realistic representation of a real-world patient collective.
More information about the project can be found at icovip.eu
For the development of DID-ACT’s online clinical reasoning curriculum we will be following the six-step Kern cycle for curriculum development. Kicking off our third and fourth work packages in January 2021 signalled the start of the detailed planning and development of learning units. As of now, we have started with four units, but based on our curricular framework, we will develop a total of 40 learning units. These units will be aligned with a collection of case scenarios and virtual patients. Together, these will allow for interprofessional and deliberate practice of clinical reasoning.
We started this process by providing an exemplary learning unit (“What is clinical reasoning – an introduction”) and a template for describing a learning unit. We then divided into smaller teams to develop the first four learning units on different clinical reasoning-related theories in parallel. Each team includes partners from different health professions across Europe. Each team has varying levels of expertise, ranging from students to experienced clinicians or educators. The diversity of knowledge and experience are key elements for developing a clinical reasoning curriculum that reflects the various needs of health professions across Europe and beyond.
Upon completion of each learning unit, each unit will be reviewed within our team and by associate partners. The units will then be revised accordingly and implemented into Moodle, our learning management system, using available OERs. We will create new resources ourselves, if needed. After a final review, all learning units will be made publicly available to be freely used by students and educators.
We will pilot selected learning units from the student curriculum and the train-the-trainer course within our partner institutions during the summer and fall 2021. The evaluation results of course participants and facilitators will be the basis for further refinements of our clinical reasoning curriculum. For that purpose, we aim to include at least 500 students and 50 educators.
At the end of December 2020, the DID-ACT project consortium welcomed two things: the holidays and the successful completion of Work Package 2. This post aims to provide an update on what that entailed, what we completed, as well as provide a short overview of what we are going to be developing in Work Package 3. To begin, we will provide a brief overview of what we learned using the age-old rhetorical question: “How do you eat an elephant?” To which the answer is, “not in one bite”.
This rhetorical question is often used to illustrate that large and complex challenges are overcome by dividing them into smaller chunks: broken down into bite-sized pieces, they become easier to manage. In the case of the DID-ACT project and beyond, every clinician, educator or researcher who has tried to describe the nature of teaching clinical reasoning has realized this challenge. As a team, we most definitely learned this throughout Work Package 2, as we represent a collection of diverse professionals with the same ultimate goal but different ideas about how to get there.
When broken down further, we explored and learned how teaching clinical reasoning is an inherently multifaceted challenge. One facet, for example, is the complexity of a clinical situation; a second is the need to grasp the nature of the varied competencies required to address the situation at hand; and a third is how to support the learning of these competencies effectively.
Developing a clinical reasoning framework
Last fall the DID-ACT consortium developed a clinical reasoning curriculum framework that included clinical reasoning quality criteria. In addition to the above challenges, we emphasized having an interprofessional clinical reasoning curriculum. Our interprofessional framework was conceived with the input from different nations and educational cultures, all conducted online due to the current pandemic. So – similar to a complex clinical situation, we faced a plethora of challenges when producing and describing a clinical reasoning framework. This work led us to the development of two curricula: one directed to health professions students and one for teachers.
What makes a good clinical reasoning curriculum?
When we zoomed out to get a clear idea of the big picture, we noted a few crucial pedagogical aspects: a strong focus on student-centredness, a perspective in which the student takes responsibility for their own learning process, and a strong connection to relevant clinical situations, meaning that knowledge and competencies are applied in context. We also decided that the philosophy of “constructive alignment” would guide the design of our clinical reasoning learning units. In practice, this means that the intended learning outcomes direct the choices made when designing learning activities, creating harmony between the clinical reasoning learning activities and how they are assessed. The intended learning outcomes thus hold a central position when designing clinical reasoning learning activities, assessments, and learning units overall. That is why we structured our interprofessional clinical reasoning framework according to roughly 50 learning objectives agreed in an interdisciplinary consensus process.
DID-ACT’s Health Professions Education Framework
Did we eat the metaphorical elephant in this project phase? Yes!
By using our collective knowledge and skills, dividing the bigger task into parts, and iteratively working in small and larger teams, we put the parts back together to form a much clearer picture. Building on Work Package 1, we could launch from a framework grounded in an interdisciplinary needs analysis directed towards a breadth of European health professions schools. Taking this learning into WP2, our work entailed collecting and evaluating a large number of open learning resources for clinical reasoning based on our desired learning outcomes. It was essential that these learning resources were accessible and of strong quality for our online clinical reasoning curriculum. By tying our learning objectives, outcomes, assessment ideas, and open resources together, we created a well-rounded, interprofessional framework and the beginnings of an actual online clinical reasoning course.
Every project needs evaluation. Even though it might sometimes be considered cumbersome or stressful for those whose work is evaluated, it is important that the merits and limitations of any given project are clearly laid out. A well-conducted evaluation ideally goes beyond highlighting the particular achievements of a program by delivering ideas for improvement. Furthermore, it justifies the need to continue the efforts surrounding the project and its aims. It is commonplace for evaluation and feedback to be employed during the last stage of the curriculum development cycle. However, it is well-founded that evaluation should be initiated as early as possible in program development. The benefits are many, the central one being that evaluating early helps ensure that the chosen tools align with the planned outcomes.
In terms of evaluation for the DID-ACT project, the Evaluation Work Package is a shared effort of the consortium partners. Jagiellonian University in Kraków, Poland, is responsible for its coordination. Its first year of activities finished in December 2020 with a report published on the project’s website. During the first half of the year, the activities were focused on gauging the needs of potential users by developing a web survey to collect the specific expectations. From the data gathered, the DID-ACT project’s set of learning objectives and curricular framework were developed by another working group of the project. The goal of the second half of the year in terms of the evaluation work package was to propose a set of evaluation and learning analytic tools. Combined, these measure the planned outcome of the DID-ACT student curriculum and train-the-trainer course.
At the time we commenced our evaluation work, the specific set of learning objectives had not yet been set. Thus, we first reviewed the literature in search of existing tools that measure participant satisfaction and the perceived effectiveness of clinical reasoning training. This gave us the productive advantage and opportunity to reuse the outcomes of former projects, which we see as an important point demonstrating the continuity and sustainability of research in this area. Our literature review identified a number of studies presenting evaluation questionnaires for clinical reasoning learning activities. Based on an analysis of the questions aimed at measuring student satisfaction, we identified seven common themes of interest: course organisation, clear expectations, relevance, quality of group work, feedback, teaching competencies, and support for self-directed learning. We collected plenty of exemplar questions for each theme. Additionally, for the self-assessment questions, we assigned the gathered items to the DID-ACT learning goals and objectives.
Surprisingly our literature review did not yield any evaluation questions specific to clinical reasoning that could be used for our train-the-trainer courses. We resolved this challenge by broadening our goal. We adapted our search to include faculty development evaluation questionnaires that focused on honing teaching skills in general (not necessarily exclusively clinical reasoning). There was one evaluation tool from this group that caught our attention in particular: the Stanford Faculty Development Program Model (SFDP-26). We value its wide dissemination in many domains and clearly formulated set of 26 questions grouped in seven dimensions. An additional strength is that it has already been translated and validated in languages other than English, for example, in German.
An interesting discovery for us was a tool that measures the impact of curricular innovation following the Concerns-Based Adoption Model (CBAM). This tool, developed at the University of Texas, proposes an imaginative way of measuring the progress of curriculum innovation. It does so by identifying the types of concerns teachers voice regarding new topics. These concerns can range from disinterest, through concerns about how efficiently the element is taught, to ideas for extending the innovation.
The CBAM model is based on the assumption that certain types of statements are characteristic of particular developmental stages when introducing an innovation into a curriculum. The developmental stage of introducing the innovation is captured effectively by the Stages of Concern (SoC) questionnaire. When data are collected from a particular school, the outcome is a curve that displays the intensity of concerns across the seven consecutive stages of innovation. The value this brings is that comparing the curves across several institutions can help us visualise the progress of curriculum implementation. We find this visualisation akin to following how waves traverse the ocean.
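To make the idea of a concern curve concrete, here is a minimal Python sketch. Note that it is only an illustration under simplifying assumptions: the stage names and the 0–7 item scale follow the CBAM literature, but the published SoC instrument converts raw scores to percentile norms using its scoring tables, which this sketch replaces with plain raw means.

```python
# Hypothetical, simplified SoC scoring: each of the seven stages has a few
# questionnaire items scored 0-7; here the stage "intensity" is the raw mean
# of that stage's items (the real instrument uses percentile norm tables).
STAGES = [
    "unconcerned", "informational", "personal", "management",
    "consequence", "collaboration", "refocusing",
]

def soc_curve(responses):
    """Return one intensity value per stage, in stage order.

    responses: dict mapping stage name -> list of item scores (0-7).
    """
    return [sum(responses[stage]) / len(responses[stage]) for stage in STAGES]

# Example: a respondent with mostly informational/personal concerns,
# a profile typical of the early phase of adopting an innovation.
respondent = {
    "unconcerned": [5, 6], "informational": [6, 7], "personal": [5, 6],
    "management": [2, 3], "consequence": [1, 2], "collaboration": [1, 1],
    "refocusing": [0, 1],
}
print(soc_curve(respondent))  # prints [5.5, 6.5, 5.5, 2.5, 1.5, 1.0, 0.5]
```

Averaging such curves per institution, and plotting them side by side, gives exactly the kind of comparison across schools described above.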
As the DID-ACT curriculum is planned to be a blended model of face-to-face and e-learning activities, we intend to use learning analytics in our curriculum evaluation. More specifically we will capture, process and interpret the digital footprints learners leave while using electronic learning environments. It is of course pivotal to be transparent about the purpose and to obtain consent regarding the collection of educational data. Upon receiving consent, computational power can be harnessed to optimise educational processes to the benefit of both learners and teachers. From the perspective of the curriculum developer, it is particularly important to know which activities attracted the most versus least engagement from students.
This information, when triangulated with other evaluation data sources, e.g. questionnaires or interviews, allows us to identify elements of the curriculum that are particularly challenging, attractive, or in need of promotion or better alignment. For our purposes, learning analytics dashboards work much like a car’s dashboard, where the fuel gauge, odometer and speedometer display key information at a glance; for DID-ACT, the analytics present a clear range of visualised progress indicators in one place.
We selected and then analysed two electronic tools that will be used to implement the technical side of the DID-ACT curriculum: “Moodle” (a learning management system) and “Casus” (a virtual patient platform). Our goal was to identify the relevant learner data that could be collected and to determine how they are visualised when following learner progress and trajectories. To systematise the process, we produced a table we dubbed the ‘Learning Analytic Matrix’, which shows how engagement in attaining individual DID-ACT learning goals and objectives is captured by these electronic tools. Logs of activities such as opening learning resources, time spent on activities, the number and quality of posts in discussion boards, or success rates on formative questions will enable us to map what is happening in the learning units developed by the DID-ACT consortium.
This is augmented by recording traces of learning events characteristic of the clinical reasoning process, such as success rates in making the right diagnosis in virtual patient cases, students’ use of formal medical terminology in summary statements, or making reasonable connections in clinical reasoning concept maps. We also inspected how the captured data are presented graphically, identifying a current predominance of tabular views. We foresee the possibility of extending the learning analytic tools available in the electronic learning environment by introducing more diverse ways of visualising evaluation results in learning clinical reasoning.
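As a simplified illustration of the Learning Analytic Matrix idea, the Python sketch below aggregates raw log rows into per-objective engagement counts. The event names, objective labels, and log-row format are hypothetical placeholders, not the actual Moodle or Casus log schema.

```python
from collections import defaultdict

# Hypothetical mapping from log event types to learning objectives; the real
# Learning Analytic Matrix assigns Moodle/Casus events to DID-ACT objectives.
EVENT_TO_OBJECTIVE = {
    "resource_viewed": "theories_of_clinical_reasoning",
    "forum_post_created": "collaboration",
    "quiz_attempt_passed": "errors_and_biases",
}

def engagement_matrix(log_events):
    """Count events per (learning objective, student) pair.

    log_events: iterable of dicts with at least 'user' and 'event' keys.
    Unmapped event types are ignored.
    """
    matrix = defaultdict(lambda: defaultdict(int))
    for row in log_events:
        objective = EVENT_TO_OBJECTIVE.get(row["event"])
        if objective is not None:
            matrix[objective][row["user"]] += 1
    return {obj: dict(users) for obj, users in matrix.items()}

# Example log extract (fabricated for illustration).
logs = [
    {"user": "anna", "event": "resource_viewed"},
    {"user": "anna", "event": "quiz_attempt_passed"},
    {"user": "ben", "event": "resource_viewed"},
]
print(engagement_matrix(logs))
```

A matrix like this, one cell per objective and learner, is precisely what makes it easy to spot which activities attract the most versus the least engagement.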
The collection and interpretation of all that data related to the enactment of the DID-ACT curriculum using the described tools is something we are looking forward to pursuing in the two upcoming years of the DID-ACT project.
Generally, clinical reasoning refers to a health professional’s thinking and decision-making process. It guides practical actions, implying a process limited to the cognitive activities of health professionals. More elaborate definitions of clinical reasoning also include concepts such as collaboration and context, implying a broader view of the reasoning process in which the client and situational factors also come into play. The definitions of clinical reasoning are innumerable, and variations within and between professional disciplines are equally many. There is no established single definition of the nature, relevant components, or boundaries of a health professional’s clinical reasoning. The co-existence of multiple definitions leads to substantial variation in clinicians’ views of clinical reasoning, which in turn influences how consistently and uniformly they apply reasoning in practice.
My name is Maria Elvén; I’m a lecturer and researcher in physiotherapy. In my PhD work, focusing on clinical reasoning in physiotherapy, clinical reasoning was studied from a biopsychosocial and person-centred perspective. In this view, clinical reasoning in relation to health and illness depends on biomedical, psychological and social aspects. Accordingly, being in ‘good health’ represents different realities for different individuals. This variation emphasizes the need to grasp each person’s individual perceptions: in the clinical reasoning process we must understand their own definitions of health and their life situation in tandem with their mental and bodily status from a health care perspective. Person-centredness, i.e., that the clinician considers the unique needs and specific health concerns of the person and treats the individual as competent to make decisions about their own care, is a way to empower clients to take an active part in the clinical reasoning process.
Let’s pause here and reflect! Based on your view/definition of clinical reasoning, what is the goal of the clinical reasoning process? A correct diagnostic decision? A well-performed analytical thought process? A well-founded treatment decision? A satisfied clinician? A satisfied client?
If I ask myself these questions, the ultimate goal of effective clinical reasoning is that the client achieves their own goal(s) related to their current health problem. To reach this goal, the clinical reasoning process cannot be confined to the mind of the clinician; the process needs to be articulated and shared with the client. To make full use of the client as an active partner in the clinical process, shared treatment decisions are not sufficient. The client also needs to be supported in their role as an important contributor to the analysis of the problem. This involves selecting and prioritizing among various treatments and management strategies that fit their actual life situation, as well as continuously evaluating their effectiveness.
We have covered a lot of ground surrounding the clinician’s inner process. The next step is to look beyond the cognitive processes of the clinician and to further elaborate on what meaningful client participation and involvement implies in the reasoning process. As suggested in my definition of clinical reasoning in physiotherapy, clinical reasoning is a process performed in partnership between the client and the clinician; ultimately stressing their shared responsibilities and equal values (Elvén, 2019). Performance of such reasoning may need training and competence development for clinicians to be able to support clients in their role as clinical reasoning partners.
That brings us to where we are today and what I am hoping to bring into the DID-ACT project. In the DID-ACT project, the client perspective in clinical reasoning is clearly in focus. This is done by the inclusion of learning objectives related to client participation and shared decision-making. These learning objectives will influence learning activities and assessments in the forthcoming clinical reasoning curriculum development. The aim is that the client-perspective will play an increasingly important role in the learning and teaching of clinical reasoning in this project.
I’m looking forward to contributing to a strengthened role for the client or patient in general health care, and, more specifically, clinical reasoning and I hope you’ll join me!
Elvén, M. (2019) Clinical reasoning focused on clients’ behaviour change in physiotherapy: Development and evaluation of the Reasoning 4 Change instrument. Doctoral dissertation. Mälardalen University. Västerås.
The goal of this deliverable was to provide our curricular framework with teaching/ assessment methods for the student curriculum and the train-the-trainer course.
Having already established our initial needs assessment and the definition of goals and objectives (Deliverable 2.1), we have reached the exciting point of specifying educational strategies in the form of a curriculum framework for clinical reasoning. We followed constructive alignment theory to ensure an optimal alignment of learning objectives, teaching, and assessment, and we employed a theme-based approach. We plan to continue using a blended-learning format to give our learners flexibility while maintaining an optimal match between teaching and assessment.
Blended learning combines online activities, such as virtual patients and interactive videos, with face-to-face methods such as bedside teaching. We aim for our courses to place the learner at the centre, meaning that the student is actively engaged in their learning. In this setup, the teacher’s role is more to support and facilitate learning.
Some of our biggest wins in this work package have been:
Defining 35 general learning objectives in D2.1 and aligning them with 14 themes/categories to describe the DID-ACT student curriculum and the train-the-trainer course.
We have defined four different learner levels: Novice, Intermediate, Advanced, and Teacher.
Our list of suitable learning and assessment methods that align with our previously defined categories.
A clearly defined breakdown of our teaching, learning, and assessment strategies for clinical reasoning.
An overarching curricular outline covering the categories, theories, errors, and aspects of patient participation related to the clinical reasoning process. These outlines include the specific learning objectives, teaching and learning activities, and both summative and formative assessments for our courses.
Our most recent deliverable is a big step, as it establishes the framework for the next stages of our curriculum development process. Our team is both multi-professional and international, thereby reflecting the needs of the different health professions curricula and curricular formats of the partner schools. Due to the current COVID-19 pandemic, we could not discuss the framework in a face-to-face meeting as originally planned. However, we were able to organize the work in small groups across professions and contexts, which worked asynchronously and met online according to their needs. In addition, we held a series of online meetings to discuss specific aspects and make decisions by consensus.
What’s next? Coming later in December 2020 will be our “Collection of available Open Educational Resources (OER)”, “Publication of recommendations for learning objectives of a clinical reasoning curriculum”, and our “Set of evaluation and analysis tools”.
You can keep track of what is upcoming in the project on our Results page, or by clicking here.