Ocean waves, footprints and dashboards: the selection of DID-ACT evaluation and learning analytics tools

by Andrzej Kononowicz

Every project needs evaluation. Even though it might sometimes be considered cumbersome or stressful for those whose work is evaluated, it is important that the merits and limitations of any given project are clearly laid out. A well-conducted evaluation ideally goes beyond highlighting the particular achievements of a program by delivering ideas for improvement. Furthermore, it justifies the need to continue the efforts surrounding the project and its aims. Evaluation and feedback are commonly employed during the last stage of the curriculum development cycle. However, there are good reasons to start evaluation as early as possible in program development. The benefits are many, chief among them that evaluating early on ensures that the chosen tools align with the planned outcomes.

In terms of evaluation for the DID-ACT project, the Evaluation Work Package is a shared effort of the consortium partners, with Jagiellonian University in Kraków, Poland, responsible for its coordination. Its first year of activities finished in December 2020 with a report published on the project’s website. During the first half of the year, the activities focused on gauging the needs of potential users through a web survey collecting their specific expectations. From the data gathered, another working group of the project developed the DID-ACT set of learning objectives and curricular framework. The goal of the second half of the year, in terms of the evaluation work package, was to propose a set of evaluation and learning analytics tools. Combined, these measure the planned outcomes of the DID-ACT student curriculum and train-the-trainer course.

At the time we commenced our evaluation work, the specific set of learning objectives had not yet been set. We therefore first reviewed the literature in search of existing tools that measure participant satisfaction and perceived effectiveness of clinical reasoning training. This gave us the opportunity to reuse the outcomes of former projects, which we see as an important point demonstrating continuity and sustainability of research in this area. Our literature review identified a number of studies presenting evaluation questionnaires for clinical reasoning learning activities. Based on an analysis of the questions aimed at measuring student satisfaction, we identified seven common themes of interest: course organisation, clear expectations, relevance, quality of group work, feedback, teaching competencies, and support for self-directed learning. We collected numerous exemplar questions in each of the themes. Additionally, for the self-assessment questions, we assigned the gathered items to the DID-ACT learning goals and objectives.

Surprisingly, our literature review did not yield any evaluation questions specific to clinical reasoning that could be used for our train-the-trainer courses. We resolved this challenge by broadening our goal: we adapted our search to include faculty development evaluation questionnaires focused on honing teaching skills in general (not necessarily exclusively clinical reasoning). One evaluation tool from this group caught our attention in particular: the Stanford Faculty Development Program questionnaire (SFDP-26). We value its wide dissemination in many domains and its clearly formulated set of 26 questions grouped into seven dimensions. An additional strength is that it has already been translated and validated in languages other than English, for example, German.

An interesting discovery for us was a tool that measures the impact of curricular innovation following the Concerns-Based Adoption Model (CBAM). This tool, developed at the University of Texas, proposes an imaginative way of measuring the progress of curriculum innovation: it identifies the types of concerns teachers voice regarding new topics. These concerns can range from disinterest, through worries about how efficiently the element is being taught, to ideas for expanding the innovation.

The CBAM model is based on the assumption that certain types of statements are characteristic of particular developmental stages when an innovation is introduced into a curriculum. The developmental stage of introducing the innovation is captured effectively by the Stages of Concern (SoC) questionnaire. When data are collected from a particular school, the outcome is a curve displaying the intensity of concerns across the seven consecutive stages of innovation. The value this brings is that comparing the curves across several institutions helps us visualise the progress of curriculum implementation. We find this visualisation akin to following how waves traverse the ocean.
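To make the "wave" idea concrete, here is a minimal plotting sketch in Python. The stage labels follow common CBAM naming, but the two institutional profiles and all intensity scores are invented purely for illustration and do not come from project data.

```python
# A minimal sketch (invented data) of comparing CBAM Stages of Concern
# profiles across institutions; stage labels follow common CBAM naming.
import matplotlib.pyplot as plt

STAGES = ["Unconcerned", "Informational", "Personal", "Management",
          "Consequence", "Collaboration", "Refocusing"]

# Hypothetical relative intensities per stage for two example schools.
profiles = {
    "School A (early in adoption)": [85, 75, 70, 40, 30, 25, 20],
    "School B (further along)":     [30, 35, 40, 55, 70, 65, 50],
}

for school, intensities in profiles.items():
    plt.plot(STAGES, intensities, marker="o", label=school)

plt.ylabel("Relative intensity of concern")
plt.title("Stages of Concern profiles: the 'wave' of curriculum adoption")
plt.xticks(rotation=30, ha="right")
plt.legend()
plt.tight_layout()
plt.show()
```

Plotted side by side, an early-adoption school peaks on the left of the chart while a school further along peaks on the right, which is exactly the travelling-wave picture described above.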

As the DID-ACT curriculum is planned to be a blended model of face-to-face and e-learning activities, we intend to use learning analytics in our curriculum evaluation. More specifically we will capture, process and interpret the digital footprints learners leave while using electronic learning environments. It is of course pivotal to be transparent about the purpose and to obtain consent regarding the collection of educational data. Upon receiving consent, computational power can be harnessed to optimise educational processes to the benefit of both learners and teachers. From the perspective of the curriculum developer, it is particularly important to know which activities attracted the most versus least engagement from students. 

This information, when triangulated with other evaluation data sources, e.g. from questionnaires or interviews, allows us to identify elements of the curriculum that are particularly challenging, attractive or in need of promotion or better alignment. For our purposes, learning analytics dashboards work a bit like a car’s dashboard, where the fuel gauge, odometer and speedometer display key information at a glance; for DID-ACT, analytics present a clear range of visualised progress indicators in one place.
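As a rough illustration of this dashboard idea, the sketch below places two indicators per learning unit side by side: a mean questionnaire rating and an analytics-derived completion rate. The unit names and all numbers are invented for illustration only.

```python
# A minimal sketch (invented data) of a dashboard-style summary that shows
# several progress indicators per learning unit in one place.
units = {
    # unit name: (mean satisfaction rating on a 1-5 scale, completion rate)
    "Theories of clinical reasoning":      (4.2, 0.91),
    "Gathering and interpreting findings": (3.8, 0.76),
    "Shared decision-making":              (4.5, 0.64),
}

print(f"{'Learning unit':<40}{'Rating':>8}{'Completed':>12}")
for name, (rating, completed) in units.items():
    # Flag units that students rate well but rarely finish: candidates for promotion.
    flag = "  <- needs promotion?" if completed < 0.70 else ""
    print(f"{name:<40}{rating:>8.1f}{completed:>12.0%}{flag}")
```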

We selected and then analysed two electronic tools that will be used to implement the technical side of the DID-ACT curriculum: “Moodle” (a learning management system) and “Casus” (a virtual patient platform). Our goal was to identify the relevant learner data that could be collected and to determine how these data are visualised when following learner progress and trajectories. To systematise the process, we produced a table we dubbed the ‘Learning Analytic Matrix’, which shows how engagement in attaining individual DID-ACT learning goals and objectives is captured by these electronic tools. Logs of activities such as opening learning resources, time spent on activities, the number and quality of posts in discussion boards, or success rates in formative questions will enable us to map what is happening in the learning units developed by the DID-ACT consortium.
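As a rough sketch of how such a matrix can be operationalised, the snippet below maps hypothetical learning objectives to the log event types that evidence them and counts engagement per objective from an example activity log. The objective labels, event names and log format are our own inventions and do not reflect the actual Moodle or Casus log schemas.

```python
# A minimal sketch (hypothetical schema) of a 'Learning Analytic Matrix':
# each learning objective is linked to the log event types that evidence it,
# and engagement is counted per objective from an example activity log.
from collections import Counter

MATRIX = {
    "Gather and interpret clinical findings": {"resource_viewed", "vp_case_opened"},
    "Generate differential diagnoses":        {"vp_diagnosis_submitted"},
    "Reflect on reasoning in discussion":     {"forum_post_created"},
}

# Invented log entries: (student id, event type).
log = [
    ("s01", "resource_viewed"),
    ("s01", "vp_case_opened"),
    ("s02", "vp_diagnosis_submitted"),
    ("s01", "forum_post_created"),
    ("s02", "forum_post_created"),
]

engagement = Counter()
for _student, event in log:
    for objective, evidence_events in MATRIX.items():
        if event in evidence_events:
            engagement[objective] += 1

for objective, count in engagement.most_common():
    print(f"{objective}: {count} logged events")
```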

This is augmented by recording traces of learning events that are characteristic of the clinical reasoning process. Such events include success rates in making the right diagnosis in virtual patient cases, students’ use of formal medical terminology in summary statements, or making reasonable connections in clinical reasoning concept maps. We also inspected the ways the captured data are presented graphically, identifying, at the moment, a predominance of tabular views. We foresee the possibility of extending the learning analytics functionality available in the electronic learning environments by introducing more diverse ways of visualising evaluation results in learning clinical reasoning.
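As one example of such a trace, the short sketch below computes the rate of correct final diagnoses per virtual patient case from hypothetical completion records; the record format is invented for illustration and is not the actual Casus export.

```python
# A minimal sketch (invented records) of one clinical reasoning trace:
# the share of correct final diagnoses per virtual patient case.
from collections import defaultdict

# Each record: (virtual patient case, was the final diagnosis correct?).
records = [
    ("Chest pain", True), ("Chest pain", False), ("Chest pain", True),
    ("Dyspnoea", False), ("Dyspnoea", False), ("Dyspnoea", True),
]

attempts = defaultdict(int)
correct = defaultdict(int)
for case, was_correct in records:
    attempts[case] += 1
    correct[case] += int(was_correct)

for case in attempts:
    rate = correct[case] / attempts[case]
    print(f"{case}: {rate:.0%} correct diagnoses over {attempts[case]} attempts")
```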

We look forward to collecting and interpreting all of these data on the enactment of the DID-ACT curriculum with the described tools over the two upcoming years of the DID-ACT project.

No Clinical Reasoning Without Me – It’s Time to Put the Client’s Perspective in the Forefront

by Maria Elvén

Generally, clinical reasoning refers to a health professional’s thinking and decision-making process that guides practical actions, implying a process limited to the cognitive activities of health professionals. In more elaborated definitions of clinical reasoning, we may also find concepts such as collaboration and context, which imply a broader view of the reasoning process where the client and situational factors also come into play. The definitions of clinical reasoning are innumerable, and the variations within and between different professional disciplines are equally numerous. There is no established, singular definition of the nature, relevant components or boundaries of a health professional’s clinical reasoning. The co-existence of multiple definitions leads to wide variation in clinicians’ views of clinical reasoning, which in turn affects how consistently and uniformly reasoning is applied in practice.

My name is Maria Elvén, and I’m a lecturer and researcher in physiotherapy. In my PhD work, which focused on clinical reasoning in physiotherapy, I studied clinical reasoning from a biopsychosocial and person-centred perspective. From this perspective, health and illness depend on biomedical, psychological and social aspects. Accordingly, to be in ‘good health’ represents different realities for different individuals. This variation emphasizes the need to grasp a person’s individual perceptions: in the clinical reasoning process, we must understand their own definitions of health and their life situation in tandem with their mental and bodily status from a health care perspective. Person-centredness, i.e. that the clinician considers the unique needs and specific health concerns of the person and treats the individual as competent to make decisions about their own care, is a way to empower clients to take an active part in the clinical reasoning process.

Let’s pause here and reflect! Based on your view/definition of clinical reasoning, what is the goal of the clinical reasoning process? A correct diagnostic decision? A well-performed analytical thought process? A well-founded treatment decision? A satisfied clinician? A satisfied client? 

If I ask myself these questions, the ultimate goal of effective clinical reasoning is that the client achieves their own goal(s) related to their current health problem. To reach this goal, the clinical reasoning process cannot be confined to the mind of the clinician; it needs to be articulated and shared with the client. To make full use of the client as an active partner in the clinical process, shared treatment decisions are not sufficient. The client also needs to be supported in their role as an important contributor to the analysis of the problem, to selecting and prioritizing among the treatments and management strategies that fit their actual life situation, and to continuously evaluating their effectiveness.

We have covered a lot of ground surrounding the clinician’s inner process. The next step is to look beyond the cognitive processes of the clinician and to further elaborate on what meaningful client participation and involvement in the reasoning process implies. As suggested in my definition of clinical reasoning in physiotherapy, clinical reasoning is a process performed in partnership between the client and the clinician, ultimately stressing their shared responsibilities and equal value (Elvén, 2019). Performing such reasoning may require training and competence development so that clinicians can support clients in their role as clinical reasoning partners.

That brings us to where we are today and what I hope to bring to the DID-ACT project. In the DID-ACT project, the client perspective in clinical reasoning is clearly in focus through the inclusion of learning objectives related to client participation and shared decision-making. These learning objectives will influence the learning activities and assessments in the forthcoming clinical reasoning curriculum. The aim is for the client perspective to play an increasingly important role in the learning and teaching of clinical reasoning in this project.

I’m looking forward to contributing to a strengthened role for the client or patient in health care in general, and in clinical reasoning more specifically, and I hope you’ll join me!

Elvén, M. (2019) Clinical reasoning focused on clients’ behaviour change in physiotherapy: Development and evaluation of the Reasoning 4 Change instrument. Doctoral dissertation. Mälardalen University. Västerås.

Curricular Framework for DID-ACT

by Jennifer Vrouvides

We published our most recent deliverable this week: Developing, implementing, and disseminating an adaptive clinical reasoning curriculum for healthcare students and educators.

Team meeting to discuss the framework and work in virtual small groups

The goal of this deliverable was to provide our curricular framework with teaching/assessment methods for the student curriculum and the train-the-trainer course.

Having already established our initial needs assessment and definition of goals and objectives (Deliverable 2.1), we have reached the exciting point of providing educational strategies in terms of a curriculum framework for clinical reasoning. We followed constructive alignment theory to ensure an optimal alignment of learning objectives, teaching, and assessment, and we employed a theme-based approach. We plan to continue using a blended-learning format to help ensure flexibility for our learners while also providing an optimal match of teaching and assessment.

Blended learning combines online activities, such as virtual patients and interactive videos, with face-to-face methods such as bedside teaching. We aim for our courses to place the learner at the centre, meaning that the student is actively engaged in their learning. In this setup, the teacher’s role is more to support and facilitate learning.

Some of our biggest wins in this work package have been:

  • Defining 35 general learning objectives in D2.1 and aligning them with 14 themes/categories to describe the DID-ACT student curriculum and the train-the-trainer course.
  • Defining four different learner levels: Novice, Intermediate, Advanced, and Teacher.
  • Compiling a list of suitable learning and assessment methods that align with our previously defined categories.
  • Clearly defining a breakdown of our teaching, learning and assessment strategies for clinical reasoning.
  • Drafting an overarching curricular outline for the categories, theories, errors, and aspects of patient participation related to the clinical reasoning process. These outlines include the specific learning objectives, teaching and learning activities, as well as assessments, both summative and formative, for our courses.

Our most recent deliverable is a big step, as it establishes the framework for the next steps in our curriculum development process. Our team is both multi-professional and international, thereby reflecting the needs of the different health profession curricula and curricular formats of the partner schools. Due to the current COVID-19 pandemic, we could not meet face-to-face as originally planned to discuss the framework. However, we were able to organize the work in small groups across professions and contexts, which worked asynchronously and met online according to their needs. In addition, we held a series of online meetings to discuss specific aspects and make decisions by consensus.

Read more about the deliverable in detail here.

What’s next? Coming later in December 2020 will be our “Collection of available Open Educational Resources (OER)”, “Publication of recommendations for learning objectives of a clinical reasoning curriculum”, and our “Set of evaluation and analysis tools”.

You can keep track of what is upcoming in the project on our Results page, or by clicking here. 

Teaching Clinical Reasoning – A student’s perspective

by Ada Frankowska

Hello! I’m Ada, a junior doctor from Cracow, Poland. I would like to describe what teaching clinical reasoning looks like from a student’s perspective – I graduated from university just this June, so my memories are still fresh. Let’s get on with it!

At Jagiellonian University in Cracow, clinical reasoning (CR) teaching started in the second year, with the courses Introduction to the Clinical Sciences and Laboratory Training in Clinical Skills.

The first course lasted a whole year. We were divided into 9-person groups, each supervised by a teacher and presented with patient cases. The aim was for each group to manage a case to the best of their abilities – gather information, outline which lab and imaging tests would be needed, and finally decide how to treat the patient. We were allowed to use any books and websites we deemed necessary, as this was only the second year of our studies and all we knew were the basic sciences – anatomy, physiology, biochemistry, etc. The teacher acted more as a moderator of our discussion, trying not to impose their views and solutions on us. At the end of each class, we summarized the most important issues about each case and where we lacked knowledge the most. At the next class, volunteers who had gathered the necessary information about those issues delivered a short presentation to the rest of the group. I remember this course as a nice brainstorming experience, where every idea was valid and counted.

The second course lasted four years – until the end of our studies. We learned how to gather information from patients, be they adults, children or “poor historians”. We were also taught how to examine a patient, how to suture wounds, and even the basics of laparoscopy. Particular emphasis was placed on the ability to communicate with patients effectively, for example how to make sure they understand us and how to deliver difficult news. During this course we also had to solve virtual patient cases in the CASUS system. We were evaluated by OSCE in the third and final years of our studies.

In the 4th, 5th and 6th years, high-fidelity simulations took place. We were divided into 5-person teams, each with one leader, and had to take care of a simulated patient or a mannequin. We mostly dealt with emergency cases there, and the ability to see how our actions affected, for example, the patient’s blood pressure or consciousness was irreplaceable. It once more tested how effectively we gathered information, but for the first time we were also able to test our ability to act promptly. In the 6th year we also had the opportunity to work with a nurse as part of our team, which improved our interdisciplinary communication. After each case we analysed our steps with the supervising teacher – what was done right and what could be done better next time.

On top of all this, from the 3rd year onwards we had clinical rotations, and while the aforementioned courses really added to my knowledge, the rotations varied greatly in quality. Of course, there were talented teachers, and going to their classes was an enlightening experience – but there were also teachers who seemed to lack ideas about how to make students active participants in class, or at least make them feel heard. I don’t doubt their knowledge was vast – it’s just that sometimes it was hard for them to describe their reasoning process, or they didn’t feel the need to do so for us – “it’s just done like that” or “it’s in the guidelines”; not to mention that many of them didn’t even know that clinical reasoning is something that can be actively taught. As a result, I feel that many opportunities to learn clinical reasoning during rotations were lost.

This is why I joined the DID-ACT project – I experienced first-hand how much could be done to improve the quality of teaching at medical universities. I’m also keen on teaching myself – and when I become a teacher, I want to do it in the best way possible. I’ve experienced classes where the topic didn’t seem too interesting, but the teacher transformed it into something fascinating and inspiring.

With all the professionals involved and great ideas created in the DID-ACT project, I think that it’s the best place to begin a change in teaching clinical reasoning.

DID-ACT at AMEE 2020

by Andrzej Kononowicz

The Association for Medical Education in Europe (AMEE) is one of the biggest organisations focused on excellence and research in health professions education. It has been organising annual conferences for scholars engaged in this field for close to 50 years. Interest in these meetings is rising and reached around 4000 participants last year. The DID-ACT consortium decided to disseminate its outcomes at AMEE by submitting an abstract reporting the results of the project’s needs analysis.

This year’s conference was originally planned to be held in Glasgow, United Kingdom; however, changes had to be made due to the ongoing COVID-19 pandemic. AMEE’s traditionally face-to-face format was adapted into a virtual conference that rose to the challenge and exceeded expectations. Rather than following the mainstream reliance on traditional audio-video teleconferencing tools, AMEE took on the challenge of hosting the conference in a virtual world. The virtual venue encompassed a group of interconnected locations with different purposes: a palm-tree-lined lobby with information booths led to several lecture theatres, exhibition halls, networking areas and poster rooms. Prior to joining the conference, participants designed their own avatars and then navigated them through the locations, meeting the avatars of other participants along the way. These encounters enabled interaction either by typing in a chat window or by audio conversation. Participation in the parallel conference sessions could be interactive as well, enabling the audience to applaud, raise their hands, and talk to their seated neighbours.

The DID-ACT submission was accepted for AMEE 2020 as a virtual poster. This presentation format involves constructing a digital “stack” of multimedia resources that can be viewed either in a smartphone app or in a web browser. The content is organised in nested sections depicted as rectangular tiles, each containing resources such as text entries, images and web links. Each presenter was encouraged to incorporate into the poster a short voice-over PowerPoint video giving an overview of the most important content. In addition, presenters were required to prepare a one-page digital print-out of the poster, including a QR code for easy access by smartphone from the real world. The DID-ACT poster was prepared by Andrzej Kononowicz, Małgorzata Sudacka, Felicitas L. Wagner, Samuel Edelbring, Inga Hege and Sören Huwendiek on behalf of the consortium. In the image below we present the poster print-out. The content is available via this link: https://api.ltb.io/show/BWPMF.

The virtual conference was held from 7th until 9th September. Several DID-ACT members participated in the conference events and networked with fellow researchers; in particular, there were several conference presentations around the topic of clinical reasoning. At the end of the conference, the participants from the DID-ACT project decided to gather virtually in one of the exhibition halls for a group selfie:

Standing from the left are: Desiree Wiegleb Edstöm, Živa Ledinek, Małgorzata Sudacka, Maria Elvén, Andrzej Kononowicz and Inga Hege

The contributions presented at the virtual AMEE conference will remain available at least throughout the next year, enabling playback of the presentations and sustainable project dissemination. Participation in the conference was a memorable event, impressive in its innovation and showing how far the virtualisation of education and research can now go. Despite the many benefits of the virtual conference, and thankful that it could be held at all in these troubled pandemic times, we hope to meet face-to-face next year at AMEE 2021 in real-world Glasgow to present the community with more news about the DID-ACT project.

Covid-19 Summer Term 2020

by Martin Adler

Summer term 2020 was special. Most universities start their summer term in April, and thus, with the onset of the Corona outbreak in March, their preparation time was drastically reduced given the circumstances. One of the major challenges was that face-to-face lectures had to be re-planned as online sessions. In a very short amount of time, new online conferencing systems were established, and the necessary technical support was partially organized with the help of student tutors. Even though people who work on international projects are already used to video conferencing solutions, the number of potential technical difficulties is still high, and bandwidth issues can undo all previous efforts.

Instruct, as an e-learning provider, observed a strong increase in online lectures and virtual patient usage in our system. Even institutions that already used our system CASUS and offered numerous online courses exceeded their previous numbers (Graphic 1).

Graphic 1: Comparison of completed virtual patient cases by students from one exemplary university from April to July 2019 and April to July 2020.

The peak in July is the result of exam preparation. We also registered slightly more support requests, however, these were still easily manageable.

In the media, both researchers and newspapers made online learning one of their central themes, especially regarding how the current circumstances will impact e-learning in the future. Their findings are not necessarily novel, as can be seen in an article from the New York Times (https://www.nytimes.com/2020/06/13/health/school-learning-online-education.html?smid=em-share): “[…] students tend to learn less efficiently than usual in online courses […]. But if they have a facilitator or mentor on hand, someone to help with the technology and focus their attention — an approach sometimes called blended learning — they perform about as well in many virtual classes, and sometimes better.”

In an article from the German newspaper Sueddeutsche Zeitung called „Schluss mit dem Digitalgejammer!“ (“Stop complaining about digitalization!”) (https://www.sueddeutsche.de/bildung/hochschulen-und-corona-schluss-mit-dem-digitalgejammer-1.4985116), the author highlights the discrepancy between educators complaining about a lack of personal interaction, exchange and dialogue in e-learning and the reality that seminars and lectures are quite often so overcrowded that they permit no interactivity whatsoever, let alone dialogue and conversation. The author states that students might miss pre-Corona campus life, but that educators believe and hope various other reasons also play into this.

It’s exciting to see how this transformation will shape the future, and it seems as though we have a special winter term in store for us, too – this time we will have slightly longer to prepare. One thing is for sure: it’s time for more and better blended learning solutions.

A look back on an eventful first half year

by Felicitas Wagner & Sören Huwendiek, Universität Bern

The first phase of the DID-ACT project (January – June 2020) was a very intense and insightful time. The main goal of this first project phase was to conduct a needs assessment among different stakeholder groups regarding a longitudinal clinical reasoning curriculum for students and a train-the-trainer course for teachers. Barriers to the implementation of such a curriculum and course, as well as potential solutions, were also investigated.

At the first meeting of the project team in Augsburg in January, the foundations were laid for the work that followed. In the following months, we carried out surveys and interviews. Data collection unfortunately coincided with the onset of the Corona crisis, which made recruiting participants in the health sector a challenging task. Nevertheless, we were able to conduct a considerable number of interviews, and almost 200 people took part in the survey. During these difficult times, our bi-weekly online meetings were especially valuable for keeping the project on track and for coordinating and supporting each other with the ongoing tasks.

DID-ACT team meeting in Zoom

The analysis of the needs for the student curriculum showed that cases and simulations are seen as especially important in the teaching of clinical reasoning while oral and written exams are seen as most useful to assess clinical reasoning. For the train-the-trainer course, a blended-learning approach is favored. Results on barriers are described in our blog entry “Barriers for a clinical reasoning curriculum“. More detailed results can be found in the D1.1a report and the D1.1b report.

After finishing the data analysis regarding needs and barriers for the planned longitudinal clinical reasoning curriculum and the train-the-trainer course, solutions to overcome the identified barriers were sought. We analyzed the answers from our interview partners and conducted an online ideation workshop with the team members to develop further solutions (described in our blog entry “Online ideation workshop”). More details on solutions are described in the D1.2 report.

After this exciting first phase of the project, we are now looking forward to the next months where a curricular framework with learning goals and educational methods for the student curriculum and the train-the-trainer course will be developed.

Barriers for a clinical reasoning curriculum

As part of the DID-ACT project, we conducted over 40 interviews with educators, students and clinical reasoning experts, asking them, among other questions, what barriers they see to developing a clinical reasoning curriculum for students and a train-the-trainer course for teachers. Interestingly, among the most important barriers mentioned by the interviewees were cultural barriers. These include aspects such as a lack of collaboration among educators, no culture of reflection, no culture of dealing with errors, and a resistance to change. A second category of barriers was related to the teaching process. Interviewees identified obstacles such as a lack of awareness that clinical reasoning can be taught, a lack of qualified educators to teach students, and a lack of guidance and standards on how to teach clinical reasoning.

The results of the interviews can be found in the D1.1b report.

As already started in our ideation workshop, we are now discussing solutions for overcoming these barriers – the results will be published by the end of June!

Online ideation workshop

In our specific needs analysis, we identified a wide range of barriers and needs for the implementation of a clinical reasoning curriculum through a survey and semi-structured interviews. As a next step, we had planned a face-to-face design thinking workshop on May 5th in Krakow, Poland, to develop solutions for overcoming these barriers and addressing the needs. Due to the travel restrictions, we decided to try something new and hold the workshop as a synchronous online meeting after an asynchronous individual preparation phase.

For the preparation phase, we set up a course in our learning management system, and the team members had time to familiarize themselves with the identified barriers and needs. They were asked to submit at least five (ideally ten) ideas on how these needs and barriers could be addressed, including at least one crazy/absurd idea.

In our online meeting on May 5th, we divided the 18 participants into four small groups (using Zoom breakout rooms), in which they presented their ideas and clarified any questions. After 20 minutes we met in plenary, where each group presented their ideas, which we documented and clustered (using the integrated whiteboard). Finally, each participant was asked to identify three solutions that are easy to do / hard to do / have a high impact.

Overall, the online workshop worked very well; only the simultaneous presentation of ideas and clustering on the whiteboard was difficult, and this could be better done in two phases – first collecting the ideas and then clustering them.

During the next weeks we will continue the discussion and refinement of the solutions and publish a final version as the D1.2 report by the end of June.