

Original research
Strategies for the assessment of competences during rheumatology training across Europe: results of a qualitative study
  Aurélie Najm1, Alessia Alunno2, Francisca Sivera3,4, Sofia Ramiro5,6 and Catherine Haines7,8; Working Group on Training in Rheumatology across Europe
    1 Rheumatology, University of Glasgow Institute of Infection Immunity and Inflammation, Glasgow, UK
    2 Department of Medicine, Rheumatology Unit, University of Perugia, Perugia, Italy
    3 Department of Rheumatology, Hospital General Universitario Elda, Elda, Spain
    4 Department of Rheumatology, Universidad Miguel Hernandez De Elche, Elche, Spain
    5 Leiden University Medical Center, Leiden, Netherlands
    6 Zuyderland Medical Centre Heerlen, Heerlen, Limburg, Netherlands
    7 EULAR, Zurich, Switzerland
    8 Clinical Education, King’s College London, London, UK
    Correspondence to Dr Aurélie Najm; aurelie.najm@gmail.com

    Abstract

    Objectives To gain insight into current methods and practices for the assessment of competences during rheumatology training, and to explore the underlying priorities and rationales for competence assessment.

    Methods We used a qualitative approach through online focus groups (FGs) of rheumatology trainers and trainees, separately. The study included five countries—Denmark, the Netherlands, Slovenia, Spain and the United Kingdom. A summary of current practices of assessment of competences was developed, modified and validated by the FGs based on an independent response to a questionnaire. A prioritising method (9 Diamond technique) was then used to identify and justify key assessment priorities.

    Results Overall, 26 participants (12 trainers, 14 trainees) participated in nine online FGs (2 per country, Slovenia 1 joint), totalling 12 hours of online discussion. Strong nationally (the Netherlands, UK) or institutionally (Spain, Slovenia, Denmark) standardised approaches were described. Most groups identified providing frequent formative feedback to trainees for developmental purposes as the highest priority. Most discussions identified a need for improvement, particularly in developing streamlined approaches to portfolios that remain close to clinical practice, protecting time for quality observation and feedback, and adopting systematic approaches to incorporating teamwork and professionalism into assessment systems.

    Conclusion This paper presents a clearer picture of the current practice on the assessment of competences in rheumatology in five European countries and the underlying rationale of trainers’ and trainees’ priorities. This work will inform EULAR Points-to-Consider for the assessment of competences in rheumatology training across Europe.

    • Autoimmunity
    • Early rheumatoid arthritis
    • Rheumatoid arthritis
    • Synovitis
    • Treatment
    • Sjögren’s syndrome
    • T cells
    • Chondrocalcinosis
    • Gout
    • Health services research
    • Synovial fluid
    • Ankylosing spondylitis
    • Spondyloarthritis
    • Outcomes research
    • Epidemiology

    This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


    INTRODUCTION

    A rheumatologist is defined as a physician who has received further training in the diagnosis (detection) and treatment of musculoskeletal disorders and systemic autoimmune conditions, commonly referred to as rheumatic and musculoskeletal diseases.1 2 Rheumatology is recognised as a specialty or sub-specialty in most of the European League Against Rheumatism (EULAR) countries.3 However, the scope of rheumatology practice varies across countries.4,6 Indeed, in some countries, rheumatologists focus on inflammatory joint and connective tissue diseases whereas in others, rheumatology covers a broader scope, including soft tissue lesions, fibromyalgia and rehabilitation.5

    In order to become a rheumatologist, trainees must successfully complete a rheumatology training programme.2 6 Both the content and the assessments within these programmes are regulated by national authorities. Some initiatives aiming at harmonising training across countries of the European Union (EU) exist. The European Union of Medical Specialists, a professional body of representatives from medical specialities from the EU member states, has developed a general European curriculum with the competences to be achieved at the completion of training, including theoretical and clinical knowledge, practical skills and non-clinical competences.2 7

    Over the past years, in addition to this European curriculum, efforts have been made to gain insights and provide an in-depth analysis of the differences and similarities in national curricula and assessment methods across EULAR countries.6 8 In a prior study, young rheumatologists and trainees answered a questionnaire on the acquisition of competences during training and on information not clearly stated in the curricula (eg, assessment of competences).6 8 9 Interestingly, while this approach provided useful information on some of the differences and similarities in training across countries, data on the assessment of competences were incomplete and several limitations hampered the interpretation of the findings.6 8 A further attempt was made to gather information from a principal investigator (PI) per country using a short questionnaire with open questions. This further highlighted the difficulty in obtaining useful and reliable information from a single person. Inquiring into assessments during rheumatology training yields answers reflecting personal experience and perception; therefore, a more comprehensive evaluation, obtained from different sources, is needed before a full picture can be obtained. For this reason, we opted for a qualitative approach in a representative selection of European countries.

    The present study aimed at gathering information and in-depth views on the assessment methods of competences in rheumatology and the experiences around them, as well as underlying priorities for competence assessment through focus groups (FGs).

    This qualitative approach will ultimately inform EULAR Points-to-Consider on the assessment of competences in rheumatology across Europe.

    METHODS

    Focus groups

    FGs across different European countries were run online to gain insights into assessment methods.

    Countries were selected in order to provide a geographical spread across Europe and to represent different educational contexts: larger and smaller countries, localised and centralised approaches to assessment. In order to be included, countries had to have (a) a national regulatory document for both curriculum and assessment methods, (b) a portfolio and (c) a structured framework for feedback closely related to the curriculum. Additionally, a minimum of one country per geographical area (East, North and South Europe) was included to give a spread of contexts. Eleven countries fulfilled all criteria; for feasibility, five countries were finally included: Denmark, the Netherlands, Spain, Slovenia and the United Kingdom.

    Information on the individual countries, used to check the above-mentioned eligibility criteria, was obtained from a PI in each country through a questionnaire seeking a general description of the country’s assessment methods. In qualitative research, this approach, ‘purposive sampling’, is used to ensure that those participating in the discussion are likely to have the experience needed to contribute to the study. The sample is not random and is small, so it can only contribute to the understanding of that particular population, but it is considered likely to have experience and factors in common with other very similar groups.

    Current assessment methods and practice

    The PI of each included country was responsible for two tasks. First, to complete a questionnaire, for which they could receive the help of their local team and/or head of unit for maximum accuracy (online supplementary text S1). Second, to identify, through their personal or institutional network, FG participants from their country, ideally comprising four trainers and four trainees. Participation was voluntary and anonymised. Trainee and trainer FGs were run separately to avoid pressure between groups.


    Participants were sent preparatory material to review before the FG explaining the process and guiding the preparation for the discussion. This included a summary of the aims and methods of the project and an overall description of the country’s training and assessment methods previously developed by the PI through the above-mentioned questionnaire (online supplementary files 1 and 2).


    The FGs were conducted online in English, moderated by an experienced qualitative researcher in medical education (CH) with the assistance of a rheumatologist (AN). The FGs were audio-recorded through Zoom software, and selected quotes were later used to illustrate the findings. First, the participants were introduced; then the PI’s responses to the questionnaire were shared on screen during the discussion. The account of practice in each country was discussed and amended in detail until a full account was agreed upon by all participants. This allowed the incorporation of the perspectives of experienced trainers and trainees from different centres within each country, thereby ensuring more reliable data. The following aspects were discussed during the FGs: portfolio, formative feedback, summative assessments, clinical practice and skills, professionalism, trainer certification, knowledge tests and national standards.

    Priorities for assessment from trainees and trainers

    In order to gain insights into FG participants’ views on assessment methods, a prioritising technique known as the 9 Diamond method10 11 was used to identify key assessment priorities and their justifications. The 9 Diamond technique is a method commonly used to stimulate discussion in face-to-face educational settings where the underlying values and beliefs about a topic have a strong bearing on priorities in professional practice. The technique has also been employed in educational research settings as a way of providing a semi-structured framework for discussing the rationale behind complex choices in a time-effective manner.10 A possible limitation is that the statements may not produce the optimal or expected response from participants, a risk inherent to any interview or discussion-based method of qualitative research. The method also relies on the skill of the person leading the discussion to ensure that participants have the best opportunity to voice their thoughts. The discussion leader has over 30 years of experience in leading professional discussions of this type, 15 of which have been in clinical education.

    In qualitative research, the subjective experience and opinions of the selected group are the subject of study, and so the discussion was specifically targeted at participants’ own experience within their own setting. The trainers were all experts in their field and so their experience was extremely relevant. The trainees were not yet experts in their field nor able to fully judge the role any assessment would play in their future career; however, they were experts in their own current experience of being assessed in training. Trainee perception of the assessment regime is an important aspect of the overall effectiveness of the programme and can assist in evolving practice for future trainees.

    Participants were provided with a set of nine statements about assessing competences in rheumatology training (table 1).

    Table 1

    Nine areas of competence assessment summarised in nine statements

    Statements are listed unprioritised, as presented to the FGs.

    Each participant was asked to rank these statements into top, bottom and middle three priorities, giving reasons. This process stimulated discussion between participants, until the group was able to reach a consensus agreement on the priority order. Statements were framed to prompt discussion on the underlying values and beliefs related to nine key areas of competence assessment.

    These statements were developed by the medical educator (CH) based on general principles, the medical education literature and a systematic literature review on the assessment of competences.12 Quotations were systematically collected from this part of the discussion for each group. Participants were finally asked to specify any aspect which had, in their view, been omitted, in order to ensure comprehensiveness of the final picture and to assess whether data saturation had been reached.

    For the aggregated analysis, a statement scored 1 if it was ranked as the first choice, with scores increasing by rank down to 9 for the last choice. These scores were calculated for each FG and then aggregated across groups; thus, the lowest total indicated the most popular choice.
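    To make this scoring concrete, the short Python sketch below illustrates one way such consensus rankings could be aggregated; the statement labels and group orderings are purely hypothetical and are not the study’s data.

    ```python
    # Illustrative sketch of the 9 Diamond rank aggregation described above.
    # All statement labels and focus-group orderings below are hypothetical.

    from collections import defaultdict

    # Nine placeholder statements, "Statement A" ... "Statement I".
    STATEMENTS = [f"Statement {chr(ord('A') + i)}" for i in range(9)]

    # Each focus group's consensus order, from highest to lowest priority.
    fg_rankings = {
        "FG 1 (trainers)": ["Statement C", "Statement A", "Statement F", "Statement B", "Statement D",
                            "Statement I", "Statement E", "Statement G", "Statement H"],
        "FG 2 (trainees)": ["Statement A", "Statement C", "Statement B", "Statement F", "Statement I",
                            "Statement D", "Statement G", "Statement H", "Statement E"],
    }

    totals = defaultdict(int)
    for group, order in fg_rankings.items():
        # Every group must rank all nine statements exactly once.
        assert sorted(order) == sorted(STATEMENTS), f"{group} must rank all nine statements"
        for position, statement in enumerate(order, start=1):  # 1 = first choice ... 9 = last
            totals[statement] += position

    # The lowest aggregated total corresponds to the most popular choice.
    for statement, score in sorted(totals.items(), key=lambda item: item[1]):
        print(f"{statement}: {score}")
    ```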

    Results were analysed by country and also by participant group: trainers and trainees. The findings were ordered and presented using colours and statements to demonstrate the variety of decisions across the groupings. The recordings from this section of the discussion were used to provide key quotations, illustrating themes discussed and justifications given within the groups relating to each competence area.

    RESULTS

    Current assessment methods and practice

    In total, 29 volunteers (15 trainers and 14 trainees) participated in nine FGs. Table 2 summarises demographic data of the FG participants.

    Table 2

    Demographic data of the focus group participants

    Two online FGs composed of three to four persons per group were conducted in each country, except Slovenia, where only one FG was performed due to a lower number of participants. The 12 hours of online discussion resulted in over 15 000 words. A summary for each of the five countries’ assessment systems is available in table 3.

    Table 3

    Summary of the current practice on the assessment of competences for each of the five countries, including quotes

    Overall, some sort of portfolio was used in every included country; this was commonly seen as useful (especially by trainers), but time-consuming (especially by trainees).

    “The portfolio provides the framework for a common standard that can be applied nationally (…) and includes the curriculum which is crucial. (UK Trainer)”

    “I have so many other things to be done first and then I’ll think about it. (Slovenia Trainee)”

    Its positive aspects stemmed from the framework it provided for the overall training and assessments. Formative feedback was felt to be essential by both trainers and trainees; the timing of assessments was country-specific, ranging from 3-monthly (Denmark, the Netherlands) to yearly (Slovenia, Spain).

    “Feedback on clinical learning is the most important thing for trainees. (Spanish Trainee)”

    The Slovenian oral final examination, conducted over 1 or 2 days in a clinical setting, was felt by participants to be very stressful. Professionalism was formally assessed in three countries (Denmark, the Netherlands, UK) through multisource feedback and was highlighted as important by participants. Mandatory courses for trainers in teaching methods took place in two countries, Denmark and the UK.

    9 Diamond priority ordering results

    Table 4 provides an aggregated summary of all participants’ priorities, overall and for trainees and trainers separately.

    Table 4

    Priorities on the assessment of competences for all participants and stratified by trainees and trainers

    Providing regular feedback to trainees and the need to achieve a balance between service provision and protected time for training were rated highly by both groups. Knowledge tests during postgraduate training were rated among the lowest priorities by most respondents.

    Despite overall high agreement, there were differences between trainers and trainees. In general, trainees were much less keen on portfolios than trainers. Compared with trainers, trainees were more concerned with interventions to support trainees at risk of failure and with demonstrating effectiveness in a clinical setting.

    Differences were also observed across countries (table 5). All countries agreed in giving a high priority to providing regular feedback. Denmark and the Netherlands seemed less concerned with trainees at risk of failure, whereas Slovenian participants seemed to be keener on knowledge tests. UK, Spanish and Slovenian trainees were less keen on portfolios, highlighting their administrative burden. On the other hand, trainers were very keen on them irrespective of their country of origin.

    Table 5

    Priorities on the assessment of competences stratified by country

    In addition to the prespecified discussion items, the Dutch groups (both trainees and trainers) suggested further aspects that would be important to their competence assessment: research, resilience and reducing the administrative burden within healthcare. None of the other countries made additional proposals, suggesting that the key issues were well covered by the nine statements.

    DISCUSSION

    This work is, to our knowledge, the first qualitative study gaining insights into competence assessments in rheumatology across Europe. Online FGs are an innovative methodology and proved to be a feasible way to involve discussants across a wide geographical area. Through this study, we highlighted interesting differences between countries regarding current strategies and methods for the assessment of competences. While a portfolio, mostly in electronic format, was mandatory in all five countries, other assessment types, such as the assessment of professionalism through multisource feedback, occur in a structured manner in only some countries. Portfolios were supported by trainers but were also felt to be burdensome by trainees, sometimes reduced to an ineffective, time-consuming checklist of requirements. Participants particularly valued portfolios which provided a framework to integrate all the required aspects of performance, often derived from the CanMEDS approach. Where components were closely linked to the curriculum and included non-clinical competences, such as professionalism and teamwork, portfolios were seen as particularly useful and valid. Trainees commented on their current experience of compiling their portfolios and understandably expressed their current challenges and frustrations, which would be expected to become more balanced in retrospect.

    Structured feedback took place regularly in all five countries but with variable frequency. Providing feedback was highly valued by both trainers and trainees. One main difficulty, described by many FG participants, was the lack of protected time for giving and receiving feedback, which limits its feasibility. Indeed, giving feedback is a skilled educational task, particularly when related to attitudes and professionalism.13 Many trainers and trainees described how they valued, or would welcome, opportunities to develop their feedback skills within a constructive and consistent approach, in which feedback delivered regularly and in a positive manner could help catalyse changes in performance and motivation.

    A prolonged oral test in a clinical setting was performed in only one country, and knowledge tests were generally felt to be neither effective nor desirable as key assessment methods at this stage in training. Indeed, there is evidence that summative oral exams should not be strongly relied on to assess competences and are not supported in the literature across medical education as the main approach to assessment for postgraduate education.14 15 In addition, knowledge tests might also be inappropriate at this level as assessment in advanced stages of training focuses on the acquisition of complex competences integrating knowledge, skills and attitudes.16

    Interestingly, this work provides a summary of current practices, and the initial document provided by each country’s PI evolved with the interventions from FG participants. This allowed us to obtain a more comprehensive view of the current assessment strategies within specific countries. Indeed, reflections on assessment practices are not fully objective and include a perception of and judgement on them. Moreover, recent reviews highlight personal and environmental factors influencing the way trainees perceive the developmental value of assessment, with self-motivation also reported as an important driver of feedback seeking.17 18 This is a possible reason why previous attempts at gathering such information8 were felt to be incomplete and unreliable. By using a qualitative approach, we were able to combine and report in-depth views of several individuals from different positions and countries.

    In the second part of this work, we used a novel technique, the 9 Diamond methodology, in order to identify underlying beliefs and attitudes likely to influence how the assessment of competences takes place. Through this work, it has been possible to identify general priorities in the assessment of competences in rheumatology across Europe. This work has the potential to help some countries develop their approach to assessing competences in rheumatology training and to avoid pitfalls. One of the main priorities of FG participants, especially trainees, was the identification of trainees at risk of failure and the provision of support for their progression. Signs of failure should be monitored and addressed in a timely fashion, so that a variety of appropriate solutions can be offered. While the importance of regular skill assessment was discussed, the particular case of technical procedural skills, such as joint aspiration, was not specifically addressed, which could be perceived as a missing element.

    Some limitations of the study must be considered. Qualitative research often relies on purposive sampling. By its nature, the sample was predisposed towards participants with an interest in education and current interaction with EULAR, and so may well be biased towards those with a fuller understanding of, and commitment towards, the assessment of competences and current best practice. However, the FG participants, through their engagement, are also likely to hold critical as well as positive views, and are therefore suitable for such a qualitative study. It is likely that there are other examples of good practice, whether national, institutional or local, that have not been captured by this approach, but these were considered outside the scope of this project. Online FGs present unique challenges. The group size must remain small, but on some occasions one or two participants were unavailable at the last minute, making the number of participants potentially low. However, the overall sample size and the methods of data collection, building a comprehensive account of practice with contributions from different groups (trainers and trainees) and at different stages, provide a substantial sample for a qualitative study of this type. In addition, the general agreement in priority ordering across groups suggests that variation in FG sizes did not significantly affect the results. One limitation of the 9 Diamond method lies in the framing of the statements. It is important to formulate statements in a way that provokes discussion and leads to disagreement. This strategy was used in particular for the professionalism statement, which was worded in a negative manner. Although it prompted discussion, this statement ended up low in the priority order precisely because professionalism was felt to be very important. This illustrates that the order of priorities might not fully reflect importance in a linear way. Traditional qualitative data (ie, quotations from transcriptions of the discussion) clearly demonstrated that assessing professionalism was felt to be extremely important and to have a high priority in training. Of the 11 countries fulfilling our initial selection criteria, the inclusion of five countries allowed data saturation while ensuring feasibility. Despite this limited number of countries, very few further aspects were mentioned by participants in response to prompts about what else needed to be included. Although we strove for representativity of different assessment systems and cultures, given the wide heterogeneity in training programmes across the 41 EULAR countries, selection bias could have hampered retrieving other useful insights or best practices.

    In conclusion, we identified current practice in the assessment of competences across five countries, incorporating the views of expert trainers and current trainees. Additionally, priorities and underlying beliefs about the assessment of competences were identified. Together, these provide a rich and coherent picture on the assessment of competences in rheumatology training across European countries, which will inform the EULAR Points-to-Consider for the assessment of competences in rheumatology training.

    Key messages

    What is already known about this subject

    • Providing medical education in rheumatology requires a challenging balance between delivering training on time and to high standards while also delivering clinical service.

    What does this study add

    • Providing frequent formative feedback to trainees for developmental purposes is perceived as the highest priority for both trainers and trainees in rheumatology.

    • Portfolios, considered useful by both trainers and trainees, are seen as time-consuming (particularly by trainees) and require streamlined approaches to remain close to clinical practice.

    How might this impact on clinical practice or future developments

    • Through focus groups, a good insight into practices and preferences around the assessment of competences in rheumatology was gathered from European trainees (fellows, residents) and trainers.

    • These insights will help further harmonise assessment practices for rheumatology trainees across Europe.

    Acknowledgments

    We acknowledge national PIs from the selected countries (Tue Kragstrup, Diego Benavent, Marloes van Onna, Blaž Burja and Md Yuzaiful Md Yusof) for their help in identifying the focus group participants. We thank all focus group participants for their time and contribution to this project.

    REFERENCES

    Footnotes

    • Twitter Aurélie Najm @AurelieRheumo.

    • Contributors AN, AA, SR, FS and CH and the Working Group have contributed to the design of the study. AN and CH have organised and analysed the data of the focus groups. AN, AA, SR, FS, CH and the Working Group have contributed to the drafting of the manuscript and have approved the final version to be submitted.

    • Funding This project was supported by a EULAR grant (EDU043).

    • Competing interests None declared.

    • Patient consent for publication Not required.

    • Data sharing statement Data are available upon reasonable request.

    • Provenance and peer review Not commissioned; externally peer reviewed.