ORCID Profile
0000-0002-6679-0514
Current Organisation
KU Leuven
Publisher: Research Square Platform LLC
Date: 21 June 2022
DOI: 10.21203/RS.3.RS-1652676/V1
Abstract:
Background: In view of the rapidly growing use of the CanMEDS framework and the lack of rigorous evidence about its applicability in workplace-based medical training, further exploration is necessary before the framework can be accepted as a valid and reliable set of competency outcomes for postgraduate medical training. This study therefore investigated whether the CanMEDS key competencies could be used as outcome measures for assessing trainees’ competence and for supporting competency growth across training phases in a workplace-based General Practitioner (GP) training setting.
Methods: A web-based Delphi study was employed to collect validity evidence. In three Delphi rounds, a panel of experts (n = 25–43) was asked to rate on a 5-point Likert scale whether the CanMEDS key competencies were feasible for workplace-based assessment and consistent for assessment across different training phases. Comments on each CanMEDS key competency were encouraged. Descriptive statistics of the ratings were calculated, while content analysis was used to categorise panellists’ comments.
Results: Of the twenty-seven CanMEDS key competencies, 21 were scored as feasible, and 16 as consistent across training phases, for assessment in the workplace. All key competencies under the “Medical Expert”, “Communicator”, and “Collaborator” roles reached consensus for feasibility of assessment. The panel also agreed on one of four key competencies under the “Leader” role, one of two under “Health Advocate”, three of four under “Scholar”, and three of four under “Professional”. Regarding consistency of assessment in the workplace, consensus was achieved for four of five competencies under “Medical Expert”, three of five under “Communicator”, two of three under “Collaborator”, one of two under “Health Advocate”, three of four under “Scholar”, and three of four under “Professional”. No competency under the “Leader” role was deemed to be consistently assessable across training phases.
Conclusions: The findings indicate a mismatch between the initial intent of the CanMEDS framework and its applicability in the context of workplace-based assessment. Although the CanMEDS framework could offer starting points, further contextualisation of the framework is required before implementing it in postgraduate medical training.
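The rating-and-consensus step of a Delphi round like the one described above can be sketched in a few lines of Python. The abstract does not state the study's consensus criterion, so the 70% agreement cut-off and the panel ratings below are purely hypothetical assumptions for illustration:

```python
from statistics import mean, median

def delphi_summary(ratings, threshold=0.70):
    """Summarise one Delphi round for a single CanMEDS key competency.

    ratings: 5-point Likert scores (1 = strongly disagree .. 5 = strongly agree).
    threshold: hypothetical consensus criterion, the proportion of panellists
    rating 4 or 5 needed to accept the competency (not stated in the abstract).
    """
    agree = sum(1 for r in ratings if r >= 4) / len(ratings)
    return {
        "n": len(ratings),
        "mean": round(mean(ratings), 2),
        "median": median(ratings),
        "agreement": round(agree, 2),
        "consensus": agree >= threshold,
    }

# Hypothetical ratings from a panel of 25 for one key competency
panel = [5, 4, 4, 5, 3, 4, 5, 4, 2, 4, 5, 4, 4, 3, 5, 4, 4, 5, 4, 3, 4, 5, 4, 4, 5]
print(delphi_summary(panel))
```

Running the summary per competency and per round, and carrying panellists' free-text comments alongside, mirrors the mixed quantitative/qualitative design the abstract describes.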
Publisher: Springer Science and Business Media LLC
Date: April 2023
DOI: 10.1186/S12909-023-04207-2
Abstract: In view of the rapidly growing use of the CanMEDS framework and the lack of rigorous evidence about its applicability in workplace-based medical training, further exploration is necessary before the framework can be accepted as an accurate and reliable set of competency outcomes for postgraduate medical training. Therefore, this study investigated whether the CanMEDS key competencies could be used, first, as outcome measures for assessing trainees’ competence in the workplace, and second, as consistent outcome measures across different training settings and phases in a postgraduate General Practitioner (GP) training. In a three-round web-based Delphi study, a panel of experts (n = 25–43) was asked to rate on a 5-point Likert scale whether the CanMEDS key competencies were feasible for workplace-based assessment, and whether they could be consistently assessed across different training settings and phases. Comments on each CanMEDS key competency were encouraged. Descriptive statistics of the ratings were calculated, while content analysis was used to analyse panellists’ comments. Out of twenty-seven CanMEDS key competencies, consensus was not reached on six competencies for feasibility of assessment in the workplace, and on eleven for consistency of assessment across training settings and phases. Regarding feasibility, three out of four key competencies under the “Leader” role, one out of two under “Health Advocate”, one out of four under “Scholar”, and one out of four under “Professional” were deemed not feasible for assessment in a workplace setting. Regarding consistency, consensus was not achieved for one out of five competencies under “Medical Expert”, two out of five under “Communicator”, one out of three under “Collaborator”, one out of two under “Health Advocate”, one out of four under “Scholar”, and one out of four under “Professional”. No competency under the “Leader” role was deemed to be consistently assessable across training settings and phases. The findings indicate a mismatch between the initial intent of the CanMEDS framework and its applicability in the context of workplace-based assessment. Although the CanMEDS framework could offer starting points, further contextualization of the framework is required before implementing it in workplace-based postgraduate medical training.
Publisher: Springer Science and Business Media LLC
Date: December 2021
DOI: 10.1186/S12909-021-03068-X
Abstract: The COVID-19 pandemic has profoundly affected assessment practices in medical education, necessitating a move away from the traditional classroom. However, safeguarding academic integrity is of particular importance for high-stakes medical exams. We utilised remote proctoring to safely and reliably administer a proficiency test for admission to the Advanced Master of General Practice (AMGP), and compared exam results of the remote proctored group to those of the on-site proctored group. A cross-sectional design was adopted with candidates applying for admission to the AMGP. We developed and applied proctoring software operating on three levels to register suspicious events: recording actions, analysing behaviour, and live supervision. We performed a Mann-Whitney U test to compare exam results from the remote proctored group to the on-site proctored group. To gain more insight into candidates’ perceptions of proctoring, a post-test questionnaire was administered. An exploratory factor analysis was performed to explore the quantitative data, while the qualitative data were thematically analysed. In total, 472 (79%) candidates took the proficiency test using the proctoring software, while 121 (20%) were on-site with live supervision. The results indicated that the proctoring type does not influence exam results. Of the 472 candidates, 304 filled in the post-test questionnaire. Two factors were extracted from the analysis, identified as candidates’ appreciation of proctoring and as emotional distress because of proctoring. Four themes were identified in the thematic analysis, providing more insight into candidates’ emotional well-being. A comparison of exam results revealed that remote proctoring could be a viable solution for administering high-stakes medical exams. With regard to candidates’ educational experience, remote proctoring was met with mixed feelings. Potential privacy issues and increased test anxiety should be taken into consideration when choosing a proctoring protocol. Future research should explore the generalizability of these results utilising other proctoring systems in medical education and in other educational settings.
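The group comparison in this study relied on a Mann-Whitney U test. As a rough illustration of that test, a tie-aware rank-sum computation with a normal approximation for the two-sided p-value can be written in pure Python; the exam scores below are hypothetical examples, not the study's data:

```python
from statistics import NormalDist

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via mid-ranks and a normal approximation.

    A minimal sketch: ties get averaged ranks, but the variance omits the
    tie correction for brevity, so the p-value is approximate.
    """
    combined = sorted((v, 0 if i < len(a) else 1)
                      for i, v in enumerate(list(a) + list(b)))
    # Assign mid-ranks: tied values share the average of their rank positions
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        mid = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[k] = mid
        i = j + 1
    r_a = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)
    n1, n2 = len(a), len(b)
    u = r_a - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (u - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return u, p

# Hypothetical exam scores for two proctoring groups
remote = [14, 16, 15, 18, 12, 17, 16, 15]
onsite = [15, 14, 16, 17, 13, 15]
u, p = mann_whitney_u(remote, onsite)
print(f"U = {u}, p = {p:.3f}")
```

A large p-value here means the score distributions of the two groups are statistically indistinguishable, which is the kind of evidence the abstract reports for remote versus on-site proctoring. In practice a library routine such as SciPy's implementation, which applies the exact distribution and tie corrections, would be preferred.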
No related grants have been discovered for Vasiliki Andreou.