ORCID Profile
0000-0002-9506-6007
Current Organisation
University of Leeds
Publisher: JMIR Publications Inc.
Date: 02-06-2023
Abstract: Defined as a public priority, digital health is considered essential for creating sustainable and equitable health care systems on an international scale. However, while widely acknowledged as beneficial, recent research shows that patients and the public are rarely involved in the design, implementation, and evaluation of digital health technologies. Furthermore, stakeholder consensus on the principles that underpin meaningful involvement in digital health specifically is not yet readily available. This research draws on a four-phase methodology: (i) an extensive systematic review of peer-reviewed literature; (ii) inductive thematic analysis of the review findings; (iii) a co-design workshop to identify best practice principles; and (iv) consensus testing of the identified principles amongst key stakeholders using a modified Delphi methodology. This manuscript reports on phases three and four only; the systematic review methodology and findings are reported elsewhere. Thirteen principles, categorised into four headings ((i) engage; (ii) acknowledge, reward and value; (iii) communicate; and (iv) trust and transparency) and termed the EnACT framework, were co-designed by workshop participants. Nine principles achieved ‘essential’ consensus ( % agreement) status among the 14 expert respondents. The principles with the highest level of consensus were providing clear assurances and information about patient confidentiality, data privacy and security (100%) and creating a space where people feel safe and supported in sharing their digital innovation ideas and suggestions (100%). Providing clear assurances and information about patient confidentiality, data privacy and security was repeatedly described by participants as essential and totally non-negotiable.
Principles that failed to achieve consensus included: co-designing engaging activities and evaluation methods (50% essential vs 50% desirable); building in sufficient time and resources (57% vs 43%); developing a feedback loop (71% vs 29%); and advertising the potential benefits of being involved (43% vs 57%). One prevailing justification for categorising principles as desirable was a perceived disparity between ‘preferable’ or ‘desirable’ best practice and the ‘realities’ of implementing such principles. With the exception of one principle categorised as irrelevant by one innovator, no other principles were considered irrelevant, and no alternative principles were suggested by expert respondents. This research advances existing knowledge and understanding by providing previously unavailable consensus on the principles that underpin meaningful patient and public involvement in digital health innovation, implementation and evaluation. Expert respondents suggest such principles should be aspired to wherever possible to optimise involvement. However, stakeholders need to be sufficiently supported and operating within adequately resourced and receptive environments if high-quality involvement is to become a reality. Failure to do so may mean that patient and public involvement in digital health remains a rarely practised, or tokenistic, exercise that jeopardises the relevance, value and utility of innovations, ultimately endangering the desired vision of digital health creating sustainable and equitable health care systems on an international scale.
Publisher: Wiley
Date: 08-05-2022
DOI: 10.1111/HEX.13506
Abstract: The importance of meaningfully involving patients and the public in digital health innovation is widely acknowledged, but often poorly understood. This review therefore sought to explore how patients and the public are involved in digital health innovation and to identify factors that support and inhibit meaningful patient and public involvement (PPI) in digital health innovation, implementation and evaluation. Searches covering 2010 to July 2020 were undertaken in the electronic databases MEDLINE, EMBASE, PsycINFO, CINAHL, Scopus and ACM Digital Library. Grey literature searches were also undertaken using the Patient Experience Library database and Google Scholar. Of the 10,540 articles identified, 433 were included. The majority of included articles were published in the United States, United Kingdom, Canada and Australia, with representation from 42 countries highlighting the international relevance of PPI in digital health. 112 topic areas where PPI had reportedly taken place were identified; those most often described were cancer (n = 50), mental health (n = 43), diabetes (n = 26) and long-term conditions (n = 19). Interestingly, over 133 terms were used to describe PPI, yet few were explicitly defined. Patients were most often involved in the final, passive stages of an innovation journey, for example usability testing, where the ability to proactively influence change was severely limited. Common barriers to achieving meaningful PPI included data privacy and security concerns, not involving patients early enough and a lack of trust. Suggested enablers were often designed to counteract such challenges. PPI is largely viewed as valuable and essential in digital health innovation, but rarely practised. Several barriers exist for both innovators and patients, which currently limit the quality, frequency and duration of PPI in digital health innovation, although improvements have been made in the past decade.
Some reported barriers and enablers, such as the importance of data privacy and security, appear to be unique to PPI in digital innovation. Given its reported benefits and impacts, greater efforts should be made to support innovators and patients to become meaningfully involved in digital health innovations from the outset. Stakeholder consensus on the principles that underpin meaningful PPI in digital health innovation would be helpful in providing evidence-based guidance on how to achieve this. This review has received extensive patient and public contributions, with a representative from the Patient Experience Library involved throughout, from the review's conception and design (including suggested revisions to the search strategy) through to article production and dissemination. Other areas of patient and public contributor involvement include contributing to the inductive thematic analysis process, refining the thematic framework and finalizing theme wording, helping to ensure relevance, value and meaning from a patient perspective. Findings from this review have also been presented to a variety of stakeholders, including patients, patient advocates and clinicians, through a series of focus groups and webinars. Given their extensive involvement, the representative from the Patient Experience Library is rightly included as an author of this review.
Publisher: Ovid Technologies (Wolters Kluwer Health)
Date: 2018
DOI: 10.1097/CEH.0000000000000219
Abstract: Over the past 10 years, a number of systematic reviews have evaluated the validity of multisource feedback (MSF) to assess and quality-assure medical practice. The purpose of this study is to synthesize the results from existing reviews to provide a holistic overview of the validity evidence. This review identified eight systematic reviews evaluating the validity of MSF published between January 2006 and October 2016. Using a standardized data extraction form, two independent reviewers extracted study characteristics. A framework of validation developed by the American Psychological Association was used to appraise the validity evidence within each systematic review. Each of the eight reviews demonstrated evidence across at least one domain of the American Psychological Association's validity framework. Evidence of assessment validity within the domains of "internal structure" and "relationship to other variables" has been well established. However, evidence in the domains of content validity (ie, ensuring that MSF tools measure what they are intended to measure), consequential validity (ie, evidence of the intended or unintended consequences MSF assessments may have on participants or wider society), and response process validity (ie, the process of standardization and quality control in the delivery and completion of assessments) remains limited. Evidence for the validity of MSF has, across a number of domains, been well established; however, the size and quality of the existing evidence remain variable. To determine the extent to which MSF is considered a valid instrument to assess medical performance, future research is required to determine the following: (1) how best to design and deliver MSF assessments that address the identified limitations of existing tools and (2) how to ensure that involvement within MSF supports positive changes in practice.
Such research is integral if MSF is to continue to inform medical performance and subsequent improvements in the quality and safety of patient care.
Location: United Kingdom of Great Britain and Northern Ireland
No related grants have been discovered for Arunangsu Chatterjee.