Usability Testing: Ensuring User-friendly Data Collection in Clinical Trials

Making Medical Questionnaires User-friendly

As electronics have become increasingly prevalent in our daily lives, we have all experienced feelings of bewilderment when faced with new technology. In our daily technological interactions, the stakes of fluency are typically low; we may miss an incoming phone call, or struggle to use Alexa to change the thermostat temperature. For patients participating in a clinical trial, the consequences of technological confusion can be catastrophic, and accessing technical support may not be as easy as a quick call to a sibling.

Collecting accurate and representative data from patients is vital, whether in the context of clinical data collection that will shape a treatment plan, or a global clinical trial whose data quality could determine the outcome of a drug development application. To collect high-quality patient-reported data, patients must understand both the questions they are being asked and how to use the study's device to answer those questions. Vitally, they need to learn how to use the device before the active data collection period begins, since early sessions may otherwise be biased by practice and learning effects.

How can we ensure patients of any age, in any part of the world, and with any educational background, will provide accurate data via electronic devices in these global trials? How can we ensure the burden of navigating these unfamiliar technologies does not lie entirely with the user? Let’s first look at the primary obstacles that may interrupt a seamless patient experience with these devices.

Many global clinical trials use tablets pre-loaded with an app that prompts users at set intervals, so that they can respond to patient-reported outcomes (PRO) assessments and symptom diaries, both of which are forms of electronic clinical outcomes assessments (eCOA). The strength of these PRO questionnaires is the volume of research and testing done to prove that they are reliable and valid. However, these questionnaires were created for use as paper surveys. Migrating them doesn't just mean copy-pasting the text from a paper survey into an app template. For the migration to succeed, the electronic version must be as easy to answer as the paper version of the questionnaire. Migration onto the patient-facing app should be done carefully to preserve the reliability and validity of the questionnaires, as even minor differences across formats can result in biased data [1].

When considering migration of PROs from their original paper format to an app, the choice of questionnaires and the timing of data collection are just the beginning. These apps are configured with key customisable elements that can be altered to match each study’s design and data collection needs. A representative list of elements is in Table 1.

Each of these interface elements has the potential to derail a new user. Elements that feel intuitive to everyday technology users may not be intuitive to first-time users. If you multiply the interface design elements by the number of languages included in these global trials – with their variation in word length, specific font needs, text direction, and line-break rules – you'll come up with a very large number of considerations. These complexities are exacerbated in a bring your own device (BYOD) study, a popular new study setup in which patients use their own personal cell phones rather than being given a dedicated phone or tablet to take home just for the study questionnaires. BYOD studies are able to use a single app across multiple devices by programming a responsive interface design that may shrink the font size, introduce line breaks, or alter the menu appearance depending on the visible area of the screen used to access it.
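
As a rough illustration, the kind of responsive rule a BYOD app might apply could look like the following sketch. This is a hypothetical TypeScript fragment; the breakpoints, sizes, and names are illustrative assumptions, not drawn from any particular eCOA platform.

// Hypothetical responsive layout rules for an eCOA questionnaire screen.
// Breakpoint and size values are illustrative, not from a real platform.
interface LayoutConfig {
  fontSizePx: number;          // base font size for question text
  wrapResponseLabels: boolean; // introduce line breaks on narrow screens
  compactMenu: boolean;        // collapse the menu on small viewports
}

function layoutFor(viewportWidthPx: number): LayoutConfig {
  if (viewportWidthPx < 360) {
    // Small phones: shrink text, wrap labels, collapse the menu.
    return { fontSizePx: 14, wrapResponseLabels: true, compactMenu: true };
  }
  if (viewportWidthPx < 768) {
    // Larger phones and small tablets.
    return { fontSizePx: 16, wrapResponseLabels: true, compactMenu: false };
  }
  // Tablets and larger: full layout.
  return { fontSizePx: 18, wrapResponseLabels: false, compactMenu: false };
}

Every branch in such a rule is a place where two patients may see the same question rendered differently, which is precisely why each configuration needs to be checked.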

In short, there are a lot of boxes to check before claiming a particular platform or device will be user-friendly for all patients. If minor or moderate changes to the PRO questionnaire(s) were needed during the migration, the gold-standard way to collect evidence that the migration from paper to user-friendly app succeeded is usability testing [3]. Usability testing is the patient-oriented evaluation of a migrated questionnaire, often moved from paper to a software interface, presented to patients representative of your study's population in a semi-structured interview process. Usability testing provides a moderate level of evidence that the paper and electronic versions of a PRO questionnaire are equivalent.

By conducting high-quality usability testing before a global clinical trial, sponsors improve both data quality and the patient experience of using electronic devices in the trial. Usability testing is a straightforward solution that has failed to become common practice for a good reason: former methodologies were time-consuming and largely ineffectual.

Usability Testing: An Improved Method

Usability testing was originally recommended to verify that PROs being migrated from paper to another format, such as an app, had the same properties of reliability and validity in both formats [3]. A key element of this process was the use of cognitive debriefing interviews around the text of the PRO questionnaire. Cognitive debriefing interviews are a common means of confirming that representative users will understand translations used for data collection. These interviews involve a phrase-by-phrase paraphrasing task of the translation with representative groups of patients, meticulous qualitative data collection during the interview process, and qualitative analysis of the patients' feedback. The intention of cognitive debriefing during this process is to determine "whether subjects are interpreting and responding to the items the same way on the new mode as they would on the mode from which the instrument was migrated" [4, p.509].

Unfortunately, cognitive debriefing does not give good data about the usability of the new app, or direct feedback on the migration from paper to eCOA [4,5]. In our talk at CRF Health's eCOA Forum in 2017, we showed that cognitive debriefing distracted subjects from the two topics of interest: how usable and user-friendly the new mode of administration is, and how the migration affected their interpretation of the PRO. Instead, 64% of patients' feedback was about the content of the questionnaires, which is psychometrically validated and therefore cannot be changed [5]. Because the content of eCOA questionnaires is psychometrically validated and cannot be revised, there is little point in collecting patient feedback on that content.

Figure 1: Reproduced from our 2017 poster [5]. Most feedback (64%) was on the content of the questionnaire, which cannot be changed, so it was irrelevant for the purposes of testing; 35% of feedback related to the user experience, while only 1% addressed the migration to a new mode of administration.

Figure 2: With the revised method, 73% of feedback is relevant to the user experience, and 24% relates directly to patients' opinions and thoughts about the migration of the questionnaire.

We decided to shift the emphasis of the usability testing method away from cognitive debriefing and towards a semi-structured interview format that incorporates active engagement with the device. Active engagement is a missing element of many usability testing studies; it requires that representative patients hold the novel mode of administration in their hands and interact with the device as they would in the field. Though a common shortcut, using screenshots or printed mockups of the planned app will not suffice. The different modes of administration should be shown in random order to different patients, to offset and isolate any effects of presentation order.
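
As a concrete example, assigning each patient a random mode order can be as simple as the sketch below (a hypothetical TypeScript helper; the mode names are illustrative):

// Hypothetical per-patient randomisation of the order in which the two
// modes of administration are presented, to offset order effects in aggregate.
type Mode = "paper" | "app";

function presentationOrder(): Mode[] {
  // Coin flip: half of patients see paper first, half see the app first.
  return Math.random() < 0.5 ? ["paper", "app"] : ["app", "paper"];
}

In a real study the assignments would typically be pre-generated and logged rather than drawn on the fly, so that the order each patient saw can be recovered during analysis.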

To honour recommendations in the literature, we do a brief cognitive debriefing activity with any text that has been edited for use in the eCOA platform (e.g. “Turn the page” in a paper format might become “Press next” in an app format). This ensures meaning has been preserved in a new context. Each usability testing process has a customised interview script, with questions addressing each of the customisable elements listed above. To create these scripts, a survey research expert reviews the paper source text, the migrated questionnaire, and any user interface steps required to access, answer, and submit the migrated questionnaire. All usability testing interview scripts contain questions about the primary customisable elements of an eCOA questionnaire. Afterwards, all users are asked a series of questions about elements that might have impacted the migration of the questionnaire. In particular, migration-focused questions ask about questionnaire layout, questionnaire navigation, patient preferences between formats, and questions about data collection over time using each format.
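
To make that structure concrete, one could imagine the script organised roughly as follows. This is a hypothetical TypeScript sketch; the element names and questions are illustrative, not our actual script.

// Hypothetical structure for a migration-focused interview script.
interface ScriptItem {
  element: string; // the customisable element under review
  question: string;
}

const migrationQuestions: ScriptItem[] = [
  { element: "layout", question: "How does the on-screen layout compare with the paper page?" },
  { element: "navigation", question: "How did you move between questions, and was anything unclear?" },
  { element: "preference", question: "Which format would you rather use every day, and why?" },
  { element: "repeated use", question: "How would answering in this format feel over several weeks?" },
];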

Our 2017 results, from a process that began with the time- and energy-intensive cognitive debriefing interviews, had 64% of feedback on questionnaire content, with only 35% of patient feedback focused on the user experience and 1% focused on migration. By contrast, the new, migration-focused interview script shows an inversion of these results: 73% of patient feedback now focuses on usability, with an additional 24% focused on migration. Only 3% of feedback is now on the unchangeable questionnaire content.

With our revised methods, 24% of patient feedback now relates directly to the migration of the questionnaire. This includes qualitative information on patients' subjective experiences of interfacing with each format, most commonly paper vs. touchscreen device(s). Once patients were less concerned with the content of the questions on the PRO, they were able to focus on the actual experience of interfacing with the questionnaire. Patients commented on differences in scoring, how easy it was to answer questions, and how the placement of response options changed their strategy when answering questions.

This technique echoes the methodology for user acceptance testing during software development, where text is proofread separately from reviewing the interface functionality.

Equally valuable, 73% of patients' feedback now relates directly to key elements explored by usability testing: the user experience while answering the eCOA questionnaire; the friendliness and ease of use of the interface; and any technical issues related to the patients' physical or ability limitations. Often, patients' feedback directly corresponded to particular design elements listed in Table 1. In particular, patients gave a lot of feedback about navigating the questionnaire and response options; button size and placement; and the particular device(s) they reviewed. In one case, our client actually changed the device they were going to use for the study, and developed a strategy for supporting visually impaired patients.

By redirecting the interview questions away from the content of the questionnaire and towards the experience of answering it, we helped patients better understand what information and perspectives we were looking for. Over the course of the interview, patients progressively became more interested in representing the experience of "patients like me," and would factor in their particular personal challenges.

Physical health or ability can strongly interfere with a patient's ability to use touchscreen technology. Accessibility guidelines and tools are ever-evolving to provide comparable technological access to users with vision, hearing, and dexterity impairments. But these advances are still voluntary additions to standard interface design, and not yet consistently implemented in global trials. Usability testing data can help highlight potential gaps in design for your study in advance of fielding the questionnaires, when changes or revisions would be costly in terms of both time and money.
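
For instance, a simple pre-fielding check against a target-size guideline might look like the sketch below. This is hypothetical TypeScript; the 44-pixel threshold follows the target-size success criterion in WCAG 2.1 [2], and the data structure is an illustrative assumption.

// Hypothetical check that flags touch targets smaller than a minimum size.
// WCAG 2.1 (success criterion 2.5.5, level AAA) recommends targets of at
// least 44 by 44 CSS pixels.
interface TouchTarget {
  id: string;
  widthPx: number;
  heightPx: number;
}

function undersizedTargets(targets: TouchTarget[], minPx = 44): TouchTarget[] {
  return targets.filter(t => t.widthPx < minPx || t.heightPx < minPx);
}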

In the ACHI article, "Touch-Screens and Elderly Users: A Perfect Match?," the authors note that "...current touchscreen designs rest on some assumptions that are often not present for elderly users: they are sometimes not able to see what to do on a small screen, nor are they able to push a small area in a way which is not too hard or soft, or too long or short. The elderly users' abilities are different from person to person and they change over time." [7] Even a user's hand moisture and temperature can impact how a touchscreen responds, resulting in decreased responsiveness for the elderly and for users with circulatory ailments.

In current practice, reporting physical limitations due to health, age, or disability is also a burden on the patient. If users don't report these issues, they're likely to go unnoticed. Think of the stress you've felt when faced with a poorly functioning piece of technology – a remote control with dying batteries, a phone requiring OS updates – and imagine that frustration within the context of a clinical trial. The likelihood of a disgruntled user discontinuing data collection, or recording data at inconsistent intervals, is an undesirable but probable outcome. Some users will be more affected by these issues than others, and data collection may be biased as lower-functioning or poorer-health patients differentially drop out of the clinical trial.

These issues may apply even in BYOD studies, due to increased patient burden. The attractiveness of a BYOD study relies, in part, on the assumption that users are most comfortable with their personal devices. The efficiency of a tablet-driven eCOA study relies on the assumption that, with minimal training, all patients will successfully report their experiences via touchscreen technology. However, the lack of device familiarity due to economic constraints in certain populations undercuts these assumptions, and in some cultures women generally have less access to technology [8]. Excluding respondents without access to their own smartphone, or without extensive experience using a tablet, will bias study results by including only patients of higher economic status and educational background. Even provisioned devices cannot completely resolve the issue in current practice, since their usability is, yet again, often supported by minimal written documentation, with site support provided only when the patient requests assistance.

The only solution to these problems is to adopt a patients-first, forward-thinking attitude towards technology solutions for data collection. Empower the patients with active support during fielding, but head off potential issues before data collection ever begins through usability testing. Usability testing, done effectively, can improve data collection and the patient experience. If you have done usability testing with past studies and been unimpressed by the results, try it again using the refocused methodology. You might be surprised.

REFERENCES

1. Delgado-Rodriguez M, Llorca J. Bias. Journal of Epidemiology & Community Health 58:635-641 (2004).

2. W3C (MIT, ERCIM, Keio, and Beihang). Web Content Accessibility Guidelines (WCAG) 2.1. June 5, 2018. https://www.w3.org/TR/2018/REC-WCAG21-20180605/, visited on 10/4/18.

3. Coons SJ, Gwaltney CJ, Hays RD, et al. Recommendations on Evidence Needed to Support Measurement Equivalence between Electronic and Paper-Based Patient-Reported Outcome (PRO) Measures: ISPOR ePRO Good Research Practices Task Force Report. Value in Health 12(4): 419-429 (2009).

4. Eremenco S, Coons SJ, Paty J, et al. PRO Data Collection in Clinical Trials Using Mixed Modes: Report of the ISPOR PRO Mixed Modes Good Research Practices Task Force. Value in Health 17(5): 501-516 (2014).

5. Prince, Rebecca. Assessing the Effectiveness of Cognitive Debriefing in ePRO Usability Testing. eCOA Forum, Basel, Switzerland (June 13, 2017).

6. Prince, Rebecca; Yohe Moore, Elizabeth; Brandt, Barbara; Poepsel, Tim; McKown, Shawn. Exploratory Analysis of the Effectiveness of Cognitive Debriefing during Usability Testing of Questionnaire Format Migration. ISOQOL, Philadelphia, PA (October 18-21, 2017).

7. Culén, Alma Leora; Bratteteig, Tone. Touch-Screens and Elderly Users: A Perfect Match? ACHI, Nice, France (February 24 - March 1, 2013).

8. Simpson-Finch, Hayley; Yohe Moore, Elizabeth; Heinzman, Alisa; et al. Linguistic and Cultural Considerations when Implementing a Global 'Bring Your Own Device' (BYOD) Study. ISPOR Asia, Tokyo, Japan (September 8-11, 2018).

Elizabeth Yohe Moore, MPH

Elizabeth Yohe Moore is COA Process Development Lead at RWS Life Sciences. She is a mixed methods research methodology professional with eight years of healthcare industry experience. Elizabeth received a BA in neuroscience and economics from Oberlin College and an MPH from Northwestern University. She is currently pursuing an MBA at University of Chicago.

Email: [email protected]

Alisa Heinzman

Alisa Heinzman is an Independent Senior Project Manager and eCOA Specialist at RWS Life Sciences. She applies her technical background in human-centred web design and development to the development of best practices for the migration and usability testing of translated eCOAs, particularly regarding the responsive and mobile-friendly platforms used in BYOD studies. She received an MFA in creative writing from Saint Mary’s College of California and a BA in French language and literature and English from University of Nebraska in Lincoln.

Email: [email protected]

Barbara Brandt, MA

Barbara Brandt is Survey Research Analyst Team Lead at RWS Life Sciences. Barbara has 17 years of experience in COAs, linguistic validation, translatability assessment, and electronic COAs and usability testing. She has a BA in linguistics and psychology and an MA in survey research from the University of Connecticut.

Email: [email protected]

Tim Poepsel, PhD

Dr Tim Poepsel is a Survey Research Analyst at RWS Life Sciences. He has a BA in linguistics from Northwestern University, US, and a PhD in psychology and language science from Penn State University, US. Tim specialises in the area of linguistic validation, and his research focusses on the patient perspective in the development and administration of clinical outcome assessment (COA) instruments, especially as it relates to comprehensibility and translatability.

Email: [email protected]

Elizabeth McCullough, MA

Elizabeth McCullough is the Manager of Linguistic Validation Services at RWS Life Sciences. Elizabeth has worked in the areas of medical translation and linguistic validation for 10 years. She holds an MA in translation from the Institute for Applied Linguistics at Kent State University, US.

Email: [email protected]

Shawn McKown, MA

Shawn McKown is the Senior Director and Practice Lead of Linguistic Validation at RWS Life Sciences. Shawn has worked with COAs, patient-reported outcomes, and linguistic validation within the clinical trial space for over 15 years and holds an MA from the University of Chicago, US.

Email: [email protected]