Annals of Indian Academy of Neurology
ORIGINAL ARTICLE
Year : 2020  |  Volume : 23  |  Issue : 8  |  Page : 130-134
 

Maintaining research fidelity: Remote training and monitoring of clinical assistants in aphasia research


Barnali Mazumdar, N. J. Donovan
Department of Communication Sciences and Disorders, Louisiana State University, Baton Rouge, LA, USA

Date of Submission: 20-May-2020
Date of Acceptance: 26-May-2020
Date of Web Publication: 25-Sep-2020

Correspondence Address:
Ms. Barnali Mazumdar
Department of Communication Sciences and Disorders, Louisiana State University, 422 Hatcher Hall, Baton Rouge, LA 70803
USA

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/aian.AIAN_489_20


 

Abstract


Background: In aphasia research, journals require authors to report fidelity to improve a study's reliability. Aphasia researchers mostly guard against Type I and Type II errors to maintain confidence in their results. However, a third type of error (Type III) can significantly affect study outcomes and call research fidelity into question. Objective: This study explains how investigators maintained research fidelity while hiring and training remote data collectors and conducting multi-site data collection. Methods: The present study used a descriptive analysis design to explicate the three-step process of remote data collection: (1) remotely selecting and training data collectors, (2) remotely supervising data collection and data management, and (3) optimizing and monitoring screening/assessment fidelity. In the first step, investigators interviewed seven candidates and shortlisted four, who were trained using a standard training protocol and participated in a mock data collection. For the next two steps, data collectors video-recorded each study session and e-shared the data with the investigator, who watched all the video-recordings and provided necessary feedback, with a focus on the screening sections. The screenings were part of the inclusion-exclusion criteria. Results: The two data collectors (both clinical psychologists) with the highest scores were selected and received final training. One-to-one e-supervision by the investigator resulted in significant improvement in the data collectors' performance. Only 4% of the total collected sample was excluded, and 99 participants' data were analyzed. Conclusion: The present study adds information on maintaining research fidelity during remote data collection, an area where few studies exist.


Keywords: Remote data collection, remote training and supervision, research fidelity, Type III error


How to cite this article:
Mazumdar B, Donovan NJ. Maintaining research fidelity: Remote training and monitoring of clinical assistants in aphasia research. Ann Indian Acad Neurol 2020;23, Suppl S2:130-4

How to cite this URL:
Mazumdar B, Donovan NJ. Maintaining research fidelity: Remote training and monitoring of clinical assistants in aphasia research. Ann Indian Acad Neurol [serial online] 2020 [cited 2020 Oct 29];23, Suppl S2:130-4. Available from: https://www.annalsofian.org/text.asp?2020/23/8/130/296091


Guest editor's notes: The paucity of speech-language therapists (SLTs) in India compels us to rely on less trained 'clinical assistants' and even the significant others of people with aphasia (PWA). The authors document how investigators can maintain research fidelity while hiring and training 'remote data collectors' for multi-site data collection. The researchers were in the US and the trainees in West Bengal.




Introduction


Evidence-based practice demands empirical data that can only be obtained by conducting studies on different populations. We primarily rely on statistical power and the type of statistical analysis to lower the chances of Type I and Type II errors and thereby increase the level of confidence in study results. However, an additional error, the Type III error, may occur due to an investigator's bias or poor study implementation.[1] Hence, investigators must recruit and train competent data collectors, blinded to the study's research questions and hypotheses, to reduce investigator bias. To ensure proper study implementation, thorough training protocols must be developed, because random factors such as poor sampling, protocol drift, or protocol contamination will otherwise undermine a study's rigor. Data fidelity, a component of methodological integrity, is based on the trustworthiness of the data researchers interpret and report to audiences,[2] and it reduces the chances of Type III errors. These concerns become increasingly crucial in multi-site research, where investigators must rely on remote data collection. This report explicates a research fidelity methodology developed for a study that required the investigators to conduct remote data collection.

Throughout healthcare, evidence has demonstrated the benefits of telehealth and telerehabilitation for appropriate patients under the proper circumstances.[3],[4] The rise in telepractice is a direct response to the increasing demands placed on healthcare professionals: people are living longer and thus require more care due to the accompanying surge in the incidence of chronic disease.[5],[6] Concomitantly, the numbers of healthcare workers and caregivers cannot keep up with healthcare demands. In some cases, healthcare professionals remotely train individuals to provide treatments; in others, they administer the treatments themselves.[7] Remote training is equally essential for data collection when the target population is not easily accessible. In that scenario, investigators use telecommunication or video-conferencing to recruit and train data collectors who can easily access the target population.

In the field of communication disorders, researchers have typically focused on treatment fidelity when reporting a treatment's efficacy or effectiveness.[8] However, in a recent intervention study on people with aphasia (PWA), the researchers emphasized the importance of study implementation fidelity (i.e., methodological integrity), because inappropriate study implementation could cast doubt on the outcomes of assessment and treatment fidelity. Those researchers proposed three aims to assure the quality of data collection: supervising data collection and data management, optimizing and monitoring assessment delivery fidelity, and optimizing and monitoring treatment fidelity. They reported an implementation plan that helped them maintain and improve the study's integrity and results, including participant retention and high reliability scores for the assessors and raters involved.[1] Despite that, most investigators in aphasia research have not explicitly described their implementation or treatment fidelity methods. Hence, aphasia journals now compel authors to report fidelity to improve a study's validity.[9]

The research fidelity methodology described in this report is based on a study conducted in Kolkata, India, while the investigators resided in the United States. The research aimed to identify a culturally appropriate stimulus for a Bangla picture description task designed to elicit connected speech from PWA. Therefore, considering the emphasis on reporting implementation/data fidelity in aphasia research, the purpose of the present report is to describe the steps investigators adopted to (1) recruit and train data collectors for participant sampling and recruitment, (2) supervise the data collection and data management, and (3) optimize and monitor assessment fidelity. We suggest that this information enhances the fidelity of the experimental part of the present study and may serve as a guide for future researchers interested in conducting studies remotely.


Methods


We present a descriptive analysis of the three-step process investigators adopted to conduct the remote data collection, control for Type III error, and maintain data fidelity of the experimental study described above. The study was approved by the university's Institutional Review Board for the recruitment and protection of human participants.

Investigators followed the implementation model provided by Fixsen et al.,[10] which identifies the core implementation components crucial for implementation fidelity: practitioner selection, pre-service and in-service training, and ongoing coaching and supervision. All training and supervising phases were conducted via video-conferencing or electronic supervision (e-supervision), an effective alternative to face-to-face meetings for interviewing or training individuals in clinical settings.[3],[11] In this report, the investigator (BM) was responsible for selecting and remotely training the data collectors on the study protocols and later supervised them during data collection and data management to optimize and monitor data fidelity [Figure 1].
[Figure 1: Three-step process of remote data collection]


Selection and training of data collectors

Investigators advertised the position, with project details, required qualifications, and expected responsibilities, via the job portals of local universities in Kolkata. They received seven applications: five from clinical psychologists and two from linguists. After the preliminary interview, four clinical psychologists were shortlisted based on their previous research experience.

At the next level, investigators developed detailed training protocols covering self-training, telephone screening, study setup, the informed consent process, screening procedures, questionnaire completion, and language sample collection techniques. The training protocols were shared with all four shortlisted individuals along with a demonstration video in which the investigator performed all the necessary data collection steps. During this phase, all four were asked to begin telephone screening with interested individuals to identify how many potential participants each could access.

After the preliminary training, investigators arranged for each shortlisted individual to perform a mock data collection with a mock participant who was representative of the target population. Before the mock data collection, the investigator performed the entire process with the mock participant to acquaint that individual with the study materials and steps. The mock participant was trained to mimic the behavior of a difficult participant, which would challenge the candidates' skills and allow the investigators to identify candidates capable of handling difficult situations that might arise during actual data collection. The mock participant scored each candidate, using a scoring sheet, on adherence to the study protocols and on their handling of unanticipated situations. Each candidate video-recorded the mock session and shared it with the investigator (BM) for the final selection process. Investigators selected the data collectors based on (1) their performance during the mock data collection, (2) the scores provided by the mock participant, and (3) the total number of potential participants each had identified. Finally, two clinical psychologists (native Bangla speakers) were selected; they received additional training on recruiting participants and conducting the data collection process. Investigators provided the data collectors with feedback highlighting any critical details they had missed during the mock process.

Remotely supervising data collection and data management

Training alone is not enough to reduce interviewer or data collector error or bias. Previous research has shown that training combined with supervision is an effective way to minimize such error or bias.[12] Electronic supervision is a common practice in remote settings, where the educator or investigator provides observation and feedback from a distant site via communication technologies such as video-conferencing. It has been reported that the one-to-one model of e-supervision does not affect the nature of the supervisory relationship.[3] Therefore, the investigators of the present study followed the one-to-one model of e-supervision using a systematic approach. The data collectors video-recorded each study session and uploaded it to a secure, web-based service that only the investigator (BM) could access; she watched all the video-recordings to identify any drift from the study protocols or contamination of the guidelines. After screening the data, investigators shortlisted the participants whose data would be included in the final analysis and gave the data collectors feedback so that similar mistakes could be avoided.

The investigators created and e-shared detailed spreadsheets with the data collectors. The data collectors updated the spreadsheets to inform the investigators about their weekly recruitment progress, the demographic details of each recruited participant along with individual screening scores, information about excluded participants with exclusion reasons, and study-related expenses. Investigators monitored the updates and tracked the ongoing data collection every week to provide necessary feedback. A hypothetical sketch of such a tracking sheet follows.
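The paper does not publish the actual layout of these spreadsheets, so the following is only a minimal sketch; the column names and the helper function are illustrative assumptions drawn from the description above.

```python
# Hypothetical weekly tracking sheet; field names are assumptions,
# not the authors' actual spreadsheet layout.
import csv

FIELDS = [
    "week",                     # reporting week
    "participant_id",           # de-identified participant code
    "age", "gender",            # demographic details
    "vision_screen",            # pass/fail
    "color_vision_screen",      # pass/fail
    "cognitive_screen_score",   # numeric score
    "depression_screen_score",  # numeric score
    "included",                 # True/False after screening
    "exclusion_reason",         # blank when included
    "expenses",                 # study-related expenses for the session
]

def append_record(path, record):
    """Append one participant record; write the header on first use."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow(record)
```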

Optimizing and monitoring screening/assessment fidelity

To ensure screening/assessment fidelity, the data collectors thoroughly read the associated manuals. As trained clinical psychologists, both data collectors had previous experience using screeners and assessments. Four screenings were used as inclusion criteria: (1) vision screening, (2) color vision screening, (3) cognitive screening, and (4) depression screening. Data collectors watched multiple web-based training videos for each screening, along with the demonstration video prepared by the investigator (BM). To monitor their performance, investigators closely observed the screening portion of each data collection video and provided necessary feedback on administration and scoring.


Results


Selection and training of data collectors

The data collectors were selected based on three categories: (1) their performance during the mock data collection, (2) the scores provided by the mock participant, and (3) the total number of potential participants each had identified. The categories were weighted according to their priority in the final selection score. For the third category, the candidate who reported the highest number of potential participants received the full category score; the other three candidates' scores were calculated from how far their counts fell below the highest scorer's [Table 1]. A hypothetical sketch of this scoring follows the table.
[Table 1: Scoring pattern of the data collector selection process]
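The exact scoring formula is not published; the sketch below assumes a simple proportional scheme for the third category, in which the top recruiter receives the full category score and the others are scaled down by their shortfall from the top count.

```python
# Hypothetical scoring for selection category 3 (number of potential
# participants identified). Proportional scaling is an assumption; the
# paper states only that scores were based on differences from the top.

def category3_scores(potential_counts, full_score=10.0):
    """Scale each candidate's score relative to the highest count."""
    top = max(potential_counts.values())
    return {name: full_score * n / top
            for name, n in potential_counts.items()}

# Illustrative counts for the four shortlisted candidates:
print(category3_scores({"A": 20, "B": 15, "C": 10, "D": 5}))
# -> {'A': 10.0, 'B': 7.5, 'C': 5.0, 'D': 2.5}
```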


Remotely supervising data collection and data management

Investigators provided feedback on data entries biweekly and resolved data-related concerns in discussion with the data collectors. The amount of feedback needed decreased for both data collectors after the first few data collection sessions, and data-entry accuracy reached 100%. After screening all the collected data, investigators excluded 4% of the total sample and retained 99 participants for the final analysis. Data were excluded when data collectors deviated from the study protocols in ways that affected methodological accuracy; examples include spending more than the allotted time on a few subsections of the cognitive screening and over-explaining the picture description task to certain participants.

Optimizing and monitoring screening/assessment fidelity

Data collectors' screening/assessment delivery was monitored by observing all the video-recordings. In addition, a screening/assessment fidelity log was maintained for each data collector, recording the investigator's feedback and whether the assessment administration guidelines were followed. Reviewing those logs showed that the number of feedback items declined to zero from month 1 through month 4, the period the data collectors needed to meet the target sample size. A hypothetical sketch of such a log entry follows.
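As a rough illustration, a per-session fidelity log entry might be structured as below; the field names and dates are assumptions based on the description above, not the authors' actual log.

```python
# Hypothetical per-session fidelity log entry.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FidelityLogEntry:
    data_collector: str        # e.g., "DC1"
    session_date: str          # ISO date of the recorded session
    screening: str             # which screening was administered
    guidelines_followed: bool  # did administration match the manual?
    feedback: List[str] = field(default_factory=list)  # investigator notes

log = [
    FidelityLogEntry("DC1", "2020-01-10", "cognitive", False,
                     ["Exceeded allotted time on one subsection"]),
    FidelityLogEntry("DC1", "2020-04-15", "cognitive", True),  # no feedback
]

# Counting entries with non-empty feedback per month would reproduce the
# decline to zero reported over months 1-4.
```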


Discussion


Maintaining fidelity in study implementation and assessment administration is crucial to ensure the validity of research findings. Although it is crucial in aphasia studies to report the fidelity components, most researchers do not describe them systematically.[1] Without this information, a study's validity may be questionable. Because remote data collectors carried out the data collection for the present study, it was essential for the investigators to devise a detailed plan for maintaining data fidelity. Additionally, the outcomes of the experimental part of the present study will be useful for understanding the spontaneous language production of neurologically healthy native Bangla speakers and will help future researchers understand the linguistic difficulties Bangla-speaking PWA experience. Hence, the present study reports data fidelity in keeping with the requirements of aphasia research.

Selection and training of data collectors

Existing research reports that appropriate selection and training of data collectors reduce the chances of Type III error, which improves study validity.[1],[13] Therefore, the investigators of the present study placed primary focus on baseline training of the data collectors to strengthen the reliability of the methodological design. At the end of the data collection process, the data collectors gave their opinions about the training and study protocols, reporting that the entire design was easy to understand and apply. They also noted that the telephone-screening process, which they had begun before being selected for this project, helped them gauge the feasibility of the study and the accessibility of the target sample.

Remotely supervising data collection and data management

As mentioned previously, video-conferencing is a common practice in remote supervision and was used in the present study. Video-recording each data collection session was helpful for fidelity documentation[8] as well as for follow-up training. This method heightened the data collectors' awareness of protocol deviations and increased their understanding of the subtle aspects of the data collection process.[14] The only disadvantage of e-supervision was intermittent internet connectivity, a problem also noted in a previous study.[3] It was mutually resolved by adopting other strategies, such as regular meetings and e-mails, to mitigate the connectivity issues. The initially adopted data management strategies were precise yet comprehensive, and the data collectors followed them until the end of data collection, with minor revisions to the list of excluded potential participants. Those changes were necessary because a few potential participants did not disclose their existing neurological conditions until the data collectors met them for the actual data collection; those individuals were subsequently excluded from the study before their final participation.

Optimizing and monitoring screening/assessment fidelity

It is always beneficial to track assessment delivery,[1] especially when screenings and assessments serve as inclusion-exclusion criteria. However, monitoring assessment fidelity can be a time-consuming and expensive process if investigators hire trained individuals to do it.[15] Therefore, the investigator (BM) screened all the videos herself and prepared feedback for the data collectors for further improvement.


Conclusion


It is critical to consider Type III error when designing studies so that we can be confident about the validity of the data and the inferences we make from them. To date, researchers have begun to address treatment fidelity, but few have specifically addressed study fidelity. As remote technologies improve, it will become increasingly possible for investigators to conduct research from afar. Therefore, ensuring that data collectors are appropriately recruited, trained, and able to manage the data is critical. The present remote data collection method also made the project time-efficient and cost-effective, and recruiting data collectors from different areas of Kolkata spread awareness about aphasia within multiple communities.

Lastly, according to Breitenstein et al.,[15] few researchers have developed and reported comprehensive study implementation fidelity plans. Hence, the present study reports the steps of remote training and remote data collection while maintaining implementation fidelity, an area of aphasia research where few similar studies exist.

Acknowledgements

We wish to thank Ms. Ipsita Modak for translating the study-related questionnaire and feedback sheet into Bangla. We would also like to thank Ms. Mousumi Dey (mock participant) and Dr. Prasenjit Dey (demo participant) for participating in this study.

Financial support and sponsorship

Louisiana State University Dissertation Fellowship.

Conflicts of interest

There are no conflicts of interest.



 
References

1. Spell LA, Richardson JD, Basilakos A, Stark BC, Teklehaimanot A, Hillis AE, et al. Developing, implementing, and improving assessment and treatment fidelity in clinical aphasia research. Am J Speech Lang Pathol 2020;29:286-98.
2. Levitt HM. Methodological integrity: Establishing the fidelity and utility of your research. In: Levitt HM, editor. Reporting Qualitative Research in Psychology: How to Meet APA Style Journal Article Reporting Standards. Rev ed. Washington, DC: American Psychological Association; 2020. p. 29-41.
3. Chipchase L, Hill A, Dunwoodie R, Allen S, Kane Y, Piper K, et al. Evaluating telesupervision as a support for clinical learning: An action research project. High Educ Acad 2014;2:40-53.
4. Hall N, Boisvert M, Steele R. Telepractice in the assessment and treatment of individuals with aphasia: A systematic review. Int J Telerehabil 2013;5:27-37.
5. Greengross S, Murphy E, Quam L, Rochon P, Smith R. Aging: A subject that must be at the top of world agendas. The aging of populations demands major changes across society and health care. Br Med J 1997;315:1029-30.
6. Towle A. Changes in health care and continuing medical education for the 21st century. Br Med J 1998;316:301-4.
7. Wearne S, Dornan T, Teunissen PW, Skinner T. Twelve tips on how to set up postgraduate training via remote clinical supervision. Med Teach 2013;35:891-4.
8. Kaderavek JN, Justice LM. Fidelity: An essential component of evidence-based practice in speech-language pathology. Am J Speech Lang Pathol 2010;19:369-79.
9. Brogan E, Ciccone N, Godecke E. Treatment fidelity in aphasia randomised controlled trials. Aphasiology 2019;33:759-79.
10. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation Research: A Synthesis of the Literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication 231); 2005.
11. Krouwel M, Jolly K, Greenfield S. Comparing Skype (video calling) and in-person qualitative interview modes in a study of people with irritable bowel syndrome: An exploratory comparative analysis. BMC Med Res Methodol 2019;19:1-9.
12. Titus JC, Smith DC, Dennis ML, Ives M, Twanow L, White MK. Impact of a training and certification program on the quality of interviewer-collected self-report assessment data. J Subst Abuse Treat 2012;42:201-12.
13. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston, MA: Houghton-Mifflin; 2002.
14. Robb SL, Burns DS, Docherty SL, Haase JE. Ensuring treatment fidelity in a multi-site behavioral intervention study: Implementing NIH Behavior Change Consortium recommendations in the SMART trial. Psycho-Oncology 2011;20:1193-201.
15. Breitenstein SM, Gross D, Garvey CA, Hill C, Fogg L, Resnick B. Implementation fidelity in community-based interventions. Res Nurs Health 2010;33:164-73.

