
Coaching S.M.A.R.T.er: A Type III Hybrid intervention to improve classroom delivery of an evidence-based intervention

Team Awesome (Laura Balis, Fabiano Brito, Hannah Lane, Aurielle Lowery, Meagan Van Engen)

Final Grant Proposal, HNFE 6984, 8 May 2015

95%--excellent job!!!

Specific Aims

Childhood obesity continues to be a leading public health concern, with 18% of 6-11 year olds qualifying as overweight (BMI percentile > 85th) and an additional 17% qualifying as obese (BMI percentile > 95th).1 One contributor to this complex disease is recreational sedentary screen time (RSST).2 Evidence suggests that reducing RSST decreases childhood obesity and behavioral issues in children, as well as improves school performance.3-5 As a result of this evidence, the American Academy of Pediatrics (AAP) recommends that children spend no more than two hours per day on screen time.6 Reduced RSST has been effectively achieved by several intervention programs and is a recommended strategy to reduce childhood obesity according to the CDC's Community Guide.7,8 One such program is Student Media Awareness to Reduce Television (S.M.A.R.T.), a classroom-based, teacher-delivered curriculum developed using the Social Cognitive Theory that reduced screen time among 3rd- and 4th-grade children.4

S.M.A.R.T. is an evidence-based intervention (EBI);8 therefore, its implementation across various “real world” contexts is a logical next step to combat childhood obesity on a large scale. However, the efficacious results of the S.M.A.R.T. study were generated in a randomized controlled trial, which by design prioritizes internal validity by controlling for contextual factors in order to establish causality.9 Thus, information about the degree to which the results have external validity, or are generalizable to diverse populations and settings with varying contexts, is limited.9 In order to ensure “real world” effectiveness of S.M.A.R.T., implementation studies that utilize a more applied research framework and design are warranted.10

One framework is RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance), which can account for context by assessing the characteristics of a new population and setting that are necessary to maintain program effectiveness.11 A novel study design, the effectiveness-implementation hybrid (type III), measures (rather than controls for) the contextual factors that inform implementation of EBIs across populations.10 This hybrid design allows for the prioritization of implementation-related outcomes while also measuring continued effectiveness.10 One implementation-related outcome is fidelity, or the extent to which an EBI is implemented as it was intended in a new setting and population. Delivering an EBI with high fidelity is crucial to improving both internal and external validity. It can help prevent Type III error, or falsely concluding that an intervention was not successful when in fact it was not implemented properly.11,12 Further, it informs external validity by providing information on the ease with which an EBI can be adapted or replicated in new settings.11,12

One method to improve intervention fidelity, and thereby decrease Type III error, is to focus on the delivery agents and ensure that they have proper training, knowledge, and self-efficacy prior to delivery of the program.13-16 While the characteristics that constitute “proper training” for high fidelity are understudied in the healthcare field, research shows that initial trainings alone are often ineffective in maintaining intervention fidelity. Edmunds et al. have suggested the use of a model from the education field, ongoing coaching following initial training, to increase fidelity. As this approach has been shown to improve outcomes in both healthcare and education, it is a logical model for improving fidelity of a classroom-based, health-related curriculum such as S.M.A.R.T.13,17

Thus, this study will use a two-group, randomized, type III effectiveness-implementation hybrid design guided by the RE-AIM framework to test our study aims. Effectiveness, as well as the additional RE-AIM indicators of reach, adoption, implementation, and maintenance, will be assessed through a mixed methods approach.18 The target population for the implementation component of our hybrid study, Coaching S.M.A.R.T.er, is 3rd and 4th grade teachers in Montgomery County, VA, and the target population for the effectiveness component is their 3rd and 4th grade students.

The primary aim of this study is to:
● Determine the extent to which Coaching S.M.A.R.T.er, an ongoing coaching model delivered after standard training, improves teachers’ implementation fidelity of S.M.A.R.T., as compared to standard training only.

The secondary aim of this study is to:
● Determine the extent to which the improved fidelity resulting from Coaching S.M.A.R.T.er increases the effectiveness of the S.M.A.R.T. intervention for 3rd and 4th graders, as compared to the fidelity resulting from standard training only.

We hypothesize that teachers’ implementation fidelity will increase as a result of ongoing coaching, and that improved fidelity will increase the effectiveness of the S.M.A.R.T. intervention for the students. The results of this study can support the spread of an evidence-based program to reduce RSST across diverse, applied populations and settings, representing a critical step in the fight against childhood obesity.


Significance

Obesity continues to be a leading public health concern; its prevalence among children in the United States has doubled in the past 30 years.1 In 2012, more than one-third of children were considered overweight (BMI percentile >85th) or obese (BMI percentile >95th).1 It has been observed that about 40 percent of overweight children will have increased weight gain during adolescence, which can lead to high blood pressure, type 2 diabetes, stroke, cardiovascular disease, metabolic syndrome, and other health complications.3,19

Increased screen time is one of the contributing factors to childhood obesity.19 Research has shown that a substantial amount of children’s daily energy intake is consumed during television viewing.6 Children eight years and older engage in an average of seven hours and eleven minutes of screen media per day.6 It is recommended that children have no more than one to two hours of screen time per day.6 Children who spend two or more hours of screen time per day are more likely to have psychological difficulties, hyperactivity, emotional problems, and difficulties with peers.3 Children from all income levels are spending a substantial amount of time engaging with digital devices and other computer-based activities.3 There is evidence that a reduction in screen time prevents childhood obesity, decreases behavioral problems in children, and improves school performance.5,6

Reduced RSST has been effectively achieved through myriad intervention programs and is a recommended strategy, with sufficient evidence, to reduce childhood obesity according to the Community Guide.7,8 One program, S.M.A.R.T. (Student Media Awareness to Reduce Television), was shown in a randomized controlled trial to reduce screen time among 3rd and 4th grade children, specifically television viewing (p<0.001), video game playing (p<0.05), and meals eaten in front of the television (p=0.01). The intervention also resulted in significant reductions in BMI (p=0.002), triceps skinfold thickness (p=0.002), and waist-to-hip ratio (p<0.001), and it is currently being tested in several additional populations.4,20 In addition, S.M.A.R.T. is a Level 1 evidence-based intervention: it was funded by a peer-reviewed grant and published in several peer-reviewed journals,4,20 is included in several systematic reviews,21-31 and includes strategies from the Community Guide.8

The S.M.A.R.T. study showed high internal validity, that is, it effectively achieved its desired outcomes among study participants in a highly controlled setting with few threats to causality.9 However, little is known about the external validity of S.M.A.R.T., or the degree to which its effects can be translated into other contexts and populations.9 This knowledge is critical in order to consistently and correctly deliver evidence-based programs within applied settings.32 Thus, follow-up implementation studies to evaluate indicators of external validity are warranted. One framework for assessing the external validity of S.M.A.R.T., as well as its continued internal validity within new settings, is RE-AIM (Reach, Efficacy/Effectiveness, Adoption, Implementation, Maintenance).33 The RE-AIM framework has been frequently used, both in designing and evaluating behavioral interventions, in order to enhance translatability of promising study findings to diverse, applied settings and populations.34

Traditional study designs, such as highly controlled effectiveness trials, are by nature less appropriate for assessing external validity factors because they control for, rather than consider, the context of the target population and setting. In recent years, researchers have proposed the use of more pragmatic designs that provide useful information for researchers, practitioners, and decision makers to incorporate evidence-based programs and principles into organizations more efficiently.42 One design is an “effectiveness-implementation hybrid” design, as described by Curran et al., which provides a method for applying RE-AIM to a combined effectiveness/implementation intervention in order to evaluate both internal and external validity. A hybrid study recognizes the need for an intermediary step between a highly controlled effectiveness study and full-scale uptake of the program in applied, “real world” settings.42

The application of the RE-AIM framework using a type III hybrid effectiveness-implementation design allows for the evaluation of the internal and external validity of program delivery and outcomes. Reach, effectiveness, adoption, and maintenance of both evidence-based strategies and the mechanisms used to embed them within organizations are important to assess; however, the key focus of a hybrid design is often the “I” dimension, implementation.42 Implementation, particularly fidelity, which is the degree to which a program or intervention is delivered as intended, is an important yet frequently understudied indicator of internal and external validity of behavioral interventions.35,36 Fidelity is often viewed as a singular construct, mostly considered in efficacy trials.37 However, fidelity should be considered in all types of studies, from highly controlled efficacy trials to D&I studies, and should also be analyzed by its five constructs: adherence, dose, quality of delivery, participant responsiveness, and program differentiation.37

Ensuring proper expertise of the delivery agent, as well as assessing fidelity, is critical to avoid Type III error, that is, the false conclusion that an intervention failed when it was actually not implemented properly.12,32


Further, improving fidelity may improve outcomes—previous studies have shown a correlation between high fidelity and delivery agent satisfaction, self-efficacy, and ability to maintain practices, which in turn correlates with improved participant outcomes; thus, targeting implementation fidelity is critical to the continued internal validity of the intervention.14-16,38 Measuring the fidelity of implementation is also an important component to understanding external validity, as it informs the ease with which the delivery can be adapted and/or replicated in other settings and populations.11, 12

A commonly used method to improve intervention fidelity in the healthcare field is ongoing coaching. Research shows that the modality typically used in school-based interventions such as S.M.A.R.T., initial training alone, may be ineffective in maintaining intervention fidelity.13 The addition of ongoing coaching, defined as an ongoing support strategy following training, has been shown to improve outcomes in healthcare and has also recently been effective in improving classroom management.13,17

Innovation

Our study is innovative in several ways. First, it uses a novel training approach to improve fidelity. As discussed, traditional training approaches that simply relay information to delivery agents (such as teachers) have been shown to be ineffective at improving program delivery.13 This project uses a strategy that has been used in various formats in both healthcare and education called ongoing coaching. Strategies used in ongoing coaching modalities that follow the initial training in healthcare include case review, self-evaluation, and performance feedback, which have been shown to build providers’ skill and improve healthcare outcomes.13 Additional strategies that have been effective at improving classroom management among elementary school teachers include action planning, goal setting, reviewing, and role play.17 Facilitation of a combination of these ongoing coaching strategies by a health educator will potentially improve the fidelity of intervention delivery by teachers, which we predict will improve the effectiveness of the intervention at reducing RSST.

In addition, our coaching model is innovative in that it prioritizes the reduction of teacher burden. The research team will provide two coaching sessions per month to teachers who implement the program. One session will be in-person during the school day while the other will be a brief video call after the school day. This model is designed to decrease both costs of the coaching component and the burden on teachers. The monthly in-person visit will not add extra time to the teachers’ days. In-person coaching built in during teachers’ days has successfully been used in early childhood professional development.39 The brief video calls will decrease implementation costs while still providing an opportunity for teachers to hone skills, ask questions and “check in” with their coach.

Finally, our research design is innovative. Few studies have used a Type III hybrid design to explore implementation and fidelity-related research questions, and we are aware of none that have used this design in school-based settings.40-43 The information gleaned from this unique use of the study design can inform future implementation of evidence-based programs in school settings, where many children can be reached with obesity-reduction interventions.

Approach

Participants, Recruitment, and Study Design

The target population for the implementation intervention is the 3rd and 4th grade teachers in elementary schools in Montgomery County, a rural county in Southwest Virginia. According to data from the school board, there are currently two 3rd and two 4th grade teachers at each of the 10 elementary schools in the county (personal communication from Montgomery County School Board). Preliminary recruitment meetings with the school board indicate that we can expect full program adoption and agreement to randomize from 8 of the 10 schools for an 80% adoption rate. Each participating school will choose whether the program will be implemented in 3rd or 4th grade, resulting in a target sample size of 16 teachers.

The target population for the effectiveness portion of the intervention mirrors the population targeted in the original S.M.A.R.T. intervention and will include 3rd and 4th graders in participating classrooms. School board data indicate that there are currently approximately 25 students per classroom (1,000 students total) in these grades (personal communication from Montgomery County School Board). All students in participating classrooms (n=~400) will be able to participate in program activities unless their parents choose to opt out, but students will not be eligible for the assessments unless they give verbal assent and their parents provide written informed consent. A recruitment flyer will be mailed home to students and parents. Students identified as having a significant learning disability or limited English proficiency will be excluded from data collection. We estimate that approximately 70% of parents will give informed consent for both student and parent assessments, for a target sample size of 280 students. Students will not be incentivized, as the assessments will occur during class time (although the program activities often include small prizes), but parents who complete the pre- and post-assessments over the phone will be entered to win one of several $100 gift cards.
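As a rough check on these recruitment assumptions, the expected teacher and student sample sizes can be tallied directly. The sketch below is illustrative only; the adoption and consent rates are the estimates stated above, not measured values.

```python
# Illustrative sample-size arithmetic for the recruitment assumptions above.
# All rates are the estimates stated in the text, not measured values.

TOTAL_SCHOOLS = 10
ADOPTION_RATE = 0.80          # 8 of 10 schools expected to adopt and randomize
TEACHERS_PER_SCHOOL = 2       # each school delivers in one grade (two classrooms)
STUDENTS_PER_CLASSROOM = 25
PARENT_CONSENT_RATE = 0.70    # estimated consent for student/parent assessments

adopting_schools = round(TOTAL_SCHOOLS * ADOPTION_RATE)             # 8 schools
teacher_sample = adopting_schools * TEACHERS_PER_SCHOOL             # 16 teachers
eligible_students = teacher_sample * STUDENTS_PER_CLASSROOM         # ~400 students
assessed_students = round(eligible_students * PARENT_CONSENT_RATE)  # ~280 students

print(teacher_sample, eligible_students, assessed_students)
```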

This study is a group randomized trial. Randomization will occur at the school level in order to avoid a drift effect, or a serious departure from treatment fidelity, between teachers.44 All teachers will receive the standard S.M.A.R.T. training, and teachers randomized to the intervention group will also receive ongoing coaching.
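A minimal sketch of how school-level (cluster) randomization might be carried out, assuming the eight adopting schools are known. The school identifiers, the fixed seed, and the use of Python's random module are illustrative and do not specify the study's actual allocation procedure.

```python
import random

def randomize_schools(schools, seed=2015):
    """Randomly allocate half of the schools to ongoing coaching (intervention)
    and half to standard training only (control), at the school level."""
    rng = random.Random(seed)   # fixed seed so the allocation is reproducible
    shuffled = schools[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

# Hypothetical identifiers for the eight adopting schools
schools = [f"School {i}" for i in range(1, 9)]
arms = randomize_schools(schools)
print(arms)
```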

Intervention Methods and Timeline

The timeline describes the course of the yearlong hybrid design and indicates where the implementation and effectiveness components overlap (Figure 1).

Primary Aim: Coaching S.M.A.R.T.er

In order to test our primary aim, both the intervention and control groups will receive a half-day S.M.A.R.T. curriculum training, while the intervention group will also receive ongoing coaching throughout the duration of the intervention.

The standard half-day training will be taught by the research staff. The curriculum will be distributed, and the training will cover the rationale for reducing recreational sedentary screen time; the purpose and theoretical foundations of the curriculum; how to use the curriculum (including lesson organization, incorporating the curriculum with other subjects, family involvement and newsletters, computer use for educational purposes, and screen time budgeting devices); helpful resources; and how to schedule the S.M.A.R.T. program. A teach-back model will be used, consisting of researchers presenting a lesson followed by teachers presenting the same lesson and receiving feedback. Objectives of the curriculum will be emphasized to promote fidelity of key components.

The ongoing coaching that the intervention group will receive is designed to decrease both costs of the coaching component and the burden on teachers and to increase the likelihood of maintenance after the study is completed. The coach will be a Certified Health Education Specialist (CHES) hired by the research team. The coach will provide two coaching sessions per month to teachers who implement the program. One session will be in-person during the school day while the other will be a brief video call after the school day. Both the in-person visits and the video calls will include instruction, case review, self-evaluation, and feedback based on the lesson plans.

The monthly in-person visit will allow the coach to observe the lesson and then meet with the teacher, and is designed not to add significant time to the teachers’ days. In-person coaching built in during teachers’ days has successfully been used in early childhood professional development.39 The brief video calls will decrease implementation costs and provide an opportunity for teachers to ask questions and “check in” with their coach.

Secondary Aim: S.M.A.R.T. Curriculum

To test our secondary aim, teachers in both the control and intervention groups will deliver the S.M.A.R.T. program to their students. S.M.A.R.T. is an 18-lesson, theory-based classroom curriculum designed to reduce screen time among 3rd and 4th grade children.4 The intervention addresses reducing television, videotape, and video game use.4 It is based on the Social Cognitive Theory, which states that behavior develops and is continued through the interaction of personal, behavioral, and environmental factors.4

Four behavior change processes of the theory that are addressed in the intervention are attention, retention, production, and motivation.4 The intervention targets both nonselective and selective approaches to reducing screen time.4 Nonselective approaches include budgeting total weekly screen time and limiting access to screen media, including removing TV sets from the home or placing them out of sight.4 Selective approaches include limiting screen time to only certain days of the week or certain times, limiting screen time to only specific content, and limiting screen time to only certain circumstances (such as no TV during meals).4

The classroom curriculum is completed over a six-month period.4 Eighteen 30- to 50-minute lessons are completed over a two-month period and are followed by weekly 5- to 10-minute booster sessions for the next four months.4 Classroom lessons are taught by the 3rd- and 4th-grade teachers, who are trained by the research staff.4 The curriculum is designed to match the three behavior change processes of the Social Cognitive Theory: induction, generalization, and maintenance.4 Therefore, the lessons are designed and scheduled with more frequent and intense activities in the first two months to promote self-efficacy for behavior change, and less intense booster sessions in the next four months to promote self-efficacy for maintaining behavior change and correcting relapses.4

Figure 1: Coaching S.M.A.R.T.er Timeline


The curriculum consists of four sequential sections: TV Awareness, The TV Turnoff, Staying in Control, and Helping Others.4 In addition, the curriculum is supplemented by parent newsletters, as research shows that children can influence their parents’ behaviors, parents prefer activities that can be completed at home, and parent participation is essential for implementing family interventions.4 The newsletters include benefits of and strategies for reducing screen time, such as selective viewing policies and contingency management skills.4

The Coaching S.M.A.R.T.er intervention will include updates to the original S.M.A.R.T. curriculum that ensure currency but do not require new testing for effectiveness. This adaptation will not compromise the core components of the intervention that target behavior change; rather, it will adjust the delivery and messaging strategies to be more representative of current technology. Since 1996, when the effectiveness of the S.M.A.R.T. intervention was tested, the devices available for RSST have expanded significantly beyond television, video games, and videotapes to include computers, tablets, smart phones, MP3 devices, etc. Additionally, technology and screens are more frequently used as educational tools, and can serve a productive purpose and provide a platform for expanded intervention reach. Thus, the adapted version of S.M.A.R.T. will be inclusive of additional types of screens. The intervention will also use technology to reach the audience. For example, text message reminders can be used to turn off devices after a set amount of time.

Measures

The measurements for outcomes and process evaluation related to our study will be framed within the context of the RE-AIM dimensions of Reach, Effectiveness, Adoption, Implementation, and Maintenance.34 We will assess the dimensions both qualitatively and quantitatively in order to understand both process and outcomes, and will operationalize them as described in Table 1 and detailed below.

Primary Aim

For the implementation aims, the objectives outlined in the facilitator training course manual and teacher's manual that we will develop based on the S.M.A.R.T. intervention will help us design a set of criteria to measure fidelity. The fidelity assessment will focus on five components: adherence, dose, quality of delivery, participant responsiveness, and program differentiation.37 For the purpose of this study we will define fidelity not only as program integrity but as the unity of all five components, since they highlight different mechanisms of fidelity as a process and work together to establish the fidelity of a program. Implementation will be assessed by multiple, mixed methods, including fidelity evaluations conducted immediately following training, ongoing data collection during coaching meetings, and assessments at 6 and 12 months following training.


For the intervention group, all training courses and coaching sessions will be audio recorded with the consent of participants, and these recordings will be used by two independent reviewers to assess and evaluate intervention fidelity. Checklist items will be scored as ‘Yes/demonstrated’ (2 points), ‘Unsure’ (1 point), or ‘No/not demonstrated’ (0 points).45 We will assess reliability using percentage agreement for each item rated on the evaluation forms, and a third party will review the evaluation forms and select a purposive 10% sample of evaluations that reflect high and low fidelity ratings.45 The amount of time and type of activities used by the coach with teachers (i.e., performance feedback, action planning, modeling, reviewing, role playing, and goal setting) will be tracked by the coach during each coaching session.17 The control group will be monitored by monthly reports in the more intensive S.M.A.R.T. curriculum phase and every other month in the less intensive phase. We will supplement objective measures of coaching with an exit interview and open-ended questions in the fidelity checklists about satisfaction and about where the training and/or coaching prepared teachers well for implementing the intervention and where it did not. Any adaptations (program differentiation) made over time to the coaching process will also be recorded.

To assess fidelity of the S.M.A.R.T. intervention itself, randomly selected classrooms will be assigned to independent observers, who will conduct direct observations of teacher implementation of classroom activities to determine the degree of implementation fidelity.17 Fidelity checks will be completed during at least 50% of lessons taught by teachers in both the intervention and control groups. Adherence and dose will be assessed using sign-in sheets, and participant responsiveness will be measured both directly (through audio recordings) and indirectly (through a survey of students following program delivery). Any adaptations made over time to the S.M.A.R.T. curriculum will also be recorded.
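To make the scoring scheme concrete, the sketch below shows one way the 2/1/0 item scores and inter-rater percentage agreement could be computed. The item names and example ratings are hypothetical, not the actual evaluation form.

```python
# Illustrative scoring of a fidelity checklist (2 = demonstrated, 1 = unsure,
# 0 = not demonstrated) and inter-rater percentage agreement.
# Item names and ratings below are hypothetical examples.

SCORES = {"yes": 2, "unsure": 1, "no": 0}

def checklist_score(ratings):
    """Total fidelity points for one reviewer's checklist."""
    return sum(SCORES[r] for r in ratings.values())

def percent_agreement(rater_a, rater_b):
    """Proportion of checklist items on which two reviewers gave the same rating."""
    agree = sum(1 for item in rater_a if rater_a[item] == rater_b[item])
    return agree / len(rater_a)

rater_a = {"states lesson objective": "yes",
           "uses screen-time budget activity": "unsure",
           "assigns family newsletter": "yes"}
rater_b = {"states lesson objective": "yes",
           "uses screen-time budget activity": "no",
           "assigns family newsletter": "yes"}

print(checklist_score(rater_a), round(percent_agreement(rater_a, rater_b), 2))
```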

Furthermore, in order to assess the other RE-AIM dimensions for the primary aim, data will be collected as described in Table 1 with the following focus: Reach - number and characteristics of teachers and their work environment (e.g., classrooms); Effectiveness - coaching of teachers; Adoption - schools’ proportion and representativeness; Maintenance - teacher and school sustainability. Measures include demographics, surveys and questionnaires, and documentation (e.g., school curricula and activity reports).

Secondary Aim

For the effectiveness aims, we will use the monitoring and assessment tools from the S.M.A.R.T. trials4,20 to inform the development of our measures. Individual measures include demographics and baseline and post-intervention questionnaires for parents and children. Research staff will administer the children’s baseline and post-test self-report “Media use and other activities” questionnaire on the same days in both study groups, during 40-minute class periods on two separate days, without teacher involvement4 (except for classroom management). The “Media use and other activities” instrument was used in the S.M.A.R.T. intervention4 and was adapted from a similar instrument4,46 that showed high test-retest reliability (r = .94) in children in this age group.46 Trained research staff will also interview parents by telephone at baseline and post-intervention, administering a lifestyle questionnaire containing questions on socioeconomic and lifestyle characteristics of their children and family.4 In addition, how often parents read the newsletters (7-point Likert scale ranging from never to all of the time) will be obtained during the interview.4

In addition, to assess the quality of life of the children we will use the PedsQL.47 The PedsQL Measurement Model is a modular approach to measuring health-related quality of life (HRQOL) in healthy children and adolescents. This age-appropriate instrument has been shown to be feasible, reliable (r≥0.9), and valid for self-reported HRQOL in children as young as five years old.47-50
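For illustration, the sketch below follows the published PedsQL scoring approach as we understand it (0-4 item responses reverse-scored and linearly transformed to 0-100, with scale scores computed as the mean of transformed items); the item responses shown are hypothetical.

```python
# Illustrative PedsQL-style scale scoring. Based on the published scoring
# approach (0-4 responses reverse-scored to 0-100; scale score = mean of items).
# The responses below are hypothetical examples, not study data.

def transform_item(response):
    """Map a 0-4 Likert response to the 0-100 scale (0 -> 100, ..., 4 -> 0)."""
    return (4 - response) * 25

def scale_score(responses):
    """Mean of transformed items; higher scores indicate better HRQOL."""
    transformed = [transform_item(r) for r in responses]
    return sum(transformed) / len(transformed)

physical_items = [0, 1, 0, 2, 1, 0, 0, 1]   # hypothetical child responses
print(round(scale_score(physical_items), 1))
```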

Furthermore, to assess the other RE-AIM dimensions for the secondary aim, data will be collected as described in Table 1 with the following focus: Reach - number and characteristics of students and their environment (e.g. classrooms, home); Adoption - teachers’ proportion and representativeness; Implementation - fidelity to S.M.A.R.T. curriculum; Maintenance – student outcomes (i.e. reduced RSST and improved QOL) and sustainability.

Finally, for both primary and secondary aims, during all phases of development and implementation we will document what was done, who did it, how long it took to complete, and what non-human resources were required. We will estimate total costs, costs per participant, and marginal costs of the intervention. We will separate research costs from intervention costs,51,52 as well as start-up from post-start-up costs.53,54 We will also monitor whether any of our innovations and adaptations reduce costs, and we will track any unexpected outcomes and/or effects of the interventions.
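A minimal sketch of how these cost categories might be tallied from activity logs. The activity names, hours, and rates are placeholders for illustration only, not budget estimates.

```python
# Placeholder cost-tracking sketch: tally intervention vs. research costs and
# start-up vs. post-start-up costs from logged activities, then derive cost
# per participant. All entries below are hypothetical examples.

activities = [
    # (description, hours, hourly_rate, category, phase)
    ("half-day teacher training", 4, 35.0, "intervention", "start-up"),
    ("in-person coaching visit", 1, 35.0, "intervention", "post-start-up"),
    ("fidelity observation coding", 2, 20.0, "research", "post-start-up"),
]

def total(category=None, phase=None):
    """Sum costs, optionally filtered by category and/or phase."""
    return sum(hours * rate
               for _, hours, rate, cat, ph in activities
               if (category is None or cat == category)
               and (phase is None or ph == phase))

participants = 280  # target student sample from the recruitment plan
intervention_cost = total(category="intervention")
print(intervention_cost, round(intervention_cost / participants, 2))
```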


Data Analyses

Mixed methods data collection will strengthen the validity of the conclusions reached by the study.18,53 Descriptive data from instruments and surveys will be analyzed in SPSS version 22.0 and will include frequencies, means, and standard deviations. A sample of the fidelity evaluations will be tested for inter-rater and intra-rater reliability.45 The interview data will be analyzed using an iterative process that generates meaning units to aid in interpretation.55 The data will then be collapsed and results subsequently reported across all participants and settings. Quotations can also be used to illustrate findings.

We propose to examine changes in student outcomes (e.g., reduced RSST) relative to the standardized effect sizes demonstrated in the S.M.A.R.T. intervention trials,4,20 which were statistically significant and could have a substantial health impact if delivered with high reach. For the coaching intervention, we propose examining results in relation to studies that have successfully used this strategy in early childhood professional development.17 To test the impact of coaching on teacher implementation fidelity, a series of two-way repeated measures ANCOVAs will be conducted to determine whether teachers’ level of implementation varied based on whether they received more or less overall coaching, or more or less of each coaching activity over time, while controlling for baseline levels of implementation.17 We will also examine changes over time using simple t-tests, and then run a multi-level mixed-effects logistic model treating participants as nested within schools, assuming the unobserved setting-specific effects are not correlated with predictors in the model.56 We will also examine differences in intervention effectiveness using the described behavioral parameters as independent variables in multi-level ANOVAs accounting for clustering of participants within settings. Proportions will be calculated, and representativeness will be compared using chi-squared analysis for categorical variables and simple ANOVAs for continuous variables, with participant/non-participant designation as the independent variable in both cases.
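As an illustration of the multilevel approach described above, the sketch below fits a mixed-effects model of teacher fidelity scores with a random intercept for school, using statsmodels. The DataFrame columns (fidelity, baseline_fidelity, group, school) and all values are hypothetical placeholders for the study's actual variables, and the model shown is a simplified analogue of the planned analyses rather than the analysis plan itself.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per teacher (16 teachers, 8 schools),
# with post-training fidelity score, baseline fidelity, study arm, and school.
df = pd.DataFrame({
    "fidelity":          [78, 85, 90, 72, 88, 81, 69, 75, 83, 79, 71, 68, 86, 92, 74, 77],
    "baseline_fidelity": [60, 62, 70, 58, 66, 64, 55, 59, 63, 61, 57, 54, 68, 71, 58, 60],
    "group":             ["coaching"] * 8 + ["control"] * 8,
    "school":            ["A", "A", "B", "B", "C", "C", "D", "D",
                          "E", "E", "F", "F", "G", "G", "H", "H"],
})

# A random intercept for school accounts for clustering of teachers within
# schools, while controlling for baseline fidelity (analogous to the ANCOVA).
model = smf.mixedlm("fidelity ~ group + baseline_fidelity",
                    data=df, groups=df["school"])
result = model.fit()
print(result.summary())
```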

Data Management

As we have done in prior studies, a manual of procedures will be developed during the initial study start-up period that explicitly describes the specific procedures related to the proposal, data collection and safety, and quality assurance. Electronic data files will be stored on a secured computer. Under supervision from the PI and with input from the statistician, the project manager will conduct error-checking procedures on all data to ensure their accuracy and safety. All research conducted within Virginia Tech complies with the Department of Health and Human Services requirements for safeguarding the rights and welfare of human subjects, regardless of the source of funding, as specified in the human subjects section. A Data Sharing Plan will be created in adherence to the NIH policy on data sharing.

Limitations and Future Directions

One risk of this project is the potential that coaching will appear to improve fidelity simply because of the presence of the researchers, and that fidelity will be low when researchers are not present. However, because the coaching design pairs one in-person session with one online session each month, this risk should be small. In addition, our intervention will test the effectiveness of an updated S.M.A.R.T. curriculum, which is necessary due to changes in technology since the original curriculum was developed. However, we recognize that screen time is also used increasingly within education. Although our curriculum targets reducing recreational sedentary screen time, another potential risk of our project is that the screen time reduction message may be confusing, given the benefits of technology in education. The focus on recreational sedentary screen time will be highlighted throughout the program, and examples are given in the curriculum to ensure that students, as well as teachers, understand the difference between using a screen to complete a school assignment and using a screen to watch a television show or surf the internet. In addition, there is a possibility that the generalizability of this study will be limited to the population tested in this intervention, because the subjects come from a demographically homogeneous region.

Possible future directions include applying ongoing coaching in other school districts to increase fidelity and reduce RSST. School districts in regions with different student demographics may need additional adaptations to the program for cultural relevance. Working toward an increase in fidelity and a reduction in screen time will allow other schools to adopt and implement this program with high internal and external validity. This program could also be used in the future to address more at-risk, diverse populations.

References

1. Cynthia L Ogden, Margaret D Carroll, Brian K Kit, and Katherine M Flegal, 'Prevalence of Childhood and Adult Obesity in the United States, 2011-2012', Jama, 311 (2014), 806-14.

2. World Health Organization, Obesity: Preventing and Managing the Global Epidemic (World Health Organization, 2000).

3. Donna M Matheson, Joel D Killen, Yun Wang, Ann Varady, and Thomas N Robinson, 'Children’s Food Consumption During Television Viewing', The American journal of clinical nutrition, 79 (2004), 1088-94.

4. T. N. Robinson, and D. L. G. Borzekowski, 'Effects of the Smart Classroom Curriculum to Reduce Child and Family Screen Time', Journal of Communication, 56 (2006), 1-26.

5. Mark S Tremblay, Allana G LeBlanc, Michelle E Kho, Travis J Saunders, Richard Larouche, Rachel C Colley, Gary Goldfield, and Sarah Connor Gorber, 'Systematic Review of Sedentary Behaviour and Health Indicators in School-Aged Children and Youth', Int J Behav Nutr Phys Act, 8 (2011), 98.

6. Council on Communications and Media, 'Media Use by Children Younger Than 2 Years', Pediatrics (2011).

7. Roberta Roggia Friedrich, Jéssica Pinto Polet, Ilaine Schuch, and Mário Bernardes Wagner, 'Effect of Intervention Programs in Schools to Reduce Screen Time: A Meta-Analysis', Jornal de pediatria, 90 (2014), 232-41.

8. Guide to Community Preventive Services, 'Obesity Prevention and Control: Interventions in Community Settings' (2015) <www.thecommunityguide.org/obesity/communitysettings.html> [Accessed February 9, 2015].

9. William R. Shadish, Thomas D Cook, and Donald Thomas Campbell, Experimental and Quasi-Experimental Designs for Generalized Causal Inference (Wadsworth Cengage Learning, 2002).

10. Geoffrey M Curran, Mark Bauer, Brian Mittman, Jeffrey M Pyne, and Cheryl Stetler, 'Effectiveness-Implementation Hybrid Designs: Combining Elements of Clinical Effectiveness and Implementation Research to Enhance Public Health Impact', Medical care, 50 (2012), 217.

11. Russell E Glasgow, Edward Lichtenstein, and Alfred C Marcus, 'Why Don't We See More Translation of Health Promotion Research to Practice? Rethinking the Efficacy-to-Effectiveness Transition', American Journal of Public Health, 93 (2003), 1261-67.

12. C. E. Basch, E. M. Sliepcevich, R. S. Gold, D. F. Duncan, and L. J. Kolbe, 'Avoiding Type III Errors in Health Education Program Evaluations: A Case Study', Health Educ Q, 12 (1985), 315-31.

13. Julie M Edmunds, Rinad S Beidas, and Philip C Kendall, 'Dissemination and Implementation of Evidence–Based Practices: Training and Consultation as Implementation Strategies', Clinical Psychology: Science and Practice, 20 (2013), 152-65.

14. Susan G Forman, S Serene Olin, Kimberly Eaton Hoagwood, Maura Crowe, and Noa Saka, 'Evidence-Based Interventions in Schools: Developers’ Views of Implementation Barriers and Facilitators', School Mental Health, 1 (2009), 26-36.

15. Carolyn R Ransford, Mark T Greenberg, Celene E Domitrovich, Meg Small, and Linda Jacobson, 'The Role of Teachers' Psychological Experiences and Perceptions of Curriculum Supports on the Implementation of a Social and Emotional Learning Curriculum', School Psychology Review, 38 (2009), 510.

16. Melodie Wenz-Gross, and Carole Upshur, 'Implementing a Primary Prevention Social Skills Intervention in Urban Preschools: Factors Associated with Quality and Fidelity', Early Education & Development, 23 (2012), 427-50.

17. Wendy M Reinke, Melissa Stormont, Keith C Herman, and Lori Newcomer, 'Using Coaching to Support Teacher Implementation of Classroom-Based Interventions', Journal of Behavioral Education, 23 (2014), 150-67.

18. John W. Creswell, and Vicki L. Plano Clark, Designing and Conducting Mixed Methods Research, 2nd edn (Los Angeles: SAGE Publications, 2011).

19. Patricia M Anderson, and Kristin F Butcher, 'Childhood Obesity: Trends and Potential Causes', The Future of children, 16 (2006), 19-45

20. Thomas N Robinson, 'Reducing Children's Television Viewing to Prevent Obesity: A Randomized Controlled Trial', Jama, 282 (1999), 1561-67.

21. Ginny Brunton, James Thomas, Angela Harden, Rebecca Rees, Josephine Kavanagh, Sandy Oliver, Jonathan Shepherd, and Ann Oakley, 'Promoting Physical Activity Amongst Children Outside of Physical Education Classes: A Systematic Review Integrating Intervention Studies and Qualitative Studies', Health education journal, 64 (2005), 323-38.

22. Maureen Dobbins, Heather Husson, Kara DeCorby, and Rebecca L LaRocca, 'School-Based Physical Activity Programs for Promoting Physical Activity and Fitness in Children and Adolescents Aged 6 to 18', Cochrane Database Syst Rev, 2 (2013).

23. Y Liao, J Liao, CP Durand, and GF Dunton, 'Which Type of Sedentary Behaviour Intervention Is More Effective at Reducing Body Mass Index in Children? A Meta‐Analytic Review', Obesity Reviews, 15 (2014), 159-68.

24. Sajid Mahmood, Tahira Perveen, Allah Dino, Faisa Ibrahim, and Jaishri Mehraj, 'Effectiveness of School-Based Intervention Programs in Reducing Prevalence of Overweight', Indian journal of community medicine: official publication of Indian Association of Preventive & Social Medicine, 39 (2014), 87.

25. Marie Evans Schmidt, Jess Haines, Ashley O'Brien, Julia McDonald, Sarah Price, Bettylou Sherry, and Elsie M Taveras, 'Systematic Review of Effective Strategies for Reducing Screen Time among Young Children', Obesity, 20 (2012), 1338-54.

26. Eric Stice, Heather Shaw, and C Nathan Marti, 'A Meta-Analytic Review of Obesity Prevention Programs for Children and Adolescents: The Skinny on Interventions That Work', Psychological bulletin, 132 (2006), 667.

27. Amy van Grieken, Nicole PM Ezendam, Winifred D Paulis, Johannes C van der Wouden, and Hein Raat, 'Primary Prevention of Overweight in Children and Adolescents: A Meta-Analysis of the Effectiveness of Interventions Aiming to Decrease Sedentary Behaviour', Int J Behav Nutr Phys Act, 9 (2012), 61.

28. Youfa Wang, Yang Wu, Renee F Wilson, Sara Bleich, Larry Cheskin, Christine Weston, Nakiya Showell, Oluwakemi Fawole, Brandyn Lau, and Jodi Segal, 'Childhood Obesity Prevention Programs: Comparative Effectiveness Review and Meta-Analysis', (2013).

29. Evelyn P Whitlock, Elizabeth A O'Conner, Selvi B Williams, Tracy L Beil, and Kevin W Lutz, 'Effectiveness of Primary Care Interventions for Weight Management in Children and Adolescents', (2010).

30. Evelyn P Whitlock, Elizabeth A O'Connor, Selvi B Williams, Tracy L Beil, and Kevin W Lutz, 'Effectiveness of Weight Management Programs in Children and Adolescents', (2008).

31. Mine Yildirim, Maartje M Stralen, Mai JM Chinapaw, Johannes Brug, Willem Mechelen, Jos WR Twisk, and Saskia J Velde, 'For Whom and under What Circumstances Do School‐Based Energy Balance Behavior Interventions Work? Systematic Review on Moderators', International Journal of Pediatric Obesity, 6 (2011), e46-e57.

32. Russell E Glasgow, 'Translating Research to Practice Lessons Learned, Areas for Improvement, and Future Directions', Diabetes Care, 26 (2003), 2451-56.

33. Russell E Glasgow, Thomas M Vogt, and Shawn M Boles, 'Evaluating the Public Health Impact of Health Promotion Interventions: The Re-Aim Framework', American journal of public health, 89 (1999), 1322-27.

34. RE-AIM, 'Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM) Website' <http://www.re-aim.hnfe.vt.edu/index.html> [Accessed March 21, 2015].

35. Albert J Bellg, Belinda Borrelli, Barbara Resnick, Jacki Hecht, Daryl Sharp Minicucci, Marcia Ory, Gbenga Ogedegbe, Denise Orwig, Denise Ernst, and Susan Czajkowski, 'Enhancing Treatment Fidelity in Health Behavior Change Studies: Best Practices and Recommendations from the Nih Behavior Change Consortium', Health Psychology, 23 (2004), 443

36. Robin Edward Gearing, Nabila El-Bassel, Angela Ghesquiere, Susanna Baldwin, John Gillies, and Evelyn Ngeow, 'Major Ingredients of Fidelity: A Review and Scientific Guide to Improving Quality of Intervention Research Implementation', Clinical psychology review, 31 (2011), 79-88.

37. Ross C. Brownson, Graham A. Colditz, and Enola Knisley Proctor, Dissemination and Implementation Research in Health: Translating Science to Practice (Oxford; New York: Oxford University Press, 2012).

38. Heewon Lee, Isobel R Contento, and Pamela Koch, 'Using a Systematic Conceptual Model for a Process Evaluation of a Middle School Obesity Risk-Reduction Nutrition Curriculum Intervention: Choice, Control & Change', Journal of nutrition education and behavior, 45 (2013), 126-36

39. SL Ramey, CT Ramey, NA Crowell, C Grace, and N Timraz, 'The Dosage of Professional Development for Early Childhood Professionals: How the Amount, Density, and Duration of Professional Development May Influence Its Effectiveness', Early childhood professional development: Research and practice through the early childhood educator professional development grant. Boston: The Emerald Group (2011).

40. Jeffrey A Cully, Jessica Y Breland, Suzanne Robertson, Anne E Utech, Natalie Hundt, Mark E Kunik, Nancy J Petersen, Nicholas Masozera, Radha Rao, and Aanand D Naik, 'Behavioral Health Coaching for Rural Veterans with Diabetes and Depression: A Patient Randomized Effectiveness Implementation Trial', BMC health services research, 14 (2014), 191.

41. Kimiyo Kikuchi, Evelyn Ansah, Sumiyo Okawa, Akira Shibanuma, Margaret Gyapong, Seth Owusu-Agyei, Abraham Oduro, Gloria Quansah-Asare, Abraham Hodgson, and Masamine Jimba, 'Ghana’s Ensure Mothers and Babies Regular Access to Care (Embrace) Program: Study Protocol for a Cluster Randomized Controlled Trial', Trials, 16 (2015), 22.

42. JoAnn E Kirchner, Mona J Ritchie, Jeffery A Pitcock, Louise E Parker, Geoffrey M Curran, and John C Fortney, 'Outcomes of a Partnered Facilitation Strategy to Implement Primary Care–Mental Health', Journal of general internal medicine, 29 (2014), 904-12.

43. Meghan B Lane-Fall, Rinad S Beidas, Jose L Pascual, Meredith L Collard, Hannah G Peifer, Tyler J Chavez, Mark E Barry, Jacob T Gutsche, Scott D Halpern, and Lee A Fleisher, 'Handoffs and Transitions in Critical Care (Hatricc): Protocol for a Mixed Methods Study of Operating Room to Intensive Care Unit Handoffs', BMC surgery, 14 (2014), 96.

44. Chambers DA. Advancing the science of implementation: a workshop summary. Administration and Policy in Mental Health and Mental Health Services Research. 2008; 35(1-2):3-10.

45. Tom Mars, David Ellard, Dawn Carnes, Kate Homer, Martin Underwood, and Stephanie JC Taylor, 'Fidelity in Complex Behaviour Change Interventions: A Standardised Approach to Evaluate Intervention Integrity', BMJ open, 3 (2013), e003555.

46. Thomas N Robinson, and Joel D Killen, 'Ethnic and Gender Differences in the Relationships between Television Viewing and Obesity, Physical Activity, and Dietary Fat Intake', Journal of health education, 26 (1995), S91-S98.

47. James W Varni, Michael Seid, and Cheryl A Rode, 'The Pedsql™: Measurement Model for the Pediatric Quality of Life Inventory', Medical care, 37 (1999), 126-39.

48. James W Varni, Tasha M Burwinkle, Michael Seid, and Douglas Skarr, 'The Pedsql™* 4.0 as a Pediatric Population Health Measure: Feasibility, Reliability, and Validity', Ambulatory Pediatrics, 3 (2003), 329-41.

49. James W Varni, Christine A Limbers, and Tasha M Burwinkle, 'How Young Can Children Reliably and Validly Self-Report Their Health-Related Quality of Life?: An Analysis of 8,591 Children across Age Subgroups with the Pedsql™ 4.0 Generic Core Scales', Health and quality of life outcomes, 5 (2007), 1.

50. James W Varni, Michael Seid, and Paul S Kurtin, 'Pedsql™ 4.0: Reliability and Validity of the Pediatric Quality of Life Inventory™ Version 4.0 Generic Core Scales in Healthy and Patient Populations', Medical care, 39 (2001), 800-12.

51. Richard T Meenan, Victor J Stevens, Mark C Hornbrook, Pierre-Andre La Chance, Russell E Glasgow, Jack F Hollis, Edward Lichtenstein, and Thomas M Vogt, 'Cost-Effectiveness of a Hospital-Based Smoking Cessation Intervention', Medical care, 36 (1998), 670-78.

52. Debra P Ritzwoller, Deborah Toobert, Anna Sukhanova, and Russell E Glasgow, 'Economic Analysis of the Mediterranean Lifestyle Program for Postmenopausal Women with Diabetes', The Diabetes Educator, 32 (2006), 761-69.

53. Benjamin Johns, Rob Baltussen, and Raymond Hutubessy, 'Programme Costs in the Economic Evaluation of Health Interventions', Cost Effectiveness and Resource Allocation, 1 (2003), 1.

54. Mary Ann Sevick, Andrea L Dunn, Melba S Morrow, Bess H Marcus, G John Chen, and Steven N Blair, 'Cost-Effectiveness of Lifestyle and Structured Exercise Interventions in Sedentary Adults: Results of Project Active', American journal of preventive medicine, 19 (2000), 1-8.

55. Gretchen B Rossman, and Sharon F Rallis, Learning in the Field: An Introduction to Qualitative Research (Sage, 2011).

56. Wen You, Fabio A Almeida, Jamie M Zoellner, Jennie L Hill, Courtney A Pinard, Kacie C Allen, Russell E Glasgow, Laura A Linnan, and Paul A Estabrooks, 'Who Participates in Internet-Based Worksite Weight Loss Programs?', BMC public health, 11 (2011), 709.