Washington State Institute for Public Policy
Tutoring: By certificated teachers, small-group, structured
Pre-K to 12 Education
Benefit-cost estimates updated December 2016.  Literature review updated June 2014.
The programs included in this analysis are structured, systematic approaches to tutoring small groups of struggling students in grades K–6 in specific English language arts and/or mathematics skills. The evaluated programs include a variety of specific approaches and curricula such as (in no particular order) Read Aloud, Proactive Reading, Responsive Reading, Leveled Literacy, Spell Read, Corrective Reading, and Number Rockets. An average program provides about 40 hours of tutoring time to groups of two to six (usually three) early elementary students. Certificated teachers provide the tutoring and receive about 35 hours of training focused on the specific content and strategies used in the programs.
The estimates shown are present-value, life-cycle benefits and costs. All dollars are expressed in the base year chosen for this analysis (2015). The chance that the benefits exceed the costs is derived from a Monte Carlo risk analysis. The details of this analysis, as well as the economic discount rates and other relevant parameters, are described in our Technical Documentation.
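To make those mechanics concrete, here is a minimal sketch (not WSIPP's actual model) of present-value discounting and a Monte Carlo estimate of the chance that benefits exceed costs. The normal distribution and every numeric input below are illustrative assumptions.

```python
import random

def present_value(cash_flows, rate):
    """Discount a stream of annual cash flows (year 0 first) back to
    present value at the given annual discount rate."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def chance_benefits_exceed_costs(mean_benefits, sd_benefits, cost,
                                 draws=10_000, seed=1):
    """Monte Carlo sketch: draw simulated total benefits from a normal
    distribution and report the share of draws that exceed the cost."""
    rng = random.Random(seed)
    wins = sum(rng.gauss(mean_benefits, sd_benefits) > cost
               for _ in range(draws))
    return wins / draws
```

With inputs near this report's figures, e.g. `chance_benefits_exceed_costs(14_042, 7_000, 1_430)`, the estimate lands in the mid-90-percent range, in the same neighborhood as the 96% reported below; the standard deviation used here is purely an assumption.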
Benefit-Cost Summary Statistics Per Participant
Benefits to:
  Taxpayers       $3,730
  Participants    $7,735
  Others          $3,228
  Indirect        ($650)
Total benefits    $14,042
Net program cost  ($1,430)
Benefits minus costs   $12,612
Benefit-to-cost ratio  $9.82
Chance the program will produce benefits greater than the costs: 96%
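The summary statistics fit together arithmetically: net benefit is total benefits minus net cost, and the benefit-to-cost ratio is their quotient. A quick check using the figures above:

```python
total_benefits = 14_042  # total benefits per participant, from the summary above
net_cost = 1_430         # net program cost per participant (absolute value)

net_benefit = total_benefits - net_cost          # benefits minus costs
bc_ratio = round(total_benefits / net_cost, 2)   # dollars of benefit per dollar of cost

print(net_benefit, bc_ratio)  # 12612 9.82
```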
1 In addition to the outcomes measured in the meta-analysis table, WSIPP measures benefits and costs estimated from other outcomes associated with those reported in the evaluation literature. For example, empirical research demonstrates that high school graduation leads to reduced crime. These associated measures provide a more complete picture of the detailed costs and benefits of the program.

2 “Others” includes benefits to people other than taxpayers and participants. Depending on the program, it could include reductions in crime victimization, the economic benefits from a more educated workforce, and the benefits from employer-paid health insurance.

3 “Indirect benefits” includes estimates of the net changes in the value of a statistical life and net changes in the deadweight costs of taxation.
Detailed Monetary Benefit Estimates Per Participant
Benefits from changes to:1
Labor market earnings associated with test scores:
  Taxpayers $3,602; Participants $7,933; Others2 $3,509; Indirect3 $0; Total $15,044
Health care associated with educational attainment:
  Taxpayers $219; Participants ($60); Others ($239); Indirect $109; Total $30
Costs of higher education:
  Taxpayers ($92); Participants ($138); Others ($43); Indirect ($46); Total ($319)
Adjustment for deadweight cost of program:
  Taxpayers $0; Participants $0; Others $0; Indirect ($713); Total ($713)
Totals:
  Taxpayers $3,730; Participants $7,735; Others $3,228; Indirect ($650); Total $14,042
Detailed Annual Cost Estimates Per Participant
Program costs: $1,406 (2013 dollars)
Comparison costs: $0 (2013 dollars)
Present value of net program costs (in 2015 dollars): ($1,430)
Cost range (+ or -): 10%
In the evaluations included in this meta-analysis, a certificated teacher provides, on average, 40 hours of tutoring to nine students per year in groups of three and receives 35 hours of training. To calculate a per-student annual cost, we used average Washington State compensation costs (including benefits) for a K–8 teacher as reported by the Office of the Superintendent of Public Instruction, divided by the total number of students served.
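A simplified sketch of that per-student cost calculation follows; the compensation and contract-hour figures are hypothetical placeholders, not OSPI's actual numbers, so the result will not match the $1,406 reported above.

```python
def per_student_annual_cost(teacher_compensation, contract_hours,
                            program_hours, students_served):
    """Prorate a teacher's annual compensation (including benefits) to
    the hours spent on the program, then divide by the number of
    students served per year."""
    hourly_cost = teacher_compensation / contract_hours
    return hourly_cost * program_hours / students_served

# Hypothetical inputs: $90,000 compensation, 1,500 contract hours,
# 75 program hours (40 tutoring + 35 training), 9 students per year.
cost = per_student_annual_cost(90_000, 1_500, 40 + 35, 9)
print(round(cost))  # 500
```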
The figures shown are estimates of the costs to implement programs in Washington. The comparison group costs reflect either no treatment or treatment as usual, depending on how effect sizes were calculated in the meta-analysis. The cost range reported above reflects potential variation or uncertainty in the cost estimate; more detail can be found in our Technical Documentation.
Estimated Cumulative Net Benefits Over Time (Non-Discounted Dollars)
The graph above illustrates the estimated cumulative net benefits per participant for the first fifty years beyond the initial investment in the program. We present these cash flows in non-discounted dollars to simplify identification of the “break-even” point from a budgeting perspective. If the dollars are negative (bars below the $0 line), the cumulative benefits do not yet outweigh the cost of the program at that point in time. The program breaks even when the dollars reach $0. At this point, the total benefits to participants, taxpayers, and others are equal to the cost of the program. If the dollars are above $0, the benefits of the program exceed the initial investment.
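The break-even logic can be sketched as a cumulative sum over annual cash flows; the cost and benefit stream below is hypothetical, not taken from the report's underlying model.

```python
def break_even_year(annual_cash_flows):
    """Return the first year in which cumulative (non-discounted) net
    benefits reach or exceed zero, or None if they never do."""
    cumulative = 0.0
    for year, cash_flow in enumerate(annual_cash_flows):
        cumulative += cash_flow
        if cumulative >= 0:
            return year
    return None

# Hypothetical stream: a $1,430 cost up front, then $300 of benefits per year.
flows = [-1430] + [300] * 50
print(break_even_year(flows))  # 5
```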

Meta-analysis is a statistical method of combining the results from separate studies on a program, policy, or topic in order to estimate its effect on an outcome. WSIPP systematically reviews all credible evaluations we can locate on each topic. The outcomes measured are the types of program impacts that were measured in the research literature (for example, crime or educational attainment). Treatment N represents the total number of individuals or units in the treatment group across the included studies.

An effect size (ES) is a standard metric that summarizes the degree to which a program or policy affects a measured outcome. If the effect size is positive, the outcome increases. If the effect size is negative, the outcome decreases.
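One common effect-size metric is the standardized mean difference (Cohen's d). The report does not specify which metric each study used, so this is only an illustrative formulation:

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference: difference in group means divided
    by the pooled standard deviation. Positive values mean the
    treatment group scored higher on the outcome."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd
```

For example, `cohens_d([5, 6, 7], [4, 5, 6])` returns 1.0: the treatment mean sits one pooled standard deviation above the control mean.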

Adjusted effect sizes are used to calculate benefits in our benefit-cost model. WSIPP may adjust effect sizes based on the methodological characteristics of a study. For example, we may adjust effect sizes when a study has a weak research design or when the program developer is involved in the research. The magnitude of these adjustments varies depending on the topic area.

WSIPP may also adjust the second ES measurement. Research shows that the magnitude of some effect sizes decreases over time. For those effect sizes, we estimate outcome-based adjustments, which we apply between the first time the ES is estimated and the second time the ES is estimated. We also report the unadjusted effect size to show the effect sizes before any adjustments have been made. More details about these adjustments can be found in our Technical Documentation.
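One simple way to apply a decay adjustment between the two measurement points is linear interpolation. This sketch uses the test-score estimates reported in the table in this report, but the interpolation itself is an illustrative assumption, not WSIPP's actual procedure.

```python
def interpolated_es(first_es, second_es, first_age, second_age, age):
    """Linearly interpolate an effect size between its first and second
    measurement ages; clamp to the nearest estimate outside that range."""
    if age <= first_age:
        return first_es
    if age >= second_age:
        return second_es
    fraction = (age - first_age) / (second_age - first_age)
    return first_es + fraction * (second_es - first_es)

# Test-score ES of 0.209 at age 7 falling to 0.098 at age 17:
# at age 12, halfway between, the interpolated ES is about 0.154.
es_at_12 = interpolated_es(0.209, 0.098, 7, 17, 12)
```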

Meta-Analysis of Program Effects
Outcome measured: Test scores
Number of effect sizes: 14; Treatment N: 1,649
Adjusted effect sizes (ES) and standard errors (SE) used in the benefit-cost analysis:
  First time ES is estimated: ES = 0.209, SE = 0.039, age 7
  Second time ES is estimated: ES = 0.098, SE = 0.043, age 17
Unadjusted effect size (random effects model): ES = 0.254, p-value = 0.001
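For reference, a textbook random-effects pooling of study effect sizes (the DerSimonian-Laird estimator) looks like the sketch below; this is a standard formulation, not WSIPP's implementation.

```python
def dersimonian_laird(effects, standard_errors):
    """Pool study effect sizes with a DerSimonian-Laird random-effects
    model: estimate between-study variance (tau^2) from the Q statistic,
    then take an inverse-variance weighted mean using that variance."""
    w = [1 / se**2 for se in standard_errors]
    fixed = sum(wi * es for wi, es in zip(w, effects)) / sum(w)
    q = sum(wi * (es - fixed) ** 2 for wi, es in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    w_star = [1 / (se**2 + tau2) for se in standard_errors]
    return sum(wi * es for wi, es in zip(w_star, effects)) / sum(w_star)
```

With equally precise studies the pooled estimate is simply their mean, e.g. `dersimonian_laird([0.2, 0.3], [0.1, 0.1])` gives 0.25.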
Citations Used in the Meta-Analysis

Fien, H., Santoro, L., Baker, S.K., Park, Y., Chard, D. J., Williams, S., & Haria, P. (2011). Enhancing teacher read alouds with small-group vocabulary instruction for students with low vocabulary in first-grade classrooms. School Psychology Review, 40(2), 307-318.

Kerins, M.R., Trotter, D., & Schoenbrodt, L. (2010). Effects of a tier 2 intervention on literacy measures: Lessons learned. Child Language Teaching and Therapy, 26(3), 287-302.

Lennon, J.E., & Slesinski, C. (1999). Early intervention in reading: Results of a screening and intervention program for kindergarten students. School Psychology Review, 28(3), 353-364.

Mathes, P.G., Denton, C., Anthony, J., Francis, D., & Schatschneider, C. (2005). The effects of theoretically different instruction and student characteristics on the skills of struggling readers. Reading Research Quarterly, 40(2), 148-182.

Pinnell, G.S., Lyons, C. A., DeFord, D.E., Bryk, A.S., & Seltzer, M. (1994). Comparing instructional models for the literacy education of high-risk first graders. Reading Research Quarterly, 29(1), 9-39.

Ransford-Kaldon, C.R., Flynt, E.S., Ross, C.L., Franceschini, L., Zoblotsky, T., Huang, Y., & Gallagher, B. (2010). Implementation of effective intervention: An empirical study to evaluate the efficacy of Fountas & Pinnell's Leveled Literacy Intervention (LLI) 2009-2010. Memphis, TN: University of Memphis, Center for Research in Education Policy.

Rashotte, C.A., MacPhee, K., & Torgesen, J.K. (2001). The effectiveness of a group reading instruction program with poor readers in multiple grades. Learning Disability Quarterly, 24(2), 119-134.

Rolfhus, E., Gersten, R., Clarke, B., Decker, L.E., Wilkins, C., & Dimino, J. (2012). An evaluation of Number Rockets: A tier 2 intervention for grade 1 students at risk for difficulties in mathematics. Final report (NCEE 2012-4007). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance.

Torgesen, J.K., Wagner, R.K., Rashotte, C.A., Herron, J., & Lindamood, P. (2010). Computer-assisted instruction to prevent early reading difficulties in students at risk for dyslexia: Outcomes from two instructional approaches. Annals of Dyslexia, 60(1), 40-56.

Torgesen, J., Schirm, A., Castner, L., Vartivarian, S., Mansfield, W., Myers, D., … Haan, C. (2007). National assessment of Title I final report: Volume II: Closing the reading gap: Findings from a randomized trial of four reading interventions for striving readers (NCEE 2008-4013). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance.

For more information on the methods used, please see our Technical Documentation.
360.664.9800
institute@wsipp.wa.gov