Washington State Institute for Public Policy

Per-pupil expenditures: 10% increase for one student cohort from kindergarten through grade 12

Pre-K to 12 Education
Benefit-cost methods last updated December 2023. Literature review updated April 2012.
In the 2011-12 school year, Washington State school districts spent an average of $9,739 per public school student (including state, federal, local, and other sources). This analysis estimates the benefits and costs of increasing per-pupil expenditures by 10% for one cohort of students, starting in kindergarten and continuing the increased expenditures for 13 years (grades K through 12).
 
For an overview of WSIPP's Benefit-Cost Model, please see this guide. The estimates shown are present-value, life-cycle benefits and costs. All dollars are expressed in the base year chosen for this analysis (2022). The chance that the benefits exceed the costs is derived from a Monte Carlo risk analysis. These details, along with the economic discount rates and other relevant parameters, are described in our Technical Documentation.
Benefit-Cost Summary Statistics Per Participant

Benefits to:
  Taxpayers                  $4,887
  Participants               $11,452
  Others                     $6,089
  Indirect                   ($6,486)
Total benefits               $15,943
Net program cost             ($12,997)
Benefits minus costs         $2,946
Benefit-to-cost ratio        1.23
Chance the program will produce benefits greater than the costs: 52%
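The summary statistics above are simple functions of the component figures. The sketch below reproduces the deterministic arithmetic and illustrates the Monte Carlo "chance benefits exceed costs" calculation; the normal distributions and spreads are illustrative assumptions, not WSIPP's actual simulation, which varies many model inputs jointly.

```python
import random

# Point estimates from the summary table (2022 dollars, per participant).
total_benefits = 15_943
net_program_cost = 12_997  # shown as ($12,997), i.e., a cost

# Deterministic summary statistics.
benefits_minus_costs = total_benefits - net_program_cost
benefit_cost_ratio = total_benefits / net_program_cost
print(benefits_minus_costs)          # 2946
print(round(benefit_cost_ratio, 2))  # 1.23

# "Chance the program will produce benefits greater than the costs":
# a Monte Carlo sketch. The spreads below are made-up assumptions.
random.seed(1)
draws = 100_000
exceed = sum(
    random.gauss(total_benefits, 8_000) > random.gauss(net_program_cost, 500)
    for _ in range(draws)
)
print(f"chance benefits exceed costs: {exceed / draws:.0%}")
```

With WSIPP's full input distributions this probability is the reported 52%; the toy spreads here will not reproduce that figure.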

Meta-analysis is a statistical method to combine the results from separate studies on a program, policy, or topic in order to estimate its effect on an outcome. WSIPP systematically evaluates all credible evaluations we can locate on each topic. The outcomes measured are the types of program impacts that were measured in the research literature (for example, crime or educational attainment). Treatment N represents the total number of individuals or units in the treatment group across the included studies.

An effect size (ES) is a standard metric that summarizes the degree to which a program or policy affects a measured outcome. If the effect size is positive, the outcome increases. If the effect size is negative, the outcome decreases. See Estimating Program Effects Using Effect Sizes for additional information.

Adjusted effect sizes are used to calculate the benefits in our benefit-cost model. WSIPP may adjust effect sizes based on the methodological characteristics of a study. For example, we may adjust effect sizes when a study has a weak research design or when the program developer is involved in the research. The magnitude of these adjustments varies depending on the topic area.

WSIPP may also adjust the second ES measurement. Research shows that the magnitude of some effect sizes decreases over time. For those effect sizes, we estimate outcome-based adjustments, which we apply between the first and second times the ES is estimated. We also report the unadjusted effect sizes to show the effects before any adjustments were made. More details about these adjustments can be found in our Technical Documentation.
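The table's "unadjusted effect size (random effects model)" column refers to pooling study-level effect sizes with a random-effects meta-analysis. The sketch below is a textbook DerSimonian-Laird implementation with hypothetical study data, not the studies or exact estimator settings used in this analysis.

```python
import math

def dersimonian_laird(effects, ses):
    """Pool study effect sizes with a DerSimonian-Laird random-effects model.

    effects: per-study effect sizes; ses: their standard errors.
    Returns (pooled effect size, pooled standard error).
    """
    v = [se ** 2 for se in ses]                       # within-study variances
    w = [1 / vi for vi in v]                          # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_star = [1 / (vi + tau2) for vi in v]            # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    return pooled, se_pooled

# Two hypothetical studies, equal precision:
es, se = dersimonian_laird([0.10, 0.20], [0.05, 0.05])
print(round(es, 3), round(se, 3))  # 0.15 0.05
```

Because the between-study variance (tau-squared) widens the weights, the pooled standard error is larger than a fixed-effect analysis of the same data would give.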

Meta-Analysis of Program Effects

Outcomes   Treatment   No. of         Treatment   Adjusted effect sizes (ES) and standard errors (SE)    Unadjusted effect size
measured   age         effect sizes   N           used in the benefit-cost analysis                      (random effects model)
                                                  First time ES estimated   Second time ES estimated
                                                  ES      SE      Age       ES      SE      Age          ES      p-value
—          5           40             1,000       0.101   0.042   16        0.101   0.042   20           0.101   0.050
—          5           40             1,000       0.120   0.055   16        0.109   0.047   18           0.120   0.050
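In the second row of the table, the adjusted ES falls from 0.120 at age 16 to 0.109 at age 18, illustrating the outcome-based decay adjustment described above. The sketch below backs out the implied annual decay under a constant-geometric-decay assumption; that functional form is an assumption for illustration, as WSIPP's actual adjustments are outcome-specific (see its Technical Documentation).

```python
# Second row of the meta-analysis table.
es_first, age_first = 0.120, 16
es_second, age_second = 0.109, 18

years = age_second - age_first
# Implied constant annual decay rate (a modeling assumption, not WSIPP's
# published formula): es_second = es_first * (1 - d) ** years.
annual_decay = 1 - (es_second / es_first) ** (1 / years)
print(f"implied annual decay: {annual_decay:.1%}")        # ~4.7% per year

# Applying that rate reproduces the second measurement:
print(round(es_first * (1 - annual_decay) ** years, 3))   # 0.109
```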
1 In addition to the outcomes measured in the meta-analysis table, WSIPP measures benefits and costs estimated from other outcomes associated with those reported in the evaluation literature. For example, empirical research demonstrates that high school graduation leads to reduced crime. These associated measures provide a more complete picture of the costs and benefits of the program.

2 “Others” includes benefits to people other than taxpayers and participants. Depending on the program, these could include reductions in crime victimization, the economic benefits of a more educated workforce, and the benefits of employer-paid health insurance.

3 “Indirect benefits” includes estimates of the net changes in the value of a statistical life and net changes in the deadweight costs of taxation.
Detailed Monetary Benefit Estimates Per Participant

                                                                              Benefits accrue to:
Affected outcome         Resulting benefits:1                                 Taxpayers   Participants   Others2   Indirect3   Total
High school graduation   Criminal justice system                              $26         $0             $52       $13         $91
Test scores              Labor market earnings associated with test scores    $4,862      $11,452        $6,036    $0          $22,350
Program cost             Adjustment for deadweight cost of program            $0          $0             $0        ($6,499)    ($6,499)
Totals                                                                        $4,887      $11,452        $6,089    ($6,486)    $15,943
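The "Totals" row is the column-wise sum of the outcome rows, and "Total benefits" in the summary is the sum of the taxpayer, participant, others, and indirect totals. Because each cell is rounded to the nearest dollar independently, the sums can differ from the printed totals by a dollar or two, as the check below (Python, using the table's figures) shows.

```python
# Columns of the detailed benefits table (2022 dollars per participant).
# Rows: high school graduation, test scores, program-cost deadweight.
rows = {
    "taxpayers":    [26, 4_862, 0],
    "participants": [0, 11_452, 0],
    "others":       [52, 6_036, 0],
    "indirect":     [13, 0, -6_499],
}
printed_totals = {"taxpayers": 4_887, "participants": 11_452,
                  "others": 6_089, "indirect": -6_486}

for col, values in rows.items():
    # Independent rounding: column sums may be off by $1-2.
    assert abs(sum(values) - printed_totals[col]) <= 2

total_benefits = sum(printed_totals.values())
print(total_benefits)  # within $1 of the reported $15,943 total benefits
```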
Detailed Annual Cost Estimates Per Participant

                   Annual cost   Year dollars
Program costs      $974          2011
Comparison costs   $0            2011

Summary
Present value of net program costs (in 2022 dollars)   ($12,997)
Cost range (+ or -)                                    0%
Office of Superintendent of Public Instruction (2013). Financial Reporting Summary, Washington State School Districts and Educational Service Districts, Fiscal Year 9/2011-8/2012. The estimated per-pupil annual cost equals 10% of the total per-pupil expenditures reported in Table 7. http://www.k12.wa.us/safs/PUB/FIN/1112/2011-12%20Financial%20Reporting%20Summary.pdf
The figures shown are estimates of the costs to implement programs in Washington. The comparison group costs reflect either no treatment or treatment as usual, depending on how effect sizes were calculated in the meta-analysis. The cost range reported above reflects potential variation or uncertainty in the cost estimate; more detail can be found in our Technical Documentation.
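The $974 annual cost is 10% of the $9,739 per-pupil expenditure, paid for each of the 13 years (K-12), stated in 2011 dollars, and then converted to a present value in 2022 dollars. The sketch below shows the shape of that calculation; the 3.5% discount rate and the 2011-to-2022 inflation factor are illustrative assumptions, not WSIPP's published parameters, so the result only approximates the reported ($12,997).

```python
# Present value of 13 annual payments of $974 (2011 dollars).
annual_cost_2011 = 974           # 10% of $9,739 per-pupil spending
inflation_2011_to_2022 = 1.28    # assumed price-level adjustment (illustrative)
discount_rate = 0.035            # assumed real discount rate (illustrative)

annual_cost_2022 = annual_cost_2011 * inflation_2011_to_2022
# Discount each of the 13 school years back to the start of kindergarten.
pv = sum(annual_cost_2022 / (1 + discount_rate) ** t for t in range(13))
print(round(pv))  # same order of magnitude as the reported $12,997
```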
Charts: Benefits Minus Costs; Benefits by Perspective; Taxpayer Benefits by Source of Value; Benefits Minus Costs Over Time (Cumulative Discounted Dollars)
The graph above illustrates the estimated cumulative net benefits per participant for the first fifty years beyond the initial investment in the program. We present these cash flows in discounted dollars. If the dollars are negative (bars below the $0 line), the cumulative benefits do not yet outweigh the cost of the program. The program breaks even when the dollars reach $0; at this point, the total benefits to participants, taxpayers, and others equal the cost of the program. If the dollars are above $0, the benefits of the program exceed the initial investment.
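Reading the break-even point off the cumulative chart amounts to finding the first year the running sum of discounted cash flows turns non-negative. The sketch below implements that with hypothetical cash flows (13 years of costs, then labor-market benefits) and an assumed 3.5% discount rate; neither matches WSIPP's actual per-year figures.

```python
def break_even_year(cash_flows, rate):
    """Return the first year the cumulative discounted net benefit
    reaches $0, or None if it never does within the horizon.

    cash_flows: per-year net benefits minus costs, year 0 first.
    """
    cumulative = 0.0
    for year, cf in enumerate(cash_flows):
        cumulative += cf / (1 + rate) ** year
        if cumulative >= 0:
            return year
    return None

# Hypothetical profile: 13 years of program costs, then 37 years of
# benefits as the cohort enters the labor market.
flows = [-1_000] * 13 + [1_500] * 37
print(break_even_year(flows, 0.035))  # 26 with these illustrative numbers
```

Discounting pushes the break-even year later than a simple undiscounted payback calculation would suggest, which is why the cumulative bars in the chart recover slowly after year 13.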

Citations Used in the Meta-Analysis

Archibald, S. (2006). Narrowing in on educational resources that do affect student achievement. Peabody Journal of Education, 81(4), 23-42.

Chaudhary, L. (2009). Education inputs, student performance and school finance reform in Michigan. Economics of Education Review, 28(1), 90-98.

Dee, T.S. (2005). Expense preference and student achievement in school districts. Eastern Economic Journal, 31(1), 23-44.

Dolton, P. & Marcenaro-Gutierrez, O.D. (2011). If you pay peanuts do you get monkeys? A cross-country analysis of teacher pay and pupil performance. Economic Policy, 26(65), 5-55.

Ferguson, R.F. & Ladd, H.F. (1996). How and why money matters: An analysis of Alabama schools. In H. F. Ladd (Ed.), Holding schools accountable: Performance based reform in education (pp. 265–298). Washington, DC: Brookings Institution.

Fuchs, T. & Wößmann, L. (2007). What accounts for international differences in student performance? A re-examination using PISA data. Empirical Economics, 32(2), 433-464.

Gibbons, S., McNally, S., & Viarengo, M. (2012). Does additional spending help urban schools?: An evaluation using boundary discontinuities. Bonn: IZA.

Guryan, J. (2003). Does money matter? Estimates from education finance reform in Massachusetts (NBER Working Paper). Cambridge, MA: National Bureau of Economic Research.

Hægeland, T., Raaum, O., & Salvanes, K. G. (2012). Pennies from heaven: Using exogenous tax variation to identify effects of school resources on pupil achievement. Economics of Education Review, 31(5), 601-614.

Häkkinen, I., Kirjavainen, T., & Uusitalo, R. (2003). School resources and student achievement revisited: New evidence from panel data. Economics of Education Review, 22(3), 329-335.

Heinesen, E. & Graversen, B. K. (2005). The effect of school resources on educational attainment: Evidence from Denmark. Bulletin of Economic Research, 57(2), 109-143.

Holmlund, H., McNally, S., & Viarengo, M. (2010). Does money matter for schools?. Economics of Education Review, 29(6), 1154-1164.

Houtenville, A.J. & Conway, K.S. (2008). Parental effort, school resources, and student achievement. Journal of Human Resources 43(2), 437–453.

Hoxby, C. (2001). All school finance equalizations are not created equal. The Quarterly Journal of Economics, 116(4), 1189-1231.

Jacob, B.A. (2001). Getting tough? The impact of high school graduation exams. Educational Evaluation and Policy Analysis, 23(2), 99-121.

Ladd, H.F., Muschkin, C.G., & Dodge, K. (2012). From birth to school: Early childhood initiatives and third grade outcomes in North Carolina. Working Paper, Duke University.

Lee, J.W. & Barro, R.J. (2001). Schooling quality in a cross-section of countries. Economica, 68, 465-488.

Loeb, S. & Page, M.E. (2000). Examining the link between teacher wages and student outcomes: The importance of alternative labor market opportunities and non-pecuniary variation. The Review of Economics and Statistics, 82(3), 393-408.

Machin, S., McNally, S., & Meghir, C. (2010). Resources and standards in urban schools. Journal of Human Capital, 4(4), 365-393.

Papke, L.E. (2005). The effects of spending on test pass rates: Evidence from Michigan. Journal of Public Economics, 89(5-6), 821-839.

Papke, L.E. & Wooldridge, J.M. (2008). Panel data methods for fractional response variables with an application to test pass rates. Journal of Econometrics, 145, 121-133.

Ram, R. (2004). School expenditures and student achievement: Evidence for the United States. Education Economics, 12(2), 169-176.

Ribich, T.I. & Murphy, J.L. (1975). The economic returns to increased educational spending. The Journal of Human Resources, 10(1), 56-77.

Sander, W. (1999). Endogenous expenditures and student achievement. Economics Letters, 64(2), 223-231.

Sherlock, M. (2011). The effects of financial resources on test pass rates: Evidence from Vermont's Equal Education Opportunity Act. Public Finance Review, 39(3), 331-364.

Steele, F., Vignoles, A., & Jenkins, A. (2007). The effect of school resources on pupil attainment: A multilevel simultaneous equation modelling approach. Journal of the Royal Statistical Society: Series A (Statistics in Society), 170(3), 801-824.

Taylor, C. (1998). Does money matter? An empirical study introducing resource costs and student needs to educational production function analysis. In W. J. Fowler, Jr. (Ed.), Developments in school finance, 1997: Fiscal proceedings from the Annual State Data Conference (pp. 75-97). Washington, DC: U.S. Department of Education, National Center for Education Statistics.

Todd, P.E. & Wolpin, K.I. (2007). The production of cognitive achievement in children: Home, school and racial test score gaps. Journal of Human Capital, 1(1), 91-136.

Waldfogel, J. & Zhai, F. (2008). Effects of public preschool expenditures on the test scores of 4th graders: Evidence from TIMSS. Educational Research and Evaluation, 14, 9-28.

Wenglinsky, H. (1997). How money matters: The effect of school district spending on academic achievement. Sociology of Education, 70(3), 221-237.

Wenglinsky, H. (1998). School district expenditures, school resources and student achievement: Modeling the production function. In W. J. Fowler, Jr. (Ed.), Developments in school finance, 1997: Fiscal proceedings from the Annual State Data Conference (pp. 99-120). Washington, DC: U.S. Department of Education, National Center for Education Statistics.

Wilson, K. (2001). The determinants of educational attainment: Modeling and estimating the human capital model and education production functions. Southern Economic Journal, 67(3), 518-551.

WSIPP study, unpublished (2012). We conducted a multi-year, state-level, fixed-effects analysis of NCES data on per pupil expenditures, student test scores, and on-time graduation rates. See the technical appendix in this report for details.