Chris Ball, CPS Parent and Raise Your Hand board member, and Cassie Creswell, Raise Your Hand Action Co-Director and board member, collaborated and submitted these comments on the Draft Illinois ESSA Plan. We are very grateful for their work.
The comments are below. To view the statement with references, please go here.
The Every Student Succeeds Act (ESSA) is the new federal education law. You can read more about it here.
The ESSA webpage for the Illinois State Board of Education (ISBE) is here.
ESSA requires an accountability system that provides meaningful differentiation in school performance. It specifies certain indicators, and categories of indicators, that states must use in their accountability systems. But it requires accountability, and we must be clear on what accountability means. To hold someone or something accountable, or to call someone to account, means there is an "answering for conduct" or a justification of actions. Accounting alone, merely measuring, is insufficient for accountability. NCLB failed because there was never accountability, only accounting. The measures matter, but taking five measurements, weighting them differently, and saying "that's our accountability system" is ridiculously inadequate. A meaningful accountability system requires more. The current draft is vague about what many of these indicators will be; without more detail, we cannot assess the quality of the accountability system.
Any performance measure should take into account the resources and challenges facing schools. The current draft omits from the accountability formula measures that would control for relative funding, poverty levels, class sizes, or the percentage of English learners -- all factors that affect performance. For example, a school that spends three times as much as another yet reaches the same proficiency level is not performing equally well. Under the proposed accountability system, there is no way to hold a school accountable accurately and fairly, because these factors are ignored.
By contrast, there are measures included in the current formula that could penalize a school for lacking resources. High-school curricular offerings are a problematic indicator on their own: an under-funded high school will have fewer AP or IB offerings and so would receive a lower rating, essentially punishing it for its poverty and for the failure of our existing funding formulas.
Other suggested measures can yield counter-intuitive inferences. Consider attendance. We can understand that a school with low rates of attendance might have low rates of proficiency (absences impede proficiency). But what if a school has high rates of proficiency and low rates of attendance? Why would we lower a school's performance rating because it was more productive with students' time in school than its peers? And why should a school with high attendance but low proficiency be rewarded for attendance without proficiency? It is not that measuring attendance is unimportant -- high attendance with low proficiency might indicate that instruction is ineffective. But these percentages alone are a poor indicator of school quality. Unfortunately, few of the indicators provide enough information to give an account of what affects school performance or to actually hold a school to account for its performance.
Some of the Accountability Working Group ideas are promising (for example, grades, arts and enrichment coursework and social-emotional learning), but, when a "portfolio" indicator is a student's Lexile level (p.15), it is not clear we are talking about the same concepts even though the words are the same. New York has piloted a genuine portfolio-based performance assessment system that is both rigorous and rich -- what ESSA calls an “innovative assessment.” The flexibility in state accountability systems that ESSA makes possible will most strongly benefit public school students if the state seriously pursues establishing a system of high-quality innovative assessments.
Below we address specific questions and requests for feedback ISBE posed in the ESSA draft.
Consolidation of Funds, Sec. 1.2 (5-6)
Any consolidation of funds must not divert federal or state special education funds to other purposes, in effect redirecting them away from their intended recipients. Using other federal and state funds to support special education is potentially useful. Fiscal or accounting changes that would eliminate accurate tracking of special education spending would likewise be unacceptable.
Locally Selected High School Assessment, Sec. 2.2 (10)
RYH supports the use of locally selected assessments under ESSA. The current draft does not specify the technical aspects of the assessments. International Baccalaureate Diploma exams should be one of the locally selected options.
Additional School Quality Indicators, Sec. 3.1 (15-17)
As stated above, several of the listed indicators are promising, but it is unclear in this draft how these items would be measured or how they would factor into the accountability formula. Focusing on easily measurable factors that are already collected and reported (e.g., attendance, absenteeism, disciplinary data) adds little to our understanding of why schools and students succeed. Other proposals are commendable, e.g., conducting Spanish-language literacy and science assessments for ELs (assuming Spanish is their first language). Our position is not that we should ignore factors like nutrition, teacher retention, or student-counselor ratios, but that scoring a school as performing worse because its students lack adequate nutrition or because it has high staffing ratios would fault individual schools for factors outside their control. One set of factors omitted from the list is school funding, or per-pupil spending, in relation to other schools. Again, giving a school a lower rating simply because it spends less would be wrong, but other measures of performance (e.g., proficiency levels) could be weighted upward or downward based on relative spending. This must be done carefully (e.g., we would not want a district to lower its spending in order to offset otherwise low proficiency measures).
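To make the kind of adjustment we have in mind concrete, here is a minimal illustrative sketch. It is entirely our own construction, not anything proposed in the draft: the function name, the reference-spending idea, and the dollar figures are all hypothetical.

```python
# Illustrative sketch (ours, not ISBE's): weight a school's proficiency
# rating by its per-pupil spending relative to a reference level, so that
# equal proficiency achieved on fewer dollars reads as stronger performance.
def spending_adjusted_proficiency(proficiency: float,
                                  per_pupil_spending: float,
                                  reference_spending: float) -> float:
    """Scale proficiency by the reference-to-school spending ratio."""
    return proficiency * (reference_spending / per_pupil_spending)

# Two hypothetical schools, both at 60% proficiency, against a
# $12,000 per-pupil reference level:
print(f"{spending_adjusted_proficiency(0.60, 10_000, 12_000):.2f}")  # 0.72
print(f"{spending_adjusted_proficiency(0.60, 30_000, 12_000):.2f}")  # 0.24
```

Any real adjustment would need the safeguard noted above: the formula must be designed so that a district cannot improve its rating simply by cutting spending.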
College and Career Readiness, Sec. 3.1 (17)
Given that the SAT college-readiness math benchmark score sits at the 61st percentile of the nationally representative sample, we do not understand how Illinois expects to achieve college readiness for most Illinois 11th or 12th graders. By establishing at least six necessary conditions for college and career readiness, the plan creates numerous veto points that a student must clear. Even if a student had a 90% probability of successfully meeting each indicator, the joint probability of meeting all six would be only about 53%.
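The arithmetic behind that figure can be checked directly. The 90% per-indicator probability, and the assumption that the indicators are independent, are our own hypothetical simplifications:

```python
# Joint probability of clearing all six readiness indicators, assuming
# (hypothetically) an independent 90% chance of meeting each one.
p_single = 0.90
n_indicators = 6
p_all = p_single ** n_indicators  # 0.9 ** 6
print(f"{p_all:.0%}")  # 53%
```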
Weighting of Indicators, Sec. 3.1 (18-19)
ESSA requires that “much greater weight” in aggregate be given to at least academic proficiency, growth, graduation rates, and English proficiency than any fifth type of indicator or indicators. This requirement would be satisfied by a 55 percent aggregate weighting.
In the two examples, it appears that ISBE is assuming that all the measures would be in the form of percentages (e.g., proficiency, growth, graduation rate). While this makes the weightings simple to understand in these examples, it is not clear how many of the other measures proposed above would be easily or meaningfully converted to percentages (e.g., staffing ratios, grades, socio-emotional learning).
Goal Setting, Sec. 3.1 (19)
Goal-setting must account for the resources available. How achievable a particular goal is depends on the resources made available to achieve it; discussing achievability in the abstract is impossible.