Taking Stock: How Are Essential Schools Doing?

Do students in Essential Schools perform better? As results start to come in, the chief problem is how to answer this in thoughtful and precise ways–without losing the Coalition’s focus on intellectual depth as defined by each local school community.

A wry joke, which we heard from Judy Lanier, dean of the school of education at Michigan State University, is making the rounds of the education world. Think of John F. Kennedy in 1960, Lanier suggests, holding a press conference to announce his plan to land an American on the moon. “The project will take ten years,” she imagines the President saying, “but the first six years will be spent building a giant telescope so that we can look at the astronaut when he finally gets there.”

Looking carefully at how schools are doing has become an overriding concern in American political and educational circles during recent years, and the Coalition of Essential Schools rightly comes in for its fair share of scrutiny. But many in the Coalition fear that, like that giant telescope, the push for accountability may soon overshadow the very classroom change it aims to stimulate. The questions people ask are tough and various, but in the end they come down to one: “Is there any evidence that students do better when they attend Essential Schools?” This goes to the vulnerable heart of Essential school reform; and precisely because it is so complicated, it may tempt schools in the midst of change to offer simplistic or skewed responses that could skirt the real issues involved.

The chief problem in coming to an answer, of course, is that the question of how Essential Schools are doing can be approached from so many different perspectives. How students are doing on conventional measures like basic skills test scores, attendance rates, graduation rates, and college acceptances is easy enough to gauge. But how do we track their thoughtfulness, their originality or creativity, their ability to handle the ill-structured problems that show up in real life? If the aim is to tell whether the Coalition itself is a success, do we judge that by the number of schools involved, or by the depth of what is going on in those schools? Is the assessment aimed at quality control–making all Essential Schools “stand for” the same thing–or does it attempt to assess diverse grass-roots efforts despite wide variations in practice? And what is at stake in being seen as a success–higher teacher salaries, more professional development, or a school’s very existence in a marketplace that involves more and more choice?

Without a context that acknowledges such questions, any drive for accountability must run aground. Still, self-evaluation is a critical part of the Essential Schools effort, and the struggle to frame that evaluation in terms that are both thoughtful and precise, both qualitative and quantitative, has occupied the Coalition increasingly in recent years. On top of efforts launched by leaders in state government and academic circles, the recent education initiative of the Bush Administration has added impetus to the search for ways to make assessment richer, fairer, and deeper. Key Coalition figures serve on many of the national groups in government, academia, and business that are examining accountability and testing issues. And major projects within the Coalition will soon begin to track student progress over the long term.

Serving as an umbrella over these in-house efforts is a new Coalition unit known as Taking Stock. Under its aegis, a nine-year longitudinal study following Essential School students through high school and the five years afterwards will soon begin. Already underway is a concerted attempt to gather similar information from Coalition member schools on the progress of their students along carefully framed measures. And soon to be formed is a high-level panel of outside advisers to review the work of Taking Stock as well as external studies of the Coalition, and to make recommendations regarding additional assessment research.

What and How to Compare?

One of the key Essential School principles prods teachers to plan their classes backwards from some demonstrable way students can exhibit mastery at the course’s end–not by requiring students to regurgitate textbook facts, that is, but by asking them to link concepts across the disciplines, think on their feet, speak and write persuasively about things that matter to them. What should result, the Coalition argues, is a system of assessment that relies primarily on performances or exhibitions (either at a course’s end or at graduation), and on portfolios of student work demonstrating progress over time.

Can such performance-based assessments be quantified in a way that will satisfy the current bureaucratic hunger for a high-stakes accountability system to measure our schools? Given the considerable differences among classrooms and school communities that the Coalition fosters, can we devise ways to measure students against some “national standard” of excellence without resorting to simplistic standardized tests? A growing national alliance–in academia, in business, and in government–is alarmed by the notion of a lock-step national curriculum, or of one multiple-choice national exam that would determine every student’s chances after secondary school. And for the first time, via several separate public and private groups, substantial money and energy are beginning to pour into finding out the answers.

In the British Isles and elsewhere, examples already exist of record-keeping based on performances and portfolios. In this country, the National Assessment of Educational Progress has begun a pilot study of higher-order thinking skills assessment techniques in science and mathematics. The National Center on Education and the Economy is working on a comprehensive and controversial “New Standards Project” with offices in Pittsburgh, Rochester, and Washington, D.C. And many states, like Vermont and Connecticut, are beginning to include portfolios, open-ended test questions, and “best pieces” in the systematic comparison of student progress. How such assessments are adjusted to reflect differences among schools and student bodies is a subject too complex to describe here, but the approach relies on visiting teams of teachers trained to evaluate the consistency of scoring practices from school to school. An element of public reporting and display is also present in most such plans, so that the community itself can be the final judge of how well its school is doing.

Given the complexity and expense of developing comparative and quantifiable records of student thoughtfulness at the national level, is it worth it? Many in the Coalition–including Theodore Sizer, its chairman, and Deborah Meier, who heads New York’s Central Park East Secondary School–think not. Rather than giving every student in the country the same test, they suggest, why not simply introduce more robust measures of effectiveness at school and community levels–then let the school focus instead on the last step of accountability, holding up student work for each local community’s critical scrutiny? “What is crucial here is who sets the standards,” says Ted Sizer. “The local people whose kids’ minds are at stake must have direct and meaningful access to those who have the power to make or change their curricula.”

Helping that happen–so that state and district officials can craft policies that support those local visions–is the job of the Re:Learning initiative, a joint effort of CES and the Education Commission of the States in Denver. “What we’re finding is that if you begin to develop policies without local visions you’ll never be able to assess where you are,” says Bob Palaich, a senior researcher at ECS. “If you do have that vision, once your district and school agree to focus on Essential School ideas, then you should be able to go ahead and assess students–with authentic performances, performance testing, and basic skills testing.” In any case, he notes, the problem of how to compare students nationwide need not be resolved before one can say whether Essential Schools are making progress.

Collecting and Tracking Figures

If each school community articulated its standards of excellence clearly and routinely exhibited them through portfolios and public performances, local communities would have good evidence of how their students were doing. However, very few of the more than 100 Coalition member schools have reached that point. So how can the public judge whether the Coalition’s principles are worth working and paying for?

Individual member schools in the Coalition are expected to keep track of their attendance rates, graduation rates, standardized test scores, and college acceptances. Some schools keep better records than others, but where comparative data is available (especially in those charter member schools that have been with the Coalition since its inception in 1984), improvement is clear in all those areas. (See figure, page 5.) California’s Pasadena High School, for example, is entering its fourth year of working with Coalition ideas; the 600 students who began ninth grade in 1989 are now juniors with two years of Essential Schooling behind them. The dropout rate at Pasadena was 35 to 40 percent when its principal, Judy Codding, arrived four years ago, she notes. “But in that first Coalition class of 600, we can account for all but seven students still being in school,” she says. “Attendance in our core Essential School classes is up to around 93 percent. And the percentage of D’s and F’s in those classes has gone from around 40 percent to 20 to 25 percent.” Codding finds this “enormously exciting,” and is looking for funds with which she can give the eleventh-grade Stanford Achievement Test this fall to the same students who took the mandated ninth-grade version on entering the Essential School program two years ago. “We also follow their course grades, which reflect how they do on performance-based tests,” she says. “We track disciplinary suspensions and expulsions. And of course we’re very interested in how many kids are actually going to graduate.”

Measures like attendance and discipline may be one indication of student engagement, which is so hard to pin down otherwise. So when schools show dramatic improvements in these areas, Theodore Sizer suggests, one should look carefully at what they are doing right. “I’m beginning to believe, for example, that small schools, or a house system in big schools, do better than big schools on measures like attendance and discipline,” he says. “If they know someone notices, kids show up and they don’t cut up so much.” In schools where such things aren’t a problem, as in many affluent suburban schools, he notes, significant changes in student engagement are harder to judge, short of sitting in on classes to observe how actively the students are using their minds.

Even with simpler indicators like attendance, however, schools differ widely in how (or if) they define and report data; and the Coalition has run up against frustrations in trying to assemble comparable information on student progress across member schools. This is a major challenge of the new Taking Stock effort, which aims to gather and coordinate a broad range of information on just how Essential School students fare, and to publish an annual accounting of that information.

Already, the Coalition has commissioned a pilot study for a substantial research project conducted under the Taking Stock umbrella, and its findings have just landed on the Coalition’s desks. It will lead to a series of annual “Common Measures” reports, says Rick Lear, the CES senior researcher who coordinated the effort from Providence. “Our goal is first to establish the uniform collection of data around the ‘common measures’ such as attendance and the like,” he says. “Then we’ll attempt to establish new ‘uncommon measures’ to follow students with–ways to track less easily quantified qualities such as thoughtfulness, problem-solving, decency.”

Because of differences in how schools collect and record their hard numbers, the most useful part of the School Survey section of this first Common Measures report may be the research team’s recommendation for exactly how schools should record information in coming years. But two other sections–a survey of the perceptions of teachers in Coalition member schools and a similar survey comparing involved and non-involved students in some of those schools–provide some qualitative information, which the team hopes can be amplified and refined in coming years.

Teachers who are highly involved in Essential School activities, for example, say they work harder than they did before; but they also enjoy teaching more and are more likely to recommend it as a career. They notice changes in the intellectual habits of their students, although students overall do not report such a change. On pages 6 to 8 of this issue, the survey findings appear in summary; but what does not show up when the data are compiled are the substantial and fascinating variations (revealed in the individual schools’ replies) in how various schools are introducing and interpreting Essential School principles. In some schools, for example, Essential School teachers experience more respect from their colleagues; in others, it’s quite the opposite. Almost no Essential Schools have taken on the challenge of altering their schedules into longer blocks, but where a school has actually done so, even uninvolved teachers work within the longer periods. Few schools are using portfolio assessments, whether they are very involved in the Coalition or not. Findings like these are primarily useful not because they prove anything, but because they shed light on the complex political issues of change–whether, as ethnographers Donna Muncey and Pat McQuillan point out, teachers in the vanguard are naive in their expectations or use of power, for example, or whether a school is united in thinking reform is even necessary.

“Reading these results, one becomes sharply aware of the kinds of things that get lost when you try to lump together and quantify data from schools that are proceeding very differently and are at very different stages of the process,” says Ted Sizer. “The successes of schools that show major strides may get lost, and schools that haven’t come very far may look better than they should. It’s part of the frustration of assessing a project that in its very design suggests schools play out the process of change differently.”

The fact that more teachers than students appear to be noticing changes in their Essential School experiences does not discourage Sizer. “It reminds us to be realistic,” he says. “We’re talking about changing the direction of a very large vessel. When you spin the wheel, the people in the pilot house may experience the change, but it takes a long time for the ship to move noticeably.” It takes time, Sizer points out, for the changes teachers note–like spending more time talking about learning in faculty meetings–to directly affect the actual experience of students. “If you look at kids who have been at it a long time, you might see more marked changes,” he says.

The Anecdotal Evidence

For a good look at students who have been at it a long time, there may be no better person to turn to than Deborah Meier, principal of one of the Coalition’s most advanced schools, Central Park East Secondary School in New York City’s East Harlem District No. 4. The school has a head start over others in the Coalition because it has been an Essential School since it opened. In 1991 the first class graduated from its Senior Institute, which replaces, at CPESS, the conventional eleventh and twelfth grades. Though a number of the students who entered the Senior Institute two years ago are staying on for another year of study, all who graduated are going on to college, and all but two of these to four-year colleges. Meier has both a strong sense of who those students are and ambitious plans for tracking their future progress.

“Our students are enormously determined, hardworking, and articulate,” she says, “and I think that’s one reason colleges have been so impressed with them in interviews. They talk easily to adults–about themselves, about education, about the things they are interested in. Because our school is built around conversation, these kids feel at home with different conversational styles among adults–they know their teachers as adults, and so they’re used to what adults interested in intellectual things are expecting. This is not true for most high school kids; it’s one reason I believe so strongly that schools must be smaller, or broken into smaller units like houses.”

In the long run, Meier says, one must measure students’ success by how well they do in life–their works and deeds. “If we define ‘well-educated’ as ‘thoughtful and reflective,’” she says, “it’s hard to see how nonreflective, nonthoughtful, decontextualized exams could ever capture it.” Toward this end, CPESS will track the progress of 50 students who graduated this June, following not only hard data but issues like self-esteem and ability to handle challenge. “This is a lot of work, and I don’t know how expensive it will be,” Meier says. “But for me it is more useful than any national assessment. For one thing, schools are a wonderful home base for sharing such information, if they really know their students and graduates. It’s also a way that communities can choose their schools by finding out what’s important to them–and it’s perfectly appropriate for schools to boast about different things. Private schools have always been under less pressure to use traditional assessment, for instance, because they could say 100 percent of their students went on to college. But if a school wants to brag about how many of its students go into political life or into the arts, or make a difference somehow, that’s a value system too.”

Successes like those at Central Park East and Pasadena sometimes lead people to conclude that Essential School principles make the most dramatic changes at big city schools with large minority populations and the problems of urban unrest. But at Brimmer and May, a small private school in Chestnut Hill, Massachusetts, which has just graduated its first class of seniors who started Essential Schooling in the ninth grade, headmistress Anne Reenstierna declares emphatically otherwise. “I’ve been here for eighteen years, and it’s true that our students have always been motivated and good achievers,” she says. “But I have seen a real difference in the classrooms since we began in the Coalition. Students don’t just answer the questions we ask, listen, and take notes. They are highly articulate; they have opinions on everything and are ready to question your opinions and your facts. They look at questions from a much broader, interdisciplinary perspective–that’s very different. I was in a class on the Holocaust recently, and during the discussion students brought in examples from the apartheid system, from the American civil rights movement, and from Kohlberg’s levels of moral development. They initiated most of the questions; the teacher hardly spoke at all.” Shortly before graduation Brimmer and May seniors wrote extensive evaluations of their high school experience, which were overwhelmingly positive, Reenstierna says. “Now other aspects of school life will need to change to reflect that greater student involvement,” she notes. “They will be working more with teachers and administrators on critical issues like curriculum, self-evaluation, and long-range planning.”

Can more qualitative anecdotal evidence like this help at all in assessing how the Coalition is doing? To find out, the Taking Stock effort has just hired Donna Muncey, an ethnographer who has spent the last several years documenting and analyzing school change in a number of Coalition schools. She will head an ambitious nine-year study, following 50 to 75 Essential School students through high school and the five years that follow. “The design problems in setting up a study like this are enormous,” Muncey says. “But we will probably interview parents, teachers, friends, and employers of the students to see if we can tell whether their school experience actually made a difference in how well they use their minds.”

The project is not unprecedented. In the 1930s and early 1940s, the Progressive Education Association tracked 1,500 students from progressive high schools through four years of college, comparing their performance with that of traditionally prepared students. (See story, page 10.) On conventional measures the students did not do much better than their peers, but on tests of problem solving or creativity they did markedly better–and the more boldly their high school had altered its curriculum, the better their performance. The study lends weight, Ted Sizer says, to his hunch that the Coalition schools that are moving most assertively are also the ones where measurable results are most dramatic. “If the faculty identifies what ails a school and takes bold measures to remedy it, you’ll see very visible changes in student performance,” he says.

Looking at Systemic Change

This is probably the case whether a school sees its problems as unsubtle ones like attendance, or hard-to-quantify ones like intellectual passivity. But a whole different angle on measuring the effectiveness of the Essential School reform effort is to look at whether change is happening system-wide–not only in terms of student achievement, but by assessing climate and policy shifts at the school, district, and state levels. For example, one can gauge success by how much say a school actually has over what it teaches and how. Or one can ask how much planning time its faculty has, how many opportunities teachers have for professional development, even how they are evaluated and how much they are paid. Are the district and state replacing conventional standardized tests with new techniques of performance assessment, to bolster such efforts in individual classrooms? Are teachers being allowed to cross subject-area lines without red tape?

At the Education Commission of the States, the chief challenge for Re:Learning is to line up state and district policies so they support individual schools and teachers in the throes of classroom change. “Until we can fully join these levels so that they work together, the picture is one of progress and halt, progress and halt,” says Re:Learning’s Bob Palaich. “But there are good signs. A state Re:Learning coordinator can call a high school principal and say, ‘I have your proposal in front of me, but how are you going to integrate your vocational ed money into Re:Learning’s goals?’ When all the different camps–special ed, Chapter One, accelerated ed, vocational ed, and Re:Learning–stop being independent camps and start working closely together, we will see real movement. We need schools to be internally coherent, not just to have awards on the wall.”

These kinds of messy questions tend to frustrate attempts to neatly assess just how the Coalition is doing. For just that reason, outside observers like Donna Muncey and Patrick McQuillan of the School Ethnography Project recommend tracking its progress with measures that clearly identify what level of the system is being assessed. Designing flexible research that spans a long period of time and crosses levels, they suggest, will better reveal answers in their context.

A true and sober assessment of whether Essential schools are really working must probably wait until many years have passed. But Central Park East’s Deborah Meier speaks passionately about how to reach answers in the meantime. “What the Coalition is saying about high school education is itself an answer to the question of respectful assessment in this country–by the community, by teachers, and by outsiders,” she says. “That’s how we should approach it–each school working through its own assessment with its own community and making it public. Commonalities will appear in threads and patterns, and maybe a system of national assessment will be the theory deduced from that collection of assessments.” She pauses. “I think of my friends who are well-educated people,” she says, “and how different they are. Some hate novels, some can’t stand anything but novels. They write in different ways, think in different styles, have different areas of strength. Stop trying to invent the well-educated person that all schools should produce!”