Volume 11, No. 4—May/June 2001  
FEATURE

The Voucher Vortex
Is school choice a panacea or a peril?
by David Glenn

IMAGINE YOU'RE A state legislator who happens, somehow, to be perfectly agnostic on the question of whether public-school students should receive funding to attend private schools. You're unmoved by the free-market-über-alles tone sometimes taken by advocates of vouchers, charter schools, and other efforts to inject competition into the education system. You're equally unimpressed by their opponents' claim that our existing public-school arrangements are, or should be, set in stone. You're inclined to support particular choice systems if—and only if—they clearly improve students' achievement without worsening schools' already-grave levels of racial and economic segregation.

Imagine, further, that you're extravagantly wealthy, so you needn't concern yourself with the prodigious campaign donations offered by interest groups on both sides of the debate. The Center for Responsive Politics estimates that Democrats in the 2000 federal election cycle received close to $5.8 million from teachers unions, which are staunchly opposed to vouchers and broadly skeptical about charter schools. Choice proponents, meanwhile, spent some $45 million on unsuccessful ballot initiatives to establish vouchers in California and Michigan last year. No, you're determined that your judgments about school choice will be simon-pure. Would widespread choice and competition force public schools to improve, as advocates claim? Or—as skeptics worry—would choice simply allow private schools to skim off the brightest, healthiest, and easiest-to-serve students, leaving troubled students in troubled public schools, to the detriment of both? To answer these questions, you swear that you'll look to the evidence alone.

So you dust off your old statistics textbook and head to your fishing cabin, where you'll sequester yourself for the weekend to read studies of the voucher question. It shouldn't be too hard to find reliable information, you think as you settle down with a cup of tea and a stack of articles and reports. After all, vouchers have been near the center of the debate over public education in the United States for more than a decade. The constitutionality of a voucher program in Cleveland may soon come before the U.S. Supreme Court. George W. Bush's administration has signaled that it will promote vouchers and other models of school choice. In Milwaukee, more than nine thousand low-income students currently use vouchers to attend private schools; the city's system has been in place for almost eleven years. There must be some well-established findings by now. Right?

But over the course of the weekend, your fantasies of well-oiled social science and crystal-clear policy judgments are slowly ground into dust. First, you learn that administrative roadblocks have prevented any large-scale study of the Milwaukee voucher system since 1995, during which time it has grown more than tenfold; meanwhile, researchers sharply disagree about what to make of the data gathered between 1990 and 1995. Second, the best-designed voucher study ever conducted has sparked a public quarrel among members of its research team. Third, some of that study's critics have argued not for further, better studies to address its limitations but instead for shutting down vouchers as a subject of inquiry altogether. And fourth, the field has been plagued by ad hominem attacks and accusations of bad faith. Its best-known scholar, Harvard's Paul Peterson, walked away in disgust (or pique, depending on who tells the story) from a 1999 meeting that was partly designed as a peace summit among the researchers.

At the end of the weekend, you emerge from the cabin—ears ringing, eyes glazed over—and lean on your mailbox for a few minutes. This is going to be more difficult than you'd thought.

Rule 1: Nothing and no one is immune from criticism.

— Ten rules for democratic discourse, proposed by Sidney Hook (1954)

William Howell is wiry and boyish—he's just out of Stanford's political science graduate program—and seems to take an almost physical delight in the work of social science. Howell is now in his first year as an assistant professor at the University of Wisconsin, and as we sit in his small, moderately disheveled Madison office, he insists that however high-pitched the political debates surrounding vouchers might become, researchers can always study the issue with care and objectivity. "My approach has been to kind of lower my head and keep working with the data," he says.

The object of Howell's labor was born in early 1997. That spring, Paul Peterson, a professor of government at Harvard, approached the School Choice Scholarships Foundation, which had just announced that it would offer small scholarships to thirteen hundred low-income New York City students. Winners of these scholarships would be able to attend any private school that admitted them, whether secular or religious. Peterson wanted to conduct a definitive study, and he figured that if researchers became involved with this privately funded effort from the beginning, it could serve as a rough proxy for the publicly funded voucher systems in place in Milwaukee and Cleveland and desired by choice advocates elsewhere. In 1998, other privately funded programs in Washington, D.C., and Dayton were added to the project, and Peterson assembled a research team. (In addition to Howell, the group includes two of Peterson's graduate students: Patrick Wolf, now at Georgetown's graduate public policy institute, and David Campbell, who's still completing his Harvard Ph.D.)

Peterson hoped that this project would avoid two of the hazards that had bedeviled analysis of the early-1990s Milwaukee data. First, under time pressure, the Milwaukee researchers had failed to gather a full array of baseline information about the students in the program. Second—and more crucially—the researchers had been unable to agree on what group the Milwaukee voucher students should be compared with: all Milwaukee public-school students? Low-income students only? Just the students who'd tried and failed to obtain vouchers?

The Peterson team's attempted solution to these problems was to use a randomized field trial (RFT). This technique, familiar from medical research and increasingly used in the social sciences, allows the researchers to sidestep the quarrels about self-selection and comparison groups that plagued the Milwaukee analysts. "What we did was to hold a lottery," says Howell. More than twenty thousand students applied for the New York City scholarships, and thirteen hundred were chosen at random to receive them. The researchers then chose at random roughly a thousand of the lottery's "losers" to serve as their control group. In Milwaukee, scholars had worried that the data could have been skewed by voucher students' unseen characteristics (unusually motivated parents? unusually low rates of learning disability?). But in New York, they could have no such worries, because both groups would be randomly selected. Explains Howell: "Both of these groups are presumptively alike in all respects. This allows you to avoid all of the problems with observational data that require sophisticated statistical techniques to deal with. Here you just compare the groups—where are they going?"

This random-assignment technique is, according to most observers, an important and overdue innovation in school-choice research. The Harvard economist Caroline Hoxby has called RFT "the gold standard" in social science. Cecilia Rouse, an associate professor of economics and public affairs at Princeton, says she's mystified that RFT has been so slow to take hold in this field: "It's one of the tools in the box, but in education it's been almost vilified." (One of the few well-known education studies that has used RFT has had far-reaching policy effects. In the so-called STAR study of 1985-1989, researchers used random assignment to explore the effects of class-size reductions in Tennessee. When California launched its $1.5 billion class-size-reduction program in 1996, proponents leaned heavily on the results of the STAR study.)

The Peterson team's experiment is simple. The researchers collected baseline data from all applicants before the lottery was conducted. Students took the Iowa Test of Basic Skills, and researchers asked parents about their income, education, and religious commitments. Once a year, on a Saturday, students in both the control and treatment groups are asked to return for achievement testing (the next grade's Iowa Test) and questionnaires about issues such as class size and school safety.

In the summer of 2000, with two years of data under its belt, Peterson's team prepared an interim report for the annual meeting of the American Political Science Association (APSA). Its major finding: In all three cities, African-American students—but no other ethnic group—made statistically significant achievement gains on the Iowa Test in reading and math when they used vouchers to switch to private schools. This result was quickly trumpeted by voucher advocates.

When I ask Howell what to make of this finding, he replies that it's much too early to say. When I ask him whether he hopes that ten years from now vouchers will be a broadly accepted part of the education landscape, he answers: "No, I don't hope that. I just don't know how to get my head around that. I can say the words for why you should hope for that, and I can say the words for why you should not. But fundamentally, I don't know. That's why I feel most comfortable being buried in the data—just trying to see if there are effects."

But if Howell is undecided about vouchers, Peterson is widely perceived as an advocate. In recent years, he has produced hortatory essays for Commentary and The New Republic ("A Liberal Case for Vouchers"). Partly for this reason, many of his longtime critics were alarmed by the intense media attention that the APSA paper received. In their eyes, this was a tentative, non-peer-reviewed working paper drawn from a study that wasn't even concluded. No big deal, right? And yet here it was, in the thick of an election season, being written up in The Washington Post and USA Today and celebrated in columns by William Safire and George Will. (The latter, channeling Michael Harrington, declared that one Michigan voucher initiative was "a test of this contented country's capacity to address glaring inequities.")

"On the one hand, Peterson makes these high-toned Olympian statements about the need for randomized studies," says Alex Molnar, a professor of education at the University of Wisconsin at Milwaukee. "On the other hand, you have these tawdry press releases.... Journalists get these reports on their desks, and they make little effort to make serious evaluations of them. They're not set up to address this."

In mid-September, just two weeks after the APSA meeting, reporters found another press release on their desks. Mathematica Policy Research, Inc., a private firm that collaborates with the Peterson team on the New York City component of the study, was publicly dissenting from the APSA paper's analysis. Not only were the New York achievement gains confined to African-American students, Mathematica said, but they were confined to a single cohort of African Americans: those who were in sixth grade in 1998-1999. Thus began what Howell calls "from a public relations standpoint, a disaster." He adds: "When researchers on the same team are saying different things, well, red flags are shooting up."

Rule 8: Do not hesitate to admit lack of knowledge or to suspend judgment if evidence is not decisive either way.

David Myers, the Mathematica senior fellow who directs the firm's work on the New York City voucher study, is relaxed and easygoing—but slightly wary—when we meet for lunch near his Washington, D.C., office. A few reporters, he says, have grossly distorted the disagreement over the APSA paper. "I haven't recognized my own words," he laments. Myers, who is a decade or so older than Howell, clearly takes his work very seriously but also exudes a certain seen-it-all detachment. Vouchers are only a small part of Myers's bailiwick: He also uses RFT experiments to evaluate Upward Bound and other education programs.

The gist of Mathematica's disagreement is this: In 1998-1999, the second year of the New York City study, the team found substantial achievement gains among sixth-grade African Americans. This cohort performed 7.92 percentile points better than its control-group peers on the Iowa skills test. The third-, fourth-, and fifth-grade African Americans, meanwhile, had small, statistically insignificant test differentials: 1.85, -1.93, and 0.93, respectively. Even though the sixth-graders were just one of the four cohorts, their 7.92-point gain was so strong that it pulled the average for all African Americans in the New York treatment group up to a statistically significant 3.27-point gain.
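
The pull that one strong cohort can exert on a pooled estimate is easy to see from those four numbers. The sketch below uses only the differentials quoted above; the published 3.27-point aggregate also reflects the cohorts' actual sample sizes and the team's regression adjustments, which a plain unweighted average cannot reproduce.

```python
# Cohort-level differentials (treatment minus control), as quoted above.
gains = {"grade 3": 1.85, "grade 4": -1.93, "grade 5": 0.93, "grade 6": 7.92}

younger = [gains[g] for g in ("grade 3", "grade 4", "grade 5")]
print(f"Average over grades 3-5:      {sum(younger) / len(younger):.2f}")  # about 0.28

pooled = sum(gains.values()) / len(gains)
print(f"Pooled over all four cohorts: {pooled:.2f}")  # about 2.19, pulled up by grade 6
```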

But was it really correct, Myers wondered, to declare on that basis that African Americans had gained—period—by using vouchers in New York City? "Well, maybe there's an effect for African Americans," he says. "But let's wait and see. The effects last year were so concentrated in that one cohort. You have three cohorts with a net result of 1. And then you have this other group with an 8. Well, do you not believe the 1?

"I don't know if it was because of the hype that was put on the APSA paper by the press or not," he says. "But I do know that as I looked again at our data, what it did was to make me more cautious."

For their part, Howell and his colleagues say that for purposes of the APSA paper, which examined all three cities, it was appropriate to aggregate the New York data. "The more data you can draw from," says Howell, "the more stable the findings, and the more confident you can be that they're real." After all, the aggregate gains for African-American students were 6.5 points in Dayton and 9 points in Washington. In Washington, but not in Dayton, the results were more or less consistent across all grade cohorts. If they'd seen patterns like New York's in all three cities, he says, they might have concluded that "there isn't an achievement effect among African Americans generally; there's only an effect among older African Americans. And that would be a really interesting story to tell, certainly one that we would tell. But we don't see that finding consistently across the cities—it's only in New York that that holds. We were trying to look across these three cities and look at aggregate trends."

In the months since September, the APSA paper has been criticized on other grounds. In particular, skeptics have questioned the study's high level of "missing values." Despite the researchers' fervent efforts, not all members of the treatment and control groups return for the annual rounds of tests and interviews. In the second year, the testing response rates were disconcertingly low: 66 percent in New York, 50 percent in Washington, and 49 percent in Dayton.

These missing values force the researchers to do some elaborate statistical acrobatics: They compare the people who do show up for testing with the original baseline portrait of the entire subject pool. If children of Protestants, say, are underrepresented in the group of testing respondents, those who do show up are assigned weights (they're made to stand for 1.4 Protestants, for example). The Peterson team uses these weights to construct regression equations—a process that involves a serious degree of discretion. How can we be certain that the variables they choose—religion, family composition, baseline achievement results—correctly compensate for the tested subjects' absent peers? And do these techniques adequately offset the possibility that voucher-bearing students with declining test scores are especially likely to drop out of the study?
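
That "1.4 Protestants" adjustment is, in essence, a non-response weight: each respondent is counted as the ratio of his group's share of the baseline pool to its share among those who actually showed up for testing. The sketch below illustrates the mechanism with invented figures; it is not the Peterson team's regression-based procedure.

```python
# Hypothetical group shares in the full baseline pool and among those who
# returned for testing (invented numbers for illustration).
baseline_share   = {"Protestant": 0.35, "Catholic": 0.40, "other": 0.25}
respondent_share = {"Protestant": 0.25, "Catholic": 0.45, "other": 0.30}

# An underrepresented group gets a weight above 1: here a Protestant respondent
# stands for 0.35 / 0.25 = 1.4 people.
weights = {g: baseline_share[g] / respondent_share[g] for g in baseline_share}

# Hypothetical mean score gains among respondents in each group.
gains = {"Protestant": 4.0, "Catholic": 2.0, "other": 1.0}

weighted_gain = (sum(weights[g] * respondent_share[g] * gains[g] for g in gains)
                 / sum(weights[g] * respondent_share[g] for g in gains))
print(f"Non-response-weighted mean gain: {weighted_gain:.2f}")
```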

Howell and his colleagues answer that their technique—using econometric analysis to compensate for missing values—is a common practice in social science. To which the critics reply: Fine; but the missing-value levels here are so high that you should stop claiming that the study meets medical-research standards of experimentation. The sheer number of regression equations, they say, compromises the clarity of the study's results. (One statistician who works on medical RFTs told Lingua Franca that "a general rule in medical research is that if you have more than 20 percent missing values, that's a serious problem.")

For his part, Myers isn't terribly concerned about these lines of criticism. "You can always poke holes in randomized trials," he says. But he is anxious that the Peterson team's raw data sets be released to the broader community of scholars, so that other researchers can "get under the hood...and do their own reanalysis. Not enough of that goes on in this field." In fact, it's on this point—not the disagreement over aggregating grade-level results in New York—that Myers comes closest to explicitly criticizing his colleagues. Over dessert he looks me squarely in the eye and says, "Mathematica feels very strongly about the importance of releasing the data."

Rule 2: Everyone involved in a controversy has an intellectual responsibility to inform himself of the available facts.

Martin Carnoy is a veteran of the New Left—one of his early books, published in 1974, was titled Education as Cultural Imperialism—and, like many liberals, he's concerned that voucher systems may exacerbate inequalities rather than remedy them. In his office late one February afternoon, the Stanford education and economics professor expresses gruff bemusement as he reviews the faults he sees in the Peterson team's work. "Frankly, honestly, maybe there is such an achievement effect," says Carnoy. "But I can tell you that they haven't done the requisite work to demonstrate it."

In January, Carnoy published an extended critique of the New York/Washington/Dayton research in The American Prospect, where he complained, among other things, that it would be impossible to assess the work correctly without seeing the raw data sets. "That's the way things work in academia," he tells me. "You share the data and go back and forth. There are a lot of assumptions built into this kind of work.... In this case, we're guessing what they're doing." (Howell has given Carnoy some raw data from Dayton and Washington at his request. More recently, Mathematica has offered to share the New York data with curious researchers.)

Both Howell and Wolf insist that they're eager to release the full data sets but that—as very junior faculty—it's important for them to have first crack at their data, to get something published in a peer-reviewed journal before being scooped. Terry Moe, a senior fellow at the Hoover Institution, a professor of political science at Stanford (his office is a few hundred yards from Carnoy's), and the author of the forthcoming Schools, Vouchers, and the American Public (Brookings), makes this case more forcefully: "Let's say that you as a scholar have gone to the time and trouble of raising money to carry out your own study—maybe you've spent two years trying to get the money together, collecting the data painstakingly, hiring the staff to do that, and then carrying out your analysis. And then somebody comes along and says, Hey! I'd like your data! What you'd like, I think, is to be assured that you can get a return on your investment. If you think that there are three articles in there, well, you're going to get three articles out of it. Then they can have your data."

It's "a plausible concern," Carnoy concedes. But he adds: "I can counterargue and say that in medical research, your findings usually aren't released until they're published in a peer-reviewed journal." Why, then, Carnoy asks, the APSA paper and the press releases? "They know this is controversial, and yet they want to get it out before it's peer reviewed," he says. "They're sort of being peer-reviewed out in the world, but no one has access to their methods."

This question of peer review has confronted Peterson before. In 1999, a researcher with the American Federation of Teachers (AFT) named Edward Muir criticized Peterson in an APSA journal for releasing school-choice findings without first going through normal academic channels [See Christopher Shea, "Without Peer," Lingua Franca, April 2000]. But perhaps this debate has been oversimplified. Christopher Jencks, a longtime education researcher and colleague of Peterson's at Harvard's Kennedy School, says, "The standard peer-review process is unbelievably slow. I'm quite dubious if you want to say that you have to slow this whole procedure down to the pace of molasses." In the absence of a more responsive system of academic peer review, Jencks proposes an alternative: Funders—like those who have supported the New York/Washington/Dayton research—ought to create a credible review system of their own. Jencks explains, "The funders could say, We're going to arrange for two or three people to take a look at this thing and release their comments along with the original paper."

Rule 9: Only in pure logic and mathematics, not in human affairs, can one demonstrate that something is strictly impossible.

Beyond these disputes about statistical method and academic protocol lies the question of the "cash value" of school-choice research: how elected officials will translate it into real-world practice. "Do we really want to live in a technocratic society where these social-science studies are plugged directly into public policy?" asks Bella Rosenberg, an assistant to the president of the AFT. "Albert Shanker [the late president of the AFT] used to say, 'Every education experiment is doomed to be a success.'" In other words, each new curriculum or pedagogy tends to be studied by its early, zealous creators, who unsurprisingly find that it works—and then the program is foisted on districts far and wide.

Studies that use RFT techniques may be less prone to bias, but they naturally cannot address the long list of contentious issues that school choice raises: segregation, funding equity, school discipline, religious expression. Evaluating voucher programs' "systemic effects" is a thornier business than looking narrowly at their impact on student achievement. School-choice proponents believe that vouchers will force public-school districts to get their acts together in order to compete with the private schools threatening to siphon away their best students. Opponents fear the opposite effect. But how would one test either intuition? Voucher advocates could point to recent changes in the Milwaukee public schools—but how to distinguish which reforms were motivated by voucher competition and which by new statewide high-stakes tests?

Jay Greene, a senior fellow at the Manhattan Institute for Policy Research, released a study in February in which he found that Florida's new voucher system—which gives vouchers only to students in schools deemed "failing" by the state—has indeed motivated schools to improve. Schools graded F by the state and given a year to shape up made sharp gains, whereas schools graded D, and therefore not facing a voucher threat, made far less impressive progress. (Two Rutgers scholars, Gregory Camilli and Katrina Bulkley, have already begun to contest Greene's findings, charging among other things that he improperly aggregated achievement scores across grade levels.) Meanwhile, Princeton's Cecilia Rouse and several colleagues have begun their own six-year study of the systemic effects of Florida's voucher system. They'll consider a range of not easily quantifiable matters: How will public schools change their instructional practices? What types of new private schools will emerge to fill the voucher market? How will the system affect the finances of local school districts?

"Education debates always tap into such dearly held values," says Helen Ladd, a professor of public policy studies and economics at Duke. "When push comes to shove, it's not a matter of whether the coefficient is 0.2 or 0.3." Ladd and her husband, Edward Fiske, who covered education for The New York Times for seventeen years, have attempted to address a broader range of issues by conducting a study of New Zealand's decade-old system of school choice. In When Schools Compete: A Cautionary Tale (Brookings, 2000), Ladd and Fiske raise a number of questions they believe U.S. voucher proponents should bear in mind: How will governments deal with schools in their death throes—schools that have lost market share, and therefore funding, and don't seem to be making a recovery? Should schools serving students from low-income families or historically oppressed minorities (like New Zealand's Maori) receive extra funding?

Terry Moe is a zealous choice proponent, but he shares one worry with Ladd and Fiske: that U.S. lawmakers will seize upon narrowly defined studies of particular school-voucher programs and neglect a broader range of qualitative evidence. He worries that schools of education have a tight grip on public discourse ("They stay away from certain subjects. Why isn't there a huge scholarly literature on the teachers unions?"). Therefore, he's concerned that the schools will guide elected officials to take a single negative finding (say, that a given voucher program dramatically increased racial segregation) and use that as an excuse to scrap vouchers altogether.

Moe argues that individual bad outcomes should lead us not to abandon school choice but to tinker with the design of the particular program: "If you set up a choice system and see that it's the upper-middle-income, more motivated kids that are taking advantage of it, then you could say, Let's just have a choice system where only the low-income kids get to choose. Or you could impose certain rules on the schools that require them to take a certain percentage of low-income kids. These are design issues."

This point is well taken. But will real-world governments be willing to tinker with choice designs in midstream, as Moe suggests? It's tough to imagine legislators suddenly choosing to take vouchers away from upper-middle-income families.

Rule 5: Before impugning an opponent's motives, even when they legitimately may be impugned, answer his arguments.

Casually informed citizens may have seen coverage of the Peterson team's findings during the first spate of publicity in late August. Later that fall, they may have read about Mathematica's dissent. "Can't these people get their act together?" they might have thought to themselves over breakfast. "And didn't the RAND Corporation recently come out with two dueling studies about Bush's education policies in Texas? What's social science good for, anyway?"

The AFT's Bella Rosenberg says she fears just this sort of cynicism. The intensity of the school-choice debate, she worries, "promotes an anti-intellectualism." She explains: "The public tends to say, A pox on both your houses—science doesn't matter, facts don't matter, research doesn't matter—because for every study you're going to get another that contradicts it."

It's a natural, lazy impulse: Rather than make the effort to study and unthread these controversies, we simply assume that the controversies' very existence discredits the whole enterprise of social science. And from that lazy impulse, it's a short leap to the next stage of cynicism: assuming that each and every piece of research is corrupt—that it's all ultimately driven by funders' interests and commitments.

Sometimes this habit can be comic. On February 22, Edison Schools, a for-profit operator of public schools, issued a press release criticizing a new Western Michigan University study that cast doubt on its schools' performance. The Western Michigan study was funded by the National Education Association (NEA), and Edison headlined its press release UNION-SPONSORED STUDY PROVIDES PREDICTABLY BIASED EVALUATION OF EDISON SCHOOLS. The text of the release referred again and again to the NEA's role in funding the study. Just six days later, on February 28, Edison issued a second press release, this time to praise a much more favorable Columbia University study, which happened to have been funded by the same union. The Columbia report's fairness, this release said, reflects "the professional cooperation we have come to expect and value from the NEA."

In the case of Peterson's study, this cynicism takes the form of beginning and ending one's critique by pointing to its funders—which include the Lynde and Harry Bradley Foundation and the Thomas B. Fordham Foundation, both of which passionately support school choice. Myers recalls that when he released a report on the first year of the New York experiment, none of the report's critics called to ask for the data or to look closely at his methods. "They just said, Ah, you're working for that crazy voucher advocate Peterson."

The philanthropists who've funded the scholarships presumably have an interest—if only not to lose public face—in demonstrating that they work. But Myers, for one, insists that there's been no interference. "I have to give the philanthropists credit," he says. "They've been very responsible with this. They've never tried to influence us, even though they have a vested interest. They're appreciative of research." Howell, meanwhile, says that he's never dealt with the funders at all; that's Peterson's department. "Milton Friedman hasn't been fogging up my glasses," he quips.

This brings us to perhaps the most depressing recent episode in the voucher wars. In 1999, the Center on Education Policy (CEP), a think tank based in Washington, D.C., tried to initiate a discussion among voucher researchers, with an eye toward launching a new round of studies that would avoid even the appearance of funding bias. It would be supported equally by liberal and conservative foundations, and it would bring together researchers from all sides of the debates.

Rule 10: The cardinal sin, when we are looking for truth of fact or wisdom of policy, is refusal to discuss, or action which blocks discussion.

Jack Jennings, the director of the CEP, has a gentle, weary voice. He served as a staff member for the congressional education and labor committee for nearly three decades, from 1967 until the Republican revolution of 1994. "I was there for the first fight over renewing Title I," he says, "and I was there for the voucher debates of the early 1970s." This largely forgotten episode occurred when school vouchers were briefly promoted by antistatist New Leftists and idiosyncratic liberals like Harvard's Jencks. In 1970, the federal Office of Economic Opportunity offered to fund several field trials of vouchers—but only the small town of Alum Rock, California, took up the offer. Even there, the model was watered down and the results inconclusive.

In 1998, having retreated from Capitol Hill into the think-tank world, Jennings felt sick at heart when he reflected on the continuing bitterness of the country's school-voucher debates. His policy organization, the CEP, had for some time hosted a national forum on public and private education, bringing together leaders from each realm for civil conversations. Perhaps, he thought, the CEP could play the same role for the world of school-voucher research.

In mid-1999, armed with a small grant from the Spencer Foundation, Jennings convened a series of meetings that brought together some forty people: researchers such as Peterson, Myers, Molnar, and Rouse, along with representatives of interest groups (unions, Catholic school federations) and program officers from a diverse array of foundations. This assembly discussed unanswered questions in school-choice research, and the CEP issued a report based on these talks in the summer of 2000. And yet, Jennings recalls with a sigh, all of the group's attempts to endorse a specific plan of research ended in a stalemate. Peterson and Chester Finn Jr., the president of the Thomas B. Fordham Foundation, wanted the assembled group to endorse their call for a large-scale, federally funded voucher demonstration program. When they found no takers, Peterson and Finn abruptly (and petulantly, according to some accounts) withdrew from the group. "The limitation of those meetings was that they had a lot of people who were representative of organized interests," Peterson says curtly.

Meanwhile, Jennings had vague hopes of putting together an improved study of privately funded vouchers—with more oversight, more ideologically diverse funders, and perhaps larger scholarships. But the representatives from the AFT, the NEA, and the Parent-Teacher Association refused to support such a proposal, objecting that any further large-scale experiments could damage already-fragile urban school districts. "I was willing to make an effort and let the chips fall where they may," says Jennings. "But only the U.S. Catholic Conference was willing to participate in anything. This was an opportunity to put together research that couldn't be impugned, research with appropriate checks and balances. But it seemed to me that both sides were afraid of the results."

In the absence of additional research, Myers says, the question of vouchers will remain a festering sore. "This will sound kind of crazy," he explains, "but the simplest way to do it is to say we'll randomly pick, I don't know, a hundred school districts where we'll offer vouchers, and we'll randomly place another hundred districts in a control group. And then we're going to watch these places over the next four, five years, and see how they change.... I think that's the simplest thing to do. It's simple to understand, it's rigorous, and I think people would be accepting of the results. It's such a huge issue right now, and we need evidence on it, either to keep vouchers on the table or get them off the table. I think we just need to have the backbone to do that."

David Glenn is writing a book on the city of Milwaukee and the future of the social contract. His article about Goddard College, "Speaking Up," appeared in the October 1997 LF.




 
 
 
