Arne and Michelle vs. Larry: The Statistical Battle

By Robert Plants
Editor, Schools for the 21st Century

I opened my newspaper this morning to an article titled “ACT Scores Dropping but More Students Are Prepared for College.” I asked myself how this could be possible when other reports say that schools and teachers are not preparing students for future learning.

But I’m getting off my topic, which is the research-based finding that “more than 90% of the variation in student gain scores is due to the variation in student-level factors that are not under the control of the teacher.”

Another interesting note from those who have evaluated value-added methods, and particularly the one used in LA, is that they allow for as much as 26% error in teacher ratings. To put it another way, 26% is a large labeling error to make regarding someone’s chosen vocation. To my mind, it opens one to litigation.


Who’s on First for the Education Reform Pennant?

By Bonnie Bracey Sutton
Editor, Policy Issues

As I read about solutions for addressing our nation’s educational problems, I am prompted to ask, Who’s on first in the race toward the best answer?

This question echoes Abbott and Costello’s “Who’s on First” routine. It is worth listening to if you don’t remember it or haven’t heard it.

My dad was a teacher. He used to say that if education were an airplane, it would never get off the ground because everyone aboard would think they knew enough to fly it. His point was that the arguments in the cockpit would ensure that the plane would never leave the runway.

I’m not trying to make fun of anyone, and I’m not picking on the President. But I am talking about the media and about us, educators. The bottom line is that discussions about education aren’t funny, especially when you think about all the nuances of the problems that make them extremely difficult to solve.

Investing in Innovation Fund: Criteria May Be a Barrier to Some Innovators

By Harry Keller
Editor, Science Education

Arne Duncan, the new Secretary of Education and much-praised former CEO of Chicago Public Schools, is applying $650 million of his ARRA (federal stimulus) money to a new initiative: the Investing in Innovation Fund (i3). At first glance, this program is bold and should bring some much-needed innovation to education, at least in part through education technology and change.

As with any such program, the details will make all the difference. The rules for the i3 program have already been released in preliminary form and are going out for comment. Their thrust is commendable. The rules require that all proposals come from LEAs (essentially school districts), non-profits affiliated with LEAs, or consortia of schools.

The program lists four “absolute” priorities. Your proposal need only meet one of these. In short, they are teacher quality, data use, standards and assessments, and low-performing schools. The program goes on to list four “competitive” preferences: early childhood programs, college access, disabilities and limited English proficiency, and rural schools. Addressing these preferences will gain evaluation points.

The program will provide grants in three categories: scale-up grants, validation grants, and development grants. Grant size runs from high to low across these three, as does the required level of evidence for the proposal’s effectiveness, including the research behind the effect, its significance, and its magnitude.

All those interested in innovation in education in the United States should be prepared to comment on this major initiative. I see the program’s major weakness as its dependence on experimental studies for deciding which proposals to fund. Perhaps few alternatives present themselves. Still, I have read a number of studies that purported to prove opposite conclusions. I cannot imagine the equivalent of a double-blind study in education, because the instructor and students know that they’re doing something different, and change itself affects performance.

Then there are the expectations: expecting better results generally creates better results in these studies.

Here’s one definition of “strong evidence” from the draft of the program: “one large, well-designed and well-implemented randomized controlled, multisite trial that supports the effectiveness of the practice, strategy, or program.” This definition raises the question: who can afford to conduct such a trial? Are only wealthy purveyors of education innovation eligible for this sort of grant? Will the “usual suspects” garner all or nearly all of this federal largesse?

Looking at the criteria for the three grant types yields some interesting information.

[Table of criteria for i3 grants]

Even if you’re ready to scale to national level, you must have “strong evidence” before doing so. What is “national level” anyway? The draft program requests estimates of costs for reaching 100,000, 250,000, and 500,000 students for validation and development grants. It requests estimates for 100,000, 500,000, and 1,000,000 students for the scale-up grants. Does that mean that national scale is just double regional scale?

As a small business operator, I find myself in a difficult position in the education marketplace, and this program simply underlines my situation. I cannot afford to conduct large studies of my effectiveness, yet without such expensive studies, I have trouble attracting the business or investment needed to fund them. So, for now, I have to rely on anecdotal evidence, including quite a few enthusiastic testimonials.

For the sake of argument, let’s say that my business has a truly transformative innovation. I can certainly say that it’s new, unique, and exciting without running studies, but let’s assume that such studies would prove the case for my service. The publicity benefit of validation from the Department of Education, in the form of one of these grants awarded to a school district using my service, would be huge. If my assumption were true, then schools across the country would reap large benefits too. Yet I see no clear path to such a result.

I believe that my situation is not unique. Others must have excellent innovations ready for wider deployment yet face numerous obstacles in getting there. Can this government program be amended to provide the opportunity for such innovations to be recognized? If so, how?