Are you measuring what’s important?

Measurable Does Not Mean Important

Yesterday, I received E. Jane Davidson’s book, Actionable Evaluation Basics: Getting Succinct Answers to the Most Important Questions, in the mail. I love that it is so simple, clear, and brief! Flipping through it, one phrase jumped out at me: “Measurable does not mean important.”

Isn’t that the truth! At CES, we do quite a bit of evaluation for youth development and afterschool programs. Some of these programs require grantees to collect a lot of data, and much of it is not very important or useful, at least from the perspective of some of my clients and me. Grantees are required to report on a multitude of indicators that interest the sponsoring federal or state agency but not necessarily the local site. For example, one of the many things a site may have to report is the change in students’ math and/or reading scores, and that change can be as small as one point. We all want children to improve, but just how meaningful is a one- or two-point change in scores in the grand scheme of things? What the program is really interested in are questions like: Are kids safe after school? Do they get the extra help they need to understand their homework? Does the program help them earn the credits they need to graduate from high school on time? Has the program helped improve literacy? By the time we answer all of the required items, there is little time or money left to answer more interesting (and important) questions for these underfunded projects. Moreover, we have found that funders are uninterested in outcomes that are not on the required list.

Another consequence of measuring unimportant outcomes is failing to measure things that are important. Over the years, we have worked with many programs housed in schools, some serving transient and/or immigrant populations. Funders with very specific requirements and processes sometimes fail to consider cultural influences and the complexity of the systems that surround these children.

Furthermore, grantees are sometimes required to enter data into a “do-it-all” database, which naturally means the database doesn’t do anything very well. Data entry is usually difficult, and data export even more so. To make things easier on us (the evaluators), we could require programs to do double data entry, but that would only add to staff burden.

It’s these kinds of situations that give evaluators and evaluation a really bad name. Funders aren’t the only guilty party here. Programs and even evaluators may choose to measure the low-hanging fruit rather than the levers that trigger the outcomes in which we are most interested. At the most recent AEA/CDC Summer Institute, John Gargani challenged listeners to think about whether they were really making an impact. We should ask that question of the programs we serve and the evaluations we design. Of course, it would be nice if funders asked themselves the same question. What’s your measurement story?
