But Did It Work? Time to Focus Government Data on Outcomes, Not Inputs

Nutrition North Canada was a program with a great goal: make nutritious food available to remote northern communities to improve health and well-being.


While we in the south talk about food deserts when we can’t walk to a grocery store, the Canadian North is truly a food desert: not only may communities lack permanent road access, but their capacity for any kind of agriculture ranges from limited to nil.  A government program designed to get fresh and healthy food into these communities should be lauded.  But in November 2014, the Auditor General found that despite the good intentions, there was no evidence that the program was working.  It might be working wonderfully, but there was no evidence either way.  Despite plans to collect data on whether the food was both affordable and actually purchased, the only data available were the amount shipped and a rough estimate of what a northern food basket might cost.

“However, although the weight of items subsidized increased by about 25 percent, Aboriginal Affairs and Northern Development Canada has not managed the Program to meet its objective of making healthy foods more accessible to residents of isolated northern communities as it has not identified eligible communities on the basis of need. Neither has it managed the Program to meet its objective of making healthy foods more affordable as it has not defined affordability nor has it verified that northern retailers are passing on the full subsidy to consumers. As well, the Department has not captured the information needed to manage the Program or measure its success. It has also not implemented the Program’s cost containment strategy.”

http://www.oag-bvg.gc.ca/internet/English/parl_oag_201411_06_e_39964.html#hd3b (Accessed December 1, 2014)

As a former government employee (not federal), and a consultant who helps organizations improve their performance measurement, I can sympathize with the employees at the Department of Aboriginal Affairs and Northern Development Canada.  Outcome measurements are really hard to identify and even harder to calculate.  The AG’s report alludes to this with respect to the affordability of the food.  One of the elusive pieces of information is the retailer’s profit margin.  To calculate the true cost of the food, the retailer would have to provide that information, and we would either have to trust that it is correct or conduct regular audits.  Requiring that it be provided risks retailers dropping out of the program because they don’t want to make commercially sensitive information public, and conducting audits is expensive and time-consuming.  Yet, at the end of the day, all we know is that a lot of food was shipped, not whether it was consumed.  We know even less about whether it actually benefited the community through improved health indicators.

Governments are really good at measuring inputs and outputs because they control them.  Employees are happier about being held accountable for information they can manage.  When they start to be held accountable to data that is hard to capture, hard to trust and replete with “confounding factors” (external influences), they understandably resist.   In part this is a natural human tendency to avoid criticism.  In this day and age of constant audit and criticism of government, even the most well-intentioned public servant will duck for cover if held accountable to a standard outside their control.

Yet the time has come to emphasize outcomes.  As the costs of education and health care continue to consume the majority of budgets, and as we learn more about the true drivers of ill health – lack of education and poverty – we must stop funding programs and services that don’t work and start supporting the ones that do.  That doesn’t mean we need rigorous, double-blind testing of every new innovation before we can start, but it does mean we need to know the objective of the program, the baseline state and the change over time.  If we don’t know how to measure the baseline, how can we postulate that our proposed intervention will change it?

I accept that there are many confounding factors to outcomes like improved health; it is typical of large government programs that measuring success will be confounded.  The Social Determinants of Health is a great framework for understanding how race, gender and poverty have intersecting impacts on health.  Free will is also a powerful confounding factor: we may ship all the healthy food in the world to remote communities, but that doesn’t mean the people there want to eat their broccoli any more than the rest of us do.

Mark Friedman of the Fiscal Policy Institute is the creator of a performance measurement framework called Results Based Accountability™.  In his framework, a complete measurement system requires measures that answer the question “Is anybody better off?”  It’s a simple question.  We shipped XX kg of food to the north and it cost $YY, but is anybody better off?  Friedman recognizes that in the real world, we can’t wait to run medical-model research trials to know if a program will work.  Rather, we need to measure a baseline condition in the community (consumption of healthy food), try something that we believe has a reasonable chance of working (lower the price of that food), and measure the condition at a future time to determine whether it worked (increased consumption of healthy food).  What the AG found is that while we believe consumption of healthy food is good for people and that they would consume more of it if costs were lower, we have no idea whether Nutrition North actually increased healthy food consumption.
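The baseline-then-follow-up logic above is simple enough to sketch in a few lines of code.  This is only an illustration, not anything from the AG’s report or Friedman’s materials: the community names, the metric (weekly household servings of fresh produce), and all the numbers are invented for the example.

```python
# A minimal sketch of baseline vs. follow-up outcome tracking.
# All names and figures below are hypothetical, purely for illustration.

def percent_change(baseline: float, follow_up: float) -> float:
    """Relative change in an outcome measure between two survey periods."""
    return (follow_up - baseline) / baseline * 100

# Hypothetical metric: average weekly household servings of fresh produce,
# surveyed before and after the subsidy took effect.
communities = {
    "Community A": (4.0, 5.2),
    "Community B": (3.5, 3.4),
}

for name, (before, after) in communities.items():
    change = percent_change(before, after)
    print(f"{name}: {change:+.1f}% change in healthy food consumption")
```

The point is not the arithmetic, which is trivial; it is that the calculation is impossible without a measured baseline and a measured follow-up, which is exactly the data the AG found missing.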

Nutrition North is not alone.  In my work I often see clients struggle to identify and measure the outcomes of programs like providing clothes to low-income families or health information to youth.  It is possible that the consistent underfunding of social programs is one of the reasons we don’t measure, or it may be that in the social sector we don’t want to waste time or money measuring when we would rather just do what we “know” is right because the clients are in crisis.  But it is time to commit to better understanding whether our good intentions are really working or are just a road to more bad audits.