In Search of Better Data about Nonprofits’ Programs

04/29/2013

What are we really asking for when we require nonprofits to produce data on performance, effectiveness and impact? While the surface logic is clear – we need to know this information – the full context and set of assumptions surrounding the request bear closer examination. Laura Quinn, founder of Idealware, breaks it down to reveal the barriers to generating good data and, further, calls out a few ways that the sector can and should support any request for more and better data.

Few would dispute that there is a lack of transparency in the nonprofit sector, but the blame for that lack of transparency falls all too often—and far too easily—on nonprofits themselves. The refrain goes, “If only they would collect better data and better show their impact, it would be clear to funders and donors where best to spend their money.” While this type of thinking is hard to refute in theory, in practice it’s almost impossible to live up to.

To illustrate, let’s play through a hypothetical scenario: Let’s say you’re the data and program evaluation manager for a mid-sized human service nonprofit that provides counseling to victims of domestic violence in the large city of Springfield, with about 35 social workers in the field. It’s your job to help oversee the data systems, analyze data to identify how programs are going and how to improve them, report to funders and foundations on what they want to know, and to think strategically about how you’re measuring and evaluating in general.

With all the recent interest in data and measurement, you have substantial buy-in from your executive team to think strategically about how you can best use data—after all, they hired you, and the very existence of your position speaks to their commitment. You also have the luxury of solid data systems that allow staff to enter data from any browser and see case data for their own clients, and that let you pull high-level numbers and reports on a number of important metrics.

Sounds like you’re in good shape, right? Compared to a lot of nonprofits, you are—but you still have a lot to juggle. What are your biggest headaches likely to be?

  • Data Quality. Your social workers are generally on board with the idea of systematically entering data, but they’re already overworked and underpaid—should they stop to enter data if it means putting a woman’s life at risk? Entering data sometimes falls off the bottom of their critical priorities list, leaving the data out-of-date. You’re thinking through options: Would giving them mobile devices to enter data from the field help—and can you find funding for that? How about simply being stricter about data being part of their job—would that help, or would it damage morale for critical client-facing staff? What about trying to find the budget to hire someone just to help with the data entry? There are no easy solutions.
  • Providing Data to Funders. Let’s say your organization receives funding from two different state programs and three foundations, which is not at all unusual. There’s no standard set of metrics, so each foundation asks for its own, often requesting similar metrics with meaningful differences in definitions—so, for instance, one asks for detailed data on children vs. adults served and defines children as under 16, while another asks for similar data but defines children as dependents living in a parent’s household. What’s more, two funders ask for client-level data so they can do their own analysis. For one, you can download the appropriate data from your system and upload it to their system, but the other won’t accept an upload, so you need to copy and paste from your database, one field at a time, all the data about the constituents you’ve served under their grant. (This may sound agonizing, but it’s not rare. A number of funders—especially government entities—require detailed data but don’t accept any form of upload or automatic data transfer, apparently expecting that nonprofits will not have any data systems of their own.)
  • Meeting Changing Data Requirements. It’s complicated enough providing all the metrics funders want, but every year about a third of your funders change their data requirements. What’s more, you’re not likely to be reporting to all your funders at the same time each year, so several times per year you’ll need to update your reports, your processes, and maybe even your systems to account for new requirements.
  • Defining How Best to Measure for Improvement. A huge part of your job is making sure you have the right data to report to funders, but is that data actually useful to your organization? Does it help you understand what’s working and improve what isn’t? At best, funders are likely asking for a lot of disparate data, requiring some strategy to figure out how best to use it to improve your own programs. More likely, some of what would be truly useful to internal improvement requires additional reporting and analysis, so you need to make time to work with executive management to define precisely what should be measured and how, and to make that happen.
  • Trying to Measure Impact. These days, everyone wants information on actual impact. Many people will tell you it’s not enough to know how many people you’re serving or what happened to them after you served them—it’s also critical to understand the long-term impact of your services on the community. There’s just one problem: This type of measurement generally requires extensive, university-level research—often with control groups, enormous budgets and large spans of time. If someone had already done research relevant to your services, you could use that to define your impact based on more easily gathered data, but unfortunately, nothing exists. (In fact, it’s rare to be working in a program area where this kind of research does exist). Funders don’t seem interested in funding this type of research for the good of all the organizations doing this type of work, but seem to expect your organization to be able to produce it on your own with your very limited data and evaluation budget.
  • Fending off Bad Research. With so many demands for data that isn’t really “knowable,” it’s tempting to take on research projects that might appear to address them but don’t provide any real value to your organization. Which means you spend a lot of time trying to dissuade the powers-that-be from taking on foolish research projects that can’t possibly provide useful data on your limited budget.
  • Proving Your Value. Even as you think through all this, you’re often called upon to prove that the money the funders are spending on you makes sense—after all, your salary isn’t directly going to help the enormous number of women who need help, and who’s to say all your work isn’t just a waste of money? You’re asked on a monthly basis to show how you’re saving the organization money or helping with fundraising, and there’s always the lurking danger that the executive team will no longer prioritize data and evaluation and you’ll be out of work.
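As a concrete illustration of the “Providing Data to Funders” headache above, here is a minimal sketch in Python—with invented field names, implying no real funder schema or data system—of how the same client records yield two different “children served” counts under two funders’ definitions:

```python
# Hypothetical client records; the field names are illustrative only.
clients = [
    {"client_id": 1, "age": 15, "dependent_in_parent_household": True},
    {"client_id": 2, "age": 17, "dependent_in_parent_household": True},
    {"client_id": 3, "age": 17, "dependent_in_parent_household": False},
    {"client_id": 4, "age": 34, "dependent_in_parent_household": False},
]

# Funder A defines "children" as clients under 16.
children_funder_a = sum(1 for c in clients if c["age"] < 16)

# Funder B defines "children" as dependents living in a parent's household.
children_funder_b = sum(1 for c in clients if c["dependent_in_parent_household"])

print(children_funder_a)  # 1
print(children_funder_b)  # 2
```

Same program, same clients, two different “children served” numbers—and this sketch covers only one metric for two funders. Multiply it across dozens of metrics and five or more funders, each with its own reporting calendar, and the reporting burden described above comes into focus.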

Not an easy job, right? Some might say it’s nearly impossible.

But for many, if not most, small to midsized nonprofits, the reality is even worse. Remember, this example assumed that you had the money and buy-in to get up and running with solid data systems, which is probably not an accurate assumption for the vast majority of nonprofits. It also assumed that there was actually a person in the organization able to put any strategic thought to using data effectively on top of all the things needed just to keep the doors open—again, not a likely assumption.

The point of this hypothetical exercise is, primarily, to show that we can’t assume nonprofits have the resources to provide high quality data about their own effectiveness. While that might seem like an easy and obvious thing for them to be able to do, it’s not—not in the least.

How do we get them to a point where that’s possible? It would take more than just a little training or a second look at their priorities. They’d need sizable investments in a number of areas. They’d need help with technology, and to understand how to best make use of data and metrics on a limited budget. They’d need a rationalized set of metrics and indicators that they’re expected to report on, standardized as much as possible per sector with a standard way to provide them to those who need them.

Funders need to understand what is and isn’t feasible, and to redirect the focus of their desire for community impact evaluations from small nonprofits to the university and research world, so the nonprofits they support can work, unencumbered, toward a better world.

We all need to understand that if we as a sector lean on nonprofits to provide data they simply don’t have the infrastructure to provide, what we’ll get is not better data—in fact, we may get data that’s worse. Organizations pushed to provide impact data to get funding will provide something, but it’s not likely to be the high quality data or strategic metrics that would actually help them improve, or that would help the sector learn anything about the effectiveness of the services they provide.

These organizations rely on funders to help them meet their missions, but sometimes the burdens put on them by the reporting requirements that come with that funding can make it more difficult for them to carry out their work.

  • Penny Black

    This is a great post! I think you hit the nail on the head in several places – inconsistent reporting requirements and definitions, different reporting timelines, and the questionable value of the required data to the organization. Well articulated – thank you!

    • TheraNest

      This is a very important issue, and the example of the human service organization hits home. The data points here should be captured by good agency management software. The problem remains that many such systems require too much implementation time and cost, and have large ongoing costs.

      Board members and funders also need to realize that tools cost money but more than pay for themselves when used, and they shouldn’t shy away from providing the tools their organizations need to get the kind of data they are looking for.

      I’ll plug TheraNest and say that we are really committed to providing a solution to this issue; that’s why our solution doesn’t have a limit on the number of staff members or charge per staff member. When you charge per staff member, it quickly becomes unaffordable for most human service organizations to provide the tool for their staff.
      Instead, we are leveraging web technologies and a software-as-a-service model to provide our solution, and then we listen to users’ feedback to continually improve.

      Measured outcomes can lead to improved outcomes, which lead to improved lives. You just need the right, easy-to-use tools, the buy-in of all stakeholders, and actions based on data.

      http://www.theranest.com

    • Matthew Pike

      Yes, this post provides the reality check we all need in grappling with this agenda. A few points: 1) we need an open source approach to building a shared language around impact measurement – shared metrics, tools, etc.; 2) data tools should be free, at the very least for nonprofits; 3) the approach needs to be simple and intuitive to get vital frontline staff on board; and 4) the culture should be about a collaborative approach to learning and improvement – with the user/client voice centre stage. That’s the way we are taking things here in the UK.

      Matthew Pike, ResultsMark

  • Trina Willard

    A completely spot-on analysis, Laura. I’ll add this – I believe that evaluation can be very valuable at the program/organization level, particularly for the purposes of ongoing program development. Evaluation research is oriented more toward practical application and improvement than toward establishing causality, which, as you noted, is most feasible within the academic setting. Nevertheless, I wholeheartedly agree that nonprofit organizations across the board need additional support, both financial and human, to plan and implement such efforts. While accountability requirements have increased over the past few decades, training and resources have just not kept pace to support their accomplishment.

    • Laura Quinn

      Thanks for your thoughts, Penny and Trina! I’m glad the post resonates for you. Trina, I certainly agree with you that program evaluation is very valuable– I didn’t mean to imply otherwise. Just that program evaluation shouldn’t usually equate to determining the impact of the program on the community (which is where the larger research projects come in!)

      • Trina Willard

        Laura – We are definitely on the same page! You stated the case clearly and I just hoped to lend a supporting voice! (Sorry if it came across as though I interpreted your words differently.) I agree wholeheartedly that determinations of community impact have much different goals and cannot be accomplished by local programs in isolation. That message seems to get lost sometimes and I’m so glad you’ve raised it here!

  • Adrienne Hinds

    Great and true post! A major challenge is to “educate,” or let’s say lead, the funders into agreed-upon, useful metrics (and not reductive outputs) up front as a part of the application. More often than not, assessment is not part of the initial conversations beyond the infamous “you will report back in one year to share your results.” It is much more effective and efficient (again, metrics for organizations) when measurable objectives are established up front, known by all parties, and not changed in the middle or, worse, at the end without mutual consent. Knowing the measures up front helps direct resources such as software/technology selection, as well as human resources, to determine which skills are most appropriate for the task.

  • Andrew

    Great article! As someone fully invested in the nonprofit data world, I see tremendous opportunity but many dangers and traps as well. I think data is and will continue to transform the way the nonprofit sector operates, but it must mature, and nonprofits need to gain the human capital and technical expertise to do this work well and not cause damage with bad data.

    • Eric J. Henderson

      Hello, Andrew: Thank you for this comment. In agreement with the context you and the other commenters have painted here, we look forward to furthering this discussion, keeping mindful of the pitfalls and, at the same time, figuring out what we can do at this point to build a proper foundation upon which to execute. Let’s all keep eyes open to share examples of orgs making headway. All: feel free to share your own here at Markets For Good – case by case, brick by brick. Thanks for the lively discussion here! Eric J Henderson, Curator.

  • Debby Zelli

    Laura, your post is spot on. As someone who has a heavy background in evaluation, has run grants for a pass-through, and is now in a local domestic violence program, this is a daily struggle for me. And evaluation is merely one of my “other duties as assigned.” There is some buy-in from leadership, but when I report the outcome evaluation data we have, I’m told there must be a problem with the questions because we need to meet the unrealistic goals set by funders to keep funding. I’ve been working on our data problems for over a year and have improved things significantly, but we still don’t have a good data system and there are problems at every level of the process. Staff don’t have time to enter accurately, the system itself never pulls the same report twice, the funders all have different definitions and time periods, the politics are problematic, and then there’s the question of how you really assess impact with something as slippery as domestic violence. And let’s not forget high turnover rates with staff, so that constant training is needed, with less and less money.
    Interestingly, when this post was cross-posted to an evaluation list, all the conversation from professional evaluators centered on educating programs on the importance of evaluation without any discussion of the nuts and bolts capacity issues or system problems articulated here. For us, at least, it isn’t that the workers don’t care or need education, it’s that they spend a tremendous amount of time generating data that has so many problems that the data becomes meaningless and staff becomes jaded. The evaluator in me is constantly heartbroken by what I see.

    • Patrick Yurgosky

      Laura –

      Fantastic post! As the former Director of Business Analysis at a national nonprofit and a current performance management and technology consultant, a lot of what you wrote resonates with me. Thank you for bringing it to light.

      There is a lot of great research coming out that will make effective data collection/reporting easier – I just finished a textbook chapter that discusses the value of data/quantity/type/role across organizations. Many times, the stresses come from misalignment between these factors. There’s also a lot of older research coming back into the spotlight to help nonprofits – e.g. “perceived usefulness & perceived ease” – which is helping to ease stresses and drive adoption. Donors, and consultants, should use this great research – this broader thinking – to help build capacity, but many times it gets overlooked because it’s less tangible.

      When I submitted reports to donors, and now when my clients submit reports, they only include raw numbers – number of job placements, number of women served, etc. As you highlight, this forces the nonprofit to analyze, crunch, and define all the nuances of these metrics, resulting in inconsistency across the industry (for example, a job placement might have a slightly different definition between different organizations) and a ton more work for the nonprofit and the donor. I’m wondering if a better way would be to just give institutional donors primary data – individual data records – and allow them to crunch the data and relay it back to the nonprofit. This way, at least they could ensure consistency across the portfolio and less stress on the nonprofit. There are downsides to this, particularly with building as much capacity at organizations, but it could also incentivize donors to invest in technology to get better data (because they would be directly faced with data issues from poor design) and alleviate some of the data “growing pains.” Just an idea…

      Thank you!

      – Patrick

  • Debra Natenshon

    Thanks Laura. This illustrates a very real and damaging scenario. To work toward solutions, we need to do 2 things: 1) empower nonprofits to define success and results more clearly for their own management and client improvement – then to push back on funders who are not aligned. This represents risk but it is far riskier to chase money and perform processes/reporting that does nothing to improve results for our clients. 2) We must align funders around an “investor mind-set”, so they build supportive relationships with grant partners – then, the metrics the implementers use provide the same set of results that the investor needs to continue their support.

    • Laura Quinn

      I completely agree, Debra, particularly around empowering nonprofits to define their own success metrics, rather than making them chase ones imposed by funders. I also think there’s good potential in more projects to standardize indicators in specific domains — intuitively, it seems like the resources needed to convene the players and define a standardized set of indicators and definitions and when each applies (ideally, bottom-up, rather than top-down) must be far less than the resources needed for nonprofits to support a ton of different sets of indicators in the same domain….

  • Michelle

    Communication is key…grantmakers and grantees need to discuss what’s ideal vs. what’s realistic, then come to an agreement about what sort of data collection and analysis is most appropriate. A new CECP survey (http://bit.ly/104VDoI) shows the need for better communication in other areas as well.

    • Michelle

      Oops! I meant CEP — Center for Effective Philanthropy.

  • Brad Struss

    Laura, great post! I wholeheartedly agree there needs to be more of this in the nonprofit sector. There is one funder that is working to solve this problem in affordable housing – NCB Capital Impact. We have been fortunate to be involved in the HomeKeeper project with them. Some additional info can be found here:
    https://appexchange.salesforce.com/servlet/servlet.FileDownload?file=00P3000000GwUspEAF
    http://myhomekeeper.org

  • Ekennedy

    Thank you for outlining the problems human service nonprofits face with the demand for data and outcomes. To many outside the sector, the request for data seems a simple request, but it isn’t. The expense of maintaining, changing, adapting and/or upgrading IT systems, plus retraining staff, is often prohibitive. In the for-profit sector, these expenses are more easily assumed in the budget and tacked onto the cost of services or products provided. The demand that nonprofits be lean and plow all proceeds into services is a double-edged sword.

  • Pingback: Special Topic: Markets For Good & Beneficiary Insight

  • Pingback: The Human Service Sector’s Four Biggest Information Management Problems | Human Service Informatics

  • Pingback: Nonprofits’ challenges to obtaining quality performance data

  • actonace

    Non-profits are just innocuous little entities existing in their own isolated corner of the economy. They do not hurt the economy, but they certainly do not carry it, either. Non-profits serve one distinct purpose: bettering the world while zeroing out their books. Non-profit organizations are a steady source of employment. Just because non-profits are not allowed to carry forward does not mean their operation does not require specialized jobs to be filled. In fact, in terms of day-to-day operations, non-profits run very similarly to for-profit corporations. Non-profits, like for-profits, rely on computer programmers, accountants, graphic designers and other specialized workers to ensure smooth operation. AGC founder Alan Gavornik personally commits his 32 years of entrepreneurship and business development excellence to facilitating the causes of the philanthropic community. AGC is committed to the prudent application of the successful business models and principles of the private sector to those of the philanthropic and public sector markets. His consulting, coaching and training services to the non-profit community are delivered on both a discounted basis as well as through a host of pro bono engagements. AGC has a mission to boost the performance of non-profit clients by using decades of private sector experience tailored to the following areas:

    • Development Sales Training

    • Program Marketing

    • Effective Team Leadership Skills

    http://www.alangavornik.com/
