02 September 2009

Human-Centered Evaluation

Some folks led a session at SoCap09 that I wish I could have sat in on. Twitter lets you catch a glimpse of the topics, but I would have loved to hear the dialogue beyond the 140 characters.

Two people I follow on Twitter were on this panel: Aaron Sklar & Tatyana Mamut (both work at Ideo and are connected to designing for social impact).

Here's the description:
Social investors and social entrepreneurs have been struggling for years to align on the right tools for measuring new-to-the-world offerings. These offerings often take years to show results, and many initiatives run the risk of stalling or failing due to a lack of demonstrated progress. In addition, the evaluation of new solutions is often uncharted territory in terms of how to measure and what to measure. In this session aimed at funders and social entrepreneurs, Ideo, J.D. Power and Associates, and Keystone will share their frameworks and experiences for creating a strategic measurement portfolio based on human-centered measurement and evaluation methodologies. In the workshop portion of the session, participants will break into teams and create a human-centered evaluation strategy to meet their current measurement challenges. Participants are invited to come with an evaluation challenge of a new-to-the-world initiative that they would like to work on in the workshop. The session is part of a collaboration between Ideo and Good magazine to advance dialogue about evaluation in the social sector.

The Twitter stream began surfacing these five principles for making change happen:
1. Put people at the center of evaluation
2. Take a systemic view
3. Navigate uncertainty
4. Zoom out to a portfolio view
5. Measure what you care about


Having focused on human-centered design in my graduate studies, I can relate to the struggle to evaluate adequately. So many factors play into this! It becomes especially difficult when you are seeking evaluations from people who don't share your language but who could benefit from your offering. Professor Ranjan at NID has a great diagram that reinforces how to keep people at the center of the evaluation by constantly revisiting the community at all stages of design. His diagram implies being present with these individuals: "You have to get your hands dirty on the ground to be able to really understand your customers' needs." (Lucky Gunasekara)

What Ranjan's diagram can't address is how you should interpret the feedback you receive in order to establish some sort of metric. It presumes that a conversation has occurred that will let you move on to the next stages of evaluation. To me, this is another reason why meaning and metrics are worth investigating. I keep returning to Jacqueline Novogratz's words in The Blue Sweater, where she describes how social programs have often created a culture of charity that doesn't empower people to say what they really mean. How can we learn what people really think when money (or the lack of it) might affect their answers? Check the Twitter feeds under #socap09.ideo for much more on this. The diversity of discussion is worth perusing.

I'm definitely not a pro in the area of social innovation or its measurement. I'm keen to grow in it because I've witnessed the frustrations of ineffective offerings. I don't want to suggest simplistic ideas on such a far-reaching topic either; my limited experience keeps reminding me that the first principle on this session's list is the one that will define the rest. Organizations like the ones on this panel are asking good questions that link us to the broader topics of democracy, transparency, and governance (for the sake of those who aren't physically present to lend their voices to their own social innovation but who are, on some level, the beneficiaries of these discussions).

But that's another blog entry altogether.

For more on this discussion, check out the Innovation in Evaluation forum.
