Assessing the outcomes of collaborations between nonprofits and businesses is difficult but critical to their effective management, and in practice such assessment is generally fraught with complications. This chapter presents the final component of the CVC Framework: a systematic, multilevel approach to assessing collaboration value outcomes, aimed at advancing this complicated task of evaluation.
One does not collaborate for the sake of collaboration. Partners invest scarce resources to generate value, and this investment, like any other, should be assessed to ascertain its productivity and to guide further enhancement of the collaboration’s value generation. A recent empirical study examines the extent to which the corporate philanthropy of 500 firms listed in the Dow Jones Sustainability Index is strategic by measuring its impact on society, business, and reputation/stakeholder satisfaction; despite the lack of common practice in how impact is measured, 76 percent of the firms assessed some sort of impact.1 Corporate social responsibility (CSR) reports are now common practice, although Porter and Kramer assert that, instead of offering a coherent or strategic framework, “they aggregate anecdotes about uncoordinated initiatives to demonstrate a company’s social sensitivity,” and that “philanthropic initiatives are typically described in terms of dollars or volunteer hours spent but almost never in terms of impact.”2 Peloza suggests three key reasons for businesses to strengthen their metrics regarding social performance: to facilitate cost-effective decision making, to avoid interference in the allocation of resources due to lack of hard data, and to enable inclusion of social-performance budgets in the mainstream budgeting of companies.3
Similarly, demands for nonprofit organizations to measure impact have emerged because of the need to demonstrate the effectiveness of programs to all stakeholders, including funders, in an increasingly competitive philanthropic marketplace. In the United Kingdom, for example, the Charity Commission requires NGOs to report in terms of their core strategic objectives. The Innovation Network’s “state of evaluation” study of U.S. nonprofits found that 90 percent measure their performance, but that fewer than half use this information annually to adjust their operations, and that evaluation ranks among their lowest priorities.4 Mario Morino, founder of Venture Philanthropy Partners, contends that managing to outcomes is “a way for leaders and nonprofits to learn and grow” and is “essential for achieving lasting impact.”5 Nevertheless, a survey of nonprofit evaluation in Brazil revealed that, although assessment efforts did focus on results, they “were not concerned with creating spaces for self-reflection and learning.”6 Our CVC Framework creates an even sharper focus: on managing for the co-creation of value. Geoff Mulgan, director of policy under British Prime Minister Tony Blair, contends that value metrics should be used for three key functions—“external accountability, internal decision making and assessment of broader social impact”—but notes that rigor in such assessments is often lacking.7
It is not uncommon for collaborations to state performance in terms of the inputs provided (for example, the number of books supplied in a literacy program) or the outputs (for example, number of students reading the books). While these indicators describe programmatic activities, value assessment requires a focus on the outcomes (for example, the increased level of reading capability and comprehension, and the resultant benefits in terms of both social and economic value). An even further refinement in assessment is to utilize rigorous impact evaluation methodologies to determine the extent to which the outcomes are attributable directly to the collaboration rather than to other possibly intervening factors.
Collaborators recognize the need for evaluation. Its execution, however, is often deficient. If it were easy, everyone would be doing it. Outcomes assessment is fraught with complexities embedded in the very nature of value creation in social-purpose collaborations. As we have pointed out elsewhere, collaborative value is “the transitory and enduring multidimensional benefits relative to the costs that are generated due to the interaction of the collaborators and that accrue to organizations, individuals, and society.”8 While Porter and Kramer assert that such “shared value” is a superior kind of value,9 these multiple forms of value and beneficiaries and their interactive dynamics create assessment complexities for both collaborative and individual social-purpose undertakings. Porter, Hills, Pfitzer, and Hawkins acknowledge that “the tools to put this concept into practice are still in their infancy.”10
While there is no simple solution to evaluation complexities, one can focus the assessment process more systematically on outcomes by concentrating specifically on who benefits, and how. Too often the actual value generated by collaborations is undercounted because the focus of assessment is conceived too narrowly. Consequently, a critical step in ensuring a more comprehensive value assessment is to examine who has benefited and how far those benefits have spread in terms of three interrelated levels: that of individuals, that of organizations, and that of society (these are also referred to as the micro-level, the meso-level, and the macro-level of analysis). Collaborative value is created at each level, either sequentially or simultaneously.
A further distinction is between those benefits that accrue internally to the organizations and the individuals within the partnership and those benefits that are external to the partnership, including benefits to the larger society. Although evaluations tend to focus on how collaboration enhances the performance of the partnering organizations, it is important to recognize that the internal benefits derive fundamentally from the creation of value for external beneficiaries. For every category of beneficiary, one can specify the value generated. This locational multilevel mapping of outcomes enables a more comprehensive and systematic assessment. It is beyond the scope of this chapter to provide a comprehensive and detailed treatise on outcomes assessment, but the chapter does present a systematic approach to the task, and the works cited in the chapter point the way to the more detailed information that supports the chapter’s conclusions.
Because value, like beauty, is in the eyes of the beholder, some outcomes may not be perceived in the same way by both partners. There may be ambiguity about whether the results constitute success, particularly where there has been significant innovation.11 Resolving such perceptual differences may lead to deeper mutual understanding and collaborative capacity in the partnership.
In this section, we consider organizations as well as individuals. As explained in Chapter Two, the value accruing to the partnering organizations can be expressed in terms of four types of value and their corresponding value subsets. All of these have been discussed and illustrated in previous chapters, and they are recapped here:
Associational Value
Transferred-Asset Value
Interaction Value
Synergistic Value
Each of these types of value, with its corresponding subcategories, can give rise to economic value in a multitude of forms for the partnering organizations, thereby contributing to “financial sustainability, i.e., an organization’s capacity to operate indefinitely,” as Márquez, Reficco, and Berger define economic value.16 In addition to economic value, managers can identify how, in their particular situations, value generation contributes to the attainment of social and environmental missions.
At the micro-level (the level of individuals) there can be two types of value outcomes. One is instrumental in that engagement in the collaboration increases the capabilities and professional development of the involved individuals. For example, individuals may learn new technical or management skills, gain new knowledge, broaden their exposure to other organizational approaches and cultures, increase their interpersonal skills, and strengthen their leadership capacity.17 A second value outcome at the micro-level is psychological benefits. These can include psychic satisfaction in helping others, new friendships with colleagues in the partnering organization, pride in the organization, and prestige in the community.18
Here we consider organizations, individuals, and society outside the partnership. There are many external organizational stakeholders of businesses and nonprofits, including institutional financial supporters interested in knowing the results of their support. Therefore, outcomes that demonstrate the synergistic contribution of the collaboration to the generation of economic, social, and environmental value reveal the enhanced return on the investments of these donors and investors. Other organizations can also benefit from the collaboration. These include governmental entities and community organizations concerned about or involved in the social or environmental problems being addressed by the partners. Similarly, other businesses or nonprofits from the same sector may be assisted indirectly when any of the collaboration’s actions improve the situation for everyone engaged in that arena—for example, when new industry standards are created20—or when new opportunities for value creation are demonstrated. In effect, these are positive externalities of the collaboration that accrue to those organizations.
The primary external individual beneficiaries will be those receiving the services or goods produced by the collaborating organizations.21 The nature of the benefit will depend, of course, on the specific problem and client focus of a partnership. Indirect beneficiaries will include individual donors to the nonprofit and individual clients of the business who receive psychological “income” from the enhanced well-being that their patronage has enabled.
At the societal level, collaborations might create economic, social, or environmental value for society in general through systemic changes in societal awareness, values, or priorities, in social or sectoral relationships, in institutional governance arrangements, or in access to new technologies and innovations.22 Improvements in the environment inherently generate benefits for society at large. Hitt, Ireland, Sirmon, and Trahms refer to such benefits as “meeting social needs in ways that improve the quality of life and increase human development over time,” including attempts that “enrich the natural environment and/or are designed to overcome or limit others’ negative influences on the physical environment.”23 In the partnership between Alcoa and Greening Australia cited toward the end of Chapter Five,24 the partners became increasingly rigorous and encompassing in their evaluations. They moved from counting the number of volunteers involved in reforestation (inputs) to trees planted (outputs) to environmental health benefits (outcomes) to impact (ecosystem changes that were due to the collaboration).
In assessing benefits at all levels, partners should also determine the accompanying costs in terms of economic, management, and reputational resources deployed and risks incurred.25 In so doing, however, partners should operate with the investment and longer-run mindset (see Chapter Three) that recognizes that the gestation period and the process for creating collaborative value are often lengthy and cumulative.
The Outcomes Assessment Framework, described in the preceding section of this chapter, advances the evaluation task by more systematically and comprehensively identifying who benefits, and how. But performance measurement is more than just a technical issue, and collaborations, along with all other welfare-enhancing programs, are characterized by a perplexing set of difficulties in measuring outcomes.
David Hunter, former director of assessment for the Edna McConnell Clark Foundation, observes, “Few people involved in this work have thought deeply about managing toward outcomes. Most put the cart before the horse—focusing on how to measure rather than on why [to] measure and what to measure.”29 Therefore, before plunging into specific measurement difficulties, we should focus on four important precursors: mindset, clarity of objectives, theory of value creation and change, and assessment investment.
As discussed in Chapter Three, partners need to have a robust and multifaceted collaborative value mindset. Central to this is a sharp and ever-present focus on ensuring that the partnership is generating maximum value, which requires ongoing assessment. Outcomes measurement is essential. If this mental framework is not present within each partnering organization, it is unlikely to emerge in the collaboration. But even if only one of the partners has a strong value measurement mindset, there is an opportunity to stimulate and develop that orientation in the other partner.
Many companies lack an explicit mission statement or goals for their social-performance activities (including collaborations), which is to say that they lack criteria against which they would have to perform,30 or they lack consistency in employing outcomes metrics.31 Previous chapters have stressed the importance of partners’ linked interests and alignment. Investing time in clarifying what the collaboration aspires to achieve is vital for defining the outcomes objectives. (We referred to this in Chapter Five as the process of converging value frames.) These converged objectives, expressed as specifically as possible in terms of the types of value being sought, then guide the formulation of what should be measured. The clearer the objectives, the sharper the measurement. Hunter, in a book filled with detailed examples from his decades of experience in performance management, prescribes five characteristics for performance indicators: they must be clear, relevant, economical, adequate, and monitorable.32
Hunter observes that the nonprofit sector “suffers generally from a pervasive case of unjustifiable optimism,” that is, a habit of “over-claiming . . . effectiveness while under-measuring . . . performance,” and he contends that this problem can be addressed if nonprofits formulate “robust theories of change that serve as blueprints for achieving specific results in well-defined domains” and “make their strategic visions operational.”33 In collaborations, this means identifying collective sources of value and delineating value-creation pathways (see Chapter Five) so as to project the theoretical value.34
According to the Center for Effective Philanthropy’s 2012 survey, 71 percent of the nonprofits surveyed received no support from their funders for their assessment efforts.35 Collaborations serious about creating value will ensure that adequate resources are mobilized for carrying out meaningful assessment. Morino stresses the importance of such investment: “Management-oriented data collection and analysis is what managing to outcomes requires. It is a way for leaders and nonprofits to learn and grow. It is essential for achieving lasting impact.”36
Let us now turn to the complexities of measurement. Austin, Leonard, Reficco, and Wei-Skillern summarize the difficulties concisely: “The challenge of measuring social change is great due to nonquantifiability, multicausality, temporal dimensions, and perspective differences of the social impact created.”38 Others have also cited these and related complexities. Although these measurement problems constitute significant challenges to evaluation, a variety of approaches have emerged to deal with them. For each of the problem areas indicated here, we set forth some of the approaches that have been used to address them.
Methodological challenges in measurement may be due to the intangible character of many outcomes and to the difficulty of distinguishing among documented, likely, and perceived effects.39 Quantification is important, but not everything can be quantified. Khandker, Gayatri, Koolwal, and Samad, writing about experience at the World Bank, observe that “a mixture of qualitative and quantitative methods (a mixed-methods approach) might . . . be useful in gaining a comprehensive view of [a] program’s effectiveness,” as illustrated by the Jamaican Social Investment Fund (JSIF):
Program evaluators conducted semi-structured, in-depth qualitative interviews with JSIF project coordinators, local government and community leaders, and members of the JSIF committee that helped implement the project in each community. This information revealed important details about social norms, motivated by historical and cultural influences that guided communities’ decision making and therefore the way the program ultimately played out in targeted areas. These interviews also helped in matching communities, because focus groups were asked to identify nearby communities that were most similar to them.40
The difficulty of attributing results to a particular program, especially in companies and nonprofits that have a sophisticated portfolio of social activities or partnerships, may be even more pronounced with respect to changes in complex systems influenced by a range of factors.41 Impact evaluation focuses explicitly on determining causality attribution by using rigorous methodology (for example, randomized controlled trials). Khandker, Gayatri, Koolwal, and Samad provide a comprehensive exposition, with case examples and exercises, of a multitude of quantitative methods for assessing causality.42 Outcomes assessment, in contrast to impact evaluation, does not employ such randomized designs or control groups, but Lim suggests that outcomes can still be reasonably assessed when a collaboration is achieving its intended results.43
It can take a long time to effect social or environmental change. The long gestation period for creating collaborative value means that intervening factors can affect outcomes and that managerial needs for immediate feedback may be frustrated. One approach to dealing with this situation is to examine the collaboration’s theory of change and, in the words of Lim, to “apply a model drawn from external evidence and adjusted to current local conditions pertaining to ultimate effectiveness. This external evidence includes quantitative data from prior studies and consultations with sector experts.”44 An example that comes from a malaria-prevention program is the practice of multiplying the number of individuals receiving antimalarial bed nets by a figure (drawn from the results of existing studies) representing the expected effectiveness of the nets, to yield a projection of lives saved. That projection then becomes an intermediate indicator of outcomes.
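The bed-net arithmetic described above can be sketched as a simple calculation. The figures below are purely illustrative assumptions for demonstration, not drawn from any actual study:

```python
# Projecting an intermediate outcome indicator from an evidence-based
# multiplier, per the bed-net example. All figures are illustrative
# assumptions, not results from actual malaria-prevention studies.

nets_distributed = 50_000       # program output: bed nets delivered
lives_saved_per_net = 0.004     # assumed effectiveness figure from prior studies

# Output multiplied by the external-evidence multiplier yields a
# projected outcome, usable as an intermediate indicator.
projected_lives_saved = nets_distributed * lives_saved_per_net
print(f"Projected lives saved: {projected_lives_saved:.0f}")
```

The point of the sketch is that the multiplier comes from external evidence (prior studies, sector experts), so the projection can serve as an intermediate outcome indicator long before the collaboration's own long-run effects can be observed.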
Subjectivity on the part of different stakeholders in varying contexts may influence how those stakeholders value outcomes, which is to say that stakeholders may have subjective impressions of what is acceptable, what is appropriate, and what is of value to whom.45
There are two dimensions with respect to subjectivity concerns. The first dimension has to do with the fact that different groups will value benefits differently. Rather than being a problem, however, this is a pathway to refined value assessment. Value is and should be in the eyes of the beholders. It is important to identify all the different beneficiaries at the micro-level, the meso-level, and the macro-level, and so the task is not to homogenize value but to recognize its heterogeneity, which provides a more comprehensive view of the value created. The second dimension has to do with the reasonable desire to monetize the different forms of value created so that one can use a common unit of analysis to compare costs/investments with benefits. According to the SROI [Social Return on Investment] Network:
There will be estimates and assumptions. We prefer to call these professional judgements, which is after all what accountants use to describe their estimates and assumptions . . . There are many outcomes for different stakeholders, some negative and some conflicting. And this means there needs to be a way of deciding which of these are important or, in SROI terms, which are material. Valuation is a way of weighting outcomes in order to help make this decision and cannot be left until the end of the process if used this way. Valuation isn’t an end, it’s a beginning.46
Even if one does not convert the social or environmental value to a financial equivalent, useful cost-effectiveness indicators can be calculated (for example, cost per life saved, or cost per gallon of water saved).
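Such cost-effectiveness indicators reduce to a simple ratio of total cost to units of outcome achieved. A minimal sketch, with purely illustrative figures:

```python
# Cost-effectiveness indicators: total program cost divided by units of
# outcome achieved. All figures are illustrative assumptions only.

total_cost = 1_200_000.0      # dollars invested in the collaboration
lives_saved = 200             # outcome units achieved (illustrative)
gallons_saved = 4_000_000     # alternative outcome: water conserved

# No monetization of the outcome is needed; the outcome stays in its
# natural unit and only the cost side is expressed in dollars.
cost_per_life_saved = total_cost / lives_saved       # 6000.0 dollars
cost_per_gallon_saved = total_cost / gallons_saved   # 0.3 dollars
print(f"Cost per life saved: ${cost_per_life_saved:,.0f}")
print(f"Cost per gallon saved: ${cost_per_gallon_saved:.2f}")
```

Because the denominator remains in natural outcome units, partners can compare alternative interventions on cost-effectiveness without committing to a contested monetary valuation of the outcome itself.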
The cost of evaluation may seem too high. Even a staunch supporter of rigorous evaluation like the Roberts Enterprise Development Fund (REDF), a pioneer in developing and applying SROI and blended-value methodologies, confirms that the process of using SROI analysis uses many resources and is not financially feasible for many nonprofits.47
But one should consider the cost of not doing evaluation. One researcher cites
a well-known program, Scared Straight, which arranges for juveniles who are getting in trouble with the law to meet, up close and personal, lifers who let them know that prison is hell. The idea is that this will terrify the kids and propel them back onto the straight and narrow path. But you might want to know that rigorous experimental research shows that Scared Straight is more harmful to teens than doing nothing. What does this mean? It means that Scared Straight has been proven to increase violence among teenagers who participate in its visits to prison. Nevertheless, Scared Straight not only thrives in the U.S. but has spread to at least six other countries.48
Similarly, an evaluation by the Latin American Youth Center revealed that its educational program that aimed at decreasing domestic violence actually increased it, which led to a redesign that produced the opposite results.49 Although rigorous, evidence-based evaluation is desirable, the level of sophistication and the accompanying costs should be adjusted to fit the purpose and the users.
Clearly, outcomes assessment is filled with challenges, but its importance makes Nike’s slogan “Just do it” quite applicable. In fact, that company appears to have applied the slogan to itself:
A critical task . . . [has been] to focus on impact and develop a systematic approach to measure it. We’re still working hard at this. How do we know if a worker’s experience on the contract factory floor has improved, or if our community investments helped improve a young person’s life? We’re not sure anyone has cornered the market in assessing real, qualitative social impact. We are grappling with those challenges now. In FY07–08, we will continue working with key stakeholders to determine the best measures. We aim to have a simple set of agreed-upon indicators that form a baseline and then to measure in sample areas around the world.50
In its 2009 CSR report, the company acknowledged that solutions require industry-level and systemic change, which will have to pass through “new approaches to innovation and collaboration”; interestingly, the report states, “Our aim is to measure our performance and report accurate data. At times, that means systems and methodology for gathering information needed to change even as we collect data, as we learn more about whether we are asking the right questions and whether we are getting the information that will help us to answer them rather than just information.”51 The company also reported that it aimed at developing targets and metrics around programs for excluded youth around the world, which demonstrates the policy-type thinking required for the development of impact indicators as well as for the development of processes to monitor, report, and advocate. These are competencies usually associated with nonprofit organizations, but they clearly are needed equally by corporations to assess the creation of multiple types of value.
Nike has embraced an evolutionary, historical approach to understanding its workplace impact in contract factories.52 The World Business Council for Sustainable Development has responded to the growing interest in assessment by publishing a guide for measuring socioeconomic impact. As the organization’s president, Peter Bakker, has stressed, “Capitalism requires a new operating system, and needs to be re-booted so that we expect and manage the returns on financial, natural, and social capital in a balanced way with a view to future-proofing our economies.”53
Assessing outcomes is a learning journey. The complications of evaluation can appear overwhelming, but in fact they are manageable. One can begin by using the multilevel outcomes assessment mapping framework to systematically and comprehensively identify who is benefiting, and with what type of value. Approaching the task with the appropriate outcomes-oriented collaborative value mindset, and ensuring that the partners have clarity about their objectives, will provide the necessary focus. Then, delineating a theory of value creation and change, one is able to set forth a guiding logic of transformational pathways toward value creation. Finally, a willingness to invest in assessment efforts enables a meaningful ongoing learning process that is essential to continuous improvement in co-creating value.
This chapter has elaborated the fifth and last component of our Collaborative Value Creation Framework. In the book’s final chapter, we extract from the previous chapters a set of smart practices for co-generating collaborative value.
Notes
1. Maas and Liket, 2011.
2. Porter and Kramer, 2006, p. 3.
3. Peloza, 2009.
4. Morariu, Athanasiades, and Emery, 2012, pp. 2, 6.
5. Morino, 2011, p. 4.
6. Campos, Andion, Serva, Rossetto, and Assumpção, 2010, p. 238.
7. Mulgan, 2010, p. 42.
8. Austin and Seitanidi, 2012a.
9. Porter and Kramer, 2011.
10. Porter, Hills, Pfitzer, Patscheke, and Hawkins, 2012, p. 1.
11. Jay, 2013.
12. Austin, 2000b; Elkington and Fennell, 2000; Gourville and Rangan, 2004; Seitanidi, 2010; Austin, 2000d; Googins and Rochlin, 2000; Heap, 1998; Huxham, 1996; Yaziji and Doh, 2009; Waddock and Post, 1995; Warner and Sullivan, 2004; Pearce and Doh, 2005; Alsop, 2004; Greenall and Rovere, 1999; Heugens, 2003; Andreasen, 1996; Bowen, Newenham-Kahindi, and Herremans, 2010.
13. Brown and Kalegaonkar, 2002; Galaskiewicz, 1985; Googins and Rochlin, 2000; Yaziji and Doh, 2009; Vock, Van Dolen, and Kolk, forthcoming; Austin and Seitanidi, 2012a; Austin and Seitanidi, 2012b; Milne, Iyer, and Gooding-Williams, 1996; Porter and Kramer, 2002; Seitanidi, 2010.
14. Austin, 2000d; Googins and Rochlin, 2000; Huxham, 1996; Yaziji and Doh, 2009; Gray, 1989; Hardy, Phillips, and Lawrence, 2003; Porter and Kramer, 2011; Heap, 1998; Millar, Choi, and Chen, 2004; Austin 2000b; Seitanidi, 2010; Vock, Van Dolen, and Kolk, forthcoming; Polonsky and Ryan, 1996; Seitanidi and Lindgreen, 2010; Pearce and Doh, 2005; Crane, 1998; Newell, 2002; Bishop and Green, 2008; Googins and Rochlin, 2000; Porter and Kramer, 2002; Gourville and Rangan, 2004; Kanter, 1999; Bendell, 2000; Das and Teng, 1998; Selsky and Parker, 2005; Tully, 2004; Wymer and Samu, 2003; Le Ber and Branzei, 2010a; Le Ber and Branzei, 2010b; Stafford, Polonsky, and Hartman, 2000.
15. Austin, 2000b; Kanter, 1999; Seitanidi, 2010; Stafford, Polonsky, and Hartman, 2000; Yaziji and Doh, 2009; Tully, 2004; Drucker, 1989; Austin, 2000d; Holmes and Moir, 2007; Glasbergen, 2007; Murphy and Bendell, 1999; Waddock and Post, 1995; Bryson, Crosby, and Middleton Stone, 2006; Le Ber and Branzei, 2010a; Le Ber and Branzei, 2010b; Gourville and Rangan, 2004.
16. Márquez, Reficco, and Berger, 2010, p. 6.
17. Burchell and Cook, 2011; Austin, 2000b; Austin and Seitanidi, 2012a; Bartel, 2001; Jones, 2006; Vock, Van Dolen, and Kolk, forthcoming.
18. Bhattacharya and Sen, 2004; Green and Peloza, 2011; Vock, Van Dolen, and Kolk, forthcoming; Bhattacharya, Sen, and Korschun, 2008; Kolk, Van Dolen, and Vock, 2010; Bhattacharya, Korschun, and Sen, 2009.
19. Austin, Reficco, Berger, Fischer, Gutierrez, Koljatic, Lozano, Ogliastri, and the Social Enterprise Knowledge Network (SEKN) Team, 2004.
20. Stafford, Polonsky, and Hartman, 2000.
21. Bockstette and Stamp, 2011.
22. Waddock and Post, 1995; Crane, 2010.
23. Hitt, Ireland, Sirmon, and Trahms, 2011, p. 68.
24. McDonald and Young, 2012.
25. Austin and Seitanidi, 2012b.
26. Austin and Reavis, 2002.
27. Millard, 2005; Zettelmeyer and Maddison, 2004.
28. Conservation International and Starbucks, 2011.
29. Hunter, 2011, p. 6.
30. Austin, Gutiérrez, Ogliastri, and Reficco, 2007.
31. Peloza and Shang, 2010; Peloza, 2009.
32. Hunter, 2013.
33. Hunter, 2011, p. 99.
34. White, 2009.
35. Brock, Buteau, and Herring, 2012, p. 6.
36. Morino, 2011, p. 4.
37. Jeff Edmondson, quoted in Eckhart-Queenan and Forti, 2011.
38. Austin, Leonard, Reficco, and Wei-Skillern, 2006.
39. Jorgensen, 2006; Sullivan and Skelcher, 2002.
40. Khandker, Gayatri, Koolwal, and Samad, 2010, p. 19.
41. Peloza and Shang, 2010; Peloza, 2009; Brinkerhoff, 2002; Austin, 2003b; Hoffman, 2005; O’Flynn, 2010.
42. Khandker, Gayatri, Koolwal, and Samad, 2010.
43. Lim, 2010, p. 15.
44. Lim, 2010, p. 15.
45. Mulgan, 2010; Lepak, Smith, and Taylor, 2007; Austin, 2003b; Endacott, 2003; Amabile, 1996.
46. SROI Network, 2012, pp. 2–3.
47. Javits, 2008.
48. Morino, 2011, p. 43.
49. Morino, 2011, p. 96.
50. Nike, 2005, p. 11.
51. Nike, 2009, p. 18.
52. Nike, 2005, p. 37.
53. World Business Council for Sustainable Development, 2013.