STICKING TO THE NUMBERS: PERFORMANCE MONITORING IN SOUTH AFRICA, 2009-2011

SYNOPSIS

President Jacob Zuma took office in 2009 amid a wave of demonstrations by South Africans protesting the government's poor record in delivering basic services. During the 15 years since the end of apartheid, South Africa had made strides in extending basic services to previously underserved communities, but frustration with the pace of progress boiled over in early 2009. During his first month in office, Zuma established a Ministry of Performance Monitoring and Evaluation to improve service delivery by ministries. Two key officials in the new ministry, Ketso Gordhan and Ronette Engela, identified three major reasons for the government's poor performance: a lack of accountability at the upper levels of ministries, decentralized and often ad hoc policy planning, and poor interministerial coordination. They devised a system that reorganized ministries around 12 policy goals and set data-based performance targets for ministers and departments. They succeeded in focusing departments on setting measurable performance targets, but as political support waned, the sustainability of the system came into doubt. This case study offers insights on building the accountability of managers in the civil service and improving the quality of policy planning by setting measurable performance targets.

Jonathan Friedman drafted this case study on the basis of interviews conducted in Johannesburg and Pretoria, South Africa, in March and April 2011. Case published August 2011.

INTRODUCTION

In early 2009, many South Africans were fed up with the government's failure to provide basic services with any degree of reliability and efficiency. The Department of Public Service and Administration's own figures showed citizen satisfaction with service delivery had plummeted during the final years of Thabo Mbeki's presidency, to 58% in 2008 from 75% in 2006. Twenty-six public demonstrations, several of them violent, were recorded in seven of South Africa's nine provinces during the first half of 2009, one fewer than the total for all of 2008. The protests sent a clear signal to the new government led by President Jacob Zuma, who took office in April. A study commissioned by Parliament found the protesters' top grievance was poor service delivery, especially regarding water, electricity, sanitation and waste removal.

After the end of apartheid, the South African government made substantial progress in extending basic services to rural areas and townships that had been underserved, providing electricity to an additional four million citizens, building more than two million housing units, improving access to clean water and attaining near-universal enrollment in primary schools. Still, with a population that had grown by more than 10 million since 1994, to 49 million, according to World Bank data, progress was too slow.

During his presidential campaign, Zuma had stressed the need to improve service delivery, and his election was widely considered to signal a shift in government focus toward expanding basic services for the poor. Soon after taking office, he established a Ministry of Performance Monitoring and Evaluation (MPME) within the Office of the Presidency. To lead the new ministry, Zuma appointed Collins Chabane, a veteran of the anti-apartheid struggle and former head of the Department of Public Works in Limpopo province, where he had won awards for his success in upgrading roads.
When Chabane took the post in May 2009, he tapped Ketso Gordhan, former Johannesburg city manager, and Ronette Engela, a monitoring and evaluation specialist in the policy unit of the presidency, to develop a system to monitor and improve government performance. They refocused ministries toward 12 broad policy goals and set specific service-delivery targets for ministers and departments. The new system, which they called the "outcomes approach," measured ministries' performance in terms of the goals and targets.

This case study profiles the creation and implementation of South Africa's performance monitoring system, which aimed to improve government performance by building the accountability of managers in the civil service, strengthening the policy planning process, and facilitating interministerial coordination.

THE CHALLENGE

Gordhan and Engela identified three weaknesses across the South African government: 1) a lack of accountability at the ministerial and upper managerial level; 2) poor planning at some ministries, including a failure to link activities to a departmental plan; and 3) ineffective coordination between ministries in producing and implementing policies.

The government had achieved limited success in its earlier attempts to address the lack of accountability in the civil service through monitoring and evaluation by the Public Service Commission, the Department of Public Service and Administration, and the Treasury.

The Public Service Commission, an independent agency, published well-respected reports that evaluated ministerial performance in terms of nine criteria prescribed in South Africa's constitution. Among the standards were efficient, economic and effective use of resources; high professional ethics; and an accountable public administration. The commission took a bold approach by publicly ranking ministries according to their performance on each of the nine criteria. However, as an independent commission with a limited constitutional mandate, it had no authority to hold ministries accountable or to enforce recommendations to improve performance.

The Department of Public Service and Administration oversaw a Senior Management Service that required managers across the civil service to sign annual performance contracts. However, the Senior Management Service program had several weaknesses. The department, like the Public Service Commission, had no authority to remove managers for poor performance. Additionally, high turnover and vacancy rates at the managerial level diminished the impact of performance contracts. According to a 2009 Public Service Commission report, only slightly more than half of senior managers typically signed a contract in a given year.

The Treasury had made the most significant contribution to building accountability in the civil service. Beginning in 1999, the Treasury implemented a Program Performance Information reporting system that required ministries to report data linking financial inputs to actual outputs. As a result, for instance, the Ministry of Housing would report how much money had been spent to build which housing units, and how many, an important step toward accountability. However, the Treasury program had limitations. Like the Department of Public Service and Administration and the Public Service Commission, the Treasury had no statutory authority to hold officials accountable for failing to meet their targets or to demand specific actions to improve performance.
The Treasury's main enforcement tool was its ability to confront ministries in Parliament regarding the proper use of funds, which it did with some regularity, often with success. Although these confrontations acted as a check on wasteful spending, they did more to create animosity than to foster a climate of accountability.

The second weakness identified by Gordhan and Engela, inconsistent departmental planning, had historical roots. Under the apartheid government, the presidency had played an important role in guiding policy planning for the rest of government. However, anticipating the transition to full democracy, the apartheid government had decentralized policy planning, allowing the 34 ministries to set their own priorities and devise their own action plans. As a result, the presidency could not implement strategic planning with any degree of confidence, and the policies of individual ministries often produced haphazard results or ignored the goals of other ministries. Gordhan said, "You'd have 34 plans and not one mention of an outcome."

The Treasury played a significant, though indirect, role in policy planning. By measuring departmental outputs, the Treasury helped departments focus on what they were actually getting done. However, this framework for policy planning had several shortcomings. First, the Treasury left it to departments to identify the outputs and targets against which they would be measured. The Treasury's interest lay in accounting for money spent, with little concern for the appropriateness of the outputs or targets selected. Engela said, "They [the Treasury] allowed the Ministry of Housing to tell you the number of housing units constructed in a certain area over a certain time period, but they could not tell you how many houses needed to be built or what this meant for the overall state of the housing sector in South Africa." Additionally, Treasury officials worked with departments' chief financial officers in determining and reporting on outputs, excluding the actual policy planners who might better have incorporated feedback from the Treasury system into their planning processes.

The third weakness identified by Gordhan and Engela, the lack of interministerial coordination, stemmed from a well-intended but ineffective "cluster" system, in which ministers with overlapping responsibilities were supposed to jointly plan and implement policies. But ministers used cluster meetings merely to describe what they were doing rather than to develop initiatives with their counterparts. As a result, Gordhan said, tasks assigned to individual ministries were typically accomplished, but tasks assigned to multiple ministries were not. "You sat there and talked about your work, not about the group," he said. "Anytime one department was responsible, it got done. When two or three were responsible, it didn't get done. That was clear." Engela said of cluster meetings: "They talked and talked, but not a lot got done."

FRAMING A RESPONSE

As the first head of the MPME in 2009, Chabane had to figure out what he was supposed to do. The new ministry lacked a clear mandate, and it was up to the minister to carve out a role. Gordhan recalled, "I was called by Minister [Chabane] the day after he was appointed and he said to me, 'I've got this job. I don't know what it means. Can you help me figure it out?'" Gordhan had extensive experience in national and municipal government as well as in the private sector.
As director general of the Department of Transport in the 1990s, he had helped introduce principles of New Public Management, an approach that applies private-sector management concepts to the public sector. He later used similar principles to solve serious problems as city manager of Johannesburg.

When Gordhan arrived at the Office of the President, he sought out the monitoring and evaluation specialist in the policy unit, Engela, for assistance in building substance into the new ministry. Engela, an archaeologist by training, had helped develop the Treasury's Program Performance Information reporting system during the early part of the decade and later developed a government-wide monitoring and evaluation framework that built on the Treasury system as one of its key components. Engela's expertise in monitoring and evaluation complemented Gordhan's managerial experience.

Once in position to develop a new system, which they later decided to call the "outcomes approach," Gordhan and Engela assessed what was already in place and researched potential models abroad. Engela knew the Treasury system as well as anyone, and she was aware of its flaws and where it needed improvement. She noted the shortcomings of allowing departments to set their own indicators and of the Treasury's limited focus on outputs without a broader sense of policy impacts. Together, they studied monitoring units in other countries, including Chile, Canada and New Zealand. Of particular interest was the U.K.'s Delivery Unit, a monitoring and advisory operation within the prime minister's office, set up during Tony Blair's second term as prime minister, from 2001 to 2005.

Gordhan and Engela developed a system under which the presidency, represented by the MPME, organized ministries around a dozen broad policy goals based on Zuma's campaign platform. These goals, or outcomes, covered a broad range of interests, from education, public safety, employment and the economy to "an efficient, effective and development-oriented public service." By organizing ministries around policy outcomes, the system addressed Gordhan and Engela's three aims of building managerial accountability, improving policy planning and getting ministries to work together toward joint goals.

To build accountability, the outcomes approach went beyond the drafting of policies and set specific delivery targets against which ministers and departments would be assessed. Rather than setting a vague goal of reducing crime, for example, all departments that contributed to combating crime had specific targets for reducing the frequency of different types of lawbreaking. Ministers committed to achieving their high-level targets by signing performance agreements with Zuma and developing more comprehensive delivery agreements that assigned tasks and targets to all who were involved.

The new system lent greater coherence to national policy planning by requiring departments to organize their priorities and focus their efforts on outcomes determined by the presidency rather than on internally developed targets that often overlapped with other departments or left policy gaps. Departments jointly developed plans that began with one of the 12 desired policy outcomes and worked backward: identifying the factors that best measured progress toward the desired outcome, determining the actions needed to produce the outcome, and harnessing the inputs, such as people and money, required for the actions.
To build policy coordination, MPME staffers met with representatives from government ministries and civil society organizations to develop outcome-focused plans for each of the 12 goals. The idea was to cultivate involvement and ownership by enlisting a broad spectrum of contributors to each outcome, from policy formation to delivery of the actual service to citizens. Subsequent meetings monitored implementation of the plans and evaluated feedback to make adjustments.

The foundation of the entire reform effort, Gordhan said, was to "introduce the idea of measurability. ... We said, 'We are going to tie you down to a number, to something that can be measured.' If there was an innovation, that was it."

GETTING DOWN TO WORK

Knowing that they had no legislative mandate to compel ministers to sign agreements and commit to specific targets, Gordhan and Engela were careful to explain details of their outcome-based system to Zuma, whose support provided the political punch needed to get things done. By developing a system that adhered closely to Zuma's public stance on reforms, the two also built credibility for themselves in the halls of government. Gordhan explained, "We had a new president who sounded at the outset like he was dead serious. ... Every time he spoke, he said, 'We're going to do things differently.' So when we said, 'We're going to do things differently,' it sounded like we were following his lead; and we were. We created the impression, not unfairly, that we spoke on his behalf."

Identifying indicators

Gordhan and Engela's first task was to identify performance indicators to measure progress toward each of the 12 policy outcomes. Initially, they tried to work with department heads. They approached the Forum of South African Directors General and asked that each manager list 20-30 indicators that might be used to accurately assess their department's performance. Despite several months of workshops, the directors general were not eager to cooperate, and the process bogged down.

Frustrated by the slow pace of progress, Engela and Gordhan decided that they had to develop their own indicators for each outcome. Engela said, "The breakthrough was when we said, 'We're going to tell you what we're going to measure you on. Come back and tell us if you agree or disagree, that is fine.' But we were clear that we needed to put something on the table to get the conversation going in the right direction around real results of delivery." They began low-key consultations with both domestic and international experts to identify a specific set of drivers for each outcome.

Gordhan described the process of identifying measurable outputs related to "improved quality of basic education," one of the 12 policy outcomes. He and Engela met with representatives of the Ministry of Basic Education and found that the ministry measured 155 aspects of its work. However, Gordhan and Engela thought the 155 indicators incorrectly emphasized internal factors such as school infrastructure and teacher-pupil ratios, and failed to measure the overall quality of basic education. Gordhan recalled, "So we said, 'Let's go back and think about what really drives an education outcome.'" In consultation with an education expert in the U.K., Gordhan and Engela identified teacher quality as the most important of four key drivers of education quality.
Second, citing a study from India, they found that the marginal dollar spent in education had the highest return when spent on learning materials and only meager returns when spent on infrastructure or additional teachers. Third, in response to their finding that, on average, South African teachers covered only half of the curriculum each year, they set out to provide teachers with tools to help them do a better job. Gordhan and Engela found that one solution was for schools to produce daily lesson plans for each grade in all 11 national languages, along with workbooks with sections that corresponded to each day's lesson. "This monitors whether kids are learning and whether teachers are completing the curriculum," Gordhan said.

The fourth factor identified by Gordhan and Engela related to the proper time and method for evaluating student performance. Citing U.S. studies, they stressed the importance of measuring reading and mathematics skills at the third-, sixth- and ninth-grade levels. The ministry measured reading and math ability only in the 12th grade, and most South African students left school before reaching that level. Additionally, Gordhan and Engela compared student performance on two kinds of tests: those generated internally by schools and those from external, independent sources. They found that students performed 30% better on internally generated tests, suggesting that schools lowered standards to create an impression of good performance. Gordhan and Engela recommended administering externally written reading and math tests to all third, sixth and ninth graders. "The logic was pretty compelling, when you put that picture together for education," Gordhan said.

Gordhan and Engela conducted similar reviews for each of the 34 ministries, identifying three or four primary factors that determined success in each ministry's particular field and combining lessons from international studies with findings from within South Africa. In November and December 2009, they hired a consulting company to review their work and improve the presentation of the findings. The company compared Gordhan and Engela's indicators and targets to performance levels in Brazil and Malaysia, countries considered comparable in terms of economic development, and put the data in a more rigorous framework but did not make substantive changes to the indicators. Gordhan said, "They [the indicators] were not scientifically derived, but they passed a reasonability test." Zuma's cabinet approved the proposed outputs in early 2010.

Building accountability

After Gordhan and Engela had settled on indicators, they and other advisers to Chabane focused on developing the Department of Performance Monitoring and Evaluation (DPME) as a permanent agency to support the minister and administer the outcomes approach. Chabane selected Sean Philips, his former deputy from the national Department of Public Works, as director general in April 2010. Engela became deputy director general for PME data systems, and Gordhan remained an adviser to the minister.

Philips said, "The first thing on my plate was to provide the president with draft agreements to sign with each of the 34 ministers." Performance agreements were intended to cover the full five years of Zuma's term in office, but a year into his presidency, Zuma had yet to sign one. Negotiations over indicators and targets slowed progress, as did disagreements regarding how to group ministries.
Based on the work of Gordhan and Engela, Philips compiled four- to eight-page documents that specified which of the 12 outcomes each minister was expected to contribute to, the primary outputs for which the minister was responsible, and specific targets for each output. Ministers had mixed responses to signing the performance agreements. Although a few were enthusiastic about the new system and readily signed, others disputed the appropriateness of some outputs to their outcomes or the reasonableness of their targets. Philips wanted realistic but ambitious targets, whereas ministers had incentives to lobby for outputs that were easier to achieve. He worked quickly to reach common ground and finalize the performance agreements. He said, "We always resolved disputes by consensus, usually with some give and take by both sides."

Zuma signed performance agreements with each of his 34 ministers by the end of April 2010. He would assess ministers' performance annually against the targets delineated in their agreements. The agreements were not made public because, Philips said, it was the "president's view that it's a personal thing between him and his minister." However, Zuma allowed ministers to make their agreements public, and several did so.

Improving coordination

Performance agreements assigned one lead minister the responsibility of bringing together diverse groups to develop detailed action plans for achieving each of the 12 policy outcomes. For example, the minister for human settlements had the job of convening an "executive implementation forum" to develop an action plan, or delivery agreement, to achieve the outcome of "sustainable human settlements and improved quality of household life."

The forums were designed to shift departments' focus from what they could accomplish individually to what they could achieve collectively. Participants included the so-called "delivery partners" for each outcome, including representatives from provincial offices, other relevant departments and ministries, and civil society groups, along with a sector expert employed by DPME to facilitate the proceedings. The minister of human settlements, for example, invited provincial leaders and officials from the Treasury, the Ministry of Cooperative Governance, and the Ministry of Public Works, among others, to contribute to devising and implementing the delivery agreement.

The coordinating minister for each outcome convened a forum meeting in May-June 2010 to negotiate a common understanding of the goal and the outputs required of the participants. Following the first forum meeting, officials met in teams to develop implementation plans and work schedules for each output. The work teams then compiled the plans into a single delivery agreement for adoption by the implementation forum. Individual delivery partners, mostly government agencies, were responsible for implementing and managing the activities assigned in delivery agreements, and regularly reported their progress to a technical implementation forum comprising senior officials below the ministerial level from all participating agencies. Technical forums would report every other month on the progress of delivery partners against their delivery agreements and submit recommendations to the executive implementation forums, chaired by the coordinating minister.
The executive forums, in turn, adopted or adjusted the recommendations of the technical forums and made their own recommendations to the relevant Cabinet committees, which submitted recommendations to the full Cabinet. Throughout the chain, feedback and assistance facilitated the work of delivery partners and coordinated their activities.

The forums developed joint action plans that enabled departments to address policy outcomes in concert with other departments. Gordhan said this cooperative approach reflected a key lesson he had learned as Johannesburg city manager: that successful outcomes are produced by multiple agencies working together. Because 55% of urban diseases are water-borne, for instance, Gordhan said he and Engela were sure to include the Department of Water Affairs in strategizing responses to various health challenges.

The forums fostered a cooperative spirit. Ivor Chipkin, director of the Public Affairs Research Institute, a South African institution that studied government performance in developing countries, gave one example involving the Ministry of Sports and Recreation and the Ministry of Basic Education. Historically, Chipkin explained, soccer was the main sport at schools with predominantly black student bodies, while rugby was the sport of choice at so-called "white" schools. As a result, the schools rarely met for sports events that might help break down some of South Africa's social barriers. "The reason it wasn't happening," said Chipkin, "was because of a departmental confusion over who did what. Schools fell under [the Ministry of] Basic Education and didn't view [after-school sports] as a key part of its mandate. ... Now the two departments can talk to each other."

Between April and November 2010, the forums developed delivery agreements for each of the original 12 policy outcomes, specifying the inputs, such as funds or personnel; the outputs that were required; the activities needed to achieve the outputs; and a timetable for implementation. Gordhan and Engela focused on popularizing the idea of measurability in the delivery agreements, requiring measurable targets for each output. Although they initially hoped to assign responsibility for delivery of each target to individuals, in many cases they had to settle for designating units or departments as bearing responsibility.

OVERCOMING OBSTACLES

Despite strong presidential support at the initial stages, Gordhan and Engela needed to get the attention of the ministries. Because they represented a new direction in the presidency, with an ambitious plan to set policy priorities centrally for other ministries and assess their performance, Engela and Gordhan anticipated resistance. To preempt the pushback and "essentially shock them," Engela compiled data showing ministries' poor service delivery, based on indicators she had been publishing through the presidency since 2007. She confronted ministers in November 2009 with the performance data immediately before releasing the set of indicators she and Gordhan had developed. After that, she said, ministries could not dismiss the notion that substantial changes were necessary.

Gordhan and Engela still faced resistance from ministers and senior managers over specific aspects of targets. Gordhan explained, "If you're a typical risk-averse bureaucrat that doesn't want to be held accountable, that's your response: Procrastinate, delay, and make it harder to measure."
In one instance, a minister unhappy with the negotiations over his performance agreement appealed directly to Zuma to intervene, but the president was steadfast in his support for the DPME. Generally, though, Zuma played a passive role in the process. As a result, Gordhan and Engela spent from November 2009 until April 2010 negotiating targets, and they continued some negotiations until the November 2010 signing of the delivery agreements.

Resistance from ministers affected the design and formation of implementation forums as well. The DPME had expected the implementation forums to replace the clustering system and improve interministerial coordination. But where ministries were already coordinating activities through clusters toward something approximating one of the outcomes, such as the crime cluster, some ministers argued that disrupting clusters would stymie progress. After months of negotiations, many of the original clusters were essentially retained, with a mandate to carry out delivery agreements with DPME participation. "We just provided them with a more strategic agenda," said Philips. A drawback of this compromise, according to Philips, was that the forums for the most part included only government agencies and left out civil-society groups that could have contributed to devising and implementing delivery agreements. Philips said, "They [delivery agreements] should be negotiated by all key stakeholders, but for some, negotiations were too narrow within government and didn't include some key stakeholders."

ASSESSING RESULTS

Zuma signed performance agreements in April 2010 with his 34 ministers, who together framed 12 delivery agreements during the following months. Not surprisingly, some delivery agreements were better than others. Although some assigned specific responsibilities to particular officials, others delegated tasks vaguely to entire departments. Other complications stemmed from the lack of accurate baseline data in some sectors. Still, the performance and delivery agreements succeeded in their main purpose of specifying delivery targets against which ministers and departments could be assessed.

The most important contribution of the outcomes approach, according to Gordhan, was the use of data in formulating and assessing policies. "What we introduced is to have a measurable output linked to an outcome, even if vaguely linked," he said. "It can improve over time, even if it's tenuously linked at first." Previously, he explained, the Ministry of Education, for example, would say, "'We'll improve education' but didn't say, 'We'll do it by this, and we'll measure it by this.' We just did that second part. We weren't rewriting policy in that sense. What we did was change how you articulate policy and understand what you are achieving."

The outcomes approach made the process of formulating policies more coherent and more closely coordinated across ministries. Chipkin said, "What had been happening for a good decade is that departments had been planning for themselves, and it was often ad hoc. Even the creation of ministries and departments didn't reflect some overall logic regarding configuration of the state." He saw in the outcomes approach an opportunity to improve policies within and across ministries. "The real value of the exercise is to start for the first time incentivizing the partners [to the delivery agreements] to start planning strategically on their own terms, in terms of their own portfolios and targets, and also amongst themselves," he said.
However, the outcomes approach had several shortcomings. Without a legislative mandate, enforcement of performance agreements and delivery agreements relied on Zuma's sustained commitment to hold ministers and departments accountable. But Gordhan and other officials said the president's support began to weaken after the politically bruising process of completing the agreements. Zuma, who considered himself a conciliatory leader rather than a confrontational one, knew that he risked alienating his ministers and colleagues if he demanded strict compliance with the outcomes system. The president also needed to shore up support in his party, the African National Congress, ahead of leadership conferences in 2012 that would determine whether he would be nominated to run for a second term. In early 2011, Gordhan described the situation: "[The] weakness is that we don't have the political will at the end to drive the system ... because they know it will lead to more conflict, not because they don't think it's important."

Additionally, some critics said that the architects of the outcomes approach, in their haste to get the system up and running early in Zuma's term, failed to invest the time and effort needed to develop ministries' ownership of the effort. Ministries that didn't embrace the new system would come to consider it an external reporting requirement of little value to them, similar to the Treasury's Program Performance Information system.

REFLECTIONS

With presidential support wavering in 2011, implementation of the outcomes approach set up by the Ministry of Performance Monitoring and Evaluation was uneven across departments. Some ministries, including those responsible for health and education, embraced the system without pressure from the presidency. The minister of health even made some targets more ambitious than those suggested by Ketso Gordhan and Ronette Engela, the two key officials in the Ministry of Performance Monitoring and Evaluation, in order to motivate staff. Other ministries, however, did not embrace the outcomes approach with the same vigor.

Ivor Chipkin, director of the Public Affairs Research Institute, thought successful implementation in some ministries would give the DPME leverage in persuading other ministers to increase their future participation in the outcomes approach. "We'll have a baseline to see whether this stuff actually does anything," he said. "In five years, we can see if it made a difference. Uneven implementation will be a blessing." Engela said, "When the ministries who take advantage of this system show the results, other ministries will hopefully see its value in the future."

Gordhan was optimistic that, despite shortcomings, the system had changed the culture of planning, monitoring and evaluation of policies in South Africa to embrace data-based processes. "The minister of justice reported to cabinet a few weeks ago, and every line of the report had a number ... in relation to targets," Gordhan said. "I read every cluster report for years, and I didn't find one number. Now, people are talking numbers."

Terms of Use

Before using any materials downloaded from the Innovations for Successful Societies website, users must read and accept the terms on which we make these items available. The terms constitute a legal agreement between any person who seeks to use information available at www.princeton.edu/successfulsocieties and Princeton University. In downloading or otherwise employing this information, users indicate that:
a. They understand that the materials downloaded from the website are protected under United States Copyright Law (Title 17, United States Code).

b. They will use the material only for educational, scholarly and other noncommercial purposes.

c. They will not sell, transfer, assign, license, lease, or otherwise convey any portion of this information to any third party. (Re-publication or display on a third party's website requires the express written permission of the Princeton University Innovations for Successful Societies program or the Princeton University Library.)

d. In all publications, presentations or other communications that incorporate or otherwise rely upon information from this archive, they will acknowledge that such information was obtained through the Innovations for Successful Societies website. A suggested citation format is below.

[Document author if listed], [Document title], Innovations for Successful Societies, Princeton University, accessed at http://www.princeton.edu/successfulsocieties on [date accessed on web]

e. They understand that the quotes used in the case study reflect the interviewees' personal points of view. Although all efforts have been made to ensure the accuracy of the information collected, Princeton University does not warrant the accuracy, completeness, timeliness or other characteristics of any material available online.

f. They acknowledge that the content and/or format of the archive and the site may be revised, updated or otherwise modified from time to time.

g. They accept that access to and use of the archive is at their own risk. They shall not hold Princeton University liable for any loss or damages resulting from the use of information in the archive. Princeton University assumes no liability for any errors or omissions with respect to the functioning of the archive.

Jonathan Friedman
Innovations for Successful Societies
(c) 2011, Trustees of Princeton University

Terms of use and citation format appear at the end of this document and at http://www.princeton.edu/successfulsocieties.