TOMS impact evaluation finds zero to negative effects in El Salvador

This is from the Economist:

The first of two studies found that TOMS was not wrecking local markets. On average, for every 20 pairs of shoes donated, people bought just one fewer pair locally—a statistically insignificant effect. The second study also found that the children liked the shoes. Some boys complained they were for “pregnant women” and some mothers griped that they didn’t have laces. But more than 90% of the children wore them.

Unfortunately, the academics failed to find much other good news. They found handing out the free shoes had no effect on overall shoelessness, shoe ownership (older shoes were presumably thrown away), general health, foot health or self-esteem. “We thought we might find at least something,” laments Bruce Wydick, one of the academics. “They were a welcome gift to the children…but they were not transformative.”

More worrying, whereas 66% of the children who were not given the shoes agreed that “others should provide for the needs of my family”, among those who were given the shoes the proportion rose to 79%. “It’s easier to stomach aid-dependency when it comes with tangible impacts,” says Mr Wydick.

For a litany of criticisms of TOMS before the study see here, here, and here. The original study is available here.

Also, would anyone ever think that donating shoes, or even mining hard hats, to rural Kentucky would be “transformative”?

Anyway, huge props to TOMS for daring to scientifically study the impact of their ill-advised in-kind aid initiative.

Cash transfers do not make the poor lazy

This is from the New York Times:

Abhijit Banerjee, a director of the Poverty Action Lab at the Massachusetts Institute of Technology, released a paper with three colleagues last week that carefully assessed the effects of seven cash-transfer programs in Mexico, Morocco, Honduras, Nicaragua, the Philippines and Indonesia. It found “no systematic evidence that cash transfer programs discourage work.”

A World Bank report from 2014 examined cash assistance programs in Africa, Asia and Latin America and found, contrary to popular stereotype, the money was not typically squandered on things like alcohol and tobacco.

Still, Professor Banerjee observed, in many countries, “we encounter the idea that handouts will make people lazy.”

Professor Banerjee suggests the spread of welfare aversion around the world might be an American confection. “Many governments have economic advisers with degrees from the United States who share the same ideology,” he said. “Ideology is much more pervasive than the facts.”

More on this here.

A call for “politically robust” evaluation designs

Heather Lanthorn cites Gary King et al. on the need for ‘politically robust’ experimental designs for public policy evaluation:

Scholars need to remember that responsive political behavior by political elites is an integral and essential feature of democratic political systems and should not be treated with disdain or as an inconvenience. Instead, the reality of democratic politics needs to be built into evaluation designs from the start — or else researchers risk their plans being doomed to an unpleasant demise. Thus, although not always fully recognized, all public policy evaluations are projects in both policy analysis and political science.

The point here is that what pleases journal reviewers is seldom useful for policymakers.

H/T Brett

Why are some African governments so bad at managing their countries’ resources?

UPDATE: According to Reuters, Israeli billionaire businessman Dan Gertler sold one of his Congo-based oil companies to the government last year for $150 million – 300 times the amount paid for the oil rights – in a deal criticised by transparency campaigners.

*************************************************************

Resource mismanagement in Africa is not just a story of rampant corruption and the complete lack of political will for reform. It is also a story of governments that remain completely out-staffed by multinationals with far superior technical capacities. Improving resource management on the continent will therefore have to be as much about government technical capacity development as it will be about political reform.

Vale, for example, employs nearly 200,000 people around the world and has annual profits equivalent to nearly four times Mozambique’s state budget. It can recruit, train, and compensate employees to represent its interests on a scale far beyond what the government can do. Without Vale’s capacity for number crunching, Mozambique’s regulators lean on the companies they oversee for all manner of important data.

In 2011, the Mozambican government published an independent study of the country’s mining, oil, and gas industries. Conducted by a Ghanaian consulting company, Boas and Associates, the report was part of Mozambique’s application to the Extractive Industries Transparency Initiative, a World Bank-funded program designed to encourage an honest accounting of mining revenue and payments by participating countries and corporations alike. Mozambique’s candidacy was ultimately denied on the basis of its failure to publish what it earned from the companies involved, [but the report also noted a lack of qualified personnel in the agencies governing almost every aspect of the extraction of Mozambique’s natural resources]: licenses, prospecting, mining and drilling, sales, export.

According to the report, the [Mozambican government has no way of verifying the quality and quantity of minerals in the concessions it leases to private companies, and it depends on those companies for data on what is ultimately mined and exported]. Worse, the government has no system for monitoring global commodity prices or of tracking companies’ investment costs, which means it cannot independently verify a company’s profits.

Lesson? It’s not all corruption. It is also about the incentive structure that has resulted from the government’s reliance on the word of profit-maximizing mining companies.

Improving government capacity to regulate resource sector operations is a key pillar of accountability and transparency that is currently missing from the discussion on how to manage Africa’s resources. It is easier to blame it all on thieving politicians and mining executives.

Not all governments will find it useful to improve their technical capacity (it is easier to steal when the valuation of state assets remains uncertain), but I bet many African states, especially those with moderately democratic regimes, can be persuaded to boost their technical capacity, if for nothing else than to improve their bargaining position vis-à-vis mining companies. At a minimum, this would mean more money for their pockets, and perhaps also more money for roads, schools, and hospitals.

Can RCTs be useful in evaluating the impact of democracy and governance aid?

The Election Guide Digest has some interesting thoughts on the subject. Here is part of the post:

The use of the RCT framework resolves two main problems that plague most D&G evaluations, namely the levels-of-analysis problem and the issue of missing baseline data. The levels-of-analysis problem arises when evaluations link programs aimed at meso-level institutions, such as the judiciary, with changes in macro-level indicators of democracy, governance, and corruption. Linking the efforts of a meso-level program to a macro-level outcome rests on the assumption that other factors did not cause the outcome.

An RCT design forces one to minimize such assumptions and isolate the effect of the program, versus the effect of other factors, on the outcome. By choosing a meso-level indicator, such as judicial corruption, to measure the outcome, the evaluator can limit the number of relevant intervening factors that might affect the outcome. In addition, because an RCT design compares both before/after in a treatment and control group, the collection of relevant baseline data, if it does not already exist, is a prerequisite for conducting the evaluation. Many D&G evaluations have relied on collecting only ex-post data, making a true before/after comparison impossible.

Yet it would be difficult to evaluate some “traditional” D&G programs through an RCT design. Consider an institution-building program aimed at reforming the Office of the Inspector General (the treatment group) in a country’s Ministry of Justice. If the purpose of the evaluation is to determine what effect the program had on reducing corruption in that office, there is no similar office (control group) from which to draw a comparison. The lack of a relevant control group and sufficient sample size is the main reason many evaluations cannot employ an RCT design.
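The before/after, treatment-and-control comparison the post describes amounts to a difference-in-differences estimate. Here is a minimal sketch with entirely made-up judicial-corruption scores; the function name, the data, and the scoring scale are all illustrative, not from the post or any actual D&G evaluation:

```python
# Difference-in-differences: compare the change in a meso-level
# indicator (e.g., a judicial-corruption score) among treated units
# against the change among control units over the same period.

def diff_in_diff(treat_before, treat_after, ctrl_before, ctrl_after):
    """Each argument is a list of unit-level scores.
    Returns the DiD estimate of the program effect."""
    mean = lambda xs: sum(xs) / len(xs)
    treated_change = mean(treat_after) - mean(treat_before)
    control_change = mean(ctrl_after) - mean(ctrl_before)
    return treated_change - control_change

# Hypothetical scores (higher = more corruption) for courts
# randomly assigned to a reform program versus left as controls.
effect = diff_in_diff(
    treat_before=[0.6, 0.7, 0.8],
    treat_after=[0.4, 0.5, 0.6],
    ctrl_before=[0.6, 0.7, 0.8],
    ctrl_after=[0.6, 0.7, 0.7],
)
print(effect)  # negative: corruption fell more where the program ran
```

The point of the design is visible in the subtraction: the control group's change absorbs whatever macro-level trends would otherwise be wrongly attributed to the program, which is exactly why baseline (before) data in both groups is a prerequisite.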

More on this here.