Ugandan seed distributors aren’t adulterating seeds; it’s probably a problem of handling and storage

This is from a new paper by Alicia Barriga and Nathan Fiala in World Development:

Results from the tests showed very high levels of DNA similarity (above 98%) and good performance in general, but highly variable quality in terms of the ability of the seed to germinate under standard conditions. We do not see differences in average outcomes across the distribution levels, though variation in seed performance does increase further down the supply chain.

The results of the tests point to potentially important issues for the quality of seeds. The variation in germination suggests that buying a random bag of seeds in this particular distribution chain can matter a lot for farmer’s production. The high rate of seed similarity suggests that the main concern among policy makers and researchers, that sellers add inert or low-quality material to the seeds, is likely not the case, at least for the maize sector in the districts we study. However, given the remoteness of these districts and the lack of any oversight in these areas, we believe the results are likely a lower bound for the country as a whole.

The supply chain analysis suggests that the quality of seed does not deteriorate along the supply chain. The quality is the same, on average, across all types of suppliers after leaving the breeders. However, we observe high variation of seeds’ performance results on germination, moisture, and vigor, suggesting that results are more consistent with issues of mishandling and poor storage of seeds, possibly related to temperature or quality controls, rather than sellers purposefully adulterating seeds. Variation on these indicators is usually associated with mishandling during transportation and storage.

As the authors note in the paper, African governments and their external donors have put a lot of effort into “certification and labeling so as to reduce the possibility of adulteration by downstream sellers”. Obviously, e-labels and systems for verifying seed authenticity are important in the fight against adulteration. But equally important is an understanding of how the seed distribution system works. And that is one of the major contributions of this paper. Corruption is not always the problem.

Read the whole paper here.

[Figure: cereal yields in Kenya, Tanzania, and Uganda, constructed from FAO data]

Interestingly, Uganda bests both Kenya and Tanzania on productivity in the cereal sector (I made the graph using FAO data). Despite starting off with relatively lower productivity and going through civil conflict beginning in the late 1970s, Uganda has, since around 2007, clearly separated itself from both Kenya and Tanzania (though its yields appear to have plateaued). Productivity in Kenya peaked in the early 1980s and has pretty much stagnated since. Tanzania’s figures appear to be trending upwards, having collapsed in the early 2000s. There is likely an element of soil quality and general aridity involved in these trends. According to the FAO, Kenya and Tanzania use fertilizer at significantly higher rates than Uganda. For comparison, cereal yield in Vietnam is about 2.7 times higher than in Uganda.
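If you want to reproduce this kind of cross-country comparison yourself, here is a minimal sketch in Python (pandas + matplotlib). It assumes you have downloaded a FAOSTAT bulk CSV and saved it locally as faostat_crops.csv; the column names and the “Cereals”/“Yield” labels follow the usual FAOSTAT layout but should be checked against your own download. This is an illustration of the general approach, not the exact code behind the chart above.

```python
# Sketch: plot cereal yields for Uganda, Kenya, and Tanzania from a FAOSTAT CSV.
# Assumes a local file "faostat_crops.csv" with the standard FAOSTAT columns
# ("Area", "Item", "Element", "Year", "Value"); adjust labels to match your file.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("faostat_crops.csv")

countries = ["Uganda", "Kenya", "United Republic of Tanzania"]
yields = df[
    (df["Area"].isin(countries))
    & (df["Item"] == "Cereals")     # aggregate cereals item (name may vary by FAOSTAT release)
    & (df["Element"] == "Yield")    # FAOSTAT reports yield in hectograms per hectare
].copy()

# Convert hg/ha to tonnes/ha for readability (10,000 hg = 1 tonne).
yields["t_per_ha"] = yields["Value"] / 10_000

pivot = yields.pivot(index="Year", columns="Area", values="t_per_ha")
pivot.plot(figsize=(8, 5))
plt.ylabel("Cereal yield (tonnes/ha)")
plt.title("Cereal yields: Uganda, Kenya, Tanzania (FAO data)")
plt.tight_layout()
plt.show()
```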

World Development symposium on RCTs

World Development has a great collection of short pieces on RCTs.

Here is Martin Ravallion’s submission: 

… practitioners should be aware of the limitations of prioritizing unbiasedness, with RCTs as the a priori tool-of-choice. This is not to question the contributions of the Nobel prize winners. Rather it is a plea for assuring that the “tool-of-choice” should always be the best method for addressing our most pressing knowledge gaps in fighting poverty.

… RCTs are often easier to do with a non-governmental organization (NGO). Academic “randomistas,” looking for local partners, appreciate the attractions of working with a compliant NGO rather than a politically sensitive and demanding government. Thus, the RCT is confined to what NGO’s can do, which is only a subset of what matters to development. Also, the desire to randomize may only allow an unbiased impact estimate for a non-randomly-selected sub-population—the catchment area of the NGO. And the selection process for that sub-sample may be far from clear. Often we do not even know what “universe” is represented by the RCT sample. Again, with heterogeneous impacts, the biased non-RCT may be closer to the truth for the whole population than the RCT, which is (at best) only unbiased for the NGO’s catchment area.

And here is David McKenzie’s take: 

A key critique of the use of randomized experiments in development economics is that they largely have been used for micro-level interventions that have far less impact on poverty than sustained growth and structural transformation. I make a distinction between two types of policy interventions and the most appropriate research strategy for each. The first are transformative policies like stabilizing monetary policy or moving people from poor to rich countries, which are difficult to do, but where the gains are massive. Here case studies, theoretical introspection, and before-after comparisons will yield “good enough” results. In contrast, there are many policy issues where the choice is far from obvious, and where, even after having experienced the policy, countries or individuals may not know if it has worked. I argue that this second type of policy decision is abundant, and randomized experiments help us to learn from large samples what cannot be simply learnt by doing.

Reasonable people would agree that the question should drive the choice of method, subject to the constraint that we all stay committed to the important lessons of the credibility revolution.

Beyond the questions about inference, we should also endeavor to address the power imbalances that are part of how we conduct research in low-income states. We should always be working to increase the likelihood that we are asking the most important questions in the contexts where we work, and that our findings will be legible to policymakers. Investing in knowing our contexts and the societies we study (and taking people in those societies seriously) is a crucial part of reducing the probability that our research comes off as well-identified navel-gazing.

Finally, what is good for reviewers is seldom useful for policymakers. We could all benefit from a bit more honesty about this fact. Incentives matter.

Read all the excellent submissions to the symposium here.