A collation of my work and writing on randomised control trials (RCTs)

The single topic that I have written on most extensively to date is the use of randomised control trials (RCTs) in economics to identify causal effects, generalise those findings and make policy claims. Much of this was done before this approach was awarded the 2019 Nobel Memorial Prize in Economics: I started work on RCTs in 2010 for my PhD in economics.

The purpose of this page is to collate links to all of that work in one place. I have ordered the publications based on how some interested readers might want to go through them (which is why the link to my 200+ page PhD thesis comes at the end!).

Some of the academic articles are, unfortunately, gated – I put 🔐 symbols next to those. Feel free to contact me if you’d like a copy of any of them.

The 2019 Nobel Memorial Prize in Economics was awarded to three scholars for the methodological approach that has been the focus of critique in my work. In a short paper in a special issue of the journal World Development on the 2019 Nobel, “The implications of a fundamental contradiction in advocating randomized trials for policy” (🔐), I aim to provide a succinct version of my argument against that ‘randomista’ approach.

Two articles in The Conversation, co-authored with Grieve Chelwa and Nimi Hoffmann and aimed at a more general audience, also respond to the 2019 Nobel award. The first, How randomised trials became big in development economics, provides some background. The second, Randomised trials in economics: what the critics have to say, explains some of the criticisms – including my own.

In a special issue of the CODESRIA Bulletin on Randomised Control Trials and Development Research in Africa, I argue that RCTs are “A Dead-End for African Development”. In other words, I argue that the likely outcome of the emphasis on such methods will be to retard development – in sharp contrast to what proponents claim.

One example I discuss in that paper is the use of RCTs in the context of education policy debates in South Africa. In a seminar given as a visiting fellow at the Johannesburg Institute for Advanced Study, “The new colonial missionaries: basic education policy and randomised trials in South Africa”, I explain how academics with a ‘missionary zeal’ (Bardhan) have sought, with some success, to inappropriately dominate basic education policy debates in South Africa. (I also wrote a fairly lengthy blog post on related matters in 2016, “Some thoughts on Taylor and Watson’s (2015) RCT on the impact of study guides on school-leaving results in South Africa”).

The crux of the formal (technical/econometric) argument I have made against the ‘randomista’ use of randomised trials in economics and for public policy was published well before the Nobel was awarded, in an article in the World Bank Economic Review, “Causal Interaction and External Validity: Obstacles to the Policy Relevance of Randomized Evaluations”.
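To convey the intuition behind that argument, here is a minimal illustration in standard regression notation (my shorthand for this page, not the article’s exact formalism): when the treatment effect interacts with a covariate, the average effect estimated in one population need not carry over to a population where that covariate is distributed differently.

```latex
% Minimal illustration (assumed notation, not the article's exact setup):
% outcome Y, binary treatment D, moderating covariate W.
Y_i = \beta_0 + \beta_1 D_i + \beta_2 W_i + \beta_3 D_i W_i + \varepsilon_i

% With interaction (\beta_3 \neq 0), the average treatment effect in a
% population P depends on the distribution of W in that population:
\mathrm{ATE}_P = \beta_1 + \beta_3 \, \mathbb{E}_P[W]

% An RCT run in P identifies ATE_P, but extrapolating to a target
% population P' requires either \mathbb{E}_{P'}[W] = \mathbb{E}_P[W] or
% \beta_3 = 0; neither assumption is something the trial itself can establish.
```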

Some of the limitations of using RCTs for policy, and insisting on them as the only truly credible basis for decision-making, have been revealed during the Covid-19 pandemic. I wrote a short paper on that for a special issue of History and Philosophy of the Life Sciences, “Masks, mechanisms and Covid-19: the limitations of randomized trials in pandemic policymaking”.

The misuse of an RCT to distort a public policy process in a manner that facilitated private sector rent-seeking – rather than the policy objective of reducing youth unemployment – was the focus of my detailed analysis of South Africa’s ‘Youth Employment Tax Incentive’ (YETI) published in Development and Change, “Evidence for a YETI? A Cautionary Tale from South Africa’s Youth Employment Tax Incentive” (🔐).

In a recent chapter in the Edward Elgar compilation A Modern Guide to Philosophy of Economics, “Randomised trials in economics” (🔐), I provide my most detailed assessment of these issues. A notable additional contribution of this chapter is that it examines the strategies that advocates of these methods use to counter criticisms, and explains why those strategies are unconvincing and cannot succeed.

In another chapter forthcoming in the Routledge volume The Positive and the Normative in Economic Thought, “The Unacknowledged Normative Content of Randomised Control Trials in Economics and Its Dangers” (🔐), I explain how normative factors (biases, prejudices, ideologies, etc.) enter a process that is typically represented as ‘objective’, ‘scientific’ and ‘neutral’. This develops a point I alluded to in earlier work.

All of this work began with the research conducted for my PhD in economics, “The external validity of treatment effects: an investigation of educational production”, which I started at the beginning of 2010 and completed in 2014 under the supervision of Martin Wittenberg. It was examined by Gary Solon, Jeff Smith and Steve Koch.

A note on the philosophy literature on external validity

Later this month (August 2019) I’ll be presenting a paper at the 14th conference of the International Network for Economic Method (INEM). The paper is titled, “From ‘data mining’ to ‘machine learning’: the role of randomised trials and the credibility revolution”. An apparent puzzle is that there’s a session on external validity – which was the subject of my economics PhD, a working paper and short publication – in which I’m not presenting. Surely if I am going to be presenting at conferences on the method or philosophy of economics I should be presenting my work on external validity? The short answer is: I already did in 2012. But I think the longer explanation is also worth giving.

First, the paper I will be presenting at INEM (and ENPOSS) builds explicitly on work I’ve done on external validity (henceforth ‘EV’).

Second, and more importantly, my contribution to the philosophy literature on EV was not just presented at the Evidence and Causality in the Sciences (ECitS) 2012 conference but subsequently finalised as a paper in 2012 and revised in 2013. Unfortunately that paper was not published at the time, for reasons that were at best flimsy. Preoccupied with finishing my economics PhD and changing jobs, I delayed resubmitting the manuscript. When I returned to academia in 2016 I discovered that a paper on the subject had been published in Philosophy of Science. More surprising was that, apart from some differences in verbiage and references, the core arguments of that paper appear to be the same as about 30-40% of my own paper, but with no reference to it or to my work in economics. Then, earlier this year, another paper appeared in the Journal of Economic Methodology. The core arguments of this paper, too, are very similar to the other 30-40% of my paper (dealing with issues like causal process tracing and related matters). In the second instance, my economics work is cited but misunderstood or misrepresented: suggesting that my views differ from the author’s when in fact, as is clear from the 2012/13 paper, they are almost entirely the same.

Needless to say, this creates a rather awkward situation. Not least because I believe, for reasons I will not ventilate in detail at this point, that it is implausible that the two authors were unaware of, or uninfluenced by, my 2012/13 work. But it is now effectively impossible to publish my own work, despite my having a clear claim to intellectual priority. These concerns have been taken up in the relevant fora, but the wheels turn slowly. And it will be informative to test the extent to which academic philosophy is committed to principles of intellectual priority. In the interim it makes for an ‘interesting’ context for intellectual engagement…

Economics: scientists and plumbers, or bullshit and mathiness?

On the 6th of January 2017 the annual American Economic Association conference is scheduled to host a plenary address entitled “The Economist as Plumber: Large Scale Experiments to Inform the Details of Policy Making”. The speaker is the academic economist Esther Duflo, widely acclaimed for popularising the use of randomised control trials (RCTs).

Given my PhD work in economics on the external validity of RCTs and its implications for policy, and parallel work in philosophy, I have a few thoughts on this subject. In a draft paper (first presented in 2015) entitled “When is Economics Bullshit?” I argue that practitioners promoting RCTs have systematically overstated the policy relevance of results and thereby produced ‘bullshit’ (as defined in the famous essay by philosopher Harry Frankfurt).

A consistent problem in critiquing so-called ‘randomistas’ is that the goalposts have been constantly shifted. Early advocacy for RCTs within economics reflected a ‘missionary zeal’ (Bardhan). It has been suggested that experimental methods have led to a ‘credibility revolution’: giving credibility to applied microeconomic work that apparently did not exist before. One recipient of the John Bates Clark Medal argued that the introduction of RCTs indisputably rendered economics ‘a science’. In the policy domain I, along with other economists, have come across much grander and/or more extreme claims. But when challenged, proselytisers scale back the claims and deny ever overclaiming. So from missionary zeal, revolution and science we now have plumbing…

I look forward to reading Duflo’s speech/paper, but my own view of the methodology and philosophy of economics and RCTs suggests that plumbing is a very poor analogy.

In my own paper, motivated in part by claims that RCTs render economics ‘a science’, I tackle the question of scientific status head on. Drawing on a revival of the so-called demarcation question in philosophy (basically: how do we demarcate science from non-science or pseudoscience?), I argue that economics cannot (yet) be classified as a science, may never be classifiable as such, and, in the way it is used by some economists, too often verges on pseudoscience and/or bullshit.

The similarities between this very critical view and Romer’s recent critique of macroeconomics (which was made public later) are interesting. Romer focuses more on the use of mathematical modelling, whereas my focus is on empirical methods. I will write a detailed comment on Romer’s piece later this year; I agree with some aspects but strongly disagree with others.

In its two presentations so far, my paper on bullshit has been relatively well received by philosophers of science but not so well received by philosophers of economics. There is good reason for this: the paper is even more an indictment of the current trend in philosophy of economics than it is of economics itself. The paper notes that, in the absence of sufficient technical training and understanding of economics, philosophers in this area have increasingly taken the safer route of becoming apologists for the discipline. In effect, they compete to provide explanations of why economists are correct in their approach. (Exceptions to this, such as Nancy Cartwright – who has collaborated with Angus Deaton in providing important and influential critiques of RCTs – arguably prove the rule: Cartwright’s reputation was already established in the philosophy of physics, causality and metaphysics.)

The result, unfortunately, is that philosophy of economics currently has very little to add to economists’ critical understanding of their own discipline. Some critics, such as Skidelsky, argue that economists should read more philosophy, but while I am sympathetic to his overall stance I do not think economists would find much worth reading at present. Combining the abject failure of the ‘mainstream’ of philosophy of economics with the low quality of most economists’ reflections on methodological issues leaves us with few critical insights that could move the discipline beyond parochial or self-interested debates.

Updated reference list on external validity

My review of the external validity literature is slowly working its way through the peer review process. Parts were presented from 2011 onwards, but it was first published as a full working paper here and then updated for the Annual Bank Conference on Development Economics in 2014. A short version of one key contribution from that work has been published here.

Since these pieces, however, the reference list has been expanded in two important ways.

First, I became aware of a number of references and parallel literatures outside economics that had either been missed in the original review or been published after the first version. Notably: in biostatistics (Elizabeth Stuart and co-authors), educational statistics (Elizabeth Tipton) and causal graphs (Elias Bareinboim and Judea Pearl).

Second, feedback through peer review questioned the omission of structural contributions to the topic – suggesting that this favoured the ‘design-based’ literature most closely associated with randomised control trials (RCTs). That was certainly not the intention. The rationale of the original review was to focus on the problem of external validity within the theoretical framework used by most RCT studies, in order to clearly delineate structuralist critiques from more fundamental external validity challenges.

I still think that it is absolutely critical to emphasise this distinction. However, there are contributions from the structural literature that propose something of a middle ground. Notably, work by Heckman, Vytlacil and co-authors argues for the merits of using the theoretical framework of Marginal Treatment Effects (MTEs). One interesting recent empirical contribution (by Amanda Kowalski) that does so is forthcoming in the Journal of Economic Perspectives. Given this, I have added a number of references from that literature and expanded the review to cover this middle ground between structural and design-based contributions.
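For readers unfamiliar with that framework, the following is a rough sketch of the MTE setup in the standard Heckman–Vytlacil notation (included purely as orientation; see their papers for the full treatment):

```latex
% Sketch of the Marginal Treatment Effect (MTE) framework, using the
% standard Heckman-Vytlacil notation (illustrative orientation only).

% Selection into treatment via a latent-index (choice) equation, with
% U_D normalised to be uniform on [0,1]:
D = \mathbb{1}\{\mu_D(Z) \ge U_D\}, \qquad U_D \sim \mathrm{Unif}(0,1)

% The MTE is the expected gain from treatment for individuals with
% observables X = x and unobserved resistance to treatment U_D = u:
\mathrm{MTE}(x, u) = \mathbb{E}[\, Y_1 - Y_0 \mid X = x,\ U_D = u \,]

% Conventional parameters (ATE, ATT, LATE, policy-relevant treatment
% effects) can be written as weighted averages of the MTE with different
% weights, which is what allows the framework to connect design-based
% estimates to structural extrapolation across populations and policies.
```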

While the paper proceeds through the publication process, I thought it would be useful to post the most recently submitted (May 2016) version of the reference list for those who may be interested. It can be found here.

Links: universities & growth, good podcasts and bad philosophy of economics, external validity, etc

Papers, blogs, podcasts

Do universities cause economic growth? Anna Valero and John Van Reenen have a paper saying yes

In my past engagements with higher education policy this question has annoyed me a lot, and I’ll write more about it in future posts. I get even more annoyed when I see definitive headlines based on papers with questionable identification strategies. We need a bit more humility about empirical work.


In which regard, I’ve recently been catching up on some EconTalk podcasts. I enjoyed these two:

Heckman on econometrics, with some useful comments about ‘Hayekian humility’, failures of prediction and the like.

Philip Tetlock on ‘superforecasting’


Even closer to the subject of my own recent work, two interesting-looking papers relating to RCTs and external validity:

Bentley MacLeod on an issue I’m interested in: the performance of subjective expertise

Banerjee, Chassang and Snowberg on decision-theoretic considerations relevant to external validity

I covered aspects of this in my PhD and published working paper on external validity, but look forward to reading this contribution.


Chris Blattman has a useful summary of recent developments among development NGOs relating to basic income grants (an idea that was debated at some length in South Africa over a decade ago):

http://chrisblattman.com/2016/04/15/ipas-weekly-links-57/


I came across a truly terrible piece on experimental methods in economics and ‘economics imperialism’. The saddest part is that this is often the only kind of ‘philosophising’ tolerated in parts of the discipline.

I had similar sentiments about this related podcast with Russ Roberts.

I have one draft paper and a sketch of a research programme on this topic, and the coverage given to the issue here is really bad (that’s as nicely as I can put it). Classify both as links to avoid.


A lot’s being said about the Panama Papers. People and companies should not evade taxes. The notable absence of some countries’ citizens from this particular database, though, does raise some interesting questions about possibly selective leaks.

Events and initiatives

On the 28th of April Thandika Mkandawire is speaking in Cape Town at a panel discussion entitled:

Africa and the Millennium Development Goals (MDGs): Progress, Problems and Prospects

Details here


In London, CEMMAP recently held a one-day conference on econometrics for public policy:

http://www.ifs.org.uk/uploads/cemmap/programmes/Econometrics%20for%20public%20policy%2C%20methods%20and%20applications%20040416.pdf

Looks like a great programme.


Economic Research Southern Africa has a new initiative to train academic economists in quantitative methods:

http://www.econrsa.org/call-application-skills-development-training-econometrics-0

On the one hand, this is a good idea. On the other hand, it’s a real slap in the face for those who have these skills but still can’t get academic jobs. However, it usefully supports a point I’ve been arguing for some time: in most disciplines, academics in South Africa are amongst the most protected of workers regardless of their competence or effort. (For international readers: in South Africa formalised ‘tenure’ processes don’t really exist.) Much more on both issues in future posts.


Forthcoming deadlines

The Campbell Collaboration annual conference is open for submissions:

http://www.campbellcollaboration.org/news_/What_Works_Global_Summit_Request_for_Submissions.php

Some thoughts on Taylor and Watson’s (2015) RCT on the impact of study guides on school-leaving results in South Africa

Since 2010 most of my academic research has focused on two particular areas:

  1. The use of randomised control trials (RCTs) to support inappropriate, or overly strong, policy claims or recommendations; and
  2. Empirical examples of how this has manifested in the economics of education.

I was therefore somewhat frustrated to attend a presentation at the Economic Society of South Africa conference in 2013 and find some rather strong policy claims being made on the basis of very weak evidence (even by the standards of practitioners favouring RCTs). I raised my concerns with the relevant author, but I see that the recently published working paper contains the same problems.

It therefore seems appropriate to summarise my concerns with this work: partly so that interested parties can understand its flaws, but mainly to provide an illustration of how the new fad for RCT-based policy is often oversold.[1] That’s important because, despite seemingly ample evidence, I often have economists tell me: “Oh but no-one really uses RCT results in that way”.
