What is the Value of Randomized Control Trials?
Illustrating vignette: "Today, I'm afraid I accidentally insulted the woman I interviewed. I guess my translator Jeffrey interpreted my question as 'how many of your children get fed.' That wasn't what I meant, but she got angry and bristled through the rest of the interview. I realized later on how insulting that had seemed on my part. It also made me realize how bad an interviewer I am. It takes a lot of skill, and I'm not really there yet. Asking pertinent, but non-insulting, follow-up questions is harder than I anticipated. It's really hard to get at the information I'm looking for during my interviews. I end up going through my notes at the end of the day and realizing I have NO IDEA about the backstories or reasons for some of the answers. Sure, I have data, but I don't really know what it means on a deeper level because my interviewing skills aren't very good. If I'm only understanding things on the surface level, and not digging deeper, I'm worried that I'll be able to paint an interesting story, but not one that is really true in the local sense."
What are RCTs?
The basic goal of RCTs is to better understand how development programs affect different types of populations. For economics students out there, by implementing a program in one setting and not another, economists hope to move some previously unobserved variables “out of the error term” to better understand how the characteristics of populations shape development outcomes.
Development Economists model these experiments off of experiments in the natural sciences, so a random sample is important to ensure that results can be attributed to the program and not heterogeneity in the samples.
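The logic of "heterogeneity balancing out" can be sketched with a toy simulation in Python. Everything here is invented for illustration: villagers get an unobserved baseline trait, half are randomly assigned a program with a made-up effect of 5 units, and the difference in group means recovers that effect because random assignment balances the unobserved traits.

```python
import random

random.seed(0)

# Hypothetical sketch: each villager has an unobserved baseline trait
# (motivation, health, local custom) drawn from the same distribution.
population = [random.gauss(50, 10) for _ in range(10_000)]

# Random assignment: shuffle, then split in half.
random.shuffle(population)
treated_baseline, control_baseline = population[:5_000], population[5_000:]

TRUE_EFFECT = 5  # the program raises outcomes by 5 units (made-up number)
treated_outcomes = [b + TRUE_EFFECT for b in treated_baseline]

def mean(xs):
    return sum(xs) / len(xs)

# Because assignment was random, unobserved traits balance out across
# groups, so the difference in means recovers the program effect.
estimate = mean(treated_outcomes) - mean(control_baseline)
print(f"estimated effect: {estimate:.2f}")
```

The estimate lands close to 5 only because of the random split; if the treated group had been chosen by, say, motivation, the difference in means would mix the program effect with the baseline difference.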
What's the Problem?
Through talking with some Anthropology professors during this course, I got especially interested in analyzing the role Randomized Control Trials play in Development Economics. I learned that many Anthropologists believe that RCTs, the gold standard of development economics, are not trustworthy because they ignore local factors as statistical "noise." The specifics of local customs, economics, religion, and geography are considered irrelevant to determining the success of development projects. By taking a large sample size, peculiar local factors are supposed to balance each other out so development economists can see the true effects of a program on a random population.
As an Economics major I was very interested in this critique, and I decided to dig deeper to learn more about why some people from the Anthropological perspective distrust RCTs.
This discussion is important in the context of participatory development programs because it exemplifies how Economists can be inattentive to local factors. Participatory programs need to be designed and evaluated with an understanding of local factors, and RCTs might not be the best method of evaluating such programs.
Intrinsic Problems with RCTs
- Impossibility of "blinding" participants
- Randomizing waste
- Designers having a heart
- "Faux" exogeneity
- Side-payments ruin randomness
- Population non-compliance
- People aren't rocks, and the LATE misses the point!

(All information from (4))
For ethical and practical reasons, it is often impossible to "blind" participants to their status in the treatment or control group. Participants who realize they are in the control group sometimes enroll themselves in multiple trials to receive the maximum services, which can taint the randomness of the sample.
RCTs have been criticized as wasteful, because an obsession with acquiring a random sample leads researchers to knowingly treat populations who do not need the program and to reject others who do.

Because experimental designers "have a heart" and don't want to see funds denied to people who truly need the program, they sometimes contravene the experimental design by choosing participants who "need" it. This ruins the randomization on which the trials are based.

Although all participants are supposed to receive the exact same treatment, and the treatment effects are supposed to be completely exogenous to any other observed factors, in practice this is often not the case. Comprehension of the program material almost certainly varies among participants in ways that are correlated with other observed attributes and with the expected returns from the treatment. Economics students know that not understanding how regressors are correlated can lead to model mis-specification and a biased estimator of the program effects. This critique gets heavy into econometrics, but it suffices to say that when researchers don't understand why some participants benefit more from the treatment than others, their estimate of the program's impact can be statistically insignificant.

When researchers have to give side-payments for people to participate in the program, we no longer see a random sample, which biases our understanding of the program's effects.

Some of the population can refuse to participate in surveys associated with the program, which again means that researchers cannot collect a random sample.

Ultimately, RCTs are limited by the same critique detailed in the previous page, "Can Econometric Variables Describe Human Development?" RCTs seek to measure something called the Local Average Treatment Effect (LATE), which is a measure of the program's effect for the sub-population which actually complies with the program.
However, when researchers don't observe why people in this population respond differently to the program, the LATE estimator is uninformative. The biophysical sciences differ from the social sciences in that basic physio-chemical laws ensure that the system responds in a homogeneous way every single time, or at least close to it. In the social sciences, researchers take a large sample size and hope that, by the laws of statistics, the "system" will respond in a predictable way to a program. However, people, and their environments, differ in minute ways, and all these little differences among us make us respond differently to programs. The largeness of the data ignores the peculiarities of people or villages that make them respond differently to the program. RCTs have no ability to bring these little differences out of the error term and contribute to an understanding of how programs work on the local, or individual, level. That understanding would be of greatest interest to researchers and policy-makers, but RCTs only try to explain what happens at the mass aggregate scale. This is less useful for policy, and it is a consequence of doing random experiments with people instead of rocks. (Note, for Economics majors out there: this critique is similar to the "Faux Exogeneity" critique, except that it says that even if the program's effects are exogenous to other observed variables, and we have avoided these statistical biases, the estimate of the program's effects may be worthless for policy discussions.)
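To make the LATE critique concrete, here is a toy simulation (all numbers invented). An unobserved local factor determines who can actually comply with the program, and only compliers benefit. The standard Wald estimator recovers the compliers' effect, but, as the critique says, it cannot tell us anything about why the local factor made only some people respond.

```python
import random

random.seed(1)

# Toy population: an unobserved local factor (e.g. distance to the
# clinic) decides who complies; compliers gain ~8 units, others gain 0.
N = 10_000
rows = []
for _ in range(N):
    complier = random.random() < 0.6           # unobserved local factor
    effect = random.gauss(8, 2) if complier else 0.0
    baseline = random.gauss(50, 10)
    assigned = random.random() < 0.5           # random assignment
    treated = assigned and complier            # non-compliers opt out
    outcome = baseline + (effect if treated else 0.0)
    rows.append((assigned, treated, outcome))

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# Intent-to-treat: raw difference between assigned and unassigned groups.
itt = mean(y for a, t, y in rows if a) - mean(y for a, t, y in rows if not a)

# Wald estimator for the LATE: scale the ITT by the compliance rate.
take_up = mean(1.0 if t else 0.0 for a, t, y in rows if a)
late = itt / take_up

print(f"ITT ~ {itt:.1f}, LATE ~ {late:.1f}")
# The LATE recovers the compliers' effect, but the unobserved local
# factor driving who complies never leaves the error term.
```

The point of the sketch is that the LATE comes out close to the compliers' 8-unit effect while the intent-to-treat effect is diluted by non-compliers, and neither number explains the local factor that split the population in the first place.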
Extrinsic Problems with RCTs
Because not every question of interest to development organizations can be approached through a randomized design, the hegemony RCTs hold in development economics means that many interesting studies never take place. This is a problem because RCTs are often seen as the way of knowing things, and it is an argument for rebalancing the research agenda.
RCTs are incredibly complex and time-consuming, and they often serve people who don't truly need the program. This funding could be reapplied to programs treating people who really need the help. RCTs build knowledge, but at what point does pragmatism trump building more and more knowledge?
Conclusions
As I hope I have demonstrated, there are significant doubts about the usefulness of Randomized Control Trials. Most damning is the critique labeled "People Aren't Rocks," because if the program effect estimated by an RCT says nothing about how local factors affect program success, what have we really learned? There is definitely a place for anthropologists to improve our understanding of how local factors shape development outcomes. As we see again, Economists look at the big scale, but there's room to learn more at the small scale.
Because RCTs don't unearth how local factors affect development outcomes, they are of limited use in the context of participatory development programs.