
Interview with Amy Finkelstein about Oregon Health Insurance Experiment

by sinan on April 17th, 2017
Amy Finkelstein

1) What are the key results from your work on the Oregon Health Insurance Experiment?

The Oregon Health Insurance Experiment is a randomized evaluation of the impact of extending Medicaid to low-income, uninsured adults. In 2008 the state of Oregon ran a lottery to offer Medicaid to some low-income uninsured adults but not others who signed up on the lottery list. My co-PI Kate Baicker of the Harvard School of Public Health and I have been working with a large team of collaborators in academia, the state of Oregon, and the health care system to use this lottery to study the impact of Medicaid on a wide range of outcomes. We study the impact of Medicaid in the first two years of coverage; after that the state offered Medicaid to individuals who had lost the lottery.

The experiment and its key findings are summarized in a J-PAL North America briefcase; additional details can be found on our study homepage. There we also provide the underlying data and programs used in the analyses so that interested researchers can perform their own analyses (or find useful material for designing student problem sets!).

Broadly speaking, we examined Medicaid’s impacts in three main domains: health care utilization, economic security, and health. Medicaid increases health care use across the board. Hospital admissions, ER visits, physician visits, prescription drugs, and preventive care all increase with Medicaid coverage. Our finding of a large – 40 percent – increase in ER visits garnered particular attention, given widespread prior conjectures that covering the low-income uninsured with Medicaid would get the uninsured out of the emergency room and into primary care.

Medicaid improves economic security. Medicaid reduces out of pocket medical spending, medical debt, and the probability that individuals report having to skip paying other bills or borrow to pay their medical expenses. This suggests that Medicaid is doing what insurance is designed to do: smoothing consumption (and hopefully marginal utility of consumption) across health states of the world. In this respect, one of our more interesting findings is that Medicaid virtually eliminates the probability of “catastrophic” out of pocket medical spending – defined as out of pocket medical spending in excess of 30 percent of income.

Medicaid improves health. Medicaid increases various measures of self-reported health, and it reduces depression. We were unable to detect impacts on measured physical health – specifically blood pressure, cholesterol, and blood sugar (a measure of diabetes) – although we did find evidence of increased diagnosis of and prescription drugs for diabetes. For some of these results, such as blood pressure, the estimates allow us to reject the types of improvements found in prior quasi-experimental studies of Medicaid. But in other cases, such as blood sugar, the results are too imprecise to be meaningful; we cannot, for example, rule out the improvements in blood sugar that the clinical trial literature would suggest we should find given our estimates of the increases in diabetes medication caused by Medicaid.

2) What have we learned from the experiment?

The basic findings that come directly from the experimental estimates concerning the impact of Medicaid on a variety of outcomes are, I believe, compelling and credible. They allow us to reject a number of claims that one often hears in health policy debates – such as the argument that expanding Medicaid will save money by getting the uninsured out of the emergency room and into primary care, or the argument that Medicaid coverage is worthless because so many providers won’t take it. Other inferences, however, are less straightforward.

One important set of issues is how to extrapolate from the study’s findings to forecasting what the impacts of Medicaid would be in other contexts. For example, the low-income uninsured adult population offered Medicaid through the Oregon Health Insurance Experiment is very similar to the population of low-income uninsured adults newly covered by Medicaid in states that expanded Medicaid under the Affordable Care Act. However, a range of issues must be confronted in extrapolating from the results in the Oregon Experiment to this (or any) other context. Not only do the low-income uninsured population and the nature of the health care system vary across states, but, as I’ve written about in other work, the general equilibrium effects of market-wide coverage expansions may differ from the partial equilibrium effects we estimate in the Oregon Experiment, and the type of people who voluntarily sign up for a chance to get Medicaid may experience different impacts of health insurance than the full population covered under a mandate. Such issues suggest that additional careful thought and modeling – and perhaps additional data – need to be brought to bear in any such extrapolation attempts.

Another important set of issues – and one I have been trying to work on with my co-authors – concerns how to use the results for formal welfare analysis of Medicaid. A natural way to measure the value of Medicaid to recipients would be to estimate their willingness to pay for Medicaid. However Medicaid is not traded in a private market, making it challenging to estimate demand. In on-going work with Nathan Hendren and Mark Shepard, we are estimating low-income individuals’ willingness to pay for subsidized health insurance on the Massachusetts Health Insurance Exchange as one way to gauge willingness to pay for a Medicaid-like product in a Medicaid-like population.

Another approach to estimating the value of Medicaid to recipients is to take the experimental estimates from the Oregon Experiment of the impact of Medicaid on various arguments of the utility function, assume a normative utility function over those arguments, and thus evaluate recipient welfare with and without Medicaid. Nathan Hendren, Erzo Luttmer, and I have pursued this strategy and found that recipient value of Medicaid is substantially less than what Medicaid spends, about 20 to 40 cents on the dollar. Preliminary results from demand estimates on the Massachusetts exchange are consistent with a recipient value for Medicaid that is substantially below Medicaid expenditures.

Both approaches require substantially more assumptions than the estimates of Medicaid’s impacts which come directly from the experiment. They therefore should come with substantially more caveats. Still, we believe we understand what (other than assumptions) is driving these results: the low-income uninsured pay only a fraction of their medical expenditures; in the Oregon experiment we estimate they pay only about 20 cents on the dollar. As a result, a substantial portion of Medicaid expenditures represents a transfer not to recipients of Medicaid but to the other parties who were implicitly providing insurance to the low-income uninsured. Understanding the ultimate economic incidence of such “uncompensated care” is critical for understanding the distributional impacts of Medicaid.
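The incidence logic above can be sketched with back-of-the-envelope arithmetic. A minimal illustration (the 20-cents-on-the-dollar out-of-pocket share is from the interview; the per-recipient dollar amount is a hypothetical placeholder, not a study estimate):

```python
# Hypothetical illustration of the incidence argument.
# Suppose Medicaid spends this much per recipient on care the recipient
# would otherwise have consumed while uninsured (placeholder figure).
medicaid_spending = 3000.0

# In the Oregon experiment, the low-income uninsured paid only about
# 20 cents per dollar of their medical expenditures out of pocket.
out_of_pocket_share = 0.20

# The recipient's direct gain is the out-of-pocket spending Medicaid
# replaces; the remainder is a transfer to the third parties who were
# implicitly financing the uninsured's care ("uncompensated care").
gain_to_recipient = medicaid_spending * out_of_pocket_share
transfer_to_third_parties = medicaid_spending * (1 - out_of_pocket_share)

print(f"direct gain to recipient:  ${gain_to_recipient:,.0f}")
print(f"transfer to third parties: ${transfer_to_third_parties:,.0f}")
```

Under these assumptions, four-fifths of each Medicaid dollar is a transfer to parties other than the recipient – which is why recipient valuations well below program cost are not surprising.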

3) There have been dozens upon dozens of studies of the impact of Medicaid. Why does the Oregon Health Insurance Experiment attract such outsized attention?

Although randomized controlled trials are the gold standard in medical and scientific studies, they are much rarer in social policy research. In 2008, the state of Oregon, facing budgetary constraints, decided to draw names by lottery for its Medicaid program, as the fairest way to allocate a limited number of spots. Random assignment of access to Medicaid to some individuals who signed up for the lottery but not to others allows us to attribute any differences in outcomes between the two groups to the causal effect of Medicaid. By construction, the treatment group of lottery winners and the control group of lottery losers are, on average, statistically identical except for Medicaid assignment.
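The identification logic of the lottery can be shown in a minimal simulation. This sketch uses entirely made-up numbers (not the study's data): because winners and losers are drawn at random, a simple difference in mean outcomes recovers the causal effect.

```python
import random
import statistics

random.seed(0)

# Hypothetical simulation of a coverage lottery over 10,000 sign-ups.
# Assume a baseline outcome (say, annual doctor visits) and a true
# causal effect of +1.0 visit for those offered coverage.
n = 10_000
true_effect = 1.0

baseline = [random.gauss(3.0, 2.0) for _ in range(n)]  # pre-lottery outcomes
winner_ids = set(random.sample(range(n), n // 2))      # random lottery draw
is_winner = [i in winner_ids for i in range(n)]

# Winners' outcomes shift by the true effect; losers' do not.
outcomes = [y + true_effect if w else y for y, w in zip(baseline, is_winner)]

treat = [y for y, w in zip(outcomes, is_winner) if w]
control = [y for y, w in zip(outcomes, is_winner) if not w]

# Because assignment is random, the groups are statistically identical
# in expectation, so the difference in means estimates the causal effect.
estimate = statistics.mean(treat) - statistics.mean(control)
print(f"estimated effect: {estimate:.2f} (true effect: {true_effect})")
```

With 5,000 people per arm, the estimate lands close to the true effect; the same comparison with observational (non-random) insurance status would confound the effect of coverage with differences in who chooses to enroll.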

The attention that the study’s results have received not only within the academy but also in the media and in public policy discourse is encouraging; it suggests that the broader public appreciates and understands the ability of randomized evaluations to clearly, simply, and credibly identify the impact of an intervention. However, at another level it is discouraging: the Oregon Health Insurance Experiment receives the attention it does in part because randomized evaluations on important health policy issues are all too rare. For example, Sarah Taubman and I looked across top academic publications and found that less than 20 percent of published studies of interventions in US healthcare delivery are randomized. By contrast, about 80 percent of studies of medical interventions are randomized; even if one excludes drug trials, two-thirds of studies of non-drug medical interventions are randomized.

This may be changing. The relevant actors in the public sector and the healthcare sector have increasing “skin in the game” and thus increasing need for rigorous evidence of what interventions work and why. Randomized evaluations also are increasingly feasible for researchers to undertake; we identified a number of design choices that can enhance the feasibility and impact of RCTs on US healthcare delivery. I am cautiously optimistic that the increasing demand for rigorous evidence for how to improve the efficiency of US healthcare delivery and the decreasing supply-side costs of designing and implementing randomized evaluations will help make randomized evaluations closer to the norm than the exception in evaluating healthcare delivery interventions.

J-PAL North America, a research center at MIT that I co-direct with Larry Katz, is devoted to supporting and encouraging randomized evaluations on important domestic policy issues. We are facilitating more randomized evaluations on US healthcare delivery, among other topics. To date, our US Health Care Delivery Initiative has catalyzed and supported 19 randomized evaluations of health care delivery interventions. The projects involve partnerships between academics and a wide range of implementing partners, including government actors (at the federal, state, and local levels), large hospital systems, innovative non-profit organizations, private firms, and others. The studies span a wide range of interventions, including efforts to reduce re-admissions by “super utilizers” of the health care system, to provide cost and quality information to consumers on state health insurance exchanges, to reduce over-prescribing of opioids, to evaluate workplace wellness programs, and to provide clinical decision support to physicians ordering high-cost scans. The range of topics and implementing partners reflects the great breadth of issues for which rigorous evidence is needed, as well as the encouraging supply of implementing partners committed to finding evidence-based improvements for US healthcare delivery.
