
June 5, 2006

Modeling Efficiency at the Process Level

Presenter: Robert Lee

Authors: Robert Lee, Roma Taunton, Byron Gajewski, Marjorie Bott

Chair: Dave Vanness; Discussant: Brian Denton
Monday, June 5, 2006, 15:30-17:00, Room 235

The health economics literature includes many efficiency studies, yet most have not fully engaged Newhouse's (1994) concerns that standard data sources may omit important inputs and that output measures may ignore important sources of heterogeneity. A third concern is that Data Envelopment Analysis (DEA) and Stochastic Frontier Analysis (SFA) often disagree about efficiency rankings (Chirikos and Sear, 2000), suggesting that results are sensitive to untested modeling assumptions. By examining the performance of a sample of nursing homes, this study seeks to address all three issues.
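For readers comparing the two approaches, the standard composed-error form of a stochastic frontier cost function is sketched below; this is the textbook specification, not necessarily the exact model estimated in the paper.

```latex
% Textbook stochastic frontier cost function (illustrative):
%   C_i = observed cost, y_i = outputs, w_i = input prices
\ln C_i = \ln C(y_i, w_i; \beta) + v_i + u_i,
\qquad v_i \sim N(0, \sigma_v^2), \quad u_i \ge 0
% v_i is symmetric noise; u_i is one-sided inefficiency
% (e.g., half-normal); estimated cost efficiency is exp(-u_i).
```

DEA, by contrast, envelops the observed data with a piecewise-linear frontier and imposes no functional form or distributional assumptions on inefficiency, which is one reason the two methods can rank the same facilities differently.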

This paper examines a single process: how nursing homes complete a standardized, mandatory assessment of residents. The paper is based on a study that also gathered detailed data on the quality of these assessments and on differences in the characteristics of residents. The product should be homogeneous, and we have the data to test this assumption. The paper analyzes primary data collected from a stratified random sample of 107 nursing homes over the last two years. The first step in collecting these data was the construction and validation of a process map for each facility, which identified and verified the steps in each nursing home's process and the resources used at each step. It is therefore unlikely that data on any significant inputs into the production process are missing. Moreover, these data allow us to identify at which steps production is inefficient, not just which resources appear to be overused.

These data were augmented with data from Medicaid cost reports, the Minimum Data Set, and the Online Survey Certification and Reporting System. As a result, we have the data needed to conduct a variety of case mix-adjusted and quality-adjusted Data Envelopment and Stochastic Frontier Analyses.

At this point we have finished data cleaning and completed initial DEA analyses, which find that some facilities are quite inefficient. Our analyses will proceed on three tracks. The first will examine how the correlations of efficiency scores change as we vary assumptions about input aggregation and returns to scale (see the sketch below). Because we have data on input prices, the second track will repeat this process with DEA models that distinguish technical from allocative inefficiency, again examining the impact on the correlations of efficiency scores. Finally, we will estimate alternative stochastic frontier cost functions that replicate the DEA assumptions about input aggregation and returns to scale, and will again examine how their efficiency scores correlate with those of the other models. For each of these three models we will estimate auxiliary regressions that examine whether ownership, chain membership, the structure of the care planning process, size, physical structure, market structure, or manager training affects efficiency scores.
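As an illustration of the first track, the sketch below solves the input-oriented DEA linear program for each facility under constant and variable returns to scale and then checks the rank correlation of the two sets of efficiency scores. It is a minimal Python sketch with synthetic data standing in for the nursing-home inputs and outputs; it is not the authors' code.

```python
# Minimal DEA sketch (illustrative; synthetic data, not the study's).
# Input-oriented efficiency: min theta s.t. the sample can envelop
# facility o using no more than theta times its observed inputs.
import numpy as np
from scipy.optimize import linprog
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n, m, s = 107, 3, 1              # facilities, inputs, outputs (assumed)
X = rng.uniform(1, 10, (m, n))   # inputs, e.g. staff hours by type
Y = rng.uniform(1, 10, (s, n))   # output, e.g. completed assessments

def dea_score(o, vrs=False):
    """Efficiency of facility o; vrs=True adds the convexity
    constraint sum(lambda) = 1 (the BCC / variable-returns model)."""
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    A_in = np.hstack([-X[:, [o]], X])           # sum lam*x - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # -sum lam*y <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1) if vrs else None
    b_eq = [1.0] if vrs else None
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun                               # theta in (0, 1]

crs = np.array([dea_score(o) for o in range(n)])
vrs = np.array([dea_score(o, vrs=True) for o in range(n)])
rho, p = spearmanr(crs, vrs)    # do rankings survive the RTS change?
print(f"Spearman rho = {rho:.3f} (p = {p:.3g})")
```

The same comparison extends naturally to the other tracks: allocative-efficiency DEA models and stochastic frontier estimates can be scored on the same facilities and correlated in the same way.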

Inefficient production is one of the hallmarks of the American health care system, but economics has yet to offer deep insights into its sources. This unique data set should help us understand why alternative models paint different pictures of efficiency.
