Table of Contents
2. The RAND Health Insurance Experiment
3. Experimental Analysis
3.1. Treatment Effects
3.2. Validation and Robustness
4. Price Elasticity
4.1. Empirical Background
4.2. Application and Problematic Nature
5. Raison d’être of an Experiment
5.1. Endogeneity Problem
5.2. Potential Outcome Model
List of References
The famous RAND Health Insurance Experiment (RAND HIE) addresses the question of how health insurance affects medical spending. The scientific essay The RAND Health Insurance Experiment, Three Decades Later (2013) by Aviva Aron-Dine, Liran Einav, and Amy Finkelstein, published in the Journal of Economic Perspectives, forms the basis for this seminar paper. All facts regarding the primary experiment are taken from this essay. It reexamines the core findings of the RAND HIE from a state-of-the-art perspective on the analysis of randomized experiments and the economics of moral hazard. Between 1974 and 1981, more than 5,800 individuals from about 2,000 households in six different locations across the United States participated in the RAND HIE and thereby received health insurance. The experiment randomly assigned families to health insurance plans with different levels of cost sharing and was representative of families with adults under the age of 62. The plans ranged from full coverage (“free care”) to plans with little coverage (5 percent) for the first approximately $4,000 (in 2011 dollars) incurred during a year.
The conduct and analysis of randomized experiments, as well as the economic analysis of moral hazard in the context of health insurance, were relatively novel fields of research in the years of the RAND investigation. Nevertheless, the RAND results remain highly influential when predicting the likely impact of health insurance reforms on medical spending or designing actual insurance policies. Health spending has grown over time, and the resulting pressure on the public sector confers additional significance on the RAND estimates. (Aron-Dine et al., 2013)
The RAND HIE was funded by the US Department of Health, Education, and Welfare and cost roughly $295 million (in 2011 dollars). From a cost perspective alone, a replication of such an experiment is highly improbable. (Greenberg et al., 2004)
In section two, the design of the RAND HIE is presented and complemented by a depiction of the key economic object of interest, namely the impact of health insurance on medical spending. Section three describes the experimental analysis, including the baseline regression. The core variable of interest, the treatment effect, is specified and validated. In section four, the price elasticity is derived and its application discussed. Section five emphasizes the raison d’être of a randomized experiment based on statistical evidence and additional literature.
2. The RAND Health Insurance Experiment
Families participating in the RAND HIE were assigned to plans with one of six consumer coinsurance rates. The consumer coinsurance rate is “the share of medical expenditures paid by the enrollee” (Aron-Dine et al., 2013, p. 201). Coverage by the assigned plan lasted three to five years. The first four plans simply differed in their overall coinsurance rates of 95, 50, 25, or 0 (“free care”) percent. The fifth plan consisted of a “mixed coinsurance rate” of 25 percent for most services but 50 percent for dental and outpatient mental health services. The RAND investigators referred to the sixth plan as an “individual deductible plan”, as it combined a coinsurance rate of 95 percent for outpatient services with 0 percent for inpatient services. Free care was assigned to 32 percent of the families, followed by the individual deductible plan (22 percent), the 95 percent coinsurance rate (19 percent), and the 25 percent coinsurance rate (11 percent). By randomly assigning families, within each of the six plans, to different out-of-pocket maximums, it was possible to limit the financial exposure of the participants. This limit is referred to as the “Maximum Dollar Expenditure”; it was set at 5, 10, or 15 percent of family income, up to a maximum of $3,000 or $4,000 (in 2011 dollars).
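The interaction of a plan’s coinsurance rate with the Maximum Dollar Expenditure can be sketched in a few lines. The function below is a minimal illustration of this cost-sharing logic, not the actual RAND reimbursement rules; the parameter defaults are assumptions taken from the figures quoted above.

```python
def out_of_pocket(total_spending, coinsurance_rate, family_income,
                  mde_share=0.15, mde_cap=4000):
    """Out-of-pocket cost under a coinsurance plan with an MDE limit.

    The MDE is the lesser of a share of family income (5, 10, or 15
    percent in the RAND HIE) and an absolute cap ($3,000 or $4,000 in
    2011 dollars). Defaults here are illustrative assumptions.
    """
    mde_limit = min(mde_share * family_income, mde_cap)
    return min(coinsurance_rate * total_spending, mde_limit)

# A family on the 95 percent plan with $50,000 income pays at most
# min(0.15 * 50000, 4000) = 4000 dollars, even for large expenses.
print(out_of_pocket(20000, 0.95, 50000))  # 4000
print(out_of_pocket(1000, 0.25, 50000))   # 250.0 (below the MDE)
```

This capped structure is also what makes the contracts non-linear: once a family hits its MDE, its marginal price of care drops to zero.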
The sample selection and assignment of plans were not simply random. The RAND investigators applied a “finite selection model”, a type of stratified random assignment. It aimed to maximize the sample variation in baseline covariates while complying with the experiment’s budget constraint, and to attain better balance across a set of baseline characteristics than would probably be attained by chance alone. Several sources provided data for the RAND HIE. A screening questionnaire administered prior to plan assignment compiled basic demographic information, health status, insurance status, and past healthcare utilization from all potential enrollees. Participants filed claims with the experiment in order to be reimbursed for incurred expenditures. Detailed data on healthcare spending and utilization outcomes could therefore be acquired during the RAND HIE.
The key economic object of interest is an estimate of the impact of the health insurance coverage on healthcare utilization. Hence, the underlying parameter of interest is the price elasticity of healthcare utilization.
[Figure not included in this excerpt]
Figure 1: The Price Elasticity of Healthcare Utilization: A Hypothetical Example
Source: Aron-Dine et al., 2013, p. 199.
Figure 1 exemplifies the elasticity by depicting two different hypothetical insurance contracts: one with a constant 20 percent coinsurance (solid line) and the other with a constant 10 percent coinsurance (dashed line). The horizontal axis describes healthcare utilization in terms of the total amount of US dollars spent on healthcare services, whereas the vertical axis represents the insurance coverage in relation to the out-of-pocket spending. Assuming that utility increases in healthcare utilization and in income net of out-of-pocket medical spending, two points of optimal spending can be determined. The tangency points between the indifference curve and the budget set depict the optimal spending. Thus, individuals would raise their total healthcare spending from $3,000 to $5,000 in response to a 50 percent cut in the out-of-pocket price. This is an elasticity of -1.33. (Aron-Dine et al., 2013)
The price elasticity of healthcare is interpreted as follows: a one percent increase in the out-of-pocket price on average reduces healthcare spending by 1.33 percent. This implies that the more individuals have to pay for healthcare themselves, the less they use healthcare services. Conversely, the more the insurer (here, the RAND experiment) covers the expenses, the more the individual uses healthcare services. This corresponds to the definition of the term moral hazard: “Lack of incentive to guard against risk where one is protected from its consequences, e.g. by insurance.” (Oxford Dictionaries, 2015) In conclusion, the elasticity of -1.33 reflects the moral hazard effect of health insurance. This conforms to Arrow’s perception of moral hazard in health insurance as the tendency of medical insurance to increase the demand for medical care (Arrow, 1963).
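The -1.33 figure can be reproduced from the numbers in the hypothetical example. The short calculation below uses the simple percentage-change formulation implied by the text (each change measured relative to the starting point):

```python
def price_elasticity(q0, q1, p0, p1):
    """Elasticity as percent change in quantity (spending) divided by
    percent change in price, each relative to the starting point."""
    return ((q1 - q0) / q0) / ((p1 - p0) / p0)

# Spending rises from $3,000 to $5,000 when the out-of-pocket price
# falls by 50 percent (a 20 percent to a 10 percent coinsurance rate).
e = price_elasticity(3000, 5000, 0.20, 0.10)
print(round(e, 2))  # -1.33
```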
The main purpose of the RAND HIE was to obtain estimates of this elasticity by randomly assigning budget sets to consumers. The hypothetical example described above takes neither the heterogeneity in healthcare needs nor the non-linearity typical of health insurance contracts into consideration. The RAND investigators, however, accounted for both aspects in their experimental design. (Aron-Dine et al., 2013) The heterogeneity in healthcare needs is taken into account in section 3.1. The handling of the non-linearity of health insurance contracts is presented in section 4.1.
3. Experimental Analysis
This section begins by reporting the estimates of experimental treatment effects, followed by a discussion of potential threats to the validity of interpreting these treatment effects as causal estimates. Following the RAND HIE, the individual-year is the primary unit of analysis. The notation is chosen as follows: i denotes an individual, p the plan the individual’s family was assigned to, t the calendar year, and l and m the location and the start month. The baseline regression is composed as follows:

y_it = λ_p + τ_t + μ_lm + ε_it

The dependent variable y_it is, for example, medical expenditure, and the explanatory variables are plan, year, and location-by-start-month fixed effects. The experimental treatment effects are the six plan fixed effects λ_p; they represent the average effect of each plan because the Maximum Dollar Expenditure limits were additionally randomized within plans. The year fixed effect τ_t accounts for any underlying time trend in the cost of medical care. Because plan assignment was random only conditional on site and start month, μ_lm, a full set of location-by-start-month interactions, is included. ε_it represents the time-variant (or idiosyncratic) error. All regression results cluster the standard errors on the family, since plan assignment took place at the family rather than the individual level. (Aron-Dine et al., 2013)
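A regression of this fixed-effects type can be sketched on simulated data. The snippet below is a minimal illustration with made-up numbers (two plans, a year effect, and family-level assignment), not the RAND data or code; it recovers the plan effect with a plain dummy-variable OLS. In practice one would also cluster the standard errors on the family, e.g. via statsmodels’ cov_type="cluster".

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 families, 2 individuals each, observed for 2 years.
n_fam = 500
family = np.repeat(np.arange(n_fam), 4)          # 2 persons x 2 years
plan = np.repeat(rng.integers(0, 2, n_fam), 4)   # 0 = free care, 1 = cost sharing
year = np.tile([0, 1, 0, 1], n_fam)

true_plan_effect = -400.0  # cost sharing lowers annual spending (assumed)
true_year_effect = 150.0   # medical-cost time trend (assumed)
fam_shock = np.repeat(rng.normal(0, 300, n_fam), 4)  # family-level error

y = (2000 + true_plan_effect * plan + true_year_effect * year
     + fam_shock + rng.normal(0, 200, 4 * n_fam))

# OLS with an intercept, a plan dummy, and a year dummy.
X = np.column_stack([np.ones_like(y), plan, year])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[1], beta[2])  # estimates close to -400 and 150
```

The family-level error component in the simulation is exactly why clustering on the family matters: observations within a family are not independent.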
3.1. Treatment Effects
Estimating the baseline regression for various measures of healthcare utilization yields the plan fixed effects, which indicate the treatment effect of the different plans relative to the free care plan. The treatment effect is the causal effect of cost sharing on healthcare utilization. The results come from an ordinary least squares (OLS) regression and reveal a consistent pattern of less spending in higher cost-sharing plans. The p-values imply rejection of the null hypothesis that spending in the positive cost-sharing plans is equal to that in the free care plan.

Furthermore, total spending is broken down into inpatient spending (42 percent), indicating more serious and costly medical episodes, and outpatient spending (58 percent). This classification reemphasizes the pattern of lower spending in plans with higher cost sharing. The effect of cost sharing in an inpatient setting is consistently small and generally insignificant, implying that more serious medical episodes are less price sensitive, an essential and plausible causal inference of the treatment. Nonetheless, the null hypothesis of no differences in spending across plans for overall in- and outpatient spending is rejected.

In addition to the classification of spending, the extent to which the effect of cost sharing might vary for those with higher medical expenditure is considered. In other words, a quantile regression is employed to estimate the above equation and thereby determine the potential variation across the quantiles of medical spending. The results are consistent with a lower percentage treatment effect for higher-spending individuals. This likely stems from two factors: first, the lower price responsiveness of inpatient spending; second, high-spending individuals reach their out-of-pocket maximum early in the year and thus obtain a zero percent coinsurance rate regardless of their initial plan assignment. (Aron-Dine et al., 2013)
3.2. Validation and Robustness
The indisputable advantage of a randomized experimental strategy is that a direct comparison of the different groups with the free care group can plausibly be construed as a causal effect of the treatment. Moreover, it minimizes bias and confounding factors as opposed to observational studies. Nonetheless, this approach generally requires the absence of systematic differences across participants in the various plans that could correlate with the measured utilization. Hereafter, three possible sources of systematic differences are reflected:
1) Nonrandom assignment to plans
2) Differential participation across plans
3) Differential reporting across plans
Firstly, the stratified random assignment should be examined as to whether it was successfully implemented. For this purpose, it must be ensured that there is no statistically significant correlation between any particular characteristic of a person and the treatment arm to which that person was assigned. The characteristics employed by the RAND investigators in the “finite selection model” were balanced across plans because the assignment algorithm was explicitly designed to fulfill this requirement. Consequently, the statistical tests were unable to reject the null hypothesis that the characteristics used in the stratification are balanced across plans. Thereupon, further individual characteristics not considered by the RAND investigators were tested; these did not appear to be randomly distributed across treatment arms. The imbalances mainly stem from the relatively small 50 percent coinsurance plan and may well have arisen from sampling variation. Once this plan is excluded, the statistical tests are unable to reject the null hypothesis that covariates not used in the stratification are also balanced across plans. Effectively, the assumption prevails that the initial randomization was valid, leaving the 50 percent coinsurance plan aside. (Aron-Dine et al., 2013)
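A balance check of this kind can be sketched as a simple two-sample t-test on one covariate between two treatment arms. The snippet below uses simulated data and a hypothetical covariate (age), not the RAND enrollment files; t-statistics above roughly 2 in absolute value would flag an imbalance at the 5 percent level.

```python
import numpy as np

def balance_t_stat(x_a, x_b):
    """Welch t-statistic for the difference in covariate means
    between two treatment arms."""
    n_a, n_b = len(x_a), len(x_b)
    se = np.sqrt(x_a.var(ddof=1) / n_a + x_b.var(ddof=1) / n_b)
    return (x_a.mean() - x_b.mean()) / se

rng = np.random.default_rng(2)
# Under valid randomization both arms draw age from the same distribution.
age_free_care = rng.normal(35, 10, 1000)
age_cost_share = rng.normal(35, 10, 1000)

t = balance_t_stat(age_free_care, age_cost_share)
print(round(abs(t), 2))  # small |t|: balance is not rejected
```

In a full balance check this test would be repeated for every covariate and every pair of plans, or replaced by a joint F-test.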
The second potential risk to validity, differential participation or attrition across plans, could bias the estimated effect of insurance coverage on medical utilization. Crucial in this context is the proportional relation that the more comprehensive the insurance, the greater the incentive to participate in the experiment. The RAND researchers tried to neutralize these differential incentives by offering higher lump-sum payments to those individuals randomized into less comprehensive plans. The incremental utility from more comprehensive coverage remains greater for higher-spending individuals, however, meaning that the alignment of participation incentives was only achieved on average. The issue of differential participation can be seen, among other things, in the completion rates. Overall, 76 percent of the individuals offered enrollment completed the experiment. In particular, 88 percent of those receiving free care completed the experiment, while only 63 percent of those with a 95 percent coinsurance rate did. This difference is too large to stem from sampling variation alone. The potential bias arising from attrition is a common issue in the analysis of randomized experiments in social science. Later in this section, possible measures to compensate for this potential bias are described. (Aron-Dine et al., 2013)
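That the gap in completion rates is too large to be sampling noise can be checked with a standard two-proportion z-test. The completion rates below are those quoted in the text, but the group sizes are illustrative assumptions, not the actual RAND enrollment counts.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z-statistic for the difference between two completion rates,
    using the pooled proportion for the standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 88 percent completion under free care vs. 63 percent under the
# 95 percent coinsurance plan; n = 1,000 per arm is a hypothetical size.
z = two_proportion_z(0.88, 1000, 0.63, 1000)
print(round(z, 1))  # 13.0, far beyond the 1.96 critical value
```

Even with much smaller assumed group sizes, a 25 percentage-point gap would remain highly significant, which is why the text attributes it to differential participation rather than chance.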
Quote paper: Nina Schwenniger (Author), 2015, A Review of the Rand Health Insurance Experiment. Statistics and Econometrics, Munich, GRIN Verlag, https://www.grin.com/document/300637