Can We Trust Population Surveys to Count Medicaid Enrollees and the Uninsured?
Kincheloe, J., E. R. Brown, J. Frates, K. T. Call, W. Yen, and J. Watkins. 2006. “Can We Trust Population Surveys to Count Medicaid Enrollees and the Uninsured?” Health Affairs 25(4): 1163-1167.
Health foundations, such as the Robert Wood Johnson Foundation (RWJF), make multimillion-dollar investments in programs to expand insurance coverage. These efforts are driven largely by survey-based estimates of the uninsured population, which may be overstated if surveys undercount people enrolled in Medicaid. This paper reports the results of the RWJF-funded California Medicaid Undercount Experiment (CMUE), which estimated the extent of Medicaid underreporting in the California Health Interview Survey (CHIS) and its effect on estimates of uninsurance. Although some over- and underreporting occurs, overall CHIS Medicaid estimates match administrative counts for adults.
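The core mechanism is simple arithmetic: an enrollee who misreports being uninsured is simultaneously missing from the survey's Medicaid count and added to its uninsured count. A minimal sketch, with all numbers invented for illustration (none are CMUE or CHIS results):

```python
# Hypothetical illustration of how Medicaid underreporting can inflate
# survey estimates of the uninsured. All numbers are invented; they are
# not results from the CMUE or CHIS.

medicaid_admin_count = 6_000_000  # enrollees per administrative records
report_as_uninsured = 0.03        # share of enrollees who report being uninsured
survey_uninsured = 6_500_000      # raw survey estimate of the uninsured

# Enrollees who mistakenly report "uninsured" are counted twice over:
# subtracted from the survey's Medicaid count and added to its uninsured count.
misreported = medicaid_admin_count * report_as_uninsured
adjusted_uninsured = survey_uninsured - misreported

print(f"Misreported as uninsured: {misreported:,.0f}")
print(f"Adjusted uninsured estimate: {adjusted_uninsured:,.0f}")
```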
Agreement between Self-Reported and Administrative Race and Ethnicity Data among Medicaid Enrollees in Minnesota
McAlpine, D., T. J. Beebe, M. E. Davern, and K. T. Call. 2007. “Agreement between Self-Reported and Administrative Race and Ethnicity Data among Medicaid Enrollees in Minnesota.” Health Services Research 42(6, Part II): 2373-2388.
OBJECTIVE: This paper measures agreement between survey and administrative measures of race/ethnicity for Medicaid enrollees. Level of agreement and the demographic and health-related characteristics associated with misclassification on the administrative measure are examined. DATA SOURCES: Minnesota Medicaid enrollee files matched to self-report information from a telephone/mail survey of 4,902 enrollees conducted in 2003. STUDY DESIGN: Measures of agreement between the two measures of race/ethnicity are computed. Using logistic regression, we also assess whether misclassification of race/ethnicity on administrative files is associated with demographic factors, health status, health care utilization, or ratings of quality of health care. DATA EXTRACTION: Race/ethnicity fields from administrative Medicaid files were extracted and merged with self-report data. PRINCIPAL FINDINGS: The administrative data correctly classified 94 percent of cases on race/ethnicity. Persons who self-identified as Hispanic and those whose home language was English had greater odds of being misclassified in administrative data than persons who self-identified as white and those whose home language was not English. Persons classified as unknown/other on administrative data were more likely to self-identify as white. CONCLUSIONS: In this case study in Minnesota, researchers can be reasonably confident that the racial designations on Medicaid administrative data comport with how enrollees self-identify. Moreover, misclassification is not associated with common measures of health status, utilization, and ratings of quality of care. Further replication is recommended given variation in how race information is collected and coded by Medicaid agencies in different states.
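For readers replicating this kind of analysis, here is a minimal sketch of the two analytic steps the abstract describes: agreement statistics and a logistic regression on misclassification. The file and column names (merged_enrollees.csv, race_admin, race_self, home_language, health_status) are hypothetical, not from the study:

```python
# A minimal sketch of the analytic approach, assuming a merged file with
# one row per enrollee. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import cohen_kappa_score

df = pd.read_csv("merged_enrollees.csv")  # hypothetical matched file

# Overall agreement between administrative and self-reported race/ethnicity.
agree = (df["race_admin"] == df["race_self"]).mean()
kappa = cohen_kappa_score(df["race_admin"], df["race_self"])
print(f"Percent agreement: {agree:.1%}, Cohen's kappa: {kappa:.2f}")

# Logistic regression: is misclassification on the administrative file
# associated with demographics, health status, or utilization?
df["misclassified"] = (df["race_admin"] != df["race_self"]).astype(int)
model = smf.logit(
    "misclassified ~ C(race_self) + C(home_language) + age + health_status",
    data=df,
).fit()
print(model.summary())
```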
Distributing State Children’s Health Insurance Funds: A Critical Review of the Design and Implementation of the Funding Formula
Blewett, L. A. and M. Davern. 2007. “Distributing State Children’s Health Insurance Funds: A Critical Review of the Design and Implementation of the Funding Formula.” Journal of Health Politics, Policy and Law 32(3): 415-455.
The development of formulas to distribute federal funds to states based on demographic data continues to challenge data and policy analysts. Analysts must bring the best objective statistical analysis and data inputs to bear on formula specifications while acknowledging the politics of the legislative process that authorizes federal funding formulas. This article evaluates the federal funding formula for the State Children's Health Insurance Program (SCHIP) using key formula components of need, effort, capacity, and performance. We also examine the operationalization and measurement of the target population in the SCHIP funding formula. Legislative decisions on formulas are, by nature, based on compromises that balance competing policy objectives. The analyst's role is to continually review current research standards, data quality, and relevant formula inputs and make recommendations to refine federal funding formulas to better target resources to their intended populations.
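As an illustration of how such formula components interact, the sketch below allocates a hypothetical appropriation using a need measure scaled by a cost factor. It is a schematic example, not the actual SCHIP statutory formula; all weights, state figures, and field names are invented:

```python
# Schematic allocation-formula sketch, not the actual SCHIP statute.
# It illustrates how a "need" component (target population) and a cost
# adjustment drive state shares. All numbers and weights are hypothetical.

states = {
    # state: (low_income_children, low_income_uninsured_children, cost_factor)
    "A": (400_000, 90_000, 1.05),
    "B": (250_000, 40_000, 0.95),
}
total_appropriation = 1_000_000_000  # hypothetical federal appropriation

# Need proxy: blend of low-income and low-income-uninsured children,
# scaled by a geographic cost factor.
def state_need(children, uninsured, cost):
    return (0.5 * children + 0.5 * uninsured) * cost

needs = {s: state_need(*v) for s, v in states.items()}
total_need = sum(needs.values())

for s, need in needs.items():
    share = need / total_need
    print(f"State {s}: share {share:.1%}, "
          f"allotment ${share * total_appropriation:,.0f}")
```

Because each state's allotment is a share of a fixed appropriation, a change in any single input (the population measure, the cost factor, or the weights) shifts every state's allocation, which is why the article stresses ongoing review of data quality and formula inputs.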
Validating Health Insurance Coverage Survey Estimates: A Comparison between Self-Reported Coverage and Administrative Data Records
Davern, M., K. T. Call, J. Ziegenfuss, G. Davidson, T. Beebe, and L. A. Blewett. 2008. “Validating Health Insurance Coverage Survey Estimates: A Comparison between Self-Reported Coverage and Administrative Data Records.” Public Opinion Quarterly 72(2): 241-259.
We administered a health insurance coverage survey module to a sample of 4,575 adult Blue Cross and Blue Shield of Minnesota (BCBS) members to examine whether people who have health insurance coverage self-report that they are uninsured. We were also interested in whether respondents correctly classify themselves as having commercial, Medicare, MinnesotaCare, and/or Medicaid coverage (the four sample strata). The BCBS of Minnesota sample is drawn from both public and commercial health insurance coverage strata that are important in policy research involving survey data. Our findings support the module's validity for determining whether someone who has health insurance is correctly coded as insured: only 0.4 percent of BCBS members answered the survey as though they were uninsured. However, we find problems for researchers interested in using survey reports of specific types of public coverage. For example, only 21 percent of the self-reported Medicaid count came from known Medicaid enrollees, and only 67 percent of the self-reported MinnesotaCare count came from known enrollees. We conclude with a discussion of the study's implications for understanding the Medicaid "undercount" and the validity of self-reported health insurance coverage.
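The validation logic reduces to cross-tabulating known (administrative) coverage against self-reports. A minimal sketch, assuming a matched analysis file; the file and column names (bcbs_matched_sample.csv, admin_stratum, self_report) are hypothetical:

```python
# A minimal sketch of the validation tabulation, assuming survey responses
# matched to administrative strata. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("bcbs_matched_sample.csv")  # hypothetical matched file

# Rows: administrative stratum (known coverage); columns: self-reported type.
# normalize="index" gives the share of each known stratum by self-report.
confusion = pd.crosstab(df["admin_stratum"], df["self_report"],
                        normalize="index")
print(confusion.round(3))

# Share of respondents who reported being uninsured despite known coverage.
false_uninsured = (df["self_report"] == "uninsured").mean()
print(f"Insured members reporting uninsured: {false_uninsured:.1%}")
```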
The Medicaid Undercount and Bias to Estimates of Uninsurance: New Estimates and Existing Evidence
Call, K. T., G. Davidson, M. Davern, L. A. Blewett, and R. Nyman. 2008. “The Medicaid Undercount and Bias to Estimates of Uninsurance: New Estimates and Existing Evidence.” Health Services Research 43(3): 901-914.
OBJECTIVE: To examine whether known Medicaid enrollees misreport their health insurance coverage in surveys and the extent to which misreports of lack of coverage bias estimates of uninsurance. DATA SOURCE: Primary survey data from the Medicaid Undercount Experiment. STUDY DESIGN: Analyze new data from surveys of Medicaid enrollees in California, Florida, and Pennsylvania and summarize existing research examining bias in coverage estimates due to misreports among Medicaid enrollees. DATA COLLECTION METHOD: Subjects were randomly drawn from Medicaid administrative records and surveyed by telephone. PRINCIPAL FINDINGS AND CONCLUSIONS: Cumulative evidence shows that a small percentage of Medicaid enrollees mistakenly report being uninsured, resulting in modest upward bias in estimates of uninsurance. A somewhat larger percentage of enrollees report having some other type of coverage rather than reporting no coverage, which biases Medicaid enrollment estimates downward but does not significantly bias estimates of uninsurance upward. Implications for policy makers' confidence in survey estimates of coverage are discussed.
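The two misreporting patterns the abstract distinguishes have different consequences, which the following sketch makes explicit; all rates are invented for illustration, not estimates from the paper:

```python
# Hypothetical illustration of the two misreporting patterns the paper
# distinguishes. All rates are invented for illustration.

enrollees = 1_000_000        # known Medicaid enrollees (administrative count)
p_report_uninsured = 0.02    # misreport as uninsured
p_report_other = 0.10        # misreport as some other coverage type

survey_medicaid = enrollees * (1 - p_report_uninsured - p_report_other)
extra_uninsured = enrollees * p_report_uninsured   # inflates uninsurance
shifted_coverage = enrollees * p_report_other      # lowers Medicaid count only

print(f"Survey Medicaid count: {survey_medicaid:,.0f} "
      f"(undercount of {enrollees - survey_medicaid:,.0f})")
print(f"Upward bias to uninsured estimate: {extra_uninsured:,.0f}")
print(f"Shifted to other coverage (no effect on uninsured): "
      f"{shifted_coverage:,.0f}")
```

Note that both patterns contribute to the Medicaid undercount, but only the first inflates the uninsured estimate, which is why the undercount biases enrollment counts more than uninsurance rates.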