No. At least that’s the conclusion of an important new paper (ungated version here) by Trent Alexander, Michael Davern and Betsey Stevenson, who find enormous errors in some critically important economic datasets.
Let’s start with the 2000 Decennial Census. Your responses to the Census were used for two purposes. First, the Census Bureau tallied up every response to produce its official population counts. And second, it produced a 1-in-20 sub-sample of these responses, which it made available for analysis by researchers. Just about every economist I know has used this Census sub-sample, as have a fair number of demographers, sociologists, political scientists, and private-sector market researchers.
The errors are documented in a stunningly straightforward manner. The authors compare the official census count (based on the tallying up of all Census forms) with their own calculations, based on the sub-sample released for researchers (the “public use micro sample,” available through IPUMS). If all is well, then the authors’ estimates should be very close to 100% of the official population count. But they aren’t:
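The check is simple enough to sketch in a few lines of Python. This is an illustration with made-up numbers, not the paper's data, and `coverage_ratio` is a hypothetical helper: multiply each age group's sample tally by the sampling weight (20, for a 1-in-20 sample) and divide by the official count for that group.

```python
def coverage_ratio(sample_count, sampling_weight, official_count):
    """Weighted microdata estimate as a share of the official census count."""
    return sample_count * sampling_weight / official_count

# Hypothetical age group: 4,000 sampled records, weight 20, official count 80,000.
ratio = coverage_ratio(sample_count=4_000, sampling_weight=20, official_count=80_000)
print(f"{ratio:.0%}")  # prints "100%" -- what we'd expect if all is well
```

For the groups under 65, the paper's version of this ratio sits close to 100%; for the oldest groups it wanders as far as 15% away.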
[Figure: microdata estimates as a share of official population counts, by age. Source: Trent Alexander, Michael Davern and Betsey Stevenson]
The two estimates are pretty similar for those younger than 65. But then things go haywire, with the alternative estimates disagreeing by as much as 15%. In fact, the microdata suggest that there are more very old men than very old women — I know some senior women who wish this were true! The Census Bureau has confirmed that the problem isn’t with the authors’ calculations. Rather, the problem is in the public-use microdata sample.
What’s the source of the problem? The Census Bureau purposely messes with the microdata a little, to protect the identity of each individual. For instance, if they recode a 37-year-old expat Aussie living in Philadelphia as a 36-year-old, then it’s harder for you to look me up in the microdata, which protects my privacy. In order to make sure the data still give accurate estimates, it is important that they also recode a 36-year-old with similar characteristics as being 37. This gives you the gist of some of their “disclosure avoidance procedures.” While it may all sound a bit odd, if these procedures are done properly, the data will yield accurate estimates, while also protecting my identity. So far, so good.
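The gist of the swapping idea can be sketched as follows. This is illustrative only (the Bureau's actual disclosure-avoidance procedures are more elaborate): exchanging ages between two otherwise-similar records hides each individual while leaving the overall age distribution intact.

```python
def swap_ages(records, i, j):
    """Swap the 'age' field of records i and j in place."""
    records[i]["age"], records[j]["age"] = records[j]["age"], records[i]["age"]

people = [{"age": 37, "city": "Philadelphia"},
          {"age": 36, "city": "Philadelphia"}]
before = sorted(p["age"] for p in people)
swap_ages(people, 0, 1)
after = sorted(p["age"] for p in people)
assert before == after  # the marginal age distribution is unchanged
```

Done symmetrically like this, the swap confuses anyone trying to re-identify a record, but every tabulation by age comes out the same.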
But the problem arose because of a programming error in how the Census Bureau ran these procedures. The right response is obvious: fix the programs, and publish corrected data. Unfortunately, the Census Bureau has refused to correct the data.
The problem also runs a bit deeper. If the mistake were just the one shown in the above graph, it would be easy to simply re-scale the estimates so that there are no longer too many, say, 85-year-old men — just weight them down a bit. But it turns out that the same coding error also messes up the correlation between age and employment, or age and marital status (and, the authors suspect, possibly other correlations as well). When you break several correlations like this, there’s no easy statistical fix.
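A toy example (made-up numbers, not the paper's data) shows why re-weighting falls short. Suppose the coding error created too many 85-year-old records and gave some of them the employment status of younger people. Down-weighting every such record by the same factor repairs the group's total count, but the employment rate within the group rides along with each record and is untouched:

```python
# Erroneous microdata: 10 records coded as age 85, 6 of them "employed"
# (implausibly many), where the official tally implies only 8 such people.
erroneous = [{"age": 85, "employed": True}] * 6 + [{"age": 85, "employed": False}] * 4
target_count = 8
weight = target_count / len(erroneous)      # uniform down-weight: 0.8

weighted_count = weight * len(erroneous)    # the marginal count is now correct
# With a uniform weight, the weight cancels out of any within-group share:
employed_share = sum(r["employed"] for r in erroneous) / len(erroneous)
print(weighted_count, employed_share)       # prints "8.0 0.6" -- count fixed,
                                            # age-employment relationship still wrong
```

Fixing the joint distribution would require knowing which records were mis-recoded, which is exactly the information the public-use file no longer contains.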
Worse still, the researchers find that related problems afflict the microdata released for other major data sources. All told, they’ve found similar errors in:
- The 2000 Decennial Census.
- The American Community Survey, which is the annual “mini-census” (errors exist in 2003-2006, but not in 2001-2002 or 2007-2008).
- The Current Population Survey, which generates our main labor force statistics (errors exist for 2004-2009).
These microdata have been used in literally thousands of studies and countless policy discussions. While the findings of many of these studies aren’t much affected by these problems, in some cases important errors have been introduced. The biggest problems probably exist for research focusing on seniors. Yes, this means that many of those studies of important policy issues (retirement, Social Security, elder care, disability, and Medicare) will need to be revisited.
The trouble is that until the Census Bureau does something about these widespread errors, we can’t even begin the process of cleaning up the affected research. Right now, the authors warn: “The resulting errors in the public use data are severe, and… should not be used to study people aged 65 and over.” Given the long list of afflicted datasets, up-to-date credible research on seniors is virtually impossible.
The whole research community is waiting for the Census Bureau to do something about these problems.
UPDATE: Carl Bialik of the Wall Street Journal also digs a little deeper into these problems.