Survey modelling

Adjusting for the Lizardman constant

August 29, 2019 — October 24, 2021

Bayes
confidentiality
hidden variables
hierarchical models
mind
networks
ordinal
regression
sociology
statistics
wonk

Placeholder page for information about surveys, their design, analysis and limitations.

1 When can I get any information out of surveys?

It is, in general, hard to get information out of surveys. Finding research questions that surveys can answer is a challenge in itself, and, having found one, designing a survey that actually gets at that question is a whole specialty field. The surveys I have been asked to look at typically have not put sufficient effort into that design, or have put it in too late.

Survey Chicken is a good essay about the difficulties:

What I mean by “surveys” is standard written (or spoken) instruments, composed mostly of language, that are administered to subjects, who give responses, and whose responses are treated as quantitative information, which may then be subjected to statistical analysis. It is not the case that knowledge can never be obtained in this manner. But the idea that there exists some survey, and some survey conditions, that might plausibly produce the knowledge claimed, tends to lead to a mental process of filling in the blanks, of giving the benefit of the doubt to surveys in the ordinary case. But, I think, the ordinary survey, in its ordinary conditions, is of no evidentiary value for any important claim.

There are a lot of problems that arise. A famous one is response bias:

Figure 2: Image: Sketchplanations. Of course, thanks to the Lizardman constant, we know it is more plausible that 4% of people would have responded ‘no, I never answer surveys’.

But there are so many!

Another one that I am fond of invoking, because it has a catchy name, is the Lizardman constant: the problem that survey responses contain an irreducible level of noisy nonsense. Specifically, as a rule of thumb, about 4% of respondents will claim on a survey that their head of state is an alien lizard monster. Related, although less lurid, are nonattitudes (Converse 1974).
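Since the subtitle of this page promises an adjustment, here is a minimal sketch of the crudest possible one, treating the Lizardman constant as a known noise floor for a yes/no item. The noise fraction `eps` and the rate `q` at which noise responders endorse the item are assumptions made up for illustration, not estimates from data.

```python
def deflate_for_noise(p_obs: float, eps: float = 0.04, q: float = 0.5) -> float:
    """Back out a 'sincere' endorsement rate from an observed one.

    Assumes a fraction `eps` of respondents answer without regard to the
    question, endorsing the item with probability `q`, so that
    p_obs = (1 - eps) * p_true + eps * q. Both numbers are guesses.
    """
    p_true = (p_obs - eps * q) / (1.0 - eps)
    return min(max(p_true, 0.0), 1.0)

# Example: 6% of respondents endorse an outlandish claim. Under these
# (made-up) noise assumptions the sincere rate is closer to 4.2%.
print(deflate_for_noise(0.06))
```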

I am particularly exercised by the problem I refer to as the Dunning-Kruger Theory of Mind: even with the best intentions in the world and an unbounded survey budget, we are not good at knowing our own minds, and even worse at knowing the minds of others. With all the focus and intent in the world, my survey responses are far more reflective of my self-image than of any facts about the world.

OK, many caveats, warnings and qualifications. Does that mean that surveys are useless? No, it does not. It just means that surveys are difficult and limited. But sometimes there is no other clear way to study the phenomenon of interest, so we have to do what we can. What follows are some tricks to do this.

2 Survey design

TBD. To pick a paper I have been looking at recently, Gelman and Margalit (2021) is an example of ingenious survey design used to answer non-trivial questions.

3 Post stratification

Tricks of particular use in modelling survey data when we need to adjust for bias in who actually answers the survey: reweighting the data to correct for various types of remediable sampling bias.
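As a toy illustration of the reweighting idea (not full multilevel regression and post-stratification), here is a minimal sketch: estimate the outcome within each cell of the sample, then average the cell estimates using known population cell shares. The age bands, responses and population shares below are all invented.

```python
import pandas as pd

# A (non-representative) sample; all numbers invented for illustration.
sample = pd.DataFrame({
    "age_band": ["18-34", "18-34", "35-64", "35-64", "65+", "65+"],
    "support":  [1, 0, 1, 1, 0, 0],
})

# Known population shares for the same cells, e.g. from a census table.
pop_share = pd.Series({"18-34": 0.30, "35-64": 0.50, "65+": 0.20})

cell_means = sample.groupby("age_band")["support"].mean()

naive = sample["support"].mean()                 # ignores who answered
poststratified = (cell_means * pop_share).sum()  # reweights to the population

print(f"naive: {naive:.2f}, post-stratified: {poststratified:.2f}")
```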

There is some interesting crossover with clinical trial theory, in that there are surprising things that you CAN learn from a biased sample in many circumstances:

It is a commonly held belief that clinical trials, to provide treatment effects that are generalizable to a population, must use a sample that reflects that population’s characteristics. The confusion stems from the fact that if one were interested in estimating an average outcome for patients given treatment A, one would need a random sample from the target population. But clinical trials are not designed to estimate absolutes; they are designed to estimate differences as discussed further here. These differences, when measured on a scale for which treatment differences are allowed mathematically to be constant (e.g., difference in means, odds ratios, hazard ratios), show remarkable constancy as judged by a large number of published forest plots. What would make a treatment estimate (relative efficacy) not be transportable to another population? A requirement for non-generalizability is the existence of interactions with treatment such that the interacting factors have a distribution in the sample that is much different from the distribution in the population.

A related problem is the issue of overlap in observational studies. Researchers are taught that non-overlap makes observational treatment comparisons impossible. This is only true when the characteristic whose distributions don’t overlap between treatment groups interacts with treatment. The purpose of this article is to explore interactions in these contexts.

As a side note, if there is an interaction between treatment and a covariate, standard propensity score analysis will completely miss it.

This is a whole interesting topic in its own right; see post-stratification for details.

4 Ordinal data

A particularly common data type to analyze in surveys: ordinal responses are how we usually get data from people. Think star ratings, or Likert scales.
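The workhorse here is the cumulative-logit (proportional-odds) model. A minimal sketch, with made-up cutpoints, of how the probabilities of each Likert category fall out of a set of thresholds and a latent linear predictor:

```python
import numpy as np

def likert_probs(eta: float, cutpoints: np.ndarray) -> np.ndarray:
    """P(Y = k) for k = 1..K, where P(Y <= k) = logistic(c_k - eta)."""
    cdf = 1.0 / (1.0 + np.exp(-(cutpoints - eta)))  # P(Y <= k) for k = 1..K-1
    cdf = np.concatenate([[0.0], cdf, [1.0]])
    return np.diff(cdf)                             # successive differences give P(Y = k)

cutpoints = np.array([-2.0, -0.5, 0.5, 2.0])        # K - 1 = 4 thresholds, invented
for eta in (-1.0, 0.0, 1.0):                        # latent propensity to agree
    print(eta, likert_probs(eta, cutpoints).round(2))
```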

sjPlot, by Daniel Lüdecke, is a handy R package for exploratory plotting of Likert-type responses in social survey data.

5 Confounding and observational studies

Often survey data is further complicated by arising from a natural experiment, where we must deal with non-controlled trials. See causal graphical models.

6 Graph sampling

Can’t sample people from the population at random? How about asking people you know, and getting them to ask people they know? What can we learn from this approach? See inference on social graphs.
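A hedged sketch of why this needs care: in a snowball sample, recruits are reached through their contacts, so they are systematically better-connected than the population average (the friendship paradox). The graph model, seed count and number of waves below are arbitrary choices for illustration.

```python
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(n=2000, m=3, seed=0)   # synthetic stand-in population

# Snowball sampling: a few random seeds, then two waves of recruiting contacts.
seeds = random.sample(list(G.nodes), 5)
recruited, frontier = set(seeds), set(seeds)
for _ in range(2):
    frontier = {nbr for node in frontier for nbr in G.neighbors(node)} - recruited
    recruited |= frontier

mean_degree_pop = sum(d for _, d in G.degree()) / G.number_of_nodes()
mean_degree_sample = sum(G.degree(n) for n in recruited) / len(recruited)
print(f"population mean degree {mean_degree_pop:.1f}, "
      f"snowball sample mean degree {mean_degree_sample:.1f}")
```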

7 Data sets

Parsing SDA Pages

SDA is a suite of software developed at Berkeley for the web-based analysis of survey data. The Berkeley SDA archive lets you run various kinds of analyses on a number of public datasets, such as the General Social Survey. It also provides consistently-formatted HTML versions of the codebooks for the surveys it hosts. This is very convenient! For the gssr package, I wanted to include material from the codebooks as tibbles or data frames that would be accessible inside an R session. Processing the official codebook from its native PDF state into a data frame is, though technically possible, a rather off-putting prospect. But SDA has done most of the work already by making the pages available in HTML. I scraped the codebook pages from them instead. This post contains the code I used to do that.
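That post does the scraping in R for the gssr package. Purely as an illustration of the general idea, here is a hypothetical Python sketch that flattens an HTML codebook-style table into a data frame; the HTML snippet is invented, and a real workflow would fetch the SDA page first.

```python
import io
import pandas as pd

# Invented stand-in for a fragment of an SDA codebook page.
html = """
<table>
  <tr><th>value</th><th>label</th><th>count</th></tr>
  <tr><td>1</td><td>strongly agree</td><td>412</td></tr>
  <tr><td>2</td><td>agree</td><td>951</td></tr>
  <tr><td>8</td><td>don't know</td><td>73</td></tr>
</table>
"""

# pandas.read_html pulls every <table> in the document into a DataFrame.
codebook = pd.read_html(io.StringIO(html))[0]
print(codebook)
```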

8 To elicit wisdom of crowds

See Wisdom of crowds.

9 Incoming

10 References

Achlioptas, Clauset, Kempe, et al. 2005. “On the Bias of Traceroute Sampling: Or, Power-Law Degree Distributions in Regular Graphs.” In Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing. STOC ’05.
Bareinboim, and Pearl. 2016. “Causal Inference and the Data-Fusion Problem.” Proceedings of the National Academy of Sciences.
Bareinboim, Tian, and Pearl. 2014. “Recovering from Selection Bias in Causal and Statistical Inference.” In AAAI.
Biesanz, and West. 2004. “Towards Understanding Assessments of the Big Five: Multitrait-Multimethod Analyses of Convergent and Discriminant Validity Across Measurement Occasion and Type of Observer.” Journal of Personality.
Bond, Fariss, Jones, et al. 2012. “A 61-Million-Person Experiment in Social Influence and Political Mobilization.” Nature.
Broockman, Kalla, and Sekhon. 2016. “The Design of Field Experiments With Survey Outcomes: A Framework for Selecting More Efficient, Robust, and Ethical Designs.” SSRN Scholarly Paper ID 2742869.
Converse. 1974. “Comment: The Status of Nonattitudes.” American Political Science Review.
Gao, Kennedy, Simpson, et al. 2019. “Improving Multilevel Regression and Poststratification with Structured Priors.” arXiv:1908.06716 [Stat].
Gelman. 2007. “Struggles with Survey Weighting and Regression Modeling.” Statistical Science.
Gelman, and Carlin. 2000. “Poststratification and Weighting Adjustments.”
Gelman, and Margalit. 2021. “Social Penumbras Predict Political Attitudes.” Proceedings of the National Academy of Sciences.
Ghitza, and Gelman. 2013. “Deep Interactions with MRP: Election Turnout and Voting Patterns Among Small Electoral Subgroups.” American Journal of Political Science.
Hart, VanEpps, and Schweitzer. 2019. “I Didn’t Want to Offend You: The Cost of Avoiding Sensitive Questions.” SSRN Scholarly Paper ID 3437468.
Kennedy, Mauro, Daniels, et al. 2019. “Handling Missing Data in Instrumental Variable Methods for Causal Inference.” Annual Review of Statistics and Its Application.
Kohler, Kreuter, and Stuart. 2019. “Nonprobability Sampling and Causal Analysis.” Annual Review of Statistics and Its Application.
Kong. 2019. “Dominantly Truthful Multi-Task Peer Prediction with a Constant Number of Tasks.” arXiv:1911.00272 [Cs, Econ].
Krivitsky, and Morris. 2017. “Inference For Social Network Models From Egocentrically Sampled Data, With Application To Understanding Persistent Racial Disparities In Hiv Prevalence In The Us.” The Annals of Applied Statistics.
Lerman. 2017. “Computational Social Scientist Beware: Simpson’s Paradox in Behavioral Data.” arXiv:1710.08615 [Physics].
Little, Roderick JA. 1991. “Inference with Survey Weights.” Journal of Official Statistics.
Little, R. J. A. 1993. “Post-Stratification: A Modeler’s Perspective.” Journal of the American Statistical Association.
Maul. 2017. “Rethinking Traditional Methods of Survey Validation.” Measurement: Interdisciplinary Research and Perspectives.
Prelec, Seung, and McCoy. 2017. “A Solution to the Single-Question Crowd Wisdom Problem.” Nature.
Rubin, and Waterman. 2006. “Estimating the Causal Effects of Marketing Interventions Using Propensity Score Methodology.” Statistical Science.
Sanguiao Sande, and Zhang. 2020. “Design-Unbiased Statistical Learning in Survey Sampling.” Sankhya: The Indian Journal of Statistics.
Shalizi, and McFowland III. 2016. “Controlling for Latent Homophily in Social Networks Through Inferring Latent Locations.” arXiv:1607.06565 [Physics, Stat].
Shalizi, and Thomas. 2011. “Homophily and Contagion Are Generically Confounded in Observational Social Network Studies.” Sociological Methods & Research.
Yadav, Prunelli, Hoff, et al. 2016. “Causal Inference in Observational Data.” arXiv:1611.04660 [Cs, Stat].
Zhang, and Nguyen. 2020. “An Appraisal of Common Reweighting Methods for Nonresponse in Household Surveys Based on Norwegian Labour Force Survey and Statistics on Income and Living Conditions Survey.” Journal of Official Statistics.