• mozz@mbin.grits.dev · 3 months ago

    we give more weight to respondents from demographic groups underrepresented among survey respondents, like people without a college degree

    Oooooohhh

    All of a sudden it makes sense

    Here’s their methodology page, which, in addition to that fuckin fascinating tidbit you quoted, has some other things of note:

    • The New York Times/Siena College Poll is conducted by phone using live interviewers at call centers based in Florida, New York, South Carolina, Texas and Virginia. Respondents are randomly selected from a national list of registered voters, and we call voters both on landlines and cellphones.
    • In the end, fewer than 2 percent of the people our callers try to reach will respond. We try to keep our calls short — less than 15 minutes — because the longer the interview, the fewer people stay on the phone.
    • We call more people who seem unlikely to respond, like those who don’t vote in every election.
    • But the truth is that there’s no way to be absolutely sure that the people who respond to surveys are like demographically similar voters who don’t respond. It’s always possible that there’s some hidden variable, some extra dimension of nonresponse that we haven’t considered.

    It is, indeed, always possible.
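
    For the curious, the weighting they describe is essentially post-stratification: each group’s respondents get scaled up or down until the sample’s shares match the electorate’s. A minimal sketch in Python, with invented group names and numbers (not the Times’s actual targets or cells):

    ```python
    # Hypothetical post-stratification weights. The groups and every
    # number below are invented for illustration, not the Times's data.
    population_share = {"no_degree": 0.60, "degree": 0.40}  # assumed electorate
    sample_share = {"no_degree": 0.30, "degree": 0.70}      # assumed respondents

    # Each respondent in a group counts population_share / sample_share times.
    weights = {g: population_share[g] / sample_share[g] for g in population_share}
    print(weights)  # {'no_degree': 2.0, 'degree': 0.571...}
    ```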

    • assassin_aragorn@lemmy.world · 3 months ago

      To be clear, polling theory is totally valid and an established science within statistics.

      But the challenge is always with methodology, because you can never get a perfect simple random sample. And the methodology here certainly seems terrible.
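
      To make the “hidden variable” problem concrete: if willingness to answer the phone correlates with the thing being measured, the skew exists inside every demographic cell, and no amount of reweighting by degree, age, or region can remove it. A toy simulation (all parameters invented):

      ```python
      import random

      random.seed(0)

      # Electorate where true support is 50%, but supporters are twice
      # as likely to answer the poll. Every number here is made up.
      N = 100_000
      voters = [random.random() < 0.50 for _ in range(N)]  # True = supporter

      def responds(supporter: bool) -> bool:
          # 2% response rate for supporters, 1% for everyone else
          return random.random() < (0.02 if supporter else 0.01)

      respondents = [v for v in voters if responds(v)]
      print(f"true support:  {sum(voters) / N:.1%}")                      # ~50%
      print(f"poll estimate: {sum(respondents) / len(respondents):.1%}")  # ~67%
      ```

      The estimate lands near 67 percent no matter how the sample is reweighted, because the bias isn’t demographic.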