Monday/Tuesday Highlights

Well, … life intrudes.

  1. Ah, some suggestions for Mr Romney for the first debate.
  2. On those cricket races.
  3. So, still no mention of the magical Obama roadmap to a nuclear (weapon) free world.
  4. (One) Religious freedom returns to Germany.
  5. Death of a bad man noted (HT: CB)
  6. “Call me a Luddite” … no I’ll call myself confused as to why the person named as the architect of one of the reasons for the success of the 9/11 attacks still has a job in Washington (recall her “wall of separation” between agencies was one of signal factors blamed for our failure to detect the actors).
  7. Let’s re-write history … or not.
  8. Flower children.
  9. A book noted.
  10. Land of the free? Or incipient police state.
  11. When speaking with others.
  12. A question not answered well to my knowledge.
  13. Administration lies and the liberal echo chamber hasn’t noticed.

41 Responses to Monday/Tuesday Highlights

  1. Ah, some suggestions for Mr Romney for the first debate.

    Word is Romney has been practicing several zingers. It’s hard for me to imagine that he’ll be able to deliver them, though.

    Mitt Romney is no Ronald Reagan.

    On those cricket races.

    In other words, “I’m too ignorant to know how this works but arrogant enough to conclude that it therefore cannot.” That old combination of ignorance and arrogance that comes up over and over again on your side of every issue.

    Forget poker, I want to gamble with you on elections. How about we bet on 100 elections — I’ll take the candidates leading in the polls on election day, you take those trailing.

    If you’re genuinely interested in polling, you should be reading fivethirtyeight.

    A question not answered well to my knowledge.

    And more ignorance + arrogance.

  2. 3. So, still no mention of the magical Obama roadmap to a nuclear (weapon) free world.

    You’ve claimed this a few times yet I don’t think you ever supported this assertion that Obama is not realistic about nuclear weapons.

    6. “Call me a Luddite” … no I’ll call myself confused as to why the person named as the architect of one of the reasons for the success of the 9/11 attacks still has a job in Washington (recall her “wall of separation” between agencies was one of signal factors blamed for our failure to detect the actors).

    Architect is a bit over the top; it kinda implies intention, which I don’t think you intended. As for the ‘wall of separation’ being the reason for 9/11, I think that was over-the-top CYA acting on the part of the CIA/FBI as well as partisanship. I for one find hindsight to be more than 20-20, and I’m pretty skeptical of those who claim “those people over there couldn’t connect the dots but I could have if I had seen this back then!”.

    2. On those cricket races.

    Not really impressed. The major polling agencies *do* do cell phone as well as land line calls. As for only 8% of calls producing a ‘useable interview’, that’s not really surprising IMO. I don’t think the ‘polls are rigged’ meme has been very well thought out. Polling companies are in the business of producing good polls. Why would they want to tarnish their business reputation? More importantly, while a rigged pro-Obama poll may be helpful to the Obama campaign, the fact remains that both parties *need* reliable poll numbers to allocate their resources and shift their strategies as needed. Why wouldn’t the market produce reliable polling companies to meet this demand?

  3. Also, excluding a category from sampling does not in itself bias the results of the poll unless that category breaks down differently from the rest of the population. For example, if you took a sample by calling people with odd social security numbers, I don’t think you’d bias your poll because there probably isn’t any difference in Obama’s and Romney’s popularity between even and odd SSN holders. The famous poll miscalculation from decades ago (Truman’s election possibly?) was from a poll done only by phone, failing to account for the fact that back then having a home phone was still a bit of a luxury that many low and moderate income people didn’t have.

    If cell phone only users are being excluded from polls they tend to be younger which, if anything, leans more strongly Democratic than Republican.

  4. Boonton,

    You’ve claimed this a few times yet I don’t think you ever supported this assertion that Obama is not realistic about nuclear weapons.

    No. Obama has made the claim he has a plan for a road map to a nuclear weapon free world. Yet he’s never detailed the here-to-there methodology … that remains mythical. Google “Obama nuclear free world”. That he thinks this is a reasonable goal is not unsubstantiated. What Obama has failed to substantiate is a reasonable path from here to there.

    Architect is a bit over the top, kinda implies intention which I don’t think you intended to do.

    I see. If a person does something with horrible spectacular unintended consequences, which they failed to anticipate … you encourage keeping them in high posts? I’ll have to remember that.

    Polling companies are in the business of producing good polls. Why would they want to tarnish their business reputation? More importantly, while a rigged pro-Obama poll may be helpful to the Obama campaign, the fact remains that both parties *need* reliable poll numbers to allocate their resources and shift their strategies as needed. Why wouldn’t the market produce reliable polling companies to meet this demand?

    The real question might be why you expect the numbers reported to the sponsor of the poll to match those reported to the public by the media … put the bias where you wish.

    Also excluding a category from sampling does not in itself bias the results of the poll unless that category breaks down differently from the rest of the population.

    So. You’re a polling agency. You find that your sampling never matches your poll, but you find a formula that worked with last year’s data that “corrects” for the bias. You use that this year. It works so-so. You use the new correction next year. And so on. But … how accurate is your poll? Why depend on something built on fudge factors and pretend it isn’t?

    JA,
    I don’t recall which link on this topic I offered. One feed result today noted that when polls err they far more often err to the Democratic side, i.e., when they are wrong they had predicted a Democratic victory. How about your wager be on that? If the poll is wrong, the payoff depends on whether the error favored the GOP or the Democrats. You figure Dems will err as often as the GOP, so you pay if a Dem wins when the poll was incorrect, and vice versa. If the polls were accurate, unbiased, and all that, you’d expect the errors to fall evenly between the parties. They don’t. Are you claiming the labels ignorance and arrogance for yourself? Was that your point?

    So has your cell phone been called?

    If you’re genuinely interested in polling,

    I genuinely think polling is less useful than people pretend. Recall from your discussions … “what would you do if stranded, starving, on a lifeboat”, “what would you do if drafted”, “who would you vote for” … on all these questions I think people very often do not do what they said in discussion. When faced with the actual choice in real situations they very frequently do not do what they claimed they would do.

    And more ignorance + arrogance.

    So … fill me in. I’ve asked for data. You’ve failed every time to supply it. I’ll ask again. Let me know why you expect warming to be worse than cooling or … in fact, why you expect it to be bad at all. I’ve had discussions on that; why is more arable land such a bad thing? Eh?

  5. You find that your sampling never matches your poll, but you find a formula that worked with last year’s data that “corrects” for the bias. You use that this year. It works so-so. You use the new correction next year. And so on. But … how accurate is your poll? Why depend on something with fudge factors and pretend it doesn’t.

    I think it’s a bit less dramatic than this. You take a random sample of 1000 people, and you get 600 women and 400 men. But you also know the entire population you are interested in is about 50-50 male to female. So you weight the male answers in your sample higher to compensate for the fact that you had fewer men than ‘normal’.

    The objection to the polls, from what I understand, is they do a sample and it breaks like 600 Democrats to 400 Republicans. The Republicans are asserting that just like with the men above, the Republicans should be weighted more because we ‘know’ the general population isn’t weighted so heavily. The pollsters are not doing that, though, because party affiliation isn’t fixed as gender is. If a person is leaning towards Obama this year they are more likely to say they are a Democrat, next year if they are leaning the other way they will say they are a Republican.
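
    The weighting step described above can be sketched in a few lines. A hypothetical example (the group sizes and support figures are invented purely for illustration):

```python
# Post-stratification sketch: reweight a 600-woman / 400-man sample
# to the known 50-50 population split. All numbers are hypothetical.
def weighted_support(groups, population_shares):
    """groups: {name: (sample_size, supporters)}; shares sum to 1."""
    total = 0.0
    for name, (n, supporters) in groups.items():
        rate = supporters / n                    # support rate within the group
        total += rate * population_shares[name]  # weight by the true share
    return total

sample = {"women": (600, 360), "men": (400, 160)}  # 60% vs 40% support
raw = (360 + 160) / 1000                           # unweighted: 0.52
adjusted = weighted_support(sample, {"women": 0.5, "men": 0.5})  # 0.50
```

    The adjusted figure (50%) differs from the raw 52% simply because women were over-represented in the sample.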

    The real question might be why you expect the numbers reported to the sponsor of the poll match those reported to the public by a the media … put the bias where you wish.

    Again, from a game theory perspective it’s not clear to me that a party really would want a polling company putting out the idea ‘it’s OK, your guy’s going to win no matter what’. Also, a polling company earns money by its reputation; it doesn’t seem very plausible that they would be eager to put out ‘fake numbers’.

  6. So … fill me in. I’ve asked for data. You’ve failed every time to supply. I’ll ask again. Let me know why you expect warming worse than cooling or … in fact not bad.

    Try this

  7. If a person does something with horrible spectacular unintended consequences, which they failed to anticipate … you encourage keeping them in high posts?

    I’m not convinced said person did anything nor that such consequences arose from that.

    No. Obama has made the claim he has a plan for a road map to a nuclear weapon free world. Yet he’s never detailed the here-to-there methodology … that remains mythical. Google “Obama Nuclear free world”.

    First thing that came up for me was this
    http://www.telegraph.co.uk/news/worldnews/barackobama/5109810/President-Barack-Obama-calls-for-a-nuclear-free-world-in-Prague-speech.html

    Granted I didn’t listen to the entire video nor read the transcript, but I see no such claim that he had some type of ‘secret plan’ for a nuclear free world or that such a goal would be ‘right around the corner’. Instead he specifically states it would be a very difficult goal to achieve and would require cooperation by lots of people, and the only immediate first step was to oppose the spread of nuclear weapons (a sensible break from the previous administration, which decided to support Pakistan’s development of a nuclear weapon, a choice that almost led to disaster when one of their scientists was caught trying to pass information to Islamist groups!). Seems like you’ve taken a run of the mill aspirational statement (we’re going to fight poverty, we’re going to fight crime, etc.) and twisted it to make it something it wasn’t.

  8. Boonton,

    I’m not convinced said person did anything nor that such consequences arose from that.

    I’ll admit I haven’t read the reports or the data … but the bi-partisan committee which investigated laid some of the blame on that … although I believe few make the connection publicly with the author of that policy.

    OK. If he said let’s move to a gun free world or a poverty free world, you’d say those are “aspirational statements”, but that is not how I read the statements he made. I think there is some academic theory, some plan he has in mind, for how this can be obtained. Hence “mythical plan”: the statements I’ve seen make it sound like his goals are more than just aspirational.

  9. JA,
    Hmm. Not a single scientific paper. You don’t believe in real science do you, just scientism. Cite. A. Refereed. Paper.

  10. I’ll admit I haven’t read the reports or the data … but the bi-partisan committee which investigated laid some of the blame on that … although I believe few make the connection publicly with the author of that policy.

    So you are telling me a bipartisan committee wrote a report that blamed her, but didn’t want to put it in writing. Then you’ll snipe at JA for using poor quality non-scientific sources….

    OK. If he said let’s move to a gun free world or poverty free world, you’d say those are “aspirational statements” but that is not how I read those statements that he made. I think there is some academic theory some plan he has in mind on how this can be obtained. Hence, “mythical plan”…

    In other words the ‘mythical plan’ exists in yours, rather than Obama’s, imagination. Now shall we cue your empty chair to speak too?

  11. I genuinely think polling is less useful than people pretend. Recall from your discussions … “what would you do if stranded, starving, on a lifeboat”, “what would you do if drafted”, “who would you vote for” … on all these questions I think people very often do not do what they said in discussion.

    Well, let’s think about how useful this is…

    Suppose you take Dr. House’s assumption: people always lie. In that case polling would be very useful. Suppose you asked people whether, on a sinking ship, they would help others to the lifeboats first or try to get to the lifeboats themselves first. If people always lie you just have to take their answer and flip it. Those that say they will bravely help others first can be assumed to act like cowards; those that say they would step over babies to get on the lifeboat first will act like heroes. In this case the poll will be perfect. Likewise if people never lie, the poll will also be perfect. If you needed to figure out how many lifeboats need to be on the ship to save everyone, you could figure that out with the poll. Or at least get close to it.

    In order for the poll to be entirely useless, the answers people give have to be totally random. If a person thinks of their answer, then flips a coin and lies if it’s heads or tells the truth if it’s tails, then the poll will be totally useless.

    Notice we need some things to be absolutely true to achieve a totally useless poll:

    1. Whether or not a person chooses to change their answer has to have a probability of exactly 50%. If a person only changes their answer 10% of the time, the poll will be less accurate but still pretty useful. If 80% of the people say they will help others, it’s very unlikely that a huge majority of them won’t and the ’10% random number generator’ happened to tell them to lie. Likewise if people lie 90% of the time, you still have a case where the poll is likely to be very useful.

    2. The direction of deception must be unbiased. By flipping a coin people can either make themselves seem more heroic than they would really be or more cowardly than they would really be. In reality most people tend to bias their lies. We are more likely to falsely claim we’d help others than falsely claim we’d step on babies. If that’s true we can measure the direction and intensity of biases and use that to compensate for the answers people give in a poll. So if we ask questions about what people would think of someone who helped others versus stepped on babies, we could get a sense of what people think they should *signal* they would do versus what they would really do. Hence we can say that while 80% claimed they would help others, it’s more likely that some of the 80% pseudo-heroes will turn out to act like cowards than that the 20% self-proclaimed cowards will turn out to be heroes.

    Of course none of this is easy but it is helpful to note that it is very unlikely that a poll will be useless and more importantly, if you have a series of different polls all coming in around the same area, it’s unlikely they are not telling you something.
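
    The coin-flip point is easy to check with a quick simulation (a sketch; the 80% ‘true’ rate and the lie rates are just example numbers):

```python
import random

# If answers are flipped 50% of the time the poll carries no information;
# at a 10% lie rate it still tracks the truth. All rates are hypothetical.
def poll(true_yes_rate, lie_rate, n=100_000, seed=1):
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        truth = rng.random() < true_yes_rate       # what they'd really do
        answer = (not truth) if rng.random() < lie_rate else truth
        yes += answer
    return yes / n

# poll(0.8, 0.5) comes out near 0.50 (useless);
# poll(0.8, 0.1) comes out near 0.74, close to the true 0.80.
```

    Only the exact 50% lie rate destroys the signal; any other rate leaves a recoverable correlation.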

  12. WTF kind of game are you playing? Do you think there are no scientific papers on why global warming is bad? This is ridiculous. Try google scholar:

    http://scholar.google.com/scholar?hl=en&q=global+warming+consequences&btnG=&as_sdt=1%2C47&as_sdtp=

  13. JA,
    I’m not playing a game. The first time I asked you for reading/references and background science on global warming you pointed me at a big political document. I asked the same question again, and you pointed me at news articles. Now you point me at a new Google search, as if you’ve surveyed these articles and found a good source of excellent reading material … oh, wait, you haven’t. You really haven’t read into this at all. It seems to me that you’re taking journalists’ word on the science and on the science being good. You don’t have any notion that climate science is any better at prediction than astrology (recall my question about “what accurate predictions have been made in this field” drew a blank) and you haven’t actually read the science. You know that in any field where you actually have a background, journalism covering that field is notoriously bad. Journalists do about as much justice to any particular venture as Hollywood does, from crime, to medicine, to science. Journalists (in physics) will jump all over “neutrinos faster than light” or what have you … but that isn’t good reporting of what is happening in the field. Yet you figure that for climate things are different.

    One might further ask why you haven’t looked into this? Why you are just as ignorant of it as I am. I suggest it is because you don’t really believe this as important as you pretend.

    In physics, at say arXiv, there are articles every day … a large percentage of them are useless. Some are good. In that mix you often get very good overview articles reviewing the status of a particular question or subtopic. It is likely that similar things are to be found in meteorology. You don’t know about them, apparently, because you don’t know the field. This is what I’m looking for. It’s not a game. It’s exactly what I was looking for all along. The question asked is specifically about good vs bad features related to warming (because there are good things …).

    Take one example, such as raising crops further north, which cannot be simply set aside by moronic news articles noting that “gosh, there’s less light up north”. Yes, nobody is going to grow sugar beets above the Arctic Circle, but if it was a bit warmer you might be able to grow corn in the Dakotas … and more than just winter wheat in parts of Canada, which do get enough sunlight but where the growing season is too short because of frost, not sunlight. “The soil is poor up north” is also stupid; the soil is poor because not much has grown there, because it’s been frozen. Fertilizers do in fact exist, and in a few decades decayed vegetation can build up soil. It’s not poor soil because it’s sandy; it’s poor because little has grown there in the past. There is no fundamental problem (like sand) that generically affects northern areas.

  14. Boonton,

    Well, let’s think about how useful this is…

    Yes. If “everyone lies” you have a good angle on figuring out what’s going on. But no, it’s not deceit. It’s that the poll is not the same as the situation. You can poll people on “how they might vote” on a jury … but that is different from how they would vote on an actual jury with a defendant who is no longer hypothetical.

    Those that say they will bravely help others first can be assumed to act like cowards, those that say they would step over babies to get on the lifeboat first will act like heroes.

    Those that say they will act bravely may do so, they might not. Those that say they’d be cowards, may or may not. The problem is your expectations of how you will act are not well correlated with how you actually act.

    In order for the poll to be entirely useless, the answers people give have to be totally random.

    No. For the poll to be useless there has to be little correlation between answer and action. In many polls this is the case. It is difficult to weed out in advance whether your answers will correlate well with actions. Therefore polling is mostly useless.

    it’s unlikely they are not telling you something.

    Yet the poll isn’t telling you “what they will do”; it’s telling you “how they will answer your poll”, which is a different beast.

  15. Boonton,

    So you are telling me a bipartisan committee wrote a report that blamed her, but didn’t want to put it in writing

    No. I’m telling you a bi-partisan commission blamed “separation of agencies”. They did not name the sponsor and author of said separation, because that wasn’t relevant and likely Ms Napolitano was one of many such sponsors.

    In other words the ‘mythical plan’ exists in yours, rather than Obama’s, imagination.

    No. Look at your word usage for aspirational phrases. Compare with Obama’s on nuclear free. There is a difference. Obama thinks nuclear free is possible. That isn’t realistic.

  16. I’m not taking journalists’ word, I’m taking scientists’. I don’t need to pore over scientific papers myself — indeed that would be perhaps counterproductive as I do not have the relevant expertise.

    Unlike you, I am not so arrogant as to think I can as a layperson pick a few papers here and there (or, more to the point, read a few opinion pieces that cherry-pick the papers for you) and decide that an entire scientific field is a joke.

    That is not my job. There are many thousands of people who do have that job and they are called scientists. And they damn near universally agree on this issue that is only under debate for political reasons.

  17. No. For the poll to be useless there has to be little correlation between answer and action

    In other words, random. If people flipped a coin before answering, that would make the poll useless. BUT anything other than that would generate a correlation that itself could be used. In other words, if it’s not equal to a coin flip, then you’ll have a correlation. For example, if you see that 60% of people who say they will act like heroes really act like cowards, and 90% of people who say they will act like cowards will indeed do so, you can get a good sense of what will actually happen should a poll say 70% will act like heroes and 30% like cowards. In that case it would be sensible to prepare for 69% cowards and 31% heroes.
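
    The arithmetic behind that 69/31 figure can be sketched out (the behaviour rates are the hypothetical ones from the example, not measured values):

```python
# Correct a stated-intention poll using known answer-to-action rates.
# All rates here are hypothetical, taken from the example above.
def correct_poll(pct_say_hero, p_hero_acts_hero, p_coward_acts_hero):
    pct_say_coward = 100 - pct_say_hero
    # expected share who will actually act heroically
    true_heroes = (pct_say_hero * p_hero_acts_hero
                   + pct_say_coward * p_coward_acts_hero)
    return true_heroes, 100 - true_heroes

# 70% claim heroism but only 40% of them follow through; 10% of
# self-declared cowards act heroically anyway.
heroes, cowards = correct_poll(70, 0.40, 0.10)  # roughly 31 and 69
```

    So even heavily biased answers remain informative, provided the bias itself is measurable.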

    This of course brings in a time element. Who people say they will vote for on Oct 2nd may not show the same ratios as on Nov 2nd. If polls were useless, though, you’d expect to see telltale patterns. A striking one: you’d expect the polls right before the actual election to be no closer to the end result than polls long before the election. I don’t think you can demonstrate that pattern, although I’m sure you can find individual examples.

  18. JA,

    I’m not taking journalists’ word, I’m taking scientists’.

    Who? Where are you hearing from these scientists except via journalists reporting as such?

    I don’t need to pore over scientific papers myself — indeed that would be perhaps counterproductive as I do not have the relevant expertise.

    Certainly you do … enough to understand the basis of it.

    … And they damn near universally agree …

    I don’t understand how you can say that. Physicists don’t “universally” agree on anything. Protons decay? Maybe, maybe not. How many dimensions in spacetime? 4? 10? Other? Dunno. Quarks exist? Some say yes, some say no. Physics is confident enough to have disagreements. Apparently meteorology is too new a science and has different methods of enforcing dogma, kind of like astrology?

  19. Boonton,
    Weather forecasting, at least a decade or two ago, was said to have about a 58-60% accuracy rate … but predicting that tomorrow will be the same as today is right 55% of the time.

    At some point as correlation becomes weaker and weaker your method is meaningless. Apparently you’re not ready to move to that sort of conclusion.

  20. Physicists don’t “universally” agree on anything.

    They universally agree on most things, like the fact that gravity exists or that the sun is hot. It’s just that your side isn’t turning those into political issues. Obviously, they disagree on some things when there isn’t enough evidence.

    Similarly, climate scientists disagree on some things but agree on the basics that have already been established. There have been surveys of scientists that show near-universal agreement. There have been huge associations of scientists that themselves say they agree nearly universally.

    We’ve been through this over and over again and you’re just too dishonest or arrogant to admit that your side is wrong.

    http://en.wikipedia.org/wiki/Scientific_opinion_on_climate_change

  21. JA,

    They universally agree on most things, like the fact that gravity exists …

    OK. What causes gravity? (or does the Earth suck?) Are gravitons spin-two massless bosonic particles, which we cannot detect, or not? How does one quantize gravity? As noted in a link today, E=mc2 requires quantum mechanics … yet there is no agreed way to quantize gravity. We might agree that “it exists” but there is no agreement at all about what that thing which exists might be.

    Oddly enough, there are no “synthesis reports”, “scientific bodies”, or “surveys of the opinions of high energy physicists and relativists” to decide what is doctrine and what is not.

    Again: what predictions has this field, “climate science”, made? How reliable is it? Some fields are new and their theory is weak. Some fields are older and have a large body of reliable theory. This is true in physics … why is that not true in climate?

    you’re just too dishonest or arrogant to admit that your side is wrong.

    So … if a “survey” of HEP scientists says they believe quarks are real, why then are those that do not believe it not excluded or mocked and given the same treatment?

    “your side is wrong” or you are “dishonest or arrogant” … these are not statements which are made by actual scientists … the process of science seems foreign to you … let me set you straight. If I disagree about something (for example quarks are not real unless we detect free quarks), that doesn’t label me as dishonest or “wrong” … This is not how science works. I’m pretty sure you know that and your labeling me as “dishonest” and “wrong” is you being dishonest. Didn’t you learn about that in school?

    And by the by, what is “my side” … is that the one that gets mocked by you for asking for scientific overviews of the various effects, both good and bad, that might accompany warming?

    And speaking of dishonest, is that like reporting that the Arctic ice pack is at record lows while not reporting that the Antarctic is at record highs? Just curious.

  22. At some point as correlation becomes weaker and weaker your method is meaningless. Apparently you’re not ready to move to that sort of conclusion.

    I think you’re confusing correlation between a person’s answer and their vote with correlation between a poll’s results and an elections results. Even if people ‘randomly’ lie 55% of the time you will probably be able to build reliable polls.

    If poll results were only right 55% of the time you’d see a lot of surprises on actual election days and you don’t. By surprises I mean things like Texas voting for Obama rather than McCain or New York going for McCain rather than Obama. I believe I heard that every single state that Romney won the primary in, he had previously held a lead in the major polls. This would be so improbable as to be impossible if polls were only slightly better than flipping a coin.

    Which is why polling firms make excellent money and both political parties pay through the nose for their services.

  23. And by the by, what is “my side” … is that the one that gets mocked by you for asking for scientific overviews of the various effects, both good and bad, that might accompany warming?

    Oh, you’re just asking, so innocently. How could you possibly find scientific overviews of such effects without relying on me to find them for you? That’s why I pointed you to Google. You’re being a weasel. If you want the science, it’s out there. You aren’t interested in it. You’re interested in wasting my time by asking me to do your research for you, at which point I know from experience you’ll either just stop posting in the thread or come up with some cockamamie reason to disbelieve it based on your gut and delusions of being smarter and less biased than the greatest scientific minds of our generation.

  24. JA,

    Oh, you’re just asking, so innocently. How could you possibly find scientific overviews of such effects without relying on me to find them for you?

    Uhm, I know you want to cast me in an unfavorable light but that’s not quite the situation. I figured you’d read such things and you could point me to one you found useful. If you haven’t read such, just say so. I’m not asking you to find it, but recall it.

    I know from experience that you don’t think highly of people with strongly held opinions who have not arrived at those opinions through their own research, reading and learning. Right? So you’d have read about this … I’m skeptical because chaotic systems are a new and poorly understood thing. I know that; I started grad school with the idea of studying non-linear dynamics. So this is poorly understood … and a fundamental feature of climate and weather. Yet you can confidently assert things about it … based on what? Again, what predictions have they made that lead you to grant them credibility? Why do you think climatology predictions are more credible than astrology?

    You’re interested in wasting my time by asking me to do your research for you, at which point I know from experience you’ll either just stop posting in the thread or come up with some cockamamie reason to disbelieve it based on your gut and delusions of being smarter and less biased than the greatest scientific minds of our generation.

    Usually, I buy the book, get the paper, or download the PDF.

    …. the greatest scientific minds of our generation.

    I’m sorry. I am a bigot on this one. None of the “greatest scientific minds of our generation” are in meteorology. They are in Maths and Physics. Name one meteorologist/climatologist you’d put on the same intellectual footing as Edward Witten, Grigory Perelman, or Pierre Deligne. Or was that a jest?

  25. I recall Mark claimed the BP oil disaster had been proven to have happened because the gov’t ordered the well to undertake unsafe practices. Will he care to cite his scientifically reviewed studies and evidence to support that?

  26. BTW, speaking of polls, check out this review of a new book by FiveThirtyEight author Nate Silver

    http://www.slate.com/articles/business/books/2012/10/nate_silver_s_book_the_signal_and_the_noise_reviewed_.single.html#pagebreak_anchor_2

    In 2008, Silver’s general election forecast, while perfectly sound, was only a marginal improvement on crudely averaging a bunch of opinion polls. Where he really stood out was in the Clinton-Obama primaries where the unprecedented contours of the race were ruining pollsters’ models.

    It’s a good short essay on the merits of taking models seriously, but not too seriously, and of being very suspicious of ‘models’ that are simply built out of noticing correlations in the data (such as: since WWI no Democrat has won the White House without taking West Virginia … perfectly reflected in the data, but if you had relied on it in 2008 to predict the election you’d have failed). Another area where I think models went wrong was using past recessions to predict the path of this one, thereby assuming the ‘normal path’ would be something like 9% unemployment without stimulus and 6% with. The average of previous recessions makes sense when dealing with an average recession, but if 2008 belongs to a class of recessions more typically represented by 1929 rather than 1980, your model should say the ‘normal path’ is more like 20% unemployment (which seems to be the case in European countries that are unable to use stimulus or loosen monetary policy because of the EU).

    But in terms of polls, the fact is that run-of-the-mill polls seem to work very well. While people may ‘lie to pollsters’, those errors are either trivial or tend to cancel each other out.

  27. http://www.slate.com/blogs/the_slatest/2012/09/28/dean_chambes_unskewed_poll_tweaked_fox_news_survey_has_obama_beating_romney_.html?wpisrc=obnetwork seems to go into the ‘polls are skewed’ idea a bit more. The idea seems to be that Republicans make up 34.8% of voters, Dems 35.2% and independents 30.0%. The polls are supposedly skewed because they ‘oversampled’ Democrats.

    The problem is how exactly do you ‘oversample’ Democrats? Take a state like Illinois. Suppose 30% of the population lives in or very near Chicago. A polling company sends people out to randomly stop people on the street and interview them. 50% of their sample is from areas around Chicago and 50% from areas far from it. That makes sense…Chicago is denser, so someone spending an hour randomly interviewing people will encounter more people there. The less dense areas will require more pollsters and even then might capture ‘too few’. So the solution would be to weight the sample…add to the ‘power’ of those from outside Chicago and dampen the ‘power’ of those inside to match the *already known* breakdown of 30%-70%. This is called ‘normalizing’ and it’s done quite often. In epidemiology, for example, if you’re comparing two cities you have to normalize their populations by age to eliminate the fact that there’s a huge correlation between getting older and getting sicker.
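    To make the weighting concrete, here is a minimal sketch of the ‘normalizing’ idea. All of the numbers (the 50/50 sample, the 30/70 split, and the 60%/40% support figures) are made up for illustration, not real polling data.

```python
# Toy sketch of post-stratification ("normalizing") a street-interview sample.
# Every number here is a hypothetical illustration, not real polling data.

# The sample came in 50% Chicago / 50% downstate, but the *known*
# population split is 30% Chicago / 70% downstate.
sample = {
    "chicago":   {"share_of_sample": 0.50, "pct_for_candidate_A": 60.0},
    "downstate": {"share_of_sample": 0.50, "pct_for_candidate_A": 40.0},
}
population_share = {"chicago": 0.30, "downstate": 0.70}

# Unweighted estimate: average the sample exactly as collected.
unweighted = sum(g["share_of_sample"] * g["pct_for_candidate_A"]
                 for g in sample.values())

# Weighted estimate: re-scale each group to its known population share.
weighted = sum(population_share[name] * sample[name]["pct_for_candidate_A"]
               for name in sample)

print(unweighted)  # 50.0 -- Chicago is oversampled, so A looks too strong
print(weighted)    # 46.0 -- Chicago's 60% now counts for only 30% of the total
```

    Of course, the same re-weighting goes wrong if the ‘known’ population split you normalize to is itself out of date.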

    But the problem with the above objection is that the Dem-Rep breakdown is not ‘already known’ or fixed. To see the problem, just imagine that the ratio of men to women spontaneously and magically went from 50-50 to 60-40. If you did a survey, took your sample and ‘normalized’ it to the 50-50 ratio, you’d fail to notice the change…even though everyone else would find it rather amazing that 1/5th of the women had suddenly turned into men.

    So either your sampling method was somehow biased…which is odd, since it wasn’t biased before, as evidenced by the numerous elections called correctly by polls. Or there are more people who say they are Democrats today than there were before. That is probably more likely, since people often change what they say they are based on whom they say they are voting for. If someone says they are leaning towards Romney, more often than not they will also lean towards saying they are a Republican.

  28. Thanks for the link to the review, Boonton. Your comment is a great example of what it looks like when an intellectually honest person thinks about this stuff rather than when one picks a conclusion first and then marshals arguments, no matter how stupid, to reach that conclusion.

  29. JA,

    Thanks for the link to the review, Boonton. Your comment is a great example of what it looks like when an intellectually honest person thinks about this stuff rather than when one picks a conclusion first and then marshals arguments, no matter how stupid, to reach that conclusion.

    Let’s see, you accuse me of being intellectually dishonest? On what basis? Hmm? What am I claiming? That polls are inaccurate. Am I claiming the linked reason is the reason? No. You claim, without evidence, that I pick conclusions and marshal arguments afterwards? The only thing I recall suggesting we look into is that poll predictions, when they make mistakes, skew to the Democrat side; since the errors should be symmetric, that suggests a methodological error. Which part of this claim do you disagree with? That poll predictions fall outside their error bounds and differ from election results? You don’t think that happens? Or is it that you don’t think the errors skew to one side?

    The linked post suggests one possible source for a methodological error. You and Mr Boonton don’t find “sampling” a credible source of error. OK. Yet error has been found (I googled around a bit but have not yet found a tally of election results vs predictions and verified that the predictions skew to the Democrat side). OK. Then tell me your hypothesis: what might the methodological error be?

    Boonton,
    Reviewing the original post, the complaint was that polling is inaccurate because the poll pool is self-selected and skewed by any number of demographic factors, muddying the results. Given that “surprises” in elections, i.e., results widely different from pre-election polls, are quite common … the notion that you give such credence to polls is what remains confusing. I’ve suggested in the past that it’s the old physics joke about the drunk looking for his keys (“I’m looking here because that’s where the light is”).

    The problem is how exactly do you ‘oversample’ Democrats? Take a state like Illinois. Suppose 30% of the population lives in or very near Chicago. A polling company sends people out to randomly stop people on the street and interview them. 50% of their sample is from areas around Chicago and 50% from areas far from it. That makes sense…Chicago is denser, so someone spending an hour randomly interviewing people will encounter more people there. The less dense areas will require more pollsters and even then might capture ‘too few’. So the solution would be to weight the sample…add to the ‘power’ of those from outside Chicago and dampen the ‘power’ of those inside to match the *already known* breakdown of 30%-70%.

    Well, your polling suggestion would be spectacularly wrong. Chicago and downstate have voted very differently in every single past election. Treating them as uniform is your first mistake.

  30. Boonton,
    Grist for your “run of the mill polls work well” thesis.

  31. (I googled around a bit but have not yet found a tally of election results vs predictions and verified that the predictions skew to the Democrat side).

    It sounds like you’re setting yourself up for confirmation bias. Why not google first to see just how often (if ever) polls are wrong? For example, were there any notable cases of the polls calling one of the primary elections wrong? How far off were they if so?

    Given that “surprises” in elections, i.e., results widely different from pre-election polls, are quite common

    Polls have been seen to move over time. It’s not surprising to find examples where candidate A was winning two months before an election but not a week before.

    Well, your polling suggestion would be spectacularly wrong. Chicago and downstate have voted very differently in every single past election. Treating them as uniform is your first mistake.

    I think you misunderstood the concept of normalizing the sample. The sample took 50% from Chicago and 50% from downstate. But it is known that only 30% live in Chicago and 70% live downstate. So you attach weights to the sample: downstate gets a relative weight of 0.70 and Chicago a relative weight of 0.30, so you are NOT treating them as uniform. This, of course, assumes you have reliable information that the 70-30 split is correct.

  32. Grist for your “run of the mill polls work well” thesis.

    Makes sense to average the big polls together rather than rely on one single poll. All of the examples cited, though, were polls done well before the actual election, which doesn’t make them in error. Dukakis was leading Bush early in the race, for example, until the Republican ad machine ripped him apart and he didn’t respond in kind. Saying the polling was in error makes sense only if you think votes are fixed and campaigns themselves are worthless in changing the way people vote. Which, of course, raises the question of why rational institutions would blow so many hundreds of millions on campaigns.

    The only exception seems to be the 1980 Carter-Reagan election, which Gallup did indeed call wrong right before the election. You can find their listing of their calls and actual results here:
    http://www.gallup.com/poll/9442/election-polls-accuracy-record-presidential-elections.aspx

    For the most part, I’m not seeing any real pattern in their calls. Most are called correctly (and by correct I don’t mean calling the spread correctly but calling the winner of the popular vote). In 2004 they called a tie instead of calling Bush. In 1980, though, their actual *final* survey called Reagan over Carter. In 1976 they incorrectly called for Ford rather than Carter. The only other error they made was way back in 1948, calling Dewey over Truman.

    So 3 bad calls from the poll. Two incorrectly favored the GOP candidate; one called a tie rather than correctly calling it for Bush.

    A more sophisticated analysis might look at state-by-state polls, construct an electoral college prediction, and compare that to the actual result. I’ll leave that exercise for you to do.

  33. Boonton posted an intelligent synthesis of research he had done with a nuanced result that made it look like he cares more about the truth than confirming his prior beliefs.

    You post an absurdly simplistic conclusion that doesn’t indicate that you did any real research or even really care about the truth. Moreover, it fits into a larger pattern of dismissing all sources of knowledge that might conflict with your worldview. Your reasons vary, but the conclusion is the same. These are the things that you roundly dismiss in a completely unnuanced way: scientists, the media, polls, academia, economists, etc.

    Instead of using the trusted (for good reasons) albeit sometimes flawed sources that honest people the world over rely on for gathering facts, you find excuses to disregard them entirely and focus on finding contrary sources that are not just flawed and not just skewed but blatant propaganda outlets or “opinion” columnists that tell you things you are already inclined to believe.

  34. JA,

    Boonton posted an intelligent synthesis of research he had done with a nuanced result that made it look like he cares more about the truth than confirming his prior beliefs.

    Let me know if I’ve misread this. Boonton’s conclusion was that, contra the thesis of the linked post, self-selected and partial poll samples need not taint the results. I granted his conclusion. I’m unclear on how that makes me look like I don’t care about the truth.

    The hypothesis that was suggested was that poll results when they differ beyond the margins of error do not skew symmetrically. This indicates a methodological error. Boonton did not address this. You haven’t either.

    I later linked a result comparing poll predictions to election results, which suggests that the actual election results rarely fall within the polls’ stated margins of error. Do you disagree with that sort of finding?

    Instead of using the trusted (for good reasons) albeit sometimes flawed sources that honest people the world over rely on for gathering facts

    The only reason people use polls is because they have no other tool, hence the reference to the drunk/streetlight joke.

    focus on finding contrary sources that are not just flawed and not just skewed but blatant propaganda outlets or “opinion” columnists that tell you things you are already inclined to believe.

    Yawn. You fabricate stuff nicely, which oddly enough is your accusation against me. Irony win.

    Boonton,

    (and by correct I don’t mean calling the spread correctly but calling the winner of the popular vote)

    I’m sorry, “not calling the spread correctly” is problematic when the actual result falls outside your prediction’s margin of error. There are two problems with that. The claim is that the error is not symmetric, i.e., methodological or dishonest. And polls being wrong does in fact influence votes, so being wrong asymmetrically prior to the election skews the actual vote. Calling a landslide race (like Nixon/McGovern) is not the point. To be right, your poll should predict the spread correctly within error.

    It sounds like you’re setting yourself up for confirmation bias

    How? Here’s what I’m planning to do. I’m planning to find 2008 by-state predictions (with error) from a variety of poll producers. Then I’m going to compare those predictions to the election results. As a first pass, I’ll tally those predictions which miss the target … binning them by error to the GOP or to the Dem side. How is that going to produce confirmation bias? The hypothesis is that errors have a skew. If the methods are unbiased, there should be no favored side in the errors. Let me know before I get started where you think my method is setting itself up for confirmation bias.

    Again, when I have time, I’m going to try to find a list of state-by-state pre-election predictions (with error) and their results, and see if the hypothesis is right.

  35. The hypothesis that was suggested was that poll results when they differ beyond the margins of error do not skew symmetrically. This indicates a methodological error. Boonton did not address this. You haven’t either.

    Hmmm, I think this is a bit muddled.

    ‘Margins of error’ is not quite the best of terms; ‘confidence interval’ is much better. I’ll give you a result with a 100% confidence interval: Romney will get 50% of the popular vote, plus or minus 50 points.

    The idea of a confidence interval is this: you took a small sample randomly from a huge population. What are the odds that the population average differs from your sample average? Say I take a sample of 100 people from a population of 100 million. Imagine charting the averages of every possible sample of 100 you could draw from those 100 million; since together those samples cover the whole population, you know its true average. Draw two lines that cover 95% of the possible sample averages. That is your 95% confidence interval, and depending on your sample size and its deviation it will probably be something like “plus or minus 3.5 points”.

    So if a poll says Candidate A will get 40% of the vote +- 3 points and he in fact gets 45%, what went wrong? Quite possibly nothing. 5% of the time you’d expect the ‘true average’ to be more than 3 points away from the poll. If you’re doing polls in 50 states, then something like two and a half polls will quite normally be off by more than their margin of error!
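    That “5% of the time” point can be checked with a small simulation. The true support level (40%), the sample size (1000), and the trial count below are arbitrary choices for the sketch, not figures from any real poll.

```python
import random

# Simulate many polls of n people from a population whose true support
# is 40%, build a 95% confidence interval each time, and count how often
# the interval misses the truth. All parameters are illustrative only.
random.seed(0)
true_p, n, trials = 0.40, 1000, 2000

misses = 0
for _ in range(trials):
    votes = sum(random.random() < true_p for _ in range(n))
    p_hat = votes / n
    moe = 1.96 * (p_hat * (1 - p_hat) / n) ** 0.5  # 95% margin of error
    if abs(p_hat - true_p) > moe:
        misses += 1

print(misses / trials)  # roughly 0.05: about 1 honest poll in 20 "misses"
```

    Note that every poll in this simulation is methodologically perfect; the misses are pure sampling luck.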

    That’s why I said it makes sense to look at multiple polls and even average them together. You can also increase your confidence level to something like 99% or even 99.5%. When you do this, though, your range becomes wider. If it expands to +- 6 points, say, it becomes worthless if both candidates are within close range of each other.

    Now the 2nd question is whether or not the polls always skew in one direction. You might be able to do this with the data I gave you…maybe I’ll do it tonight. I think it is interesting to note, though, that at least with one major poll it’s always been right except for 3 times and in 2 out of those 3 times it’s tended to incorrectly favor the Republican rather than Democratic candidate.

    And polls being wrong does in fact influence votes, so being wrong asymmetrically prior to the election skews the actual vote. Calling a landslide race (like Nixon/McGovern) is not the point. To be right, your poll should predict the spread correctly within error.

    Are you saying that a poll is bad if its range is +- 3 points and it calls a race 51-49%? All the plus/minus means is that 95% of the time the ‘true values’ will be anywhere from 54-46 to 48-52. You can make your range smaller by lowering your confidence level. At, say, 60% confidence the result becomes roughly 51-49 +- 1.3 points. Normally in the social sciences the convention has been to use a 95% confidence level as the best tradeoff between costs and benefits. Even if a race is very close, it matters who has the higher average in the sample.

    It seems harder to make the case that polls are influencing the results. If a poll says 52% will vote for Obama when the ‘true’ value is 50%, and as a result 2% of people opt to vote for Obama because they are inclined to go with whoever is winning, then the election will come in at 52% for Obama and the poll will look very right. If 50% vote for Obama then the poll will be bashed as wrong, but there goes your claim that the poll is altering reality!

    How? Here’s what I’m planning to do. I’m planning to find 2008 by-state predictions (with error) from a variety of poll producers. Then I’m going to compare those predictions to the election results. As a first pass, I’ll tally those predictions which miss the target … binning them by error to the GOP or to the Dem side. How is that going to produce confirmation bias?

    If you do it systematically like that you’ll have much less risk of confirmation bias. I thought you were just going to sit around googling things like “polls said Dem victory when election went to GOP”. One place you may run into difficulty: you’ll want to compare polls done at the same time. You may want to just start with a single polling company like Gallup; it has to be one large enough to be doing polls by state and nationwide. You might find some ‘metapolls’ (averages of multiple polls) published from around that time too.

  36. Boonton,
    Calling the “confidence” interval one vs two sigma doesn’t help you. The errors should be symmetric.

    I’d worry about averaging multiple polls. You don’t know if individual poll reporting agencies are getting their data from the same sources, which would be problematic for your weighting.

  37. Boonton,
    I’m just starting. In the wiki there is a list of “national” polls; it gives the polls taken just prior to the election (10/28-11/1). No error is given, so the numbers are mostly useless (it was hammered into me early and often that experimental numbers are meaningless without error estimates and propagation). Let’s assume 2% error. Mr Obama got 52.9% of the national vote and Mr McCain got 45.7% -> let’s call that 53 and 46 … so a prediction for Mr Obama over 55 or under 51 is a polling “error”, and likewise for Mr McCain over 48 or under 44. Of the polls listed, 5 were low for Mr Obama and 6 were low for Mr McCain. I’d call that evenly distributed, except that the sample size is so low as to be basically meaningless. The predictions on both sides were almost always low.
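    A tally along those lines can be sketched as follows. Only the rounded 53/46 actuals and the assumed 2-point band come from the comment above; the poll figures themselves are hypothetical stand-ins, not the actual list.

```python
# Sketch of the tally described above, assuming a flat 2-point error band.
# The poll figures are hypothetical stand-ins, not the actual 2008 list.
actual = {"Obama": 53.0, "McCain": 46.0}  # rounded 2008 national result

polls = [  # (Obama %, McCain %) pairs, made up for illustration
    (50.0, 44.0), (52.0, 46.0), (49.0, 43.0), (51.5, 44.5), (50.5, 45.0),
]
assumed_error = 2.0

results = []
for obama, mccain in polls:
    for name, pred in (("Obama", obama), ("McCain", mccain)):
        diff = pred - actual[name]
        if abs(diff) <= assumed_error:
            status = "within"
        elif diff > 0:
            status = "high"
        else:
            status = "low"
        results.append((name, pred, status))

low_count = sum(1 for _, _, s in results if s == "low")
print(low_count, "of", len(results), "predictions ran low")  # 4 of 10
```

    With real data you would also bin the misses by which party they favored, which is the asymmetry hypothesis under discussion.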

    On to states.

  38. The hypothesis that was suggested was that poll results when they differ beyond the margins of error do not skew symmetrically. This indicates a methodological error.

    Even if your premise is correct (that they do not skew symmetrically), it does not necessarily indicate a methodological error. There are many correlations between voting and turnout, for example. If the weather happens to be bad in an area, it might reduce the turnout of D voters more than R voters. In fact, most things that reduce turnout overall tend to reduce D turnout disproportionately. That has nothing to do with polling methodology (except for the “likely voters” part).

  39. I’d worry about averaging multiple polls. You don’t know if individual poll reporting agencies are getting their data from the same sources, which would be problematic for your weighting.

    Different sampling methods shouldn’t matter here; in fact it should make for a better result when you average everything together. It will mess things up if the different polls you are averaging have different target populations (say one is targeting likely voters nationwide, another is looking at general popularity among all people, voters and non-voters alike).

    I’d call that evenly distributed, except that the sample size is so low as to be basically meaningless. The predictions on both sides were almost always low.

    So I went back to about JFK and averaged Gallup’s under/overages for Dems and Republicans. I get that on average Dems were overstated by 0.75 points and Republicans understated by 0.57 points. The standard deviation was 2.4 for Dems and 1.7 for Republicans.

    This would seem to lean towards the hypothesis of a rather slight benefit to Dems and loss to Rep’s by polls…but not enough to swing most elections (in other words cause the pollster to call the election for Dems when it’s Republicans who really win).

    Where I think this ray of hope for ‘biased polls’ will falter, though, is in hypothesis testing. The average errors are probably close enough to zero, and the deviations large enough, that you’d fail to reject the hypothesis that the true mean error on both sides is really zero.
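    The under/overage averaging and the hypothesis-testing worry can be sketched like this; the per-election error figures are invented, chosen only so the mean lands near the 0.75 points mentioned above.

```python
import statistics

# Sketch of the tally described above: for each election, record how far
# the poll's number for the Democrat was from the actual result (poll
# minus actual, in points). These figures are made up for illustration.
dem_errors = [0.8, -1.5, 2.9, 0.3, -0.6, 3.1, -2.2, 1.4, 0.9, 2.4]

mean_err = statistics.mean(dem_errors)   # average overstatement
sd_err = statistics.stdev(dem_errors)    # spread of the errors

# Rough t-statistic for "is the true mean error actually zero?"
t = mean_err / (sd_err / len(dem_errors) ** 0.5)

print(round(mean_err, 2), round(sd_err, 2), round(t, 2))  # 0.75 1.8 1.32
```

    A t-statistic that small (well under 2) would indeed fail to reject the hypothesis that the true mean error is zero, which is the point being made.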

    The national vote might be pulling some noise into the results too. Consider states like NY and CA where the Democratic candidate often sweeps. I could see a case where Obama has something like 65% of NY voters, but given that all you need to get NY’s electoral votes is 51%, 5% of voters decide to stay home since their votes don’t change the outcome, producing a victory of only 60% for Obama in NY. This doesn’t cause the actual election to be called wrong but could cause Obama’s actual realized votes to be overstated by the poll.

    It would take more work, but it would possibly be more interesting, to average multiple state polls and see if they produce a different electoral count than the actual election did.

    JA

    If the weather happens to be bad in an area, it might reduce the turnout of D voters more than R voters.

    This would imply that D votes should have a higher deviation than R votes. If 99.9% of R voters will show up through rain or snow or hurricane, but D voters are more fickle, then you’d see higher variation in D votes…sometimes the polls widely overestimating them, other times getting them about right. Does it matter, though? If some NY voters stay home on a cold or rainy day, it might cause the Ds’ national numbers to be low but won’t alter the election itself.

  40. Boonton,
    OK. A first pass at 50 states. In 2008, in the Real Clear Politics averages of polls taken just before the election, the skew was in favor of Mr McCain. In those states that Mr McCain actually won, only three averages over-estimated his margin; the rest were under. But in those states which Mr Obama won, which of course were more numerous, under-estimation of his margin was likewise the rule. In general the polls called the race as closer than it really was. That is their methodological skew, not to one party or the other. In 2008. In a small sample. I was struck in some cases by the “luck”, as it were, of an average coming close when the four numbers being averaged differed by more than 10 points.

    It would take a research project of its own to investigate, say, all elections compared to polls over the last 20 years. That would be interesting, and I think something which polling firms would not like very much, as in general the results were horrible. Individual polls (not averaged across polling companies) gave wildly different numbers.

    When you watch TV reports of polls, recalling from election nights, they report error. But not really. The error they report assumes uncorrelated random error from a very large (tending to infinite) population. Random error would indicate that the error bar size scales as the inverse square root of the sample size: 10k questioned -> 1% accuracy. This is what they report; what they fail to mention is that this assumes (falsely) none of the “correlations” JA misnames above. Those correlations are really the crux of the problem. The poll is fundamentally inaccurate because of flaws in matching “turnout”. Yet 6-10% inaccuracies were common. That indicates non-random (methodological) flaws.
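    The inverse-square-root scaling is easy to verify under the stated (and questioned) assumption of pure uncorrelated random sampling:

```python
# Reported polling "error" under pure random sampling scales as 1/sqrt(n).
# For a proportion near 50%, a 95% margin of error in percentage points is:
def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in points, assuming uncorrelated random sampling."""
    return 100 * z * (p * (1 - p) / n) ** 0.5

for n in (100, 1000, 10_000):
    print(n, round(margin_of_error(n), 2))
# 100 -> 9.8 points; 1000 -> 3.1; 10000 -> ~1.0 (the "10k -> 1%" rule of thumb)
```

    The 6-10% misses described above are far larger than the ~1 point this formula allows, which is exactly the argument that the errors are methodological rather than random.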

    Look at your Illinois example. Your rural/Chicago split based on demographics assumes that the “right” or representative neighborhoods in Chicago, as well as the towns and localities in the outlying and rural areas, are chosen well. Some towns, like Champaign/Urbana with its very large university, aren’t going to match small towns and farm communities like Sterling. The point is that where in the city and in the country you choose your sample could very easily screw up your prediction. Which is likely why the polls did so poorly in a relatively not-very-close election like the last one.

    JA

    LOL. Identifying (or failing to identify) those who go to vote and those who respond to polls … these and other errors are the methodological errors. The process is rife with them.

    That has nothing to do with polling methodology (except for the “likely voters” part)

    No. That has to do with the futility of the whole polling enterprise.

  41. The point is that where in the city and in the country you choose your sample could very easily screw up your prediction. Which is likely why the polls did so poorly in a relatively not-very-close election like the last one.

    A principle of sampling would be that, in theory, the entire population has an equal chance to be in the sample. Asking some random people walking between classes at a university, then, clearly does not fulfill that requirement unless your study population is the university.

    Calling random phone numbers might be better, assuming all voters can be reached by phone or phoneless voters are a trivial portion of the population. If you’re putting pollsters out on the street you run into the problem of city vs. country and so on, but that can be compensated for by using weights. That requires updated, hard knowledge of demographic breakdowns.

    It is interesting that despite all the problems you would think the process has, it actually does a very good job calling elections, which I think is the primary service the customers of polls want done correctly.
