A Spicy Stew of Economics, Politics, Data, Food, Carpentry, etc.
Worthless Wine Reviews
I'm not much of an expert on wine, but I thought I'd pass on one of the few good pieces of advice I've ever read: buy the importer, not the wine. That is, you're unlikely to go wrong if you buy a wine distributed by a good importer. There's a list at the end of the Slate article in the link. I can personally vouch for Robert Kacher's $10-$12 wines from the south of France.
Sure, you can read wine reviews in a newspaper or on the web, but what good will it do you? You won't remember the name of the wine when you go to the liquor store, and if you write it down, the liquor store won't have it anyway.
I once made a serious effort to track down a great wine I had in a New York restaurant. It turned out to be sold in one store in NYC, and was impossible to find in DC, even if I'd been willing to special-order a case.
Of course, with wines produced in large quantities, reviews are worthwhile, so let me point out that Zardetto Prosecco (a sparkling, dry white wine) is great and widely available. As I learned at DC's second-best pizza place (Matchbox is the best), it pairs really well with pepperoni pizza.
I've updated my blogroll, attempting to come up with a list closer to what I'm currently interested in. The most notable changes are the addition of a "Sixth Sense" section (science blogs); the replacement of Pierre Carion, who I suppose has gone back to France, with two French-language economics bloggers; and a bunch of new economics blogs. The most interesting of the last group is Jim Hamilton, who looks like he's as good at econ-blogging as he is at time series econometrics.
Oh, and I've long since lost interest in what Glenn Reynolds has to say.
Traveling to France? Don't Forget the Soap.
What's up with cheap hotels that don't provide soap?
This is a common practice in France and other European countries. Last week I stayed in a bed & breakfast in rural France that was very nice, and a real bargain at 37 euros ($45) a night. It was relatively low-end, I suppose (no TV, no 24 hr front desk, 3 flights of stairs to my room) but they provided a clean, attractive room with a private bath, and it was certainly no youth hostel. The one guest I chatted with seemed to be an old-money, wealthy Brit. But no soap.
As a good economist, I've got to consider the possibility that the hotel is striking an efficient bargain with its guests. In the U.S., lots of half-used hotel soap bars are probably wastefully thrown out at the end of a stay. Suppose that the little soap bars cost a dollar a day, but that European hotel guests can supply their own soap, taking it home at the end of their stay, for only fifty cents a day. Then an economist might speculate that the typical American is happy to pay the extra half-dollar for the convenience, while Europeans prefer to pack a bar of soap and save a few euros. The average European is poorer than the average American, after all.
But as I've said, the guests at this "chambre d'hotes" didn't seem particularly hurting for money. Further, why not offer a choice? For American hotels where the default is to provide little bars of soap, this might be difficult, requiring the maids to be informed of the different preferences of each guest: high "transaction costs" might prevent American hotels from offering fewer services to their guests who'd prefer that. I guess that's why I can't get a discount by offering to re-use unlaundered towels a second time, or even a third.
But soapless European hotels could just sell bars of soap at check-in (and how about little bottles of shampoo as well?). I'd certainly buy one, even at an inflated price, rather than have to run out to the convenience store to pick up a bar of my own. Is it possible that cheap European hotels actually do this, but no one's told me?
Lancet Quiz: The Answer
Someone has finally attempted my quiz! Commenter Kevin writes:
My guess is that Apfelroth's reasoning is along these lines: in a war, refugees move away from potential flash-points, so the population is not geographically distributed as it would be for a census. So if your sampling method derives from census data, you will be over-sampling violent areas.
No, that's not it. Kevin's critique
has previously been raised (very mildly) by Daniel Davies, one of the Lancet study's defenders. But that isn't what Apfelroth is saying, and in my opinion the reliance on old Iraqi census data is probably only a small problem with the Lancet study.
The problem is this. Within cities and villages, the Lancet researchers drew a rectangle around the village on a map, divided the rectangle up into a grid of hundred-meter squares, and chose one of the squares at random. Each square had an equal chance of being chosen even though the squares might have had vastly unequal populations.
Apfelroth, who seems to be knowledgeable about surveys, knows that if you draw a random sample of neighborhoods with unequal populations, you have to re-weight the sample to take that into account. Apfelroth is saying that, given the available data, this would have been very hard: "it seems quite likely that the grid rectangles created by driving around in a war zone were much smaller than the original census tracts used in the 'cumulative population lists'."
In fact, Apfelroth is being overly generous here, assuming that the Lancet researchers did their best with this crucial step (weighting for neighborhood population). Either he is giving them the benefit of the doubt, or he's failed to notice that the Lancet researchers did not take this vital step at all: they weight all neighborhoods (100m grid squares) equally. Because they weight neighborhoods with different populations equally, their sample is much more likely to choose people from low-population neighborhoods: neighborhoods near parks or rivers, neighborhoods on the fringe of the city.
Anyone who's ever seen a Red/Blue map of Presidential voting
in US counties will understand this phenomenon. Bush won the vast majority of the counties in the US (something like 80%) covering the vast majority of the country's land area (maybe 95%, I'd guess). Suppose you took a poll using the Lancet grid method. First you draw a rectangle around the continental US, then you choose grid squares within the rectangle at random, and finally you survey 30 people in each selected grid square. You'd end up with a lot of grid squares with no one living in them at all (e.g., oceans and deserts). And the vast majority of people interviewed would be in the rural, "Red" areas. A survey like this would likely find that 95% of respondents voted for Bush even though only 51% of the country actually did.
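To make the arithmetic concrete, here's a toy calculation of what equal-weight grid sampling does when square populations differ. The numbers are entirely made up for illustration (they're not the Lancet's data or real election returns): many small "rural" squares leaning one way, a few big "urban" squares leaning the other.

```python
# Toy model: 100 grid squares. Low-population squares lean heavily "Red";
# high-population squares lean the other way. All figures are invented.
squares = []
for i in range(100):
    if i < 80:                       # 80 low-density squares
        pop, red_share = 100, 0.70
    else:                            # 20 high-density squares
        pop, red_share = 5000, 0.45
    squares.append((pop, red_share))

total_pop = sum(p for p, _ in squares)

# True population-wide share: weight each square by its population.
true_share = sum(p * s for p, s in squares) / total_pop

# Grid-method expectation: every square is equally likely to be chosen
# and contributes the same number of interviews, so the estimate is the
# simple, unweighted average across squares.
grid_share = sum(s for _, s in squares) / len(squares)

print(f"true share:        {true_share:.3f}")   # 0.469 (big squares dominate)
print(f"grid-method share: {grid_share:.3f}")   # 0.650 (small squares dominate)
```

The grid method lands nowhere near the truth, because the 80 tiny squares get as much weight as the 20 big ones. That's the distortion at issue, whichever way it happens to point in Iraq.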
Now the Lancet study isn't quite this bad. It was only after they'd chosen a set of cities, towns, and villages to survey, using a reasonable method that gave a larger chance of selection to larger towns, that they began drawing rectangles and grids, and sampling land-area rather than people.
This is a huge problem for the Lancet survey if fighting is heavier on the fringes of cities than in the center, as I think it is. It will cause an overestimate of the death toll (I think a large overestimate). My commenter, Kevin, thinks that fighting is more common in city centers. If he's right, the Lancet study is still biased, although towards finding too small a death toll. It's hard to be sure who's right about where most fighting has occurred, but I'll make my case in a later post.
There seems to be a fad among right-wing authors to write anti-Europe and especially anti-France books. But whenever I travel in Europe I'm always shocked by how dangerous it is, which I attribute to a lack of trial lawyers. In this regard, it's a right-wing paradise!
In Europe, people ride bikes without helmets, signs inform the innocent tourist of beautiful views atop cliffs without guard rails, narrow twisty staircases are unlighted, bread slicing machines are self-serve. Today I saw a Belgian fire escape, which was a ladder bolted to the side of a building. No doubt it was much safer than all the adjoining buildings, which lacked fire escapes entirely.
You may say, big deal, only fools would walk off a cliff, get their finger caught in a bread-slicing machine, or smash their head in a fall from a bike. But during my fairly brief time in Europe, I've seen a friend trip and fall flat on his face at an unexpected step in the middle of a long, unlighted hallway, and later ride a bike at high speed into a chain stretched across an unlighted road entering a campsite. Myself, I've smashed into a closed glass door at the bottom of a spiral staircase.
No one was seriously hurt in any of these mishaps, and no doubt many will say that this only proves that I and my companion are fools. Indeed, after his spills, I explained to my friend that he had behaved extremely foolishly. But, in hindsight, I think I would rather have dangerous areas lit up, safe egress from burning buildings, and a bike helmet protecting my head -- even at the cost of government intervention and tort actions by aggressive trial lawyers.
So, I'm in Europe, investigating how the lifestyle compares to Mississippi. Specifically, I'm in Leuven, Belgium, a university town of 100,000 or so about 30 minutes from Brussels. Leuven has two . I highly doubt that the same, or anything similar, can be said of anywhere in Mississippi.
Lancet: Extra Credit Assignment
I've occasionally debated the Lancet study with its blogging defenders for some months now. It's been frustrating, and I've finally figured out why. The Lancet defenders, such as Lambert, know something about the kind of statistics that starts "assume you have a random sample," but aren't particularly knowledgeable about the actual mechanics of obtaining a random sample.
The most egregious example is Lambert's casual dismissal of the cogent and well-informed criticisms of Professor Stephen Apfelroth of the Albert Einstein Medical School. Lambert dismisses Apfelroth's criticisms as "just speculations" that the sampling was not done correctly. In fact, Apfelroth's criticisms are hardly speculations: they're the expert opinion of somebody who obviously understands survey methods. They could be taken straight from a textbook on sampling. Here, Lambert is just sneering from ignorance, and I doubt that he understands Apfelroth's criticisms, which are indeed pretty terse.
One of Apfelroth's more obscure, but more important criticisms is this:
When a town or village was selected from the "cumulative population lists for the Governorate", the survey team then "drove to the edges of the area and stored the site coordinates"....it seems quite likely that the grid rectangles created by driving around in a war zone were much smaller than the original census tracts used in the "cumulative population lists".
If you understand survey methods, you'll understand why it is a huge problem that the grid rectangles were much smaller than the original census tracts. And if you think about this issue further, you'll realize that the Lancet method is highly likely to oversample rural areas and the fringes of cities. It is precisely these low-density areas where the fighting was most intense.
So here's the extra credit question. If you get it right, I promise to take your criticism or praise of the Lancet study very seriously. Why does the discrepancy between the grid rectangles and the census tracts imply that the Lancet study oversamples low-density areas?
UPDATE: So, as of June 10, no one's attempted my quiz, even though I know plenty of people have come over here from the Lancet discussion on Tim Lambert's blog. Come on people! I'll settle for an explanation of what Apfelroth is talking about!
Lancet's 100,000 Deaths in Iraq Vindicated? Nope.
Before the presidential election, the medical journal Lancet released a study finding 100,000 excess deaths
from the Iraq war. Since then, the UN and the Iraqi government have released a new study (the ILCS survey) with a much larger sample size, finding about 24,000 deaths
from military action. This has led to much uninformed crowing from the right, charging that the Lancet study has been refuted.
The fact is, the two numbers are not comparable. The Lancet figure is for all excess deaths, and is based on higher post-war rates of violent crime, disease, infant mortality, and so on. The ILCS figures ask specifically about deaths due to combat.
But now Tim Lambert, an Australian computer science professor who delights in denouncing bad science and has written over 40 blog entries defending the Lancet study, has done some new calculations based on the Lancet data. And he's crowing that the Lancet study has been "vindicated". Lambert calculates that 33,000 deaths occurred as a direct result of fighting in Iraq. Since the Lancet study covers a slightly longer period (18 months vs. about 14 months), Lambert concludes that the numbers suggest about the same death rate. But Lambert's claim of "vindication" is just as flawed as the earlier right-wing debunking.
First, the ILCS counts include Iraqi soldiers killed during the war, while the Lancet counts include only civilians: so say the authors of the two studies. More importantly, Lambert's figures exclude Falluja, where most of the deaths in the Lancet sample occurred. If Falluja were included, the Lancet figure would be 189,000, almost an order of magnitude higher than the ILCS numbers.
Now, I don't deny that there are good reasons to exclude the Falluja cluster. But if you do so, you can't claim to have an estimate of war-related deaths in Iraq. You have an estimate of war-related deaths in areas without intense combat. In a letter to the editor
of a British newspaper, the Lancet authors give a good description of the issue.
Our study found that violence was widespread and up 58-fold after the invasion; that from 32 of the neighbourhoods [i.e. excluding Falluja] we visited we estimated 98,000 excess deaths; and that from the sample of the most war-torn communities represented by 30 households in Fallujah more people had probably died than in all of the rest of the country combined.
Fallujah is the only insight into those cities experiencing extreme violence (ie Ramadi, Tallafar, Fallujah, Najaf); all the others were passed over in our sample by random chance. If the Fallujah cluster is representative, there were about 200,000 excess deaths above the 98,000.
Perhaps Fallujah is so unique that it represents only Fallujah, implying that it represents only 50-70,000 additional deaths. There is a tiny chance that the neighborhood we visited in Fallujah was worse than the average experience, and only corresponds with a couple of tens of thousands of deaths. We also explain why, given study limitations, our estimate is likely to be low.
I hope I'm not belaboring this obvious point: excluding the part of the sample where the most war-related deaths occurred means that the Lancet's 100,000 and Lambert's 33,000 are likely to be underestimates. Indeed, in other contexts the Lancet's defenders on the web have emphasized this point: the Lancet figures are conservative, probably very conservative. Lambert himself
has written that "excluding Falluja biases the results downwards."
Now Lambert appears to have changed his mind. I've raised this point with him
on his blog, and as near as I can figure out, his reply is that Falluja was uniquely violent, or that the period covered by the Lancet survey after the ILCS survey had ended (April or May 2004 to September 2004) was uniquely violent. So he claims that the figures from the two surveys are comparable even when the Falluja data is dropped from the Lancet survey.
In fact, both surveys cover time periods with similar amounts of fighting. Both surveys exclude the second round of fighting in Falluja during November 2004. Both surveys include the intense fighting during the conventional war itself (March-April 2003). In general, there has been a lot of fighting in Iraq, at many times and places.
According to the Iraq Body Count, 7,981 civilians were killed from the start of the war until March 2004, a time period covered by both surveys. During intense fighting in Baghdad, Falluja, and other parts of Iraq in April 2004, another 1,165 civilians were killed. These deaths are covered in the Lancet survey, but some were probably missed by the ILCS, which was in the field from March 22, 2004 to May 2004. Finally, 1,696 civilians were killed in fighting from May 2004 to September 2004, a time covered only by the Lancet study. The Iraq Body Count figures are based on newspaper reports, and so probably miss many deaths. But there's no reason to doubt that they get the time pattern of deaths about right.
Some calculation shows that the ILCS survey covers about 3/4 as long a time period as the Lancet study, during which time about 3/4 of the deaths occurred. Although there was a lot of fighting in April 2004, the next 3 or 4 months were relatively quiet. So Lambert's claim that "the intense fighting was mostly after the ILCS was conducted" is simply false.
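For what it's worth, here's the back-of-the-envelope version of that calculation, using the IBC figures quoted above. The month boundaries are my own rough reading of the two surveys' field dates, so treat the exact percentages loosely.

```python
# Iraq Body Count tallies quoted above (newspaper-based, so undercounts,
# but presumably with roughly the right time pattern).
pre_april_2004 = 7981   # start of war through March 2004: covered by both surveys
april_2004     = 1165   # April 2004: Lancet, only partly ILCS
may_sept_2004  = 1696   # May-September 2004: Lancet only

lancet_period_total = pre_april_2004 + april_2004 + may_sept_2004  # 10,842

# Share of Lancet-period deaths that clearly fall inside the ILCS window.
death_share = pre_april_2004 / lancet_period_total
# Share of the Lancet's ~18-month window that the ~14-month ILCS covers.
time_share = 14 / 18

print(f"deaths inside both windows:  {death_share:.0%}")   # 74%
print(f"ILCS share of Lancet window: {time_share:.0%}")    # 78%
```

Both fractions land near three quarters, which is all the claim in the text needs: deaths accumulated at roughly the same rate inside and outside the ILCS window.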
And that leaves us back where we started. Lambert's 33,000 figure of war-related deaths in a narrow population (excluding areas of intense combat and the deaths of soldiers) just isn't comparable to the ILCS figure of 24,000 for the whole population. If anything, the similarity of these numbers suggests that something went very wrong with the Lancet study. It certainly doesn't "vindicate" it.
[The Iraq Body Count figures can be found here (for the first 50 days of the war) and here (for the period since then).]