On the topic of labor reform, this proposal for "open-source unionism," by Joel Rogers and Richard Freeman, seems like a perfectly good idea:
[Right now,] workers typically become union members only when unions gain majority support at a particular workplace. This makes the union the exclusive representative of those workers for purposes of collective bargaining. Getting to majority status… is a struggle. The law barely punishes employers who violate it, and the success of the union drive is typically determined by the level of employer resistance. Unions usually abandon workers who are unsuccessful in their fight to achieve majority status, and they are uninterested in workers who have no plausible near-term chance of such success.
Under open-source unionism, by contrast, unions would welcome members even before they achieved majority status, and stick with them as they fought for it--maybe for a very long time. These "pre-majority" workers would presumably pay reduced dues in the absence of the benefits of collective bargaining, but would otherwise be normal union members. They would gain some of the bread-and-butter benefits of traditional unionism--advice and support on their legal rights, bargaining over wages and working conditions if feasible, protection of pension holdings, political representation, career guidance, access to training and so on. And even in minority positions, they might gain a collective contract for union members, or grow to the point of being able to force a wall-to-wall agreement for all workers in the unit. … Joining the labor movement would be something you did for a long time, not just an organizational relationship you entered into with a third party upon taking some particular job, to expire when that job expired or changed.
That seems spot on, or at least a step in a spot-on direction. As mentioned before, it seems very unlikely that either organizing or political action will boost labor density by a significant amount, and the latest intra-labor feuding has a bit of a more-heat-than-light quality to it, in that respect at least. Taking the historical view, it's usually been new innovations, of just the sort Freeman and Rogers are discussing, that have led to "spurts" in organizing. Now if a split in the AFL-CIO can make it more likely that innovations of this sort will appear—competition being the mother of invention and whatnot—then so much the better.
I've been reading Barry T. Hirsch's paper, "What Do Unions Do for Economic Performance?," and while it's probably not the last word on the matter, some of his findings seem worth sharing. According to Hirsch, most economic studies show that, on average, the effect of unions on productivity is zero, although this varies from sector to sector. (Unions do good things for productivity, such as improving a firm's personnel policies, and bad things, such as imposing restrictive hiring rules.) Unions also lower corporate profits, as expected, though they do not seem to have any effect on business failure rates. Some finer points:
1. The ability of unions to win benefits for their members depends, predictably enough, on the degree of competition facing both unions and workers. A company in a fairly competitive, largely nonunion industry can't just pass wage gains on to consumers in the form of higher prices. This explains why unions have taken firmer hold in less competitive settings, like the public sector.
2. Positive union effects on productivity are highest in sectors where competitive pressure exists—because management needs to respond to an increase in labor costs by organizing more efficiently, etc. But these are also the sectors in which there is the least scope for union organizing and wage gains. (Because of #1.)
3. Says Hirsch: "This implies that steady-state union density in the U.S. private sector must remain small, absent a general union productivity advantage. By the same token, introduction of unions or the strengthening of other instruments for collective voice into highly competitive sectors of the U.S. economy is unlikely to have large downside risks for economy-wide performance."
4. Some theories have held that the union wage premium will provide an incentive for employers to upgrade the skill level of the work force, thus increasing productivity. But, Hirsch says, "empirical evidence for skill upgrading is weak." (One theory for this: employers may reason that if they were to train their workers, the union would just bargain for even higher wages next time around, thus restoring the premium, so the employers decide not to bother.)
5. "Union wage increases can be viewed as a tax on capital that lowers the net rate of return on investment," says Hirsch. Unionized firms tend to reduce investment in physical and innovative capital, such as R&D, leading to slower growth in sales and employment.
6. There is some evidence that union companies in the U.S. have performed poorly relative to nonunion companies, which has led, to some extent, to a shift of production and employment away from the former and toward the latter. Since unions tend to lower profitability, this could explain the decline of unionized industries in the 1970s and 1980s. But Hirsch claims this is far from settled. (See #7.)
7. Importantly, Hirsch emphasizes that many of these empirical findings don't cleanly identify the causal relationships at play. Do unions really cause X effect on productivity? Sometimes it's hard to say. For instance, older plants tend to have lower productivity, but older plants also have, on average, higher union density. Plus, union status is often an endogenous, rather than random, variable. Nor is it clear, moreover, that union effects on, say, sawmills can be generalized to union effects in the industries of the future.
So what does all this mean? If Hirsch is right, union representation is likely to continue declining in the private sector. Firms just won't happily follow, say, the Costco model and expect that collective bargaining will enhance productivity or profits. So labor will have to rely, increasingly, on the government for support. Hirsch thinks we should be looking for more flexible ways to give workers representation and participation: regulations that capture the considerable benefits of giving employees a greater voice in the workplace, while limiting the economic losses that come with "excess" union rent-seeking. Maybe. I get more than a little skeptical, though, when he starts touting an alternative form of organizing called "conditional deregulation." Beware the Greeks bearing gifts and all...
Colbert King's op-ed today—laying out the case against the use of racial profiling against "young Muslim males"—hits a bunch of the really crucial notes, I think, but it's possible to expand this argument a bit. First, a few misconceptions to clear up. Racial profiling probably isn't motivated by bigotry. Many minority police officers support the practice, after all, because they see it, sensibly enough, as a statistical tool. Moreover, for some purposes, the statistical tool can work. Racial profiling can, in theory, offer a more cost-effective way to attack a certain problem. If, say, most people committing X action are young Colombian men, and if a reliable way to identify young Colombian men exists, then racial profiling will help reduce X action. Clear enough, and you can see why so many people find the concept so attractive. Nevertheless, it's still an awful idea and ought to be abolished.
One of the big worries is that defenders of racial profiling want the practice to evade the "strict scrutiny" that generally gets applied to various forms of racial discrimination. Police departments and other proponents argue that, in racial profiling, they aren't taking action based solely on race—which is true—but rather using race as just one of a variety of factors to identify suspects. As such, they say, "strict scrutiny" shouldn't apply. Indeed, many mayors and governors will denounce racial profiling when "race is the only factor," but approve of other types, as if this makes all the difference in the world. It doesn't. In the real world, most discrimination doesn't take race as the only factor. Let's say I preferred to associate only with white people (or hire only white people, or admit only white people to my grad school program), but I would make exceptions for blacks and Hispanics who attended Ivy League universities. Clearly I'm discriminating by race, although race isn't the only factor in my decision. Point is, when we start approving of those types of racial discrimination in which race is just "one of many factors," we start heading down a troubling slippery slope.
That's the conceptual problem. The practical problems with racial profiling are more straightforward. For one, it antagonizes the group of people being profiled. One might argue that in the case of anti-terrorism profiling, the targets here are relatively small in number. That seems plainly false. (Plenty of people ride public transit or fly on airplanes, after all.) Even a "Muslim-looking male" who never boarded an airplane would still know full well that if he wanted to do so, he would likely be stopped, and that in itself could cause resentment. Nor is getting pulled out of a line at an airport or subway station, as a result of racial profiling, just a "minor inconvenience," since the person being profiled knows all the while that it's not just this one time he's being yanked aside; rather, he's likely to have to go through this process many more times in the future.
Now, as it happens, in the case of, say, young Muslim men in America, I'm not sure whether the resentment that would flare up as a result of racial profiling would necessarily "create" new terrorists. Maybe not. (That still wouldn't make it right, just less dangerous.) But it might piss off people who would otherwise help in a terrorism investigation, or call in a tip to the police, or whatnot. Significant? It could turn out that way.
By the way, how many people would be affected if the police start profiling "young Muslim men"? A lot. A whole lot. As King notes, the category of "Muslim-looking" men encompasses a wide, wide swath of races and nationalities, from Nigerians to Iranians to Indonesians. (Those three don't look anything alike.) Then you have your Central Asian Muslims, who often resemble the Chinese more than they do Mohammed Atta, not to mention your Chechens and other Caucasian Muslims, who can often pass for white. And so on. A lot of different people are getting profiled and antagonized here. Then we have to deal with the fact that many of the people we might think are Muslims probably aren't; around three-quarters of all Arab-Americans follow Christianity, for instance.
Meanwhile, even if racial profiling isn't motivated by bigotry, over time the practice would very likely create racial tension, or bigotry, among law enforcement officers, who, after all, would be out there day after day looking suspiciously at every Arab or North African they see. (Not to mention that they would likely have, over time, many a tense confrontation with "Muslim-looking men" who resent being targeted.) Pretending that these security personnel could continue to operate in a race-neutral manner day after day seems extremely naïve to me. The practice would also encourage civilians to view anyone they considered a "young Muslim male" suspiciously, which would further inflame racial tensions. How could it not? To top it all off, as King notes, police officers engaged in racial profiling will be far more likely to overlook white terrorists, who are as old as the republic itself, and include such stalwarts as Eric Rudolph and the dude pictured to the right. A police officer focusing hard on what that swarthy fellow standing in line is up to will almost certainly miss suspicious behavior by the white dude with the crew-cut and bulky backpack. Nor, for that matter, does it seem like it would be terribly difficult for a "young Muslim male" to pass himself off as, say, Hispanic or Greek (or white, if from the Caucasus region) or whatever if he really wanted to pull off some bombing or other.
But maybe not. If cops were to use racial profiling against what they thought were young Muslim men, it might very well reduce terrorist incidents. Who knows? Nevertheless, even if that were the case, the loss of this statistical tool would have to be the price we pay for racial equality. As with all things, trade-offs sometimes become necessary. If preserving racial harmony means that the DHS needs to spend an extra couple billion dollars on some other, less cost-effective, security measure, well, fine. That sounds like a worthwhile trade-off to me. Now sure, one can imagine any number of "ticking-bomb" scenarios to question these principles—say that we had impeccable intelligence that a group of four Arab men were planning to bomb the New York subway tomorrow, but didn't know who they were; what then?—but clever hypotheticals like these don't disprove the general rule here.
In the London Review of Books, Eric Hobsbawm discusses the history of the family in the 20th century, including some fun trivia: "How many people knew, for example, that up to the middle of the 20th century by far the highest rate of divorce ever recorded—up to 50 per cent—was to be found among nominally Muslim Malays, that there is less gender bias in domestic work in Chinese cities today than in the USA, that the highest divorce rates in the second half of the 20th century were to be found among the main protagonists of the Cold War, the USA and Russia, or that the most sexually active Western people are the Finns?" Well, I didn't. Nor have I ever really thought about the fact that the Russian Revolution did more to bust up patriarchal family structures than perhaps any other event—as, for instance, in the way that decades of Communism brought the Balkan zadruga, the patriarchal extended family, to an end. Very much worth reading.
Oh yeah, the post below reminds me of something a colleague and I were chatting about the other day. It seems to me that one of the reasons Democrats, especially so-called "New Democrats," have been snuggling ever closer to the financial industry over the past decade is that it's one of the few corporate sectors that doesn't conflict in an obvious way with any other major liberal interest group. Democrats have to get corporate donations from somewhere, after all, and the finance industry, happily, doesn't usually clash with labor unions. It's not part of the military-industrial complex. It doesn't pillage the environment. It screws over ordinary voters only in opaque and non-obvious ways. What's not to like? Indeed, it's a pretty natural ally for a party in dire need of campaign cash.
The downside is that any party that jumps in bed with the financial sector is often going to end up backing the sorts of anti-progressive measures—from the recent bankruptcy bill, to financial deregulation, to inflation targeting by the Fed—that all strike me as far more malignant than, say, an energy company donating to Tom DeLay in exchange for the right to pollute or pour MTBE into our drinking water or whatever. In some ways, I'd feel better if, say, Hillary Clinton was getting her money from ExxonMobil and Halliburton, rather than Citigroup and MetLife. (Okay, probably not, but you get my point...)
When did "failure" shift from a term denoting something that happened to someone into a term denoting who someone was? Here's one theory:
Credit, speculation, debt: the spreading net of confidence created a need for confidential reporting on men's trustworthiness. The nation's first credit-rating agencies opened in the 1840s in New York, close to the banks and merchants who needed the information. The agencies invented a lexicon of succinct ratings to sum up a man: "dead beat" (when suing for payment was as pointless as flogging a dead horse), "bad risk," "a great loser," "good for nothing"--or the triumphant "A no. 1." Comically useless in grasping the value of any life, such judgments nonetheless registered as probity in a society fixated on the stark oppositions of credit and debit, gain and loss.
That's from Christine Stansell's review of Born Losers: A History of Failure in America. (Bugmenot can help prise open the firewall if you don't have a subscription.)
Via Justin Logan, a somewhat old and thoroughly excellent Slate article about the Ultimate Fighting Championship. I didn't know that far fewer people have died in UFC (zero) than in boxing (many), but it makes sense; most of the fights I've seen are vaguely disappointing for anyone (as I was) expecting lots of carnage. Much more, as Plotz says, like sex. Riveting, nonetheless—and much less barbaric than the romanticized and mostly consequence-free violence you see in the movies. But here's something I wonder about:
If anything, ultimate fighting is safer and less cruel than America's blood sport [i.e. boxing]. For example, critics pilloried ultimate fighting because competitors fought with bare knuckles: To a nation accustomed to boxing gloves, this seemed revolting, an invitation to brain damage. But it's just the reverse: The purpose of boxing gloves is not to cushion the head but to shield the knuckles. Without gloves, a boxer would break his hands after a couple of punches to the skull. That's why ultimate fighters won't throw multiple skull punches. As a result, they avoid the concussive head wounds that kill boxers--and the long-term neurological damage that cripples them.
Okay, but hockey fights often involve multiple punches to the head, thrown without gloves. Not all of them, of course—plenty get by with a few poorly aimed swings and end in a lot of hugging and jersey-grabbing—but a player who fights a few times might land a good five or six head punches in a game. Yet very few broken hands! Is that because the helmet cushions the blow? I guess helmets are kind of softer than a skull. And face punches probably help too.
It warms the heart to see Michael Stipe take a milk bath in order to raise awareness about the developed world's overly high agricultural subsidies, but I still don't understand why people get so exercised over this stuff. "Trade not aid!" cries Minnie Driver in the Times today. Eh, says I. Here's what we know. Arvind Panagariya of Columbia University has found that 33 of the 49 poorest countries are net importers of food. So on balance, those countries would all likely get kneecapped—at least initially—if developed nations were to slash their own agricultural subsidies, since the price of food would rise. Obviously rural farmers in the Third World would do very well, since they could sell their wares for more. But food consumers, especially in urban areas, could suffer from the rise in food prices; and since the poorest of the poor often spend a third or more of their income on food, we're talking about a fair bit of potential hurt here. (Meanwhile, if OECD countries fail to lift their own import restrictions, or if some countries lose their preferential access, then rural farmers could get squashed too, but that's another story.)
Now what about the countries or people that would be helped if agricultural subsidies got the knife? It seems unlikely that they'll be helped all that much, at least in the short term. The IMF estimates that world prices would only rise by 2-8 percent for rice, sugar, and wheat; and 4 percent for cotton. That's not nothing, though do note that this is a good deal less than the typical annual fluctuation in world commodity prices. But okay, wouldn't even a modest rise in prices ameliorate poverty? Eh, hard to say. A recent and not-online Foreign Affairs article points out that in 1994, countries in the CFA franc zone—including Burkina Faso and Benin—devalued their currency 100 percent, essentially doubling the price of cotton. This vastly exceeds even the wildest hopes for what would come out of Doha. Despite all that, rural poverty remained "stubbornly high" in Burkina Faso. So why should we think that a reduction in OECD cotton subsidies now—which would have a much more modest effect on prices—would achieve so much more?
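Just to put rough numbers on the consumer side of this, here's a back-of-the-envelope sketch in Python. The one-third food share comes from the paragraph above, and the price increases are the IMF's estimated 2-8 percent range; the assumption that world price changes pass through fully to local retail prices is mine, and it's a generous one.

```python
# Rough sketch: how much purchasing power a poor, net-food-importing
# household loses if world food prices rise. Assumes full pass-through
# of world prices to local prices, which overstates the effect.

food_share = 1 / 3  # share of income spent on food (from the text)

for price_rise in (0.02, 0.05, 0.08):  # IMF's estimated 2-8 percent range
    real_income_loss = food_share * price_rise
    print(f"food prices +{price_rise:.0%}: purchasing power falls ~{real_income_loss:.1%}")
```

Even at the top of the range, that works out to a hit of roughly two to three percent of real income for a household spending a third of its budget on food, which is the "potential hurt" mentioned above.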
I'm all for trade liberalization, really, I just don't see how it's going to shift the world's tectonic plates all that much. Read Richard Freeman on this topic: when it comes down to it, trade just doesn't seem all that important—immigration, technology transfers, and capital flows have far greater impact. Or read Dean Baker and Mark Weisbrot, who make a convincing case that drastic reductions in trade barriers probably won't save the world and lift 540 million out of poverty, as William Cline has proclaimed. (Many of those enriched by trade, for instance, would go from just below $2 a day to just above.) By all means, Bush should stop holding up the Doha talks and take his mighty scythe to all those trade barriers. Go wild! But this should only be seen, I think, as one relatively small part of a larger development strategy.
Studies have shown that inequality is intrinsically harmful because people care about their relative status in the economic order. Richard Wilkinson has argued that vast economic inequality can alter the makeup of a country's social relationships, inducing stress, anguish, and ultimately poor health. Bad news all around. So even if the rising tide is lifting all boats, we should still do everything we can to reduce inequality, no? Well, no, says Will Wilkinson, we should just get people to care less:
My relative success has no "polluting" effect whatsoever if you don't care about it. (You're a good Buddhist, say.) The "pollution" is a joint product of my move up and your preference to not move down. The correct approach to the problem, if there is a problem at all, depends on what the lowest cost solution happens to be. If you changing your preference is cheaper than taxing me, then you ought to change your preference.
Well here's the low-cost solution. Those who control the means of production should just come up with some sort of... distraction mechanism... yes, to get those who fret about being on the bottom of the totem pole to fret no more. Some sort of "opiate," we'll call it, delivered to the masses. Perhaps in pill form. Now since the poor in Europe seem to care more about inequality than the poor in the United States, that just means the European ruling classes haven't yet perfected the false-consciousness technique. What's the matter with Kansas? Don't worry about it, pop another Soma. No, I don't know. Read Will's post; it's interesting.
From a political standpoint, I'm fairly convinced that the Democratic Party should steer far, far away from its gun control stance: frankly, it's a losing issue, and it costs them votes that would be better spent advancing far more important progressive goals. From a personal standpoint, I'm all for letting people have guns, but every now and again I read a story about the whining coming from the NRA and some primal part of me just wants to regulate the gun industry right out of existence, purely for spite. The assault weapons ban, granted, is frivolous and mostly useless, and the NRA was right to oppose it. Nevertheless, there's no reason the gun industry shouldn't be treated like the auto industry—universal registration of firearms, granting gun owners licenses based on skills and knowledge of gun law, a liability insurance requirement, letting the Consumer Product Safety Commission test and rule on guns—and perhaps a modest limit on gun purchases (one per month, maybe).
Now it may be that the NRA opposes these common-sense measures because they fear that sensible regulation would just amount to one giant slide down the slippery slope to total gun confiscation—and admittedly, that fear has some basis in fact, since lots of liberals really do want to ban all guns—but by itself, the opposition to these sensible regulations is pretty much without merit.
Anyway, that's all by way of saying that I have mixed feelings about this liability shield for gun manufacturers that's coming up for a vote in the Senate. Some might argue that lawsuits against gun manufacturers as a result of misuse of a firearm are just frivolous, and will never succeed anyway. (After all, should alcohol makers face lawsuits for "foreseeable misuse" of their products too?) The first point seems true—strictly speaking, holding a gun manufacturer liable for the "criminal or unlawful misuse of a [gun]" by a third party seems idiotic.
But then again, say a gun manufacturer starts selling far more guns in states with weak gun-control laws than the people living there could possibly buy, and the excess guns end up in, say, the hands of criminals in a state with tight gun-control laws like New York. Should third parties be allowed to sue for "negligent marketing"? These aren't hypotheticals, of course; the courts have thrown out these exact cases—even supposedly anti-gun activist judges like U.S. District Judge Jack Weinstein have ruled against the plaintiffs—although that trend could change over time, as legal thinking "evolves." (Surely the gun industry isn't just worried about frivolous lawsuits—they're worried they might start to lose these suits.) Meanwhile, negligent marketing does seem like a real problem, and not at all something protected by the spirit of the Second Amendment, or the freedom to bear arms, or anything of the sort. We can all see what's at issue here—manufacturers are getting rich by circumventing laws meant to reduce gun crime. And when legislatures at the state and national level can't or won't do anything about it, litigation can often step in and force an industry to account for its negligent product marketing and/or design, as it did to the tobacco industry in the 1990s.
Then again, it may be unwise to rely on activist courts to fight these sorts of battles. So there just might not be any answer to gun manufacturers running amok, at least barring a shift in popular sentiment at the national level. (And given that rural states are disproportionately represented in the Senate, that seems unlikely; unless, of course, the filibuster were to be abolished.) Now as it happens, I'm not convinced that guns are even among the top 10 biggest problems facing America today, so I don't lose a whole lot of sleep over this, but it still seems like a difficult issue.
Why does acupuncture work? I'm planning to go to a free session next week, purely out of curiosity, but I still want to know why. Surely the stated premise here—that we're all infused with Qi or "life energy" that gets clogged up now and again and just needs a bit of needling—is all just a bunch of arrant nonsense, right? I mean, right? Luckily, we have a less-mystical theory, courtesy of Scientific American:
[A] medicinal procedure like acupuncture may work for some other reason not related to the [Qi theory]. Electroacupuncture--the electrical stimulation of tissues through acupuncture needles--increases the effectiveness of analgesic (pain-relieving) acupuncture by as much as 100 percent over traditional acupuncture. ... Ulett posits that electroacupuncture stimulates the release of such neurochemicals as beta-endorphin, enkephalin and dynorphin, leading to pain relief. In fact, he says, the needles are not even needed--electrically stimulating the skin... is sufficient. Ulett cites research in which, using this technique, the amount of gas anesthetic in surgery was reduced by 50 percent.
These findings might help explain the results of a study published in the May 4, 2005, issue of the Journal of the American Medical Association, in which Klaus Linde and his colleagues at the University of Technology in Munich compared the experiences of 302 people suffering from migraines who received either acupuncture, sham acupuncture (needles inserted at nonacupuncture points) or no acupuncture. During the study, the patients kept headache diaries. ... The results were dramatic: "The proportion of responders (reduction in headache days by at least 50%) was 51% in the acupuncture group, 53% in the sham acupuncture group, and 15% in the waiting list group." The authors concluded that this effect "may be due to nonspecific physiological effects of needling, to a powerful placebo effect, or to a combination of both."
Hm. Although I can totally see how a "powerful placebo effect" could unblock the Qi...
Roland G. Fryer Jr. and Glenn Loury try to bring a bit of economic analysis to bear on affirmative action in a new paper: "Affirmative Action and Its Mythology." The first myth they discuss is one that's always seemed a bit bewildering to me: namely, the fiction that employers or educational institutions can somehow, magically, pursue affirmative action goals effectively without imposing "quotas." Hogwash, say these two:
[T]his distinction between goals and quotas is dubious, because to implement either a goal or quota requires that a regulator credibly commit to some (possibly unspoken) schedule of rewards/penalties for an employer or an education institution, as a function of observable and verifiable outcomes. The results engendered by either policy depend on how firms or educational institutions react to these incentives. If the penalty for certain "bad results" is sufficiently severe, then people will tend to say that a rigid quota had been imposed. If penalties for bad results are minimal, then the people will tend to say that a flexible goal has been adopted. Clearly, this difference is one of degree, not of kind.
Word up. Additionally, you can run into this sort of case, in theory: say the government is simply in the business of enforcing nondiscrimination rather than quotas. So Employer X comes under suspicion, say, because it's been hiring a disproportionately low number of minorities, though perhaps this is due to a low number of minority applicants or some other complex HR reasons. Whatever. Point is, the regulatory regime won't always be privy to all these "mitigating factors" and could in theory punish Employer X for discrimination. To avoid this possibility, Employer X may well end up adopting an implicit quota system regardless. Basically, it's hard to escape quotas so long as affirmative action remains a goal that's enforced with any sort of rigor.
Eh, come to think of it, conservatives have been saying the same thing for years, especially after Bill Clinton's "Mend It, Don't End It" jingle went live. Ah, well. While they're at it, though, Fryer and Loury also take the time to knock down the popular idea that "color-blind" attempts to pursue racial diversity—such as the Texas state university scheme to automatically admit the top 10 percent of students from every high school—are a more efficient way of doing things, although admittedly much of this debate hangs on what people think the purpose of a university should actually be.
City Journal always strikes me as one of the most noxious magazines around. Mostly because its writers love to wade into decades-old debates, debates that have generated heaps and heaps of research, disregard all that research, and then flatly declare that liberals are stupid and conservatives were right all along about everything. Exhibit A is Kay Hymowitz's piece this month on how, contrary to the legions of liberal academics who have kept people poor and stupid for 40 years now with their pleas for welfare and whatnot, the one true cause of black poverty is that most black children grow up in fatherless homes. Liberals, Hymowitz declares, need to step out of their "don't blame the victim" mentality and realize this hyper-obvious fact.
Well, okay. Plenty of liberals have been thinking about the importance of family structure for quite some time: she even mentions two (William Julius Wilson and Sara McLanahan), and then there was, um, the last Democratic president—a pretty prominent liberal, when you think about it. (Hymowitz makes it seem like Clinton was only "forced" to worry about family structure in the post-Gingrich era, but in fact, his 1992 campaign speeches included lines like, "Governments don't raise kids; parents do.") Beyond that, though, the relationship between marriage and childhood problems—let alone wider poverty—is complex and deserves a fuller treatment than the shallow gloss Hymowitz gives it.
As it happens, the other day I was reading a collection of essays called The Future of the Family, edited by none other than Hymowitz's hero, Pat Moynihan, with a literature review of the effects of fatherlessness co-authored by... yet another one of Hymowitz's heroes, Sara McLanahan! And lo, the results are a bit more ambiguous than the City Journal essay suggests. I can't possibly summarize the whole book here, but McLanahan argues that, on the whole, research does indicate that fatherlessness is associated with lower test scores, greater levels of poverty, behavioral problems, delinquency, etc. for children. (For a dissenting view, however, do read Trish Wilson's post.) What's not clear, as McLanahan points out, is why this might be the case. There could be a selection issue at work here: perhaps poverty causes both fatherlessness and negative outcomes for children, in which case single motherhood wouldn't be the root problem.
One study, for instance, found that "when pre-divorce circumstances are taken into account, the associations between family disruption and child outcomes become smaller, sometimes statistically insignificant." (Not all studies find this, though.) And then some of the findings are just plain odd. For instance, the academic achievement gap between kids in one- and two-parent families is relatively small in social democracies like Sweden and Iceland—smaller than the gap in "neo-liberal" states like the U.S. or New Zealand—suggesting that a sturdy safety net can overcome the supposed disadvantages of single-parent families. On the other hand, the achievement gap is even smaller in Mediterranean countries like Greece, Portugal, and Cyprus, where child poverty is rampant and the safety net is tattered and frayed. Basically, it's just not clear what works and what doesn't; losing a parent seems to matter more in some places than in others. But why? Dunno. Also, children in homes with a "resident cohabiting father" actually do worse than children in single-mother families. But why? Dunno. The facts here aren't speaking for themselves, or else they are, but in ancient Aramaic.
One other thing: insofar as the fact of single motherhood itself is actually a "problem" (and I'm not convinced it is, but let's suppose...), there are basically two remedies. One, we can try to reduce the number of divorces by, say, making divorce harder to get, though that seems like a terrible option. Divorce is often very necessary, quite obviously, since even the best marriage counseling can't prevent every unhealthy or violent relationship. No kidding. Not only that, but placing restrictions on divorce could very well dissuade many adults from getting married in the first place, which would achieve exactly the opposite of what the family crusaders are gunning for here. Plus, changes in divorce laws would alter women's bargaining power in fairly fundamental and perhaps harmful ways—no one knows much about how this works. Policymakers might want to pursue publicly funded marriage counseling (I believe Bush has advocated something of the sort, though I don't think anyone knows just how well it's worked yet). At any rate, the divorce rate (per 1,000 women) has actually fallen over the past 20 years, from 22.6 in 1980 to 18.9 in 2000, according to a 2002 National Marriage Project study. Divorce just doesn't seem like a growing crisis in need of drastic action.
So let's look behind door #2. And door #2 is... reducing out-of-wedlock births in the first place. This seems like a pretty unambiguously decent policy goal, especially since 60 percent of all pregnancies are unintended, according to a 1995 Institute of Medicine study. (For her part, Hymowitz writes that back in the day "the truth was that underclass girls often wanted to have babies," but gives no evidence.) Now the tried-and-true way to reduce unintended out-of-wedlock births involves teen-pregnancy prevention programs that emphasize, yes, condoms and other "icky" items. (Hell, they can teach abstinence too, since that seems to work, though "abstinence-only" programs pretty clearly do not.) Measures to reduce subsequent pregnancies, like "second-chance homes" for teen mothers, or home visiting programs, seem to have had some success. Oh, and abortion—which, at the moment, is effectively unavailable to a good number of low-income women. But these are all pretty well-known liberal policy goals, I daresay.
If I understand this article correctly, Eliot Spitzer is doing his damnedest to prevent crappy music from playing on the radio more often than is absolutely necessary. Well, then. Forget everything bad I've ever said about him (e.g., here); he's so my presidential pick in 2008 or whenever. On the downside, this budding Medicaid scandal in New York might bruise Spitzer's "social crusader" image, since from what I gather he was supposed to be in charge of pursuing and prosecuting instances of fraud. Nicely done.
Ah, it's up on the Mother Jones site: my essay on how corporations are marketing sickness. I'm somewhat convinced that the proliferation of pseudo-illness could be a deeper and more fundamental problem with health care in America than the fact that we don't have single-payer, or whatever. But I haven't come across anyone who has a grasp of how widespread a problem it really is, or how it affects insurance premiums or whether it bankrupts public health systems. Without better numbers, this debate gets very anecdotal very quickly.
David Galenson and Joshua Kotin have a theory on how innovation in the film industry works:
Why have some movie directors made classic early films, but subsequently failed to match their initial successes, whereas other directors have begun much more modestly, but have made great movies late in their lives? This study demonstrates that the answer lies in the directors' motivations, and in the nature of their films. Conceptual directors, who use their films to express their ideas or emotions, mature early; thus such great conceptual innovators as D. W. Griffith, Sergei Eisenstein, and Orson Welles made their major contributions early in their careers, and declined thereafter.
In contrast experimental directors, whose films present convincing characters in realistic circumstances, improve their techniques with experience, so that such great experimental innovators as John Ford, Alfred Hitchcock, and Akira Kurosawa made their greatest films late in their lives. Understanding these contrasting life cycles can be part of a more systematic understanding of the development of film, and can resolve previously elusive questions about the creative life cycles of individual filmmakers.
Er, a bit of elaboration might be necessary, especially since the paper isn't yet free for the taking. Galenson had originally developed a similar thesis for modern art—this article gives a readable overview—with a similar division. "For the conceptual artist, the important decisions for a work of art occur in the planning stage, when the artist either mentally envisages the completed work or specifies a set of procedures that will produce the finished work." As one might expect, then, most conceptual artists peak relatively early on—when they haven't yet been bogged down by pre-existing conventions and methods and can think up radical new stuff. Obviously they don't have to peak early on—conceptual innovation can in theory occur at any time in one's life; it's just more likely to occur at a young-ish age.
"Experimental artists," by contrast, make most of their innovations "in the working stage, as the artist proceeds on the basis of visual inspection of the developing image." This is the sort of thing that clearly gets better with age. Indeed, Galenson found that Abstract Expressionists like Mark Rothko and Willem de Kooning, who were more experimental, "peaked" at a much later age than did subsequent, more conceptual artists like Frank Stella and Jasper Johns. The Rothko pictured didn't just pop out of the womb fully-clothed, ya know. Andy Warhol was another conceptual artist who appears to have peaked early, in his late 20s-early 30s. The French Impressionists were experimentalists; Monet was doing marvelous water lilies until very late. The division obviously isn't hard and fast, though Galenson has argued elsewhere that it is a decent approximation for the spectrum of artistic approaches, though artists can change their position over time. Picasso was the most notorious Energizer Bunny in this regard—doing his cubist works in his mid-20s, "Guernica" at age 56, and so on. But, says Galenson, this is pretty damn rare.
At least for visual artists, Galenson measures "peaks" by looking at how much an artist's work sells for many years later. Is this a perfect yardstick? Probably not, but as an approximation, it can reveal quite a bit. Moreover, to get the obvious out of the way, what we value about an artist now may not be what we value about the same artist 50 years from now, so that's a bit of a problem. How they measured the "success" of movies, though, I have no idea. And not knowing much about movies, I can't say whether this theory is even remotely plausible.
Cass Sunstein's typology of all the various modes of legal reasoning seems very helpful:
In order to determine what kind of justice [John G. Roberts] will be, it helps to understand the philosophical camps that have shaped modern constitutional theory. Over the past century, justices have come in four varieties. Majoritarians prefer to uphold the decisions of other branches of government unless those decisions clearly violate the Constitution. Perfectionists believe that, in order to perfect the Constitution, they should interpret it in broad terms that expand democratic ideals. Minimalists like small steps and prefer rulings in which the most fundamental questions are left undecided. And, finally, fundamentalists believe that the Constitution should be read to fit with the original understanding of the Founding Fathers; they are willing to make large-scale changes to established laws to return to that understanding.
His defense of judicial minimalism—ruling narrowly and setting aside judgment on fundamental questions—is also quite elegant: "[L]aw, and even social peace, are possible only because people set aside their deepest disagreements and are able to decide what to do without agreeing on exactly why to do it." Although one should also note that, to a large degree, the social peace has been kept ever since the most notorious of perfectionist rulings: Roe v. Wade. Which is really quite remarkable, given that here you have a bunch of people who believe that legalized abortion is worse than the Holocaust, yet most of them are willing to uphold and support a government that enforces this law.
Obviously there are a few clinic-bombers here and there, but we never see civil disobedience on a very broad scale. Judged solely by their actions, it sure seems like the Civil Rights protesters in the 1950s and 1960s felt far more strongly about their cause than pro-lifers do about theirs. Again, whatever, people can do what they want, but you'd think something worse than the Holocaust would incite a bit more in the way of drastic action. Anyway, that's just to say that the "social peace" argument for judicial minimalism probably deserves some skepticism. The point, to some extent, is that the country isn't tearing itself apart over Roe; overruling it, likewise, would be horrendous beyond belief, I think, but it wouldn't upend the existing social order either. Oh, and Sunstein thinks that Roberts is probably a minimalist, though the guy does exhibit some fundamentalist tendencies here and there; that's certainly something to ferret out during the hearings.
UPDATE: I should've googled around before posting this, because a quick search shows that the question of whether anti-abortion protestors are justified in engaging in civil disobedience is actually a somewhat heated one among religious conservatives. The most obvious objection, of course, is that all the legal avenues to overturning abortion rights have not yet been exhausted, so it doesn't make sense for abortion foes to engage in widespread civil disobedience. Sorry, but that's a cop-out. Imagine this hypothetical scenario: Hitler is in charge of Germany, freely elected, and decides to fire up the concentration camps. He still has three years left in his term, and can be voted out at that time. Should the country's citizens say, "Well, this genocide business is pretty bad, but we shouldn't defy the Hitler administration by extra-legal means when we have perfectly legal means of voting him out"? In a sense, that's what pro-lifers, at least those who believe that legalized abortion is worse than the Holocaust, are doing by biding their time, trying to win elections and get Roe v. Wade overturned by legal means.
As Jessica of Feministing reminds us, the Senate hearings have already started on the Violence Against Women Act, which certainly deserves to be reauthorized. From a policy perspective, though, I wonder if anyone has looked into spending public resources on teaching women self-defense against sexual assault. Currently, VAWA only funds "educational" programs, which are fine and important, but why not self-defense? And why not start in elementary school rather than junior high? As best I can tell, studies seem to suggest that "women who use physical and verbal resistance strategies are more likely to avoid the completion of a rape" (without increasing the chance of physical harm), though much of the data is fairly ambiguous, simply because no one has looked into the matter in depth or done the proper longitudinal studies. But they should. If teaching women to crush windpipes and gouge out eyes in junior high is something we really ought to be doing, I see no reason why Congress can't fund it.
Now that the split between the AFL-CIO and dissident unions seems all but official, it's time to make a few predictions. The New York Times suggests that the labor split will hurt the Democratic party, as the various unions will spend more time squabbling with each other and less time coordinating get-out-the-vote efforts come election day. The SEIU and other "Unite to Win" unions, meanwhile, think that electoral politics ought to come second to a focus on organizing. They have a point; labor density has gone down under both Republicans and Democrats, so it's not as if electing the latter to office has done them much good.
My more pessimistic take is this: neither organizing nor electoral politics will reverse labor's long slide. Politics for reasons given above. Organizing because the numbers are just too overwhelming. A few years ago, Harvard economist Richard Freeman ran the numbers on this:
To fund a massive organizing campaign would take, moreover, huge union resources. Turning Paula Voos’s estimates of the marginal cost of organizing a new member into 2001 dollars, the cost of organizing a new member would appear to be on the order of $2,000 – though it could be as low as the $1,000 that is the rule of thumb for some unions and as high as $3,000. Adding half a million new members annually at $2,000 per member would then require spending $1 billion, or about 20 percent of total annual union dues. Adding 1 million members would take about 40 percent of total dues.
A million new members is nothing to sneeze at, and this is precisely the strategy SEIU and the other dissident unions are going for. Nonetheless, even a million new members—and this falls in the "optimistic" category—won't fundamentally reverse the long decline in labor density. A million new members would add only about a point of density; 500,000 new members would simply balance the loss of members due to workplace changes. So the Unite to Win unions are doing the noble thing, but ultimately they're highly unlikely to pull off a structural shift in the labor landscape; at most they'll stop the earth from being scorched.
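For the curious, here's that arithmetic as a quick Python sketch. The cost-per-member and share-of-dues figures come straight from the Freeman passage quoted above; the workforce and attrition numbers are rough assumptions of mine (mid-2000s ballpark), there just to show why even the optimistic scenario barely moves density.

```python
# Back-of-envelope version of Freeman's organizing arithmetic.
# Workforce size and annual attrition are assumptions, not Freeman's figures.

workforce = 125_000_000            # total U.S. wage and salary workers (assumed)
cost_per_member = 2_000            # Freeman's mid-range cost of organizing one member
total_annual_dues = 5_000_000_000  # implied by "$1 billion ... about 20 percent of total dues"
annual_attrition = 500_000         # members lost yearly to workplace churn (assumed)

for new_members in (500_000, 1_000_000):
    cost = new_members * cost_per_member
    share_of_dues = cost / total_annual_dues
    gross_density_gain = new_members / workforce * 100  # percentage points, before attrition
    net_gain = new_members - annual_attrition
    print(f"{new_members:,} organized: ${cost / 1e9:.0f}B ({share_of_dues:.0%} of dues), "
          f"+{gross_density_gain:.1f} pts of density gross, {net_gain:+,} members net of churn")
```

On those assumptions, a million new members a year eats roughly 40 percent of all dues and still leaves overall density within a point of where it started.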
I know I keep harping on this, but the historical record is instructive. Unions have traditionally exploded in size not because of a commitment to organizing, and not necessarily because of labor-friendly legislators in Washington, but largely because of historical accidents. Labor density has grown in "spurts," due to factors that were often difficult to predict. Unions went forth, multiplied, and prospered during World War I, for instance, because the Allied countries needed the full cooperation of labor to mobilize and fight their splendid little war, and a slew of labor-friendly compromises ensued. Likewise, union density grew during the Great Depression for obvious reasons—people saw the need for unions—and during World War II because the government yet again needed labor's cooperation. It's worth noting, though, that legislative compromise and popular support weren't the only reasons for labor's success during the 1930s and 1940s—the rise of the industrial union, and the opening up of an entire new sector to organize, really fueled the surge in density.
So for those asking "What will save Labor?", the answer probably isn't "a greater commitment to organizing" or "electing more Democrats." Presumably the answer will involve some new way of organizing—structured around the internet, perhaps—or the rise of a new kind of union in a sector no one has yet thought to organize. Perhaps white-collar programmers angry about outsourcing will provide fertile new ground. Perhaps the Bush administration will drive the economy into the ground and the public will flock to unions. Perhaps Andy Stern's vision of a global labor movement winning representation at the WTO will prove the new face of unionism. Still, the politicking vs. organizing debate going on right now seems much too narrow, and, sadly, a bit hopeless.
Dan Darling points out something important: the similarities between the London and Sharm al-Shaikh bombings suggest that al-Qaeda may still be far more centralized than people think. From what I gather, Darling's big on Rohan Gunaratna's thesis that al-Qaeda still maintains a fairly robust vertical leadership structure, coordinating activities among a broad swath of associations, cells, and "franchises" from on high. Bin Laden and Zawahiri, along with their various subordinates holed up in Iran and elsewhere, are still calling the shots to a large degree. You might say that al-Qaeda's structure isn't fundamentally different from what it was pre-9/11, except that there are fewer quality leaders, fewer training camps, and it's harder to coordinate stuff. Gunaratna's view—again, assuming I've recalled it correctly—sits in contrast to Jason Burke's more popular thesis that "al-Qaeda" itself isn't terribly important as an organization, and has mostly become a rallying point for a broader Islamic militant movement. The main threat, in other words, comes from a bunch of very loosely coordinated or uncoordinated terrorist cells often inspired by bin Laden but not necessarily acting on his orders.
Having re-read Burke's book recently, I should say that he does seem convincing when he argues that the structure of Islamic terrorism today may well resemble the structure of Islamic terrorism in the early 1990s, when bin Laden was as yet a relatively minor financier and skilled terrorists like Ramzi Yousef and Khalid Shaikh Mohammed operated somewhat as freelancers—albeit freelancers with access to training camps in Pakistan and Afghanistan and virtually unlimited funds pouring out of private mosques around the Gulf. If Burke's right, that's the lay of the land post-2001, though the freelancers and terrorist cells are much, much less adept and ambitious than Yousef or Shaikh Mohammed ever were. On the other hand, Burke does seem to go out of his way to deny any links between, say, Zarqawi and bin Laden—or between Basayev's band of Chechen salafists and bin Laden—when that hardly seems certain at all, given what we now know. It's a brilliant book, no doubt, but it does seem a bit tendentious.
Clearly I don't know enough to "weigh in" on this debate. Intuitively, though, it probably doesn't have to come down to one or the other. Gunaratna could be right that bin Laden, Zawahiri, al-Adel, and other al-Qaeda higher-ups are still very much coordinating a far-flung terrorist organization with franchises in Iraq, Egypt, Pakistan, etc. (The leadership here is probably a very small "hardcore" element of 200 or fewer, as per Marc Sageman.) So sure, maybe the London and Egypt bombings were done with the help and blessing of bin Laden himself, from his cavern resort or wherever. But Burke could also be right that al-Qaeda has become a rallying point or inspiration for wholly unaffiliated cells and freelancers to carry out attacks on their own. And then there's Sageman's middle-way view, that "al-Qaeda's fragmentation since the invasion of Afghanistan has left it metastasizing into local operations seeking legitimacy under its banner."
On the other hand, Islamic militants were killing tourists in Egypt long before anyone in the West even knew bin Laden's name, so I guess we'll just have to wait and see where the trails actually lead. Also, read Marc Lynch's post on all of this.
How does Lance Armstrong do it? Ah, dear reader, it's all in the freakish physiology: "Mr. Armstrong's VO2 max is 85 milliliters of oxygen per kilogram of body weight per minute. An average untrained person has a VO2 max of 45 and with training can get it to 60. 'Lance would be 60 if he was a couch potato and never trained,' Dr. Coyle said. 'For the average person, their ceiling is Lance's basement.'"
Oh, at last, I've revived the ol' internet connection here at home. In a way, that's too bad; I was so enjoying spending the weekend away from the dull glow of the computer screen. But anyway, here's a passage from Robert Kaplan's new book, Imperial Grunts, that's worth sharing—on how you "might learn as much about a culture from its weaponry as you could from its literature":
As [Lt. Col. Custer] demonstrated, while stripping the AK-47 down to its constituent parts, it was a rifle designed for use by fifteen-year-old illiterates whose life was valued cheaply by the designer. "Illiterates won't clean a gun, or at least not meticulously, so the parts are measured to fit loosely. That way the gun won't jam when it's filthy with grime. But it also makes the AK-47 less accurate than our M-16s and M-4s, which have tight-fitting parts and must be constantly cleaned. And because illiterate peasants aim less precisely," he continued, "the lever of the AK-47 goes from safety directly to full automatic, for spraying a field with fire. With our rifles, the lever rests on semi-automatic before it goes to full auto."
The sights on the Russian rifle could be adjusted for greater accuracy, unlike on American rifles. The Kalashnikov had a bullet magazine that had to be gripped before it could be released, so it wouldn't be lost in the dirt, because magazines were dear in the old U.S.S.R. That made changing magazines slower, and thus further endangered the life of the soldier in combat. In the old Soviet Union, soldiers were more easily expendable than bullet magazines. By contrast, American magazines dropped onto the ground and could be lost, but it made for a faster, more fluent performance by the rifleman.
"The M-4 can hit a man at several hundred yards every time," Custer explained. "The AK-47 is more of an area weapon. We value our soldiers as individuals with precision skills; the Russians see only a mass peasant army."
Good stuff. Kaplan's book, by the way, is marvelous, although extremely annoying at times. He's clearly a far braver man than I could ever hope to be, but ultimately his priorities seem to be: 1) printing stuff that will ensure his continued access to military sources; 2) going out of his way to prick at "delicate" liberal sensibilities**; and finally 3) figuring out how the military works and how it needs to work. Once you figure that out, and filter accordingly, Kaplan's basic thesis—that small, highly specialized military units working without heavy bureaucratic constraint are the optimal way for America to run its far-flung empire, which, like it or not, exists—starts to sound like something worth discussing seriously.
[**An example of this. In a chapter on Afghanistan Kaplan notes, with approval, complaints from grunts that interrogation procedure is too lax: "Usually, an Afghan willing to be uncomfortable for a few days could stiff the American interrogators with impunity. Everyone complained about this." Yikes, seems like an argument for torture, huh? I mean, even the military folks are chafing at the kid gloves that Dick Durbin and Ted Kennedy want them all to wear! In context, it's clear that that's how the passage is meant to come across. But then one page later Kaplan quotes a military man saying about a mission in which they arrest a bunch of Afghan suspects: "The real object of the mission is to treat them respectfully, so that after they are released they'll tell their families how different the Americans are from the Russians." So those kid gloves, it seems, are essential to the whole reconstruction project, not an impediment, huh? It seems so, but Kaplan can't admit that without first giving all those squishy liberals in D.C. a kick in the shins.]
Oh my word, this is the funniest thing I've seen all day. Powerline's uncovered the Top Secret Democratic Plan to out John G. Roberts as the plaid pants-wearing gay man he so clearly might be. Well, fuck it folks, that plan was our last, best hope to stop him. Anyway, Charmaine Yoest gives Operation Pink Elephant the flogging it deserves: "Of course it is the height of hypocrisy for the (allegedly) pro-tolerance crowd to start questioning someone's sexual preference." So true, but in our defense, we were desperate! This bit cuts deepest though: "John Roberts may have played Peppermint Patty back in the day, but here and NOW, the Left is playing Lucy with the football..." She means "Charlie Brown," I think, but 'ouch' all the same.
Oh, fine, just one more for the self-flagellating. This penetrating insight from Hindrocket: "Even during the Civil War, when the Democrats were fighting to preserve slavery, limits were observed. Now, all civility is gone."
Mark Benjamin of Salon has finished his four-part investigative series on reparative therapy for gay people. The last part, with links to the others, is here. (In the second part, he pretends to be gay and actually goes through therapy.) What's interesting is that Benjamin was unable to track down a single man or woman who had gone through the process and been "cured." He asked the president of the National Association for Research and Therapy of Homosexuality to point him in the right direction, but no dice: "He responds that his patients will not talk to me because they don't get a fair shake in the press."
One of the things I've realized while writing this piece on how drug companies "create" illnesses is that journalists are appallingly complicit in the whole thing. Magazines such as U.S. News & World Report will blare headlines such as, "Living with Adult ADD. New hope for coping with the distraction and anxiety." In reality, there's a good deal of serious controversy over what ADD really is, and how it should be treated, but you won't get much of that in the piece. The emphasis in these stories will usually be on the neurobiological basis for the disorder—which is only one theory among many, though obviously the one favored by drug companies—and there will usually be key product placement early on, along with a ringing endorsement from some doctor who likely moonlights as a paid speaker or consultant for the company in question. (In this case, the company is Lilly and the drug is Strattera.) "Real people" experiencing the condition will be supplied by a patient-advocacy group rightly trying to raise awareness for the condition, although that group will, in turn, often be funded by the relevant drug company.
That's not to say that Adult ADD is bullshit. That's the thing—I'm not in any position to say. The only information I as a regular non-scientist can get will come from these glossy magazines, where it's clear that one side of the issue—the industry-favored side—is being heavily pushed. This is essentially an advertisement for ADD—and hence its treatment—rather than any sort of investigative journalism. So then I think, "Well, gee, sometimes I feel distracted and anxious," and it's off to the doctor I go, who, of course, is far more likely to prescribe a medical treatment than, say, a lifestyle change to deal with my condition. If I'm lucky, my doctor won't be moonlighting as a paid speaker for Lilly, but I don't know. (Would I even know to ask?) And let's be clear here. This isn't a conspiracy to invent a disease out of nothing. Doctors aren't paid to prescribe drugs against their better judgment. No. It's more subtle than that—something like a confluence of interests that gathers around millions and millions of dollars in pharmaceutical marketing money.
At any rate, from what I gather medical reporters are getting slightly better at calling foul when "news" of the hot new illness sweeping the nation comes via blast-fax, but it's still a real problem. And in a sense, who can blame them? They're under deadline pressure, nothing sells glossy magazines like unearthing a new disease, and they need quotes and case studies fast. Meanwhile, though, health care premiums keep rising...
Anything I say about voter behavior is likely to be wrong. But I'll take another crack at it. Dan Kahan and Donald Braman of Yale have put out a new paper, "Cultural Cognition and Public Policy," that puts forward a somewhat-obvious, somewhat-neglected argument. Many of us like to think that, if only we could disseminate correct information about the world—say, the mounds and mounds of scientific evidence that global warming exists—people would come around to our policy views. Not so, argue these folks. If all our differences on policy questions were simply due to the fact that we all have imperfect empirical information, then these opinions would be randomly distributed. But that's not what happens. Cultural cognition plays a huge role here.
Drawing on the cultural theory of risk, Kahan and Braman graph cultural typology along two axes, with four compass points: individualist vs. solidarist, and hierarchist vs. egalitarian. Where people sit along these axes is far more likely to determine their policy preferences on various cultural issues than party affiliation or "ideology." So, for instance, people who are more egalitarian or solidaristic are more likely to a) worry about global warming, and b) believe that it is real and a serious problem. Likewise, people who are more hierarchical or individualistic are more likely to oppose gun control as a matter of principle, but also more likely to believe that gun control actually has perverse effects. Willingness to believe certain facts is very much affected by cultural worldview. This also helps explain why a non-expert who believes that global warming is real is also statistically very likely to believe that, say, gun control can prevent violence, even though there's no reason why a non-expert should necessarily believe both empirical results.
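If it helps to picture the typology, here's a bare-bones toy sketch, entirely my own illustration with made-up scores (nothing from the Kahan-Braman paper itself): a person is just a point on the two axes, and the quadrant he or she lands in does the work of "predicting" beliefs on otherwise unrelated empirical questions.

```python
# Toy sketch of the two-axis cultural typology. Hypothetical scores and a
# crude mapping of my own devising, not Kahan and Braman's instrument.

def worldview(hierarchy, individualism):
    """Scores run from -1 (egalitarian / solidarist) to +1 (hierarchist / individualist)."""
    h = "hierarchist" if hierarchy > 0 else "egalitarian"
    i = "individualist" if individualism > 0 else "solidarist"
    return (h, i)

def predicted_beliefs(hierarchy, individualism):
    """Map a worldview onto the factual beliefs discussed above."""
    return {
        "global warming is real and serious": hierarchy < 0 or individualism < 0,
        "gun control reduces violence": hierarchy < 0 and individualism < 0,
    }

# A hypothetical egalitarian-solidarist respondent lands on the "yes" side of
# both empirical questions, even though the questions have nothing to do with
# each other scientifically:
print(worldview(-0.6, -0.4))
print(predicted_beliefs(-0.6, -0.4))
```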
As I say, that's all pretty obvious so far. Cultural cognition structures facts, and it also filters facts. An egalitarian person is more likely to listen to other egalitarians, and trust what they have to say. Likewise, scientists or researchers with a particular worldview are likely to be biased in their findings. Frankly, it's not much of a surprise that the Center for Budget and Policy Priorities always discovers that supply-side economics is bullshit, and it's not much of a surprise that I always trust them. Meanwhile, I'm prey to all sorts of mental biases that reinforce these positions. There's cognitive-dissonance avoidance (believing that what's noble is benign, and what's ignoble is dangerous—e.g., liberal belief that Guantanamo will fuel a global backlash.) Or group polarization, which Cass Sunstein has done much with. Meanwhile, I'm more likely to believe that I've arrived at my empirically-based beliefs through objective assessment, and that my opponents are hostage to some biased worldview. And I'm certainly not a perfect Bayesian. So biases due to cultural cognition accumulate over time.
Anyway, enough of that. How does this all affect voting behavior? Well, let's revisit the Tom Frank thesis: Voters are inclined to vote against their self-interest (i.e. Republican) because they're overly swayed by cultural factors. Now I once suggested that maybe it's not actually in the narrow self-interest of that many voters to actually vote Democrat. I wasn't happy with the post, though. So let's revise Frank in terms of cultural cognition: Voters' cultural worldviews incline them to believe that a given set of policies actually is in their self-interest to support, regardless of the facts. Moreover, voters don't spend nearly as much time as, say, bloggers do thinking about public policy. Instead they're inclined to trust whoever shares their cultural outlook on all empirical matters.
In other words, it might not be enough to say to a white working class male in Kansas, "Look, you're continually being screwed by the ruling class. They've dismantled labor protections. Your wages have deteriorated. And yet you go for it because they rile you up about gay marriage!" It won't work. Odds are, unless he believes you share his cultural worldview, he won't trust your assessment of economic life. Or to put it another way: "Moral values" voters probably didn't look at Bush and think, "Hm, he hates gays too," and thus forget all about economic self-interest. They probably thought, "Hm, he hates gays too, so his line about how Kerry's tax hike will hurt small businesses is probably true and important." Now what Bush said about taxes was utter bullshit. Good luck convincing anyone of that, though! The same might go for voter perceptions of foreign policy too. "Hm, he thinks gay marriage will harm society, so he's probably right that the Iraq war has made us safer." To some extent, the ability of rational persuasion to change that is limited, even if we did shut down right-wing talk radio.
That's not to say that everything depends on gay-bashing, though that's one possible conclusion. Nor is it to say that Democratic politicians could never connect with a certain class of voters. Obviously, if white working-class rural voters think that, say, Montana Gov. Brian Schweitzer shares their cultural worldview, they're more likely to trust him on various empirical matters that the non-expert can't evaluate on his or her own. And debates can probably be "reframed" to take them out of their usual cultural context, although I don't think George Lakoff is the man you want here. Kahan and Braman, for instance, suggest that reframing can happen if "the common perception that the outcome of [a] debate is a measure of the social status of competing groups" is dissipated. That's not Lakoff, right? But they explain no further. Well, that's all I have for now. Clearly there's much more to be said.
Question. Everyone keeps saying that this John G. Roberts fellow will get confirmed no matter what anyone does. And media folks keep saying, "Well, all this ruckus over the Roberts pick is really going to distract from the Plame Name Blame Game, but we're just reporters, so it's not like we can do anything about it!" And to top it off, it seems obvious that Democrats aren't going to "define the debate" through a pre-hearing media battle, or benefit in any way.
So...
Why doesn't liberal HQ just send out the memo telling everyone to lay off? When the confirmation hearings arrive in September, then Democrats can rattle off their questions and do what Lindsay Beyerstein says, but until then, focus on Rove-Plame, which actually seems to be hurting the Bush administration. I mean, yeah, John Roberts is going to make the world a worse place, but if that's going to happen no matter what, why not just let it go for now and get back to the task of castrating Karl Rove and the Republican Party? Oh I don't know...
Slate explains the "cat lady" phenomenon. Now that's all well and good, and I'm sure psychological explanations serve their purpose, but the more important question is this: How many cats does a lady actually need before she can be considered a "cat lady"? I say six. Six cats and you're a cat lady. Five cats and you just have a lot of cats. Well I guess that solves the Sorites paradox. Next up, my thoughts on whether or not words correspond to things in the world...
For those who have been following the debate, there's not too much of pressing interest in the Nation's roundtable on the future of labor unions. Except, I think, for the very end, when the moderator asks AFL-CIO president John Sweeney, SEIU president Andy Stern, and UNITE HERE hospitality president John Wilhelm if they think that a split in the labor movement could be good for organizing:
Sweeney: You know, this is not the 1930s and '40s, when US industry was on the rise and we were shifting from an agricultural to an industrial economy. We are shifting to service, working families have no basic healthcare, the working poor is growing and people are barely getting by. Right-wingers in Congress are actively attacking worker protections at every turn. It should tell you something that those right-wingers are salivating at the prospect of a split. Look at the Republican websites--they are gleeful, because the truth is, we have maintained power out of proportion to our numbers because we have been organized. A split will help no one…
Stern: The labor movement is incredibly divided right now. The only way it is united is at a table in Washington, DC, or because it uses the same initials after its name. But when it comes to dealing with companies like United Airlines or national strategies about how to organize Wal-Mart or healthcare workers, there is no unity. The airline industry, the most heavily unionized industry in the country, is Exhibit A, B, C and D. If we don't believe our lack of ability to coordinate and cooperate within companies and across them is a factor, we are crazy. I don't think a split itself will create a whole new wellspring of growth and hope. I don't expect immediate results…
Wilhelm: There is no question that the historical context is radically different, but I would point out that the CIO did not begin as a split but as a group of unions who wanted to try doing some things differently. The AFL expelled them. The question is, Will the federation take advantage of the greatest opportunity since the Great Depression to respond to what the country needs the most, or won't we? I don't think there ought to be a split. … But the definition of insanity is doing the same thing over and over again and expecting a different result. …
Hm, interesting thoughts all around. One other thing to add, and this gets touched on very slightly earlier in the roundtable. My grasp of union history is somewhat fuzzy, but it seems, as Richard Freeman has demonstrated elsewhere, that labor density has historically grown in "spurts," during periods and under conditions that were very difficult to see and predict. So in World War I, for instance, developed countries needed the full cooperation of labor in order to mobilize and fight their splendid little war, and due to various resulting compromises, union density grew considerably. Likewise, union density grew during the Great Depression for obvious reasons, and during World War II because the government yet again needed cooperation. In Europe, union density grew considerably during the inflationary years of the oil crises (but not, note, in the private sector in the United States during the same period). Likewise, experts have tolled the bell for various unions before—George Meany thought public sector unions were doomed in the 1950s—only to be horribly wrong.
So it's very often hard to foresee what's going to happen, and to some extent these spurts have to do with changes in labor law and legislation, but to some extent they also have to do with contingent historical events and the emergence of new unions and new modes of operating. One of the factors that were hard to foresee in the 1930s and 1940s was precisely the rise of the industrial union, which really fueled the surge in unionism during the Great Depression. (In conjunction with New Deal legislation, but not wholly dependent on it.) Likewise, during World War I, some of the main driving factors were the rise of war labor boards and mandatory arbitration. So it's very easy to rely too much on lessons from the past, which makes the question posed above something of a misleading one.
Yet another Harry Potter skeptic converted! (I, too, was once one.) Echidne has been reading the books and has, as usual, interesting thoughts on the matter. One very crucial point she brings up is that the whole wizard economy seems very ill-formed; everyone's still on the gold standard (and a gold reserve whose price is apparently unaffected by supply and demand here on planet Earth). The economy also seems to rely very, very heavily on slave labor—something that only Hermione (and at one fleeting point in the sixth book, Harry) seems bothered by. It's all very unstable. In fact, as with the thesis that the Civil War wasn't really about slavery, I'm sure you could find some sort of economic rationale for Voldemort's de facto secession from the wizard world.
Via Tyler Cowen, I see that James Surowiecki is attacking, in the New Yorker, the myth that aid doesn't work. Good for him. As it happens, though, I wrote a similar article during the G-8 conference that's a bit less fluffy, and adds two other twists to this story: Besides the fact that aid isn't harmful, and can do good even in badly governed countries (and in the past has been addled by Cold War politics), one should also note that increased trade isn't nearly the panacea that people think it will be. In fact, some of the strongest solutions for pulling developing nations up out of poverty may involve neither trade nor aid, but things no one pays attention to. In particular, increased labor mobility and technology transfers can achieve quite a bit. Anyway, I don't link to MoJo articles all that often, but this one was pretty good, I thought, if a bit dry.
Here's a fun game. Tyler Cowen charted the evolution of his "favorite movie" over the years. This seems like an easy exercise, let's see... I was born in December 1981, so:
1985 – Big Bird Goes To China
1989 – Indiana Jones and the Last Crusade
Jan. '93 – Indiana Jones and the Last Crusade
Mar. '93 – Terminator 2
Apr. '93 – Indiana Jones and the Last Crusade
1999 – Indiana Jones and the Last Crusade
2005 – Indiana Jones and the Raiders of the Lost Ark
Surprise twist at the end, but let me tell you, I've revised my thinking dramatically since graduating college. Admittedly I have pretty boorish tastes in movies. Don't get me wrong, I think Persona is fantastic and the Bicycle Thief gets me teary-eyed every time I see it, but my approach is: if I wanted human drama and complex emotional interplay, I could always pick up a good novel. Alan Hollinghurst recently came out with something new, I see. That will do. Only the screen, however, can dole out the fast pace and thundering score that a rogue archaeologist with a whip and fedora truly deserves.
Well, I've always wanted to know this too. Jonathan Gruber and Michael Frakes do some research:
The strong negative correlation over time between smoking rates and obesity have led some to suggest that reduced smoking is increasing weight gain in the U.S.. This conclusion is supported by the findings of Chou et al. (2004), who conclude that higher cigarette prices lead to increased body weight. We investigate this issue and find no evidence that reduced smoking leads to weight gain. Using the cigarette tax rather than the cigarette price and controlling for non-linear time effects, we find a negative effect of cigarette taxes on body weight, implying that reduced smoking leads to lower body weights. Yet our results, as well as Chou et al., imply implausibly large effects of smoking on body weight. Thus, we cannot confirm that falling smoking leads in a major way to rising obesity rates in the U.S.
Can't seem to find the full paper, so I don't know exactly how they went about the study, but this seems like a very roundabout way of studying something that shouldn't be too hard to determine, no? Experiment group and control group? Oh I don't know, I'm just the blogger.
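Since I can't find the full paper, here's my rough guess at the shape of the exercise the abstract describes, sketched with entirely fake data: body weight regressed on the cigarette tax (rather than the price), with state fixed effects and a full set of year dummies standing in for "non-linear time effects." All the variable names and numbers below are invented; this is not the authors' actual data or specification.

```python
# Rough sketch of a state-year regression of body weight on cigarette taxes,
# with state fixed effects and unrestricted year effects. Synthetic data only;
# not Gruber and Frakes's actual data or specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
states = ["AL", "CA", "FL", "MA", "NY", "OH", "TX", "WA"]
years = range(1990, 2003)
df = pd.DataFrame(
    [(s, y, 0.2 + 0.05 * (y - 1990) + rng.uniform(0, 0.5)) for s in states for y in years],
    columns=["state", "year", "cig_tax"],
)
df["avg_bmi"] = 26 - 0.5 * df["cig_tax"] + rng.normal(0, 0.3, len(df))  # fake outcome

# C(year) gives a separate dummy for every year, i.e. fully flexible time
# effects; in practice you'd also cluster standard errors by state.
model = smf.ols("avg_bmi ~ cig_tax + C(state) + C(year)", data=df).fit()
print(model.params["cig_tax"])  # negative => higher cigarette taxes, lower body weight
```

Presumably the point of using the tax rather than the price is that taxes are set by legislatures, while prices can respond to the same forces that drive weight. That's my gloss, though, not theirs.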
Hmmm... Stanley Fish says that it's impossible to be a "strict constructionist," as Scalia wants to be, without imputing some sort of intention to the words of the Constitution. Ramesh Ponnuru says no, you can:
Scalia's point, I take it, is that if we want to know the meaning of, say, the First Amendment, knowing the Framers' views about religious freedom does not settle the matter. The illustration usually brought up here is that if a secret letter from James Madison turned up saying "what we had in mind was to prevent our new Congress from hiring chaplains," it wouldn't matter. What matters is what the ratifying public of the time understood the words of the amendment to mean.
Okay, but… I'm very far from an expert on constitutional history, but as far as I can tell, several scholars have pointed out that, for instance, the Civil War amendments were very likely kept purposefully vague so as to attract broad support. (Eric Foner is big on this idea.) In that sense, if different parts of the "ratifying public of the time" took words like "freedom" to mean different things, the Scalia project is quite shot. I highly doubt the ratifying public—even if it was only a small minority within the larger public—all agreed on what the words of various amendments meant. But that aside, if it was in fact the framers' intention to keep certain language vague and leave it to future generations to adapt for their own purposes, Scalia would be pretty clearly working against the spirit of the whole enterprise. How do you get around this? It's much harder to dismiss this sort of intention than the "secret letter" Ponnuru is talking about.
Ha, ha. Just kidding; not yet. But if it ever comes time to write the manifesto, I plan to kick it off with the following little set of statistics:
In the 1990s the National Institutes of Health's cholesterol guidelines noted that 13 million Americans could benefit from treatment with statins (i.e., drugs to lower your cholesterol).
In 2001, the guidelines were rewritten, and that number jumped to 36 million.
In 2003, the guidelines were revised again, and this time, as it happened, about 40 million Americans could really use these statins.
Why the rapid increase? Are cholesterol levels in the United States actually getting worse and worse? Are more and more people at risk of a heart attack? Hard to say. But whatever you do, don't look at this page noting all the financial ties between writers of the NIH guidelines and drug companies. Really, it's totally irrelevant that eight out of nine experts rewriting the guidelines in 2003 were paid consultants or paid speakers or paid researchers for Pfizer or Merck or GlaxoSmithKline or Bayer or other companies that would stand to benefit in a major, major way from new "official" recommendations that would expand the statin market to millions and millions of otherwise healthy Americans. Totally irrelevant. And we wonder why health care costs are so high. Maybe because they keep inventing diseases and conditions for us to go get treated for.
That's not to bash statins. I have no idea if 40 million people need them or 40 thousand or what. They probably do many heart attack victims a world of good—although it's funny, independent researchers mostly seem to think their benefits for the rest of us are wildly overstated. Huh. Also, since my brain's still untarnished by the latest glossy Newsweek article pushing the latest disease dreamed up in GlaxoSmithKline headquarters, I would guess that some of those billions spent on, say, Lipitor might be better spent on public health programs instead. Then again, any scientific study I could dig up on public health is very likely to be funded by the diet and fitness industries—they've already got Paul Krugman in their thrall, why not me? And so it goes, with new diseases concocted and commodified every which way we turn. Some say Michel Foucault is dead. I just think he got hired by Merck.
Anyway, I'm finishing up a book review on this topic right now, but I really, really, really have no idea what can be done about this problem. (And the cholesterol incident is just the tail of the whale here.) Independent, government-funded review boards would be nice to have—something akin to the National Institute for Clinical Excellence in the U.K.—anything that could tell us whether the latest "diagnosis" now hitting the newsstands is really all it's cracked up to be. I've heard that the Public Library of Science here in San Francisco is working on something like that. But they're just one lonely voice pushing back against the onslaught. Perhaps the health wonks among us can mull this problem over, while I ponder what it means when two of our nation's largest industries (health and defense) can essentially manufacture demand out of thin air. Free market, they call it. Baffling, I say.
What are the campaign issues of the future? Clay Risen thinks that Wal-Mart will be the mammoth in the mudhut come '06 and '08. Democratic operatives have been heard sharpening their spears even now. So Wal-Mart. I guess it'd be a good idea for me to have a grand opinion on the subject, and even better if that opinion was the correct one. Well I can't do that all in one post, but here's an attempt to list out, as fully as I can, both the pros and cons of Wal-Mart's existence. Feel free to list additional points or corrections, and I will add to and revise the list as needed.
Pros:
1. The low prices raise real wages for other people. (Wal-Mart's entry in an area can drive down grocery prices 15 percent.) For low-income families, groceries are a sizable share of the budget. [Update: See Ezra Klein's important dissent here; it seems Wal-Mart's "low prices" can be deceptive.]
2. The activity that Wal-Mart generates can benefit other nearby local shops. (Although this obviously isn't true if Wal-Mart is in an isolated rural-ish area, away from other shops, as is very often the case.)
3. In urban areas, the competition can help produce variety among local shops, forcing them to specialize and benefiting consumers.
4. Is the pay particularly horrible? The average wage at Wal-Mart is $10.77 an hour. The national average for the service industry, according to a 2002 BLS survey, is $9.77. Unions haven't made many inroads into retail generally. Starting pay at Wal-Mart for inexperienced workers is $7-8 an hour. This is well below living wage levels in many areas and pretty unacceptable, in my opinion. But perhaps it's less a problem with Wal-Mart per se and more a problem with the nature of pay and labor density in the entire industry. (Which means, true, that remedies will have to start with Wal-Mart.) It's also not clear that the "Mom 'n' Pop" stores being put out of business are paying much better than Wal-Mart.
5. Wal-Mart is just a temporary job for many. Some 44 percent of Wal-Mart's 1.4 million employees left for other jobs in 2003. Of course, if Wal-Mart uproots other businesses in the area, there may not be other jobs to go to.
6. Wal-Mart pays its managers 30 to 40 percent less than its competitors, a practice which could in theory help flatten inequality in the retail sector. (Maybe not; this is very unclear.) Meanwhile, Wal-Mart has a far lower percentage of college graduates among its managers than most companies, which narrows, a bit, the income gap between education levels.
7. As someone somewhere said, Wal-Mart probably did more than the Fed during the 1990s to hold down inflation. But I'm not sure how this works. If someone wants to make the case that Wal-Mart helped the Fed keep interest rates low, which therefore helped push unemployment down to 4 percent and hence push wages up, I'd be curious to see it.
8. Wal-Mart can, in theory, be a positive influence abroad—introducing more modern business practices, eliminating corruption, etc. There's some evidence that it has had this effect in China. (The flip side: it puts such pressure on its subcontractors and suppliers to keep costs down that the latter end up committing all sorts of labor violations in order to keep their Wal-Mart contract. Wal-Mart officials claim that they try to crack down on this, but see this post for skepticism on that front.)
Cons: (Here we go!)
1. Serious, serious labor abuses. (Off-the-clock work, lock-ins, running employees ragged.) This isn't just an accident. It's part and parcel of the whole system. Wal-Mart world headquarters expects labor costs to be cut by two-tenths of a percentage point each year. The only way to get there is to push people to work harder and cut corners.
2. Gender discrimination. One would hope this would be fixable—since basic gender discrimination is always inefficient—but then there's also pregnancy discrimination and discrimination against mothers, both of which are likely to be quite bad at Wal-Mart. (I don't see the store offering its clerks paid maternity leave, for instance.)
3. Yes, Wal-Mart's wages are particularly horrible. (See #4 in the pros list.) The average supermarket clerk makes $10.35 an hour. Sales clerks at Wal-Mart make $8.23 an hour, which translates into $13,861 a year (see the quick arithmetic after this list).
4. Union-busting. Enough said. And Wal-Mart has an effect on union-busting elsewhere. In 2004, unions had to make major wage and health concessions to supermarkets in California because of the threat of competition from Wal-Mart.
5. Wal-Mart was responsible for a bafflingly large part of the productivity boom of the 1990s. But most of the gains went to shareholders rather than workers. In fact, most of the "productivity gains" reaped by Wal-Mart probably shouldn't be counted as such. See Daniel Davies on this point.
6. A study by the San Diego Taxpayers Association found that Wal-Mart depresses wages for similar workers in the sector; in 2003 the total wage loss was estimated at between $105 million and $221 million. (But is this offset by the rise in purchasing power? And is this true in rural areas?)
7. Only 41 percent of Wal-Mart's workers are insured. Employees shoulder, on average, 42 percent of the cost of their health care, compared with 16 percent nationally. In theory, this should force Wal-Mart's competitors to follow suit. (But how many of these workers are already covered by their spouses or parents?)
8. Studies have shown that, for instance, one 200-person store can cost taxpayers $420,750 a year, due to all of the welfare programs that the Wal-Mart workers would qualify for. (Free lunches, Section 8 housing, federal tax credits and deductions, Title I expenses, S-CHIP, energy assistance.) But this figure is misleading—how much would all of these workers be making if Wal-Mart didn't exist? Presumably the welfare costs would still exist.
9. Wal-Mart can swamp other local businesses. Economist Kenneth Stone has found that, in small towns, retail sales collapse a few years after Wal-Mart enters. And it's not just other retail businesses that are hurt: because Wal-Mart imports goods in bulk, it can make it hard for local or new manufacturers to break into the business.
10. Is Wal-Mart good or bad for upward mobility? My guess is bad. In theory, both Costco and Wal-Mart fill their management ranks from people who started out on the floor. But Wal-Mart's turnover is so breathtakingly high—some 70 percent—that a given worker has a far smaller chance at rising up the Wal-Mart corporate ladder. But see #6 in the pros list—Wal-Mart does offer a better chance at promotion for those who didn't go to college.
11. Labor abuses in China. See #8 in the pros list.
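Since I threw a lot of numbers around up there, here's a quick back-of-the-envelope check on two of them: the clerk-pay figure and the per-store taxpayer cost. This is just my own arithmetic; the implied weekly hours are whatever the cited figures work out to, not something from the underlying studies.

```python
# Back-of-the-envelope check on two figures cited above (my own arithmetic).
hourly_wage = 8.23        # Wal-Mart sales clerk wage, dollars per hour
annual_pay = 13_861       # annual pay figure cited alongside it

implied_hours_per_week = annual_pay / hourly_wage / 52
print(f"Implied schedule: {implied_hours_per_week:.1f} hours/week")
# ~32.4 hours/week, so the annual figure assumes something short of a 40-hour week.

cost_per_store = 420_750  # estimated annual taxpayer cost of one 200-person store
workers_per_store = 200
print(f"Taxpayer cost per worker: ${cost_per_store / workers_per_store:,.0f}/year")
# ~$2,104 per worker per year in public assistance.
```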
That's all I can think of right now. Another thing to consider: how do different policy environments benefit or hamper Wal-Mart? In the 1990s Wal-Mart really took off. That was also a time when the federal minimum wage continued to decline in real terms, despite minor boosts in 1996 and 1997, and the Earned Income Tax Credit was expanded. Did that help Wal-Mart? (My thinking here is that, in theory, a higher minimum wage forces companies to pay low-wage workers more than they otherwise would, and the EITC allows them to pay those workers less than they otherwise would, although the actual impact of each seems difficult to calculate.)
Meanwhile, one of the arguments for boosting the minimum wage and/or labor density in the retail sector is that it would make it more difficult for Wal-Mart to keep improving its bottom line by slashing labor costs. In that case—and this is theoretical—the store might have to aim for productivity increases by other means, either by investing in its workers like Costco does or what have you. Then again, as I said, the Wal-Mart model thrived in the late '90s, when full employment was putting upward pressure on wages for the first time in a long, long while. Why was that so? One possible answer: the increase in wages at the low end of the pay scale was hurting Wal-Mart's retail competitors, who often pay their workers even less, more than it did Wal-Mart. Or maybe low oil prices did it. Hmm...