Come to think of it, there hasn't ever been a good pornography discussion on this site—and this post probably isn't it—but some of the facts laid out in this American Sexuality article on the subject seem worth tucking away for future reference:
From research and the testimony of women who have been prostituted and used in pornography, we know that childhood sexual assault (which often leads victims to see their value in the world primarily as the ability to provide sexual pleasure for men) and economic hardship (a lack of meaningful employment choices at a livable wage) are key factors in many women’s decisions to enter the sex industry. (For a good summary of this evidence see Margaret Baldwin’s article "Split at the Root: Prostitution and Feminist Discourses of Law Reform," 5 Yale Journal of Law & Feminism 47, 1992.) We know how women in the sex industry—not all, but many—routinely dissociate to cope with what they do. We know that in one study of 130 street prostitutes, 68% met the diagnostic criteria for post-traumatic stress disorder. (For details on this, see the work of Melissa Farley, "Prostitution, Violence, and Post-Traumatic Stress Disorder.")
Clip. Good magazine, too, although I'm a little dismayed that left-wing critiques of Queer Eye for the Straight Guy haven't really advanced much in the past two years.
So Samuel A. Alito, Jr. will be the new Supreme Court dude. Emphasis on "dude". Or emphasis on "fascist". Whatever. Anyway, I've been reading his infamous dissent on Planned Parenthood v. Casey, the one in which he upheld a spousal notification law for abortions, and it's important to hash this out. The conservative defense of Alito will be that it wasn't his job to decide whether the law was good public policy or not, merely to decide whether it was constitutional; and on the latter, he was upholding what he thought were the precedents on abortion at the time. That's not implausible; see this passage:
Taken together, Justice O’Connor’s [earlier] opinions reveal that an undue burden does not exist unless a law (a) prohibits abortion or gives another person the authority to veto an abortion or (b) has the practical effect of imposing "severe limitations," rather than simply inhibiting abortions "to some degree" or inhibiting "some women."
In Alito's defense, it's sometimes hard to figure out exactly what Sandra Day O'Connor intends in her opinions—often only she knows for sure—and prior to Planned Parenthood, the Supreme Court had placed restrictions on abortion that, while not "severe," probably did prevent some women from getting abortions. So Alito's ruling partially stems from previous Supreme Court sloppiness, it seems. Meanwhile, the plaintiffs who opposed the spousal notification law had not shown that the 5 percent of women who don't notify their husbands would in fact be harmed by the new law. (The law leaves an out for women who have "reason to believe that notification is likely to result in the infliction of bodily injury upon her.") On one level, then, Alito's opinion is sort of reasonable.
But on another level, it's not. It's ridiculous. It's dangerous. It's wrong. According to Alito, because only a small number of women might face an "undue burden" (and even that isn't known for sure), the law is just fine and dandy? What kind of legal principle is that? The Supreme Court obviously disagreed with Alito, noting that regardless of whether 95 percent of women would be unharmed by the law, "[l]egislation is measured for consistency with the Constitution by its impact on those whose conduct it affects." And that includes women potentially affected.
This all matters very, very much because in an upcoming abortion case, Ayotte v. Planned Parenthood, the Supreme Court will decide just this sort of dry procedural issue: whether litigants need to show that an abortion restriction places an "undue burden" on women in the abstract—and is therefore unconstitutional—or must show that it places an "undue burden" in a particular case. Alito would appear to side with the latter view, and a ruling this way would make it very hard for women to challenge abortion restrictions (litigants would have to show that parts of the law affect them personally), and the net effect would be that Roe v. Wade, for all practical purposes, would be crippled—states could leave restrictions on the books for years before they were ever challenged.
The political issue here is that over 70 percent of Americans support spousal notification laws, and if Democrats try to fight on that terrain, they could well lose. [EDIT: Sorry, I didn't intend this to mean that they shouldn't even try to convince people otherwise; they should.] But there's so much more at stake here.
"Starve the beast"—Grover Norquist's theory that cutting taxes and running massive deficits will somehow force Congress to rein in spending—obviously looks flimsy after the high-spending Reagan and Bush years. Even the Cato Institute agrees. But via AngryBear, this old Daniel Shaviro article makes the point that cutting taxes and running huge deficits not only fails to curb spending, but also increases the influence of interest groups:
The increased fiscal gap also makes future government policy far less predictable. Having a looming debt of that size will stir every interest group in Washington to try to influence future policy. It won't be possible to take any government commitment for granted for more than a few years. With even Social Security and Medicare likely to be on the chopping block eventually, no group or lobby will be able to rely on political inertia to protect what it now has. That is an enviable state for members of Congress set on gaining campaign funds, but a worrisome situation for the rest of us.
Judging from the past four years, this seems anecdotally true—out-of-control deficits lead not only to more spending, but worse spending, tilted heavily towards lobbyists and other interest groups. Alternatively, one could look at it this way: when various political interests see certain groups rewarded with tax cuts, they feel they should be rewarded too. So the squabbling begins. Meanwhile, so long as government spending is being paid for with borrowing and future taxes rather than present-day taxes, it's very easy for Congress to spend more than it otherwise would. (Which is why, of course, big-government liberals have usually opposed balanced budgets.) Ultimately, it's probably easiest to restrain government spending after tax increases—since that sets a general tone of fiscal austerity, and perhaps makes it easier for Congressmen to turn down spending requests. Divided government probably helps too.
Obviously, though, from a Republican perspective, Bush- and Reagan-style tax cuts do accomplish two very important things: they redistribute wealth upwards—rather than tax the rentier class to pay for spending, the government just borrows from them instead and pays them interest for their generosity (during the Reagan years, that interest was paid with payroll tax hikes on workers); and second, the tax cuts put Democrats in a bind by forcing them to raise taxes if and when they come to power, which hurts them politically, and forces the "responsible" party to rein in its own preferred spending programs (the tax-cutting party, meanwhile, can spend more freely). It's all very clever, at least so long as deficits don't collapse the economy and prompt a socialist takeover of government. But that looks increasingly unlikely.
Hm. What to write about? Scooter Libby? Nah, not dull enough. Oh, what about John Negroponte's new "National Intelligence Strategy," released last week. Yes, that's the dullness we need. Important, though, especially since the new report doesn't have much to say about one of, I think, the United States' most persistent intelligence weaknesses over the past decade. More on that in a bit. For now, it's enough to note that Negroponte's report is full of resolutions to coordinate this and integrate that and improve the other, and it's all focused, naturally, on using intelligence to combat terrorism, stop WMD proliferation, and, uh, promote democracy. Apart from the last, which is bizarre, this is all pretty uncontroversial and likely useful—up to a point.
The idea that we need to "improve and integrate" our intelligence is always a nice platitude, but it's worth stepping back and asking how we got to where we are. In this day and age, curiously, every major foreign policy move needs to be backed, it seems, by ultra-solid intelligence—as if to give the public a veneer of objectivity for what are often simply judgment calls, hunches, guesses. That alone puts a very high degree of natural political pressure on what is otherwise an inherently imprecise process. The Bush administration may have been particularly flagrant about "stove-piping" CIA reports during the march to war in Iraq, true, but any policymaker who gets it in his or her head to pursue a course of action will have to do the same to some degree or other, because he or she will want backing from the intelligence agencies, and they can often provide no such thing.
Now Congress can always re-jigger the ways in which the various agencies are set up, as it recently did, and maybe that will alleviate some of the pressure from here, but the implicit "politicization" of intelligence can never go away completely. In a better world, policymakers would just acknowledge that intelligence is highly imperfect even in the best of times, and tell voters that ultimately, their national security decisions are primarily judgment calls, rather than obvious conclusions borne of intelligence. So many charades could be dispensed with. (Including the idea that ordinary citizens aren't informed enough to form an opinion on foreign policy decisions—a fiction that should have been buried by the Iraq war.) This cultural change will never happen, of course. So the real problem is that foreign policies that put an impossibly high burden on intelligence—the Bush doctrine and preventive war come to mind—will likely fail more often than not.
(As a side note, if only because I don't know where else to stick this, Chaim Kaufmann's "Threat Inflation and the Failure of the Marketplace of Ideas" is one of the better essays on intelligence failure during the run-up to the Iraq war; he notes, among other things, that the tendency of experts to avoid treating what they think they see as a scientific hypothesis—which would entail making predictions—was an especially ingrained failure. No amount of shuffling or re-jiggering will fix this.)
At any rate, these are badly disjointed thoughts, sorry, but I promised to say a bit about one of the most glaring and overlooked types of intelligence failure throughout U.S. history: namely, our poor ability to predict how other countries—or other people, period—will react to our actions abroad. Robert Jervis has written a few papers on this, but to put it another way, U.S. policymakers rarely seem to be able to figure out how other countries see the world, a blind spot which, during the Cold War, was more serious than the various mistaken analyses about missile gaps or mineshaft gaps or the like. The problem is that this sort of "empathy" is very difficult to improve—trying to figure out the near-infinite set of calculations and beliefs other actors might have will always be close to impossible.
A few examples from history: In 1950, the U.S. failed to anticipate that North Korea, with Stalin's backing, would invade the South—at the time, Soviet strategy was remarkably opaque. More recently, in 1994, the Cédras junta in Haiti for some reason didn't take the Clinton administration's warnings to step down seriously until an invasion force was actually in the air. (Did they not think Clinton meant it? Why?) Ditto a few years later, when the Clinton administration couldn't understand why Milosevic wouldn't back down from Kosovo in the face of NATO threats. Nor did the Bush administration make any apparent attempt to understand why, in 2002, Saddam might have been acting the way he did—for instance, keeping the status of his WMDs ambiguous to fool Iran. But so long as the U.S. has a poor handle on the beliefs and calculations of other world leaders, especially its adversaries, coercive diplomacy will tend to fail. (And we haven't even touched on terrorist groups....)
Check out the list of potential new nanobiotechnology weapons under development by the U.S. military. Let's see, we've got: ultralight body armor; "artificial muscles" built of nanomaterial; nanotech sensors capable of detecting individual molecules; camouflage suits that automatically heal a wounded soldier. It's every adolescent's comic-book fantasy! Now all we need to do is start a war or two to test this stuff out…
No, really, it's pretty stunning to see how much research money is poured into weapons. Over a third of NIAID's basic research is now in biodefense, and that's a growing share of a shrinking research budget. "Biodefense" is much like missile defense, only infinitely more lunatic, and in practice ends up creating ever more deadly biological weapons—necessary to test out the defenses, see—potentially kicking off a bioweapons arms race. Meanwhile, there's no money left for flu vaccines, or much of anything else. We are insane. Human beings are insane. But at least we'll have cool armor.
In the New Republic today, Clay Risen has a smart analysis of the infamous Wal-Mart memo that surfaced yesterday—which described ways the company could reduce its health care costs while appearing to care more about its workers:
The memo is the result of a study carried out in coordination with McKinsey, the elite consulting firm--and it shows in its fantastic grasp of [Wal-Mart's] numbers and abysmal conception of the workers who make them possible. One proposal would replace the current 401(k) program, into which the company puts a fixed percentage of the employee's wage, with a matching program, in which the company's contribution is equal to the employee's (this on top of the proposed cut in company contributions, from 4 percent to 3 percent). From a cost-savings point of view, this is a brutally efficient strategy--after all, the average Wal-Mart employee makes $17,500 a year. How many are going to set aside 3 percent of that for retirement? What's amazing, though, is that the memo's author, Susan Chambers, seems to believe that employees would actually like this reduction in benefits, because, for those who can somehow afford to take full advantage, it "would help Associates better prepare for retirement."
Then there is the proposal to shift all employees into health-savings plans, replacing traditional insurance with tax-free bank accounts in which both employees and the company set aside money; they then use that money to pay for doctor visits, prescriptions, and so on. Again, from a coldly rational point of view, this makes certain sense: The more financial responsibility employees bear in their health-care costs, the less they are likely to spend. The problem is that, again, poorly paid employees are unlikely to make the sort of contributions necessary to cover expenses. Moreover, it's much easier for the company to quietly adjust its own contributions to employee health downward, a fact sneakily acknowledged by the memo (though instead of proposing a check it merely recommends more p.r.: "Wal-Mart will have to be sophisticated and forceful in communicating this change").
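Risen's arithmetic is worth making concrete. Here's a minimal back-of-the-envelope sketch in Python: the $17,500 average wage and the 4-to-3 percent contribution figures come from the memo as quoted above, but the 30 percent participation rate under the matching plan is purely my own hypothetical, since the memo gives no such number:

```python
# Back-of-the-envelope: why switching a 401(k) from a fixed company
# contribution to a matching contribution cuts Wal-Mart's costs.
# Memo figures: average wage $17,500; contribution cut from 4% to 3%.
# The 30% participation rate under matching is a made-up assumption.

AVG_WAGE = 17_500

def fixed_plan_cost(rate=0.04):
    """Old plan: the company contributes a fixed percentage of wages
    for every employee, whether or not the employee saves anything."""
    return AVG_WAGE * rate

def matching_plan_cost(rate=0.03, participation=0.30):
    """New plan: the company matches only what employees contribute
    themselves. Workers who can't spare 3% of $17,500 cost it nothing."""
    return AVG_WAGE * rate * participation

print(f"Fixed 4% plan:    ${fixed_plan_cost():,.2f} per employee per year")
print(f"Matching 3% plan: ${matching_plan_cost():,.2f} per employee per year")
```

Even under a generous participation assumption, the matching plan costs the company a fraction of the fixed plan, which is the "brutally efficient" point: the savings come precisely from workers too poor to contribute.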
That's the crux of it: Wal-Mart will use some nifty gimmicks to slash its workers' health and retirement benefits and then just pretend that this counts as an improvement. Ultimately, of course, this won't work. Wal-Mart's critics have bullshit detectors like few other groups of people on the planet, and always, always, always assume the worst about the store. The company will never appease its "well-funded and well-organized" attackers until it actually starts offering substantial benefits for workers. Although, do note, Wal-Mart executives are probably paranoid that the critics want to destroy the company altogether, rather than merely improve the lives of its workers, so maybe Wal-Mart thinks that there are no steps ever worth taking—because its enemies will never be appeased. Surely it doesn't help when lunatic lefties start writing posts like "Abolish the Corporation," either.
Alternatively, of course, Wal-Mart could solve its problems by lobbying for some sort of government-run health insurance, which would relieve the company of the burden of covering workers in the first place. It probably will end up doing this, although it won't lobby for single-payer, but rather the GOP's plans for government-financed Health Savings Accounts, high-deductible insurance, and tax credits, along with a phase-out of the employer-health tax deduction; steps that I think would be very bad for actual people, but on the other hand would let Wal-Mart and other big companies wash their hands of handling health insurance without having to pay taxes for some sort of single-payer system. We'll see.
No larger moral lurking here, but Dexter Filkins' New York Times Magazine story on Iraq from last weekend was a really good read. Since the topic of the hour seems to be whether the occupation was doomed from the start or could have succeeded with a more competent helmsman, these four paragraphs should do the trick:
The tough tactics employed by Sassaman's battalion had their effect. Attacks in the Sunni villages like Abu Hishma, wrapped in barbed wire, dropped sharply. And his men succeeded in retaking Samarra. Winning the long-term allegiance of the Iraqis in those areas was another matter, however. If many Iraqis in the Sunni Triangle were ever open to the American project - the Shiite cities like Balad excepted - very few of them are anymore. Majool Saadi Muhammad, 49, a tribal leader in Abu Hishma, said that he had harbored no strong feelings about the Americans when they arrived in April 2003 and was proud to have three sons serving in the new American-backed Iraqi Army. Then the raids began, and many of Abu Hishma's young men were led away in hoods and cuffs. In early 2004, he said, Sassaman led a raid on his house, kicking in the doors and leaving the place a shambles. "There is no explanation except to humiliate," Muhammad told me. "I really hate them."
In retrospect, it is not clear what strategy, if any, would have won over Sunni towns like Samarra and Abu Hishma. Crack down, and the Iraqis grew resentful; ease up, and the insurgents came on strong. As Sassaman pointed out, the Americans poured $7 million of reconstruction money into Samarra, and even today, the town is not completely under American control.
But there is another reason American commanders shy from using violence on civilians: the effects it has on their own men. Pittard, the American commander in Baquba, says that he was careful not to give his men too much leeway in using nonlethal force. It wasn't just that he regarded harsh tactics as self-defeating. He feared his men could get out of control. "We were not into reprisals," Pittard says. "It's a fine line. If you are not careful, your discipline will break down."
In most of the 20th century's guerrilla wars, the armies of the countries battling the insurgents have suffered serious breakdowns in discipline. This was true of the Americans in Vietnam, the French in Algeria and the Soviets in Afghanistan. Martin van Creveld, a historian at Hebrew University of Jerusalem, says that soldiers in the dominant army often became demoralized by the frustrations of trying to defeat guerrillas. Nearly every major counterinsurgency in the 20th century failed. "The soldiers fighting the insurgents became demoralized because they were the strong fighting the weak," van Creveld says. "Everything they did seemed to be wrong. If they let the weaker army kill them, they were idiots. If they attacked the smaller army, they were seen as killers. The effect, in nearly every case, is demoralization and breakdowns of discipline."
Steven Greenhouse's New York Times article today on how Wal-Mart is trying to pare down its health-care costs makes for depressing reading alongside Time's new cover story on those ever-shrinking corporate pension funds. The short story: Companies don't want to be on the hook for medical or retirement costs. Wal-Mart's first order of business is to keep its profits rising, which means insuring its employees as little as public opinion will allow. (Ensuring high turnover helps.) Meanwhile, managers across America have been raiding and overstating their company's pension holdings, while forking over millions to takeover artists and CEOs. Congress has let them get away with it by promising, dubiously, to take care of the retirees if things go badly. The abstract term for this is "moral hazard"; the more concrete term, I believe, is: "retirees recycling cans to avoid eating garbage." But fear not: as long as shareholders are happy, capitalism is working. We know because they tell us so.
How did we get into this mess? I mean, the quick 'n' dirty answer is that in the postwar era, short-sighted union leaders bargained with employers for corporate benefits rather than stumping en masse for universal health care and super-Social Security. But why, in this day and age, are companies still forced to worry about health care and retirement funds? They shouldn't have to do it; the system only encourages Wal-Mart-esque behavior, and makes it hard for businesses to compete globally. Luckily, good liberals have an exit strategy: the government should handle health care and retirement, so that corporations can get back to what they do best: focusing on profit-making. GM would no longer have to operate as a "social insurance system that sells cars to finance itself," and the business of America could be business once again. Only the state can free the market from these heavy chains, say liberals. On most days, I'd agree with this. But is all this really the best way to go, or only yet another short-sighted solution to a longstanding problem? Let's digress for a bit.
There is, of course, no such thing as a truly "free" market, only types of markets designed by the state, and it's hard to figure out what the ideal design really is. The framers of the Constitution never predicted that the corporation as we know it would ever exist—at the time of the American Revolution, there were only franchises chartered by legislatures for public purposes. The Jacksonians, aiming to curb corruption, later revised the corporate charter and opened it up to all comers, but never intended to exempt corporations from the common law or social responsibility. It wasn't until the late 1800s that New Jersey changed all that, rewriting its charter laws to allow corporations to do whatever they damn well pleased. Soon all the major corporations were flocking to New Jersey, and states were forced to compete with each other for lax charters (thanks especially to several Supreme Court decisions protecting charters and declaring corporations "persons" entitled to full constitutional protection—including out-of-state recognition). Toss in decades of lavish federal subsidies and voila, we've got corporate America. Hence the modern "free market" that conservatives have fought so hard to protect against "state intrusion."
True legal originalists would overturn Santa Clara County vs. Southern Pacific Railroad Company—which gave corporations 14th amendment protection—as an unconscionable act of judicial activism, though don't expect anything along these lines today (nor, necessarily, should there be). But that's just to say that ultimately nothing stops citizens from revising corporate charter law, if needed. Charters aren't sacred. That leaves the question of whether to do so. The situation we have today, in which firms like Wal-Mart and GM are obliged to cater to shareholders (in theory at least) but somehow got lumped with these other profit-draining social responsibilities, like paying for prescription drugs, is untenable. It's no surprise, then, that, as Time details, companies are raiding and shedding their pensions and getting the government to bail them out of their obligations to retirees whenever possible. It's what they're "supposed" to do.
Again, one response is to say, "Enough, enough" and just make the government assume primary responsibility for all these profit-draining obligations. Set up basic universal healthcare and mandatory savings accounts, and let corporations offer extra benefits only insofar as they need to compete for workers. Taxpayers will foot the rest. That way, unions can stop haggling with employers over premiums and deductibles and focus instead on wages and workplace conditions. It's a far more rational and stable system than what we have now, true. But why we think it will be any more sustainable, over the long haul, than the postwar bargain struck fifty years ago is a good question. Isn't it likely that, even if we had a single-payer health care system and "mandatory savings accounts", companies would still offer extra benefits to workers, only to blow the whole thing up down the road when they decide it's no longer profitable to support increasingly long-living retirees?
Alternatively—and you see this proposal at WTO protests or in Multinational Monitor from time to time—we could start rewriting corporate charters, drastically, and require companies to worry about this stuff. Always seems iffy, but you know. Perhaps in the end we could even hack away at a good deal of government regulation; there'd be no need if companies were beholden to civil authority, as was the case in the 19th century, and required to meet their social responsibilities in whatever manner they find most efficient. What would that mean for health care? I don't know. State-run health insurance might still probably be the best way to go. Fine. Perhaps charter revision would prove far more useful in other areas, like in environmental conduct. (I'm not sold on the current "corporate responsibility" trend underway.) But corporations were originally designed by the people and for the people; why not have them act that way? "Ah," one will say, "but that sort of thing would never work in The Globalized Economy™; companies couldn't compete!" Or: "Fool! We tried this already; it was called 'Fascist Italy.'" Yeah, yeah. Still, the idea that companies should just do their thing while government can swoop in later and pick up the mess looks less and less appealing by the day.
Question of the day: Is homework even necessary? Ayelet Waldman demands answers:
I also learned from professor Cooper -- aka the homework guru -- that there is no correlation between how much homework young children do and how well they comprehend material or perform on tests. [n.b., see also this study.] Why? … Because their attention spans are just too short -- they can't tune out external stimuli to focus on material. Second, younger children cannot tell the difference between the hard stuff and the easy stuff. They'll spend 15 minutes beating their heads against a difficult problem, and leave themselves no time to copy their spelling words. Finally, young children do not know how to self-test. They haven't the faintest idea when they're making mistakes, so in the end they don't actually learn the correct answers. It isn't until middle school and high school that the relationship between homework and school achievement becomes apparent.
So why the hell do Zeke and I have to spend every afternoon gnashing our teeth… The reasons, Cooper says, extend beyond Zeke's achievement in this particular grade. Apparently, by slaving over homework with my son, I am expressing to him how important school is. … When younger kids are given homework, Cooper says, it can also help them understand that all environments are learning ones, not just the classroom. For example, by helping calculate the cost of items on a trip to the grocery store, they can learn about math. The problem is, none of my children's assignments have this real-world, enjoyable feel to them. My children have never been assigned Cooper's favorite reading task -- the back of the Rice Krispies box.
The final, and perhaps most important, reason to assign homework to young children, says Cooper, is to help them develop study habits and time management skills that they'll need to succeed later on in their academic careers. If you wait until middle school to teach them these skills, they'll be behind. I suppose this makes sense. Spending their afternoons slaving over trigonometry and physics will come as no surprise to my kids. By the time they're in seventh grade they won't even remember what it's like to spend an idle afternoon.
I guess that settles that: Everyone go out and play. Seriously. Also, let me call bullshit on Dr. Cooper and doubt very much that homework "help[s children] develop study habits and time management skills." Generalizing from a single experience here, when I was in elementary school, I remember very distinctly cutting corners on virtually all of my homework. Math problems would get scribbled frantically in pencil on paper during homeroom. (In fact, what little creativity I have owes entirely to those ingenious, sweaty-fingered minutes spent trying to make it appear as if I had thought very hard about, say, problem #23(a) but just couldn't get the answer.) The spelling workbook, I quickly discovered, didn't need to be filled out at all—if you worried about grades you could always recoup your losses by getting the "bonus" spelling words on quizzes right. "Homework" always denoted something to do as little of as physically possible. Ever since, I've always had terrible study skills, and while I blame my own laziness, all that useless homework gets part of the blame.
But let's do Waldman one better and say it flat out: homework is most likely evil. Yes, evil. Any educational system that relies on parents at home to help with the "learning process" will only end up perpetuating inequality, as long as some parents can help their kids and some cannot; as long as some parents can speak English and some cannot. And homework, for all its uselessness, is far more likely to put undue stress on family life than anything else. Of course, let's also be honest, the whole point of public school isn't to turn students into well-educated citizens but rather to produce good consumers and dutiful worker bees—people with short attention spans who follow authority, care deeply about status, and will attend with all due diligence to humiliatingly pointless tasks. Get used to working overtime, kid, you'll need it. In that regard, homework is indispensable.
The New York Times has a very good piece today on the "brain drain" phenomenon among developing countries, wherein the most talented and educated workers in the Third World emigrate to the United States or Europe or other wealthy countries, thus leaving their home countries with very little in the way of human capital, and no way to exit the vicious cycle that caused people to leave in the first place:
Most experts agree that the exodus of skilled workers from poor countries is a symptom of deep economic, social and political problems in their homelands and can prove particularly crippling in much needed professions in health care and education.
Jagdish Bhagwati, an economist at Columbia University who migrated from India in the late 1960's, said immigrants were often voting with their feet when they departed from countries that were badly run and economically dysfunctional. They get their government's attention by the act of leaving….
But some scholars are asking whether the brain drain may also fuel a vicious downward cycle of underdevelopment - and cost poor countries the feisty people with the spark and the ability to resist corruption and incompetent governance.
Remittances back home from expatriate workers make up some of the difference—and these payments are usually spent more effectively than foreign aid—but not enough. Interestingly, the "powerhouses" of the developing world—China, India, Indonesia, Brazil—don't suffer from brain drain, with less than 5 percent of their skilled citizens living in OECD countries.
Some suggest that OECD countries should restrict skilled immigration. One response would be that in some sense we already do; strict licensing requirements here in the United States already put up staggeringly high informal tariffs on the importation of doctors, lawyers, economists, and other professionals. Quick example: Several years ago the federal government paid New York hospitals $400 million to train fewer doctors out of concern for "oversupply"; blue-collar protectionists never had it so good. These barriers, by the way, dwarf our rather small tariffs on goods that "free traders" tend to worry so much about. But that's only part of it. On the other hand, the United States, Britain, Canada, and Australia really do actively seek out many other sorts of skilled workers from abroad, especially in more technical fields, and this seems to hurt developing countries the most.
So what to do? Only a handful of countries have been successful in luring their émigrés back home. Bhagwati has suggested that developing countries should tax their expatriates. Creating networks among entrepreneurs might offer one solution—I know of at least one example in Latin America where the government sets up links between researchers abroad and workers at home to share knowledge. Set up something like Craigslist for really smart expatriates. Ultimately, the best thing to do would be to figure out how to get the poorest countries in the world to start growing—just as China, India, Indonesia, and Brazil have done—but the first person who figures out a foolproof way to do that will get a very nice prize indeed.
Over the years myth tended to obscure the truth about Mrs. Parks. One legend had it that she was a cleaning woman with bad feet who was too tired to drag herself to the rear of the bus. Another had it that she was a "plant" by the National Association for the Advancement of Colored People.
The truth, as she later explained, was that she was tired of being humiliated, of having to adapt to the byzantine rules, some codified as law and others passed on as tradition, that reinforced the position of blacks as something less than full human beings.
"She was fed up," said Elaine Steele, a longtime friend and executive director of the Rosa and Raymond Parks Institute for Self Development. "She was in her 40's. She was not a child. There comes a point where you say, 'No, I'm a full citizen, too. This is not the way I should be treated.' "
Right. Similar "caveats" (Parks was an NAACP plant!) seem to be wending their way through the internet, and I'm still not sure what the "point" of these myths is; in truth, they don't matter very much. Yes, Parks was handpicked—agreed to be handpicked—by civil rights leaders to become the poster child for the Montgomery bus boycotts. So what? That's always made her even more of a hero, I think: to have agreed to set her life aside and stand at the forefront of a movement. From the obit: "Her act of civil disobedience, what seems a simple gesture of defiance so many years later, was in fact a dangerous, even reckless move in 1950's Alabama. In refusing to move, she risked legal sanction, even harm." To put it lightly. No amount of mythmaking can denigrate that.
As many people know, nine months before Parks refused to move, a fifteen-year-old—fifteen!—named Claudette Colvin did much the same thing on a Montgomery bus; the case she ended up filing in court along with three other women, Browder v. Gayle, eventually became the one in which the Supreme Court struck down bus segregation. Initially, the NAACP wanted to organize a boycott around Colvin's case, but backed off because they didn't think she made for a suitable enough poster-child—Colvin was allegedly several months pregnant, and "prone to outbursts." Or perhaps the timing just wasn't right—mass movements are always sensitive to timing. (Baton Rouge had staged the first bus boycotts two years earlier, but that had been forgotten.) Parks, as a member of the NAACP, was Colvin's mentor, and sat in on the decision of whether to boycott after the younger girl was arrested, and was eventually inspired by her example to do the same nine months later. That this was how a movement sprouted—with two women inspired by each other—is no less sweeping a story than the traditional tale of one brave person sparking a wildfire.
Presumably those civil rights leaders were right that the nation needed to see Rosa Parks—"one of the finest citizens of Montgomery"—at the head of the boycotts rather than Colvin, who might more easily have been slimed by reactionaries who think a movement can be discredited by attacking the private lives of the people who lead it. Not much has changed in the last fifty years, in that regard. At any rate, none of this can minimize what Parks did; that wouldn't be possible.
John M. Owen IV has a Foreign Affairs review of Electing to Fight: Why Emerging Democracies Go to War, which argues exactly what the title suggests:
According to Mansfield and Snyder, in countries that have recently started to hold free elections but that lack the proper mechanisms for accountability (institutions such as an independent judiciary, civilian control of the military, and protections for opposition parties and the press), politicians have incentives to pursue policies that make it more likely that their countries will start wars. In such places, politicians know they can mobilize support by demanding territory or other spoils from foreign countries and by nurturing grievances against outsiders. As a result, they push for extraordinarily belligerent policies.
Even states that develop democratic institutions in the right order—adopting the rule of law before holding elections—are very aggressive in the early years of their transitions, although they are less so than the first group and more likely to eventually turn into full democracies.
The historical record bears this out, it seems. Owen wonders if, on this theory, "a democratic Iraq [will be] no less bellicose" than Saddam Hussein's regime, as various factions in the near future "compete for popularity by stirring up nationalism against one or more of Iraq's neighbors." This doesn't seem so implausible—I could see an Iraqi government with a large Sadrist presence getting all up in some neighbor's face; Jordan, perhaps—but it does sort of seem like the least of Iraq's concerns right now. On the other hand, a rapid push for democratization in the Middle East—if and when it ever comes—would make this sort of chaotic outcome all the more likely. But as Josh Marshall once suggested, perhaps this was the plan all along.
This isn't really the place to come for Federal Reserve commentary, but maybe I can provide a few knee-jerk lefty complaints about the new Fed chief, Ben Bernanke. He's undoubtedly a smart guy, and all the center-left blogs like him, but this just looks like more of the same. He's a fan of "formal inflation targeting," eh? As best I can tell from his 1999 spat with James K. Galbraith, Bernanke doesn't take this to mean that the Fed should sacrifice everything else under the sun—including employment growth—at the altar of Always Low Prices, but Gerald Epstein argues here that that's what inflation targeting tends to mean in practice. That inflation-obsessed monetary theorists in the U.S. wrongly insisted that the rate of unemployment could never go below 6.5 percent during the 1980s, letting wages stagnate and poverty rise, makes Scooter Libby's high crimes and misdemeanors look rather flimsy in comparison.
Moreover, Epstein argues, moderate rates of inflation, up to about 20 percent, "have no predictable negative consequences on the real economy," so perhaps the Fed obsession is misguided after all. As far as I can tell, no one seems to know for sure whether or not inflation would hurt the poor, but that's probably not the question to ask. Instead, let's debate: what sort of monetary policy would be better for the least well-off, and the rest of us? Or rather: Why not have the Fed stop fretting about inflation—within limits—and instead focus on promoting full employment, investment, and GDP growth? Good question. The answer is to follow the money:
One likely explanation is that a focus on fighting inflation and keeping it low and stable is in the interest of the rentier groups in these countries. Epstein and Power (2003) present new calculations of rentier incomes in the OECD countries supporting the view that in many countries, higher real interest rates and lower inflation increase the rentier shares of income.
Ah, rentiers. The argument against Epstein, I take it, is that theoretically a central banker just can't use inflation to boost employment because people aren't dumb, they'll soon catch on to what the bank's doing and plan accordingly, nothing will change when inflation strikes, and soon we're on the path towards stagflation. Hence the virtues of a hawk like Greenspan—or Bernanke. In reply, the dying herd of old Keynesians might say eh, this isn't really a concern, since the real inflationary dangers come not from full employment, which is usually a good thing, but from stagnant growth, since during a slowdown monopolistic enterprises will start raising prices to recoup their fixed costs. Certainly Big Pharma and Big Insurance have been doing just that recently, so score one for the dying herd.
I'm not even fractionally smart enough to know who's right in all of this, so I'll just leave it at that and admit that my bias is towards Epstein. His suggestion for "real targeting" makes sense on the surface, although for the Fed to be truly democratic, the whole institution itself will probably have to be rejiggered so that ordinary citizens get actual input into central bank decision-making. That obviously won't happen in my lifetime, but surely the least we can do is be bitter about it, no?
I was flipping through a copy of National Geographic Adventure yesterday, and figured, hey, some of this stuff is worth linking to on the ol' blog. The cover feature's about a guy who backpacked along some or all (I forget) of the Great Wall of China. Some good factoids in there: The GWoC took 1800 years to build, you can't actually see it from space, and the reason that the Mongols of old would get so ornery and conquer stuff every now and again was because Mongolia is prey to the occasional freak super-blizzard, called a zud, that would wipe out all their livestock. You'd be ornery too.
The other good story was about how climate change has made it difficult for grizzly bears in ANWR to find food these days, so now they're out for blood... human blood! No, really, they never used to attack people, but now they do. Best part comes at the end when, shortly after being chased by a bear, a once-fuzzy-wuzzy environmentalist vows to go on a shooting spree the next time he sees one. Cool pictures, too.
Oh hilarious. Here's the much-mocked Maggie Gallagher's take on the real problem with legalizing gay marriage:
If the principle behind SSM is institutionalized in law… then people like me who think marriage is the union of husband and wife importantly related to the idea that children need moms and dads will be treated in society and at law like bigots.
Awww… poor thing. Sign gay marriage into law and suddenly people might not be allowed to gay-bash on the radio anymore, for fear of sounding like bigots and having their broadcast licenses revoked (I really don't think she needs to worry here); and future schoolteachers will brainwash their students into thinking that the gay rights debates of yore pitted a few noble crusaders for equality against a wall of old-fashioned and mostly stodgy bigots. Liberal elites can be so cruel! Really, though, I wanted to comment on this somewhat less goofy paragraph:
The most important fault line in the marriage debate is between a) people who think SSM will help a small number of gay couples and not affect anyone else and b) people like me who think this is going to change fundamentally the nature of marriage.
Is that really the fault line? Neither of these propositions is testable unless we just go for it and legalize gay marriage—or, alternatively, we could just look at Europe's experience and note that Option A looks like the likely result. Alternatively, though, one could throw in a third option—that gay marriage will change marriage, yes, but for the better. I don't see why this argument's any less plausible than the other two. Insofar as legalizing gay marriage can send out a signal that being married is preferable to not being married, it could easily strengthen the institution, which, I take it, is Andrew Sullivan's argument. That's why you have more than a few feminists on the left opposed to the whole idea, seeing as how it would bolster what they see as a patriarchal and mostly oppressive institution. And they're probably right.
But let's also take Gallagher's fears seriously for a second. My guess is that keeping gay marriage illegal will do far more to erode marriage than anything else in the near future. Corporations and states, after all, are increasingly creating partner benefits for gay couples—it's hard to stop the states from doing this, and even harder to stop companies from doing it. (I guess you could try to pass an amendment, but that seems difficult.) And once there are benefit systems in place for gay couples, straight couples may as well sign on too, forgoing marriage. If companies increasingly extend healthcare and retirement benefits to "domestic partners," well, that's one less incentive for everyone else to get married, isn't it? I think Jonathan Rauch once warned that without gay marriage, "every unmarried gay couple"—especially those with kids—"will become a walking billboard for the joys of co-habitation." Not good for Gallagher. This seems like the greater "threat" to marriage, and unless we plan on banning all gay people everywhere from even looking at each other—and even in America this seems like a daunting task—allowing gay marriage is probably the best way to avert the inevitable "erosion" at work here.
We can add another loop too. Just as Gallagher seems to fear, young people increasingly do seem to see the backlash against gay rights as a form of bigotry. How much respect will those kids have for an institution they see as discriminatory? Not much, one would think. This should really be what gets Gallagher nervous. Granted, it's near-impossible to test any of these arguments—I guess we can see what happens in Massachusetts and, inevitably, California in the coming years—though my gut feeling is that it would be impossible for gay couples to screw up marriage any more than straight couples have already done.
(Granted, in real life I think it's right to allow gay marriage even if it does somehow affect straight couples—just like it was right to end racial discrimination among employers even if the net effect is to pull down white wages—but this seems to be one of those cases where doing what's right and doing what's beneficial for the majority are actually aligned.)
This is over a year old, but Stephen Brooks and William Wohlforth of Dartmouth have written a very interesting (draft) paper asking whether other countries are engaging in "soft balancing" against the United States. Prior to the Iraq war, many liberal analysts worried that too much unilateralism from America would provoke other nations—especially Europe, China, and Russia—to start banding together and counterbalancing that loud, honking hegemon across the Atlantic. U.S. conservatives, meanwhile, viewed France and Germany's opposition to the war as stemming from a desire to constrain American power. On this view, what started as "soft" balancing—a bit of stubbornness at the Security Council—would soon lead to hard opposition. As Bill O'Reilly said on the Daily Show just a few nights ago, "France is the enemy!"
So is this true? Brooks and Wohlforth say probably not. It's hard to distinguish, granted, between explicit "balancing" and normal moves made by other countries, for reasons of their own, that just so happen to inconvenience or hurt the United States. But real "balancing" would mean that Europe and Russia and China were taking moves that are only coming about because the United States is the pre-eminent power in the world, and they fear that; moves they wouldn't pursue otherwise.
This probably isn't the case. Jacques Chirac and Gerhard Schroeder opposed the Iraq war partly because they genuinely thought it was a bad idea, quite rightly, and partly because opposition was popular domestically. Likewise, Russia's recent "strategic partnerships" with India and China may look menacing, but they aren't really intended to counter U.S. power in any meaningful way. (All three countries are pursuing economic modernization, and since that entails working with U.S.-controlled financial institutions, they still need to cozy up to the hegemon.) Meanwhile, Russia's recent arms sales to India and China, along with its support for Iran's nuclear program, mostly stem from its desperate need to slow the rapid decline of its defense sector, which is in a bad way. That's why Vladimir Putin can call nuclear proliferation the "main threat of the 21st century" and still fund the Bushehr reactor in Iran. He's sincere about the former, no doubt, but that reactor contract means 20,000 jobs at home.
The EU's proposal for defense cooperation, meanwhile, is meant to complement, not counter, American military power. Again, people like Chirac may say otherwise for public consumption at home, but in reality, the EU is actually weakening its ability to balance against the United States—by forgoing investment in advanced defense technology—in order to create a rapid reaction force that can help the U.S. by dealing with Balkans-style problems. Given that the U.S. and the EU are currently working together on Iran, it's obvious that their interests are mostly aligned.
In short, people like O'Reilly are wrong. No one's balancing against the U.S.; not yet. Though it still seems that the U.S. should avoid unilateralism when possible, because ill will makes cooperation on other issues difficult. Also, notice that France and Germany have a serious dilemma here. The more that they use the language of balancing—the more that they talk about "checking American power," even when they obviously intend to do no such thing—then the more the U.S. will discount their specific objections to policies. Chirac and Schroeder may have had good reason to believe that Iraq was a flawed idea, but U.S. policymakers were inclined to dismiss their objections as knee-jerk anti-Americanism. That's bad. Likewise, if U.S. leaders believe that, say, France and Germany want to work through international institutions only in order to check American power, then the U.S. will be less likely to pursue multilateralism.
One of these days, I'll actually be able to wrap my head around those extra dimensions in space that string theorists always talk about. One of these days.
The simplest way to hide extra dimensions from view is to imagine that they are "compactified"—curled up into a tiny ball (or other geometrical configuration) with an extent much smaller than what can be probed by current experimental apparatus. In the 1990s, however, a new possibility arose, as scientists came to appreciate the role of "branes" in higher-dimensional physics. A brane, generalizing the concept of a membrane, is simply an extended object: A string is a one-dimensional brane, a membrane is a two-dimensional brane, and so on, up to however many dimensions may exist. A remarkable feature of such objects is that particles may be confined to them, unable to escape into the surrounding space. We can therefore imagine that our visible world is a three-dimensional brane, embedded in a larger universe into which we simply can't reach.
Gravity, as the curvature of spacetime itself, is the one force that is hard to confine to a brane; the extra dimensions must therefore have some feature that prevents gravity from appearing higher-dimensional. (For example, in four spatial dimensions, the gravitational force would fall off as the distance cubed, rather than the distance squared.) One possibility, proposed by Nima Arkani-Hamed, Savas Dimopoulos and Georgi ("Gia") Dvali, is that the extra dimensions curl up into a ball that is small without being too small—perhaps as large as a millimeter across in each direction. Randall, in collaboration with Raman Sundrum, showed that an extra dimension could be infinitely big, if the higher-dimensional space was appropriately "warped" (hence the title of her book).
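The inverse-cube claim, by the way, is just Gauss's law at work: in d spatial dimensions, gravitational flux spreads out over the surface of a sphere, and that surface grows as the radius to the (d − 1) power, so the force has to fall off by the same power. A standard textbook sketch (mine, not quoted from the review or from Randall's book):

```latex
% Flux conservation: F(r) times the sphere's surface area is constant,
% and surface area in d spatial dimensions scales as r^{d-1}.
F(r) \propto \frac{1}{r^{\,d-1}}
\quad\Longrightarrow\quad
\begin{cases}
  d = 3: & F \propto 1/r^{2} \quad \text{(Newton's inverse square)} \\
  d = 4: & F \propto 1/r^{3} \quad \text{(the inverse cube mentioned above)}
\end{cases}
```

This is why tabletop tests of gravity at sub-millimeter distances were such a big deal for the Arkani-Hamed–Dimopoulos–Dvali scenario: if the curled-up dimensions really were close to a millimeter across, gravity should start deviating from the inverse-square law right around that scale.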
That's from a review of Lisa Randall's new book, Warped Passages.
The other day, I wrote that it might be time to socialize drug research, or at least take a kangaroo hop in that direction. (Granted, the Pharma lobby would never let this happen, but let's keep things unrealistic for now.) In comments, serial catowner pointed out that the drug industry isn't in any way innovative these days, which I definitely agree with, and JimPortlandOR pointed to the Bayh-Dole Act of 1980 as a culprit mucking things up. Good point. So here's a suggestion.
To recap: The Bayh-Dole Act, in essence, transferred the patents for all pharmaceutical inventions made with the help of federal research grants to the universities and small businesses where they were made. No longer would taxpayers own the research that the government had paid for; it would be in private hands from now on. Many people credit this act with spawning the multi-billion dollar biotech industry, since in 1979, only 5 percent of government-held patents had ever been developed—because companies didn't want to risk commercializing them if they didn't own the patents. Bayh-Dole fixed that, in theory. But it also made academic institutions much more unwilling to share their research with other scientists, instead spending their time seeking out licenses with private business in order to earn millions. Out with the altar of Hermes, in with Mammon.
Whether or not Bayh-Dole was warranted at the time—if nothing else, it helped many research universities reap windfalls—it's certainly not having a positive effect on drug innovation today. Between 2000 and 2003, the average number of "new molecular entities," or genuinely new drugs (as opposed to "me-too" drugs) dropped to eight a year, and few of them were by the major corporations. More tellingly, drug companies have often made relatively little progress on any number of important diseases in the past few decades. There's virtually nothing out there to treat MS, or Parkinson's, or Alzheimer's. Diabetes treatments have stalled. Cancer medications aren't really going anywhere. Perhaps that's just because these things are intrinsically difficult. But one theory is that, because government-funded research institutions now worry more about cashing in on their inventions, they spend more time hoarding their research, groveling for contracts, and litigating over patents than they do collaborating fruitfully with other scientists. Plus, Bayh-Dole inflates the price of drugs—drugs researched with taxpayer money.
The thing is, as Clifton Leaf pointed out in a recent Forbes article, there's another "high-technology, university-incubated industry" that's doing perfectly fine without anything like Bayh-Dole: the computer industry. The IT industry still has patents, of course, but companies and research institutions are much more generous in licensing their technology, and inter-company sharing is much more widespread. In part because of all this sharing, computer prices keep going down, Moore's law is awesome, and innovation after innovation keeps cropping up. Meanwhile, entrepreneurs and researchers at universities don't need restrictive patent rules as incentive to innovate: Leaf points out that the "$50K Competition" at MIT, which offers a mere $50,000 in seed money for innovative business plans, "has showcased some notable winners—and losers" over the years, including Ask Jeeves and Akamai. Smart people will always find ways to bring good ideas to the market, and, the more widely ideas are shared, the more stuff they'll probably invent. No reason the pharmaceutical industry should be any different.
Now here's the kicker: technically the Bayh-Dole Act empowers federal agencies to ensure that new technologies—gene analysis, cell lines, research techniques—are being shared as widely as possible. But the NIH has never once used this power. As well, Bayh-Dole technically gives the government the power to use its taxpayer-funded research royalty-free, but it's never done that. One wonders: What the hell? Government reticence on both these measures essentially acts as a taxpayer subsidy to Pfizer and Bayer, and hinders innovation. I guess that's the point, but it sucks. I honestly don't know whether it's time to repeal Bayh-Dole altogether. As a "compromise," though, an amendment could be passed forcing the government to do the above two things—and requiring scientists to license their patents as widely as possible—at minimum.
The consensus seems to be that the Tax Reform Commission's proposals for, uh, tax reform won't actually go anywhere, and they're mostly just ideas for "discussion" rather than things the Bush administration will actually end up backing. (With sinking poll numbers and Karl Rove potentially out of commission, it's hard to see the president finding the gumption to cap the mortgage-interest deduction, ya know?) So let's "discuss." I realize no one on the House Ways & Means Committee plans on asking me, but here's one way to take a more progressive stab at tax simplification:
Find some way to slowly phase out the mortgage-interest deduction. Robert Shapiro has argued that it doesn't actually benefit home-buyers, since sellers just bid up the price of houses until they exactly offset the cost of the deduction, so in essence, it just acts as a taxpayer subsidy to the construction and real estate industries. Is that really worth it? Any phase-out would lower home values, though, so this step makes for thorny politics, but that's why you…
Simplify and expand the family tax credit. Rep. Rahm Emanuel has proposed a simplified, refundable tax credit available to all working taxpayers with children that would replace the EITC, Child Credit, Additional Child Credit, and Child and Dependent Care Credit—cutting away about 200 pages of the tax code. This would cost an extra $200 billion over ten years, which is a lot, but doable once we get to…
Earlier this year, David Cay Johnston reported on a paper by two tax experts noting that a number of investors overstate the price of stocks, businesses, and real estate, because they're allowed to report their capital gains and losses on the honor system, unlike wage-earners. Actual verification and enforcement of these reports could recoup at least $250 billion over the next decade.
It also seems like a good idea, as Paul Weinstein, Jr., has suggested, to consolidate, simplify, and expand both the various college subsidies into one single College Tax Credit, and the various tax-savings vehicles—IRAs or 401(k)s—into a single and transferable universal pension account. Simplifies a lot, and good for all involved. I realize people like Paul Krugman have argued that college isn't for everyone, but might as well try to raise the numbers. Weinstein lists a bunch of corporate loopholes and tax deductions we could close to pay for these parts. Works for me.
That's not so hard. Those aren't earth-shaking steps, but they're all good, liberal things to do, and they do simplify the tax code quite a bit, especially for working families. I don't really see the point in repealing the alternative-minimum tax (AMT), which is there to ensure that the very wealthy won't exploit loopholes and dodge taxes; if the AMT is falling on too many middle-class families then just raise the threshold and reform, rather than eliminate, it. I also don't really know how one would simplify capital gains taxation, which is obviously at the heart of any reform, but I'm sure there are decent ways to go about it. Oh yeah, and most of the Bush tax cuts are going to have to be repealed (for a start) to avoid fiscal disaster in the long run, but that's another story...
Sam Rosenfeld and Matt Yglesias have a new TAP article arguing that the war in Iraq—or at least the "liberal hawk" idea that Iraq could be made into a democracy at the barrel of a gun—was always doomed to fail, and it wasn't just because Bush utterly botched it. They say that even if the war had been sold and fought exactly as the liberal hawks wanted—as a way to turn Iraq into a liberal democracy—with a different, more competent administration, it still would have failed.
Well, agreed. The United States has never shown much interest in democracy-building, it's never been very good at it, and as I noted earlier in the week, the success of our nation-building adventures abroad has usually depended on internal factors in the occupied country, rather than the competence of our plans. That was as true of the American South in 1865 as it was of Kosovo in 1999. And sad to say, but the mere existence of a profit-seeking military-industrial complex made problems like the looting of the Iraqi treasury pretty much inevitable. There's no reason to think an invasion run by George Packer or Peter Beinart could have "remade" Iraq better than Bush did. That said, I think this part of the TAP piece sells the idea of liberal interventionism somewhat short:
Intervening requires us to take sides and to live with the empowerment of the side we took. Tensions between Kosovar and Serb, Muslim and Croat, Sunni and Shiite are not immutable hatreds, and it’s hardly the case that such conflicts can never be resolved. But they cannot be resolved by us. Outside parties can succeed in smoothing the path for agreement, halting an ongoing genocide, or preventing an imminent one by securing autonomy for a given area. But only the actual parties to a conflict can bring it to an end. No simple application of more outside force can make conflicting parties agree in any meaningful way or conjure up social forces of liberalism, compromise, and tolerance where they don’t exist or are too weak to prevail.
That's obviously true of the United States' military, which has classically been good primarily at smashing things, although our twenty-year-old soldiers have adapted to "mission creep" unbelievably well in Iraq. But Donald Rumsfeld wants to make the military even more focused on smashing things—as opposed to people like Thomas Barnett, who wants to see a more fully developed "SysAdmin" side—and regardless of what you want to call it, the "Jacksonian tradition" in American foreign policy has never had much interest in anything more than overwhelming bloodletting in the defense of the national interest. We're a nation ruled by speculators and powered by Southern nationalists; as such, idealistic projects abroad just aren't in the cards, except in very rare circumstances.
But the United Nations complicates the tale somewhat, since their peacekeeping forces actually have succeeded in reconciling a large number of post-conflict nations. Post-WWII UN operations in Congo, and post-Cold War peacekeeping forces in Namibia, El Salvador, Mozambique, Eastern Slavonia, Sierra Leone, and East Timor should all count as successes—the UN disarmed the parties, demobilized militias, held relatively free and fair elections, and put the countries on a path towards sustained civil peace. So in one sense, outside forces can "make conflicting parties agree in [a] meaningful way," and if those UN missions didn't conjure up, as TAP puts it, "social forces of liberalism, compromise, and tolerance," they at least pointed the way down that path. Those countries, save for the Congo, are all peaceful democracies today. We know it can work because it's been done.
On the other hand, even the UN can't seem to stop a country on the brink of disintegration from doing so, but it's hard to tell how much of that failure has come from the sheer difficulty of the task and how much from poor implementation. The original UN peacekeeping mission in Somalia obviously flopped, but it was also severely undermanned. Same with the initial UN force in Bosnia. (Could a more robust operation—say, 20,000 more troops and American commanders—have averted many of the Balkan crises later in the 1990s? Who knows?) The UN actually enforced (rather than just "kept") the peace in Eastern Slavonia and East Timor, both successfully, when it had enough troops. So I don't think I'm quite as ready to say "it's impossible", although a good deal of modesty and skepticism is absolutely crucial here. I think the United States is inherently awful at nation-building right now, yes. But that says as much about the United States and its military as it does about any inherent impossibility of peacekeeping and nation-building, and it's worth, I think, trying to disentangle the two.
Time has an illuminating interview with "Abu Qaqa al-Tamimi," an Iraqi insurgent trainer, that among other things sheds light on why so many suicide bombers in the country have been foreign fighters rather than Iraqis:
Most of the more than 30 bombers he says have passed through his hands were foreigners, or "Arabs," to use al-Tamimi's blanket term for all non-Iraqi mujahedin. Although he says more and more Iraqis are volunteering for suicide operations, insurgent groups prefer to use the foreigners. "Iraqis are fighting for their country's future, so they have something to live for," he explains. He says foreign fighters "come a long way from their countries, spending a lot of money and with high hopes. They don't want to gradually earn their entry to paradise by participating in operations against the Americans. They want martyrdom immediately." That's a valued quality sought by a handler like al-Tamimi, says counterterrorism expert Hoffman: "It's one less thing for the handler to worry about--whether the guy is going to change his mind and bolt."
Makes sense. Meanwhile, al-Tamimi—a pseudonym, obviously—claims that he was radicalized after being tortured in Abu Ghraib by occupation forces; which could be true or not, though he does seem to have used prison time productively to become more religious and develop further terrorist contacts. (He was originally a member of Saddam's Republican Guard, although, like many Baathists, he joined up with radical Islamic networks in 2004.) Another point: as Doug Farah has noted, one would think that capturing people like al-Tamimi would probably be much more effective for purposes of counterinsurgency than worrying about all those "high-ranking lieutenants," since the trainers and former military men seem to have all the semi-irreplaceable skills. But then, the Republican Guard alone numbered some 175,000 before the war, and that doesn't include the Mukhabarat (100,000) and Fedayeen Saddam (~40,000), so it's not like people like al-Tamimi are at all in short supply...
(Those numbers, by the way, come from John Robb, who has argued, persuasively I think, that the United States is fighting a much bigger insurgency than we've been led to believe.)
At long last, the readable layman's primer on evolutionary development (or as the cool kids apparently say, "evo devo") I've been waiting for, courtesy of H. Allen Orr in the New Yorker:
Why, then, do different creatures look so different? How do penguins and people emerge from the same genes? Evo devo's answer to this question represents its second big finding. Different animal designs reflect the use of the same old genes, but expressed at different times and in different places in the organism. As embryos, penguins might express one combination of genes in their limbs (and the result is wings), while people might express another (and the result is arms).
The basis of this selective expression involves that part of the DNA which is noncoding. Most genes, like most light fixtures, have "switches" near them. These switches, which are made of DNA, affect only whether a gene is on in a particular cell at a particular time; they do not change the actual protein coded by a gene. One switch might specify whether a gene should be on in the pancreas and another whether it should be on in adults. What’s special about many of those tool-kit genes is that they make proteins that toggle these switches. If a tool-kit protein finds and binds to a switch, it insures, through a complex molecular choreography, that a certain gene is expressed (or, in some cases, not expressed). In effect, tool-kit proteins act like molecular fingers, reaching out and physically turning on or off the switches that sit next to genes. … If all goes well, each of the possibly trillions of cells in an animal’s body will express just the right genes: insulin in your pancreas, not in your eye.
The real excitement about evo devo, however, has to do with its third claim. Carroll and others have taken the next, and by far the most radical, step and argue that evolution is mostly a matter of throwing these switches.
Undeniably cool. Though when I first saw the article's subheading ("A revolution in the field of evolution?") it looked like, egad, we were in for yet another full-length treatment of intelligent design. Not so, happily. Which brings me back to the most infuriating part of the entire ID "debate": namely, that there are so many genuine controversies in the field of evolution out there, most of them actually interesting, that it seems like a shameful waste of paper to spend time dithering over a bunch of creationists who pretend not to understand how light-sensitive receptors could evolve into eyes. In not-really-related news, except insofar as I lump all things vaguely related to biology together, it's not a good idea to stop worrying and embrace the world's invasive species.
Dean Baker's post on why the U.S. government should confiscate the Tamiflu patent is all well and good (read Tyler Cowen's rejoinder too), and his rant on the evils of the pharmaceutical industry well-warranted, but the real action's all in this old paper he wrote on alternatives to our present method of financing drug innovation. Why doesn't the current patent system—which allows drug companies to sell their little pills for 300-400 percent above the competitive market price in order to recoup their "research" investment—work very well? Here:
[T]here are very good reasons—well known to all economists—for preferring that drugs be sold in a competitive market with the price approximating the marginal cost of production. The gap between price and marginal costs under the current system of patent-supported research leads to large and rapidly growing distortions. These include denying drugs to patients who could afford them if they were sold at their marginal cost; the distortions also include the tens of billions of dollars spent each year on promoting drugs.
Even more serious is the incentive that monopoly pricing provides firms to conceal or misrepresent research findings. Finally, a large gap between price and marginal cost will inevitably lead to the production of unauthorized versions of patent protected drugs. While these unauthorized versions make drugs available at lower costs to patients, their quality cannot be ensured since illegal markets are unregulated.
All very real problems, these, and one can note that this sort of protectionism matters much, much more than the various trade barriers people get agitated about. Now obviously we can't just junk the patent system; companies need some incentive to invest in research. But surely we can think of alternatives that work better. Baker lists a couple, including Dennis Kucinich's proposal to get rid of drug patents and steer about $25 billion in taxpayer money—about what Big Pharma claims to spend on research—to government-backed research organizations, similar to the current NIH or the research universities of yore, and socialize drug research. (He lists some other, less drastic, and very clever alternatives too.) More on that in a bit, but the point here is that any financing alternative will have to achieve four main things:
1) provide incentives for pursuing "useful" research
2) minimize the possibility that market distortions will create incentives to pursue less useful lines of research
3) minimize the risk that political interference will direct research spending to less useful ends
4) minimize the incentive to suppress research findings
Obviously it's tricky to decide what is and isn't "useful" research—who decides? the "market"? the government? the dying children lobby?—but the current patent system certainly does badly on the last three counts. Drug companies presently have greater financial incentives to tailor their research toward balding, impotent, overweight suburban males than to look into, say, innovative malaria treatments for the Third World. The patent system also gives drug companies incentives to pursue "me-too" drugs and reap the monetary rewards—see Marcia Angell on this—as well as to suppress any inconvenient research findings.
Now if the government decided to sponsor research directly, as Kucinich proposed, it could avoid many of these problems—2) and 4) especially—but, of course, there's the possibility that politicians could start mucking around with where the research dollars go. Think the reigning First Church of Dennis Hastert would approve one cent for developing new contraceptives? Me neither. And under Kucinich's plan, private research companies could use the legalized graft system in this country to win contracts unduly. On the other hand, this problem already exists to some extent—current research at the NIH is subject to political pressures, and since drug companies often depend heavily on government Medicare purchases to profit from their patents, innovation already depends in part on lobbying.
So... What Is to Be Done? In my opinion, the pharmaceutical industry as it stands still does good work, and I don't think full-blown socialism is called for just yet. No, I much prefer creeping socialism. Right now most government research money goes towards basic research, rather than the development and testing of new drugs. Why not steer a couple billion this way, as a test to see if the government can do drug innovation on its own? In the meantime, draconian regulation to crack down on some of the worst excesses of the current system: i.e., force drug companies to open their books; regulate advertising; free the FDA from Big Pharma's tentacles; make the approval of new drugs contingent on improvements over existing drugs (right now, new drugs merely need to be better than placebos to be approved). We can be reasonable people.
According to the Oakland Tribune, a new UC Berkeley study says that Proposition 77—Arnold Schwarzenegger's redistricting initiative for California—would "create some closer races... but would not shift decisive power to the Republicans." It would also boost the number of competitive districts from 30 back up to 50. Well, that sounds like it could alleviate my earlier fears that the initiative would corral Democratic voters into a handful of urban bantustans. So I'd like to read the study, but oddly, "the release of the IGS redistricting study has been delayed, pending further research." Huh?
The big news on Harriet Miers today... she's an abortion opponent! Seemed fairly predictable, but okay:
President Bush's Supreme Court nominee, Harriet E. Miers, pledged support in 1989 for a constitutional amendment that would ban abortions except when necessary to save the life of the woman.
We still don't know for sure, granted, whether or not Miers would actually vote to overturn Roe v. Wade if she got a chance—and it's worth noting that even if she did, the Court would still have a five-vote Roe majority (Ginsburg, Stevens, Souter, Breyer, Kennedy)—but it's as good an indication as we'll get. Really, we can only guess. What I do want to question, though, is the prevailing view among some liberals that George W. Bush would never want to see Roe overturned, on the theory that it would mean the electoral death of the Republican Party. Is this really so certain? It's true that a substantial majority of Americans supports abortion rights, but there's reason to think that the Republican leadership would still try to overturn Roe, and risk the backlash, if they got the chance.
Will Saletan formulated one version of the "Republicans fear overturning Roe" thesis in this Slate piece, noting that in 1989, when Pa Bush was president, and it looked like the Supreme Court might overturn Roe, the political backlash was colossal: Voters reportedly made it an issue; pro-life politicians lost their jobs; many pro-lifers backed away. That's the claim, anyway. Saletan suggests that the younger Bush has learned his lesson and would never touch Roe. Alternatively, some pundits have suggested that if Roe was overturned, the Christian right would finally be satisfied, pack up their protest signs and go home, and thus dampen voter turnout for the GOP. Pro-choice activists, meanwhile, would be newly fired up (and popular), and the Democrats would benefit.
Historical analysis bears this out to some extent, although these things are always fuzzy. Roe v. Wade helped the Republican Party, no doubt, by spurring religious groups, notably the Christian Coalition and Pat Robertson's 700 Club, to abandon their longstanding quietist stance and finally start getting involved in politics. It also, maybe, spelled the beginning of the end of the Democratic Party's reputation as the party of "moral values" among the electorate. As William Galston and Elaine Kamarck recently pointed out, the Democrats have historically polled much higher than the Republicans on "traditional family values" questions, but that lead started declining in or around 1973. Did Roe cause that? Hard to say—surely not singlehandedly (Dems were still doing well on this question in the mid-80s), but perhaps in part.
Nevertheless, there's no going back to 1973, even if Roe was overturned. The Christian Right and other social conservatives wouldn't dismantle the vast political operation they have in place; instead they'd stay focused on passing bans at the state and national level. (The usual estimate is that 30 states would ban abortions if Roe was overturned, although this poll suggests that only about 10-15 states have pro-life majorities; so there's lots to crusade against here.) Once Roe's gone, the Ralph Reeds and Pat Robertsons of the future won't suddenly wash their hands of the GOP and decide that they no longer need to rile up the faithful for political ends—the money's too good and the power too marvelous. No, the conservative base will stay perfectly active. Meanwhile, pro-choice activists in California, Illinois, Massachusetts, New York, and other blue states might actually be de-mobilized, since abortion would in theory be safe for them and their friends, and fighting for abortion rights at the state level is much more daunting than fighting to uphold Roe. This might not happen, but it's not that outlandish either. Perhaps this is a risk someone like Grover Norquist would just as soon not take, especially since conservatives are doing just fine as it is, but there seem to be enough hardliners on the Republican side of the aisle willing to take the plunge.
Personally, I think all of this would be horrific, which is why I believe Roe shouldn't be overturned. But the point is that it's not impossible to think the conservative movement would survive the backlash—they've already made abortion a near-impossibility for a large swath of the country, and haven't suffered for it yet—and those who believe the GOP would never dare overturn Roe are making a pretty daunting leap of faith, it seems.
For some mysterious reason, Echidne has a fantastic tribute to Bruce Lee up on her site, and mentions the famous "One-Inch Punch," which would send people flying backwards with little to no windup. (See the video!) Now I had read elsewhere that even Bruce Lee never harnessed the full power of the One-Inch Punch, and here's a random and obviously reliable website that seems to confirm that:
The One Inch Punch, which was made world famous by Bruce Lee, is in truth an ancient technique in Weng Shun Kuen. Bruce Lee, who was never tutored in this technique, learned it by spying on senior students. Because he never got to learning the second form, "Chum Kiu", that trains the footwork that is needed to perform the One Inch Punch correctly, he never got that part right. In fact, the One Inch Punch can easily be learned, if one understands the principle behind it.
Principle shminciple. How does it work? This part actually seems like a decent explanation:
We all know what it feels like when we have to sneeze uncontrollably. Or when we are startled so bad that our hair in our necks stand on end and you feel this "electrical" tingling in our spine. Now THAT is natural Fa Jing! It has got something to do with that same uncontrollable force that’s unleashed while you’re sneezing. You can’t keep your eyes open, no matter how hard you try. That is how close I can come to describing the essence of Fa Jing for you.
Ah! Speaking for myself, I never really got beyond yellow belt in my elementary-school karate class, so this is probably out of my league, though I did master a fearsome Tiger Claw. Or maybe it was an Eagle Claw. Nevertheless, fearsome. Also, this, from Echidne's comments, is very funny.
Ronald Brownstein has a piece in the Los Angeles Times today noting that "[some] prominent legal thinkers from left and right" want to end life tenure for Supreme Court justices. Each time I hear this suggestion, it sounds an awful lot like a solution in dire search of an actual problem, but since it keeps popping up, let's take a look:
Fewer vacancies mean more conflict over those that occur because neither side can be certain when it will receive another chance to change the court.
Longer tenure also raises the stakes in each confirmation by multiplying the effect of each nominee. The common assumption during the recent confirmation debate over new Chief Justice John G. Roberts Jr. was that he would serve at least 30 years.
Kevin Drum says the "benefits almost certainly outweigh the drawbacks," but I'm not so sure, although admittedly it's all a bunch of guesswork trying to predict what would happen. The way I see it, giving Supreme Court justices, say, fixed 18-year terms (which would mean that each president is guaranteed two nominees per four-year term) would lead to a lot more instability in law, as a two-term president could completely remake the Court. Moreover, since the justices would be serving for a shorter period of time, the Senate would have less incentive to oppose overly radical nominees, meaning that we'd potentially see greater ideological polarization, and hence greater swings left and right. (There would also be the sense that the president "earned" his two picks by winning his/her election, and thus should get greater deference... basically, I see a lot less "advise and consent" under this system.) Obviously I'd be thrilled if the Court swung somewhere to the left of Earl Warren, but preferences aside, continual instability of this sort doesn't really benefit anyone.
Is it "fairer" to give each president two guaranteed picks to the Court? Maybe, but then you start getting into problems like the fact that people vote for president for lots of reasons, often with the Supreme Court far from mind. Now perhaps if every president was guaranteed two picks, each presidential election would become "about" those two potential nominees to a much greater extent. Good or bad? I don't know. It might lead to a greater degree of cronyism, and could lead to those two judicial nominees showing undue deference to the president who appointed them (since their fortunes were more directly tied to the president's). Now I'm not absolutely convinced that the Supreme Court should be "insulated" from the democratic process, as it currently is, but some of this makes me nervous.
Perhaps even more seriously, I'm not thrilled with the idea of having a bunch of justices who have to worry about what they'll do after they step down from the Court. The revolving door between Congress and K Street—where legislators retire and pick up some plum lobbying job—has produced more corruption and impropriety than is really healthy among the legislature, and there's no reason to believe that a high court operating in a similar manner would avoid producing its share of Billy Tauzins.
Now maybe we could find clever ways to solve these problems, and others that might arise, and perhaps there wouldn't be any problems, but that leads us back to the deeper issue here: Why bother? Ultimately, if we wanted to end life tenure on the Supreme Court, it would have to be done with a constitutional amendment. If the experiment goes badly, it's unlikely that we can have a do-over, or constantly tweak the rules. Which brings us back to the question: Why are we itching to muck up the current rules in the first place? Granted, there might be a few problems with the existing system—perhaps life tenure can lead to the odd senile Justice sitting on the Court—but just because the horse has a few fleas is no reason to shoot it, I think.
Here's a bleak statistic: Over the past twenty years, the median per capita growth of the poorest countries was zero. That's from Branko Milanovic's paper, "Why Did the Poorest Countries Fail to Catch Up?" His explanation: war. Conflict may be on the wane everywhere else in the world, but poor countries are much more likely to get involved in wars and civil strife. This alone accounts for an income loss of about 40 percent. If so, then all the debates about free trade and good governance and foreign aid, while important at the margins, miss the larger trend here. Conflict-prevention in the Third World would do more for global poverty than any other single measure.
Kos plans on voting for Arnold Schwarzenegger's "redistricting reform" ballot initiative, while Nancy Pelosi's making it her "top priority" to defeat the measure. On the face of things, I think Pelosi's right and Kos wrong. Yes, redistricting reform makes sense if it can stop legislators from gerrymandering themselves into permanently safe seats, but there are good ways and bad ways to enact reform, and Schwarzenegger's proposal seems like one of the bad ways, judging by the initiative's proposed guidelines for drawing up districts:
Judges must maximize the number of whole counties in each district, and minimize the number of multi-district counties.
Judges must maximize the number of whole cities in each district, and minimize the number of multi-district cities.
Districts must be as compact as practicable. To the extent practicable, a contiguous area of population shall not be bypassed to incorporate an area of population more distant.
This looks dubious. Under the second guideline there, the judges drawing the boundaries could end up packing the majority of urban voters into a few concentrated, ultra-Democratic districts. (The first guideline might, equally, pack Republicans into conservative "counties," but I can't tell without data, and am guessing this would be a smaller effect.) Schwarzenegger's plan wouldn't necessarily lead to more competitive districts either, as is widely hoped. Since "[j]udges must maximize the number of whole cities in each district," you'd have a handful of ultra-safe single-city seats that would vote overwhelmingly Democratic. If you wanted more electoral competition, then you'd try to create a bunch of districts that, say, combined parts of "blue" urban areas with parts of "red" suburbs. Schwarzenegger's plan does the exact opposite.
Now his plan would give representatives more "natural" regions to represent (i.e., it makes sense to represent a whole city rather than parts of two different regions), but that's a different goal from either a) ensuring competitiveness or b) making sure that voters have anything like proportional representation in Congress, and should be sold as such. Plus it looks for all the world like a naked, calculated power grab, rather than a solid reform that just happens to hurt the Democrats. (I'd happily support the latter; not so much the former.)
Moreover, the initiative's requirement that districts must be "as compact as practicable" doesn't necessarily make sense. Pundits love to bemoan the fact that many congressional districts are long and squiggly and funny-looking, but sometimes long and squiggly districts are more appropriate than a compact, block-like district. Geography is funny, and people arrange themselves in all sorts of funny ways. There's no reason why districts shouldn't represent this fact. Again, it depends what you want. Let's say you had a state with two small Democratic enclaves on either end—together comprising a fourth of the population—and a large Republican middle section. If you divvied the state up into four blockish districts, then you'd likely get four Republican representatives—two of them semi-competitive—whereas if you created a funny-looking district that encompassed the two pockets with a long corridor in between, then you'd get three Republican districts and one Democratic one, as the state's population might warrant. But those wouldn't be competitive!
Indeed, Iowa has sensible, block-like districts, drawn by computer, but it also has only one Democratic district out of five in the House, despite the fact that 49 percent of Iowans voted for Kerry in 2004. Ultimately, if you wanted to make the representation "fairer" for Democrats in Iowa, you might have to draw some funny-looking districts that connected disparate "blue" parts of the state. Indeed, some political scientists believe that a focus on compactness will always hurt the party that relies heavily on the urban vote. But then elections might become less competitive! So it depends on what your goals are. In the abstract, if you think that a state with X percent of its population voting for a given party should have X percent of the seats in the House hailing from that party, then compactness won't always help.
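The two-enclave scenario above can be sketched with a toy calculation. All the numbers here are hypothetical, invented purely for illustration—a 16-tract state with a small Democratic enclave at each end—but the arithmetic shows why compact districts and proportional outcomes can pull in opposite directions:

```python
# Toy model of the enclave scenario (all tract counts and vote shares invented).
# The state is 16 equal-population "tracts": a 2-tract Democratic enclave at
# each end (together a fourth of the state) and a 12-tract Republican middle.
dem_share = [0.7, 0.7] + [0.25] * 12 + [0.7, 0.7]

def seats(districts):
    """Return (D seats, R seats), with districts given as lists of tract indices."""
    d = sum(1 for dist in districts
            if sum(dem_share[i] for i in dist) / len(dist) > 0.5)
    return d, len(districts) - d

# Scheme 1: four compact, block-like districts of four contiguous tracts each.
compact = [list(range(i, i + 4)) for i in range(0, 16, 4)]

# Scheme 2: one funny-looking "corridor" district joining the two enclaves,
# plus three districts carved out of the Republican middle.
corridor = [[0, 1, 14, 15],
            list(range(2, 6)), list(range(6, 10)), list(range(10, 14))]

print(seats(compact))   # -> (0, 4): every compact district leans Republican
print(seats(corridor))  # -> (1, 3): the corridor seat matches the D vote share
```

Under the compact scheme the two end districts are semi-competitive (47.5 percent Democratic) but still go Republican; only the non-compact corridor district delivers anything like proportional representation.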
Basically, it's not at all easy to figure out what the best way to do redistricting reform is, because reformers aren't always clear on what exactly they hope to achieve. Do they want more competitive districts, or do they want the representation to approximate the popular vote in a state? (These aren't always compatible goals.) Or do they want something else? At any rate, there are smart and not-so-smart ways of achieving each of these goals, but there's no reason to support "reform" in the abstract, especially if the goals are unclear, the plan seems poorly designed, and it looks strongly like a partisan power grab. And that's exactly what Schwarzenegger's plan looks like.
MORE: As several commenters point out, the best way to have competitive and proportional elections would be to turn California into a single district and elect at-large candidates (via party lists, or the single-transferable vote, or what have you). I agree completely, although this seems difficult to pull off in practice, since U.S. voters seem to enjoy having "local" representatives.
EVEN MORE: See Mark Kleiman for some numbers (down at the bottom).