Bill McKibben spins horror stories in the New York Review of Books about how the world is burning thanks to climate change and it's rapidly becoming too late to halt the process. I happen to think he's right, but mostly I wanted to quote this part:
Kelly Sims Gallagher, one of the savviest early analysts of climate policy, has devoted the last few years to understanding the Chinese energy transition. Now the director of the Energy Technology Innovation Project at Harvard's Kennedy School, she has just published a fascinating account of the rise of the Chinese auto industry. Her research makes it clear that neither American industry nor the American government did much of anything to point the Chinese away from our addiction to gas-guzzling technology; indeed, Detroit (and the Europeans and Japanese to a lesser extent) was happy to use decades-old designs and processes.
"Even though cleaner alternatives existed in the United States, relatively dirty automotive technologies were transferred to China," she writes. One result is the smog that is choking Chinese cities; another is the invisible but growing cloud of greenhouse gases, which come from tailpipes but even more from the coal-fired utilities springing up across China. In retrospect, historians are likely to conclude that the biggest environmental failure of the Bush administration was not that it did nothing to reduce the use of fossil fuels in America, but that it did nothing to help or pressure China to transform its own economy at a time when such intervention might have been decisive.
Much of China's auto industry consists of joint ventures between foreign auto-makers and Chinese companies. American firms, from what I gather, have been reluctant to share pollution-control technology with their Chinese counterparts primarily because of China's weak intellectual property protections. And Chinese companies, for their part, are not at the point where they can develop the technology themselves. Gallagher has also noted elsewhere, I think, that foreign companies actually did begin sharing once China started putting bare-bones environmental standards in place. For years, though, there were no incentives and China built a lot of dirty cars—and power plants, and factories, and lord knows what else.
So now China's spewing more and more greenhouse gas, and at some point, it will be too late to reverse things, if it isn't already. The country already emits roughly two-thirds the greenhouse gases that the United States does. China burns through about 40 percent of the world's coal output, which accounts for 80 percent of its carbon emissions. This year alone the Chinese will build around 80 gigawatts' worth of power plants—more than the entire output of the United Kingdom. Coal-burning plants last for 30 or 40 years. Change can't really wait. The same, obviously, goes double for the United States.
By the way, this was my favorite passage from the McKibben piece: "[T]he testimony of Lovelock, Hansen, and the rest of organized science makes it very clear that it would be a wise investment, indeed the wisest possible investment, to spend large sums of government money to hasten this transition to solar power. Where should it come from? One obvious candidate is the Pentagon budget, now devoted to defending us against dangers considerably less threatening than climate change."
I see where Conor Clarke's coming from in his quasi-defense of single-sex education—in theory, it might offer various educational benefits, and some studies show that it might have positive effects on women. But in the real world, it's worth noting that the actual single-sex education proposals out there are being pushed by reactionaries who have no interest in advancing gender equality, and that's why people are concerned. Read the ACLU's challenge to a Louisiana public school district's plan to implement single-sex education in junior high:
Mr. Murphy briefly outlined the differences in instruction that would be given to girls and to boys.
For instance, girls would receive character education and be subject to high expectations both academically and socially. Girls would be taught math through "hands-on" approaches. Field trips, physical movement, and multisensory strategies would be incorporated into girls' classes. Girls would act as mentors for elementary school girls.
On the other hand, boys' teachers would teach and discuss "heroic" behavior and ideas "that show adolescents what it means to truly 'be a man.'" Boys' classes would include consistently applied discipline systems and offer tension release strategies. Boys' classes would also feature more group assignments.
The Louisiana plan would put boys in "competitive, high-energy teams" while girls would be "encouraged to take their shoes off." And so on. (Maybe female students can learn math by counting how many shirts they can iron in an hour.) The whole proposal's based on a fairly dubious book about gender differences in education by Leonard Sax. Sorry, whatever the academic studies might say, I can't shake the feeling that, in practice, Louisiana officials are pushing this plan in order to steer female students into home economics classes, de-emphasize real learning, and teach girls how to act like "proper" ladies. Separate but equal, and whatnot. It sounds a bit too much like a Harvey Mansfield wet dream to me...
Microcredit. It's big, we hear. Mohammed Yunus, the "godfather" of microfinance, just won the Nobel Peace Prize. The whole thing involves making small loans to the poor, who generally have no access to credit, so that they can become entrepreneurs and bootstrap themselves out of poverty. It's supposedly ultra-effective, beloved by both lefties and righties. This week's New Yorker, in fact, has a long feature on microfinance, and I wouldn't link to it if I didn't think it was worth reading. So... let's excerpt:
In the eighties and early nineties, the Grameen Bank received close to a hundred and fifty million dollars in soft loans and grants; today, funded by savings deposits from borrowers and others, it essentially supports itself. It has disbursed more than $5.3 billion to nearly seven million borrowers who have no collateral; ninety-six per cent of them are groups of women, who meet once a week and, through incentives, help to insure their individual loan repayments. (Traditionally, Third World banks lend only to men. Yunus says that he developed the policy of lending mainly to women not only because they were more responsible about re-paying the loans but because families benefitted more when the women controlled the money.)
To cover the high cost of servicing these small loans, borrowers pay interest rates of up to twenty per cent, and Grameen claims that it recovers ninety-eight per cent of the loans. Some of Grameen's numbers have been challenged, but no one disputes Yunus's assertion that, contrary to traditional banking doctrine, the poor can be reliable borrowers, even at high rates of interest. These days, Yunus raises money for the Grameen Foundation, a global nonprofit group that supports microcredit institutions around the world. Many are related to the Grameen Bank, but only loosely; Yunus believes in locally designed, run, and controlled institutions.
As microcredit has changed in the past thirty years, achieving broad recognition and even some early commercial success, Yunus has modified his methods, but he has never wavered from his goal. He insists that microcredit can lead to a world in which poverty has been extinguished and that eventually, as he puts it, there will be "poverty museums."
As Connie Bruck reports in the piece, various microfinance masterminds are now debating whether these banks should exist as non-profits, and preserve their mission of helping to lift the poor out of poverty, or become financially self-sufficient, with all that entails. The debate's rather nuanced, and Bruck gives the full flavor. On a more general note, though, it seems to me that microfinance has been a bit oversold as a poverty-reduction strategy. And it's certainly been oversold as a path towards economic development. Perhaps dangerously so. But other than that, it's great.
Gina Neff wrote a pointed critique of the Grameen Bank back in 1996 for Left Business Observer, noting that 55 percent of borrowers were using their loans to meet their basic needs rather than invest in businesses and become entrepreneurs. In other words, microcredit was helping many poor households survive rather than better their lot. That's valuable, but presumably traditional safety-net programs could perform that function better than microloans. The danger occurs when development experts start proposing microcredit as an alternative to traditional safety nets, which some neo-liberal types have certainly done.
Many experts take for granted that microcredit always empowers women. In many cases it quite obviously does—Bruck offers some heartening examples of women using microloans to start their own businesses and gain financial independence. One study has found that women who join credit associations enjoy greater respect and lower rates of domestic violence. On the other hand, a much-cited 1996 survey by Anne Marie Goetz and Rina Sen Gupta found that, in Bangladesh, only 37 percent of women who received microloans had "full or significant control" over how the loan was spent. Most women merely act as collection agents—they're responsible for repayment, while the husbands spend the money. Results may vary from place to place, but microcredit can't, on its own, uproot patriarchal social structures.
On balance, though, despite various problems—and many of these problems may get ironed out as microlenders become more sophisticated—microfinance can greatly benefit the world's poor. (Although, as Bruck notes, some economists debate whether these lenders truly reach the poorest of the poor, and "commercial" microfinance institutions may prove worse on this score than non-profits like Grameen.) But, as development expert Thomas Dichter has written, what microfinance probably cannot do is lead to "strong and flourishing local economies" in developing countries.
Say a woman in Malawi takes out a loan to buy a hammer so that she can set up a rock-smashing business—a typical microcredit success story. That may help her stave off extreme deprivation, but it won't turn Malawi into a developed country. That's certainly not how Europe or Asia became industrialized. Without massive state-driven investments in, among other things, infrastructure, legal institutions, health, and education, the markets in these countries will be too stunted for "entrepreneurs" with microloans to do much more than set up rock-smashing businesses or sell bananas on the side of the road. I really don't want to denigrate that, but as a poverty-reduction strategy, it's no substitute for proper economic development.
Okay, let's look at the Chinese government's plan to revamp labor rights a little more closely. A recent policy paper from Global Labor Strategies describes the legislation, noting that "it will not provide Chinese workers with the right to independent trade unions with leaders of their own choosing and the right to strike." The new laws will, however, provide contracts for the millions of Chinese workers who lack them, set up structures for negotiations over workplace policies and procedures, allow workers to change jobs more easily (something that's currently extremely difficult), and create a path for temporary workers to become permanent employees.
One can ask to what extent these policies will be enforced, but on their face, they represent real advances. No wonder, as GLS ably documents, corporations in China are fighting them tooth and nail. Angelica Oung notes, quite shrewdly, that the Chinese Communist Party's interests don't quite line up with those of global capital. The CCP worries about stability. These changes provide that stability, to some extent. As Matt noted in comments, some of the new rules will attempt to limit unemployment—since the last thing China needs is millions of frustrated unemployed workers.
The twist is that multinational corporations may actually have an interest in working with the CCP on this issue. State-backed unions in China, after all, exist to solve labor disputes without striking or other unpleasantries. According to the New York Times, some specialists suggest that "an improved network of official union branches [will] assist the authorities in defusing protests that could potentially pose a threat to Communist Party rule." Unions can keep tabs on workers pretty well—and limit what they do. Certainly the relatively conservative unions in the United States have helped defuse a good deal of labor militancy over the years. It was bad for the left, but good for business.
The San Francisco Chronicle recently had a great article on the plight of workers in China. Han Dongfang, a union advocate with the China Labor Bulletin in Hong Kong, suggested that there will likely be "no enforcement" of the new rules. Another activist said, "No one knows where a union ends and a political party begins." Perhaps most fascinatingly, the article noted that the most impressive worker gains have come not through state-backed union activity, but through labor rights cases in the courts. To some extent, China has had to reform its legal system to attract foreign investment. It appears that workers are increasingly using that to their advantage—maybe it's the only one they have.
This is a bit of a ramble, so apologies up front. I mentioned recently that abortion opponents around the country have shifted tactics, and are now trying to frame the pro-life position as one that "empowers" women. That strategy has played a major role in South Dakota. But here's another, related angle. This (very) old Policy Review essay on "pregnancy crisis centers" suggests that pro-lifers should try to dissuade women from having abortions by "awakening the maternal instinct." Ah yes, the "maternal instinct". The concept seems to make so many appearances these days in various parenting debates—and, now, abortion debates—that it's worth, I think, an exploration.
I'm obviously not, nor will I ever be, a mother, so I'll defer to what others have written on this point. In her new book, Laura Kipnis points to a related concept—the idea that mothers and infants "bond" at birth. According to Diane Eyer, the whole notion is actually something of a scientific fiction, first pushed in the 1970s, when a large number of women were entering the labor market for the first time and messing up cultural conceptions of a woman's place in society. Many people still believe that mother-infant "bonding" exists—articles are still cropping up in the news about how painkillers and the like affect the process—but to a large extent it appears to be invented.
So what about the "maternal instinct"? Social conservatives such as Suzanne Fields sometimes invoke the notion to suggest that it's only "natural" that women would want to leave the workplace to take care of their kids. Biology is driving the opt-out revolution, in other words, rather than sexist policy structures. Others lament the decline of the maternal instinct thanks to feminism. But Kipnis points out, to provide some useful context, that the entire concept sprung up at a very particular point in history and isn't exactly "natural":
With the industrial revolution, children's economic value declined: they weren't necessary additions to the household labor force, and once children started costing more to raise than they contributed economically to the household, there had to be some justification for having them. Ironically, it was only when children lost economic worth that they became the priceless little treasures we know them as today. On the emotional side, it also took a decline in infant-mortality rates for parents to start treating their offspring with much affection—when infant deaths were high (in England prior to 1800 they ran between 15 and 30 percent for a child's first year), maternal attachment ran low. With smaller family size… the emotional value of each child increased; so did sentimentality about children and the deeply felt emotional need to acquire them.
That's all quite fascinating. It's also worth noting that back in the old days, when one child died at birth, parents would often give the next child the exact same name—children were, it certainly seemed, replaceable parts rather than distinct individuals. Now obviously that doesn't mean that emotional attachment isn't real. Of course it's real—even someone who isn't a parent but merely has parents can see that. But the idea was to some extent socially constructed, and put in the service of an economic order in which men worked and women stayed at home. It could presumably be constructed differently.
Anyway, this is an old debate and I'm not saying much that's new (or very coherent for that matter). Moving on, though, Nancy Campbell wrote an essay some time ago arguing that congressional drug policy during the 1980s was often primarily aimed at, curiously enough, regulating maternal instincts. Policymakers, mostly male, were especially horrified by the fact that crack caused mothers to lose interest in their kids—a supposedly unprecedented problem that required drastic action, since not even wars, concentration camps, or alcoholism had "eroded" the maternal instinct in the past. Or so the story went.
Sen. Chris Dodd said on the Senate floor that solutions should involve "capitalizing on the profound maternal instincts of many of these mothers." Sen. Pete Wilson argued for legislation that offered pregnant drug users the "opportunity to voluntarily rid themselves of their addiction" before moving onto punitive measures, arguing that some women have "sufficient strength of maternal instinct" to do so. And so on. I don't know what solutions were proffered in this context (the related hysteria over "crack babies" led to some extremely destructive policies), but it's quite interesting that this is the way the debate was framed.
Christopher de Bellaigue has a wonderful essay about Iran in the New York Review of Books this week that pretty well encapsulates the dovish (or anti-alarmist) view of things. Two things to spotlight. Back in 2002—before the invasion of Iraq—some Iranian officials were apparently talking about a "strategic realignment" to bring the country closer to Jordan and Egypt, and offering cautious support for Saudi Arabia's Israel-Palestine peace proposal. "Within the Iranian establishment... there were intense disagreements, of which the public was only partly aware, over the value of maintaining Iran's rejection of Israel's legitimacy."
If true, that's yet another data point in favor of the view that Iran's leadership isn't totally insane or intent on going to war with everyone and destroying Israel at all costs. Of course, maybe not all the mullahs are that pragmatic. Ahmadinejad sounds extraordinarily belligerent at times. Then again, according to de Bellaigue, earlier this year Ahmadinejad tried to allow women to attend soccer games. He was... overruled. One might suppose that a dude who can't even set attendance policies for his own football stadiums probably has less than total control over the country's foreign policy too. Anyway, much recommended.
Since I used to play a fair amount of violent video games, and watch a fair amount of violent movies, and am nowadays one of the most violence-averse people around, you'd think I'd have an opinion on whether shoot-'em-ups and other violent media have bad effects on kids. Well, I guess I don't think they do, but I'm also a bit bored with this line of inquiry. So it's nice to see that Chris Suellentrop has some more offbeat thoughts on video games:
[T]he authors of Got Game: How the Gamer Generation Is Reshaping Business Forever suggest that gamers do come out ahead in the world of business. John C. Beck and Mitchell Wade surveyed 2,500 Americans, mostly business professionals, and came to the provocative conclusion that having played video games as a teenager explains the entire generation gap between those under 34 years of age and those older (the book was published in 2004, so presumably the benchmark is now 36).
Beck and Wade argue that the gamers somehow intuitively acquired traits that many more-senior managers took years to develop and that their nongaming contemporaries still lack. According to their survey, video game players are more likely than nongamers to consider themselves knowledgeable, even expert, in their fields. They are more likely to want pay for performance in the workplace rather than a flat scale. They are more likely to describe themselves as sociable. They’re mildly bossy. Among these traits, perhaps the most important is that gamers, who are well acquainted with the reset button, understand that repeated failure is the road to success.
I guess that's possible. But there's a flip side. On some level, Suellentrop points out, the whole point of video games is to figure out the rules of the game and work within them—to become a good worker bee, to innovate only within limits. "So don't worry that video games are teaching us to be killers. Worry instead that they're teaching us to salute."
Maybe. But video game players very often try to find loopholes in games, unlock secret codes, figure out ways around the rules. I don't know if that counts as actual creativity (though what does, I guess?), but I doubt the diehard video-game geeks are the ones who become the Organization Kids that David Brooks once described. Of course, we're presumably talking about only a tiny subset of all video game players, so maybe Suellentrop has a point...
Thanks to the colossal cock-up in Iraq, virtually no one has taken a hard look at the flailing occupation of Afghanistan and asked whether, in retrospect, it was also a mistake to invade that country. Afghanistan's the ultimate uncontroversial war—even liberals point to it approvingly to show they're not reflexively dovish. But Stephen Zunes is right—the Afghan war's not going that well, Osama bin Laden has eluded capture, and second-guessing the various decisions made back in 2001 to go to war really shouldn't be out of bounds.
After September 11 of course, the United States had to deal with al Qaeda in the short term, no matter what happened next. I remember reading, in various European newspapers at the time, about talk of a deal with the Taliban to extradite bin Laden and his associates. That fell through. But why? Was it because Mullah Omar had married bin Laden's daughter and wouldn't hand over his friend? Was it because the United States wanted to go to war come what may? And if so, why?
Some reports indicated that the Taliban had agreed to extradite bin Laden to Pakistan, but the United States killed that deal. Jason Burke's book on al Qaeda reported that after the embassy bombings in 1998, the Taliban was also ready to rid itself of bin Laden, but after Clinton ordered a missile strike on Afghanistan, the Taliban, obviously embittered, changed its mind. If that's true, it sounds like the Taliban leadership was never thrilled with bin Laden, but had too much pride to bend to U.S. browbeating. Perhaps if an International Criminal Court had been in place in 2001, that would've been an acceptable compromise. We'll never know, obviously.
Of course, perhaps the United States had good reason to turn down these offers. Afghanistan, after all, was a terrorist haven—with training camps and everything—and getting bin Laden alone might not have been enough. But is this true? Some reports suggested that most al Qaeda fighters in Afghanistan, circa 2001, basically existed to fight the Northern Alliance. Were they a threat to the United States? I don't know. Moreover, would enhanced police and intelligence operations around the world, in conjunction with political pressure on the Taliban, have been just as disruptive to al Qaeda as an invasion was?
And even if war was the best option for disrupting al Qaeda at the time, was an air war and occupation really the best way to disrupt that terrorist haven? Zunes mentions a couple of other possible options that might have been pursued—exploiting divisions within the Taliban to bring down the government (many Taliban members reportedly resented al Qaeda's presence) and arming the Northern Alliance, or, if we absolutely had to go in, focusing on "precisely targeted small-unit commando operations" rather than "high-altitude bombing." Has anyone studied this? Why not?
A cynic would say that, after September 11, the Bush administration needed to do something dramatic—perhaps partly for public consumption. As Thomas Friedman said, "after 9/11 the United States needed to hit someone in the Arab-Muslim world." There was some psychological need to smash shit up. I mean, sure I think that attitude's sick and twisted, but it appeared to be fairly widespread. Political pressure and covert operations wouldn't have satisfied the masses. A dramatic air war was just the thing. Any administration, Democratic or Republican, would've understood that, I think.
At any rate, we got an air war and an invasion of Afghanistan. It didn't really stabilize the country, and what advances in human rights were originally made have been largely rolled back in parts of the countryside—some aid workers are now saying security was actually better under the Taliban. On the downside, our policy of torturing and abusing detainees sprung from that war, a number of civilians died, the opium trade is flourishing once more, warlords are running the country again, and the war, while not as unpopular as Iraq, certainly didn't win the United States many admirers in the Muslim world.
One could throw many of these problems at the feet of the Bush administration, which sanctioned torture and didn't pay enough attention to reconstruction, distracted as it was by preparations for toppling Saddam Hussein, and all that other stuff. I don't really know how to untangle these things. A more competent administration might have had better luck capturing bin Laden and stabilizing the country. Or perhaps completely different policies would've achieved much more. I certainly don't know for sure, but it seems worth asking all the same.
I don't know about you folks, but I'm simply stunned to hear that various multinational corporations oppose China's attempts to bolster unions and curb labor abuse. Okay, not really. But this sounds like a huge, huge story in the making. The CCP, it seems, has a new plan to "tackl[e] the severe side effects of the country's remarkable growth." I wonder if it's sincere. I wonder how long they'll stand up to corporate pressure. Maybe Zimbabwe can provide the next generation of Nike sweatshops.
Since I've never given it much thought, I've always just assumed that working for a phone-sex hotline wasn't nearly as difficult or strenuous for women as, say, being an actual prostitute. As it turns out, that's quite wrong. One woman quoted in Stripped (discussed here) describes the job in gruesome detail:
Well, the phone sex situation was like eight hours of men yelling in your ear. I worked for a really fucked up company. There are a lot of companies that say you can't do any bestiality, child stuff, no extreme violence to women, etc. My company was like, "If the caller calls and he wants you to be a German Shepherd, you better start barking."
There was a soft-porn line and a hard-porn line. On the soft-porn line, you weren't allowed to say "dick"; you had to use a metaphor, because kids could call in on that line. If you got good on the soft-porn line, you went over to the hard-porn line, which was more money, but then you were forced to do anything. Any scenario. Ninety percent of the girls would unplug the phone, and we'd get written up. Just eight hours of that foolishness. "I'm going to chop your legs off and shove them up your cunt." "I'm going to take your tits and boil them and give them back to you." And you're supposed to be like, "Yeah, thanks. That's so totally nice of you to give them back."
Affirmation is the same thing over and over; it must work the opposite way. So eight hours of "Fuck you, fuck you, you bitch. I want to put pencils through your labia." And you're talking to hundreds of men, and they're all giving you their worst thoughts. [...] When I was stripping, I'd wash it off. But with phone sex, I didn't do anything. I got to the point where someone called my house in the middle of the night as a wrong number, and I was so out of it I talked phone sex to this wrong number. That's when I quit.
That's unbelievably horrifying. And not to change the subject too quickly, but Sara Anderson of f-words recently had a really insightful post about viewing the porn industry as a worker safety issue rather than as a free speech issue. Bernadette Barton touched on something very similar in her book, after interviewing sex-worker activists who were agitating for better labor standards in San Francisco (a city that, ironically, has some of the worst working conditions for strippers in the country—partly because the competition for customers is fiercer than in other cities).
Many of the activists were trying to speak out against unsafe and exploitative conditions in the sex industry, and trying to improve safety standards, but were sometimes reluctant to complain too loudly for fear of giving ammunition to conservatives and those feminists who want to abolish the sex industry altogether. That's not to criticize the anti-sex-work feminists who, I think, raise important issues—and, judging from Barton's book, tend to have a more accurate diagnosis of the sex industry—but it does illustrate the difficulty in seeking middle ground in the current sex-work debate. (It's worth noting that the activists were also frustrated that "sex positive" feminists too often glossed over safety concerns.)
Anyway, Ann Bartow had a related post a while back on how the lack of safety standards in the porn industry has led to an outbreak of HIV infections. To a large extent, I think Sara's right and the issues concerning safety and labor standards seem like a more fruitful focus than the endless debate about whether or not we should abolish pornography or stripping. Get OSHA in there, at the very least.
Earlier this week at TNR, John Judis wrote about the decline of Business Week, noting that the magazine no longer covers labor issues like it used to, and that, indeed, almost no major media outlet these days carries a dedicated labor reporter. The decline of labor reporting is an important story, and there are a few things to add here. Christopher Martin, a communications professor at the University of Iowa, touched on the issue in his recent book, Framed!: Labor and the Corporate Media. Here's a synopsis of Martin's argument about how the media frames labor disputes:
Martin argues that the "framing" of U.S. labor coverage occurs through five means: 1. The consumer is king; 2. The process of production is none of the public’s business; 3. The economy is driven by great business leaders and entrepreneurs; 4. The workplace is a meritocracy; and 5. Collective economic action is bad. He also notes that corporate ownership of most media has led to a hostile relationship between unions and media outlets. None of those frames has anything to do with the plight of the worker, he added.
Martin looked at how the three major television networks, USA Today, and the New York Times covered five major labor issues in the 1990s, and concluded in each case that "media coverage predominantly focused on the [usually adverse] effects on consumers"—undermining potential solidarity between strikers and the public—"and ignored the working conditions of employees."
And this wasn't just a side effect of "New Economy" mania during the Clinton years. FAIR delivered a similar indictment back in 1989, noting that few outlets employed a labor reporter, that newspapers rarely covered labor issues unless a strike was going on, and that labor coverage had been replaced by a "workplace beat" investigating office gossip and the like. And it's hard to ascribe all of this to a lack of interest. Cable outlets, after all, spend lavishly on corporate- and finance-oriented shows, even though most of them have earned piddling ratings since the stock market crash in 2001. Even PBS has no labor-oriented counterpart to Kudlow & Cramer.
Anyway, we're a far cry from the start of the 20th century, when Eugene Debs' Appeal to Reason boasted a stunning 760,000 subscribers. Indeed, one of the last major labor-run publications, the Racine Labor--often held up as a model for other aspiring labor papers to emulate--closed down in 2002. People can debate causes all day, but it's hard to imagine that the decline of labor coverage hasn't had a negative effect on unions--or the labor movement in general.
By way of a worthy digression on the Sicilian Mafia, Henry Farrell suggests that music critics need to issue unexpected and controversial verdicts from time to time in order to stay relevant. Critics, he argues, "don't want consumers to feel so secure in their own tastes that they can bypass the critic" and hence "have incentive to inject certain amounts of aesthetic uncertainty into the marketplace, by deliberately writing reviews which suggest that bad artists are good, or that good artists are bad, so as to screw with the heads of the listening public."
That's... clever. Might even be true in some cases. But Farrell's writing in the context of a discussion on Pitchfork, the online magazine with a virtual monopoly on indie music reviews, and in that context his take is, I think, inapt. Mark Schmitt's advice to politicians has always been, "It's not what you say about the issues, it's what the issues say about you." The same thing applies to Pitchfork. The reviews fairly obviously have less to do with measuring the aesthetic quality of a given album, and more to do with what liking or not liking an album would say about you as a person.
It's not shocking to say that a lot of people's musical tastes are a means of self-expression. When I was a short, skinny kid in high school and getting picked on, I developed an interest in indie rock partly as a way to distance myself from the "cool" kids listening to Dave Matthews Band and Pearl Jam. It wasn't only an aesthetic consideration. Frankly, I can't tell you, from a purely musical standpoint, exactly why I prefer Neko Case to Norah Jones any more than I can explain why Beethoven outshines Brahms. I have genuine preferences in both cases, but much of it is so influenced by extra-aesthetic factors—by authorities who deem Beethoven a giant, or by Neko Case's indie credibility—that it's hard to disentangle everything.
I don't think there's anything wrong with that. I've never agreed with the New Criticism view that works should be judged on formal aspects alone, and in any case, I don't have time to learn enough music theory to describe music in any but the most rudimentary terms: "This sounds like a cross between X and Y," or "The lyrics are witty!" And neither do the Pitchfork people. Their reviews tend to involve a series of addled metaphors sprinkled with references to obscure lo-fi indie bands from the 1990s. There's little in the way of rigorous musical analysis. (Consider their review of Kid A: "Comparing this to other albums is like comparing an aquarium to blue construction paper." Okay, then.) But so what?
Mostly they're saying, "This is how liking this band will help define who you are." Aesthetic judgments play at most a co-equal role. And that's valuable to a lot of people—in the same way advertising is valuable to a lot of people—even if it's not always the only thing they want. It also explains why Pitchfork reviews can be so mercurial. Calculated snobbery requires a bit of unpredictability now and again. The genius is that, even when people disagree, their disagreement is still an opportunity to define themselves through their musical tastes. In a strange sense, even a disagreeable and totally off-base Pitchfork review can give people what they're looking for.
By now everyone's seen the new study showing that roughly 655,000 people have died in Iraq, above and beyond what would've been the case had the Bush administration never gone to war. Calling the numbers "ghastly" or "sickening" would be redundant at this point, but I do want to highlight a few things in the full study. Some armchair statisticians are already casting doubt on the paper by suggesting that the numbers couldn't possibly be that high. But here's what the authors have to say:
Much violence is occurring far from the view of journalists and widely cited mechanisms for counting the dead. Most Western reporters are based in Baghdad. Even there, large-scale events tend to gain attention, not the numerous but scattered incidences of violence that also occur. [...]
The large rise in sectarian violence, and the survey's findings regarding gunshots being the principal cause of death, correlate closely. They also reflect the reports of widespread assassinations. If, for example, there were three such killings daily in each of the 75 or so urban centers of Iraq (outside of Baghdad and the Kurdish north), the total for the 40 months covered by this survey would equal more than 270,000; four such killings daily in those 75 cities would equal 360,000 in that period.
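The back-of-envelope figures in that quoted passage are easy to verify; here's a quick sketch (assuming 30-day months, which is evidently what the authors used):

```python
# Sanity check on the study's illustrative assassination arithmetic:
# N killings per day, in each of ~75 urban centers, over the
# 40 months covered by the survey (30-day months assumed).
cities = 75
months = 40
days = months * 30  # 1,200 days

for per_day in (3, 4):
    total = per_day * cities * days
    print(f"{per_day}/day in {cities} cities over {days} days = {total:,}")
    # 3/day -> 270,000; 4/day -> 360,000, matching the quoted totals
```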
So the numbers are entirely plausible. I'd note, though, that I was a bit surprised to see that of the estimated 655,000 excess deaths, roughly 600,000 were attributed to violence, and the rest to disease and other causes. From what we know about the dismal water quality and health care in Iraq, wouldn't we expect the number of deaths from disease to be much, much higher? I guess the explanation is that Iraq's public health system was so horrible under Saddam Hussein--due, in part, to U.N. sanctions during the 1990s--that the bungled occupation and incessant violence have only made things moderately worse on that score.
Anyway, the researchers made a couple other points worth reiterating. One, they observe that while these numbers may sound shocking to some, they might explain why so many Iraqis want the United States out. Coalition forces, after all, are killing an estimated 4,700 Iraqis a month--many via the air war that's being conducted largely outside the media's purview. Two, "the overall scale of violence and the large representation of young men in the mortality figures may indicate a much larger insurgency and/or membership in militias than is widely estimated." Three, a staggering percentage of the violence is taking place outside Baghdad, even though the capital has been the focus of the military's strategy of late. Quite the punctuation mark we've got here.
I see Arnold Schwarzenegger has had to declare a state of emergency in California's overcrowded prisons. The government is now shipping inmates to other states in order to alleviate the burden. One could make a couple of obvious points here, either about the draconian drug laws that have created an unsustainable situation, or the iron grip that the prison guards' union has on the state legislature, preventing real changes from taking place. But instead, let's talk about parole reform, a topic that's all the rage in the California legislature these days.
Nationwide, roughly 51 percent of ex-convicts end up back in prison after three years, according to the Justice Department. Of those, over half are sent back not because they committed a crime, but because they had a "technical" parole violation: missing an appointment, failing to land a job, getting a speeding ticket, flunking a drug test. Parole officers are overworked, and often have to deal with hundreds of cases at a time, so it's usually just easier to send a difficult parolee who misses appointments back to jail. There's less risk that way.
So even setting aside the problems with wildly irrational drug laws and mandatory minimums, that's a huge number of people going back to jail for no good reason. For speeding tickets, in some cases. It's a massive problem in California, where 70 percent of parolees are back in prison in three years, and 67 percent of new admissions to California prisons are parole violators. It's the worst state in the nation on both counts. By comparison, only about 14 percent of new admissions in Mississippi of all places are parole violators.
A working solution won't necessarily involve more funding. California already spends more per parolee than most states. Procedures could change. Most people violate parole in the first few months; so "front-loading" resources to avoid this could pay off. Using insurance-style risk instruments could help parole agencies figure out which ex-convicts require more supervision than others. Incremental sanctions, rather than automatic jail time, for technical parole violations seems sensible. California could also reduce the barriers faced by prisoners when trying to re-enter society. Right now many can't vote, and they're excluded from receiving food stamps or college loans or whatnot. It's a pretty clear recipe for recidivism.
Schwarzenegger entered office with good ideas about parole reform, only to abandon them in the face of carping from victims' rights groups and the prison guards' union. He also assumed parole reform would lead to instant savings, and required that the changes basically fund themselves, which sabotaged the project from the start. Nowadays, according to the San Diego Union-Tribune, the governor isn't even "considering alternatives to imprisoning 'technical' parole violators who did not commit new crimes." But other states have tried it, and succeeded: Texas, Ohio, and Kansas especially. Surely it can be done, no?
Gee, John McCain's sure giving the impression that he would support an attack on North Korea to prevent them from developing nuclear warhead technology: "That is a grave threat ... that we cannot under any circumstances accept." Any circumstances? Really? And negotiations are out, he says. You know, I don't really see the case for thinking that McCain's some sort of closet dove or sane "realist". But that's just me.
George Monbiot has a very significant story in the Guardian today: A soon-to-be-published rainfall forecast in the Journal of Hydrometeorology will apparently predict a serious "global drying trend" over the next century if greenhouse gas emissions remain moderate or high. Droughts are on the way, thanks to global warming. This would come on top of the fact that many areas are already facing water shortages for reasons discussed previously:
The great famines predicted for the 1970s were averted by new varieties of rice, wheat and maize, whose development was known as the "green revolution". They produce tremendous yields, but require plenty of water. This has been provided by irrigation, much of which uses underground reserves. Unfortunately, many of them are being exploited much faster than they are being replenished. In India, for example, some 250 cubic kilometres (a cubic kilometre is a billion cubic metres or a trillion litres) are extracted for irrigation every year, of which about 150 are replaced by the rain. "Two hundred million people [are] facing a waterless future. The groundwater boom is turning to bust and, for some, the green revolution is over."
In China, 100 million people live on crops grown with underground water that is not being refilled: water tables are falling fast all over the north China plain. Many more rely on the Huang He (the Yellow river), which already appears to be drying up as a result of abstraction and, possibly, climate change. Around 90% of the crops in Pakistan are watered by irrigation from the Indus. Almost all the river's water is already diverted into the fields -- it often fails now to reach the sea. The Ogallala aquifer that lies under the western and south-western United States, and which has fed much of the world, has fallen by 30 metres in many places. It now produces half as much water as it did in the 1970s.
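The quoted Indian groundwater numbers imply a sizable annual overdraft; a quick units sketch (using the parenthetical conversion from the quote itself):

```python
# Rough check on the quoted Indian irrigation figures:
# 250 cubic km pumped per year, ~150 replaced by rain.
LITRES_PER_KM3 = 10**12  # 1 km^3 = 1e9 m^3 = 1e12 litres (per the quote)

extracted = 250  # km^3 per year
recharged = 150  # km^3 per year
deficit = extracted - recharged

print(f"Annual overdraft: {deficit} km^3")
print(f"That's {deficit * LITRES_PER_KM3:,} litres drawn down each year")
```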
Yikes. Meanwhile, salty oceans will start encroaching on freshwater supplies lying around the coast, and many of the proposed solutions to the shift would only accelerate global warming -- such as moving agriculture inland (which would require cutting down trees in the Amazon) or building desalination plants (which are energy-intensive). The obvious step would be to cut carbon emissions and forestall the whole disaster, although Monbiot reports that forestalling global drying would require a whopping 90 percent cut in carbon emissions by wealthy nations before 2030. Who'd take those odds?
From a geopolitical standpoint, the future looks ugly. The spectre of "water wars" has been raised often. Just last month, the Sri Lankan government launched an assault on the Tamil Tigers for refusing to open a canal for rice farmers. China and India will need to start diverting rivers for their own ends, causing chaos for their neighbors, especially Bangladesh. We know about the water conflicts in the Middle East; Turkish dams on the Euphrates could make things rather parched for Syria and Iraq. Granted, some nations have learned to share, as with the Nile in Africa, although those treaties might start drying up as soon as the rivers do.
In this month's American Prospect, Reva Siegel and Sarah Blustain report on the newest pro-life tactic being put to use around the country. A number of abortion foes, it seems, are trying to distance themselves from the religious families in unicorn sweatshirts who are protesting by holding up blow-up photos of dismembered fetuses. Apparently that strategy isn't quite as persuasive as once thought.
So abortion foes are now leaving the fetus out of it, trying instead to convince people that pro-life policies are actually better for the mother. South Dakota's abortion ban talked about protecting women's welfare, women's health. Some of this is accomplished through outright lying—the insistence, against all evidence, that abortion causes staggeringly high rates of depression, or leads to breast cancer. Pro-lifers claim that women are bamboozled by clinics into aborting their pregnancies, only to regret it later. So, they say, pro-life policies actually increase women's freedom, by helping them make more "informed" choices. "Pregnancy crisis centers" fit neatly into this Orwellian strategy. Just add lies and deception, shake, and serve.
Will Saletan created a fantastic resource nearly a decade ago that listed many of the frames both sides in the abortion debate have used to make their case. It's a rapidly shifting debate, and this new twist is particularly devious. Pro-choicers who focus too much on the idea that abortion can be a "tragic" experience only play into the frame. Most leading Democrats are in that camp. Most women, of course, don't regret their abortion and can make informed choices perfectly well without the "assistance" of those peddling junk science and trying to shutter actual health clinics. Still, this looks like a very significant new development.
MORE: The Los Angeles Times also notices the rhetorical shift.
If you're looking for some off-beat reading, Bernadette Barton's Stripped: Inside the Lives of Exotic Dancers might fit the bill. Sorry, no pictures. It's one of the better books I've read lately. The takeaway thesis, I think, is this: Debates about strippers—or sex workers in general—often center on the question of whether sex work is empowering or oppressive. Barton took the radical step of getting to know the workers themselves and finding out. The answer she gets is... it's neither. There are aspects of working in a nude club that are empowering, in some sense, but those same aspects often become oppressive after a period of time.
Women begin dancing because they need the money and like the hours. Coercion is subtle but it's there too. Barton talks to dancers who thrive, at least in the short term, on the compliments they get while working. Some appreciate the ability to set boundaries—the "no touching" rule, for instance—something they might have a hard time doing in the outside world, where some level of everyday harassment is deemed tolerable or acceptable. A few women are able to partially overcome past abusive experiences by taking control of their sexuality through stripping, although this is sort of problematic. Some women even find pleasure in performing—in dancing, in dressing up.
But those upsides rarely last long. The compliments inevitably give way to insults, boundaries are violated by "grabby" men, the thrill of dancing yields to achy joints. Anyone who works in a strip club for long enough—Barton identifies the magic time period as roughly three years—begins to break down. I hadn't really thought about it, but rejection wears on a lot of the women. Like, say, nannies, they're doing a lot of emotional labor—trying constantly to please people, being judged on their appearance, being invested in their interactions. It's draining, and little wonder alcoholism and drug use are so rampant.
There's an interesting section on how dancers deal with the social stigma attached to their profession (which can make life tough for their kids at school, lead to discrimination, etc). Some women cope by distancing themselves—both mentally and in conversation—from other strippers. "I don't let customers touch me, but the girls who do make it harder for the rest of us..." There are good strippers and bad, the implication goes. But attempts to marginalize certain dancers just end up reinforcing negative stereotypes about all dancers. It's a rather vicious trap. There are other strategies women use to deal with the stigma of stripping, but that one stuck out.
Then there are the men. Because stripping is considered tawdry and outside the bounds of moral rectitude, many customers feel entitled to be rude, or obnoxious, or lewd. Dancers are treated as whores, pressured and manipulated in various ways. Customers confuse obligatory flirting with actual desire. Barton's dancers estimate that only 20 percent of customers are "problem clients". That's still a lot, obviously. Anyway, none of this does justice to the book. It's an academic study, and the writing's a bit dry, but it's really very good. Maybe I'll post again when I finish reading it.
An easily-forgettable fact—by which I mean a fact that I easily forget—is that science is to some extent a product of various social institutions, with all their quirks and imperfections. There are, I take it, all sorts of heated debates about to what extent this is the case. Either way, I thought this bit in Jim Holt's New Yorker piece on string theory illustrated a few things rather nicely:
String theorists dominate the country’s top physics departments. At the Institute for Advanced Study, the director and nearly all of the particle physicists with permanent positions are string theorists. Eight of the nine MacArthur fellowships awarded to particle physicists over the years have gone to string theorists.
Since the fall-off in academic hiring in the nineteen-seventies, the average age of tenured physics professors has reached nearly sixty. Every year, around eighty people receive Ph.D.s in particle physics, but only around ten of them can expect to get permanent jobs in the field. In this hypercompetitive environment, the only hope for a young theoretical physicist is to curry favor by solving a set problem in string theory. "Nowadays," one established figure in the field has said, "if you’re a hot-shot young string theorist you’ve got it made."
Maybe string theory is a good theory about the universe. Maybe it's even the best theory. But if it's not, in fact, the best theory, the odds of this fact being discovered anytime soon seem rather slim, seeing as how the people most likely to doubt the whole enterprise are the people least likely to have tenured positions and lots of free time to think about theoretical physics. No?
Anyway, my clumsy understanding is that the sort of thing Holt describes has happened fairly frequently throughout history—I think that's what Thomas Kuhn's book was kinda sorta about—so maybe it's not a terrible problem and we just have to sit back and wait for some odds-defying, paradigm-shifting genius to set us all straight. Or maybe modern physics has become so advanced, requiring so much technical skill and prior learning, that social barriers to innovation have much more of an adverse impact nowadays than they did in, say, the 19th century. I guess I have no idea.
UPDATE: Read Sean Carroll's more-informed take. He says, "It seems worth emphasizing that the dominance of string theory is absolutely not self-perpetuating," and that string theorists are hired because of their interesting work, not due to some string-theory mafia. I'm not sure Carroll's view has to be incompatible with Smolin's (the people doing the hiring may not be string theorists, but their sense of what's interesting and worthwhile might be unduly influenced by other string theorists) but I'll leave this debate for someone who knows more about the topic.
MORE: Do check out the comments. Allen K. has some really smart things to say about what Smolin (and Holt) are missing.
Michael Kinsley is complaining that the political ads up in Washington are all totally asinine. I haven't seen them, but if all the brain-dead spots in the Maryland Senate race are any indication, he's probably right.
Reason has a cover story this month praising negative campaign ads in all their vicious glory. Attack ads, David Mark says, tend to convey more information than thumb-sucking "positive" spots. That doesn't sound entirely wrong to me. Ronald Reagan's "Morning in America" ad was the apotheosis of inanity. What did it even mean? Who's lapping that up? At least the "Daisy Girl" ad let people know that Goldwater was a kook. There's something to be said for that.
A few experts, like John Geer, say negative ads are more likely to be substantive—and substantiated—than clips of Senator Stay-Puft frolicking around with puppies and children. Or "talking about the issues" in vague, meaningless terms. But I wonder if this goes too far. Negative ads can do a lot of harm. A lot of the attack ads in this election cycle focus on immigration, and most of them do nothing but fuel xenophobia (this one's so wacky as to be a possible exception). More thumb-sucking might be preferable. Lawmakers might also avoid voting for X or Y policy out of fear of the demagogic attack ad that would ensue. Risk aversion by politicians can be good or bad, depending. I'm not sure how it all shakes out.
I don't know what should be done about North Korea any more than you do. But here are some articles on how things got to where they are. Fred Kaplan's "Rolling Blunder" explained how the Bush administration abandoned the flawed-but-basically-sound deal its predecessor had made with North Korea. (One can also note that it was the United States that originally cheated on the Agreed Framework—by never delivering promised light-water reactors, and by never taking promised steps towards normalization—long before North Korea resumed its illicit attempts at uranium enrichment.) That deal doesn't look half so bad now.
Beyond mere finger-pointing, the point is that diplomacy can work in these situations—although, as Peter Scoblic wrote in The New Republic two years ago, the Clinton administration had to combine diplomacy with the threat of force to get North Korea to agree to lock up its plutonium rods and let IAEA inspectors back in. It took more than flowers and candy. The current administration, by contrast, has done little but provoke Kim Jong Il—despite evidence that North Korea may actually want negotiations. Selig Harrison's article in the current Newsweek makes a strong case for more talk and less confrontation. He might be overly sanguine about the North Korean officials he's chatting with these days, but read his piece anyway.
In 2003, Robert Alvarez wrote that it was "very destructive to exaggerate North Korea's nuclear capability." Maybe he's still right. Michelle Malkin posted a map this morning showing the ranges of North Korea's ICBMs. Much of the continental United States looks like it's in the crosshairs. But arms control expert Jeffrey Lewis noted a few months ago that the missile in question, the Taepodong 2, doesn't actually work. North Korea can't lob a nuke at Los Angeles for the time being. We also don't know how big a nuke they've actually tested. That won't stop the missile-defense cheerleaders from doing their thing, but unwarranted hysteria doesn't seem very helpful at this point.
Matt Yglesias revisits the debate about letting North Korea collapse. Nicholas Eberstadt has argued that U.S. food aid unwittingly propped up Kim Jong Il's regime during the 1990s, although the country's economy seems somewhat sturdier nowadays. At any rate, neither China nor South Korea really wants North Korea to collapse. It would be chaos—refugees galore plus a proliferation nightmare. The current South Korean strategy—engaging its northern neighbor, exerting some measure of peaceful influence, and slowly building sweatshops across the border in the hopes that North Korea will eventually worship at the altar of capitalism and decide that there's more to life than waving its warheads around maniacally—has its own problems, but is still likely better than anything the White House will concoct at this point. What a mess.
UPDATE: Not that it really contradicts any of the above, but Jeffrey Lewis thinks the North Korea nuclear test might have been a dud...
In Grist, Tom Philpott wonders if industrial agriculture can "withstand climate change." Or maybe it's the other way around. The modern-day food supply increasingly depends on lots and lots of fossil fuels. Food is grown further and further away, and requires more and more oil to get delivered to our kitchen tables. Wal-Mart's foray into the "organic" food market, ironically, may well accelerate this trend. Then there are all those petroleum-based pesticides. Once peak oil, or a carbon tax, comes into play, the fertilizer's going to hit the fan.
Climate change is just the tip of the iceberg. Industrial agriculture has certainly done a great deal of damage on a variety of fronts. Philpott discusses the well-known loss of biodiversity in various regions, thanks to Western plant breeders who have introduced "hybridized" strains that end up dominating once-diverse agricultural areas. "97 percent of fruit and vegetable seed varieties that were commercially available in 1907 had vanished by 1983." Some regions are now more prone to crop devastation as a result. All that diversity served a purpose besides just looking pretty.
Then there are grim side stories, like the "plague of suicides" in India. The problem has a lot of causes. Here's a sinister-sounding one: Monsanto sells genetically-modified seeds to Indian farmers. The new crops flourish, but require fertilizers and pesticides galore. Soon the land won't grow anything but Monsanto crops, which are twice as expensive. Farmers plunge into debt, despair, and then take their own lives. Read all about it in the New York Times—or in the lefty journals that have been waving their hands about this issue for years.
Come to think of it, the Times has done some truly excellent reporting lately on the dark side of modern-day agricultural practices. India, we learn, is "running through its groundwater so fast that scarcity could threaten whole regions… and ultimately stunt the country's ability to farm and feed its people." No exploration, though, of whether "water-guzzling" Green Revolution irrigation practices, or massive and often ill-conceived World Bank water projects, have intensified the problem. On the very significant other hand, some of these grand schemes have helped ramp up food production, staving off famines in the past (although that might not still be the case). But that doesn't mean there are no alternatives to the current state of affairs.
Via Crooked Timber, Alex de Waal, who certainly knows a thing or two about Darfur, argues that a military intervention to halt the ongoing conflict there would be a terrible idea:
The knock-down argument against humanitarian invasion is that it won't work. The idea of foreign troops fighting their way into Darfur and disarming the Janjaweed militia by force is sheer fantasy. Practicality dictates that a peacekeeping force in Darfur cannot enforce its will on any resisting armed groups without entering into a protracted and unwinnable counter-insurgency in which casualties are inevitable.
The only way peacekeeping works is with consent: the agreement of the Sudan government and the support of the majority of the Darfurian populace, including the leaders of the multitudinous armed groups in the region. Without this, UN troops will not only fail but will make the plight of Darfurians even worse.
I've heard arguments both ways on this score, and don't know nearly enough about the country to figure out which one is correct. Eric Reeves, who has done a lot of work charting the humanitarian catastrophe in Darfur, seems certain that both the Khartoum-backed militias and the Darfur resistance groups would basically drop their guns at the first sign of Western military might (or at least after a moderate amount of fighting) and thereby put an end to this war. Problem solved. The political issues that led to the conflict can be hashed out later.
Trouble is, that didn't exactly work in, for instance, Somalia. Now not all situations are equal, granted, but if the janjaweed didn't disarm quietly, it would be very difficult for the West to dole out humanitarian aid in Darfur while waging what might amount to a counterinsurgency campaign. The rebel groups, no angels they, might continue to wage an insurgent war against the central government. And a Western force might be ill-equipped to settle a number of issues driving the conflict—would the West be in charge, say, of handing the lands seized by janjaweed militias back to dispossessed Darfur tribes? By force? One begins to see why de Waal thinks all this "philanthropic imperialism" could make things worse, not better.
Of course, the intermediate case is that a Western intervention proves fairly bloody and gruesome, but not quite as bloody and gruesome as the status quo, and hence is worth doing. That sounds entirely likely. But De Waal suggests that the safest option, at this point, is to focus on reviving talks between the central government and the two rebel groups that haven't signed onto the failed May peace accords. (To a large extent, after all, the ongoing slaughter is due to a political conflict, not some irrational desire by Khartoum to kill every last Darfuri; that doesn't make the violence more palatable, but it does mean that it could be stopped with a properly-structured peace accord.)
Maybe de Waal's right about the prospects for military intervention, maybe he's not. The way I see things, it doesn't matter. Practically speaking, neither the United States nor Britain nor France nor anyone else will send troops in anytime soon. So at this point, idle saber-rattling probably doesn't accomplish much beyond undermining the resumption of peace talks—which sound, at this point, like the only realistic option for stopping the slaughter. (Continued U.S. threats to intervene might, for instance, actually embolden the Darfur rebel groups to continue fighting in the hopes that the West will, in fact, intervene at some point and help them get the independent state some leaders apparently want.)
I'll admit, I have a soft spot for clever arguments about tinkering with our various government institutions. Abolish the filibuster! Elect half the Senate through proportional representation! Heck, abolish the Senate! It's all no more likely to happen than I am to sprout antlers, but it's good fun all the same. And now Sanford Levinson wants a piece of the action, arguing that the United States should junk the presidential veto. Here's some history:
The veto was not always a key tool of executive power; early presidents used it with restraint or, as in the cases of John Adams, Thomas Jefferson, and John Quincy Adams, not at all. Things began to change once Andrew Jackson arrived in office and set about expanding the powers of the presidency. Prior to Jackson, according to UCLA political scientist Scott James, "[s]ettled practice authorized vetoes where Congress's authority to legislate in a given area was constitutionally murky, or where hasty consideration produced legislation with remediable technical deficiencies."
The veto did not operate "to substitute the president's judgment for the Congress's on national policy matters." In the ensuing decades, however, presidents slowly stopped thinking of themselves as mere enforcers of laws passed by an autonomous Congress. The veto gave them power to block legislation they disliked, and the threat of the veto gave them growing influence over how laws took shape. Soon, presidents were key participants in the legislative process.
Things I didn't know. In any case, Levinson's complaints about the veto mostly boil down to the fact that, thanks to the electoral college, presidents aren't really very representative of the nation as a whole—they can lose the popular vote, after all, and are often elected by pandering to the preferences of a few key "swing" states—and hence shouldn't be able to thwart the will of Congress, which is more democratic. That sounds more like an argument for scrapping the electoral college to me, but what do I know.
If journalist Seth Ackerman is to be believed (here, here, and this comment here), the following things are all true:
* Hamas has recently become "open to changing its policy in favor of a long-term peaceful accommodation with Israel." The moderate wing of the party—led by Prime Minister Ismail Haniya—increasingly favors an armistice, followed by an Israeli withdrawal to the pre-1967 borders, a Palestinian state, elections, and a truce lasting ten years or so, maybe longer.
* U.S. media outlets have, almost without exception, failed to report this shift. News accounts still describe Hamas as "committed to the destruction of Israel" and so forth. Granted, there are a lot of Palestinians for whom this description holds true, but it lacks, shall we say, nuance.
* Hamas recently entered discussions with Fatah over forming a unity government. The two groups agreed to use the 2002 Arab League plan—which calls for the recognition of Israel in exchange for withdrawal from the occupied territories—as the basis for their foreign policy.
* The United States told Mahmoud Abbas, no, this was unacceptable, that Hamas had to agree to recognize Israel immediately and commit to both the Oslo Accords and the Road Map, before the United States would deign to deal with the unity government. (In return for this large concession, Hamas would get no assurances that the Palestinians would receive a viable, contiguous state.) Rather than shrug off the Bush administration, Abbas agreed.
* Hamas, quite predictably, refused to go along. It's worth noting, as Ackerman does, that Fatah doesn't recognize Israel's right to exist either, and Likud doesn't recognize the right of a Palestinian state to exist. So without glossing over Hamas' many problems, the party's refusal isn't exactly unprecedented. Naturally, though, the American media portrayed Hamas as incorrigible through and through.
* So Haniya backed out of the talks and now Fatah and Hamas are clashing on the streets, and the situation continues to deteriorate.
None of that sounds all that outlandish to me, so I'll settle on this as the basic storyline. One might add that the military wing of Hamas—led by Khaled Meshal, who lives in Syria—has "repeatedly undermined the Haniya government's authority" in the past, as one Abbas advisor put it, and it may be hindering Hamas' ability to negotiate here. Not everyone's ready to play nice. But the United States also appears to be doing its best to scuttle diplomatic options (and our media seems intent on providing the most obtuse coverage possible). Go figure...
Apparently a new study has found that state laws protecting clinics from abortion violence are less effective than previously believed:
During a wave of anti-abortion violence in the early 1990s, several states enacted legislation protecting abortion clinics, staff and patients. Some experts predicted that these laws would provide a deterrent effect, resulting in fewer anti-abortion crimes. Others predicted a backlash from radical members of the anti-abortion movement, leading to more crimes in states with protective legislation.
"We tested these competing hypotheses and found no support for either one," Pridemore said. "In other words, states with laws protecting abortion clinics and reproductive rights are no more or less likely than other states to have higher or lower levels of victimization against abortion clinics, staff or patients."
Well... I wouldn't say that these laws were ineffective, exactly. In 1994, Congress passed the FACE Act, which barred abortion protestors from using physical force to stop women from entering clinics and made it a federal crime to damage abortion facilities. Then the Supreme Court ruled that states could establish buffer zones around clinics to protect their patients and staff from violent protestors. Both of those things, combined with racketeering suits against anti-abortion groups, really did help decrease certain types of violence, especially clinic blockades, which could shut down facilities for weeks. Clinic blockades have nearly disappeared. That was a huge accomplishment.
But the larger point—that laws haven't deterred anti-abortion extremists from finding other ways to terrorize clinics—seems sound. A survey of 361 abortion facilities in 48 states found that 7 percent had faced major violence or vandalism and 27 percent had endured minor vandalism. Abortion foes aren't murdering doctors and firebombing providers at the rate they used to, but they're still plenty disruptive. One imagines it's a bit difficult to run a clinic properly under this sort of onslaught:
The reported victimizations involved many forms of violence, harassment and intimidation, including physical violence, bombing, arson, death threats, bomb threats, facility invasion, burglary, break-ins, broken windows, glue in locks, nails in driveways, graffiti, clinic blockades, stalking of staff or physicians, home picketing, posting of "wanted" posters and harassment via the Internet.
I'd be interested to know just how many abortion providers have had to shut down (or not start up in the first place) over the years because of these sorts of attacks. That might be hard to count, because intimidation can work in so many indirect and subtle ways (and because right-wingers are also waging a stealthy legal campaign to shut down clinics), but seeing as how 87 percent of U.S. counties—where over a third of women age 15-44 live—have no abortion providers, I'm guessing the answer is: a lot. And if this study is correct, it's the sort of intimidation that's incredibly hard to deter. Religious fanatics aren't exactly fazed by legal threats, at least not the ones currently on the books.
It's over a year old, but here's something Scott Lemieux once wrote that I just stumbled upon and found fascinating:
Mark Graber’s classic article “The Non-majoritarian Difficulty” noted that issues like slavery, antitrust, and abortion ended up in the courts not because the courts "seized" power but because legislatures wanted no part of these issues, which cross-cut existing political coalitions and hence threatened to destabilize incumbent parties. Congress both failed to use the tools at its disposal to retaliate against judicial policy-making and passed various measures that expanded the authority of the courts.
Other scholars have confirmed Graber’s insights. Perhaps the best example is labor law, as George Lovell argues in Legislative Deferrals. One of the great puzzles of political science is why labor became so ineffectual in the United States, although labor in the U.S. was once more radical and more powerful than in Europe. Some scholars argued that the key was the presence of judicial review: Congress would pass pro-labor legislation, which would be gutted by the courts. Lovell, studying the Erdman and Clayton Acts, finds, however, that Congress made each piece of legislation progressively more ambiguous, and gave more injunction power to the courts (which it knew full well were conservative and unlikely to resolve ambiguities in ways favorable to the interests of labor) at each iteration.
So the story of the courts "usurping" legislative power and screwing labor is highly misleading. The policy-making the courts engaged in was how a majority of Congress wanted it. Although we think of institutions as maximizing power, it is not uncommon for legislatures to delegate and defer power to the courts, just as they do to executive agencies. (Remember, legislators don’t always want to maximize policy goals; they also want to get re-elected, finesse conflicts among constituencies, etc.)
Sounds like an interesting book! I wonder how that argument would go over with those who think that judges are too activist and should always look to the "original intent" of whatever law they're interpreting to figure out what it means. Sometimes, apparently, Congress' original intent is to let the courts do the policymaking. Presumably legal scholars who are big on "original intent" have already thought of this; I'm just curious what they'd say.