May 31, 2005

Abolish the Primaries?

Neil the Ethical Werewolf—an unbeatable name, really—had a post the other day picking apart the presidential primary process. Indeed, it ought to be picked apart. At the same time, the solutions usually presented—break Iowa's lock on the process! Include Southern states! Rotating regional primaries!—always seem a tad unimaginative to me, even if they're the most realistic. At the very least we ought to seriously consider radical changes to the entire system, even if we end up rejecting them, before we resign ourselves to just shuffling a few states around here and there.

The main problem with simply breaking Iowa's death-grip on the primary process is this: Iowan caucus-goers are unrepresentative of the larger electorate, true, but any state's primary voters would be unrepresentative, mainly because primary voters are largely the "party base," not swing voters or "undecideds," and that's going to pull the nominees a little further away from the median voter. According to this data-set (pdf), even in southern states Democratic primary voters tend to be older, more liberal, better educated, less white, and poorer than the general electorate. And obviously more politically active. Indeed, Iowa got a lot of heat for being "too antiwar" in 2004, but regardless of what you think about this, primary voters in South Carolina and Virginia were actually more dovish on Iraq than Iowans. Now that's not necessarily a bad thing, and these southern primary voters may still be more representative of the general electorate than Iowan caucus-goers, but no matter what states go first, you're still getting a decent tug away from the "center".
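
To make that "tug" concrete, here's a toy simulation. The distributions are completely made up, but the mechanics are the point: if the primary electorate sits to one side of the general electorate, a nominee who courts the primary's median lands some distance from the general-election median.

```python
import random
import statistics

random.seed(42)

# Toy ideology scale: 0 is the general-election median voter.
# Both distributions are invented; only the mechanism matters.
general_electorate = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Primary voters drawn disproportionately from the party base:
# shifted left of the general electorate and a bit more clustered.
primary_voters = [random.gauss(-0.8, 0.8) for _ in range(10_000)]

# A nominee who courts the median of the primary electorate ends up
# this far from the median of the voters who decide in November.
print("general-electorate median:", round(statistics.median(general_electorate), 2))
print("primary-electorate median:", round(statistics.median(primary_voters), 2))
```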

The defense here is that there should be some sort of system in place to reward party activists. Eh, I'm not convinced this is always a sound principle. Why, exactly, should people who spend more time—and have more time—engaging in Democratic activities get a greater say in the process? That same principle, mind you, justifies the undue influence of the wealthy, better-educated, and elderly over politics in general. It's not instantly defensible, and I don't think there's anything intrinsically unfair about throwing open the primary process to the wider electorate—heck, even include Republican voters. Scheming conservatives could, in theory, hijack the whole process and throw a chunk of votes towards, say, Kucinich, but I don't think that's a serious danger.

Of course, that's all based on the assumption that primary voters pick the candidate they personally want to see in office. In 2004, that wasn't true at all—most primary voters became mini-pundits, basing their picks to a large extent on who they thought would win the general election. (So you saw all sorts of exit polls indicating that most Dean voters actually liked their man, whereas Kerry voters weren't so enthused, but were attracted to that glowing white obelisk we all called… "electability.") Fair enough, but if that's going to be the dominant trend, I'd almost rather leave the selection to people who are better suited to this sort of guesswork: namely, the delegates themselves. Go back to the old pre-1968 days of smoke-filled rooms and let party officials pick the candidate at the convention. Yeah, it's less democratic, but hey, I'd rather get a Democratic nominee who can win the general election than a nominee chosen by the people. Within reason. Of course, the "wisdom of crowds" thesis suggests that a wide swath of primary voters could well be better at guessing who that electable candidate might be, but I think the bandwagon effect—later states latch on to what earlier states have done—cancels this out.

(Another possible reason to abolish primaries and go back to smoke-filled rooms? Money. With the exception of Dean and I think one other, though I forget who, since 1976 the candidate who started the primary season with the most money has won the nomination. Also, lots of people like the primaries because they "test" candidates with various political problems and surprises along the way. Well, humbug. Frankly, I don't really want a nominee who can solve thorny, campaign-type "obstacle courses." I want a president who can, you know, govern. That's not to say the old way of choosing candidates was flawless, but it's worth a revisit.)

Meanwhile, there's no reason why the Democrats can't add things like single-transferable-vote schemes to the primary process. If anything, doing so would make the primaries more likely to pick the sort of candidate who can win in a general election—since a candidate who has, say, only the second-most first-place votes but an overwhelming majority of second-place votes likely has broader appeal in a wide-open race, and is thus more likely to do well in a general election. I think it's been shown via computer simulation that the single-vote method for a wide field—which is what the current primaries use—only picks the Condorcet winner (i.e., the candidate who would beat every other candidate one-on-one) about half the time. Either the single transferable vote or even approval voting does much better.
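
I can't vouch for whichever simulation I'm half-remembering, but the experiment is cheap to rerun. Here's a minimal sketch (a one-dimensional spatial voting model with invented parameters) comparing the plurality winner to the Condorcet winner over many trials; the exact agreement rate you get depends on the assumptions, which is rather the point:

```python
import random

random.seed(0)

def condorcet_winner(ranks, n_cands):
    """ranks: one preference order per voter, best candidate first."""
    for c in range(n_cands):
        beats_all = all(
            2 * sum(r.index(c) < r.index(d) for r in ranks) > len(ranks)
            for d in range(n_cands) if d != c
        )
        if beats_all:
            return c
    return None  # a cycle; can't actually occur with 1-D preferences

def trial(n_voters=501, n_cands=6):
    cands = [random.uniform(-1, 1) for _ in range(n_cands)]
    ranks = []
    for _ in range(n_voters):
        v = random.gauss(0, 0.5)  # voter's position on the same scale
        ranks.append(sorted(range(n_cands), key=lambda c: abs(cands[c] - v)))
    plurality = max(range(n_cands), key=lambda c: sum(r[0] == c for r in ranks))
    return plurality, condorcet_winner(ranks, n_cands)

trials = [trial() for _ in range(300)]
agree = sum(p == c for p, c in trials) / len(trials)
print(f"plurality matched the Condorcet winner in {agree:.0%} of trials")
```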

Alternatively, I'd be open to thinking about a primary process like that in New York City (or even in the French presidential election) where all candidates from all parties run in a general primary, and then the top two vote-getters, regardless of party affiliation, move on to the general election. This would be especially valuable, I think, in House and Senate races, and would make it harder for incumbents to automatically win re-election. On the other hand, you'd have chaos, but I'm all in favor of a bit of chaos.
-- Brad Plumer 5:42 PM || ||

May 30, 2005

Linky

Since rapid-fire linking is where the action is these days, or something, here's a bunch of nifty posts and articles found around the internet.
  • Steve Sailer dissects voting patterns and suggests that Republicans—who tend to do well among married couples and families with children, and in places where housing is plentiful—ought to make their stand on "affordable family formation". Presumably Democrats could make a stand on this as well; I'm not exactly sure where all the causal arrows are pointing here. Anyway, interesting analysis; Sailer also notes that "the spread of immigrants into the middle of the country puts once-solid Republican states [Arizona, Nevada, Colorado, Georgia, North Carolina] into play."

  • Elizabeth Anderson channels Hayek to… well, defend the liberal welfare state it seems: "In Law, Legislation, and Liberty, vol. 2, Hayek argued that, for a society to secure the liberty of all, its distributive rules cannot aim at achieving some pre-established pattern of distribution based on individual need, desert, or merit. Instead, they should be purely procedural in form. Set up a system of fair, impersonal rules governing our interactions and applicable to all, let people choose freely from among the opportunities generated by acting within the constraints of the rules, and whatever distributions of goods result from following the rules will be just." So that rules out communism but not much else, no?

  • Ah... chess tactics. For the record, my chess game consists entirely of sneaky knight forks, primarily because they're so delightfully infuriating, and my winning strategy really involves making my opponents lose their cool and start doing stupid stuff. Which is why I never challenge anyone over the age of 12.

  • Jon Henke's much-linked post on torture deserves yet more links.

  • Fish! (Long story, but true.)

  • I've spent the last hour clicking around Pharyngula's fun little carnival. Fascinating stuff, especially this linked-to post on the Epoch Times revolutionaries in China.

  • And... a fun article about just how totally awesome math is, by way of profiling Dartmouth's Dan Rockmore. Except, at least while I was there, Rockmore didn't actually teach any math classes at Dartmouth—he taught Computer Science. And he's wrong about the cake-slicing bit. No matter; the rest is quite good.
-- Brad Plumer 11:27 PM || ||
    Who Loves Freedom?

    As someone who knows approximately nothing about corporate governance, I found this paper (pdf), by Paul Gompers, Joy Ishii, and Andrew Metrick, illuminating in its own way. Firms with stronger shareholder rights, it seems, do much better performance-wise than their autocratic peers:
    Corporations are republics. The ultimate authority rests with voters (shareholders). These voters elect representatives (directors) who delegate most decisions to bureaucrats (managers). As in any republic, the actual power-sharing relationship depends upon the specific rules of governance. One extreme, which tilts toward a democracy, reserves little power for management and allows shareholders to quickly and easily replace directors. The other extreme, which tilts toward a dictatorship, reserves extensive power for management and places strong restrictions on shareholders’ ability to replace directors. Presumably, shareholders accept restrictions of their rights in hopes of maximizing their wealth, but little is known about the ideal balance of power.

    From a theoretical perspective, there is no obvious answer. In this paper, we ask an empirical question -- is there a relationship between shareholder rights and corporate performance? Twenty years ago, large corporations had little reason to restrict shareholder rights. Proxy fights and hostile takeovers were rare, and investor activism was in its infancy. By rule, most firms were shareholder democracies, but in practice management had much more of a free hand than they do today. The rise of the junk bond market in the 1980s disturbed this equilibrium by enabling hostile-takeover offers for even the largest public firms. In response, many firms added takeover defenses and other restrictions of shareholder rights. Among the most popular were those that stagger the terms of directors, provide severance packages for managers, and limit shareholders’ ability to meet or act. During the same time period, many states passed antitakeover laws giving firms further defenses against hostile bids. By 1990, there was considerable variation across firms in the strength of shareholder rights. The takeover market subsided in the early 1990s, but this variation remained in place throughout the decade….

    We combine a large set of governance provisions into an index which proxies for the strength of shareholder rights, and then study the empirical relationship between this index and corporate performance. Our analysis should be thought of as a “long-run event study”: we have democracies and dictatorships, the rules stayed mostly the same for a decade -- how did each type do? Our main results are to demonstrate that, in the 1990s, democracies earned significantly higher returns, were valued higher, and had better operating performance.
As they say, though, "the cat jumped up on the table and the vase fell" doesn't mean the cat broke the vase—though that is a handy way to "alert" the people from whom you're renting a house that you broke their vase, ahem—and just because autocratic firms did worse in the 1990s doesn't mean autocracy was the cause. Of course. Trying to pin this down, though, the researchers do find that "autocratic" firms engaged in "an unexpectedly large amount of inefficient investment in the 1990s." I still don't quite understand why this might be: is it because entrenched managers are more likely to make a lot of moves for their own private benefit, rather than for their shareholders'? Hm. Meanwhile, the most fun hypothesis here—that corporate insiders realized their firms were going to do badly in the 1990s and hence gave themselves strong protections from shareholders—seems to have no evidence in support of it. Sad; I enjoy the truly dastardly stuff. But the lesson is clear here: democracy is the best antidote to these freedom-hating regimes.
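
For anyone who wants to see the shape of that "long-run event study," here's a stripped-down sketch. The index below is just a count of restrictive provisions, and the return process (and the cutoffs for the two portfolios) are invented, so this reproduces the method, not the paper's findings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the paper's setup: G counts how many
# shareholder-rights restrictions each firm has adopted. The return
# process is invented -- a small monthly drag per restriction plus
# noise -- so this shows the portfolio comparison, nothing more.
n_firms, n_months = 1500, 120
G = rng.integers(0, 25, size=n_firms)
returns = 0.010 - 0.0002 * G[:, None] + rng.normal(0, 0.06, (n_firms, n_months))

# "Democracy" and "dictatorship" portfolios from the tails of the index.
democracy = returns[G <= 5].mean()
dictatorship = returns[G >= 14].mean()

# The "long-run event study": hold both portfolios for a decade, compare.
print(f"democracy portfolio:    {100 * democracy:.2f}% per month")
print(f"dictatorship portfolio: {100 * dictatorship:.2f}% per month")
print(f"spread:                 {100 * (democracy - dictatorship):.2f}% per month")
```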
    -- Brad Plumer 1:57 PM || ||

    May 29, 2005

    Beyond Uninsurance

Er, so first we were treated to the strange spectacle of Newt Gingrich and Hillary Clinton holding hands and collaborating on health care issues, and now I see the AFL-CIO and the Heritage Foundation are putting their heads together and thinking up ways to expand health care for the uninsured. Well, okay. I also heard some very liberal thoughts from the mouths of some otherwise staunch conservatives this weekend, on immigration and pharmaceuticals, so perhaps the big progressive coalition's coming sooner than we think.

At any rate, if Jonathan Gruber's right about this, the most cost-effective method of extending health coverage to the currently uninsured is simply to expand Medicaid. But instead of wonking out, let's do a bit of fretting: the debate about health care for the poor has become much too narrow. "Cover the uninsured and all will be well." Well, no. Many low-income mothers and children are already eligible for Medicaid, but don't take full advantage of it. Partly that's because the eligibility rules are often stiflingly complex—acting as a deterrent more than anything else—and partly because state workers often simply deny people care, to keep costs down. I doubt this is a problem somehow intrinsic to Medicaid—private insurance companies are just as good at weaseling out of coverage if it redounds to their benefit. So long as we have a patchwork system of coverage, the poor and the sick are always going to be booted around.

But there are all sorts of other reasons why insurance doesn't guarantee good health care for the poor. As these researchers showed, it's very often difficult for people in poor areas to get access to good primary care. People don't have all the time in the world; if care isn't convenient and accessible, then a person's far more likely to shrug off a nasty cough or limp than spend hours going to see a doctor. That goes triple for someone with two retail jobs, a one-hour commute each way, and kids to take care of. Plus, as economists Dana Goldman and Dana Lakdawalla argue here, the poor tend to be less adept at taking care of themselves and using care efficiently, largely for reasons of education. (If you're interested, take a look at that paper—it implies that universal coverage alone could well worsen health inequality.)

To a large extent, then, who cares about the uninsured as such? That's overstating it, since a lot of people obviously do need affordable coverage, but it's certainly not the case that if everyone received decent coverage, everyone would start receiving drastically better health care. For many low-income families, that's potentially way off the mark. So perhaps it's time to start toying around with paternalistic solutions like better education about health, more public screenings (especially in school), more public health clinics, or the like. Newt? Heritage? Robert Fogel does a lot of good work along these lines, while reminding everyone to keep their eyes on the ball—the focus here should be on inequalities and inadequacies in health care, in which insurance plays only one part. Important, but still just one part.
    -- Brad Plumer 10:54 PM || ||
    The Joys of 'What If?'

    Following one of the comment threads earlier this week, I decided to finally read Master of the Senate, part of Robert Caro's massive ongoing biography of Lyndon B. Johnson, while I was traveling. Marvelous stuff! And Johnson was indeed the master.

Still, one question: to what great and noble end was all this "Master of the Senate" stuff? True, under his leadership (1953-1960) Johnson managed to wedge a few important bits of progressive legislation—a minimum wage increase and the disability portion of Social Security—through a chamber that had not passed any such legislation since the early days of Roosevelt's first term. But most liberal programs were watered down to the point of irrelevancy, and the northeastern liberals were duly abused and marginalized all throughout the '50s. Fair enough, but with such Democratic dominance—especially after they won massive majorities in the 1958 midterms—you would expect more substance. Sadly, no. Thanks Lyndon! Though it's a good reminder that liberals have far, far more influence and power in the Senate nowadays—even under the past two Congresses—than they ever did during the Truman/Eisenhower administrations.

    Anyway, that leaves the 1957 civil rights act, the first of its kind in some 80 years, as Johnson's major liberal accomplishment in the Senate, and Caro really points out how difficult it was for the Majority Leader to finesse a compromise between the Paul Douglas liberals and the southern Democrats. Again, marvelous stuff. But what if he hadn't finessed a compromise? Presumably, the other southern Democrats would've just joined Strom Thurmond in filibustering the bill altogether, so then the Democrats would've been pilloried in the upcoming midterms, and the GOP would've captured the African-American vote, as per Richard Nixon's strategy at the time, and presumably retaken the Senate. That, in turn, would've pushed the southern Democrats off the Senate chairmanships, perhaps pulled in enough votes to shatter a southern filibuster, and in the end made it much more likely that the Senate could've passed a much stronger civil rights bill than what actually came out in 1957—mostly a toothless voting act that was all but unenforceable—sooner rather than later.

    So then, and I'm just guessing here, you have a big realignment in which Republicans become the pro-business, pro-civil rights, and perhaps mildly pro-labor party (the AFL-CIO, as Caro tells it, was willing to make civil rights its number one priority at the time, even at the expense of certain labor provisions). At this point, alternate history gets too murky to say for sure, but presumably the Republicans would've remained strongly conservative on economic issues, there would've been no unified liberal political coalition in the 1960s (no Goldwater!), and hence: no Great Society, no Medicaid, no Medicare, no War on Poverty. Many northeastern and midwestern progressives would've still stuck with the Democrats for some time (though some would've joined the Republicans, perhaps), but their influence would be much reduced. So in the long term, perhaps that 1957 act meant a lot more than simply getting one foot in the civil rights door. I don't know. Good book, though.
    -- Brad Plumer 9:35 PM || ||

    May 25, 2005

    Graduation Time!

    Right. I'm heading off to Annapolis for a few days, to watch my younger brother graduate from some little tugboat school or other. Should be fun! But it's possible there will be no blogging 'round these parts until Sunday, unless I manage to leech someone's wireless signal along the way. But before leaving, I do want to say many, many thanks to readers here for all the truly wonderful comment threads over the past few weeks—I've learned far more about valuing forests, inequality, and when and when not to use the Force than I really thought possible. I'm very grateful. Alright, cheers.
    -- Brad Plumer 2:24 AM || ||
    The Innovation Dilemma

    I see Leif Wellington Haase of the Century Foundation has put out a new proposal for mandatory individual health insurance. Awesome. Rather than list all the problems with this sort of approach, though, I think I'll just link to the best argument against individual mandates that I've ever read, by... Leif Wellington Haase. Uh, right.

    Cute games aside, it seems that Haase has changed his mind, no longer favoring single-payer health care, because of the innovation question. If guv'mint took over the health insurance system, he thinks, it would just tamp down costs and reimbursements—especially when fiscal stability was at stake and those crazy politicians needed to keep spending down—and hence, no one would have any incentive to innovate or try out nifty new health technologies. Life would suck! Haase's plan gets around this, supposedly, by requiring all private insurers to offer three health plans—basic, pretty good, and shiny Cadillac—with the expensive Cadillac tier being the one that motivates and drives technological innovation.

Well, okay. But look, is it actually true that single-payer plans squelch innovation? Let's do anecdotes first, since they're fun. Look at this list. Seems like our single-payer neighbor up north has been pulling off all sorts of surgical firsts over the years: first heart-valve transplant, first double-lung transplant, ah, they were even the first to use pig liver tissue to keep a patient alive! If that's not ingenuity, what is, etc. On the other hand, our dear single-payer friends across the Atlantic, what with their NHS and all that, don't seem to be winning any of the all-important transplant races. Tsk tsk! And of course, the first transplant ever was done in Boston—'twas a kidney—so perhaps the U.S. is in fact the alpha male here and everyone else is just following our lead. Or something.

But enough transplant silliness. When all is said and done, the United States is probably vastly more innovative than the rest, for some generally accepted value of "innovative." (We'll quibble with that later.) But why is this? In theory, it's because the private insurance system in the United States more or less forces medical providers to compete not on cost but on quality. Hence, innovation! But notice: there's nothing intrinsic about single-payer that prevents this from happening—you can have government take over the insurance industry and still leave a competitive provider market, as is the case in Canada and France. In France, I believe, you're also allowed to purchase additional private insurance to pay for nifty extras and experimental surgery and the like, so really, there's nothing here that necessarily prevents innovation from occurring, in theory.

Shifting tracks a bit, when it comes down to it, the American advantage in innovation seems to come from a few sources. One, we have a bunch of top-notch universities that attract world-class medical researchers, many of whom receive plum research subsidies from... the federal government. (The National Institutes of Health alone shelled out over $28 billion in research money last year.) That's important. I don't know offhand how important it is, but this isn't the sort of thing that would necessarily just disappear if we switched to a single-payer system. (Pfizer claims that 57 percent of all biomedical research comes from private industry -- fine, but is this the sort of thing that could easily be supplanted by public subsidies? If so, we're talking, what, $50 billion a year? In the grand health care spending scheme, that's peanuts, and could in theory come from savings elsewhere in switching to single-payer.)

Second, at the moment many relatively well-off Americans don't seem to mind letting health care costs balloon—in part because those Americans who do consume a lot of health care and drive medical innovation generally don't pay a lot out-of-pocket. This is also important. It's worth noting that many of today's health care technologies simply don't help control costs. Exactly the opposite, in fact. And that's presumably because, as noted earlier, providers largely compete on quality rather than cost. So costs go up as niftier and niftier technologies get hauled out.

    Now many conservatives want people to pay more out of pocket in order to rein in private health care spending. Okay, fine, but then presumably providers are going to start competing on cost—since that's what people will be looking for—and the character of technological innovations is going to be transformed, from the current paradigm of quality-improving innovations to innovations that involve new and cheaper variations on old techniques. Again, that's just a theory, but it seems like a sound one, and not something I think many conservatives have grappled with. Is this the sort of shift in innovation conservatives want? Well, is it? Maybe it is. Personally, I think that our nation's overall quality of life would be better today if innovation had been driven by a desire to control costs and make various technologies available to all, rather than new "quality-improving" devices that are very expensive, limited to only a few consumers, and may or may not do all that much "good" when it comes down to it. (It's all about the QALY control!) At any rate, any sort of cost containment reform is going to restrict medical innovation to some extent.

It's also worth asking: innovation where? It's no secret that our health care system focuses on hospital care and drug treatment, thus placing a high priority on acute care, and de-emphasizing a lot of preventive and even rehabilitative care. So naturally, that's where much of the innovation goes. I've heard—and I can't find anything to verify this, so dispute away—that a good deal of the technological advances in the United States are concentrated in the field of surgery. And why not? The procedures aren't very closely regulated, and the specialists get paid by the bucketful. But it's also not clear that more specialists lead to better health outcomes—indeed, one study showed that areas with more specialists were correlated with worse health outcomes.

    On the other hand, health care reforms that placed a greater emphasis on ambulatory and preventive care could, in theory, lead to more technological innovation in those areas. And perhaps that would be better overall for the nation's health. Someone should find out! At any rate, I haven't really answered anything here, but the general question that usually gets asked on this topic is: "How do we best encourage medical innovation?" That seems crude. Medical innovation isn't monolithic. A better question, I think, is: "Where and how should we be steering medical innovation?" For all the impressive technological advances made here in the U.S., I'm not convinced they've always been heading in the right direction.
    -- Brad Plumer 2:19 AM || ||

    May 24, 2005

    Milton Friedman, Meet Sweden

    Readers of this little blog have probably picked up on the fact that I know very little about economics. Fortunately, that has never stopped me from writing about the subject. So here we go. I've been reading Peter Lindert's Growing Public—an economic history that looks at the relationship between social spending and growth. In particular, Lindert suggests that the net economic cost of big welfare programs might well be virtually zero. Heartening stuff for lefties like me. More on that big picture argument later—it's fascinating, and I'll try to lay it out when I understand it fully—but now it's time to talk about welfare and work.

As we've learned from, um, Milton Friedman, certain forms of means-tested welfare essentially act as a tax on work for the poor. Starting in 1935, for instance, the original "welfare" program, AFDC, created what amounted to a 100 percent marginal tax rate on earnings for single mothers with children. It makes sense: if a mother on welfare were to get a job, she'd lose all those welfare dollars, food stamps, medical care, subsidized housing, child care subsidies, etc., and make very little back from her paycheck. Where's the incentive? Well, that marginal rate was scaled back in 1967—by allowing the working poor to keep more benefits—but then reinstated under the Reagan administration, which wanted stricter means-testing for all welfare programs. Eventually, under Clinton, the Earned Income Tax Credit was expanded considerably, which lowered the marginal rate for people who get low-paying jobs—because people could "keep" more of their foregone welfare benefits. The British, too, are experimenting with something similar, the Working Families Tax Credit. In theory, this should spur more low-income people into finding work.
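
Here's the arithmetic in toy form. Every number is invented, but it shows how a dollar-for-dollar benefit phase-out works out to a 100 percent marginal tax rate, and how an EITC-style gentler phase-out trades a lower rate for one that reaches further up the income scale:

```python
def net_income(earnings, benefit_max=8000.0, phaseout_rate=1.0):
    """Toy means-tested benefit, reduced by `phaseout_rate` dollars for
    every dollar earned. All the numbers here are invented."""
    benefit = max(0.0, benefit_max - phaseout_rate * earnings)
    return earnings + benefit

def marginal_rate(earnings, **kw):
    # Effective marginal tax rate: the share of one extra earned dollar
    # that vanishes into reduced benefits.
    return 1.0 - (net_income(earnings + 1, **kw) - net_income(earnings, **kw))

# AFDC-style dollar-for-dollar phase-out: taking a job adds nothing.
print(round(marginal_rate(5_000, phaseout_rate=1.0), 2))   # 1.0, i.e. a 100% rate

# EITC-style gentler phase-out: the worker keeps 70 cents per dollar...
print(round(marginal_rate(5_000, phaseout_rate=0.3), 2))   # 0.3
# ...but the implicit tax now reaches incomes the steep version never touched.
print(round(marginal_rate(20_000, phaseout_rate=1.0), 2))  # 0.0, benefit long gone
print(round(marginal_rate(20_000, phaseout_rate=0.3), 2))  # 0.3, still phasing out
```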

Of course, there's no free lunch here. Eventually the Earned Income Tax Credit is phased out at some income level, so in effect, the marginal tax rate for work is simply pushed up into the middle classes. Now Lindert notices two things: One, this is essentially how the European welfare states work, by not strictly means-testing their benefits, and two, that no one's figured out whether shoving the marginal tax rate "up" into the middle class is actually better for the economy than strictly means-testing benefits. What's interesting, though, is that conservatives have always argued that taxes plus welfare are bad because they place high marginal tax rates on both the wealthy (for making that extra million) and on the poor (for getting a job). As Galbraith wryly summarized the "doctrine of the eighties": "the rich were not working because they had too little money, the poor because they had too much."

    But Lindert points out that European welfare states actually follow this conservative assumption pretty closely. These high-budget states tend to have very low taxes on capital and property—lower than in the United States or Japan—thus decreasing the marginal rates for the very wealthy. In particular, European states tend to shy away from double taxation on dividends. Bush's dream! But on the other end of the income spectrum, the universal benefits of the European welfare state also tend to decrease the marginal tax rate on the poor, essentially taking away the incentives to get off the dole and find a job. As it turns out, it's the middle class that gets saddled with the tax burden here. What's hilarious about Lindert's book is that his research seems to suggest that conservative economic theories are basically right, and that that's what explains why European welfare states work so well. Definitely more on this book later.
    -- Brad Plumer 5:36 PM || ||
    Deal-Mania

    Not much time to talk about the big filibuster deal now. All I can say is that if and when the Democrats do retake the Senate, they should immediately push for a massive government takeover—and I mean an outright invasion—of the health care system, prompt a filibuster on the bill, threaten to abolish the filibuster, and then extract major concessions—expand Medicaid!—from moderate Republicans in order to avert a crisis. It's going to be awesome!

    Also, this is funny.
    -- Brad Plumer 12:50 PM || ||

    May 23, 2005

    NARAL's Strategy

Okay, this NARAL/Langevin/Chafee thing is getting out of hand. As Atrios said, NARAL's endorsement of Chafee probably didn't drive Langevin out of the RI Senate race. And if it did, well, then Langevin's not much of a fighter anyways. But let's pretend NARAL's endorsement either way makes a shred of difference. The second point, then, is this: they made an entirely rational choice from a strategic standpoint. Let's just run through the four scenarios NARAL's facing here:
1) Republicans keep the Senate in 2006 and Chafee gets elected. Well, that's bad news. But notice, whenever the Republicans slap down some bit of legislation restricting abortion rights, Chafee will be voting against it (remember, he votes pro-choice 100 percent of the time. 100 percent!).

    2) Republicans keep the Senate in 2006 and Langevin gets elected. Worse news. Republicans are still in charge, but now whenever they slap down abortion restrictions, Langevin will likely vote for them, giving pro-life legislation one extra vote and making it more likely to pass. Clearly outcome #2 is worse for NARAL than #1. But then we have...

3) Democrats retake the Senate and Chafee gets elected. Hooray! Now whenever Democrats want to push through some legislation expanding abortion rights, Chafee votes for it, making it more likely to pass. Which is still better, from NARAL's perspective, than...

    4) Democrats retake the Senate and Langevin gets elected. Now, that legislation expanding abortion rights suddenly becomes harder to pass—at the very least, you'll have to do Langevin some favor elsewhere to get him to vote for it. But odds are, he won't vote for it!
So NARAL's preferences here are, I think, ranked: 3, 4, 1, 2. Endorsing Chafee, then, is a pretty optimal choice—it makes either 3 or 1 more likely, rather than 4 or 2. The wild card here, of course, is that control of the Senate may actually hinge on who wins, Chafee or Langevin—in other words, the race itself may be between outcomes #1 and #4. I don't really know what the probabilities are here, or how to model this, but presumably NARAL doesn't think this scenario is very likely (in other words, the probability is low that Senate control either way will depend on the RI race). I'm sure there's a rigorous way to calculate NARAL's optimal strategy here, but I'm not smart enough to do that, so I'll just eyeball and say, yes, it makes sense to endorse the pro-choice Republican over the pro-life Democrat—see the sketch below for a crude version of the math.
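
Well, here's at least the easy version of the rigorous way: assign the four outcomes arbitrary utilities respecting the 3 > 4 > 1 > 2 ranking, treat Senate control and the RI race as independent (exactly the assumption the wild card breaks), and compute expected utility. All the probabilities below are invented:

```python
# Arbitrary utilities respecting the ranking above: 3 > 4 > 1 > 2.
utility = {3: 3.0, 4: 2.0, 1: 1.0, 2: 0.0}

def expected_utility(p_dem_senate, p_chafee_wins):
    """Assumes Senate control and the RI race are independent --
    exactly the assumption the wild-card scenario breaks."""
    return (
        p_dem_senate * p_chafee_wins * utility[3]             # Dem Senate + Chafee
        + p_dem_senate * (1 - p_chafee_wins) * utility[4]     # Dem Senate + Langevin
        + (1 - p_dem_senate) * p_chafee_wins * utility[1]     # GOP Senate + Chafee
        + (1 - p_dem_senate) * (1 - p_chafee_wins) * utility[2]
    )

p_dem = 0.4  # invented odds that Democrats retake the Senate in 2006
# Suppose an endorsement nudges its recipient's win odds from 0.4 to 0.6:
print("endorse Chafee:  ", expected_utility(p_dem, p_chafee_wins=0.6))
print("endorse Langevin:", expected_utility(p_dem, p_chafee_wins=0.4))
```

With these utilities the whole expression collapses to 2*p_dem + p_chafee, so raising Chafee's chances helps NARAL no matter what the Senate odds are. The eyeball answer survives the math, at least until you let the RI race itself decide Senate control.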
    -- Brad Plumer 3:33 PM || ||
    Either Way, Bad Senate

With Republicans all set to override the rules of the Senate with their nuclear-mania, it's probably worth taking a quick look through history and trying to figure out why we even have an upper chamber. The old line about the Senate being "a saucer into which the nation's passions may be poured to cool"—usually attributed to Washington, in conversation with Jefferson—has often been quoted. And indeed, that's how the Senate was created: a chamber insulated against the masses, with its long terms and staggered elections and members who answer only to their individual states. Optimally, then, what we would expect—what, presumably, the good ol' Founders expected—is for the Senate to be a place where legislation and ideas are debated, not demagogued, where arguments and reasons win, rather than crude and stark political power.

    Obviously Bill Frist is trying to change all that, as Harry Reid has pointed out. But really, what's he changing it from? When was the last time the Senate resembled, in any way, shape, or form a place where passions were "poured to cool"?

1850, perhaps, when Clay's great Senate compromise on slave states saved the Union. (The usual argument here is: had the country split in 1850, rather than a decade later, the North wouldn't have been so united on abolition, and likely wouldn't have been able to win a war against the South.) But after that, the Senate has been damned near useless. A house of opulence and corruption all through the Gilded Age. The place that sneered at Wilson's League of Nations, and nixed it, despite all the exhortations of a popular president. The place that sat, with arms crossed, and refused to prepare for World War II all through the 1930s. And, of course, the place that allowed Southern Democrats, via the seniority system and filibuster, to obstruct widely popular civil rights legislation for decades and decades. None of this was passion "poured to cool"—no, it was just one massive ice block, frozen and dumb. The two big exceptions were FDR's first 100 days, when the Senate was dumbfounded by the 1932 elections and passed many of Roosevelt's bills without even reading them (after 1936, though, the Senate thwarted virtually all of the president's domestic legislation), and, of course, Lyndon Johnson's singular time at the helm.

So really, I have no special brief for the Senate. It's rarely served any useful purpose—at least from a progressive perspective—and if it was set up to promote "deliberative democracy," it has more often been used to wage petty power struggles. Perhaps in the 1970s and 1980s there was a "golden era" of compromise, deliberation, alliance-forming, etc., among Senators. Perhaps not. Nevertheless, that era has quite clearly ended over the past decade, with things moving in precisely the opposite direction. Both parties are pretty ideologically homogeneous nowadays, and, as Mark Schmitt pointed out the other day, most of the Senate's work is done through the budget reconciliation process, which cannot be filibustered—one could add that time for debate is limited here, and amendments are usually not allowed. True, it's no longer an ice block, but now the Senate has become just another place where the majority pushes through whatever it damned well feels like, with little reason or deliberation, and where passions are allowed to run as high as they can go. Really, the "nuclear option" just pushes this trend to its logical conclusion. And it's not, in itself, a bad conclusion—considering the alternative.

The problem here, of course, is that the Senate isn't set up to be this sort of "majority rules" institution. For one thing, it's not at all representative: the 44 Democrats plus Jeffords received more votes over the last three cycles than their Republican counterparts, and if each senator is credited with half a state's population, the minority party represents 50.8 percent of the population. Now the good thing for Republicans is that this imbalance works in their favor, and likely will for a long time—of the 20 least populous states, only five are reliably blue (Vermont, Delaware, Rhode Island, Hawaii, Maine), and it's likely that Republicans will be able to keep a healthy Senate majority more often than not, even without a national majority.
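
The arithmetic behind a figure like that 50.8 percent is simple enough to sketch, crediting each senator with half of his or her state's population. The four states below are placeholders; the real version needs all fifty, with 2004-era populations and the actual party split:

```python
# Credit each senator with half of his or her state's population.
# These four entries are illustrative placeholders only, so the output
# won't match the real 50.8 percent figure.
states = {
    # state: (population in millions, Democratic-caucus senators)
    "California": (35.9, 2),
    "Texas":      (22.5, 0),
    "Vermont":    (0.6, 2),   # counting Jeffords with the Democrats
    "Wyoming":    (0.5, 0),
}

dem_pop = sum(pop * dems / 2 for pop, dems in states.values())
total_pop = sum(pop for pop, _ in states.values())
print(f"population share represented by the Democratic caucus: {dem_pop / total_pop:.1%}")
```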

There's no good reason to run things this way—and I doubt the three or four honest Republicans in this country would disagree. So either the Senate needs to be reorganized so that, if it's just a glorified House, at least it's representative, or else the whole trend should be reversed. At any rate, not an institution I'll be happy with either way!
    -- Brad Plumer 1:52 PM || ||

    May 22, 2005

    Why Inequality Matters

    Will Wilkinson has a typically thoughtful post about income inequality that raises the age-old question: Why does inequality matter? If the rising economic tide is lifting all boats—which may or may not be the case—then who cares that Joe CEO brings home orders of magnitude more money than average Joe? Well, let's see.

The obvious rejoinder, I think, has to do with social status. Without getting too deep into the evolutionary psychology here, it's at least reasonable to assume that humans are hard-wired to want to make it to the very top. Good for attracting lots of fertile women on the savannah and all that. But not everyone can make it to the top, obviously, so the result is lots and lots of dissatisfaction among the losers. And the more inequality, the more dissatisfaction! Paul Krugman once used the example of a middle-class family in the 1950s. Say they were offered the chance to be transplanted into the 25th income percentile in 2005. Would they take it? They'd undoubtedly have more stuff—iPods! color televisions!—but many probably wouldn't take the deal, because they'd lose their middle-class status. (And hey, it's not like they could move up in today's world—workers from the 1950s were, on average, less skilled and not as intelligent as workers today. That's just how the future works.)

But hypothetical shmypothetical. More interestingly, Richard Wilkinson—whose book on the subject I've been meaning to read for ages—has shown that income inequality may well affect the health of nations. After controlling for all sorts of other variables, Wilkinson found that more unequal countries have worse health outcomes. Part of this is due to other bad effects of an unequal society: more violence, lower levels of involvement in community life, lower levels of trust, and more downward discrimination. All of which are reasons in themselves to argue against inequality. But the other important thing Wilkinson found was that inequality affects the type of society we have—increased inequality shifts the balance from a society dominated by what he calls "mutually supportive relationships" to one rife with social dominance, class differentiation, rat races, the feeling that people no longer have personal control over their lives, etc. etc. As you'd imagine, this leads to a lot of undue stress, unhappiness, and poor health.

Now if this is all true, that's not necessarily an argument against increased inequality. It just matters what you value: stuff or happiness, and the relative trade-offs between the two. It also probably matters what kind of person you are. I'd rather have less material stuff and work less. So, it seems, would many French and German people. But I can't speak for everyone.

Speaking of health, though, there's another argument that ought to be floated out there, though libertarians can cover their ears right about now. Increased income inequality seems to lead to lower levels of political participation, at least among the lower classes, as we'd expect. In theory, this should always be the case—if the wealthy have comparatively more resources (both money and free time), they'll have comparatively more political influence. Duh. In turn, that should lead to less income redistribution by the government, and fewer government programs for the lower classes. If the poor were all dedicated voters and civic participants, Medicaid cuts would be as off-limits as Medicare cuts. But they're not, so it isn't. Meanwhile, and much more perniciously, those with more political influence will start distributing government resources towards themselves. No libertarian worth his or her salt should be happy with massive corporate subsidies, for instance, but that's the sort of thing you get when the corporate world has more political influence. Over the past 20 years, the Republican party has become less and less "free market" as it has become more and more dominated by wealthy political participants.

    The flip side here is that increased inequality can, in theory, entrench a certain class in power. There are only limited spots in government, top universities, and other positions of influence, and when the upper classes have a disproportionate amount of wealth, they're naturally going to scoop up more and more of these spots. The chance that a wealthy and not-so-bright (or talented, or skilled) person takes the place that should go to a brighter but not-so-wealthy individual goes up as inequality increases.

    Of course, in all this, we've been assuming that increased inequality is compatible with higher levels of overall growth. Maybe that assumption's not true; economists ought to find out. This paper (pdf), by Huw Lloyd-Ellis of Queen's University, suggests that there's certainly reason to think inequality has an adverse effect on growth, though the research is still inconclusive. For example, there's only so much credit to go around in this small world of ours, and inequality could in theory drive less-wealthy people out of the credit market, thus depriving the economy of potentially productive borrowers and investors. It's not inconceivable that drastic inequality may produce serious market failures and screw up the whole system. Admittedly, though, this isn't a subject I'm at all familiar with, so I'll have to read up before saying anything more. I'm very curious to see what readers think about all this.
    -- Brad Plumer 4:12 PM || ||
    Teaching the Bible

Although I nearly had a heart attack (so young!) when I thought I saw John Ashcroft cited as a scholarly source here—he wasn't—this David Gelernter essay about the Bible and the decline of "Bible literacy" among young Americans today at least raises some good questions. The Bible is pretty central to American history and thought—so how does one go about teaching it?
    So let's have Bible-as-literature electives in every public high school, by all means. But let's also face facts: These are hard courses to teach at best. Do we have teachers who are up to the job? (With laudable foresight, the Bible Literacy Project is already developing workshops for teachers.) And let's also keep in mind that, for most children, such courses can only be half-way houses. Children studying the Bible should learn their own religious traditions as precious truth, not as one alternative on a multicultural list.
Now as it happens, Gelernter isn't the first to think about these questions—the First Amendment Center has put out a pamphlet, "The Bible and Public Schools: a First Amendment Guide" that, as far as I know, has been endorsed by a number of religious and secular organizations (from PFAW to the National Association of Evangelicals to the Council on Islamic Education). It suggests that the Bible ought to be taught in an academic, not devotional, manner, that it generally should not be the only such text in a given course, and that events in the Bible should not necessarily be taught as historical fact. Plus, various interpretations should be presented. Seems like a workable compromise.

The fear Gelernter has, though, is that the Bible will be cheapened or somehow made mundane by being taught in public schools. It's a reasonable fear—how many kids have lost any possible love for, say, Shakespeare, by first reading it in 10th grade? Right. Meanwhile, Supreme Court doctrine on religion in public schools states that the school can officially neither "approve nor disapprove" of religion, as Sandra Day O'Connor put it. But in the classroom, teachers are perfectly welcome to approve or disapprove of the content of other literary or philosophical texts—indeed, I had an 11th grade English teacher who all but elevated Albert Camus' The Stranger, and a peculiar strain of existentialism along with it, to a religion of sorts. (And I'm not proud to admit it, but it certainly had an effect on my impressionable young mind!) That one set of ideas can be endorsed but not another seems a bit odd.

    Still, the analogy to literature suggests there's no perfect way to do this—it's impossible to set national standards for taste in any artistic or cultural work without collapsing the range of things teachers can and should be allowed to do. Sometimes a teacher can give her students a deep and thorough appreciation for, say, Shakespeare, but that's not the sort of project you can force any teacher to undertake. So it is with the Bible: some will teach it in a way more conducive to religious belief, some will teach it in a way conducive to secularism. As long as there's no official school policy either way, that's probably fine with me, although I'm sure there are all sorts of things I'm not considering here.
    -- Brad Plumer 1:54 PM || ||

    May 21, 2005

    The Case for Attack of the Clones

Let me be the first, I think, to slap this bit of heresy down onto the table: Star Wars Episode III, Revenge of the Sith, was only the second-best movie in the new trilogy. Episode II, Attack of the Clones, was much, much better in just about every way. Yes, yes, don't worry, I've already gathered kindling, lashed myself to this here stake, and if you would just be so kind as to light that match... But first, read my reasoning below the break, spoilers and all.

    The main difference for me was the action. Attack of the Clones wins on this front by a running mile. Sorry, but face it: light-saber duels are boring. Or rather, any excitement they bring can be cheapened much too easily. The original Star Wars movies were cool because for most of the movie you had one Jedi—either Obi-Wan or Luke Skywalker—fighting a horde of random enemies with his light-saber, in a variety of novel situations. Hordes of Stormtroopers, or big four-legged machines, or abominable snowmen, or trash compactors. And the Jedi kicked ass, which is what Jedis are understood to do. So when you finally get Obi-Wan vs. Darth Vader squaring off—or Luke vs. Darth Vader—you think, "Sweet infant Jesus with a rattlesnake; Jedi versus Jedi?" It's a new form of battle; clash of the titans.

But in Revenge of the Sith, Jedi-on-Jedi battles become much too commonplace. In the first fifteen minutes of the movie we get a ho-hum sequence where Anakin easily bests the clownishly-named Count Dooku. And that's the first of five or six of these duels. By the time Obi-Wan and Anakin face off at the end, yes, it's a very good and frenetic battle—so frenetic that, in retrospect, it makes their sluggish rematch in Episode IV seem pretty pedestrian—but the climax differs only in degree, not kind, from what we've seen earlier in the movie.

Attack of the Clones varied it up much better. You had the chase after that assassin-lady, the battle in the lava pit, the Obi-Wan vs. Jango Fett duel, and the rather-cool gladiatorial scene, in all of which the sometimes saber-less Jedi had to use their odd powers and ingenuity to face off against unconventional foes. It's cool. So sorry, I'll take Episode II's gladiator scene—or, in Empire Strikes Back, the ice planet battle—over the thoroughly ridiculous "face-off" between Yoda and Darth Sidious (some lightning bolts tossed back and forth, a bit of swordplay, some tossing of Senate chairs... yawn!). At any rate, the shock of Yoda's saber-skills pretty much wore off after the climax in Episode II—which was also the first movie in which we saw Jedi maimed or killed, making it the better movie for pure "shock value."

Meanwhile, General Grievous was lame. The fight between Obi-Wan and General Grievous was lame. There was a split second where I thought, "Four lightsabers? This guy's going to wreak serious havoc!" But no, no, he's easily splattered. Grievous' "gimmick" was that he strung the lightsabers of various Jedi he had slain on his belt—well, then we should've seen him slay some Jedi! Sheesh. Also, what kind of army is vanquished when its general gets captured or killed? I'm not sure that would have even been the case with, say, Napoleon's French Army. Delegate some responsibility, buddy.

Next complaint. George Lucas, as we all know, can't do character, writing, or dialogue. Nevertheless, the third movie hinged on three very dramatic developments: Anakin's love for Padme, Anakin's turn to the dark side, and Anakin's relationship with Obi-Wan. All were botched and unconvincing, leaving essentially nothing to hang the movie on. In defense of Natalie Portman's shoddy acting, though, had she played anything resembling an actual human being, we would expect Senator Padme to banish all those silly thoughts from Anakin's head (a Senate works best with one Chancellor in charge? Please!), and we'd have no Darth Vader. So she almost had to play a non-entity or the movie would have failed. Anyway, the only real emotional part of the movie was when Anakin's writhing in lava, and Obi-Wan starts screaming at him, plaintively, "You were supposed to be the one!" or something of the sort. In any case, Attack of the Clones depended far less on character development, making it at the very least a more enjoyable and satisfying movie.

Also, when the human army turns on the Jedi in Revenge of the Sith, where's the drama? Don't they feel bad? They've been fighting alongside these Jedi knights for years and years? Now they turn and assassinate without blinking? And let's not speak, ever again, of Anakin Skywalker "killing younglings." Terrible, just terrible. In fact, I never thought I'd say this, but when Anakin goes Iron Chef on the Sand People in Episode II, he's much more convincing as a skilled young dude with some serious anger management problems than he is at any point in Episode III.

One final thought: If I'm not mistaken, galactic travel is a lot quicker in this new trilogy than it was in the old one. Towards the very end of the movie, Darth Sidious jets from the capital planet out to some lava planet on the "outer rim" (which is, presumably, very, very far away) in the short time it takes Anakin and Obi-Wan to finish up their fighting business. Ships weren't that fast in the original trilogy, were they? Also, Padme goes from discovering she's pregnant to giving birth in the course of the movie. Where, um, did those nine months come from?

    -- Brad Plumer 7:30 PM || ||

    May 20, 2005

    Hitlers Everywhere!

    Hm, so both Sens. Robert Byrd and Rick Santorum have compared their opponents to Hitler. But weighing the evidence, I'm going to have to say that breaking Senate rules to abolish the filibuster is a little more like Hitler than using filibusters. On the other hand, I've heard that our very own Barbara Boxer is a vegetarian, so that makes her most like Hitler.

    Really, though, I don't understand why Hitler analogies are so beyond the pale. Yes, yes, not everyone wants to murder millions and millions of Jews. Not Bill Frist, not Nancy Pelosi. And yes, the Holocaust was a horrible thing, a uniquely horrible thing, even. But not all Hitler comparisons need to take the form, "If you do X, it will lead to World War II and mass genocide," to be valid. Politicians around the world can take all sorts of initial steps and abuses of power that resemble the sort of thing Hitler did in the early days. That ought to be pointed out! Billmon often makes sharp (and sometimes not-so-sharp) analogies between Germany in the 1930s and our present situation, and while no one thinks we're actually going to descend into full-blown Nazism, it's a perfectly sound reminder of the type of thing that can happen when these sorts of abuses and power grabs are allowed to slide unchecked. Even halfway to Hitler is still too much Hitler. Now Rick Santorum's problem was that his analogy was stupid, not that he made the analogy in the first place.
    -- Brad Plumer 1:32 PM || ||

    May 19, 2005

    Unsustainable

I don't usually write about environmental issues—in part because it's hard to care a lot about everything—but while editing this short piece for Mother Jones, a few things came to mind. My colleague Erik writes that economists are beginning to improve their cost-benefit analysis of various environmental resources. A forest, for instance, might be worth a couple million as lumber, but a billion dollars or more when you factor in recreational use, quality-of-life benefits, and its value to the ecosystem—especially as a carbon sink. There are problems with this sort of valuation—can you really put a number on all of these factors?—but ideally, this should help environmentalists combat the things they'd like to combat; after all, serious liberals don't just want to preserve the environment because it's "pretty"; they want to preserve environmental resources because they're valuable—both now and for future generations.

Now from what I know about ecological economics, the proper way to determine whether a country's economic development is sustainable is to ask whether its wealth is growing over time. Among other things, that includes material goods, resource wealth, human capital, and natural capital. A nation could deplete some of these while increasing others, but in the aggregate, wealth should be growing, not shrinking. So a country can't just keep boosting its GDP while depleting its wealth base; otherwise you start to run into the situations seen in Jared Diamond's Collapse (which I haven't read). India, for instance, is growing like gangbusters, but environmental-types tell me that its natural resources are being quickly depleted, and that depletion isn't being compensated for by increases in wealth elsewhere—building enough infrastructure, say, or investing enough in human education, etc. On the other hand, plenty of Indian policymakers are aware of this and trying to put the country on a more sustainable path.
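
The accounting itself is trivial; the hard part is the valuation. Here's the bookkeeping in toy form, with invented numbers rigged to look like the India worry: GDP-style growth in produced capital alongside a shrinking total.

```python
# Toy wealth accounting across a nation's capital stocks. All figures
# are invented for illustration.
year_start = {"produced": 900.0, "human": 1400.0, "natural": 700.0}
year_end   = {"produced": 960.0, "human": 1450.0, "natural": 570.0}

changes = {kind: year_end[kind] - year_start[kind] for kind in year_start}
total = sum(changes.values())

for kind, change in changes.items():
    print(f"{kind:>8} capital: {change:+7.0f}")
verdict = "sustainable" if total >= 0 else "unsustainable: growth is eating the wealth base"
print(f"   total wealth: {total:+7.0f}  ({verdict})")
```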

So far, so good. Or bad. But it doesn't seem like economists have really settled the question of how much natural resources are worth in wealth terms. Which would make it awfully hard to figure out exactly how sustainable, or unsustainable, the path a given country—like China—is on, no? At any rate, something else to learn more about, I guess.
    -- Brad Plumer 5:18 PM || ||
    What Were You Thinking, Saddam?

What with George Galloway and the recently-unveiled British memo, there's been a lot of revisiting the Bush administration's decision to launch our excellent Iraq adventure—even if just to say, "Yes, it was still a dumb-ass idea." True. But while we're retrospecting, here's a question that's never, as far as I know, been fully answered: Why did Saddam Hussein decide to go to war with the United States? I mean, come on. And this isn't just out of idle curiosity; it sort of has relevance to Iran and North Korea, I think.

    Now it's true, the White House was dead-set on invading Iraq no matter what, but surely there were things Saddam could've done that would've left him in a better state than he's in now. He could've absconded to Saudi Arabia, for instance. Or he could've immediately opened the doors to all inspectors and proved once and for all, as he knew perfectly well, that Iraq had absolutely no weapons of mass anything. (At the time of invasion, recall, there was still reasonable doubt about whether he had WMDs or not—hence the chemical suits for our soldiers.) Or whatever. But surely in a rational and sane world Saddam should've foreseen that staying defiant and letting the U.S. invade was the worst of all possible outcomes. And yes, it's true that proving to the world that he had no WMDs would've opened the door for a possible Shiite rebellion, say, but again, that would be one possible threat to weigh against the imminent threat—U.S. invasion.

    Anyway, some possible explanations come from the ol' Duelfer report. Perhaps Saddam just didn't believe the United States was serious about invading: "By late 2002 Saddam had persuaded himself, just as he did in 1991, that the United States would not attack Iraq because it already had achieved its objectives of establishing a military presence in the region." Perhaps—though after his capture, Saddam told a debriefer that four months before the war he was convinced that "hostilities were inevitable." For what that's worth. Alternatively, perhaps Saddam was receiving such awful military advice that he really believed he could defeat the United States in a conventional battle. (Tariq Aziz claims this was the case.) On the other hand, remember his cryptic remark to his advisors in late 2002: "Resist one week, and then I will take over." So perhaps he thought he would let the U.S. invade, retreat, start up the insurgency we see now, bleed the U.S. into withdrawal, and then retake Baghdad, this time stronger than ever (because who would attack him then?).

    Any of those seem plausible. Another theory: In the end, Saddam probably thought he was screwed no matter what he did. So what if he proved once and for all that he had no WMDs? Perhaps the U.S. would just attack anyway. Indeed, the Bush administration didn't really give Saddam an exit strategy, or any assurance that backing down would yield any sort of benefit. There were a few prewar moments when Bush hinted that Saddam could just comply with the UN resolutions, avoid war, and save his own skin. But only a few. For the most part there was nothing but a steady drumbeat out of the White House: regime change, regime change, regime change. Under those circumstances, it's no wonder Saddam decided to take his chances with whatever deterrent value he thought the threat of WMDs and mass American casualties might have, and, failing that, the possible urban warfare/insurgency strategy.

    At any rate, this is sort of moot, since it doesn't seem like the goal of the White House was ever to force Saddam to back down, comply with all UN resolutions, give up his WMD ambitions, and retreat back into his little shell. No, the goal was war, and war was what we got. Nevertheless, if Bush really had wanted to make Saddam back down, hypothetically, it seems he should have done the following: 1) made it extremely clear that the U.S. was dead serious about invading, 2) made it absolutely clear that the U.S. would kick ass, and 3) given credible assurances that if Saddam complied, he could save his own skin. None of those things were done. More to the point, our intelligence agencies could have focused less on what weapons Saddam did or didn't have, and more on how Saddam would've reacted to various threats and promises. Not a whole lot of prewar intelligence focused on this aspect—again, probably because Bush wanted war no matter what—but it should have; knowing what the enemy is thinking (or might be thinking) is essential. The Duelfer report suggests that we didn't have a clue. For all we know, the White House just figured Saddam was an irrational lunatic and so there was no use trying to get a handle on his thought process.

    I'm bringing all this up because there ought to be some lessons here for dealing with North Korea and Iran—two countries that the White House presumably wants to coerce into doing stuff, not invade no matter what. (Good god.) The people doing intelligence work ought to be figuring out these questions: How will Kim Jong Il or the Tehran mullahs react to this or that situation? Will they find this or that condition too humiliating to accept? Is there a way for them to "save face"? Do they properly understand this threat or that promise? This seems like obvious stuff, and perhaps the CIA is on it, but the case of Iraq—and reading the Duelfer report makes this clear—suggests that our intelligence agencies really aren't very good at figuring this stuff out. The historical record's no better: no one really understood why Saddam didn't back down in the first Gulf War; no one really understood why Milosevic didn't back down initially over Kosovo. How well, exactly, do we understand our good buddies on the international stage? I wonder.
    -- Brad Plumer 12:21 AM

    May 18, 2005

    Bring On Theocracy

    I'm not sure this was his main point, but Alan Wolfe makes the atheistic case for more funding for religious social services, by looking at the curious case of Martin Luther:
    Once launched into the world, Lutheran orphanages met the same problems of limited budgets, diverse clienteles, and staff professionalization as any other kind of organization; as Thiemann writes, "their missions became shaped more by the demands of external publics than by a clearly stated internal theological rationale." The lesson for our times is plain: provide public funds for religiously motivated social services if you wish, but do not expect them to remain "religious" if you do. Religion is an excellent incubator for such secular ideals as the modern welfare state.
    This sort of logic has always made a certain amount of sense to me—state support for religion just ends up, well, watering down religion. Make prayer mandatory in schools and you'll cheapen it to the point where no one takes it seriously anymore. (Indeed, when the Supreme Court first banned prayer from public schools, many religious leaders backed the move for this very reason.)

    So what about social services? From what little I know, the Netherlands pioneered the Christian-democratic form of governance with its notion of "sphere sovereignty": namely, that the government merely oversees a variety of self-governing religious entities, which have to submit to state laws but otherwise maintain a good deal of autonomy. The Dutch constitution, for instance, allows government funding to be split, without bias, among private and public schools. But the Netherlands didn't become a theocracy. Instead—again, correct me if I'm wrong—the world unfolded exactly as Alan Wolfe said it would: people started shopping around for social services not on the basis of religion, but for services that suited their needs. Religion sort of faded into the background, and the Netherlands became the godless heathen place we know and love today—legalized prostitution and gay marriage and all.

    Germany is another model, in which public services are, I think, required to enlist churches and other religious organizations for welfare provision when they can. And the churches and whatnot get a fair amount of independence to do whatever work they want. On the other hand, I think there are a few key differences between what Germany has and, say, President Bush's proposed faith-based initiatives. In particular, the Christian-democratic model entails a good deal of income redistribution—giving families the wages and tools they need to be self-sufficient—and Bush, of course, has nothing of that sort in mind. Still, Germany: another godless country, where only 14 percent of people attend church once a week. We can see where this is going.

    The other point is that, in the end, the form of delivery doesn't seem to matter much for many social services. It matters a lot for some things (anything to do with reproduction, especially), but as far as anti-poverty programs go, what really makes the difference is how much money actually gets dished out. If our welfare state ended up being administered by churches, à la Germany or the Netherlands, that might work—just don't be surprised when America drifts toward atheism!—but for this to be effective at fighting poverty, we'd also have to boost total spending to German or Dutch levels.
    -- Brad Plumer 2:20 PM
    No Bumper For You

    It's in the middle of a more serious article about reforms underway in Syria, but there's some quality morbid humor here:
    This year, some Syrians distributed a video clip via cell phone of a smiling [President Bashar] Assad riding a bumper car with his oldest son, Hafez, at a popular park. (Those in other bumper cars noticeably kept their distance.)
    Yeah, that should be a rule or something: If you're a man known for beating people on the feet with thick cables, you probably won't find people to play bumper cars with you. Life is tough.

    Read the full piece, too. Bashar Assad came into office in 2000 planning to pull off a bunch of economic reforms—and that still seems to be his goal—but there were all sorts of political obstacles in the way. This 2001 overview of those obstacles, by Gary Gambill, still seems relevant today: "Bashar Assad's failure to undertake substantial economic reforms is therefore rooted in his failure to effect a controlled political opening." In particular, the security services started cracking down on the opposition without Bashar's approval. Seems like something you would want to rein in...
    -- Brad Plumer 1:25 PM
    Health Care Coalitions

    Speaking of CAFTA: reading about how Bush has had to make all sorts of protectionist compromises to gain support for his bill brings NAFTA back to mind. Ah, NAFTA. Lots of people argue that that much-discussed trade deal was either a smashing economic success or a miserable failure for the United States in the '90s. On this, eh, I'll stick with Richard Freeman's analysis: there were so many other important economic factors affecting the United States, Canada, and Mexico in the '90s that if you can disentangle them all and say "NAFTA did this grand (or devastating) thing," well, you deserve a serious prize.

    But NAFTA did have one very concrete effect: it alienated the labor unions from the Clinton White House in 1993-94. And that hurt where it counts: health care. Before the election, recall, Lane Kirkland had promised "storm troopers" to help Clinton get national health care passed. But when it came time to do health care, the AFL-CIO was too busy fighting hard against the free trade deal. Well, NAFTA got passed, but the Health Security Act flopped, in no small part due to lack of union support (the delays over NAFTA also gave Clinton's opponents time to organize against his health care plan). So thank free trade, and Bill f'n Clinton's lack of labor tact, if you're running around uninsured.

    (Okay, yes, there were other factors too—although, interestingly, the historical failure of national health care to catch on in America has a lot to do with labor's reluctance to join the fight at various points, for various reasons. The one fight they did truly join was the battle over Medicare. And hey!

    Also, this leads to another point: if and when Democrats ever regain control of government and decide to go full throttle on health care, the key is going to be dividing and conquering the various groups opposed to national health insurance. The doctors, it seems, have shifted to the side of the angels after decades of recalcitrance, though Clinton never courted the AMA; big mistake. Meanwhile, there are the various business groups that should support national insurance, though again, they have to be wooed with the right measures. And obviously the insurance industry is the big end-zone blocker, but even here, some insurance companies—Aetna, MetLife, Prudential, Cigna, etc.—have said before that they'd like to see national health care, and maybe they could be enticed if the plan involved managed competition and no government caps on nationwide spending. Or something.

    Basically, it's extremely tricky to thread the needle here, and that's certainly something to keep in mind when designing health care policies. The optimal policy, I think, will be an incremental reform that helps the uninsured now, but also has some sort of ticking time bomb that eventually explodes and clears the way for a more sensible health care overhaul. Much like, um, the GOP's sneaky Social Security phase-out strategy, only hopefully this one wouldn't suck as much. Jacob Hacker is thinking along the right lines with his plan for extending Medicare to the uninsured—setting aside the merits for now, that's the slippery slope we want to be paving! But you also don't want to give the game away, which is why I'm writing this all in parentheses. Shhhh.)
    -- Brad Plumer 3:29 AM
    CAFTA Or Atomic Wedgie?

    The Carnegie Endowment has a new analysis of CAFTA that I meant to blog about earlier today, but stuck over on the mini-blog instead. Anyway, unlike praktike, I do think the labor-standard provisions of CAFTA are wholly atrocious. My baseline for this sort of thing is at least to give workers the right to organize and speak out in the workplace, and I've seen little evidence that this would unduly harm developing nations, though I'll admit enforcement is a much, much trickier matter (how, for example, do you allow factories to break up genuinely corrupt unions?). But setting that aside, I think we can all agree that much of this stuff is pretty horrendous:
    The optimal combination of trade terms for regional rural development would be to phase out the tariffs on staple crops very slowly and gradually, so that subsistence farmers have time to adapt to competition from subsidized U.S. production, while increasing access to the U.S. market for traditional cash crops such as sugar and other agricultural products that exploit the comparative advantages of the region.

    As negotiated, the CAFTA-DR gets less than half of this equation right. The Administration did recognize the need for the countries in the region to slowly open their markets for a few staple products, but does not provide adequate transition periods for all such products. This half-measure will mean little for poor farmers in the region, who typically produce more than one staple product, be it rice, beans, corn, chickens, onions or potatoes. For poor farmers, lower prices for any of the products will put already precarious household incomes at risk. Malnutrition in the region is rising due to low coffee prices. The shock of CAFTA would likely mean more hunger in Central America.

    Moreover, and even more significantly, the CAFTA-DR does not provide new opportunities for rural populations to utilize and benefit from. For example, production of sugar could absorb some of the labor displaced from staple crops. Yet, sugar from Central America and the Dominican Republic is denied meaningful access to the U.S. market.
    Look, I understand comparative advantage just fine, thanks, and under some conditions lower tariffs sound truly swell, yes. But this is just a boot to the teeth. The transition period will be poorly managed, there's little assistance for farmers harmed by the upheaval, and Central American farmers get screwed by our high barriers on sugar imports. "Free trade" or trans-hemispheric donkey-punch? You decide. Meanwhile, for Republicans who love open markets but hate open borders: guess where all those displaced workers are going to go? Uh-huh. Hope you've stocked up on Minutemen.
    -- Brad Plumer 3:19 AM
    Judges Gone Wild: Euro Edition

    In comments, NickS mentioned Alec Stone Sweet as a judicial scholar to read. Didn't get to it, but an itchy mouse-clicker finger led me to this long survey of a bunch of books on judicial systems around the world, including one by Sweet. Interestingly, the practice of judicial review—i.e., the ability of courts to throw out laws—was pretty much confined to the United States and Norway until 1942. Up until then, most nations guarded their "parliamentary sovereignty" jealously. But things change, and nowadays more than eighty countries have the ol' review in place. Ran Hirschl, whose book I've been reading, argues that this came about, at least in non-European countries like New Zealand, because once-dominant groups were losing power after World War II and wanted to enshrine certain constitutional rights that would keep formerly marginalized groups in line. But that's not overly relevant for now.

    So here's the good stuff: What happens when you have Godzilla-sized judicial review, as they do in Europe? Ah, Europe. With the rise of the bureaucratic welfare state, you get an increasing number of complex laws that require interpretation by the courts. So judges in Europe are essentially making more and more actual policy these days. Plus, European bills of rights are very extensive—containing rights to adequate pay and housing, to pensions, etc. etc.—so the courts have to wade in frequently and apply various "balancing" tests to various laws. These courts, according to Sweet, often "interpret" statutes so freely that they become barely recognizable to the legislature that passed them. Even better, in France, Germany, Italy, and Spain, a sizeable minority in parliament can request judicial review before a bill actually becomes a law. The mere threat of this is often enough to make the majority compromise. And you thought we had a judiciary "run amok."

    But hey! What if the courts go too far? Well, in theory, European parliaments can amend the constitution to overrule activist judges, but in practice that's very difficult. The German constitution, for instance, forbids any changes that affect "the essence of a basic right." In other words, most everything. The French legislature is one of the few that can easily amend the constitution to skirt judicial review, but even it has only done so twice—to tighten immigration rules [and improve gender representation]. So much for that. Then there's the whole question of nominating judges. As you'd imagine, it's a pretty fundamental power for a European ruling party, and in fact, when a party that's been enthroned for a long time is finally voted out, there's usually some temporary brawling between the legislature and judiciary, as the Old Guard's judges lay the smackdown on the New Guard's laws. Really, the only check on judicial power is the fact that judges are good people and try to maintain their own legitimacy by satisfying as many people as possible. That's your restraint.

    Anyway, it's tough to say whether judicial review on steroids is a good thing or a bad thing. Ran Hirschl thinks the constitutionalization of rights and the move towards "juristocracy" is a bad thing, of a piece with the "broader neoliberal global trend toward delegating power away from electorally accountable bodies and toward quasi-autonomous decision-makers." For now, I'm in his camp, though one should add that the details matter.

    Obviously the independence of the judiciary depends, to some extent, on what sort of political controls are built into the promotion system for judges. In Japan, for instance, judges are selected via a civil service exam. So you'd think they'd be ultra-independent. But not so! They're more likely to get promoted if they toe the party line, and the party line is dictated from on high by the Japanese Supreme Court, which is in turn appointed by the political parties in the Diet. Not so independent after all. So in theory there can always be strong political controls on the judiciary, and Europe's trying to put some of these in place. On the other hand, as the survey says in its conclusion, these books all show "just how far courts can push in the modern world without getting taken over politically." Fair warning.
    -- Brad Plumer 3:01 AM

    May 17, 2005

    So Who Hates Star Wars?

    Anthony Lane can do no wrong:
    The young Obi-Wan Kenobi is not, I hasten to add, the most nauseating figure onscreen; nor is R2-D2 or even C-3PO, although I still fail to understand why I should have been expected to waste twenty-five years of my life following the progress of a beeping trash can and a gay, gold-plated Jeeves.

    No, the one who gets me is Yoda. May I take the opportunity to enter a brief plea in favor of his extermination? Any educated moviegoer would know what to do, having watched that helpful sequence in "Gremlins" when a small, sage-colored beastie is fed into an electric blender. A fittingly frantic end, I feel, for the faux-pensive stillness on which the Yoda legend has hung.

    At one point in the new film, he assumes the role of cosmic shrink—squatting opposite Anakin in a noirish room, where the light bleeds sideways through slatted blinds. Anakin keeps having problems with his dark side, in the way that you or I might suffer from tennis elbow, but Yoda, whose reptilian smugness we have been encouraged to mistake for wisdom, has the answer. "Train yourself to let go of everything you fear to lose," he says. Hold on, Kermit, run that past me one more time. If you ever got laid (admittedly a long shot, unless we can dig you up some undiscerning alien hottie with a name like Jar Jar Gabor), and spawned a brood of Yodettes, are you saying that you’d leave them behind at the first sniff of danger? Also, while we’re here, what’s with the screwy syntax? Deepest mind in the galaxy, apparently, and you still express yourself like a day-tripper with a dog-eared phrase book. "I hope right you are." Break me a fucking give.
    True, although "Train yourself to let go of everything you fear to lose" is pretty miraculously uncorrupted, for Yoda. Anyway, easily Lane's best review since he jack-booted Pearl Harbor and gave us this reassuring line: "On the other hand, it must be said that a Washington in which Dan Aykroyd plays an expert in naval intelligence is not a Washington that would enjoy one's undivided confidence."
    -- Brad Plumer 4:25 PM
    Joint Custody

    Really good article by Trish Wilson arguing that "presumptive joint custody" in divorce cases is almost never a good thing. If the two parents agree to split the kid, sure, great idea, but research shows that forcing them to do so can cause all sorts of havoc for the child. More here from Ampersand.

    As a side note, intuitively I agree that custody laws should be based on what's best for the child, but I'm not always sure why I agree, exactly. Is it because children are little? Or because they'll get fucked up and hey, they didn't choose to get into the mess in the first place? Yeah, that's obviously it. (It's a weird thing to ask, I know, but why is there usually such a strong presumption in favor of children? Sometimes it's obvious, but not always. "Save the women and children first!" on a sinking ship seems noble and chivalrous, but not necessarily rational or moral.) But more importantly, and per my anti-judiciary post below, this is an instance where courts are all but certain to do a better job than legislatures.
    -- Brad Plumer 4:00 AM
    Uzbekistan Riots

    I've been trying to catch up on my Uzbekistan reading, but yikes, it's hard to just wade into a region and figure out what's going on. A few things about the recent violence in the eastern part of the country, though—in which, reportedly, 500 protestors have been killed by the government. First, as far as I can tell, the revolt in Andijan has less to do with Islamic extremism than with the fact that Uzbeks are basically poor as dirt and fed up. President Islam Karimov—of "boil the dissidents!" fame—has badly mismanaged Uzbekistan's economy, as Robert Templer of ICG has described:
    The concentration of wealth and power in an ever-smaller number of hands close to the president, combined with increasing repression and a weakening economy, is fueling widespread discontent that could turn violent. Visiting officials who lecture Karimov on his economic failures are firmly reminded that his education was in the dismal science. Indeed, during Soviet times, he worked at the Uzbek branch of Gosplan, the central planning agency, where he shuffled goods from one unproductive factory to another while skimming a cut. That's still how he sees economic management.

    Last year, he effectively closed down Uzbekistan's bazaars, the wholesale markets that are the center of commerce, in an attempt, many believe, to enrich members of the government trying to control wholesale trade. He has had the government buy back, at their original price, businesses that were privatized years ago. Many of these businesses are then taken over by families of politicians.
    The Jamestown Foundation, meanwhile, gives an overview of recent events. The so-called "rebels," supposedly including the Islamist group Hizb-ut-Tahrir, seem to have fled to neighboring Kyrgyzstan, whose southern region isn't under government control. "Thus," says Jamestown, "southern Kyrgyzstan could become the center of armed resistance to the Karimov regime." But the uprising wouldn't have such wide popular support if Karimov's regime weren't so brutal and economic conditions weren't so miserable. That seems to be the crucial factor. Also, Nathan Hamm notes that Uzbeks want change, yes, but they're also worried about chaos and violence—he quotes one protestor saying, "All I know is that if change in Uzbekistan is possible only through violence and blood—then I don't want that kind of change." Horrible.

    I have no idea whether or not the United States is going to let Karimov continue on with his little crackdown on the "rebels." Russian President Putin is backing Karimov to the hilt, worrying about "the danger of destabilization." Meanwhile, praktike catches the State Department condemning the recent violence and urging Karimov to open up Uzbekistan's political system. But we're in something of a bind here: as this report points out, the United States has an important air base in the country, and Uzbekistan is a "crucial" ally in the war on terror. More substantively, in Robert Kaplan's much-derided new Atlantic article, he claims the U.S. worries that if we disengage from Uzbekistan, China will swoop in and gain influence—killing any hope for reform there. That sounds like a reasonable concern. Still, Karimov benefits from his relationship with the United States, and it seems we ought to have some leverage over him to push for reform, though that's always easier said than done.

    By way of suggestions, Ariel Cohen of Heritage writes that the U.S., Russia, China, and others ought to prod Karimov into negotiating with the secular, moderate opposition parties in the country—the Erk and Birlik parties—to bring about stability. Meanwhile, a year ago, Fiona Hill of Brookings argued that Karimov's method of "stabilizing" the country was only going to produce more and more social upheaval. Well, she was right about that, though she also doesn't have any easy answers for fixing the country:
    Unfortunately, even sensible solutions to Uzbekistan's problems are now difficult to implement. Although the United States can exert some leverage on Uzbekistan's government, which sees the U.S. military presence and American security assistance as a source of legitimacy, pushing the government to liberalize suddenly could be disastrous. The pressure has to come off gradually to prevent an explosion that would have negative consequences for the whole of Central Asia.

    The U.S. government must, however, push the Uzbek government to put the safety valves back in place and allow its population to let off some steam. Stopping torture and arbitrary detentions would be of the first order, along with re-opening key border crossings, allowing freedom of movement in the Fergana Valley, facilitating private trade, and reinvigorating the bazaars. If something is not done, and soon, there will be more physical and political explosions in Uzbekistan...
    MORE: For those who really want to get into details, Dan Darling has a backgrounder on Uzbekistan, focusing on Hizb-ut-Tahrir, the militant Islamist group. Incidentally, there are reports that HuT is inciting the Koran-in-toilet riots in Afghanistan, but who knows.
    -- Brad Plumer 3:43 AM
    Liberals Who Hate Judges

    Reading up on judges and the courts over the past few days, I've noticed that there's been a growing trend in recent years for liberals to speak out against judicial review, i.e. the power of the Supreme Court to invalidate state and federal laws on constitutional grounds.

    Off the top of my head, Mark Tushnet, Robin West, Mary Becker, Michael Klarman, and maybe even Cass Sunstein have all been skeptical of the courts' ability to advance progressive goals, especially in the economic sphere. (As Sunstein puts it, the judiciary tends to uphold "status quo neutrality" rather than enforce things like welfare rights.) Historically, of course, this has usually been the case—in the Progressive era, the courts were the major obstacle to change—and there was a brief uptick in progressive court-mongering during the Warren Court, but that was something of an aberration. At the very least, regardless of what you think of judicial review, it's safe to assume that a Warren-esque Supreme Court isn't in the cards for the foreseeable future.

    Anyway, I'm far from decided on the issue, but since I do have the occasional anti-judicial-review outburst, I'll sketch out some of the arguments here. Yes, disdain for the courts puts me in the same camp as Tom DeLay and the wingnuts. Life is weird. I should also say that, barring an unlikely constitutional amendment—for instance, one that allowed Congress to override judicial decisions by a three-fourths majority—there's not really anything to be done about judicial review. (I assume the Supreme Court isn't going to wake up one day and say, "Oops! Well, it looks like we've up and arrogated way too much power for ourselves these past 200 years. Sorry!") The best that can be said is that, if enough people start questioning judicial review, judges—who cater to popular sentiment more than we like to think—may start showing more deference to legislatures. Perhaps!

    One of Tushnet's big arguments against judicial review is that the courts, by taking constitutional matters into their own hands, have essentially prevented the public from taking an interest in the Constitution—junk judicial review, and we'd have a broader public debate about these matters. Eh, maybe; those sorts of arguments always seem plausible-yet-ethereal to me. One could also say that legislatures usually adhere closely to the Constitution anyway, and would adhere even more closely if they couldn't palm off the hard decisions to the courts. Again, who knows?

    The better liberal argument against judicial review is Sunstein's—historically legislatures have been much better at enacting progressive change than courts. Again, the Warren Court is an exception, and shouldn't be leaned on too heavily. This seems to be true globally too: I've been leafing through Ran Hirschl's Towards Juristocracy, and it seems that courts in other countries haven't been very effective at enacting progressive economic change, either. (And courts in other countries usually have far more latitude to do so—though that's another post.)

    Another argument for "Taking the Constitution Away from the Courts," one I think Tushnet underplays, is that the courts—and not just the Supreme Court—often make law in an extremely ineffective and clumsy manner. Of all the arguments, I like this one the best. First off, yes, it's perfectly "legitimate" for courts to make law—the Constitution is a vague and fuzzy beast, with vague and fuzzy clauses like "equal protection," and inevitably, in the course of interpretation, the courts are going to make new law. Such is life. There's nothing a priori illegitimate about that. But there are other problems.

    The term "impact litigation" refers to rulings that actually enact some widespread social change. Roe vs. Wade (legalizing first-trimester abortion) and Brown vs. Board of Education (ruling against school segregation) are the famous blokes here, but you also rulings as "tiny" as suits against gun and tobacco companies that resulted not only in settlements, but in real changes. We now have more safety locks on guns in large part because of litigation, instead of legislation.

    Now all these decisions seem just fine and dandy to a liberal, but consider: the problem with courts making law is that judges generally aren't qualified to do this sort of thing. Judges see only the evidence before them, and courts have to follow specific procedures. They can't take into account the vast technocratic expertise available on any given topic. Is it really the case that judges are experts on segregation in prisons, music distribution software, pornography, school prayer, campaign finance, etc. etc.? No, of course not. Changing the law often creates as many problems as it solves, and it takes serious finesse to anticipate all the ill side-effects that come with enacting social change. The courts are terrible at doing that, and as a result have ended up making terribly clumsy laws—from forcing mental institutions to turn out their patients and creating homelessness, to sequestering black politicians in their own special districts, to various safeguards in juvenile courts that have had bad effects—laws with all sorts of unintended consequences.

    The most famous example, of course, is Brown vs. Board of Education, which a number of liberal scholars have recently been deriding as a ruling that didn't change anything about segregation per se (although it had other important effects). I've been reading Michael Klarman's excellent book, From Jim Crow to Civil Rights, and Klarman (a judicial-review skeptic) notes that a decade after Brown, 98 percent of African-American schoolchildren in the South still attended segregated schools. The Court simply didn't have the tools to force the change. That shouldn't be surprising—courts can really only do a couple of things to make people comply: punish violators or make them pay money. Very ham-fisted. Real desegregation only began when the Civil Rights Act passed in 1964 and state and federal agencies were given specific powers to actually do something.

    Of course, that's less an argument for getting rid of judicial review than for enjoining liberal reformers to please, please stop using the courts to enact any and all social change. And I say this not because "impact litigation" often creates a backlash. You usually hear that argument trotted out—for instance, that Roe vs. Wade gave rise to the "values" evangelical movement—but that's not a big concern of mine. Most judicial decisions that enact sweeping social change end up creating their own legitimacy. School desegregation and Miranda rights are virtually unquestioned today, even though they were controversial decisions at the time. That, and not the Roe example, seems to be the norm. The backlash usually doesn't happen.

    No, the bigger problem with relying on the courts is that, as mentioned above, the outcomes are usually far different from what reformers actually intended. It's depressingly great fun to read about Common Cause, the advocacy group that went to the courts to get a favorable ruling on campaign finance, and ended up with Buckley v. Valeo, which protected campaign cash as "free speech," and basically imploded any hope for real campaign finance reform. Common Cause has been fighting the decision ever since. You usually don't get the reform you want going through the courts, and if you do, it's likely to cause as many problems as it solves.

    Um, so I actually argued two things: the liberal case against judicial review, and practical reasons for liberals not to enact social change through the courts (or at least be selective about it!). I should say, though, that I'm not convinced that judicial review is as bad as all that, and in another post I'll try to mount a defense. Otherwise, fire away!
    -- Brad Plumer 3:16 AM