Most of Marcia Angell's review of The Constant Gardener recounts how drug companies really do use third world inhabitants as guinea pigs for pharmaceutical testing, which is all very horrifying, but this bit was far and away the most chilling: "Yet the [movie] is based on the premise that a pharmaceutical company would be so threatened by disclosures of its activities that it would have someone killed. That is what is fantasy. In fact, many of the practices that so horrified le Carré's heroine are fairly standard and generally well known and accepted. They seldom provoke outrage, let alone murder. A company like KDH would not kill someone like Tessa even if it were willing to do so; it wouldn't have to. Her concerns would have seemed isolated and futile, and the companies would hardly have taken notice of them."
Angell, meanwhile, gets at what's so noxious about testing pharmaceuticals in the third world—most of it is for drugs that will only be used in the developed world. Very few companies test new treatments for malaria, or sleeping sickness, or what have you; that's just not where the money is. Instead, they find willing patients abroad—who will happily sign up for drug trials in exchange for a few bucks and the promise of free care—to test experimental drugs for things like cholesterol reduction, with little to no oversight. (See also the Washington Post's six-part series from 2000, "The Body Hunters.") "In my view," says Angell, "research should not be done in the third world unless it concerns diseases that are virtually confined to those regions." Right, and that should happen right about the time we get an independent FDA. Luckily no one needs to track down and kill anyone who holds this view; it's already far on the fringe.
The military's latest plan for Iraq, apparently just approved by Gen. George W. Casey, is suitably cryptic, but the following seem to be the main points, judging from an Inside the Pentagon interview with officials who reviewed the plan:
The military is planning for a wide range of changes to the number of military personnel in Iraq between now and spring of 2006, from slightly increasing the Army presence to, in the most wildly optimistic scenario, bringing home 70,000 troops.
It will, however, be almost impossible to sustain the current force through 2006.
By early next year the military plans to hand off key tasks to private security contractors.
There's no set timetable for withdrawal. The conditions for reduction will include "the state of the insurgency, the capability of Iraqi security forces, and the Iraqi government's ability to support military operations," to be determined by a "multinational advisory panel."
"[S]ome defense analysts" think that "phasing troop reductions over the long term" is the best way to avoid instability.
How long term? "Some estimates" suggest the Pentagon will retain at least 20,000 military personnel in Iraq for perhaps a decade or more.
Seeing as how training the Iraqi Army doesn't seem to be getting anywhere, this likely means staying for a long, long time. The alternative, it seems, is the Center for American Progress' recently-released proposal to withdraw 80,000 troops by the end of 2006—no matter what—and then... deploy them elsewhere around the world. Because, really, the most sensible way to withdraw from Iraq is to get entangled, immediately, in yet another quagmire. No, but seriously, is there any reason to think that putting 1,000 more troops in the Philippines, as CAP proposes, is a good idea? Is the plan to invade Mindanao and wipe out Abu Sayyaf? Maybe we can broaden the war to the MNLF and other affiliated Islamic separatist groups too. Should be fun, I'll make the popcorn.
Also, I'm no expert, but I guess I don't see the logic in taking 20,000 troops out of Iraq and dumping them in Afghanistan to "beat back the resurging Taliban forces and to maintain security throughout the country," as Korb and Katulis suggest. Either we think deploying more troops to defeat insurgencies with ties to al-Qaeda is an important thing to do or we don't. If we do, I don't see why Afghanistan or the Philippines or whatever takes precedence over Iraq, or why we think counterinsurgency will be successful elsewhere but not in Iraq. Or why we can let Iraq go to hell but absolutely must not let al-Qaeda fighters wander around the Horn of Africa. Is the idea here that Iraq is just botched beyond all repair, it sucks, that's life, nothing we can do, but hey, if we send troops somewhere else and try just a bit harder to do more or less the same thing, maybe this time it will work? I guess "redeployment" is more palatable, politically, than straight-up withdrawal, but it still seems like an odd way of thinking about things.
Check out this year-old piece from the New Yorker on one man's obsessive quest to find a live giant squid… before it's too late. No, really, it's bar none the most fascinating piece about deep-water invertebrates that I've ever read. And I bring this up only because last week Japanese scientists took the first-ever live photographs of the beast:
Eyes like dinner-plates! Tentacles that can kill whales! Give it an 'ooh' and an 'aah', people! Incidentally, the New Yorker piece notes that there's some debate over what the squid's famous ink-spray actually does—does it just act as a smokescreen, making it hard for attackers to see, or does it actually have some sort of chemical in it that either irritates or hurts or disables predators? Even squids that live in very, very deep water, where light is nearly non-existent, still shoot out ink in times of trouble, so that's evidence in favor of the latter theory, maybe. Odd that no one knows for sure.
Via Mark Safranski, an interesting interview with Christopher Andrew, author of the newly-released The World Was Going Our Way: The KGB and the Battle for the Third World:
The thing that sticks out most in my mind is the big picture. Thanks to the Mitrokhin Archive we now understand for the first time how it is the KGB thought it could win the Cold War. They knew it wasn't going to happen by nuclear confrontation with the U.S. or Britain or any of America’s other allies. They knew it wasn't going to happen by coming to power in any of the NATO countries. Instead, they thought if the rest of the world did go their way it would leave the west isolated in the same way the U.S. was isolated in the third world at the end of the Vietnam War. It’s a great illusion. For 25 years the KGB was moving around the world living an extraordinary fantasy.
Although it's not clear, at least from the interview, how effective the KGB actually was. They certainly tried, as, for instance, in India: "According to KGB files, by 1973 it had ten Indian newspapers on its payroll as well as a press agency under its control." Plus Salvador Allende in Chile and of course Castro received payments from the KGB, but Andrew suggests that the strategy was eventually destined for failure. Here's a NewsMax summary of the book (yeah, yeah, it's NewsMax, but the highlights seem decent.)
Laura Rozen notes intelligence officials questioning whether the recently-killed Abu Azzam was really Zarqawi's second-in-command for al-Qaeda in Iraq after all. On the other hand, Bill Roggio says that whoever he is, Abu Azzam was still important, and also puts up a handy flow chart noting that several top al-Qaeda operatives have been captured of late. The Belmont Club says that decimating the upper ranks of al-Qaeda like so really does have an effect:
But the worst of it is the wastage to cadres. Those who write that body counts are a meaningless metric to apply against the insurgency ignore the fact that formations which sustain heavy casualties lose their organizational memory while those who suffer lightly retain them. Lt. Col. Joseph L'Etoile is on his third and half of his men are on their second tours of Iraq. For Abu Nasir and many of his foreign fighters, the memory of what to avoid next time has been lost on this, their last tour of Iraq.
Well, in some ways that's true. Note that the Phoenix Program during the Vietnam War, a CIA assassination campaign intended to find and kill Viet Cong cadres in the south which was similarly measured in body counts—and, for that matter, ended up killing lots and lots of South Vietnamese civilians—really did end up weakening the Viet Cong infrastructure in the south. Overall, the program was a massive failure, and alienated much of the rural population, but it proved that, strictly speaking, if you kill enough people, you can destroy an organization, that the ranks aren't infinitely replenishable. On the other hand, nothing like the Phoenix Program is going on in Iraq, and Douglas Farah's analogy seems far more apt:
Having covered conflicts and the war on drugs for two decades now, it is clear how unhelpful it is to repeatedly trumpet the supposed damage to an organization when one person is taken out of action. The closest parallel I find is in the drug wars, when first Pablo Escobar then other leaders of the Medellin cocaine cartel were taken down. Then the leaders of the Cali cartel were killed or arrested, then the Northern Valley gangs were decapitated. At every step, the DEA and U.S. government would hail the actions as a major triumph, destined to end or greatly diminish drug trafficking. Yet, after each major killing or arrest the amount of cocaine entering the United States remained unchanged. New people would simply step into the breach. While each generation of traffickers was able to individually control less of the market, and each succeeding organization was smaller and less vertical in its structure, the aggregate amount of drugs they were able to produce and export did not diminish, and ultimately grew.
There's no reason to think Zarqawi doesn't operate like that, especially since everything we've seen has indicated that attacks continue even after this or that latest "top lieutenant" has been captured or killed. Meanwhile, as Anthony Cordesman argues—and U.S. military officers in Iraq are now recognizing, according to the Washington Post—Zarqawi and the foreign fighters have essentially "hijacked" the Sunni insurgency, and are steering it less in an anti-occupation direction, although there's that, and more in a pro-civil war direction, by directing an increasing number of attacks against Shiites and other Iraqis.
What about the rest of the insurgency? The Baathist and Iraqi "nationalist" elements, according to the Post, seem now to have quieted down—content to lie low for now, infiltrate the new government, and perhaps wait to stage a coup a few years down the road. Who knows? Nevertheless, Cordesman has also noted before that Zarqawi receives ample domestic support from the thousands of radicalized Iraqi salafists who grew up during Saddam Hussein's "revival of Islam" campaign in the Sunni provinces during the 1990s. (Ahmed Hashim's old analysis of the insurgency here still holds up incredibly well.) Cordesman points out that "rifts" between elements of the insurgency are few and far between, even if some Sunni clerics have been denouncing Zarqawi. This may be because, as Anthony Shadid recounted in his recent book, those clerics discredited themselves among the young Iraqi fundamentalists by their collaboration with Saddam's regime, much as the Shiite clergy in Najaf partly discredited itself among the young Sadrists.
What this all means, it's hard to say. It doesn't seem like the most active elements of the Sunni insurgency, currently, would lose steam if the United States announced a pullout right now. The obvious way forward, which seems to be the U.S. military's current strategy, is to focus mainly on uprooting and weakening Zarqawi's network—at least to the point where it can be handled by a native Iraqi force—which would drastically reduce the risk of Sunni-Shiite civil war, à la 1980s Lebanon, after the United States starts drawing down. Weakening Zarqawi would also, as Army Maj. Gen. Richard Zahner says, allow the political process to "mature." At least that's the hope. If everything written above is correct, then it's not an unreasonable strategy, I think, but it's also not clear that the U.S. can actually do it, as per Farah's post, or that Iraq would stay intact even if you killed every last member of Zarqawi's network (along with all the civilians standing in the way). There are still a hundred other sources of major instability, including the new constitution, or the squabbles in Kirkuk, or the skirmishing among Shiite radical militias, or the hundreds of thousands of ex-Baathists now biding their time.
Katherine V.W. Stone, a professor at UCLA, has written a paper entitled, "Flexibilization, Globalization, and Privatization: The Three Challenges to Labor Rights in Our Time," that deserves a look. It gives a solid overview of the three main reasons why unions in the United States have withered over the past few decades. First, companies have increasingly sought greater flexibility in the workplace—often out of necessity—which has reduced the appeal, for them, of the old unionized workplace model, with its narrow job definitions and rigid hierarchies. Second, thanks to globalization, many companies now have the ability to shift operations abroad, thus weakening the leverage unions can wield. And third, active government policies dating from the Reagan era, including the privatization of labor arbitration, have decimated the strength of organized labor.
Granted, it often depends on which industries we're actually talking about: the policy aspect seems more important for the service and retail industry—there's no good structural reason why Wal-Mart workers shouldn't be unionized, except that public policy works against this—while globalization seems more important for, say, some manufacturing. (Although even on this, I'm skeptical of her point that companies move to countries with the lowest labor standards, thereby forcing nations to "compete" over deregulation; see David Kucera's work here.) Still, it's a hostile environment out there, and voting a more labor-friendly government into Washington won't necessarily get rid of the economic reality here. The current union model, in some ways, has become obsolete.
So how to get around that? Stone, building off Edward Glaeser's work on cities, notes a striking fact: many industries tend to "agglomerate" in a single area, likely because they gain some benefit by being near each other. With tech industries, for instance, you can see why it would pay to find one specific region—Silicon Valley, say—where a large portion of skilled workers can live. Insofar as this actually happens, then, workers can band together in local "citizens unions" to put pressure on local companies concentrated in that area:
While training can help make a locality's workforce more flexible and skilled, no individual employer has an incentive to establish such programs unilaterally because it cannot capture all the benefits for itself, or prevent their capture by a competitor. However, if a group of workers, organized as a citizens association or a local union, pressures firms in an area to contribute, it would create a benefit from which all would share. Similarly, if enough corporations were induced to contribute to a locality's social infrastructure – its school system, hospitals, parks, cultural activities, and child care – that would help attract a highly skilled workforce who want quality educational opportunities for their children. Such community investment would benefit all firms in a locality.
Easier said than done, naturally, but she might be onto something. In essence, her idea takes advantage of the fact that many companies, especially those with high turnover, draw on the collective skills, knowledge, and experience of the broader local workforce, rather than just those specific workers working in the company at a given point in time. I don't know if that's always true—again, many retailers often draw on the collective lack of skill in a given locality—but it seems like a promising way to think about the structural obstacles unions face. Many local campaigns, such as Mass Global Action, or local "living wage" campaigns, or the Industrial Areas Foundation, have had a great deal of local success, bringing many of the benefits of unionization without relying solely on organizing within a specific company or industry (the campaigns usually combining local activists, church groups, community organizations, and worker's groups, in addition to unions). I don't think this sort of thing can ever adequately replace a robust labor movement in this country, but they're definitely complementary in all sorts of increasingly crucial ways.
Via Jane Galt comes the perennial question: Why don't we have unisex bathrooms? Yeah, why don't we? From a utility standpoint, unisex bathrooms would be quicker, on average, for women—those urinals keep the restroom traffic moving—and cleaner, on average, for men (the shame factor). From a cultural standpoint, some men might find it weird that suddenly people are talking and brushing their hair in the bathroom (or whatever), but they'll get over it. The main worry, of course, is that unisex bathrooms would lead to more sexual assaults. Even if that's not true statistically, still, the first assault that did happen would cause the company or venue that adopted the unisex bathroom to come in for special opprobrium, since it went out of its way to do things differently. So people are reluctant to stick their neck out and be the first-movers. Perhaps you could use the money saved by consolidating restrooms to hire a guard. Or security cameras. But perhaps that's unworkable.
MORE: Here's why the GLBT community is getting behind unisex bathrooms: "In a 2002 survey conducted by the San Francisco Human Rights Commission, nearly half of all transgender respondents reported having been harassed or assaulted in public restrooms."
Earlier today in the office, I was joking with one of the interns that someone ought to write a contrarian defense of patronage and cronyism. But after reading Time's latest cover story on the ways in which President Bush has stocked the federal agencies with his assorted cronies (and his cronies' cronies), it's clear that this isn't really a laughing matter. Here's the nut of it:
The Office of Personnel Management's Plum Book, published at the start of each presidential Administration, shows that there are more than 3,000 positions a President can fill without consideration for civil service rules. And Bush has gone further than most Presidents to put political stalwarts in some of the most important government jobs you've never heard of, and to give them genuine power over the bureaucracy. "These folks are really good at using the instruments of government to promote the President's political agenda," says Paul Light, a professor of public service at New York University and a well-known expert on the machinery of government. "And I think that takes you well into the gray zone where few Presidents have dared to go in the past. It's the coordination and centralization that's important here."
American democracy, it would appear, has become a form of monarchy, in which cloying courtiers line up, preen and primp in front of the king, and hope for some blessed bit of favoritism handed down from on high. It's clear, mind you, that we're not seeing anything like the Jacksonian spoils system of the 19th century, in which, after each presidential election, tens of thousands of cronies would descend on Washington hoping that the new president would give them a plum sinecure in some federal agency or other. That racket pretty much ended in 1881, after James A. Garfield was shot by a spurned office-seeker, and Congress passed the Civil Service Act two years later.
But even today the president—already an absurdly powerful position in this country; too powerful in my opinion—still gets to make appointments to a ridiculous number of federal jobs: 3,000 all told. What purpose, pray tell, does it serve to allow the president to choose the 57 Inspector General positions? As we've seen with various IGs over the past four years, the opportunities for abuse here are endless. Of course, George W. Bush and Dick Cheney see a purpose. These appointments allow them to reward business allies and political associates. But even more importantly—and this, I think, distinguishes this administration from previous ones—the new spoils system helps them fill the federal government with pliant officials who won't act independently. As Paul Light puts it, "coordination and centralization." And Bush, as his recent proposals make clear, wants to go even further in this direction, with reforms that would give his patronage appointments greater flexibility in reshaping the lower ranks of the civil service.
Bureaucracy, as anyone can agree, has its disadvantages. It can be slow, cumbersome, hidebound, inefficient, pick your adjective. Many libertarians and small-government types—at least those theoretical purists free from the burden of actually having to run things—hate bureaucracies for just this reason. (Occasionally privatization can help to fix these problems; in practice it often just adds further levels of inefficiency and corruption.) Nevertheless, a series of agencies filled with career civil servants brings with it certain advantages: you have a bunch of middle-class, educated and professional individuals generally working with a broad conception of the public interest in mind. As Michael Lind might say, these are the meritocratic mandarins of our time.
The modern-day spoils system, Bush-style, turns that conception on its head. Many of the technocrats who vie, like courtiers, for presidential appointments these days now generally spend their exile years, when their party is out of power, working in partisan-hack think tanks, like Heritage or the Center for American Progress, spending their time thinking not about the public interest but about fighting political warfare via clownish policy papers, trying to accentuate their differences from the opposition. These are the people who end up getting bureaucratic jobs—just look at the Coalition Provisional Authority in Iraq, filled with Heritage apparatchiks and hotshot young political operatives rather than with the public servants who actually spend all their time thinking about the drudgery of nation-building and building sewage treatment plants and the like. The end result was a disaster.
To be clear, there's absolutely no a priori reason why a Democratic president would do any better—the problem is, in many ways, systemic. On the other hand, the modern GOP harbors a special bias against bureaucracy. To be clear, Republicans certainly don't hate big government—why should they, when it provides billions of dollars worth of favors to dispense? They do, however, tend to assume that civil servants are all very liberal, and hostile to their own interests. In 1995, the Gingrich Republicans dismantled the Office of Technology Assessment, a very effective agency staffed by highly competent people that helped disseminate useful scientific information to Congress and the public, in order to inform policy debates. The problem, however, was that OTA put out two major studies in the 1980s unfriendly to Reagan's Star Wars project, and the GOP became convinced that the place was a leftist operation. As Chris Mooney recently reported, Republicans are sort of regretting the move nowadays, but they're still afraid that a rebuilt OTA would be staffed with "liberal" scientists who produce "skewed" analysis.
The Bush administration has taken up a similar stance, especially after meeting hefty civil servant opposition in, among other places, the CIA and State Department. Like characters in a Thomas Pynchon novel, Bush Republicans are deeply paranoid about bureaucracy, though it's mostly a partisan paranoia. Yet the civil servants who so irk conservatives—Richard Clarke, say, or Eric Shinseki—have very much stood athwart a presidency backed only by a war-crazed public and yelled 'stop'. Any conservative suspicious of popular rule, as Burke was, would do well to consider the crucial role that career civil service officers can play in checking a president empowered by the masses.
I understand the reasons in favor of some form of patronage. It gives highly-qualified people a real incentive to slog it out for presidential candidates and work for their parties in the hopes that some sort of cushy federal job will be the end reward. That's important. It's also a way to keep unelected agencies accountable. And, in those times when some hidebound federal agency really does need a shakeup, a smart president can wield his power of appointment to change things for the better, as Bill Clinton did for FEMA. But it's equally clear that the broad leeway a president gets to choose his own "entourage" is rife with danger. (And lest conservatives think I'm saying all this out of Bush-hatred, ask yourself, "What if Hillary got to select 3,000 of her finest cronies—to pursue 'coordination and centralization'?") Perhaps it's time to rethink civil-service reform. Independent bureaucracy, rather than the Kafkaesque monster that devours democracy, may well be an important defense against tyranny and corruption.
Kevin Drum notes that kids today are learning weird rules for dividing fractions. Strictly speaking, division can be thought of as "repeated subtraction"—that's how computers do it—but it becomes pretty cumbersome. Oddly enough, though, at some point in elementary school I learned how to subtract by nines-complement addition, and that quickly became the main way I did it. So, for instance, to figure out 8,731 minus 362, you'd take the nines complement of 362, which is 9,637, add that to 8,731, giving you 18,368, and then you lop off the first "1" and add it to the end of the sum, giving you the correct answer: 8,369. That all seems very frivolous, but I got pretty good at it, and it was fun visualizing the complementary numbers: for instance, imagining how the groove of the "5" locks in with the tooth of the "4", much like Africa and South America forming Pangea. Plus, that's how computers and mechanical adding machines all do subtraction—since it's impossible otherwise to "borrow" the one and the like. (Well, computers do twos-complement addition, but still.)
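The nines-complement trick described above is easy to sketch in a few lines of Python. This is just an illustration of the same end-around-carry procedure worked through with 8,731 minus 362; the function name and structure are mine, not from the post:

```python
def nines_complement_subtract(a, b):
    """Compute a - b (for a >= b >= 0) via nines-complement addition,
    the same way mechanical adding machines avoided 'borrowing'."""
    width = len(str(a))
    # Nines complement: pad b to a's width, then replace each digit d with 9 - d.
    comp = int("".join(str(9 - int(d)) for d in str(b).zfill(width)))
    total = a + comp
    # "End-around carry": lop off the leading 1 and add it back to the result.
    return total - 10 ** width + 1

print(nines_complement_subtract(8731, 362))  # → 8369
```

Tracing the example: the nines complement of 0362 is 9637; 8,731 + 9,637 = 18,368; dropping the leading 1 and adding it back gives 8,369, matching ordinary subtraction.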
Speaking of adding machines, here's a nifty page on the Curta mechanical calculator—the first handheld mechanical calculator, which was invented by Curt Herzstark while he was a prisoner in the Buchenwald concentration camp. (Previously, accountants would lug around their adding machines in suitcases.) From what I recall, Herzstark ran into a sticking point as to how to do subtraction, because the crank only rotated one way, until he realized that you could use nines-complement addition. And, voilà.
Here's a random question: Has the mayor of any major American city ever been elected president? It doesn't seem so. Searching the White House site turns up Grover Cleveland, whom someone put in charge of Buffalo way back in the day, for whatever reason, but that's all I can find. Obviously Hubert Humphrey was the mayor of Minneapolis, but he doesn't count either. Dennis Kucinich, former mayor of Cleveland, definitely doesn't count. Maybe Rudy Giuliani someday, but not now.
This doesn't seem all that surprising: anyone who campaigns for president obviously needs to set out with a relatively broad constituency, and mayors usually have a smaller and less diverse base than governors or senators. Still, this isn't the case in many other nations that elect their presidents nationally. Jacques Chirac was once mayor of Paris. Yuri Luzhkov, mayor of Moscow, practically runs everything Putin doesn't, and could maybe find himself next in line for the presidency—if there is a 'next in line'. Mexico City's mayor, Andres Manuel Lopez Obrador, looks like he'll become president real soon. Iran's new president, Mahmoud Ahmadinejad, used to be mayor of Tehran.
But then again, most nations have only one primary urban center that utterly dominates the rest of the country, so mayors, by extension, are much more powerful people. Paris makes up one-seventh of France's population; Moscow one-fourteenth (but a disproportionate percentage of the wealth); Tehran one-fifth; Mexico City one-sixth. By contrast, New York City comprises less than one-fifteenth of the American population—and unlike Moscow, doesn't have an overwhelming percentage of the wealth—and, unlike in many countries, there is at least one major rival city: Los Angeles. (Other countries with "decentralized" urban seats of power include, perhaps, Canada, Brazil, and China.) Relative to national politics, then, being mayor of New York City doesn't carry nearly as much weight as mayor of Moscow, or Paris, or Tehran, or Mexico City. And yes, this is highly relevant to the pressing issues of the day...
Via yet another excellent Digby post, here's a fantastic review article on the history of segregated housing in America:
One of the most useful aspects of the first chapter is Lamb’s description of the myriad ways that the federal government has contributed to racially segregated housing. He emphasizes the impact of federal mortgage guarantee programs that channeled money to the suburbs where middle- and working-class whites were able to buy new homes, the creation of the interstate highway system that made it possible for whites to commute from their homes in the suburbs to jobs in the city, and urban renewal programs that displaced African Americans from their communities without providing sufficient replacement housing. Government decisions about where to build public housing, the location of federal jobs in suburban areas rather than in central cities, and federal income tax deductions for mortgage interest also have contributed to the problem.
Interestingly, George Romney, Nixon's HUD Secretary, made several big attempts to promote racial integration in the suburbs, in part by proposing to build lots of low- and moderate-income housing away from urban centers. But the suburbs rioted, and eventually Nixon put a clamp on Romney, wresting control of "fair housing" programs away from HUD and taking the opposite tack. His was hardly the first administration to aid and abet segregated housing, but Nixon understood better than most the politics of doing so. As Lamb's book tells it, Nixon's "suburban strategy" played nearly as important a role as his "southern strategy" in his electoral victories. By "enunciat[ing] a policy declaring that the national government would not pressure the suburbs to accept low-income housing against their will," Nixon exploited and overcame one of the most divisive aspects of the Civil Rights movement. (A point James Nuechterlein touches on briefly in his recent First Things essay, "How Race Wrecked Liberalism"—northern whites were perfectly happy to extend the vote to blacks down south, but the prospect of living side by side with them shattered the consensus on civil rights and led to the '68 and '72 conservative landslides.)
That approach has persisted, more or less, to this day. Whatever else you want to say, good or bad, about the Bush administration's policies to promote homeownership (which has been the focus of housing policy over the past five years), they don't do much for segregated housing patterns in this country. Those who can't afford to buy a home—about 80 percent of all renters, many of them minorities—don't benefit from HUD's nifty downpayment grants and the like. And programs that have helped renters in the past, like Section 8 housing vouchers, have had their budgets slashed. Meanwhile, the homes that are affordable for many minority homebuyers tend to be located in poorer neighborhoods, which, again, just perpetuates the status quo. Policy and law aren't entirely responsible for housing segregation, but they certainly help. I don't have a good idea what a solution would look like, but then again, no one really wants to change anything; perhaps because, as Nixon knew perfectly well, there's no surer path to electoral oblivion.
I have no idea why this Chicago Tribune piece about Barack Obama's trip to Ukraine and Russia had to be so ridiculously detailed—at one point we get to hear Obama quip, "I didn't know Lenin was a player" during a jaunt to Moscow; um...—but all the same, it's nice to see at least one major politician take an interest in nuclear proliferation and arms control.
It tends to get lost in all the din, but at the end of the day, nothing else matters nearly as much when it comes to national security. With a nuclear weapon in tow, terrorists really could destroy the United States—if not economically, then through the inevitable police state that would emerge after an attack. (Yes, I'm tinfoil enough to believe it.) But without nukes, bin Laden, Zawahiri, Zarqawi, and all the rest are just two-bit lunatics who can kill a lot of people—thousands, even—but in a fundamental way can't and won't ever destroy Western civilization. (Unless, you know, we pull it off for them.) It's that simple. And yet very few people seem all that interested in, for instance, securing all the loose nukes floating around the former Soviet Union. See, e.g., here. So good for Obama for taking this seriously, and good for Richard Lugar—one of the few who actually does obsess over loose nukes—for taking him under his wing. Now start spreading the word.
This post was originally going to open with, "What the hell is wrong with Chuck Schumer?" This business about his DSCC aides trying to swipe the credit reports of Maryland's Lt. Gov. Michael Steele, who's thinking about a Senate bid next year, seems as sleazy as it gets. The fact that his aides got caught in July and didn't get fired until late August looks even worse. But doing some aimless searching around, here's a more novel hook: "What the hell is wrong with the Democratic Party in Maryland?" Despite representing a liberal state with a sizeable African-American population (30 percent), the party hasn't once nominated a black candidate to run for statewide office. Steele, a Republican, is the state's first black statewide officeholder.
Meanwhile, unless the national anger at Bush persists forever, it kind of looks like the Democrats will have a nigh-impossible time retaking the Senate next year. But really, who knows? Maybe all the countless GOP scandals going on right now will actually bring down the party, although this complicated Abramoff-Rove-Safavian-Norquist business is more likely to put people in jail, it seems, than create an actual backlash at the polls. Back to the point at hand, a bunch of Dailykos commenters think Steele would actually put up a tough fight in Maryland, for whatever that's worth. And yes, this is sort of a random assortment of stuff to be reading on a Friday night.
Nathan Newman has a long list of pro-labor measures Bill Clinton enacted when he was in office. The man sometimes doesn't get enough credit here. The depressing part, though, is that most of those measures were executive orders or changes to federal regulation—precisely the sort of thing that a subsequent president can easily undo. As, you know, proved to be the case. Clinton set up workplace regulations to prevent repetitive stress injuries; Bush repealed 'em. Clinton drew up rules banning union-busters from federal contracting work; Bush repealed 'em. Clinton stocked the NLRB with labor-friendly members; Bush... well, you get the idea. The principal failure of the Clinton administration was that a conservative president hostile to labor could almost entirely undo eight years' worth of progress with a flash of the pen.
One solution to this problem, from a labor standpoint, is to ensure that presidents hostile to labor stop getting elected. But that doesn't always work, obviously—in the two-party system we have, Republicans will inevitably win the presidency about half the time. The only lasting solution is to make sure that conservative presidents can't gut vital workplace protections quickly and easily. In theory, liberals shouldn't have such a tough time of this. A Roper Center poll in 1997 found that nine out of ten Americans supported "enforcing workplace safety and health regulations." You can't get 90 percent of Americans to agree on anything, but they agreed on this.
So how did Bush and the congressional GOP get away with repealing, for instance, OSHA's ergonomic regulations in early 2001, almost as soon as they came to power? Simply put, most Americans don't have the first clue as to what OSHA (the Occupational Safety and Health Administration) is or does—unless they happen to read Jordan Barab's excellent blog, which they should—let alone know what "ergonomics" regulations entail. The people who cared deeply and passionately about dismantling OSHA all headed major corporations, and donated heaps of money to political races. Pro-labor Democrats in Congress just couldn't muster the popular support or the lobbying pressure necessary to stop them. I've discussed elsewhere the relatively paltry lobbying operation the AFL-CIO has in Washington, but it's really hard to overstate this fact: In 2000, labor was 13th in total lobbyist spending, vastly, vastly outmatched by agribusiness, the communications industry, finance, "misc. business," etc. etc.
The moral here, of course, is that without a pervasively strong labor movement, any pro-worker rules or regulations or legislation that get passed by a liberal president or Congress can easily be repealed by the next conservative president. A strong labor movement, on the other hand, can rile up popular awareness and prevent people like Bush from thrusting unpopular workplace measures into law, under the radar. A strong labor movement can make sure that Democrats don't just shrink away when the battle gets extra-fierce, as they did during the fight over the overtime rules. A strong labor movement can bring the proper lobbying resources to bear. That's just life. Mickey Kaus says that sometimes the labor stranglehold on the Democratic Party can get in the way of good liberal policy. Yes, it can—no interest group is unselfish. But without a strong labor movement, good liberal policy often survives only as long as the good liberal president who enacted it. That, more than anything else, should be the overriding lesson of Clinton's policy successes: not that they were wonkish and good, but that they didn't last.
Rivka has a fantastic post connecting the horrific rumors about New Orleans that cropped up post-Katrina with social psychology:
Cognitive dissonance gets particularly ugly when reality collides with the just world hypothesis, the belief that "the world is an orderly, predictable, and just place, where people get what they deserve." Faced with tragedy, victimization, or injustice, just world believers have four options to reduce the cognitive dissonance: they can act quickly to help relieve the victim's suffering (restoring the justice of the situation), minimize the harm done (making the tragedy a less severe blow to their beliefs), justify the suffering as somehow deserved (redefining the situation as just), or focus on a larger, more encompassing just outcome of the "poor people will receive their rewards in heaven" variety. The first response - the only actually helpful one - isn't always possible. Unfortunately, the latter three pretty much always are.
The aftermath of Hurricane Katrina confronted Americans with a constant parade of images of suffering. Terrible suffering, to extremes hardly imaginable in a wealthy and highly developed society. American citizens dying of thirst, dead bodies lying uncollected on the streets of a major city, elderly people and children penned into the Convention Center and the Superdome in unimaginable squalor, denied even the most basic of aid from their government. There was no immediate way for private citizens to help them. Faced with those horrific images, most of us had powerful reactions of grief, rage, shame, fear, pity. In others, however, the images of Katrina caused cognitive dissonance too great for them to tolerate. Where is the "just world," when wheelchair-bound grandmothers die of thirst? How to maintain, watching the abandonment of New Orleans victims to day after day of imprisonment without relief, the conviction that this is the "greatest country in the world"?
That's good stuff, though I doubt "cognitive dissonance" was all that was going on after Katrina hit, when (parts of) the country became temporarily obsessed with digging up horror stories. Bill O'Reilly's various remarks—about blacks who "planned" to stay around so that they could get some quality looting-time—deserve, I think, a less charitable interpretation. But still, combine that with Mark Schmitt's theory that, in times of crisis, people genuinely need to believe that there's a competent leader in charge of the situation, even if there's no such thing, and you really have something. A wildly over-generalized and ultimately quite frightening something, but there you go.
In the Washington Post the other day, Robert Samuelson took liberals to task for misunderstanding the nature of poverty in America:
But the overall poverty rate is misleading. True, poverty has been stuck for non-Hispanic whites, though it's fairly low. Since the late 1970s, it's generally fluctuated between 8 percent and 9 percent, depending on the economy. But poverty among blacks -- though still appallingly high -- has declined sharply. In 2004 it was 24.7 percent, down from 33.1 percent in 1993, though up from 22.5 percent in 2000. As recently as 1983, it was 35.7 percent.
The dramatic improvement may reflect the 1990s' economic boom. Or it could stem from the 1996 welfare reform, which restricted benefits and imposed tougher work requirements. … Given these trends, the overall poverty rate should be drifting down. It isn't. The main reason, as I've written before, is immigration. We have uncontrolled entry of poor, unskilled workers across our southern border. Although many succeed, many don't, and many poor Latino immigrants have children, who are also poor. In 2004, 25 percent of the poverty population was Hispanic, up from 12 percent in 1980. Over this period, Hispanics represented almost three-quarters of the increase in the poverty population.
Now as it happens, I think we can and should do more about poverty among unskilled immigrants, and can do much more about poverty in Latin America. But that's an argument for another time. Samuelson makes a fair point here that doesn't get much press, and I've tried searching for studies that address his argument—namely, that the increasing poverty rates in the United States are pretty much due to immigration—but virtually no one seems up to the task. A Google trawl mostly turns up sites like VDare and CIS, both of which, obviously, support the "immigration causes poverty" thesis.
Ah, but here we go. This old EPI study looks at the years 1989-1999, when, despite a white-hot economy towards the end, the poverty rate fell less than one measly percentage point by the end of the decade, and this disappointing result was blamed on immigration. EPI dissented, noting that immigrant family incomes actually rose faster than native family incomes, and this increase was substantial enough to offset the increase in the share of the immigrant population. Meanwhile, the study looked at New York and California and found that, even if you exclude immigrant data, neither state saw a significant reduction in poverty during the 1990s. So those stagnant wages and stubborn poverty rates we hear so much about weren't just due to "uncontrolled entry of poor, unskilled workers across our southern border," then.
To EPI's argument, I'd add that if we get out our fine-tooth combs and look at the state-level data, we can see that, for instance, the poverty rate actually fell in California between 1998 and 2003, but rose dramatically in, say, South Carolina or Mississippi—not places we would consider immigration hotbeds. (You can find data on who's immigrating where here.) So it's not clear that everybody's doing better these days except the immigrants, who were already poor anyway -- as Samuelson would have it.
And regardless, Samuelson's column is sort of changing the subject here. Even if Hispanic immigration is mostly responsible for the increase in poverty, that's no excuse for getting complacent about the stagnant white poverty rate, which hovers around 8-9 percent, and by itself is higher than total poverty rates in many EU15 countries. Figure that one out. It's also no reason to get complacent about the African-American poverty rate, which is indeed "appallingly high," and has been rising over the past few years, during a supposed economic boom, after falling dramatically during the 1990s. (By the way, the African-American poverty rate dropped at roughly the same rate between 1993 and 1996 as it did between 1996 and 1999, so I'd question Samuelson's suggestion that it was due to welfare reform.) Really, it's not like the non-immigration component to poverty in the U.S. is anything to brag about. For that, the usual boring explanations—stagnant wages, meager safety net, etc.—still seem to apply.
A noteworthy statistic on getting high, from Maia Szalavitz' story in Salon on the latest scare stories being peddled by the government about marijuana:
UCLA public policy expert Mark Kleiman has pointed out that federally funded research by the University of Michigan shows that since the 1970s the level of high reported by high school seniors who smoked marijuana has remained "flat as a pancake." In other words, even if today's kids are smoking more potent stuff, they don't get higher than their folks did -- like drinking a few whiskey shots rather than multiple mugs of beer, they use less of the good stuff to achieve the same effect.
Against this, though, outside of a Lovemakers' concert a few weeks ago, an aging hippie told me that wimpy kids these days smoke "utter dogshit." So, you know, there's a real debate here. In an old post, Mark Kleiman looked at how you actually measure such things. Lab tests indicate that THC potency has probably tripled since the 1970s, but kids tend to roll thinner joints and share more often with their friends. Between increased moderation and more sharing, the kids are alright. On the other hand, the study that measured how high kids were actually getting, as compared to their parents, simply asked them, "How high do you get when you use [whatever]?" What? If our parents reported a high of "8" and kids today report a high of "8," that obviously doesn't tell us much of anything. We need longitudinal studies, dammit. But, of course, those have their own problems. Actually, I don't know how you'd design this sort of experiment. Brain scans, maybe.
Digby has two important posts on race, Republicans, and politics in America that deserve a look. No, this stuff has never really gone away, and Digby's characterization seems right to me: "Bush may not personally be a racist, I have no way of knowing what's 'in his heart.' But he is quite well aware of the fact that all the racists in the country who voted, voted for him." And he's likely aware now more than ever, perhaps, after his post-Katrina speech—which promised, at least nominally, the biggest poverty-spending in a generation and acknowledged that America's legacy of racism has contributed to black poverty—ended up winning over precisely zero liberals and alienated a bunch of Republicans. See, for instance, this Rasmussen poll:
Following the speech, the President's rating for handling the Katrina crisis fell eight points among Republicans (from 71% good or excellent to 63%). The President also draws good or excellent marks from 11% of Democrats and 31% of those not affiliated with either major political party.
Uh-oh. Now maybe that eight-point drop among Republicans was due to Bush alienating those mythical "fiscal conservatives" holed up somewhere in New Hampshire and Arizona crunching budget numbers, but I doubt it. If fiscal conservatives haven't lost faith in Bush by now, they never will. More likely, the drop came among what Stanley Greenberg likes to call the "Fuck-You Boys" and "Fuck-You Old Men": white, often working-class Republicans who enjoy their social programs as much as anyone—people who would probably enjoy a Scandinavian socialist state, all else equal—but draw the line when government money starts going toward "those people." These are voters who can stomach a bit of tokenism among Bush's cabinet appointments, but balk at shoveling out $200 billion for poor, black New Orleaners. I don't think these voters just exist in the south—nor do I think they're the only racists in the country—but any analysis of politics in the south, or inquiries into why the United States doesn't have a robust European-style welfare state, needs to start here, with this bloc.
As far as Republicans go, I'm willing to believe that a number of Republicans aren't personally racist. Oh sure, exceptions abound. Trent Lott's paean to Strom Thurmond, or his amicus brief for Bob Jones University, or his involvement in the neo-Confederate movement, speaks for itself. Nor does it take much brainwave activity to decide that Jeff Sessions of Alabama doesn't like black people. But set them aside. What seems even more common are Republicans who are perfectly willing to benefit politically from racism to win elections. Sometimes the search by liberals for "code words" and "code phrases" among Republicans pursuing "southern strategies" and the like can seem a bit strained and goofy. ("He said meritocracy!") But the overall pattern just isn't hard to spot. George Bush's trip to Bob Jones University had a very clear purpose. The Rove-inspired push polls in South Carolina about John McCain's adopted daughter had a very clear purpose. To pretend that Bush's team of veteran campaigners, most of whom cut their teeth on races in the Deep South, was completely oblivious to the signals these moves send is utterly naive.
The list goes on. You have Bill Frist's 1994 stump speech line during his Senate run, "While I've been transplanting lungs and hearts to heal Tennesseans, [my opponent] Jim Sasser has been transplanting Tennesseans' wallets to Washington, home of Marion Barry." I wouldn't ever defend Barry, but what a black mayor in D.C. actually had to do with Tennessee politics isn't entirely clear. Just last fall, Tom Coburn of Oklahoma ran a campaign ad against his Democratic opponent that showed Hispanics and "black hands" receiving welfare checks. In every instance, the method is the same: don't go too overt on the racism, because no one likes that—which is why Pat Buchanan's presidential campaign flopped. But do subtly appeal to white resentment of blacks who "drain public resources," and play to stereotypes about black violence, laziness, and sexuality. By itself, racial priming of this sort obviously doesn't decide political races; Coburn had an overwhelming advantage regardless. (Moreover, he still did better among black voters than most southern Republican senators do.) But it does matter. The poll data is written in Magic Marker.
Crucially, I don't want to suggest that only southern Republicans benefit politically from racism. Quite often, Democratic attacks on outsourcing to India gain currency thanks to a similar sort of nativism, regardless of the economic merits of these arguments. Racial politics, meanwhile, played an ugly role in the 2001 mayoral primaries in New York City. Governors like Pete Wilson in California and Jim Edgar in Illinois came to power in 1994 by running "law and order" ads with blurry images of gun-toting African-American rapists. And so on. Often political realities dictate strategy; I think James Glaser, in The Hand of the Past in Contemporary Southern Politics, described the dynamic well:
The fundamental dynamic of southern politics, a racial dynamic, still holds, however. The process of who gets what, when, and how still must take place between majority whites and a large black minority, and this is the stuff of politics… It is not about candidates being unable to escape the shackles of virulent racism, though some certainly may be constrained by their racial attitudes. The argument here is that the racial balance of a district is determinative of so much of the campaign. The story of Delbert Hosemann is instructive. Here was a white man who had given enormously to the black community, and not from a sense of obligation or for political gain. But as a Republican candidate, he could think of blacks only as "not my people," and savvy as he was, he recognized that there were no circumstances under which he could make headway into the black community.
That, I think, sums up what people are mostly dealing with: political constraints over-determined by America's long legacy of racism. Not surprisingly, race doesn't affect every political campaign in the exact same way. On the congressional level, for Republicans in southern districts with sizeable African-American populations, the only way to win is to turn out the white voters, the "fuck-you boys," and you do that however you can. In districts with fewer black voters, this becomes less important—again, see Glaser's Race, Campaign Politics, and the Realignment in the South. Similarly, Democrats rely heavily on racial appeals in majority-black districts, although I do think this is qualitatively different. In some sense, these dynamics aren't as consciously malevolent as some might assume. But that doesn't mean they don't exist.
Today, of course, the South continues to pull all the puppet-strings in American politics, and a large number of important Senators and members of Congress get elected by mastering these racial dynamics—they are, as Glaser might say, "constrained" by them. That in turn determines the course of national policy to a large degree. Bush is quickly finding that he can't just lurch to the mushy center and advocate $200 billion in welfare for a predominantly black city without alienating a good chunk of his base. Until now, the GOP's big-government conservatism has succeeded by remaining the sort of big-government conservatism the "Fuck-You Boys" and "Fuck-You Old Men" like to see: lots of spending on the military and drugs for middle-class white seniors, while slashing food stamps and housing vouchers—since we all "know" where those go. (They go, of course, mostly to poor whites, but myths die hard.) To imagine race has nothing to do with any of this seems, I think, extraordinarily naive.
Okay, I'll be the first to admit, I don't know a whole heck of a lot about the science behind climate change. But luckily, I did stay at a Holiday Inn… No, even better, I did read Powerline this morning, which tells me this:
While it is theoretically possible for human and animal activities to affect the climate on Earth, the main factor causing fluctuations in temperatures on this planet, as on Mars, is variability in energy output from the Sun.
Uh... what? Powerline's little foray into "science" was spurred by recent news that polar ice-caps on Mars seem to be melting. Figuring that maybe, just maybe, there were better sources out there than Powerline, I googled the topic and got a handy Q&A page from the National Oceanic and Atmospheric Administration which addresses this very question. The key stuff:
Q: Can the observed changes be explained by natural variability, including changes in solar output?
Since our entire climate system is fundamentally driven by energy from the sun, it stands to reason that if the sun's energy output were to change, then so would the climate. Since the advent of space-borne measurements in the late 1970s, solar output has indeed been shown to vary. There appears to be confirmation of earlier suggestions of an 11 (and 22) year cycle of irradiance. With only 20 years of reliable measurements however, it is difficult to deduce a trend. But, from the short record we have so far, the trend in solar irradiance is estimated at ~0.09 W/m2 compared to 0.4 W/m2 from well-mixed greenhouse gases. There are many indications that the sun also has a longer-term variation which has potentially contributed to the century-scale forcing to a greater degree. There is though, a great deal of uncertainty in estimates of solar irradiance beyond what can be measured by satellites, and still the contribution of direct solar irradiance forcing is small compared to the greenhouse gas component. However, our understanding of the indirect effects of changes in solar output and feedbacks in the climate system is minimal. There is much need to refine our understanding of key natural forcing mechanisms of the climate, including solar irradiance changes, in order to reduce uncertainty in our projections of future climate change.
In addition to changes in energy from the sun itself, the Earth's position and orientation relative to the sun (our orbit) also varies slightly, thereby bringing us closer and further away from the sun in predictable cycles (called Milankovitch cycles). Variations in these cycles are believed to be the cause of Earth's ice-ages (glacials). Particularly important for the development of glacials is the radiation receipt at high northern latitudes. Diminishing radiation at these latitudes during the summer months would have enabled winter snow and ice cover to persist throughout the year, eventually leading to a permanent snow- or icepack. While Milankovitch cycles have tremendous value as a theory to explain ice-ages and long-term changes in the climate, they are unlikely to have very much impact on the decade-century timescale. Over several centuries, it may be possible to observe the effect of these orbital parameters, however for the prediction of climate change in the 21st century, these changes will be far less important than radiative forcing from greenhouse gases.
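For a rough sense of scale on NOAA's numbers, here's a quick back-of-the-envelope sketch. The ~0.09 and 0.4 W/m² forcing figures come straight from the passage above; the climate-sensitivity parameter used to translate forcing into an equilibrium temperature change is my own illustrative assumption (roughly 0.8 K per W/m²), not anything from the NOAA page:

```python
# Compare the two radiative forcings quoted by NOAA: the estimated trend
# in solar irradiance vs. the forcing from well-mixed greenhouse gases.
# The sensitivity value below is an assumed, illustrative figure.

solar_forcing = 0.09   # W/m^2, estimated solar irradiance trend (from NOAA quote)
ghg_forcing = 0.4      # W/m^2, well-mixed greenhouse gases (from NOAA quote)
sensitivity = 0.8      # K per (W/m^2), assumed equilibrium climate sensitivity

ratio = ghg_forcing / solar_forcing
print(f"GHG forcing is roughly {ratio:.1f}x the solar trend")
print(f"Implied equilibrium warming: solar ~{solar_forcing * sensitivity:.2f} K, "
      f"GHG ~{ghg_forcing * sensitivity:.2f} K")
```

However you pick the sensitivity number, the point survives: the measured greenhouse-gas forcing swamps the estimated solar trend by a factor of four or so, which is exactly NOAA's conclusion.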
Good to know, I think I'll stick with that. Still, what does explain the melting ice-caps on Mars? Clearly not a human-induced increase in greenhouse gases—so what then? Changes in Mars' position relative to the sun? Or is Powerline right, and solar variability far stronger than, you know, most scientists believe? (This, naturally, is the most implausible theory of the bunch.) To my (very) untrained eye, it seems like an actual mystery, although I guess this all depends on how quickly the Mars ice-caps are melting—which isn't really clarified anywhere.
MORE: I would hope it was clear that I'm not trying to deny the fact of human-induced global warming here on Earth, but maybe it wasn't. In any case, here's more on climate change on Mars.
This paper on development, by Peter Lorentzen, John McMillan, and Romain Wacziarg, has a fairly stunning thesis:
[W]e argue that high adult mortality reduces economic growth by shortening time horizons. Higher adult mortality is associated with increased levels of risky behavior, higher fertility, and lower investment in physical and human capital. Furthermore, the feedback effect from economic prosperity to better health care implies that mortality could be the source of a poverty trap. In our regressions, adult mortality explains almost all of Africa's growth tragedy. Our analysis also underscores grim forecasts of the long-run economic costs of the ongoing AIDS epidemic.
Emphasis added. On one theory, people in countries with higher mortality rates usually expect to die sooner, so they don't make the sort of investments—in education, say—that lead to higher economic growth. This can potentially create a vicious cycle: countries with high mortality rates don't grow, so they remain poor, so they don't have the resources to improve public health, and so on. What I'd like to know, though, is how, if the theory's true, some regions and nations managed to break the cycle in the first place. Lucky nutritional breakthroughs? Public health innovations? They never suffered at the hand of colonialism, slavery, and AIDS? No idea. On the other hand, maybe this means that increased efforts to lower mortality rates in Africa would be a more direct and effective way of generating sustainable economic growth on the continent than any of the other foreign aid solutions usually prescribed.
For pure, unbridled Wal-Mart bashing, this compilation (PDF) of figures, studies, and talking points from the Brennan Center for Justice has all the information you need. It's not fair and balanced—why, no mention of the potential benefits Wal-Mart can bring!—but it has some great tidbits, like: "In a study of over 3,000 counties, researchers found that counties with more Wal-Mart stores had a larger increase (or a smaller reduction) in the poverty rate between 1987 and 1999 than did counties with fewer or no Wal-Mart stores."
In the Financial Times of all places, Trevor Butterworth has a paean to the joys of the semicolon, and the people who adore and despise it. Count me as one who adores it, I guess. But—and this thought is hardly original—it's worth emphasizing again just how crucial punctuation can be; it's more than just style and rhythm. Every now and again I get hooked on a particular punctuation mark and watch it become indispensable not just to the way I write, but to the way I actually think. Excessive use of the em-dash means that every complete thought has to—just has to—have another complete thought couched in the middle; just the act of making a straightforward, declarative point becomes a chore. Giving up parentheses would mean giving up those countless crucial qualifiers and extra expositions—no, wait, I didn't clarify enough, let me cram this one more bit into the sentence! Dependent clauses also become tyrannical. And damned if I know how to emphasize things without italics; massive lifestyle changes would be necessary.
So, you know, when an editor says something like "No semicolon for you!"—as, apparently, Michael Kinsley prefers—it doesn't just clean up the style a bit, it makes it very hard to say anything in the way it needs to be said. It's all very important. Or maybe this is all just the mark of a weak and unversatile writer; oh well.
Welfare reform was always a fraud. The evidence for its claims of success never amounted to much. How could it be otherwise? Work doesn't pay a single woman enough to raise children. Never has. Welfare reform is about pushing a woman into the workforce for not much more money and a lot less time with kids, plus a child care bill that somebody has to pay. It's ridiculous.
Check out the MDRC's valuable case studies on this subject, which paint a very muddied picture. People have been leaving the welfare rolls, yes, but after reform in, for instance, Los Angeles, "families were not substantially better off financially even though many parents went to work." Meanwhile, the poverty rate continues to move upwards. As far as I've ever been able to tell, most of the positive post-reform gains have come as a result of other Clinton-era initiatives, like boosting the Earned-Income Tax Credit (in 1993) and increasing child-care assistance (in 1998) and so on. Generic liberal stuff. Well, plus a healthy economy and flukishly tight labor market in the '90s.
Welfare reform did, of course, shove people into the workplace. What's not so clear to me, though, is why this is such a fantastic goal in and of itself, as reform boosters like Mickey Kaus suggest. Does it make low-income families better off? Not so clear. The children! Does it help the children? Well, even in Minnesota, which has one of the more generous TANF programs in the country, reform "had no overall effect on the elementary school achievement of very young children." (Some disadvantaged children in the state saw gains, true, although against this, note that Minnesota was one of the few states that spent more under its reformed program than it did under the Bad Old System.)
Perhaps we'll just say that "shrinking the welfare rolls" is a positive end because it got a lot of people off their lazy asses and helped us feel morally better about ourselves as a nation. If that's your aim, then cheers, it worked. Plus, it saved the federal government a couple billion dollars each year—enough to fund an extra Don Young Highway or two. On the not-so-trivial other hand, shrinking the welfare rolls acts as a tax on the working class, by expanding the pool of labor and putting downward pressure on wages. Oops. Many centrist Democrats will say that welfare reform can and will be a stunning success story if only we increase the Earned-Income Tax Credit and raise the minimum wage and provide heaps more funds for child care and heaps more money for job training and so on. Well, no shit. Top Ramen is a great meal if it comes with a side of steak. If people are going to be forced to work then of course it helps to make that work pay. Otherwise, not.
This paper by Amy Finkelstein and Robin McKnight, "What Did Medicare Do (And Was It Worth It)?" has a fairly novel take on the value of health insurance. In the first ten years of Medicare, it seems, the program "had no discernible impact on mortality." That seems to jibe with Phillip Longman's view, among others, that health insurance has played only a small role in improving longevity rates in the past. Now that doesn't seem like the best way to think about public policy, since at some level we should care about individuals rather than aggregate statistics, but there you go. Plus, as Harvard's David Cutler has argued, Medicare has led to the adoption of new medical technologies which will probably, in the long run, have a very large effect on mortality rates.
But whatever. Even if there aren't any health gains from universal insurance, Medicare did substantially reduce out-of-pocket expenditures for many seniors, and "the welfare gains from such reductions in risk exposure alone may be sufficient to cover between half and three-quarters of the costs of the Medicare program." This social insurance function, Finkelstein and McKnight argue, makes the program "worth it" regardless of its value in improving the health of seniors.
So I've been trying to make sense of these remarkable two paragraphs stuffed at the very end of the recent Time cover story on U.S. missteps in Iraq:
Another hot debate in the intelligence community is whether to make a major change in the counterinsurgency strategy--to stop the aggressive sweeps through insurgent-riddled areas, like the recent offensive in Tall 'Afar, and try to concentrate troops and resources with the aim of improving security and living conditions in population centers like Baghdad. "We've taken Samarra four times, and we've lost it four times," says an intelligence officer. "We need a new strategy."
But the Pentagon leadership is unlikely to support a strategy that concedes broad swaths of territory to the enemy. In fact, none of the intelligence officers who spoke with TIME or their ranking superiors could provide a plausible road map toward stability in Iraq. It is quite possible that the occupation of Iraq was an unwise proposition from the start, as many U.S. allies in the region warned before the invasion. Yet, despite their gloom, every one of the officers favors continuing--indeed, augmenting--the war effort. If the U.S. leaves, they say, the chaos in central Iraq could threaten the stability of the entire Middle East. And al-Qaeda operatives like al-Zarqawi could have a relatively safe base of operations in the Sunni triangle. "We have never taken this operation seriously enough," says a retired senior military official with experience in Iraq. "We have never provided enough troops. We have never provided enough equipment, or the right kind of equipment. We have never worked the intelligence part of the war in a serious, sustained fashion. We have failed the Iraqi people, and we have failed our troops."
So to recap: none of the intelligence officers here, or their ranking superiors, can "provide a plausible road map toward stability in Iraq." Nevertheless, everyone "favors continuing—indeed augmenting—the war effort." That's just perverse. So perverse, one wonders what's going on here. Unlike politicians, or prominent liberal hawks trying to schedule TV appearances, these intelligence officers presumably don't need to save face by declaring that we must stay the course. If they truly thought that Iraq was doomed—i.e., that chaos in central Iraq would threaten the stability of the Middle East and al-Qaeda was destined to seize a safe base of operations—regardless of whether the United States stayed or left, they could presumably just say so. But they didn't. That means they either genuinely think muddling along under the current course is better than leaving because that's a sound assessment, or else they're simply unable to contemplate bad things happening. If the latter, they better start contemplating.
In the past, people like Anthony Cordesman have argued rather convincingly—to me, at least—that the military could more or less muddle through the current course, training Iraqi security forces and the like, and come through with at least a stable Iraq in tow. But the fact that not a single intelligence officer can "provide a plausible road map toward stability" changes that picture, for me, in a big way. These folks have screwed up before, but they seem to know more than most. Meanwhile, Elaine Grossman of Inside the Pentagon reports that behind the scenes, officers are cautioning that "even under the best circumstances the emerging Iraqi army does not appear ready to fill the security vacuum left by departing U.S. troops." Peter Galbraith says the security forces are segregated into Sunni, Shiite, and Kurd, and a cohesive, national Iraqi Army is nowhere to be found: "There is exactly one mixed battalion." Potentially, announcing a U.S. withdrawal will make this situation much, much worse and fracture the army even more, as various militias gird themselves for what they see as an inevitable expansion of the current civil war after the U.S. leaves, a girding that only makes that war inevitable. But it's not clear that things will get any better if we stay around. Where's the "plausible road map"? Nowhere.
So what does that leave? Getting out sooner rather than later, regardless of what the situation looks like on the ground, so as to emerge with at least a functioning Army intact. But then what? What if pulling out gives people like Zarqawi a relatively safe and stable base of operations, à la Afghanistan—and what if that base is used to launch attacks on Europe or the United States? Will the liberals who earned their hawk stripes by supporting war in Afghanistan also support a re-invasion of Iraq? It would be the same rationale, no? Should the United States, once it leaves, start paying and arming Shiite death squads to mow down the Sunni population? (I mean, more than we currently are doing, of course.) This whole thing seems deeply, deeply fucked. No wonder intelligence officials are so unwilling to contemplate failure.
Oh good. Craven, craven appeasement towards North Korea seems to have borne some very tentative fruit:
North Korea agreed Monday to end its nuclear weapons program in return for security, economic and energy benefits, potentially easing tensions with the United States after a two-year standoff over the North's efforts to build atomic bombs.
The United States, North Korea and four other nations participating in negotiations in Beijing signed a draft accord in which the North promised to abandon efforts to produce nuclear weapons and re-admit international inspectors to its nuclear facilities.
Foreign powers said they would provide aid, diplomatic assurances and security guarantees and consider North Korea's demands for a light-water nuclear reactor.
The only thing to be said about this is that, as I said, it's very tentative. Once the parties start haggling over verification and inspectors, demands and counter-demands will likely get a lot thornier, and who knows where that will go? Also of concern: the breakthrough this time around seemed to come when the United States said it would "consider" providing light-water nuclear reactors—reactors that can provide electricity and are allowed under the Nuclear Non-Proliferation Treaty—to North Korea. Note that this offer also sat at the center of the 1994 Agreed Framework between the Clinton administration and Kim Jong Il, a deal that collapsed when Congress refused to fund the light-water reactors and normalization between the two countries never materialized. Getting the party of Tom DeLay to fund a nice nuclear-powered Christmas present for Kim Jong Il seems like, um, a bit of a feat, even for Bush. Kim Jong Il, meanwhile, obviously has no qualms about jerking people around.
Anyway, one could harp on the Bush administration for taking three years to return things roughly to where they were in 2002—only now, North Korea has long since carried away those plutonium fuel rods that were once under IAEA lock and key, and could conceivably keep them hidden in an undisclosed location, even if inspectors are allowed back in the country. But whatever, the administration deserves credit for pushing things this far. For a while, it didn't seem like China was willing to flex its muscles and push North Korea towards an agreement—which was precisely why many observers, John Kerry included, thought the six-party talks had failed—but China's leaders seem to have changed their minds. Why they did so is an interesting question.
Okay, I'm totally stumped: someone's going to have to tell me what the correct opinion to have on something like this is:
The Danish government is under attack for paying for its disabled citizens to have sex with prostitutes.... Stig Langvad of the country's Disabled Association said the politicians critical of the plan are showing "double standards".
He said: "The disabled must have the same possibilities as other people. Politicians can debate whether prostitution should be allowed in general, instead of preventing only the disabled from having access to it."
Looking around for more details elsewhere on the internet, it seems that social workers accompany the disabled to licensed, government-subsidized brothels once a month for a 30-minute session. First reactions: 1) Whoa...; 2) Only once a month? We pay taxes here in America and get screwed every single day; 3) Not that far a leap from this to taxpayer-subsidized sex for everyone—why not subsidize brothels for unattractive or shy people? Same principle, no?; 4) Well, what of it? If adequate housing and a decent living and all that jazz are things lefties think government should help guarantee, why not a happy sex life too?; 5) Assuming there aren't enough male prostitutes, or gay prostitutes, or tranny prostitutes, or whatever, to go around, aren't there discrimination issues here?; 6) The health care angle: this site claims, plausibly, that providing seniors with pornography and access to prostitutes works far, far better than medicine in many cases—think of the health care dollars saved! Er, and so on.
In the Hitchens-Galloway debate last night, which you couldn't have paid me to watch, Galloway apparently said that Hitchens' descent into full-blown Bush shill was the "first ever metamorphosis of a butterfly back into a slug." Roger Kimball, always ready to flub a witty comeback, retorts: "I am no entomologist. But charity requires that I inform Mr. Galloway that slugs do not grow up to be butterflies." Charity requires him. That wry little imp...
But since Kimball rarely gets much of anything right, I thought I better fact-check this bit just to make sure. As it turns out, there's a whole family of "slug moths" (Limacodidae), although the caterpillars in question here are only called "slug caterpillars" because they have stumpy legs and no forelegs, forcing them to gloop around on their bellies like slugs. But they're still technically caterpillars, so Kimball's right. This picture, though, of a "cup moth" caterpillar (Doratifera vulnerans) is seriously cool:
Those little spiny bits are, of course, poisonous. No hapless toad, he. A little more searching around brings us to the saddleback caterpillar (Sibine stimulea):
Just you try to blindside that little feller. And lo, the crinkled flannel moth (Lagoa crispata):
Now that's just weird. On an unrelated note, comments on this site seem to be malfunctioning for all of the people some of the time, and some of the people all of the time. I tried to ask someone about it on the Haloscan tech-support board, to no avail. One day I'll get truly angry and call them "Haloscum" or something equally clever ("Haloscam"?), but I'll hold off for now; maybe they're just having server issues.
At last, the moment we've really been waiting for: Bill Clinton, on the record, about UFOs and government cover-ups. This from a Q&A session at some investor forum in Hong Kong:
Q: From president to president, do you pass along a list of secrets - you know like where's Jimmy Hoffa? What really happened at Roswell? Without giving away any state secrets, is there something that we can all look forward to in the future to read about that you know that we don't know that will make reading the National Enquirer required reading?
CLINTON: (Laughing and blushing) Well I don't know if you all heard this, but, there was actually, when I was president in my second term, there was an anniversary observance of Roswell. Remember that? People came to Roswell, New Mexico from all over the world. And there was also a site in Nevada where people were convinced that the government had buried a UFO and perhaps an alien deep underground because we wouldn't allow anybody to go there. And uhm… I can say now, 'cause it's now been released into the public domain. I had so many people in my own administration that were convinced that Roswell was a fraud but this place in Nevada was really serious, that there was an alien artefact there. So I actually sent somebody there to figure it out. And it was actually just a secret defence installation, alas, doing boring work that we didn't want anybody else to see.
The Roswell thing, I think, really was an illusion. I don't think it happened. I mean I think there are rational explanations and I did attempt to find out if there were any secret government documents that revealed things. If there were, they were concealed from me too. And if there were, well I wouldn't be the first American president that underlings have lied to, or that career bureaucrats have waited out. But there may be some career person sitting around somewhere, hiding these dark secrets, even from elected presidents. But if so, they successfully eluded me…and I'm almost embarrassed to tell you I did (chuckling) try to find out.
Dan Drezner links to a new OECD study (pdf) on education, and the results aren't entirely encouraging for the motherland—the United States is still lagging behind its developed-country peers in math and science education—but we seem to be improving. As far as the "What is to be done?" question goes, this passage deserves comment:
Lower expenditure cannot automatically be equated with a lower quality of educational services. Australia, Belgium, the Czech Republic, Finland, Japan, Korea, the Netherlands and New-Zealand, which have moderate expenditure on education per student at the primary and lower secondary levels, are among the OECD countries with the highest levels of performance by 15-year-old students in mathematics.
The study also notes that the United States spends, in the aggregate, much more on education than any other OECD country besides Switzerland. So the answer to fixing America's schools doesn't involve spending more money, right? Maybe, but not necessarily. First question: A good deal of public education spending in the United States goes toward students with disabilities; in 2004, IDEA grants to states totaled $12 billion, roughly a tenth of all federal education spending. So I wonder what the OECD numbers look like with that factor removed.
Second question: Looking only at aggregate expenditures seems misleading to me—as Jonathan Kozol reminds us in Harper's this month, the United States boasts a segregated public school system in which many (often white, suburban) districts rake in obscene amounts of money from local property taxes, while others (often black or Hispanic, urban) have very little to spend on their students. A chart merely showing that the U.S. spends a lot on education obscures some of these points. On the other hand, the "between-school variance" on public education in the United States was fairly low, when compared to supposedly stellar countries like Japan, Germany, and South Korea. I don't know if this means that our savage inequalities aren't quite as savage as they are elsewhere around the world, but it's fairly surprising.
Flipping through some of the other charts, it looks like the United States pays its secondary-school teachers more than most other countries, on an absolute level, but in the context of GDP per capita, our public school teachers don't make very much. Incidentally, Norway and Sweden pay their teachers even less than we do, when compared to GDP per capita, and they seem to be lagging in math and science too. Coincidence? No idea; it would be interesting to see some regressions on this. I also see that teachers in the United States teach far and away more hours than any of their OECD peers, while teachers in Japan—a country generally noted, with caveats like the above, for its educational excellence—teach only about half as many hours as their American counterparts. This doesn't pass for proof that underpaid and overworked public-school teachers are partly the reason for America's poor education showing, but on the surface that idea has at least some plausibility.
Oh boy, nothing I hate worse than this Pledge of Allegiance business. First a court declares pledge recitation in schools unconstitutional, then the public bares its teeth and growls at "liberals" and Democrats everywhere for somehow being responsible for this craziness (despite the fact that a majority of Dems like the pledge just the way it is), and all of a sudden it gets a whole lot easier for Bush to hurl an ultraconservative judge onto the Supreme Court. From a practical standpoint, it just sucks, and let's face it, no one is harmed by the "under God" portion of the Pledge. Or rather, insofar as people feel alienated from the dominant Judeo-Christian culture in this country, which certainly does happen, it's not the fault of two meaningless words that other eight-year-olds are mumbling by rote, half-asleep, in school every morning.
Moreover, I'm not even sure I like the latest ruling on the merits: District Judge Lawrence Karlton argued that children have a right to be "free from a coercive requirement to affirm God." Um, what? Children, as everyone knows, are not at all required to say the pledge at school—they can sit in their seat and look glum if they damn well please. The idea that practices are "coercive" merely if they involve other people mumbling things by rote, half-asleep, seems like a dangerous precedent to set. Slippery slope and all that. I'm serious. Isn't there some other way to argue Newdow's case? In typical San Francisco liberal fashion, I think reciting a pledge is all very silly and deserves a good sneer, but I wouldn't want to call it "coercive," or worse, argue that listening to other people say stuff amounts to a "coercive requirement to affirm" what they're saying. Plus, everything said in the first paragraph.
MORE: Volokh on some of the technical issues involved: Judge Karlton seems to be basing his decision on a reversed Ninth Circuit Court opinion that may not have any precedential value anyway, on account of being reversed, sort of. Or something.
In the latest Boston Review, Rebecca Saxe has a fun essay on recent experiments that try (and struggle) to pinpoint "moral reasoning" in the brain. Relatedly, Chris of Mixing Memory had two fascinating posts on the subject here and here. All very easy to follow for lazy dilettantes like me. Also, one of the results that Saxe cites—that only psychopathic children are unable to distinguish between morality and obeying a higher authority—has some awesomely belligerent applications, but I don't really want to go there.
This entire debate about whether or not "conservatism" has advanced or declined under big-spenders like George W. Bush and Tom DeLay seems muddled to me. Here's another possible way of looking at it, which may or may not be right. Bush-style "conservatism" over the past decade, I think, has basically taken up the same aims as DLC-style "liberalism" (or even, in a sense, Lyndon Johnson-style liberalism): you have technocratic elites commandeering the resources of the administrative state to enact their preferred social policies and to steer taxpayer money towards their favored constituents. Obviously, the two sides have somewhat different constituents: Bush favors the capitalist class, the DLC favors the educated professional class. Or whatever. But there you go.
Bush's approach has, of course, succeeded on its own terms, and it's "failed" only in the sense that it could be hijacked by a change in government. The Medicare bill really did steer lots of money towards the health care industry, but Democrats could easily retake Congress and steer that money back towards the elderly. In neither case, though, will you get truly "liberal" or "conservative" health care reform—Medicare has become at once too big and too industry-dominated. Likewise, No Child Left Behind, as a program, can be used to funnel resources towards either the testing industry or teachers' unions, depending on who's in power. But in neither case will you get truly "liberal" or "conservative" education reform, just more of the same. Social Security would be privatized by now if Martin Frost, Tom Daschle, and John Breaux were still in office, and if that had happened, well, same thing.
Basically, what the rise of Bush-style conservatism has helped preclude is the possibility of either serious libertarianism or serious progressivism. Libertarianism because, obviously, the state's never getting any smaller—Bush proved that can't be done. Progressivism, though, because the Bush administration has done very lasting damage to what few social democratic cornerstones this country once had. Republicans have delivered mortal blows to labor over the past decade from which it may never recover: unions will still live on, of course, but in their weakened state they'll pursue narrowly tailored "business unionism" rather than broad-based social change. Ditto with the environment, which has suffered damages that will take years if not decades to repair—if that's even possible. Wall Street and the defense industry have become unimaginably powerful and self-perpetuating, with no countervailing forces in sight. Then you have the pharmaceutical and insurance industries, which, since 1994, have become so entrenched in our health care system that they probably can't ever be dislodged, which means that both Republicans and Democrats can tinker on the margins with health care policy, but that's about it.
Meanwhile, the "K Street Project" and the increased influence of the lobbyist class in Washington have likely created a self-sustaining entity: as soon as Democrats regain power they'll just drink from the same trough, slopping it about in a somewhat more liberal direction, but keeping the basic set-up in place. (My guess is that the growing influence of the financial industry within the Democratic Party will have very serious consequences in the future.) Ditto with the increasing centralization of presidential power. Oh, and the sham "reforms" wrought by McCain-Feingold probably killed off, rather than saved, any chance for a real campaign finance overhaul—money in politics is here to stay. Not to mention the fact that the prospect of endless instability in the Middle East (perhaps) and yawning long-term deficits (surely) have limited the maneuvering room of any "liberal" government that might come to power anytime soon.
So yes, we'll get Bush-style conservatism vs. DLC liberalism for eons to come, thanks to the sort of state Bush and DeLay have helped reify—though they hardly created it. Over the long haul, neither side will gain an absolute edge: the Republicans have a rural-state advantage in the Senate, while the Democrats have popular opinion on cultural and many policy issues on their side. Nevertheless, we're looking at two patronage parties, frozen in power until the next major economic crisis hits and knocks over the whole damn chessboard. Obviously I wouldn't equate the two sides: the Republicans are doing real damage and need to be kicked out as soon as possible, and the Democrats can still do a great deal of good, but I'm afraid "conservatism" isn't in any more of a crisis, fundamentally, than "liberalism." Limited government (along with a return to 1950s social conservatism) on the right and social democracy on the left have both been throttled. Whether that's okay with people or not is another question.