February 17, 2010

Submarines From Scratch

I'm thinking about a new career in amateur sub-building. Here's 34-year-old Tao Xiangli showing that anyone can do it, really. All it takes is a little ingenuity and scrap metal:



Total budget? $4,400. Of course, if we wanted to take it up a notch, there's Peter Madsen, who built the largest homemade sub in the world, the UC3 Nautilus:



Total cost: "millions of Danish crowns"—though it was still cheaper than a professionally built diesel submarine. Then again, it might be easier to build your own submarine in Denmark in the first place, since anyone there can construct a vessel without permission, as long as it's shorter than 24 meters. Meanwhile, the state of New York seems to look less favorably on makeshift subs gurgling along the channel.
-- Brad Plumer 5:15 PM
Post-Apocalyptic Hard Drives

Suppose human civilization collapses one day. That's not such a zany notion—lots of past civilizations have collapsed. Maybe there's nuclear war, or giraffe flu, or maybe the Yellowstone supervolcano finally heaves and spews. It's a fat disaster for a while, but eventually things settle down. New civilizations form, people figure out electricity, and so forth. (Actually, as Kurt Cobb argues, it might not be possible to start an industrial society from scratch a second time 'round, since we've already used up most of the easiest-to-access ores and fuels. But ignore that.)

Anyway, the question: Would historians of the future be able to examine our hard drives and servers and study what twenty-first-century folks got up to? (Surely all the best info on what caused the apocalypse will be in digital form.) How much of our data would still be intact? Apparently, no one really knows for sure:
Hard drives were never intended for long-term storage, so they have not been subjected to the kind of tests used to estimate the lifetimes of formats like CDs. No one can be sure how long they will last. Kevin Murrell, a trustee of the UK's national museum of computing, recently switched on a 456 megabyte hard drive that had been powered down since the early 1980s. "We had no problems getting the data off at all," he says.

Modern drives might not fare so well, though. The storage density on hard drives is now over 200 gigabits per square inch and still climbing fast. While today's drives have sophisticated systems for compensating for the failure of small sectors, in general the more bits of data you cram into a material, the more you lose if part of it becomes degraded or damaged. What's more, a decay process that would leave a large-scale bit of data readable could destroy some smaller-scale bits. "The jury is still out on modern discs. We won't know for another 20 years," says Murrell.

Most important data is backed up on formats such as magnetic tape or optical discs. Unfortunately, many of those formats cannot be trusted to last even five years, says Joe Iraci, who studies the reliability of digital media at the Canadian Conservation Institute in Ottawa, Ontario.
Actually, though, we don't even need to consider the apocalypse. The fragile state of digital storage is already causing trouble. NASA has a few people racing to recover old images from its Lunar Orbiter missions in the 1960s, which are currently stored on magnetic tapes and may not be long for this world. And the National Archives is struggling to preserve its digital records, which tend to rot faster than paper records.

A related tale of disintegrating media comes from Larry Lessig's Free Culture—though this one has a twist. There are a lot of films that were made after 1923 that have no commercial value anymore. They never made it to video or DVD; the reels are just collecting dust in vaults somewhere. In theory, it shouldn’t be too hard to digitize these films and put them in an archive. But alas, thanks to the Sonny Bono Copyright Term Extension Act that was passed by Congress in 1998, any film made after 1923 won't enter the public domain until at least 2019.

That means these films are still under copyright, and anyone who wanted to restore them would have to track down the copyright-holders (not always easy to do) and probably hire a lawyer. And who's going to go through that much trouble just to restore some obscure movie that only a few people might ever watch? Yet a lot of these older movies were produced on nitrate-based stock, and they'll have dissolved by the time 2019 rolls around, leaving nothing behind but canisters of dust. It's sort of tragic.

On the cheery side, post-apocalyptic historians will presumably have less trouble snagging copies of Avatar and Titanic—that's the sort of digital media most likely to survive, if only because there are so many copies lying around. So James Cameron will be our Aeschylus, huh?
-- Brad Plumer 4:36 PM

February 16, 2010

I Don't Respond Well To Mellow

Annie Hall is pretty much a perfect film. But as Daniel Kimmel describes in I'll Have What She's Having: Behind the Scenes at the Great Romantic Comedies, it didn't start out that way. The original version of the film was an epic disaster, in dire need of hacking and kneading:
In the editing room, Annie Hall was an incoherent mess. [Co-writer] Marshall Brickman was appalled. “To tell you the truth, when I saw the rough cut of Annie Hall, I thought it was terrible, completely unsalvageable. It was two and a half hours long and rambled and was tangential and just endless.”

The original version was essentially Alvy free-associating about his life and his worries. Annie (Diane Keaton) was seen briefly and then disappeared from the movie for fifteen minutes. … Even the scenes with Annie—they were already there, of course—led to fantasies and flashbacks galore. The sequence where Alvy, Annie, and Rob (Tony Roberts) head to Brooklyn originally ran ten to fifteen minutes and had many more scenes than the one to two we see in the final film. Alvy and his date from Rolling Stone (Shelley Duvall) spun off to a scene where they wound up in the Garden of Eden talking to God. When Alvy is arrested in Los Angeles (after playing bumper cars in the parking lot), there was a long scene of him interacting with the other prisoners in his cell.

Like the sculptor who chips away at the block so that the statue hidden inside can emerge, Allen and [editor Ralph] Rosenblum began hacking away at the movie to see if there was something in the material worth saving. “It was clear to Woody and me that the film started moving whenever the present-tense material with him and Keaton dominated the screen, and we began cutting in the direction of that relationship,” Rosenblum later wrote. They tossed out entire sequences, tightened things up, and always kept the focus on Alvy and Annie. Even the scenes of flashbacks to Alvy’s earlier marriages were greatly shortened. Some characters were eliminated altogether. Said Allen, “There was a lot of material taken out of that picture that I thought was wonderfully funny. … It wasn’t what I intended to do. I didn’t sit down with Marshall Brickman and say, ‘We’re going to write a picture about a relationship.’”
That's not overly surprising. A lot of Woody Allen's earlier films are, well, funny, but not exactly coherent—usually the plots are thin and heavily improvised, existing only so that there's something to hang an endless series of jokes and gags on. (They were basically live-action versions of his New Yorker pieces from the time.) And the movies work because the jokes and gags are hilarious. With Annie Hall, it seems he finally pushed that habit to excess, at least in his first go-round. But hey, maybe it's a good thing he did! Otherwise his editors might never have decided that enough was enough.

P.S. Though it's worth noting that sometimes the editors were too zealous. They had originally, for instance, cut out the famous scene with Annie's brother Duane. It was only later, once the film started doing well in front of test audiences and they figured eh, maybe they could afford a few indulgences here and there, that they snuck Duane back in. It's funny to think how Christopher Walken's career might've unfolded if they'd kept that bit out, since that was a big early break. Maybe he doesn't go on to do The Deer Hunter the following year? And then what?
-- Brad Plumer 5:38 PM
Lost In Translation

There are plenty of English-language writers who sell a lot of books overseas—it's not uncommon for even lesser-known American authors to get their novels translated into multiple languages. But the reverse doesn't hold nearly as well. Only a very small fraction of foreign-language authors ever manage to get published in the United States. The last two literary Nobelists, Herta Müller and J.M.G. Le Clézio, weren't widely available in English until after they won their prizes (at which point publishers scrambled to crank out translations). Why does the flow mainly go in one direction? It doesn't seem to be because Americans are boors. Emily Williams, a former literary scout, offers up a more subtle explanation:
There has been a hegemony for years of English-language books being translated into many other languages, a cultural phenomenon comparable (though much smaller in scale) to US dominance of the worldwide film market. Bestselling American authors like Michael Crichton and John Grisham and Danielle Steele and Stephen King have, in translation, reliably topped bestseller lists around the world. As the market for matching these authors to publishers abroad matured, it opened the door to less commercial writers and other genres (in nonfiction, for example, American business books continue to be in high demand).

A certain savvy in picking the right American books to translate developed into a valuable editorial skill in markets abroad. Imprints and publishing strategies were then established to capitalize on books in translation. Foreign rights turned into a profit center for US publishers, and scouting agencies sprang up to help navigate the increasingly complex marketplace.

The rising fortunes of US books abroad coincides with the rise of American pop culture in general, but also has to be partly attributed to a strong culture of commercial fiction… that, until quite recently, simply didn’t exist in many other countries. A foreign editor I worked with once compared US commercial fiction to Hollywood blockbusters: any one book might be better or worse overall, but there’s a certain level of craftsmanship you can depend on.
Many countries abroad, in other words, have editors and scouts who focus on selecting U.S. books for translation, and those people can develop deep expertise in the U.S. market. On the flip side, foreign books come to the U.S. from a whole slew of different countries and languages, so it's much harder for any single American editor to know one country or region well enough to judge which books will do well here. As a result, says Williams, the foreign-language books that do get translated into English often get picked via haphazard connections—an agent here happens to have a close tie with a particular agent in Spain (say).

Granted, there are still far more foreign-language books translated into English each year than you or I could ever hope to read. But it seems likely that there are a great many undiscovered gems out there in other countries—books from Latin America or Europe that could catch on here and garner critical acclaim (or start a craze the way Roberto Bolaño has done recently), but simply haven't made it into English through sheer bad luck.

(Flickr photo credit: christing-O-)
-- Brad Plumer 4:12 PM

January 29, 2010

Heavy Toonage

Due to a series of increasingly frivolous Google searches, I just spent half an hour reading up on the history of Saturday-morning cartoons in the 1980s. (This all started with a legitimate work-related query and somehow careened out of control.) Anyway, I'm sure everyone's well aware that the big, popular cartoons of that era—"Transformers" or "G.I. Joe" or "Teenage Mutant Ninja Turtles"—were created primarily to sell action figures and boost toy sales. But the story behind their development, as best I can make it out, is pretty interesting.

Back during the 1960s, Hasbro's G.I. Joe was one of the best-selling toys around, the first action-figure blockbuster. But that changed once Star Wars came out; suddenly, all the kids were demanding Luke Skywalker and Darth Vader dolls. Part of the shift came down to marketing: The Star Wars toys were being promoted by a ridiculously popular movie, while Hasbro wasn't even allowed to use animation in its G.I. Joe commercials. At the time, the National Association of Broadcasters had outlawed animated toy commercials, for fear that they'd blur the line between fantasy and reality for young children.

At that point, Hasbro came up with an ingenious idea. There were strict rules about how you could advertise toys, but there were fewer rules on advertising comic books. So the company's executives went to Marvel Comics and said, here, we'll give you the license for our G.I. Joe comic-book line and even spend millions of dollars of our own money advertising it. Marvel, naturally, leapt at the deal, and a best-selling series was born. Hasbro, meanwhile, could finally run animated TV ads starring G.I. Joe (nominally the ads were for the comic book, so the broadcasters' rule didn't apply). And it worked: By the 1980s, G.I. Joe action figures were leaping off the shelves again. (Of course, it helped that the comic-book series was relatively well-conceived; plenty of other toy-comic tie-ins flopped.)

After that came the cartoons. For a long time, the FCC had prevented companies from creating TV shows that centered on toys (Mattel had tried this with Hot Wheels in the 1960s and got smacked down). But by 1983, Reagan's FCC had relaxed this rule. Pretty soon, a new wave of toy-themed shows started hitting the airwaves: "Transformers," "Thundercats," "He-Man." The trend stretched all the way down to "Teletubbies" in the 1990s. The cartoons instantly transformed the toy industry—the most successful, "Transformers," sold $100 million worth of merchandise in its first year.

In his fascinating book The Real Toy Story, Eric Clark argues that this new cartoon-toy symbiosis also altered how kids approached playtime: "Because the backstory and the programs dictated play, the nature of play itself changed—TV took control of the play environment. The old adage that a child should dictate what the toy does was discarded." Part of me wants to believe that that's slightly overstated, and that kids are more creative than this. (I had a Transformer or two back when I was younger, and I never had them reenact plots from the cartoons—Elizabethan-style court intrigue and playground bullying scenarios were far more common.) But Clark's surely onto something here; it's hard to imagine that there could be such a massive shift in toy advertising without large effects on child psychology, no?

(Flickr photo credit: Brian McCarty)
-- Brad Plumer 7:20 PM

January 28, 2010

Gruesome Tongue Twisters

What's the world's most throat-chokingly difficult language? Here's a worthy contender:
On balance The Economist would go for Tuyuca, of the eastern Amazon. It has a sound system with simple consonants and a few nasal vowels, so is not as hard to speak as Ubykh or !Xóõ. Like Turkish, it is heavily agglutinating, so that one word, hóabãsiriga means “I do not know how to write.” Like Kwaio, it has two words for “we,” inclusive and exclusive. The noun classes (genders) in Tuyuca’s language family (including close relatives) have been estimated at between 50 and 140. Some are rare, such as “bark that does not cling closely to a tree,” which can be extended to things such as baggy trousers, or wet plywood that has begun to peel apart.

Most fascinating is a feature that would make any journalist tremble. Tuyuca requires verb-endings on statements to show how the speaker knows something. Diga ape-wi means that "the boy played soccer (I know because I saw him)," while diga ape-hiyi means "the boy played soccer (I assume)." English can provide such information, but for Tuyuca that is an obligatory ending on the verb. Evidential languages force speakers to think hard about how they learned what they say they know.
I'm convinced! The piece also argues that, contrary to popular belief, English is a relatively easy language to learn: "verbs hardly conjugate; nouns pluralise easily (just add 's', mostly) and there are no genders to remember."

That may all be true, but figuring out how to spell words in English can be a nightmare; I do believe this is the only major language in which you can actually hold spelling bees. In Spanish, by contrast, a spelling competition would be pointless since it's trivial to write out a word once you hear it. Maybe the only other possible bee-language is French—except that French-speaking countries hold dictation contests instead, as does Poland with its "Dyktando." (These usually involve scribbling down, word for exact word, a long literary passage that's read aloud.)

Japanese, meanwhile, is a comparatively simple language to speak, but then you've got the vast jungle of different characters. I don't know if there are bees for that, but there is the Kanken, the national Kanji Aptitude Test, which tests for writing, pronunciation, and stroke order, and has twelve different levels: 10 through 3, pre-2, 2, pre-1, and 1—with 10 being the easiest and 1 the hardest. As I recall, your average well-educated native speaker should be able to pass pre-2. But level 1 is no joke—you have to know about 6,000 different kanji, and, in some years, only about 200 people in the whole country earn a passing grade. The rest, I guess, just have to mutter "hóabãsiriga."
-- Brad Plumer 11:25 PM
Our Nerdiest President

Huh, I had no idea that James A. Garfield was credited with discovering a novel proof of the Pythagorean Theorem (a clever one, too, involving a trapezoid). He did this back in 1876, before he became our second assassinated president and while he was still serving in the House—as he tells it, the proof came up in the course of "some mathematical amusements and discussions with other [members of Congress]."
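For anyone who wants the gist, here's the standard reconstruction of Garfield's argument (the algebra, not his original wording): two copies of a right triangle with legs a and b and hypotenuse c fit inside a right trapezoid whose parallel sides are a and b and whose height is a + b; the leftover piece is an isosceles right triangle with legs c, and equating the two ways of computing the trapezoid's area gives the theorem.

\[
\underbrace{\tfrac{1}{2}(a+b)(a+b)}_{\text{trapezoid formula}}
\;=\;
\underbrace{\tfrac{1}{2}ab + \tfrac{1}{2}ab + \tfrac{1}{2}c^{2}}_{\text{two copies of the triangle plus an isosceles right triangle with legs } c}
\]

\[
a^{2} + 2ab + b^{2} \;=\; 2ab + c^{2}
\qquad\Longrightarrow\qquad
a^{2} + b^{2} \;=\; c^{2}.
\]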

I'm not sure if the country today is better off or worse off for the fact that House members no longer sit around pondering geometry in their spare time. In any case, it seems Garfield was also working on a pretty expansive math-education agenda before he got shot. Oh, and bonus Garfield trivia: He took a nasty swipe at the Mormon Church in his inaugural address (not that it mattered much—the speech was sort of a low-turnout affair).
-- Brad Plumer 6:34 PM
Exporting Depression

Ethan Watters has a fascinating New Scientist report on how big drug companies are basically "exporting" Western notions of mental illness to other countries, in order to create new markets for their products:
The challenge GSK faced in the Japanese market was formidable. The nation did have a clinical diagnosis of depression—utsubyo—but it was nothing like the US version: it described an illness as devastating and as stigmatising as schizophrenia. Worse, at least for the sales prospects of antidepressants in Japan, it was rare. Most other states of melancholy were not considered illnesses in Japan. Indeed, the experience of prolonged, deep sadness was often considered to be a jibyo, a personal hardship that builds character. To make paroxetine a hit, it would not be enough to corner the small market for people diagnosed with utsubyo. As Kirmayer realised, GSK intended to influence the Japanese understanding of sadness and depression at the deepest level. ...

Which is exactly what GSK appears to have accomplished. Promoting depression as a kokoro no kaze—"a cold of the soul"—GSK managed to popularise the diagnosis. In the first year on the market, sales of paroxetine in Japan brought in $100 million. By 2005, they were approaching $350 million and rising quickly.
Now, this sort of marketing of illness is hardly novel. Back in 2005, I wrote a piece for Mother Jones about the "corporate-sponsored creation of disease" (and, please note, that's not my paranoid lefty term for the practice, but a phrase taken from a Reuters Business Insight report written for Pharma execs). Drugmakers aggressively push iffy conditions like "premenstrual dysphoric disorder" in order to extend patents and boost sales. Not surprisingly, GSK again makes a cameo:
When GSK, the British drug company, wanted to repackage its best-selling antidepressant, Paxil, to treat "social anxiety disorder"—a questionable strain of social phobia that requires medication rather than therapy—it hired PR firm Cohn & Wolfe to help raise awareness about the condition. Slogans were developed: "Imagine being allergic to people." Posters featuring distraught men and women described the symptoms, which only seem like everyday nervousness to the untrained eye: "You blush, you sweat, shake—even find it hard to breathe. That's what social anxiety disorder feels like." Journalists were faxed press releases so that they could write up stories about the new disorder in the New York Times and Wall Street Journal. (Does the deadline-pressed journalist need a bit of color for her story? No problem: Patient-advocacy groups, usually funded by drug companies, can provide patients to interview.) GSK even got University of California psychiatrist Murray Stein to vouch for the drug. Stein, it turns out, was a paid consultant to seventeen drug companies, including GSK, and had run company-funded trials of Paxil to treat social anxiety disorder.
How devious. Anyway, the international angle is new to me and grimly riveting. Here's a New York Times Magazine piece by Watters that further explores the phenomenon, focusing less on the corporate angle and more on the broader fact that "we in the West have aggressively spread our modern knowledge of mental illness around the world. ... That is, we've been changing not only the treatments but also the expression of mental illness in other cultures."
-- Brad Plumer 5:54 PM

January 27, 2010

Dos And Don'ts For Running A Host Club

The Kabukicho district in Tokyo is famous for, among other sleazy wonders, its host and hostess clubs. The idea here is simple enough: In a hostess club, you have paid female employees who chat with male customers, light their cigarettes, pour drinks, sing karaoke, and generally make the men feel, I don't know, titillated. It's not a sex club—more like a flirtation club.

And then the host clubs are the reverse deal: Male hosts wait on female customers, listen to their sorrows, flatter them, light their cigarettes, pour drinks.... Since Japanese women don't typically get "waited on" by men, these clubs fill a real need. Anyway, host and hostess clubs have no doubt been featured in light-hearted New York Times pieces before. But I was reading Jake Adelstein's Tokyo Vice, a terrific book about the city's seamy underbelly, and there's a part where a cop explains that many host clubs are actually horribly manipulative:
"It used to be that the only women who went to host bars were hostesses, but times have changed. What we keep seeing is college girls, sometimes even high school girls with money, who start going to these host clubs. They love the personal attention, and maybe they get infatuated with the hosts, who milk them for everything they have. The girls accumulate debts, and at some point the management introduces them to a job in the sex industry so that they can pay off their debts. Sometimes the guys running the host bars are the same guys who run the sex clubs."
Not all host clubs run this racket—the respectable ones try to avoid plunging their customers into crippling debt—but many of the clubs are just thinly veiled organized-crime outfits. Meanwhile, here's one young host explaining the ins and outs of his job to Adelstein:
Sometimes the thing to do is to find an actor you resemble and then basically do an impression of the guy. You make the customer feel like she is with a celebrity. ... But most of the time I just say that I'm a graduate student in law at Tokyo University and I'm just hosting to pay the tuition. It makes the customer feel like she's contributing to society, not just to my wallet. ...

You have to be able to talk to customers about almost anything, even where they send their kids to school. So I subscribe to four women's magazines to make sure I know what kinds of concerns they have. They also like to talk about television programs, but since I don't have time to watch TV I stay current by reading TV guides. ...

The bad thing is that my parents hate that I do this, even if I don't plan on doing it forever. You don't have a personal life. Every day is like summer vacation, except that you don't really have freedom. You spend most of your free time waiting on customers in one way or another; sometimes you go shopping with a customer, sometimes you go to a resort with her.

Useful tips for anyone considering a career switch.

(Flickr photo credit: yumyumcherry)
-- Brad Plumer 7:01 PM
How Many Moves Ahead?

Famous chess players often get asked the same thing by reporters: "How many moves can you see ahead?" The query cropped up, inevitable as the sunrise, in Time's interview with 19-year-old Magnus Carlsen. (He responded, "Sometimes 15 to 20 moves ahead—but the trick is evaluating the position at the end of those calculations.") Thing is, it's sort of an odd and not very well-defined question.

In an endgame, with just a few pieces left on the board, sure, a good player will plot out where things are heading down to the bitter end. But in the middle of a game? There are about 40 possible moves in the average chess position, so analyzing every possibility just two full moves ahead (two apiece for each side) would mean looking at roughly 2.5 million positions. No human brain can pull that off. Instead, recognizing patterns and assessing positions is the far more crucial skill, which is basically what Carlsen was saying, if you read his quote carefully. In The Inner Game of Chess, Andrew Soltis argues that most grandmasters don't usually look more than two moves ahead—they don't need to. And then there's Garry Kasparov's take on this question, in his fascinating New York Review of Books essay on computer chess:
As for how many moves ahead a grandmaster sees, Rasskin-Gutman makes much of the answer attributed to the great Cuban world champion José Raúl Capablanca, among others: "Just one, the best one." This answer is as good or bad as any other, a pithy way of disposing with an attempt by an outsider to ask something insightful and failing to do so. It's the equivalent of asking Lance Armstrong how many times he shifts gears during the Tour de France.

The only real answer, "It depends on the position and how much time I have," is unsatisfying. In what may have been my best tournament game at the 1999 Hoogovens tournament in the Netherlands, I visualized the winning position a full fifteen moves ahead—an unusual feat. I sacrificed a great deal of material for an attack, burning my bridges; if my calculations were faulty I would be dead lost. Although my intuition was correct and my opponent, Topalov again, failed to find the best defense under pressure, subsequent analysis showed that despite my Herculean effort I had missed a shorter route to victory. Capablanca's sarcasm aside, correctly evaluating a small handful of moves is far more important in human chess, and human decision-making in general, than the systematically deeper and deeper search for better moves—the number of moves "seen ahead"—that computers rely on.
Speaking of computer chess, I wanted to bring up one of my favorite points: that, even though your average laptop chess program can now whip any grandmaster, computers still have trouble mastering the game of Go—even the most advanced programs get crushed by skilled children. Except that, it turns out, this talking point's now a few years obsolete. According to a recent Wired report, AI Go players are starting to beat ranking human players using the Monte Carlo method. Just another step down the path to robot domination, I guess.
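For the curious, here's roughly what "the Monte Carlo method" means in this context: to judge a candidate move, the program plays a pile of completely random games from the resulting position and favors whichever move wins those random games most often. The sketch below is my own toy illustration in Python, with made-up function names, and it uses tic-tac-toe rather than Go, since full Go rules won't fit in a few lines; real Go programs also layer a tree search (the UCT flavor of Monte Carlo tree search) on top of these random playouts.

import random

# All eight three-in-a-row lines on a 3x3 board, squares indexed 0-8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return 'X' or 'O' if someone has three in a row, else None.
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

def random_playout(board, to_move):
    # Finish the game with uniformly random moves; return the winner, or None for a draw.
    board = board[:]
    while True:
        w = winner(board)
        moves = legal_moves(board)
        if w or not moves:
            return w
        board[random.choice(moves)] = to_move
        to_move = 'O' if to_move == 'X' else 'X'

def monte_carlo_move(board, player, playouts=200):
    # Flat Monte Carlo: try each legal move, run random playouts from the
    # resulting position, and keep the move with the best average result.
    opponent = 'O' if player == 'X' else 'X'
    best_move, best_score = None, -1.0
    for move in legal_moves(board):
        trial = board[:]
        trial[move] = player
        score = 0.0
        for _ in range(playouts):
            w = random_playout(trial, opponent)
            if w == player:
                score += 1.0      # win
            elif w is None:
                score += 0.5      # draw counts as half a win
        score /= playouts
        if score > best_score:
            best_move, best_score = move, score
    return best_move

if __name__ == '__main__':
    board = [' '] * 9
    print(monte_carlo_move(board, 'X'))   # usually prefers the center square (4)

Even this crude "flat" version captures the core trick: the program never has to understand a position, it just has to be able to finish random games quickly and count the wins.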

(Flickr photo credit: Boered)
-- Brad Plumer 10:10 AM

February 02, 2009

The Great Dinosaur Wars

Excuse me, but I'm on a Wikipedia bender lately. I just can't stop clicking. So here we go: The Bone Wars? This was a real thing? Oh yes. Yes it was. In the latter third of the 19th century, we learn, a pair of rival American paleontologists, Edward Drinker Cope and Othniel Charles Marsh, "used underhanded methods to out-compete each other in the field, resorting to bribery, theft, and destruction of bones."

This wasn't as gruesome as it sounds: The rivalry did spur a race to excavate, which in turn led to the discovery of 142 new species of dinosaur, including childhood favs like Triceratops, Stegosaurus, and Allosaurus. A word of appreciation, by the way, for Allosaurus: The dinosaur books I gobbled up as a kid were usually divided into three eras: the Triassic (lame chicken-sized dinosaurs), Jurassic (now these were terrible lizards…) and finally the glorious Cretaceous (oh hell yes). The Allosaurus was a late Jurassic beast, usually shown scarfing down chunks of Brontosaurus meat, and he signified a child's very first encounter with the truly massive, peerless carnivores of yore—dominating the imagination for a brief while until you flipped a few pages and stumbled across the Cretaceous-era Tyrannosaurus Rex, king of all predators. (I understand now they have Spinosaurus and other gigantic brutes that put T. Rex to shame—well, T. Rex is still top lizard in my book.)

Anyway, back to our dueling paleontologists:
On one occasion, the two scientists had gone on a fossil-collecting expedition to Cope's marl pits in New Jersey, where William Parker Foulke had discovered the holotype specimen of Hadrosaurus foulkii, described by the paleontologist Joseph Leidy; this was one of the first American dinosaur finds, and the pits were still rich with fossils.

Though the two parted amicably, Marsh secretly bribed the pit operators to divert the fossils they were discovering to him, instead of Cope. The two began attacking each other in papers and publications, and their personal relations soured. Marsh humiliated Cope by pointing out his reconstruction of the plesiosaur Elasmosaurus was flawed, with the head placed where the tail should have been. ... Cope tried to cover up his mistake by purchasing every copy he could find of the journal it was published in.
In the end, Marsh won the Bone Wars, digging up and naming 80 new species of dinosaur, versus Cope's piteous 56. But paleontology itself was the real loser—not only was the profession's good name blighted for decades by their tomfoolery, but "the reported use of dynamite and sabotage by employees of both men destroyed or buried hundreds of potentially critical fossil remains." Oh dear.

Even worse, the two miscreants would often hastily slap all sorts of bones together in their race to unveil new species, and they often assembled incorrect skeletons that sowed confusion for decades. Marsh, it turns out, was the wiseguy who stuck the wrong skull (one belonging to a different dinosaur entirely) onto a headless sauropod skeleton and named his creation Brontosaurus (it later had to be rechristened Apatosaurus, once paleontologists realized it was the same genus Marsh had already named a couple of years earlier—the real tragedy is that the surviving name was clumsier and not quite so fearsome: "Apatosaurus" means "deceptive lizard"; right, uh-huh, like there's anything deceptive about that dude.)
-- Brad Plumer 10:49 PM
Outsourcing the Brain

In a chat with The Philosopher's Magazine, David Chalmers lays out his theory of the "extended mind," arguing that our cognitive systems are more than just what's crammed inside that gray matter in our skulls. The mind should be thought of as a larger, extended system that can encompass even gadgets like the iPhone:
"The key idea is that when bits of the environment are hooked up to your cognitive system in the right way, they are, in effect, part of the mind, part of the cognitive system. So, say I'm rearranging Scrabble tiles on a rack. This is very close to being analogous to the situation when I'm doing an anagram in my head. In one case the representations are out in the world, in the other case they're in here. We say doing an anagram on a rack ought be regarded as a cognitive process, a process of the mind, even though it's out there in the world."

This is where the iPhone comes in, as a more contemporary example of how the extended mind works.

"A whole lot of my cognitive activities and my brain functions have now been uploaded into my iPhone. It stores a whole lot of my beliefs, phone numbers, addresses, whatever. It acts as my memory for these things. It's always there when I need it."

Chalmers even claims it holds some of his desires.

"I have a list of all of my favorite dishes at the restaurant we go to all the time in Canberra. I say, OK, what are we going to order? Well, I'll pull up the iPhone—these are the dishes we like here. It's the repository of my desires, my plans. There's a calendar, there's an iPhone calculator, and so on. It's even got a little decision maker that comes up, yes or no."
Fine, I buy it. (Scrabble racks!) The original paper that Chalmers and Andy Clark wrote on the subject invented an Alzheimer's patient named Otto who jots things down in his notebook so that he won't forget them. When you or I need to conjure up an address or name, we just summon it from some musty alcove in our memory. When Otto needs a fact, he flips through his spiral notebook. We consider memory part of the mind, so why can't we say that Otto's notebook is part of his mind? What's the difference? It's a short hop to declaring that an iPhone can be part of one's extended mind, too.

Maybe all this just means that "mind" is an unduly foggy term. Let's all thank the relevant deities I'm not a philosopher and don't have to tear my hair out over this. It is odd, though, how memory works in the Internet age. When I'm writing about energy issues, say, I never bother memorizing the nitty-gritty details about how utility decoupling works or what the latest data on snowfall in Greenland says. Instead, I'll just get a feel for the main concepts and remember where on the Internet I can find more information in a pinch.

Now, sure, I do the same thing with books or notes, but I rely on this method much more often now—and, of course, the Internet's always available, making the need to memorize stuff even less acute. My head's filled with fewer actual facts and more search terms or heuristics. Everyone does it. Emily Gould recently wrote an indispensable post about Mind Google, which works like this: Rather than reaching instinctively for your iPhone to settle some trivia dispute with your friends (who did the voice of Liz in Garfield: A Tale of Two Kitties again?), you just rack your futile brain about it and then, later, when you're in the bathroom gazing at the tiles and thinking about something else entirely, the answer magically slaps you in the face. ("Jennifer Love Hewitt—of freaking course!")

So some facts are still up in the old noggin. I mean, it's not like you can just do without facts altogether. It's just more efficient to outsource a larger chunk of stuff to that great hivemind on the Web. Probably a net plus. Though I wonder if someone like (oh, say) Thomas Pynchon could've written Gravity's Rainbow if instead of cramming all that stuff about rocket science and German ballads and Plato and god knows what else in his skull, he was always saying to himself, "Eh, I don't need to remember this, I'll just look it up on Wikipedia later..."

One more thing about the extended mind. It ought to include other people, too. Shouldn't it? I know I have a bunch of stories and anecdotes I can't tell—or at least tell well—without some of the other participants around to help me fill in the details and color. In effect, I have to recreate the memory with someone else. I once read a devastating article somewhere about how older widows essentially lose a large chunk of their memory after their partner dies, because there are many events they could only recollect "together," with their spouse. No, wait, maybe this was in a novel? Or a This American Life podcast? Oy, my brain's a barren wasteland—off to wheedle the answer out of the Internet.
-- Brad Plumer 11:08 AM
It's Hue You Know


Here's a good Monday diversion: Wikipedia's master list of color names. Learn your cobalts and ceruleans and cyans. Some of the terms on this list are fairly obscure: Would you ever, in a million years, have guessed what color "Razzmatazz" is? (It's a sort of magenta.) But some of the names are hilariously perfect—"old rose" denotes the hue of aging, sickly rose petals. And some of the terms made me realize how easily I could distinguish shades I hadn't even realized were distinguishable until I saw the names for them: I could picture right away the difference between "Islamic green" (think Saudi flag) and "shamrock green."

More clicking around... There's also a complete list of every single crayon color that Crayola has seen fit to christen over the years. Some of the names have been changed, sometimes mysteriously so. In 1990, "maize" (a childhood favorite of mine, handy for drawing golden retrievers and buried sea-treasure) was replaced by the weedier name "dandelion." The color itself changed slightly, too: Appropriately, dandelion looks more like the drab patches of yellow you'd see on an abandoned lot while maize evokes sunny Ukrainian wheat fields. Why'd they get rid of it?

Some of the Crayola color names are frivolous and totally inapt: I have no use for "laser lemon" or "Fuzzy Wuzzy brown" or even "radical red" (which, perhaps to protect kids from creeping Bolshevism, actually denotes a watery hue that's more pinkish than Pinko). But some of the Crayola names should be added to the original master list of English color names—"Granny Smith green" is an amazing term, instantly visualized, and as best I can tell has no analogue on the grown-ups' list of colors.

A few clicks later, we learn that Hungarian is the only known language to have two "basic" words for red, "piros" and "vörös." Now, it's not so unusual for a language to have a different set of basic color terms than English does: For instance, both "blue" and "azure" are basic color terms in Russian, while only "blue" is a basic color term in English (in the sense that native English speakers consider azure a type of blue, which is not the case with, say, green). What makes the Hungarian case odd is that red is the most basic color—it's the third color languages acquire a word for, after black and white. Also, there's this queer business:
* Expressions where "red" typically translates to "piros": a red road sign, the red line of Budapest Metro, a holiday shown in red in the calendar, ruddy complexion, the red nose of a clown, some red flowers (those of a neutral nature, e.g. tulips), red peppers and paprika, red card suits (hearts and diamonds), red traffic lights, red stripes on a flag, etc.

* Expressions where "red" typically translates to "vörös": red army, red wine, red carpet (for receiving important guests), red hair or beard, red lion (the mythical animal), the Red Cross, the novel The Red and the Black, the Red Sea, redshift, red giant, red blood cells, red oak, some red flowers (those with passionate connotations, e.g. roses), red fox, names of ferric and other red minerals, red copper, rust, red phosphorus, the color of blushing with anger or shame, etc.
You know, those lists made sense for a few short moments, and then I was lost again.
-- Brad Plumer 10:58 AM

February 01, 2009

Recycled Cities, Prefab Homes

Over at The Nation, architect Teddy Cruz walks us through Tijuana, which has been built, in large part, using waste materials from neighboring San Diego—garage doors, box crates, old tires, even abandoned post-World War II bungalows that are loaded onto trailers and shipped south. Who knew you could recycle an entire city? There's even a YouTube clip, by filmmaker Laura Hanna:


That video, incidentally, was included in the Home Delivery: Fabricating the Modern Dwelling exhibit at MoMA, which was reviewed by architecture critic Sarah Williams Goldhagen in the current issue of The New Republic.

Goldhagen's review, by the way, is provocative and very much worth reading. She observes that, compared with other countries, prefabricated homes are curiously rare in the United States. (In Sweden, by contrast, 85 percent of homes are largely prefab, and yes, Ikea offers models.) In theory, mass-produced homes could be cheaper, the quality could be more consistent, and—here's the bonus environmental angle—they're likely to be a great deal greener than the usual contractor-built ("stick-built") variety. There's no reason prefabs have to be hideous, either, or shoddy, or drearily uniform in appearance. Trouble is, building codes vary so widely from town to town in this country that it's impractical for prefab homes to take hold. Is that a good thing? Goldhagen says probably not:
What these exhibitions demonstrated is that there is no dearth of splendid ideas for prefabricated mass-produced housing. Were the best of these ideas adopted by the market—and in the right climate, more good ideas would be forthcoming—there is no doubt that the bulk of newly constructed houses would be less toxic to the environment, better built, and last longer with fewer repairs. They would more fully and attractively accommodate the ways people currently live.

Why do we continue to settle for residential litter in ever-more-degrading landscapes? That is the question these exhibitions fail to address. But of course it is not an architectural question. In social policies, better ideas by savvier architects will change little. For quality affordable mass-produced housing to be built, we need to create different conditions for a mass market. A new legislative structure must clear away the obstacles presented by non-standard, municipally controlled building codes and create enforceable national standards for prefab-friendly, environmentally responsible manufacturing and construction practices. Incentives must be offered so that the entrenched and intransigent construction industry, which has made plenty of money on its poorly conceived, shoddily built, environmentally toxic houses, will re-configure itself.

If the necessary legislation were passed and new market incentives put in place, and the designers and manufacturers of prefabricated homes made all the real innovations in quality and reduction of price that the automobile industry has made since the Model T, who would walk away from a better designed and better built home for less money?
Et voilà—the future of homebuilding in the United States? I have no clue, but assuming energy and environmental issues continue to be central in the coming years (and why wouldn't they be?), I'd expect the question to bubble up now and again. Greening the construction industry and our built environment, after all, pretty much has to be a big part of any push to nudge down those greenhouse-gas emissions.

(Cross-posted to The Vine)
-- Brad Plumer 11:37 AM

January 11, 2009

Not What I Meant!

My friend Francesca Mari has a great critique of the new film adaptation of Revolutionary Road:
Instead of adapting the novel, scriptwriter Justin Haythe seems to have read only the book's dialogue—to have typed up all the quotations, then deleted the document down to two hours.

Remaining faithful to a text doesn't mean lifting as many lines as possible. This is especially true with Yates, whose characters, if you were to only listen to what they say, vacillate almost exclusively between pathetic and cruel. The heart of his stories, the stuff that lobs a lump in your throat, comes from what the characters think, and then say in spite of it. Take, for instance, the book's perfectly constructed opening scene, the staging of a community play.

"The helplessly blinking cast," Yates writes, "had been afraid that they would end by making fools of themselves, and they had compounded that fear by being afraid to admit it." Gnawing on his knuckles in the front row, Frank watches his wife, initially a vision of the poised actress he fell for, devolve into the stiff, suffering creature who often sleeps on his sofa. After final curtain, he plans to tell April she was wonderful, but then watching her through a mirror as she wipes her makeup off with cold cream, he suddenly thinks twice: "'You were wonderful' might be exactly the wrong thing to say--condescending, or at the very least naïve and sentimental, and much too serious." "'Well,' he said instead, 'I guess it wasn't exactly a triumph or anything, was it?'"
Haven't seen the movie yet, but I can see why that'd be a problem. Most of the book's best scenes involve a tension, usually cringe-inducing, between what the characters mean to say and what actually comes out of their mouths, either because they overthink things, or because they miscalculate how their words will come across, or because they're overcome by some fleeting childish urge to hurt, or because—most commonly—they just fail to take the other person into account. At one point, Frank plots out in his mind an entire conversation with his wife, and then gets enraged when she doesn't deliver the lines he'd hoped for.

That seems like a devilishly hard thing to convey in a film, assuming that there's not some hokey narrator voicing an interior monologue all the while—sort of like in A Christmas Story—especially since Yates relies on this trick so frequently. But now I'm trying to think of movies that do this well, and my brain's coming up blank.

(Image by Antony Hare)
-- Brad Plumer 4:32 PM

January 10, 2009

To the Last Syllable of Recorded Time

Here's a topic that's always mystified me. Go back to the late 17th century. Newton and Leibniz had both independently developed calculus, and, when they weren't bickering over who deserved credit, they were disagreeing about the metaphysical nature of time. Newton believed that space and time were real and independent from each other. No, countered Leibniz, time and space are just two interrelated ways of ordering objects: "For space denotes, in terms of possibility, an order of things which exist at the same time, considered as existing together." Clever. But most physicists ended up siding with Newton—our basic intuition is that space and time both exist, and are separate from each other.

But then things got tricky. Once general relativity was formulated, we suddenly had this notion of space-time as a continuous fabric. In Einstein's universe, how time flows depends on one's location. In places where gravity is weaker, time runs faster. Technically, you age slightly more rapidly living in a penthouse than you would living down on the street level—think of it as a small dose of cosmic income redistribution. So that's one troubling little chink in Newton's theory.
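To put a rough number on the penthouse effect (my own back-of-envelope arithmetic, not something from the physicists discussed here): in Earth's weak gravity, the fractional difference in clock rates between two floors goes as the height difference times g over c squared, so

\[
\frac{\Delta\tau}{\tau} \;\approx\; \frac{gh}{c^{2}}
\;\approx\; \frac{(9.8\ \mathrm{m/s^{2}})\,(100\ \mathrm{m})}{(3\times 10^{8}\ \mathrm{m/s})^{2}}
\;\approx\; 10^{-14},
\]

which works out to a few tens of microseconds over an eighty-year life spent a hundred meters up. Real, and measurable with atomic clocks, but not much of a windfall for the penthouse crowd.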

Then we get even bigger problems burrowing down into the quantum universe, where it's not clear that time is even a relevant concept. How would you measure it? There are experiments to nail down all sorts of qualities about particles—their position, their momentum, their spin—but not how they mark time, or how long they'll stay in a certain place. Or, rather, there are plenty of ways to try to measure that, but all result in wildly different answers. Not only that, but electrons and photons don't even appear to be bound by the arrow of time; their quantum states can evolve both forward and backward. An observation of certain particles in the present can affect their past natures, and so on.

So how are we supposed to reconcile all this? Some scientists now seem to think that maybe Leibniz was right all along, and we should reconsider his argument more carefully. In a recent issue of New Scientist, Michael Brooks interviewed Carlo Rovelli, a theoretical physicist at the University of the Mediterranean, who argues that the simplest approach to time is to stop talking about how things change over time and, instead, just talk about how things relate to each other: "Rather than thinking that a pendulum oscillates with time and the hand of a clock moves in time, we would do better to consider the relationship between the position of a pendulum and the position of the hand." The notion of time is only a meaningful metaphor when dealing with, say, human experience. It's useless, Rovelli argues, for most of the universe.

That's a bit wild, and maybe we don't want to abandon Newton just yet. So one potential way to preserve time as a really-existing entity has been offered up by Lee Smolin, who argues that the reason scientists haven't been able to reconcile quantum physics and relativity—and bring the laws of the universe under one tidy grand theory—is that they aren't accounting for the fact that the laws of physics can evolve over time. In the early moments of the universe, after all, we know that the electromagnetic and weak forces were rolled up together. So why can't we see other such evolutions? Time is fundamental—it's just the laws of physics that change. (And, fair enough: There's no logical reason why the laws of physics have to be eternally true—they've only really applied for less than 14 billion years.)

Then there's the weird fact that the arrow of time always seems to move forward, as encapsulated by the second law of thermodynamics, which holds that the entropy or disorder in the universe is always increasing. That explains why cream mixes with your coffee over time—technically, it would be possible for me to dump in some cream, stir it around, and then have the two liquids sort themselves out: cream on top, coffee on bottom. But that outcome is staggeringly unlikely; the overwhelmingly probable states are all some sort of mixture. Time moving in reverse—i.e., the cream and coffee "unmixing"—is very, very, very, very unlikely. (Very.) So unlikely, statistically speaking, that it basically doesn't happen.
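To get a feel for just how lopsided the statistics are, here's a deliberately crude back-of-envelope estimate of my own: pretend each cream molecule independently ends up in either the top or the bottom half of the cup. The chance that every last one of them lands in the top half, i.e., that the drink spontaneously "unmixes," is then

\[
P(\text{unmixed}) \;=\; \left(\tfrac{1}{2}\right)^{N} \;=\; 2^{-N},
\qquad N \sim 10^{23}\ \text{molecules},
\]

a probability whose decimal expansion has roughly thirty sextillion zeros before the first nonzero digit shows up. "Basically doesn't happen" is, if anything, an understatement.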

But that raises at least one squirm-inducing question: How did the universe, then, start out at a low-entropy, extremely orderly state (which has been getting messier ever since) in the first place? Sean Carroll wrote a whole article in Scientific American earlier this year about this and outlined one possible (and possibly wacky) solution:
This scenario, proposed in 2004 by Jennifer Chen of the University of Chicago and me, provides a provocative solution to the origin of time asymmetry in our observable universe: we see only a tiny patch of the big picture, and this larger arena is fully time-symmetric. Entropy can increase without limit through the creation of new baby universes.

Best of all, this story can be told backward and forward in time. Imagine that we start with empty space at some particular moment and watch it evolve into the future and into the past. (It goes both ways because we are not presuming a unidirectional arrow of time.) Baby universes fluctuate into existence in both directions of time, eventually emptying out and giving birth to babies of their own. On ultralarge scales, such a multiverse would look statistically symmetric with respect to time—both the past and the future would feature new universes fluctuating into life and proliferating without bound. Each of them would experience an arrow of time, but half would have an arrow that was reversed with respect to that in the others.

The idea of a universe with a backward arrow of time might seem alarming. If we met someone from such a universe, would they remember the future? Happily, there is no danger of such a rendezvous. In the scenario we are describing, the only places where time seems to run backward are enormously far back in our past—long before our big bang. In between is a broad expanse of universe in which time does not seem to run at all; almost no matter exists, and entropy does not evolve. Any beings who lived in one of these time-reversed regions would not be born old and die young—or anything else out of the ordinary. To them, time would flow in a completely conventional fashion. It is only when comparing their universe to ours that anything seems out of the ordinary—our past is their future, and vice versa. But such a comparison is purely hypothetical, as we cannot get there and they cannot come here.
Hm. These are things I don't really understand at all (feel free to drive that point home in comments!) and wonder if it's even worth the effort to try. Probably not. The other maddening fact about time is that there's not, alas, ever enough of it.
-- Brad Plumer 12:10 PM
Little Lefties

Are most parents closet Marxists? Caleb Crain says of course they are:
After all, most parents want their children to be far left in their early years—to share toys, to eschew the torture of siblings, to leave a clean environment behind them, to refrain from causing the extinction of the dog, to rise above coveting and hoarding, and to view the blandishments of corporate America through a lens of harsh skepticism.

But fewer parents wish for their children to carry all these virtues into adulthood. It is one thing to convince your child that no individual owns the sandbox and that it is better for all children that it is so. It is another to hope that when he grows up he will donate the family home to a workers’ collective.
That's from Crain's amusing review of Tales for Little Rebels: A Collection of Radical Children’s Literature, and I'd say it makes sense. What else is elementary school if not a marginally more humane version of the Soviet Five-Year Plans?
-- Brad Plumer 11:03 AM

January 05, 2009

Doom, Sweet Doom

Oh, man. Back in October, I ran across some evidence that the supervolcano lurking beneath Yellowstone National Park, the one capable of obliterating a good chunk of the United States, might actually be losing its vigor, maybe even going dormant. Well, scratch that. Charlie Petit reports that since December 26, there's been an unusually large uptick in earthquake activity beneath Yellowstone—some 400 seismic events in all. Is the beast getting restless? Let's pray not, 'cause here's a quick recap of what doomsday would look like:


On the upside, if Yellowstone does burst, we could stop fretting about global warming for a spell, since all that ash would have a fairly sizable cooling effect...

Update: Doom averted for now! The AP added an update yesterday, reporting that the earthquakes appear to be subsiding, and that, according to one researcher, the seismic shudders "could alter some of the park's thermal features but should not raise any concern about the park's large volcano erupting anytime soon." Now I'm off to find something else to fret about.
-- Brad Plumer 9:12 PM
Not Poppy, Nor Mandragora, Nor all the Drowsy Syrups of the World...

So there's the good jetlag, the kind where you return home from parts unknown, collapse at 9 p.m., wake before dawn, and are just the most productive little honeybee that's ever buzzed. And then there's the other kind, the bad jetlag—awake all night, useless as an earlobe in daylight. Right now, I have the second kind, so I've been trying to stay awake by reading about sleep. People always tell pollsters that they don't sleep enough and wish they got more. That sounds iffy to me. Do people really want more? Why not less? And how bad is a little sleep deprivation, really?

Jim Horne, who runs Loughborough University's Sleep Research Centre in the UK, explains in New Scientist that most modern folks probably get more than enough sleep, about 7 hours a night. (Presumably excluding parents with newborns.) Americans didn't actually get 9 hours back in 1913, as is sometimes claimed, and just because you snooze for 10 or 11 hours on the weekend doesn't mean you're catching up on some supposed "sleep deficit"—odds are, you're just merrily and needlessly oversleeping, in the same way a person might merrily overeat or overdrink beyond what they need. Sloths will sleep 16 hours a day if caged and bored, versus just 10 in the wild. "Now wait a sec," the critic interjects, "That's well and good, but I must be sleep-deprived, because whenever three p.m. rolls round my eyelids get uncontrollably heavy!" Mine too. But Horne says that's just our bad luck:
My team recently investigated these questions by giving around 11,000 adults a questionnaire asking indirectly about perceived sleep shortfall. ... Half the respondents turned out to have a sleep shortfall, averaging 25 minutes a night, and around 20 per cent had excessive daytime sleepiness. However, the people with a sleep deficit were no more likely to experience daytime sleepiness than those without.
Can't decide whether that's comforting or no. On an only semi-related note, apparently the brain consumes just as much oxygen when it's "idling" as it does when problem-solving. So, uh, what's the resting brain doing, anyway? Douglas Fox reports on some theories, most of which involve daydreaming—the "idle" brain is likely doing crucial work, sorting through your vast mess of memories, examining them, and deciding which ones to keep and how to classify them—good, bad, threatening, painful. "To prevent a backlog of unstored memories building up," Fox notes, "the network returns to its duties whenever it can."

So there's a (scientific!) rationale for staying up late and staring blankly at the wall during the day. That strikes me as exceedingly useful.
-- Brad Plumer 6:05 PM || ||

January 04, 2009

In Search of a Greener Laptop

It's never that surprising to hear that some companies aren't quite as eco-friendly as they claim to be, but sometimes the examples are worth examining in detail. Last week, Ben Charny reported in The Wall Street Journal that Apple Inc., which has gone out of its way to flaunt its green cred, doesn't always stack up so well to its competitors: "For example, Dell and Hewlett-Packard report buying much more clean energy than Apple; Dell 58 times more, and Hewlett-Packard five times more."

Now, clean energy is a decent metric, though a computer's impact on the environment mostly depends on how much energy it consumes after it's been purchased. On that score, MacBooks do... okay. That said, I wish these comparisons would look more broadly at product durability. A company making laptops that last five years on average is greener than one whose computers last for just three, consuming fewer resources and spewing less hazardous waste—especially since e-recycling programs are often flawed. And, as EcoGeek's Hank Green reported last year, some of Apple's newer iPods and iPhones are designed to be virtually impossible to repair, which ends up encouraging people to junk their old gadgets and buy new ones.

For a longer discussion of this topic, Giles Slade's Made to Break is a fascinating book, and some of his insights on how businesses make short-lived products to encourage excess consumption should be taken to heart when trying to figure out just how green a company really is. True, the EPA considers "product longevity" as a factor in its EPEAT labeling for electronics, but that system is voluntary and doesn't work terribly well. Recommending that computers have "modular designs," say, hasn't stopped the average lifespan of personal computers from plummeting from six years in 1997 to two years in 2005. It's likely that rethinking the whole notion of "planned obsolescence" would involve very drastic regulatory (and probably cultural) changes. Maybe that's not desirable—maybe it's a good thing that our cell phones go out of style every 12 months. In the meantime, though, we could probably stand to require better buyback and recycling programs.

While we're on the topic, the Journal also had a fantastic dissection of Dell's claims to be "carbon neutral." Turns out, when Dell's executives fling that term around, they're only talking about Dell proper—they're not talking about Dell's suppliers or the people running Dell computers. Which basically means they're talking about just 5 percent of Dell's actual carbon footprint. And even that 5 percent achieves neutrality mostly via carbon offsets, which are often quite dubious (sure, they're planting a forest—but how do they know the forest wouldn't have been planted anyway?). Well-meaning corporate initiatives are nice, but they're no substitute for better regulation and carbon pricing.

(Cross-posted to The Vine)
-- Brad Plumer 11:36 PM || ||
On the Drug Money Trail

Talk about unnerving: In this month's New York Review of Books, Marcia Angell argues that "it is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines" when it comes to drugs or medical devices. Big Pharma, she argues, has corrupted the clinical trial process too thoroughly. (Interestingly, the essay focuses primarily on drugs, although the approval processes for medical devices or, say, new surgical techniques are often even less stringent—surely big financial interests have a lot of sway there, too.)

Angell's full essay is worth taking for a spin, but note that the problems she discusses are especially acute when it comes to off-label uses of drugs. Once the FDA approves a drug for a specific purpose, doctors can legally prescribe it for any other disease they feel like. So drug companies design and bankroll often-dubious trials to "test" those other uses, pay some esteemed researcher a fat consulting fee so that he'll slap his name on the results (unless the results are unfavorable, in which case into the dustbin they go), and then lobby doctors—with gifts, if necessary—to start prescribing the drug for the new use. Oh yeah, step four: Profit!

But if all this subtle—and often not-so-subtle—corruption is hard to eliminate, as Angell says it is, why not create stricter FDA regulations on off-label use? Angell estimates that roughly half of all current prescriptions are off-label, and no doubt many of those are valid, but we've also seen, for instance, a 40-fold increase in the diagnosis of bipolar disorder among children between 1994 and 2003, with many being treated with powerful psychoactive drugs off-label—side-effects and all. It makes little sense that a drug company has to spend years and millions of dollars getting FDA approval for one use of a drug, but then all other uses are automatically kosher, no matter how little reliable data exists. Or is this off-label "loophole" useful and effective in some way I'm not grasping?

On a related note, a few weeks ago, The New York Times had a great piece on how psychiatrists were gearing up to put together the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders, the "bible" of psychiatry. Edward Shorter, a historian in the field, was quoted arguing that psychiatry is not like, say, cardiology—no one really knows the causes of various mental disorders, so "political, social, and financial" factors often drive the classifications. Financial, indeed: As Angell notes, of the 170 contributors to the DSM-IV, 95 had financial ties to drug companies.

Now, does that help explain why, as Christopher Lane reports in his new book, Shyness: How Normal Behavior Became a Sickness, ordinary shyness went from being listed as a rare "social phobia" in 1980's DSM-III to a full-fledged "social anxiety disorder" in 1994's DSM-IV? Maybe, maybe not, though GlaxoSmithKline certainly found it profitable to push the idea of social anxiety disorder on the public ("Imagine being allergic to people...") and to market its antidepressant, Paxil, as a treatment.

That's not to say the disorder's fake, or that Paxil can't help people; just that there's a set of murky judgments at work here not grounded in rigorous scientific evidence, and a little more transparency would be nice. But alas, as the Times reported, the contributors to DSM-V have all signed non-disclosure agreements. No peeking behind the curtain. True, the participants have agreed to limit their income from drug companies to "only" $10,000 a year while working on the DSM-V, though, given that drug companies potentially have billions at stake on what does and does not go in the manual, it's not hard to imagine that they might try to, ah, push the envelope a wee bit.
-- Brad Plumer 6:15 PM || ||
Toooons.

Well, lookit what we got here. It is possible to embed music on your lowly little blog! Very nice, and a bouquet of lilacs dispatched to LaLa.com. Now, what the world emphatically does not need is another mp3 blog telling you that, dude, the white scruffy indie kid over here likes the new Fleet Foxes EP. So I'll pass on that. But what the world might need is more Senegalese Afro-Latin big bands from the '80s that recently (albeit only partially) reunited. Like so:

Sutukun - Orchestre Baobab
-- Brad Plumer 1:06 PM || ||
Inside the Machine

Over the holidays, I picked up an old, yellowing copy of Mike Royko's Boss, his best-selling and unauthorized—to put it lightly—1971 biography of former Chicago mayor Richard J. Daley. This may sound naïve, but I found the book incredibly eye-opening. Sure, I've always had a hazy sense of how urban machine politics worked, and I've heard a fair bit about how horrifically corrupt and mismanaged many big cities were back in the 1950s and '60s—especially if you were poor or black—but Boss doles out the gory details. And, yes, later historians have softened Royko's portrait of Daley as a corrupt, racist strongman, but as best I can discern, the book's held up reasonably well.

Daley didn't create Chicago's Democratic machine, but by the time he won his first mayoral race in 1955, he had worked in all of its critical nodes—from key budget positions to state party chair—and was able to consolidate his control over the vast apparatus like no one else before him, or since. Not even Rod Blagojevich. Not even close.

It went like this: Chicago had some 3,500 precincts, each with captains and assistant captains and underlings galore who devoted their energy to turning out votes—or stealing them, if it came to that—in exchange for plum patronage jobs. Not just government jobs, but jobs at racetracks or public utilities or private industries—the Machine had the power to hand out all sorts of jobs to supporters, since, after all, these utilities and contractors relied on the Daley administration for contracts and business. And the Machine always delivered the votes. (True, it didn't win Illinois for Hubert Humphrey in 1968, though Royko argues that was only because Daley had lost faith in Humphrey by that point and focused on downticket races.)

At its peak, the Machine seemed practically omnipotent: Judges came up through the Machine. The top law firms were run by lawyers close to the Machine. Who else would you want handling a zoning dispute for you? And ward bosses usually used their power to enrich themselves. Companies rushed to pay premiums to insurance firms run by ward bosses, lest they fall afoul of the county assessor or zoning board. And Daley stood on top of it all. Only Daley could keep track of all the moving parts. Only Daley, who retained his party chairmanship, knew how much money was flowing in, and how much flowing out.

Most of his money came from contractors, since the hallmark of Daley's administration was building lots and lots of stuff in Chicago's downtown. The Dan Ryan Expressway. The John Hancock Building. Most of O'Hare Airport. And on and on and on. All Daley's doing. The revisionist take on Daley seems to hold that because he revitalized downtown he saved Chicago from the blight that later afflicted, say, Detroit and Cleveland. Maybe. Another revisionist take is that Daley got things built that a non-boss mayor never could have. Maybe. But there were costs, of course. Old neighborhoods were razed in the name of "urban renewal"—including a loyal Italian ward in the Valley—and replacement housing never built. Commonwealth Edison, which ran a hideously dirty coal plant responsible for one-third of the city's pollution, was allowed to evade pollution laws simply because a boyhood friend of Daley's sat on the board.

And Daley's tenure reads like an utter disaster when we're not talking about gleaming buildings. Royko points out the grim irony that black voters kept giving Daley his electoral majorities even though he gave them absolutely nothing. (Not that black voters always had a choice: Precinct captains weren't shy about issuing threats in poorer neighborhoods—vote or else you'll lose your public housing, and, yes, I think I will go into the voting booth with you...) During Daley's tenure, the black neighborhoods had some of the worst slums in the country, crumbling schools, and rampant crime—since police often focused more on extortion than crime prevention. (True, the police department did get cleaned up a bit after Daley appointed Orlando Wilson, a criminologist from the University of California, Berkeley, as police chief—but Daley was forced to do this after it was revealed that a number of his police officers were running their own armed robbery gang.)

Easily the most stunning part of Royko's book is his description of how Daley fended off Chicago's civil rights demonstrators—including Martin Luther King. Daley, of course, had no interest in undoing Chicago's sharply enforced neighborhood segregation. Even if he had wanted to—and there's no indication that he did—the city's white voters, who were prone to violence anytime a black person so much as walked through their lily-white enclaves, would've revolted. But Daley was able to do what the mayor of Birmingham could not and neutralize King by, essentially, co-opting him. The two would meet and Daley would insist that he was a liberal, point to all his well-meaning social programs, heap promises on high... it was all b.s., but what could King say to that?

Another provocative bit in Royko's book was his short discussion of the Black Panthers. The black gangs operating in Chicago during the 1960s were hardly angelic, though some of them did run health clinics and school-lunch programs and broker truces between street gangs. But Daley wasn't chiefly worried about the violence. He was worried that these groups might eventually turn to politics. After all, Daley himself had, as a youth, been a member of a violent Irish gang, the Hamburg Social and Athletic Club, which eventually became a "mainstream" political organization that organized voters, elected local aldermen, and launched many a high-flying political career. It wasn't hard to envision the Black Panthers following a similar trajectory, so Daley ordered his police chief to declare war.

Like I said, I'm sure there have been plenty of reassessments of Daley since Royko's unrelentingly scathing account (which, incidentally, was banned from many a Chicago bookstore upon release by order of Hizzoner himself). But Boss is definitely worth reading. Very often, people will tell the story of urban decay like this: Cities were flourishing in the 1940s and 1950s but then fell to tatters in the late '60s with the rise of crime, integration, busing, white flight, and so on. Royko makes a very good case that the rot began long before then.
-- Brad Plumer 11:59 AM || ||
Ree-vamp! Ree-vamp!

As it tends to do, New Year's Day came and went this year, and, as always, I made a few half-hearted New Year's resolutions—that's right, lurking somewhere deep within my to-do list is a note to "research gym memberships on Google"... fortunately, that's probably as far as that will ever get. But one thing I expressly did not resolve to do was revive this blog. No sir.

But... of course there's a "but." But then an errant bookmark click sent me tumbling down to this musty old site, and, hot damn, it's been eight whole months since I've written anything here. Not that I've abstained from littering the Internet with gibberish, no—The Vine over at TNR has kept me busy doing plenty of that. But, y'know, I do sort of miss having a personal blog, so maybe I'll take a stab at a relaunch. Luckily, this site has been stripped of all its former readers, which may let me take it in a brand-new direction. Perhaps I'll try less politics, more random and extraneous crap—this could morph into a music blog, or a glorified diary of my social life, or maybe I'll just put my hands in my pocket, hold my head high, and whistle nonchalantly for post after post. La da dee...

Anyway, we'll see how long this latest blog incarnation will actually last. Happily, this isn't an official New Year's resolution or anything—nothing filed in triplicate, nothing involving solemn swears—so if it doesn't work, I can just drop it at any time, though, if that happens, I'll no doubt put up some lethargic meta-post explaining why I'm quitting yet again. And yes, it is a good question who I'm writing this post for.
-- Brad Plumer 11:37 AM || ||

April 12, 2008

Work, Work, Work

Okay, this here blog's taking a wee vacation (as if you couldn't tell). For now, I'm helping run The New Republic's new enviro blog, which still needs a name—The Vine? The Weed?—and may just be a temporary thing to coincide with our recent environment issue, but who knows.

Speaking of which, I've got a longer piece in the new issue of TNR on the big fight within SEIU that's going on right now. It's interesting, and I do think the debate at the heart of the feud is substantively quite important, and speaks to broader issues of what unions are for, and how they could survive and grow. I tried to lay things out as fairly and as clearly as I could, so... check it out.
-- Brad Plumer 12:13 PM || ||

March 21, 2008

Trillions Here, Trillions There

Depressing: "Projected total US spending on the Iraq war could cover all of the global investments in renewable power generation that are needed between now and 2030 in order to halt current warming trends."
-- Brad Plumer 7:36 PM || ||
Hey Pig Piggy Pig Pig Pig

"Hey, what's with the dearth of great free content 'round these parts?" Well, sorry, I've been waylaid with the flu or worse. "Dire was the tossing, deep the groans, despair / Tended the sick busiest from couch to couch," as Milton put it. But all's well now, so onto business. Michael Gazzaniga's forthcoming book, Human, has this entertaining history of the "animal courts" that were sometimes held in Europe during the Middle Ages:
From 824 to 1845, in Europe, animals did not get off scot-free when they violated the laws of man... Just like common criminals, they too could be arrested and jailed (animals and criminals would be incarcerated in the same prison), accused of wrongdoing, and have to stand trial. The court would appoint them a lawyer, who would represent them and defend them at a trial. A few lawyers became famous for their animal defenses.

The accused animal, if found guilty, would then be punished. The punishment would often be retributive in nature, so that whatever the animal had done would be done to it. In the case of a particular pig (during those times pigs ran freely through towns, and were rather aggressive) that had attacked the face and pulled the arms off a small child, the punishment was the pig had its face mangled and its forelegs cut off, and then was hanged. Animals were punished because they were harmful. However, sometimes if the animal was valuable, such as an ox or horse, its sentence would be ameliorated, or perhaps the animal would be given to the church. If the animal had been found guilty of "buggery" (sodomy) both it and the buggerer were put to death. If domestic animals had caused damages and were found guilty, their owners would be fined for not controlling them.

There seems to have been some ambivalence as to whether an animal was fully responsible or whether its owner should be also considered responsible. Because animals were peers in judicial proceedings with humans, it was considered improper to eat the bodies of any animals that were capitally punished (except for the thrifty Flemish, who would enjoy a good steak after a cow was hanged).

Animals could also be tortured for confessions. If they didn't confess—and no one supposed they would—then their sentence could be lessened. You see, it was important to follow the law exactly, for if humans were tortured and didn't confess, then their sentence could also be changed. Many different types of domestic animals had their day in court: horses for throwing riders or causing carts to tip, dogs for biting, bulls for stampeding and injuring or goring someone, and pigs most commonly of all. These trials were held in civil courts.
The source is E.P. Evans, The Criminal Prosecution and Capital Punishment of Animals (1906). Now, Gazzaniga recounts this all to argue that "our species has had a hard time drawing the line between" humans and other animals—that we're frequently in the habit of ascribing agency to other species. But this long review of Evans's book offers a very different read of the medieval animal courts: "Goring oxen were not to be executed because they were morally guilty, but because, as lower animals who had killed higher animals, they threatened to turn upside down the divinely-ordained hierarchy of God's creation." That seems more likely, no?
-- Brad Plumer 6:43 PM || ||

March 11, 2008

A Tale of Two Teas

Why is green tea so popular in Asia while black tea is all the rage in the West? Tom Standage's A History of the World in Six Glasses proffers a theory:
The first tea was green tea, the kind that had always been consumed by the Chinese. Black tea, which is made by allowing the newly picked green leaves to oxidize by leaving them overnight, only appeared during the Ming dynasty; its origins are a mystery. It came to be regarded by the Chinese as suitable only for consumption by foreigners and eventually dominated exports to Europe. Clueless as to the origins of tea, Europeans wrongly assumed green and black tea were two entirely different botanical species. ...

It is not too much exaggeration to say that almost nobody in Britain drank tea at the beginning of the eighteenth century, and nearly everybody did at the end of it. ... [There was a] widespread practice of adulteration, the stretching of tea by mixing it with ash and willow leaves, sawdust, flowers, and more dubious substances--even sheep's dung, according to one account--often colored and disguised using chemical dyes. Tea was adulterated in one way or another at almost every stage along the chain from leaf to cup. ...

Black tea became more popular, partly because it was more durable than green tea on long voyages, but also as a side effect of this adulteration. Many of the chemicals used to make fake green tea were poisonous, whereas black tea was safer, even when adulterated. As black tea started to displace the smoother, less bitter green tea, the addition of sugar and milk helped to make it more palatable.
So there you have it. But why is black tea safer "even when adulterated"? Surely it doesn't have the power to neutralize the "dubious substances" on its own, right?
-- Brad Plumer 11:21 PM || ||
'90s-era David Simon

Pulled up from The New Republic's archives, a great 1997 piece by Wire creator David Simon that recounts, among other things, an amusing shoot from his first TV project, Homicide:
The day this scene was shot was not without its peculiar charm.

"You all want to be in a TV show?"

"Which one?" asked Boo.

"'Homicide.' The cop show."

"What do we have to do?"

"Sling drugs on a corner and get chased by the police."

They looked at each other for a long moment. Then laughter broke on the Southwest Baltimore crossroads of Gilmor and McHenry. Tae, Dinky, Manny Man, DeAndre, R.C.--all of them were willing to leave their real corner untended for a day, travel across town and play-act for the National Broadcasting Corporation. Only Boo was unsure.

"How much we gonna get paid?" he asked.

"You'll be non-union extras," I told him. "That means about $45 for the day. "

"Sheeeet," drawled Boo.

Forty-five dollars was fifteen minutes' work at McHenry and Gilmor. I knew this because, at that point, I had been around Tae and Dinky and the others for about ten months, and, for most of that time, they had sold drugs. I, in turn, had watched them sell drugs.

"I don't care," Tae said finally. "I wanna be on TV."

Boo stayed on the corner that day, slinging blue-topped vials of coke. The rest followed Tae across town to the Perkins Homes, a squat stretch of public housing that would serve as the pretend drug market. They filled out tax forms, waited out the inevitable delays and were eventually escorted by an assistant director to a battered side street. There, on the set, a props man handed them pretend drugs and pretend weapons, and the director, a very earnest white man, arranged them on the street in the manner most pleasing to the camera.

"You there, can you move to that doorway?"

Dinky stepped into the doorway.

"And you--can you show some of the gun? Right. Tuck in your shirt so we can see the gun."

R.C. arranged his shirt so the butt of the prop gun showed.

They filmed the scene over and over, with the director covering it from a variety of angles and distances. Each time, the pretend lookout shouted his warning. Each time, the corner boys ran from the approaching radio car. Each time, they were penned in the same alley, forced to the ground and given the handcuffs.

After the eighth or ninth take, the boys began to rebel.

"I'm sayin' this is bullshit," muttered Dinky. "They got all of us dirty like this. Dave, man, you know it wouldn't be that way. You know we don't do it like that."

It was true. The props department had stuffed fake drugs and guns and knives into the pockets of all the extras. Every last one of them would be caught holding, every last one would, in the make-believe world, take a charge. At Gilmor and McHenry, it was very different. The boys worked ground stashes, handling only a vial or two at a time. They kept the guns in rowhouse vestibules or atop the tires of parked cars. They didn't run at the first sign of a police car. They didn't have to run.
Well, it's a good show all the same.
-- Brad Plumer 1:37 PM || ||
Amis on Alcohol



Yes, I know. Any discussion of Kingsley Amis, hangovers, or (especially) Kingsley Amis' thoughts on binge drinking and its consequences must include Lucky Jim's peerless description of a particularly wretched hangover. So without further ado:
Dixon was alive again. Consciousness was upon him before he could get out of the way; not for him the slow, gracious wandering from the halls of sleep, but a summary, forcible ejection. He lay sprawled, too wicked to move, spewed up like a broken spider-crab on the tarry shingle of the morning. The light did him harm, but not as much as looking at things did; he resolved, having done it once, never to move his eyeballs again. A dusty thudding in his head made the scene before him beat like a pulse. His mouth had been used as a latrine by some small creature of the night, and then as its mausoleum. During the night, too, he'd somehow been on a cross-country run and then been expertly beaten up by secret police. He felt bad.
Now that that's out of the way, Alexander Waugh has written a whole essay on Amis' (rather extensive) views on drinking. "Beer drinkers," Waugh observes, "have bellies, gin swiggers sallow jowls, and wine, port, and brandy drinkers a 'Rudolph conk,' formed by a rosaceous labyrinth of tiny, luminous blood vessels assembling itself on the nose." Amis was a whiskey man himself and his telltale, Waugh offers, was the "Scotch gaze," a phrase that may be familiar in Aberdeen, but seems to be beyond the ken of Google. Ah, well. Amis, who was very often hilarious, insisted that hilarity and drink were "connected in a profoundly human, peculiarly intimate way," but I actually found this passage of his extraordinarily sad:
When that ineffable compound of depression, sadness (these two are not the same), anxiety, self-hatred, sense of failure and fear for the future begins to steal over you, start telling yourself that what you have is a hangover. You are not sickening for anything, you have not suffered a minor brain lesion, you are not all that bad at your job, your family and friends are not leagued in a conspiracy of barely maintained silence about what a shit you are, you have not come at last to see life as it really is, and there is no use crying over spilt milk.
Well, I won't belabor it, but read the whole thing if you need a break from Eliot Spitzer, or Democratic delegate counts, or whatever else is going on.
-- Brad Plumer 11:43 AM || ||

March 10, 2008

Bug Off

I can't say I have foolproof advice for any storekeeper confronted with a pack of noisy (or even dangerous) teenagers congregating outside his door. Calling the police doesn't always work for long. Changing social mores isn't what you'd call a quick fix. Nicely asking them to loiter elsewhere...? Er, no. Still, there's plenty that's disturbing about the "mosquito" solution:
In Britain, adolescents are the new mosquitoes. Many storekeepers and municipalities now employ ultrasonic devices, of the kind hitherto used to scatter insects and rodents, to disperse young people wherever they habitually gather to make a nuisance of themselves.

The so-called "mosquito" devices--there are some 3,500 installed throughout the country--take advantage of the fact that only people younger than 20 can perceive and be discomfited by the high-pitched sounds the devices make, discouraging them from lingering in the vicinity.

Now, the owners of a building in Queens are fitting it with just such a youth repellent. No doubt other buildings will soon follow suit.

It is an easy answer to a difficult problem. Those adults should turn back now--lest they turn New York into a city that is chronically afraid of its young people.
Yeah, no doubt treating youths like cockroaches will encourage them to behave. How could it not? The author also wonders whether teenagers will eventually get used to the sound as their eardrums are dulled by loud music and the like. It sounds curmudgeonly, but I can attest to that: my hearing is pretty dismal (mostly I blame Steve Albini), and I could never pick out high-pitched whines of any sort. And why are adolescent eardrums so sensitive, anyway? What changes when you turn 20?
-- Brad Plumer 12:27 PM || ||