January 11, 2009

Not What I Meant!

My friend Francesca Mari has a great critique of the new film adaptation of Revolutionary Road:
Instead of adapting the novel, scriptwriter Justin Haythe seems to have read only the book's dialogue—to have typed up all the quotations, then deleted the document down to two hours.

Remaining faithful to a text doesn't mean lifting as many lines as possible. This is especially true with Yates, whose characters, if you were to only listen to what they say, vacillate almost exclusively between pathetic and cruel. The heart of his stories, the stuff that lobs a lump in your throat, comes from what the characters think, and then say in spite of it. Take, for instance, the book's perfectly constructed opening scene, the staging of a community play.

"The helplessly blinking cast," Yates writes, "had been afraid that they would end by making fools of themselves, and they had compounded that fear by being afraid to admit it." Gnawing on his knuckles in the front row, Frank watches his wife, initially a vision of the poised actress he fell for, devolve into the stiff, suffering creature who often sleeps on his sofa. After final curtain, he plans to tell April she was wonderful, but then watching her through a mirror as she wipes her makeup off with cold cream, he suddenly thinks twice: "'You were wonderful' might be exactly the wrong thing to say--condescending, or at the very least naïve and sentimental, and much too serious." Frank watches April through the mirror remove her makeup with cold cream. "'Well,' he said instead, 'I guess it wasn't exactly a triumph or anything, was it?'"
Haven't seen the movie yet, but I can see why that'd be a problem. Most of the book's best scenes involve a tension, usually cringe-inducing, between what the characters mean to say and what actually comes out of their mouths, either because they overthink things, or because they miscalculate how their words will come across, or because they're overcome by some fleeting childish urge to hurt, or because—most commonly—they just fail to take the other person into account. At one point, Frank plots out in his mind an entire conversation with his wife, and then gets enraged when she doesn't deliver her hoped-for lines.

That seems like a devilishly hard thing to convey in a film, assuming that there's not some hokey narrator voicing an interior monologue all the while—sort of like in A Christmas Story—especially since Yates relies on this trick so frequently. But now I'm trying to think of movies that do this well, and my brain's coming up blank.

(Image by Antony Hare)
-- Brad Plumer 4:32 PM || ||

January 10, 2009

To the Last Syllable of Recorded Time

Here's a topic that's always mystified me. Go back to the late 17th century. Newton and Leibniz had both independently developed calculus, and, when they weren't bickering over who deserved credit, they were disagreeing about the metaphysical nature of time. Newton believed that space and time were real and independent from each other. No, countered Leibniz, time and space are just two interrelated ways of ordering objects: "For space denotes, in terms of possibility, an order to things which exist at the same time, considered as existing together." Clever. But most physicists ended up siding with Newton—our basic intuition is that space and time both exist, and are separate from each other.

But then things got tricky. Once general relativity was formulated, we suddenly had this notion of space-time as a continuous fabric. In Einstein's universe, how time flows depends on one's location. In places where gravity is weaker, time runs faster. Technically, you age slightly more rapidly living in a penthouse than you would living down on the street level—think of it as a small dose of cosmic income redistribution. So that's one troubling little chink in Newton's theory.
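
(For the curious: here's a rough back-of-the-envelope version of that penthouse claim, using the standard weak-field approximation g*h/c^2 for the rate difference between two clocks separated by a height h. The 100-meter height difference and the fifty-year stay are made-up numbers, purely for illustration.)

```python
# Back-of-the-envelope gravitational time dilation: how much faster a clock
# runs at the top of a tall building than at street level, using the
# weak-field approximation (fractional rate difference ~ g * h / c^2).
# The building height and length of residence are illustrative assumptions.

g = 9.81           # surface gravity, m/s^2
c = 299_792_458    # speed of light, m/s
height = 100.0     # assumed height of the penthouse above street level, m

fractional_gain = g * height / c**2   # how much faster the upstairs clock ticks

seconds_per_year = 365.25 * 24 * 3600
years = 50                            # assumed length of residence

extra_seconds = fractional_gain * years * seconds_per_year
print(f"Fractional rate difference: {fractional_gain:.2e}")
print(f"Extra aging over {years} years: {extra_seconds * 1e6:.1f} microseconds")
```

In other words, with these made-up numbers, a lifetime in the penthouse buys you roughly seventeen extra microseconds. Cosmic income redistribution on a very modest budget.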

Then even bigger problems crop up when we burrow down into the quantum universe, where it's not clear that time is even a relevant concept. How would you measure it? There are experiments to nail down all sorts of qualities of particles—their position, their momentum, their spin—but not how they mark time, or how long they'll stay in a certain place. Or, rather, there are plenty of ways to try to measure that, but they all result in wildly different answers. Not only that, but electrons and photons don't even appear to be bound by the arrow of time; their quantum states can evolve both forward and backward. An observation of certain particles in the present can affect their past natures, and so on.

So how are we supposed to reconcile all this? Some scientists now seem to think that maybe Leibniz was right all along, and we should reconsider his argument more carefully. In a recent issue of New Scientist, Michael Brooks interviewed Carlo Rovelli, a theoretical physicist at the University of the Mediterranean, who argues that the simplest approach to time is to stop talking about how things change over time and, instead, just talk about how things relate to each other: "Rather than thinking that a pendulum oscillates with time and the hand of a clock moves in time, we would do better to consider the relationship between the position of a pendulum and the position of the hand." The notion of time is only a meaningful metaphor when dealing with, say, human experience. It's useless, Rovelli argues, for most of the universe.

That's a bit wild, and maybe we don't want to abandon Newton just yet. So one potential way to preserve time as a really-existing entity has been offered up by Lee Smolin, who argues that the reason scientists haven't been able to reconcile quantum physics and relativity—and bring the laws of the universe under one tidy grand theory—is that they aren't accounting for the fact that the laws of physics can evolve over time. In the early moments of the universe, after all, we know that the electromagnetic and weak forces were rolled up together. So why can't we see other such evolutions? Time is fundamental—it's just the laws of physics that change. (And, fair enough: There's no logical reason why the laws of physics have to be eternally true—they've only really applied for less than 14 billion years.)

Then there's the weird fact that the arrow of time always seems to move forward, as encapsulated by the second law of thermodynamics, which holds that the entropy or disorder in the universe is always increasing. That explains why cream mixes with your coffee over time—technically, it would be possible for me to dump in some cream, mix it around, and then have the two liquids sort themselves out: cream on top, coffee on bottom. But that's staggeringly unlikely, since the most probable states are some sort of mixture. Time moving in reverse—i.e., the coffee and cream "unmixing"—is very, very, very, very, unlikely. (Very.) So unlikely from a statistical perspective, in fact, that it basically doesn't happen.
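
(If you want a sense of just how unlikely, here's a toy calculation. Pretend the cup holds N molecules, each equally likely to end up in the top or bottom half; the fully sorted arrangement is then a single outcome out of 2^N possibilities. The value of N is just an order-of-magnitude guess at how many molecules a cup of coffee contains.)

```python
# Toy estimate of the odds that coffee and cream spontaneously "unmix":
# model the cup as N molecules, each equally likely to wander into the top
# or bottom half, so the fully sorted state has probability (1/2)^N.
# N below is a rough, illustrative guess, not a measured value.

import math

N = 1e23  # assumed number of molecules in the cup (order of magnitude only)

# log10 of the probability that every molecule lands on its "sorted" side
log10_probability = -N * math.log10(2)

print(f"Probability of spontaneous unmixing ~ 10^({log10_probability:.3g})")
# Prints something like 10^(-3.01e+22): a decimal point followed by
# roughly thirty billion trillion zeros before the first nonzero digit.
```

That's the statistical sense in which unmixing "basically doesn't happen": not forbidden, just unimaginably improbable.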

But that raises at least one squirm-inducing question: How did the universe, then, start out in a low-entropy, extremely orderly state (one that has been getting messier ever since) in the first place? Sean Carroll wrote a whole article in Scientific American last year about this very question and outlined one possible (and possibly wacky) solution:
This scenario, proposed in 2004 by Jennifer Chen of the University of Chicago and me, provides a provocative solution to the origin of time asymmetry in our observable universe: we see only a tiny patch of the big picture, and this larger arena is fully time-symmetric. Entropy can increase without limit through the creation of new baby universes.

Best of all, this story can be told backward and forward in time. Imagine that we start with empty space at some particular moment and watch it evolve into the future and into the past. (It goes both ways because we are not presuming a unidirectional arrow of time.) Baby universes fluctuate into existence in both directions of time, eventually emptying out and giving birth to babies of their own. On ultralarge scales, such a multiverse would look statistically symmetric with respect to time—both the past and the future would feature new universes fluctuating into life and proliferating without bound. Each of them would experience an arrow of time, but half would have an arrow that was reversed with respect to that in the others.

The idea of a universe with a backward arrow of time might seem alarming. If we met someone from such a universe, would they remember the future? Happily, there is no danger of such a rendezvous. In the scenario we are describing, the only places where time seems to run backward are enormously far back in our past—long before our big bang. In between is a broad expanse of universe in which time does not seem to run at all; almost no matter exists, and entropy does not evolve. Any beings who lived in one of these time-reversed regions would not be born old and die young—or anything else out of the ordinary. To them, time would flow in a completely conventional fashion. It is only when comparing their universe to ours that anything seems out of the ordinary—our past is their future, and vice versa. But such a comparison is purely hypothetical, as we cannot get there and they cannot come here.
Hm. These are things I don't really understand at all (feel free to drive that point home in comments!), and I wonder if it's even worth the effort to try. Probably not. The other maddening fact about time is that there's never, alas, enough of it.
-- Brad Plumer 12:10 PM || ||
Little Lefties

Are most parents closet Marxists? Caleb Crain says of course they are:
After all, most parents want their children to be far left in their early years—to share toys, to eschew the torture of siblings, to leave a clean environment behind them, to refrain from causing the extinction of the dog, to rise above coveting and hoarding, and to view the blandishments of corporate America through a lens of harsh skepticism.

But fewer parents wish for their children to carry all these virtues into adulthood. It is one thing to convince your child that no individual owns the sandbox and that it is better for all children that it is so. It is another to hope that when he grows up he will donate the family home to a workers’ collective.
That's from Crain's amusing review of Tales for Little Rebels: A Collection of Radical Children’s Literature, and I'd say it makes sense. What else is elementary school if not a marginally more humane version of the Soviet Five-Year Plans?
-- Brad Plumer 11:03 AM || ||

January 05, 2009

Doom, Sweet Doom

Oh, man. Back in October, I ran across some evidence that the supervolcano lurking beneath Yellowstone National Park, which is capable of obliterating a good chunk of the United States, might actually be losing its vigor, maybe even going dormant. Well, scratch that. Charlie Petit reports that since December 26, there's been an unusually large uptick in earthquake activity beneath Yellowstone—some 400 seismic events in all. Is the beast getting restless? Let's pray not, 'cause here's a quick recap of what doomsday would look like:

[Embedded media: the doomsday recap]

On the upside, if Yellowstone does burst, we could stop fretting about global warming for a spell, since all that ash would have a fairly sizable cooling effect...

Update: Doom averted for now! The AP added an update yesterday, reporting that the earthquakes appear to be subsiding, and that, according to one researcher, the seismic shudders "could alter some of the park's thermal features but should not raise any concern about the park's large volcano erupting anytime soon." Now I'm off to find something else to fret about.
-- Brad Plumer 9:12 PM || ||
Not Poppy, Nor Mandragora, Nor all the Drowsy Syrups of the World...

So there's the good jetlag, the kind where you return home from parts unknown, collapse at 9 p.m., wake before dawn, and are just the most productive little honeybee that's ever buzzed. And then there's the other kind, the bad jetlag—awake all night, useless as an earlobe in daylight. Right now, I have the second kind, so I've been trying to stay awake by reading about sleep. People always tell pollsters that they don't sleep enough and wish they got more. That sounds iffy to me. Do people really want more? Why not less? And how bad is a little sleep deprivation, really?

Jim Horne, who runs Loughborough University's Sleep Research Centre in the UK, explains in New Scientist that most modern folks probably get more than enough sleep, about 7 hours a night. (Presumably excluding parents with newborns.) Americans didn't actually get 9 hours back in 1913, as is sometimes claimed, and just because you snooze for 10 or 11 hours on the weekend doesn't mean you're catching up on some supposed "sleep deficit"—odds are, you're just merrily and needlessly oversleeping, in the same way a person might overeat or overdrink beyond what they need. Sloths will sleep 16 hours a day if caged and bored, versus just 10 in the wild. "Now wait a sec," the critic interjects, "that's well and good, but I must be sleep-deprived, because whenever three p.m. rolls around my eyelids get uncontrollably heavy!" Mine too. But Horne says that's just our bad luck:
My team recently investigated these questions by giving around 11,000 adults a questionnaire asking indirectly about perceived sleep shortfall. ... Half the respondents turned out to have a sleep shortfall, averaging 25 minutes a night, and around 20 per cent had excessive daytime sleepiness. However, the people with a sleep deficit were no more likely to experience daytime sleepiness than those without.
Can't decide whether that's comforting or no. On an only semi-related note, apparently the brain consumes just as much oxygen when it's "idling" as it does when problem-solving. So, uh, what's the resting brain doing, anyway? Douglas Fox reports on some theories, most of which involve daydreaming—the "idle" brain is likely doing crucial work, sorting through the vast mess of memories you've accumulated, looking at them introspectively, and deciding which ones to keep and how to classify them—good, bad, threatening, painful. "To prevent a backlog of unstored memories building up," Fox notes, "the network returns to its duties whenever it can."

So there's a (scientific!) rationale for staying up late and staring blankly at the wall during the day. That strikes me as exceedingly useful.
-- Brad Plumer 6:05 PM || ||

January 04, 2009

In Search of a Greener Laptop

It's never that surprising to hear that some companies aren't quite as eco-friendly as they claim to be, but sometimes the examples are worth examining in detail. Last week, Ben Charny reported in The Wall Street Journal that Apple Inc., which has gone out of its way to flaunt its green cred, doesn't always stack up so well against its competitors: "For example, Dell and Hewlett-Packard report buying much more clean energy than Apple; Dell 58 times more, and Hewlett-Packard five times more."

Now, clean energy is a decent metric, though a computer's impact on the environment mostly depends on how much energy it consumes after it's been purchased. On that score, MacBooks do... okay. That said, I wish these comparisons would look more broadly at product durability. A company making laptops that last five years on average is greener than one whose computers last for just three, consuming fewer resources and spewing less hazardous waste—especially since e-recycling programs are often flawed. And, as EcoGeek's Hank Green reported last year, some of Apple's newer iPods and iPhones are designed to be virtually impossible to repair, which ends up encouraging people to junk their old gadgets and buy new ones.

For a longer discussion of this topic, Giles Slade's Made to Break is a fascinating book, and some of his insights on how businesses make short-lived products to encourage excess consumption should be taken to heart when trying to figure out just how green a company really is. True, the EPA considers "product longevity" as a factor in its EPEAT labeling for electronics, but that system is voluntary and doesn't work terribly well. Recommending that computers have "modular designs," say, hasn't stopped the average lifespan of personal computers from plummeting from six years in 1997 to two years in 2005. It's likely that rethinking the whole notion of "planned obsolescence" would involve very drastic regulatory (and probably cultural) changes. Maybe that's not desirable—maybe it's a good thing that our cell phones go out of style every 12 months. In the meantime, though, we could probably stand to require better buyback and recycling programs.

While we're on the topic, the Journal also had a fantastic dissection of Dell's claims to be "carbon neutral." Turns out, when Dell's executives fling that term around, they're only talking about Dell proper—they're not talking about Dell's suppliers or the people running Dell computers. Which basically means they're talking about just 5 percent of Dell's actual carbon footprint. And even that 5 percent achieves neutrality mostly via carbon offsets, which are often quite dubious (sure, they're planting a forest—but how do we know the forest wouldn't have been planted anyway?). Well-meaning corporate initiatives are nice, but they're no substitute for better regulation and carbon pricing.

(Cross-posted to The Vine)
-- Brad Plumer 11:36 PM || ||
On the Drug Money Trail

Talk about unnerving: In this month's New York Review of Books, Marcia Angell argues that "it is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines" when it comes to drugs or medical devices. Big Pharma, she argues, has corrupted the clinical trial process too thoroughly. (Interestingly, the essay focuses primarily on drugs, although the approval processes for medical devices or, say, new surgical techniques are often even less stringent—surely big financial interests have a lot of sway there, too.)

Angell's full essay is worth taking for a spin, but note that the problems she discusses are especially acute when it comes to off-label uses of drugs. Once the FDA approves a drug for a specific purpose, doctors can legally prescribe it for any other disease they feel like. So drug companies design and bankroll often-dubious trials to "test" those other uses, pay some esteemed researcher a fat consulting fee so that he'll slap his name on the results (unless the results are unfavorable, in which case into the dustbin they go), and then lobby doctors—with gifts, if necessary—to start prescribing the drug for the new use. Oh yeah, step four: Profit!

But if all this subtle—and often not-so-subtle—corruption is hard to eliminate, as Angell says it is, why not create stricter FDA regulations on off-label use? Angell estimates that roughly half of all current prescriptions are off-label, and no doubt many of those are valid, but we've also seen, for instance, a 40-fold increase in the diagnosis of bipolar disorder among children between 1994 and 2003, with many being treated with powerful psychoactive drugs off-label—side-effects and all. It makes little sense that a drug company has to spend years and millions of dollars getting FDA approval for one use of a drug, but then all other uses are automatically kosher, no matter how little reliable data exists. Or is this off-label "loophole" useful and effective in some way I'm not grasping?

On a related note, a few weeks ago, The New York Times had a great piece on how psychiatrists were gearing up to put together the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders, the "bible" of psychiatry. Edward Shorter, a historian in the field, was quoted arguing that psychiatry is not like, say, cardiology—no one really knows the causes of various mental disorders, so "political, social, and financial" factors often drive the classifications. Financial, indeed: As Angell notes, of the 170 contributors to the DSM-IV, 95 had financial ties to drug companies.

Now, does that help explain why, as Christopher Lane reports in his new book, Shyness: How Normal Behavior Became a Sickness, ordinary shyness went from being listed as a rare "social phobia" in 1980's DSM-III to a full-fledged "social anxiety disorder" in 1994's DSM-IV? Maybe, maybe not, though GlaxoSmithKline certainly found it profitable to push the idea of social anxiety disorder on the public ("Imagine being allergic to people...") and to market its antidepressant, Paxil, as a treatment.

That's not to say the disorder's fake, or that Paxil can't help people; just that there's a set of murky judgments at work here not grounded in rigorous scientific evidence, and a little more transparency would be nice. But alas, as the Times reported, the contributors to DSM-V have all signed non-disclosure agreements. No peeking behind the curtain. True, the participants have agreed to limit their income from drug companies to "only" $10,000 a year while working on the DSM-V, though, given that drug companies potentially have billions riding on what does and does not go into the manual, it's not hard to imagine that they might try to, ah, push the envelope a wee bit.
-- Brad Plumer 6:15 PM || ||
Toooons.

Well, lookit what we got here. It is possible to embed music on your lowly little blog! Very nice, and a bouquet of lilacs dispatched to LaLa.com. Now, what the world emphatically does not need is another mp3 blog telling you that, dude, the white scruffy indie kid over here likes the new Fleet Foxes EP. So I'll pass on that. But what the world might need is more Senegalese Afro-Latin big bands from the '80s that recently (albeit only partially) reunited. Like so:

Sutukun - Orchestre Baobab
-- Brad Plumer 1:06 PM || ||
Inside the Machine

Over the holidays, I picked up an old, yellowing copy of Mike Royko's Boss, his best-selling and unauthorized—to put it lightly—1971 biography of former Chicago mayor Richard J. Daley. This may sound naïve, but I found the book incredibly eye-opening. Sure, I've always had a hazy sense of how urban machine politics worked, and I've heard a fair bit about how horrifically corrupt and mismanaged many big cities were back in the 1950s and '60s—especially if you were poor or black—but Boss doles out the gory details. And, yes, later historians have softened Royko's portrait of Daley as a corrupt, racist strongman, but as best I can discern, the book's held up reasonably well.

Daley didn't create Chicago's Democratic machine, but by the time he won his first mayoral race in 1955, he had worked in all of its critical nodes—from key budget positions to the county party chairmanship—and was able to consolidate his control over the vast apparatus like no one else before him, or since. Not even Rod Blagojevich. Not even close.

It went like this: Chicago had some 3,500 precincts, each with captains and assistant captains and underlings galore who devoted their energy to turning out votes—or stealing them, if it came to that—in exchange for cushy patronage jobs. Not just government jobs, but jobs at racetracks or public utilities or private industries—the Machine had the power to hand out all sorts of jobs to supporters, since, after all, these utilities and contractors relied on the Daley administration for contracts and business. And the Machine always delivered the votes. (True, it didn't win Illinois for Hubert Humphrey in 1968, though Royko argues that was only because Daley had lost faith in Humphrey by that point and focused on downticket races.)

At its peak, the Machine seemed practically omnipotent: Judges came up through the Machine. The top law firms were run by lawyers close to the Machine. Who else would you want handling a zoning dispute for you? And ward bosses usually used their power to enrich themselves. Companies rushed to pay premiums to insurance firms run by ward bosses, lest they fall afoul of the county assessor or zoning board. And Daley stood on top of it all. Only Daley could keep track of all the moving parts. Only Daley, who retained his party chairmanship, knew how much money was flowing in, and how much flowing out.

Most of his money came from contractors, since the hallmark of Daley's administration was building lots and lots of stuff in Chicago's downtown. The Dan Ryan Expressway. The John Hancock Building. Most of O'Hare Airport. And on and on and on. All Daley's doing. The revisionist take on Daley seems to hold that because he revitalized downtown, he saved Chicago from the blight that later afflicted, say, Detroit and Cleveland. Maybe. Another revisionist take is that Daley got things built that a non-boss mayor never could have. Maybe. But there were costs, of course. Old neighborhoods were razed in the name of "urban renewal"—including a loyal Italian ward in the Valley—and replacement housing was never built. Commonwealth Edison, which ran a hideously dirty coal plant responsible for one-third of the city's pollution, was allowed to evade pollution laws simply because a boyhood friend of Daley's sat on its board.

And Daley's tenure reads like an utter disaster when we're not talking about gleaming buildings. Royko points out the grim irony that black voters continued giving Daley his electoral majorities even though he gave them absolutely nothing in return. (Not that black voters always had a choice: Precinct captains weren't shy about issuing threats in poorer neighborhoods—vote or else you'll lose your public housing, and, yes, I think I will go into the voting booth with you...) During Daley's tenure, the black neighborhoods had some of the worst slums in the country, crumbling schools, and rampant crime—since police often focused more on extortion than crime prevention. (True, the police department did get cleaned up a bit after Daley appointed Orlando Wilson, a criminology professor at the University of California, as police chief—but Daley was forced to do this only after it was revealed that a number of his police officers were running their own burglary ring.)

Easily the most stunning part of Royko's book is his description of how Daley fended off Chicago's civil rights demonstrators—including Martin Luther King. Daley, of course, had no interest in undoing Chicago's sharply enforced neighborhood segregation. Even if he had wanted to—and there's no indication that he did—the city's white voters, who were prone to violence anytime a black person so much as walked through their lily-white enclaves, would've revolted. But Daley was able to do what the mayor of Birmingham could not and neutralize King by, essentially, co-opting him. The two would meet and Daley would insist that he was a liberal, point to all his well-meaning social programs, and pile the promises high... it was all b.s., but what could King say to that?

Another provocative bit in Royko's book was his short discussion of the Black Panthers. The black gangs operating in Chicago during the 1960s were hardly angelic, though some of them did run health clinics and school-lunch programs and broker truces between street gangs. But Daley wasn't chiefly worried about the violence. He was worried that these groups might eventually turn to politics. After all, Daley himself had, as a youth, been a member of a violent Irish gang, the Hamburg Social and Athletic Club, which eventually became a "mainstream" political organization that organized voters, elected local aldermen, and launched many a high-flying political career. It wasn't hard to envision the Black Panthers following a similar trajectory, so Daley ordered his police chief to declare war.

Like I said, I'm sure there have been plenty of reassessments of Daley since Royko's unrelentingly scathing account (which, incidentally, was banned from many a Chicago bookstore upon release by order of Hizzoner himself). But Boss is definitely worth reading. Very often, people will tell the story of urban decay like this: Cities were flourishing in the 1940s and 1950s but then fell to tatters in the late '60s with the rise of crime, integration, busing, white flight, and so on. Royko makes a very good case that the rot began long before then.
-- Brad Plumer 11:59 AM || ||
Ree-vamp! Ree-vamp!

As it tends to do, New Year's Day came and went this year, and, as always, I made a few half-hearted New Year's resolutions—that's right, lurking somewhere deep within my to-do list is a note to "research gym memberships on Google"... fortunately, that's probably as far as that will ever get. But one thing I expressly did not resolve to do was revive this blog. No sir.

But... of course there's a "but." But then an errant bookmark click sent me tumbling down to this musty old site, and, hot damn, it's been eight whole months since I've written anything here. Not that I've abstained from littering the Internet with gibberish, no—The Vine over at TNR has kept me busy doing plenty of that. But, y'know, I do sort of miss having a personal blog, so maybe I'll take a stab at a relaunch. Luckily, this site has been stripped of all its former readers, which may let me take it in a brand-new direction. Perhaps I'll try less politics, more random and extraneous crap—this could morph into a music blog, or a glorified diary of my social life, or maybe I'll just put my hands in my pocket, hold my head high, and whistle nonchalantly for post after post. La da dee...

Anyway, we'll see how long this latest blog incarnation will actually last. Happily, this isn't an official New Year's resolution or anything—nothing filed in triplicate, nothing involving solemn swears—so if it doesn't work, I can just drop it at any time, though, if that happens, I'll no doubt put up some lethargic meta-post explaining why I'm quitting yet again. And yes, it is a good question who I'm writing this post for.
-- Brad Plumer 11:37 AM || ||