Bradford Plumer<br /><br /><b>Submarines From Scratch</b> (February 17, 2010)<br /><br />I'm thinking about a new career in amateur sub-building. Here's 34-year-old Tao Xinglai showing that anyone can do it, really. All it takes is a little ingenuity and scrap metal:<br /><br /><object width="560" height="340"><param name="movie" value="http://www.youtube.com/v/MfLq1e49Ewg&hl=en_US&fs=1&"><param name="allowFullScreen" value="true"><param name="allowscriptaccess" value="always"><embed src="http://www.youtube.com/v/MfLq1e49Ewg&hl=en_US&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="560" height="340"></embed></object><br /><br />Total budget? $4,400. Of course, if we wanted to take it up a notch, there's Peter Madsen, who built the largest homemade sub in the world, the UC3 Nautilus:<br /><br /> <object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/JZX6BHMAf4w&hl=en_US&fs=1&"><param name="allowFullScreen" value="true"><param name="allowscriptaccess" value="always"><embed src="http://www.youtube.com/v/JZX6BHMAf4w&hl=en_US&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br /><br />Total cost: <a href="http://www.uc3nautilus.dk/faq.htm#faq4">"millions of Danish crowns"</a>—though it was still cheaper than a professionally built diesel submarine. Then again, it may just be easier to build your own submarine in Denmark, where anyone can construct a vessel without permission, as long as it's shorter than 24 meters. 
Meanwhile, the state of New York seems to <a href="http://abclocal.go.com/wabc/story?section=news/local&id=5537231">look less favorably</a> on makeshift subs gurgling along the channel.<br /><br /><b>Post-Apocalyptic Hard Drives</b> (February 17, 2010)<br /><br />Suppose human civilization collapses one day. That's not <span style="font-style: italic;">such</span> a zany notion—lots of past civilizations have collapsed. Maybe there's nuclear war, or giraffe flu, or maybe the Yellowstone supervolcano finally heaves and spews. It's a fat disaster for a while, but eventually things settle down. New civilizations form, people figure out electricity, and so forth. (Actually, as Kurt Cobb <a href="http://resourceinsights.blogspot.com/2008/07/could-we-start-industrial-society-from.html">argues</a>, it might not be possible to start an industrial society from scratch a second time 'round, since we've already used up most of the easiest-to-access ores and fuels. But ignore that.)<br /><br /><img src="http://farm4.static.flickr.com/3107/3171015404_31ff15041b_m.jpg" vspace="10" align="right" hspace="10" />Anyway, the question: Would historians of the future be able to examine our hard drives and servers and study what twenty-first-century folks got up to? (Surely all the best info on what caused the apocalypse will be in digital form.) How much of our data would still be intact? Apparently, <a href="http://www.newscientist.com/article/mg20527451.300-digital-doomsday-the-end-of-knowledge.html?full=true">no one really knows</a> for sure:<br /><blockquote>Hard drives were never intended for long-term storage, so they have not been subjected to the kind of tests used to estimate the lifetimes of formats like CDs. No one can be sure how long they will last. 
Kevin Murrell, a trustee of the UK's national museum of computing, recently switched on a 456 megabyte hard drive that had been powered down since the early 1980s. "We had no problems getting the data off at all," he says.<br /><br />Modern drives might not fare so well, though. The storage density on hard drives is now over 200 gigabits per square inch and still climbing fast. While today's drives have sophisticated systems for compensating for the failure of small sectors, in general the more bits of data you cram into a material, the more you lose if part of it becomes degraded or damaged. What's more, a decay process that would leave a large-scale bit of data readable could destroy some smaller-scale bits. "The jury is still out on modern discs. We won't know for another 20 years," says Murrell.<br /><br />Most important data is backed up on formats such as magnetic tape or optical discs. Unfortunately, many of those formats cannot be trusted to last even five years, says Joe Iraci, who studies the reliability of digital media at the Canadian Conservation Institute in Ottawa, Ontario.</blockquote>Actually, though, we don't even need to consider the apocalypse. The fragile state of digital storage is already causing trouble. NASA has a few people <a href="http://www.moonviews.com/">racing to recover</a> old images from its Lunar Orbiter missions in the 1960s, which are currently stored on magnetic tapes and may not be long for this world. And the National Archives <a href="http://www.technologyreview.com/web/14583/">is struggling</a> to preserve its digital records, which tend to rot faster than paper records.<br /><br />A related tale of disintegrating media comes from Larry Lessig's <a href="http://www.amazon.com/exec/obidos/tg/detail/-/0143034650?v=glance"><i>Free Culture</i></a>—though this one has a twist. There are a lot of films that were made after 1923 that have no commercial value anymore. 
They never made it to video or DVD; the reels are just collecting dust in vaults somewhere. In theory, it shouldn’t be too hard to digitize these films and put them in an archive. But alas, thanks to the <a href="http://en.wikipedia.org/wiki/Sonny_Bono_Copyright_Term_Extension_Act">Sonny Bono Copyright Term Extension Act</a> that was passed by Congress in 1998, any film made after 1923 won't enter the public domain until at least 2019.<br /><br />That means these films are still under copyright, and anyone who wanted to restore them would have to track down the copyright-holders (not always easy to do) and probably hire a lawyer. And who's going to go through <i>that</i> much trouble just to restore some obscure movie that only a few people might ever watch? Yet a lot of these older movies were produced on <a href="http://en.wikipedia.org/wiki/Cellulose_acetate_film">nitrate-based stock</a>, and they'll have dissolved by the time 2019 rolls around, leaving nothing behind but canisters of dust. It's sort of tragic.<br /><br />On the cheery side, post-apocalyptic historians will presumably have less trouble snagging copies of <i>Avatar</i> and <i>Titanic</i>—that's the sort of digital media most likely to survive, if only because there are so many copies lying around. So James Cameron will be our Aeschylus, huh?<br /><br /><b>I Don't Respond Well To Mellow</b> (February 16, 2010)<br /><br /><i>Annie Hall</i> is pretty much a perfect film. But as Daniel Kimmel describes in <a href="http://www.amazon.com/Ill-Have-What-Shes-Having/dp/1566637376"><i>I'll Have What She's Having: Behind the Scenes of the Great Romantic Comedies</i></a>, it didn't start out that way. 
The original version of the film was an epic disaster, in dire need of hacking and kneading:<br /><img src="http://media.tumblr.com/bnIhyDRF9ewja0tk6gW3MAquo1_500.jpg" align="right" vspace="10" width="250" height="174" hspace="10" /><blockquote>In the editing room, <i>Annie Hall</i> was an incoherent mess. [Co-writer] Marshall Brickman was appalled. “To tell you the truth, when I saw the rough cut of <i>Annie Hall</i>, I thought it was terrible, completely unsalvageable. It was two and a half hours long and rambled and was tangential and just endless.”<br /><br />The original version was essentially Alvy free-associating about his life and his worries. Annie (Diane Keaton) was seen briefly and then disappeared from the movie for fifteen minutes. … Even the scenes with Annie—they were already there, of course—led to fantasies and flashbacks galore. The sequence where Alvy, Annie, and Rob (Tony Roberts) head to Brooklyn originally ran ten to fifteen minutes and had many more scenes than the one to two we see in the final film. Alvy and his date from <i>Rolling Stone</i> (Shelley Duvall) spun off to a scene where they wound up in the Garden of Eden talking to God. When Alvy is arrested in Los Angeles (after playing bumper cars in the parking lot), there was a long scene of him interacting with the other prisoners in his cell.<br /><br />Like the sculptor who chips away at the block so that the statue hidden inside can emerge, Allen and [editor Ralph] Rosenblum began hacking away at the movie to see if there was something in the material worth saving. “It was clear to Woody and me that the film started moving whenever the present-tense material with him and Keaton dominated the screen, and we began cutting in the direction of that relationship,” Rosenblum later wrote. They tossed out entire sequences, tightened things up, and always kept the focus on Alvy and Annie. Even the scenes of flashbacks to Alvy’s earlier marriages were greatly shortened. 
Some characters were eliminated altogether. Said Allen, “There was a lot of material taken out of that picture that I thought was wonderfully funny. … It wasn’t what I intended to do. I didn’t sit down with Marshall Brickman and say, ‘We’re going to write a picture about a relationship.’”</blockquote>That's not overly surprising. A lot of Woody Allen's earlier films are, well, funny, but not exactly coherent—usually the plots are thin and heavily improvised, existing only so that there's <span style="font-style: italic;">something</span> to hang an endless series of jokes and gags on. (They were basically live-action versions of his <i>New Yorker</i> pieces from the time.) And the movies work because the jokes and gags are hilarious. But with <i>Annie Hall</i>, it seems he finally pushed that habit to excess, at least in his first go-round. But hey, it's a good thing he did! Otherwise his editors might never have decided that enough was enough.<br /><br /><b><i>P.S.</i></b> Though it's worth noting that sometimes the editors were <i>too</i> zealous. They had originally, for instance, cut out <a href="http://www.youtube.com/watch?v=BGPcSd7DDLk">the famous scene</a> with Annie's brother Duane. It was only later, once the film started doing well in front of test audiences and they figured eh, maybe they could afford a few indulgences here and there, that they snuck Duane back in. It's funny to think how Christopher Walken's career might've unfolded if they'd kept that bit out, since that was a big early break. Maybe he doesn't go on to do <i>The Deer Hunter</i> the following year? And then what?<br /><br /><b>Lost In Translation</b> (February 16, 2010)<br /><br />There are plenty of English-language writers who sell a lot of books overseas—it's not uncommon for even lesser-known American authors to get their novels translated into multiple languages. 
But the reverse doesn't hold nearly as well. Only a very small fraction of foreign-language authors ever manage to get published in the United States. The last two literary Nobelists, Herta Müller and J.M.G. Le Clézio, weren't widely available in English until <i>after</i> they won their prizes (at which point publishers scrambled to crank out translations). Why does the flow mainly go in one direction? It doesn't seem to be because Americans are boors. Emily Williams, a former literary scout, offers up <a href="http://publishingperspectives.com/?p=10143">a more subtle explanation</a>:<br /><img src="http://farm1.static.flickr.com/205/463801376_1186ca36ec_m.jpg" vspace="10" align="right" hspace="10" /><blockquote>There has been a hegemony for years of English-language books being translated into many other languages, a cultural phenomenon comparable (though much smaller in scale) to US dominance of the worldwide film market. Bestselling American authors like Michael Crichton and John Grisham and Danielle Steel and Stephen King have, in translation, reliably topped bestseller lists around the world. As the market for matching these authors to publishers abroad matured, it opened the door to less commercial writers and other genres (in nonfiction, for example, American business books continue to be in high demand).<br /><br />A certain savvy in picking the right American books to translate developed into a valuable editorial skill in markets abroad. Imprints and publishing strategies were then established to capitalize on books in translation. 
Foreign rights turned into a profit center for US publishers, and scouting agencies sprang up to help navigate the increasingly complex marketplace.<br /><br />The rising fortunes of US books abroad coincides with the rise of American pop culture in general, but also has to be partly attributed to a strong culture of commercial fiction… that, until quite recently, simply didn’t exist in many other countries. A foreign editor I worked with once compared US commercial fiction to Hollywood blockbusters: any one book might be better or worse overall, but there’s a certain level of craftsmanship you can depend on.</blockquote>As a result, many countries have editors and scouts who specialize in selecting U.S. books for translation, and who can develop a deep expertise in the U.S. market. On the flip side, foreign books come to the U.S. market from a whole slew of different countries and languages, so it's harder for any single editor to know one country or region well enough to have a sense for which books will do well here. That's why, says Williams, the foreign-language books that <i>do</i> get translated into English often get picked via haphazard connections—an agent here has a close tie with a particular agent in Spain (say).<br /><br />Granted, there are still far more foreign-language books translated into English each year than you or I could ever hope to read. 
But it seems likely that there are a great many undiscovered gems out there in other countries—books from Latin America or Europe that <i>could</i> catch on here and garner critical acclaim (or start a craze the way Roberto Bolaño has done recently), but simply haven't made it into English by sheer bad luck.<br /><br />(<i>Flickr photo credit: <a href="http://www.flickr.com/photos/christing/463801376/">christing-O-</a></i>)<br /><br /><b>Heavy Toonage</b> (January 29, 2010)<br /><br />Due to a series of increasingly frivolous Google searches, I just spent half an hour reading up on the history of Saturday-morning cartoons in the 1980s. (This all started with a legitimate work-related query and somehow careened out of control.) Anyway, I'm sure everyone's well aware that the big, popular cartoons of that era—"Transformers" or "G.I. Joe" or "Teenage Mutant Ninja Turtles"—were created primarily to sell action figures and boost toy sales. But the story behind their development, as best I can make out, is pretty interesting.<br /><br /><img src="http://farm1.static.flickr.com/22/29098706_afb73c46d8_m.jpg" align="right" vspace="10" hspace="10" />Back during the 1960s, Hasbro's G.I. Joe was one of the best-selling toys around, the first action-figure blockbuster. But that changed once <i>Star Wars</i> came out; suddenly, all the kids <a href="http://www.wired.com/entertainment/hollywood/magazine/15-07/trans_toy">were demanding</a> Luke Skywalker and Darth Vader dolls. Part of the shift came down to marketing: The <i>Star Wars</i> toys were being promoted by a ridiculously popular movie, while Hasbro wasn't even allowed to use animation in its G.I. Joe commercials. 
At the time, the National Association of Broadcasters had outlawed animated toy commercials, for fear that they'd blur the line between fantasy and reality for young children.<br /><br />At that point, Hasbro came up with an ingenious idea. There were strict rules about how you could advertise <span style="font-style: italic;">toys</span>, but there were fewer rules on advertising comic books. So the company's executives went to Marvel Comics and said, here, we'll give you the license for our G.I. Joe comic-book line and even spend millions of dollars of our own money advertising it. Marvel, naturally, leapt at the deal, and a best-selling series <a href="http://en.wikipedia.org/wiki/G.I._Joe_%28comics%29#Marvel_Comics">was born</a>. Hasbro, meanwhile, could finally run animated ads about G.I. Joe. And it worked: By the 1980s, G.I. Joe action figures were leaping off the shelves again. (Of course, it helped that the comic-book series was relatively well-conceived; plenty of other toy-comic tie-ins flopped.)<br /><br />After that came the cartoons. For a long time, the FCC had prevented companies from creating TV shows that centered on toys (Mattel had tried this with Hot Wheels in the 1960s and got smacked down). But by 1983, Reagan's FCC had <a href="http://www3.interscience.wiley.com/journal/119449938/abstract?CRETRY=1&SRETRY=0">relaxed this rule</a>. Pretty soon, a new wave of toy-themed shows started hitting the airwaves: "Transformers," "Thundercats," "He-Man." The trend stretched all the way down to "Teletubbies" in the 1990s. 
The cartoons instantly transformed the toy industry—the most successful, "Transformers," sold $100 million worth of merchandise in its first year.<br /><br />In his fascinating book <a href="http://www.amazon.com/Real-Toy-Story-Ruthless-Consumers/dp/0743247655"><i>The Real Toy Story</i></a>, Eric Clark argues that this new cartoon-toy symbiosis also altered how kids approached playtime: "Because the backstory and the programs dictated play, the nature of play itself changed—TV took control of the play environment. The old adage that a child should dictate what the toy does was discarded." Part of me wants to believe that that's slightly overstated, and that kids are more creative than this. (I had a Transformer or two back when I was younger, and I never had them reenact plots from the cartoons—Elizabethan-style court intrigue and playground bullying scenarios were far more common.) But Clark's surely onto something here; it's hard to imagine that there could be such a massive shift in toy advertising without large effects on child psychology, no?<br /><br />(<i>Flickr photo credit: <a href="http://www.flickr.com/photos/mcphotoworks/29098706/">Brian McCarty</a></i>)<br /><br /><b>Gruesome Tongue Twisters</b> (January 28, 2010)<br /><br />What's the world's most throat-chokingly difficult language? <a href="http://www.economist.com/displaystory.cfm?story_id=15108609&sa_campaign=facebook">Here's</a> a worthy contender:<br /><img src="http://farm4.static.flickr.com/3141/2888922183_6279f9ef68_m.jpg" align="right" vspace="10" hspace="10" /><blockquote>On balance <i>The Economist</i> would go for Tuyuca, of the eastern Amazon. It has a sound system with simple consonants and a few nasal vowels, so is not as hard to speak as Ubykh or !Xóõ. 
Like Turkish, it is heavily agglutinating, so that one word, <i>hóabãsiriga</i>, means “I do not know how to write.” Like Kwaio, it has two words for “we,” inclusive and exclusive. The noun classes (genders) in Tuyuca’s language family (including close relatives) have been estimated at between 50 and 140. Some are rare, such as “bark that does not cling closely to a tree,” which can be extended to things such as baggy trousers, or wet plywood that has begun to peel apart.<br /><br />Most fascinating is a feature that would make any journalist tremble. Tuyuca requires verb-endings on statements to show how the speaker knows something. <i>Diga ape-wi</i> means that "the boy played soccer (I know because I saw him)," while <i>diga ape-hiyi</i> means "the boy played soccer (I assume)." English can provide such information, but for Tuyuca that is an obligatory ending on the verb. Evidential languages force speakers to think hard about how they learned what they say they know.</blockquote>I'm convinced! The piece also argues that, contrary to popular belief, English is a relatively easy language to learn: "verbs hardly conjugate; nouns pluralise easily (just add 's', mostly) and there are no genders to remember."<br /><br />That may all be true, but figuring out how to <i>spell</i> words in English can be a nightmare; I do believe this is the only major language in which you can actually hold spelling bees. In Spanish, by contrast, a spelling competition would be pointless since it's trivial to write out a word once you hear it. 
Maybe the only other possible bee-language is French—except that French-speaking countries hold <a href="http://www.dicteedesameriques.com/">dictation contests</a> instead, as does Poland with its <a href="http://www.dyktando.org/">"Dyktando."</a> (These usually involve scribbling down, word for exact word, a long literary passage that's read aloud.)<br /><br />Japanese, meanwhile, is a comparatively simple language to speak, but then you've got the vast jungle of different characters. I don't know if there are bees for that, but there <span style="font-style: italic;">is</span> the <a href="http://www.kanjiclinic.com/kankeninfo.htm">Kanken</a>, the national Kanji Aptitude Test, which tests for writing, pronunciation, and stroke order, and has twelve different levels: 10 through 3, pre-2, 2, pre-1, and 1—with 10 being the easiest and 1 the hardest. As I recall, your average well-educated native speaker should be able to pass pre-2. But level 1 is no joke—you have to know about 6,000 different kanji, and, in some years, only about 200 people in the whole country earn a passing grade. The rest, I guess, just have to mutter <i>"hóabãsiriga."</i><br /><br /><b>Our Nerdiest President</b> (January 28, 2010)<br /><br /><img src="http://farm4.static.flickr.com/3187/2674217489_f9dbd30fb9_m.jpg" align="right" vspace="10" hspace="10" />Huh, I had no idea that James A. Garfield <a href="http://en.wikipedia.org/wiki/Pythagorean_theorem#Garfield.27s_proof">was credited with</a> discovering a novel proof for the Pythagorean Theorem (a clever one, too, involving trapezoids). 
He did this back in 1876, before he became our second assassinated president and while he was still serving in the House—as <a href="http://american_almanac.tripod.com/garfield.htm">he tells it</a>, the proof came up in the course of "some mathematical amusements and discussions with other [members of Congress]."<br /><br />I'm not sure if the country today is better off or worse off for the fact that House members no longer sit around pondering geometry in their spare time. In any case, it seems Garfield was also working on a pretty expansive math-education agenda before he got shot. Oh, and bonus Garfield trivia: He <a href="http://en.wikipedia.org/wiki/James_A._Garfield#Inaugural_address">took a nasty swipe</a> at the Mormon Church in his inaugural address (not that it mattered much—the speech was sort of a low-turnout affair).<br /><br /><b>Exporting Depression</b> (January 28, 2010)<br /><br />Ethan Watters has a fascinating <span style="font-style: italic;">New Scientist</span> <a href="http://www.newscientist.com/article/mg20527441.200-how-the-us-exports-its-mental-illnesses.html?full=true">report</a> on how drug companies are basically "exporting" Western notions of mental illness to other countries, in order to create new markets for their products:<br /><img src="http://farm1.static.flickr.com/27/63419996_a9eba05086_m.jpg" align="right" vspace="10" hspace="10" /><blockquote>The challenge GSK faced in the Japanese market was formidable. The nation did have a clinical diagnosis of depression—<i>utsubyo</i>—but it was nothing like the US version: it described an illness as devastating and as stigmatising as schizophrenia. Worse, at least for the sales prospects of antidepressants in Japan, it was rare. Most other states of melancholy were not considered illnesses in Japan. 
Indeed, the experience of prolonged, deep sadness was often considered to be a <i>jibyo</i>, a personal hardship that builds character. To make paroxetine a hit, it would not be enough to corner the small market for people diagnosed with <i>utsubyo</i>. As Kirmayer realised, GSK intended to influence the Japanese understanding of sadness and depression at the deepest level. ...<br /><br />Which is exactly what GSK appears to have accomplished. Promoting depression as a <i>kokoro no kaze</i>—"a cold of the soul"—GSK managed to popularise the diagnosis. In the first year on the market, sales of paroxetine in Japan brought in $100 million. By 2005, they were approaching $350 million and rising quickly.</blockquote>Now, this sort of marketing of illness is hardly novel. Back in 2005, I <a href="http://motherjones.com/politics/2005/07/licensed-ill">wrote</a> a piece for <i>Mother Jones</i> about the "corporate-sponsored creation of disease" (and, please note, that's not <i>my</i> paranoid lefty term for the practice, but a phrase taken from a Reuters Business Insight report written for Pharma execs). Drugmakers aggressively push iffy conditions like "premenstrual dysphoric disorder" in order to extend patents and boost sales. Not surprisingly, GSK again makes a cameo:<br /><blockquote>When GSK, a British drug company, wanted to repackage its best-selling antidepressant, Paxil, to treat "social anxiety disorder"—a questionable strain of social phobia that requires medication rather than therapy—it hired PR firm Cohn & Wolfe to help raise awareness about the condition. Slogans were developed: "Imagine being allergic to people." Posters featuring distraught men and women described the symptoms, which only <i>seem</i> like everyday nervousness to the untrained eye: "You blush, you sweat, shake—even find it hard to breathe. That's what social anxiety disorder feels like." 
Journalists were faxed press releases so that they could write up stories about the new disorder in the <i>New York Times</i> and <i>Wall Street Journal</i>. (Does the deadline-pressed journalist need a bit of color for her story? No problem: Patient-advocacy groups, usually funded by drug companies, can provide patients to interview.) GSK even got University of California psychiatrist Murray Stein to vouch for the drug. Stein, it turns out, was a paid consultant to seventeen drug companies, including GSK, and had run company-funded trials of Paxil to treat social anxiety disorder.</blockquote>How devious. Anyway, the international angle is new to me and grimly riveting. <a href="http://www.nytimes.com/2010/01/10/magazine/10psyche-t.html?th&emc=th">Here's</a> a <i>New York Times Magazine</i> piece by Watters that further explores the phenomenon, focusing less on the corporate angle and more on the broader fact that "we in the West have aggressively spread our modern knowledge of mental illness around the world. ... That is, we've been changing not only the treatments but also the expression of mental illness in other cultures."<br /><br /><b>Dos And Don'ts For Running A Host Club</b> (January 27, 2010)<br /><br />The Kabukicho district in Tokyo is famous for, among other sleazy wonders, its host and hostess clubs. The idea here is simple enough: In a hostess club, you have paid female employees who chat with male customers, light their cigarettes, pour drinks, sing karaoke, and generally make the men feel, I don't know, titillated. It's not a sex club—more like a flirtation club.<br /><br /><img src="http://farm2.static.flickr.com/1066/1348472892_253ab3f36f_m.jpg" align="right" vspace="10" hspace="10" />And then the host clubs are the reverse deal: Male hosts wait on female customers, listen to their sorrows, flatter them, light their cigarettes, pour drinks.... 
Since Japanese women don't typically get "waited on" by men, these clubs fill a real need. Anyway, hostess clubs have no doubt been featured in light-hearted <i>New York Times</i> pieces before. But I was reading Jake Adelstein's <a href="http://www.amazon.com/Tokyo-Vice-American-Reporter-Police/dp/0307378799"><i>Tokyo Vice</i></a>, a terrific book about the city's seamy underbelly, and there's a part where a cop explains that many of these clubs are actually horribly manipulative:<br /><blockquote>"It used to be that the only women who went to host bars were hostesses, but times have changed. What we keep seeing is college girls, sometimes even high school girls with money, who start going to these host clubs. They love the personal attention, and maybe they get infatuated with the hosts, who milk them for everything they have. The girls accumulate debts, and at some point the management introduces them to a job in the sex industry so that they can pay off their debts. Sometimes the guys running the host bars are the same guys who run the sex clubs."</blockquote>Not all host clubs run this racket—the respectable ones try to avoid plunging their customers into crippling debt—but many of the clubs are just thinly veiled organized-crime outfits. Meanwhile, here's one young host explaining the ins and outs of his job to Adelstein:<br /><blockquote>Sometimes the thing to do is to find an actor you resemble and then basically do an impression of the guy. You make the customer feel like she is with a celebrity. ... But most of the time I just say that I'm a graduate student in law at Tokyo University and I'm just hosting to pay the tuition. It makes the customer feel like she's contributing to society, not just to my wallet. ...<br /><br />You have to be able to talk to customers about almost anything, even where they send their kids to school. So I subscribe to four women's magazines to make sure I know what kinds of concerns they have. 
They also like to talk about television programs, but since I don't have time to watch TV I stay current by reading TV guides. ...<br /><br />The bad thing is that my parents hate that I do this, even if I don't plan on doing it forever. You don't have a personal life. Every day is like summer vacation, except that you don't really have freedom. You spend most of your free time waiting on customers in one way or another; sometimes you go shopping with a customer, sometimes you go to a resort with her.</blockquote>Useful tips for anyone considering a career switch.<br /><br />(<span style="font-style: italic;">Flickr photo credit: <a href="http://www.flickr.com/photos/yumyumcherry/1348472892/">yumyumcherry</a></span>)<br /><br /><b>How Many Moves Ahead?</b> (January 27, 2010)<br /><br />Famous chess players often get asked the same thing by reporters: "How many moves can <span style="font-style: italic;">you</span> see ahead?" The query cropped up, inevitable as the sunrise, in <i>Time</i>'s <a href="http://www.time.com/time/world/article/0,8599,1948809,00.html">interview</a> with 19-year-old Magnus Carlsen. (He responded, "Sometimes 15 to 20 moves ahead—but the trick is evaluating the position at the end of those calculations.") Thing is, it's sort of an odd and not very well-defined question.<br /><br /><img src="http://farm1.static.flickr.com/120/313961626_e1cda9e9f2_m.jpg" align="right" vspace="10" hspace="10" />In an endgame, with just a few pieces left on the board, sure, a good player will plot out where things are heading down to the bitter end. But in the middle of a game? There are about 40 possible moves in the average chess position, so analyzing <span style="font-style: italic;">all</span> the possibilities even just two moves out would involve looking at 2.5 million positions. No human brain can pull that off. 
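(For the curious, that 2.5 million figure is just the branching factor compounded. A back-of-the-envelope sketch in Python—the flat 40-moves-per-position figure is only a rough average, not an exact count:

```python
# With roughly 40 legal moves in a typical middlegame position,
# "two moves out" means four plies (two moves by each side),
# so exhaustive analysis touches about 40**4 positions.
BRANCHING_FACTOR = 40
PLIES = 4  # two full moves

positions = BRANCHING_FACTOR ** PLIES
print(f"{positions:,}")  # 2,560,000 -- the "2.5 million" above
```

And that's just two moves; each additional full move multiplies the count by another ~1,600.)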
Instead, recognizing patterns and assessing positions is the far more crucial skill, which is basically what Carlsen was saying, if you read his quote carefully. In <a href="http://www.amazon.com/Inner-Game-Chess-How-Calculate/dp/0812922913"><i>The Inner Game of Chess</i></a>, Andrew Soltis argues that most grandmasters don't usually gaze more than two moves ahead—they don't need to. And then there's Garry Kasparov's take on this question, in his fascinating <i>New York Review of Books</i> <a href="http://www.nybooks.com/articles/23592">essay</a> on computer chess:<br /><blockquote>As for how many moves ahead a grandmaster sees, Russkin-Gutman makes much of the answer attributed to the great Cuban world champion José Raúl Capablanca, among others: "Just one, the best one." This answer is as good or bad as any other, a pithy way of disposing with an attempt by an outsider to ask something insightful and failing to do so. It's the equivalent of asking Lance Armstrong how many times he shifts gears during the Tour de France.<br /><br />The only real answer, "It depends on the position and how much time I have," is unsatisfying. In what may have been my best tournament game at the 1999 Hoogovens tournament in the Netherlands, I visualized the winning position a full fifteen moves ahead—an unusual feat. I sacrificed a great deal of material for an attack, burning my bridges; if my calculations were faulty I would be dead lost. Although my intuition was correct and my opponent, Topalov again, failed to find the best defense under pressure, subsequent analysis showed that despite my Herculean effort I had missed a shorter route to victory. 
Capablanca's sarcasm aside, correctly evaluating a small handful of moves is far more important in human chess, and human decision-making in general, than the systematically deeper and deeper search for better moves—the number of moves "seen ahead"—that computers rely on.</blockquote>Speaking of computer chess, I wanted to bring up one of my favorite points: that, even though your average laptop chess program can now whip any grandmaster, computers still have trouble mastering the game of <i>Go</i>—even the most advanced programs get crushed by skilled children. Except that, it turns out, this talking point's now a few years obsolete. According to a recent <i>Wired</i> <a href="http://www.wired.com/wiredscience/2009/03/gobrain/">report</a>, AI <i>Go</i> players are starting to beat ranking human players using the Monte Carlo method. Just another step down the path to robot domination, I guess.<br /><br />(<span style="font-style: italic;">Flickr photo credit: <a href="http://www.flickr.com/photos/boered/313961626/">Boered</a></span>)Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5487450.post-37480337429654493102009-02-02T22:49:00.005-05:002009-02-02T23:01:23.708-05:00The Great Dinosaur WarsExcuse me, but I'm on a Wikipedia bender lately. I just can't stop clicking. So here we go: <a href="http://en.wikipedia.org/wiki/Bone_Wars">The Bone Wars</a>? This was a real thing? Oh yes. Yes it was. 
In the latter third of the 19th century, we learn, a pair of rival American paleontologists, Edward Drinker Cope and Othniel Charles Marsh, "used underhanded methods to out-compete each other in the field, resorting to bribery, theft, and destruction of bones."<br /><br /><img src="http://4.bp.blogspot.com/_7OJsIQFWYZo/SYe_RaQhXBI/AAAAAAAAAG4/GZbTQew7N0M/s400/Allosaurus-crane.jpg" width="300" align="right" height="201" hspace="10" vspace="10" />This wasn't as gruesome as it sounds: The rivalry <span style="font-style: italic;">did</span> spur a race to excavate, which in turn led to the discovery of 142 new species of dinosaur, including childhood favs like Triceratops, Stegosaurus, and Allosaurus. A word of appreciation, by the way, for <a href="http://en.wikipedia.org/wiki/Allosaurus">Allosaurus</a>: The dinosaur books I gobbled up as a kid were usually divided into three eras: the Triassic (lame chicken-sized dinosaurs), Jurassic (now <i>these</i> were terrible lizards…) and finally the glorious Cretaceous (oh <i>hell</i> yes). The Allosaurus was a late Jurassic beast, usually shown scarfing down chunks of Brontosaurus meat, and he signified a child's very first encounter with the truly massive, peerless carnivores of yore—dominating the imagination for a brief while until you flipped a few pages and stumbled across the Cretaceous-era Tyrannosaurus Rex, king of all predators. (I understand now they have <a href="http://en.wikipedia.org/wiki/Spinosaurus">Spinosaurus</a> and other gigantic brutes that put T. Rex to shame—well, T. 
Rex is still top lizard in my book.)<br /><br />Anyway, back to our dueling paleontologists:<br /><blockquote>On one occasion, the two scientists had gone on a fossil-collecting expedition to Cope's marl pits in New Jersey, where William Parker Foulke had discovered the holotype specimen of <i>Hadrosaurus foulkii</i>, described by the paleontologist Joseph Leidy; this was one of the first American dinosaur finds, and the pits were still rich with fossils.<br /><br />Though the two parted amicably, Marsh secretly bribed the pit operators to divert the fossils they were discovering to him, instead of Cope. The two began attacking each other in papers and publications, and their personal relations soured. Marsh humiliated Cope by pointing out his reconstruction of the plesiosaur <i>Elasmosaurus</i> was flawed, with the head placed where the tail should have been. ... Cope tried to cover up his mistake by purchasing every copy he could find of the journal it was published in.</blockquote>In the end, Marsh won the Bone Wars, digging up and naming 80 new species of dinosaur, versus Cope's piteous 56. But paleontology itself was the real loser—not only was the profession's good name blighted for decades by their tomfoolery, but "the reported use of dynamite and sabotage by employees of both men destroyed or buried hundreds of potentially critical fossil remains." Oh dear.<br /><br />Even worse, the two miscreants would often hastily <span style="font-style: italic;">slap</span> all sorts of bones together in their race to unveil new species, and they often assembled incorrect skeletons that sowed confusion for decades. 
Marsh, it turns out, was the wiseguy who fit the wrong skull on a looming sauropod's torso and named his creation <i>Brontosaurus</i> (it later had to be rechristened <i>Apatosaurus</i>, once it was given its correct head—the real tragedy is that the new name was clumsier and not quite so fearsome: "Apatosaurus" means "deceptive lizard"; right, uh-huh, like there's anything deceptive about that dude.)Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-5487450.post-20645860932164790012009-02-02T11:08:00.004-05:002009-02-02T11:15:50.176-05:00Outsourcing the BrainIn a chat with <i>The Philosopher's Magazine</i>, David Chalmers <a href="http://scienceblogs.com/cortex/2009/01/the_iphone_mind.php">lays out</a> his theory of the "extended mind," arguing that our cognitive systems are more than just what's crammed inside that gray matter in our skulls. The mind should be thought of as a larger, extended system that can encompass even gadgets like the iPhone:<br /><img src="http://3.media.tumblr.com/bnIhyDRF9jg4ohoecJj7Pi9Lo1_500.jpg" width="250" height="171" align="right" hspace=10 vspace=10><blockquote>"The key idea is that when bits of the environment are hooked up to your cognitive system in the right way, they are, in effect, part of the mind, part of the cognitive system. So, say I'm rearranging Scrabble tiles on a rack. This is very close to being analogous to the situation when I'm doing an anagram in my head. In one case the representations are out in the world, in the other case they're in here. We say doing an anagram on a rack ought be regarded as a cognitive process, a process of the mind, even though it's out there in the world."<br /><br />This is where the iPhone comes in, as a more contemporary example of how the extended mind works.<br /><br />"A whole lot of my cognitive activities and my brain functions have now been uploaded into my iPhone. It stores a whole lot of my beliefs, phone numbers, addresses, whatever. It acts as my memory for these things. 
It's always there when I need it." <br /><br />Chalmers even claims it holds some of his desires.<br /><br />"I have a list of all of my favorite dishes at the restaurant we go to all the time in Canberra. I say, OK, what are we going to order? Well, I'll pull up the iPhone—these are the dishes we like here. It's the repository of my desires, my plans. There's a calendar, there's an iPhone calculator, and so on. It's even got a little decision maker that comes up, yes or no."</blockquote>Fine, I buy it. (Scrabble racks!) The <a href="http://consc.net/papers/extended.html">original paper</a> that Chalmers and Andy Clark wrote on the subject invented an Alzheimer's patient named Otto who jots things down in his notebook so that he won't forget them. When you or I need to conjure up an address or name, we just summon it from some musty alcove in our memory. When Otto needs a fact, he flips through his spiral notebook. We consider memory part of the mind, so why can't we say that Otto's notebook is part of <i>his</i> mind? What's the difference? It's a short hop to declaring that an iPhone can be part of one's extended mind, too.<br /><br />Maybe all this just means that "mind" is an unduly foggy term. Let's all thank the relevant deities I'm not a philosopher and don't have to tear hair over this. It is odd, though, how memory works in the Internet age. When I'm writing about energy issues, let's say, then instead of learning all the nitty-gritty details about how utility decoupling works or what the latest data on snowfall in Greenland says, I never bother memorizing the details. Instead, I'll just get a feel for the main concepts and remember <i>where</i> on the Internet I can find more information in a pinch.<br /><br />Now, sure, I do the same thing with books or notes, but I rely on this method <i>much</i> more often now—and, of course, the Internet's always available, making the need to memorize stuff even less acute. 
My head's filled with fewer actual facts and more search terms or heuristics. Everyone does it. Emily Gould recently wrote an <a href="http://www.emilymagazine.com/?p=417">indispensable post</a> about Mind Google, which works like this: Rather than reaching instinctively for your iPhone to settle some trivia dispute with your friends (who did the voice of Liz in <i>Garfield: A Tale of Two Kitties</i> again?), you just rack your futile brain about it and then, later, when you're in the bathroom gazing at the tiles and thinking about something else entirely, the answer magically slaps you in the face. ("Jennifer Love Hewitt—of freaking <i>course</i>!")<br /><br />So some facts are still up in the old noggin. I mean, it's not like you can just do <i>without</i> facts altogether. It's just more efficient to outsource a larger chunk of stuff to that great hivemind on the Web. Probably a net plus. Though I wonder if someone like (oh, say) Thomas Pynchon could've written <i>Gravity's Rainbow</i> if instead of cramming all that stuff about rocket science and German ballads and Plato and god knows what else in his skull, he was always saying to himself, "Eh, I don't need to remember <i>this</i>, I'll just look it up on Wikipedia later..."<br /><br />One more thing about the extended mind. It ought to include other <i>people</i>, too. Shouldn't it? I know I have a bunch of stories and anecdotes I can't tell—or at least tell well—without some of the other participants around to help me fill in the details and color. In effect, I have to recreate the memory with someone else. I read somewhere once a devastating article about how older widows essentially lose a large chunk of their memory after their partner dies, because there are many events they could only recollect "together," with their spouse. No, wait, maybe this was in a novel? Or a <i>This American Life </i>podcast? 
Oy, my brain's a barren wasteland—off to wheedle the answer out of the Internet.Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-5487450.post-36321915130286945432009-02-02T10:58:00.003-05:002009-02-02T11:00:42.866-05:00It's Hue You Know <img src="http://7.media.tumblr.com/bnIhyDRF9jg2qlohW5Kqd4Lqo1_500.jpg"><br /><br />Here's a good Monday diversion: Wikipedia's <a href="http://en.wikipedia.org/wiki/List_of_colors">master list of color names</a>. Learn your cobalts and ceruleans and cyans. Some of the terms on this list are fairly obscure: Would you ever, in a million years, have guessed what color "Razzmatazz" is? (It's a sort of magenta.) But some of the names are hilariously perfect—"old rose" denotes the hue of aging, sickly rose petals. And some of the terms made me realize how easily I could distinguish shades I hadn't even <i>realized</i> were distinguishable until I saw the names for them: I could picture right away the difference between "Islamic green" (think Saudi flag) and "shamrock green."<br /><br />More clicking around... There's also a <a href="http://en.wikipedia.org/wiki/List_of_Crayola_crayon_colors">complete list</a> of every single crayon color that Crayola has seen fit to christen over the years. Some of the names have been changed, sometimes mysteriously so. In 1990, "maize" (a childhood favorite of mine, handy for drawing golden retrievers and buried sea-treasure) was replaced by the weedier name "dandelion." The color itself changed slightly, too: Appropriately, dandelion looks more like the drab patches of yellow you'd see on an abandoned lot while maize evokes sunny Ukrainian wheat fields. Why'd they get rid of it?<br /><br />Some of the Crayola color names are frivolous and totally inapt: I have no use for "laser lemon" or "Fuzzy Wuzzy brown" or even "radical red" (which, perhaps to protect kids from creeping Bolshevism, actually denotes a watery hue that's more pinkish than Pinko). 
But some of the Crayola names should be added to the original master list of English color names—"Granny Smith green" is an amazing term, instantly visualized, and as best I can tell has no analogue on the grown-ups' list of colors.<br /><br />A few clicks later, we learn that Hungarian is the <i>only</i> known language to have <a href="http://en.wikipedia.org/wiki/Hungarian_language#Two_words_for_.22red.22">two "basic" words for red</a>, "piros" and "vörös." Now, it's not so unusual for a language to have a different set of basic color terms than English does: For instance, both "blue" and "azure" are basic color terms in Russian, while only "blue" is a basic color term in English (in the sense that native English speakers consider azure a <i>type</i> of blue, which is not the case with, say, green). What makes the Hungarian case odd is that red is <i>the</i> most basic color—it's the third color languages acquire a word for, after black and white. Also, there's this queer business:<br /><blockquote>* <b>Expressions where "red" typically translates to "piros": </b>a red road sign, the red line of Budapest Metro, a holiday shown in red in the calendar, ruddy complexion, the red nose of a clown, some red flowers (those of a neutral nature, e.g. tulips), red peppers and paprika, red card suits (hearts and diamonds), red traffic lights, red stripes on a flag, etc.<br /><br />* <b>Expressions where "red" typically translates to "vörös":</b> red army, red wine, red carpet (for receiving important guests), red hair or beard, red lion (the mythical animal), the Red Cross, the novel The Red and the Black, the Red Sea, redshift, red giant, red blood cells, red oak, some red flowers (those with passionate connotations, e.g. 
roses), red fox, names of ferric and other red minerals, red copper, rust, red phosphorus, the color of blushing with anger or shame, etc.</blockquote>You know, those lists made sense for a few short moments, and then I was lost again.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5487450.post-6635037139826897012009-02-01T11:37:00.001-05:002009-02-03T10:45:20.145-05:00Recycled Cities, Prefab HomesOver at <i>The Nation</i>, architect Teddy Cruz <a href="http://www.thenation.com/doc/20090216/cruz">walks us through</a> Tijuana, which has been built, in large part, using waste materials from neighboring San Diego—garage doors, box crates, old tires, even abandoned post-World War II bungalows that are loaded onto trailers and shipped south. Who knew you could recycle an entire city? There's even a YouTube clip, by filmmaker Laura Hanna: <br /><br /> <object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/UVlOWZfaat0&color1=0xb1b1b1&color2=0xcfcfcf&feature=player_embedded&fs=1"></param><param name="allowFullScreen" value="true"></param><embed src="http://www.youtube.com/v/UVlOWZfaat0&color1=0xb1b1b1&color2=0xcfcfcf&feature=player_embedded&fs=1" type="application/x-shockwave-flash" allowfullscreen="true" width="425" height="344"></embed></object><br /><br />That video, incidentally, was included in the <i>Home Delivery: Fabricating the Modern Dwelling</i> exhibit at MoMA, which was <a href="http://www.tnr.com/currentissue/story.html?id=febee987-a7bd-48e7-bef2-0c826bcd2558">reviewed</a> by architecture critic Sarah Williams Goldhagen in the current issue of <i>The New Republic</i>.<br /><br />Goldhagen's review, by the way, is provocative and very much worth reading. She observes that, compared with other countries, prefabricated homes are curiously rare in the United States. 
(In Sweden, by contrast, 85 percent of homes are largely prefab, and yes, Ikea offers models.) In theory, mass-produced homes could be cheaper, the quality could be more consistent, and—here's the bonus environmental angle—they're likely to be a <a href="http://features.csmonitor.com/environment/2008/08/13/factory-built-homes-may-be-greener/">great deal greener</a> than the usual contractor-built ("stick-built") variety. There's no reason prefabs have to be hideous, either, or shoddy, or drearily uniform in appearance. Trouble is, building codes vary so widely from town to town in this country that it's impractical for prefab homes to take hold. Is that a good thing? Goldhagen says probably not:<br /><blockquote>What these exhibitions demonstrated is that there is no dearth of splendid ideas for prefabricated mass-produced housing. Were the best of these ideas adopted by the market—and in the right climate, more good ideas would be forthcoming—there is no doubt that the bulk of newly constructed houses would be less toxic to the environment, better built, and last longer with fewer repairs. They would more fully and attractively accommodate the ways people currently live.<br /><br /><img src="http://1.bp.blogspot.com/_7OJsIQFWYZo/SYhldVat-BI/AAAAAAAAAHA/yxd9GdS2-Cc/s400/2597122370_ab0d45ed3d_m.jpg" vspace="10" align="right" hspace="10">Why do we continue to settle for residential litter in ever-more-degrading landscapes? That is the question these exhibitions fail to address. But of course it is not an architectural question. [Without new] social policies, better ideas by savvier architects will change little. For quality affordable mass-produced housing to be built, we need to create different conditions for a mass market. 
A new legislative structure must clear away the obstacles presented by non-standard, municipally controlled building codes and create enforceable national standards for prefab-friendly, environmentally responsible manufacturing and construction practices. Incentives must be offered so that the entrenched and intransigent construction industry, which has made plenty of money on its poorly conceived, shoddily built, environmentally toxic houses, will re-configure itself.<br /><br />If the necessary legislation were passed and new market incentives put in place, and the designers and manufacturers of prefabricated homes made all the real innovations in quality and reduction of price that the automobile industry has made since the Model T, who would walk away from a better designed and better built home for less money?</blockquote>Et voilà—the future of homebuilding in the United States? I have no clue, but assuming energy and environmental issues continue to be central in the coming years (and why wouldn't they be?), I'd expect the question to bubble up now and again. 
Greening the construction industry and our built environment, after all, pretty much <i>has</i> to be a big part of any push to nudge down those greenhouse-gas emissions.<br /><br />(<i>Cross-posted to <a href="http://blogs.tnr.com/tnr/blogs/environmentandenergy/archive/2009/01/31/recycled-cities-prefab-homes.aspx">The Vine</a></i>)Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5487450.post-48464556448366200122009-01-11T16:32:00.005-05:002009-01-12T10:13:27.602-05:00Not What I Meant!My friend Francesca Mari has a <a href="http://blogs.villagevoice.com/music/archives/2008/12/i_guess_it_wasn.php">great critique</a> of the new film adaptation of <i>Revolutionary Road</i>:<br /><blockquote>Instead of adapting the novel, scriptwriter Justin Haythe seems to have read only the book's dialogue—to have typed up all the quotations, then deleted the document down to two hours.<br /><br /><img src="http://farm4.static.flickr.com/3065/3039716175_611b486241.jpg?v=0" align="right" hspace=10 vspace=10 width=250 height=142>Remaining faithful to a text doesn't mean lifting as many lines as possible. This is especially true with Yates, whose characters, if you were to only listen to what they say, vacillate almost exclusively between pathetic and cruel. The heart of his stories, the stuff that lobs a lump in your throat, comes from what the characters think, and then say in spite of it. Take, for instance, the book's perfectly constructed opening scene, the staging of a community play.<br /><br />"The helplessly blinking cast," Yates writes, "had been afraid that they would end by making fools of themselves, and they had compounded that fear by being afraid to admit it." Gnawing on his knuckles in the front row, Frank watches his wife, initially a vision of the poised actress he fell for, devolve into the stiff, suffering creature who often sleeps on his sofa. 
After final curtain, he plans to tell April she was wonderful, but then watching her through a mirror as she wipes her makeup off with cold cream, he suddenly thinks twice: "'You were wonderful' might be exactly the wrong thing to say—condescending, or at the very least naïve and sentimental, and much too serious." "'Well,' he said instead, 'I guess it wasn't exactly a triumph or anything, was it?'"</blockquote>Haven't seen the movie yet, but I can see why that'd be a problem. Most of the book's best scenes involve a tension, usually cringe-inducing, between what the characters <i>mean</i> to say and what actually comes out of their mouths, either because they overthink things, or because they miscalculate how their words will come across, or because they're overcome by some fleeting childish urge to hurt, or because—most commonly—they just fail to take the other person into account. At one point, Frank plots out in his mind an entire conversation with his wife, and then gets enraged when she doesn't deliver her hoped-for lines. <br /><br />That seems like a devilishly hard thing to convey in a film, assuming that there's not some hokey narrator voicing an interior monologue all the while—sort of like in <i>A Christmas Story</i>—especially since Yates relies on this trick so frequently. But now I'm trying to think of movies that do this well, and my brain's coming up blank.<br /><br />(<i>Image by <a href="http://www.flickr.com/photos/antonyhare/3039716175/">Antony Hare</a></i>)Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5487450.post-24247956160114797002009-01-10T12:10:00.002-05:002009-01-10T18:08:10.208-05:00To the Last Syllable of Recorded TimeHere's a topic that's always mystified me. Go back to the late 17th century. 
Newton and Leibniz had both independently developed calculus, and, when they weren't bickering over who deserved credit, they were disagreeing about the metaphysical nature of time. Newton believed that space and time were real and independent from each other. No, <a href="http://www.kirjasto.sci.fi/leibnitz.htm">countered</a> Leibniz, time and space are just two interrelated ways of ordering objects: "For space denotes, in terms of possibility, an order to things which exist at the same time, considered as existing together." Clever. But most physicists ended up siding with Newton—our basic intuition is that space and time both exist, and are separate from each other.<br /><br />But then things got tricky. Once general relativity was formulated, we suddenly had this notion of space-time as a continuous fabric. In Einstein's universe, how time flows depends on one's location. In places where gravity is weaker, time runs faster. Technically, you age slightly <a href="http://en.wikipedia.org/wiki/Gravitational_time_dilation">more rapidly</a> living in a penthouse than you would living down on the street level—think of it as a small dose of cosmic income redistribution. So that's one troubling little chink in Newton's theory.<br /><br /><img src="http://data.tumblr.com/bnIhyDRF9gs21536EUuSaMLFo1_400.jpg" align="right" height="333" hspace="10" vspace="10" width="217" />Then we get even bigger problems burrowing down into the quantum universe, where it's not clear that time is even a relevant concept. How would you measure it? There are experiments to nail down all sorts of qualities about particles—their position, their momentum, their spin—but not how they mark time, or how long they'll stay in a certain place. Or, rather, there are plenty of ways to <i>try</i> to measure that, but all result in wildly different answers. 
Not only that, but electrons and photons don't even appear to be bound by the arrow of time; their quantum states can evolve both forward and backward. An observation of certain particles in the present can affect their past natures, and so on.<br /><br />So how are we supposed to reconcile all this? Some scientists now seem to think that maybe Leibniz was right all along, and we should reconsider his argument more carefully. In a recent issue of <i>New Scientist</i>, Michael Brooks <a href="http://www.newscientist.com/article/mg20026831.500-what-makes-the-universe-tick.html?full=true">interviewed</a> Carlo Rovelli, a theoretical physicist at the University of the Mediterranean, who argues that the simplest approach to time is to stop talking about how things <i>change over time</i> and, instead, just talk about how things relate to each other: "Rather than thinking that a pendulum oscillates with time and the hand of a clock moves in time, we would do better to consider the relationship between the position of a pendulum and the position of the hand." The notion of time is only a meaningful metaphor when dealing with, say, human experience. It's useless, Rovelli argues, for most of the universe.<br /><br />That's a bit wild, and maybe we don't want to abandon Newton just yet. So one potential way to preserve time as a really-existing entity has been offered up by Lee Smolin, who argues that the reason scientists haven't been able to reconcile quantum physics and relativity—and bring the laws of the universe under one tidy grand theory—is that they aren't accounting for the fact that the laws of physics can evolve over time. In the early moments of the universe, after all, we know that the electromagnetic and weak forces were rolled up together. So why can't we see other such evolutions? Time is fundamental—it's just the laws of physics that change. 
(And, fair enough: There's no <i>logical</i> reason why the laws of physics have to be eternally true—they've only really applied for less than 14 billion years.)<br /><br />Then there's the weird fact that the arrow of time always seems to move forward, as encapsulated by the second law of thermodynamics, which holds that the entropy or disorder in the universe is always increasing. That explains why cream mixes with your coffee over time—technically, it would be possible for me to dump in some cream, mix it around, and then have the two liquids sort themselves out: cream on top, coffee on bottom. But this is <i>staggeringly</i> unlikely from a statistical perspective, and the most probable states are some sort of mixture. Time moving in reverse—i.e., the cream and coffee "unmixing"—is very, very, very, very unlikely. (Very.) So unlikely, in fact, that it basically doesn't happen. <br /><br />But that raises at least one squirm-inducing question: How did the universe, then, start out in a low-entropy, extremely orderly state (which has been getting messier ever since) in the first place? Sean Carroll <a href="http://www.sciam.com/article.cfm?id=the-cosmic-origins-of-times-arrow&print=true">wrote a whole article</a> in <i>Scientific American</i> earlier this year about this and outlined one possible (and possibly wacky) solution:<br /><blockquote>This scenario, proposed in 2004 by Jennifer Chen of the University of Chicago and me, provides a provocative solution to the origin of time asymmetry in our observable universe: we see only a tiny patch of the big picture, and this larger arena is fully time-symmetric. Entropy can increase without limit through the creation of new baby universes.<br /><br />Best of all, this story can be told backward and forward in time. Imagine that we start with empty space at some particular moment and watch it evolve into the future and into the past. 
(It goes both ways because we are not presuming a unidirectional arrow of time.) Baby universes fluctuate into existence in both directions of time, eventually emptying out and giving birth to babies of their own. On ultralarge scales, such a multiverse would look statistically symmetric with respect to time—both the past and the future would feature new universes fluctuating into life and proliferating without bound. Each of them would experience an arrow of time, but half would have an arrow that was reversed with respect to that in the others.<br /><br />The idea of a universe with a backward arrow of time might seem alarming. If we met someone from such a universe, would they remember the future? Happily, there is no danger of such a rendezvous. In the scenario we are describing, the only places where time seems to run backward are enormously far back in our past—long before our big bang. In between is a broad expanse of universe in which time does not seem to run at all; almost no matter exists, and entropy does not evolve. Any beings who lived in one of these time-reversed regions would not be born old and die young—or anything else out of the ordinary. To them, time would flow in a completely conventional fashion. It is only when comparing their universe to ours that anything seems out of the ordinary—our past is their future, and vice versa. But such a comparison is purely hypothetical, as we cannot get there and they cannot come here.</blockquote>Hm. These are things I don't really understand at all (feel free to drive that point home in comments!) and wonder if it's even worth the effort to try. Probably not. The other maddening fact about time is that there's not, alas, ever enough of it.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5487450.post-90227884149033011032009-01-10T11:03:00.001-05:002009-01-10T11:05:37.402-05:00Little LeftiesAre most parents closet Marxists? 
Caleb Crain <a href="http://www.nytimes.com/2009/01/11/books/review/Crain-t.html?_r=1">says</a> of course they are:<br /><blockquote>After all, most parents want their children to be far left in their early years—to share toys, to eschew the torture of siblings, to leave a clean environment behind them, to refrain from causing the extinction of the dog, to rise above coveting and hoarding, and to view the blandishments of corporate America through a lens of harsh skepticism. <br /><br />But fewer parents wish for their children to carry all these virtues into adulthood. It is one thing to convince your child that no individual owns the sandbox and that it is better for all children that it is so. It is another to hope that when he grows up he will donate the family home to a workers’ collective.</blockquote>That's from Crain's amusing review of <i>Tales for Little Rebels: A Collection of Radical Children’s Literature</i>, and I'd say it makes sense. What else is elementary school if not a marginally more humane version of the Soviet Five-Year Plans?Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5487450.post-6633611372356729692009-01-05T21:12:00.004-05:002009-01-08T12:42:39.037-05:00Doom, Sweet DoomOh, man. Back in October, I <a href="http://blogs.tnr.com/tnr/blogs/environmentandenergy/archive/2008/10/16/supervolcanoes-pfft.aspx" mce_href="http://blogs.tnr.com/tnr/blogs/environmentandenergy/archive/2008/10/16/supervolcanoes-pfft.aspx">ran across</a> some evidence that the supervolcano lurking beneath Yellowstone National Park capable of obliterating a good chunk of the United States might actually be losing its vigor, maybe even going dormant. Well, scratch that. Charlie Petit <a href="http://ksjtracker.mit.edu/?p=8257" mce_href="http://ksjtracker.mit.edu/?p=8257">reports</a> that since December 26, there's been an unusually large uptick in earthquake activity beneath Yellowstone—some 400 seismic events in all. Is the beast getting restless? 
Let's pray not, 'cause here's a quick recap of what doomsday would look like:<br /><br /> <object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/zh9zVXUv-Fs&hl=en&fs=1"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/zh9zVXUv-Fs&hl=en&fs=1" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br /><br />On the upside, if Yellowstone <i>does</i> burst, we could stop fretting about global warming for a spell, since all that ash would have a <a href="http://earthobservatory.nasa.gov/Features/Volcano/">fairly sizable</a> cooling effect...<br /><br /><b><i>Update:</i></b> Doom averted for now! The AP <a href="http://seattlepi.nwsource.com/local/6420ap_wy_yellowstone_quakes.html">added an update</a> yesterday, reporting that the earthquakes appear to be subsiding, and that, according to one researcher, the seismic shudders "could alter some of the park's thermal features but should not raise any concern about the park's large volcano erupting anytime soon." Now I'm off to find something else to fret about.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5487450.post-64645481792595971602009-01-05T18:05:00.002-05:002009-01-05T18:15:11.027-05:00Not Poppy, Nor Mandragora, Nor all the Drowsy Syrups of the World...So there's the good jetlag, the kind where you return home from parts unknown, collapse at 9 p.m., wake before dawn, and are just the most productive little honeybee that's ever buzzed. And then there's the other kind, the bad jetlag—awake all night, useless as an earlobe in daylight. Right now, I have the second kind, so I've been trying to stay awake by reading about sleep. People always tell pollsters that they don't sleep enough and wish they got more. That sounds iffy to me. Do people <i>really</i> want more? Why not less? 
And how bad is a little sleep deprivation, really? <br /><br /><a href="http://www.flickr.com/photos/jewelzybug/3169966291/"><img src="http://farm4.static.flickr.com/3291/3169966291_cb2fe4b65d.jpg?v=0" align="right" hspace=10 vspace=10 width=250 height=187 border=0></a>Jim Horne, who runs the University of Loughborough's Sleep Research Centre in the UK, <a href="http://www.newscientist.com/article/mg20026781.600-time-to-wake-up-to-the-facts-about-sleep.html?full=true">explains</a> in <i>New Scientist</i> that most modern folks probably get more than enough sleep, about 7 hours a night. (Presumably excluding parents with newborns.) Americans didn't actually get 9 hours back in 1913, as is sometimes claimed, and just because you snooze for 10 or 11 hours on the weekend doesn't mean you're catching up on some supposed "sleep deficit"—odds are, you're just merrily and needlessly oversleeping, in the same way a person might overeat or overdrink beyond what they need. Sloths will sleep 16 hours a day if caged and bored, versus just 10 in the wild. "Now wait a sec," the critic interjects, "That's well and good, but I <i>must</i> be sleep-deprived, because whenever three p.m. rolls round my eyelids get uncontrollably heavy!" Mine too. But Horne says that's just our bad luck:<br /><blockquote>My team recently investigated these questions by giving around 11,000 adults a questionnaire asking indirectly about perceived sleep shortfall. ... Half the respondents turned out to have a sleep shortfall, averaging 25 minutes a night, and around 20 per cent had excessive daytime sleepiness. However, the people with a sleep deficit were no more likely to experience daytime sleepiness than those without.</blockquote>Can't decide whether that's comforting or no. On an only semi-related note, apparently the brain consumes just as much oxygen when it's "idling" as it does when problem-solving. So, uh, what's the resting brain doing, anyway? 
Douglas Fox <a href="http://www.newscientist.com/article/mg20026811.500-the-secret-life-of-the-brain.html?full=true">reports</a> on some theories, most of which involve daydreaming—the "idle" brain is likely doing crucial work, sorting through your vast mess of memories, looking at them introspectively, and deciding which ones to keep and how to classify them—good, bad, threatening, painful. "To prevent a backlog of unstored memories building up," Fox notes, "the network returns to its duties whenever it can." <br /><br />So there's a (scientific!) rationale for staying up late <i>and</i> staring blankly at the wall during the day. That strikes me as exceedingly useful.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5487450.post-24483045677895381682009-01-04T23:36:00.004-05:002009-01-05T00:53:41.713-05:00In Search of a Greener LaptopIt's never <i>that</i> surprising to hear that some companies aren't quite as eco-friendly as they claim to be, but sometimes the examples are worth examining in detail. Last week, Ben Charny <a href="http://online.wsj.com/article/SB123066532721343231.html">reported</a> in <i>The Wall Street Journal</i> that Apple Inc., which has gone out of its way to flaunt its green cred, doesn't always stack up so well to its competitors: "For example, Dell and Hewlett-Packard report buying much more clean energy than Apple; Dell 58 times more, and Hewlett-Packard five times more." <br /><br /><a href=" http://flickr.com/photos/zilpho/2994217039/"><img src="http://farm4.static.flickr.com/3226/2994217039_8f5ec8ec3b_m.jpg" width=180 height=240 align="right" hspace=10 vspace=10 border=0></a>Now, clean energy is a decent metric, though a computer's impact on the environment mostly depends on how much energy it consumes <i>after</i> it's been purchased. On that score, MacBooks do... okay. That said, I wish these comparisons would look more broadly at product <i>durability</i>. 
A company making laptops that last five years on average is greener than one whose computers last for just three, because it consumes fewer resources and spews less hazardous waste—especially since e-recycling programs are <a href="http://blogs.tnr.com/tnr/blogs/environmentandenergy/archive/2008/11/10/the-toxic-e-waste-racket.aspx">often flawed</a>. And, as <i>EcoGeek</i>'s Hank Green <a href=" http://www.ecogeek.org/content/view/1306/">reported</a> last year, some of Apple's newer iPods and iPhones are designed to be virtually impossible to repair, which ends up encouraging people to junk their old gadgets and buy new ones. <br /><br />For a longer discussion of this topic, Giles Slade's <a href="http://www.amazon.com/Made-Break-Technology-Obsolescence-America/dp/0674022033"><i>Made to Break</i></a> is a fascinating book, and some of his insights on how businesses make short-lived products to encourage excess consumption should be taken to heart when trying to figure out just how green a company really is. True, the EPA considers "product longevity" as a factor in its <a href="http://www.epeat.net/Criteria.aspx#criteriatable">EPEAT labeling</a> for electronics, but that system is voluntary and doesn't work terribly well. Recommending that computers have "modular designs," say, hasn't stopped the average lifespan of personal computers from <a href="http://sustainability.stanford.edu/projects/Green%20My%20Apple-%20Controversy.doc">plummeting</a> from six years in 1997 to two years in 2005. It's likely that rethinking the whole notion of "planned obsolescence" would involve very drastic regulatory (and probably cultural) changes. Maybe that's not desirable—maybe it's a <i>good</i> thing that our cell phones go out of style every 12 months. 
In the meantime, though, we could probably stand to require better buyback and recycling programs.<br /><br />While we're on the topic, the <i>Journal</i> also had a <a href="http://online.wsj.com/article/SB123059880241541259.html">fantastic dissection</a> of Dell's claims to be "carbon neutral." Turns out, when Dell's executives fling that term around, they're only talking about Dell proper—they're <i>not</i> talking about Dell's suppliers or the people running Dell computers. Which basically means they're talking about just 5 percent of Dell's actual carbon footprint. And even that 5 percent achieves neutrality mostly via carbon offsets, which are often quite dubious (sure, they're planting a forest—but how do they know the forest wouldn't have been planted anyway?). Well-meaning corporate initiatives are nice, but they're no substitute for better regulation and carbon pricing.<br /><br />(<i>Cross-posted to <a href="http://blogs.tnr.com/tnr/blogs/environmentandenergy/default.aspx">The Vine</a></i>)Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5487450.post-61598112096163352092009-01-04T18:15:00.006-05:002009-01-04T18:31:12.639-05:00On the Drug Money TrailTalk about unnerving: In this month's <i>New York Review of Books</i>, Marcia Angell <a href="http://www.nybooks.com/articles/22237">argues</a> that "it is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines" when it comes to drugs or medical devices. Big Pharma, she argues, has corrupted the clinical trial process too thoroughly. 
(Interestingly, the essay focuses primarily on drugs, although the approval processes for medical devices or, say, new surgical techniques are often even <i>less</i> stringent—surely big financial interests have a lot of sway there, too.)<br /><br /><a href="http://flickr.com/photos/bryanchan/327989874/"><img src="http://farm1.static.flickr.com/138/327989874_27a368db6f_m.jpg" style="border-style: none" border=0 align="right" hspace=10 vspace=10></a>Angell's full essay is worth taking for a spin, but note that the problems she discusses are especially acute when it comes to <i>off-label</i> uses of drugs. Once the FDA approves a drug for a specific purpose, doctors can legally prescribe it for any other disease they feel like. So drug companies design and bankroll often-dubious trials to "test" those other uses, pay some esteemed researcher a fat consulting fee so that he'll slap his name on the results (unless the results are unfavorable, in which case into the dustbin they go), and then lobby doctors—with gifts, if necessary—to start prescribing the drug for the new use. Oh yeah, step four: Profit!<br /><br />But if all this subtle—and often not-so-subtle—corruption is hard to eliminate, as Angell says it is, why not create stricter FDA regulations on off-label use? Angell estimates that roughly half of all current prescriptions are off-label, and no doubt many of those are valid, but we've <i>also</i> seen, for instance, a 40-fold increase in the diagnosis of bipolar disorder among children between 1994 and 2003, with many being treated with powerful psychoactive drugs off-label—side-effects and all. It makes little sense that a drug company has to spend years and millions of dollars getting FDA approval for one use of a drug, but then all other uses are automatically kosher, no matter how little reliable data exists. 
Or is this off-label "loophole" useful and effective in some way I'm not grasping?<br /><br />On a related note, a few weeks ago, <i>The New York Times</i> had a <a href="http://www.nytimes.com/2008/12/18/health/18psych.html?_r=1">great piece</a> on how psychiatrists were gearing up to put together the fifth edition of the <i>Diagnostic and Statistical Manual of Mental Disorders</i>, the "bible" of psychiatry. Edward Shorter, a historian in the field, was quoted arguing that psychiatry is not like, say, cardiology—no one really knows the causes of various mental disorders, so "political, social, and financial" factors often drive the classifications. Financial, indeed: As Angell notes, of the 170 contributors to the DSM-IV, 95 had financial ties to drug companies. <br /><br />Now, does that help explain why, as Christopher Lane reports in his new book, <a href="http://yalepress.yale.edu/yupbooks/book.asp?isbn=9780300124460"><i>Shyness: How Normal Behavior Became a Sickness</i></a>, ordinary shyness went from being listed as a rare "social phobia" in 1980's DSM-III to a full-fledged "social anxiety disorder" in 1994's DSM-IV? Maybe, maybe not, though GlaxoSmithKline certainly found it profitable to push the idea of social-anxiety disorder on the public ("Imagine being allergic to people...") and to market its antidepressant, Paxil, as a treatment. <br /><br />That's not to say the disorder's fake, or that Paxil can't help people; just that there's a set of murky judgments at work here not grounded in rigorous scientific evidence, and a little more transparency would be nice. But alas, as the <i>Times</i> reported, the contributors to DSM-V have all signed non-disclosure agreements. No peeking behind the curtain. 
True, the participants have agreed to limit their income from drug companies to "only" $10,000 a year while working on the DSM-V, though, given that drug companies potentially have <i>billions</i> at stake on what does and does not go in the manual, it's not hard to imagine that they might try to, ah, push the envelope a wee bit.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5487450.post-14659858985758511772009-01-04T13:06:00.008-05:002009-01-04T13:49:27.559-05:00Toooons.Well, lookit what we got here. It <i>is</i> possible to embed music on your lowly little blog! Very nice, and a bouquet of lilacs dispatched to <a href="http://lala.com/">LaLa.com</a>. Now, what the world emphatically does not need is another mp3 blog telling you that, dude, the white scruffy indie kid over here likes the new Fleet Foxes EP. So I'll pass on that. But what the world <i>might</i> need is more Senagalese Afro-Latin big bands from the '80s that recently (albeit only partially) reunited. Like so:<br /><br /><object type="application/x-shockwave-flash" data="http://www.lala.com/external/flash/SingleSongWidget.swf" id="lalaSongEmbed" height="70" width="220"><param name="movie" value="http://www.lala.com/external/flash/SingleSongWidget.swf"><param name="wmode" value="transparent"><param name="allowNetworking" value="all"><param name="allowScriptAccess" value="always"><param name="flashvars" value="songLalaId=360569453762719932&host=www.lala.com"><embed id="lalaSongEmbed" name="lalaSongEmbed" src="http://www.lala.com/external/flash/SingleSongWidget.swf" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer" wmode="transparent" allownetworking="all" allowscriptaccess="always" flashvars="songLalaId=360569453762719932&host=www.lala.com" height="70" width="220"></embed></object><div style="font-size: 9px; margin-top: 2px;"><a href="http://www.lala.com/song/360569445172785340/360569453762719932" title="Sutukun - Orchestre Baobab">Sutukun - Orchestre 
Baobab</a></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5487450.post-2102703916768113682009-01-04T11:59:00.003-05:002009-01-04T13:49:08.432-05:00Inside the MachineOver the holidays, I picked up an old, yellowing copy of Mike Royko's <a href="http://www.amazon.com/Boss-Richard-J-Daley-Chicago/dp/0452261678"><i>Boss</i></a>, his best-selling and unauthorized—to put it lightly—1971 biography of former Chicago mayor Richard J. Daley. This may sound naïve, but I found the book incredibly eye-opening. Sure, I've always had a hazy sense of how urban machine politics worked, and I've heard a fair bit about how horrifically corrupt and mismanaged many big cities were back in the 1950s and '60s—especially if you were poor or black—but <i>Boss</i> doles out the gory details. And, yes, later historians have softened Royko's portrait of Daley as a corrupt, racist strongman, but as best I can discern, the book's held up reasonably well.<br /><br />Daley didn't create Chicago's Democratic machine, but by the time he won his first mayoral race in 1955, he had worked in all of its critical nodes—from key budget positions to state party chair—and was able to consolidate his control over the vast apparatus like no one else before him, or since. Not even Rod Blagojevich. Not even close.<br /><br /><img src="http://upload.wikimedia.org/wikipedia/en/thumb/e/e6/Boss_byMikeRoyko_jacket.jpg/400px-Boss_byMikeRoyko_jacket.jpg" align="right" height="163" hspace="10" vspace="10" width="200" />It went like this: Chicago had some 3,500 precincts, each with captains and assistant captains and underlings galore who devoted their energy to turning out votes—or stealing them, if it came to that—in exchange for cushy patronage jobs. 
Not just government jobs, but jobs at racetracks or public utilities or private industries—the Machine had the power to steer all sorts of jobs to supporters, since, after all, these utilities and contractors relied on the Daley administration for contracts and business. And the Machine always delivered the votes. (True, it didn't win Illinois for Hubert Humphrey in 1968, though Royko argues that was only because Daley had lost faith in Humphrey by that point and focused on downticket races.)<br /><br />At its peak, the Machine seemed practically omnipotent: Judges came up through the Machine. The top law firms were run by lawyers close to the Machine. Who else would you want handling a zoning dispute for you? And ward bosses usually used their power to enrich themselves. Companies rushed to pay premiums to insurance firms run by ward bosses, lest they fall afoul of the county assessor or zoning board. And Daley stood on top of it all. Only Daley could keep track of all the moving parts. Only Daley, who retained his party chairmanship, knew how much money was flowing in, and how much flowing out.<br /><br />Most of his money came from contractors, since the hallmark of Daley's administration was building lots and lots of stuff in Chicago's downtown. The Dan Ryan Expressway. The John Hancock Building. Most of O'Hare Airport. And on and on and on. All Daley's doing. The revisionist take on Daley <a href="http://en.wikipedia.org/wiki/Richard_J._Daley#Legacy">seems to hold</a> that because he revitalized downtown he saved Chicago from the blight that later afflicted, say, Detroit and Cleveland. Maybe. Another revisionist take is that Daley got things built that a non-boss mayor never could have. Maybe. But there were costs, of course. Old neighborhoods were razed in the name of "urban renewal"—including a loyal Italian ward in the Valley—and replacement housing never built. 
Commonwealth Edison, which ran a hideously dirty coal plant responsible for one-third of the city's pollution, was allowed to evade pollution laws simply because a boyhood friend of Daley's sat on the board.<br /><br />And Daley's tenure reads like an utter disaster when we're <i>not</i> talking about gleaming buildings. Royko points out the grim irony that black voters continued giving Daley his electoral majorities even though he gave them absolutely nothing. (Not that black voters always had a choice: Precinct captains weren't shy about issuing threats in poorer neighborhoods—vote or else you'll lose your public housing, and, yes, I think I <i>will</i> go into the voting booth with you...) During Daley's tenure, the black neighborhoods had some of the worst slums in the country, crumbling schools, and rampant crime—since police often focused more on extortion than crime prevention. (True, the police department did get cleaned up a bit after Daley appointed Orlando Wilson, a University of California criminology professor, as police chief—but Daley was forced to do this after it was revealed that a number of his police officers were running their own armed robbery gang.)<br /><br />Easily the most stunning part of Royko's book is his description of how Daley fended off Chicago's civil rights demonstrators—including Martin Luther King. Daley, of course, had no interest in undoing Chicago's sharply enforced neighborhood segregation. Even if he had wanted to—and there's no indication that he did—the city's white voters, who were prone to violence anytime a black person so much as walked through their lily-white enclaves, would've revolted. But Daley was able to do what the mayor of Birmingham could not and neutralize King by, essentially, co-opting him. The two would meet and Daley would insist that he was a liberal, point to all his well-meaning social programs, heap promises on high... 
it was all b.s., but what could King say to that?<br /><br />Another provocative bit in Royko's book was his short discussion of the Black Panthers. The black gangs operating in Chicago during the 1960s were hardly angelic, though some of them did run health clinics and school-lunch programs and broker truces between street gangs. But Daley wasn't chiefly worried about the violence. He was worried that these groups might eventually turn to politics. After all, Daley himself had, as a youth, been a member of a violent Irish gang, the Hamburg Social and Athletic Club, which eventually became a "mainstream" political organization that organized voters, elected local aldermen, and launched many a high-flying political career. It wasn't hard to envision the Black Panthers following a similar trajectory, so Daley ordered his police chief to declare war.<br /><br />Like I said, I'm sure there have been plenty of reassessments of Daley since Royko's unrelentingly scathing account (which, incidentally, was banned from many a Chicago bookstore upon release by order of Hizzoner himself). But <i>Boss</i> is definitely worth reading. Very often, people will tell the story of urban decay like this: Cities were flourishing in the 1940s and 1950s but then fell to tatters in the late '60s with the rise of crime, integration, busing, white flight, and so on. Royko makes a very good case that the rot began long before then.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5487450.post-38377533197928489472009-01-04T11:37:00.002-05:002009-01-04T13:48:50.964-05:00Ree-vamp! Ree-vamp!As it tends to do, New Year's Day came and went this year, and, as always, I made a few half-hearted New Year's resolutions—that's right, lurking somewhere deep within my to-do list is a note to "research gym memberships on Google"... fortunately, that's probably as far as <i>that</i> will ever get. But one thing I expressly did not resolve to do was revive this blog. No sir.<br /><br />But... 
of course there's a "but." But then an errant bookmark click sent me tumbling down to this musty old site, and, hot damn, it's been eight whole months since I've written anything here. Not that I've <i>abstained</i> from littering the Internet with gibberish, no—<a href="http://blogs.tnr.com/tnr/blogs/environmentandenergy/default.aspx"><i>The Vine</i></a> over at TNR has kept me busy doing plenty of that. But, y'know, I do sort of miss having a personal blog, so maybe I'll take a stab at a relaunch. Luckily, this site has been stripped of all its former readers, which may let me take it in a brand-new direction. Perhaps I'll try less politics, more random and extraneous crap—this could morph into a music blog, or a glorified diary of my social life, or maybe I'll just put my hands in my pocket, hold my head high, and whistle nonchalantly for post after post. La da dee...<br /><br />Anyway, we'll see how long this latest blog incarnation will actually last. Happily, this isn't an official New Year's resolution or anything—nothing filed in triplicate, nothing involving solemn swears—so if it doesn't work, I can just drop it at any time, though, if that happens, I'll no doubt put up some lethargic meta-post explaining why I'm quitting yet again. And yes, it <i>is</i> a good question who I'm writing this post <i>for</i>.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5487450.post-44060940665767977932008-04-12T12:13:00.004-04:002008-04-12T12:20:50.778-04:00Work, Work, WorkOkay, this here blog's taking a wee vacation (as if you couldn't tell). For now, I'm helping run <i>The New Republic</i>'s <a href="http://blogs.tnr.com/tnr/blogs/environmentandenergy/default.aspx">new enviro blog</a>, which still needs a name—The Vine? 
The Weed?—and may just be a temporary thing to coincide with our recent environment issue, but who knows.<br /><br />Speaking of which, I've got <a href="http://www.tnr.com/environmentenergy/story.html?id=61f4ea0d-90bb-4a38-9650-c507fa73efbe">a longer piece</a> in the new issue of <i>TNR</i> on the big fight within SEIU that's going on right now. It's interesting, and I do think the debate at the heart of the feud is substantively quite important, and speaks to broader issues of what unions are for, and how they could survive and grow. I tried to lay things out as fairly and as clearly as I could, so... check it out.Unknownnoreply@blogger.com0