The standard line on the washing machine is that it saves labor. So do the iron, the dishwasher, and the vacuum cleaner. They're magical appliances. Some critics of the French economy and its 35-hour work week are fond of noting that the French don't actually have more leisure time than Americans: because they can't afford as many fancy electric appliances, they spend more time doing housework. On the other side of this is Betty Friedan, who wrote in The Feminine Mystique that those nifty appliances don't actually reduce the amount of work housewives have to do around the house, because "housewifery expands to fill the time available."
Well, it's possible that Betty Friedan was right, in a way. Granted, the dishwasher and washing machine really do save time and energy. I have no intention of hand-washing my clothes and neither do you. It's quicker this way. But what's interesting is that over the past century, a variety of "labor-saving" appliances have been invented and adopted by millions of people across the United States, and yet, somewhat surprisingly, hours spent on work around the house might actually be higher today than they were in 1900.
That's one rather striking conclusion in the middle of "A Century of Work and Leisure," by Valerie A. Ramey and Neville Francis. Although the data are patchy, they estimate that the hours spent on housework by "non-employed women" stayed roughly constant between 1912 and the 1960s, and dipped only slightly after that. The simplest explanation is that, back in the early 1900s, a lot of housework simply didn't get done, especially in poorer households. Women still worked around the house, but stuff stayed dirty. In that case, appliances haven't saved time so much as allowed more cleaning to get done in the same amount of time.
Here's another way to look at it, though: studies asking housewives to keep time diaries found that women between 1912 and the 1960s who had electric appliances spent no less time on their housework than those without appliances. Many homes without washing machines and irons, for instance, simply hired laundresses or sent their laundry to commercial facilities, both of which were relatively cheap during this time, thanks to large-scale immigration.
Meanwhile, just as "labor-saving" appliances were becoming increasingly common, the public was becoming increasingly aware of the importance of cleanliness, so the demand for housework rose dramatically. As Friedan noted, housewives with washing machines were suddenly expected to wash the sheets twice a week. The same goes for child care. Despite the fact that families in the early 20th century had more kids, parents—especially mothers—in the postwar period actually spent more time with their kids, perhaps spurred, Ramey and Francis write, by "widely-publicized studies on the effects of parental interaction on children's development."
Meanwhile, "employed men" nowadays seem to do much more housework than they did in the early part of the century—an average of 16 hours a week versus virtually nothing in the 1920s. That's less than half of the roughly 45 hours a week spent by housewives, and less than the 25 hours a week spent by employed women. At any rate, if you add all these up, families as a whole spend about as much time doing housework today as they did in 1912. They presumably get better results for their work—clothes are cleaner, dishes are cleaner, the kids probably study harder—but they don't necessarily have more leisure.
I've never known anyone who was objectively pro-litter. Litter's awful. It's disgusting. We're all agreed. But it seems that the nationwide anti-litter campaign, which began in the 1950s, was a bit less pure in its origins. According to Heather Rogers' Gone Tomorrow: The Hidden Life of Garbage, the entire anti-litter movement was initiated by a consortium of industry groups that wanted to divert the nation's attention away from more radical legislation to control the amount of waste these companies were putting out. It's a good story worth retelling.
After World War II, the story goes, American manufacturers were running at full blast, and needed American consumers to keep buying more and more junk if they wanted to maintain their profit margins. And since there's an upper limit to how much junk a given family genuinely needs to own, manufacturers had to figure out how to convince consumers to keep throwing their existing stuff out, so that they would buy new stuff.
In part, that meant companies had to ensure that in a few short years consumer goods would become either unfashionable (advertising can do that), or obsolete (simply stop offering customer support for anything a few years old), or broken (like the non-replaceable batteries in iPods that wear out after two years). Giles Slade describes some of these strategies in his book, Made to Break, and they're techniques that have existed for decades now. But another way to ensure that factories could keep churning out junk was to introduce "non-renewable" packaging for products—for instance, the aluminum soda can—that could be produced, trashed, and then produced again.
The problem is that all of this endless—and needless—manufacturing creates a lot of garbage and pollution that generally wreaks havoc on the earth. (Packaging currently accounts for one-third of all trash in the United States.) And eventually people wised up to this fact. In 1953, Vermont passed a law banning "throwaway bottles" after farmers complained that glass bottles were being tossed into haystacks and eaten by unsuspecting cows. Suddenly, state legislatures appeared poised to pass laws that would require manufacturers—and the packaging industry in particular—to make less junk in the first place. Horrors.
So that's where litter comes in. In 1953, the packaging industry—led by American Can Company and Owens-Illinois Glass Company, inventors of the one-way can and bottle, respectively—joined up with other industry leaders, including Coca-Cola and the Dixie Cup Company, to form Keep America Beautiful (KAB), which still exists today. KAB was well-funded, and it launched a massive media campaign to rail against bad environmental habits on the part of individuals rather than businesses. And that meant cracking down on litter. Within a few years, KAB had statewide anti-litter campaigns planned or running in thirty-two states.
In essence, Keep America Beautiful managed to shift the entire debate about America's garbage problem. No longer was the focus on regulating production—for instance, requiring can and bottle makers to use refillable containers, which are vastly less profitable. Instead, the "litterbug" became the real villain, and KAB supported fines and jail time for people who carelessly tossed out their trash, despite the fact that "littering" is a relatively tiny part of the garbage problem in this country (not to mention the resource damage and pollution that comes with manufacturing ever more junk in the first place). Environmental groups that worked with KAB early on didn't realize what was happening until years later.
And KAB's campaign worked—by the late 1950s, anti-litter laws were being passed in statehouses across the country, while not a single restriction on packaging could be found anywhere. Even today, thanks to heavy lobbying by the packaging industry, only twelve states have deposit laws, despite the fact that the laws demonstrably save energy and reduce container consumption by promoting reuse and recycling. (A year after Oregon's first-in-the-nation bottle bill took effect in 1972, 385 million fewer beverage containers were consumed in the state.) And no state has contemplated anything like Finland's refillable-bottle laws, which have reduced that country's garbage output by an estimated 390,000 tons. But hey, at least we're not littering.
So it's a nifty judo throw, as far as it goes. I'm guessing that much the same thing is behind industry promotion of recycling. Again, no one can be "against" recycling. It's very good. But of the three practices in the phrase "Reduce, Reuse, Recycle," the last is the least effective at curbing the manufacture of junk. And that's exactly why, during the environmental movement's peak in the 1970s, the industry-funded National Center for Resource Recovery—founded by none other than Keep America Beautiful—lobbied state and national legislators to favor recycling as the way to address concerns about rising tides of garbage. It beat forcing people to "reduce" or "reuse."
The catch is that recycling can probably only do so much to limit garbage production. As Rogers' book points out, many materials can be recycled only a few times before they get junked, and a vast amount of material set aside for recycling simply gets trashed anyway, or is sent overseas to be dumped. Recycling certainly has a considerable upside—not least that making goods from recycled material takes far less energy than making new junk from scratch—but it's only a partial solution to reducing the 230 million tons of trash this country generates each year, if that's the goal. The longer-term solution is to stop creating so much junk in the first place. And that, essentially, is what ideas like litter prevention are meant to obscure.
Urban sprawl isn't so bad; it's just misunderstood. That's what Robert Bruegmann argues in a cover story for The American Enterprise. Needless to say, I totally disagree. The essay spends a lot of time fending off complaints that sprawl is ugly—"Class-based aesthetic objections to sprawl have always been the most important force motivating critics"—but then glosses over the really crucial objection: namely, that sprawled-out cities use up a lot of energy. Bad news when we're burning up the planet.
A 2003 World Bank study comparing various cities in the United States illustrates the difference a bit of sprawl can make. Boston, for instance, isn't the most compact city around, but if its population were as spread out as, say, Atlanta's, Bostonians would drive about 9 percent more. If Boston had Atlanta's inferior rail system, driving would increase another 5 percent. In fact, if you could somehow wave a magic wand and move everyone in Boston to a city with all of Atlanta's sprawl-like characteristics, total driving would increase 25 percent.
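Just to make the arithmetic concrete, here's a quick back-of-the-envelope sketch in Python using only the percentages quoted above. The study isn't quoted on whether it combines these effects additively or multiplicatively, so both readings below are my own illustration, not the study's method; either way, the remaining Atlanta-style characteristics (road layout, job sprawl, and so on) account for roughly another 9 to 11 percent of driving.

```python
# Back-of-the-envelope decomposition of the World Bank figures cited above.
# NOTE: these are just the percentages quoted in this post; whether the study
# combines effects additively or multiplicatively isn't stated here, so both
# readings are shown purely as an illustration.

total_increase = 0.25   # Boston with all of Atlanta's sprawl-like characteristics
density_effect = 0.09   # Atlanta-style population spread
rail_effect = 0.05      # Atlanta's inferior rail system

# Additive reading: the other sprawl characteristics make up the rest.
residual_additive = total_increase - density_effect - rail_effect

# Multiplicative reading: effects compound, so solve (1+d)(1+r)(1+x) = 1+total for x.
residual_multiplicative = (1 + total_increase) / ((1 + density_effect) * (1 + rail_effect)) - 1

print(f"Residual (additive): {residual_additive:.1%}")              # ~11.0%
print(f"Residual (multiplicative): {residual_multiplicative:.1%}")  # ~9.2%
```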
The relative location of jobs and housing also matters. Bruegmann claims that when urban planners tried to create towns such as Reston, Virginia, with an even mix of housing and jobs, the effort failed because people still drove their cars hundreds of miles away to find even better jobs. No data, though. Roll tape to the World Bank study: Again, a city like Boston has a fairly even mix of jobs and housing; if it were to become as unbalanced as, say, Washington, D.C., total driving would go up another 9 percent.
Now part of Bruegmann's argument is that sprawl is inevitable—it happens to all cities, even in Europe—because people don't like living in crowded urban areas and want low-density subdivisions and industrial parks and freeways. Well, maybe they do. But that doesn't mean it's impossible for urban planners to constrain sprawl. Compare Vancouver and Seattle. Similar cities in similar areas with similar sorts of people. Yet the former has promoted downtown development and limited freeway expansion and, as a result, has much less sprawl. Just because Parisians are fleeing to the suburbs en masse doesn't mean it's impossible to curb sprawl, and the excessive oil consumption that comes with it.
Moreover, if you want to get political about it (and hey, who doesn't?), my own guess is that America has steadily grown more conservative over the past half-century partly because of urban sprawl. Among other things, city-dwellers organize and use zoning laws to prevent new apartment complexes from being built, and developers decide it's easier to build out in the suburbs.
So that's where people start moving: out of the city. Maybe it's because they want to, as Bruegmann claims. But it's also where the cheap housing is. And it's not hard to imagine that life in the suburbs—where quality of life depends more on lower taxes and individual property rights than on public services, and where one can cloister off with one's own ethnic and religious peers—turns people into Republicans. That's unfair, of course—suburbs aren't nearly as stale as people make them out to be (Michael Pollan's 2000 essay on this is quite entertaining)—but it's probably not entirely inaccurate. Maybe all that money being spent building up left-wing political "infrastructure" should just go towards affordable urban housing and mass transit. An easier fix.
But whatever. By the way, if you want to read about a utopian urban center that seems to work, do check out Bill McKibben's essay on Curitiba, Brazil, which has managed to curtail sprawl rather brilliantly through quality urban planning. "Because of its fine transit system, and because its inhabitants are attracted toward the city center… Curitibans use 25 percent less fuel per capita, even though they are actually more likely to own cars." Plus, despite being a low-income city, Curitiba's beautiful, people truly love living there, and even the slums are "clean" and "decent." If McKibben's picture is accurate, that's a place worth studying.
Tyler Cowen links to a useful study looking at how well Latino immigrants are assimilating. Pretty well, it turns out. Economists tend to agree: First-generation Latino immigrants are poorer than their native counterparts (no kidding: they've just arrived, they speak little English, and they tend to work for the most exploitative companies this country has to offer), but their kids and grandkids do much better:
In a 2003 study by the RAND Corporation, economist James P. Smith finds that successive generations of Latino men have experienced significant improvements in wages and education relative to native Anglos. According to Smith, "the reason is simple: each successive generation has been able to close the schooling gap with native whites which then has been translated into generational progress in incomes. Each new Latino generation not only has had higher incomes than their forefathers, but their economic status converged toward the white men with whom they competed."
Granted, at least in the passage above, Smith is looking at a particular time period (immigrants arriving between 1895 and 1899, along with their kids and grandkids) that differs from the present day in several respects. Notably, there was a decent supply of stable, good-paying manufacturing jobs back then—three-fourths of Ford workers in the 1910s, for instance, were immigrants (mostly Eastern European, granted, but I assume Latinos could find similar sorts of jobs)—which helps explain why the immigrant families of old could do so well so quickly.
By contrast, economic mobility today is awful, those sorts of manufacturing jobs are hard to come by, and it's reasonable to think that the current generation of Latino immigrants, many of whom are unskilled and quite poor, will have a much harder time ensuring that their kids go to college and get well-paying jobs. But that's because it's harder for all unskilled workers on the low end of the income spectrum to do that nowadays, and it's an argument for figuring out ways to improve mobility, not an argument for restricting immigration. There's quite obviously nothing about Latinos per se that makes them "unable" to assimilate.
Now George Borjas, the favorite economist of restrictionists everywhere, has written a paper suggesting that Latinos of old faced "pressures" to assimilate that the current wave of immigrants doesn't. For instance, immigrants before 1965 were a more diverse lot—you had Latinos and Germans and Italians, etc.—so it was harder for immigrants to stay in their "ethnic enclaves." And there was also, in Borjas' words, "an ideological climate that boosted social pressures for assimilation and acculturation" that is no longer around. Well, maybe. Nevertheless, Latino immigrants today seem to be "acculturating" just fine:
A comprehensive 2002 survey of Latinos in the United States by the Pew Hispanic Center and Kaiser Family Foundation provides additional evidence of advancement across generations, particularly in terms of English proficiency. Spanish is the primary language among 72% of first-generation Latinos, but this figure falls to 7% among second-generation Latinos and zero among Latinos who are third generation and higher.
Basically, Latino immigrants and their descendants do very well, especially when one considers that many are exploited and underpaid by their employers, and that they live in a country where economic mobility—not to mention public education—is nothing to brag about. I'd also suggest that if the United States had decided over the past fifty years to help Latin America develop, rather than, you know, fueling wars, installing various dictatorships, and conducting neoliberal "experiments" on countries like Mexico, then perhaps the Latino immigrants who came here would be healthier, wealthier, and better-educated, and would have "assimilated" more easily. Fun to imagine.
UPDATE: Ah, okay, my mistake: James P. Smith's actual study covers a broader time period than the one mentioned above (he looked at generations of immigrants from 1895 to 1970), although all the points above still hold.
Israeli kibbutzim have always struck me as quite fascinating. Here you have voluntary "socialist" communities—in which income equality is more or less guaranteed and all property is communal—that have persisted for most of the 20th century. (Today 120,000 members live in 268 kibbutzim across Israel; hardly a majority of Israelis, but impressive all the same.) By most accounts, they're still going. And that raises all sorts of questions: How do they do it? How do kibbutzim keep their most productive workers from leaving? How do they stop workers from shirking? And so on.
Well, lucky for me, Ran Abramitzky of Stanford wrote a nifty paper last December that looked at some of these questions. Here's a link. It appears that kibbutzim keep people from leaving partly through the communal ownership of property: if a super-productive worker believes she's getting shafted by being forced to share her earnings, she can always leave, but she won't be able to take any of her belongings with her. Understandably, people are reluctant to go. (Also, those raised on a kibbutz tend to have learned kibbutz-specific skills, such as agronomy, which makes exit difficult too.)
Beyond that, kibbutzim seem to prevent workers from shirking through "mutual monitoring" and "peer pressure," carried out through institutions such as the communal dining hall (not to mention lots and lots of gossip). They also place restrictions on people entering from the outside in order to avoid adverse selection and getting saddled with too many unproductive workers. That all makes sense.
But that doesn't mean kibbutzim can survive all manner of adversity. In fact, they've been seriously weakening of late. Between 1983 and 1995—when a banking crisis, combined with high interest rates and a collapse in farm prices, dealt a major wealth shock to the kibbutzim—20 percent of kibbutz members left their communes to try their luck in the outside world. As one might expect, the people who left were on average more educated (54 percent of migrants had a high school diploma vs. 48 percent of stayers) and less likely to hold a low-skilled occupation (13 vs. 23 percent) than those who stayed.
Now that's not a huge difference (in fact, it's smaller than I would have expected), but it gives modest support to the notion that maintaining full income equality, especially during an economic downturn, is likely to drive out the most productive members of the community.
Anyway, after the crisis of the 1980s, many of Israel's kibbutzim actually moved away from their commitment to full income equality. Thirty-nine kibbutzim didn't change at all; 64 kept most income sharing while allowing varying degrees of differential pay at the margins; and 110 essentially transformed themselves from socialist communes into "social democracies"—letting members keep the bulk of their own earnings while maintaining a safety net based on income sharing.
Interestingly, Abramitzky found that a kibbutz's ideology had no effect on the level of equality it adopted after the 1980s. Many of the kibbutzim shifting away from full equality belonged to the staunchly left-wing Kibbutz Artzi Movement. For the most part, the poorer kibbutzim with the most people leaving were the ones most likely to introduce lower levels of equality, and those that shifted away from full equality were better able to stem the flow of people leaving. (On the other hand, the wealthier the kibbutz, the better it was able to maintain high levels of equality.) Abramitzky notes that these findings run counter to "the view of Kibbutzes as primarily ideological entities." Kibbutz members, it turns out, are quite responsive to economic incentives.
It's worth noting that Abramitzky doesn't toss out ideology entirely. In an appendix, he asks why only 2.6 percent of Israelis live on a kibbutz in the first place, and argues that it's probably due to cultural factors: the early kibbutz founders came from Eastern Europe and Russia in the 1910s and 1920s and, influenced by socialist movements at home, were willing to give up the privacy that kibbutz life demands. Later generations of settlers didn't come from the same background and were more reluctant to give up their individualism, even if the kibbutzim did offer great economic benefits. Sephardic Jewish settlers, for instance, supposedly preferred the moshav, a different type of agricultural collective in which each farmer works his own land and keeps his own profits, because it allowed them to maintain "traditional family structures." Whatever that might mean.
I've started reading Daniel Cohen's new book, Globalization and Its Enemies, which argues that poor countries are poor not because they've been exploited by rich countries, multinational corporations, the IMF, and the like, but because they've been unable to enter the global economy, even when they want to.
That may sound like familiar territory, but Cohen actually makes a number of surprising and novel points, and while I'd say that he understates the amount of exploitation going on, there's surely something to his argument that many developing countries suffer not from too much globalization but too little. (I'll try to write more on the book once I'm done; Cohen does put forward a more nuanced account than the usual Economist line that poor countries just need more free trade and everything will be "fine.") So that brings us to Bolivia.
Since the 1980s or so, the vast majority of foreign direct investment from the First World has gone not to poor countries but to other wealthy countries (and China). I have some ideas as to how and why this came about, but they're probably wrong, so I'll set them aside. What's better known is that many developing countries have signed Bilateral Investment Treaties (BITs) over the years to try to attract some of those investment flows. These BITs are basically agreements that offer a great deal of legal protection to companies that invest in a given country.
Bolivia's former leaders had previously signed a BIT with the United States, under which foreign companies could sue if future Bolivian governments passed laws that undermined their investments. In 2000, when activists in Cochabamba drove Bechtel out of the country—after the company had contracted with the government to privatize the country's water supplies and then raised local water rates—Bechtel sued the Bolivian government under the BIT for $50 million. The company backed down only after a worldwide activist campaign; it was just the second time a company had ever dropped such a claim.
Now it's not clear whether the major oil companies—including ExxonMobil, Repsol, Total, and British Gas—will sue over Evo Morales' latest move to partially "nationalize" the gas industry (which really just amounts to renegotiating the outlandish concessions that corrupt former leaders gave to foreign companies, so that more of the wealth goes to ordinary Bolivians). The firms certainly have the power to do so: when former president Carlos Mesa proposed raising taxes on natural gas production, he backed down under litigation threats. And the IMF, World Bank, and Inter-American Development Bank all have ways of hurting Bolivia if it doesn't pay up.
But that's still up in the air. What I'm interested in now is whether these BITs—which, among other things, cede democratic decision-making to foreign corporations—actually do encourage foreign investment flows. Are they worth it? Interestingly, a 2004 report by the International Institute for Sustainable Development noted that they probably aren't much good. Countries such as Brazil and Nigeria receive plenty of foreign investment "despite shying away from such treaties," while many smaller countries in Central Africa and Central America have "entered into a raft of BITs" and still attract very little foreign investment. Signing away your sovereignty isn't always the key to success, apparently. (One could also debate the merits of foreign investment itself, but that's another story.)
Jon Margolis has a very interesting piece in The American Prospect today on Canada's water wars. The country has 20 percent of the world's freshwater and only 0.5 percent of its population. Water is becoming scarce in many places around the world, so why shouldn't Canada ship its surplus out? Well, for one, NAFTA would make it difficult for Canada to pass new environmental laws for its lakes once companies start engaging in the water trade:
According to an August 2004 report by the International Joint Commission, one of the binational bodies established to govern and protect the Great Lakes, most climate-change models predict lower lake levels as the earth warms. And the same report appears to acknowledge that once a body of water has become "a commercial good or saleable commodity," any effort to protect it could fall afoul of NAFTA. The message seems to be that if you want to protect any of the lakes, or perhaps any bays or inlets thereof, you should pass the law before some company starts selling the water.
Although I'm sure he's aware of them, Margolis doesn't detail the various—and often serious—environmental problems with bringing in tankers to haul water out of Canada: fluctuations in water level can accelerate erosion and destroy the surrounding soil, and any transport of water risks introducing new species to new environments, with all the disasters that can bring. And once Canada starts selling its water, NAFTA sharply limits what the government can do to address these problems.
Now in the context of this particular article, the case for conservation seems strong. A bunch of American developers want the Southwest to continue its totally unsustainable population explosion, so they're trying to pillage Canadian water supplies. One could suggest that Americans start choosing to live where there are natural water supplies—although that, as Margolis points out, would probably mean depopulating California. Or, as an interim measure, we could simply learn to conserve water; the United States is terrible in that regard, especially in our practice of "irrigating fields that produce crops already in surplus."
But neither suggestion really addresses the underlying issue. About 1.5 billion people around the globe lack access to freshwater, and in about 20 years demand for freshwater is projected to exceed supply by 56 percent. As Margolis notes, "in 1997 the United Nations concluded that the best—perhaps the only—way to get water to them was through a system of international markets and trade." I don't know how true that is, exactly; most countries could stand to manage their own resources more carefully before importing water from elsewhere. But it sure looks like we'll have to start talking about a global water trade eventually, and that conversation, I suspect, will get rather dicey.
Okay, so I don't really update this blog very regularly anymore. Hopefully that will change in the near future now that I've joined the 21st century and acquired internet at home. Oh yes. But that aside, here's a crucial public transit question that came up in conversation yesterday and isn't yielding to my heroic efforts at Googling an answer.
In Boston, the subway runs until about 12:45 a.m., despite the fact that bars close at 2. This encourages drunk driving and the like, not to mention that it's a huge pain in the ass. So why doesn't the T run until, say, 2:30? Trying to figure it out, I dug up a 1999 article from The Tech, MIT's student paper, that reported on a bill to extend MBTA hours.
Opponents of the bill said that late-night subway service simply wasn't cost-effective, while others worried both about noise—who wants the Green Line rattling through the suburbs at 3 in the morning?—and about the dangers of having people lurking around underground stations late at night. Presumably some combination of those reasons explains why the hours were never extended. Then there's this:
Another factor that limits the scope of the initial extension is the work of maintenance crews who use night hours to perform preventive maintenance on the rails, Rivera said. On the present schedule, the crews have about three hours to complete their tasks, according to Rivera.
Okay, sure. Subways need fixing, and the best time to fix them is at night. That explains why all subways shut down at night. Except for one, that is: New York's. (Wikipedia says it's the only 24-hour rapid-transit system in the world, apart from the New Jersey–Manhattan PATH and parts of the Chicago 'L'.)
So that's my question: Why can the New York subway stay open all night? Doesn't it need repairs? I've heard it's because New York's lines have multiple tracks (for express and local trains), so at night the trains can switch over to whichever track isn't being fixed at the moment, whereas Boston's lines have only one track in each direction, so the subways have to shut down. But I think some 24-hour lines in New York have only two tracks—one per direction—no? Do the workers just have to do their maintenance really, really fast, before the next train comes trundling through? And why didn't other cities build multi-track subways the way New York did?
I'm a reporter at The New Republic, mostly covering green issues, apocalypses, general doom. This is my personal site. I also post regularly at The Vine, TNR's enviro-blog.