
The Blair House Project, Choosing the presidential running mates. By David Greenberg, Posted Friday, March 24, 2000, at 12:00 AM PT

The vice presidency began, in 1787, as an afterthought. Having created the Electoral College for choosing the president, the Constitution's framers feared that, come election time, each state's electors would vote for a favorite son, precluding the selection of a president with national appeal. So they decided that every elector would get two votes for president, one of which he had to cast for someone from another state. The candidate netting the second most electoral votes would become vice president.

It sounded good, but in 1796, John Adams, a Federalist, won by three electoral votes over Thomas Jefferson, a Democratic-Republican. As a result, the new president and vice president belonged to different political parties. The framers, naively, hadn't planned for political parties.

To fix the problem, party leaders in 1800 instructed electors to unite behind a single vice-presidential candidate. But then a different glitch occurred: Jefferson and his running mate, Aaron Burr, each got exactly 73 votes, forcing the House of Representatives to choose one of them for president. Mischievously, Federalist congressmen voted for Burr, stalemating the vote 35 times. Finally, Alexander Hamilton intervened and got his party-mates to let Jefferson prevail.

Reform came with the 12th Amendment, adopted in 1804, which overhauled the vice-presidential election process into its modern form. Now electors would vote for a ticket, rendering the vice president a mere accessory. This yielded yet another unintended consequence: The candidates were no longer of presidential timber. Especially with the rise in the 1830s of political parties, bosses saw that they could use the No. 2 post to build party unity. Typically, they would placate factions that lost out on the presidential choice by picking a veep irrespective of other credentials.

With such coarse considerations influencing the choice, mediocrity reigned. Veeps were so expendable that even victorious ones were routinely discarded by their own parties four years later, so as to balance the ticket better with some other no-name. In fact, no president was renominated with the same running mate until William Taft retained James Sherman as his No. 2 in 1912.

The stock of vice-presidential candidates rose in 1900 after President William McKinley, looking to replace his deceased 1896 running mate, Garret Hobart, settled on New York Gov. Theodore Roosevelt. While the president ran a sedentary "front porch" campaign, TR barnstormed the country, delivering 673 speeches and serving as the Republicans' public face. TR later became the first No. 2 to accede to the presidency and then win election in his own right. (Calvin Coolidge, Harry Truman, and Lyndon Johnson would all replicate the feat.)

But, bosses still called the shots. It took a presidential candidate as strong as Franklin Roosevelt in 1940 to change the process. Having fallen out with the pol-chosen John Nance Garner, his understudy of '32 and '36, FDR insisted upon Henry Wallace as his second banana in 1940—threatening to walk, not run. The party leaders capitulated.

Since 1960, candidate-centered politics and televised conventions with foreordained presidential nominees have placed full control of the veep choice in the nominees' hands. This development has yielded candidates who are ideologically in sync with the presidential nominee. The nominees' desire to make a splash with their first big appointment at the overscripted conventions has also led them to name partners who would themselves make plausible presidents.

There have been missteps. In 1972, George McGovern chose Sen. Thomas Eagleton, unaware that Eagleton had undergone electroshock therapy for depression. Two weeks later, McGovern replaced him with Kennedy in-law Sargent Shriver—a move that made McGovern look inept. The fiasco prompted Jimmy Carter in 1976 to conduct a thorough vetting of the political, financial, and personal backgrounds of prospective nominees.

Wonks revived the perennial reforms: primaries for the veep slot, transferring the choice to the convention, abolishing the vice presidency altogether. Such prescriptions ignore, however, that most recent selections have worked out well. Since 1960, vice-presidential nominees Lyndon Johnson, Hubert Humphrey, Ed Muskie, Bob Dole, Walter Mondale, George Bush, and Al Gore all proved to be creditable presidential candidates.

http://slate.msn.com/id/77739/

Strife-Cycles, Judge-bashing is hardly new. By Rodger Citron, Posted Friday, April 15, 2005, at 3:16 AM PT

Campaigning against the federal courts is a familiar, even venerable, political sport. Recently, however, a number of politicians have raised the stakes by escalating their attacks upon the federal judiciary and even individual judges. Before the Terri Schiavo case, there were calls for Congress to impeach judges and to pass legislation that would strip the federal courts of jurisdiction in certain cases. After Schiavo's death, Rep. Tom DeLay has continued to press his jeremiad against the federal judiciary.

The persistent attacks upon the federal courts have inspired an outburst of editorial concern over the fate of judicial independence. Three examples: On April 12, Prof. Erwin Chemerinsky of Duke Law School called the "conservative attack on the courts … truly frightening" and urged that it "be denounced by elected officials and academics across the political spectrum"; Ruth Marcus of the Washington Post this week described the "current uproar" as "particularly worrisome"; and on April 5 Dahlia Lithwick warned of "a cocktail of court-stripping legislation, impeachment threats, and term limits" to undermine the possibility of a "co-equal independent judiciary."

Will the current campaign against the judiciary escalate? The key player in answering this question is President George W. Bush. Although the president has demonstrated his willingness to fight over the judiciary by renominating 20 individuals whose initial judicial bids failed, it is unlikely he will support the increasingly strident campaign against the federal courts. In fact, after Rep. DeLay's remarks last week about federal courts that had "run amok," the president told reporters over the weekend that he "believes in an independent judiciary."

More important, however, history shows that the recent demagoguery over judges is neither particularly shocking nor rare. Indeed at least two prior attacks on the courts—one during the New Deal, the other in the late 1950s—suggest that an even more concerted political attack on the courts would, paradoxically, strengthen judicial independence and judicial review.

President Franklin Roosevelt's clash with the Supreme Court over the New Deal started when the court began to invalidate his social welfare legislation in 1935. The stakes were raised in 1936 after the court struck down the Guffey Act in Carter v. Carter Coal, the Agricultural Adjustment Act in United States v. Butler, and New York's minimum wage law in Morehead v. New York ex. rel. Tipaldo. The decisions were made by a closely divided court and prompted a political firestorm, including personal attacks upon the court far more aggressive than we have seen in recent days: Six justices were hanged in effigy in Ames, Iowa, after the court's decision in Butler.

Roosevelt did not criticize the court during his campaign for re-election in 1936, and he easily defeated Alfred Landon. Immediately after that election, however, Roosevelt began to attack, first in his State of the Union Address and then in February 1937, when he unveiled his court-packing plan, which would have allowed him to add up to six justices to the Supreme Court, one for any justice over the age of 70 who did not retire.

The court-packing proposal failed for many reasons: first, because the court in 1937 made its "switch in time" and began to uphold New Deal legislation; then, Justice Willis Van Devanter resigned, opening a seat for a Roosevelt appointee; and—somewhat surprisingly—because many members of the Democratic Congress did not support it.

The New Deal enjoyed popular support, and the only obstacle to its implementation was the Supreme Court. Yet Roosevelt was unable to persuade the public or his party to support his effort to refashion the court because his plan was widely viewed as an effort to arrogate power. And, as New York University Law Prof. Barry Friedman has explained, the emergence of fascist dictatorships abroad prompted heightened public concern for judicial independence. Critics argued that the court-packing plan would give Roosevelt too much control over the Supreme Court; Sen. William King, for example, described the court as the nation's "Ark of the Covenant" and warned that "impair[ing] the power and authority of the Supreme Court … [would] arouse grave apprehensions in the minds of all thoughtful Americans." Such fears were further amplified by concern that Roosevelt's plan would undermine the court's institutional role of protecting individual liberty and constitutional rights.

The political backlash inspired by the court-packing plan should discourage President Bush from joining the current attack on the federal courts. Just as the Democratic Party controlled the electoral branches in 1937, Bush governs today with a Republican Congress. Any attempt to exercise more control over the federal courts would be viewed as overreaching.

And just as concerns about totalitarianism abroad in 1937 resulted in support for an independent—albeit unpopular—judiciary, so too must President Bush make domestic political decisions with an eye on his efforts to promote democracy in Iraq and the Middle East. The Bush administration's campaign for democracy abroad includes support for an independent judiciary; it cannot afford the appearance of hypocrisy that would result from a domestic campaign against the federal courts.

The second instance of organized public court bashing is similarly illuminating. It occurred as part of the public outcry over the high court's ruling in Brown v. Board of Education in 1954. In Brown, the Warren Court required desegregation of public schools in the South, over the objection of the majority of white citizens and the politicians they elected. The response to Brown by citizens opposed to the decision was especially virulent. Justice Hugo Black, for example, received hate mail, was practically exiled from the State of Alabama, and—as biographer Roger Newman has written—often wore a chest protector provided by the Secret Service when he visited Birmingham. After the court's decision in Brown, Justice Black's son, Hugo Jr., who lived in Birmingham, was hanged in effigy on his lawn.

Brown also inspired political resistance, including legislative efforts to curb the court. Such efforts failed, however, because President Dwight Eisenhower endorsed the decision and took steps to enforce it when challenged. The court may have been most vulnerable after the 1957 term, when it decided a number of domestic-security cases against the government and in favor of Communists and Communist sympathizers. Supporters of segregation in Congress teamed up with congressmen angry over the civil liberties decisions in an effort to strip the court of jurisdiction in cases raising security-related questions. But Eisenhower opposed the legislation, albeit tepidly, and ultimately it failed due to the opposition of Senate Majority Leader Lyndon Johnson. (With his eye on the presidency, Johnson was seeking to distinguish himself from his fellow Southerners.)

Today, proposing jurisdiction-stripping legislation is an especially popular sport for court-bashers. The lesson to be learned from the attacks upon the Warren Court during the Eisenhower era, however, is that unless President Bush is willing to endorse such efforts, they almost certainly will languish and then expire; they are nothing more than flares sent up to rally the right-wing base.

Last year, as Prof. Chemerinsky notes, there were two bills in the House of Representatives that would have stripped the federal courts, including the Supreme Court, of the authority to hear constitutional challenges to the federal Defense of Marriage Act or to the words "under God" in the Pledge of Allegiance. Those two bills passed the House of Representatives but went no further. Pending in the current Congress is another bill that would prevent any federal court, including the Supreme Court, from hearing cases involving the acknowledgment of God by state officials; under the bill, the exercise of jurisdiction in such a case would be an impeachable offense.

This bill, too, almost certainly will not become law. Why? Because for President Bush the cost of endorsing such measures is too high. After Rep. DeLay's remarks last week, the president's comments were clearly an effort to distance himself publicly from the vitriol of the current attack.

The failure of prior campaigns to curb the federal judiciary shows that the more extreme the proposal, the less likely it is to succeed. Such legislative proposals detract from the real court-packing plan now pending in the Senate—the so-called "nuclear option," which would eliminate the filibuster for judicial nominations. The debate over the nuclear option has been conducted almost exclusively within the Senate. Although President Bush set the stage by renominating 20 individuals, he otherwise has not actively participated in this debate either.

The failure of Roosevelt's court-packing plan demonstrates that even the president is limited in what he can do to influence the decisions of the federal courts and the actions of the judges who staff them. The failure of Congress's efforts to limit the jurisdiction of federal courts in response to the Warren Court decisions with which it disagreed shows that court-stripping is a congressional activity with more bark than bite. Today, more than ever, the Supreme Court continues to have the final say on even the most political legal disputes.

Even more telling has been the administration's response to the Supreme Court's decisions last summer rejecting the administration's claim that its power to hold detainees classified as "enemy combatants" was not subject to judicial review. The war on terror is vital to the president, yet he did not criticize the court or otherwise seek to limit its power in such cases. Instead the Justice Department continues to litigate the interpretation of the court's decisions in federal court. Which only serves to reaffirm that the ultimate decision maker, even as to the scope of the Supreme Court's review power, will be the Supreme Court—as it has been since the New Deal.

http://www.slate.com/id/2116737/

No Pain, No Gain? The case against recession, past and present. By Paul Krugman, Posted Friday, Jan. 15, 1999, at 12:30 AM PT

Once upon a time there was a densely populated island nation, which, despite its lack of natural resources, had managed through hard work and ingenuity to build itself into one of the world's major industrial powers. But there came a time when the magic stopped working. A brief, overheated boom was followed by a slump that lingered for most of a decade. A country whose name had once been a byword for economic prowess instead became a symbol of faded glory.

Inevitably, a dispute raged over the causes of and cures for the nation's malaise. Many observers attributed the economy's decline to deep structural factors--institutions that failed to adapt to a changing world, missed opportunities to capitalize on new technologies, and general rigidity and lack of flexibility. But a few dissented. While conceding these factors were at work, they insisted that much of the slump had far shallower roots--that it was the avoidable consequence of an excessively conservative monetary policy, one preoccupied with conventional standards of soundness when what the economy really needed was to roll the printing presses.

Needless to say, the "inflationists" were dismissed by mainstream opinion. Adopting their proposals, argued central bankers and finance ministry officials, would undermine confidence and hence worsen the slump. And even if inflationary policies were to give the economy a false flush of artificial health, they would be counterproductive in the long run because they would relax the pressure for fundamental reform. Better to take the bitter medicine now--to let unemployment rise, to force companies to purge themselves of redundant capacity--than to postpone the day of reckoning.

OK, OK, I've used this writing trick before. The previous paragraphs could describe the current debate about Japan. (I myself am, of course, the most notorious advocate of inflation as a cure for Japan's slump.) But they could also describe Great Britain in the 1920s--a point brought home to me by my vacation reading: the second volume of Robert Skidelsky's biography of John Maynard Keynes, which covers the crucial period from 1920 to 1937. (The volume's title, incidentally, is John Maynard Keynes: The Economist as Savior.)

Skidelsky's book, believe it or not, is actually quite absorbing: Although he was an economist, Keynes led an interesting life--though, to tell the truth, what I personally found myself envying was the way he managed to change the world without having to visit quite so much of it. (Imagine being a prominent economist without once experiencing jet lag, or never taking a business trip where you spent more time getting to and from your destination than you spent at it.) And anyone with an interest in the history of economic thought will find the tale of how Keynes gradually, painfully arrived at his ideas--and of how his emerging vision clashed with rival schools of thought--fascinating. (Click here for an example.)

But the part of Skidelsky's book that really resonates with current events concerns the great debate over British monetary policy in the 1920s. Like the United States, Britain experienced an inflationary boom, fed by real estate speculation in particular, immediately following World War I. In both countries this boom was followed by a nasty recession. But whereas the United States soon recovered and experienced a decade of roaring prosperity before the coming of the Great Depression, Britain's slump never really ended. Unemployment, which had averaged something like 4 percent before the war, stubbornly remained above 10 percent. There is an obvious parallel with modern Japan, whose "bubble economy" of the late 1980s burst eight years ago and has never bounced back.

Almost everyone who thought about it agreed that Britain's long-run relative decline as an economic power had much to do with structural weaknesses: an overreliance on traditional industries such as coal and cotton, a class-ridden educational system that still tried to produce gentlemen rather than engineers and managers, a business culture that had failed to make the transition from the family firm to the modern corporation. (Keynes, never one to mince words, wrote that "[t]he hereditary principle in the transmission of wealth and the control of business is the reason why the leadership of the Capitalist cause is weak and stupid. It is too much dominated by third-generation men.") Similarly, everyone who thinks about it agrees that modern Japan has deep structural problems: a failure to move out of traditional heavy industry, an educational system that stresses obedience rather than initiative, a business system that insulates big company managers from market reality.

But need structural problems of this kind lead to high unemployment, as opposed to slow growth? Is recession the price of inefficiency? Keynes didn't think so then, and those of us who think along related lines don't think so now. Recessions, we claim, can and should be fought with short-run palliatives; by all means let us work on our structural problems, but meanwhile let us also keep the work force employed by printing enough money to keep consumers and investors spending.

One objection to that proposal is that it will directly do more harm than good. In the 1920s the great and the good believed that an essential precondition for British recovery was a return to the prewar gold standard--at the prewar parity, that is, making a pound worth $4.86. It was believed that this goal was worth achieving even if it required a substantial fall in wages and prices--that is, general deflation. To ratify the depreciation of the pound that had taken place since 1914 in order to avoid that deflation was clearly irresponsible.

In modern times, of course, it would, on the contrary, seem irresponsible to advocate deflation in the name of a historical monetary benchmark (though Hong Kong is currently following a de facto policy of deflation in order to defend the fixed exchange rate between its currency and the U.S. dollar). But orthodoxy continues to prevail against the logic of economic analysis. In the case of Japan, there is a compelling intellectual case for a recovery strategy based on the deliberate creation of "managed inflation." But the great and the good know that price stability is essential and that inflation is always a bad thing.

What really struck me in Skidelsky's account, however, was the extent to which conventional opinion in the 1920s viewed high unemployment as a good thing, a sign that excesses were being corrected and discipline restored--so that even a successful attempt to reflate the economy would be a mistake. And one hears exactly the same argument now. As one ordinarily sensible Japanese economist said to me, "Your proposal would just allow those guys to keep on doing the same old things, just when the recession is finally bringing about change."

In short, in Japan today--and perhaps in the United States tomorrow--behind many of the arguments about why we can't monetize our way out of a recession lies the belief that pain is good, that it builds a stronger economy. Well, let Keynes have the last word: "It is a grave criticism of our way of managing our economic affairs, that this should seem to anyone like a reasonable proposal."

http://slate.msn.com/id/13630

Vulgar Keynesians, A penny spent is not a penny earned? By Paul Krugman, Posted Friday, Feb. 7, 1997, at 12:30 AM PT

Economics, like all intellectual enterprises, is subject to the law of diminishing disciples. A great innovator is entitled to some poetic license. If his ideas are at first somewhat rough, if he exaggerates the discontinuity between his vision and what came before, no matter: Polish and perspective can come in due course. But inevitably there are those who follow the letter of the innovator's ideas but misunderstand their spirit, who are more dogmatic in their radicalism than the orthodox were in their orthodoxy. And as ideas spread, they become increasingly simplistic--until what eventually becomes part of the public consciousness, part of what "everyone knows," is no more than a crude caricature of the original.

Such has been the fate of Keynesian economics. John Maynard Keynes himself was a magnificently subtle and innovative thinker. Yet one of his unfortunate if unintentional legacies was a style of thought--call it vulgar Keynesianism--that confuses and befogs economic debate to this day.

Before the 1936 publication of Keynes' The General Theory of Employment, Interest, and Money, economists had developed a rich and insightful theory of microeconomics, of the behavior of individual markets and the allocation of resources among them. But macroeconomics--the study of economy-wide events like inflation and deflation, booms and slumps--was in a state of arrested development that left it utterly incapable of making sense of the Great Depression.

So-called "classical" macroeconomics asserted that the economy had a long-run tendency to return to full employment, and focused only on that long run. Its two main tenets were the quantity theory of money--the assertion that the overall level of prices was proportional to the quantity of money in circulation--and the "loanable funds" theory of interest, which asserted that interest rates would rise or fall to equate total savings with total investment.

Keynes was willing to concede that in some sufficiently long run, these theories might indeed be valid; but, as he memorably pointed out, "In the long run we are all dead." In the short run, he asserted, interest rates were determined not by the balance between savings and investment at full employment but by "liquidity preference"--the public's desire to hold cash unless offered a sufficient incentive to invest in less safe and convenient assets. Savings and investment were still necessarily equal; but if desired savings at full employment turned out to exceed desired investment, what would fall would be not interest rates but the level of employment and output. In particular, if investment demand should fall for whatever reason--such as, say, a stock-market crash--the result would be an economy-wide slump.

It was a brilliant re-imagining of the way the economy worked, one that received quick acceptance from the brightest young economists of the time. True, some realized very early that Keynes' picture was oversimplified; in particular, that the level of employment and output would normally feed back to interest rates, and that this might make a lot of difference. Still, for a number of years after the publication of The General Theory, many economic theorists were fascinated by the implications of that picture, which seemed to take us into a looking-glass world in which virtue was punished and self-indulgence rewarded.

Consider, for example, the "paradox of thrift." Suppose that for some reason the savings rate--the fraction of income not spent--goes up. According to the early Keynesian models, this will actually lead to a decline in total savings and investment. Why? Because higher desired savings will lead to an economic slump, which will reduce income and also reduce investment demand; since in the end savings and investment are always equal, the total volume of savings must actually fall!
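As a minimal numerical sketch of that logic (my own illustration, not Krugman's; all the parameter values are made up), let income be set by the simplest early-Keynesian condition that output equals consumption plus investment, with part of investment induced by income:

```python
# Paradox of thrift in a bare-bones income-expenditure model.
# Saving:      S = s * Y             (s = saving rate)
# Investment:  I = I0 + m * Y        (m = induced-investment coefficient, 0 < m < s)
# Equilibrium: Y = (1 - s) * Y + I0 + m * Y   =>   Y = I0 / (s - m)
# Every number here is an illustrative assumption, not an estimate.

def equilibrium(s, I0=100.0, m=0.1):
    y = I0 / (s - m)       # equilibrium income
    return y, s * y        # income and total saving (which equals total investment)

for s in (0.20, 0.25):     # households try to save a larger share of income
    y, saving = equilibrium(s)
    print(f"saving rate {s:.2f}: income = {y:7.1f}, total saving = {saving:6.1f}")

# Raising the saving rate from 0.20 to 0.25 lowers income from 1000.0 to about
# 666.7 and lowers total saving from 200.0 to about 166.7: the attempt to save
# more leaves the economy saving (and investing) less.
```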

Or consider the "widow's cruse" theory of wages and employment (named after an old folk tale). You might think that raising wages would reduce the demand for labor; but some early Keynesians argued that redistributing income from profits to wages would raise consumption demand, because workers save less than capitalists (actually they don't, but that's another story), and therefore increase output and employment.

Such paradoxes are still fun to contemplate; they still appear in some freshman textbooks. Nonetheless, few economists take them seriously these days. There are a number of reasons, but the most important can be stated in two words: Alan Greenspan.

After all, the simple Keynesian story is one in which interest rates are independent of the level of employment and output. But in reality the Federal Reserve Board actively manages interest rates, pushing them down when it thinks employment is too low and raising them when it thinks the economy is overheating. You may quarrel with the Fed chairman's judgment--you may think that he should keep the economy on a looser rein--but you can hardly dispute his power. Indeed, if you want a simple model for predicting the unemployment rate in the United States over the next few years, here it is: It will be what Greenspan wants it to be, plus or minus a random error reflecting the fact that he is not quite God.

But putting Greenspan (or his successor) into the picture restores much of the classical vision of the macroeconomy. Instead of an invisible hand pushing the economy toward full employment in some unspecified long run, we have the visible hand of the Fed pushing us toward its estimate of the noninflationary unemployment rate over the course of two or three years. To accomplish this, the board must raise or lower interest rates to bring savings and investment at that target unemployment rate in line with each other. And so all the paradoxes of thrift, widow's cruses, and so on become irrelevant. In particular, an increase in the savings rate will translate into higher investment after all, because the Fed will make sure that it does.

To me, at least, the idea that changes in demand will normally be offset by Fed policy--so that they will, on average, have no effect on employment--seems both simple and entirely reasonable. Yet it is clear that very few people outside the world of academic economics think about things that way. For example, the debate over the North American Free Trade Agreement was conducted almost entirely in terms of supposed job creation or destruction. The obvious (to me) point that the average unemployment rate over the next 10 years will be what the Fed wants it to be, regardless of the U.S.-Mexico trade balance, never made it into the public consciousness. (In fact, when I made that argument at one panel discussion in 1993, a fellow panelist--a NAFTA advocate, as it happens--exploded in rage: "It's remarks like that that make people hate economists!")

What has made it into the public consciousness--including, alas, that of many policy intellectuals who imagine themselves well informed--is a sort of caricature Keynesianism, the hallmark of which is an uncritical acceptance of the idea that reduced consumer spending is always a bad thing. In the United States, where inflation and the budget deficit have receded for the time being, vulgar Keynesianism has recently staged an impressive comeback. The paradox of thrift and the widow's cruse are both major themes in William Greider's latest book, which I discussed last month. (Although it is doubtful whether Greider is aware of the source of his ideas--as Keynes wrote, "Practical men, who believe themselves quite exempt from any intellectual influence, are usually the slaves of some defunct economist.") It is perhaps not surprising that the same ideas are echoed by John B. Judis in the New Republic; but when you see the idea that higher savings will actually reduce growth treated seriously in Business Week ("Looking for Growth in All the Wrong Places," Feb. 3), you realize that there is a real cultural phenomenon developing.

To justify the claim that savings are actually bad for growth (as opposed to the quite different, more reasonable position that they are not as crucial as some would claim), you must convincingly argue that the Fed is impotent--that it cannot, by lowering interest rates, ensure that an increase in desired savings gets translated into higher investment.

It is not enough to argue that interest rates are only one of several influences on investment. That is like saying that my pressure on the gas pedal is only one of many influences on the speed of my car. So what? I am able to adjust that pressure, and so my car's speed is normally determined by how fast I think I can safely drive. Similarly, Greenspan is able to change interest rates freely (the Fed can double the money supply in a day, if it wants to), and so the level of employment is normally determined by how high he thinks it can safely go--end of story.

No, to make sense of the claim that savings are bad you must argue either that interest rates have no effect on spending (try telling that to the National Association of Homebuilders) or that potential savings are so high compared with investment opportunities that the Fed cannot bring the two in line even at a near-zero interest rate. The latter was a reasonable position during the 1930s, when the rate on Treasury bills was less than one-tenth of 1 percent; it is an arguable claim right now for Japan, where interest rates are about 1 percent. (Actually, I think that the Bank of Japan could still pull that economy out of its funk, and that its passivity is a case of gross malfeasance. That, however, is a subject for another column.) But the bank that holds a mortgage on my house sends me a little notice each month assuring me that the interest rate in America is still quite positive, thank you.

Anyway, this is a moot point, because the people who insist that savings are bad do not think that the Fed is impotent. On the contrary, they are generally the same people who insist that the disappointing performance of the U.S. economy over the past generation is all the Fed's fault, and that we could grow our way out of our troubles if only Greenspan would let us.

Let's quote the Feb. 3 Business Week commentary:

Some contrarian economists argue that forcing up savings is likely to slow the economy, depressing investment rather than sparking it. "You need to stimulate the investment decision," says University of Texas economist James K. Galbraith, a Keynesian. He would rather stimulate growth by cutting interest rates.

So, increasing savings will slow the economy--presumably because the Fed cannot induce an increase in investment by cutting interest rates. Instead, the Fed should stimulate growth by cutting interest rates, which will work because lower interest rates will induce an increase in investment.

Am I missing something?

To read the reply of "Vulgar Keynesian" James K. Galbraith, in which he explains green cheese and Keynes, click here.

http://slate.msn.com/id/1917

Size Does Matter, In defense of macroeconomics. By Paul Krugman, Posted Friday, July 10, 1998, at 12:30 AM PT

If you use the term "microeconomics" in a WordPerfect document, the spelling checker will flag it and suggest "macroeconomics" instead. The spelling checker has a point. You see, macroeconomics has gone out of fashion. Not only academic economists but also some of our most influential economic pundits seem to regard it as bad manners to talk about recessions and recoveries and how governments might alleviate the former and engineer the latter. Ordinarily reasonable people now argue that the business cycle is a trivial matter, unworthy of attention when compared with microeconomic issues like the incentive effects of taxes and regulation. Trying to do anything about recessions is bad for growth, they say, and even thinking about the business cycle is a bad thing, because it distracts people from what really matters.

What is so peculiar about this attitude, which seems to become more prevalent each year, is that we live in a world in which those old-fashioned macroeconomic concerns are more pressing than they have been for generations. Not since the days of John Maynard Keynes have his questions (if not necessarily his answers) been so relevant. So an occasional reminder that the big things do matter, that getting microeconomic policy right is no help if you stumble into a depression, is welcome from any source--even a deficient dictionary.

To see what I'm talking about, consider a recent Washington Post column by Robert Samuelson, in which he seems to dismiss all macroeconomic analysis--all attempts to understand the behavior of aggregates such as gross domestic product and the price level--as useless, even malign. "What we've learned," declares Samuelson, "is that the little picture is the big picture." Economic success, he argues, is simply a matter of getting the incentives right. And he goes on to deride macroeconomics for its "illusion that it could make the whole system run smoothly almost regardless of how the economy's underlying sectors functioned. ... It's as if a car could run at breakneck speed even if the engine was corroded and missing some parts."

One wonders why the usually judicious Samuelson found it necessary to invent this straw man. Who is supposed to have had that illusion? Even when Keynesian macroeconomists were at their most hubristic, none of them claimed that macroeconomic "fine-tuning" could make an economic jalopy into a Porsche. But they did claim that even a Porsche won't perform very well if you don't give it enough gas--that using three workers very efficiently is not much help if the fourth is unemployed because consumers don't spend enough. And this is not an abstract point: Just look at the economic storms ravaging quite a lot of today's world.

For example, Japan's economy has been shrinking at an alarming pace the last few quarters. Is this because Japanese workers have become lazy or because the country's factories have fallen into disrepair? Or to take a more extreme case, has Indonesia become a 15 percent less productive society than it was a year ago? Of course not: Whatever the ultimate sources of the crisis in Asia, the immediate cause of these slumps is a collapse in that good old-fashioned macroeconomic variable, aggregate demand.

I don't know what provoked Samuelson's outburst. But if one of our most well informed economic journalists has come to disdain macroeconomics, this may be because he has been listening to economists themselves. Over the past 30 years, macroeconomics--and especially that part of macroeconomics that concerns itself with recessions and depressions, in which the economy as a whole is less than the sum of its parts--has fallen steadily into disfavor within the economics profession. As late as the mid-1970s, many textbooks still followed the lead of Paul (no relation to Robert) Samuelson's classic 1948 Economics, beginning with the macroeconomics of booms and slumps and turning to microeconomics only in their second half. Nowadays, however, every textbook (yes, even the one I'm writing) relegates macro to the second half. Even within the macroeconomics half, more and more books (like the much-hyped new text by Harvard's N. Gregory Mankiw) dwell on "safe" issues like growth and inflation as long as possible, introducing the question of recessions and what to do about them almost as a footnote.

In graduate education the situation has become even more extreme. While most Ph.D. programs continue to require that students take a year of macroeconomics, more and more of that year is devoted to long-run issues, less and less to that part of the subject that might tell you who Alan Greenspan is and why he might matter. (When I gave an honorific lecture at one prominent department, students there told me that their macroeconomics course did not even mention money until the last two weeks, and never so much as suggested that monetary policy might have anything to do with business cycles.)

The reasons for this aversion to macroeconomics are a little hard to explain to a lay person. It's not that the business cycle has become less relevant--the U.S. economy has lately had a smooth few years, but macroeconomics was already in retreat during the anything but tranquil '70s and '80s. (I remember one famous anti-Keynesian, challenged during an early '80s conference to explain how his model could be reconciled with the savage recession then gripping the United States, snapping "I'm not interested in the latest residual"--i.e., the latest statistical blip.) Nor did macroeconomics fail the test of empirical relevance. Though it is widely believed that events such as the combination of inflation and unemployment in the 1970s, or the noninflationary growth from 1982 to 1989, baffled and astounded macroeconomists, this turns out to be another of those oddly popular anti-economist legends--similar to the legend that economists refused to believe in increasing returns. The truth is that stagflation was predicted as a possibility long before it emerged as a reality and that the disinflation of the 1980s played out just the way the (old) textbooks said it should.

The real problem with macroeconomics, from a professor's point of view, is the shakiness of its "microfoundations." Most economic theorizing is based on the assumption that individuals behave rationally--that companies set prices to maximize their profits, that workers choose to accept or reject jobs based on a rational calculation of their interests, and so on. You don't have to believe in the literal truth of this assumption to recognize how powerful it is as a working hypothesis. But while macroeconomists generally try to put as much rationality into their models as they can, useful business cycle models--the kind in which Greenspan does matter--always depend crucially on the ad hoc assumption of "sticky prices." In other words, they assume that at least in the short run, companies do not immediately reduce their prices when they cannot sell all their production, and workers do not immediately accept lower wages even when they have trouble finding jobs. This assumption works; that is, it transforms the otherwise incomprehensible reality of the business cycle into something that is not only understandable but, to some extent, controllable. But it makes many economists uncomfortable; it is the classic case of something that works in practice but not in theory.
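To see how much work that one assumption does, here is a minimal sketch (my own, with made-up numbers) contrasting a flexible-price economy, where a drop in nominal spending only lowers prices, with a sticky-price economy, where the same drop shows up as lost output:

```python
# A toy quantity-equation economy: nominal spending M*V buys output Y at price P.
# Flexible prices: P adjusts, so output stays at potential after a demand shock.
# Sticky prices:   P is fixed in the short run, so output absorbs the shock.
# All values are illustrative assumptions.

V = 1.0              # velocity, held constant
Y_potential = 100.0  # full-employment output
P0 = 1.0             # initial (and, if sticky, unchanged) price level

for M in (100.0, 90.0):              # nominal money/spending falls by 10 percent
    P_flex = M * V / Y_potential     # flexible prices: P falls, Y stays at potential
    Y_sticky = M * V / P0            # sticky prices: P stays put, Y falls
    print(f"M = {M:5.1f}: flexible -> P = {P_flex:.2f}, Y = {Y_potential:.0f}; "
          f"sticky -> P = {P0:.2f}, Y = {Y_sticky:.0f}")

# With flexible prices the 10 percent fall in spending is pure deflation
# (P: 1.00 -> 0.90). With sticky prices it is a 10 percent recession
# (Y: 100 -> 90), exactly the kind of slump a central bank can then offset.
```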

And so economists have, more and more, simply avoided the subject; and being human, have tended to rationalize that avoidance by asserting that the subject isn't really important anyway.

The trouble with this evasion is, of course, that macroeconomics is important. Paul Samuelson had good reasons for beginning his textbook with Keynesian analysis. He knew that students would not find microeconomics, with its emphasis on efficiency, interesting unless they were first convinced that the economy could achieve more or less full employment, that it need not relapse into depression. He also knew what too many latter-day economists have forgotten: Macroeconomics is crucial to the public credibility of economics as a whole. Analytical, model-oriented thinking came to dominate American economics mainly because supernerds like Samuelson had something useful to say about the Great Depression, and their pompous, windy rivals did not. By abandoning macroeconomics the profession not only leaves the world without guidance it desperately needs; it also risks letting the fuzzy-minded literati reclaim the ground they so deservedly lost 60 years ago.

Of course the little things matter. But the big things matter too, and if economists try to pretend that they don't, one of these days they are going to get stomped on.

If you didn't stop to find out why useful business cycle models still need to incorporate "sticky prices," click here. And if you missed the article by Robert Samuelson on macroeconomic analysis, click here.

http://slate.msn.com/id/1936

Minimum Worth. By James Surowiecki, Posted Tuesday, Oct. 6, 1998, at 3:32 PM PT

Amidst the chaos that is Flytrap, it's easy to forget that Congress is still occasionally engaged in the process of legislation. Not surprisingly, then, the Senate's 55-44 rejection last week of a proposed $1 increase in the minimum wage passed relatively unnoticed. The increase would have boosted the wage earned by 12 million Americans from $5.15 an hour to $6.15 an hour in two 50-cent increments, beginning on Jan. 1, 1999. The vote was along party lines, with the exception of two Republicans, Arlen Specter and Alfonse D'Amato, who both voted for the increase.

The case against the minimum wage is relatively simple. Academic economists assume that wage increases destroy jobs by keeping employers from hiring unskilled workers. As it happens, there's not much empirical evidence for this, and the most notable academic study of recent years--Card and Krueger's study of employment in New Jersey and Pennsylvania--found no evidence that job creation in New Jersey, which raised the minimum wage, was slower than in Pennsylvania, which didn't. But it's certainly possible that job growth in New Jersey would have been even faster had the minimum not been raised. And obviously there's a point at which a high wage for unskilled workers would deter job creation. It's just not clear that we're anywhere near that point. Even firm opponents of the minimum wage believe that every 20-percent increase in the minimum reduces the employment of young workers by just 2 percent. So the numbers are not huge (though that's of little consolation if you're one of the 2 percent).
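To make that rule of thumb concrete (my own back-of-the-envelope arithmetic, using the bill's own numbers), the proposed move from $5.15 to $6.15 is roughly a 19 percent increase, so even on the opponents' own estimate the implied employment loss among young workers is on the order of 2 percent:

\[
\frac{\$6.15 - \$5.15}{\$5.15} \approx 19.4\%,
\qquad
19.4\% \times \frac{2\%}{20\%} \approx 1.9\%\ \text{fewer young workers employed.}
\]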

In any case, what's underneath the opposition to the minimum wage is the same principle that underlay the opposition to maximum-hours legislation, namely the idea of freedom of contract. Why should the state be able to tell someone he can't work for $1 an hour if he's willing to, and if working for $1 an hour will guarantee him a job? People should be allowed to set their own standards for employment, and enter into contracts without state interference, the argument runs. (This was the basic principle enunciated by the Supreme Court in Lochner v. New York, the 1905 case that struck down state maximum-hours legislation.)

One response to this is simply to make recourse to the empirical evidence, and to argue that there's no evidence that the minimum wage destroys jobs. In an economy running at full employment, in which unskilled workers have actually been losing ground over the last decade, the minimum wage actually does boost the living standards of those at the bottom of the employment ladder. The 10 million workers who are being paid 90 cents an hour more than they were three years ago are more important than the small number of workers who may have lost--or rather, never got--jobs they might otherwise have had.

Still, there's something unsatisfying about this, in part because freedom of contract does seem important and in part because that argument for the minimum wage sounds a little too much like the argument that multinationals shouldn't employ Indonesian workers at bargain-basement wages. So I want to offer a slightly different, and tentative, argument for why the minimum wage in general is good and why an increase at a time of national prosperity makes sense.

Needless to say, we don't come into the world with a perfectly honed sense of value. We determine what things are worth based upon culture and custom. And that determination is, I think, in some measure holistic, which is to say that my sense of what is a fair price for a Mercedes Benz is connected, however obliquely, to my sense of what is a fair price for a hamburger. The most important thing a worker has to figure out is the worth of her time. At heart, all labor involves the trading of time for something else--some product or some service--and employment entails the bartering of time--as embodied in the product/service--for a wage.

The problem is that there's no way to use reason to figure out what your time is worth. There's no universal standard that says that a person's time is equal to X. As a result, an important element in your determination of what your time is worth is everyone else's determination of what their time is worth. That's true both in a concrete sense--if a sizeable minority of workers is willing to flip burgers for $2 an hour, the people who will only flip burgers for $3 an hour are out of luck--and in the broader sense that you get your ideas of worth from the prevailing culture.

One answer to this would be just to let the free market sort it out, to let the tens of millions of negotiating sessions between employers and employees determine the prevailing wage. And in general, of course, that's the best course and is what we do for everyone above the minimum. But what having a minimum wage does is, in the simplest sense, determine the baseline from which all those other negotiating sessions begin. It therefore has a necessary cascading effect, helping to boost wages across the board. More importantly, the minimum wage is a communal expression of value. You might think of it as saying:

"If you're an American, no matter what you do or how educated you are, an hour of your time is worth at least $5.15." There's no question that this infringes on the rights of those who believe their time is worth less than that. But in doing so, it ensures that workers in general will value their time more highly than they otherwise would have. And it seems likely that that's a good thing.

http://www.slate.com/id/1001855

The Sin of Wages, The real reason to oppose the minimum wage. By Steven E. Landsburg, Posted Friday, July 9, 2004, at 6:19 AM PT

John Kerry wants to raise the minimum wage, and President Bush, at least in principle, is on board—provided, says the president's spokesman, that it can be done without placing unreasonable costs on "job creators."

The president is trying to cast doubt on Kerry's proposal by alluding to the old canard that minimum wages cause unemployment and therefore hurt the very people they're supposed to help. Obviously that's occasionally true. If you contribute $6 an hour to your employer's bottom line, and if he's forced to pay you $7 an hour, you'll soon find yourself out on the street.

But so what? Sure, you've lost your job. But don't forget, this was a minimum-wage job in the first place. Losing a lousy job might not be a whole lot worse than keeping it. Meanwhile, lots of minimum-wage workers keep their jobs and are presumably grateful to the politicians who raised their wages.

In fact, the power of the minimum wage to kill jobs has been greatly overestimated. Nowadays, most labor economists will tell you that minimum wages have at most a tiny impact on employment.

Twenty years ago, they'd have told you otherwise. Back then, dozens of published studies concluded that minimum wages had put a lot of people (especially teenagers, blacks, and women) out of work. As the studies continued to pile up, you might think we'd have grown more confident about their common conclusion. Instead, the opposite happened. Even though the studies were all in agreement, they managed to undercut each other.

Here's how: Ordinarily, studies with large sample sizes should be more convincing than studies with small sample sizes. Following the fates of 10,000 workers should tell you more than following the fates of 1,000 workers. But with the minimum-wage studies, that wasn't happening. According to the standard tests of statistical significance, the results of the large-scale studies were, by and large, neither more nor less significant than the results of the small-scale studies. That's screwy. Screwy enough to suggest that the studies being published couldn't possibly be a representative sample of the studies being conducted.

Here's why that matters: Even if minimum wages don't affect employment at all, about five out of every 100 studies will, for unavoidable statistical reasons, appear to show a significant effect. If you could read all 100 studies, that wouldn't be a problem—95 conclude the minimum wage is pretty harmless as far as employment goes, five conclude it's a big job-killer, you realize the latter five are spurious, and you draw the appropriate conclusion. But if the 95 studies that found no effect were deemed uninteresting and never got published, then all you'd see were the spurious five. And then the next year, another five, and the next year another five.

Even when the bulk of all research says one thing, the bulk of all published research can tell a very different and very misleading story.
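Here is a minimal simulation of that selection effect (my own sketch; every parameter is an arbitrary assumption). Each simulated study estimates an effect whose true value is exactly zero, and we count how many nonetheless clear the usual 5 percent significance bar:

```python
# Publication bias when the true effect is zero.
import random
import statistics

random.seed(0)

def significant(n):
    """One study: n noisy observations of a true effect of exactly zero."""
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    t = statistics.fmean(sample) / (statistics.stdev(sample) / n ** 0.5)
    return abs(t) > 1.96          # "statistically significant" at roughly the 5% level

for n in (100, 10_000):           # small studies vs. large studies
    hits = sum(significant(n) for _ in range(1000))
    print(f"sample size {n:>6}: {hits / 10:.1f}% of 1,000 studies look significant")

# Both lines come out near 5 percent: when the true effect is zero, large studies
# are no more likely than small ones to "find something." If only the significant
# few get published, the literature shows a steady trickle of spurious job losses
# even though the overwhelming bulk of the work found nothing.
```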

How do we know what was in all the unpublished research about the minimum wage? Of course we don't know for sure, but here's what we do know: First, the big published studies were no more statistically significant than the small ones. Second, this shouldn't happen if the published results fairly represent all the results. Third, that means there must be some important difference between the published and the unpublished work. And fourth, that means we should be very skeptical of what we see in the published papers.

Now that we've re-evaluated the evidence with all this in mind, here's what most labor economists believe: The minimum wage kills very few jobs, and the jobs it kills were lousy jobs anyway. It is almost impossible to maintain the old argument that minimum wages are bad for minimum-wage workers.

In fact, the minimum wage is very good for unskilled workers. It transfers income to them. And therein lies the right argument against the minimum wage.

Ordinarily, when we decide to transfer income to some group or another—whether it be the working poor, the unemployed, the victims of a flood, or the stockholders of American Airlines—we pay for the transfer out of general tax revenue. That has two advantages: It spreads the burden across all taxpayers, and it makes politicians accountable for their actions. It's easy to look up exactly how much the government gave American, and it's easy to look up exactly which senators voted for it.

By contrast, the minimum wage places the entire burden on one small group: the employers of low-wage workers and, to some extent, their customers. Suppose you're a small entrepreneur with, say, 10 full-time minimum-wage workers. Then a 50 cent increase in the minimum wage is going to cost you about $10,000 a year. That's no different from a $10,000 tax increase. But the politicians who imposed the burden get to claim they never raised anybody's taxes.
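The arithmetic behind that $10,000 figure (my own reconstruction; it assumes the standard 2,000-hour work year per full-time employee) is simply:

\[
10\ \text{workers} \times \$0.50/\text{hour} \times 2{,}000\ \text{hours/year} = \$10{,}000\ \text{per year.}
\]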

If you want to transfer income to the working poor, there are fairer and more honest ways to do it. The Earned Income Tax Credit, for example, accomplishes pretty much the same goals as the minimum wage but without concentrating the burden on a tiny minority. For that matter, the EITC also does a better job of helping the people you'd really want to help, as opposed to, say, middle-class teenagers working summer jobs. It's pretty hard to argue that a minimum-wage increase beats an EITC increase by any criterion.

The minimum wage is nothing but a huge off-the-books tax paid by a small group of people, with all the proceeds paid out as the equivalent of welfare to a different small group of people. If a tax-and-spend program that arbitrary were spelled out explicitly, voters would recoil. How unfortunate that when it is disguised as a minimum wage, not even our Republican president can manage to muster a principled objection.

http://www.slate.com/id/2103486

Singing the union blues. Why do Americans distrust organized labor more than the fat cats of Wall Street? By Heather Chaplin

Perhaps I'm just perverse, but as the Dow flirts with the 10,000 mark and the country continues its unabashed stock market obsession, I find my mind wandering. While the rest of the money-obsessed are toasting the smashing of another Wall Street barrier, I'm wondering what this means for organized labor.

Confession: I grow misty-eyed watching any movie that contains a group of people coming together and throwing off their bonds of oppression. I have to rent these movies alone so I can sit in the dark and blow my nose and weep over the beauty of people working to better their lives. It's a fault, I know, probably the result of too much Pete Seeger and Woody Guthrie as a child, but what can I do about it now? This debilitating weakness has already hardened into a full-blown dedication to unionization.

The mystery for me has always been why Americans aren't more dedicated to organized labor, why they aren't more interested in the power of solidarity between working people. You may roll your eyes, but "all for one and one for all," and "an injury to one is an injury to all," are intensely practical expressions when you examine the principles behind them. Soldiers understand the concept, gang members do, even Europeans. How come to Americans, they just seem corny, naive, distasteful even? Is it because "working people" implies something sweaty from which most Americans want to be disassociated? Is it the result of a snow job by corporate interests, convincing us that joining a union means a loss of freedom? Is solidarity simply passé?

Organized labor has certainly done its share in turning people off. Mob connections, for example, are never good for PR, unless you're in the mob, which most Americans aren't. Neither is small-scale corruption. Yes, like any movement, organized labor has its problems. In addition, it failed to maintain organizing efforts after reaching its peak membership years in the 1950s, and in the 1970s and 1980s, it let important links with community and other social groups disintegrate. Many people don't know it ever had them: that, for example, the 1963 March on Washington, when Martin Luther King Jr. made his "I Have a Dream" speech, was cosponsored by the United Auto Workers. It was called the "March on Washington for Jobs and Freedom."

According to the most recent Gallup poll on the subject, 60 percent of the country approve of labor unions and 31 percent don't, as of August 1997. Nine percent had no opinion. Previous polls in 1991 and 1986 show roughly equivalent sentiments. A more recent study, released by the Labor Research Association, asked a slightly tougher question but got a similar response. That survey found about 56 percent of likely voters think labor unions have a positive effect on the country, compared with 28.5 percent who think the opposite. In 1995, when the association last did the poll, only 49 percent responded positively.

The labor movement considers these numbers encouraging. And bearing in mind that 10 years ago, many were pronouncing organized labor dead, and that 20 years ago approval ratings had fallen to an all-time low of 55 percent, according to Gallup, this is undoubtedly true. On the other hand, Gallup has been doing the same poll since 1936, and approval ratings never dipped as low as 60 percent until 1972. In the 1950s, approval ratings ran to 75 percent.

So I worry. In this time of growing corporate power, we need organized protection. And that's not just the rambling of a Pete Seeger-addled brain. The primary goal of corporations is making money, not treating people fairly, let alone nicely. I don't think anyone on either end of the spectrum would argue with that. Sometimes, in a happy coincidence, the two come together and it makes financial sense to treat employees well. These new workplaces we read about, where graphic designers create corporate images sitting on Day-Glo beanbag office furniture and computer geniuses get shiatsu massages while creating new ways to shop online, for example, sound fun. Don't kid yourself, though: if it stopped serving the employer's financial interest to treat its employees this way, it would stop doing so. History shows this to be true. Hell, the present shows this to be true. Why do you think companies like Nike move operations to third world countries? For the weather? Because they don't have child labor laws there, because there's no organized labor movement to say, "Um, excuse me ..."

Whatever your problems might be with the AFL-CIO, or a particular local -- too radical, too conservative, whatever -- a cursory glance through 20th century American history and a quick comparative scan around the globe reminds us of the good organized labor has done those of us who don't own the ol' means of production. The eight-hour work day, the 40-hour work week, child labor laws, the minimum wage, pensions, workplace safety regulations, insurance, vacations, workers comp, the weekend. We assume these things are our innate rights, but they are all the results of hard-fought battles between organized labor and employers. And some of these things are national standards but are not federally mandated, which means they could disappear any time we let them.

So let the champagne flow, let the numbers dazzling our business pages shine. But somewhere in the back of our minds, let's remember the less glamorous flip side of this world of business and money. And let's remember that while our skyrocketing stocks make us feel rich, we don't really control the company, and as long as that's the case, we need organized labor watching our backs. SALON | March 19, 1999

http://archive.salon.com/money/col/chap/1999/03/19chap.html

Noblesse Oblige, By David R. Murray

Translated from the French, noblesse oblige means “nobility obligates.” Originally, noblesse oblige was used to suggest that certain requirements of behavior could be legitimately imposed upon persons of noble birth. Noblesse oblige in modern English parlance is a broad literary concept. It suggests that anyone who possesses special talents or gifts is required by society to make the best use of those gifts; that he or she is duty-bound to do his or her best. The concept has been extended to include corporations and even entire nations: a December 14, 1992 article in Time magazine about U.S. involvement in Somalia was titled, “Noblesse oblige for the sole superpower.”

Historic Roots

A fairly recent term, noblesse oblige was first used in 1837 by F. A. Kemble, who wrote in a letter, “To be sure, if ‘noblesse oblige,’ royalty must do so still more” (OED, p. 453).

The connection between noblesse oblige and royalty continues to this day. It was the title of an April 25, 1994 article in Forbes magazine that profiles Bostonian Martin Lobkowicz, the son of a Czech aristocrat, who fled Czechoslovakia at the age of 10. With Czech democratization, Martin Lobkowicz was able to reclaim his family’s estates. He now owns eight castles, artworks by Canaletto, Rubens, Velazquez, and Brueghel and 40 Spanish portraits from the 16th and 17th centuries. He possesses a library of 70,000 volumes and original musical scores, including Beethoven’s original score for the Third Symphony and Mozart’s opera Don Giovanni. And he also owns thousands of acres of forests, a brewery dating to 1466, a vineyard, a spa, and a letter from Beethoven begging the family to increase his pension. But, he says, “We are merely custodians of the cultural treasures that must be preserved for future generations” (Berman, 1994).

In this case, while Mr. Lobkowicz could sell his family’s treasures and collect hundreds of millions of dollars for himself and his family, he feels obligated to maintain them for the people of the Czech Republic. Societal pressures here compel Mr. Lobkowicz to act selflessly and honorably--the very essence of noblesse oblige.

Importance

What matters is not the definition of noblesse oblige itself, but an understanding of the strength and power the concept wields over many of the world’s most successful business and civic leaders, as well as over gifted ordinary individuals. People who do not consider themselves noble (i.e., the beneficiaries of any special skill, talent, or benefit) may feel no external obligation to excel. Yet, if the concept is taken broadly, each of us can be seen as having unique skills and talents that we are obligated to put to their best use.

Ties to the Philanthropic Sector

For some donors, a sense of noblesse oblige is the key reason underlying their philanthropic activities. Individuals who possess what they perceive as significant wealth (“significant” being different for every donor) often give money away in an effort to do the right thing. They may feel that their amount of wealth is unfair or unwarranted; they may feel guilty about their riches or selfish if they maintain their wealth for themselves. By sharing their riches (either monetary or otherwise) they may reap great joy.

Noblesse oblige also applies to areas apart from money. A particularly talented administrator or manager may feel obligated to help an organization he or she cares about if the organization is foundering. A parent who enjoys learning may volunteer to teach at his or her child’s school. An attorney may provide pro bono services to a church. Noblesse oblige thus may apply to voluntarism as well as to direct gifts of cash.

Key Related Ideas

Noblesse oblige is simply one of many donor motivations for giving. It should be considered alongside other donor motivations, including public recognition, belief in the recipient organization’s mission, acquisition of social status, mutual aid, serial reciprocity, and others. Noblesse oblige is also related to any study of early American philanthropists. Andrew Carnegie’s wealth achieved for him a kind of nobility, a nobility which then required him to give away much of his fortune because of noblesse oblige.

http://www.learningtogive.com/papers/concepts/noblesse.html

Driving Force: Henry Ford. He produced an affordable car, paid high wages and helped create a middle class. Not bad for an autocrat. By Lee Iacocca

Dec. 7, 1998 The only time I ever met Henry Ford, he looked at me and probably wondered, "Who is this little s.o.b. fresh out of college?" He wasn't real big on college graduates, and I was one of 50 in the Ford training course in September 1946, working in a huge drafting room at the enormous River Rouge plant near Detroit.

One day there was a big commotion at one end of the floor and in walked Henry Ford with Charles Lindbergh. They walked down my aisle asking men what they were doing. I was working on a mechanical drawing of a clutch spring (which drove me out of engineering forever), and I was worried that they'd ask me a question because I didn't know what the hell I was doing--I'd been there only 30 days. I was just awestruck by the fact that there was Colonel Lindbergh with my new boss, coming to shake my hand.

The boss was a genius. He was an eccentric. He was no prince in his social attitudes and his politics. But Henry Ford's mark in history is almost unbelievable. In 1905, when there were 50 start-up companies a year trying to get into the auto business, his backers at the new Ford Motor Co. were insisting that the best way to maximize profits was to build a car for the rich.

But Ford was from modest, agrarian Michigan roots. And he thought that the guys who made the cars ought to be able to afford one themselves so that they too could go for a spin on a Sunday afternoon. In typical fashion, instead of listening to his backers, Ford eventually bought them out.

And that proved to be only the first smart move in a crusade that would make him the father of 20th century American industry. When the black Model T rolled out in 1908, it was hailed as America's Everyman car--elegant in its simplicity and a dream machine not just for engineers but for marketing men as well.

Ford instituted industrial mass production, but what really mattered to him was mass consumption. He figured that if he paid his factory workers a real living wage and produced more cars in less time for less money, everyone would buy them.

Almost half a century before Ray Kroc sold a single McDonald's hamburger, Ford invented the dealer-franchise system to sell and service cars. In the same way that all politics is local, he knew that business had to be local. Ford's "road men" became a familiar part of the American landscape. By 1912 there were 7,000 Ford dealers across the country.

In much the same fashion, he worked on making sure that an automotive infrastructure developed along with the cars. Just like horses, cars had to be fed--so Ford pushed for gas stations everywhere. And as his tin lizzies bounced over the rutted tracks of the horse age, he campaigned for better roads, which eventually led to an interstate-highway system that is still the envy of the world.

His vision would help create a middle class in the U.S., one marked by urbanization, rising wages and some free time in which to spend them. When Ford left the family farm at age 16 and walked eight miles to his first job in a Detroit machine shop, only 2 out of 8 Americans lived in the cities. By World War II that figure would double, and the affordable Model T was one reason for it. People flocked to Detroit for jobs, and if they worked in one of Henry's factories, they could afford one of his cars--it's a virtuous circle, and he was the ringmaster. By the time production ceased for the Model T in 1927, more than 15 million cars had been sold--or half the world's output.

Nobody was more of an inspiration to Ford than the great inventor Thomas Alva Edison. At the turn of the century Edison had blessed Ford's pursuit of an efficient, gas-powered car during a chance meeting at Detroit's Edison Illuminating Co., where Ford was chief engineer. (Ford had already worked for the company of Edison's fierce rival, George Westinghouse.)

After the Model T's enormous success, the two visionaries from rural Michigan became friends and business partners. Ford asked Edison to develop an electric storage battery for the car and funded the effort with $1.5 million. Ironically, despite all his other great inventions, Edison never perfected the storage battery. Yet Ford immortalized his mentor's inventive genius by building the Edison Institute in Dearborn.

Ford's great strength was the manufacturing process--not invention. Long before he started a car company, he was an inveterate tinkerer, known for picking up loose scraps of metal and wire and turning them into machines. He'd been putting cars together since 1891. Although by no means the first popular automobile, the Model T showed the world just how innovative Ford was at combining technology and markets.

The company's assembly line alone threw America's Industrial Revolution into overdrive. Instead of having workers put together the entire car, Ford's cronies, who were great tool- and diemakers from Scotland, organized teams that added parts to each Model T as it moved down a line. By the time Ford's sprawling Highland Park plant was humming along in 1914, the world's first automatic conveyor belt could churn out a car every 93 minutes.

The same year, Henry Ford shocked the world with what probably stands as his greatest contribution ever: the $5-a-day minimum-wage scheme. The average wage in the auto industry then was $2.34 for a 9-hr. shift. Ford not only doubled that, he also shaved an hour off the workday. In those years it was unthinkable that a guy could be paid that much for doing something that didn't involve an awful lot of training or education. The Wall Street Journal called the plan "an economic crime," and critics everywhere heaped equal scorn on "Fordism."

But as the wage increased later to a daily $10, it proved a critical component of Ford's quest to make the automobile accessible to all. The critics were too stupid to comprehend that because Ford had lowered his costs per car, the higher wages didn't matter--except for making it feasible for more people to buy cars.

When Ford stumbled, it was because he wanted to do everything his way. By the late 1920s the company had become so vertically integrated that it was completely self-sufficient. Ford controlled rubber plantations in Brazil, a fleet of ships, a railroad, 16 coal mines, and thousands of acres of timberland and iron-ore mines in Michigan and Minnesota. All this was combined at the gigantic River Rouge plant, a sprawling city of a place where more than 100,000 men worked.

The problem was that for too long they worked on only one model. Although people told him to diversify, Henry Ford had developed tunnel vision. He basically started saying "to hell with the customer," who can have any color as long as it's black. He didn't bring out a new design until the Model A in '27, and by then GM was gaining.

In a sense Henry Ford became a prisoner of his own success. He turned on some of his best and brightest when they launched design changes or plans he had not approved. On one level you have to admire his paternalism. He was so worried that his workers would go crazy with their five bucks a day that he set up a "Sociological Department" to make sure that they didn't blow the money on booze and vice. He banned smoking because he thought, correctly as it turned out, that tobacco was unhealthy. "I want the whole organization dominated by a just, generous and humane policy," he said.

Naturally, Ford, and only Ford, determined that policy. He was violently opposed to labor organizers, whom he saw as "the worst thing that ever struck the earth," and entirely unnecessary--who, after all, knew more about taking care of his people than he? Only when he was faced with a general strike in 1941 did he finally agree to let the United Auto Workers organize a plant.

By then Alfred P. Sloan had combined various car companies into a powerful General Motors, with a variety of models and prices to suit all tastes. He had also made labor peace. That left Ford in the dust, its management in turmoil. And if World War II hadn't turned the company's manufacturing prowess to the business of making B-24 bombers and jeeps, it is entirely possible that the 1932 V-8 engine might have been Ford's last innovation.

In the prewar years there was no intelligent management at Ford. When I arrived at the end of the war, the company was a monolithic dictatorship. Its balance sheet was still being kept on the back of an envelope, and the guys in purchasing had to weigh the invoices to count them. College kids, managers, anyone with book learning was viewed with some kind of suspicion. Ford had done so many screwy things--from terrorizing his own lieutenants to canonizing Adolf Hitler--that the company's image was as low as it could go.

It was Henry Ford II who rescued the legacy. He played down his grandfather's antics, and he made amends with the Jewish business community that Henry Ford had alienated so much with the racist attacks that are now a matter of historical record. Henry II encouraged the "whiz kids" like Robert McNamara and Arjay Miller to modernize management, which put the company back on track. Ford was the first company to get a car out after the war, and it was the only company that had a real base overseas. In fact, one of the reasons that Ford is so competitive today is that from the very beginning, Henry Ford went anywhere there was a road--and usually a river. He took the company to 33 countries at his peak. These days the automobile business is going more global every day, and in that, as he was about so many things, Ford was prescient.

Henry Ford died in his bed at his Fair Lane mansion seven months after I met him, during a blackout caused by a storm in the spring of 1947. He was 83. The fact is, there probably couldn't be a Henry Ford in today's world. Business is too collegial. One hundred years ago, business was done by virtual dictators--men laden with riches and so much power they could take over a country if they wanted to. That's not acceptable anymore. But if it hadn't been for Henry Ford's drive to create a mass market for cars, America wouldn't have a middle class today.

Lee Iacocca was president of Ford, later chairman of Chrysler, and last year founded EV Global Motors.

http://www.time.com/time/magazine/article/subscriber/0,10987,1101981207-140668,00.html

The People's Hitler. Does Hitler's popularity discredit populism itself? By Adam Shatz, Nov. 19, 1997, The Hitler of History By John Lukacs. Knopf; 320 pages; $26

It is one of the paradoxes of modern biography that Adolf Hitler has seldom been taken seriously as a political leader. The Nazi dictator was not simply the century's most murderous tyrant; he was one of its most brilliant politicians.

Hitler, says Lukacs, was a peculiarly modern demagogue. He created an electrifying fusion of aggressive nationalism and populist rhetoric. Hitler was "extraordinarily aware of his pictorial image," and "understood the popular effect of the cult of the 'star'" on his fans, the ordinary Germans who admired and even loved him. ("If only the Führer knew!"--the cry of many ordinary Germans who felt betrayed by the Nazi regime--suggests the depth of their affection.)

Hitler was extremely adept at sizing up his opponents' weaknesses and understood acutely the "supreme importance of land power" with the "motorization of military movement." As a result, he succeeded, in less than a decade, in making himself the ruler of Europe from the gates of Moscow to the English Channel. His victories emboldened him and his National Socialist followers to visit untold misery upon their victims.

Since the mid-'80s, an insidious form of revisionism has come from reputable German scholars in the Historikerstreit, a quarrel among German historians over the uniqueness and meaning of the Holocaust. Ernst Nolte offered an implicit justification of the Holocaust as an anxious, reactive measure sparked by "the previous practices and exterminations by the Russian Revolution." For Nolte, Stalin was the original sinner. Andreas Hillgruber added his own provocative twist by proclaiming that German historians were obliged to "identify" with the German soldiers on the Eastern front who were protecting Germany from Bolshevism. The thrust of such interpretations, as Lukacs argues, was to rehabilitate Hitler as a German patriot and anti-Communist.

Lukacs concludes that "their explanations amounted to a kind of relativization" to the point of "defending Hitler." And yet, Lukacs' own corrections to Hitler history are idiosyncratic and often wrong. Take, for instance, Lukacs' claim that Hitler was not a biological racist. Lukacs gleans this insight from a solitary remark by Hitler, in 1945, to the effect that "from the genetic point of view there is no such thing as the Jewish race." If Hitler was less a biological racist than an extreme nationalist, as Lukacs asserts, this was a distinction without a difference to the millions of Germans instructed in such particulars of Social Darwinist "science" as how to tell a Jewish skull from an Aryan one.

Why would Lukacs underplay Hitler the racist? Because he is more intent upon painting Hitler as a populist--a creature of the baleful age that wrested authority from responsible elites and enshrined popular sovereignty. Lukacs, who came of age in Hungary while Hitler was in power, has long described himself as a "reactionary"--a partisan of the patrician mores of pre-World War I Europe. Because Hitler took to the streets and disregarded the niceties of bourgeois politics, Lukacs considers him a friend of the proletariat. While Lukacs is right to point out that Hitler "was contemptuous of [the bourgeoisie's] caution, of their thrift,...of their desire for safety," Hitler did not crush their political parties and send them in droves to labor camps--this fate he reserved for the organized working class. Nor did Hitler try to abolish capitalism, as Lukacs suggests, although, like his adversary Roosevelt, he did expand state supervision of private industry. Despite his revolutionary rhetoric, his inspired use of modern techniques of collective mobilization, and his willingness to strike up a tactical alliance with Stalin, Hitler remained a committed foe of what he called "Jew-Bolshevism," and indeed, of all leveling ideologies.

Lukacs often writes as though Hitler triumphed in the war. That's because, for Lukacs, the horror of Hitlerism is simply an expression of the horror of modern collectivism. "In one sense Hitler's vision survived him," notes Lukacs. "During the twentieth century the compound of nationalism with socialism has become the nearly universal practice for all states ... [w]hether they call themselves socialist or not....We are all national socialists now." Does this mean that the difference between, say, Swedish social democracy and Nazi state capitalism is less significant than the similarities? Lukacs would not, of course, go that far. But in using Hitler to illustrate the threat of power passing into the hands of the masses, he ignores an important distinction between mass societies: those ruled by charismatic dictators, unchecked by popular representation; and those governed by democratic institutions. With some exceptions, we are all democrats now. Perverse as this may sound, Hitler is one reason why.

http://slate.msn.com/id/3026/

The Lighter Side of Spam, Finding a funny bone in canned meat. By Rob Walker, Posted Monday, Dec. 9, 2002, at 11:31 AM PT

Spam has what you might call a challenged brand. The challenge isn't that people haven't heard of Spam—pretty much everyone is familiar with the stuff, which was introduced by Hormel back in 1937. According to a history at SPAM.com, the name is a mushing-together of "spiced ham." Perhaps a meat product that is scrambled and pummeled by industrial processes into a brazenly inorganic geometric shape once seemed futuristic and exciting. But like a lot of things that once seemed futuristic and exciting, Spam now seems funny and maybe a little creepy. You can't help but imagine a big vat of, I don't know, whipped pig, being poured into those cans. It doesn't make you think of ham, it makes you think of Soylent Green. (And as if all this weren't enough, "spam" has of course become the noun referring to e-junk-mail, one of the most annoying aspects of the online age.) To be blunt, the Spam brand lives mostly as a punch line, and the challenge is that everyone has heard of it.

If the Web site is any indication, the masters of the Spam brand are aware of its rep and even have a sense of humor about it. The history cited earlier begins, "Bread lines, Dust Bowls, Bonnie and Clyde, New Deals and plenty of raw deals, the '30s were tough. Yet conditions like that gave rise to heroes"—such as Spam. But elsewhere the site notes that more than 5 billion cans of the stuff have been purchased over the decades, so Spam is not simply a laughing matter.

That's why Spam's new advertising campaign is a little bewildering. The background material from its ad agency is surprisingly straightforward in describing the mission—to revitalize a "high-volume, profitable icon brand that's starting to decline." The theme of the campaign, anchored by two TV spots, is articulated in the tag line "Crazy Tasty."

One ad is set in a brightly colored suburban dining room that suggests a 1970s sitcom. The adolescent son and daughter figures are drinking milk. The boy comments on the delicious mac-and-cheese dinner the family is enjoying. "That's because it's made with Spam," says Dad, who sort of leans across the table and delivers this insight with the conviction that suggests he is a man who might come unglued at any moment. He explains, through a clenched smile, how he made the dish sparkle by adding cubes of Spam. "Wow," says Mom, "I'd sure like some more—but there's none left!" Dad claps his hands and screams, "MORE SPAM!" A Spam van crashes through the wall, to the delight of everyone. "Mmm," the daughter says. "More Spam!" And then everyone laughs like a bunch of lunatics.

A second spot is set at a backyard barbecue. Not Necessarily Jim Carrey is on hand again, explaining to his neighbors, or whoever these people are, how he assembled the Spam-burgers they've been enjoying. Again there is a Spam shortage, and again his face briefly darkens before he summons another Spam van with an unhinged yell.

These ads are, frankly, unnerving. Which is probably why I like them. You might think it's a mistake to suggest that Spam's core constituency is suburban crazies who seem vaguely tortured behind their happy masks and might at any moment embark on some Cheeveresque journey across all the neighborhood's swimming pools in search of canned meat. Won't this make current Spam fans feel laughed at and betrayed?

I doubt it. I think it's actually fairly shrewd of Hormel to show a sense of humor about Spam—and the more twisted, the better. These ads never quite make fun of the, um, product. Besides, I would guess that even the most devoted addict recognizes that there's something sort of funny (peculiar and ha-ha) about it. The oddball humor of the ads makes Spam seem, if not exactly desirable, then at the very least harmless. In this case, that's very much a step in the right direction.

http://slate.msn.com/id/2074884/

An Eyewitness Account

Lawrence Svobida, a wheat farmer from Kansas, witnessed first-hand the searing drought and relentless winds that crippled the southern Great Plains during the 1930's. His vivid account is taken from his memoir, "Farming the Dust Bowl."

"...With the gales came the dust. Sometimes it was so thick that it completely hid the sun. Visibility ranged from nothing to fifty feet, the former when the eyes were filled with dirt which could not be avoided, even with goggles."

"...When I knew that my crop was

irrevocably gone I experienced a deathly feeling which, I hope, can affect a man only once in a lifetime. My dreams and ambitions had been flouted by nature, and my shattered ideals seemed gone forever. Fate had dealt me a cruel blow above which I felt utterly unable to rise."

"...A cloud is seen to be approaching from a distance of many miles…it hangs low, seeming to hug the earth….it appears to be rolling on itself….As it sweeps onward, the landscape is progressively blotted out. Birds fly in terror before the storm, and only those that are strong

of wing may escape. The smaller birds fly until they are exhausted, then fall to the ground, to share the fate of the thousands of jack rabbits which perish from suffocation."

"...With my financial resources at last exhausted and my health seriously, if not permanently impaired, I am at last ready to admit

defeat and leave the Dust Bowl forever. With youth and ambition ground into the very dust itself, I can only drift with the tide." http://www.pbs.org/wgbh/amex/dustbowl/sfeature/eyewitness.html

New Deal Remedies

Hugh Hammond Bennett had been leading a campaign to reform farming practices with the intention of preserving the soil well before Roosevelt became president. In the mid-1930's desperate Dust Bowl farmers took little solace in hearing from Bennett that, "...Americans have been the greatest destroyers of land of any race or people, barbaric or civilized." Despite such statements, Bennett was not insensitive to the hardships faced by Dust Bowl farmers.

In April 1935, Bennett was on his way to testify before Congress when he learned of a dust storm blowing in from the western plains. As a dusty gloom blotted out the midday sun, Bennett exclaimed, "This, gentlemen, is what I have been talking about." Congress responded by passing the Soil Conservation Act of 1935.

Convincing farmers to approach the land in a new manner would take much effort and a bit of old-fashioned bribery. The federal government paid out one dollar per acre to farmers employing planting and plowing methods aimed at conserving the soil. From 1933 to 1937 such payments provided many Dust Bowl farmers with their only source of income.

"It was not long until the Federal Government made funds available to farmers who needed financial help to do the necessary work to check the blowing of their fields. [T]he conditions attached were humiliating to many farmers who had long taken pride in their independence. [T]he farmer had to sign papers stating that he was a pauper. Only then would he be given a credit slip entitling him to the supplies he needed most."

Wounded pride and all, most Dust Bowl farmers were immensely appreciative of Roosevelt. For many, federal aid made it possible for them to wait out the blistering years of drought and dust. When the rains finally came at the tail end of the decade and the Southern Plains once again yielded a bountiful harvest, the relationship between the farmer and the federal government remained entwined. Henceforth, a complex, and sometimes controversial, system of price supports and subsidies emerged to form the backbone of federal farm policy.

http://www.pbs.org/wgbh/amex/dustbowl/sfeature/newdeal.html