In current usage, the keyword “corporation” is synonymous with “business corporation,” generally referring to a for-profit organization that can operate at the discretion of its owners and managers free of social and legislative control. The term is derived from the Latin corporatus, the past participle of corporare, which means “form into a body,” and appeared in English by 1530. A business corporation can own property; buy, sell, and control assets, including other corporations; pay or avoid taxes; write or break contracts; make and market products; and engage in every kind of economic activity. At the same time, the persons involved in a corporation have under most circumstances no liability for its debts. Since 1900, the corporation has been the dominant form for organizing capital, production, and financial transactions. By 2000, the corporation had become a dominant force in the global economy, the only alternative to the state as an organizer of large-scale production, a rival to national governments, and a powerful presence in the world's cultures. Of the world's hundred largest economies in 2000, forty-nine were nation-states and fifty-one were corporations.
American studies and cultural studies generally have focused not on the corporation or the corporate form but rather on features of culture and society that the corporation has affected (Trachtenberg 1982; Horwitz 1987; Michaels 1987). This research has produced major reconsiderations of civil rights, community formation, consumerism, culture industries, discrimination, environmental justice, imperialism and colonialism, labor, political agency, and underdevelopment, domains where business has played a major and sometimes controlling role. But the corporate world as such has only rarely been an object of study in itself. The prominent critic Fredric Jameson (1993, 50) noted the reluctance of cultural studies to “look out upon the true Other, the bureaucrat or corporate figure.” The situation has changed little since that time; for instance, as late as 2005 the word “corporation” did not make a single appearance in a comprehensive bibliographical essay on the American Studies Association website (Reed 1999).
Before the mid-nineteenth century, the corporation was a public franchise—a ferry or turnpike company, for example—that was allowed a profit in exchange for reliable service to the common or public good. After the Civil War, corporations increasingly came to reflect private economic interests. Though the Supreme Court, in the early case Trustees of Dartmouth College v. Woodward (17 U.S. 518 (1819)), had held that a public charter possessed the legal status of a private contract, most of the legal foundations for this change were laid in the 1870s and 1880s. In the Slaughter-House Cases (83 U.S. 36 (1873)), the Supreme Court denied that labor had a property interest in a job that required compensation upon dismissal, which left the firm itself as the sole legitimate property interest. In Santa Clara County v. Southern Pacific Railroad Company (118 U.S. 394 (1886)), the Court asserted, without supporting argumentation, that the corporation was a legal person and could not have its property regulated in ways that violated the due process provisions of the Fourteenth Amendment. Through a series of small but unswerving steps, the courts freed the corporation from both public purpose and direct legislative will.
This movement toward corporate independence consolidated several important features of the corporate form. One was limited liability, in which the shareholder was personally insulated from claims for damages or the repayment of debts. Limited liability made it easier to attract a large amount of capital from many investors while retaining concentrated control, since the investor was less likely to insist on control in the absence of liability. Through two further changes, corporations gained the right to own stock in other companies, a right that had been denied to ordinary proprietorships, and stabilized the managerial authority of boards of directors (Roy 1997). A firm could grow through cross-ownership or, even without ownership, could control other firms through interlocking board memberships. This legal framework gave the firm's executives significant independence from the owners, an arrangement influentially described as the separation of ownership and control (Berle and Means 1932). This phenomenon allowed the corporation even greater distance from the surrounding society, for it was relatively sheltered not only from immediate legislative influence and community pressure but also from the collective will of its own investors. The simultaneous development of concentration of control and immunity from interference transformed the corporation from a public trust into a potential monopoly power with most of the capacities of a parallel government.
Twentieth-century corporate law took the existence of the corporation for granted and sought not to regulate the form so much as to regulate particular industry sectors and management practices. The landmark Sherman Anti-Trust Act (1890) was so vague that its powers were in effect created through enforcement and through later legislation. The Hepburn Act (1906) and the Mann-Elkins Act (1910) focused on the power to regulate monopoly pricing and constrain concentrated ownership, and the framework was extended through New Deal legislation such as the Glass-Steagall Act (1933) and, still later, the Bank Holding Company Act (1956). The courts generally rejected the idea that big is bad; rather, plaintiffs had to show that big had a materially bad effect. To the contrary, by the late twentieth century, enormous size was seen by regulators as a competitive necessity; in the 1980s, “ten thousand merger notifications were filed with the antitrust division…. The antitrust division challenged exactly twenty-eight” (L. Friedman 2002, 392). One legal historian summarized the situation by saying that “corporation law had evolved into a flexible, open system of nonrules” that “allowed corporations to do whatever they wished” (ibid., 389).
Support for the corporation came more frequently from courts and legislators than from public opinion. The labor movement consistently challenged three of the corporation's most important impacts on working conditions: the accelerated absorption of skilled, relatively independent workers into the factory system; Taylorization, in which mass production was transformed into a routinized assembly-line process strictly regulated for maximum time efficiency; and managerialism, whose meaning for labor was unilateral control of pay and working conditions by layers of management separated from and generally set against labor. More than a century of major strikes—such as those at Carnegie's steel works at Homestead, Pennsylvania (1892), the Loray Mill in Gastonia, North Carolina (1929), and the Flint Sit-down Strike (1936), down through the United Parcel Service strike (1997), the Los Angeles janitors strike (2000), and the Chicago teachers strike (2012)—were among the most visible expressions of popular opposition to the corporation's independence of, or sovereignty over, the wider society in which it operated.
Corporate power prompted a decades-long movement for “industrial democracy” that sought to put corporate governance on a constitutionalist and democratic footing. Some observers saw collective bargaining, finally legalized by the Wagner Act (1935), as an industrial civil rights movement that transformed management into a government of laws (Lichtenstein 2002, 32–38). But labor never did achieve meaningful joint sovereignty with management in the context of the large corporation. The Taft-Hartley Act (1947) required all trade-union officials to sign an affidavit that they were not Communists, impugning the collective loyalty of labor leaders (managers were not required to sign), and also forbade cross-firm and cross-industry labor coordination (ibid., 114–18). Union membership and influence declined precipitously from the 1970s onward, and the idea of industrial democracy had by the end of the century virtually disappeared from public view. Even as the corporation continued to rely on the state for favorable environmental legislation, tax law, educated workers, and the like, it consolidated its relative autonomy from employees and the public.
Over this period, the corporation became part of the culture of the United States and other countries, and the resulting corporate culture had four dominant features. First, consumption became central. When the corporation collectivized labor and coordinated the production process on a large scale, it enabled the mass production of consumer goods for the first time. This led to increases in the general standard of living and to the rise of a consumer society in which consumption came to be a virtually universal activity and a primary means of expressing personal identity and desire. Second, democracy was equated with capitalism. Mass production and consumption, freedom, self-expression, and personal satisfaction came to be seen as interchangeable and as enabled by corporate capitalism; consumption came to eclipse, if not exactly replace, political sovereignty. Conversely, democracy's best outcome seemed to be affluence rather than public control of the economy and other social forces. Third, efficient organization became synonymous with hierarchical bureaucracy. As the twentieth century wore on, it became increasingly difficult to imagine truth, power, or innovation arising from personal effort, insight, and inspiration unharnessed by economic roles, or effective cooperation without command from above. Fourth, philosophical, spiritual, cultural, and social definitions of progress were eclipsed by technological ones. The rapid commercialization of technical inventions—radio, radiology, transistors—became the measure of the health of a society, and thus society came to require healthy corporations. Building on a long tradition of corporations presenting themselves as public benefactors (Marchand 1998), by the 1980s and 1990s, they were regarded by most political leaders and opinion makers as the leading progressive force in society.
Across these changes, the economy began to appear as a natural system, accessible only to highly trained experts in production, management, and finance and resisting all attempts to soften its effects through public services and social programs. In this new common sense, society had to adapt to the economy, and the corporation was the privileged agent of that adaptation. By 2000, the majority of U.S. leaders appeared to accept the priority of economic laws to social needs, and the corporate system as the authentic voice of those laws. Concurrently, U.S. society lost its feel for the traditional labor theory of value. Real value now seemed to be created by a combination of technological invention and corporate activity. At the end of the twentieth century, cheap manual labor and advanced mental labor had become more important than ever to steadily increasing corporate revenues, and yet the individual's labor contribution was less valued and more difficult to picture.
The tremendous cultural power of the corporate form has not spared it turbulence and even decline. Annual economic growth in the United States and Europe slowed markedly in the 1970s, as did rates of increase in profitability and productivity. Business efforts to maintain profit margins led to continuous price increases that in turn increased wage demands and overall inflation. The United States lost its unchallenged economic preeminence as countries such as France, Germany, Italy, and Japan fully recovered from the devastation of World War II and as the newly industrializing countries of Asia became important competitors. Oil-price shocks and the end of the Bretton Woods currency system were only the most visible signs of this changing economic order (Rosenberg 2003). Internal pressures added to external ones. Job satisfaction was low enough to prompt an important study from the Nixon administration's Department of Labor, and “human relations” management theory increased its attacks on Taylorist regimentation (Newfield 1998). These trends contributed to a sense among some observers that the large corporation was part of the problem, that it had become too inflexible, hierarchical, and expensive to lead the way in a new era of “post-Fordist” globalization (Harvey 1989).
In response to these threats, corporations began a rehabilitation campaign, recasting themselves as the world's only true modernizers, capable of moving the economy and society relentlessly forward, often against their will (T. Friedman 2000, 2005). Though presented as news, nearly all of these claims were tried-and-true standards of the economic liberalism of previous periods: that the markets are inherently efficient and self-regulating in the absence of government interference; that attempts to stabilize employment and incomes place unnatural burdens on these efficient markets, as do consumer protections, banking restrictions, environmental legislation, regional planning, and the like; that the tireless search for ever-cheaper labor, now fully internationalized, is legitimate because it benefits consumers; that corporate giants can “learn to dance” by “reengineering” their companies to simplify their cumbersome bureaucratic layers and routines (Kanter 1990; Hammer and Champy 1993); and that corporations have rejected monopoly in favor of entrepreneurship. By the turn of the twenty-first century, no single corporation or corporate group could be called an empire, but as a group, corporations had unchallenged sovereignty over the economy.
Or almost unchallenged. Economic problems persisted: overall growth remained historically weak while economic inequality mounted steadily, work became less secure, and the public was treated to a long series of trials for corporate fraud. Opposition to corporate influence grew at the end of the twentieth century, though the strongest movements appeared outside the United States. Examples included Argentina, which had modified the regime imposed on it in the 1990s by the U.S.-dominated International Monetary Fund; India, where protests against development projects and intellectual property regimes sponsored by multinational corporations became routine; Malaysia, whose conservative regime rejected U.S. recipes for recovery from the economic crisis of 1997–98; Mexico, where nongovernmental organizations began to build social infrastructure; Venezuela, where strong popular support for social development proved capable of prevailing in elections; and Bolivia, where native peoples toppled two presidents in their attempt to nationalize natural gas reserves. In the United States, protests against the World Trade Organization and the “Washington Consensus” broke out in Seattle in 1999, though they did not become as widespread or sustained as they have been elsewhere.
In the first decades of the twenty-first century, the corporation has been at the center of several major developments. Following the September 11, 2001, attacks on New York and Washington, D.C., some corporations became directly involved in military operations as private contractors (Singer 2003; Dickinson 2011). In various sectors, the privatization of public functions and their revenue streams became an important business strategy. Information and communications technology reached in new ways into private life, ranging from customized marketing and Internet-based data collection via Amazon, Facebook, Google, and similar firms (Andrews 2012) to the collection and delivery to the government of unprecedented and still-unknown quantities of personal data for security and surveillance purposes (Greenwald 2013). Legislation and legal decisions allowed corporations to exert new levels of political management. The most famous case, Citizens United v. Federal Election Commission (558 U.S. 310 (2010)), sanctioned new corporate bodies, often organized as nonprofits, to channel unlimited private funds into elections (Briffault 2012). One basis for the majority's opinion was the Court's recognition in Santa Clara and other cases that “First Amendment protection extends to corporations” (Citizens, 558 U.S. at 25). The Court affirmed the precedent that “the Government cannot restrict political speech based on the speaker's corporate identity” (Citizens, 558 U.S. at 30).
The decisive trend may turn out to be the diverging economic fates of corporations and the middle class, whose prosperity had been the core political justification for tax, trade, employment, and innovation policies that favored business interests. Economically, the 2000s was a “lost decade,” and the mainstream media routinely disseminated evidence that whatever else corporations had been doing for the previous decades, they had not given the majority of the U.S. workforce an inflation-adjusted raise (Mishel et al. 2012; Parlapiano 2011; Schwartz 2013). The sense of economic failure was confirmed by the financial crisis of 2007–8 and the diverging fates of Wall Street, which recovered, and Main Street, which did not. The growing sense that corporations produced inequality rather than prosperity triggered another form of resistance, the Occupy call in 2011 for a society run by and for the 99 percent. Evidence continues to grow that the hierarchical, multidivisional corporation of the twentieth century—with its enormous managerial and executive costs, its monopoly market goals, its mixtures of empowerment and authoritarianism, its definitions of value that exclude social benefits—is less functional and affordable than most leaders had assumed (D. Gordon 1996; Ross 1997; Bamberger and Davidson 1999). And yet any process of inventing postcorporate economic forms would require deeper public knowledge of corporate operations than prevails in the wealthy countries of the early twenty-first century, as well as clearer, more imaginative definitions of democratic economics.